I read Iñigo Quilez's "Rendering Worlds with Two Triangles with raytracing on the GPU in 4096 bytes" not long after it came out, and was really fascinated by the principles it outlined, but some of it was extremely far away from the type of programming I was doing at the time (mainly image-processing-based Java, Processing, and some Quartz Composer). I also read the infamous "Zeno.pdf" from 2000, where much of this originated. (Here's a treat: I've noticed that the links that used to be the "go-to" places to download it from seem to be broken, so here's a link to my copy of the Zeno.pdf.)
At the time it sounded really great, but my computer couldn't actually run many of the shaders referenced. The Intel GPU just took a poop running shaders that, in the "modern day" (a few years later!), run exceedingly quickly. So it became a bit of a curiosity, more than something I considered all that practical, at least for me and the hardware I had.
Over the years, I've noticed that the hardware I've encountered (my current computer, and others) runs these kinds of fragment shaders anywhere from pretty well to blazingly fast. The release of OS X Lion has unveiled a bit of a new world where Mac users not only have access to "modern GL" through the Core Profile, but can also, in many cases, get at the "classic flavor" fixed-function-pipeline GL in a web browser, which is tremendous. It seems to be spurring more innovation in what can be done in GLSL 1.2, entirely in the fragment shader, or at least fueling people to share past innovations.
So, I have these moments, where I'm programming something in OpenGL in a more traditional way, generating a bunch of vertices, texturing them, etc. In the back of my mind, I can't help but think "George... the fragment shader will give you better lighting and almost-freebie shadows. George, the fragment shader renders countless replicas of an object with basically no measurable hit. George, the fragment shader will let you mix between objects quickly." Then I think, "oh yeah, none of that exactly matters for this instance."
There's the real world, knocking at the door. QC doesn't have a Core Profile context. So you can't do "modern GL" stuff: treat pixels and structures the same, use geometry shaders (well, those aren't Core Profile either, but still), declare multiple outs, etc. If you want to pass something like a structure, what can you do? The only workaround I've been able to think of is to create a texture, read the pixel values as if they were xyzw-type values, do whatever math needs to be done to get that to "work" for you, and deal with it. Then you're fighting limited steps of resolution, texture limits, and potentially stuff like Mac OS X color correction and color curves not behaving right, etc., etc. There's a whole toolbox that needs to be worked up, which is both exciting and a big pain in the butt.
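To make the texture workaround concrete, here's a minimal Python sketch of the CPU-side encoding it implies; the helper names are made up for illustration (nothing from QC's actual API), and it assumes your values have already been mapped into [0, 1]. Each xyzw group gets quantized to one 8-bit RGBA texel, which is exactly where the "limited steps of resolution" bite comes from:

```python
# Sketch of passing structure-like data to a fragment shader via a texture:
# each group of four floats becomes one 8-bit RGBA texel. Helper names are
# hypothetical; the point is the quantization, not any particular API.

def encode_texel(x, y, z, w):
    """Quantize four floats in [0, 1] to one 8-bit RGBA texel (0-255 each)."""
    return tuple(round(v * 255) for v in (x, y, z, w))

def decode_texel(r, g, b, a):
    """Roughly what a GLSL texture2D() lookup hands back: values snapped
    to steps of 1/255."""
    return tuple(c / 255.0 for c in (r, g, b, a))

# The "limited steps of resolution" problem: these two nearby values
# collapse onto the same texel, so the shader can't tell them apart.
t1 = encode_texel(0.5004, 0.0, 0.0, 1.0)
t2 = encode_texel(0.5008, 0.0, 0.0, 1.0)
```

With only 256 steps per channel, anything needing more precision has to be spread across multiple channels or texels, and that's before color correction or color curves get a chance to mangle the bytes on the way through.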
Working strictly with the Core Profile is a possibility (theoretically), but I feel like it's a little too bleeding-edge to go full bore with on the Mac right now. Maybe it's because I've been burned by the lack of consistent support for OpenCL, and even for fixed-function OpenGL stuff, across Macs. I'm even a little leery of leaning too hard on good old GLSL 1.20 shaders, lest I be caught with my pants down by some obscure GPU bug, and that should be totally solid by now. I also wonder... do all of the new Macs with Intel GPUs run this stuff OK? I don't really know, but past experiences with Intel GPUs make me really wary (that's the reason I originally *didn't* look into heavy frag shader use some years back, with my piss-poor X3100).
Still, I feel like there's going to be increasing interest in doing stuff "all in the fragment shader", and probably in other technologies that let one "talk to" the GPU in similar ways.
That's it for now :-)