(originally posted @ http://members.virtualtourist.com/m/28be5/c77/ )
This photo illustrates the principle. The closer islands are to one another, the stronger the appearance of the land below the water threshold. If the water level were lowered (or the land somehow raised), one would have the impression of the islands growing into one another.
This particular implementation starts with creating white colored spheres. I'm using Quartz Composer for this, and animating the spheres' x and y translations with interpolation patches. The processing chain is built mainly with Core Image, while the rendering uses GLSL shaders. Back to the first step though! Hum-drum, white spheres.
After that, I process the spheres through a CIMorphologyLaplacian filter.
Even though this image looks as though it is entirely black, there's an important distinction to make; it isn't.
The filter is creating a steep up-and-down peak where the border of the white sphere and the "clear" patch meet. The "clear" patch is actually rendering alpha black/clear; we are viewing this in over blend mode with a pale blue background, so we aren't seeing "clear". The morphology function also pushes that spike inwards and has the effect of filling small holes, performing a closing function ( http://homepages.inf.ed.ac.uk/rbf/HIPR2/close.htm#2 ) that contributes a major element of our metaball glop factor.
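The closing operation linked above is easy to sketch. This is an illustrative pure-Python version (dilate, then erode, with a 3x3 neighborhood), not the Core Image implementation; it just shows why a one-pixel gap between two nearby blobs gets filled, which is what makes close spheres start to fuse.

```python
def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # a pixel turns on if any pixel in its 3x3 neighborhood is on
            out[y][x] = int(any(
                mask[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))))
    return out

def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # a pixel survives only if its whole 3x3 neighborhood is on
            out[y][x] = int(all(
                mask[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))))
    return out

def close(mask):
    # morphological closing = dilation followed by erosion
    return erode(dilate(mask))

# Two blobs separated by a one-pixel gap: closing fills the gap.
gap = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
closed = close(gap)
```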
Running a Sobel process on the image shows us the edge created by the Laplacian morphology very clearly. Since the Sobel calculates the gradient of the image intensity function, and the source imagery is white/alpha, we see a bright white pixel strip. We can see how the morphology aspect is opening and closing gaps at this point by looking at the areas where spheres overlap.
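For intuition, here is a minimal pure-Python Sobel gradient-magnitude pass (an illustrative sketch, not Core Image's filter): on a flat white blob over black, the interior gradient is zero and only the border pixels light up, giving the bright strip described above.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical intensity gradients
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A flat white square on black: only the border responds.
img = [[1 if 1 <= x <= 3 and 1 <= y <= 3 else 0 for x in range(5)]
       for y in range(5)]
mag = sobel_magnitude(img)
```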
Inverting the image (pictured above) brings us back to a place where we're close to the color schema we started with. Running it through the Height Field From Mask (Apple CI Filter) is an important next step.
We can see that we have now created a gradient function. The ridge areas are darker than the inner areas, because the input source is surrounded by black pixels, inside and out. This doesn't simply make the output pixels a flat gray either; it "curves" them: middle pixels are bright, growing progressively darker as they approach the black pixel strips. It creates a gradient on the inner part of the imagery as well.
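One way to think about this step: each white pixel's height is roughly its distance to the nearest black pixel, so borders sit low and centers sit high. The sketch below uses a brute-force Chebyshev distance in pure Python; Apple's Height Field From Mask produces a smoother falloff, but the sloping behavior is the same.

```python
def height_field(mask):
    # height of each "on" pixel = Chebyshev distance to the nearest "off" pixel
    h, w = len(mask), len(mask[0])
    black = [(y, x) for y in range(h) for x in range(w) if mask[y][x] == 0]
    return [[min(max(abs(y - by), abs(x - bx)) for by, bx in black)
             if mask[y][x] else 0
             for x in range(w)]
            for y in range(h)]

# A white block surrounded by black: the center is highest, the rim lowest.
mask = [[1 if 0 < x < 6 and 0 < y < 6 else 0 for x in range(7)]
        for y in range(7)]
field = height_field(mask)
```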
Next, this gets processed through a gamma adjustment to tweak the gradient slightly, then back through a CICheapMorphology function. This morphology function essentially takes all of the outer area of darker grey pixel intensity and makes it black.
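The combined effect can be sketched as a power-law gamma curve followed by a grayscale erosion (3x3 minimum filter). This is an illustrative stand-in, assuming a gamma exponent above 1; the actual CIGammaAdjust/CICheapMorphology parameters aren't stated in the post. Gamma pushes the dim rim toward zero, and the minimum filter then snaps everything touching that rim to black.

```python
def gamma_adjust(img, g):
    # power-law curve: with g > 1, dim rim pixels fall toward 0 faster
    return [[v ** g for v in row] for row in img]

def erode_gray(img):
    # grayscale erosion: each pixel takes its 3x3 neighborhood minimum,
    # so pixels adjacent to the dark rim go (nearly) black
    h, w = len(img), len(img[0])
    return [[min(img[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)]
            for y in range(h)]

# A dark edge sloping up to a white core, as in the height-field output.
ramp = [[0.0, 0.2, 0.6, 1.0, 1.0]] * 3
out = erode_gray(gamma_adjust(ramp, 2.2))
```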
So, now we are left with the inner white image, same as the source spheres we began with, but with a slight Gaussian falloff. We've taken the outer edge and eroded it away, in a sense, by making it completely black. Areas of greater pixel intensity start to "pucker" and merge as they get close to one another because of the first morphology function, an effect intensified by this second "cheap", sans-Laplacian morphology pass. The metaball effect is intense at this point. We actually could have lopped off some of the first outer-ridge steps, but they make the final result more nuanced and "curved".
Where the actual spherical geometry is overlapping, we've managed to winnow away pixels, essentially raising the water line. We're managing to expose pixels at the center of these meeting points. This creates the gloopy effect of pixels pooling and pulling.
If one were to make the clear color (the pale blue area) black (image above), we would have a nice 2D metaball effect, and the by-products of the edge filtering wouldn't be very noticeable, if at all. This is run through a Gaussian filter at a radius of 2 to do a slight smoothing for the subsequent GLSL processing.
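A radius-2 Gaussian blur can be sketched as a small separable kernel applied horizontally and then vertically. The kernel weights and sigma below are assumptions for illustration; CIGaussianBlur computes its own. The point is just that a hard step edge becomes a smooth symmetric ramp, which is what keeps the extruded mesh from having a sharp crease.

```python
import math

def gaussian_kernel(radius, sigma=1.0):
    # normalized 1D Gaussian weights over [-radius, radius]
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, radius=2):
    k = gaussian_kernel(radius)
    h, w = len(img), len(img[0])
    # horizontal pass (border pixels clamp to the edge)
    tmp = [[sum(k[i + radius] * row[min(max(x + i, 0), w - 1)]
                for i in range(-radius, radius + 1))
            for x in range(w)] for row in img]
    # vertical pass
    return [[sum(k[i + radius] * tmp[min(max(y + i, 0), h - 1)][x]
                 for i in range(-radius, radius + 1))
             for x in range(w)] for y in range(h)]

# A hard black-to-white step edge gets smoothed into a ramp.
step = [[0.0] * 4 + [1.0] * 4 for _ in range(5)]
smooth = blur(step)
```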
What's being done next is to create a convex hull metaball with GLSL. I term this 2.5D because, while the scene can be rotated, the geometry is 3D, and we see 3D spherical objects, the function of the surfaces seeming to grow into one another only works in two dimensions, x and y. There are no objects "behind" other ones in the Z plane.
uniform sampler2D depthMap;
uniform float amount, clip;
varying float displacement;

void main()
{
	gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
	// Vertex texture fetch: sample the height map so white intensity drives Z
	displacement = texture2D(depthMap, gl_MultiTexCoord0.xy).r;
	// Transform vertex by modelview and projection matrices
	gl_Position = gl_ModelViewProjectionMatrix * (gl_Vertex + vec4(0.0, 0.0, displacement * amount, 0.0));
	gl_FrontColor = gl_Color;
}
We can see that the vertex function is a displacement shader, where white intensity maps to +Z.
So, this gradient setup has been preparing us to feed the 2D image to a GLSL function that creates a mesh, sloped because the source is a smooth gradient. The black outer edge falls at z = 0. The clip uniform allows us to essentially clip that edge from the output; this is part of the reasoning behind why I run those earlier steps. If we didn't have a gradient, we would be looking at something that appeared more like cylinders moving around. At this point, we've arrived at the final visual result.
So, now we're seeing the final result. Note how the spheres that are closer to one another appear to have points that are trying to attract into each other. Now, let me show the appearance of the 2D metaball image with the black outline after it has been run through the Gaussian filter at a radius of 2; this image serves as both the source texture and the depth map in this case.
It looks fugly to me. However, it's important in smoothing the edges of the extruded mesh in a way that gives a nice curve (when boosted by the preceding gamma adjustment), and it also winds up creating a pleasant pseudo-shaded lighting effect on the mesh texture. So, we started with a two-tone image, and now we have essentially created a type of watershed scenario, where we've sunken the edge and made the intensity slope, as though we had something fairly close to a hemispherical or hemi-ovaloid shape.
After the Gaussian blur, and the clip effect on the GLSL shader, we wind up with what appears to be a kind of pleasant lighting effect, but it's really just the texture image. (This metaball scene is rotated approximately -15 degrees in X.)
By having implemented the clip, two GLSL grids and shader environments can be placed, with the extrusion on one being the inverse of the other. This creates a hull. So it doesn't look as though we have "mountains" jutting up from the ground anymore; it appears as though we have legitimate spherical/ovaloid shapes. The gamma and Gaussian steps enhance the curve of the gradient so it appears more rounded at these particular clip and extrusion levels.
Viewing from the side reveals the two limitations: no blobs are ever really in front of or behind each other, and the pseudo-lighting effect starts to look more unrealistic. It's not horrible at all though, and it runs extremely quickly on the nVidia 9600 (around 60 fps at sizable rendering destinations). Many 3D metaball implementations cannot achieve this, while this one, with its shortcuts and somewhat non-standard approach, can.
The upshot of this is that some of the principles can be applied to full 3D scenarios, allowing a kind of cheap mesh to be made. One can use similar principles inside of an OpenCL kernel to weight a skeleton, create a hull around the bones, and output all the geometry as one object (as opposed to the two-grid route).
That's sort of easier in CL for me, as is specifying a bone hierarchy (to be discussed more in the future). Each bone represents a kind of function (linear, curved, or whatever, actually) ramping from fully affecting the joint to not affecting it at all, rather like the way a linear gradient pushes out verts in the example GLSL code. This lets you transform the skin in Z and control how much each bone affects each vertex of the outer skin, as opposed to the kind of 2.5D implementation of the QC GLSL method used in the metaball implementation I've been discussing.
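The linear falloff idea can be sketched like this. Everything here is hypothetical (the function names, the falloff radius, the per-axis displacement) and is only meant to show the shape of the weighting, not the actual OpenCL kernel: each bone's influence on a vertex ramps linearly from 1 at the bone down to 0 at some radius, and the summed, clamped weight drives the displacement, much as the grayscale gradient drives Z in the displacement shader.

```python
def bone_weight(vertex, bone, radius):
    # linear falloff: 1.0 at the bone position, 0.0 at distance >= radius
    d = sum((v - b) ** 2 for v, b in zip(vertex, bone)) ** 0.5
    return max(0.0, 1.0 - d / radius)

def displace(vertices, bones, radius=2.0, amount=1.0):
    out = []
    for v in vertices:
        # sum influence over all bones, clamped so weights never exceed 1
        w = min(1.0, sum(bone_weight(v, b, radius) for b in bones))
        # push along Z by the weighted amount, as in the GLSL shader
        out.append((v[0], v[1], v[2] + w * amount))
    return out

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
bones = [(0.0, 0.0, 0.0)]
moved = displace(verts, bones)
# the vertex at the bone rises fully, the mid vertex halfway, the far one not at all
```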
This was created by taking in joint position data from the OpenNI/NITE frameworks, and then creating an OpenCL routine to warp a mesh created from the source skeletal data, with joints weighting the mesh. It's nowhere near where I want it to be yet (lo-rez, super triangly), but it's a nascent implementation. Skinning doesn't have to be generated from incoming data; bones can be assigned to regions of a non-skeleton mesh and warp the mesh as though there were a skeleton (proven in theory, but not yet optimized, at least in my own work).
Interestingly, the watershed method is pretty integral to many skeletonization and blob tree functions, so it's not so surprising that a technique of making procedural skin for a skeleton involves the inverse of similar processes.