The OpenCL kernel I've been using for depth extrusion is this:

//George Toledo, 2010. www.georgetoledo.com

__kernel void read_depth(__rd image2d_t depthImg, __rd image2d_t rgbaImg, float displacement, __global float4 *mesh, __global float4 *colors)
{
    int tid_x = get_global_id(0),
        tid_y = get_global_id(1),
        indx = tid_y * get_global_size(0) + tid_x;

    float4 depthColor, rgbaColor, vertex;
    int2 pos = (int2)(tid_x, tid_y);
    sampler_t smp = CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;

    //sample
    depthColor = read_imagef(depthImg, smp, pos);
    rgbaColor = read_imagef(rgbaImg, smp, pos);

    //define output colors
    colors[indx] = rgbaColor;

    //displace by depth map
    vertex = (float4)((float)tid_x/get_global_size(0)-0.5, 1.-(float)tid_y/get_global_size(1)-0.5, (displacement*length(depthColor.xyzw)), 1.);
    mesh[indx] = vertex;
}

The overview of what's going on there is that two images are being sampled: one determines the output pixel color, and the other determines the amount of extrusion. The kernel then outputs a buffer of vertices and a buffer of colors.

One of the built in functions of OpenCL is the ability to get the "length" of a vector. The length of a vector is its Euclidean norm — the square root of the sum of the squares of each lane (eg., sqrt(x*x + y*y + z*z + w*w)) — not a simple sum of the lanes. For a grayscale depth map, where the r, g, and b lanes are equal, the length grows with brightness, so in this kernel the length of the depthColor vector works as a measure of luminosity, or whiteness. In turn, the length of the vector influences the amount of extrusion, because the length is multiplied by "displacement". So, the length of the depthColor vector directly influences the placement of geometry on the Z-axis.

After using this to visualize the results of the Kinect, I realized it would be handy to be able to "chop out" vertices that are a given distance from the camera: a "floor" function.

I thought about a few different routes. Finally, I decided that I would keep the vertex count consistent, and color vertices alpha (fully transparent) if they were below a certain luminosity value. That gives the effect of culling geometry when using the Over blend mode. While doing that, I decided I might as well add the inverse function as well, and allow the clipping of geometry near the camera.

The first step was to declare two new inputs, "floor" and "wall". Both of these values are floats, so all you have to do is write the new parameters into the kernel signature and declare them as floats. (Note that "floor" shadows OpenCL's built-in floor() function inside this kernel; that's harmless here, since the kernel never calls it.)

__kernel void read_depth(__rd image2d_t depthImg, __rd image2d_t rgbaImg, float displacement, float floor, float wall, __global float4 *mesh, __global float4 *colors)

Secondly, the kernel has no idea what alpha is. So, where I'm declaring my float4's for depthColor, rgbaColor, and vertex, I add in a line:

float4 alpha = (float4)(0.,0.,0.,0.);

This lets me call upon alpha in the kernel when I'm declaring what the output colors should be.

So, the good part. Using some logical reasoning, we can pseudocode out the problem.

If length of vector is less than floor, color should equal alpha.

If length of vector is greater than wall, color should equal alpha.

Thinking about it further, something else becomes obvious. If there is no displacement, there is no valid luminosity and no valid depth channel, so the vector length must be 0. In that case, everything should be alpha. This rejiggers the logic: alpha becomes the default, and color only gets assigned on the condition that there actually is luminosity. Now, I'm not saying this is a must, but it seems more appropriate to me to write the kernel this way.

The logic becomes:

//define output colors
colors[indx] = alpha;
if(length(depthColor.xyzw) > floor)
    if(length(depthColor.xyzw) < wall)
        colors[indx] = rgbaColor;

The entire kernel is then:

//George Toledo, 2010. www.georgetoledo.com

__kernel void read_depth(__rd image2d_t depthImg, __rd image2d_t rgbaImg, float displacement, float floor, float wall, __global float4 *mesh, __global float4 *colors)
{
    int tid_x = get_global_id(0),
        tid_y = get_global_id(1),
        indx = tid_y * get_global_size(0) + tid_x;

    float4 depthColor, rgbaColor, vertex;
    float4 alpha = (float4)(0.,0.,0.,0.);
    int2 pos = (int2)(tid_x, tid_y);
    sampler_t smp = CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;

    //sample colors
    depthColor = read_imagef(depthImg, smp, pos);
    rgbaColor = read_imagef(rgbaImg, smp, pos);

    //define output colors
    colors[indx] = alpha;
    if(length(depthColor.xyzw) > floor)
        if(length(depthColor.xyzw) < wall)
            colors[indx] = rgbaColor;

    //displace by depth map
    vertex = (float4)((float)tid_x/get_global_size(0)-0.5, 1.-(float)tid_y/get_global_size(1)-0.5, (displacement*length(depthColor.xyzw)), 1.);
    mesh[indx] = vertex;
}

I'm including a qtz reference file for the kernel in the box download widget, titled "Kinect Read Depth w Floor and Wall."