I'm playing around with voxel rendering and I need some interesting datasets. So far all I've found is a head CT scan of a cadaver. While interesting, it's only 256x256x113 voxels. It's also very convex, so it doesn't really show off shadow effects.
Does anyone know where to find some sample datasets?
Cool renderings: http://imgur.com/Wo0Tv6a
Both renderings are from the same dataset with the same camera and lighting settings, but with different density cutoffs.
Notice how the bandages cast a translucent shadow while being themselves translucent.
The weird lines around the mouth are an issue with the original dataset. I think the subject had lots of metal fillings. That tends to throw off CT scanners.
I believe if I simply do a trilinear interpolation (i.e. linear in three dimensions), that should be enough to render slopes. But yes, for now this is good enough. Linear interpolation is always a bit of a hassle because you have to mind your boundaries, and it's also going to be slower because every density query requires looking at eight voxels.
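For what it's worth, a trilinear lookup with clamped boundaries could look something like this. This is a minimal CPU sketch; the `Volume` struct and its memory layout are assumptions for illustration, not anyone's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical volume: densities in a flat array, x varying fastest.
struct Volume {
    int nx, ny, nz;
    std::vector<float> data;
    float at(int x, int y, int z) const {
        // Clamp indices so queries near the boundary stay valid.
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return data[(z * ny + y) * nx + x];
    }
};

// Trilinear interpolation: three nested linear interpolations over the
// eight voxels surrounding the sample point.
float sampleTrilinear(const Volume& v, float x, float y, float z) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    // Interpolate along x on the four edges of the cell...
    float c00 = v.at(x0, y0,     z0    ) * (1 - fx) + v.at(x0 + 1, y0,     z0    ) * fx;
    float c10 = v.at(x0, y0 + 1, z0    ) * (1 - fx) + v.at(x0 + 1, y0 + 1, z0    ) * fx;
    float c01 = v.at(x0, y0,     z0 + 1) * (1 - fx) + v.at(x0 + 1, y0,     z0 + 1) * fx;
    float c11 = v.at(x0, y0 + 1, z0 + 1) * (1 - fx) + v.at(x0 + 1, y0 + 1, z0 + 1) * fx;
    // ...then along y, then along z.
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```

The clamp-at-the-edges approach is one way to handle the boundary problem; padding the volume with a border layer is another common choice.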
What kind of interpolation are you doing now? I assume you're doing lookups into a 3D texture in the shader, and the GPU is doing the trilinear interpolation for you. Without the hardware doing the interpolation it will be slower, but it's a very straightforward calculation.
Right now I'm just using nearest neighbor. I'm doing something a bit more sophisticated than simple lookups that lets me render translucent volumes: I treat the voxel value as an alpha channel and look up all voxels in the line of sight of a pixel until the sum of the alphas is at least 1. Additionally, for each of those voxels I compute an illumination value that depends on the sum of the alphas between that voxel and the light source (in this case, at infinity). The value displayed at the pixel is a combination of the illumination values of all visible voxels along that line of sight, with farther voxels contributing less.
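That accumulation scheme could be sketched roughly like this. To be clear, this is my interpretation: the exponential light falloff and the weight-by-remaining-transparency rule for "farther voxels contributing less" are plausible choices, not necessarily exactly what the post describes:

```cpp
#include <cmath>
#include <vector>

// Sketch of one pixel's ray march. The two arrays are assumed to have
// been filled by stepping through the volume:
//   alphasAlongRay[i]  : alpha of the i-th voxel on the viewing ray
//   alphaTowardLight[i]: sum of alphas between that voxel and the light
float marchPixel(const std::vector<float>& alphasAlongRay,
                 const std::vector<float>& alphaTowardLight) {
    float accumAlpha = 0.0f;  // running sum of alphas; stop at 1
    float pixel = 0.0f;
    for (std::size_t i = 0; i < alphasAlongRay.size() && accumAlpha < 1.0f; ++i) {
        float a = alphasAlongRay[i];
        if (a <= 0.0f) continue;
        // Illumination falls off with the material between voxel and light.
        float illum = std::exp(-alphaTowardLight[i]);
        // Nearer voxels contribute more: weight by remaining transparency.
        float weight = a * (1.0f - accumAlpha);
        pixel += weight * illum;
        accumAlpha += a;
    }
    return pixel;
}
```

With three voxels of alpha 0.5 and nothing between them and the light, this stops after two steps (the alpha sum reaches 1) and the second voxel contributes half as much as the first.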
Interpolating will help with the artifacts you are getting (the concentric rings).
Usually what is done is that you look up the intensity value from the 3D texture, then use that to compute an index into another texture storing the transfer function (the transfer function maps intensity to color and opacity). The GPU hardware automatically interpolates the lookups. Then you can calculate the lighting.
Basically what you are doing is using a transfer function that is a step function (ignoring small intensities, then treating all others the same). Using a more complicated tf, you can do a lot more, like coloring bone one color and skin another, or making the skin and muscle semi-transparent and highlighting veins and bones, etc.
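To make that concrete, here's a toy transfer function as a 1D lookup table, with both a step tf (equivalent to the density cutoff) and a richer tissue/bone one. The table size, entries, and names are all illustrative:

```cpp
#include <array>
#include <cstddef>

struct RGBA { float r, g, b, a; };

// A transfer function as a tiny 1D lookup table: intensity in [0,1]
// maps to color and opacity. Real tables would have 256+ entries.
constexpr std::size_t TF_SIZE = 4;

RGBA applyTransferFunction(const std::array<RGBA, TF_SIZE>& tf, float intensity) {
    if (intensity < 0.0f) intensity = 0.0f;
    if (intensity > 1.0f) intensity = 1.0f;
    // Nearest entry; a GPU texture lookup would interpolate instead.
    std::size_t idx = (std::size_t)(intensity * (TF_SIZE - 1) + 0.5f);
    return tf[idx];
}

// Step transfer function (what a plain density cutoff amounts to):
// fully transparent below the threshold, opaque white above it.
const std::array<RGBA, TF_SIZE> stepTF = {{
    {0, 0, 0, 0}, {0, 0, 0, 0}, {1, 1, 1, 1}, {1, 1, 1, 1}
}};

// A richer tf: soft tissue reddish and translucent, bone near-opaque.
const std::array<RGBA, TF_SIZE> tissueBoneTF = {{
    {0, 0, 0, 0}, {0.8f, 0.3f, 0.3f, 0.1f}, {0.9f, 0.8f, 0.7f, 0.6f}, {1, 1, 1, 1}
}};
```

Swapping tables changes the whole look of the rendering without touching the ray-marching code, which is why the tf is usually kept in its own texture.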
In my case the hardware doesn't interpolate automatically because I'm not using a shader. I'm doing this in CUDA.
> Using a more complicated tf, you can do a lot more, like coloring bone one color, and skin another, or making the skin and muscle semi-transparent and highlighting veins and bones, etc.
I suppose that could easily be done by using the alpha to index an RGBA gradient after computing the illumination value, multiplying the result by the illumination, and then alpha-blending that into the final color value.
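Something like this sketch of that idea, assuming front-to-back compositing; the `gradientLookup` ramp here is a made-up placeholder for whatever gradient texture you'd actually use:

```cpp
#include <cmath>
#include <vector>

struct RGBA { float r, g, b, a; };

// Hypothetical gradient lookup: maps a voxel alpha to color + opacity.
// Here just a linear ramp from dark red toward white, for illustration.
RGBA gradientLookup(float alpha) {
    return { 0.5f + 0.5f * alpha, alpha, alpha, alpha };
}

// One sample along the ray: the voxel alpha plus the illumination value
// already computed for that voxel.
struct Sample { float alpha, illum; };

// Front-to-back compositing of the samples along one ray.
RGBA compositeRay(const std::vector<Sample>& samples) {
    RGBA out = {0, 0, 0, 0};
    for (const Sample& s : samples) {
        if (out.a >= 1.0f) break;        // ray is already saturated
        RGBA c = gradientLookup(s.alpha);
        float w = c.a * (1.0f - out.a);  // weight by remaining transparency
        out.r += w * c.r * s.illum;      // color scaled by illumination
        out.g += w * c.g * s.illum;
        out.b += w * c.b * s.illum;
        out.a += w;
    }
    return out;
}
```

Front-to-back order has the nice property that you can terminate the ray early once the accumulated alpha saturates, which fits the existing stop-at-alpha-sum-1 scheme.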