Volume rendering describes the process of generating a 2D image from a 3D volumetric data set. Two major approaches are commonly used for this purpose: surface extraction and direct volume rendering. Isosurface rendering is the most common method of surface extraction. Within the volumetric data, volumetric pixels (voxels) with equal properties are located. These voxels are then combined into surfaces represented as polygonal meshes in the three-dimensional space of the data volume. These meshes, built from vertices (points in three-dimensional space), edges, polygons and surfaces, can then be processed very efficiently by modern video cards, as they just have to be handed to the hardware rendering pipeline. Various surface extraction algorithms have been known for many years, for example the Marching Cubes algorithm. As this method creates surfaces, it is very well suited for visualizing sharp property borders within a volume data set. For CT and MRI data this method therefore works very well - e.g. for visualizing bone structure or the boundaries of internal organs - and rendering is possible even with non-state-of-the-art video cards.
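As a small illustration of how Marching Cubes starts, the following sketch (a hypothetical helper, not code from any particular library) classifies the eight corners of one voxel cell against an isovalue to obtain the 8-bit case index that the algorithm uses to look up which triangles to emit for that cell:

```python
def cube_case_index(corner_values, isovalue):
    """Classify the 8 corners of a voxel cell against an isovalue.

    Each corner whose scalar value lies above the isovalue sets one
    bit; the resulting index (0-255) selects the triangle
    configuration from the Marching Cubes lookup table.
    """
    index = 0
    for bit, value in enumerate(corner_values):
        if value > isovalue:
            index |= 1 << bit
    return index

# Case 0 (all corners below) and case 255 (all above) mean the
# isosurface does not cross this cell at all.
print(cube_case_index([9, 0, 0, 0, 0, 0, 0, 0], 5))  # -> 1
print(cube_case_index([0, 0, 0, 0, 0, 0, 0, 0], 5))  # -> 0
```

The actual triangle tables are lengthy and omitted here; the point is only that each cell reduces to one of 256 precomputed configurations.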
For direct volume rendering a completely different approach is used. Each voxel is mapped via a transfer function to a color and a transparency value. Such transfer functions range from simple ramp or windowing functions to more complex mathematically defined functions or even arbitrary mappings via lookup tables. After this mapping of the raw data, the volume of color values has to be projected onto the pixels of the 2D image presented to the user. Different rendering techniques are available for this step; the two most commonly known are volume ray casting and texture mapping.
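A minimal windowing transfer function of the kind mentioned above could be sketched like this (the function name and the fixed color are illustrative choices, not part of any standard API):

```python
def window_transfer(value, center, width, color=(1.0, 1.0, 1.0)):
    """Windowing transfer function: map a raw voxel value to (r, g, b, alpha).

    Values below the window are fully transparent, values above it
    fully opaque; inside the window the opacity ramps up linearly.
    """
    low = center - width / 2.0
    high = center + width / 2.0
    if value <= low:
        alpha = 0.0
    elif value >= high:
        alpha = 1.0
    else:
        alpha = (value - low) / (high - low)
    r, g, b = color
    return (r, g, b, alpha)

# A voxel in the middle of the window gets 50% opacity:
print(window_transfer(100, center=100, width=50))  # -> (1.0, 1.0, 1.0, 0.5)
```

In practice such a function is often precomputed into a lookup table indexed by the raw voxel value, which is exactly the "arbitrary mappings using tables" case.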
In volume ray casting the 2D image is constructed from the observer's point of view. For each pixel in the image plane to be generated, one ray is cast from the observer's position through the volume to be rendered in the 3D scene. This ray can either miss all objects or hit an object within the scene. The voxel hit by the ray determines how to proceed. In the simplest version, the color value of that voxel is simply assigned to the pixel the ray was cast for, and thus determines the color of the pixel. A more complex version accumulates voxel data along the ray and uses this accumulated data to determine the color of the pixel. This is especially needed when partial transparency is used instead of only solid colors: in that case the color is not determined solely by the very first voxel the ray makes contact with, but voxels lying behind the first one affect the final color, too.
Going further in this direction, a ray hitting a voxel could also be fully or partly reflected, enabling mirror effects. At this point the method crosses the line to ray tracing, as used for generating photorealistic images, e.g. in CAD systems. Volume ray casting normally delivers the most accurate and detailed images, but at the cost of relatively high computational demands. As modern video cards do not natively implement this working principle, the standard rendering pipelines on the video cards cannot be used, and custom programming on the CPU or GPU (as hardware-accelerated volume ray casting) is needed.
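The accumulation along a ray described above is typically implemented as front-to-back alpha compositing. A minimal sketch (the function name and the 0.99 opacity cutoff are illustrative assumptions) could look like this:

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (r, g, b, alpha) samples
    taken along one ray, nearest sample first.

    Returns the accumulated (r, g, b, alpha) for the pixel. Once the
    accumulated opacity is close to 1, samples further back can no
    longer be seen, so the loop terminates early (early ray termination).
    """
    acc_r = acc_g = acc_b = acc_a = 0.0
    for r, g, b, a in samples:
        weight = (1.0 - acc_a) * a   # how much this sample still shows through
        acc_r += weight * r
        acc_g += weight * g
        acc_b += weight * b
        acc_a += weight
        if acc_a >= 0.99:            # early ray termination
            break
    return (acc_r, acc_g, acc_b, acc_a)

# A half-transparent red sample in front of an opaque blue one:
print(composite_ray([(1, 0, 0, 0.5), (0, 0, 1, 1.0)]))  # -> (0.5, 0.0, 0.5, 1.0)
```

With a fully opaque first sample this degenerates to the "simplest version" above, where the first voxel hit alone determines the pixel color.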