Tuesday, December 17, 2013

lock-in amplifier

A lock-in amplifier (also known as a phase-sensitive detector) is a type of amplifier that can extract a signal with a known carrier wave from an extremely noisy environment (the signal-to-noise ratio can be -60 dB or even less). It is essentially a homodyne detector followed by a steep low-pass filter, making it very narrow-band. Practical lock-in amplifiers use a frequency mixer to convert the signal's phase and amplitude to a DC (in practice, a slowly varying low-frequency) voltage signal.
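The mix-then-filter principle can be sketched in a few lines of NumPy. This is illustrative only: the signal, frequencies, and noise level are made up, and for brevity the noise here is about 100x the signal rather than the 1000x of a -60 dB case, which would simply require longer averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test signal: 1 mV sine at the 1 kHz reference frequency.
fs, f_ref = 100_000.0, 1_000.0      # sample rate and reference frequency (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
amplitude, phase = 1e-3, 0.3        # the quantities we want to recover
noisy = (amplitude * np.sin(2 * np.pi * f_ref * t + phase)
         + rng.normal(0.0, 0.1, t.size))   # noise ~100x the signal

# Dual-phase demodulation: mix with in-phase and quadrature references,
# then low-pass filter (a plain mean is the crudest possible filter).
x = np.mean(noisy * 2 * np.sin(2 * np.pi * f_ref * t))   # in-phase
y = np.mean(noisy * 2 * np.cos(2 * np.pi * f_ref * t))   # quadrature

r = np.hypot(x, y)        # recovered amplitude (~1e-3)
theta = np.arctan2(y, x)  # recovered phase (~0.3 rad)
```

All noise components except those at (and immediately around) the reference frequency average toward zero in the mixing step, which is why the method is so narrow-band.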



Friday, December 6, 2013

Volume rendering Technique


Ray casting
Several implementations exist for ray casting. We describe the implementation used in Visualization Data Explorer. For every pixel in the output image a ray is shot into the data volume. At a predetermined number of evenly spaced locations along the ray, the color and opacity values are obtained by interpolation. The interpolated colors and opacities are merged with each other and with the background by compositing in back-to-front order to yield the color of the pixel. These compositing calculations are simple linear transformations.
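The back-to-front compositing step described above can be sketched as follows. This is a minimal illustration with scalar colors (an RGB triple works the same per channel); the function name is hypothetical.

```python
def composite_back_to_front(samples, background):
    """Blend (color, opacity) samples along a ray, back to front.

    `samples` is ordered far-to-near. Each step is the linear
    'over' blend:  accumulated = color * alpha + accumulated * (1 - alpha)
    """
    accumulated = background
    for color, alpha in samples:
        accumulated = color * alpha + accumulated * (1 - alpha)
    return accumulated
```

A fully opaque near sample (alpha = 1) completely hides everything composited before it, as expected.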

Wednesday, December 4, 2013

Volume Rendering

Volume Rendering describes the process of generating a 2D image of a 3D volumetric data set. Two major approaches are commonly used for this purpose: surface extraction and direct volume rendering. Isosurface rendering is the most common method of surface extraction. Within the volumetric data, volumetric pixels (voxels) with equal properties are located. These voxels are then combined into surfaces represented as polygonal meshes in the three-dimensional space of the data volume. These meshes, built from vertices (points in three-dimensional space), edges, polygons and surfaces, can then be processed very efficiently by modern video cards, as they just have to be handed to the hardware rendering pipeline. Various surface-extraction algorithms have been known for many years, for example the Marching Cubes algorithm. Because this method creates surfaces, it is very well suited to visualizing sharp property borders within a volume data set. For CT and MRI data the method therefore works very well - e.g. for visualizing bone structure or the boundaries of internal organs - and rendering is possible even with non-state-of-the-art video cards.
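The cell-by-cell logic of Marching Cubes is easiest to see in its 2D analogue, marching squares, which extracts isocontour line segments instead of isosurface triangles. This is a minimal sketch: the real 3D algorithm uses a 256-entry case table, and the two ambiguous saddle cases below are resolved arbitrarily.

```python
import numpy as np

# For each of the 16 corner classifications, the pairs of cell edges
# ("L"eft, "R"ight, "T"op, "B"ottom) joined by a contour segment.
CASES = {
    0: [], 15: [],
    1: [("L", "T")], 14: [("L", "T")],
    2: [("T", "R")], 13: [("T", "R")],
    3: [("L", "R")], 12: [("L", "R")],
    4: [("R", "B")], 11: [("R", "B")],
    6: [("T", "B")], 9: [("T", "B")],
    7: [("L", "B")], 8: [("L", "B")],
    5: [("L", "T"), ("R", "B")],   # ambiguous saddle: one of two choices
    10: [("T", "R"), ("L", "B")],  # ambiguous saddle: one of two choices
}

def marching_squares(field, iso):
    """Extract isocontour segments from a 2D scalar field, in (row, col) coords."""
    segments = []
    rows, cols = field.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            # corner values of the current 2x2 cell
            tl, tr = field[i, j], field[i, j + 1]
            bl, br = field[i + 1, j], field[i + 1, j + 1]
            case = ((tl > iso) * 1 + (tr > iso) * 2 +
                    (br > iso) * 4 + (bl > iso) * 8)
            # endpoints and values bounding each cell edge
            edges = {
                "T": ((i, j), (i, j + 1), tl, tr),
                "B": ((i + 1, j), (i + 1, j + 1), bl, br),
                "L": ((i, j), (i + 1, j), tl, bl),
                "R": ((i, j + 1), (i + 1, j + 1), tr, br),
            }
            def crossing(name):
                (r0, c0), (r1, c1), v0, v1 = edges[name]
                t = (iso - v0) / (v1 - v0)  # linear interpolation along the edge
                return (r0 + t * (r1 - r0), c0 + t * (c1 - c0))
            for a, b in CASES[case]:
                segments.append((crossing(a), crossing(b)))
    return segments
```

In 3D the output vertices and triangles can be handed directly to the hardware rendering pipeline, which is exactly why this approach is so efficient on video cards.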
For direct volume rendering a completely different approach is used. Each voxel is mapped via a transfer function to a color and a transparency. Such transfer functions range from simple ramp or windowing functions to more complex mathematically described functions or even arbitrary mappings using lookup tables. After this mapping of the raw data, the volume cube of color values has to be projected onto the pixels of a 2D image presented to the user. Different rendering techniques are available for this step; the two most commonly known are volume ray casting and texture mapping.
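A minimal sketch of one such transfer function, the windowing mapping commonly applied to CT data: values below the window are transparent black, values above are opaque white, and values inside ramp linearly. The function name and the choice of alpha = gray are illustrative.

```python
import numpy as np

def window_transfer(raw, center, width):
    """Map raw voxel values to (gray, alpha) with a windowing function."""
    lo, hi = center - width / 2.0, center + width / 2.0
    gray = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)
    alpha = gray  # simplest choice: opacity follows intensity
    return gray, alpha
```

An arbitrary mapping would replace the clipped ramp with a lookup table indexed by the raw value.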
In volume ray casting the 2D image is constructed from the observer's point of view. For each pixel in the image plane to be generated, one ray is cast from the observer's position through the volume to be rendered in the 3D scene. This ray can either miss the objects or hit an object within the scene. The voxel hit by the ray determines how to proceed. In the simplest version it could just deliver its color values back to the pixel the ray was cast for, and thus determine the color of that pixel. A more complex version accumulates voxel data along the ray and uses this accumulated data to determine the color of the original pixel. This is needed in particular when not only solid colors but also partial transparency is used. In that case the color is determined not only by the very first voxel the ray makes contact with; voxels lying behind the first one affect the final color, too. Taking this direction further, a ray hitting a voxel could also be reflected or partly reflected by it, enabling mirror effects. At this point the method crosses the line into ray tracing as used to generate photo-realistic images, e.g. in CAD systems. This method normally delivers the most accurate and detailed images, but at the cost of relatively high computational demands. As modern video cards do not natively implement this working principle, their standard rendering pipelines cannot be used, and custom programming on the CPU or GPU (as hardware-accelerated volume ray casting) is needed.
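The accumulation along the ray can also be done front to back, which permits the classic early-ray-termination optimization: once the accumulated opacity is nearly opaque, no voxel behind it can change the pixel. A minimal sketch with scalar colors; the function name is hypothetical.

```python
def cast_ray(samples):
    """Accumulate (color, alpha) samples along one ray, near-to-far."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        # each sample contributes only through the remaining transparency
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:   # early ray termination
            break
    return color, alpha
```

With fully opaque samples this degenerates to the simplest version described above: the first voxel hit determines the pixel color and the ray stops immediately.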


Three-CCD camera

A three-CCD camera is a camera whose imaging system uses three separate charge-coupled devices (CCDs), each taking a separate measurement of one of the primary colors: red, green, or blue light. Light coming into the lens is split by a trichroic prism assembly, which directs the appropriate wavelength ranges of light to their respective CCDs. The system is employed by still cameras, telecine systems, professional video cameras and some prosumer video cameras.



Charge-Coupled Device

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example converted into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing the transfer of charge between bins.
The CCD is a major piece of technology in digital imaging. In a CCD image sensor, pixels are represented by p-doped MOS capacitors. These capacitors are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology that allows light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data is required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active-pixel sensors (CMOS) are generally used; the large quality advantage CCDs enjoyed early on has narrowed over time.
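The one-bin-at-a-time shifting can be pictured as a toy shift register. This is an idealized sketch that ignores charge-transfer inefficiency and noise; the function name is hypothetical.

```python
def read_out(pixels):
    """Simulate CCD readout: each clock cycle the charge in the bin
    nearest the output node is digitized, and every remaining charge
    shifts one bin closer to the output."""
    bins = list(pixels)
    digitized = []
    for _ in range(len(bins)):
        digitized.append(bins[0])   # measure the output-node charge
        bins = bins[1:] + [0]       # all charges shift one bin over
    return digitized
```

A real sensor does this row by row: each row of pixel charges is shifted into a serial register, which is then clocked out through a single amplifier and ADC.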


Optical Coherence Tomography

Optical tomographic techniques are of particular importance in the medical field because they can provide non-invasive, non-contact diagnostic images. Optical coherence tomography (OCT) is a high-resolution biomedical imaging technique that produces cross-sectional tomographic images of the internal structure of biological tissues and materials by measuring back-reflected or backscattered light, and it works even in highly turbid media. OCT is analogous to ultrasound imaging except that it uses light instead of sound: it performs imaging by low-coherence interferometry, measuring either the echo time delay or the intensity of light backscattered from internal micro-structure in the tissue. Image resolutions of a few micrometers can be achieved in situ and in real time, one to two orders of magnitude finer than conventional ultrasound. This micrometer-scale imaging capability has given OCT a wide range of applications in biomedical science.
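The micrometer-scale resolution follows directly from the source's coherence length. For a Gaussian-spectrum source the axial (depth) resolution is dz = (2 ln 2 / pi) * lambda0^2 / delta_lambda; the example wavelength and bandwidth below are illustrative values typical of near-infrared OCT sources.

```python
import math

def axial_resolution(center_wavelength_m, bandwidth_m):
    """Axial resolution of OCT for a Gaussian-spectrum source.

    Improves with broader source bandwidth, and, unlike lateral
    resolution, is independent of the focusing optics.
    """
    return (2 * math.log(2) / math.pi) * center_wavelength_m**2 / bandwidth_m

# e.g. an 840 nm source with 50 nm bandwidth gives a few micrometers:
dz = axial_resolution(840e-9, 50e-9)
```

Compare this with conventional ultrasound, whose resolution is typically on the order of a hundred micrometers or more.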