Paper Summaries


Fourier Optics Papers and Others

These works are related to optical image processing using Fourier optics. This list was originally made for Gordon's contrast reduction project, so not all papers are Fourier optics papers. This list was made on January 17, 2008.

Zernike Phase Contrast

It might be helpful to reference the original Zernike Phase Contrast method.
* Zernike, F. "Diffraction theory of the knife-edge test and its improved form, the phase-contrast method," Monthly Notices of the Royal Astronomical Society 94, 377-384, 1934.

All-optical spatial filtering with power-limiting materials

Chandra Yelleswarapu, Pengfei Wu, Sri-Rajasekhar Kothapalli, D.V.G.L.N. Rao, Brian Kimball, S. Siva Sankara Sai, R. Gowrishankar, S. Sivaramakrishnan
Optics Express 2006
Forward References: 0
http://www.opticsinfobase.org/abstract.cfm?URI=oe-14-4-1451
  • OBJECTIVE: edge enhanced images, control enhancement with input intensity
  • HOW: Put nonlinear material in Fourier plane to act as high pass filter. The nonlinear material absorbs high intensities and allows lower intensities below a power-limiting threshold to pass. Since low frequencies have high intensities, they are absorbed while lower intensity high frequencies are transmitted. When the input intensity is above the power-limiting threshold, a high pass filter is created and the edge enhancement can be controlled by increasing input intensity.
  • OPINION: Edge image results are pretty good (compared to what I've seen in other papers). This paper is worth looking at even though it doesn't have any forward references yet. Previous work included the necessity of aligning other beams into the Fourier plane with careful attention to overlap of the two beams for frequency selection (not so easy!, and problematic).
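To make the mechanism concrete, here is a small numpy sketch of the power-limiting idea as I understand it. The binary intensity-threshold model and the threshold fraction are my assumptions, not numbers from the paper:

```python
import numpy as np

def power_limited_edge_enhance(img, threshold_frac=0.01):
    """Simulate a Fourier-plane power-limiting filter (hypothetical model).

    Spectral components whose intensity exceeds a fraction of the peak
    (the low frequencies, which carry most of the energy) are absorbed;
    the weaker high-frequency components pass, giving edge enhancement.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    I = np.abs(F) ** 2
    mask = I < threshold_frac * I.max()   # pass only low-intensity components
    out = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.abs(out) ** 2               # a detector records intensity

# toy input: a bright square on a dark background
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
edges = power_limited_edge_enhance(img)
```

With the DC and strong low-frequency terms absorbed, the flat interior of the square goes dark and the boundary lights up, which is the paper's edge-enhancement behavior in miniature.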

All-optical image processing by means of a photosensitive nonlinear liquid-crystal film: edge enhancement and image addition-subtraction

M.Y. Shih, A. Shishido, I.C. Khoo
Optics Letters 2001
Forward References: 7
http://www.opticsinfobase.org/abstract.cfm?&id=64908
  • OBJECTIVE: edge enhancement, image addition-subtraction
  • HOW: Put twisted PNLC (photosensitive nonlinear liquid-crystal) in F. plane. The high intensities of lower frequency components induce a polarization change in the PNLC making them have different polarization than the high frequency components. An analyzer (polarizer) controls the selection of frequencies and therefore edge enhancement. Image addition and subtraction are performed with a different setup (not Fourier optics) using INcoherent linearly polarized light and the PNLC sample. Signals A and B have different polarizations and are sent through the crystal. The angle of the polarizer on the output side of the crystal determines whether A and B are added or subtracted.
  • OPINION: This paper also seems nice, but I think the previous paper (which references this one) has slightly better edge results. The author I.C. Khoo has multiple papers in this area, which probably gives an indication of the quality of the work.
SIMILAR PAPER: All-optical neural-net-like image processing with photosensitive nonlinear nematic film
I.C. Khoo, K. Chen, A. Diaz
Optics Letters 2003
Forward References: 0
http://www.opticsinfobase.org/abstract.cfm?id=78022
  • OBJECTIVE: perform neural-net-like image processing optically, such as edge enhancement.
  • OPINION: This paper seems the same as his previous work; it's the same setup. I think he's just making the claim that the system can do neural-net-like image processing. I wouldn't really look at this.

All-optical image processing with a supranonlinear dye-doped liquid-crystal film

M.Y. Shih, A. Shishido, P.H. Chen, M.V. Wood, I.C. Khoo
Optics Letters 2000
Forward References: 3
http://www.opticsinfobase.org/abstract.cfm?&id=62054
  • OBJECTIVE: invert image intensities
  • HOW: Put nonlinear dye-doped nematic liquid crystal in F. plane to act as a phase modulation element. Amplitude modulation in the output signal is achieved by increasing the input beam power or using an external control beam. As the signal power is increased, the output goes from being a copy of the input to an inverted version of the input.
  • OPINION: Results show transition from input to inverted image. This work is also nice, but I would reference Yelleswarapu's "power-limiting material" paper over this one.

Programmable birefringent lenses with a liquid crystal display

Jeffrey Davis, Garrett Evans, Karlton Crabtree, Ignacio Moreno
Applied Optics 2004
Forward References: 0
http://www.opticsinfobase.org/abstract.cfm?URI=ao-43-34-6235
  • OBJECTIVE: create birefringent lens. add/subtract image for edge enhancement/blurring.
  • HOW: Not Fourier optics. Display a programmable diffractive lens on the LCD. Put LCD in back of normal glass lens. The LCD is vertically polarized, so for one polarization the lens+LCD becomes a lens combination, and for the other polarization only the glass lens affects it. This causes the object to be focused onto two image planes with different polarizations. A camera takes images focused at a single depth, so it captures a combination of the focused and blurred images of the same object. Since the focused and blurred images of the same object have different polarizations, a polarizer can be inserted into the path to control the addition/subtraction of the two images.
  • OPINION: Even though this paper does not use Fourier optics, it is interesting because it optically adds/subtracts focused and blurred versions of the same object. Figure 5 shows a) focused, b) blurred, c) addition, d) subtraction. The addition is edge blurring; the subtraction is edge enhancing. I don't know if this paper fits into your related work section though.

Detail-Preserving Contrast Reduction For Still Cameras

Prasanna Rangarajan, Panos Papamichalis
IEEE Intl. Conf. on Img. Proc. 2006
http://ieeexplore.ieee.org/iel5/4106439/4106440/04107172.pdf?tp=&arnumber=4107172&isnumber=4106440
  • ABSTRACT: This paper describes a detail-preserving contrast reduction technique for images with excessive illumination variation. In an attempt to preserve detail, an image is first separated into its illumination and reflectance components, by a partial differential equation. The illumination component is then scaled to globally reduce contrast. This enhances the perception of detail in the dark areas but at the expense of losing detail in the bright areas. The proposed approach minimizes this loss of detail using a novel recombination strategy. Additionally, the present algorithm preserves colors, avoids halo artifacts, and can be embedded into existing still-camera pipelines.

Adaptive optics with advanced phase-contrast techniques. I. High-resolution wave-front sensing.

Mikhail Vorontsov, Eric Justh, Leonid Beresnev
J. Opt. Soc. Am. A 2001
Forward References: 8
http://www.opticsinfobase.org/abstract.cfm?URI=josaa-18-6-1289
  • OBJECTIVE: high resolution phase-contrast wavefront sensors. contrast enhancement.
  • HOW: Based on the Zernike phase contrast method. They present 3 new sensor designs: differential, nonlinear, and optoelectronic Zernike filters. They also incorporate an intensity thresholding with the optoelectronic Zernike filter to further increase contrast. Intensity thresholding: "The phase is shifted by pi/2 for all spectral components q that satisfy I(q) >= 0.75*max(I(q))."
  • OPINION: This paper is part 1 of 2 parts. It looks like significant work in this area since it is a two-part paper with forward references on each part. This is a good reference if you're looking for recent work on phase-contrast techniques.
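
The intensity-thresholding rule quoted above is easy to simulate. A small numpy sketch (the phase-only test input, sampling, and all parameters are my own assumptions):

```python
import numpy as np

def optoelectronic_zernike(field, frac=0.75):
    """Sketch of the intensity-thresholded Zernike filter described above.

    The phase of every spectral component q with I(q) >= frac * max(I)
    is shifted by pi/2 (in practice this is essentially the DC term and
    its strongest neighbors); the image-plane intensity then encodes
    the wavefront phase with enhanced contrast.
    """
    F = np.fft.fft2(field)
    I = np.abs(F) ** 2
    shift = np.where(I >= frac * I.max(), np.exp(1j * np.pi / 2), 1.0)
    return np.abs(np.fft.ifft2(F * shift)) ** 2

# phase-only input: a weak phase bump on a uniform beam
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
field = np.exp(1j * 0.3 * np.exp(-(X**2 + Y**2) / 0.05))
contrast_img = optoelectronic_zernike(field)
```

Without the filter the detected intensity of a phase-only field is uniform; with the pi/2 shift on the strong components, the phase bump becomes visible as intensity contrast, which is the Zernike principle the sensors build on.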
SIMILAR PAPER: Adaptive optics with advanced phase-contrast techniques. II. High-resolution wave-front control
Eric Justh, Mikhail Vorontsov, Gary Carhart, Leonid Beresnev, P.S. Krishnaprasad
J. Opt. Soc. Am. A 2001
Forward References: 7
http://www.opticsinfobase.org/abstract.cfm?id=64351
  • OBJECTIVE: correct wavefront phase
  • HOW: Capture wavefront with sensor created in the Part I paper. Use gradient flow optimization to determine the correction phase. Display the correction phase on an SLM in the adaptive system.
  • OPINION: I wasn't too impressed with the resulting phase correction images.

Spatial phase-shift interferometry - a wavefront analysis technique for three-dimensional topometry

Shay Wolfling, Emmanuel Lanzmann, Moshe Israeli, Nissim Ben-Yosef, Yoel Arieli
J. Opt. Soc. Am. A 2005
Forward References: 1
http://www.opticsinfobase.org/abstract.cfm?URI=josaa-22-11-2498
  • OBJECTIVE: measure surface of object, generate height fields.
  • HOW: Insert an "optical manipulator (OM) such as a liquid-crystal-based device" in F. plane. The OM modulates amplitude and phase in F. domain in a binary mask way. The camera captures images for different modulations of amplitude and phase. Depending on the F. plane modulations, the images have different intensities for different surface heights. Put these images together to get object surface and height field.
  • OPINION: This paper might be interesting because it modulates amplitude as well as phase (in contrast to the Vorontsov papers above). Moreover, you can see the contrast differences in the image results in Figure 8.

Medical image processing using transient Fourier holography in bacteriorhodopsin films

Sri-Rajasekhar Kothapalli, Pengfei Wu, Chandra Yelleswarapu, D.V.G.L.N Rao
Applied Physics Letters 2004
http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=APPLAB000085000024005836000001&idtype=cvips&gifs=yes
  • OBJECTIVE: real time image processing, display selected spatial frequencies.
  • HOW: Fourier holography on bR film. (I don't quite understand Fourier holography, so this explanation won't be too clear.) There are two beams, one object beam and one reference beam. The bR film is located in the F. plane, where the two beams join together. The intensity of the reference beam is used to select spatial frequencies. When the object beam is blocked, an edge enhanced image is shown. An edge-softened image is shown when the reference beam is blocked.
  • OPINION: I'm not sure if this is a good reference. Yelleswarapu is an author on this paper (as well as the "power-limiting material" paper which references this one).

Optical scatter imaging: subcellular morphometry in situ with Fourier filtering

Nada Boustany, Scot Kuo, Nitish Thakor
Optics Letters 2001
Forward References: 8
http://www.opticsinfobase.org/abstract.cfm?URI=ol-26-14-1063
  • OBJECTIVE: detect wavelength-scale particle sizes
  • HOW: Dark field imaging technique. Put a variable iris with a center stop in F. plane. It blocks transmitted light to obtain an image of the scattered light.

Fractional derivatives - analysis and experimental implementation

Jeffrey Davis, David Smith, Dylan McNamara, Don Cottrell, Juan Campos
Applied Optics 2001
Forward References: 7
http://www.opticsinfobase.org/abstract.cfm?URI=ao-40-32-5943
  • OBJECTIVE: edge enhancement, examination of phase objects
  • HOW: Theoretical proof: mask D in Fourier plane represents the fractional derivative operator. The output is the superposition of two electric fields whose interference allows phase object imaging. Mask parameters can be changed to control which edge (in a 1D rect example) is emphasized more or less. A specific case of the fractional derivative operator is the fractional Hilbert operator.
  • OPINION: This paper is mainly theoretical. The experimental results are for a 1D rect example. Their 2D analysis is for the radially symmetric case. I don't think this paper has substantial practical results.
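Their 1D rect experiment is simple to reproduce numerically. A sketch of a fractional Hilbert filter as I understand the idea (the exact mask convention is my reconstruction, not copied from the paper):

```python
import numpy as np

def fractional_hilbert(signal, p):
    """1-D fractional Hilbert filter in the Fourier plane.

    Positive and negative frequencies receive opposite phase shifts of
    p*pi/2; p = 0 is the identity, p = 1 the classical Hilbert
    transform, and intermediate p weights the two edges of a rect
    unequally, which is the edge-selection effect described above.
    """
    F = np.fft.fft(signal)
    mask = np.exp(-1j * p * (np.pi / 2) * np.sign(np.fft.fftfreq(len(signal))))
    return np.fft.ifft(F * mask)

# 1-D rect object, as in the paper's experiment
rect = np.zeros(256)
rect[96:160] = 1.0
out = np.abs(fractional_hilbert(rect, p=0.5)) ** 2
```

Sanity checks: p = 0 returns the input unchanged, and p = 2 flips the sign of every nonzero frequency, giving 2*mean - rect.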

Spatial amplification: an image processing technique using the selective amplification of spatial frequencies

Tallis Chang, John Hong, Pochi Yeh
Optics Letters 1990
Forward References: 1
http://www.opticsinfobase.org/abstract.cfm?URI=ol-15-13-743
  • OBJECTIVE: amplify spatial frequencies instead of blocking frequencies as in spatial filtering
  • HOW: Put photorefractive crystal (BaTiO3) in F. plane. Align a pump beam to join the object beam in the F. plane. The spatial overlap of the pump beam and object beam determines the frequencies to amplify.
  • OPINION: This is a really bad paper. The idea seems problematic and not for practical use. The results are limited to filtering in the horizontal and vertical directions. They claim the setup is good for a cascading system, but they don't have experimental results to support that. In a practical sense, the setup with 2 beams overlapping to select spatial frequencies seems inexact; they don't talk about alignment of the beams to select spatial frequencies. The paper itself seems very elementary because it talks about basic concepts of spatial filtering that I thought were known already by 1990.
SIMILAR PAPER: Optical image processing by matched amplification
Tallis Chang, John Hong, Scott Campbell, Pochi Yeh
Optics Letters 1992
Forward References: 0
http://www.opticsinfobase.org/abstract.cfm?URI=ol-17-23-1694
  • OBJECTIVE: amplify selected frequencies of an image
  • HOW: Uses spatial amplification from previous paper. One beam contains an image while the other beam contains a selected part of the image. When the beams meet in the F. plane, the selected frequencies from the partial image are amplified. The result should be the original image with the selected part amplified.
  • OPINION: This doesn't work. This paper is even worse than the original spatial amplification paper. The results are really bad, and the concept is problematic: the spatial frequencies of the selected image region are amplified instead of the spatial region itself. Processing in the frequency domain will do that (obviously!). For example, they have an image with some lines of text. They use the letter 'w' as their selected image region on the second beam and expect the output to amplify all the w's. Of course, frequencies from the letter 'w' are present in other letters, and those other letters are amplified as well. They try to perform a spatial operation in the frequency domain and then spend a column and a half explaining why it didn't work as they expected. I thought this was common knowledge about frequency-domain filtering by 1992.

Optical image encryption based on input plane and Fourier plane random encoding

Philippe Refregier, Bahram Javidi
Optics Letters 1995
Forward References: 89
http://www.opticsinfobase.org/abstract.cfm?URI=ol-20-7-767
  • OBJECTIVE: encrypt/decrypt images optically, convert images to/from stationary white noise
  • HOW: To encrypt the image, multiply the image with a random noise phase mask, n, in the input plane. Then put another random noise phase mask, b, in the Fourier plane. The output is an encrypted complex image with amplitude and phase. To decrypt the image, put the encoded complex image in the input plane and the random noise phase mask, b, in the F. plane. The output when imaged onto a CCD is the decrypted image since the CCD will capture the squared magnitude of the output beam.
  • OPINION: This paper is truly a significant and influential paper in this field. 89 forward references! It has become a standard in the field of optical encryption.
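The double random phase encoding scheme is straightforward to simulate. A numpy sketch of my reading of the HOW above (function names and the unit-amplitude masks are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, n_mask, b_mask):
    """Double random phase encoding: multiply by input-plane phase mask
    n, apply Fourier-plane phase mask b. The output is a complex field
    resembling stationary white noise."""
    return np.fft.ifft2(np.fft.fft2(img * n_mask) * b_mask)

def drpe_decrypt(enc, b_mask):
    """Undo the Fourier-plane mask with its conjugate; taking the
    modulus then removes the remaining input-plane phase mask (a CCD
    would record the squared modulus)."""
    return np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(b_mask)))

img = rng.random((32, 32))                          # amplitude image
n_mask = np.exp(2j * np.pi * rng.random((32, 32)))  # input-plane mask
b_mask = np.exp(2j * np.pi * rng.random((32, 32)))  # Fourier-plane mask
enc = drpe_encrypt(img, n_mask, b_mask)
rec = drpe_decrypt(enc, b_mask)                     # recovers img
```

Decryption is exact because the masks are pure phase: conjugating b cancels the Fourier-plane encoding, and the detector's modulus discards n.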

Email from Tom Grycewicz (Jan 17, 2008)

Tom Grycewicz is currently at The Aerospace Corporation in the Sensor Engineering and Exploitation Dept. His work is in optical engineering with quite a bit of work on optical correlators and more generally, optical system modeling and design (image chain analysis), and pattern recognition. I had asked him what he thought were some significant and preferably recent contributions in Fourier optics applied to image processing. This is part of his email response.
Here is the basic problem with image processing: in order to pass data through an optical processing system, the data first needs to be transferred to a spatial light modulator, and this bottleneck is about ten Mpixels per second. Reading the answer requires a camera--another ~ten Mpixel per second bottleneck. A good math coprocessor can handle a Mpixel FFT faster than the data transfer rate to or from the optical system. Add to this all of the advantages a programmable processor has over analog hardware for computing.

Fingerprint recognition was an "almost ran" application. A company in Canada developed and marketed a product which used an optical processor (using an optical binary phase-only Fourier-plane filter) for a commercial system in ~1993. I don't remember the company; the key engineer was Colin Soutar. He published a few SPIE conference papers. But the technology was quickly surpassed by all-digital processing, like the recognition system included in the IBM ThinkPad laptop.

The closest successful application of Fourier optical processing to image analysis I can think of is Fourier transform infrared (FTIR) spectroscopy. This process allows a two-dimensional focal plane to capture a three-dimensional hyperspectral data cube. Many instruments of this type have been built, but I don't have references at my fingertips. A good example of the technology is the SPIRE instrument on the ESA Herschel space telescope.

An implementation of wavefront sensing used for Extreme Adaptive Optics (ExAO) uses a pinhole to spatially filter the input wavefront so that high-spatial-frequency modes are not sent to the Shack-Hartmann wavefront sensor. The idea first surfaced around 2004. I think the inventor was Lisa Poyneer at Lawrence Livermore National Lab. Of course, this is considered an application of simple optics, not optical processing. Applications of holography to optical memory show a lot of promise. Readout of a multilayer disk is essentially a holographic process. But of course no good engineer would mention optical disk readout and optical processing in the same breath. Optical processing is the technology of the future... and by unwritten agreement always will be.

Scalar Wave Optics, Time-Frequency Analysis, Wigner transform (Lukas)

Wigner and the LCT

Space-bandwidth product of optical signals and systems

Adolf W. Lohmann, Rainer G. Dorsch, David Mendlovic, Zeev Zalevsky, Carlos Ferreira
JOSA A, Vol. 13, Issue 3, pp. 470-473, 1996. doi:10.1364/JOSAA.13.000470
Forward References: n
http://www.opticsinfobase.org/abstract.cfm?URI=josaa-13-3-470
  • OBJECTIVE:
  • HOW:
  • OPINION: Short, informative paper on the space-bandwidth product (SW) of optical signals. Argues that the SW as a single number does not fully describe the situation, as it only denotes the energy (area in Wigner space) of the signal. Some (affine) transforms may change the shape of the signal in Wigner space while preserving its area, thus changing the required ratio of spatial to angular sampling and effectively clipping the signal if the sampling is kept constant. An argument for doing optical signal processing with Light Fields. Also shows the inversion I was talking about. Will it work on a LF?

Generalizing, optimizing, and inventing numerical algorithms for the fractional Fourier, Fresnel, and linear canonical transforms

Bryan M. Hennelly and John T. Sheridan
JOSA A, Vol. 22, Issue 5, pp. 917-927 doi:10.1364/JOSAA.22.000917
Forward References: n
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-22-5-917
  • OBJECTIVE:
  • HOW:
  • OPINION: Builds on Lohmann et al. (see above), tying together the Linear Canonical Transform (LCT) with general first-order operations on a complex-valued distribution. This corresponds to matrix optics of the LF / Wigner distribution function. Shows how the sampling requirement changes as the bounding box of the WDF is transformed by the LCT matrix operations, thus showing the effect of different transforms while staying in 2D. Discusses the SW of some known transforms and methods.
Note - Related paper from the same authors with a fast LCT in the same issue: "Fast numerical algorithm for the linear canonical transform," Bryan M. Hennelly and John T. Sheridan, JOSA A, Vol. 22, Issue 5, pp. 928-937. doi:10.1364/JOSAA.22.000928 http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-22-5-928 Don't know if it is necessary, as the LCT matrices can be decomposed into already-fast algorithms?

On the Existence of Discrete Wigner Distributions

JC O'Neill, P Flandrin, WJ Williams
IEEE Signal Processing Letters, 1999
http://perso.ens-lyon.fr/patrick.flandrin/IEEE_SPL1999.pdf

  • OPINION: Discussion of what happens when a discrete Wigner Distribution Function is used. Lists 6 properties and shows (via citation) that the WDF is the only Cohen-class function that satisfies all of them. Then lists four types of signals and thereafter shows that the only discrete signal satisfying the 6 properties is a periodic signal of odd length. The authors argue that the transforms for other types of discrete signals should not be called Wigner(-Ville).

-Note: the last property is that the WDF should be real. This means that the complex wave field needs to be hermitian, which could be interesting to look at when trying to project LF to wave optics.
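
For reference, here is a numpy sketch of a discrete WDF for the periodic odd-length case the paper singles out. The indexing and normalization conventions are mine; with this symmetric form the distribution comes out real, matching the last property:

```python
import numpy as np

def discrete_wdf(x):
    """Discrete Wigner distribution of a periodic signal of odd length N.

    W[n, k] = sum_m x[(n+m) % N] * conj(x[(n-m) % N]) * exp(-4j*pi*m*k/N).
    The doubled frequency in the kernel is the usual WDF feature; the
    symmetric product makes W real for periodic odd-length signals.
    """
    N = len(x)
    assert N % 2 == 1, "odd length required"
    m = np.arange(N)
    W = np.empty((N, N), dtype=complex)
    for n in range(N):
        prod = x[(n + m) % N] * np.conj(x[(n - m) % N])
        W[n] = [np.sum(prod * np.exp(-4j * np.pi * m * k / N))
                for k in range(N)]
    return W

x = np.exp(2j * np.pi * 3 * np.arange(63) / 63)  # single harmonic, N = 63
W = discrete_wdf(x)
```

For a single harmonic the energy concentrates at one frequency index, and the frequency marginal sums to N*|x[n]|^2, two of the standard properties.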

Ambiguity function and Wigner distribution function applied to partially coherent imagery

Brenner, K.-H.; Ojeda-Castañeda, J.
Optica Acta (Journal of Modern Optics) 1984, vol. 31, no. 2, pp. 213-233
http://www.informaworld.com/index/DJQTY58T494W9PAD.pdf
  • OPINION: Although the paper title indicates a focus on partially coherent light, the simpler coherent case is also discussed. Good recap of the WDF/Ambiguity Function. Also relates them to the mutual intensity and argues that the WDF/AF is more practical for some operations, i.e. hints at the LCT. Shows what happens to the WDF/AF in thin transparencies, transport, lenses, etc.

Quasi light fields: extending the light field to coherent radiation

Anthony Accardi and Gregory Wornell
JOSA A, Vol. 26, Issue 9, pp. 2055-2066
http://www.opticsinfobase.org/abstract.cfm?URI=josaa-26-9-2055

  • OBJECTIVE:

In short, the paper tries to connect light fields (as in CG/computational photography, citing Levoy's and Gortler's original papers, Zhang and Levoy, as well as a few others), the radiometry work of Walther [and Friberg, and Wolf], and wave optics. They take a small route through quantum optics (Wolf again) and signal processing (Ville, Wigner).

The paper is quite readable, and I expect that many of the cited papers give a good overview of the different fields. They themselves say that they wish to show the similarities.

What they do is basically extend the light field to what they call a quasi light field that represents coherent (and partially coherent) light. It is not required to be constant along its rays. They then show that all possible such LFs can be described with the Cohen class of time-frequency distributions (the Wigner is one of them). Thereafter they proceed to discuss how to (theoretically) capture a quasi light field, i.e. how to convert a scalar distribution (what I have called a wave field at times). They argue that the process I used in the EG 07 paper (Ziegler et al., the work I did with M. Gross's group at ETH) results in a form of quasi light field they call a spectrogram, and they connect it to Zhang and Levoy's 09 paper, as they show how to record it. Probably correct in their framework.

They compare this to the Wigner distribution and finally settle on something they call the conjugate Rihaczek quasi light field. In contrast to the Wigner dist, which is real valued (note: for a numerical, discrete case only very specific scenarios give rise to a purely real-valued WDF), the cRQLF is complex valued. Their argument for the cRQLF is that it is a good trade-off between localization (the problem of Fourier) and cross terms (the problem of the WDF).

Imaging is discussed and the authors treat imaging in the near field (far field can be treated using a standard light field) by constructing a version of cRQLF that is dependent on the scene geometry (as one would expect). Basically I believe that this can be seen as a type of light field with a known distance along each ray so that the phase-dependent effects may come in.

In section 5.B the authors argue that pure quasi light fields (i.e. the ones that are not distance dependent) will only image in the far field. This could confirm our suspicion that the Wigner distribution cannot recover the LF from a hologram unless in the Fraunhofer region? Finally, they express the process as a Hermitian product of two beamformers. This needs to be looked into further.

Overall this paper does a good job of framing many of our ideas (distance dependence for LFs that should do diffraction, etc.) and suspicions (WDF in the far field, etc.) with a good theoretical background in radiometry and signal processing. The reference list should provide some interesting further reading.

Digital Holography, general

Digital recording and numerical reconstruction of holograms

Ulf Schnars and Werner P O Juptner
Measurement Science and Technology. Vol. 13, no. 9, pp. R85-R101. Sept. 2002
Forward References: n
http://www.iop.org/EJ/article/0957-0233/13/9/201/e209r1.pdf
  • OBJECTIVE:
  • HOW:
  • OPINION: Review article introducing digital holography and a couple of applications (shape measurement and microscopy). Shows the Fresnel transform in both the direct (single Fourier transform) and the convolution formulation. Shows the analytic FT of the convolution kernel. Mentions the reconstruction scaling introduced by the direct method.
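
The direct method and its reconstruction scaling can be sketched in a few lines. A numpy version under my own conventions (constant phase prefactors dropped, square grids assumed):

```python
import numpy as np

def fresnel_direct(field, wavelength, z, dx):
    """Single-FFT (direct) Fresnel transform, as reviewed in the paper.

    Multiply by a quadratic chirp, then take one FFT. The signature of
    the method is the z-dependent output pixel pitch, dx_out, i.e. the
    reconstruction scaling mentioned above.
    """
    N = field.shape[0]
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    out = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * chirp)))
    dx_out = wavelength * z / (N * dx)   # output sampling scales with z
    return out, dx_out

# plane wave through a square hologram, HeNe-ish parameters (my choice)
field = np.ones((256, 256), dtype=complex)
amp, pitch = fresnel_direct(field, 633e-9, 0.5, 10e-6)
```

The convolution formulation keeps the input pitch at the cost of two extra FFTs; the direct method trades that for the scaling computed here.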

Free-space beam propagation between arbitrarily oriented planes based on full diffraction theory: a fast Fourier transform approach

N. Delen and B. Hooker
JOSA A, Vol. 15, Issue 4, pp. 857-867 doi:10.1364/JOSAA.15.000857
Forward References: n
http://www.opticsinfobase.org/abstract.cfm?URI=josaa-15-4-857
  • OBJECTIVE:
  • HOW:
  • OPINION: Quite nice paper showing how to perform Rayleigh-Sommerfeld diffraction between non-parallel planes using the angular spectrum of plane waves. A general complex-valued distribution in a plane can be viewed as a spectrum of plane waves through a Fourier transform. A rotation of the plane in 3D space is then equivalent to a transformation of the Fourier coefficients.
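
The parallel-plane building block of this approach is plain angular-spectrum propagation. A numpy sketch (their plane rotation, a resampling of the same Fourier coefficients, is not shown; parameters are my own):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, z, dx):
    """Propagate a complex field between parallel planes via the
    angular spectrum of plane waves.

    Each Fourier coefficient is a plane wave; propagation over z
    multiplies it by exp(i*kz*z). Evanescent components (negative
    argument under the square root) are discarded.
    """
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# square aperture illuminated by a plane wave (my toy example)
aperture = np.zeros((128, 128), dtype=complex)
aperture[48:80, 48:80] = 1.0
prop = angular_spectrum_propagate(aperture, 633e-9, 1e-3, 5e-6)
```

Since |H| = 1 on the propagating modes, the method conserves energy, and z = 0 returns the input field unchanged, which makes for easy sanity checks.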

Phase-shifting digital holography

Ichirou Yamaguchi and Tong Zhang
Optics Letters, Vol. 22, Issue 16, pp. 1268-1270 doi:10.1364/OL.22.001268
Forward References: n
http://www.opticsinfobase.org/abstract.cfm?URI=ol-22-16-1268
  • OBJECTIVE:
  • HOW:
  • OPINION: Well-known phase-shifting paper where the authors show how to reconstruct phase and amplitude from four holograms with shifted phase (pi/2). They reconstruct the phase fully and thus have the complex-valued distribution instead of the intensity-valued holographic interference. Newer and older papers (especially in interferometry) exist, but this one is accessible and targeted especially at holograms. Note - the four-recording algorithm is only one of a family, though that is not mentioned in this paper.
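
The four-step reconstruction itself is one line. A numpy sketch with a synthetic object and a unit-amplitude reference (the test scene is mine; the combination of the four intensities is the standard four-step formula):

```python
import numpy as np

def reconstruct_field(I0, I1, I2, I3, ref_amp=1.0):
    """Four-step phase-shifting reconstruction.

    I0..I3 are hologram intensities recorded with reference phase
    shifts 0, pi/2, pi, 3*pi/2. The cosine terms cancel pairwise,
    leaving the real and imaginary parts of the object field (relative
    to the reference phase); a known uniform reference amplitude is
    assumed for the normalization.
    """
    return ((I0 - I2) + 1j * (I1 - I3)) / (4.0 * ref_amp)

# synthetic complex object field and unit-amplitude reference
rng = np.random.default_rng(1)
obj = rng.random((16, 16)) * np.exp(2j * np.pi * rng.random((16, 16)))
holo = lambda d: np.abs(obj + np.exp(1j * d)) ** 2
rec = reconstruct_field(holo(0), holo(np.pi/2), holo(np.pi), holo(3*np.pi/2))
```

In this noise-free simulation the recovery is exact: I0 - I2 = 4 Re(obj) and I1 - I3 = 4 Im(obj).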

Frequency analysis of digital holography

Thomas M. Kreis
Opt. Eng., Vol. 41, 771 (2002); doi:10.1117/1.1458551
Forward References: n
http://spiedl.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=OPEGAR000041000004000771000001&idtype=cvips&gifs=yes
  • OBJECTIVE:
  • HOW:
  • OPINION: Commonly cited paper by Kreis where he analyses the effect of sampling a digital hologram. Basically a frequency analysis of the effects of pixel size and fill factor.
Note - I recently became aware of a comment on this paper but have not had time to look at it closely: "Effect of the fill factor of CCD pixels on digital holograms: comment on the papers 'Frequency analysis of digital holography' and 'Frequency analysis of digital holography with reconstruction by convolution'," Cheng-Shan Guo, Li Zhang, Zhen-Yu Rong, Hui-Tian Wang, Opt. Eng., Vol. 42, 2768 (2003); doi:10.1117/1.1599841

Literature, review, and overview papers

Diffraction and Holography from a Signal Processing Perspective

L. Onural and H. M. Ozaktas
HOLOGRAPHY Conference, 2005
Forward References: n
link
  • OBJECTIVE:
  • HOW:
  • OPINION: Not a great paper in itself; basically expresses the LCT (or a form thereof) as a fractional Fourier transform. However, the list of references has some good pointers to overview papers, classical papers, and 'recent' (late 1990s - ~2005) work in signal processing for optics. Not an extensive list, but once some of the self-references are washed out (although there is important work from Onural and Ozaktas in there), some good work remains, e.g. Papoulis, Sherman, Delen and Hooker, Lohmann, Kreis, Wolf, and some others. Have not read all of them.

Computer generated holograms: an historical review

G. Tricoles
Applied Optics, Vol. 26, Issue 20, pp. 4351-4357 doi:10.1364/AO.26.004351
http://www.opticsinfobase.org/ao/abstract.cfm?uri=ao-26-20-4351
  • OBJECTIVE:
  • HOW:
  • OPINION: A bit over 20 years old, so it is dated and lacks much of the modern development in CGH. It is, however, a brief introduction to the general problem of CGH and has an extensive literature list.

Three-dimensional imaging and processing using computational holographic imaging

Yann Frauel, Thomas J. Naughton, Osamu Matoba, Enrique Tajahuerce, and Bahram Javidi
Proceedings of the IEEE, March 2006, Volume: 94, Issue: 3, 636-653, ISSN: 0018-9219
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1605208

  • OBJECTIVE:
  • HOW:
  • OPINION: An overview paper that describes the state of the (authors') research in computational holography as of about 2006. OK introduction to the field and to different applications and problems: compression, encryption, object recognition.

Computer Generated Holography (CGH)

Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method

Kyoji Matsushima and Sumio Nakahara
Applied Optics, Vol. 48, Issue 34, pp. H54-H63 doi:10.1364/AO.48.000H54
http://www.opticsinfobase.org/abstract.cfm?URI=ao-48-34-H54

  • OPINION: Interesting paper both summarizing and extending Matsushima's previous work. Using the sampled polygon method developed by M., they create a 4.3-gigapixel hologram of a 3D scene (polygon model with a planar, textured background). Printing is done on a Heidelberg DWL 66 and reconstruction uses a HeNe laser, though M. also states that reconstruction by LED is possible (leading to a degraded image). Some additional notes:
    • Assembly of object field in the object plane before propagation to hologram plane (fewer samples?)
    • A complex valued float frame buffer of size 32 GB, thus a parallel/distributed method is used
    • Parallel method upper bound by message passing
    • Shifted Fresnel transform used for propagation
    • Visibility only computed as a shadow mask in the object plane, and only between object and background

Marc Levoy's reading list on Holographic Illumination

Holographic photolysis of caged neurotransmitters

Lutz C., Otis T.S., DeSars V., Charpak S., DiGregorio D.A., Emiliani V.,
Nature Methods, Vol. 5, No. 9, September 2008.

Marc's comments:

Uses an LC-SLM to modulate the phase of a laser beam. Figure 1 shows several phase masks and the resulting 3D intensity patterns. Cites other papers by this group (optics05, japanphysics04) for construction of the phase masks. One pattern is a collection of 0.4-micron spots; another is a collection of large spots. In the latter, the spots' edges are fairly sharp, but their interior intensities are noisy (15% variability). Each pattern is generated using a single phase mask, without scanning. Figure 2 shows a phase mask computed to generate a pattern observed through the microscope. This pattern would appear at the objective focal plane. Figure 3 shows that the axial extent of these spots (depth of focus) is much tighter than a Gaussian beam (i.e. simple laser beam). This makes sense, because a phase mask utilizes the entire aperture plane, while a Gaussian beam considerably underfills the aperture (i.e. it is paraxial). Question: how does this depth of focus compare to a refractively focused spot?

Discussion points out that DLPs would provide more time control than LC-SLMs, kHz vs. 60 Hz, but when generating a collection of spots, efficiency is much higher using phase masks than DLPs, ~50% vs. ~10%. Scanning using acousto-optic deflectors (AODs) or galvanometers is an alternative, but scanning rates are limited. Mentions that phase masks can simultaneously correct for optical aberrations. Ends by mentioning that 3D patterning has many other applications in fluorescence microscopy.

Extraordinary paper, recommended by Rudolf Oldenbourg and Ramin Pashaie (in Karl Diesseroth's lab). Terse but well written, with highly informative figures, each one a collage of several images and related illustrations. Includes a detailed description of the 3D pattern -> 2D phase mask algorithm as supplemental material. The forward and back propagation steps are Fourier transforms, as expected. They report that 5-8 iterations usually suffice, taking about 1 second on a PC. (This is only for a single plane. Their genetic algorithm for computing multiple planes presumably takes longer, perhaps more like the dozens or hundreds of iterations reported in Piestun ieee02.) Cites Gerchberg and Saxton, 1972, Fienup 1982, and other classic papers.
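The single-plane algorithm they describe is essentially Gerchberg-Saxton phase retrieval. A minimal sketch (assuming, as a simplification, that the SLM and focal planes are related by a single FFT and that the SLM is phase-only; function name and defaults are mine):

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=8, seed=0):
    """Compute a phase-only mask whose far field approximates target_intensity.

    Propagation between the SLM plane and the focal plane is modeled as
    an FFT pair; each iteration enforces the target amplitude in the
    focal plane and unit amplitude (phase-only) at the SLM plane.
    """
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.uniform(0, 2 * np.pi, target_intensity.shape)
    for _ in range(n_iter):
        slm_field = np.exp(1j * phase)                      # phase-only constraint
        focal = np.fft.fft2(slm_field)                      # forward propagation
        focal = target_amp * np.exp(1j * np.angle(focal))   # amplitude constraint
        phase = np.angle(np.fft.ifft2(focal))               # back propagation
    return phase
```

The error typically drops steeply over the first few iterations, consistent with the 5-8 iterations reported.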

Wave front engineering for microscopy of living cells

Emiliani V., Cojoc D., Ferrari E., Garbin V., Durieux C., Coppey-Moisan M., Fabrizio E.D., Optics Express, Vol. 13, Issue 5, pp. 1395-1405, 7 March 2005.
http://www.opticsinfobase.org/oe/abstract.cfm?URI=OPEX-13-5-1395

Marc's comments:
An LC-SLM is used to modulate the phase of a laser beam to create focused points at any position in 3D. This capability is used to create optical tweezers whose position is held constant in the specimen despite axial motion of the objective during 3D confocal scanning.
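A common way to realize such a steerable 3D focus (a hedged sketch of the standard construction, not necessarily this paper's exact one) is to sum a blazed-grating term for lateral displacement and a Fresnel-lens term for axial displacement, wrapped modulo 2π; the function name and parameters below are mine:

```python
import numpy as np

def spot_phase_mask(N, pitch, wavelength, f, dx_shift, dy_shift, dz_shift):
    """Phase mask (radians, wrapped to [0, 2*pi)) that displaces the focus
    of a lens of focal length f by (dx_shift, dy_shift, dz_shift).

    A linear phase ramp (blazed grating) shifts the spot laterally;
    a quadratic (Fresnel lens) term shifts it axially. Paraxial
    approximation; pitch is the SLM pixel pitch.
    """
    x = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(x, x)
    k = 2 * np.pi / wavelength
    lateral = k * (X * dx_shift + Y * dy_shift) / f       # grating term
    axial = -k * dz_shift * (X**2 + Y**2) / (2 * f**2)    # lens term
    return np.mod(lateral + axial, 2 * np.pi)
```

Summing several such masks' complex fields and taking the resulting phase is one route to multiple simultaneous traps, as in the papers below.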

Multiple Optical Trapping by Means of Diffractive Optical Elements

Cojoc D., Emiliani V., Ferrari E., Malureanu R., Cabrini S., Proietti R.Z., Fabrizio E.D.,
Japanese J. Appl. Phys., Vol. 43, 2004, pp. 3910-3915.
http://jjap.ipap.jp/link?JJAP/43/3910

Marc's comments:

Describes a genetic algorithm to solve for a phase-only DOE given a desired 3D intensity distribution. The sub-algorithm to achieve a desired distribution on a single focal plane looks a lot like Piestun's (ieee02) first method. However, the two literatures seem entirely disjoint, having no cited papers in common! (Some share citations of the Gerchberg-Saxton algorithm, which dates to 1972.) The DOE is implemented using an LC-SLM. Clearly written article, which also explains how the LC-SLM works, and shows an example phase mask. This capability is used to create optical tweezers that can trap multiple cells and hold them in a fixed position relative to the specimen despite axial motion of the stage (performed by hand).
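As a toy illustration of the genetic-algorithm framing (a hedged sketch with operators and a fitness function of my own choosing, not the paper's; the far field is modeled as a single FFT and only one plane is optimized):

```python
import numpy as np

def ga_phase_doe(target_amp, pop_size=20, n_gen=30, mut_rate=0.05, seed=1):
    """Toy genetic algorithm for a phase-only DOE.

    Each individual is a phase mask; fitness is the correlation between
    the far-field amplitude |FFT(exp(i*phase))| and the target amplitude.
    Elitist selection, uniform crossover, pixelwise mutation. Far slower
    than Gerchberg-Saxton, but makes the optimization framing explicit.
    """
    rng = np.random.default_rng(seed)
    shape = target_amp.shape
    t = target_amp / np.linalg.norm(target_amp)

    def fitness(phase):
        a = np.abs(np.fft.fft2(np.exp(1j * phase)))
        return float(np.sum(t * a / np.linalg.norm(a)))

    pop = [rng.uniform(0, 2 * np.pi, shape) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.choice(len(elite), 2, replace=False)
            child = elite[p1].copy()
            mask = rng.random(shape) < 0.5      # uniform crossover
            child[mask] = elite[p2][mask]
            mut = rng.random(shape) < mut_rate  # pixelwise mutation
            child[mut] = rng.uniform(0, 2 * np.pi, mut.sum())
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

The elitism guarantees the best mask ever seen survives, which is why even this crude version improves steadily on a random mask.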

Synthesis of Three-Dimensional Light Fields and Applications,

Piestun R., Shamir J.
Proc. IEEE, Vol. 90, No. 2, February 2002.
http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=21333&arnumber=989871&count=10&index=4

Marc's comments:

Presents theory on what kinds of 3D light fields can be generated using a propagating coherent 2D wavefront. The answer: the field's 3D spectrum is confined to a sphere whose radius is determined by the light's wavelength, the Ewald sphere. For a finite aperture, this becomes a spherical cap. If you don't care about phase in the generated field, it becomes a doughnut of twice the bandwidth (see figure 5). Unfortunately, the author is a poor explainer, so the paper gives little insight. Sheppard's papers (e.g. josa94) explain this same idea for imaging, and do it better. See also Gustafsson's papers. A more thorough explanation by Piestun of the limits on what can be generated can be found in his josa96 paper (reference [14]).
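The support constraint behind this result can be stated in one line (a minimal restatement in the scalar setting, not the paper's notation):

```latex
% A monochromatic scalar field U(x,y,z) of wavelength \lambda satisfies
% the Helmholtz equation
(\nabla^2 + k^2)\, U = 0, \qquad k = \frac{2\pi}{\lambda}.
% Writing U as a 3D Fourier integral with spectrum \tilde{U}(k_x,k_y,k_z),
% the equation forces
(k_x^2 + k_y^2 + k_z^2 - k^2)\,\tilde{U}(k_x,k_y,k_z) = 0,
% so \tilde{U} is nonzero only on the sphere |\mathbf{k}| = 2\pi/\lambda
% (the Ewald sphere); a finite aperture and one-way propagation restrict
% the support further to a spherical cap.
```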

The paper also surveys ways to create light fields, describing several methods for generating any physically realizable light field in 3D. The method he favors is a simple optimization that alternates between projecting the current solution onto the desired 3D light field (what the author calls a 3D wave field) and projecting it onto the "diffraction propagator" (equation 15), which constrains the solution to physically generatable wave fields. His original paper on this method appears to be Piestun optics94. Regarding which 3D light fields can be generated and which cannot, page 228 lists a few constraints, but fails to give a comprehensive intuition.
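The alternating-projection scheme can be sketched as a generalized Gerchberg-Saxton over several z-planes (a hedged sketch: angular-spectrum propagation stands in for the "diffraction propagator", the target is given as amplitudes on a few planes, and the back-projections are simply averaged; function names and parameters are mine):

```python
import numpy as np

def angular_spectrum(u, wavelength, z, dx):
    """Propagate complex field u by distance z via the angular spectrum method."""
    N = u.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * z))

def project_multiplane(targets, wavelength, dx, n_iter=20):
    """Alternate between amplitude targets on several z-planes and the set
    of propagable fields (a generalized Gerchberg-Saxton).

    targets: dict mapping z (meters) -> desired amplitude array at that plane.
    Returns the complex field at z = 0.
    """
    zs = sorted(targets)
    N = next(iter(targets.values())).shape[0]
    u0 = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, (N, N)))
    for _ in range(n_iter):
        acc = np.zeros((N, N), dtype=complex)
        for z in zs:
            uz = angular_spectrum(u0, wavelength, z, dx)
            uz = targets[z] * np.exp(1j * np.angle(uz))       # amplitude constraint
            acc += angular_spectrum(uz, wavelength, -z, dx)   # back-propagate
        u0 = acc / len(zs)  # average of the back-projections
    return u0
```

Averaging the back-projections is the simplest choice; the iteration counts Piestun reports (dozens to hundreds) suggest the multi-plane problem converges far more slowly than the single-plane one.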

The paper continues by describing the ability (using these methods) of generating non-diffracting beams (see Durnin josa87), dark beams (narrow black cores that propagate for a long time without defocusing), arbitrary curves of light in 3D, arbitrary patterns with extended depth of field (I looked at reference [107]; it doesn't give the form of the DOE), and multiple planar patterns, each one in focus at a different depth (figure 20).

Finally, his paper completely omits lenslet/microlens arrays, and he doesn't cite Lippmann or Ives! He attended the UNCC/OSA Computational Imaging and Superresolution workshop in June 2008 with me, so he now knows. Cite this paper in your first paper on creating light fields. (Done, J. Micr., 2009.) By the way, reading this paper you would think that Piestun invented the idea of computing DOEs that implement particular 3D wave fields. (See for example the top of page 226.) However, there appear to have been parallel developments in microscopy. See for example Cojoc et al.'s japanphysics04 paper. There are almost no citations in common that address the fundamental method until one goes back to Gerchberg-Saxton 1972.

Control of wave-front propagation with diffractive elements,

Piestun R., Shamir J.,
Optics Letters, Vol. 19, No. 11, June 1, 1994.
http://www.opticsinfobase.org/abstract.cfm?URI=ol-19-11-771

Marc's comments:

The original paper introducing Piestun's method; see his ieee02 paper. For a concrete example of non-diffracting beams, see cited Durnin josa87.pdf.

Wave fields in three dimensions: analysis and synthesis,

Piestun R., Spektor B., Shamir J.
J. Opt. Soc. Am. A, Vol. 13, No. 9, September 1996, p. 1837.
http://www.opticsinfobase.org/abstract.cfm?URI=josaa-13-9-1837

Marc's comments:

Actually shows an example diffractive optical element (DOE) that generates a particular 3D wave field. Also contains a fairly thorough explanation of the limits on the wave fields that can be generated, although without the figures that can be found in his ieee02 paper. In particular, we are reminded that one can specify only 2D worth of information about the 3D wave field, corresponding to a spherical cap in Fourier space, or equivalently to the DOE itself, which is of finite size and limited spatial resolution. The same is true, of course, of LFI, which we generate using a 2D DLP. The discussion of degrees of freedom (page 1842) is also interesting, including an example of how much axial control can be expected from a DOE with a particular aperture and resolution.

Lukas' comments:

  • Paper seems valid in the Fresnel region, and derives a set of sampling conditions.
  • Shows that the sampling in depth is a function of the same, i.e. non-constant sampling intervals.
  • Note: Connecting this to the 2009 JOSA A paper by Accardi and Wornell (see Lukas' paper above) could prove a sampling requirement for the depth-dependent rays in near-field light fields.
  • Shows that the FT of a 3D distribution resulting from a 2D diffracting aperture is nonzero only on a spherical surface in 3D, of radius r = 1/λ.
  • Defines the resolution limit as the minimum distance between two features that can be captured, and shows that it is a function of aperture and position.
  • Fig. 4 shows an interesting aliasing frustum.
  • Formulates the general CGH problem (for a 3D distribution) as an inverse problem. Very nice to see it in print.

Other Topic

Title

Authors
Journal Year
Forward References: n
link
  • OBJECTIVE:
  • HOW:
  • OPINION:
Topic revision: r9 - 2009-10-13 - lukasa