R. J. Woodham, E. Catanzariti, and Alan K. Mackworth. Analysis by Synthesis in Computational Vision with Application to Remote Sensing. Computational Intelligence, 1(2):71–79, 1985.
The central problem in vision is to determine scene properties from image properties. This is difficult because the problem, formally posed, is underconstrained. Methods that infer scene properties from images make assumptions about how the world determines what we see. In remote sensing, some of these assumptions can be dealt with explicitly. Available scene knowledge, in the form of a digital terrain model and a ground cover map, is used to synthesize an image for a given date and time. The synthesis process assumes that the surface is a perfectly diffuse or Lambertian reflector. A scene radiance equation is described based on simple models of direct solar irradiance, diffuse sky irradiance, and atmospheric path radiance. Parameters of the model are estimated from the real image. A statistical comparison of the real image and the synthetic image is used to judge how well the model represents the mapping from scene to image. The methods presented for image synthesis are similar to those used in computer graphics. The motivation, however, is different. In graphics, the goal is to produce an effective rendering of the scene for a human viewer. Here, the goal is to predict properties of real images. In vision, one must deal with a confounding of effects due to surface shape, surface material, illumination, shadows, and atmosphere. These effects often detract from, rather than enhance, the determination of invariant scene characteristics.
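The abstract describes a scene radiance equation combining a Lambertian surface term with direct solar irradiance, diffuse sky irradiance, and additive atmospheric path radiance. As a minimal sketch of that kind of model (not the paper's exact equation; the function name, parameters, and the simple additive form are illustrative assumptions), one facet's predicted radiance could be computed as:

```python
import numpy as np

def scene_radiance(albedo, normal, sun_dir, e_sun, e_sky, l_path):
    """Hypothetical Lambertian scene radiance for one surface facet.

    Combines three effects named in the abstract:
    - direct solar irradiance, scaled by the cosine of the incidence angle
    - diffuse sky irradiance, assumed uniform here
    - atmospheric path radiance, added on top
    The Lambertian assumption gives the 1/pi factor relating
    reflected radiance to collected irradiance.
    """
    # Incidence-angle attenuation; facets facing away from the sun
    # receive no direct irradiance (a crude shadow/self-shading test).
    cos_i = max(0.0, float(np.dot(normal, sun_dir)))
    return (albedo / np.pi) * (e_sun * cos_i + e_sky) + l_path
```

In an analysis-by-synthesis loop, a function like this would be evaluated per pixel over the digital terrain model (normals from the terrain, albedo from the ground cover map) and the result compared statistically against the real image.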
@Article{CI85,
  author           = {R. J. Woodham and E. Catanzariti and Alan K. Mackworth},
  title            = {Analysis by Synthesis in Computational Vision with Application to Remote Sensing},
  year             = {1985},
  journal          = {Computational Intelligence},
  volume           = {1},
  number           = {2},
  pages            = {71--79},
  abstract         = {The central problem in vision is to determine scene properties from image properties. This is difficult because the problem, formally posed, is underconstrained. Methods that infer scene properties from images make assumptions about how the world determines what we see. In remote sensing, some of these assumptions can be dealt with explicitly. Available scene knowledge, in the form of a digital terrain model and a ground cover map, is used to synthesize an image for a given date and time. The synthesis process assumes that the surface is a perfectly diffuse or Lambertian reflector. A scene radiance equation is described based on simple models of direct solar irradiance, diffuse sky irradiance, and atmospheric path radiance. Parameters of the model are estimated from the real image. A statistical comparison of the real image and the synthetic image is used to judge how well the model represents the mapping from scene to image. The methods presented for image synthesis are similar to those used in computer graphics. The motivation, however, is different. In graphics, the goal is to produce an effective rendering of the scene for a human viewer. Here, the goal is to predict properties of real images. In vision, one must deal with a confounding of effects due to surface shape, surface material, illumination, shadows, and atmosphere. These effects often detract from, rather than enhance, the determination of invariant scene characteristics.},
  bib2html_pubtype = {Refereed Journal},
  bib2html_rescat  = {},
}
Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 23, 2014 19:08:34