[Teaser figure, top row: RGB image | CIE L* | our output; bottom row: no filter | blue filter | our output]
Two examples of color space optimizations: color-to-gray conversion (top row) and conversion of a six-primary image to RGB (bottom row). Our output restores the contrast between the red and blue elements of the impressionist painting, which is lost in the standard L* conversion to grayscale. In the bottom row, we combine the six channels from images taken with and without a blue filter in front of the camera. Our output preserves the difference between the fake lemon and the real orange while remaining close to the natural image taken without the filter.
Abstract
Transformations between different color spaces and gamuts are ubiquitous operations performed on images. Often, these transformations involve information loss, for example when mapping from color to grayscale for printing, from multispectral or multiprimary data to tristimulus spaces, or from one color gamut to another. In all these applications, there exists a straightforward "natural" mapping from the source space to the target space, but the mapping is not bijective, resulting in information loss due to metamerism and similar effects.
We propose a cluster-based approach for optimizing the transformation for individual images in a way that preserves as much of the information as possible from the source space while staying as faithful as possible to the natural mapping. Our approach can be applied to a host of color transformation problems including color to gray, gamut mapping, conversion of multispectral and multiprimary data to tristimulus colors, and image optimization for color deficient viewers.
Files
Paper | [pdf] |
Poster | [pdf] |
Video | [mov] |
BibTeX:
@inproceedings{Lau11,
  author    = {C. Lau and W. Heidrich and R. Mantiuk},
  title     = {Cluster-Based Color Space Optimizations},
  booktitle = {Proc. IEEE International Conference on Computer Vision},
  pages     = {1172--1179},
  year      = {2011}
}
Supplementary Material
Here are the results of our method for different applications that map an image from a source space to a smaller target space. Our results improve upon the standard projection to the target space.
Click on any thumbnail to see the high quality, full resolution version and to compare all images in the row. On the full resolution page, roll the mouse over the image labels to switch back and forth between images easily.
Color to Gray
Gamut Mapping
Image Optimization for Color Deficient Viewers
Multispectral, Multiprimary, Multichannel Image Fusion
Image Sources and Permission to Use Images
Color to Gray
The first image is the input RGB color image. The second image is a standard grayscale mapping; we show results for two such mappings, CIE L* and luma. The third image is our output.
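The two standard mappings used as baselines can be sketched as follows. This is a minimal NumPy sketch, not the code used for the paper; it uses the Rec. 709 luma coefficients and the standard CIE L* formula:

```python
import numpy as np

def luma(rgb):
    """Rec. 709 luma from gamma-encoded R'G'B' values in [0, 1]."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def cie_lstar(rgb):
    """CIE L* lightness from linear-light sRGB values in [0, 1]."""
    # Relative luminance Y from linear RGB (Rec. 709 primaries, D65).
    y = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # CIE lightness: L* = 116 (Y/Yn)^(1/3) - 16 above the threshold,
    # and the linear segment L* = (24389/27) Y below it.
    eps, kappa = 216 / 24389, 24389 / 27
    return np.where(y > eps, 116 * np.cbrt(y) - 16, kappa * y)
```

Both mappings collapse all chromatic variation at equal lightness, which is exactly the information loss our optimization targets.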
Grayscale Projection: CIE L*
img=ImpressionSunriseColor2, nc=15, w=0.8, k=0.6
Original image from [Gooch et al. 2005].
Grayscale Projection: luma
img=Candy, nc=6, w=0.8, k=0.2
Original image courtesy of [Rasche et al. 2005].
img=Map, nc=9, w=0.8, k=0.1
Original image courtesy of Yahoo! Maps/NAVTEQ/DigitalGlobe.
Gamut Mapping
We apply our method to two different target gamuts. The first is a toy gamut with less saturated chromaticities than sRGB, shown in the xy chromaticity diagram below. We map sRGB images (first image) to this toy gamut. The second image is the result of standard HPMINDE clipping, and the third image is our output. The fourth and fifth images are gamut alarm images that show out-of-gamut pixels in green for the HPMINDE-mapped and output images, respectively.
The second target gamut is sRGB, to which we map images in the HP DreamColor gamut. Since we cannot show the input images on a conventional display, we show only the HPMINDE mapping to sRGB (first image) and our output (second image), followed by gamut alarm images for each (third and fourth images).
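A gamut alarm image of the kind shown here can be sketched as follows. This is an illustration, not the paper's implementation: it assumes the image has already been expressed in the target gamut's linear RGB, so that any channel value outside [0, 1] indicates an out-of-gamut color:

```python
import numpy as np

def gamut_alarm(img, eps=1e-4):
    """Mark out-of-gamut pixels in green.

    img: (H, W, 3) float array in the *target* gamut's linear RGB.
    A pixel is out of gamut if any channel falls outside [0, 1]
    (up to a small tolerance eps).
    """
    out = np.any((img < -eps) | (img > 1.0 + eps), axis=-1)
    alarm = np.clip(img, 0.0, 1.0)      # displayable copy of the image
    alarm[out] = [0.0, 1.0, 0.0]        # flag offending pixels in green
    return alarm
```

Comparing the alarm images for the clipped result and for our output shows which method leaves colors outside the target gamut.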
Source Gamut: sRGB
Target Gamut: toy gamut with less saturated chromaticities than sRGB
img=Birds, nc=20, w=0.8, a=0.5, b=4.0, c=10.0
Original image courtesy of Kodak.
img=Hats, nc=20, w=0.8, a=0.5, b=4.0, c=10.0
Original image courtesy of Kodak.
img=Door, nc=12, w=0.8, a=0.5, b=4.0, c=10.0
Original image courtesy of Kodak.
img=Ski3, nc=35, w=0.8, a=0.5, b=4.0, c=10.0
Original image courtesy of Fujifilm Electronic Imaging Ltd. (UK).
Source Gamut: HP DreamColor LP2480zx gamut (similar to P3)
Target Gamut: sRGB
img=Grass, nc=20, w=0.8, a=0.5, b=4.0, c=10.0
Original image courtesy of Paul Trepanier.
Image Optimization for Color Deficient Viewers
We apply our method to image optimization for color deficient viewers. The first image is the input tristimulus color image. The second image is the simulated image as seen by a color deficient viewer, simulated using [Brettel et al. 1997]. The third image is our output image, containing only colors within the 2D space of colors distinguishable by the color deficient viewer.
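For intuition, a protanope's view can be approximated by a purely linear projection in LMS cone space, in the spirit of the simplified simulation of Viénot et al. [1999]. This is not the piecewise projection of [Brettel et al. 1997] used for our results, and the matrix values below are approximations commonly quoted for that simplified model:

```python
import numpy as np

# Linear RGB -> LMS cone responses (Hunt-Pointer-Estevez based matrix,
# as used in the simplified simulation; values are approximate).
RGB2LMS = np.array([[17.8824,    43.5161,  4.11935],
                    [ 3.45565,   27.1554,  3.86714],
                    [ 0.0299566,  0.184309, 1.46709]])

# A protanope lacks L cones: the missing L response is replaced by a
# combination of M and S chosen so neutrals and pure blues are preserved.
PROTAN = np.array([[0.0, 2.02344, -2.52581],
                   [0.0, 1.0,      0.0],
                   [0.0, 0.0,      1.0]])

def simulate_protanope(rgb_linear):
    """Approximate a protanope's percept of a linear-RGB color."""
    m = np.linalg.inv(RGB2LMS) @ PROTAN @ RGB2LMS
    return rgb_linear @ m.T
```

Because the projection plane contains the neutral axis, gray values pass through essentially unchanged, while reds and greens collapse toward one another; our optimization then redistributes the collapsed colors within the remaining 2D space.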
Color Deficient Viewer: Protanope
img=Ishihara2, nc=5, w=0.8, c=10.0, k=0.4
Original image from Wikipedia.
img=Impatien, nc=5, w=0.8, c=10.0, k=0.3
Original image courtesy of [Rasche et al. 2005].
Color Deficient Viewer: Tritanope
img=JellyBeans, nc=10, w=0.8, c=10.0, k=0.3
Original image courtesy of [Rasche et al. 2005].
Multispectral, Multiprimary, Multichannel Image Fusion
Input: RGB + NIR
The first image is a visible RGB image, which is also the standard mapping. The second image is the near infrared image. We combine the RGB and NIR channels to get our output (third image) in RGB.
img=Alaska, nc=20, w=0.8, c=10.0, k=0.6
Original image courtesy of [Zhang et al. 2008].
img=Password, nc=5, w=0.3, c=10.0, k=1.0
Input: RGB + depth
The first image is an RGB image, which is also the standard mapping. The second image is a depth map. We combine the RGB image and the depth map to get our output (third image).
img=TreeStumpLarge, nc=15, w=0.8, c=10.0, k=0.6
Original image and depth map pair courtesy of Justin Manteuffel.
Input: Multiprimary
The first image is an RGB image of a fake lemon and a real orange; this is also the standard mapping. The second image is the same scene captured with the same camera but with a blue color filter in front of the camera. The filter effectively shifts the RGB primaries of the camera sensor. These six primaries, three from the normal image and three from the filtered image, are combined to produce our output (third image).
img=OrangesSep3, nc=5, w=0.8, c=10.0, k=0.6
Input: Multispectral
We convert our input spectral data, n bands spanning [380 nm, 760 nm], to RGB using the standard procedure of multiplying the spectral responses by the color matching functions and integrating. The first image is this standard mapping to RGB. The second image is our output, which maps all n spectral bands to RGB and enhances the contrast between metamers.
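The standard mapping can be sketched as a discrete integration of each pixel's spectrum against the color matching functions. This is a sketch only: `cmf` stands in for tabulated CIE 1931 2-degree matching functions, and a real pipeline would follow with the usual XYZ-to-sRGB matrix:

```python
import numpy as np

def spectral_to_xyz(spectra, cmf, wavelengths):
    """Map per-pixel spectra to CIE XYZ by discrete integration.

    spectra:     (H, W, n) radiance samples at the given wavelengths
    cmf:         (n, 3) samples of the xbar, ybar, zbar matching functions
    wavelengths: (n,) sample positions in nm, e.g. 380..760
    """
    dlam = np.gradient(wavelengths)  # per-sample bin width for the sum
    # XYZ_c = sum_n spectra[..., n] * cmf[n, c] * dlam[n]
    xyz = np.einsum('hwn,nc,n->hwc', spectra, cmf, dlam)
    # Normalize so an equal-energy (flat) spectrum has luminance Y = 1.
    return xyz / (cmf[:, 1] * dlam).sum()
```

Two spectra that integrate to the same XYZ triple are metamers: indistinguishable after this projection, which is precisely the contrast our n-band optimization recovers.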
img=Lettuce3, nc=9, w=0.8, c=10.0, k=0.4
img=Metacow10nm, nc=55, w=0.8, c=10.0, k=0.4
Original image courtesy of RIT Munsell Color Science Laboratory.
img=Apples, nc=8, w=0.8, c=10.0, k=0.3
Original image courtesy of the Columbia Multispectral Image Database.