CSAIL Research Abstracts - 2005

Material Perception

Edward H. Adelson, Lavanya Sharan & Yuanzhen Li


Figure 1: Perceiving properties of the materials in this image is vital to interpreting it.


We find it easy to distinguish a plastic spoon from a stainless steel spoon or a wooden one. This ability is known as material perception. It is distinct from object recognition, the ability to distinguish a spoon from, say, a fork. While object recognition has been studied widely, material perception has received far less attention. We are trying to understand how humans perceive material properties and how machines can be given the same ability.


The world we see around us is composed of objects as well as materials. We can discern subtle changes in color, texture, gloss, and translucency, and we employ this sensitivity daily, e.g., in judging whether food is cooked (nicely browned onions) or whether a product on eBay is worth bidding on (shiny paint on a bicycle). If we can understand how humans assess the visual qualities of materials and surfaces, we can build machines that do the same. An immediate application would be in computer graphics, where realistic depictions of hair, skin, clothes, and so on are desired. Such knowledge would also benefit product design: aesthetically appealing products, such as especially glossy lip gloss or natural-looking leather, could be manufactured. In terms of practical tasks, a domestic robot with material recognition capability could tell the difference between spilled flour and spilled yoghurt and use the appropriate cleaning tool for each.


Images of two dissimilar objects made of the same material differ greatly on a pixel-by-pixel basis (making techniques like template matching inapplicable), yet both convey the impression of the material. The problem of material recognition is related to texture analysis, but with an important difference: textures are often assumed to be generated by a stationary stochastic process, which is not true for samples of a material (e.g., a chrome sphere). Nevertheless, the field of texture analysis offers some useful ideas. Recent work has shown that pixel and wavelet statistics are good texture descriptors, and we are adapting these descriptors to problems in material perception.
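To illustrate the general idea of wavelet statistics as texture descriptors (this is a generic sketch, not the specific decomposition used in this work), a single level of a Haar transform splits an image into subbands whose simple statistics summarize texture. The image data below is a made-up 4x4 example.

```python
# Illustrative sketch (not the authors' code): one level of a 2-D Haar
# wavelet decomposition, followed by simple subband statistics.

def haar_level(img):
    """One level of a 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0   # local average
            LH[i // 2][j // 2] = (a + b - c - d) / 4.0   # horizontal structure
            HL[i // 2][j // 2] = (a - b + c - d) / 4.0   # vertical structure
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH

def subband_stats(band):
    """Mean and variance of a subband -- a minimal texture descriptor."""
    vals = [v for row in band for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var

img = [[10, 10, 50, 50],
       [10, 10, 50, 50],
       [90, 90, 30, 30],
       [90, 90, 30, 30]]
LL, LH, HL, HH = haar_level(img)
# Each 2x2 block of this toy image is constant, so all detail subbands
# are zero and the energy sits entirely in LL.
print(subband_stats(LL))  # -> (45.0, 875.0)
print(subband_stats(HH))  # -> (0.0, 0.0)
```

A real texture descriptor would apply several levels of such a decomposition and collect histogram statistics from every subband.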

Dror et al. [1] started with a simplified situation, examining spheres of homogeneous reflectance under unknown illumination. They used image-based textural statistics in a machine learning framework to classify the spheres as shiny, matte, white, gray, and so on. Fleming et al. [2] made progress on the human vision side of the problem by demonstrating that humans can estimate the surface reflectance of objects in the absence of context, as long as the illumination conditions are representative of those found in the real world.
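A minimal sketch of such a classification framework follows. The feature values are invented for illustration, and a nearest-centroid rule stands in for the richer statistics and classifier actually used in [1].

```python
# Sketch: classify surfaces from feature vectors of image statistics
# using a nearest-centroid rule. Features and labels are invented.
import math

# (features, label): e.g. [histogram skewness, high-frequency energy]
training = [
    ([0.9, 0.8], "shiny"),
    ([1.1, 0.7], "shiny"),
    ([0.1, 0.2], "matte"),
    ([0.2, 0.1], "matte"),
]

def centroids(samples):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for k, v in enumerate(feats):
            acc[k] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(feats, cents):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(cents, key=lambda lab: dist(feats, cents[lab]))

cents = centroids(training)
print(classify([1.0, 0.75], cents))  # -> shiny
print(classify([0.0, 0.0], cents))   # -> matte
```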


We photographed many real-world materials under various illuminations. We find that pixel statistics, such as the moments and percentiles of the intensity histogram, are correlated with surface reflectance. We also filtered the images with center-surround and oriented filters in a multi-scale decomposition and observed that the moments and percentiles of the filtered-image histograms are diagnostic of reflectance as well. These simple image statistics perform as well as human subjects at a reflectance classification task, suggesting that they capture perceptually relevant information.
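The kinds of statistics described above can be sketched in a few lines. The toy 1-D intensity lists, the difference-of-boxes filter, and all function names here are our own illustrations, standing in for real images and the actual filter bank; the intuition shown is that specular highlights give a glossy surface a positively skewed histogram and extreme upper percentiles.

```python
# Sketch of histogram statistics for reflectance: moments, percentiles,
# and a simple 1-D center-surround (difference-of-boxes) filter.
# Toy data; not the actual images or filters used in the research.

def moments(vals):
    """Mean, variance, skewness, kurtosis of a sample."""
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    sd = var ** 0.5 or 1.0
    skew = sum(((v - mean) / sd) ** 3 for v in vals) / n
    kurt = sum(((v - mean) / sd) ** 4 for v in vals) / n
    return mean, var, skew, kurt

def percentile(vals, p):
    """p-th percentile by nearest rank."""
    s = sorted(vals)
    k = max(0, min(len(s) - 1, round(p / 100.0 * (len(s) - 1))))
    return s[k]

def center_surround(vals, r=2):
    """1-D difference-of-boxes: each value minus its local mean (radius r)."""
    out = []
    for i in range(len(vals)):
        lo, hi = max(0, i - r), min(len(vals), i + r + 1)
        out.append(vals[i] - sum(vals[lo:hi]) / (hi - lo))
    return out

# A glossy surface tends to contain a few very bright specular pixels,
# producing a positively skewed intensity histogram; a matte surface does not.
glossy = [40, 42, 41, 39, 40, 41, 250, 40, 42, 39]
matte  = [40, 42, 41, 39, 40, 41, 43, 40, 42, 39]

print("glossy skew:", moments(glossy)[2])                 # strongly positive
print("matte  skew:", moments(matte)[2])                  # near zero
print("glossy 95th percentile:", percentile(glossy, 95))  # -> 250
print("matte  95th percentile:", percentile(matte, 95))   # -> 43
print("skew after filtering:", moments(center_surround(glossy))[2])
```

In the actual work, such statistics were gathered from the outputs of center-surround and oriented filters at multiple scales, not from raw 1-D signals.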


We want to extend beyond reflectance classification to the more general problem of reflectance estimation. We would like to find image statistics that are useful for reflectance estimation and compare their performance to that of human subjects. We will also explore the relation of these statistics to gloss perception.

Research Support

This work is supported by NTT, as part of a collaboration with Dr. Shin'ya Nishida and his group. Support has also come from NIH and Unilever Research.

Figure 2: (a) Examples of material images in our database. (b) ROC curves illustrating the utility of histogram statistics of filtered images for reflectance classification. (c) The statistics perform as well as human subjects at a reflectance classification task.


[1] Ron O. Dror, Edward H. Adelson and Alan S. Willsky. Recognition of surface reflectance properties from a single image under unknown real-world illumination. In Proceedings of the Workshop on Identifying Objects Across Variation in Lighting at CVPR 2001, Hawaii, December 2001.

[2] Roland W. Fleming, Ron O. Dror and Edward H. Adelson. Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3:347-368, 2003.

[3] Edward H. Adelson, Yuanzhen Li and Lavanya Sharan. Image statistics for material perception. Journal of Vision, 4(8):123, May 2004. (http://journalofvision.org/4/8/123/)

[4] Lavanya Sharan, Yuanzhen Li and Edward H. Adelson. Image Statistics and Reflectance Estimation. Vision Sciences Society Annual Meeting Abstracts, May 2005.


Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu
(Note: On July 1, 2003, the AI Lab and LCS merged to form CSAIL.)