
Title: Saliency-aware food image segmentation for personal dietary assessment using a wearable computer

Authors:
Chen, Hsin - Washington University
Jia, Wenyan - University of Pittsburgh
Sun, Xin - University of Pittsburgh
Li, Zhaoxin - Harbin Institute of Technology (HIT)
Li, Yuecheng - University of Pittsburgh
Fernstrom, John - University of Pittsburgh
Burke, Lora - University of Pittsburgh
Baranowski, Thomas - Children's Nutrition Research Center (CNRC)
Sun, Mingui - University of Pittsburgh

Submitted to: Measurement Science and Technology
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 12/17/2014
Publication Date: 2/1/2015
Citation: Chen, H.C., Jia, W., Sun, X., Li, Z., Li, Y., Fernstrom, J.D., Burke, L.E., Baranowski, T., Sun, M. 2015. Saliency-aware food image segmentation for personal dietary assessment using a wearable computer. Measurement Science and Technology. 26(2):025702.

Interpretive Summary: Dietary assessment is important for studying what and how much people eat, but it involves substantial error because it relies on self-report. Taking pictures of foods throughout the day can reduce much of that error, and automating the analysis of those images would make the method easier to use. This manuscript describes a new method for segmenting, or isolating, the food in an image from the background. The new method performed better than other currently available methods. Automation will next need to address accurately identifying the foods and estimating food quantities before and after a meal or snack. When fully automated, these procedures should provide a highly accurate, minimal-effort method for assessing dietary intake.

Technical Abstract: Image-based dietary assessment has recently received much attention in the obesity research community. In this assessment, foods in digital pictures are identified and their portion sizes (volumes) are estimated. Although manual processing is currently the most widely used approach, image processing holds much promise since it may eventually lead to fully automatic dietary assessment. In this paper, we study the problem of segmenting food objects from images. This segmentation is difficult because of the variety of food types, shapes, and colors; the different decorative patterns on food containers; and occlusions among food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a salient map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the salient map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation than conventional segmentation methods.
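The pipeline sketched in the abstract (saliency estimation from a location prior plus visual attention features, followed by contour fitting) can be illustrated with a minimal code sketch. This is not the authors' implementation: it approximates the visual-attention term with simple color-contrast saliency, the food location prior with a fixed Gaussian (assuming the wearable camera tends to frame the plate near the lower center of the image), and the paper's affine/elastic multi-resolution contour optimization with an off-the-shelf snake from scikit-image. The function names, prior parameters, and input file `meal.jpg` are all hypothetical.

```python
# Minimal sketch of a saliency-aware food segmentation pipeline.
# Assumptions (not from the paper): color-contrast saliency stands in
# for the visual attention features, a Gaussian location prior for the
# food location prior, and a scikit-image snake for the paper's
# affine/elastic multi-resolution active contour optimization.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color, io
from skimage.segmentation import active_contour


def saliency_map(rgb, prior_center=(0.5, 0.6), prior_sigma=0.35):
    """Combine color-contrast saliency with a Gaussian location prior."""
    lab = color.rgb2lab(rgb)
    blurred = gaussian_filter(lab, sigma=(3, 3, 0))  # smooth spatial dims only
    # Color-contrast saliency: distance of each pixel from the mean color.
    mean_color = lab.reshape(-1, 3).mean(axis=0)
    contrast = np.linalg.norm(blurred - mean_color, axis=2)

    # Gaussian location prior; center/sigma are hypothetical values.
    h, w = contrast.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = prior_center[1] * h, prior_center[0] * w
    prior = np.exp(-(((yy - cy) / (prior_sigma * h)) ** 2
                     + ((xx - cx) / (prior_sigma * w)) ** 2))

    sal = contrast * prior
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)


def segment_food(rgb):
    """Fit a closed contour to the most salient region of a food image."""
    sal = saliency_map(rgb)
    # Initialize the snake as a circle around the saliency peak.
    cy, cx = np.unravel_index(np.argmax(gaussian_filter(sal, 9)), sal.shape)
    r = 0.25 * min(sal.shape)
    t = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([cy + r * np.sin(t), cx + r * np.cos(t)])
    # The snake contracts onto strong edges of the saliency map.
    snake = active_contour(gaussian_filter(sal, 3), init,
                           alpha=0.015, beta=10.0, gamma=0.001)
    return sal, snake


if __name__ == "__main__":
    img = io.imread("meal.jpg") / 255.0  # hypothetical input image
    saliency, contour = segment_food(img)
```

Under these assumptions, `segment_food` returns the saliency map and a closed contour (an N x 2 array of row/column coordinates) approximating the food boundary; the paper's actual method instead optimizes a geometric contour primitive over affine and elastic transformation parameters at multiple resolutions, which is better suited to the varied food shapes and occlusions described in the abstract.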