<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2019.00307</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Food-Pics_Extended&#x2014;An Image Database for Experimental Research on Eating and Appetite: Additional Images, Normative Ratings and an Updated Review</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Blechert</surname> <given-names>Jens</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/80354/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Lender</surname> <given-names>Anja</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/639387/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Polk</surname> <given-names>Sarah</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Busch</surname> <given-names>Niko A.</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/10143/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ohla</surname> <given-names>Kathrin</given-names></name>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/76632/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Psychology, University of Salzburg</institution>, <addr-line>Salzburg</addr-line>, <country>Austria</country></aff>
<aff id="aff2"><sup>2</sup><institution>Centre for Cognitive Neuroscience, University of Salzburg</institution>, <addr-line>Salzburg</addr-line>, <country>Austria</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Psychology and Education, Free University of Berlin</institution>, <addr-line>Berlin</addr-line>, <country>Germany</country></aff>
<aff id="aff4"><sup>4</sup><institution>Institute of Psychology, University of M&#x00FC;nster</institution>, <addr-line>M&#x00FC;nster</addr-line>, <country>Germany</country></aff>
<aff id="aff5"><sup>5</sup><institution>Research Center J&#x00FC;lich, Institute of Neuroscience and Medicine (INM-3), Cognitive Neuroscience</institution>, <addr-line>J&#x00FC;lich</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Astrid M&#x00FC;ller, Hannover Medical School, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Lisa Sch&#x00E4;fer, Integriertes Forschungs- und Behandlungszentrum (IFB) AdipositasErkrankungen, Germany; Silke M. Mueller, University of Duisburg-Essen, Germany</p></fn>
<corresp id="c001">&#x002A;Correspondence: Jens Blechert, <email>jens.blechert@sbg.ac.at</email></corresp>
<fn fn-type="other" id="fn002"><p>This article was submitted to Eating Behavior, a section of the journal Frontiers in Psychology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>03</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="collection">
<year>2019</year>
</pub-date>
<volume>10</volume>
<elocation-id>307</elocation-id>
<history>
<date date-type="received">
<day>07</day>
<month>11</month>
<year>2018</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>01</month>
<year>2019</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2019 Blechert, Lender, Polk, Busch and Ohla.</copyright-statement>
<copyright-year>2019</copyright-year>
<copyright-holder>Blechert, Lender, Polk, Busch and Ohla</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Our current environment is characterized by the omnipresence of food cues. The taste and smell of real foods&#x2014;but also graphical depictions of appetizing foods&#x2014;can guide our eating behavior, for example, by eliciting food craving and anticipatory cephalic phase responses. To facilitate research into this so-called cue reactivity, several groups have compiled standardized food image sets. Yet, selecting the best subset of images for a specific research question can be difficult as images and image sets vary along several dimensions. In the present report, we review the strengths and weaknesses of popular food image sets to guide researchers during stimulus selection. Furthermore, we present a recent extension of our previously published database <italic>food-pics</italic>, which comprises an additional 328 food images from different countries to increase cross-cultural applicability. This <italic>food-pics_extended</italic> stimulus database thus encompasses and replaces <italic>food-pics</italic>. Normative data from a predominantly German-speaking sample are again presented, along with updated calculations of image characteristics.</p>
</abstract>
<kwd-group>
<kwd>experimental research</kwd>
<kwd>image database</kwd>
<kwd>eating behavior</kwd>
<kwd>food stimuli</kwd>
<kwd>cue reactivity</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="46"/>
<page-count count="9"/>
<word-count count="0"/>
</counts>
</article-meta>
</front>
<body>
<sec><title>Introduction</title>
<p>Our current environment is characterized by frequent cues of highly palatable foods. Many researchers partially attribute rising obesity rates and problems in eating-related self-regulation to this factor (e.g., <xref ref-type="bibr" rid="B13">Davis et al., 2011</xref>). Today&#x2019;s foods&#x2014;processed as well as unprocessed&#x2014;have reached a level of refinement that appeals strongly to our senses: visual, gustatory, olfactory, and oro-sensory food properties interact in creating hedonic pleasure. Pervasive advertisement penetrates real and virtual lives and constantly taxes self-regulation.</p>
<p>Research uses food images as experimental stimuli in a range of different paradigms. The food-viewing paradigm attempts to simulate environmental conditions in a controlled laboratory environment. Passive picture viewing is seen as a preparatory or anticipatory stage in food intake: natural eating settings often start with exposure to a food&#x2019;s visual appearance along with its smell. Such preparatory stages are of interest to research as anticipatory cephalic phase responses might underlie conditioned food cravings (<xref ref-type="bibr" rid="B2">Berthoud and Morrison, 2008</xref>; <xref ref-type="bibr" rid="B12">Dagher, 2012</xref>). Passive picture viewing is not the only way in which food images are used: Pavlovian or operant conditioning setups pair foods with neutral images and thereby tap into learning (e.g., <xref ref-type="bibr" rid="B4">Blechert et al., 2016</xref>; <xref ref-type="bibr" rid="B46">Wardle et al., 2018</xref>), memory setups tap into retention (<xref ref-type="bibr" rid="B28">Meule et al., 2012</xref>), and lateral or non-foveal presentations investigate spatial attention (<xref ref-type="bibr" rid="B7">Castellanos et al., 2009</xref>). Research has repeatedly demonstrated that food images capture attention (<xref ref-type="bibr" rid="B30">Nummenmaa et al., 2011</xref>; <xref ref-type="bibr" rid="B11">Cunningham and Egeth, 2018</xref>), are prioritized during neural processing (<xref ref-type="bibr" rid="B43">Toepel et al., 2009</xref>; <xref ref-type="bibr" rid="B27">Meule et al., 2013</xref>), and consistently activate brain areas associated with reward, salience, and cognitive control (<xref ref-type="bibr" rid="B12">Dagher, 2012</xref>; <xref ref-type="bibr" rid="B42">Tang et al., 2012</xref>; <xref ref-type="bibr" rid="B40">Spence et al., 2016</xref>). 
These reward-related neural responses can be enhanced both by the presentation of energy-dense food (<xref ref-type="bibr" rid="B22">Killgore et al., 2003</xref>; <xref ref-type="bibr" rid="B37">Schur et al., 2009</xref>) and by manipulations of hunger (<xref ref-type="bibr" rid="B44">Uher et al., 2006</xref>; <xref ref-type="bibr" rid="B18">Fuhrer et al., 2008</xref>; <xref ref-type="bibr" rid="B39">Siep et al., 2009</xref>) or cravings (<xref ref-type="bibr" rid="B32">Pelchat et al., 2004</xref>). Furthermore, individuals with obesity (compared to healthy-weight controls) show increased activation in reward-related brain regions, particularly in response to energy-dense cues (<xref ref-type="bibr" rid="B35">Pursey et al., 2014</xref>). In their meta-analysis, <xref ref-type="bibr" rid="B5">Boswell and Kober (2016)</xref> showed that food cue reactivity and craving predicted eating and weight gain, and that the effect sizes of this prediction were similar for visual food cues and real food exposure (and stronger than those for olfactory cues). This is an impressive demonstration of the power of visual food cues on appetitive responding and health. More recently, research has used food images to change associated evaluations and response tendencies, such as in motor response inhibition trainings (<xref ref-type="bibr" rid="B41">Stice et al., 2016</xref>; <xref ref-type="bibr" rid="B21">Jones et al., 2018</xref>).</p>
<p>The predominance of picture viewing in experimental research has brought about the need for adequate stimulus material. While earlier research had used food images from cookbooks, unspecified internet searches, or other databases such as the International Affective Picture System (IAPS), it soon became clear that these images offered both insufficient image quality (e.g., poor resolution or contrast) and limited variance (e.g., in food categories, portion sizes, or viewing angles). As a result, considerable effort has been invested in the development of standardized, high-quality, and open-source materials. Several sets of pictures have been published recently, providing researchers with more options. Based on the IAPS, <xref ref-type="bibr" rid="B29">Miccoli et al. (2014)</xref> published the open library of affective foods (OLAF) with a particular focus on naturalistic settings. The macronutrient picture system (MAPS) is a relatively small set but provides detailed macronutrient composition for each food image (<xref ref-type="bibr" rid="B23">King et al., 2018</xref>). Larger image sets were presented by <xref ref-type="bibr" rid="B17">Foroni et al. (2013)</xref> [FoodCast Research Image Database (FRIDa)] and <xref ref-type="bibr" rid="B8">Charbonnier et al. (2016)</xref> [Food4Health (F4H)], along with ratings from larger samples on various subjective properties such as energy density. Finally, <italic>food-pics</italic> (<xref ref-type="bibr" rid="B3">Blechert et al., 2014</xref>) was introduced by our group and includes a large number of images along with normative ratings and computational measures of image characteristics. However, requests from <italic>food-pics</italic> users to include further items motivated the search for additional images. 
For example, several food items popular in France, the United Kingdom, Austria, Germany, the Middle East, and Asia, a wider range of baked goods (e.g., different kinds of dark bread), a wider range of portion sizes for fruits and vegetables (including single foods and sliced fruits), as well as drinks were added. Also, improvements to the indices of image characteristics were made.</p>
<p>Existing image sets vary on several dimensions that might be of relevance to researchers looking for experimental stimuli. Hence, in addition to describing the extension of the <italic>food-pics</italic> database (aim <italic>i</italic>), the present report reviews other food image datasets popular in experimental research (aim <italic>ii</italic>), and assesses their strengths and weaknesses in order to guide researchers during the selection of the optimal database (aim <italic>iii</italic>). Toward this end, we pay particular attention to properties such as database size and intercultural applicability, the existence of normative data and computational measures of image characteristics, and the range and coverage of various food types and settings (single foods, solid foods vs. drinks, main meals vs. snacks, naturalistic vs. highly controlled settings). We review image sets that were freely available and established for the purpose of experimental picture-viewing paradigms in humans. Image sets established for the development and training of automatic recognition algorithms [e.g., Pittsburgh Fast-Food Image Dataset, <xref ref-type="bibr" rid="B9">Chen et al., 2009</xref>; University of Catania (UNICT) Food Dataset 889, <xref ref-type="bibr" rid="B15">Farinella et al., 2015</xref>; the ChineseFoodNet, <xref ref-type="bibr" rid="B10">Chen et al., 2017</xref>] are not reviewed, as they serve a different purpose. More generally, the present report aims to facilitate comparability and replicability of food-related research on the level of experimental stimuli.</p>
</sec>
<sec id="s1" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec><title>Food-Pics_Extended</title>
<sec><title>Stimuli</title>
<p>The extended <italic>food-pics</italic> database added 328 food images to the original 568 images (for details see <xref ref-type="bibr" rid="B3">Blechert et al., 2014</xref>). Images were provided by several researchers using <italic>food-pics</italic><sup><xref ref-type="fn" rid="fn01">1</xref></sup>. Categories of foods include sweet (e.g., banana split), savory (e.g., ravioli), processed (e.g., fried chicken), and whole (e.g., orange) foods as well as beverages (e.g., milk). Several images of individually presented foods were added because, compared to foods that consist of several components, they allow a relatively precise estimation of nutritional composition and calorie content. As in the original dataset, images comprised both single items (e.g., 1 blackberry) and numerous items (e.g., 11 blackberries) as well as meals (e.g., salmon and spinach). The same non-food items as previously described by <xref ref-type="bibr" rid="B3">Blechert et al. (2014)</xref> were included in the extended database to obtain comparable normative ratings. For standardization, all images were edited onto a white background and homogenized with respect to viewing distance (&#x2248;80 cm), angle, and simple figure-ground composition. Plates and bowls were shown when necessary (e.g., ice cream sundae), though most foods could be presented without them (e.g., fruits).</p>
</sec>
<sec><title>Image Characteristics</title>
<p>Physical image properties were computed using customized MATLAB scripts (The MathWorks, Inc., Natick, United States), which can be downloaded from the <italic>food-pics</italic> website<sup><xref ref-type="fn" rid="fn02">2</xref></sup>. A full description of the image characteristic analysis is provided in the original report (<xref ref-type="bibr" rid="B3">Blechert et al., 2014</xref>). In brief, <italic>size</italic> was quantified as the proportion of non-white pixels. <italic>Color</italic> properties were quantified as the contribution of the red, green, and blue color channels to the non-white pixels. Within-object <italic>contrast</italic> was quantified as the standard deviation of luminance values across non-white pixels. To describe how much an object stands out from the white background, we quantified its <italic>intensity</italic> as the mean of the pixel-wise luminance difference to the white background. Note that this property was previously referred to as &#x201C;brightness,&#x201D; but was renamed to intensity (i.e., the inverse of brightness). As intensity depends on both the luminance and the number of non-white pixels (i.e., object size), we also provide a <italic>normalized intensity</italic> measure that is size-independent. To describe the spatial variations of luminance, we calculated the spatial frequency content with a bi-dimensional fast Fourier transform and a subsequent radial average of the two-dimensional power spectrum. Thus, the <italic>median power</italic> quantifies variations in pixel luminance at different spatial scales, independent of their location in the image. In addition, the <italic>complexity</italic> of an image was defined by the number or proportion (<italic>normalized complexity</italic>) of pixels representing contour outlines, as determined by a Canny edge detection algorithm (<xref ref-type="bibr" rid="B6">Canny, 1986</xref>) with adjusted parameters.</p>
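The measures above can be approximated in a few lines of code. The following is a minimal numpy sketch, not the published MATLAB scripts: the plain RGB-mean luminance, the white threshold of 250, the gradient-magnitude edge detector (a stand-in for the Canny algorithm named in the text), and the median over the full 2-D power spectrum (in place of the radial average) are all illustrative assumptions.

```python
import numpy as np

def image_characteristics(rgb, white_thresh=250):
    """Approximate food-pics-style physical image measures for an RGB uint8 array.

    Illustrative sketch only; the published MATLAB scripts differ in detail
    (e.g., they use a Canny edge detector for complexity, approximated here
    by a simple gradient-magnitude threshold).
    """
    rgb = rgb.astype(float)
    # Luminance as the plain channel mean (an assumption; weighting may differ).
    lum = rgb.mean(axis=2)
    # Object mask: pixels where any channel falls below the white threshold.
    mask = (rgb < white_thresh).any(axis=2)
    n_obj = mask.sum()
    size = n_obj / mask.size                        # proportion of non-white pixels
    # Relative contribution of R, G, B channels among object pixels.
    channel_sums = rgb[mask].sum(axis=0)
    colors = channel_sums / channel_sums.sum()
    # Within-object contrast: SD of luminance over object pixels.
    contrast = lum[mask].std()
    # Intensity: mean pixel-wise luminance difference to the white background;
    # the normalized variant averages over object pixels only (size-independent).
    intensity = (255.0 - lum).sum() / lum.size
    norm_intensity = (255.0 - lum[mask]).mean() if n_obj else 0.0
    # Spatial frequency content: 2-D FFT power spectrum (median in place of
    # the radial average used in the original analysis).
    power = np.abs(np.fft.fftshift(np.fft.fft2(lum))) ** 2
    median_power = np.median(power)
    # Complexity: count / proportion of edge pixels (gradient stand-in for Canny).
    gy, gx = np.gradient(lum)
    edges = np.hypot(gx, gy) > 30.0
    complexity = int(edges.sum())
    norm_complexity = complexity / edges.size
    return dict(size=size, colors=colors, contrast=contrast,
                intensity=intensity, norm_intensity=norm_intensity,
                median_power=median_power, complexity=complexity,
                norm_complexity=norm_complexity)
```

On a synthetic image of a uniform gray square on white, for instance, `size` equals the square's area fraction, `contrast` is zero, and `norm_intensity` equals the luminance distance of the square from white.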
</sec>
<sec><title>Macronutrients</title>
<p>Caloric information was estimated by students of nutritional science using the database <ext-link ext-link-type="uri" xlink:href="https://fddb.info">https://fddb.info</ext-link>. Each food was assigned a kcal/100 g value and a total kcal value for the depicted portion. Estimates were pooled across two to five raters.</p>
</sec>
</sec>
<sec><title>Normative Ratings</title>
<sec><title>Participants</title>
<p>Participants (<italic>n</italic> = 245) completed an anonymous online survey to provide normative data for the additional <italic>food-pics</italic> images (21.2% male, mean age = 31.4 years, 87.3% German; see <xref ref-type="table" rid="T1">Table 1</xref> for detailed participant demographics). Participants were recruited through different university mailing lists; thus, the sample comprised students and employees alike. Participants who rated fewer than three food images were excluded from the analyses. The survey was available between December 2016 and February 2017. Participants were offered entry into a raffle of five 30-Euro prizes. The ethics board of the University of Salzburg approved this study.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Subject demographics.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><italic>N</italic> = 245</th>
<th valign="top" align="center"><italic>n</italic> (%)</th>
<th valign="top" align="center">Mean (<italic>SD</italic>)</th>
<th valign="top" align="center">Median (Range)</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Age (years)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">31.4 (12.5)</td>
<td valign="top" align="center">27.0 (18&#x2013;74)</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Gender</bold></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td></tr>
<tr>
<td valign="top" align="left">Male</td>
<td valign="top" align="center">52 (21.2%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Female</td>
<td valign="top" align="center">193 (78.8%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Nationality</bold></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
</tr>
<tr>
<td valign="top" align="left">Germany</td>
<td valign="top" align="center">214 (87.3%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Austria</td>
<td valign="top" align="center">15 (6.10%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Switzerland</td>
<td valign="top" align="center">2 (0.82%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Other European country</td>
<td valign="top" align="center">7 (2.90%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Non-European country</td>
<td valign="top" align="center">7 (2.90%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Body Mass Index (kg/m<sup>2</sup>)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">23.1 (4.35)</td>
<td valign="top" align="center">22.2 (16.4&#x2013;44.0)</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Eating style</bold></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
</tr>
<tr>
<td valign="top" align="left">Omnivore</td>
<td valign="top" align="center">191 (78.0%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Vegetarian</td>
<td valign="top" align="center">45 (18.4%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Vegan</td>
<td valign="top" align="center">9 (3.67%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Current dieting behavior</bold></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
</tr>
<tr>
<td valign="top" align="left">Currently dieting</td>
<td valign="top" align="center">27 (11.0%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Not dieting</td>
<td valign="top" align="center">218 (89.0%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Employment</bold></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
</tr>
<tr>
<td valign="top" align="left">College/University<sup>&#x2217;</sup></td>
<td valign="top" align="center">141 (57.6%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Apprenticeship</td>
<td valign="top" align="center">43 (17.6%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Self-employed</td>
<td valign="top" align="center">1 (0.41%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
</tr>
<tr>
<td valign="top" align="left">Other</td>
<td valign="top" align="center">60 (24.5%)</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td></tr>
</tbody></table>
<table-wrap-foot>
<attrib><italic><sup>&#x2217;</sup>Studying psychology (85.0%), nutrition (2.92%), and other (11.4%).</italic></attrib>
</table-wrap-foot>
</table-wrap>
</sec>
<sec><title>Online Survey</title>
<p>Participants provided demographic information on their age, gender, height, weight, occupation, and nationality as well as on eating habits (omnivore/vegetarian/vegan, dieting or not dieting; see <xref ref-type="table" rid="T1">Table 1</xref>) before they rated the pictures. Each participant viewed and rated a random selection of 40 foods out of the 328 new food images, since rating all 328 images would have been too demanding for a single participant. Participants further rated five food images from the old <italic>food-pics</italic> database and a random selection of eight non-food images out of all 315 non-foods to check the comparability of the old and the new rating sample. Participants were given a detailed explanation of each scale and were shown an example rating. <italic>Familiarity</italic> (German: &#x201C;Bekanntheit&#x201D;) was defined as whether the participant recognized the object or not. <italic>Recognizability</italic> (German: &#x201C;Erkennbarkeit&#x201D;) was defined as whether the object was easy or difficult to identify. <italic>Complexity</italic> (German: &#x201C;Komplexit&#x00E4;t&#x201D;) was characterized by &#x201C;many components or details&#x201D; and &#x201C;many colors/edges/pieces.&#x201D; <italic>Valence</italic> (German: &#x201C;Valenz&#x201D;) was characterized by how negatively or positively the participant viewed the object; that is, whether they found it repulsive or attractive. <italic>Arousal</italic> (German: &#x201C;Erregung&#x201D;) was characterized by how much the object aroused an emotional reaction in the participant. <italic>Palatability</italic> (German: &#x201C;Schmackhaftigkeit&#x201D;) was characterized by how delicious the participant found the depicted food in general, regardless of whether they wanted to eat it at that moment. 
<italic>Desire to eat</italic> (German: &#x201C;Verlangen&#x201D;) was characterized by how much the participant would like to eat the depicted food if it were available at that moment. Each image was displayed individually and participants were asked to rate each aspect of the depicted food. Response options for <italic>familiarity</italic> and <italic>recognizability</italic> were dichotomous (yes/no); visual analog scales (VAS; solid horizontal bars approximately 8 cm long) with anchors at either extreme were used for ratings of <italic>complexity</italic> (&#x201C;very little&#x201D; to &#x201C;very high&#x201D;), <italic>valence</italic> (&#x201C;very negative&#x201D; to &#x201C;very positive&#x201D;), <italic>arousal</italic> (&#x201C;not at all&#x201D; to &#x201C;extremely&#x201D;), <italic>palatability</italic> (&#x201C;not at all&#x201D; to &#x201C;extremely&#x201D;), and <italic>desire to eat</italic> (&#x201C;not at all&#x201D; to &#x201C;extremely&#x201D;). Responses were provided via mouse click and ranged from 0 (leftmost extreme) to 100 (rightmost extreme); the value was not shown to participants.</p>
</sec>
</sec></sec>
<sec><title>Results</title>
<sec><title>Normative Ratings</title>
<p>Each food image was rated by 14 to 47 participants (<italic>M</italic> = 28.21, <italic>SD</italic> = 5.26).</p>
<sec><title>Interrater Reliability</title>
<p>Intraclass correlation coefficients (ICCs) were calculated using SPSS Statistics (Version 24; IBM Corp.) based on a mean-rating (<italic>k</italic> = 2), consistency, two-way random-effects model to compare normative ratings from the <italic>food-pics</italic> sample with those from the <italic>food-pics_extended</italic> sample. For food images, reliability was good for recognizability, familiarity, complexity, palatability, valence, and arousal (ICC = 0.870, 0.810, 0.801, 0.772, 0.834, and 0.820, respectively) and moderate for craving (ICC = 0.658). For non-food images, reliability was good for recognizability, familiarity, complexity, and arousal (ICC = 0.815, 0.850, 0.791, and 0.756, respectively) and moderate for valence (ICC = 0.671).</p>
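The ICC setting used above (mean rating, k = 2, consistency, two-way model) follows the standard ANOVA decomposition and can be sketched in numpy. The function name and the items-by-raters input layout below are illustrative choices, not part of the published analysis.

```python
import numpy as np

def icc_consistency_k(x):
    """ICC for the mean of k ratings, consistency, two-way model: ICC(C,k).

    x: (n_items, k_raters) array of ratings. Sketch of the SPSS setting
    described in the text, via the standard ANOVA mean squares:
    ICC(C,k) = (MS_rows - MS_error) / MS_rows.
    """
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-item means
    col_means = x.mean(axis=0)   # per-rater (per-sample) means
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-items SS
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-raters SS
    ss_total = ((x - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    # Consistency of the k-rater mean: rater offsets cancel out, so two
    # samples that differ only by a constant shift yield ICC = 1.
    return (ms_rows - ms_err) / ms_rows
```

For example, two rating columns that differ only by a constant offset give an ICC of exactly 1, since the consistency definition ignores rater-level mean differences.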
<p><italic>Food-pics_extended</italic> replaces <italic>food-pics</italic>, and images and metadata are available at <ext-link ext-link-type="uri" xlink:href="http://food-pics.sbg.ac.at">http://food-pics.sbg.ac.at</ext-link>. Users of both <italic>food-pics</italic> and <italic>food-pics_extended</italic> are asked to cite the present report describing <italic>food-pics_extended</italic>.</p>
</sec>
</sec>
<sec><title>Overview of Selected Food Image Databases</title>
<p>To guide researchers in selecting images according to their needs, we have compiled a table describing all of the above-mentioned datasets (see <xref ref-type="table" rid="T2">Table 2</xref>). Regarding set size, the following ordering emerged: <italic>food-pics_extended</italic>, <italic>food-pics</italic>, F4H, FRIDa, MAPS, OLAF, IAPS_foods. Normative ratings were available for all datasets, with the most ratings per image available for IAPS_foods, followed by F4H, <italic>food-pics</italic>, <italic>food-pics_extended</italic>, FRIDa/MAPS, and OLAF. Image characteristics were available for <italic>food-pics</italic>, <italic>food-pics_extended</italic>, and FRIDa. Energy density is available for all datasets except OLAF and IAPS_foods.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Overview of stimulus sets.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left">Database</th>
<th valign="top" align="left">Authors, Year</th>
<th valign="top" align="left"># Food images</th>
<th valign="top" align="left"># Non-food images</th>
<th valign="top" align="left">Sample: # participants, sample characterization<break/>Normative ratings: # ratings per image</th>
<th valign="top" align="left">Image characteristics</th>
<th valign="top" align="left">Food characteristics</th>
<th valign="top" align="left">Image types</th>
<th valign="top" align="left">Comment strengths/weaknesses</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><italic>Food-pics</italic></td>
<td valign="top" align="left"><xref ref-type="bibr" rid="B3">Blechert et al. (2014)</xref></td>
<td valign="top" align="left">568</td>
<td valign="top" align="left">315</td>
<td valign="top" align="left">German speaking adult sample: # Participants: 831 Age: 24.7 &#x00B1; 5.5 (range: 18&#x2013;65) 83.3 % female Predominantly from German-speaking countries (Austria, Germany, Switzerland) US-American adult sample: # Participants: 496 Age: 35.9 &#x00B1; 13.4 (range: 18&#x2013;77) 63.7% female Predominantly North American University of Hagen adult sample: # Participants: 638 Age: 32.8 &#x00B1; 10.1 (range: 17&#x2013;73) 82.8 % female Predominantly German Austrian underage sample: # Participants: 23 Age: 13.9 &#x00B1; 1.6 (range: 11&#x2013;18) 50.8 % female Predominantly Austrian<break/>&#x2022; Palatability<break/>&#x2022; Desire to eat<break/>&#x2022; Valence<break/>&#x2022; Arousal<break/>&#x2022; Recognizability <break/>&#x2022; Familiarity<break/># Ratings per image: &#x223C;49</td>
<td valign="top" align="left"><break/>&#x2022; Colors <break/>&#x2022; Brightness <break/>&#x2022; Size <break/>&#x2022; Contrast <break/>&#x2022; Spatial frequencies <break/>&#x2022; Complexity <break/>&#x2022; Norm. complexity</td>
<td valign="top" align="left"><break/>&#x2022; Energy density (calories), experimenter estimated <break/>&#x2022; Macronutrients</td>
<td valign="top" align="left">Food images: <break/>&#x2022; Fruits (76 images) <break/>&#x2022; Vegetables (118) <break/>&#x2022; Chocolate (65) <break/>&#x2022; Fish (13) <break/>&#x2022; Meat (63) <break/>&#x2022; Nuts (10) <break/>&#x2022; Drinks (9) Non-food images: <break/>&#x2022; Flowers/leaves (42) <break/>&#x2022; Animals (37) <break/>&#x2022; Tools (23) <break/>&#x2022; Non-kitchen household (89) <break/>&#x2022; Kitchen utensils (46) <break/>&#x2022; Office (20) <break/>&#x2022; Food packaging (33)</td>
<td valign="top" align="left"><break/>&#x2022; Wide range of foods <break/>&#x2022; Focus on Western foods <break/>&#x2022; Data from children and adults <break/>&#x2022; Detailed macronutrient data <break/>&#x2022; Open to add images</td>
</tr>
<tr>
<td valign="top" align="left"><italic>Addition to food-pics</italic> (Addition + <italic>food-pics</italic> = <italic>food-pics_extended</italic>)</td>
<td valign="top" align="left">This article</td>
<td valign="top" align="left">328 (896 total with f<italic>ood-pics)</italic></td>
<td valign="top" align="left">0</td>
<td valign="top" align="left"># Participants (adults): 245 Age: 31.4 &#x00B1; 12.5 (range: 18&#x2013;74) 78.8 % female Predominantly from German-speaking countries (Austria, Germany, Switzerland) <break/>&#x2022; Palatability <break/>&#x2022; Desire to eat <break/>&#x2022; Valence <break/>&#x2022; Arousal <break/>&#x2022; Recognizability <break/>&#x2022; Familiarity # ratings per image: &#x223C;28</td>
<td valign="top" align="left"><break/>&#x2022; Colors <break/>&#x2022; Intensity <break/>&#x2022; Norm. Intensity (formerly brightness) <break/>&#x2022; Size <break/>&#x2022; Contrast <break/>&#x2022; Spatial frequencies <break/>&#x2022; Complexity <break/>&#x2022; Norm. complexity</td>
<td valign="top" align="left"><break/>&#x2022; Energy density (calories), experimenter estimated</td>
<td valign="top" align="left">All food images: <break/>&#x2022; Fruit (64 images) <break/>&#x2022; Vegetables (102) <break/>&#x2022; Chocolate (30) <break/>&#x2022; Fish (16) <break/>&#x2022; Meat (49) <break/>&#x2022; Nuts (5) <break/>&#x2022; Drinks (2)</td>
<td valign="top" align="left"><break/>&#x2022; Wide range of foods <break/>&#x2022; Focus on Western, Asian and Middle Eastern food <break/>&#x2022; Open to add images</td>
</tr>
<tr>
<td valign="top" align="left">FRIDa</td>
<td valign="top" align="left"><xref ref-type="bibr" rid="B17">Foroni et al. (2013)</xref></td>
<td valign="top" align="left">295</td>
<td valign="top" align="left">582</td>
<td valign="top" align="left"># Participants (adults): 73 Age: 23.1 &#x00B1; 3.3 (range: 18&#x2013;30) 53.4 % female Predominantly Italian <break/>&#x2022; Calories <break/>&#x2022; Distance from edibility <break/>&#x2022; Level of transformation <break/>&#x2022; Valence <break/>&#x2022; Arousal <break/>&#x2022; Familiarity <break/>&#x2022; Typicality <break/>&#x2022; Ambiguity <break/># Ratings per food-image: &#x223C;5&#x2013;14 # Ratings per non-food-image &#x223C;8&#x2013;21</td>
<td valign="top" align="left"><break/>&#x2022; Size, <break/>&#x2022; Brightness, <break/>&#x2022; High spatial frequency</td>
<td valign="top" align="left"><break/>&#x2022; Energy density (calories), rated</td>
<td valign="top" align="left">Food images: <break/>&#x2022; Natural-food (99 images) <break/>&#x2022; Transformed-food (153) <break/>&#x2022; Rotten-food (43) Non-food images: <break/>&#x2022; Natural-non-food items (53) <break/>&#x2022; Artificial food-related objects (119) <break/>&#x2022; Artificial objects (299) <break/>&#x2022; Animals (54) <break/>&#x2022; Scenes (57)</td>
<td valign="top" align="left"><break/>&#x2022; Food from Mediterranean cuisine <break/>&#x2022; Small sample size</td>
</tr>
<tr>
<td valign="top" align="left">F4H</td>
<td valign="top" align="left"><xref ref-type="bibr" rid="B8">Charbonnier et al. (2016)</xref></td>
<td valign="top" align="left">370</td>
<td valign="top" align="left">41</td>
<td valign="top" align="left">Adult sample: # Participants: 449 Age: 33.7 &#x00B1; 13.1 (range: n.a.) 70.2 % female Scottish, British, Dutch, Greek Underage sample: # Participants: 191 Age: 12.5 &#x00B1; 2.3 (range: n.a.) 55 % female Dutch, German, Hungarian, Swedish <break/>&#x2022; Recognizability <break/>&#x2022; Liking <break/>&#x2022; (Calories) <break/>&#x2022; Healthiness <break/># Ratings per image: Adult sample: &#x223C;72&#x2013;77 Underage sample: &#x223C;44&#x2013;59</td>
<td valign="top" align="left">none</td>
<td valign="top" align="left"><break/>&#x2022; Energy density (calories), rated and experimenter estimated</td>
<td valign="top" align="left">Food images: <break/>&#x2022; Snacks, fruits, vegetables and main meals on plates Non-food images: <break/>&#x2022; Non-food objects on plates</td>
<td valign="top" align="left"><break/>&#x2022; Data from children and adults <break/>&#x2022; Pictures are taken in different regions of Europe <break/>&#x2022; Country specific food and different preparations <break/>&#x2022; Exclusively from Western countries <break/>&#x2022; High degree of standardization. <break/>&#x2022; Open to add images</td>
</tr>
<tr>
<td valign="top" align="left">IAPS_foods</td>
<td valign="top" align="left"><xref ref-type="bibr" rid="B25">Lang et al. (2008)</xref></td>
<td valign="top" align="left">48</td>
<td valign="top" align="left">1148</td>
<td valign="top" align="left"># Participants (age: n.a.) <break/>&#x2022; Valence <break/>&#x2022; Arousal <break/>&#x2022; Dominance <break/># Ratings per image: &#x223C;100</td>
<td valign="top" align="left">none</td>
<td valign="top" align="left">none</td>
<td valign="top" align="left">Food images: <break/>&#x2022; Main meals <break/>&#x2022; Desserts Non-food images: <break/>&#x2022; IAPS images</td>
<td valign="top" align="left"><break/>&#x2022; Large group of participants and raters per image <break/>&#x2022; Small image number <break/>&#x2022; Complex backgrounds <break/>&#x2022; Image content several decades old</td>
</tr>
<tr>
<td valign="top" align="left">MaPS:</td>
<td valign="top" align="left"><xref ref-type="bibr" rid="B23">King et al. (2018)</xref></td>
<td valign="top" align="left">144</td>
<td valign="top" align="left"></td>
<td valign="top" align="left"># Participants (adults): 25 Age: 20.6 &#x00B1; 1.1 (range: n.a.) 84 % female Predominantly North American <break/>&#x2022; Interest <break/>&#x2022; Appetite <break/>&#x2022; Nutrition <break/>&#x2022; Emotional Valence <break/>&#x2022; Liking <break/>&#x2022; Frequency <break/># Ratings per image: 25</td>
<td valign="top" align="left">none</td>
<td valign="top" align="left"><break/>&#x2022; Energy density (calories) <break/>&#x2022; Macronutrients</td>
<td valign="top" align="left"><break/>&#x2022; Foods with extreme values on fat, sugar, complex carbohydrate, and protein content</td>
<td valign="top" align="left"><break/>&#x2022; Small sample size <break/>&#x2022; Small image set size <break/>&#x2022; fMRI data <break/>&#x2022; Detailed macronutrient data</td>
</tr>
<tr>
<td valign="top" align="left">OLAF</td>
<td valign="top" align="left"><xref ref-type="bibr" rid="B29">Miccoli et al. (2014)</xref></td>
<td valign="top" align="left">96</td>
<td valign="top" align="left">36 (IAPS)</td>
<td valign="top" align="left"># Participants (underage): 559 Age: 14.2 &#x00B1; 1.4 (range: 11&#x2013;17) 50.8 % female Predominantly Spanish <break/>&#x2022; Valence <break/>&#x2022; Arousal <break/>&#x2022; Dominance <break/>&#x2022; Craving<break/># Ratings per image: 18</td>
<td valign="top" align="left">none</td>
<td valign="top" align="left">none</td>
<td valign="top" align="left">Food images: <break/>&#x2022; Food compositions and complex arrangements <break/>&#x2022; Fruits <break/>&#x2022; Vegetables <break/>&#x2022; Sweet high-fat foods <break/>&#x2022; Salty high-fat foods Non-food images: <break/>&#x2022; IAPS images</td>
<td valign="top" align="left"><break/>&#x2022; Display of food images on comples backgrounds <break/>&#x2022; Very close cutouts <break/>&#x2022; &#x201C;Eye-level&#x201D; photos <break/>&#x2022; Quality differs between the images (e.g., brightness) <break/>&#x2022; Ratings comparable to IAPS</td></tr>
</tbody></table>
<table-wrap-foot>
<attrib><italic>N.a., information not available; FRIDa, foodcast research image database; F4H, full for health image database; IAPS, international affective picture system; MaPS, macronutrient picture system; OLAF, open library of affective foods.</italic></attrib>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec><title>Discussion</title>
<p>The present report introduces the <italic>food-pics_extended</italic> image dataset, an addition to the <italic>food-pics</italic> stimulus set that adds 328 images to the original 568 (the extended set thus replaces <italic>food-pics</italic> and contains a total of 896 images). In the following, we describe <italic>food-pics_extended</italic> (aim <italic>i</italic>), characterize our set and each of the major food image sets with a focus on advantages and limitations (aim <italic>ii</italic>), and finally present a guideline for choosing between sets by ranking them on various dimensions (aim <italic>iii</italic>).</p>
<p>Regarding aim <italic>i</italic>, <italic>food-pics_extended</italic> enlarges and complements the <italic>food-pics</italic> database (<xref ref-type="bibr" rid="B3">Blechert et al., 2014</xref>): beyond the mere addition of images, we collected the new normative data in a way that ensured compatibility with the normative data of <italic>food-pics</italic>. Our agreement/consistency data indicate that this process was successful: normative ratings by the new raters were largely comparable to those of <italic>food-pics</italic>, as evidenced by good interrater agreement for a subset of images presented to both subject pools. Researchers can therefore combine images and normative data from both image sets; <italic>food-pics_extended</italic> thus subsumes and replaces <italic>food-pics</italic>, and users of &#x201C;old&#x201D; and &#x201C;new&#x201D; images should refer to <italic>food-pics_extended</italic>. Some caution is warranted for craving ratings, for which agreement indices were lower and which are known to be highly state-dependent and to fluctuate (<xref ref-type="bibr" rid="B38">Shiffman, 2000</xref>). Images in <italic>food-pics</italic> and <italic>food-pics_extended</italic> were selected according to the following principles: (A) all foods were set on a white background, mostly without context (plates are shown where necessary), (B) high recognizability for most images (though for some foods in <italic>food-pics_extended</italic>, recognition might depend on cultural knowledge; <xref ref-type="bibr" rid="B20">Jensen et al., 2016</xref>), and (C) high image quality and esthetic appeal. Single foods as well as full meals and different combinations of single foods are included. Normative ratings are available from several large samples (German-speaking and North American). 
Thus, researchers interested in certain subpopulations (e.g., older US females) can extract the respective normative ratings from the database and use them to select images accordingly (e.g., high vs. low palatability, given high recognizability). <italic>Food-pics_extended</italic> comes with 315 non-food control images that can be matched to the food images on valence and arousal ratings as well as on physical image characteristics.</p>
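Such a rating-based selection can be sketched in a few lines of Python/pandas. This is a minimal illustration with stand-in data; the column names (<monospace>recognizability</monospace>, <monospace>palatability</monospace>) and thresholds are hypothetical placeholders for the labels in the actual normative-data file.

```python
import pandas as pd

# Stand-in data; real use would load the normative-data spreadsheet
# and substitute its actual column labels (hypothetical here).
df = pd.DataFrame({
    "image_id": [1, 2, 3, 4, 5, 6],
    "recognizability": [95, 90, 88, 40, 92, 85],  # percent correctly named
    "palatability": [80, 20, 55, 70, 15, 75],     # normative rating (0-100)
})

# Step 1: keep only well-recognized foods, so palatability differences
# are not confounded with failures to identify the food.
recognized = df[df["recognizability"] >= 80]

# Step 2: median-split the remaining images into high- vs. low-palatability sets.
median = recognized["palatability"].median()
high_pal = recognized[recognized["palatability"] > median]
low_pal = recognized[recognized["palatability"] <= median]
```

The same pattern extends to any of the normative dimensions (valence, arousal, familiarity) or to subgroup-specific rating columns.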
<p>Regarding aim <italic>ii</italic>, in reviewing established image databases, it became clear that while the IAPS (<xref ref-type="bibr" rid="B25">Lang et al., 2008</xref>) has been of undisputed importance for standardizing stimuli across laboratories, it is very limited in the food context. Its focus lies on images that vary strongly in valence and arousal. Its advantages include a large database of valence and arousal ratings and its extensive use in the literature. Users aiming to include non-food IAPS images in their study should thus opt for these images or for OLAF, for the sake of comparability of the normative ratings. Yet, these advantages are offset by several shortcomings: food images are few in number (48), and images are embedded in varying and complex backgrounds that might influence the neural response as a result of their overall image complexity. Furthermore, rating data do not include important information such as palatability ratings or data on calorie density.</p>
<p>An approach similar to that of the IAPS was taken by the authors of the OLAF (<xref ref-type="bibr" rid="B29">Miccoli et al., 2014</xref>). Explicitly referring to the IAPS database, the authors provide 96 images that parallel the complex and contextualized character of the IAPS: images are taken &#x201C;at eye level,&#x201D; full meals are shown with an overrepresentation of high-energy and highly palatable foods, images are meant to particularly appeal to the observers&#x2019; affective response, and normative data are given in relation to other categories of the IAPS (negative, neutral, and positive IAPS images). As a result of the naturalistic, contextualized setup, it is difficult to control aspects of the food (macronutrient content, energy density), its components (only parts of the foods are visible), and its constituents (several main and side dishes, gravy, toppings, etc.). Normative data (valence, arousal, dominance, craving) from a large group of Spanish adolescents are available, resulting in 18 ratings per image.</p>
<p>FRIDa from <xref ref-type="bibr" rid="B17">Foroni et al. (2013)</xref> was the largest set at the time of publication, comprising 295 food and 582 non-food images, the food images representing mostly Western foods with a slight bias toward Mediterranean foods. It was the first set for which quantitative measures of image characteristics (i.e., size, mean brightness, and high spatial frequency power) were available. Such image characteristics are known to influence behavioral response times and performance (<xref ref-type="bibr" rid="B16">Felipe et al., 1993</xref>; <xref ref-type="bibr" rid="B26">Mace et al., 2005</xref>; <xref ref-type="bibr" rid="B45">VanRullen, 2006</xref>; <xref ref-type="bibr" rid="B31">O&#x2019;Donell et al., 2010</xref>) as well as neurophysiological responses (<xref ref-type="bibr" rid="B33">Pourtois et al., 2005</xref>; <xref ref-type="bibr" rid="B36">Schadow et al., 2007</xref>; <xref ref-type="bibr" rid="B24">Kovalenko et al., 2012</xref>). Providing information about image characteristics is therefore important, because they represent a potential confound for comparisons between groups of images, such as high- vs. low-caloric foods. The authors were also the first to include spoiled or rotten foods, allowing interesting comparisons within the food category across varying valence/edibility (<xref ref-type="bibr" rid="B1">Becker et al., 2016</xref>). Their inclusion of natural and artificial non-foods further allows for interesting food/non-food contrasts. Clear advantages are set size, comparison categories, and rating data on &#x201C;degree of food transformation,&#x201D; &#x201C;distance from edibility,&#x201D; and calories, which are not available in any other image set. Disadvantages include the strict omission of plates (even for soups), which created edge artifacts for some images, and the limitation of the normative data to relatively few ratings (5&#x2013;14) per image from respondents predominantly from Italy.</p>
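Such image characteristics can be computed with standard numerical tools. The snippet below is a minimal sketch of two of them, mean brightness and one simple operationalization of high spatial frequency power; it is not necessarily the exact computation used by FRIDa or <italic>food-pics</italic>, and the <monospace>cutoff</monospace> parameter is an assumed illustration value.

```python
import numpy as np

def mean_brightness(img):
    """Mean pixel intensity of a grayscale image (values 0-255)."""
    return float(img.mean())

def high_sf_power(img, cutoff=0.25):
    """Fraction of spectral power above a radial frequency cutoff.

    `cutoff` is a fraction of the Nyquist frequency (0.5 cycles/pixel).
    This is one simple way to quantify "high spatial frequency power";
    the published databases may compute it differently.
    """
    spectrum = np.fft.fft2(img - img.mean())    # subtract mean to remove DC
    power = np.abs(spectrum) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]  # cycles per pixel, rows
    fx = np.fft.fftfreq(img.shape[1])[None, :]  # cycles per pixel, columns
    radius = np.sqrt(fx ** 2 + fy ** 2)
    return float(power[radius > cutoff * 0.5].sum() / power.sum())
```

For example, a fine checkerboard concentrates its power at the Nyquist frequency and yields a high value, whereas a smooth luminance ramp is dominated by low frequencies.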
<p>The database F4H by <xref ref-type="bibr" rid="B8">Charbonnier et al. (2016)</xref> includes 370 images at the time of this writing. It was the first image set to publish a standardized photographing protocol that allows the community to extend the image set with comparable parameters. It focuses on individual foods (mostly one to &#x223C;30 pieces of a single food on a plate) and on standardized presentation. This allowed the authors to provide exact estimates of calorie density along with the subjective ratings of participants. This standardized character, however, decreases the esthetic appeal and decontextualizes foods, which are often consumed in meals and compositions. F4H also includes 41 non-food images without any ratings. Food images represent foods from different Western countries. Strengths also include normative data from children and adults from seven European countries on healthiness, calories, and the similarity of images with real food. The high level of control over food content allows for precise calculations of macronutrients for studies focusing on this aspect (however, no such data other than subjective calorie content are included). Limitations include the aforementioned de-contextualization, the lack of image characteristics, a relatively small set of unrated non-food images, and the focus on European foods and European normative data.</p>
<p>The Macronutrient Picture System (<xref ref-type="bibr" rid="B23">King et al., 2018</xref>) is a rather small image set (144 images) with a specialized purpose: neurocognitive research on the neural representation of different macronutrients. Thus, foods are relatively homogeneous (but extreme) with regard to fat, sugar, and protein content. Advantages include functional magnetic resonance imaging (fMRI) data showing neural activation patterns for foods varying in macronutrient composition (sugar, fat, protein), as well as the correspondence of image content with items in a food preference questionnaire (<xref ref-type="bibr" rid="B19">Geiselman et al., 1998</xref>), allowing the parallel investigation of habitual food consumption and its neural correlates.</p>
<p>Regarding aim <italic>iii</italic> and a <italic>guideline</italic> for choosing between sets, our review illustrates that each of the presented databases has advantages but also limitations. Thus, any ranking of image sets has to be done in light of the specific research question. One important attribute of any database is the <italic>number and variety of available images</italic>, because this affects many different research questions. For various reasons, such as the variety of diets and cultures, it seems important not to constrain image choice within a given set. Small sets run the risk of omitting typical and frequently consumed foods in a given geographical area (e.g., dark bread in Central Europe, rice dishes in Asia) or restricting variability within a given food category (e.g., salty snacks). Researchers interested in a large number of foods and/or different cultures may opt for one of the larger sets, such as <italic>food-pics_extended</italic>, F4H or FRIDa. A variety of items allows one not only to tap into a wide range of foods, and potentially a wide range of cultures, but also to match image subsets on other aspects. For instance, one may be interested in calorie density as an independent variable, but want to match stimulus groups on image characteristics (e.g., colors) and degree of processing, while keeping palatability comparable. This would require complex matching operations, as these variables are sometimes correlated (<xref ref-type="bibr" rid="B17">Foroni et al., 2013</xref>; <xref ref-type="bibr" rid="B3">Blechert et al., 2014</xref>).</p>
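One simple form such a matching operation can take is greedy nearest-neighbor pairing on a single matching variable (e.g., brightness). The sketch below is purely illustrative, not the procedure used by any of the cited databases, and <monospace>greedy_match</monospace> and its example values are hypothetical.

```python
import numpy as np

def greedy_match(group_a, group_b):
    """Greedily pair each item in group_a with the closest unused item
    in group_b on a single matching variable (e.g., mean brightness).

    Returns a list of index pairs (i, j). A sketch of one simple
    matching scheme; multivariate matching would replace the absolute
    difference with a distance over several standardized characteristics.
    """
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    used = np.zeros(len(b), dtype=bool)
    pairs = []
    for i in np.argsort(a):          # process items of group_a in sorted order
        dist = np.abs(b - a[i])
        dist[used] = np.inf          # each group_b item may be used only once
        j = int(np.argmin(dist))
        used[j] = True
        pairs.append((int(i), j))
    return pairs
```

Applied to, e.g., brightness values of high- vs. low-calorie images, the returned index pairs define two subsets matched on that variable, which can then be checked for residual differences on the other rated dimensions.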
<p>Almost equally important for a range of research questions is the amount of normative data provided: reliable palatability matching requires extensive normative data from a population resembling the intended study sample. Accordingly, the size of an image database is also related to two other relevant choice dimensions, namely, <italic>cross-cultural validity/applicability</italic> and <italic>availability of normative data</italic>. With regard to size and cross-cultural applicability, F4H and <italic>food-pics_extended</italic> would be the ideal sets, while researchers with a focus on Mediterranean diets and samples may also use FRIDa and OLAF. In the realm of neuroimaging, and particularly in electroencephalography (EEG) or magnetoencephalography (MEG) research and reaction time-based studies, researchers may consider recognizability and physical image characteristics such as complexity, brightness, and other attributes that might affect brain responses. For those, the choice might be between FRIDa and <italic>food-pics_extended</italic>. Regarding research focused on macronutrient content, MaPS and <italic>food-pics_extended</italic> would be recommendable. Drinks and food packaging are only available in <italic>food-pics_extended</italic> and <italic>food-pics</italic>. Researchers aiming to extend the databases with images from their own labs may opt for open-ended stimulus sets such as F4H.</p>
<p>Certain limitations need to be kept in mind. First, our review was selective and might have overlooked some image sets. However, we aimed to review the most popular free databases focusing on human appetite studies. Second, regarding <italic>food-pics_extended</italic>, even though we intended to include Middle Eastern and Asian foods, there is still a way to go to include typical foods from all major areas of the world. Also, drinks are underrepresented but may be important. Macronutrients are available only for a subset of <italic>food-pics_extended</italic> (images of the former <italic>food-pics</italic>). In fact, recent research points to the usefulness of such information for all images: foods combining high fat and high carbohydrate content are more reinforcing than equicaloric foods with either high fat or high carbohydrate content alone (<xref ref-type="bibr" rid="B14">Difeliceantonio et al., 2018</xref>). Adaptations for different age groups may also be worthwhile. For example, a subset of <italic>food-pics_extended</italic> (images of the former <italic>food-pics</italic>) has been examined in US adolescents aged 12&#x2013;17, where an average of 75% of foods were recognized. There are no data yet available for younger participants. This indicates that while enough images with good recognizability are certainly available, there is still room for improvement for some foods. Cultural differences were also documented for <italic>food-pics</italic> ratings in Portugal (<xref ref-type="bibr" rid="B34">Prada et al., 2017</xref>), pointing to the need for further validation. Due to elevated public awareness of nutritional health, normative ratings may have to be updated periodically. 
For example, more recent samples gave higher valence ratings for low calorie foods than the original <italic>food-pics</italic> sample, tentatively pointing in that direction (although confounded with cultural differences, see <xref ref-type="bibr" rid="B34">Prada et al., 2017</xref>). Future research might extend the normative data and image breadth, and include 3D images for virtual reality as well as more high-resolution images in various formats. Importantly, the normative ratings for <italic>food-pics_extended</italic> were obtained from a relatively homogeneous sample of predominantly female, German-speaking, and educated individuals in their 30s. A representative database would require the inclusion of other age groups (particularly younger adolescents and children), less educated groups, more males, and, importantly, participants from other geographical regions.</p>
</sec>
<sec><title>Data Availability</title>
<p>Publicly available datasets were analyzed in this study. This data can be found here: <ext-link ext-link-type="uri" xlink:href="http://food-pics.sbg.ac.at">http://food-pics.sbg.ac.at</ext-link>.</p>
</sec>
<sec><title>Ethics Statement</title>
<p>This study was carried out in accordance with the recommendations of the ethics board of the University of Salzburg with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the ethics board of the University of Salzburg.</p>
</sec>
<sec><title>Author Contributions</title>
<p>JB wrote the introduction, parts of the methods, and discussion. AL and SP compiled the images and the data base. KO and SP conducted the questionnaire study and ran the statistical analysis. NAB optimized the image characteristic scripts. All authors edited and approved the last version of the manuscript.</p>
</sec>
<sec><title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This work was supported by the European Research Council (ERC) under the European Union&#x2019;s Horizon 2020 Research and Innovation Program (ERC-StG-2014 639445 NewEat).</p>
</fn>
</fn-group>
<ack>
<p>The authors thank Sarah Schmid for her work on the calorie data in the database and for contributions to the manuscript.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Becker</surname> <given-names>C. A.</given-names></name> <name><surname>Flaisch</surname> <given-names>T.</given-names></name> <name><surname>Renner</surname> <given-names>B.</given-names></name> <name><surname>Schupp</surname> <given-names>H. T.</given-names></name></person-group> (<year>2016</year>). <article-title>Neural correlates of the perception of spoiled food stimuli.</article-title> <source><italic>Front. Hum. Neurosci.</italic></source> <volume>10</volume>:<issue>302</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2016.00302</pub-id> <pub-id pub-id-type="pmid">27445746</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berthoud</surname> <given-names>H. R.</given-names></name> <name><surname>Morrison</surname> <given-names>C.</given-names></name></person-group> (<year>2008</year>). <article-title>The brain, appetite, and obesity.</article-title> <source><italic>Annu. Rev. Psychol.</italic></source> <volume>59</volume> <fpage>55</fpage>&#x2013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.59.103006.093551</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blechert</surname> <given-names>J.</given-names></name> <name><surname>Meule</surname> <given-names>A.</given-names></name> <name><surname>Busch</surname> <given-names>N. A.</given-names></name> <name><surname>Ohla</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>Food-pics: an image database for experimental research on eating and appetite.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>5</volume>:<issue>617</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.00617</pub-id> <pub-id pub-id-type="pmid">25009514</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blechert</surname> <given-names>J.</given-names></name> <name><surname>Testa</surname> <given-names>G.</given-names></name> <name><surname>Georgii</surname> <given-names>C.</given-names></name> <name><surname>Klimesch</surname> <given-names>W.</given-names></name> <name><surname>Wilhelm</surname> <given-names>F. H.</given-names></name></person-group> (<year>2016</year>). <article-title>The Pavlovian craver: neural and experiential correlates of single trial naturalistic food conditioning in humans.</article-title> <source><italic>Physiol. Behav.</italic></source> <volume>58</volume> <fpage>18</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/j.physbeh.2016.02.028</pub-id> <pub-id pub-id-type="pmid">26905451</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boswell</surname> <given-names>R. G.</given-names></name> <name><surname>Kober</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <article-title>Food cue reactivity and craving predict eating and weight gain: a meta-analytic review.</article-title> <source><italic>Obes. Rev.</italic></source> <volume>17</volume> <fpage>159</fpage>&#x2013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.1111/obr.12354</pub-id> <pub-id pub-id-type="pmid">26644270</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Canny</surname> <given-names>J.</given-names></name></person-group> (<year>1986</year>). <article-title>A computational approach to edge detection.</article-title> <source><italic>IEEE Trans. Pattern Anal.</italic></source> <volume>8</volume> <fpage>679</fpage>&#x2013;<lpage>698</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.1986.4767851</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Castellanos</surname> <given-names>E. H.</given-names></name> <name><surname>Charboneau</surname> <given-names>E.</given-names></name> <name><surname>Dietrich</surname> <given-names>M. S.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Bradley</surname> <given-names>B. P.</given-names></name> <name><surname>Mogg</surname> <given-names>K.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Obese adults have visual attention bias for food cue images: evidence for altered reward system function.</article-title> <source><italic>Int. J. Obesity</italic></source> <volume>33</volume> <fpage>1063</fpage>&#x2013;<lpage>1073</lpage>. <pub-id pub-id-type="doi">10.1038/ijo.2009.138</pub-id> <pub-id pub-id-type="pmid">19621020</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Charbonnier</surname> <given-names>L.</given-names></name> <name><surname>Van Meer</surname> <given-names>F.</given-names></name> <name><surname>Van Der Laan</surname> <given-names>L. N.</given-names></name> <name><surname>Viergever</surname> <given-names>M. A.</given-names></name> <name><surname>Smeets</surname> <given-names>P. A. M.</given-names></name></person-group> (<year>2016</year>). <article-title>Standardized food images: a photographing protocol and image database.</article-title> <source><italic>Appetite</italic></source> <volume>96</volume> <fpage>166</fpage>&#x2013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1016/j.appet.2015.08.041</pub-id> <pub-id pub-id-type="pmid">26344127</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>M.</given-names></name> <name><surname>Dhingra</surname> <given-names>K.</given-names></name> <name><surname>Wu</surname> <given-names>W.</given-names></name> <name><surname>Yang</surname> <given-names>L.</given-names></name> <name><surname>Sukthankar</surname> <given-names>R.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name></person-group> (<year>2009</year>). &#x201C;<article-title>PFID: Pittsburgh fast-food image dataset</article-title>,&#x201D; in <source><italic>Proceedings for the 2009 16th IEEE International Conference on Image Processing (ICIP)</italic></source>, (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>289</fpage>&#x2013;<lpage>292</lpage>. <pub-id pub-id-type="doi">10.1109/ICIP.2009.5413511</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Zhou</surname> <given-names>H.</given-names></name> <name><surname>Diao</surname> <given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>ChineseFoodNet: a large-scale image dataset for Chinese food recognition.</article-title> <source><italic>arXiv</italic></source> [Preprint]. <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1705.02743">arXiv:1705.02743</ext-link>.</citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cunningham</surname> <given-names>C. A.</given-names></name> <name><surname>Egeth</surname> <given-names>H. E.</given-names></name></person-group> (<year>2018</year>). <article-title>The capture of attention by entirely irrelevant pictures of calorie-dense foods.</article-title> <source><italic>Psychon. Bull. Rev.</italic></source> <volume>25</volume> <fpage>586</fpage>&#x2013;<lpage>595</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-017-1375-8</pub-id> <pub-id pub-id-type="pmid">29075994</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dagher</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Functional brain imaging of appetite.</article-title> <source><italic>Trends Endocrinol. Metab.</italic></source> <volume>23</volume> <fpage>250</fpage>&#x2013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1016/j.tem.2012.02.009</pub-id> <pub-id pub-id-type="pmid">22483361</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davis</surname> <given-names>C.</given-names></name> <name><surname>Curtis</surname> <given-names>C.</given-names></name> <name><surname>Levitan</surname> <given-names>R. D.</given-names></name> <name><surname>Carter</surname> <given-names>J. C.</given-names></name> <name><surname>Kaplan</surname> <given-names>A. S.</given-names></name> <name><surname>Kennedy</surname> <given-names>J. L.</given-names></name></person-group> (<year>2011</year>). <article-title>Evidence that &#x2018;food addiction&#x2019; is a valid phenotype of obesity.</article-title> <source><italic>Appetite</italic></source> <volume>57</volume> <fpage>711</fpage>&#x2013;<lpage>717</lpage>. <pub-id pub-id-type="doi">10.1016/j.appet.2011.08.017</pub-id> <pub-id pub-id-type="pmid">21907742</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Difeliceantonio</surname> <given-names>A. G.</given-names></name> <name><surname>Coppin</surname> <given-names>G.</given-names></name> <name><surname>Rigoux</surname> <given-names>L.</given-names></name> <name><surname>Edwin Thanarajah</surname> <given-names>S.</given-names></name> <name><surname>Dagher</surname> <given-names>A.</given-names></name> <name><surname>Tittgemeyer</surname> <given-names>M.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Supra-additive effects of combining fat and carbohydrate on food reward.</article-title> <source><italic>Cell Metab.</italic></source> <volume>28</volume> <fpage>33</fpage>&#x2013;<lpage>44</lpage>.e3. <pub-id pub-id-type="doi">10.1016/j.cmet.2018.05.018</pub-id> <pub-id pub-id-type="pmid">29909968</pub-id></citation></ref>
<ref id="B15"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Farinella</surname> <given-names>G. M.</given-names></name> <name><surname>Allegra</surname> <given-names>D.</given-names></name> <name><surname>Stanco</surname> <given-names>F.</given-names></name> <name><surname>Battiato</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>On the exploitation of one class classification to distinguish food vs non-food images</article-title>,&#x201D; in <source><italic>Proceedings of the International Conference on Image Analysis and Processing</italic></source>, (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>375</fpage>&#x2013;<lpage>383</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-23222-5_46</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Felipe</surname> <given-names>A.</given-names></name> <name><surname>Buades</surname> <given-names>M. J.</given-names></name> <name><surname>Artigas</surname> <given-names>J. M.</given-names></name></person-group> (<year>1993</year>). <article-title>Influence of the contrast sensitivity function on the reaction-time.</article-title> <source><italic>Vis. Res.</italic></source> <volume>33</volume> <fpage>2461</fpage>&#x2013;<lpage>2466</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(93)90126-H</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foroni</surname> <given-names>F.</given-names></name> <name><surname>Pergola</surname> <given-names>G.</given-names></name> <name><surname>Argiris</surname> <given-names>G.</given-names></name> <name><surname>Rumiati</surname> <given-names>R. I.</given-names></name></person-group> (<year>2013</year>). <article-title>The foodcast research image database (FRIDa).</article-title> <source><italic>Front. Hum. Neurosci.</italic></source> <volume>7</volume>:<issue>51</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2013.00051</pub-id> <pub-id pub-id-type="pmid">23459781</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fuhrer</surname> <given-names>D.</given-names></name> <name><surname>Zysset</surname> <given-names>S.</given-names></name> <name><surname>Stumvoll</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Brain activity in hunger and satiety: an exploratory visually stimulated fMRI study.</article-title> <source><italic>Obesity</italic></source> <volume>16</volume> <fpage>945</fpage>&#x2013;<lpage>950</lpage>. <pub-id pub-id-type="doi">10.1038/oby.2008.33</pub-id> <pub-id pub-id-type="pmid">18292747</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geiselman</surname> <given-names>P. J.</given-names></name> <name><surname>Anderson</surname> <given-names>A. M.</given-names></name> <name><surname>Dowdy</surname> <given-names>M. L.</given-names></name> <name><surname>West</surname> <given-names>D. B.</given-names></name> <name><surname>Redmann</surname> <given-names>S. M.</given-names></name> <name><surname>Smith</surname> <given-names>S. R.</given-names></name></person-group> (<year>1998</year>). <article-title>Reliability and validity of a macronutrient self-selection paradigm and a food preference questionnaire.</article-title> <source><italic>Physiol. Behav.</italic></source> <volume>63</volume> <fpage>919</fpage>&#x2013;<lpage>928</lpage>. <pub-id pub-id-type="doi">10.1016/S0031-9384(97)00542-8</pub-id> <pub-id pub-id-type="pmid">9618017</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jensen</surname> <given-names>C. D.</given-names></name> <name><surname>Duraccio</surname> <given-names>K. M.</given-names></name> <name><surname>Barnett</surname> <given-names>K. A.</given-names></name> <name><surname>Stevens</surname> <given-names>K. S.</given-names></name></person-group> (<year>2016</year>). <article-title>Appropriateness of the food-pics image database for experimental eating and appetite research with adolescents.</article-title> <source><italic>Eat. Behav.</italic></source> <volume>23</volume> <fpage>195</fpage>&#x2013;<lpage>199</lpage>. <pub-id pub-id-type="doi">10.1016/j.eatbeh.2016.10.007</pub-id> <pub-id pub-id-type="pmid">27842263</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>A.</given-names></name> <name><surname>Hardman</surname> <given-names>C. A.</given-names></name> <name><surname>Lawrence</surname> <given-names>N.</given-names></name> <name><surname>Field</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Cognitive training as a potential treatment for overweight and obesity: a critical review of the evidence.</article-title> <source><italic>Appetite</italic></source> <volume>124</volume> <fpage>50</fpage>&#x2013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1016/j.appet.2017.05.032</pub-id> <pub-id pub-id-type="pmid">28546010</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Killgore</surname> <given-names>W. D. S.</given-names></name> <name><surname>Young</surname> <given-names>A. D.</given-names></name> <name><surname>Femia</surname> <given-names>L. A.</given-names></name> <name><surname>Bogorodzki</surname> <given-names>P.</given-names></name> <name><surname>Rogowska</surname> <given-names>J.</given-names></name> <name><surname>Yurgelun-Todd</surname> <given-names>D. A.</given-names></name></person-group> (<year>2003</year>). <article-title>Cortical and limbic activation during viewing of high- versus low-calorie foods.</article-title> <source><italic>Neuroimage</italic></source> <volume>19</volume> <fpage>1381</fpage>&#x2013;<lpage>1394</lpage>. <pub-id pub-id-type="doi">10.1016/S1053-8119(03)00191-5</pub-id> <pub-id pub-id-type="pmid">12948696</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>King</surname> <given-names>J. L.</given-names></name> <name><surname>Fearnbach</surname> <given-names>S. N.</given-names></name> <name><surname>Ramakrishnapillai</surname> <given-names>S.</given-names></name> <name><surname>Shankpal</surname> <given-names>P.</given-names></name> <name><surname>Geiselman</surname> <given-names>P. J.</given-names></name> <name><surname>Martin</surname> <given-names>C. K.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Perceptual characterization of the macronutrient picture system (MaPS) for food image fMRI.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>9</volume>:<issue>17</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2018.00017</pub-id> <pub-id pub-id-type="pmid">29434559</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kovalenko</surname> <given-names>L. Y.</given-names></name> <name><surname>Chaumon</surname> <given-names>M.</given-names></name> <name><surname>Busch</surname> <given-names>N. A.</given-names></name></person-group> (<year>2012</year>). <article-title>A Pool of Pairs of Related Objects (POPORO) for investigating visual semantic integration: behavioral and electrophysiological validation.</article-title> <source><italic>Brain Topogr.</italic></source> <volume>25</volume> <fpage>272</fpage>&#x2013;<lpage>284</lpage>. <pub-id pub-id-type="doi">10.1007/s10548-011-0216-8</pub-id> <pub-id pub-id-type="pmid">22218845</pub-id></citation></ref>
<ref id="B25"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Lang</surname> <given-names>P. J.</given-names></name> <name><surname>Bradley</surname> <given-names>M. M.</given-names></name> <name><surname>Cuthbert</surname> <given-names>B. N.</given-names></name></person-group> (<year>2008</year>). <source><italic>International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual</italic>.</source> Technical Report A-8. <publisher-loc>Gainesville, FL</publisher-loc>: <publisher-name>University of Florida</publisher-name>.</citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mac&#x00E9;</surname> <given-names>M. J. M.</given-names></name> <name><surname>Thorpe</surname> <given-names>S. J.</given-names></name> <name><surname>Fabre-Thorpe</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Rapid categorization of achromatic natural scenes: how robust at very low contrasts?</article-title> <source><italic>Eur. J. Neurosci.</italic></source> <volume>21</volume> <fpage>2007</fpage>&#x2013;<lpage>2018</lpage>. <pub-id pub-id-type="doi">10.1111/j.1460-9568.2005.04029.x</pub-id> <pub-id pub-id-type="pmid">15869494</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meule</surname> <given-names>A.</given-names></name> <name><surname>K&#x00FC;bler</surname> <given-names>A.</given-names></name> <name><surname>Blechert</surname> <given-names>J.</given-names></name></person-group> (<year>2013</year>). <article-title>Time course of electrocortical food-cue responses during cognitive regulation of craving.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>4</volume>:<issue>669</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00669</pub-id> <pub-id pub-id-type="pmid">24098290</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meule</surname> <given-names>A.</given-names></name> <name><surname>Skirde</surname> <given-names>A. K.</given-names></name> <name><surname>Freund</surname> <given-names>R.</given-names></name> <name><surname>V&#x00F6;gele</surname> <given-names>C.</given-names></name> <name><surname>K&#x00FC;bler</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>High-calorie food-cues impair working memory performance in high and low food cravers.</article-title> <source><italic>Appetite</italic></source> <volume>59</volume> <fpage>264</fpage>&#x2013;<lpage>269</lpage>. <pub-id pub-id-type="doi">10.1016/j.appet.2012.05.010</pub-id> <pub-id pub-id-type="pmid">22613059</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miccoli</surname> <given-names>L.</given-names></name> <name><surname>Delgado</surname> <given-names>R.</given-names></name> <name><surname>Rodr&#x00ED;guez-Ruiz</surname> <given-names>S.</given-names></name> <name><surname>Guerra</surname> <given-names>P.</given-names></name> <name><surname>Garc&#x00ED;a-M&#x00E1;rmol</surname> <given-names>E.</given-names></name> <name><surname>Fern&#x00E1;ndez-Santaella</surname> <given-names>M. C.</given-names></name></person-group> (<year>2014</year>). <article-title>Meet OLAF, a good friend of the IAPS! The Open Library of Affective Foods: a tool to investigate the emotional impact of food in adolescents.</article-title> <source><italic>PLoS One</italic></source> <volume>9</volume>:<issue>e114515</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0114515</pub-id> <pub-id pub-id-type="pmid">25490404</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nummenmaa</surname> <given-names>L.</given-names></name> <name><surname>Hietanen</surname> <given-names>J. K.</given-names></name> <name><surname>Calvo</surname> <given-names>M. G.</given-names></name> <name><surname>Hy&#x00F6;n&#x00E4;</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Food catches the eye but not for everyone: a BMI&#x2013;contingent attentional bias in rapid detection of nutriments.</article-title> <source><italic>PLoS One</italic></source> <volume>6</volume>:<issue>e19215</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0019215</pub-id> <pub-id pub-id-type="pmid">21603657</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>O&#x2019;Donell</surname> <given-names>B. M.</given-names></name> <name><surname>Barraza</surname> <given-names>J. F.</given-names></name> <name><surname>Colombo</surname> <given-names>E. M.</given-names></name></person-group> (<year>2010</year>). <article-title>The effect of chromatic and luminance information on reaction times.</article-title> <source><italic>Vis. Neurosci.</italic></source> <volume>27</volume> <fpage>119</fpage>&#x2013;<lpage>129</lpage>. <pub-id pub-id-type="doi">10.1017/S0952523810000143</pub-id> <pub-id pub-id-type="pmid">20594382</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pelchat</surname> <given-names>M. L.</given-names></name> <name><surname>Johnson</surname> <given-names>A.</given-names></name> <name><surname>Chan</surname> <given-names>R.</given-names></name> <name><surname>Valdez</surname> <given-names>J.</given-names></name> <name><surname>Ragland</surname> <given-names>J. D.</given-names></name></person-group> (<year>2004</year>). <article-title>Images of desire: food-craving activation during fMRI.</article-title> <source><italic>Neuroimage</italic></source> <volume>23</volume> <fpage>1486</fpage>&#x2013;<lpage>1493</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.08.023</pub-id> <pub-id pub-id-type="pmid">15589112</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pourtois</surname> <given-names>G.</given-names></name> <name><surname>Dan</surname> <given-names>E. S.</given-names></name> <name><surname>Grandjean</surname> <given-names>D.</given-names></name> <name><surname>Sander</surname> <given-names>D.</given-names></name> <name><surname>Vuilleumier</surname> <given-names>P.</given-names></name></person-group> (<year>2005</year>). <article-title>Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: time course and topographic evoked-potentials mapping.</article-title> <source><italic>Hum. Brain Mapp.</italic></source> <volume>26</volume> <fpage>65</fpage>&#x2013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20130</pub-id> <pub-id pub-id-type="pmid">15954123</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prada</surname> <given-names>M.</given-names></name> <name><surname>Rodrigues</surname> <given-names>D.</given-names></name> <name><surname>Garrido</surname> <given-names>M. V.</given-names></name> <name><surname>Lopes</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Food-pics-PT: portuguese validation of food images in 10 subjective evaluative dimensions.</article-title> <source><italic>Food Qual. Prefer.</italic></source> <volume>61</volume> <fpage>15</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/j.foodqual.2017.04.015</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pursey</surname> <given-names>K. M.</given-names></name> <name><surname>Stanwell</surname> <given-names>P.</given-names></name> <name><surname>Callister</surname> <given-names>R. J.</given-names></name> <name><surname>Brain</surname> <given-names>K.</given-names></name> <name><surname>Collins</surname> <given-names>C. E.</given-names></name> <name><surname>Burrows</surname> <given-names>T. L.</given-names></name></person-group> (<year>2014</year>). <article-title>Neural responses to visual food cues according to weight status: a systematic review of functional magnetic resonance imaging studies.</article-title> <source><italic>Front. Nutr.</italic></source> <volume>1</volume>:<issue>7</issue>. <pub-id pub-id-type="doi">10.3389/fnut.2014.00007</pub-id> <pub-id pub-id-type="pmid">25988110</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schadow</surname> <given-names>J.</given-names></name> <name><surname>Lenz</surname> <given-names>D.</given-names></name> <name><surname>Thaerig</surname> <given-names>S.</given-names></name> <name><surname>Busch</surname> <given-names>N. A.</given-names></name> <name><surname>Fr&#x00FC;nd</surname> <given-names>I.</given-names></name> <name><surname>Rieger</surname> <given-names>J. W.</given-names></name><etal/></person-group> (<year>2007</year>). <article-title>Stimulus intensity affects early sensory processing: visual contrast modulates evoked gamma-band activity in human EEG.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>66</volume> <fpage>28</fpage>&#x2013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2007.05.010</pub-id> <pub-id pub-id-type="pmid">17599598</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schur</surname> <given-names>E. A.</given-names></name> <name><surname>Kleinhans</surname> <given-names>N. M.</given-names></name> <name><surname>Goldberg</surname> <given-names>J.</given-names></name> <name><surname>Buchwald</surname> <given-names>D.</given-names></name> <name><surname>Schwartz</surname> <given-names>M. W.</given-names></name> <name><surname>Maravilla</surname> <given-names>K.</given-names></name></person-group> (<year>2009</year>). <article-title>Activation in brain energy regulation and reward centers by food cues varies with choice of visual stimulus.</article-title> <source><italic>Int. J. Obes.</italic></source> <volume>33</volume> <fpage>653</fpage>&#x2013;<lpage>661</lpage>. <pub-id pub-id-type="doi">10.1038/ijo.2009.56</pub-id> <pub-id pub-id-type="pmid">19365394</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shiffman</surname> <given-names>S.</given-names></name></person-group> (<year>2000</year>). <article-title>Comments on craving.</article-title> <source><italic>Addiction</italic></source> <volume>95</volume> <fpage>S171</fpage>&#x2013;<lpage>S175</lpage>. <pub-id pub-id-type="doi">10.1046/j.1360-0443.95.8s2.6.x</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Siep</surname> <given-names>N.</given-names></name> <name><surname>Roefs</surname> <given-names>A.</given-names></name> <name><surname>Roebroeck</surname> <given-names>A.</given-names></name> <name><surname>Havermans</surname> <given-names>R.</given-names></name> <name><surname>Bonte</surname> <given-names>M. L.</given-names></name> <name><surname>Jansen</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>Hunger is the best spice: an fMRI study of the effects of attention, hunger and calorie content on food reward processing in the amygdala and orbitofrontal cortex.</article-title> <source><italic>Behav. Brain Res.</italic></source> <volume>198</volume> <fpage>149</fpage>&#x2013;<lpage>158</lpage>. <pub-id pub-id-type="doi">10.1016/j.bbr.2008.10.035</pub-id> <pub-id pub-id-type="pmid">19028527</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spence</surname> <given-names>C.</given-names></name> <name><surname>Okajima</surname> <given-names>K.</given-names></name> <name><surname>Cheok</surname> <given-names>A. D.</given-names></name> <name><surname>Petit</surname> <given-names>O.</given-names></name> <name><surname>Michel</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>Eating with our eyes: from visual hunger to digital satiation.</article-title> <source><italic>Brain Cogn.</italic></source> <volume>110</volume> <fpage>53</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2015.08.006</pub-id> <pub-id pub-id-type="pmid">26432045</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stice</surname> <given-names>E.</given-names></name> <name><surname>Lawrence</surname> <given-names>N. S.</given-names></name> <name><surname>Kemps</surname> <given-names>E.</given-names></name> <name><surname>Veling</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <article-title>Training motor responses to food: a novel treatment for obesity targeting implicit processes.</article-title> <source><italic>Clin. Psychol. Rev.</italic></source> <volume>49</volume> <fpage>16</fpage>&#x2013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1016/j.cpr.2016.06.005</pub-id> <pub-id pub-id-type="pmid">27498406</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>D. W.</given-names></name> <name><surname>Fellows</surname> <given-names>L. K.</given-names></name> <name><surname>Small</surname> <given-names>D. M.</given-names></name> <name><surname>Dagher</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Food and drug cues activate similar brain regions: a meta-analysis of functional MRI studies.</article-title> <source><italic>Physiol. Behav.</italic></source> <volume>106</volume> <fpage>317</fpage>&#x2013;<lpage>324</lpage>. <pub-id pub-id-type="doi">10.1016/j.physbeh.2012.03.009</pub-id> <pub-id pub-id-type="pmid">22450260</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Toepel</surname> <given-names>U.</given-names></name> <name><surname>Knebel</surname> <given-names>J. F.</given-names></name> <name><surname>Hudry</surname> <given-names>J.</given-names></name> <name><surname>Le Coutre</surname> <given-names>J.</given-names></name> <name><surname>Murray</surname> <given-names>M. M.</given-names></name></person-group> (<year>2009</year>). <article-title>The brain tracks the energetic value in food images.</article-title> <source><italic>Neuroimage</italic></source> <volume>44</volume> <fpage>967</fpage>&#x2013;<lpage>974</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.10.005</pub-id> <pub-id pub-id-type="pmid">19013251</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Uher</surname> <given-names>R.</given-names></name> <name><surname>Treasure</surname> <given-names>J.</given-names></name> <name><surname>Heining</surname> <given-names>M.</given-names></name> <name><surname>Brammer</surname> <given-names>M. J.</given-names></name> <name><surname>Campbell</surname> <given-names>I. C.</given-names></name></person-group> (<year>2006</year>). <article-title>Cerebral processing of food-related stimuli: effects of fasting and gender.</article-title> <source><italic>Behav. Brain Res.</italic></source> <volume>169</volume> <fpage>111</fpage>&#x2013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.1016/j.bbr.2005.12.008</pub-id> <pub-id pub-id-type="pmid">16445991</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>VanRullen</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>On second glance: still no high-level pop-out effect for faces.</article-title> <source><italic>Vis. Res.</italic></source> <volume>46</volume> <fpage>3017</fpage>&#x2013;<lpage>3027</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2005.07.009</pub-id> <pub-id pub-id-type="pmid">16125749</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wardle</surname> <given-names>M. C.</given-names></name> <name><surname>Lopez-Gamundi</surname> <given-names>P.</given-names></name> <name><surname>Flagel</surname> <given-names>S. B.</given-names></name></person-group> (<year>2018</year>). <article-title>Measuring appetitive conditioned responses in humans.</article-title> <source><italic>Physiol. Behav.</italic></source> <volume>188</volume> <fpage>140</fpage>&#x2013;<lpage>150</lpage>. <pub-id pub-id-type="doi">10.1016/j.physbeh.2018.02.004</pub-id> <pub-id pub-id-type="pmid">29408238</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn01"><label>1</label><p>Many thanks to our <italic>food-pics_extended</italic> contributors:</p>
<p>Lo&#x00EF;c P. Heurley, Universit&#x00E9; Paris Ouest Nanterre La D&#x00E9;fense, Nanterre, France</p>
<p>Jang-Han Lee, Chung-Ang University, Seoul, South Korea</p>
<p>Gal Sheppes, Tel Aviv University, Tel Aviv, Israel</p>
<p>Loukia Tzavella, Cardiff University, Cardiff, United Kingdom</p>
<p>Olga Pollatos, Universit&#x00E4;t Ulm, Ulm, Germany</p>
<p>Vaibhav Tyagi, Plymouth University, Plymouth, United Kingdom</p></fn>
<fn id="fn02"><label>2</label><p><ext-link ext-link-type="uri" xlink:href="http://food-pics.sbg.ac.at">http://food-pics.sbg.ac.at</ext-link></p></fn>
</fn-group>
</back>
</article>