How to improve smart food logging apps

By Lynda Searby




A study has shown that there is room for improvement with personalised nutrition apps that use image-based food logging.

Researchers from the University of Bern analysed common mistakes made by users of the goFOOD Lite app, providing insights that can be taken on board by the personalised nutrition industry to improve the functionality and user-friendliness of food image recognition software. 

“To our knowledge, this is the first research study that objectively analyses user errors in the automation of food and nutritional recognition apps in real-life conditions,” wrote the researchers.

They added that “this analysis should provide useful material for improving…the quality and reliability of app-based diet recording”.

A number of health and diet apps use artificial-intelligence algorithms to translate food images captured by the user into estimates of nutritional content. However, one of the potential weaknesses of such apps is that the accuracy of the data generated is highly dependent on image quality.

In this study, 48 participants with a mean age of 34.2 years were given face-to-face instructions for the app and asked to take before-and-after pictures of any food or beverage consumed over a 24-hour period, using a reference card as a fiducial marker (to provide a size reference and allow quantities to be estimated). All pictures that were unsuitable for processing were analysed to identify the main mistakes made by users.
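
The reference card is what anchors each photo to real-world dimensions. As a minimal sketch, not taken from the goFOOD Lite codebase and assuming an upstream detection step has already measured the card and plate widths in pixels, this is how a credit-card-sized fiducial marker can be converted into a pixel-to-millimetre scale:

```python
# Minimal sketch (not the goFOOD algorithm): a credit-card-sized fiducial
# marker gives a pixel-to-millimetre scale for estimating portion size.

CARD_WIDTH_MM = 85.6  # ISO/IEC 7810 ID-1 width, the standard credit-card size

def mm_per_pixel(card_width_px: float) -> float:
    """Scale factor derived from the detected pixel width of the reference card."""
    return CARD_WIDTH_MM / card_width_px

def estimate_plate_diameter_mm(plate_width_px: float, card_width_px: float) -> float:
    """Rough plate diameter estimate, assuming plate and card lie on the same surface."""
    return plate_width_px * mm_per_pixel(card_width_px)

# Example: a card spanning 220 px and a plate spanning 640 px in the same image
print(round(estimate_plate_diameter_mm(640, 220)))  # ~249 mm
```

The goFOOD recording itself relies on two photos taken from different angles; the sketch above only illustrates why the card must be visible and correctly placed for any size estimate to be recoverable.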

Main mistakes

Of the 468 food photographs captured in the study, 12.8% had to be discarded due to user errors. Incorrect use of the fiducial marker was the most common mistake, followed by the plate being incompatible or not fully visible.

“To process the pictures correctly, the app requires a fiducial marker, in our case, an object the size of a credit card. The instructions given indicated that the card must be placed next to the plate or food or beverage (on the same table or surface), be fully visible, not be moved between the two angle photos of one recording and be placed with a specific side facing up,” wrote the researchers.

They reported that in some pictures the card was missing or not fully visible, while in others the wrong card was used or the card was incorrectly positioned.

In addition, although participants were informed that capturing the plate was required for the app to function, there were instances where the plate wasn’t fully visible within the image. Other mistakes included objects, hands or shadows hindering visibility.

There were also issues that could not be attributed to user error but that still affected the functionality of the app. For example, plates with a non-elliptical shape or with busy patterns posed a challenge from an algorithmic point of view. In addition, some participants took photos of packaged food that had to be discarded because the app did not have barcode reading capabilities.
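
Barcode decoding is well supported by off-the-shelf libraries, so packaged foods need not be lost data. The following is a hedged sketch of how such a feature could work, here using the pyzbar library; this is not part of goFOOD Lite (the study notes the app lacked barcode reading), and the file name and any subsequent product lookup are illustrative assumptions:

```python
# Hedged sketch: decode barcodes on packaged foods with pyzbar,
# a feature the study notes was missing from goFOOD Lite.
from PIL import Image
from pyzbar.pyzbar import decode

def read_barcodes(image_path: str) -> list[str]:
    """Return all barcode values found in the image, e.g. EAN-13 codes on packaging."""
    return [result.data.decode("utf-8") for result in decode(Image.open(image_path))]

# A returned code could then be matched against a product database;
# the choice of database is an implementation detail, not specified in the study.
print(read_barcodes("snack_packaging.jpg"))
```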

Just over half (52%) of participants made at least one error, which, the researchers said, underlines the need for improvements both in automatic methods for collecting dietary intake data and in the instructions and points of emphasis provided to users.

Recommendations for improvements

The researchers said the study showed that adequate instruction may be needed for users to learn how to operate image-based apps correctly, and that general technology literacy alone may not be enough.

They suggested that future improvements could focus on improving the recognition of food on various types of plates, as well as exploring alternatives to the use of fiducial markers.

Improved navigation through the app, including some training material, and an automatic image check feature that tells the user whether the image has been captured successfully would help to eliminate basic mistakes, said the researchers.

Another useful feature might be the possibility of deleting an entry, especially in combination with a prompt from the app asking the user to try again because the ‘picture was not saved since the picture was not taken properly’, they said. Text messages that verify good lighting conditions would also improve the image capturing process and usability of the app.
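
These recommendations amount to a quality gate applied before a photo enters the recognition pipeline. Below is a minimal sketch of such an automatic image check using OpenCV's standard blur and brightness heuristics; the thresholds and user-facing messages are illustrative assumptions, not values or wording from the study:

```python
# Hedged sketch of the kind of automatic image check the researchers suggest:
# reject blurry or poorly lit photos before they reach the recognition pipeline.
# Thresholds are illustrative, not values from the study.
import cv2

BLUR_THRESHOLD = 100.0   # variance of Laplacian below this suggests a blurry image
MIN_BRIGHTNESS = 60.0    # mean grey level below this suggests poor lighting
MAX_BRIGHTNESS = 200.0   # mean grey level above this suggests overexposure

def check_image(image_path: str) -> list[str]:
    """Return user-facing warnings; an empty list means the photo looks usable."""
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        return ["The picture could not be read. Please try again."]
    warnings = []
    if cv2.Laplacian(grey, cv2.CV_64F).var() < BLUR_THRESHOLD:
        warnings.append("The picture looks blurry. Hold the phone steady and retake it.")
    brightness = grey.mean()
    if brightness < MIN_BRIGHTNESS:
        warnings.append("The picture is too dark. Move to better lighting.")
    elif brightness > MAX_BRIGHTNESS:
        warnings.append("The picture is overexposed. Avoid direct glare.")
    return warnings

for message in check_image("meal_before.jpg"):
    print(message)
```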

They added: “A video tutorial at a variable pace could support those less apt or confident in using these kinds of apps. Likewise, the integration of text messages at different stages in the data entry process could assist users and reduce errors.”

The researchers said that their hope is that such apps can work as both a food log and a dietary assessment app, reducing the time and effort required by conventional methods for assessing nutrition.


Source: JMIR mHealth and uHealth
"The Human Factor in Automated Image-Based Nutrition Apps: Analysis of Common Mistakes Using the goFOOD Lite App"
Authors: Vasiloglou M, van der Horst K, Stathopoulou T, Jaeggi M, Tedde G, Lu Y, Mougiakakou S
DOI: 10.2196/24467

