Background: Dental diseases can be prevented through the management of dental plaque. Dental plaque can be identified using the light-induced fluorescence (LIF) technique, which uses light at 405 nm. The LIF technique is more convenient than the commercial technique using a disclosing agent, but the result may vary for each individual because it still requires visual identification.
Objective: The objective of this study is to introduce and validate a deep learning–based oral hygiene monitoring system that makes it easy to identify dental plaques at home.
Methods: We developed a LIF-based system consisting of a device that can visually identify dental plaques and a mobile app that displays the location and area of dental plaques on oral images. The mobile app is programmed to automatically determine the location and distribution of dental plaques using a deep learning–based algorithm and present the results to the user as time series data. The mobile app is also built as a hybrid of native and web applications so that the algorithm is executed on a cloud server, which distributes computing resources efficiently.
Results: The location and distribution of users’ dental plaques could be identified via the hand-held LIF device or mobile app. The color correction filter in the device was developed using a color mixing technique. The mobile app was built as a hybrid app combining the functionalities of a native application and a web application. Through the scrollable WebView on the mobile app, changes in the time series of dental plaque could be confirmed. The algorithm for dental plaque detection was implemented to run on Amazon Web Services for object detection by single shot multibox detector and instance segmentation by Mask region-based convolutional neural network.
Conclusions: This paper shows that the system can be used as a home oral care product for timely identification and management of dental plaques. In the future, it is expected that these products will significantly reduce the social costs associated with dental diseases.
Dental plaque is a sticky biofilm associated with oral diseases such as tooth decay and periodontal disease. Management of dental plaque is one of the effective ways to prevent dental diseases, but its transparent and colorless properties make it difficult to visually identify and manage. Therefore, it is important to improve motivation for dental plaque management by making it easy to identify whether the dental plaque adheres to the tooth surface. The most commonly recommended method is to use a disclosing agent that can identify the dental plaque by staining [, ]. However, the assessment of dental plaque accumulation is subjective and error-prone because it is generally performed through self-monitoring. Therefore, accurate evaluation requires clinical examination by a clinician, which increases cost and time [ , ]. In addition, the mouth and tongue may be stained all day or temporarily by disclosing agents, and some products contain dyes that may cause allergic reactions [ ].
It has been demonstrated that dental plaque can also be identified by the light-induced fluorescence (LIF) technique [, ]. This technique is based on the red fluorescence property of porphyrins, metabolites of heterogeneous bacteria within dental plaque, when irradiated with narrow blue-violet light (centered at 405 nm wavelength) [ , ].
Since it was first reported in the 1920s that dental plaque emits red fluorescence under ultraviolet irradiation [, ], various studies using LIF characteristics have been published, covering fluorescence changes according to oral bacterial species, fluorescence imaging systems, and clinical diagnosis. Studies of fluorescence changes by oral bacterial species reported that red fluorescence was detected in Prevotella intermedia, P. melaninogenica, Actinomyces naeslundii, A. israelii, and Bifidobacterium dentium; green fluorescence was observed in Streptococcus oralis, S. salivarius, S. mutans, S. mitis, S. sobrinus, Fusobacterium nucleatum, and Propionibacterium acnes; and orange fluorescence was found in Lactobacillus fermentum, L. rhamnosus, L. casei, and Candida albicans [ , - ]. Commercial products reported in the scientific literature as LIF imaging systems for dental plaque detection include ACTEON SOPROLIFE (Henry Schein, Inc) [ ] and QLF-D Biluminator (Inspektor Research Systems BV) [ ]. These products have been used to evaluate plaque levels by comparing clinical and red fluorescent plaques [ - ], and some studies have shown that tooth defects such as caries, calculus, hypomineralization, and discoloration can be observed with red fluorescence [ , , ]. However, these products are not only classified as medical devices but are also too large and expensive to be suitable for motivating plaque management by ordinary users at home.
Therefore, we judged that an inexpensive, compact, and intuitively usable system would be suitable for motivating ordinary users to manage plaque, and we developed an oral hygiene monitoring system consisting of a LIF device and a smartphone-based mobile app. The LIF device induces the fluorescence of plaque and the app visualizes the plaque area, but strong motivation requires an image processing method that emphasizes the plaque area. Traditional image processing is sufficient to emphasize the plaque area in a controlled environment, but ordinary users need an image processing method suited to oral images taken with various cameras, such as Android phones and iPhones, in various environments, such as a bedroom, a bathroom, or a living room. Recently, deep learning algorithms have been applied to detection, segmentation, classification, and prediction in various medical fields, including plaque classification, with impressive results [, ]. Therefore, we determined that a deep learning algorithm would be the best way to overcome the above-mentioned disadvantages.
In this paper, we introduce a deep learning–based oral hygiene monitoring system that makes it easy to identify dental plaques at home. The system consists of a device that can visually identify dental plaques and a mobile app that displays the location and area of dental plaques on oral images. The mobile app was developed based on two deep learning models that sequentially detect tooth areas and highlight plaque areas.
In this study, we developed a hand-held LIF device that allows the user to easily monitor dental plaque with the naked eye in a mirror and a hybrid mobile app that provides oral hygiene information. The accompanying figure shows the LIF device and the hybrid mobile app.
Development of Light-Induced Fluorescence Hardware Interface
As shown in the accompanying figure, the LIF device consists of 5 parts: body, two light-emitting diodes (LEDs), filter, hood, and battery cover. The body has a cylindrical handle with a 28 mm diameter for easy hand grip, a button manufactured through silicone insert injection molding for better assembly, and a wide viewer suitable for viewing oral hygiene. Each LED (GTPDTV64101, Shenzhen Getian Opto-Electronics Co Ltd) emits a 1 W narrow-band spectrum in the 400 to 410 nm wavelength range with a viewing angle of 120 degrees. The filter is manufactured through a color-mixing technique in injection molding so that it has a color correction function, rendering clean teeth white and plaque red. The hood is matte black to block ambient light and minimize light reflection and is interchangeable for each user. The battery cover is designed for high friction through a serration technique so that it does not easily separate from the body. The device is powered by three AAA batteries and can last for approximately two and a half hours.
Development of Hybrid Mobile App
A hybrid mobile app is one that combines the functionalities of both a native app, which allows access to the smartphone camera, and a web application, which can easily be updated without installation by modifying only the code on the server. In this study, we built a hybrid mobile app using Java for Android and Swift for iOS; as shown in the accompanying figure, its architecture is divided into a mobile app part and a cloud server part. The mobile app consists of a camera application programming interface that captures LIF oral images via the device and a WebViewer that displays oral hygiene over time on the smartphone screen. The cloud server is built on Amazon Web Services, which has abundant computing resources such as CPUs and GPUs. It contains an Apache HTTP server, which processes requests and serves web assets and content over the HTTP protocol; a MariaDB database, which organizes data into structured information; an analysis server, which analyzes oral hygiene through deep learning algorithms; and web storage, which stores original images and analysis results.
Oral Image Data Flow on Cloud Server
An oral image taken with the user’s smartphone camera is uploaded to the Apache HTTP server using the HTTP protocol. The HTTP servers use load balancing to distribute images across the deep learning–based analysis servers, reducing the load of analyzing large numbers of oral images from multiple users. After analysis is completed, the result image is converted into a thumbnail to reduce the amount of transmission and reception and then sent to the user’s smartphone screen, where the oral hygiene information can be checked. Information generated during the oral hygiene analysis, such as account, date, plaque location, and cleanliness, is stored and updated in the MariaDB database as time series data.
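As a rough sketch of how analysis results could be stored and retrieved as time series data, the following Python snippet uses the standard library's sqlite3 as a stand-in for the MariaDB instance described above; the table and column names are illustrative assumptions, not the actual schema.

```python
import sqlite3
from datetime import datetime, timezone

def store_analysis_result(conn, account, plaque_area_px, cleanliness):
    """Append one analysis result for an account (hypothetical schema)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS oral_hygiene (
               account TEXT,
               taken_at TEXT,
               plaque_area_px INTEGER,
               cleanliness REAL
           )"""
    )
    conn.execute(
        "INSERT INTO oral_hygiene VALUES (?, ?, ?, ?)",
        (account, datetime.now(timezone.utc).isoformat(),
         plaque_area_px, cleanliness),
    )
    conn.commit()

def cleanliness_history(conn, account):
    """Cleanliness values in chronological order, for the app's time series view."""
    rows = conn.execute(
        "SELECT taken_at, cleanliness FROM oral_hygiene "
        "WHERE account = ? ORDER BY taken_at",
        (account,),
    )
    return [r[1] for r in rows]
```

In production the same pattern would run against MariaDB via a client library, with the thumbnail path stored alongside each row.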
Deep Learning–Based Algorithm for Oral Hygiene Analysis
The oral hygiene analysis is performed through deep learning–based image processing algorithms including object detection, which determines whether the input image is an oral image and localizes the oral region, and instance segmentation, which extracts the dental plaque regions. In this analysis, datasets categorized into oral and nonoral images are used. The 2000 oral images were taken only via the device and stored on the server, and the 2000 nonoral images are Pascal visual object classes images without teeth or gums. A flowchart of the oral hygiene analysis is shown in the accompanying figure.
For the object detection, ground-truth bounding box annotations are first performed, outlining the region of interest (RoI) involving only the teeth and gums for each image. Then the single shot multibox detector (SSD), one of the most popular deep learning models for object detection, is applied to detect the RoI within the images (predicted bounding box). The SSD model is trained after dividing the datasets into a 2000-image training dataset (1000 oral and 1000 nonoral images) and a 2000-image test dataset (1000 oral and 1000 nonoral images). To improve learning performance, the training dataset is augmented with random sample crops, photometric distortion (random transformations in the HSV color domain), rotation, and mirroring. The resolutions of the images in the dataset are then all converted to 300×300 pixels, and their RGB values are normalized between 0 and 1.
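The resizing, normalization, and mirroring steps described above could be sketched as follows; this is an illustrative nearest-neighbor implementation in NumPy, not the code used in the study.

```python
import numpy as np

def preprocess(image, size=300):
    """Resize an HxWx3 uint8 image to size x size by nearest-neighbor
    sampling and scale RGB values to [0, 1], as done before SSD training."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def mirror(image, box):
    """Horizontal-flip augmentation: flip the image and remap its
    [xmin, ymin, xmax, ymax] ground-truth box accordingly."""
    w = image.shape[1]
    xmin, ymin, xmax, ymax = box
    return image[:, ::-1], [w - xmax, ymin, w - xmin, ymax]
```

Note that mirroring must transform the bounding box together with the pixels, otherwise the augmented annotation no longer matches the image.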
In order to finely extract the red fluorescence–emitted dental plaque region from the oral image, an instance segmentation technique capable of classification and detection of multiple instances in one class is required.
In this study, the Mask region-based convolutional neural network (R-CNN), which is a Faster R-CNN [ ] with the addition of a small fully convolutional network that performs object detection and mask segmentation for each RoI, is used as the instance segmentation technique. The RoI images extracted through the SSD model are used as the input data for Mask R-CNN model training.
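The handoff from SSD to Mask R-CNN amounts to cropping each predicted RoI out of the oral image before segmentation. A minimal sketch, assuming boxes in [xmin, ymin, xmax, ymax] pixel coordinates:

```python
import numpy as np

def crop_roi(image, box):
    """Crop the RoI predicted by the detector so it can be fed to the
    segmentation model. Coordinates are clamped to the image bounds
    in case the predicted box overshoots the frame."""
    xmin, ymin, xmax, ymax = (int(round(v)) for v in box)
    h, w = image.shape[:2]
    xmin, ymin = max(0, xmin), max(0, ymin)
    xmax, ymax = min(w, xmax), min(h, ymax)
    return image[ymin:ymax, xmin:xmax]
```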
Pixel-level annotation for Mask R-CNN training was performed on plaque areas emitting red fluorescence in tooth images according to three criteria:
- Connected components labeling, which groups pixels into components based on pixel connectivity, is performed on plaque areas
- Plaque areas that span several teeth are divided at tooth boundary
- Plaque thinly distributed between teeth is classified as one instance without following the second criterion
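The first criterion, connected components labeling, could be sketched as follows. This is a plain 4-connectivity breadth-first labeling for illustration; the actual annotation tooling used in the study is not specified in the text.

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components in a binary mask (list of lists of 0/1).
    Returns (label grid, number of components); each plaque blob gets a
    distinct id >= 1, so it can be annotated as a separate instance."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                next_label += 1               # start a new component
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:                  # flood-fill its neighbors
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```

The second and third criteria then decide whether a labeled blob spanning several teeth is split at tooth boundaries or kept as one instance.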
The images are technically prepared in the Common Objects in Context (COCO) data format  and augmented with rotation and aspect ratio conversion.
The Mask R-CNN model is initialized with parameters pretrained on COCO data, and training is performed with stochastic gradient descent on the loss function.
This research protocol was approved by the institutional review board (IRB #ERI19046), Seoul National University Dental Hospital. To protect users’ privacy, no personally identifiable information such as name, age, or gender is included in the images, and, except for the mouth and the device, the visible parts of the images were mosaicked to make them indistinguishable.
The LIF device consists of a body for easy grip, two LEDs emitting 400 to 410 nm light, a filter for color correction, and a hood for blocking ambient light, as shown in the accompanying figure. The electrical features of the device are a power consumption of about 2.1 W, a current of about 468 mA at 4.5 V, and an idle current of about 4.5 mA. When three AAA batteries with a capacity of 1500 mAh are used in series, the continuous use time is about 150 minutes, so an individual can use the device for about 75 days at 2 minutes of use per day.
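These figures are mutually consistent, as a quick check shows; the usable-capacity fraction below is our inference from the stated numbers, not a reported measurement.

```python
# Consistency check of the stated electrical figures.
voltage_v = 4.5        # three AAA cells in series (3 x 1.5 V)
current_ma = 468       # stated operating current
capacity_mah = 1500    # stated AAA capacity

power_w = voltage_v * current_ma / 1000             # ~2.1 W, matching the text
ideal_runtime_min = capacity_mah / current_ma * 60  # ~192 min at nominal capacity

# The reported 150 min of continuous use implies that roughly 78% of the
# nominal capacity is usable at this load, which is plausible for AAA cells
# under a ~0.5 A draw.
usable_fraction = 150 / ideal_runtime_min
days_at_2_min_per_day = 150 / 2                     # 75 days, matching the text
```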
The hybrid mobile app has a scrollable WebView, allowing users to easily observe changes in dental hygiene status as time series data, as shown in panel (d) of the accompanying figure. Oral hygiene analysis is performed by taking an oral image through the app and sending it to the cloud server. After analysis, the oral hygiene results are sent back to the user’s app. It takes 3.00 (SD 0.020) seconds for the deep learning libraries and graphics processing unit resources to load, 1.69 (SD 0.019) seconds for the oral region detection process (SSD), and 4.38 (SD 0.024) seconds for the dental plaque region extraction process (Mask R-CNN). Representative results obtained during object detection and instance segmentation are shown in the accompanying figure.
Experimental Analysis of Plaque Detection Algorithm
Oral hygiene analysis was performed through object detection and instance segmentation, and each performance was verified using intersection over union (IoU) and average precision (AP). IoU is an evaluation metric used to verify the performance of object detection. Its value is calculated by dividing the area of overlap by the area of union of the ground-truth bounding box and the predicted bounding box. AP, also a metric for evaluating the accuracy of an object detector, is the average of precision values corresponding to recall values between 0 and 1. Here, recall indicates how well all positives are predicted, and precision indicates the accuracy of the predicted results. The object detection performance of SSD is expressed as AP at a given IoU threshold. The AP values at IoU thresholds of 0.50, 0.60, 0.70, 0.80, and 0.90 are 90.9, 81.8, 63.6, 42.0, and 9.1, respectively. The average over IoU thresholds from 0.50 to 0.95 with a step size of 0.05 is 53.31. The performance of Mask R-CNN is expressed as IoU over the whole segmentation result instead of AP because the dental plaque regions are atypical; the resulting value is 0.31.
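The IoU metric described above can be computed directly from two boxes in [xmin, ymin, xmax, ymax] format:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes
    given as [xmin, ymin, xmax, ymax]."""
    # Overlap extents, clamped to zero when the boxes are disjoint.
    ix = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, two 10×10 boxes offset by 5 pixels in each direction overlap in a 5×5 area, giving IoU = 25/175 ≈ 0.14, which would fail the 0.50 threshold used as the first AP operating point above.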
In this paper, we presented a deep learning–based oral hygiene monitoring system consisting of a LIF device and a hybrid mobile app to facilitate oral hygiene at home using a smartphone. The most prominent feature of the LIF device is the filter. Filters of similar devices are manufactured by depositing a thin dielectric layer onto a glass substrate, while the filter of our device is simply manufactured by mixing colors with poly(methyl methacrylate). Thus, the filter of our device is relatively inexpensive to manufacture while maintaining the filter's main role of making the red fluorescence of the plaque stand out against the surrounding tooth color. However, the performance of the filter is affected by the intensity of ambient light, so a hood is required to minimize degradation by ambient light. Although our device has a black hood, overexposed images have been observed because the hood cannot completely block ambient light outdoors or under strong lighting. These results are caused by light leakage due to differences in individual oral structures, and we expect these problems to be sufficiently solved when the hood is made of flexible materials or customized. Another source of ambient light is the large window for visual observation and camera shooting. This window fundamentally cannot block ambient light, but we believe that providing a light-blocking agent will minimize its influence.
The key feature of our hybrid mobile app is that it is programmed with a deep learning algorithm. Since most conventional LIF products are developed for medical purposes, nonmedical users require help or training from a medical practitioner. In contrast, our mobile app automatically determines the location and distribution of dental plaques without clinical examination or training. In addition, since the results are stored and displayed as time series data, it is convenient for the user to manage oral hygiene. The deep learning models for oral detection and dental plaque segmentation require substantial computing resources, but by performing analysis on the server, computing resources can be managed effectively and results can be generated quickly.
As noted above, our segmentation algorithm performs somewhat poorly. To determine the cause, 2000 images were randomly sampled from the images stored on the server and statistically analyzed. Images were categorized as normal, out-focusing, far away, overexposed, too dark, foggy, yawning, and no device; multiple categories could be assigned to one image. The results are shown in the table below, and representative images for each category are shown in the accompanying figure.
| Category | Value, n (%) |
| --- | --- |
| Far away | 501 (25.1) |
| Too dark | 297 (14.9) |
Of the images from the server, only 31.35% (627/2000) were classified as normal, meaning that most images were affected by one or more issues such as out-focusing, far away, overexposed, too dark, foggy, or yawning, which is thought to directly affect the performance of the deep learning–based analysis. In addition, duplicate tooth areas were detected when the teeth were reflected on the smartphone case, and the plaque area was not properly segmented when the captured image had a red tone. In the future, if we provide users with guidelines on an appropriate environment for acquiring images and improve the algorithms using the continuously increasing number of images as the product's user base grows, we expect plaque identification to improve.
Because of the different image sensor characteristics of smartphones, a consistent preprocessing method could not be applied. Another limitation was that the training dataset was smaller than the collected data because of the varied optical environments.
The primary cause of dental caries and periodontal disease is the failure to remove dental plaque in a timely manner. Providing preventive care solutions to quickly identify and respond to dental plaques at home can significantly reduce the social costs associated with oral disease. The LIF system introduced in this paper consists of a LIF device for visually identifying dental plaques and a mobile app for providing deep learning–based oral hygiene analysis results. The device allows the user to visually check oral hygiene in a mirror, and the app motivates the user to perform oral hygiene management by providing the analysis results as a time series. In this paper, we introduced a home oral care system; in the future, we will introduce LIF-based medical devices for marginalized populations, including the elderly, people of lower socioeconomic standing, and those living where the internet is unavailable, by applying edge computing techniques and developing low-cost devices in the form of smartphone accessories.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1I1A1A01057598 and NRF-2020R1F1A1070042). LIF device (Dentinote) and mobile app (Dentinote app) were developed with Prizzmable Co Ltd, and this paper is based on the accumulated results from the development process.
Conflicts of Interest
Representative images categorized into each category: (a) normal, (b) out-focusing, (c) far away, (d) overexposed, (e) foggy, and (f) yawning. PNG File, 737 KB
- Becker W, Becker BE, Berg LE. Periodontal treatment without maintenance: a retrospective study in 44 patients. J Periodontol 1984 Sep;55(9):505-509. [CrossRef] [Medline]
- Lang NP, Ostergaard E, Löe H. A fluorescent plaque disclosing agent. J Periodontal Res 1972;7(1):59-67. [CrossRef] [Medline]
- Agarwal V, Zhang L, Zhu J, Fang S, Cheng T, Hong C, et al. Impact of predicting health care utilization via web search behavior: a data-driven analysis. J Med Internet Res 2016 Sep 21;18(9):e251 [FREE Full text] [CrossRef] [Medline]
- Pretty IA, Edgar WM, Smith PW, Higham SM. Quantification of dental plaque in the research environment. J Dent 2005 Mar;33(3):193-207. [CrossRef] [Medline]
- Datta DD. Disclosing solutions used in dentistry. World J Pharmaceut Res 2017 Jun 1:1648-1656. [CrossRef]
- Heinrich-Weltzien R, Kühnisch J, van der Veen M, de Josselin de Jong E, Stösser L. Quantitative light-induced fluorescence (QLF): a potential method for the dental practitioner. Quintessence Int 2003 Mar;34(3):181-188. [Medline]
- Thomas RZ, van der Mei HC, van der Veen MH, de Soet JJ, Huysmans MC. Bacterial composition and red fluorescence of plaque in relation to primary and secondary caries next to composite: an in situ study. Oral Microbiol Immunol 2008 Feb;23(1):7-13. [CrossRef] [Medline]
- Coulthwaite L, Pretty IA, Smith PW, Higham SM, Verran J. The microbiological origin of fluorescence observed in plaque on dentures during QLF analysis. Caries Res 2006;40(2):112-116. [CrossRef] [Medline]
- Kim Y, Lee E, Kwon H, Kim B. Monitoring the maturation process of a dental microcosm biofilm using the Quantitative Light-induced Fluorescence-Digital (QLF-D). J Dent 2014 Jun;42(6):691-696. [CrossRef] [Medline]
- Benedict HC. A note on the fluorescence of teeth in ultra-violet rays. Science 1928 Apr 27;67(1739):442. [CrossRef] [Medline]
- Van Denbergh A. On porphyrin in the mouth. Lancet 1928 Feb;211(5450):281-282. [CrossRef]
- Lennon AM, Buchalla W, Brune L, Zimmermann O, Gross U, Attin T. The ability of selected oral microorganisms to emit red fluorescence. Caries Res 2006;40(1):2-5. [CrossRef] [Medline]
- van der Veen MH, Thomas RZ, Huysmans MC, de Soet JJ. Red autofluorescence of dental plaque bacteria. Caries Res 2006;40(6):542-545. [CrossRef] [Medline]
- Volgenant CMC, van der Veen MH, de Soet JJ, ten Cate JM. Effect of metalloporphyrins on red autofluorescence from oral bacteria. Eur J Oral Sci 2013 Jun;121(3 Pt 1):156-161. [CrossRef] [Medline]
- Terrer E, Koubi S, Dionne A, Weisrock G, Sarraquigne C, Mazuir A, et al. A new concept in restorative dentistry: light-induced fluorescence evaluator for diagnosis and treatment. Part 1: Diagnosis and treatment of initial occlusal caries. J Contemp Dent Pract 2009 Nov 01;10(6):E086-E094. [Medline]
- van der Veen MH, Volgenant CMC, Keijser B, Ten Cate JBM, Crielaard W. Dynamics of red fluorescent dental plaque during experimental gingivitis: a cohort study. J Dent 2016 May;48:71-76 [FREE Full text] [CrossRef] [Medline]
- Rechmann P, Liou SW, Rechmann BMT, Featherstone JDB. Performance of a light fluorescence device for the detection of microbial plaque and gingival inflammation. Clin Oral Investig 2016 Jan;20(1):151-159. [CrossRef] [Medline]
- Volgenant CMC, Fernandez Y Mostajo M, Rosema NAM, van der Weijden FA, Ten Cate JM, van der Veen MH. Comparison of red autofluorescing plaque and disclosed plaque: a cross-sectional study. Clin Oral Investig 2016 Dec;20(9):2551-2558 [FREE Full text] [CrossRef] [Medline]
- Chen QG, Zhu HH, Xu Y, Lin B, Chen H. Quantitative method to assess caries via fluorescence imaging from the perspective of autofluorescence spectral analysis. Laser Phys 2015 Jul 08;25(8):085601. [CrossRef]
- Karlsson L. Caries detection methods based on changes in optical properties between healthy and carious tissue. Int J Dent 2010;2010:270729 [FREE Full text] [CrossRef] [Medline]
- Bakator M, Radosav D. Deep learning and medical diagnosis: a review of literature. Multimodal Technol Interact 2018 Aug 17;2(3):47. [CrossRef]
- Imangaliyev S, van der Veen MH, Volgenant C, Loos B, Keijser B, Crielaard W. Classification of quantitative light-induced fluorescence images using convolutional neural network. arXiv preprint arXiv. 2017. URL: https://arxiv.org/abs/1705.09193 [accessed 2020-09-24]
- Everingham M, Van Gool L, Williams CKI, Winn J, Zisserman A. The Pascal visual object classes (VOC) challenge. Int J Comput Vis 2009 Sep 9;88(2):303-338. [CrossRef]
- Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY. SSD: single shot multibox detector. Comput Vision 2016:21-37. [CrossRef]
- He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. IEEE Trans Pattern Anal Mach Intell 2020 Feb;42(2):386-397. [CrossRef] [Medline]
- Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2017 Jun;39(6):1137-1149. [CrossRef] [Medline]
- Lin T, Maire M, Belongie S, Hays J, Perona P, Ramanan D. Microsoft COCO: common objects in context. Comput Vision 2014:740-755. [CrossRef]
- Pretty IA, Edgar WM, Higham SM. The effect of ambient light on QLF analyses. J Oral Rehabil 2002 Apr;29(4):369-373. [CrossRef] [Medline]
- Zappone A, Di Renzo M, Debbah M. Wireless networks design in the era of deep learning: model-based, AI-based, or both? IEEE Trans Commun 2019 Oct;67(10):7331-7376. [CrossRef]
|AP: average precision|
|COCO: Common Objects in Context|
|IoU: intersection over union|
|LED: light-emitting diode|
|LIF: light-induced fluorescence|
|R-CNN: region-based convolutional neural network|
|RoI: region of interest|
|SSD: single shot multibox detector|
Edited by G Eysenbach; submitted 18.01.20; peer-reviewed by E Frontoni, L Zhang, G Lim; comments to author 20.04.20; revised version received 05.06.20; accepted 01.09.20; published 16.10.20
Copyright
©Jun-Min Kim, Woo Ram Lee, Jun-Ho Kim, Jong-Mo Seo, Changkyun Im. Originally published in JMIR mHealth and uHealth (http://mhealth.jmir.org), 16.10.2020.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.