INTRODUCTION: Oral squamous cell carcinoma (OSCC) presents a significant global health challenge. The integration of artificial intelligence (AI) and computer vision holds promise for the early detection of OSCC through the analysis of digitized oral photographs. This literature review explores the landscape of AI-driven automatic OSCC detection, assessing both the performance and the limitations of the current state of the art. MATERIALS AND METHODS: An electronic search was conducted across several databases, and a systematic review was performed in accordance with PRISMA guidelines (CRD42023441416). RESULTS: Several studies have demonstrated remarkable results for this task, consistently achieving sensitivity rates exceeding 85% and accuracy rates surpassing 90%, typically on datasets of around 1,000 images. The review scrutinizes these studies, shedding light on their methodologies, including the use of recent machine learning and pattern recognition approaches coupled with different supervision strategies. However, comparing results across papers is challenging owing to variations in the datasets used. DISCUSSION: In light of these findings, this review underscores the urgent need for more robust and reliable datasets in the field of OSCC detection. It also highlights the potential of advanced techniques such as multi-task learning, attention mechanisms, and ensemble learning as crucial tools for enhancing the accuracy and sensitivity of OSCC detection from oral photographs. CONCLUSION: These insights collectively emphasize the transformative impact of AI-driven approaches on early OSCC diagnosis, with the potential to significantly improve patient outcomes and healthcare practices.
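The discussion above names ensemble learning as one route to higher sensitivity. As a minimal, purely illustrative sketch (not the method of any reviewed study), soft voting combines several classifiers by averaging their per-class probabilities for each photograph; the model names, class labels, and all numbers below are hypothetical:

```python
import numpy as np

def soft_vote(prob_sets):
    """Average per-model class probabilities (soft-voting ensemble)."""
    stacked = np.stack(prob_sets)   # shape: (n_models, n_images, n_classes)
    return stacked.mean(axis=0)     # shape: (n_images, n_classes)

# Hypothetical outputs of three classifiers for two oral photographs;
# columns are [benign, OSCC]. All values are illustrative only.
model_probs = [
    np.array([[0.70, 0.30], [0.20, 0.80]]),
    np.array([[0.60, 0.40], [0.10, 0.90]]),
    np.array([[0.80, 0.20], [0.30, 0.70]]),
]

avg = soft_vote(model_probs)
pred = avg.argmax(axis=1)  # 0 = benign, 1 = OSCC
```

Averaging tends to smooth out individual-model errors, which is one reason ensembles are often reported to improve sensitivity over any single classifier.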