
Evaluating the validity and consistency of artificial intelligence chatbots in responding to patients' frequently asked questions in prosthodontics.


The Journal of Prosthetic Dentistry, Vol. 134(1): 199-206, Jul 2025

STATEMENT OF PROBLEM: Healthcare-related information provided by artificial intelligence (AI) chatbots may pose challenges such as inaccuracies, lack of empathy, bias, over-reliance, limited scope, and ethical concerns.

PURPOSE: The purpose of this study was to evaluate and compare the validity and consistency of responses to prosthodontics-related frequently asked questions (FAQs) generated by 4 different chatbot systems.

MATERIAL AND METHODS: Four prosthodontic domains were evaluated: implant, fixed prosthodontics, complete denture (CD), and removable partial denture (RPD). Within each domain, 10 questions were prepared by full-time prosthodontic faculty members, and 10 questions were generated by GPT-3.5 to represent its top frequently asked questions in that domain. The validity and consistency of the responses provided by 4 chatbots (GPT-3.5, GPT-4, Gemini, and Bing) were evaluated. The chi-squared test with the Yates correction was used to compare the validity of responses among the chatbots (α=.05). The Cronbach alpha was calculated for 3 sets of responses collected in the morning, afternoon, and evening to evaluate the consistency of the responses.

RESULTS: At the low validity threshold, the chatbots' answers differed significantly for ChatGPT's implant-related FAQs, ChatGPT's RPD-related FAQs, and the prosthodontists' CD-related FAQs (P<.001, P<.001, and P=.004, respectively), with Bing scoring lowest. At the high validity threshold, the chatbots' answers differed significantly for ChatGPT's implant-related and RPD-related FAQs and for both ChatGPT's and the prosthodontists' fixed prosthodontics-related and CD-related FAQs (P<.001, P<.001, P=.004, P=.002, and P=.003, respectively), with Bing again scoring lowest. Overall, all 4 chatbots demonstrated lower validity at the high threshold than at the low threshold. Bing, Gemini, and GPT-4 displayed an acceptable level of consistency, while GPT-3.5 did not.

CONCLUSIONS: AI chatbots currently show limitations in answering patients' prosthodontics-related FAQs with high validity and consistency.
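The abstract names two statistics: a chi-squared test with the Yates continuity correction to compare response validity across chatbots, and the Cronbach alpha to gauge consistency across the morning, afternoon, and evening response sets. The paper's analysis code is not published, so the following Python sketch only illustrates how those two computations are typically run; the 2x2 contingency counts and the ratings matrix are made up for the example.

```python
# Illustrative sketch only: the counts and ratings below are hypothetical,
# not data from the study.
import numpy as np
from scipy.stats import chi2_contingency

# --- Chi-squared test with Yates continuity correction (alpha = .05) ---
# Hypothetical 2x2 table: rows = two chatbots being compared,
# columns = responses judged valid vs invalid at a given threshold.
table = np.array([[9, 1],    # chatbot A: 9 valid, 1 invalid (made-up counts)
                  [4, 6]])   # chatbot B: 4 valid, 6 invalid (made-up counts)
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")

# --- Cronbach alpha across the 3 response sets (morning/afternoon/evening) ---
def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: questions x time-points matrix of validity ratings."""
    k = scores.shape[1]                              # number of "items" (time points)
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of per-question totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical ratings for 10 questions, each scored at 3 times of day.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(10, 3)).astype(float)
print(f"Cronbach alpha = {cronbach_alpha(ratings):.3f}")
```

With 2 groups and 2 outcome categories (dof = 1), scipy applies the Yates correction automatically when correction=True. A Cronbach alpha of roughly .7 or higher is the conventional cutoff for "acceptable" consistency, which is presumably the kind of threshold behind the abstract's statement that GPT-3.5 fell short.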
