Objective
To assess the quality, reliability, readability, and similarity of the information that ChatGPT-4, a recently released NLP-based artificial intelligence model, provides to users on Cleft Lip and Palate (CLP).

Design
Responses generated by OpenAI's ChatGPT to 50 CLP-related questions were evaluated using several tools: the Ensuring Quality Information for Patients (EQIP) tool, a Reliability Scoring System (adapted from DISCERN), the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Reading Grade Level (FKRGL) formulas, the Global Quality Scale (GQS), and a Similarity Index obtained with a plagiarism-detection tool. Jamovi (The Jamovi Project, 2022, version 2.3; Sydney, Australia) was used for all statistical analyses.

Results
Based on the reliability and GQS values, ChatGPT provided highly reliable, good-quality information on CLP. According to the FRES results, however, ChatGPT's output is difficult to read, while the Similarity Index values fall within an acceptable range. No significant differences were found in EQIP, Reliability Scoring System, FRES, FKRGL, GQS, or Similarity Index values between the two categories.

Conclusion
OpenAI's ChatGPT provides highly reliable, high-quality information on CLP with an acceptable similarity rate, although the text is challenging to read. Ensuring that information obtained through these models is verified and assessed by a qualified medical expert is crucial.
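The FRES and FKRGL metrics used in the Design section are standard published readability formulas based on average sentence length and average syllables per word. As a minimal illustrative sketch (the counts in the example are hypothetical, not from the study), they can be computed as:

```python
def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease Score: higher values indicate easier text;
    # scores in the 30-50 band are conventionally rated "difficult".
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Reading Grade Level: approximate U.S. school grade
    # required to understand the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical sample: 100 words, 5 sentences, 180 syllables
fres = flesch_reading_ease(100, 5, 180)    # 34.255 -> "difficult" band
fkrgl = flesch_kincaid_grade(100, 5, 180)  # 13.45 -> college-level text
```

A FRES in the 30–50 range, as in this hypothetical sample, corresponds to the "difficult" readability band reported for ChatGPT's responses.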