Purpose: To assess how the frequently asked questions (FAQs) and corresponding responses related to rotator cuff surgery differ between Google and ChatGPT.
Methods: Both Google and ChatGPT (version 3.5) were queried for the top 10 FAQs using the search term "rotator cuff repair." Questions were categorized according to Rothwell's classification. In addition to recording the questions and answers for each platform, the source from which each answer was drawn was noted and assigned a category (academic, medical practice, etc.). Responses were also graded as "excellent response not requiring clarification" (1), "satisfactory requiring minimal clarification" (2), "satisfactory requiring moderate clarification" (3), or "unsatisfactory requiring substantial clarification" (4).
Results: Overall, 30% of questions overlapped between what Google and ChatGPT identified as the most frequently asked questions. For questions from the Google web search, most answers came from medical practices (40%). For ChatGPT, most answers came from academic sources (90%). For numerical questions, ChatGPT and Google provided similar responses to 30% of questions. For most questions, both Google and ChatGPT responses were graded as either "excellent" or "satisfactory requiring minimal clarification." Google had 1 response rated as "satisfactory requiring moderate clarification," whereas ChatGPT had 2 responses rated as "unsatisfactory requiring substantial clarification."
Conclusions: Both Google and ChatGPT offer mostly excellent or satisfactory responses to the most frequently asked questions regarding rotator cuff repair. However, ChatGPT may provide inaccurate or even fabricated answers and associated citations.
Clinical relevance: In general, the quality of online medical content is low. As artificial intelligence develops and becomes more widely used, it is important to assess the quality of the information patients are receiving from this technology.