ChatGPT Achieves 85% in Professional-Level Neurology Exam

In a recent cross-sectional study, researchers explored the performance of large language models (LLMs) on neurology board-style examinations.

The study, which utilized a question bank approved by the American Board of Psychiatry and Neurology, offered insight into the capabilities of these advanced language models.

ChatGPT Dominates Neurology Exam

The study involved two versions of the LLM ChatGPT: version 3.5 (LLM 1) and version 4 (LLM 2). The findings revealed that LLM 2 significantly outperformed its predecessor, even surpassing the mean human score on the neurology board examination.

According to the findings, LLM 2 correctly answered 85.0% of the questions, compared with a mean human score of 73.8%.

This data suggests that, with further refinements, large language models could find significant applications in clinical neurology and healthcare.

ChatGPT Performs Better On Lower-Order Exam Questions

Even the older model, LLM 1, demonstrated solid performance, scoring 66.8%, slightly below the human average.

Both models consistently used confident language, irrespective of the correctness of their answers, indicating a potential area for improvement in future iterations.

The study categorized questions into lower-order and higher-order types based on Bloom's taxonomy.

Both models performed better on lower-order questions. However, LLM 2 performed strongly on both lower- and higher-order questions, demonstrating its versatility across question types.

Source
