Large Language Models Are Poor Medical Coders, Study Finds

Researchers at the Icahn School of Medicine at Mount Sinai in New York City found that large language models perform poorly at medical coding.

In a study published April 19 in NEJM AI, the researchers gathered more than 27,000 distinct diagnosis and procedure codes from a year of routine care at Mount Sinai Health System. They then used each code's description to prompt models from OpenAI, Google and Meta to generate the most accurate medical codes.
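A minimal sketch of the kind of prompting loop the study describes, assuming the OpenAI chat completions API; the `code_descriptions` list and the exact prompt wording are hypothetical illustrations, not the study's actual protocol.

```python
# Sketch: ask a model for the code matching a description, then score exact matches.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def predict_code(description: str, code_system: str = "ICD-10-CM") -> str:
    """Return the model's single best code guess for a code description."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for illustration
        messages=[
            {"role": "system",
             "content": f"You are a medical coder. Reply with only the {code_system} code."},
            {"role": "user",
             "content": f"Provide the most accurate {code_system} code for: {description}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Hypothetical (description, billed code) pairs; the study used >27,000 distinct codes.
code_descriptions = [("Essential (primary) hypertension", "I10")]

correct = sum(predict_code(desc) == code for desc, code in code_descriptions)
print(f"Exact-match accuracy: {correct / len(code_descriptions):.2%}")
```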

