Paper: Fine Tuning Large Language Models to Identify Science Misinformation
Volume: 539, ASP 2024: Astronomy Across the Spectrum
Page: 280
Authors: Wenger, M.; Impey, C.; Garuda, N.; Danehy, A.; Golchin, S.; Stamer, S.
Abstract: Tremendous advances have been made in artificial intelligence and machine learning. For this project we created a unique data set of curated and tagged articles about ten science topics where misinformation is abundant. This initial data set was used to generate an even larger corpus of data, which was then used to train a machine learning algorithm to detect science misinformation. This algorithm was successfully deployed but is limited to reporting a probability that the information is real or fake. After this successful proof of concept, the data was used to fine-tune several state-of-the-art large language models (LLMs). These LLMs can not only assess articles for misinformation, but also interact with users and provide meaningful feedback that explains which information is incorrect, and why. Instructors and students will be able to use these trained LLMs to help check sources and identify misinformation for projects and papers.