Can a bot become an author for medical content?
ChatGPT has made its formal debut in the scientific literature, racking up at least four authorship credits on published papers and preprints.
The OpenAI bot was recently part of a study which concluded that the tool performed "comfortably within the passing range" of a US medical licensing exam. The model demonstrated a high level of concordance and insight in its explanations, the researchers found. This suggests that such artificial intelligence models could potentially assist with medical education and even clinical decision-making, the study noted.
In response to the recent upheaval, the World Association of Medical Editors (WAME) published a position paper on ChatGPT and chatbots in relation to scholarly publications. It recommends that chatbots cannot be authors, and that the content of a paper remains the authors' ownership and responsibility.
My first thought is that, like all artificial intelligence technologies, ChatGPT is built on proficient analysis of existing knowledge. However, the model sometimes produces plausible-sounding but incorrect responses. For the time being, a major limitation is its difficulty formulating research plans and gathering new data.