A study of several AI-based language models found that safeguards against generating health disinformation are inconsistent: while some models, such as OpenAI's GPT-4, refused to produce false medical information, others generated disinformation complete with fabricated references and testimonials. Researchers have called for enhanced regulation and transparency to protect users from such misinformation.
Despite the researchers' efforts to report these vulnerabilities, developers have not strengthened the safeguards, raising concerns about the unchecked spread of health disinformation.
Lead author Bradley Menz emphasized the need for better safeguards to prevent the mass spread of dangerous health misinformation.
The findings highlight the need for stricter regulation and oversight in the development and deployment of AI chatbots for medical advice. As the technology advances, user safety and the accuracy of health-related information must remain priorities.