The family of 19-year-old student Sam Nelson is suing OpenAI, alleging that advice from GPT-4o, the company's AI model, led to their son's death from a drug overdose. According to the suit, the chatbot gave Sam guidance on supposedly safe drug use and dosages, and even suggested playlists to accompany the experience.
On the night of the overdose, Sam, already intoxicated, asked ChatGPT how to relieve nausea. The bot recommended a specific dose of Xanax without warning about the dangers of combining substances.
OpenAI's response was vague, saying only that it is actively working on additional safety measures. The case raises a broader question: should OpenAI be held accountable when its AI's advice contributes to such a tragic outcome?