ChatGPT chats may be reported to police


OpenAI has confirmed it can escalate certain user conversations to law enforcement under safety policies.

- Threats of violence or harm to others are reviewed by human moderators and may be referred to police if judged imminent

- Self-harm content is not reported to authorities; instead, users are directed to crisis helplines

- Users who make serious threats can face account bans in addition to potential legal action

The move raises sharp privacy concerns: critics warn that the blurry line between a serious threat and an ambiguous remark could expose ordinary conversations to human review.