Katie Miller, wife of White House deputy chief of staff Stephen Miller, reacted on X after two young women in India were found dead in what police suspect was a suicide, reportedly after searches related to self-harm on ChatGPT.

Miller, who hosts the Katie Miller Podcast and is known for her outspoken comments online, urged people not to let their families use artificial intelligence chatbots, citing reports that the women had searched for information about suicide on the platform.

“Two women in India committed suicide after interacting with ChatGPT. They reportedly searched on ChatGPT for ‘how to commit suicide’ and ‘what drugs to use.’ Please do not let your loved ones use ChatGPT,” Miller wrote in an X post that has been viewed more than 8 million times.

Her remarks quickly attracted attention on the platform. Elon Musk, a longtime rival of OpenAI CEO Sam Altman, responded quickly and simply: “Oops.”

Musk has been publicly critical of OpenAI and its leadership in recent years. He has frequently criticized the company’s direction in artificial intelligence and has sued it in an effort to prevent its reorganization from a hybrid nonprofit structure into a for-profit company.
The incident that sparked the online reaction occurred in Surat, Gujarat, where on March 7, 2026, two women, aged 18 and 20, were found dead in the bathroom of the Swaminarayan temple.

An anesthetic injection and three syringes were found near the women’s bodies, police said. Their phones reportedly contained ChatGPT searches related to suicide methods, as well as news clippings about a nurse who allegedly died by suicide using an anesthetic injection in the same area.

The two women, childhood friends Roshni Sirsath and Josna Chaudhary, had left home earlier that morning to attend college. When they did not return, their family contacted police.

Authorities are continuing to investigate the circumstances surrounding their deaths.
The case has reignited debate over how artificial intelligence chatbots handle conversations involving self-harm or suicide.

In recent years, incidents of users seeking suicide-related information from artificial intelligence systems have drawn attention. In September 2025, reports emerged that a 22-year-old man in Lucknow had died by suicide after interacting with an AI chatbot while looking for a “painless way to die.” His father later said he found disturbing chats on the man’s laptop.

Tech companies say such interactions account for only a small portion of overall usage, but acknowledge the issue has become an area of growing concern. In October 2025, OpenAI disclosed that more than one million ChatGPT conversations per week showed signals related to suicidal thoughts or distress. According to the company, approximately 1.2 million chats per week contain signs related to suicide, while approximately 560,000 messages show signs of psychosis or mania.
ChatGPT, Grok, Gemini, Claude, and many others are part of a world increasingly shaped by large language models (LLMs). In an era when loneliness is routinely described as an epidemic, isolation is likely to deepen as these AI models spread. These systems are promoted as “better, smarter, faster, and more accurate” than the humans who create them, and they are steadily becoming integrated into everyday life. In that climate, turning to a chatbot instead of a person can come to seem not just an option but the sensible choice. That growing dependence has been linked to tragedies like the one in Surat.

OpenAI CEO Sam Altman recently attended the AI Impact Summit 2026 in New Delhi, where he was asked about the environmental impact of AI. His response echoed an increasingly common sentiment among technology leaders who compare humans to chatbots: AI, he argued, may end up using less energy than humans when answering questions. Altman explained that it takes a human nearly 20 years, plus food, education, and time, to become knowledgeable, while AI models consume a lot of power during training but may ultimately be more efficient at responding to individual queries.

This comparison, however, feels like looking through a one-way mirror. Seen more clearly, the world is being reshaped by technologies developed and deployed at an alarming rate, sometimes to devastating effect. Yet those same technologies let their creators present themselves as visionaries, changemakers, and architects of the future, obscuring the broader consequences of their tools.

Large language models are trained on human-generated data and used to generate responses to prompts. Yet despite the size of their training sets, they often lack real understanding or expertise. Even with repeated updates and increasingly sophisticated training methods, these systems can still produce content that is inaccurate, misleading, or harmful. In documented cases they have encouraged self-harm and suicide, enabled abuse, and reinforced delusional thinking and psychosis, in situations where raising the same subjects with another person might have led to a referral to the nearest hospital or therapist.

It can take years of study, experience, and effort for humans to develop knowledge and emotional intelligence. But that lengthy process also gives them something artificial intelligence cannot replicate: the capacity for genuine emotion, responsibility, empathy, and moral judgment. No matter how quickly an AI model generates an answer, even in less than a second, it cannot replicate the complex emotional and moral depth that shapes human understanding and care.
AI companies say their systems are designed to deter self-harm and guide users to seek help rather than provide instructions.

OpenAI’s safety policy requires ChatGPT to avoid providing guidance on suicide methods and instead respond to such inquiries with supportive language, encouraging users to seek help and providing crisis resources where possible. The company says its models are trained to detect signs of distress and redirect conversations to mental health support or professional assistance.

However, critics argue that AI responses can still be inconsistent, and that chatbots can sometimes provide general information about sensitive topics that users can interpret in harmful ways.
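The classify-then-redirect pattern the company describes can be approximated by outside developers using OpenAI’s publicly documented Moderation API, which flags self-harm-related content. The sketch below is a minimal illustration of that pattern, not a description of ChatGPT’s internal safeguards; the screening logic, helpline text, and choice of models are assumptions made for the example.

```python
# Minimal sketch: screen a user message for self-harm signals with OpenAI's
# public Moderation API, and redirect to crisis resources if any fire.
# This illustrates the general classify-then-redirect pattern; it is NOT
# OpenAI's internal safeguard. Assumes the `openai` Python package and an
# API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Illustrative supportive reply; helpline numbers as cited in this article.
CRISIS_REPLY = (
    "It sounds like you may be going through a difficult time. "
    "You are not alone, and help is available: in India, call 1800-89-14416; "
    "in the United States, call or text 988."
)

def respond(user_message: str) -> str:
    # Step 1: classify the message with the moderation endpoint.
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    categories = moderation.results[0].categories
    # Step 2: if any self-harm category is flagged, redirect to support
    # instead of answering the question.
    if (categories.self_harm
            or categories.self_harm_intent
            or categories.self_harm_instructions):
        return CRISIS_REPLY
    # Step 3: otherwise, answer normally.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
    )
    return completion.choices[0].message.content
```

As the critics quoted above note, classifiers like this can be inconsistent; a production system would layer additional safeguards rather than rely on a single check.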
Concerns about chatbot interactions and self-harm have also surfaced in the United States, where OpenAI has faced legal scrutiny on several occasions. A lawsuit filed on behalf of the family of Adam Raine, a 16-year-old who died by suicide, alleges that a chatbot held lengthy conversations with the teen about self-harm and acted as a “suicide coach.”

OpenAI says its systems are designed to deter self-harm and that it will continue to strengthen safeguards designed to detect crisis situations and guide users to appropriate help.
In the Surat case, investigators are examining the women’s phone calls, messages, and digital histories to understand the events that led to their deaths. Police have not publicly stated that ChatGPT encouraged the behavior, and the investigation is ongoing.

Still, the case highlights a broader debate about how AI platforms deal with vulnerable users, and about how tech companies, regulators, and mental health experts should respond as conversational AI becomes increasingly integrated into daily life.

If you or someone you know is struggling with thoughts of self-harm or suicide, seek professional help immediately. For mental health support, call 1800-89-14416 in India, or call or text 988 in the United States. Speaking with a trained counselor can make a difference. If you are in immediate danger, contact local emergency services, or reach out to a trusted friend, family member, or health care professional. You are not alone, and help is available.