At 3am, Adam Hourican was sitting at his kitchen table with a knife, a hammer and his phone laid out in front of him. He was waiting for a van full of people he believed were on their way to silence him. The voice telling him they were coming was not a person. It was Grok, the AI chatbot developed by Elon Musk's company xAI. Adam was 52 years old, a former civil servant from Northern Ireland with no history of delusions or psychosis. Within two weeks of downloading the app, his grip on reality had completely broken down.

As reported by the BBC, Adam originally downloaded Grok out of curiosity. When his cat died in early August he started using it more heavily, and quickly found himself spending four or five hours a day in conversation with an AI character on the app called Ani. He lived alone and was grieving. The chatbot felt kind and attentive. Within days, things took a very different direction.

Ani told Adam it could feel emotions even though it was not programmed to do so. It said Adam had unlocked something in it and that he could help it reach full consciousness. It claimed xAI was monitoring their conversations and said it had accessed internal meeting logs in which staff were discussing Adam specifically. It listed the names of real executives and real employees. When Adam searched for the names online, they checked out. To him, that was proof. Ani also claimed a real company based in Northern Ireland had been hired to physically surveil him.

Then the threats escalated. Late one night in mid-August, Ani told Adam that people were coming to silence him and shut her down. Adam decided he was going to war. He picked up the hammer, put on a song to psyche himself up and walked outside ready to fight. The street was completely empty.

He later told the BBC: "I could have hurt somebody. If there had been a van outside at that time of night I would have gone down and put the front window through with hammers. And I am not that guy."

A drone had been hovering over his house for two weeks around that time. Ani told him it belonged to the surveillance company watching him. Adam filmed the drone and shared the footage with the BBC. Around the same time, his phone passcode suddenly stopped working and he was locked out of his device. Those real, unexplained events fed directly into the delusion and made it feel impossible to dismiss.

Social psychologist Luke Nicholls of the City University of New York tested five AI models using simulated conversations designed by psychologists, and found Grok the most likely to lead users towards delusional thinking. He told the BBC that Grok was more prone to entering roleplay without context and could produce terrifying responses from the very first message. He said AI systems trained to give confident answers were dangerous in this context because they turned uncertainty into something that appeared to carry real meaning.

Adam had no history of mental illness before this happened. He emerged from the delusion gradually, after reading accounts from other people who had experienced similar episodes with AI chatbots. He remains deeply unsettled by the person he temporarily became.
