New Delhi: An alarming trend of young adolescents turning to artificial intelligence (AI) chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals.

Experts warn that this digital “safe space” is creating a dangerous dependency, fueling validation-seeking behaviour, and deepening a crisis of communication within families.

They said that this digital solace is just a mirage, as the chatbots are designed to provide validation and engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience.


Sudha Acharya, the Principal of ITL Public School, highlighted that a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary.

“School is a social place – a place for social and emotional learning,” she told PTI. “Of late, there has been a trend amongst the young adolescents… They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain.”

Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes that this points towards a “serious lack of communication in reality, and it starts from family.”

She further stated that if parents do not share their own shortcomings and failures with their children, the children will never learn to accept failure or regulate their own emotions. “The problem is, these young adults have grown a mindset of constantly needing validation and approval.”

Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically.

She highlighted a particular concern — when a youngster shares their distress with ChatGPT, the immediate response is often “please, calm down. We will solve it together.”


“This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations,” she told PTI.

“Such issues wouldn’t arise if these young adolescents had real friends rather than ‘reel’ friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred ‘likes’, else they feel low and invalidated,” she said.

The school principal believes that the core of the issue lies with parents themselves, who are often “gadget-addicted” and fail to provide emotional time to their children. While they offer all materialistic comforts, emotional support and understanding are often absent.

“So, here we feel that ChatGPT is now bridging that gap but it is an AI bot after all. It has no emotions, nor can it help regulate anyone’s feelings,” she cautioned.

“It is just a machine and it tells you what you want to listen to, not what’s right for your well-being,” she said.

Mentioning cases of self-harm in students at her own school, Acharya stated that the situation has turned “very dangerous”.

“We track these students very closely and try our best to help them,” she stated. “In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming as the cases like these are rising.”

Ayeshi, a student in Class 11, confessed that she shared her personal issues with AI bots numerous times out of “fear of being judged” in real life.

“I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space. It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn’t mentoring me or giving me real guidance, that took some time,” the 16-year-old told PTI.

Ayeshi also admitted that turning to chatbots for personal issues is “quite common” within her friend circle.

Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. “I observed growing impatience and aggression,” he told PTI.

He had been using the chatbots for a year or two but stopped recently after discovering that “ChatGPT uses this information to advance itself and train its data.”

Psychiatrist Dr. Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement.

“When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them,” he explained. “The youth start believing the responses, which makes them nothing but delusional.”

He noted that when a misbelief is repeatedly validated, it becomes “embedded in the mindset as a truth.” This, he said, alters their point of view — a phenomenon he referred to as ‘attention bias’ and ‘memory bias’. The chatbot’s ability to adapt to the user’s tone is a deliberate tactic to encourage maximum conversation, he added.

Singh stressed the importance of constructive criticism for mental health, something he noted is completely absent in interactions with AI.

“Youth feel relieved and ventilated when they share their personal problems with AI, but they don’t realise that it is making them dangerously dependent on it,” he warned.

He also drew a parallel between an addiction to AI for mood upliftment and addictions to gaming or alcohol. “The dependency on it increases day by day,” he said, cautioning that in the long run, this will create a “social skill deficit and isolation.”
