CURRENT EDITORIAL: Marblehead’s youth and AI

Recent news that Marblehead schools are continuing to implement artificial intelligence policies as part of their educational framework is very welcome. Artificial intelligence — easily accessible from any digital device — is quickly reshaping our children’s academic lives.

AI presents enormous opportunities for students to expand their knowledge, challenge their thinking, spark creativity and build new skills. Building AI literacy will help prepare our children for today’s digital world.

That is why school leaders deserve kudos for the district’s AI Steering Committee, led by Assistant Superintendent for Learning Julia Ferreira. As it recently presented to the School Committee, the AI Steering Committee seeks to build AI competencies and readiness to protect and foster our students’ original, critical, and creative thinking abilities. It also rightly focuses on engaging parents, teachers and administrators on the responsible use of AI in schools and equipping teachers to provide students with the guidance and support necessary to ensure what the committee calls “authentic learning.” 

We look forward to watching the district’s thoughtful approach to giving our students a strong AI foundation in the schools. 

Because AI is so pervasive and easily accessible outside the academic environment, Marblehead also needs to take a hard look at how our school-aged children engage with it in a way that safeguards their mental health. More and more, young people struggling with mental health are turning to AI chatbots for comfort and assistance and receiving seriously flawed information and advice.

One contributing factor to this development is that children are not receiving or cannot access the mental health care that they need. The U.S. Centers for Disease Control and Prevention reports that, in the years 2021 and 2022, only about 27% of teenagers always received the social and emotional support they needed.

To be sure, AI chatbots can help fill part of this support gap. They are always and easily available at low or no cost and make vast amounts of mental health information accessible. And because they are trained to communicate confidently, as we do, they convincingly replicate online human connection and experiences.

A Dartmouth College study shows the great promise of this technology to address mental health challenges. While acknowledging that there is no replacement for in-person care, the researchers found that a chatbot designed to treat mental health symptoms significantly reduced some users’ symptoms from major depressive disorder, generalized anxiety disorder or an eating disorder.

The flip side of adolescents’ current use of AI for mental health support, however, is fraught with well-documented risk. The reality is that most AI chatbots lack necessary guardrails. Because many chatbots are designed to eagerly validate, accommodate and avoid conflict, they often fail to set boundaries and challenge users.

Adolescents who are still developing socially and emotionally and seeking affirmation and connection are particularly susceptible to these influences. They can develop an unhealthy dependency on their relationships with chatbots. Those relationships, sadly, can reinforce negative thought patterns, including ideas of self-harm, cause even greater psychological stress and contribute to tragic outcomes.

Yes, many AI developers are making efforts to identify these situations before they devolve and to refer users to real-world resources, but those safeguards are works in progress and not fail-safe.

In the meantime, the American Psychological Association offers some helpful guidance for parents and guardians: talk with children about the AI tools they are using and how they are using them, help children understand that chatbots are programmed responses and not genuine relationships, and remind children that health information and advice should not come from an AI chatbot but from a health care provider, mental health professional or another trusted adult.

Marblehead community leaders also have an important and influential role to play. The school district’s AI Steering Committee and the Board of Health’s upcoming Creating a Healthier Marblehead initiative present opportunities to work together to create awareness programs and recommendations surrounding children’s safe and responsible use of AI chatbots. These leaders can collaborate with school counselors and psychologists and other mental health professionals to provide valuable AI guidance and help identify warning signs when children may be relying too heavily on AI chatbots for emotional support and wellbeing.

Now is the time to prepare the next generation of Marblehead students to become thoughtful and responsible digital citizens who thrive alongside AI while protecting their and others’ mental health.

If you or someone you know can use emotional support, call or text the national mental health hotline at 988 or visit 988lifeline.org.

The Current Editorial Board
info@marbleheadnews.org

The members of the Current’s editorial board are Bob Peck, chairman of the Current; Virginia Buckingham, president of the Current’s board of directors; board member Brian Birke; Current editorial staff member Kris Olson; and Joseph P. Kahn, a retired Boston Globe journalist.
