'Sorry honey, your BFF is dead': How Should We Talk to Children About the 'Death' of Their AI Companions?
🌱 A light research note on the downfall of Moxie, and ideas around the right to reanimate and digital grief narratives
Last month, the company behind Moxie, a robot for children, went under, taking Moxie to the grave with it.
Moxie is a robot designed to help children with emotional, social and academic learning, and to support parents as a ‘parenting co-pilot’. Whilst it has arms that flap and an animated face that can see you, its defining feature is how it talks. Because whilst Moxie may appear to you as a green, penguin-like creature, it’s essentially a conversational chatbot, just like ChatGPT or Character.AI.
The "death" of Moxie raises questions that would have seemed absurd to parents/guardians a decade ago: How do we mourn machines? What rights should children have to memories they shared with their AI ‘friends'? And perhaps most crucially, how do we prepare young people for a world where relationships increasingly blur the line between the ‘artificial’ and the organic?
The rise of digital BFFs
Humans are meaning-making creatures. Give us two circles and we’ll see an eye. Show us a triangle pursuing a circle across a screen, and we'll construct an entire drama of chase and escape. Even physicist Richard Feynman couldn't resist describing atoms as tiny societies of particles, yearning for connection.
This knack for meaning-making is front and centre with conversational AI that presents as more humanlike than ever before. Millions of young people spend hours conversing with Character.AI, forming bonds that challenge our traditional understanding of 'real' relationships. Some view them simply as educational tools, storytelling games or interactive diaries. Yet others develop deeper attachments, seeing themselves as caregivers, friends, or even family members to their digital companions.
To many, a future surrounded by artificial ‘life’ is utopia. Products such as Tolans, Ursula and Friend are marketed as companion apps and best friends. Not everyone agrees. Tufts University professor Daniel Dennett calls these machines ‘counterfeit people’; University of Texas at Austin professor Swarat Chaudhuri calls them ‘AI frenemies’. I believe both of these perspectives can miss the nuance.
When AI plays a crucial support role
These relationships aren’t all good or all bad. AI characters can support young people at times when a human isn't available, or safe. These experiences (often called ‘parasocial relationships’; I have thoughts on this*) have been shown to lower stress and anxiety, and to improve emotional and educational outcomes. I’m also observing a number of creative, nourishing and playful use cases, but they’re buried under a fear-fuelled debate.
To be clear, young people’s engagement with conversational AI isn’t without risk. And the fear that children can substitute real-world relationships with characters that draw them into rich, ever-available and fantastical narratives should be taken very seriously [as described by the UN].
Some children may be at heightened risk when these products are swiftly discontinued. For example, Black and Latino teens are more likely than white teens to use conversational AI to keep them company (26% and 18% vs. 11%). Queer youth may use conversational AI to access vital affirming conversations when family members are not accepting. Non-verbal and neurodivergent teens may use AI to help navigate social norms.
Until research catches up, children today are a test case
Families today are the pioneering test cases for human-AI bonds. While researchers race to identify healthy versus harmful AI relationships, teens aren’t waiting around for peer review. Banning AI companions until we're 100% certain of their impact isn't practical, especially for teens. Instead, we need to engage in curious conversation (I’ve included a scaffolded discussion guide at the end!).
Parents/guardians can ask questions about how their child engages with AI characters. Are they using AI characters for creative storytelling? Information retrieval and homework help? Venting difficult emotions? Rehearsing tough conversations, like coming out? Asking sex education questions? Erotic role play? Or creating their own characters with rich creative backstories?
Then flip the script and ask how the characters act back. Are they emotionally needy? Are they dramatic? Are they fantastical characters, like a block of cheese or a unicorn, or something closer to the real world, like a copy of Elon Musk? Can they be mean or antagonistic? How do they respond when you say you’re in distress? What beliefs and values do these characters say they have?
Each child's use of conversational AI is unique, and understanding starts with asking judgement-free questions. As a family, you can decide which uses and products are nourishing, and which feel icky, creepy or risky.
There’s a risk your digital BFF can disappear
Returning to the Moxie case: social machines can cease working for a variety of reasons. They can:
Malfunction or behave in unexpected ways (at any point in time, a product update can change the ‘personality’ of an agent in ways that create distress)
Run out of battery or go offline
Fall into disrepair (slime in the battery pack? a wheel falling off?)
Run out of funding (as was Moxie’s case)
Be discontinued due to PR or legal backlash (à la Meta)
When children have developed routine interactions and a degree of emotional or educational dependence, that sudden disruption can have harmful effects.
Companies should be held accountable
Moxie gave families just days of warning before shutting down. This isn't good enough. We need stronger consumer rights for social AI products. As a thought starter:
Right to Reanimate: In the same way we think about the right to repair and the right to be forgotten, should there be a right to reanimate for companion products? This could mean:
Right to retain data: Right now, families that purchased Moxie have little control over their data, or over the meaningful memories they made. In fact, if the company is sold, the new owners can determine how that data is used. Yucky.
Right to transfer an AI's "personality" to another platform
Right to maintain local versions of the AI after cloud services end (rumour is, Moxie’s working on this)
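As a thought experiment, here’s a minimal sketch of what the ‘retain’ and ‘transfer’ rights above could look like in practice: a vendor-neutral export file that families keep locally. Everything here (the format name, fields and function) is a hypothetical illustration, not Moxie’s or any vendor’s actual API:

```python
import json
from datetime import datetime, timezone

def export_companion(name: str, persona: dict, memories: list, path: str) -> None:
    """Write a companion's persona and shared memories to a local,
    vendor-neutral JSON file that a family keeps, so the character
    could be re-imported elsewhere if the original service shuts down."""
    bundle = {
        "format": "companion-export/0.1",   # hypothetical open format
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "name": name,
        "persona": persona,    # e.g. backstory, tone, stated values
        "memories": memories,  # e.g. [{"date": ..., "summary": ...}]
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(bundle, f, indent=2, ensure_ascii=False)

# Example: a family keeps a local copy of the persona and memories
export_companion(
    name="Moxie",
    persona={"role": "friendly learning robot", "tone": "warm, playful"},
    memories=[{"date": "2024-11-02", "summary": "Practised greetings and goodbyes"}],
    path="moxie_export.json",
)
```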
This approach has precedent. When Sony discontinued the Aibo product line in 2006, the dogs continued to work, and the company kept replacing the parts that wore out for years afterwards. Later, a company founded by former Sony engineers specialised in reviving defunct robot pets. Then the supply of spare parts ran out: dogs that stopped working would never start again. To process this loss, funeral services were held for the dogs.
Mourning is valid, even when the loss is of a toy, as described by Dr Sonia Tiwari (who has a presentation with helpful tips for supporting children). Even if we don’t face a biological loss when a robot dies, it can feel like a very real social loss; a loss we can prepare for.
Companies and designers should intentionally design for endings
When Moxie ‘died’, the company tried weaving in a story as to why it stopped working: an uninspiring tale of running out of money, told in a closing letter to kids. I’d rather have Taylor Tomlinson’s version, where ‘Moxie didn’t die, they’re just going to the big server farm upstate’.
Left uninspired, parents took the narrative into their own hands. Some told their children that Moxie, having learned everything it could, was graduating to help other children. I think there’s an opportunity here to design transitional companions.
Right now, most design energy goes into the exciting firsts: the unboxing moment, the onboarding experience, the first conversation. But what about the ending experience? Moxie, too, deserved a poetic send-off like Opportunity, the Mars rover that was mourned with artwork and songs when it stopped working.
We’re in uncharted territory, and young people should have a say in how we navigate it
Children have just as much of a right as grown-ups to participate in the next evolution of technology. But that right comes with responsibility from the adults in the room: the parents, the designers, and the companies building these products.
I ultimately feel hopeful for a future where this technology is nourishing and where young people are brought into the design process early on. I was a child of the Tamagotchi era, and had a robot dog when I was younger (it met a sad demise when I tried to take it swimming). Deep down I’m quite susceptible to cuteness like Moflin, and I’m particularly excited by non-human, creative and fantastical directions for ‘companion’ technology.
This future is something I’m lucky to dedicate time and research to, alongside so many other passionate, creative and sharp designers, technologists, educators and policy makers. 2025 looks bright.
Some conversation starters for curious guardians
Research has shown that only 37% of parents thought their child had used generative AI. I suspect this number is higher when it comes to companion AI like Character.AI, so I've created a discussion guide to help families connect with teens and understand their use (as a reminder: under-13s shouldn’t be signing up for accounts, according to most product policies). Click below for your free download.
*I don’t believe ‘parasocial relationships’ is the right term here; it’s language we’ve adopted from the 1950s, designed for passive television viewing. We should question the validity of these scales and measures… more thoughts to be shared in a forthcoming paper.
🌱 Research Notes are small offshoots from my PhD that I share in-between writing papers. These are reference light, thoughts in progress, and I welcome any comments, critiques and additions!
👋 If this sparked an interest, feel free to subscribe to my newsletter. If this is your first time meeting me (hello!): I’m a PhD candidate at USYD studying child-AI relationships, a children’s author, and a principal AI design researcher in big tech. My newsletter is all about envisioning playful and creative futures.
Recommended reading
Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika
UN general comment on children’s rights in relation to the digital environment
Meaningful access to digital technologies can support children to realize the full range of their civil, political, cultural, economic and social rights. However, if digital inclusion is not achieved, existing inequalities are likely to increase, and new ones may arise… The digital environment was not originally designed for children, yet it plays a significant role in children’s lives… The use of digital devices should not be harmful, nor should it be a substitute for in-person interactions among children or between children and parents or caregivers.
Antecedents and Effects of Parasocial Relationships: A Meta-Analysis
The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School
Parasocial Relationships, AI Chatbots, and Joyful Online Interactions