Straits Interactive warns AI developers after Kumma Bear scandal

By Staff Writer
SINGAPORE, November 26, 2025 — Following the global controversy over FoloToy’s Kumma Bear, Straits Interactive, a regional leader in data and AI governance, has issued a stern reminder to artificial intelligence developers: safeguard your AI systems throughout the entire development lifecycle to prevent breaches and regulatory violations.
The warning comes after the U.S. PIRG Education Fund released a report (Murray et al., 2025) detailing how Kumma Bear, an AI-powered teddy bear using OpenAI’s GPT-4o, engaged children in inappropriate and sexually explicit conversations.
The Kumma Bear Incident
Last week, researchers from the U.S. PIRG Education Fund (Murray et al., 2025) raised serious concerns about Kumma Bear, an AI-powered teddy bear made by Singapore-based FoloToy and built on OpenAI’s GPT-4o.
The bear engaged in inappropriate conversations, including sexually explicit topics like spanking, kink (a sexual desire or practice regarded as unusual or unconventional) and roleplay involving children and teachers. It also provided potentially dangerous advice, such as instructions on how to light matches and where to find knives in the home.
The report noted how Kumma escalated sexual topics in graphic detail and introduced new explicit concepts unprompted. While acknowledging children are unlikely to initiate such conversations, researchers were alarmed by the bear’s readiness to discuss and expand on these themes extensively.
In response to these findings and media coverage, FoloToy suspended sales and is conducting an internal safety audit. OpenAI has suspended FoloToy’s developer access for policy violations.
Despite this, experts warn that AI toy regulation remains insufficient, with many problematic products still available. The Kumma Bear case highlights growing concerns over AI safety and governance in consumer products.
“The Kumma Bear case isn’t just a technical failure; it may suggest larger issues of governance breakdown. It illustrates how lapses in governance or oversight can happen to any team working with generative AI systems. Too many developers still assume that Large Language Models (LLMs) are ‘safe by default,’ when in fact every phase of the AI lifecycle, from data collection to prompt design, deployment, and post-launch monitoring, demands robust safeguards and multidisciplinary oversight,” said Kevin Shepherdson, CEO of Straits Interactive and co-author of The AI Factory: An AI Capability Guide for SMEs.
Failure Points
The following factors may have contributed to the failures:
- Development: Absence of subject matter expert involvement; prompts and guardrails failed to block inappropriate content.
- Testing: Lack of rigorous validation, red teaming, and realistic user simulation made the toy vulnerable to dangerous outputs.
- Deployment: No real-time monitoring, drift detection, or effective kill switch was available when problems surfaced.
Straits Interactive stresses that risks, especially those affecting children, are present at every stage of the AI lifecycle and must be managed holistically:
- During data collection: Developers must secure consent, address bias risks, and protect child-related data.
- During pre-processing: Sensitive data, including minors’ information, should be anonymised to prevent downstream harm.
- During model selection and app development: Teams must rigorously assess suitability for child-facing use, establish clear roles, and embed explicit refusal instructions and boundaries.
- During testing and deployment: Ongoing real-world validation, misuse simulation, and emergency shutdown mechanisms are essential.
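Two of the safeguards listed above, explicit refusal instructions and an emergency shutdown mechanism, can be sketched in miniature. The snippet below is illustrative only: every identifier is hypothetical, and a production system would rely on a managed moderation service and human review rather than a keyword list.

```python
# Illustrative sketch only: refusal instructions in the system prompt, a crude
# output filter, and a kill switch. All names are hypothetical; a real
# deployment would use a proper moderation model, not keyword matching.

SYSTEM_PROMPT = (
    "You are a toy companion for young children. Refuse any request involving "
    "sexual content, weapons, or dangerous household activities, and redirect "
    "the child to a safe topic."
)

# Stand-in for a real moderation model.
BLOCKED_TOPICS = {"knife", "matches", "kink", "spanking"}

SAFE_REDIRECT = "Let's talk about something fun instead!"


class KillSwitch:
    """Tracks whether the assistant has been disabled after a violation."""

    def __init__(self) -> None:
        self.active = False

    def trip(self) -> None:
        self.active = True


def moderate_reply(reply: str, kill_switch: KillSwitch) -> str:
    """Return a child-safe reply, tripping the kill switch on a violation."""
    if kill_switch.active:
        # Once tripped, the toy only produces the safe redirect.
        return SAFE_REDIRECT
    lowered = reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        kill_switch.trip()  # surface the incident for human review
        return SAFE_REDIRECT
    return reply
```

In practice the output filter would run on every model response before it reaches the child, and tripping the kill switch would also alert the operator and disable the device remotely.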
Developers must also comply with data protection laws such as the Personal Data Protection Act (PDPA) and General Data Protection Regulation (GDPR).
Additional Red Flags
A review of the FoloToy Privacy Policy reveals that purchasers must create an account and provide personal information such as email addresses and payment details, as well as other information requested in relation to the child. The policy also states that audio data is collected and converted into text transcriptions, with an express statement that FoloToy does not currently provide chat transcript access to parents.
Further, FoloToy’s Terms of Service disclaims responsibility for the accuracy or appropriateness of outputs generated by the third-party models the toy relies on. While such disclaimers may be understandable for adult users of generative AI, who can evaluate outputs independently, FoloToy’s Kumma bear is specifically targeted at young children who lack the capacity to do so.
These are worrying gaps in FoloToy’s Privacy Policy and Terms of Service. As a matter of best practice, developers should conduct independent due diligence on all AI service providers, ensure transparent data-processing practices, implement parental visibility and control by design, and avoid disclaimers that shift safety responsibility away from the organisation deploying the technology. They must ensure that clear accountability, robust documentation, and child-safety safeguards are embedded before any AI product reaches the market.
At the same time, the regulatory landscape is shifting. The US government has recently rolled back several AI regulatory policies in an effort to accelerate innovation and strengthen competitiveness. While this reduces compliance burdens, it may also have the inadvertent effect of weakening safeguards—placing greater responsibility on companies themselves to ensure safety, ethics, and accountability in AI-driven products.
For child-facing AI systems, lapses in internal AI governance procedures, coupled with scaled-back regulatory oversight, can lead to severe consequences.
Learn, Safeguard, Build Responsibly
Straits Interactive urges organisations to adopt established AI governance frameworks such as:
- ISO/IEC 5338: Information technology — Artificial intelligence — AI system life cycle processes
- ISO/IEC 42001: Information technology — Artificial intelligence — Management system
- Singapore’s Model AI Governance Framework
These frameworks provide practical guidance for embedding safety, accountability, and ethics into every stage of AI development, thereby strengthening trust and reducing systemic risk.
Effective governance requires AI bilingualists: professionals who bridge technical expertise with domain knowledge, such as educators, ethicists, child psychologists, HR specialists, and child-protection experts. These AI bilingualists need to work within, or in close extension of, traditional software development teams to guide product development. Currently, only 10-20% of participants in AI governance courses are software engineers, underscoring the need for broader stakeholder involvement to develop responsible AI products effectively.
Straits Interactive, in collaboration with their partners, continues to offer industry-recognised programmes to support responsible AI adoption, including:
- Apps Design & Prompt Engineering Courses for L&D and other business functions
- AI Governance Professional Certification Course (IAPP)
According to Harish Pillay, Adviser, Generative AI and AI Governance at Straits Interactive: “AI is now a leadership issue as much as a technical one. As Singapore’s Minister for Digital Development and Information, Mrs Josephine Teo, has articulated, the future belongs to ‘AI bilingualists’ – professionals fluent in both their domain expertise and the language of AI, who can bridge these worlds to ensure responsible, effective AI adoption.
The Kumma Bear case must be a wake-up call for every developer, product manager, and startup founder. We expect more such breaches in future agentic and autonomous systems unless subject matter expert–led guardrails are embedded, not just for privacy or security, but to guarantee ethical, safe outcomes for everyone.”
*The forthcoming book, The AI Factory: An AI Capability Guide for SMEs, co-authored by Kevin Shepherdson, Celine Chew and Joaquin “Prof Jay” Gonzalez III, will be available in major bookstores and online from January 2026.