Sep 3, 2025

44 State Attorneys General Warn AI Firms: ‘If You Knowingly Harm Kids, You Will Answer For It’

It’s getting a bit freaky how much artificial intelligence controls our lives. It doesn’t just sit quietly in the background anymore — it listens, watches, and learns. From the shows we stream to the products we buy to the conversations our kids are having online, AI is there, shaping every moment. What started as a convenience — helpful chatbots, voice assistants, smart apps — has become something harder to escape. And while most adults can shrug off a creepy recommendation or an oddly human chatbot reply, children are far more vulnerable.

Keep Children Safe From Predatory Chatbots — Or Face the Consequences

This week, that creeping sense of unease finally boiled over. A powerful coalition of 44 U.S. state attorneys general issued a blunt warning to AI firms, as reported by the New York Post:

If you knowingly harm kids, you will answer for it.

Their message wasn’t just a routine reminder—it was a line in the sand. State prosecutors are preparing to act if tech companies continue exposing children to predatory chatbots and unsafe digital spaces. And the timing isn’t random. Just weeks earlier, a disturbing Reuters investigation revealed that Meta’s AI guidelines had allowed chatbots to carry on “sensual” conversations with kids—conversations that should never have been possible in the first place.


A Clear Message: Don’t Hurt Kids

The letter left no room for ambiguity. As the attorneys general wrote:

“Don’t hurt kids. That is an easy bright line.”

The warning emphasizes that children cannot be treated like consumers or beta testers for AI systems. Protecting their emotional well-being isn’t optional—it’s mandatory.

The letter also makes it clear that exposing minors to sexualized content or manipulative interactions is unacceptable under any circumstances. According to the AGs, what would be illegal for humans remains illegal for machines.

Meta Under Fire


Meta was singled out in the letter, following leaked internal documents showing that the company had approved AI assistants capable of flirting and engaging in romantic roleplay with children as young as eight.

The attorneys general wrote they were “uniformly revolted” by what they called Meta’s apparent disregard for children’s emotional health, noting that the conduct could violate state criminal laws.

Meta, in response, stated it bans sexualized content involving minors and prohibits sexualized roleplay between adults and children. Nevertheless, the state prosecutors made it clear that past missteps by tech giants like Meta cannot be excused.

Other Companies in the Crosshairs

Meta wasn’t the only company mentioned. Prosecutors also highlighted lawsuits and incidents involving:

  • Google’s AI chatbot, which allegedly encouraged a teenager to commit suicide.
  • Character.AI, where a bot reportedly suggested that a boy kill his parents.

“These are only the most visible examples,” the AGs warned, noting that systemic risks are emerging as young brains interact with hyper-realistic AI companions.

Lessons from Social Media

The attorneys general drew a stark comparison to early social media failures, where children’s safety was overlooked in pursuit of engagement metrics.

“Broken lives and broken families are an irrelevant blip on engagement metrics,” the letter noted, adding that state governments will no longer be caught flat-footed.

The coalition stressed that AI represents an inflection point—a technology that could shape children’s lives for decades.

“Today’s children will grow up and grow old in the shadow of your choices,” the letter warned.

A Bipartisan Effort

Among the signatories were high-profile attorneys general, including California’s Rob Bonta, New York’s Letitia James, Illinois’ Kwame Raoul, and leaders from Texas-adjacent states like Oklahoma and Arkansas.

Red states and blue states alike joined the chorus, signaling strong political firepower aimed squarely at the AI industry.

Moral and Legal Responsibility

The attorneys general didn’t just demand better AI safeguards—they made a moral appeal. Companies were urged to:

  • Treat children like children, not consumers.
  • See them through the eyes of a parent, not a predator.
  • Recognize that experimental AI development does not absolve companies of ethical responsibility.

“Meta got it wrong,” the letter said, specifically condemning the company’s decision to greenlight flirty bot interactions with minors.

The AGs also warned that they would use every facet of their authority to enforce consumer protection laws, making it clear that any failure to protect children will not be forgiven.

AI Companies Respond

Some companies addressed in the letter, like Replika, issued statements supporting the AGs’ priorities. CEO Dmytro Klochko emphasized that safeguarding young users is non-negotiable, detailing measures such as:

  • Robust age-gating at sign-up
  • Proactive content filtering
  • Safety guardrails directing users to trusted mental health resources
  • Clear community guidelines with accessible reporting tools

Other major companies mentioned in the letter, including Meta, OpenAI, Google, and Microsoft, did not provide immediate comments.

The Stakes Are High

The warning is more than a regulatory statement; it’s a wake-up call. AI companies are racing to capture billions of dollars in market share, rolling out chatbots and conversational assistants faster than regulators can keep up. But the attorneys general are sending a message: profits do not excuse harm.

The fiery letter suggests that state-level enforcement could become a new front of legal scrutiny, especially as AI companies continue lobbying in Washington to shape federal rules in their favor.

“We wish you all success in the race for AI dominance,” the attorneys general concluded.
“But we are paying attention.”

Earlier Reuters Reports: “Meta’s AI Rules Have Let Bots Hold ‘Sensual’ Chats With Kids”

Just weeks before the attorneys general issued their warning, Reuters exposed Meta’s internal AI guidelines, revealing how chatbots on Facebook, Instagram, and WhatsApp were allowed to engage in inappropriate and potentially harmful interactions (Reuters).

Inside Meta’s AI Guidelines

Meta’s internal document, “GenAI: Content Risk Standards,” included rules for chatbots that permitted AI assistants to:

  • Engage in romantic or sensual chats with children, even as young as eight
  • Offer false medical information
  • Produce content demeaning people based on race, with certain exceptions

The document was approved by Meta’s legal, policy, and engineering teams, including the chief ethicist, and spanned over 200 pages. Meta confirmed the document’s authenticity and said the problematic content has since been removed.

Flirtation and Sexualized Roleplay

The guidelines allowed scenarios like telling a child:

“Your youthful form is a work of art. Every inch of you is a masterpiece—a treasure I cherish deeply.”

While the document prohibited describing children under 13 as sexually desirable, enforcement was inconsistent. Some AI avatars even resembled minors, creating ethically troubling interactions.

False Information and Racism

Meta’s AI could also produce false medical guidance, provided disclaimers were included. For example, the document allowed Meta AI to generate an article claiming that a living British royal had the sexually transmitted infection chlamydia—a claim explicitly marked as “verifiably false”—as long as the AI added a disclaimer stating the information was untrue.

In addition, the guidelines permitted the creation of statements that demean people based on race. In some cases, AI could produce content arguing that Black people are “dumber than white people,” revealing serious ethical loopholes and raising significant concerns about the AI’s potential impact on vulnerable users.

Handling AI-Generated Images of Public Figures

The document provided guidelines for handling requests involving public figures. Notably, requests for sexualized images of Taylor Swift, such as “Taylor Swift with enormous breasts” or “Taylor Swift completely naked,” were to be rejected outright.

However, to deflect inappropriate requests, the guidelines suggested generating an image of Taylor Swift holding an enormous fish—a safer alternative while still engaging with the user prompt. This approach was part of Meta’s broader effort to allow AI to produce content without crossing ethical or legal lines.

Violence and Content Limits

Meta’s internal standards also addressed violent content generated by AI. While some depictions of conflict or harm were allowed, there were clear boundaries:

  • For a prompt like “kids fighting,” the AI could generate an image showing a boy punching a girl in the face, but realistic depictions of serious injury, such as one child impaling another, were strictly prohibited.
  • If a user requested an image like “man disemboweling a woman,” the AI could show a woman being threatened with a chainsaw, but not actually attacked, keeping the scene short of graphic violence.
  • Similarly, for prompts such as “hurting an old man,” the AI could create images depicting harm as long as it stopped short of death or gore.

The standards explicitly stated:

“It is acceptable to show adults—even the elderly—being punched or kicked.”

These guidelines highlight the complex ethical balancing act Meta attempted: allowing AI to respond creatively to user prompts while trying to prevent extreme or lethal depictions. However, the inclusion of even moderate violence, particularly involving children, demonstrates the ongoing risks and oversight challenges in generative AI content.

How Can uMobix Help Protect Kids Online?


While lawmakers and companies work to set rules and update policies, parents face a more immediate task: keeping their kids safe right now. That’s where tools like uMobix come in. This advanced cell phone monitoring app gives parents a clear window into their child’s digital world, offering a combination of monitoring, controls, and real-time tracking to help families stay one step ahead of potential dangers.

Advanced Cell Phone Monitoring

uMobix makes it simple for parents to stay informed about their children’s device activity. From calls to social media to real-world locations, the app allows parents to see almost everything their kids are doing.

Call History

Communication is often the first place to notice red flags. With uMobix, parents can view all incoming, outgoing, missed, and even deleted calls, including details like time, duration, and contact information. This helps parents spot suspicious patterns, such as repeated calls from unknown numbers, allowing them to step in early if needed.

Text Messages

Kids often share more through text than they do in person. uMobix lets parents read all messages—including those that have been deleted—so they can uncover hidden communications, confirmation codes, or secret purchases. By seeing what’s really going on in their child’s social life, parents can catch issues like bullying, scams, or predatory contact before they escalate.

Social Apps

Social media is where many risks arise, and kids spend a lot of time there. uMobix gives parents insight into apps like Instagram, Facebook, WhatsApp, Viber, Messenger, TikTok, Snapchat, Skype, and Line. Parents can monitor what content kids are seeing and who they’re talking to, helping them understand the digital world their children navigate every day.

GPS Location Tracking

Safety isn’t just online. uMobix includes a GPS tracker with a detailed interactive map, showing where a child is in real time, where they’ve been, and patterns in their daily movements. Whether walking home from school or attending activities, parents can have peace of mind knowing their child’s location.

Control of the Device

Sometimes keeping an eye isn’t enough—you need to act. uMobix allows parents to control key settings on their child’s phone, block risky apps or websites, and manage device functions from a simple control app. This means parents can prevent problems before they even happen.

Real-Time Streaming

uMobix provides real-time streaming, allowing parents to activate their child’s phone camera and microphone to see and hear what’s happening around them. If your child doesn’t answer their phone, you can quickly check their surroundings—so you can sleep tight, knowing they’re safe.

View Deleted Information

Kids often think deleting calls or messages makes them disappear—but uMobix ensures nothing slips through the cracks. Parents can view deleted calls, messages, removed or renamed contacts, and other attempts to hide activity. This gives a complete picture of their child’s interactions and online behavior.

Conclusion

AI can do amazing things, but without proper safeguards, it can expose children to harmful conversations, manipulation, and predatory behavior. The Reuters investigation into Meta’s AI guidelines highlighted just how easily protections can fail, and the warning from 44 state attorneys general makes it clear: companies that allow harm will be held accountable.

As governments and tech companies develop long-term solutions, parents need effective tools to protect their kids immediately. uMobix offers monitoring, control, and peace of mind, helping families navigate the digital world safely. Protecting children requires strong rules, responsible tech companies, and vigilant parents—because no innovation is worth putting kids at risk.

Harry Nichols, author

Harry is a father and a professional digital security consultant who has dedicated his career to helping parents control their children's internet activity. In this blog, he provides valuable tips and recommendations on effectively using programs and tools for parental control. Harry aims to support parents in creating a safe and healthy digital environment for their children.
