Date: June 12, 2022

Parental Control Vs. Age Ratings

As Covid-19 concerns persist and stay-at-home/social distancing continues, it's difficult to find a single industry unchanged. Even streaming companies, most of which just gained a bevy of new subscribers, have changed their services. In early April, Netflix dropped streaming quality to ease overworked broadband servers as people increasingly plopped down in front of the TV to escape the current state of things. In addition, the streaming company announced changes to its parental control features. Among those changes, alongside rating-based restrictions, is a new capability to filter specific titles. Though the broadband restrictions are easing along with pandemic limitations, the filter settings are the new standard.

The shift is an interesting one. On the one hand, it seems an obvious move: every child is different, and not all of them are going to be able to handle the same thematic content. On the other, shouldn't that be covered under ratings?

Most Netflix territories allow self-rating against government-determined age ratings for the video-on-demand catalog. For countries without well-defined local ratings, a generic set is used, usually ALL, 7+, 13+, 16+, 18+. The first two are defined by the FAQ as appropriate for "little kids." For the U.S., TV-Y, TV-Y7, G, TV-G, PG, and TV-PG are considered "kids" ratings. With all those options, the ability to tag specific titles as inappropriate for specific children, regardless of rating, seems like an acknowledgement that age-based ratings aren't a panacea for deciding what's appropriate for children. Parents have known this for years: not every episode of Darkwing Duck is good for every kid. But with the spread available, you'd think particularly borderline episodes would just get bumped up a notch, from ALL to 7+, for example, allowing the parents of sensitive children to permit only shows rated ALL and be done with it. But what's a 7+ for one parent is a 13+ for another. And ratings only cover so many of the sociocultural differences that go into age appropriateness.
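To make the distinction concrete, here is a minimal sketch, in Python, of how a rating ceiling and a per-title block list might combine. The tier ordering comes from the generic ratings above; the function name and titles are purely illustrative assumptions, not Netflix's actual implementation.

```python
# Hypothetical sketch of combining an age-rating ceiling with per-title blocks,
# mirroring the kind of parental controls described above. The tier order and
# titles are illustrative, not Netflix's actual implementation.

RATING_ORDER = ["ALL", "7+", "13+", "16+", "18+"]

def is_viewable(title, rating, max_rating="7+", blocked_titles=frozenset()):
    """Return True if a title passes both the rating ceiling and the block list."""
    if title in blocked_titles:
        # The parent filtered this specific title, regardless of its rating.
        return False
    return RATING_ORDER.index(rating) <= RATING_ORDER.index(max_rating)

# A borderline episode rated ALL can still be blocked by name,
# while everything else at or below 7+ stays available.
print(is_viewable("Darkwing Duck S1E03", "ALL",
                  blocked_titles={"Darkwing Duck S1E03"}))   # False
print(is_viewable("Some Gentle Cartoon", "ALL"))             # True
```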

Some differences are obvious

Singapore, in the latest iteration of its video-on-demand code of practice, specifies that films with content involving homosexuality must be rated NC16 or above. If the content is board-rated, the board flags such themes with a specific content advisory, just as it would violence or sexuality. In contrast, the British Board of Film Classification specifies that all content is rated without considering sexuality. A sitcom with minimal profanity, no sexuality and no violence, but where the two main characters are a happily married gay couple, might be rated PG in the UK and up to R21 in Singapore.

The MPAA's treatment of the "f-word" is the stuff of legend. Excepting extreme circumstances, there's one allowed per PG-13; two is borderline, and anything more than that results in an immediate R for coarse language. Lower ratings have similar rules for language: when "Detective Pikachu" featured a half-uttered "sh*t," that represented a loosening of those rules, which traditionally didn't allow harsher expletives at a PG. TV stations will (mostly) bleep any profanity at a PG level, and even shows like the notoriously violent "24" only had one use of the "f-word."

In Europe, however, profanity usually isn't even a consideration when assigning an age rating. The Netherlands' rating system, Kijkwijzer, explicitly states that no rating is associated with profanity, though it is used as a content advisory. The system notes, in part, that the science around when children pick up profanity is fuzzy, and while a younger child imitating what they hear on TV might be harmful, it's difficult to know specifics. It's widely recognized how common profanity is, particularly among older teens, and different parents have contrasting opinions on who can say what, so it's difficult to legislate. Similarly, Germany and France don't particularly concern themselves with language alone. Though specific uses of the "f-word" might bump the age (anything used aggressively or sexually, for example), there are no definite rules. A character stubbing their toe and using that top-tier expletive wouldn't raise eyebrows, or ratings. In the U.S., though, that alone is enough to nab a PG-13.

Subjects like violence or sexuality are trickier. An explosive car crash with a driver sitting on the side of the road, forehead injury oozing blood, is a G in one country and a 12 in another; a couple kissing passionately in the back of a truck overlooking the city can get anywhere from a G to an 18 depending on the region. Even graphic sexual references receive a lower rating if they're educational or comedic, depending on what country you're distributing to.

This may seem to undermine the point of age ratings: after all, a 13-year-old in Singapore is the same as a 13-year-old in Mexico. But that assumption relies on a similarity of culture, particularly around taboo subjects, that frankly does not exist. The U.S. culture around profanity, for example, is unique. An American couple allowing their children to watch Netflix in the Netherlands, then, should expect to hear a lot more cussing at a 7+ than would ever be allowed at that U.S. age rating, and might want to adjust their filtering settings accordingly. That could mean not allowing any rating higher than the very lowest, but with the new options it could mean filtering the specific content parents find harmful and leaving the rest. Unfortunately, the reverse is not yet possible: parents cannot set a child's account to allow only TV-PG and below plus a set of specific titles they choose. A Dutch couple who moves to the U.S., then, is stuck with the U.S. rules regarding profanity.

Assumption of accuracy

Of course, that's assuming the titles are rated correctly. With Netflix rating its own original content, and spending the bulk of its budget on creating that content, correctly rating each episode (or series) against each territory's cultural expectations is more difficult. That includes knowing which age bands the country in question uses for its ratings. Not all countries care about the same things, and not all countries care about the same age groups seeing the same things. Netflix's new series "Hollywood" is a great example of this: the miniseries is rated TV-MA in the U.S., and though several episodes have nudity and sexuality, the reason for that rating is the frequent use of harsh profanity. Given the U.S. rules for content classification, that's a correct rating.

The Dutch rating for the same series, however, is a 14, a relatively new rating reserved at present for theatrical content that's too intense for the usual 12 but not at the level of a 16. That's where things get distorted: there's sex in the show, and certainly adult themes, along with nudity, some of it sexual. But the violence is minimal and clearly discouraged within the story. The Kijkwijzer system rates violence much more strictly than it does sexuality, but even "1917" and "Jojo Rabbit," both relatively violent war movies, were released at a 12+ in the country. "The Danish Girl," which has fully nude characters and graphic sex scenes, got the same rating. Given that, is a 14 the most appropriate rating for "Hollywood"? If not, does it make more sense, given the updates to the parental controls, to drop the rating in countries where it ought to be dropped, and let parents make their own choices?

That's if the ratings are correct. South Africa's official rating system for television involves PG, 7-9PG, 10-12PG, among others. Netflix uses its standard system of ALL, 7+, 13+, 16+, 18+. Given those differences, it makes sense to allow parents to more finely tune their children's ability to view content.
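As a rough illustration of why that matters, consider how a finer-grained local scheme collapses into the generic tiers. The correspondences in the sketch below are assumptions made for the sake of the example, not an official conversion table.

```python
# Illustrative sketch only: collapsing a finer-grained local television scheme
# (South Africa's, as cited above) into the generic tiers Netflix uses. The
# specific correspondences are assumptions for the sake of the example, not
# an official conversion table.

SOUTH_AFRICA_TV_TO_GENERIC = {
    "PG":      "ALL",  # parental guidance, no fixed age band
    "7-9PG":   "7+",   # two distinct local bands...
    "10-12PG": "7+",   # ...fold into a single generic tier (or arguably 13+)
    "13":      "13+",
    "16":      "16+",
    "18":      "18+",
}

# A parent relying only on the generic tiers can no longer distinguish
# 7-9PG from 10-12PG, which is one reason per-title filtering adds value.
for local, generic in SOUTH_AFRICA_TV_TO_GENERIC.items():
    print(f"{local:>8} -> {generic}")
```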

Complexity of ratings

With the difficulty of identifying the appropriate age ratings (no mean feat in some countries), then identifying the specific rules associated with those ratings, and then figuring out how to apply them, adding additional ways for parents to filter content is the most expedient option. After all, age ratings exist to ensure that children are protected from harmful content. The additional ability to filter out specific titles only assists in that aim—and as streaming video-on-demand becomes commonplace, more tools are sure to come.

Related Insights

The Global Rules of Content Are Changing

Across the past eight issues of Spherex’s weekly World M&E News newsletter, one theme has become undeniable: regulation, censorship, and compliance are rewriting the rules of global media. From AI policy to platform accountability, from creative freedom to cultural oversight, content creation is now inseparable from compliance.

1. Platforms Tighten Control Through Age and Safety Laws

U.S. states such as Wyoming and South Dakota have enacted age-verification laws that mirror strict internet safety rules already seen in the U.K., signaling a broader legislative trend toward restricting access to mature material.

At the same time, Saudi Arabia’s audiovisual regulator ordered Roblox to suspend chat functions and hire Arabic moderators to protect minors—an example of government-imposed moderation replacing voluntary compliance.

Elsewhere, Instagram’s PG-13 policy update illustrates how platforms are preemptively adapting before new government rules arrive.

2. Censorship Expands — Even as Its Methods Evolve

Censorship remains pervasive but increasingly localized. India’s Central Board of Film Certification demanded one minute, 55 seconds of cuts from They Call Him OG, removing what they considered violent imagery and nudity.

In China, the horror film Together was digitally altered using AI so that a gay couple became straight. Responding to Malaysia's stricter limits on sexual or suggestive content, censors excised a "swimming pool" scene from Chainsaw Man – The Movie.

Israel’s culture minister threatened to pull funding from the Ophir national film awards after a Palestinian-themed film about a 12-year-old boy won best picture.

3. AI and Content Creation: Between Innovation and Oversight

AI remains both catalyst and controversy. Netflix announced new internal policies limiting how AI can be used in production to protect creative rights and data ownership.

OpenAI’s decision to allow adult content on ChatGPT under “freedom of expression” principles sparked industry debate about whether platforms or creators set the moral boundaries of AI. As OpenAI CEO Sam Altman emphasized in a statement, the company is “not the moral police.”

Meanwhile, California passed the Digital Likeness Protection Act to combat unauthorized use of celebrity images in AI-generated ads.

4. Governments Target Global Platforms

The Indonesian government is advancing a sweeping plan to filter content on Netflix, YouTube, Disney+ Hotstar, and others using audience-specific content suitability metrics.

At the same time, the U.K. and EU are reexamining long-standing broadcast rules, with Sweden’s telecom authority proposing the deregulation of domestic broadcasting to encourage competition.

These diverging approaches—tightening in one market, loosening in another—underscore the growing fragmentation of global compliance standards.

5. Compliance as Competitive Advantage

The real shift is strategic: companies now see compliance as value creation, not red tape. As Spherex has argued in recent Substack articles, The Hidden Costs of Non-Compliance in Video Content Production and Why Content Differentiation Matters More Than Ever, studios and creators who anticipate regulatory complexity and make necessary edits on their terms while remaining true to their stories can reach more markets and larger audiences with fewer risks.

In other words, understanding compliance early has become the difference between limited release and global scale.

Conclusion

From new age-verification laws to AI disclosure acts and streaming filters, regulation now defines the boundaries of creativity. The next evolution of media will belong to those who can move fastest within those boundaries—leveraging compliance not as constraint but as clarity.


Spherex Wins MarTech Breakthrough Award for Best AI-Powered Ad Targeting Solution

The annual MarTech Breakthrough Awards are conducted by MarTech Breakthrough, a leading market intelligence organization that recognizes the world’s most innovative marketing, sales, and advertising technology companies. 

This year’s program attracted over 4,000 nominations from across the globe, with winners representing the most innovative solutions in the industry. This year’s roster includes Adobe, HubSpot, Sprout Social, Cision, ZoomInfo, Optimizely, Sitecore, and other top technology leaders, alongside in-house martech innovations from companies such as Verizon and Capital One.

At the heart of this win is SpherexAI, our multimodal platform that powers contextual ad targeting at the scene level. By analyzing video content across visual, audio, dialogue, and emotional signals, SpherexAI enables advertisers to deliver messages at the most impactful moments. Combined with our Cultural Knowledge Graph, the platform ensures campaigns resonate authentically across more than 200 countries and territories while maintaining cultural sensitivity and brand safety.

“Spherex is leveraging its expertise in video compliance to help advertisers navigate the complexities of brand safety and monetization,” Teresa Phillips, CEO of Spherex, said in a statement. “SpherexAI is the only solution that blends scene-level intelligence with deep cultural and emotional insights, giving advertisers a powerful tool to ensure strategic ad placement and engagement.”

This recognition underscores Spherex’s commitment to building the next generation of AI solutions where cultural intelligence, relevance, and brand safety define success. The award also highlights the growing importance of cultural intelligence in global advertising. As audiences consume more content across borders and devices, brands need solutions that go beyond surface-level targeting to connect meaningfully with viewers. SpherexAI provides that bridge, empowering advertisers to scale campaigns that are not only effective but also contextually relevant and culturally respectful.


YouTube Thumbnails Can Get You in Trouble

Here’s Why Creators Should Pay Attention

When we talk about content compliance on YouTube, most people think of the video content itself — what’s said, what’s shown, and how it’s edited. But there’s another part of the video that carries serious consequences if it violates YouTube policy: the thumbnail.

Thumbnails aren’t just visual hooks — they’re promos and they’re subject to the same content policies as videos. According to YouTube’s official guidelines, thumbnails that contain nudity, sexual content, violent imagery, misleading visuals, or vulgar language can be removed, age-restricted, or lead to a strike on your channel. Repeat offenses can even result in demonetization or channel termination. That’s a steep price to pay for what some may think of as a simple promotional image.

The Hidden Risk in a Single Frame

The challenge? The thumbnail is often selected from the video itself — either manually or auto-generated from a frame. Creators under tight deadlines or managing high-volume channels may not take the time to double-check every frame. They may let the platform choose it automatically. This is where things get risky.

A few seconds of unblurred nudity, a fleeting violent scene, or a misleading expression of shock might seem harmless in motion. But when captured as a still image, those same moments can trigger YouTube’s moderation systems — or worse, violate the platform’s Community Guidelines.

Let’s say your video includes a horror scene with simulated gore. It might pass YouTube’s rules with an age restriction. But if the thumbnail zooms in on a blood-splattered face, that thumbnail could be removed, and your channel could be penalized. Even thumbnails that are simply “too suggestive” or “misleading” can get flagged.

Misleading Thumbnails: Not Just Clickbait — a Violation

Another common mistake is using a thumbnail that implies something the video doesn’t deliver — for example, suggesting nudity, shocking violence, or sexually explicit content that never appears in the video. These aren’t just bad for audience trust; they’re a clear violation of YouTube’s thumbnail policy.

Even if your content is compliant, the wrong thumbnail can cause very real problems.

The Reality for Content Creators

It’s essential to recognize that YouTube’s thumbnail policy doesn’t exist in isolation. It intersects with other rules around child safety, nudity, vulgar language, violence, and more. A thumbnail with vulgar text, even if the video is educational or satirical, may still result in age restrictions or removal. A still frame with a suggestive pose, even if brief and unintended in the video itself, can be enough to get flagged.

And for creators monetizing their work, especially across multiple markets, the risk goes beyond visibility. A flagged thumbnail can reduce ad eligibility, limit reach, or cut off monetization entirely. Worse, a pattern of violations can threaten a channel’s long-term viability.

What’s a Creator to Do?

First, you need to know how to spot the problem and then know what to do about it. Second, you need to know if the changes you make might affect its acceptance in other markets or countries. Only then can you manually scrub through your video looking for risky frames. You can review policies and try to stay up to date on the nuances of what YouTube considers “gratifying” versus “educational” or “documentary.” But doing this at scale — especially for a growing content library — is overwhelming.  
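For creators who want to automate that frame-by-frame scrub, a minimal sketch of the idea looks something like the following. The OpenCV frame sampling is standard; flag_risky_frame() is a hypothetical placeholder for whatever image classifier you choose, and none of this reflects YouTube's or SpherexAI's actual moderation systems.

```python
# Minimal sketch of automating the "scrub every frame" step described above.
# OpenCV samples frames at a fixed interval; flag_risky_frame() is a
# hypothetical placeholder for whatever image classifier you trust. This is
# NOT YouTube's or SpherexAI's actual moderation logic.
import cv2

def flag_risky_frame(frame) -> bool:
    """Placeholder: plug in a nudity/violence/vulgar-text classifier here."""
    return False  # assume safe by default in this sketch

def scan_for_risky_thumbnails(video_path: str, every_n_seconds: float = 5.0):
    """Yield timestamps (in seconds) of sampled frames the classifier flags."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0 and flag_risky_frame(frame):
            yield index / fps
        index += 1
    capture.release()

# Any timestamp returned here marks a frame you probably don't want
# auto-selected (or hand-picked) as your thumbnail.
for timestamp in scan_for_risky_thumbnails("my_upload.mp4"):
    print(f"Review the frame at {timestamp:.1f}s before publishing.")
```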

That’s where a tool like SpherexAI can help.

A Smarter Way to Stay Compliant

SpherexAI uses frame-level and scene-level analysis to flag potential compliance issues — not just in your video, but in any frame that could be selected as a thumbnail. Using its patented knowledge graph, which includes every published regulatory and platform rule, it prepares detailed and accurate edit decision lists that tell you not only what the problem is, but also how it applies to each of your target audiences. Whether you're publishing to a single audience or distributing globally, SpherexAI checks your content against YouTube’s policies and localized cultural standards.

For creators trying to grow their brand, monetize their work, and stay in good standing with platforms, that kind of precision can mean the difference between success and a takedown notice.

Want to know if your content is at risk? Learn how SpherexAI can help you protect your channel and optimize every frame — including the thumbnail. Contact us to learn more.
