New York on Thursday enacted a law requiring clear labels on ads that use AI-generated people. A companion measure adds consent protections for the commercial use of a deceased person’s name or likeness, strengthening publicity rights across the state.
Governor Kathy Hochul signed two bills that target deceptive or confusing AI uses in marketing. Supporters say the measures offer practical transparency for consumers and clearer guardrails for the entertainment industry. Because advertising often blends synthetic and real imagery, the law sets a baseline for disclosure that marketers can follow.
New York AI avatar law: what changes for advertisers
The disclosure requirement compels advertisers to inform audiences when synthetic humans or AI-altered likenesses appear in commercial content. According to reporting from The Verge, the state describes the measure as the first of its kind in the United States. The goal is straightforward: viewers should not mistake AI-generated characters or digitally reconstructed performers for real people without being told.
While the statute’s final text will guide exact language and placement, simple, prominent labels are the likely norm. Brands already deploy disclosures for endorsements, so adding a brief on-screen or adjacent notice for synthetic avatars fits familiar compliance patterns. Clear standards also reduce ambiguity for agencies and platforms that host mixed-media ads.
The companion bill requires consent from heirs or estate representatives before companies use a deceased individual’s name, image, or likeness for commercial purposes. That change addresses a fast-growing practice: reviving celebrities through generative tools. Because generative models can synthesize convincing speech and photorealistic video, the consent mandate helps deter exploitation and confusion.
Disclosure standards and practical labeling
To align with the law’s intent, advertisers should plan disclosures that are truthful, readable, and timely. Labels need to appear where viewers will notice them as the content plays, not buried in footnotes. In addition, the notice should describe the AI use clearly, such as “This ad includes AI-generated people” or an equivalent phrase.
Placement will matter, especially on mobile screens. Marketers should test overlays, captions, or end slates that remain visible long enough to be understood. Because platforms differ, cross-channel templates can keep the message consistent while preserving legibility.
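As an illustration only, a shared template can encode those channel differences in one place. The Python sketch below assumes hypothetical channel names, placements, and minimum display times; none of these values come from the statute or from any platform policy.

    # Hypothetical cross-channel disclosure template (illustrative sketch only).
    # Channel names, placements, and minimums are assumptions, not statutory language.
    DISCLOSURE_TEXT = "This ad includes AI-generated people."

    CHANNEL_SPECS = {
        "mobile_video": {"placement": "persistent_overlay", "min_seconds": 5, "min_font_px": 14},
        "connected_tv": {"placement": "end_slate", "min_seconds": 4, "min_font_px": 24},
        "social_feed": {"placement": "caption_first_line", "min_seconds": None, "min_font_px": None},
    }

    def build_disclosure(channel: str) -> dict:
        """Return a consistent disclosure spec for the given channel."""
        spec = CHANNEL_SPECS.get(channel)
        if spec is None:
            raise ValueError(f"No disclosure template defined for channel: {channel}")
        return {"text": DISCLOSURE_TEXT, **spec}

    print(build_disclosure("mobile_video"))

Keeping the label text and placement rules in one template makes it easier to update every channel at once if the final regulatory language differs from the placeholder wording.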
For posthumous uses, rights clearance should move earlier in creative workflows. Legal teams will need documented approval from estates before commissioning synthetic voiceovers or digital doubles. Moreover, agencies should retain records of consent, since disputes can surface long after a campaign ends.
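A minimal consent record, sketched below in Python, shows the kind of fields a team might retain; the structure and field names are assumptions for illustration, not a legal template or a requirement of the law.

    # Hypothetical record of estate consent for posthumous likeness use (illustrative only).
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class LikenessConsent:
        performer: str                 # deceased individual whose likeness is used
        estate_contact: str            # authorizing estate or representative
        permitted_uses: list[str]      # e.g., "synthetic voiceover", "digital double"
        granted_on: date
        expires_on: Optional[date] = None
        revocable: bool = True         # whether the estate may withdraw consent
        documents: list[str] = field(default_factory=list)  # references to signed releases

    record = LikenessConsent(
        performer="Example Performer",
        estate_contact="Example Estate LLC",
        permitted_uses=["synthetic voiceover"],
        granted_on=date(2025, 1, 15),
        documents=["signed_release.pdf"],
    )
    print(record)

Whatever the format, the point is durability: the record should be easy to retrieve years later, alongside the signed documentation it references.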
Industry reaction and enforcement signals
The entertainment sector, which faced intense negotiations over AI last year, welcomed the move. As The Verge notes, SAG-AFTRA leadership praised the signings as a step that recognizes artists’ rights and the realities of generative tools. Because the bills focus on disclosure and consent, they balance flexibility for innovation with baseline protections for performers.
Agencies and brands may ask who polices the labels. New York’s consumer protection apparatus can act on deceptive practices, and platforms can enforce policy through ad reviews. Therefore, risk management should include internal checks before flighting campaigns. A simple pre-launch playbook can prevent takedowns, public corrections, and reputational damage.
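Such a playbook can be as simple as the check sketched below; the field names and rules are assumptions about one team’s internal workflow, not requirements taken from the statute.

    # Hypothetical pre-launch check for AI-assisted ads (illustrative sketch only).
    def prelaunch_issues(campaign: dict) -> list[str]:
        """Return blocking issues; an empty list means the campaign is ready to flight."""
        issues = []
        if campaign.get("uses_ai_likeness") and not campaign.get("disclosure_text"):
            issues.append("Missing AI disclosure label")
        if campaign.get("uses_deceased_likeness") and not campaign.get("estate_consent_on_file"):
            issues.append("Missing documented estate consent")
        if not campaign.get("human_review_complete"):
            issues.append("Human review not completed")
        return issues

    campaign = {"uses_ai_likeness": True, "disclosure_text": None, "human_review_complete": False}
    for issue in prelaunch_issues(campaign):
        print("BLOCKER:", issue)

Running a check like this before trafficking creatives keeps disclosure and consent questions from surfacing only after an ad is live.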
The measures also set a reference point for other jurisdictions. Marketers that operate nationally may adopt New York-compliant labels everywhere, which simplifies production and reduces compliance drift. As a result, a de facto standard could emerge quickly.
Amazon’s AI recaps highlight why labels and reviews matter
Recent missteps in entertainment AI underline the stakes for accuracy and oversight. Amazon’s new AI-generated video recaps for Prime Video drew scrutiny after a widely viewed “Fallout” summary misstated key plot details and timelines. As Engadget reports, the errors were basic, and a human review likely would have caught them before release.
The company also pulled some AI-generated voiceover tracks for anime due to poor quality. These incidents do not directly violate New York’s disclosure requirement, yet they show how quickly AI can mislead or erode user trust when guardrails fail. Therefore, quality control should pair with transparency, so audiences both know what they are seeing and can rely on its fidelity.
Advertisers can learn from those outcomes. Clear AI notices help set expectations; rigorous human oversight preserves accuracy. Together, the practices mitigate backlash and regulatory interest, which protects campaigns and budgets.
Creative workflows under new rules
Production teams will likely adjust storyboards and asset plans to account for disclosures. Because creative choices influence label placement, art directors should specify where an on-screen notice sits and how long it remains. Editors can render versions for short formats and connected TV, while media planners confirm platform compliance.
Estates and talent managers will also refine clauses around synthetic uses. Contracts may spell out boundaries for training data, voice cloning, and digital doubles. In addition, license terms can include revocation rights if a use proves misleading or harmful. These steps keep consent meaningful as models evolve.
Finally, measurement teams should track whether labels affect brand lift, recall, or sentiment. Early evidence from other disclosure regimes suggests limited impact when notices are concise and honest. Consequently, brands can stay transparent without sacrificing performance.
Broader AI momentum raises urgency
Regulatory interest rises as AI capability accelerates. On the same day as the New York signings, OpenAI advanced its model lineup with a new release for ChatGPT. Ars Technica reports the company launched GPT-5.2 with extended context and improved reasoning modes. While the upgrade targets productivity, it also expands the tools available to media, marketing, and entertainment teams.
Greater capability increases both opportunity and risk. Because models can synthesize convincing humans and stitch together long-form content, disclosure and consent protections become more important, not less. Policymakers, performers, and platforms will therefore keep testing frameworks that inform audiences and protect rights.
What to watch next for AI advertising transparency
Industry groups may propose standardized language for AI notices, similar to nutrition labels or privacy icons. Platforms could add an automated “AI content” tag during ad setup, which encourages consistency and reduces manual errors. Meanwhile, performers may push for more granular consent tools that specify context, duration, and revocation windows.
Other states will likely study New York’s approach. Because national advertisers prefer uniform processes, a patchwork would increase compliance costs. If additional states pass similar bills, a shared baseline will emerge faster, which benefits consumers and creators alike.
Clear labels and consent guardrails do not block innovation; they build the trust that helps it scale.
Conclusion: building trust with clear signals
New York’s disclosure and consent laws give advertisers workable rules for AI-generated people and posthumous likenesses. The changes arrive as entertainment platforms test AI at scale, sometimes with embarrassing results. Therefore, transparent labels, documented consent, and human review now form the core of responsible AI in media.
Brands that adapt early will reduce risk while preserving creative flexibility. Unions and estates will gain clearer pathways to protect talent and legacy. And consumers will get crucial context about the content they see. Because trust is the currency of attention, these signals matter now more than ever.
For performers, resources and guidance remain available through SAG-AFTRA. For state updates, the Governor’s site offers policy news and bill signings at governor.ny.gov. As the rules evolve, marketers should continue to monitor platform policies and legislative actions.