I love AI.
I give platforms nicknames (ChatGPT is “Chad,” my main man, and when he isn’t performing up to my standards I hit up my side piece, “Claude”).
Most days I also tap into the genius of Perplexity, NotebookLM, and Midjourney, and tinker with ElevenLabs, Replit, and Pictory. But Sora evaded me (and most of the world) until now – a premature launch mucked up the works, but she is finally up and running.
And I’m not so sure I love AI anymore.
What Is Sora?
Sora is OpenAI’s text-to-video model: you give it a written prompt and it generates a short video clip.
Is Sora perfect? No, but it is pretty close.
Sora seems to be better with nature than with people or text, but let’s remember this is the first iteration – who knows how much better it will get.
Here are my Sora videos and how they were created.
It begins with a prompt, just like Midjourney and pretty much all generative AIs.
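For the technically curious, the mechanics look something like the sketch below. This is a hypothetical illustration only: Sora launched without a public API, so the endpoint, parameters, and response fields here are placeholders I invented, not OpenAI’s actual interface. The pattern is the point: you send a plain-text prompt, the service runs an asynchronous generation job, and you eventually get a video back.

```python
# Hypothetical sketch of a prompt-to-video workflow. The endpoint, fields,
# and credential below are made-up placeholders, not a real Sora API.
import time
import requests

API_BASE = "https://api.example.com/v1"  # placeholder endpoint (assumption)
API_KEY = "sk-..."                       # placeholder credential

def generate_video(prompt: str) -> str:
    """Submit a text prompt, poll the async job, return the finished video URL."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # Kick off a generation job from a plain-text prompt.
    job = requests.post(
        f"{API_BASE}/videos",
        headers=headers,
        json={"prompt": prompt, "duration_seconds": 10, "resolution": "1080p"},
        timeout=30,
    ).json()

    # Video generation is slow and asynchronous, so poll until it finishes.
    while True:
        status = requests.get(
            f"{API_BASE}/videos/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["status"] == "completed":
            return status["video_url"]
        time.sleep(5)

print(generate_video("A drone shot of waves crashing on a rocky coastline at sunset"))
```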
Video #1
Prompt
Which resulted in this video:
Video #2
Prompt
Which resulted in this video:
Current AI Legislation
Several jurisdictions now have emerging legal requirements that mandate disclosing when a piece of content is AI-generated or has been manipulated by AI. These requirements are relatively new and still evolving, but a few key examples stand out:
1. China:
Regulations on Deep Synthesis Technology (Effective January 2023): China introduced rules governing “deep synthesis” technology, which covers AI-generated media like deepfakes, synthetic voices, and images created with generative AI. Under these regulations, providers of such services must clearly label AI-generated or synthetically altered media to ensure users can distinguish it from authentic content.
Generative AI Regulations (Effective August 2023): China’s rules for generative AI services also impose labeling requirements. Companies are required to prominently mark content produced by generative AI models, including text, images, videos, or other media, to inform consumers that it is not human-generated.
2. European Union:
EU AI Act: The AI Act was adopted in 2024, so it is no longer just a proposal. It includes transparency obligations: AI-generated or AI-manipulated audio, image, and video content (e.g., deepfakes) must be disclosed as artificially generated or manipulated, with lighter obligations for evidently artistic, creative, or satirical works. Most of these transparency provisions apply from 2026, creating a legal mandate for labeling AI-generated media within the EU.
EU Code of Practice on Disinformation: While not a binding law, this voluntary code encourages online platforms to label AI-driven manipulations and deepfakes. The code has gained traction, and it is anticipated that parts of it may be incorporated into future binding rules, especially under the Digital Services Act (DSA).
3. United States (Emerging and Sector-Specific):
State-Level Laws: Some U.S. states have proposed or enacted limited regulations around political deepfakes. For instance, California and Texas have laws prohibiting the distribution of AI-manipulated media intended to mislead voters shortly before an election. While not all of these laws explicitly require labeling, they prohibit deceptive uses, which may indirectly encourage disclosures.
FTC Guidance: The Federal Trade Commission (FTC) has issued warnings about the deceptive use of AI, and while not a specific labeling requirement, the guidance suggests that businesses risk regulatory action if they mislead consumers by presenting AI-generated content as human-produced. Future U.S. federal or state laws may move toward formal disclosure mandates.
4. Industry Self-Regulation and Platform Policies:
Many large platforms (e.g., social media companies) have their own policies requiring or encouraging labels for AI-generated content. While these are not laws, they can set industry standards and precedents that regulators may adopt later.
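None of these regimes prescribes a single technical mechanism for labeling. In practice, compliance efforts lean on visible “AI-generated” notices and provenance standards like C2PA Content Credentials, but at its crudest a disclosure can be stamped straight into the file itself. A minimal sketch, assuming the ffmpeg CLI is installed and on your PATH:

```python
# Minimal sketch: stamping an AI-generated disclosure into a video file's
# metadata. Metadata alone would not satisfy rules that require a *visible*
# label; this only illustrates the basic idea of marking synthetic media.
import subprocess

def label_as_ai_generated(src: str, dst: str) -> None:
    """Copy the video unchanged, adding a synthetic-media disclosure tag."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c", "copy",  # copy audio/video streams without re-encoding
            "-metadata", "comment=AI-generated content (synthetic media)",
            dst,
        ],
        check=True,  # raise if ffmpeg exits with an error
    )

label_as_ai_generated("sora_clip.mp4", "sora_clip_labeled.mp4")
```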
In Summary:
While the global regulatory landscape is still taking shape, China’s rules are the clearest example of mandatory labeling laws already in force, and the EU has now formalized similar requirements in the AI Act, though most of them do not apply until 2026. Elsewhere, as in the U.S., laws are more fragmented and focus largely on preventing deceptive practices, though explicit labeling mandates may emerge in the near future as lawmakers, regulators, and industry stakeholders increasingly push for transparency in AI-generated content.
So now what?
Can we ever believe what we see or hear from this point forward? I don’t know, but maybe it is time to grab a few boxes of Reynolds Wrap and start crafting some tin foil hats.