On September 30, 2025, OpenAI, the company best known for developing ChatGPT and its 800 million weekly users, released Sora 2. Sora 2 is an artificial intelligence model that generates videos and images from any prompt a user provides. Compared with the first version, this new model was trained to produce higher-quality, more realistic videos. Access to the app is currently invite-only, and users can generate videos once they create an account. The program is free for those who have access.
Since Sora 2's release, many of the videos generated with it have been deepfakes of celebrities, characters from popular media, and content involving children. One set of clips circulating widely online depicts YouTube star Jake Paul giving makeup tutorials; many viewers said they could not tell the videos were entirely fake. Other videos created with the software include fictional commercials of young children playing with adult toys and other inappropriate content. Sora users also frequently generated content featuring copyrighted characters; one popular example was SpongeBob SquarePants depicted as a Nazi. This marked the beginning of trouble for OpenAI and led to new restrictions on Sora users.
OpenAI soon faced legal trouble over users freely generating content featuring characters protected by ownership rights. In response, the company began strictly enforcing copyright restrictions: any prompt referencing a copyrighted character now produces nothing. The decision outraged much of the Sora community, and users protested by generating parody images of celebrities criticizing OpenAI. The backlash did not meaningfully reduce usage of the app; it only produced more fabricated content depicting real-life individuals.
A growing concern with this generation of Sora is its ability to produce fake videos of real people, which can be used maliciously and spread disinformation across the internet. OpenAI appears to recognize these risks and has responded by placing further restrictions on the types of content that can be generated. In tests by The New York Times, Sora refused prompts involving graphic violence, political content, and famous celebrities who had not granted their permission. Generated videos also carry a watermark indicating their inauthenticity. Despite these precautions, users have found ways around them: with some editing and cropping, the watermark can easily be removed, and harmful videos have still surfaced despite the restrictions, including depictions of crimes such as robbery and shoplifting, false clips of global conflict, and videos involving children.
Public opinion following Sora 2's release is divided. Some believe this new technology will advance and benefit society, while others see no good outcome coming from it. At the current pace of progress, it is safe to assume similar tools will only continue to grow and improve in the near future. Only time will tell whether the effects of generative AI on society at large will be positive or negative.