Generative AI has long been a concern for content creators. Whether you are an artist creating paintings or a writer experimenting with a new way of telling a story, your work can potentially feed the training data behind an AI model's generative capabilities.
Getty Images, a provider of stock images, sued Stability AI last year for using its images to train generative AI without authorisation. Some AI-generated pictures even carried a distorted “Getty Images” watermark!
From analysing our shopping habits to fixing our grammar, AI has been part of our lives for some time without us realising it.
The public became far more aware of AI once OpenAI's services like ChatGPT and DALL·E gained popularity and accessibility. This opened the door for other services to add new dimensions to their offerings; Canva, for example, added an AI image generator to its platform.
As writers here on Vocal, we have watched plenty of unattributed AI-generated stories and poems appear on the Top Stories tab, so we have felt the effects of AI too.
While many AI-enabled features have proved harmful for artists and content creators, others enhance creativity and support creators. AI has been and will continue to be a controversial topic, and the next step in generative AI will only fan the flames of these arguments.
I recently watched a video on Marques Brownlee’s YouTube channel that shocked me. In it, Marques talked about the latest innovation in generative AI: video generation.
Once again, OpenAI has created a new way to use generative AI. It has gone from generating text to images and now even video.
In his video, Marques compares an AI-generated video from a year ago with what OpenAI’s Sora can create now. The difference is staggering. AI video generation has advanced by leaps and bounds in a single year, to the point where it can produce videos realistic enough to fool many of us. Here is a link to their website; explore it yourself when you have the time.
Let’s think about what this could mean. Should this tool exist? A few years ago, deepfake videos were wreaking havoc on the internet. Deepfakes are videos in which some aspect, most often a person’s face, has been digitally swapped or altered. One example puts Jim Carrey’s face on Jack Nicholson’s body in The Shining. Check it out here.
For a deepfake, AI software scans a face and maps it onto an existing video. With Sora, on the other hand, you can create whole videos from scratch! Making a deepfake presumably requires some video-editing skill, but with this new technology, anyone can do it. Can you imagine the chaos?
Marques put these issues into perspective by talking about elections. This kind of technology could become a tool for one party to sabotage another by creating fake videos of the opposing candidate doing something they shouldn’t be doing.
Taking this even further, the mere existence of this technology casts doubt on legitimate videos. You can imagine how useful that would be to a politician; it would almost be a “get-out-of-jail-free” card.
As with other AI tools, this is not all doom and gloom. The primary use I thought of for this kind of technology, which Marques mentioned in the video, is creating stock footage.
Many writers on Vocal have used AI tools like Nightcafe to create a cover image for their story because they had something specific in mind that they couldn’t find on stock image websites. Sora would solve that problem for video creators with similarly specific needs.
While a fascinating innovation, Sora raises many questions: How will this affect the stock video industry? How will it affect photographers and videographers? Will it have restrictions to prevent deepfake-style abuse? Will it be as readily available as image-generating AI?
I will be waiting excitedly for answers to these questions over the next few months (if the speed of AI innovation is anything to go by).
****
Check out my article on Google’s Gemini AI here.
Check out Marques Brownlee’s video below: