Like everyone else in the video industry, our Picturelab team has been following the developments of Sora, Runway, Adobe, and other innovations in AI-generated video.
And like everyone else, we’ve been blown away by how far it’s come in just a few months.
While we acknowledge that some aspects of our industry will disappear forever, we’re more excited about how AI will augment our capabilities and expand our creativity.
Here are some thoughts on how AI will affect our services.
How AI will affect b-roll for post
It’s widely agreed that stock footage licensing, as we know it, will soon go the way of the flip phone: it won’t disappear overnight, but it will eventually. As a creative video production company, we were never in the business of creating stock content, but we do have hundreds of hours of b-roll footage that we’ve captured in the course of our productions.
Some of the b-roll we’ve collected couldn’t have been created with GenAI (e.g. a customer testimonial shot with the client on camera), but for generic footage, a simple prompt in Sora can generate the clip we need.
This is obviously significant for post-production. We’ve spent hundreds of hours sourcing, reviewing and obtaining client approval on stock clips. There are tools like Invideo AI that automatically pull stock clips from libraries like iStock and edit them into a video, but currently the deliverables are not professional quality. Over time, video generators will use GenAI instead of stock clips, which will significantly improve the final product. We’ll still need to spend some time manually tweaking a video to get it just right, but GenAI, and the video generators that will leverage it, will make the post workflow much more efficient.
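For readers curious about what that prompt-to-b-roll step could look like under the hood, here’s a minimal sketch, assuming a generic video-generation API. The endpoint, parameters, and response fields below are illustrative placeholders, not a real Sora integration or a tool we use in production today.

```python
import os
import time

import requests

# Hypothetical video-generation service -- the URL, request fields, and
# response shape are illustrative assumptions, not a documented Sora API.
API_URL = "https://api.example-video-gen.com/v1/generations"
API_KEY = os.environ["VIDEO_GEN_API_KEY"]


def generate_broll(prompt: str, seconds: int = 8) -> str:
    """Request a generated b-roll clip and return a download URL."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # Kick off the generation job with a plain-language shot description.
    job = requests.post(
        API_URL,
        headers=headers,
        json={"prompt": prompt, "duration_seconds": seconds, "resolution": "1080p"},
        timeout=30,
    ).json()

    # Poll until the clip is rendered (generation APIs are typically asynchronous).
    while True:
        status = requests.get(
            f"{API_URL}/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "completed":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)


if __name__ == "__main__":
    url = generate_broll(
        "Aerial drone shot of a city skyline at golden hour, slow push-in"
    )
    print("Generated b-roll ready at:", url)
```

The specifics don’t matter; the point is that the kind of shot description an editor would type into a stock-library search box becomes a prompt that returns a finished clip.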
How AI will affect b-roll for production
GenAI’s impact on b-roll will also improve production. We don’t expect Sora to replace all creative filmmaking; any production with a specific artistic vision, whether it’s a commercial or a feature film, can’t rely solely on AI.
What GenAI can help with is producing shots that are b-roll in nature but not as generic as stock footage. For example, say we’re shooting a commercial for a mobile app. An actor in a dark gray jacket stands on a historic street in Seoul looking at his phone. While we still have to shoot the actor’s face, we can generate the close-up of his hand and the phone. Normally, we’d have to take time to set up that shot (either on location or on a stage), but if AI can generate a realistic clip, that’s one setup we can cut from the schedule.
Leveraging generative fill in video production
In some cases, we’re already leveraging AI to correct minor audio and dialogue errors. This has resulted in significant cost savings, since we don’t need actors to re-record or our editors to do ADR.
We’ll also be able to use generative fill to easily correct things like wardrobe mistakes or remove objects from a shot. Again, this is a huge benefit for post-production, but also for production. We spend valuable time on shoots physically removing unwanted objects, or settling for a different angle or location when they can’t be moved. With generative fill, it’s much easier to change, replace or remove these items in post. Shoots become more efficient and more creative, since we can focus on impactful filmmaking rather than spending production time on annoying tasks like unscrewing a wall fixture.
AI will unleash creativity, not restrain it
Runway and Adobe have already made rotoscoping easier, and with AI’s ability to generate assets quickly, high-level creative productions won’t demand huge budgets. Our editors and animators won’t need days or weeks to create Hollywood-quality effects.
Case in point: we shot a commercial for Nvidia in the early 2010s to announce the launch of Tegra 3. The idea was to convey that the GPU could empower users to do any media activity from anywhere. Shot in POV with a clever rig our team conjured up overnight, it involved a roller coaster, a stunt plane, a hot air balloon, a boxing ring, and several other locations. Needless to say, it was a very ambitious concept that would have taken weeks to pull off in post, so we shot everything practically. The budget was in the six figures, definitely doable for a company like Nvidia, but not for most of our other clients. With GenAI, even the wildest ideas could be explored by companies of any size.
Our concepts will always have a ceiling due to time and budget, but with AI, that ceiling will be much, much higher.
In Conclusion
With the development of Generative AI for video, many brands will decide to create marketing videos themselves. But Generative AI is a tool, and there will always be a place for creatives who can effectively craft with that tool. Agencies like Picturelab will leverage GenAI to augment our services and provide our clients with elevated productions. Our mission – to create videos that drive results and inspire audiences to act – won’t change. Our ability to accomplish that mission will just be that much greater with GenAI. We’re excited!