
Generative AI Video Tools: Google VEO 3 vs Adobe Firefly

For our most recent GenAI commercial, we combined several tools: Google VEO 3, Adobe Firefly, Adobe Express, and Adobe Premiere.

We used Google VEO 3 for most of our AI footage and Adobe Firefly for a supplementary scene. Though VEO 3 footage can be edited directly in Google Flow, we decided to use Premiere for editing.

Here’s what we learned about each tool.

Google VEO 3 vs Adobe Firefly

As an Adobe house, we began our AI journey with Firefly. It's fully capable of generating simple b-roll clips like landscapes, drone-style footage, and, as in this latest spot, floating donuts. It did not, however, generate people very well.

Below are a couple of clips generated with Adobe Firefly. As you can see, the drone shot flying over a mountain lake is pretty amazing, but the businessman looks off.

[Firefly clip: drone shot over a mountain lake]

[Firefly clip: businessman]

Of course, the biggest issue with Adobe Firefly was that it did not generate audio (as of July 2025).

That’s when we turned to Google VEO 3.

VEO 3 generated lifelike people with clear, accurate voices and background sounds. Check out these generations in our spot for Primo's Donuts:

As noted in a previous post, Google VEO 3 wasn't perfect, and we had to apply some human creativity to make it all work. We also had to make some compromises. For example, we originally intended to use various accents, but VEO 3 did not generate them consistently. Since we were tight on budget and could not purchase more credits, we ultimately decided to drop the accents.

Here are a few examples of the types of problems we faced with the generations.

Prompt: Medium shot of an elderly Asian woman selling fruit at a busy farmer's market. She is holding a donut in one hand and a bag of fruit in the other hand. She hands the bag of fruit to a customer, then turns to the camera and in a thick Asian accent, says, "I'm here for the donuts."

In this first clip, VEO 3 did not generate audio. Also, she has very slippery fingers and doesn't seem to care.

[VEO 3 clip: first generation, no audio]

In this second generation, her accent is pretty good, but she doesn't look at the camera, which would have been inconsistent with the other clips in the video. She also does a funny thing with the bag of fruit.

[VEO 3 clip: second generation]

We then decided to use a younger woman and created two more clips. The first one is pretty good, but she makes a strange facial expression and also stutters.

[VEO 3 clip: third generation, younger woman]

We finally got one we could work with on the fourth generation, but we had to edit around the strange facial expressions and head movement.

[VEO 3 clip: fourth generation]

So, to create this short two-second scene, we had to generate four clips and edit around their flaws to make it work. The generations for the other scenes had similarly strange issues, so we had to generate multiple clips for each and lean on some creative editing.
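If you'd rather script this kind of multi-take generation instead of re-running prompts by hand in Flow, here's a minimal sketch using Google's google-genai Python SDK. The model id, config fields, and API key placeholder are our assumptions, not part of our actual workflow; the video APIs change quickly, so check the current Gemini API docs (and your credit balance) before running anything.

```python
# Sketch: request a couple of candidate takes of one Veo prompt, then download
# them so an editor can pick the most usable clip.
# The model id and config fields below are assumptions -- verify against the
# current Gemini API docs before use.
import time
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

PROMPT = (
    "Medium shot of an elderly Asian woman selling fruit at a busy farmer's "
    "market. She hands a bag of fruit to a customer, then turns to the camera "
    "and says, 'I'm here for the donuts.'"
)

# Kick off an asynchronous video generation job.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",   # assumed Veo 3 preview model id
    prompt=PROMPT,
    config=types.GenerateVideosConfig(
        number_of_videos=2,             # ask for multiple takes per request
        aspect_ratio="16:9",
    ),
)

# Poll until the job finishes; generation can take a few minutes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Save each take locally for review in Premiere (or Flow).
for i, generated in enumerate(operation.response.generated_videos):
    client.files.download(file=generated.video)
    generated.video.save(f"take_{i}.mp4")
    print(f"Saved take_{i}.mp4")
```

Every extra take costs credits, so in practice you'd keep the number of videos low and only re-run when a take has a blocking flaw.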

Because of budget constraints (we ran out of VEO 3 credits), we used Firefly for the scene with the floating donuts.

Google Flow vs Adobe Premiere

Flow is integrated with Google VEO 3, but we decided to edit in Premiere since that's what we use for our traditional videos. We were also incorporating footage from Firefly and creating additional sound and graphic assets. For projects built solely from VEO 3 footage, though, Flow is a good choice in most situations.

Everything in GenAI will change in a few months

This technology is moving at light speed, so we won't be surprised if, this time next year, we're using different tools altogether, like OpenAI Sora or others. We also plan to use Midjourney a lot more as we develop our Picturelab AI services.

To learn more about Picturelab AI, contact us at info@picturelab.com.
