CREATING A CAR COMMERCIAL BY TAKING A HYBRID APPROACH WITH AI & LIVE ACTION PRODUCTION
THE OPPORTUNITY
After meeting with brands and agencies over the past year about AI and its applications in creating advertising content, we’re consistently asked: “How can we apply AI to our commercial content production process?” “Can AI deliver high-quality brand storytelling and fidelity?” and “Will AI allow us to be more efficient?”
We wanted to put our money where our mouths are, so we wrote and produced a :45 experimental spot for the Land Rover Defender.
THE PROS & CONS OF AI / LIVE ACTION
Rather than rushing to create everything with AI alone, we took an approach that uses AI only for the portions of the spot where we felt it could deliver to the standards we expect in commercial filmmaking.
We started by looking at what AI and live action each do well. GenAI platforms like Runway, Kling and Midjourney have gotten to a place where we can generate photo-real images and add realistic motion to them. Where GenAI is not strong is human performance and consistency of a moving product (e.g., a car). Taking this into account, we wanted to produce a car commercial that borrowed production techniques from each, while maintaining the level of brand storytelling and fidelity that a major brand would be happy with.
THE CONCEPT
Taking the above into account, we created a script intended to showcase the adventurous nature of the Land Rover Defender in a fun and playful way. Ben and Erich (co-directors) wrote a script about a Defender that is tired of its monotonous life: what if a car could dream? Where would it want to go?
If the car is “living” in a major metropolitan area, its days are filled with commutes and everyday errands like driving to restaurants and the grocery store. A Defender would want to unleash its inner adventurer and go off-roading: to the mountains, driving through mud and snow, etc.
So we crafted a :45 spot around this called “Who Says Cars Can’t Dream?”
THE SHOOT
We shot for one day. On the day of the shoot, we focused on telling the parts of the story that we didn’t feel AI could currently achieve – human performance and capturing the Defender in motion.
RESPONSIBLE AI APPROACH
We took images that we photographed ourselves (350 stills) and ran these through the motion software listed below. This ensured we knew the source images were owned by Tool. In addition, we generated imagery using Midjourney, leveraging some of our photographed shots for tone, composition and lighting reference. Since we were using Midjourney only for environmental shots, we felt comfortable with this approach, as this is low-risk content to generate.
HOW WE LEVERAGED AI
We scripted the concept with the production logistics in mind. We knew that we wanted to create a spot where we used photoreal AI generated visuals as our “dream” footage, allowing the live action footage in the opening and closing of the spot to book-end the AI content. AI generated content made up about 50% of the visuals in the :45 spot. We were unsure whether we wanted to include a shot of the Defender generated in AI. We were concerned that it wouldn’t be at a fidelity level that a brand like Land Rover would approve. After pushing ourselves, we got results that we were happy with. You can see the AI generated Defender at the :15 mark in the spot.
HERE IS OUR PROCESS
1. Generate Imagery: Based on the “dream” concepts in the script, we generated stills in Midjourney and photographed environmental scenes. We then processed these stills through AI video platforms to generate motion. In all, we generated over 2,400 images and selected 28 AI images per spot to add motion to and incorporate into the edit.
2. Photo-Real Defender: We challenged ourselves to see if we could create a photo-real version of the Defender in AI. Having worked in the industry for many years, we recognize that a brand’s legal and creative teams won’t approve this unless it’s a 1:1 match of the car. After several iterations, we felt good about the quality of the car within our visuals. Our approach was to take images of the Defender and retouch them in Photoshop (using Adobe’s AI tools) to refine the overall output, as well as to correct features of the car using plates captured on the shoot day.
3. Add Motion To the Imagery: We ran each selected image through three different AI video programs and then selected the one that created the most realistic motion. In total, we generated over 1,300 videos to deliver to our editor, who determined which ones made the final edit. Here are some of our key takeaways from this project:
RUNWAY
- Gen3 Turbo is good for connecting shots.
- Gen3 Alpha is better in details and responds better to prompts.
- Runway makes animations brighter and more contrasty.
- Allowed us to transpose settings between images. For example, once we landed on a shot we were happy with, we’d use the prompt and seed to generate very similar motion with alternate images. This was very useful when creating the snow and mud variations.
LUMA
- Better for motion blur.
- Can often appear more cinematic (especially without hard edges).
KLING AI
- Kling AI can generate very dynamic shots.
- It retains much of the original color space.
- It supports negative prompts, which Runway and Luma do not.
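The fan-out-and-select workflow above (run each still through all three platforms, then let the editor pick the clip with the most realistic motion) can be sketched in pseudocode. Note that the generator functions below are hypothetical stand-ins, not the platforms’ real APIs — in practice each service has its own SDK and endpoints:

```python
# Sketch of a fan-out workflow: send each selected still to three
# video-generation services and collect all candidates for editorial review.
# The *_generate functions are hypothetical placeholders, not real APIs.

def runway_generate(image, prompt, seed=None):
    return {"provider": "runway", "image": image, "prompt": prompt, "seed": seed}

def luma_generate(image, prompt, seed=None):
    return {"provider": "luma", "image": image, "prompt": prompt, "seed": seed}

def kling_generate(image, prompt, seed=None, negative_prompt=None):
    # Kling additionally supports negative prompts.
    return {"provider": "kling", "image": image, "prompt": prompt,
            "seed": seed, "negative_prompt": negative_prompt}

def fan_out(images, prompt, seed=None, negative_prompt=None):
    """Run every still through all three platforms; the editor later
    selects whichever clip has the most realistic motion."""
    candidates = []
    for image in images:
        candidates.append(runway_generate(image, prompt, seed))
        candidates.append(luma_generate(image, prompt, seed))
        candidates.append(kling_generate(image, prompt, seed, negative_prompt))
    return candidates

# Reusing the same prompt and seed across alternate stills (the "transpose
# settings" trick noted above) keeps motion consistent between variations,
# e.g. the snow and mud versions of the same shot.
clips = fan_out(["mud_01.png", "snow_01.png"],
                prompt="slow push-in, drifting fog", seed=42)
```

Holding the prompt and seed fixed while swapping the input still is what made the snow/mud variations feel like the same shot in different conditions.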
4. Upscaling The Content: We used Magnific to upscale the content to the resolution we’d need for a broadcast-level spot, and to add more realism to the AI-generated visuals.
5. Versioning & Personalizing The Spots: We created two “hero” versions of the spot, one focused on off-roading activities and the other on snow/skiing. Since we were using AI to generate the footage, it was much easier to create different versions.
Off-Road Adventure
Winter Adventure
AI isn't perfect; here are some outtakes:
THE BENEFITS
Taking a hybrid approach allowed us to use the best of what live action and AI each allow for in creating commercial content. We estimate this production would have taken 2-3 days of shooting if we had gone the traditional route. Instead, we mixed a one-day shoot with AI-generated driving footage, which gave us a wider range of locations and weather conditions without requiring travel or extensive VFX.
CREDITS
Creative Lead & Production Company: Tool
Co-Directors: Ben Tricklebank & Erich Joiner
Executive Producer: Dustin Callif
Head of Production: Amy Delossa
Director of Photography: Justin Gurnari
AI Image Generation: Ben Tricklebank & Janos Deri
AI Motion: Ben Tricklebank & Ariel Klevecz
Still Photography: Ben Tricklebank
Music Composition: Marco Lehmann
Post Production: Therapy Studios
Executive Producer: Margaret Ward
Editor: Doobie White
Assistant Editor: Hope Abrom
Colorist: Omar Inguanzo
Sound Designer & Mixer: Dillion Cahill & Dori Holly
Flame Artists: Faby Zumaran, Wren Waters, & David Rivas