
Runway Act-One generates animations from video and voice inputs

DATE POSTED: October 24, 2024

Runway has announced the release of its latest tool, Act-One, designed to enhance character animation with greater realism and expressiveness. This new addition to the Gen-3 Alpha suite marks a significant advancement in how generative models are used for creating live-action and animated content.

How does Runway’s Act-One work?

Traditionally, creating facial animations has required complex workflows involving motion capture, manual face rigging, and multiple footage references. These methods aim to capture and replicate an actor's performance in a digital character, but the challenge lies in preserving the original emotion and nuance of that performance.

With Act-One, Runway introduces a streamlined process. The tool generates animations directly from an actor’s video and voice performance, removing the need for additional equipment like motion capture devices. This simplification makes it easier for creators to animate characters without compromising the expressiveness of the original performance.

Act-One is versatile, allowing creators to apply animations to a wide variety of reference images, regardless of the proportions of the source video. It can accurately translate facial expressions and movements into characters that may differ in size and shape from the original. This opens new doors for inventive character design, particularly in fields like animated content creation.

The tool also shines in live-action settings, producing cinematic, realistic outputs that maintain fidelity across different camera angles. This functionality helps creators develop characters that resonate with viewers by delivering genuine emotion and expression, strengthening the connection between audience and content.


Runway is positioning Act-One as a solution for creating expressive dialogue scenes that were previously difficult to achieve with generative models. With only a consumer-grade camera and a single actor, creators can now generate scenes involving multiple characters, each portrayed with emotional depth.

“Our approach uses a completely different pipeline, driven directly and only by a performance of an actor and requiring no extra equipment,” Runway said in its blog post, highlighting the tool’s focus on ease of use for creators.

Runway remains committed to ensuring its tools are used responsibly. Act-One comes with a range of safety features, including measures to detect and block attempts to create content featuring public figures. Additional protections include verifying that users have the rights to the voices they create using Custom Voices and continuously monitoring for potential misuse of the platform.

“As with all our releases, we’re committed to responsible development and deployment,” the company stated. The Foundations for Safe Generative Media serve as the basis for these safety measures, ensuring that the tool’s potential is used in a secure, ethical way.

A broader vision for the future of animation

With the gradual rollout of Act-One starting today, Runway aims to make advanced animation tools more accessible to a wider range of creators. By removing barriers to entry and simplifying the animation process, the company hopes to inspire new forms of creative storytelling.

“Act-One is another step forward in our goal of bringing previously sophisticated techniques to a broader range of creators and artists,” Runway emphasized.

Featured image credit: Runway