
Runway, co-creator of Stable Diffusion, launches Gen-1, its first generative AI for video

After helping to create Stable Diffusion, the AI image-generation tool, the startup Runway is going a step further and has announced Gen-1: a powerful and efficient application capable of modifying existing videos and films and transforming them into completely new ones using only a text input.

It seems Runway hasn’t been able to contain its innovative and creative spirit, having helped give birth to Stable Diffusion only a year ago. But there are two clear differences between that tool and the new Gen-1. The first is that Stable Diffusion was open source, while Gen-1 is expected to remain in the hands of a small group of users for now. The second is that Stable Diffusion generated images with AI, whereas Gen-1 will be able to transform existing videos into completely new ones from a reference image or a text prompt, drawing on its algorithmic capabilities.

This is how Gen-1 works

To launch Stable Diffusion, Runway drew inspiration from other image generators such as Midjourney and DALL-E, but with Gen-1 it becomes a pioneer in this field. A demo reel published on the Runway website shows that the software can turn clips of people into plasticine puppets, and even transform a stack of books into an urban landscape. For the company’s CEO, Cristóbal Valenzuela, “2023 will be the year of video.”

Gen-1 runs in the cloud, on the Runway website. Since it is not open source, only certain users will have access to it, and there is already a substantial waiting list managed through a Google Docs form. Valenzuela’s intention is to put the new tool in the hands of creative professionals and, eventually, to generate complete feature films to be watched online.

Gen-1 will have five possible areas of use:

  • Stylization: Transfer the style of an image to a video.
  • Storyboard: Turn mockups into stylized, animated renders.
  • Mask: Transform elements of a video using simple text prompts.
  • Render: Add textures and effects to renders that lack them.
  • Customization: Adapt the model to produce more complex results.

A laborious process

Gen-1 is the result of years of hard work at Runway before arriving at this breakthrough. Since the company’s founding in New York in 2018, Runway has not stopped providing tools for TikTokers, YouTubers, and film and television studios. The creators of ‘The Late Show’ with Stephen Colbert used Runway’s software to edit the show’s graphics, and the film ‘Everything Everywhere All at Once’ used its technology to create visual effects in certain scenes.

We are talking about Gen-1 in 2023, but as early as 2021 Runway collaborated with researchers from the University of Munich to devise the first version of Stable Diffusion, together with Stability AI, the British startup that covered the computing costs. A year later, the project had become a global phenomenon.

Curiously, the ties between the two companies no longer exist. Meanwhile, companies such as Getty Images have sued Stability AI for using their images without permission.

Gen-1 follows in the footsteps of other text-to-video AI models such as Meta’s Make-A-Video and Google’s Phenaki, which can produce video clips from scratch. It also bears clear similarities to Dreamix, a generative AI tool from Google that creates videos from existing ones by applying specific styles, though not with the same quality as Gen-1. Unlike its main competitors Google and Meta, however, Runway has built its model with customers in mind, aiming to cultivate a community of video creators.

Other AI projects

Alongside the efficient and economical Gen-1 visual-effects system, Runway offers an artificial-intelligence-based tool called Soundify, with which it wants to revolutionize the audio sector. Soundify takes a video as input, analyzes what it depicts and what happens in it, and then generates the corresponding audio.

In short, generative artificial intelligence is here to stay, and its models are becoming more powerful, complex, and malleable, so they will know few limits in sectors such as cinema.
