Runway (company)
American artificial intelligence company
Runway AI, Inc. (also known as Runway and RunwayML) is an American company headquartered in New York City that specializes in generative artificial intelligence research and technologies.[1] The company is primarily focused on creating products and models for generating videos, images, and various multimedia content. It is most notable for developing the commercial text-to-video and video generative AI models Gen-1, Gen-2,[2][3] Gen-3 Alpha[1] and Gen-4.[4]
| | |
|---|---|
| Company type | Private |
| Industry | Artificial intelligence, machine learning, software development |
| Founded | 2018 |
| Headquarters | Manhattan, New York City, U.S. |
| Area served | Worldwide |
| Products | Gen-1, Gen-2, Gen-3 Alpha, Frames, Gen-4 |
| Number of employees | 86 |
| Website | runwayml.com |
Runway's tools and AI models have been utilized in films such as Everything Everywhere All At Once,[5] in music videos for artists including A$AP Rocky,[6] Kanye West,[7] Brockhampton, and The Dandy Warhols,[8] and in editing television shows like The Late Show[9] and Top Gear.[10]
History
The company was founded in 2018 by Chileans Cristóbal Valenzuela[11] and Alejandro Matamala and Greek Anastasis Germanidis, who met at the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts.[12] The company raised US$2 million in 2018 to build a platform for deploying machine learning models at scale inside multimedia applications.
In December 2020, Runway raised US$8.5 million[13] in a Series A funding round.
In December 2021, the company raised US$35 million in a Series B funding round.[14]
In August 2022, the company co-released Stable Diffusion, an improved version of its latent diffusion model, together with the CompVis Group at Ludwig Maximilian University of Munich and with a compute donation from Stability AI.[15][16]
On December 21, 2022, Runway raised US$50 million[17] in a Series C round, followed by a US$141 million Series C extension round in June 2023 at a US$1.5 billion valuation,[18][19] with investors including Google, Nvidia, and Salesforce,[20] to build foundational multimodal AI models for content generation in film and video production.[21][22]
In February 2023, Runway released Gen-1 and Gen-2, the first commercially available foundational video-to-video and text-to-video generation models,[1][2][3] accessible via a simple web interface.
In June 2023 Runway was selected as one of the 100 Most Influential Companies in the world by Time magazine.[23]
On 3 April 2025, Runway raised $308 million in a funding round led by General Atlantic, valuing it at over $3 billion.[24][25]
Services and technologies
Runway is focused on generative AI for video, media, and art. The company focuses on developing proprietary foundational model technology that professionals in filmmaking, post-production, advertising, editing, and visual effects can utilize. Additionally, Runway offers an iOS app aimed at consumers.[26]
The Runway product is accessible via a web platform and through an API as a managed service.
Stable Diffusion
Stable Diffusion is an open-source deep-learning text-to-image model released in 2022, based on the paper High-Resolution Image Synthesis with Latent Diffusion Models published by Runway and the CompVis Group at Ludwig Maximilian University of Munich.[27][28][16] Stable Diffusion is mostly used to generate images conditioned on text descriptions.
Gen-1
Gen-1 is a video-to-video generative AI system that synthesizes new videos by applying the composition and style of an image or text prompt to the structure of a source video. Released in February 2023, the model was trained and developed by Runway based on the paper Structure and Content-Guided Video Synthesis with Diffusion Models from Runway Research.[29]
Gen-2
Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. The model is a continuation of Gen-1 and adds a modality for generating video conditioned on text. Gen-2 was one of the first commercially available text-to-video models.[30][31][32][33]
Gen-3 Alpha
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.[2]
Training data for Gen-3 was sourced from thousands of YouTube videos and potentially pirated films. A former Runway employee alleged to 404 Media that there was a company-wide effort to compile videos into spreadsheets, which were then downloaded using youtube-dl through proxy servers to avoid being blocked by YouTube. In tests, 404 Media found that the names of YouTubers would generate videos in their respective styles.[34]
Gen-4
On March 31, 2025, Runway released its latest flagship model, Gen-4.
Gen-4 can generate consistent characters, locations, and objects across scenes. Once a look and feel are set, the model maintains coherent world environments while preserving the distinctive style, mood, and cinematographic elements of each frame, and it can regenerate those elements from multiple perspectives and positions within scenes.[4]

Gen-4 can combine visual references with written instructions to create new images and videos with consistent styles, subjects, and locations.[4] From a single reference image, it can render the same characters across varied lighting conditions, locations, and treatments.[4]

Runway states that Gen-4 generates highly dynamic videos with realistic motion, maintains subject, object, and style consistency, and offers strong prompt adherence and world understanding.[4]
AI Film Festival
Runway hosts an annual AI Film Festival[35] in Los Angeles and New York City.[36][37]