Author
VP Land
Date Published
Apr 26, 2024
Tags
Multimodal AI
Foundation models
Video Search API
Generative AI

VP Land sits down with Soyoung Lee from Twelve Labs, a company building multimodal video foundation models that can understand everything inside a video. Their technology aims to transform how creators and businesses search, analyze, and manage their video content.


⏱ CHAPTERS

00:00 Intro

00:20 What is Twelve Labs?

01:07 Building video-first AI models

02:36 Twelve Labs' unique approach

03:49 Applications of Twelve Labs' technology

06:00 A single, powerful model

06:57 Image and audio-based video search

08:08 AI in the media landscape

📰 CREDITS

Producer & Host: Joey Daoud

Production Manager: Steve Regalado

Director of Photography: Justin Guo

Editors: Kirill Slepynin & Jan Gonzales

Graphic Designer: Kristina Leongard


Related articles

A Recap of Our Multimodal AI in Media & Entertainment Hackathon in Sunny Los Angeles!
Twelve Labs co-hosted our first in-person hackathon in Los Angeles!
James Le

Introducing the Multimodal AI in Media & Entertainment Hackathon
Twelve Labs will co-host our first in-person hackathon in Los Angeles!
James Le

S.Korea's Twelve Labs ranks among world's top 50 generative AI startups
The company has independently developed a massive AI model geared toward video understanding.
The Korea Economic Daily

AI 100: The most promising artificial intelligence startups of 2023
Twelve Labs recognized as one of the most innovative AI companies in search by CB Insights.
CB Insights