Cision
Author
Amber Moore
Date Published
Mar 16, 2022
Tags
Video understanding
Twelve Labs, the video search and understanding company, today announced the launch of a first-of-its-kind cloud-native suite of APIs that enables comprehensive video search in less than one second. The company’s proprietary video understanding AI system can locate exact moments almost instantly across massive video archives, making video search as fast and easy as CTRL+F.

Index Ventures led Twelve Labs’ $5 million seed funding round and Index partner Kelly Toole joined the board of directors. Radical Ventures, Expa and Techstars Seattle also participated, along with angel investors Dr. Fei-Fei Li of Stanford University, Alexandr Wang, CEO of Scale AI, and Jack Conte, CEO of Patreon.

“Videos are becoming the fundamental method by which we share, consume, and store information online,” said Kelly Toole, Partner at Index Ventures. “And yet, despite their ubiquity, video search today relies heavily on simple things like keywords, tags, and titles. This leaves the richness of the actual content largely untapped. Twelve Labs is changing this with truly transformational technology that will power the next generation of video-centric products.”

Innovative Solution for the Modern Landscape

Eighty percent of the world’s data now resides in video form. Almost every part of our lives is deeply rooted in video: organizational knowledge, meetings, and communications; online learning; and our entertainment needs. It has reached the point where Gen-Zers spend one-third of their waking hours watching or creating video content.

Twelve Labs’ technology makes it possible for any company to unlock the power of video for the first time. From locating noteworthy discussion points within an organization’s extensive Zoom recordings, to retrieving urgently needed scenes from a media company’s footage archive, to pinpointing sensitive content on a streaming platform, the Twelve Labs video search API enables text-based semantic search that is as easy as CTRL+F. The cloud-native video understanding infrastructure is powered by an AI system that understands visual and conversational contexts to make sense of any scene or video moment, without manual input.
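As a rough illustration of what such a text-based semantic search call might look like, the sketch below builds an HTTP request against a hypothetical search endpoint. The base URL, endpoint path, payload fields, and header name are assumptions for illustration only, not Twelve Labs’ documented contract; see https://docs.twelvelabs.io/ for the actual API.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder credential
BASE_URL = "https://api.twelvelabs.io/v1"  # assumed base URL, not verified

def build_search_request(index_id: str, query: str) -> urllib.request.Request:
    """Build a semantic-search request for a video index.

    The endpoint path, payload fields, and auth header are illustrative
    assumptions, not the documented API contract.
    """
    payload = {
        "index_id": index_id,          # which video index to search
        "query": query,                # free-text semantic query
        "search_options": ["visual", "conversation"],  # assumed modality flags
    }
    return urllib.request.Request(
        f"{BASE_URL}/search",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

# Build (but do not send) a request for a natural-language query.
req = build_search_request("my-index", "CEO discussing quarterly targets")
print(req.full_url)  # https://api.twelvelabs.io/v1/search
```

The point of the sketch is the shape of the interaction: a single free-text query over an indexed video archive, rather than keyword or tag matching.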

“There is nothing in the world like Twelve Labs,” said Pedro Almeida, CEO and co-founder of Mindprober. “We’ve tried so-called solutions from tech giants, and Twelve Labs is so far ahead of what they can do. Twelve Labs was not only easy to integrate, but it finds what’s valuable, and the accuracy of results is astounding. Knowing that I can reliably access any information we need in our video data opens new doors to business areas we’ve only imagined.”

The Power of Twelve Labs’ Technology

Twelve Labs’ groundbreaking AI technology recently won first place in the world’s largest competition for video understanding, the ICCV VALUE (Video and Language Understanding Evaluation) Challenge - Video Retrieval Track. The company’s video understanding algorithm, ViSeRet, beat out several of the largest, most advanced tech companies, such as Baidu and Tencent, and outperformed Microsoft’s state-of-the-art baseline model in the video retrieval (search) track. Twelve Labs has also secured support from some of the brightest luminaries in AI, including Fei-Fei Li (Stanford), Silvio Savarese (Stanford), Oren Etzioni (AI2), and Aidan Gomez (co-author of the Transformer paper), as well as founders of innovative companies, including Alexandr Wang (Scale), Aaron Katz (Clickhouse), John Kim (Sendbird), Dug Song (Duo Security), Jean Paoli (Docugami), and more.

“Twelve Labs has a true understanding of video through billions of AI parameters, and search is just the beginning,” said Jae Lee, CEO and co-founder of Twelve Labs. “Our mission is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure.”

To read about Twelve Labs’ founding story and dive further into its mission and technology, go here.

To get the Twelve Labs API, starting today, go to https://docs.twelvelabs.io/.

About Twelve Labs

The Twelve Labs team believes that to understand video is to understand the world. To this end, the company was founded to make video instantly, intelligently, and easily searchable. Twelve Labs’ state-of-the-art video understanding technology enables the accurate and timely discovery of valuable moments within an organization’s vast sea of videos so that users can do and learn more. The company is backed by leading venture capitalists, AI luminaries, and founders of cutting-edge technology companies. It is headquartered in San Francisco, with an APAC office in Seoul. Learn more at twelvelabs.io.

