Author
Jae Lee
Date Published
June 04, 2024
Tags
Investment

We’re incredibly thrilled to announce our Series A funding co-led by New Enterprise Associates (NEA) and NVIDIA's NVentures, with participation from previous investors including Index Ventures, Radical Ventures, Wndr Co, and Korea Investment Partners.

Over the last three years we've been laser-focused on building the best video understanding platform, driven by leading perceptual-reasoning research. We've brought together some of the sharpest minds in research, engineering, product, and go-to-market to make our vision a reality. We believe that AI systems need to learn from video to understand the world the way humans do, which is why a video-first multimodal approach is critical for solving human-level perceptual-reasoning problems.

We're incredibly lucky to serve tens of thousands of users across a variety of industries. We're grateful to our early backers, partners, customers, and users for their invaluable support and, most importantly, for their confidence in trusting us with their most critical assets and workflows.

Today, our state-of-the-art Marengo 2.6 and Pegasus 1.0 models, and now our Embeddings API, are at the cutting edge of multimodal AI. We've seen great interest from some of the best partners across media & entertainment, advertising, automotive, and more. We have a clear roadmap of research and products ahead, and we're excited to build on the strong foundation we've established as a business.

While we've built a world-class team, industry-leading models, a wealth of data expertise, and an incredible customer base, we still have a lot of work to do. We're excited to take on the gnarliest research and engineering challenges head-on to push the boundaries of what's possible with multimodal AI and help more users across more industries solve more problems.

This funding will help us invest in our most valuable assets: our team and our industry-leading multimodal models. We have big plans to expand the team across functions and have open roles across the org. If you're intrigued by working on tough problems with highly dedicated peers, please reach out; we'd love to chat.

Jae, on behalf of the Twelve Labs Team

