We combine Twelve Labs' rich, contextual embeddings with Pinecone's vector database to store, index, and query these video embeddings, creating a powerful video chat application.

James Le, Manish Maheshwari, Alex Owen
Building a Video Highlight Generator with Twelve Labs

The YouTube Chapter Highlight Generator is a tool developed to automatically generate chapter timestamps for YouTube videos. By analyzing the video's content, it identifies key segments and creates timestamps that can be used to create chapters for better video navigation and user experience.

Hrishikesh Yadav
Building a Video Content Quiz Generator with Twelve Labs

The Video Content MCQ Generator is an innovative application designed to transform video-based learning experiences. Powered by Twelve Labs, it automatically generates Multiple Choice Questions (MCQs) from video content, making learning and assessment more engaging and efficient.

Hrishikesh Yadav
Crop and Seek: Experience Advanced Video Search

Crop and Seek demonstrates the power of advanced video search capabilities using the Twelve Labs API. By implementing both text and image-based search, along with the unique image cropping feature, this application provides a flexible and powerful tool for discovering relevant video content.

Meeran Kim
Accelerate Your Film Production with Twelve Labs

See how video foundation models can radically accelerate your filmmaking timeline.

Simran Butalia
Semantic Video Search Engine with Twelve Labs and ApertureDB

Learn how to build a semantic video search engine with the powerful integration of Twelve Labs' Embed API with ApertureDB for advanced semantic video search.

James Le
Supercharge Your Video Search with Twelve Labs and Roe AI

Combine Twelve Labs' video embedding model with Roe AI's data management and search capabilities.

James Le, Manish Maheshwari
Our SOC 2 Type 2 Certification

Twelve Labs has successfully completed its SOC 2 Type 2 audit, marking a significant milestone in our commitment to data security and privacy.

Ulises Cardenas
TWLV-I: Analysis and Insights from Holistic Evaluation on Video Foundation Models

Twelve Labs introduces a robust evaluation framework for video understanding, emphasizing both appearance and motion analysis.

Lucas Lee, Kilian Baek, James Le
Building a Video AI Interview Analyzer with Twelve Labs

The AI Interview Analyzer is a powerful tool designed to revolutionize the interview preparation process and assist in the hiring of employees.

Hrishikesh Yadav
Building An Olympic Video Classification Application with Twelve Labs

The Olympics Video Clips Classification Application is a powerful tool designed to categorize various Olympic sports using video clips.

Hrishikesh Yadav
Lights, Camera, AI-ction: Twelve Labs Brings Video-Language Models to Center Stage at NeurIPS 2024

Get ready for a blockbuster event as Twelve Labs hosts the first-ever Workshop on Video-Language Models at NeurIPS 2024!

Aiden Lee
Mastering Multimodal AI: Advanced Video Understanding with Twelve Labs + Databricks Mosaic AI

This article guides developers through integrating Twelve Labs' Embed API with Databricks Mosaic AI Vector Search to create advanced video understanding applications, including similarity search and recommendation systems, while addressing performance optimization, scaling, and monitoring considerations.

James Le
Building a Shade Finder App: Using Twelve Labs' API to Pinpoint Specific Colors in Videos

Whether you're looking to find the perfect berry-toned lipstick or just curious about spotting specific colors in your videos, this guide will help you leverage cutting-edge AI to do so effortlessly.

Meeran Kim
Building Advanced Video Understanding Applications: Integrating Twelve Labs Embed API with LanceDB for Multimodal AI

Leverage Twelve Labs Embed API and LanceDB to create AI applications that can process and analyze video content with unprecedented accuracy and efficiency.

James Le, Manish Maheshwari
A Recap of Denver Multimodal AI Hackathon

We had fun interacting with the AI community in Denver!

James Le
Advanced Video Search: Leveraging Twelve Labs and Milvus for Semantic Retrieval

Harness the power of Twelve Labs' advanced multimodal embeddings and Milvus' efficient vector database to create a robust video search solution.

James Le, Manish Maheshwari
Building Semantic Video Search with Twelve Labs Embed API and MongoDB Atlas

Learn how to create a powerful semantic video search application by combining Twelve Labs' advanced multimodal embeddings with MongoDB Atlas Vector Search.

James Le, Manish Maheshwari
Unlocking Video Insights: The Power of Phyllo and Twelve Labs Collaboration

The collaboration between Phyllo and Twelve Labs is set to revolutionize how we derive insights from video content on social media.

James Le
Introducing Jockey: A Conversational Video Agent Powered by Twelve Labs APIs and LangGraph

Jockey represents a powerful fusion of LangGraph's flexible agent framework and Twelve Labs' cutting-edge video understanding APIs, opening up new possibilities for intelligent video processing and interaction.

James Le, Travis Couture
A Recap of Our Multimodal AI in Media & Entertainment Hackathon in Sunny Los Angeles!

Twelve Labs co-hosted our first in-person hackathon in Los Angeles!

James Le
Our Series A to Build the Future of Multimodal AI

We've raised $50M for our Series A funding from NVIDIA and NEA

Jae Lee
Introducing the Multimodal AI in Media & Entertainment Hackathon

Twelve Labs will co-host our first in-person hackathon in Los Angeles!

James Le
Semantic Content Discovery for a Post-Production World

An exploration of the benefits of semantic search in post-production, the key technologies powering it, how it integrates with media asset management systems, and where it's headed in the future.

James Le
Effortlessly Craft Social Media Content from Video

Use this app to effortlessly create social media posts of any type, from short, fun Instagram updates to in-depth blog posts loaded with details.

Meeran Kim
Twelve Labs: Building Multimodal Video Foundation Models for Better Understanding

Co-founder Soyoung Lee shares how Twelve Labs' AI models are reshaping video understanding and content management

VP Land
CineSys partners with Twelve Labs

CineSys partners with Twelve Labs to transform content search with AI-enhanced CineViewer

CineSys
Multimodal AI and How Video Understanding Will Revolutionize Media

A beginner guide to video understanding for M&E with MASV and Twelve Labs

James Le
NAB 2024: The Bold Innovations You Probably Missed at the Show

Twelve Labs gets featured as a multimodal AI company that deserves buzz at NAB 2024

SVG
Nvidia-backed Twelve Labs is building AI that understands videos like humans

Twelve Labs, a South Korean AI startup, aspires to achieve a 'ChatGPT' moment for video

The Chosun Daily
Twelve Labs Named to the CB Insights AI 100 List for the Third Consecutive Year

Company recognized for achievements in multimodal video understanding

Cision
EMAM and Twelve Labs Announce an Integrated Solution for Video AI

The joint offering maximizes the value of video content.

eMAM
Twelve Labs Partners With Blackbird to Craft the Future of Narrative Excellence Through Video

Partnership makes it faster and easier than ever before to find specific moments in video content that can be used to amplify the human touch of storytelling

Blackbird
Unleash the Power of Auto-Generating Video Title, Topics, and Hashtags

"Generate titles and hashtags" app can whip up a snazzy topic, a catchy title, and some trending hashtags for any video you fancy.

Meeran Kim
Pegasus-1 Open Beta: Setting New Standards in Video-Language Modeling

Our video-language foundation model, Pegasus-1, gets an upgrade!

Minjoon Seo, James Le
How to make an AI startup worth over $30M | Twelve Labs' Jae Lee

Meet Jae Lee, the founder and CEO of Twelve Labs, who spearheaded a $30M seed funding round and forged partnerships with industry giants NVIDIA, Samsung, and Intel.

EO
Introducing Marengo-2.6: A New State-of-the-Art Video Foundation Model for Any-to-Any Search

This blog post introduces Marengo-2.6, a new state-of-the-art multimodal embedding model capable of performing any-to-any search tasks.

Aiden Lee, James Le
Arvato Systems and Twelve Labs Forge Partnership to Take Video AI to the Next Level

New integration of innovative AI technology in VidiNet enables the simplest search and analysis of video content

Arvato Systems
How to Automatically Get a Written Summary of a YouTube Video?

"Summarize a Youtube Video" app gets your back when you need a speedy text summary of any video in your sights.

Meeran Kim
Search Your Videos Semantically with Twelve Labs and FiftyOne Plugin

A Twelve Labs and FiftyOne Plugin Tutorial

James Le
How to identify the right influencer partner using Twelve Labs API?

"Who Talked About Us?" is an influencer-filtering app that enables deep contextual video searches.

Meeran Kim
Introducing Video-To-Text and Pegasus-1 (80B)

This article introduces the suite of video-to-text APIs powered by our latest video-language foundation model, Pegasus-1.

Aiden Lee, Jae Lee
Introducing The Multimodal AI (23Labs) Hackathon

Twelve Labs will co-host our first in-person hackathon in San Francisco!

James Le
A Tour of Video Understanding Use Cases

Let's embark on a captivating tour of video understanding and explore its diverse range of use cases

James Le
S.Korea's Twelve Labs ranks among world's top 50 generative AI startups

The company has independently developed a massive AI model geared toward video understanding

The Korea Economic Daily
The Multimodal Evolution of Vector Embeddings

This post will give a brief definition of embeddings, walk through various unimodal embeddings, explore multimodal video embeddings, and glance at embeddings in production.

James Le
What Is Multimodal AI?

Applications, Principles, and Core Research Challenges in Multimodal AI

James Le
Managing Video Data with ApertureDB and Twelve Labs

This article examines the technical challenges of video data management and looks at how ApertureDB and Twelve Labs help solve them

James Le
AI 100: The most promising artificial intelligence startups of 2023

Twelve Labs recognized as one of the most innovative AI companies in search by CB Insights.

CB Insights
How to find brand logos within videos using Twelve Labs API?

A tutorial on performing logo detection within videos using Twelve Labs API

Ankit Khare
How to perform Video OCR using Twelve Labs API?

A tutorial on performing Video OCR using Twelve Labs API

Ankit Khare
The Past, Present, and Future of Video Understanding Applications

A review of how far video understanding research has come, what potential remains untapped, and where it is headed in the future

James Le
How to classify videos effortlessly with Twelve Labs API: No ML training required!

A tutorial on Video Classification using Twelve Labs API

Ankit Khare
Search precisely within videos: combining queries with Twelve Labs API

Combined Queries using Twelve Labs API - An Overview

Ankit Khare
What makes Foundation Models special?

Capabilities and Applications of Foundation Models in Layman Terms

James Le
How to find specific moments within your video using Twelve Labs simple search API

Find specific moments within your video using Twelve Labs simple search API

Ankit Khare
Foundation models are going multimodal

A primer on foundation models: what they are, how they've evolved, and where they're going.

James Le
Twelve Labs featured as a pioneer in multimodal AI in GTC 2023

Deepening the partnership with OCI and leveraging the latest H100 GPUs.

NVIDIA
Twelve Labs named to Fast Company’s Most Innovative Companies of 2023

Twelve Labs recognized as one of Fast Company's Most Innovative Companies for its work on building video foundation models.

Fast Company
Nicole Laporte
Why I joined Twelve Labs as the Chief Scientist

Pushing the boundaries of multimodal video understanding, the obvious next step forward.

Minjoon Seo
Twelve Labs lands $12M for AI that understands the context of videos

Deepening the relationship with existing partners and welcoming new partners.

TechCrunch
Kyle Wiggers
Meet the latest Techstars Seattle cohort: 10 startups on how they’ve adapted to the pandemic

Video search AI that recognizes features such as faces, movement, speech, and shot types to make scenes searchable.

GeekWire
Cara Khulman
Companies are commercializing multimodal AI models to analyze videos and more

Makes any video database analyzable by transforming clips into vector embeddings.

Kyle Wiggers
AI 100: The most promising artificial intelligence startups of 2022

Twelve Labs recognized as one of the most innovative AI companies in search by CB Insights.

CB Insights
To make the world’s videos searchable, Twelve Labs raises $5M

Twelve Labs partners with Index Ventures and Radical Ventures to bring video foundation models to market.

Jae Lee
Twelve Labs launches transformative AI-powered video search platform

Company secures $5 million in seed funding from Index Ventures to bring CTRL+F to video.

Amber Moore
2021 Techstars Seattle Accelerator Companies

Twelve Labs participates in 2021 Techstars Seattle program.

Techstars
Isaac Kato