The YouTube Chapter Highlight Generator automatically creates chapter timestamps for YouTube videos. By analyzing a video's content, it identifies key segments and produces timestamps that can be used as chapters for better navigation and a smoother viewing experience.
The Video Content MCQ Generator is an application designed to transform video-based learning experiences. Powered by Twelve Labs, it automatically generates Multiple Choice Questions (MCQs) from video content, making learning and assessment more engaging and efficient.
Crop and Seek demonstrates the power of advanced video search using the Twelve Labs API. By combining text- and image-based search with a unique image-cropping feature, the application provides a flexible, powerful tool for discovering relevant video content.
See how video foundation models can radically accelerate your filmmaking timeline.
Learn how to build a semantic video search engine by integrating Twelve Labs' Embed API with ApertureDB.
Combine Twelve Labs' video embedding model with Roe AI's data management and search capabilities.
Twelve Labs has successfully completed its SOC 2 Type 2 audit, marking a significant milestone in our commitment to data security and privacy.
Twelve Labs introduces a robust evaluation framework for video understanding, emphasizing both appearance and motion analysis.
The AI Interview Analyzer is a powerful tool designed to revolutionize interview preparation and assist in hiring.
The Olympics Video Clips Classification Application is a powerful tool designed to categorize various Olympic sports using video clips.
Get ready for a blockbuster event as Twelve Labs hosts the first-ever Workshop on Video-Language Models at NeurIPS 2024!
This article guides developers through integrating Twelve Labs' Embed API with Databricks Mosaic AI Vector Search to create advanced video understanding applications, including similarity search and recommendation systems, while addressing performance optimization, scaling, and monitoring considerations.
Whether you're looking to find the perfect berry-toned lipstick or just curious about spotting specific colors in your videos, this guide will help you leverage cutting-edge AI to do so effortlessly.
Leverage Twelve Labs Embed API and LanceDB to create AI applications that can process and analyze video content with unprecedented accuracy and efficiency.
We had fun interacting with the AI community in Denver!
Harness the power of Twelve Labs' advanced multimodal embeddings and Milvus' efficient vector database to create a robust video search solution.
Learn how to create a powerful semantic video search application by combining Twelve Labs' advanced multimodal embeddings with MongoDB Atlas Vector Search.
The collaboration between Phyllo and Twelve Labs is set to revolutionize how we derive insights from video content on social media
Jockey represents a powerful fusion of LangGraph's flexible agent framework and Twelve Labs' cutting-edge video understanding APIs, opening up new possibilities for intelligent video processing and interaction.
Twelve Labs co-hosted our first in-person hackathon in Los Angeles!
We've raised $50M in Series A funding from NVIDIA and NEA.
Twelve Labs will co-host our first in-person hackathon in Los Angeles!
Explores the benefits of semantic search in post-production, the key technologies powering it, how it integrates with media asset management systems, and where it's headed in the future.
Use this app to effortlessly create social media posts of any type, from short, fun Instagram updates to in-depth blog posts loaded with details.
Co-founder Soyoung Lee shares how Twelve Labs' AI models are reshaping video understanding and content management
CineSys partners with Twelve Labs to transform content search with AI-enhanced CineViewer
A beginner's guide to video understanding for M&E with MASV and Twelve Labs
Twelve Labs gets featured as a multimodal AI company that deserves buzz at NAB 2024
Twelve Labs, a South Korean AI startup, aspires to achieve a 'ChatGPT' moment for video
Company recognized for achievements in multimodal video understanding
The joint offering maximizes the value of video content.
Partnership makes it faster and easier than ever before to find specific moments in video content that can be used to amplify the human touch of storytelling
"Generate titles and hashtags" app can whip up a snazzy topic, a catchy title, and some trending hashtags for any video you fancy.
A Twelve Labs and MindsDB Tutorial
Our video-language foundation model, Pegasus-1, gets an upgrade!
Meet Jae Lee, the founder and CEO of Twelve Labs, who spearheaded a $30M seed funding round and forged partnerships with industry giants NVIDIA, Samsung, and Intel.
This blog post introduces Marengo-2.6, a new state-of-the-art multimodal embedding model capable of performing any-to-any search tasks.
New integration of innovative AI technology in VidiNet enables simple search and analysis of video content
"Summarize a Youtube Video" app gets your back when you need a speedy text summary of any video in your sights.
A Twelve Labs and FiftyOne Plugin Tutorial
"Who Talked About Us?" is an influencer-filtering app that enables deep contextual video searches.
This article introduces the suite of video-to-text APIs powered by our latest video-language foundation model, Pegasus-1.
Twelve Labs will co-host our first in-person hackathon in San Francisco!
Let's embark on a captivating tour of video understanding and explore its diverse range of use cases
The company has independently developed a massive AI model geared toward video understanding
This post will give a brief definition of embeddings, walk through various unimodal embeddings, explore multimodal video embeddings, and glance at embeddings in production.
Applications, Principles, and Core Research Challenges in Multimodal AI
This article examines the technical challenges of video data management and looks at how ApertureDB and Twelve Labs help solve them
Twelve Labs recognized as one of the most innovative AI companies in search by CB Insights.
A tutorial on performing logo detection within videos using the Twelve Labs API
A tutorial on performing Video OCR using the Twelve Labs API
A review of how far video understanding research has come, what potential remains untapped, and where it is headed in the future
A tutorial on Video Classification using the Twelve Labs API
Combined Queries using the Twelve Labs API - An Overview
Capabilities and Applications of Foundation Models in Layman's Terms
Find specific moments within your video using Twelve Labs' simple search API
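For a taste of what that looks like in code, here is a minimal sketch of a moment search, assuming a Python client shaped like the twelvelabs SDK; the API key, index ID, query text, and response fields shown are illustrative placeholders, and exact method names and parameters may differ from the current SDK.

```python
# Minimal sketch: search an indexed video for specific moments.
# Assumes the `twelvelabs` Python SDK; method names, parameters, and
# response fields are illustrative and may differ in the current SDK.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="YOUR_API_KEY")  # placeholder key

# Query an existing index with a natural-language moment description.
result = client.search.query(
    index_id="YOUR_INDEX_ID",                 # placeholder index
    query_text="goal celebration in the rain",
    options=["visual", "audio"],              # modalities to search over
)

# Each hit is a clip with start/end offsets (seconds) and a relevance score.
for clip in result.data:
    print(f"{clip.video_id}: {clip.start:.1f}s-{clip.end:.1f}s (score {clip.score})")
```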
A primer on foundation models: what they are, how they've evolved, and where they're going.
Deepening the partnership with OCI and leveraging the latest H100 GPUs.
Twelve Labs recognized as one of Fast Company's Most Innovative Companies for its work on building video foundation models.
Pushing the boundaries of multimodal video understanding, the obvious next step forward.
Deepening the relationship with existing partners and welcoming new partners.
Introducing the Twelve Labs Playground.
Video search AI that recognizes features such as faces, movement, speech, and shot types to make scenes searchable.
Makes any video database analyzable by transforming clips into vector embeddings.
Twelve Labs recognized as one of the most innovative AI companies in search by CB Insights.
Twelve Labs partners with Index Ventures and Radical Ventures to bring video foundation models to market.
Resourceful and purpose-driven underdogs always come out on top.
Twelve Labs backed by the best.
Company secures $5 million in seed funding from Index Ventures to bring CTRL+F to video.
Twelve Labs participates in 2021 Techstars Seattle program.