Author
James Le
Date Published
May 24, 2024
Tags
Applications
Multimodal AI
Search API
Partnership
This blog post is co-authored with Rob Gonsalves, Engineering Fellow at Avid.


1 - Introduction

Quickly and easily finding the right content in vast media libraries is crucial in media production. Traditionally, this meant manually tagging media assets with keywords, a method limited in accuracy, scalability, and contextual understanding. AI-powered semantic search, which analyzes content to understand the context, meaning, and relationships between media assets, lets users find relevant material based on its semantic meaning, not just keywords.

Thanks to rapid advancements in multimodal AI, semantic search is now a reality in media production. Foundation models give machines the ability to understand media content, powering semantic search engines that index media assets and make them searchable by their semantic content.

Semantic search plays a huge role in media production. It helps media professionals find exactly what they need quickly, saving time and sparking new ideas for content repurposing and creative storytelling. Plus, it can uncover hidden gems that may have been overlooked due to inadequate manual tagging.

In this post, we'll explore the awesome applications and benefits of semantic search in post-production, the key technologies powering it, how it integrates with media asset management systems, and where it's headed in the future.


2 - The Evolution of Semantic Search in Media Production

2.1 - Metadata-Based Search

The transition from metadata-based searches to semantic searches represents a significant advancement in media production workflows. 

Avid’s MediaCentral | Production Management and MediaCentral | Asset Management systems have successfully enabled teams of up to hundreds of users to effectively log and search metadata for many years. This has included utilizing AI services from cloud providers to enrich metadata with automated tagging, speech-to-text transcription, optical character recognition, and more, generating more searchable data.

An Example of AI-enhanced Metadata being Searched in MediaCentral | Cloud UX

These traditional metadata-based searches rely on manually extracted information or predefined taxonomies, which, while highly effective, can limit the ability to find truly relevant content.

Components of a Traditional Metadata-based Search, Image Source

Metadata-based searches traditionally have several limitations:

  1. Manual metadata extraction is time-consuming and prone to human error. Automated metadata extraction can help, but it still relies on predefined taxonomies and keywords, failing to capture the true context and meaning of the content.
  2. These searches only return results that match the exact keywords or metadata, often missing out on related or semantically similar content that could be highly relevant.


2.2 - Semantic Search

In contrast, semantic search leverages state-of-the-art foundation models to understand the actual meaning and context behind the content. By analyzing the visual elements, spoken words, and other data within the media assets, semantic search engines can comprehend the underlying concepts and relationships, rather than relying solely on predefined keywords or taxonomies.

Components of an AI-based Search, Image Source

The semantic search process is depicted above:

  1. A media encoder is a tool that takes raw media, such as videos or audio files, and converts them into a format that computer systems can understand and analyze, much like a translator that helps computers "read" media files.
  2. During this process, it extracts features like images, sounds, and words, converting them into numerical representations called embeddings, which serve as digital fingerprints capturing the essence of the content.
  3. These embeddings are stored in an embedding database, a digital library that allows the system to quickly locate and compare similar media files based on these numerical representations, as part of a semantic search.

Twelve Labs offers a powerful Semantic Video Search solution that simultaneously integrates all modalities inside video and captures the complex relationships among them to deliver a more nuanced, humanlike interpretation. That results in much faster and far more accurate video search and retrieval from cloud object storage. Instead of time-consuming and ineffective manual tagging, video editors can use natural language to quickly and accurately search vast media archives to unearth video moments and hidden gems that otherwise might go unnoticed.

An Example of Semantic Video Search in Twelve Labs Playground

The accuracy and efficiency of semantic search are particularly valuable in media production environments, where vast libraries of text, audio, video, and image assets need to be searched and retrieved quickly. By understanding the true meaning and context of the content, semantic search engines can deliver highly relevant results, even when the user's query does not match the exact keywords or metadata associated with the media assets.


3 - Key Technologies Powering Semantic Search

3.1 - The Foundation with CLIP

OpenAI's Contrastive Language-Image Pre-training (CLIP) model is at the heart of modern semantic search capabilities. CLIP is a neural network that learns to encode both images and text into a shared embedding space. By training on a massive dataset of image-text pairs, CLIP develops the ability to associate visual concepts with their linguistic representations.

The CLIP model consists of two main components: a visual encoder and a text encoder. The visual encoder, typically a Vision Transformer (ViT), analyzes the image and generates a visual embedding. Simultaneously, the text encoder, a Transformer-based language model, encodes the text input into a textual embedding. These embeddings are then compared, and the model learns to align the visual and textual representations, enabling cross-modal retrieval and understanding. You can see how this works in the diagram below.

Components of a Semantic Search System, Image Source

For example, if the user searches for “youth hockey coach,” CLIP encodes this text and compares it to embeddings from the media library to find matches. The system ranks video clips by relevance. The highest-scoring video closely aligns with the search, demonstrating CLIP's ability to understand and retrieve content semantically.
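As a rough illustration of that ranking step, the sketch below scores the query "youth hockey coach" against a handful of frame images using the public CLIP ViT-B/32 checkpoint from the Hugging Face transformers library. The frame paths are assumptions, and a real system would precompute and store frame embeddings rather than encode them at query time.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (ViT-B/32 visual encoder + Transformer text encoder)
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative frames sampled from clips in the library (paths are assumptions)
frame_paths = ["clip_a_frame.jpg", "clip_b_frame.jpg", "clip_c_frame.jpg"]
images = [Image.open(p) for p in frame_paths]

inputs = processor(
    text=["youth hockey coach"], images=images, return_tensors="pt", padding=True
)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text: similarity of the query against each frame embedding
scores = outputs.logits_per_text.squeeze(0)
for path, score in sorted(zip(frame_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:6.2f}  {path}")
```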


3.2 - CLIP Extensions

Building upon CLIP's success, researchers have developed advanced models to enhance semantic search capabilities across different media formats and languages. One notable extension is Multilingual CLIP, which extends the original CLIP text encoder to support multiple languages. By leveraging techniques like cross-lingual teacher learning, Multilingual CLIP enables cross-lingual search and retrieval, allowing users to query media content using text in various languages.

Another significant development is LAION's CLAP (Contrastive Language-Audio Pre-training) model, which brings audio into the multimodal framework. CLAP learns to encode audio waveforms and text descriptions into a shared embedding space, complementing CLIP's image-text capabilities and enabling semantic understanding of the audio within multimedia content.
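Below is a hedged sketch of text-to-audio scoring with LAION's CLAP, using a checkpoint published on Hugging Face. The audio files are assumptions, and processor arguments can vary slightly between library versions, so treat this as a starting point rather than a definitive recipe.

```python
import librosa
import torch
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# Illustrative audio files (assumed paths); this CLAP checkpoint expects 48 kHz audio
paths = ["city_traffic.wav", "forest_birds.wav"]
waveforms = [librosa.load(p, sr=48_000, mono=True)[0] for p in paths]

inputs = processor(
    text=["bustling cityscape with car horns"],
    audios=waveforms,
    sampling_rate=48_000,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    out = model(**inputs)

# logits_per_text: similarity of the text query against each audio embedding
for path, score in zip(paths, out.logits_per_text.squeeze(0).tolist()):
    print(f"{score:6.2f}  {path}")
```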


3.3 - Marengo-2.6

The Marengo-2.6 Model from Twelve Labs, Image Source

Twelve Labs' Marengo-2.6 model provides advanced video encoding and retrieval capabilities for video search applications. As a state-of-the-art video foundation model, Marengo-2.6 extracts semantic features from video content, allowing users to search for and retrieve relevant video clips based on text queries or reference videos.

Astoundingly, Marengo-2.6's expanded capabilities allow for any-to-any (cross-modal) retrieval tasks, making it a versatile tool for a wide range of applications. This includes text-to-video, text-to-image, text-to-audio, audio-to-video, and image-to-video tasks, bridging different media types. Watch the webinar session below for a qualitative demonstration of these capabilities:

These multimodal models work together to enhance media search capabilities across different formats and languages. CLIP and its extensions, such as Multilingual CLIP and CLAP, encode images, text, and audio into searchable embeddings. These embeddings are then stored in embedding databases, enabling efficient retrieval and matching based on semantic similarity. For video content, Marengo-2.6 leverages self-supervised learning with contrastive loss to embed and search video clips based on text queries or reference videos.

By combining these technologies, users can perform semantic searches across vast media libraries, finding relevant content based on their intent and the contextual meaning of their queries.


4 - Applications and Benefits of Semantic Search in Post-Production

Semantic search introduces transformative benefits and applications in post-production tasks. By using the advanced foundation models mentioned above, media professionals can easily locate specific clips and images through descriptive queries. For instance, a producer might search for "intense soccer match under rain at night," and the system will retrieve video clips that visually match this description without relying on precise tags.

AI-based systems can provide enhanced analytics and insights through the utilization of clustering and semantic mapping. Semantic search can analyze video frames and cluster them into meaningful groups, allowing editors to quickly find scenes of interest or discover thematic patterns across large datasets. For example, semantic embeddings can be used to plot a 2-dimensional semantic map of video clips, providing a visual representation of content relationships and thematic consistencies. You can see an example of this in the image below.

A 2-Dimensional Visualization of CLIP Frame Embeddings, Image Source

The image shows a representation of CLIP video frame embeddings from a sports highlight reel reduced to two dimensions. You can see how similar frames in the reel are grouped together by semantic similarity, like the swimming shots in groups 9, 15, and 12.
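A map like this can be prototyped in a few lines, assuming per-frame embeddings have already been extracted (for instance with the CLIP snippet above): cluster them with k-means and project them to two dimensions with t-SNE. The file name and cluster count below are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# frame_embeddings: (num_frames, embed_dim) array produced by an image/video encoder
frame_embeddings = np.load("frame_embeddings.npy")  # assumed precomputed file

# Group semantically similar frames (e.g., swimming shots land in the same cluster)
labels = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(frame_embeddings)

# Project the high-dimensional embeddings onto a 2-D semantic map
coords = TSNE(n_components=2, random_state=0).fit_transform(frame_embeddings)

plt.figure(figsize=(8, 6))
plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab20", s=12)
plt.title("2-D semantic map of frame embeddings")
plt.savefig("semantic_map.png", dpi=150)
```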

Extending semantic search capabilities to include spoken phrases and ambient sounds enriches the scope of search in audiovisual content. The integration of media embedding models like Marengo from Twelve Labs and LAION's CLAP enhances the ability to search video and audio content by semantic similarity, not just text matching, allowing users to find media with specific looks and sounds, like bustling cityscapes or serene nature scenes.


5 - Extending Semantic Search for Comprehensive Media Insights

Semantic search extends beyond simple retrieval to provide comprehensive insights and analytics. This capability is exemplified by the potential to create interactive displays from semantic embeddings, enabling producers and editors to derive in-depth analytics from media content. For instance, using media embedding models, users can visually explore how different themes are represented across a media library, identify trends, and predict future content preferences.

Furthermore, semantic search can drastically enhance the process of metadata management in media libraries. Typically, metadata is manually tagged, which is labor-intensive and prone to inconsistencies. By automatically generating rich, descriptive metadata from the content, semantic search tools can ensure that every asset is uniformly described, making it far easier to retrieve and analyze. This automated metadata enrichment process leverages the deep learning capabilities of media embedding models to interpret complex media content, including the mood, themes, and key visual elements, thus providing a richer dataset for further analysis and utilization.
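One simple way to prototype this kind of automated enrichment is zero-shot tagging with CLIP: compare each frame against a vocabulary of candidate descriptors and keep the strongest matches as generated metadata. The tag list, frame path, and threshold below are illustrative assumptions rather than a production taxonomy.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative candidate descriptors; a real deployment would use a richer vocabulary
tags = ["soccer match", "news studio", "rainy night street", "crowd celebrating", "nature scene"]

image = Image.open("archive_frame.jpg")  # assumed frame extracted from a clip
inputs = processor(text=tags, images=[image], return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1).squeeze(0)

# Keep tags whose probability clears an (arbitrary) threshold as generated metadata
generated_metadata = [t for t, p in zip(tags, probs.tolist()) if p > 0.2]
print(generated_metadata)
```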

These insights are valuable in understanding the existing content and guiding the creation of new media that aligns with audience interests and ongoing trends. The ability to analyze semantic relationships and cultural contexts within media libraries opens up possibilities for predictive analytics and targeted content recommendations.

6 - Integrating Semantic Search into Media Asset Management Systems

Integrating semantic search technologies into existing media asset management (MAM) systems can significantly enhance the efficiency and effectiveness of media libraries. This integration facilitates more intelligent search capabilities, which can understand the content and context of media files, thereby improving the accessibility and discoverability of assets.

Integration of semantic search into MAM systems also facilitates better archival and retrieval processes, crucial in post-production workflows. For example, when editors need to pull content from archives that span decades, semantic search can quickly filter through various formats and eras to find content that matches current production needs without manual browsing. This capability speeds up the retrieval process and ensures valuable archival footage is more accessible, promoting its reuse and maximizing the value of existing assets. This represents a significant shift from traditional keyword-based systems, which often require extensive manual input and upkeep to remain effective.

Moreover, semantic search can provide context-aware recommendations based on the user's current project or past searches. This feature speeds up the workflow and inspires new creative ideas by exposing editors to potentially relevant content they might not have considered.

Avid has demonstrated research in this area on various proofs of concept at major trade show events such as NAB and IBC. This has included a Recommendation Engine in the web-based application MediaCentral | Cloud UX, where journalists are offered media related to the script they are writing or to the audio of a voiceover on a timeline. The system not only offers suggestions based on a literal analysis of the text but also generates related sentences or phrases from the script's context to offer further suggestions.

An Example of a Recommendation Engine in MediaCentral | Cloud UX

Avid is continuing to implement AI-enabled technologies within a range of products, under the banner of Avid Ada – an overarching framework for AI across its portfolio.

Twelve Labs has integrated with multiple MAM providers to bring video understanding to their users. A notable example is our partnership with Vidispine - An Arvato Systems Brand. We first worked together for a joint client in the sports industry to improve their video browsing experience. The joint solution enables easier navigation through video content and uncovers previously undetectable elements, such as specific moves or player conversations. It quickly became clear that the integration had the potential to offer even more.

Integrating Twelve Labs’ video-language foundation models in the intuitive user interface of Vidispine’s MediaPortal changes the way users can search for material as it eliminates the need to index all static metadata fields in the core service VidiCore. Vidispine users can now find exact moments within their videos using natural language queries and combine them with metadata from Vidispine applications.


7 - Challenges and Future Directions

While semantic search technologies have made significant strides in recent years, several challenges remain in their implementation and widespread adoption in the media production industry.


7.1 - Challenges

One of the primary challenges is the significant computational power and resources required to effectively process and analyze large volumes of multimedia data. Generating high-quality semantic embeddings and performing complex contextual understanding demands substantial computational resources, including powerful hardware accelerators (GPUs) and ample storage capacity. As media libraries continue to grow exponentially, the computational demands will only increase, necessitating the development of more efficient algorithms and hardware acceleration techniques to make semantic search scalable and practical.

Although current language and vision foundation models have made remarkable progress in understanding context, there is still room for improvement in capturing nuanced meaning, handling ambiguity, and accounting for real-world knowledge. Developing more sophisticated multimodal foundation models that can better grasp the intricate context and relationships within multimedia content will be crucial for enhancing the relevance and accuracy of search results.

Seamlessly integrating and fusing diverse modalities, such as text, images, video, and audio, into a unified semantic search framework presents technical challenges. Advancing methods to align and combine these heterogeneous data sources will be important for delivering comprehensive, cross-modal search capabilities that can effectively leverage the complementary information present in different modalities.

7.2 - Future Directions

Despite these challenges, the future of semantic search in media production holds immense potential and promises to revolutionize the way media professionals search, discover, and utilize content.

The ongoing development of multimodal foundation models, which aim to capture and fuse information across various modalities, could pave the way for more sophisticated semantic search engines. These models (such as Marengo and Pegasus from Twelve Labs), trained on massive multimodal datasets, have the potential to uncover intricate relationships and patterns across different data types, enabling more accurate and comprehensive search capabilities.

Moreover, integrating other forms of production data, such as knowledge graphs, scripts, and transcripts, into a semantic search system can significantly enhance its capabilities. Knowledge graphs can provide a structured representation of relationships between various entities, enriching the search process with contextual information. Scripts and transcripts offer a detailed textual account of media content, allowing the search engine to index and retrieve specific dialogues, scenes, and narrative elements. By leveraging these diverse data sources, semantic search systems can deliver more precise and contextually relevant results, ultimately improving the efficiency of content discovery and utilization in media production.
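As a hedged sketch of folding transcripts into the retrieval loop, the snippet below embeds transcript segments with a general-purpose sentence-embedding model and ranks them against a query. The model choice, segment text, and timecodes are assumptions for illustration and are not tied to any product mentioned in this post.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose text encoder

# Transcript segments with (assumed) timecodes, e.g. from speech-to-text output
segments = [
    ("00:01:12", "The mayor announced the new stadium funding plan today."),
    ("00:04:55", "Fans lined up outside the arena hours before the match."),
    ("00:09:30", "Weather update: heavy rain expected over the weekend."),
]

seg_embeddings = model.encode([text for _, text in segments], normalize_embeddings=True)
query_embedding = model.encode("stadium funding announcement", normalize_embeddings=True)

# Cosine similarity ranks transcript moments by semantic relevance to the query
scores = util.cos_sim(query_embedding, seg_embeddings)[0]
for (timecode, text), score in sorted(zip(segments, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {timecode}  {text}")
```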

Furthermore, the incorporation of personalized semantic search, which tailors search results based on user preferences and past behavior, could enhance the relevance and utility of search results in media production environments. By understanding the specific needs and contexts of individual users, personalized semantic search can surface the most pertinent content, facilitating more efficient and effective content discovery and utilization.


8 - Conclusion

Semantic search is definitely the new cool kid on the block in the world of news, broadcast, and, of course, post-production. It's all about harnessing the power of advanced AI techniques to understand the deeper meaning and context of media assets. Forget the old-school keyword-based search methods; this is a transformative approach that is revolutionizing how we manage and use media in production workflows.

Think about models like OpenAI’s CLIP, extensions like Multilingual CLIP and LAION’s CLAP, Twelve Labs’ Marengo, and the continuous advancements from Avid. These are just a few examples of how fast things are moving in this field. They're making the search process more intuitive, helping media professionals find content that matches their creative vision with unprecedented precision and speed. With the amount of digital media out there, being able to quickly find what you need will become increasingly vital.

The journey of semantic search is still unfolding, with each new development adding a whole new level of sophistication and capability. By embracing semantic search, we're making things more efficient and boosting our creative processes, giving content creators a whole new way to tell their stories.


9 - Call to Action

Semantic search technologies are integral to the future of media production. For media professionals, adopting these innovations is crucial.

  • Use semantic search in all stages of production, from planning to editing.
  • Test various semantic search models to enhance your content discovery and management.
  • Stay updated on AI advancements in media production through workshops and webinars.
  • Consider partnering with tech providers and participating in pilot programs to tailor semantic search to your needs. This investment will increase efficiency, creativity, and competitive edge.

Twelve Labs' semantic video search solution stands at the forefront of this revolution. Our video understanding platform seamlessly integrates with existing media asset management systems, empowering their users to navigate vast video libraries with unprecedented ease. Check out our recent integrations with Vidispine, Blackbird, EMAM, Nomad, and Cinesys.

Over the past few years, Avid has conducted research on the use of AI for media production, including semantic media search. They developed Avid Ada, a digital assistant that helps make workflows more efficient. In addition to feeding the results of their research into their product roadmaps, Avid also publishes and shares its findings with the media industry.

