Developer Hub

Get quick access to our vast documentation library.

Easily build features like semantic search, anomaly detection, and content recommendation, as well as capabilities tailored to you. Our API unlocks your video's full potential.

Try our API.

  1. Create index

  2. Upload video

  3. Search

  4. Generate (Video to Text)

  5. Embeddings

Python


import requests

url = "https://api.twelvelabs.io/v1.3/indexes"
headers = {
    "accept": "application/json",
    "Content-Type": "application/json",
    "x-api-key": "<YOUR_API_KEY>",  # authentication is required; see the API reference
}
# A create-index call also needs a request body; the field names below
# are illustrative and should be checked against the API reference.
payload = {
    "index_name": "my-first-index",
    "models": [{"model_name": "marengo2.7", "model_options": ["visual"]}],
}
response = requests.post(url, json=payload, headers=headers)
print(response.text)

Follow these steps for a running start using TwelveLabs’ multimodal API. 
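The steps above can be sketched as a single flow. This is a minimal, illustrative sketch, not the documented API: the `x-api-key` header, the payload field names (`index_name`, `models`, `query_text`, `search_options`), the `/search` path, and the `_id` response key are all assumptions to be verified against the API reference.

```python
import requests

BASE_URL = "https://api.twelvelabs.io/v1.3"

def make_session(api_key: str) -> requests.Session:
    """Session carrying the headers every call needs; 'x-api-key' is an assumed auth header."""
    session = requests.Session()
    session.headers.update({"accept": "application/json", "x-api-key": api_key})
    return session

def build_index_payload(name: str) -> dict:
    """Illustrative create-index body; field names are assumptions, not the documented schema."""
    return {
        "index_name": name,
        "models": [{"model_name": "marengo2.7", "model_options": ["visual"]}],
    }

def build_search_payload(index_id: str, query: str) -> dict:
    """Illustrative search body; field names are assumptions, not the documented schema."""
    return {"index_id": index_id, "query_text": query, "search_options": ["visual"]}

def quickstart(api_key: str, query: str) -> dict:
    """Create an index, then search it -- mirroring steps 1 and 3 above.
    Step 2 (uploading a video) goes through a separate endpoint and is omitted here."""
    session = make_session(api_key)
    index = session.post(f"{BASE_URL}/indexes", json=build_index_payload("quickstart")).json()
    # '_id' as the returned index identifier is an assumption; inspect the actual response.
    search = session.post(f"{BASE_URL}/search", json=build_search_payload(index["_id"], query))
    return search.json()
```

Separating payload construction from the network calls keeps the request shapes easy to adjust once you have the authoritative schema in front of you.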

Jump right in with a Sample App...

Discover what TwelveLabs can do by experimenting with our fully functional sample applications.

Python

Who talked about us

Use the platform's semantic search capabilities to identify the most suitable influencers to reach out to.

Node

Generate social media posts for your videos

Simplify the cross-platform video promotion workflow by generating unique posts for each social media platform.

Python

Shade finder

This application uses the image-to-video search feature to find color shades in videos.

Our stable of models.

Learn more about TwelveLabs’ world-leading video foundation models.

At TwelveLabs, we’re developing video-native AI systems that can solve problems with human-level reasoning. Helping machines learn about the world — and enabling humans to retrieve, capture, and tell their visual stories better.

Marengo

Our breakthrough video foundation model analyzes frames and their temporal relationships, along with speech and sound — a huge leap forward for search and any-to-any retrieval tasks.

Pegasus

Our powerful video-first language model integrates visual, audio, and speech information — and employs this deep video understanding to reach new heights in text generation.

Support and guidance

Contact

Have a question? Get in touch with a member of the TwelveLabs team for help.

Community

Connect with the TwelveLabs community for ideas, tips, and knowledge sharing.