Try our API.

Analyze
Embed
Search

Get your API keys from the dashboard page and ensure the TwelveLabs SDK is installed on your computer:

PYTHON

$ pip install twelvelabs

You can copy and paste the code below to analyze videos and generate text based on their content. Replace the placeholders surrounded by <> with your values.

Titles, topics, & hashtags
Summaries, chapters, or highlights
Open-ended analysis
PYTHON


from twelvelabs import TwelveLabs
from twelvelabs.indexes import IndexesCreateRequestModelsItem
from twelvelabs.tasks import TasksRetrieveResponse

client = TwelveLabs(api_key="<YOUR_API_KEY>")

index = client.indexes.create(
    index_name="<YOUR_INDEX_NAME>",
    models=[
        IndexesCreateRequestModelsItem(
            model_name="pegasus1.2", model_options=["visual", "audio"]
        )
    ]
)
print(f"Created index: id={index.id}")

task = client.tasks.create(index_id=index.id, video_url="<YOUR_VIDEO_URL>")
print(f"Created task: id={task.id}")

def on_task_update(task: TasksRetrieveResponse):
    print(f"  Status={task.status}")

task = client.tasks.wait_for_done(task_id=task.id, callback=on_task_update)
if task.status != "ready":
    raise RuntimeError(f"Indexing failed with status {task.status}")
print(f"Upload complete. The unique identifier of your video is {task.video_id}.")

gist = client.gist(video_id=task.video_id, types=["title", "topic", "hashtag"])
print(f"Title={gist.title}\nTopics={gist.topics}\nHashtags={gist.hashtags}")

Jump right in with a Sample App

Discover what TwelveLabs can do by experimenting with our fully functional sample applications.


Python

Who talked about us

Use the semantic search capabilities of the platform to identify the most suitable influencers to reach out to (a minimal search call is sketched after this list).

Node

Generate social media posts for your videos

Simplify the cross-platform video promotion workflow by generating unique posts for each social media platform.

Python

Shade finder

This application uses the image-to-video search feature to find color shades in videos.
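The "Who talked about us" and "Shade finder" samples are built on the platform's search capability. As a rough sketch, a text-to-video search with the Python SDK could look like the following; it assumes an index created with a Marengo search model (for example marengo2.7) rather than the Pegasus index from the quickstart, and the index ID and query text are placeholders.

PYTHON

# Search requires an index built with a Marengo model.
results = client.search.query(
    index_id="<YOUR_MARENGO_INDEX_ID>",
    query_text="<YOUR_QUERY>",
    search_options=["visual", "audio"],
)
for clip in results:
    print(f"video_id={clip.video_id} score={clip.score} start={clip.start} end={clip.end}")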


Our stable of models.

Learn more about TwelveLabs’ world-leading video foundation models.

At TwelveLabs, we’re developing video-native AI systems that can solve problems with human-level reasoning. Helping machines learn about the world — and enabling humans to retrieve, capture, and tell their visual stories better.

Marengo

Our breakthrough video foundation model analyzes frames and their temporal relationships, along with speech and sound — a huge leap forward for search and any-to-any retrieval tasks.

Pegasus

Our powerful video-first language model integrates visual, audio, and speech information — and employs this deep video understanding to reach new heights in text generation.

Support and guidance

Contact

Have a question? Get in touch with a member of the TwelveLabs team for help.

Community

Connect with the TwelveLabs community for ideas, tips, and knowledge sharing.
