
Tutorial
Building an Interactive Learning App from Video Content


Hrishikesh Yadav
Video2Game is an application that takes educational or learning-oriented videos (lectures, tutorials, or any kind of explainer video) and transforms them into interactive games. Leveraging TwelveLabs for video understanding and SambaNova for game code generation, this project bridges passive video learning with immersive, game-based experiences.


Aug 12, 2025
13 Min
Introduction
What if educational videos could do more than just show and tell? 🎮
Imagine turning passive video lectures into interactive, game based learning experiences—where students play through the concepts instead of just watching them.
In this tutorial, we’ll build a TwelveLabs-powered application that transforms YouTube educational videos into engaging learning games. By combining the TwelveLabs Analyze engine with a fast inference model from SambaNova, we can extract meaningful content from videos and convert it into interactive learning modules.
Let’s explore how the Video2Game application works and how you can build a similar solution with video understanding and a fast-inference LLM using the TwelveLabs Python SDK and SambaNova.
You can explore the demo of the application here: Video2Game Application
Prerequisites
Generate an API key by signing up at the TwelveLabs Playground.
Sign up on SambaNova and generate the API key to access the model.
Find the repository for this application in the GitHub Repository.
You should be familiar with Python, Flask, and Next.js.
Demo Application
This demo application showcases video understanding: it analyzes the video content and uses the result, together with fast-inference LLM code generation, to create an interactive learning application. Users can simply provide a video URL or select an already indexed video by connecting their TwelveLabs API key.
The purpose of the application is to offer a new kind of learning experience built on interactivity derived from video content. Here, you can find the detailed walkthrough of the code and the demo —

Working of the Application
The application supports two modes of interaction:
Using a YouTube Video URL: Provide a YouTube video URL, and the system will process the video, analyze its content, and automatically generate an interactive, gamified learning application based on the video, using a fast-inference LLM from SambaNova.
Using the TwelveLabs API Key: Connect your personal TwelveLabs API key to access and interact with your own already indexed videos. Once connected, you can select an existing Index ID and the corresponding video, and the system will generate an interactive application based on the selected video content.
When a YouTube URL is provided, the video is first processed and stored locally on the server. The indexing process then begins, and the video is indexed to the specified index_id. Once indexing is complete, a video_id is generated and returned, which is then used for video content analysis.
The video content is analyzed to generate a descriptive analysis result based on the video understanding. This result provides instructions that help guide the development of an interactive game application. The analysis text is included in the user prompt, along with formatting instructions and a system prompt. This system prompt ensures that the generated output is a complete, functional interactive application, adhering to the guidelines and directions inferred from the analysis text.
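Conceptually, the prompt assembly looks like the minimal sketch below; the template text and helper are illustrative, while the application's actual prompts live in its instructions directory:

# Illustrative only: the real system and user prompts are predefined files.
SYSTEM_PROMPT = (
    "You are a game developer. Return one complete, self-contained HTML file "
    "(HTML + CSS + JavaScript) implementing an interactive learning game."
)

def build_user_prompt(video_analysis):
    # Embed the TwelveLabs analysis alongside formatting instructions.
    return (
        f"Video analysis:\n{video_analysis}\n\n"
        "Build an interactive game teaching the concepts above. "
        "Output only the HTML file, starting with <!DOCTYPE html>."
    )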
The code generation process is then initiated through SambaNova, producing a single HTML file that includes HTML, CSS, and JavaScript. The model used here, DeepSeek-V3-0324, reasons over the prompt before generating the appropriate code. Once code generation is complete, streaming stops.

The utility function file_service.process_html_content processes the HTML content in the response. The resulting HTML file is then saved using file_service.save_html_file and stored in a unified JSON structure. This enables quick retrieval for rendering the interactive application and displaying the generated code.
A "Regenerate" button is provided to facilitate experimentation. It re-triggers the analysis and code generation process using the already indexed video, allowing for exploration of different interaction possibilities based on the same video content.

An alternative way to interact with the application is by connecting your personal TwelveLabs API key. Once the API key is connected, you can select your desired index_id and a specific video associated with it. After video selection, the corresponding video_id is used to run the analysis and generate descriptive text, which is then passed through the SambaNova game code generation pipeline. The file handling process remains the same as previously described to extract and store the HTML file.
To interact with your own video, the recommended approach is to first generate an Index ID via the TwelveLabs Playground and upload the video for indexing. Once that is complete, connect your API key here to fully utilize the application’s features.
Preparation Steps
Obtain your API key from the TwelveLabs Playground and set up your environment variable.
Create an index with pegasus-1.2 selected via the TwelveLabs Playground, and note its index_id specifically for this application.
Clone the project from GitHub.
Obtain the API key from the SambaNova Dashboard.
Create a .env file containing your TwelveLabs and SambaNova credentials.
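For reference, a minimal .env might look like the following. The values are placeholders, the variable names match config.py in the repository, and the SAMBANOVA_BASE_URL shown assumes SambaNova's OpenAI-compatible endpoint:

TWELVELABS_API_KEY=tlk_xxxxxxxxxxxx
TWELVELABS_INDEX_ID=your_index_id
SAMBANOVA_API_KEY=your_sambanova_api_key
SAMBANOVA_BASE_URL=https://api.sambanova.ai/v1
APP_URL=http://localhost:8000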
Once you've completed these steps, you're ready to start developing!
Walkthrough for the Video2Game App
This tutorial shows how you can build an application that transforms video content into an interactive learning app. The app uses Next.js for the frontend and a Flask API with CORS enabled for the backend. We'll focus on implementing the core backend utilities for the Video2Game application and setting up the application.
This tutorial also focuses on using a fast-inference LLM for game code generation, accessed via SambaNova. Let's go over how easy it is to access. For detailed code structure and setup instructions, check the README.md on GitHub.
1 - To generate an interactive learning app from your already indexed TwelveLabs video
1.1 Configure the essentials
All essential configurations are defined in config.py, which loads values from environment variables. This centralized configuration approach ensures that all variables are easily accessible and helps maintain a well-organized and easily manageable application setup.
backend/app/config.py (lines 7-24)
class Config:
    # API credentials loaded from environment variables
    TWELVELABS_API_KEY = os.getenv("TWELVELABS_API_KEY")
    SAMBANOVA_API_KEY = os.getenv("SAMBANOVA_API_KEY")
    TWELVELABS_INDEX_ID = os.getenv("TWELVELABS_INDEX_ID")
    SAMBANOVA_BASE_URL = os.getenv("SAMBANOVA_BASE_URL")
    APP_URL = os.getenv("APP_URL", "http://localhost:8000")

    # Directory paths for file organization
    GAMES_DIR = "generated_games"      # Directory to store generated game files
    CACHE_DIR = "game_cache"           # Directory for caching game-related data
    INSTRUCTIONS_DIR = "instructions"  # Directory containing prompt files

    # SambaNova service configuration
    SAMBANOVA_MODEL = "DeepSeek-V3-0324"

    # LLM text generation parameters
    GENERATION_TEMPERATURE = 0.1
    GENERATION_TOP_P = 0.1
    GENERATION_MAX_TOKENS = 16000
    MIN_HTML_LENGTH = 7000
After setting up the credentials for TwelveLabs and SambaNova, the INSTRUCTIONS_DIR is initialized. This directory contains all the predefined prompts used throughout the process. All generated interactive games are stored in the GAMES_DIR and cached in the CACHE_DIR.
The model used for code generation is DeepSeek-V3-0324, configured with a temperature of 0.1 to reduce randomness and a defined max_tokens limit. If you wish to experiment, you can modify these parameters or switch to a different model within the configuration.
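To see roughly how these settings come together, here is a minimal sketch of initializing a SambaNova client, assuming the OpenAI-compatible Python SDK; the repository's service class wraps an equivalent client, and you should confirm the endpoint URL against your SambaNova dashboard:

import os
from openai import OpenAI

# Sketch: SambaNova exposes an OpenAI-compatible API, so the standard
# client works once api_key and base_url point at SambaNova.
client = OpenAI(
    api_key=os.getenv("SAMBANOVA_API_KEY"),
    base_url=os.getenv("SAMBANOVA_BASE_URL", "https://api.sambanova.ai/v1"),
)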
1.2 To GET the Indexes
Once the user connects their TwelveLabs API Key via the portal, the application uses it to fetch all indexes associated with their account. The response is then structured and sent to the frontend for display.
def get_indexes(self):
    try:
        print("Fetching indexes...")
        # Use a direct HTTP request
        if not self.api_key:
            print("No API key available")
            return []
        # Get the list of indexes
        url = "https://api.twelvelabs.io/v1.3/indexes"
        headers = {
            "accept": "application/json",
            "x-api-key": self.api_key
        }
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            data = response.json()
            result = []
            # Structure the relevant information needed
            for index in data.get('data', []):
                result.append({
                    "id": index['_id'],
                    "name": index['index_name']
                })
            return result
        else:
            print(f"Failed to fetch indexes: Status {response.status_code}")
            return []
    except Exception as e:
        print(f"Error fetching indexes: {e}")
        return []
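For a quick sanity check outside the app, you could call this method directly. This standalone usage is hypothetical but follows the service's constructor as used elsewhere in the codebase:

# Hypothetical standalone usage; substitute your own API key.
service = TwelveLabsService(api_key="tlk_...")
for index in service.get_indexes():
    print(f"{index['id']}: {index['name']}")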
1.3 To GET the videos for the selected index_id
Once the user selects an index, the corresponding index_id is captured. To generate the interactive learning application, a specific video must be selected from that index. Using the selected index_id, the method below is called to retrieve all videos associated with that index.
def get_videos(self, index_id):
    try:
        # Use a direct HTTP request
        if not self.api_key:
            print("No API key available")
            return []
        # List of video info for the respective index id
        url = f"https://api.twelvelabs.io/v1.3/indexes/{index_id}/videos"
        headers = {
            "accept": "application/json",
            "x-api-key": self.api_key
        }
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            data = response.json()
            result = []
            for video in data.get('data', []):
                system_metadata = video.get('system_metadata', {})
                # Structure the response with the information shown on the frontend
                result.append({
                    "id": video['_id'],
                    "name": system_metadata.get('filename', f'Video {video["_id"]}'),
                    "duration": system_metadata.get('duration', 0)
                })
            return result
        else:
            print(f"Failed to fetch videos: Status {response.status_code}")
            return []
    except Exception as e:
        print(f"Error fetching videos for index {index_id}: {e}")
        return []
On selecting a video, its video_id is sent on for further analysis of that video's content.
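Since get_videos returns duration in raw seconds, the frontend typically formats it for display; here is a small illustrative helper (not part of the repository):

def format_duration(seconds):
    # Convert raw seconds from system_metadata into an m:ss label.
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

for video in service.get_videos(index_id="your_index_id"):
    print(video["name"], format_duration(video["duration"]))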
1.4 Analyze the video content with pegasus-1.2
The purpose of the analyze utility is to process the video content and generate meaningful instructions based on its context. These instructions serve as a foundation for building the interactive application and guide the subsequent code generation process. The prompt used for this analysis is predefined in the backend as a system prompt. Here, you can find the analyze prompt.
def analyze_video(self, video_id, prompt):
    try:
        # Send a video analysis request to the TwelveLabs API with a custom prompt
        analysis_response = self.client.analyze(
            video_id=video_id,
            prompt=prompt
        )
        return analysis_response.data
    except Exception as e:
        print(f"Error analyzing video {video_id}: {e}")
        raise e
The video_id and the prompt are provided as parameters to self.client.analyze, which processes and analyzes the video content to generate the instructional result.
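A quick usage sketch, with an illustrative prompt standing in for the predefined analysis prompt loaded from the backend:

# Illustrative prompt; the real one is loaded via the prompt service.
analysis_prompt = (
    "Summarize the key concepts taught in this video and suggest game "
    "mechanics that would reinforce them."
)
video_analysis = twelvelabs_service.analyze_video("your_video_id", analysis_prompt)
print(video_analysis)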
1.5 Interactive game code generation using SambaNova
Once the analysis result is received, the code generation utility is triggered. In this step, the analysis output is passed as the user prompt, along with a predefined system prompt. The model used, DeepSeek-V3-0324, is the same one configured earlier in step 1.1.
The system prompt outlines the core requirements and expected structure of the output, ensuring the model understands the purpose and constraints of the task. Meanwhile, the user prompt incorporates the analyzed result along with additional detailed instructions regarding layout, design, color scheme, and formatting. Together, these prompts guide the model to generate a focused and well-structured interactive application.
def generate_game_stream(self, system_prompt, user_prompt):
    print("Starting streaming generation...")
    if not self._test_api_connection():
        yield "Error: Sambanova API is not reachable"
        return
    try:
        start_time = time.time()
        response = self.client.chat.completions.create(
            model=current_app.config['SAMBANOVA_MODEL'],
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt}
            ],
            temperature=current_app.config['GENERATION_TEMPERATURE'],
            top_p=current_app.config['GENERATION_TOP_P'],
            max_tokens=current_app.config['GENERATION_MAX_TOKENS'],
            stream=True
        )
        print(f"API call initiated in {time.time() - start_time:.2f}s")
        chunk_count = 0
        total_content = ""
        last_chunk_time = time.time()
        for chunk in response:
            current_time = time.time()
            if current_time - last_chunk_time > 30:  # 30 second timeout
                print("No chunk received for 30 seconds, breaking...")
                break
            delta = getattr(chunk.choices[0], 'delta', None)
            content = getattr(delta, 'content', '') if delta else ''
            if content:
                chunk_count += 1
                total_content += content
                last_chunk_time = current_time
                # Print live streaming progress
                if chunk_count % 20 == 0:  # Print every 20 chunks
                    print(f"Stream: {chunk_count} chunks, {len(total_content)} chars")
                yield content
        print(f"Streaming completed: {chunk_count} chunks, {len(total_content)} total chars")
        print(f"Total streaming time: {time.time() - start_time:.2f}s")
    except Exception as e:
        error_msg = f"Streaming error: {str(e)}"
        print(error_msg)
        print(f"Error type: {type(e).__name__}")
        if hasattr(e, 'response'):
            print(f"Response status: {e.response.status_code}")
            print(f"Response text: {e.response.text}")
        yield error_msg
When you call SambaNova's API with stream=True, it sends back pieces of the generated content as they're created, and this function immediately passes them along to the caller. The useful part is the built-in safety mechanisms: it tests the API connection first to avoid wasted calls, implements a 30-second timeout to catch stuck connections, and logs detailed metrics every 20 chunks so you can monitor performance in real time.
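Because generate_game_stream is a plain Python generator, any caller can consume it incrementally; a minimal sketch:

import sys

html_parts = []
for chunk in sambanova_service.generate_game_stream(system_prompt, user_prompt):
    html_parts.append(chunk)   # accumulate for post-processing
    sys.stdout.write(chunk)    # echo live progress
    sys.stdout.flush()
streamed_html = "".join(html_parts)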
1.6 Function calling pipeline for the generation
In this section, we discuss the full pipeline of the process, where the previously discussed utility functions are called and executed. It begins by validating the presence of the video_id and the associated API key, both of which are required for the analysis. Once confirmed, the analysis prompt is loaded and the analyze_video utility is triggered to generate the instructional result. Following that, the SambaNova system prompt is loaded and used to initiate the code generation process. The result from SambaNova is streamed to the frontend in chunks, allowing users to see the live generation of the application.
Once the generation is complete and streaming ends, the utility function file_service.process_html_content extracts the HTML content from the response. This extracted HTML is then saved as a separate file using file_service.save_html_file. Along with the saved HTML path, other relevant details—such as video_id, video_analysis, and associated metadata—are stored in a unified JSON file for easy retrieval later.
@api_bp.route('/twelvelabs/regenerate', methods=['POST', 'GET'])
@handle_errors
def twelvelabs_regenerate():
    from app.services.sample_games_service import SampleGamesService
    from app.services.twelvelabs_service import TwelveLabsService
    from app.services.sambanova_service import SambanovaService
    from app.services.file_service import FileService

    def event_stream(video_id, api_key):
        # Initialize the service that manages generated games and file handling
        sample_games_service = SampleGamesService()
        # Validate required parameters before processing
        if not video_id:
            yield "event: error\ndata: video_id is required\n\n"
            return
        if not api_key:
            yield "event: error\ndata: API key is required\n\n"
            return
        try:
            # Step 1 - Analyze video content using TwelveLabs
            yield "event: progress\ndata: Starting analysis with TwelveLabs...\n\n"
            # Initialize the TwelveLabs service with the provided API key
            twelvelabs_service = TwelveLabsService(api_key=api_key)
            # Get the analysis prompt template and analyze the video
            analysis_prompt = current_app.prompt_service.get_prompt('analysis')
            video_analysis = twelvelabs_service.analyze_video(video_id, analysis_prompt)
            yield f"event: analysis\ndata: {video_analysis}\n\n"

            # Step 2 - Generate game code using SambaNova AI
            yield "event: progress\ndata: Generating game with SambaNova...\n\n"
            sambanova_service = SambanovaService()
            # Prepare prompts for game generation using the video analysis
            game_generation_prompt = current_app.prompt_service.get_prompt(
                'game_generation', video_analysis=video_analysis)
            system_prompt = current_app.prompt_service.get_prompt('system')
            try:
                # Stream game generation in real-time chunks
                streamed_html = ""
                # Process each chunk of generated content as it arrives
                for chunk in sambanova_service.generate_game_stream(system_prompt, game_generation_prompt):
                    if chunk:
                        streamed_html += chunk
                        # Send each chunk to the client for real-time display
                        yield f"event: game_chunk\ndata: {chunk}\n\n"

                # Step 3 - Save and process the complete generated game
                if streamed_html:
                    # Process and clean up the generated HTML content
                    file_service = FileService()
                    html_content = file_service.process_html_content(streamed_html)
                    # Save the HTML file to disk with the video ID as identifier
                    html_file_path = file_service.save_html_file(html_content, video_id)
                    # Create game metadata and save it to file storage
                    game_data = sample_games_service.create_game_data(
                        video_id, video_analysis, html_file_path,
                        twelvelabs_video_ids=[video_id],
                        youtube_url='',
                        video_title=f'TwelveLabs Video {video_id}',
                        channel_name='TwelveLabs',
                        view_count='Generated'
                    )
                    sample_games_service.save_game(game_data)
                else:
                    # No content was generated - report the error to the client
                    yield "event: error\ndata: No game content was generated\n\n"
                    return
            except Exception as e:
                print(f"Error in SambaNova streaming: {str(e)}")
                yield f"event: error\ndata: Error streaming from SambaNova: {str(e)}\n\n"
                return

            yield "event: done\ndata: [DONE]\n\n"
        except Exception as e:
            print(f"Error in TwelveLabs processing: {str(e)}")
            yield f"event: error\ndata: Error processing TwelveLabs video: {str(e)}\n\n"

    # Handle different HTTP methods for parameter extraction
    if request.method == 'GET':
        video_id = request.args.get('video_id')
        api_key = request.args.get('api_key')
    else:
        data = request.get_json()
        video_id = data.get('video_id')
        api_key = request.headers.get('X-Twelvelabs-Api-Key')

    if not video_id:
        return jsonify({"error": "video_id is required"}), 400

    return Response(stream_with_context(event_stream(video_id, api_key)),
                    mimetype='text/event-stream')
As soon as the generation process is complete and file handling is done, the generated application and its code are rendered on the portal by loading them from the stored JSON.
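The saved record looks roughly like the following; the field names mirror the create_game_data call above, though the exact structure is defined by SampleGamesService and may differ:

# Illustrative shape of one saved game record.
game_data = {
    "video_id": "665f0c...",
    "video_analysis": "The video explains ...",
    "html_file_path": "generated_games/665f0c....html",
    "twelvelabs_video_ids": ["665f0c..."],
    "youtube_url": "",
    "video_title": "TwelveLabs Video 665f0c...",
    "channel_name": "TwelveLabs",
    "view_count": "Generated",
}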
2 - Function calling pipeline for the generation via Video URL
For the alternative approach, using a YouTube URL to generate the interactive learning app, the overall pipeline remains largely the same. The key difference lies in the initial steps: the video is first processed and stored locally or on the server, and then indexed using the TwelveLabs API. You can incorporate a utility that handles downloading videos from any supported platform URL (a sketch follows below), after which the essential indexing step is applied.
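One way to implement that download utility is with yt-dlp; this is a sketch under the assumption that yt-dlp suits your use case (the repository may use a different downloader), and the output template is illustrative:

import yt_dlp  # pip install yt-dlp

def download_video(url, output_dir="downloads"):
    # Download the video and return the local file path for indexing.
    opts = {
        "outtmpl": f"{output_dir}/%(id)s.%(ext)s",
        "format": "mp4/bestvideo+bestaudio/best",
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(url, download=True)
        return ydl.prepare_filename(info)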
To understand how the indexing is performed, refer to the following code snippet:
def _index_video_file(self, file_path, index_id):
    try:
        # Create a new indexing task for the video file
        task = self.twelvelabs_service.client.task.create(
            index_id=index_id,
            file=file_path,
        )
        print(f"Indexing task created: {task.id}")
        max_wait_time = 900
        start_time = time.time()
        while time.time() - start_time < max_wait_time:
            task = self.twelvelabs_service.client.task.retrieve(task.id)
            # Handle the different task status outcomes
            if task.status == "ready":
                # Indexing completed - the video is now searchable/analyzable
                print(f"Indexing completed successfully. Video ID: {task.video_id}")
                return task.video_id
            elif task.status == "failed":
                print(f"Indexing failed: {task.status}")
                return None
            elif task.status in ["processing", "pending"]:
                elapsed = int(time.time() - start_time)
                print(f"Indexing in progress... Status: {task.status} (Elapsed: {elapsed}s)")
                time.sleep(15)
            else:
                print(f"Unknown status: {task.status}")
                time.sleep(10)
        print("Indexing timed out")
        return None
    except Exception as e:
        print(f"Error during video indexing: {str(e)}")
        return None
The complete process pipeline, from YouTube URL to interactive app generation, can be found here.
The regenerate functionality for videos from YouTube URLs works exactly the same as described earlier, since the video is already indexed and ready for re-analysis and regeneration.
Here is a quick glimpse of how the portal enables interaction, helping users better understand video content while offering a new way to experience, interact with, learn from, and play with the video.

More Ideas to Experiment with the Tutorial
Understanding how video content transforms into interactive experiences opens the door to more engaging and adaptive learning. Here are some experimental directions you can explore with the Analyze engine and SambaNova:
🎮 Faster Iteration via Feedback Loops — Incorporate user feedback directly into the game code generation loop powered by SambaNova.
🔍 Deep Video Content Research — Use fast inference models to conduct in-depth research on the video analysis output. Automatically extract key insights, identify references, and surface relevant citations or external learning materials.
🧠 Video Agentic AI — Integrate a reasoning-based inference model as part of an agentic pipeline. This allows the system to plan, coordinate, and use the right tools to transform and act on video content.
Conclusion
This tutorial explores how video understanding can redefine the way we learn and interact with video content. By combining TwelveLabs for video analysis and SambaNova for fast, code-based game generation, we've built a system that transforms passive educational videos into interactive, game-based learning experiences. Video2Game goes beyond traditional video playback by bridging multimodal understanding and playful learning to create immersive and personalized education.
Additional Resources
Learn more about the analyze video engine—Pegasus-1.2. To explore TwelveLabs further and enhance your understanding of video content analysis, check out these resources:
Explore More Use Cases: Visit the SambaNova Model Hub to explore the varied models and learn how to implement similar workflows tailored to your business needs.
Join the Conversation: Share your feedback on this integration in the TwelveLabs Discord.
Explore Tutorials: Dive deeper into TwelveLabs capabilities with our comprehensive tutorials.
We encourage you to use these resources to expand your knowledge and create innovative applications using TwelveLabs video understanding technology.
Introduction
What if educational videos could do more than just show and tell? 🎮
Imagine turning passive video lectures into interactive, game based learning experiences—where students play through the concepts instead of just watching them.
In this tutorial, we’ll build a TwelveLabs powered application that transforms YouTube educational videos into engaging learning games. By combining TwelveLabs analyze engine with the inference model from SambaNova, we can extract meaningful content from videos and convert it into interactive learning modules.
Let’s explore how the Video2Game application works and how you can build a similar solution with the video understanding and the fast inference LLM by using the TwelveLabs Python SDK and SambaNova.
You can explore the demo of the application here: Video2Game Application
Prerequisites
Generate an API key by signing up at the TwelveLabs Playground.
Sign Up on SamabNova and generate the API KEY to access the model.
Find the repository for this application on Github Repository.
You should be familiar with Python, Flask, and Next.js
Demo Application
This demo application showcases the video understanding by analyzing the video content which helps for the creation of the Interactive Learning Application, which is done with the fast inference LLM code generation. Users can just provide the video url or directly select the video from an already indexed video by connecting the TwelveLabs API KEY.
The purpose of the application is to get the new way of the learning experience with the interactivity from video content. Here, you can find the detailed walkthrough of the code and the demo —

Working of the Application
The application supports two modes of interaction -
Using a YouTube Video URL: Provide a YouTube video URL, and the system will process the video, analyze its content, and automatically generate an interactive, gamified learning application based on the video using the Fast Inference LLM model using SambaNova.
Using the TwelveLabs API Key: Connect your personal TwelveLabs API Key to access and interact with your own already indexed videos. Once connected, you can select an existing Index ID and respective video, and the system will generate an interactive application based on the selected video content.
When a YouTube URL is provided, the video is first processed and stored to the server locally. Following this, the indexing process begins, where the video is indexed to the specified index_id
. Once indexing is complete, a video_id
is generated and returned, which is then used for video content analysis.
The video content is analyzed to generate a descriptive analysis result based on the video understanding. This result provides instructions that help guide the development of an interactive game application. The analysis text is included in the user prompt, along with formatting instructions and a system prompt. This system prompt ensures that the generated output is a complete, functional interactive application, adhering to the guidelines and directions inferred from the analysis text.
The code generation process is then initiated through SambaNova, producing a single HTML file that includes HTML, CSS, and JavaScript. The model used here is DeepSeek-V3-0324
, processes the prompt by thinking before generating the appropriate code. Once the code generation is complete, streaming stops.

The utility function file_service.process_html_content
processes the HTML content in the response. The resulting HTML file is then saved using file_service.save_html_file
and stored in a unified JSON structure. This enables quick retrieval for rendering the interactive application and displaying the generated code.
A "Regenerate" button is provided to facilitate experimentation. It re-triggers the analysis and code generation process using the already indexed video, allowing for exploration of different interaction possibilities based on the same video content.

An alternative way to interact with the application is by connecting your personal TwelveLabs API key. Once the API key is connected, you can select your desired index_id
and the specific video associated with it. After video selection, the corresponding video_id
is used to perform analyze and generate descriptive text, which is then passed through the Sambanova game code generation pipeline. The file handling process remains the same as previously described to extract and store the html file.
To interact with your own video, the recommended approach is to first generate an Index ID via the TwelveLabs Playground and upload the video for indexing. Once that is complete, connect your API key here to fully utilize the application’s features.
Preparation Steps
Obtain your API key from the TwelveLabs Playground and set up your environment variable.
Do create the Index, with the pegasus-1.2 selection via TwelveLabs Playground, and get the index_id specifically for this application.
Clone the project from Github.
Do obtain the API KEY from the SambaNova Dashboard.
Create a .env file containing your TwelveLabs and SambaNova credentials.
Once you've completed these steps, you're ready to start developing!
Walkthrough for the Video2Game App
This tutorial focuses on how you can build the application which transforms the video content into the Interactive Learning App. The app uses Next.js for the frontend and Flask API with CORS enabled for the backend. We'll focus on implementing the core backend utility for the Video2Game application and setting up the application.
This tutorial focuses on using Fast Inference LLM for game code generation by accessing via SambaNova. Let's go over how easy it is to access. For detailed code structure and setup instructions, check the README.md on GitHub.
1 - To generate an interactive learning app from your already indexed TwelveLabs video
1.1 Config the essentials
All essential configurations are defined in config.py, which loads values from environment variables. This centralized configuration approach ensures that all variables are easily accessible and helps maintain a well-organized and easily manageable application setup.
backend/app/config.py (7-24 line)
class Config: ` # API credentials loaded from environment variables TWELVELABS_API_KEY = os.getenv("TWELVELABS_API_KEY") SAMBANOVA_API_KEY = os.getenv("SAMBANOVA_API_KEY") TWELVELABS_INDEX_ID = os.getenv("TWELVELABS_INDEX_ID") SAMBANOVA_BASE_URL=os.getenv("SAMBANOVA_BASE_URL") APP_URL = os.getenv("APP_URL", "http://localhost:8000") # Directory paths for file organization GAMES_DIR = "generated_games" # Directory to store generated game files CACHE_DIR = "game_cache" # Directory for caching game-related data INSTRUCTIONS_DIR = "instructions" # # Directory containing prompt files # SambaNova service configuration SAMBANOVA_BASE_URL = SAMBANOVA_BASE_URL SAMBANOVA_MODEL = "DeepSeek-V3-0324" # LLM text generation parameters GENERATION_TEMPERATURE = 0.1 GENERATION_TOP_P = 0.1 GENERATION_MAX_TOKENS = 16000 MIN_HTML_LENGTH = 7000
After setting up the credentials for TwelveLabs and SambaNova, the INSTRUCTIONS_DIR
is initialized. This directory contains all the predefined prompts used throughout the process. All generated interactive games are stored in the GAMES_DIR
and cached in the CACHE_DIR
.
The model used for code generation is DeepSeek-V3-0324
, configured with a temperature of 0.1
to reduce randomness, and a defined max_tokens
limit. If you wish to experiment, you can modify these parameters or switch to a different model within the configuration.
1.2 To GET the Indexes
Once the user connects their TwelveLabs API Key via the portal, the application uses it to fetch all indexes associated with their account. The response is then structured and sent to the frontend for display.
def get_indexes(self): try: print("Fetching indexes...") # Use direct HTTP request if not self.api_key: print("No API key available") return [] # To get the list of indexes url = "https://api.twelvelabs.io/v1.3/indexes" headers = { "accept": "application/json", "x-api-key": self.api_key } response = requests.get(url, headers=headers) if response.status_code == 200: data = response.json() result = [] # Structuring the relevant information needed for index in data.get('data', []): result.append({ "id": index['_id'], "name": index['index_name'] }) return result else: print(f"Failed to fetch indexes: Status {response.status_code}") return [] except Exception as e: print(f"Error fetching indexes: {e}") return []
1.3 To GET the videos, for the respective selected index_id
Once the user selects an index, the corresponding index_id
is captured. To generate the interactive learning application, a specific video must be selected from that index. Using the selected index_id
, the method here is called to retrieve all videos associated with that index.
def get_videos(self, index_id): try: # Use direct HTTP request if not self.api_key: print("No API key available") return [] # List of Videos info of that respective index id url = f"https://api.twelvelabs.io/v1.3/indexes/{index_id}/videos" headers = { "accept": "application/json", "x-api-key": self.api_key } response = requests.get(url, headers=headers) if response.status_code == 200: data = response.json() result = [] for video in data.get('data', []): system_metadata = video.get('system_metadata', {}) # Structuring the response to get relevant information to showcase on the frontend result.append({ "id": video['_id'], "name": system_metadata.get('filename', f'Video {video["_id"]}'), "duration": system_metadata.get('duration', 0) }) return result else: print(f"Failed to fetch videos: Status {response.status_code}") return [] except Exception as e: print(f"Error fetching videos for index {index_id}: {e}") return []
On selecting the video, the video_id
is been sent for further analyze process of that video content.
1.4 Analyze the video content with pegasus-1.2
The purpose of the analyze utility is to process the video content and generate meaningful instructions based on its context. These instructions serve as a foundation for building the interactive application and guide the subsequent code generation process. The prompt used for this analysis is predefined in the backend as a system prompt. Here, you can find the analyze prompt.
def analyze_video(self, video_id, prompt): try: # Send video analysis request to TwelveLabs API with custom prompt analysis_response = self.client.analyze( video_id=video_id, prompt=prompt ) return analysis_response.data except Exception as e: print(f"Error analyzing video {video_id}: {e}") raise e
The video_id
and the prompt are provided as parameters to self.client.analyze
, which processes and analyzes the video content to generate the instructional result.
1.5 Interactive game code generation using SambaNova
Once the analysis result is received, the generated code utility is triggered. In this step, the analysis output is passed as the user prompt, along with a predefined system prompt. The model used DeepSeek-V3-0324
, is the same one configured earlier in step 1.1.
The system prompt outlines the core requirements and expected structure of the output, ensuring the model understands the purpose and constraints of the task. Meanwhile, the user prompt incorporates the analysed result along with additional detailed instructions regarding layout, design, color scheme, and formatting. Together, these prompts guide the model to generate a focused and well-structured interactive application.
def generate_game_stream(self, system_prompt, user_prompt): print("Starting streaming generation...") if not self._test_api_connection(): yield "Error: Sambanova API is not reachable" return try: start_time = time.time() response = self.client.chat.completions.create( model=current_app.config['SAMBANOVA_MODEL'], messages=[ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt} ], temperature=current_app.config['GENERATION_TEMPERATURE'], top_p=current_app.config['GENERATION_TOP_P'], max_tokens=current_app.config['GENERATION_MAX_TOKENS'], stream=True ) print(f"API call initiated in {time.time() - start_time:.2f}s") chunk_count = 0 total_content = "" last_chunk_time = time.time() for chunk in response: current_time = time.time() if current_time - last_chunk_time > 30: # 30 second timeout print(f"No chunk received for 30 seconds, breaking...") break delta = getattr(chunk.choices[0], 'delta', None) content = getattr(delta, 'content', '') if delta else '' if content: chunk_count += 1 total_content += content last_chunk_time = current_time # Print live streaming progress if chunk_count % 20 == 0: # Print every 20 chunks print(f"Stream: {chunk_count} chunks, {len(total_content)} chars") yield content print(f"Streaming completed: {chunk_count} chunks, {len(total_content)} total chars") print(f"Total streaming time: {time.time() - start_time:.2f}s") except Exception as e: error_msg = f"Streaming error: {str(e)}" print(error_msg) print(f"Error type: {type(e).__name__}") if hasattr(e, 'response'): print(f"Response status: {e.response.status_code}") print(f"Response text: {e.response.text}") yield error_msg
When you call SambaNova's API with stream=True
, it sends back pieces of the generated content as they're created, which this function immediately passes along to the user. The clever part is the built-in safety mechanisms, it tests the API connection first to avoid wasted calls, implements a 30 second timeout to catch stuck connections, and logs detailed metrics every 20 chunks so you can monitor performance in real-time.
1.6 Function calling pipeline for the generation
In this section, we will be discussing the full pipeline of the process, where the previously discussed utility functions are called and executed. It begins by validating the presence of the video_id
and the associated API key, both of which are required for the analysis. Once confirmed, the analysis prompt is loaded and the analyze_video
utility is triggered to generate the instructional result. Following that, the SambaNova system prompt is loaded and used to initiate the code generation process. The result from SambaNova is streamed to the frontend in chunks, allowing users to see the live generation of the application.
Once the generation is complete and streaming ends, the utility function file_service.process_html_content
extracts the HTML content from the response. This extracted HTML is then saved as a separate file using file_service.save_html_file
. Along with the saved HTML path, other relevant details—such as video_id
, video_analysis
, and associated metadata—are stored in a unified JSON file for easy retrieval later.
@api_bp.route('/twelvelabs/regenerate', methods=['POST', 'GET']) @handle_errors def twelvelabs_regenerate(): from app.services.sample_games_service import SampleGamesService from app.services.twelvelabs_service import TwelveLabsService from app.services.sambanova_service import SambanovaService from app.services.file_service import FileService def event_stream(video_id, api_key): # Initialize service for managing generated games, consisting the loading the file handling sample_games_service = SampleGamesService() # Validate required parameters before processing if not video_id: yield f"event: error\ndata: video_id is required\n\n" return if not api_key: yield f"event: error\ndata: API key is required\n\n" return try: # Step 1 - Analyze video content using TwelveLabs yield f"event: progress\ndata: Starting analysis with TwelveLabs...\n\n" # Initialize TwelveLabs service with provided API key twelvelabs_service = TwelveLabsService(api_key=api_key) # Get analysis prompt template and analyze the video analysis_prompt = current_app.prompt_service.get_prompt('analysis') video_analysis = twelvelabs_service.analyze_video(video_id, analysis_prompt) yield f"event: analysis\ndata: {video_analysis}\n\n" # Step 2 - Generate game code using SambaNova AI yield f"event: progress\ndata: Generating game with SambaNova...\n\n" sambanova_service = SambanovaService() # Prepare prompts for game generation using video analysis game_generation_prompt = current_app.prompt_service.get_prompt('game_generation', video_analysis=video_analysis) system_prompt = current_app.prompt_service.get_prompt('system') try: # Stream game generation in real-time chunks streamed_html = "" # Process each chunk of generated content as it arrives for chunk in sambanova_service.generate_game_stream(system_prompt, game_generation_prompt): if chunk: streamed_html += chunk # Send each chunk to client for real-time display yield f"event: game_chunk\ndata: {chunk}\n\n" # Step 3 - Save and process the complete generated game if streamed_html: # Process and clean up the generated HTML content file_service = FileService() html_content = file_service.process_html_content(streamed_html) # Save HTML file to disk with video ID as identifier html_file_path = file_service.save_html_file(html_content, video_id) # Create game metadata and save to file storage game_data = sample_games_service.create_game_data( video_id, video_analysis, html_file_path, twelvelabs_video_ids=[video_id], youtube_url='', video_title=f'TwelveLabs Video {video_id}', channel_name='TwelveLabs', view_count='Generated' ) sample_games_service.save_game(game_data) else: # No content was generated - report error to client yield f"event: error\ndata: No game content was generated\n\n" return except Exception as e: print(f"Error in SambaNova streaming: {str(e)}") yield f"event: error\ndata: Error streaming from SambaNova: {str(e)}\n\n" return yield f"event: done\ndata: [DONE]\n\n" except Exception as e: print(f"Error in TwelveLabs processing: {str(e)}") yield f"event: error\ndata: Error processing TwelveLabs video: {str(e)}\n\n" # Handle different HTTP methods for parameter extraction if request.method == 'GET': video_id = request.args.get('video_id') api_key = request.args.get('api_key') else: data = request.get_json() video_id = data.get('video_id') api_key = request.headers.get('X-Twelvelabs-Api-Key') if not video_id: return jsonify({"error": "video_id is required"}), 400 return Response(stream_with_context(event_stream(video_id, api_key)), mimetype='text/event-stream')
As soon as the generation process is complete and file handling is done, the generated application and its code are rendered on the portal by loading them from the stored JSON.
2 - Function calling pipeline for the generation via Video URL
For the alternative approach using a YouTube URL to generate the interactive learning app—the overall pipeline remains largely the same. The key difference lies in the initial steps, the video is first processed and stored locally or on the server, and then indexed using the TwelveLabs API. You can incorporate a utility that handles downloading videos from any supported platform URL, after which the essential step of indexing needs to be applied.
To understand how the indexing is performed, refer to the following code snippet:
def _index_video_file(self, file_path, index_id): try: # Create a new indexing task for the video file task = self.twelvelabs_service.client.task.create( index_id=index_id, file=file_path, ) print(f"Indexing task created: {task.id}") max_wait_time = 900 start_time = time.time() while time.time() - start_time < max_wait_time: task = self.twelvelabs_service.client.task.retrieve(task.id) # Handle different task status outcomes if task.status == "ready": print(f"Indexing completed successfully. Video ID: {task.video_id}") # Indexing completed successfully - video is now searchable/analyzable return task.video_id elif task.status == "failed": print(f"Indexing failed: {task.status}") return None elif task.status in ["processing", "pending"]: elapsed = int(time.time() - start_time) print(f"Indexing in progress... Status: {task.status} (Elapsed: {elapsed}s)") time.sleep(15) else: print(f"Unknown status: {task.status}") time.sleep(10) print("Indexing timed out") return None except Exception as e: print(f"Error during video indexing: {str(e)}") return None
The complete process pipeline—from YouTube URL to interactive app generation can be found here.
The regenerate functionality for videos from YouTube URLs works exactly the same as described earlier, since the video is already indexed and ready for re-analyze and regeneration.
Here is a quick glimpse of how the portal enables interaction helping users better understand video content while offering a new way to experience, interact with, learn from, and play with the video.

More Ideas to Experiment with the Tutorial
Understanding how video content transforms into interactive experiences opens the door to more engaging and adaptive learning. Here are some experimental directions you can explore with analyze engine and the Sambanova:
🎮 Faster Iteration via Feedback Loops — Incorporate user feedback directly into the game code generation loop powered by SambaNova.
🔍 Deep Video Content Research — Use fast inference models to conduct in depth analysis of video analysis. Automatically extract key insights, identify references, and surface relevant citations or external learning materials.
🧠 Video Agentic AI — Integrate a reasoning-based inference model as part of an agentic pipeline. This allows the system to plan, coordinate, and utilize the right tools for transforming the video editing.
Conclusion
This tutorial explores how video understanding can redefine the way we learn and interact with the video content. By combining TwelveLabs for video analysis generation and SambaNova for fast, code-based game generation, we've built a system that transforms passive educational videos into interactive, game-based learning experiences. Video2Game goes beyond traditional video playback by bridging multimodal understanding, and playful learning to create immersive and personalized education.
Additional Resources
Learn more about the analyze video engine—Pegasus-1.2. To explore TwelveLabs further and enhance your understanding of video content analysis, check out these resources:
Explore More Use Cases: Visit the SambaNova model Hub to learn and explore about the varied model and how to implement similar workflows tailored to your business needs.
Join the Conversation: Share your feedback on this integration in the TwelveLabs Discord.
Explore Tutorials: Dive deeper into TwelveLabs capabilities with our comprehensive tutorials
We encourage you to use these resources to expand your knowledge and create innovative applications using TwelveLabs video understanding technology.
Introduction
What if educational videos could do more than just show and tell? 🎮
Imagine turning passive video lectures into interactive, game based learning experiences—where students play through the concepts instead of just watching them.
In this tutorial, we’ll build a TwelveLabs powered application that transforms YouTube educational videos into engaging learning games. By combining TwelveLabs analyze engine with the inference model from SambaNova, we can extract meaningful content from videos and convert it into interactive learning modules.
Let’s explore how the Video2Game application works and how you can build a similar solution with the video understanding and the fast inference LLM by using the TwelveLabs Python SDK and SambaNova.
You can explore the demo of the application here: Video2Game Application
Prerequisites
Generate an API key by signing up at the TwelveLabs Playground.
Sign Up on SamabNova and generate the API KEY to access the model.
Find the repository for this application on Github Repository.
You should be familiar with Python, Flask, and Next.js
Demo Application
This demo application showcases the video understanding by analyzing the video content which helps for the creation of the Interactive Learning Application, which is done with the fast inference LLM code generation. Users can just provide the video url or directly select the video from an already indexed video by connecting the TwelveLabs API KEY.
The purpose of the application is to get the new way of the learning experience with the interactivity from video content. Here, you can find the detailed walkthrough of the code and the demo —

Working of the Application
The application supports two modes of interaction -
Using a YouTube Video URL: Provide a YouTube video URL, and the system will process the video, analyze its content, and automatically generate an interactive, gamified learning application based on the video using the Fast Inference LLM model using SambaNova.
Using the TwelveLabs API Key: Connect your personal TwelveLabs API Key to access and interact with your own already indexed videos. Once connected, you can select an existing Index ID and respective video, and the system will generate an interactive application based on the selected video content.
When a YouTube URL is provided, the video is first processed and stored to the server locally. Following this, the indexing process begins, where the video is indexed to the specified index_id
. Once indexing is complete, a video_id
is generated and returned, which is then used for video content analysis.
The video content is analyzed to generate a descriptive analysis result based on the video understanding. This result provides instructions that help guide the development of an interactive game application. The analysis text is included in the user prompt, along with formatting instructions and a system prompt. This system prompt ensures that the generated output is a complete, functional interactive application, adhering to the guidelines and directions inferred from the analysis text.
The code generation process is then initiated through SambaNova, producing a single HTML file that includes HTML, CSS, and JavaScript. The model used here is DeepSeek-V3-0324
, processes the prompt by thinking before generating the appropriate code. Once the code generation is complete, streaming stops.

The utility function file_service.process_html_content
processes the HTML content in the response. The resulting HTML file is then saved using file_service.save_html_file
and stored in a unified JSON structure. This enables quick retrieval for rendering the interactive application and displaying the generated code.
A "Regenerate" button is provided to facilitate experimentation. It re-triggers the analysis and code generation process using the already indexed video, allowing for exploration of different interaction possibilities based on the same video content.

An alternative way to interact with the application is by connecting your personal TwelveLabs API key. Once the API key is connected, you can select your desired index_id
and the specific video associated with it. After video selection, the corresponding video_id
is used to perform analyze and generate descriptive text, which is then passed through the Sambanova game code generation pipeline. The file handling process remains the same as previously described to extract and store the html file.
To interact with your own video, the recommended approach is to first generate an Index ID via the TwelveLabs Playground and upload the video for indexing. Once that is complete, connect your API key here to fully utilize the application’s features.
Preparation Steps
Obtain your API key from the TwelveLabs Playground and set up your environment variable.
Do create the Index, with the pegasus-1.2 selection via TwelveLabs Playground, and get the index_id specifically for this application.
Clone the project from Github.
Do obtain the API KEY from the SambaNova Dashboard.
Create a .env file containing your TwelveLabs and SambaNova credentials.
Once you've completed these steps, you're ready to start developing!
Walkthrough for the Video2Game App
This tutorial focuses on how you can build the application which transforms the video content into the Interactive Learning App. The app uses Next.js for the frontend and Flask API with CORS enabled for the backend. We'll focus on implementing the core backend utility for the Video2Game application and setting up the application.
This tutorial focuses on using Fast Inference LLM for game code generation by accessing via SambaNova. Let's go over how easy it is to access. For detailed code structure and setup instructions, check the README.md on GitHub.
1 - To generate an interactive learning app from your already indexed TwelveLabs video
1.1 Config the essentials
All essential configurations are defined in config.py, which loads values from environment variables. This centralized configuration approach ensures that all variables are easily accessible and helps maintain a well-organized and easily manageable application setup.
backend/app/config.py (7-24 line)
class Config: ` # API credentials loaded from environment variables TWELVELABS_API_KEY = os.getenv("TWELVELABS_API_KEY") SAMBANOVA_API_KEY = os.getenv("SAMBANOVA_API_KEY") TWELVELABS_INDEX_ID = os.getenv("TWELVELABS_INDEX_ID") SAMBANOVA_BASE_URL=os.getenv("SAMBANOVA_BASE_URL") APP_URL = os.getenv("APP_URL", "http://localhost:8000") # Directory paths for file organization GAMES_DIR = "generated_games" # Directory to store generated game files CACHE_DIR = "game_cache" # Directory for caching game-related data INSTRUCTIONS_DIR = "instructions" # # Directory containing prompt files # SambaNova service configuration SAMBANOVA_BASE_URL = SAMBANOVA_BASE_URL SAMBANOVA_MODEL = "DeepSeek-V3-0324" # LLM text generation parameters GENERATION_TEMPERATURE = 0.1 GENERATION_TOP_P = 0.1 GENERATION_MAX_TOKENS = 16000 MIN_HTML_LENGTH = 7000
After setting up the credentials for TwelveLabs and SambaNova, the INSTRUCTIONS_DIR
is initialized. This directory contains all the predefined prompts used throughout the process. All generated interactive games are stored in the GAMES_DIR
and cached in the CACHE_DIR
.
The model used for code generation is DeepSeek-V3-0324
, configured with a temperature of 0.1
to reduce randomness, and a defined max_tokens
limit. If you wish to experiment, you can modify these parameters or switch to a different model within the configuration.
1.2 To GET the Indexes
Once the user connects their TwelveLabs API Key via the portal, the application uses it to fetch all indexes associated with their account. The response is then structured and sent to the frontend for display.
def get_indexes(self): try: print("Fetching indexes...") # Use direct HTTP request if not self.api_key: print("No API key available") return [] # To get the list of indexes url = "https://api.twelvelabs.io/v1.3/indexes" headers = { "accept": "application/json", "x-api-key": self.api_key } response = requests.get(url, headers=headers) if response.status_code == 200: data = response.json() result = [] # Structuring the relevant information needed for index in data.get('data', []): result.append({ "id": index['_id'], "name": index['index_name'] }) return result else: print(f"Failed to fetch indexes: Status {response.status_code}") return [] except Exception as e: print(f"Error fetching indexes: {e}") return []
1.3 To GET the videos, for the respective selected index_id
Once the user selects an index, the corresponding index_id
is captured. To generate the interactive learning application, a specific video must be selected from that index. Using the selected index_id
, the method here is called to retrieve all videos associated with that index.
def get_videos(self, index_id): try: # Use direct HTTP request if not self.api_key: print("No API key available") return [] # List of Videos info of that respective index id url = f"https://api.twelvelabs.io/v1.3/indexes/{index_id}/videos" headers = { "accept": "application/json", "x-api-key": self.api_key } response = requests.get(url, headers=headers) if response.status_code == 200: data = response.json() result = [] for video in data.get('data', []): system_metadata = video.get('system_metadata', {}) # Structuring the response to get relevant information to showcase on the frontend result.append({ "id": video['_id'], "name": system_metadata.get('filename', f'Video {video["_id"]}'), "duration": system_metadata.get('duration', 0) }) return result else: print(f"Failed to fetch videos: Status {response.status_code}") return [] except Exception as e: print(f"Error fetching videos for index {index_id}: {e}") return []
On selecting the video, the video_id
is been sent for further analyze process of that video content.
1.4 Analyze the video content with pegasus-1.2
The purpose of the analyze utility is to process the video content and generate meaningful instructions based on its context. These instructions serve as a foundation for building the interactive application and guide the subsequent code generation process. The prompt used for this analysis is predefined in the backend as a system prompt. Here, you can find the analyze prompt.
def analyze_video(self, video_id, prompt): try: # Send video analysis request to TwelveLabs API with custom prompt analysis_response = self.client.analyze( video_id=video_id, prompt=prompt ) return analysis_response.data except Exception as e: print(f"Error analyzing video {video_id}: {e}") raise e
The video_id
and the prompt are provided as parameters to self.client.analyze
, which processes and analyzes the video content to generate the instructional result.
1.5 Interactive game code generation using SambaNova
Once the analysis result is received, the code generation utility is triggered. In this step, the analysis output is passed as the user prompt, along with a predefined system prompt. The model used, `DeepSeek-V3-0324`, is the same one configured earlier in step 1.1.
The system prompt outlines the core requirements and expected structure of the output, ensuring the model understands the purpose and constraints of the task. Meanwhile, the user prompt incorporates the analyzed result along with additional detailed instructions regarding layout, design, color scheme, and formatting. Together, these prompts guide the model to generate a focused and well-structured interactive application.
```python
def generate_game_stream(self, system_prompt, user_prompt):
    print("Starting streaming generation...")
    if not self._test_api_connection():
        yield "Error: Sambanova API is not reachable"
        return
    try:
        start_time = time.time()
        response = self.client.chat.completions.create(
            model=current_app.config['SAMBANOVA_MODEL'],
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt}
            ],
            temperature=current_app.config['GENERATION_TEMPERATURE'],
            top_p=current_app.config['GENERATION_TOP_P'],
            max_tokens=current_app.config['GENERATION_MAX_TOKENS'],
            stream=True
        )
        print(f"API call initiated in {time.time() - start_time:.2f}s")
        chunk_count = 0
        total_content = ""
        last_chunk_time = time.time()
        for chunk in response:
            current_time = time.time()
            if current_time - last_chunk_time > 30:  # 30 second timeout
                print("No chunk received for 30 seconds, breaking...")
                break
            delta = getattr(chunk.choices[0], 'delta', None)
            content = getattr(delta, 'content', '') if delta else ''
            if content:
                chunk_count += 1
                total_content += content
                last_chunk_time = current_time
                # Print live streaming progress
                if chunk_count % 20 == 0:  # Print every 20 chunks
                    print(f"Stream: {chunk_count} chunks, {len(total_content)} chars")
                yield content
        print(f"Streaming completed: {chunk_count} chunks, {len(total_content)} total chars")
        print(f"Total streaming time: {time.time() - start_time:.2f}s")
    except Exception as e:
        error_msg = f"Streaming error: {str(e)}"
        print(error_msg)
        print(f"Error type: {type(e).__name__}")
        if hasattr(e, 'response'):
            print(f"Response status: {e.response.status_code}")
            print(f"Response text: {e.response.text}")
        yield error_msg
```
When you call SambaNova's API with `stream=True`, it sends back pieces of the generated content as they're created, and this function immediately passes them along to the user. The clever part is the built-in safety mechanisms: it tests the API connection first to avoid wasted calls, enforces a 30 second timeout to catch stuck connections, and logs detailed metrics every 20 chunks so you can monitor performance in real time.
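To make the consumption side concrete, here is a rough sketch of draining the generator, accumulating chunks the same way the pipeline route in the next step does (`system_prompt` and `user_prompt` are assumed to be loaded already):

```python
# Hypothetical consumption of the streaming generator
sambanova_service = SambanovaService()
streamed_html = ""
for chunk in sambanova_service.generate_game_stream(system_prompt, user_prompt):
    streamed_html += chunk            # accumulate the full HTML document
    print(chunk, end="", flush=True)  # or forward each chunk as an SSE event
```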
1.6 Function calling pipeline for the generation
In this section, we walk through the full pipeline of the process, where the previously discussed utility functions are called and executed. It begins by validating the presence of the `video_id` and the associated API key, both of which are required for the analysis. Once confirmed, the analysis prompt is loaded and the `analyze_video` utility is triggered to generate the instructional result. Following that, the SambaNova system prompt is loaded and used to initiate the code generation process. The result from SambaNova is streamed to the frontend in chunks, allowing users to see the live generation of the application.
Once the generation is complete and streaming ends, the utility function `file_service.process_html_content` extracts the HTML content from the response. This extracted HTML is then saved as a separate file using `file_service.save_html_file`. Along with the saved HTML path, other relevant details, such as `video_id`, `video_analysis`, and associated metadata, are stored in a unified JSON file for easy retrieval later.
```python
@api_bp.route('/twelvelabs/regenerate', methods=['POST', 'GET'])
@handle_errors
def twelvelabs_regenerate():
    from app.services.sample_games_service import SampleGamesService
    from app.services.twelvelabs_service import TwelveLabsService
    from app.services.sambanova_service import SambanovaService
    from app.services.file_service import FileService

    def event_stream(video_id, api_key):
        # Initialize service for managing generated games, including file handling
        sample_games_service = SampleGamesService()

        # Validate required parameters before processing
        if not video_id:
            yield f"event: error\ndata: video_id is required\n\n"
            return
        if not api_key:
            yield f"event: error\ndata: API key is required\n\n"
            return
        try:
            # Step 1 - Analyze video content using TwelveLabs
            yield f"event: progress\ndata: Starting analysis with TwelveLabs...\n\n"

            # Initialize TwelveLabs service with provided API key
            twelvelabs_service = TwelveLabsService(api_key=api_key)

            # Get analysis prompt template and analyze the video
            analysis_prompt = current_app.prompt_service.get_prompt('analysis')
            video_analysis = twelvelabs_service.analyze_video(video_id, analysis_prompt)
            yield f"event: analysis\ndata: {video_analysis}\n\n"

            # Step 2 - Generate game code using SambaNova AI
            yield f"event: progress\ndata: Generating game with SambaNova...\n\n"
            sambanova_service = SambanovaService()

            # Prepare prompts for game generation using video analysis
            game_generation_prompt = current_app.prompt_service.get_prompt(
                'game_generation', video_analysis=video_analysis)
            system_prompt = current_app.prompt_service.get_prompt('system')

            try:
                # Stream game generation in real-time chunks
                streamed_html = ""
                # Process each chunk of generated content as it arrives
                for chunk in sambanova_service.generate_game_stream(system_prompt, game_generation_prompt):
                    if chunk:
                        streamed_html += chunk
                        # Send each chunk to client for real-time display
                        yield f"event: game_chunk\ndata: {chunk}\n\n"

                # Step 3 - Save and process the complete generated game
                if streamed_html:
                    # Process and clean up the generated HTML content
                    file_service = FileService()
                    html_content = file_service.process_html_content(streamed_html)
                    # Save HTML file to disk with video ID as identifier
                    html_file_path = file_service.save_html_file(html_content, video_id)
                    # Create game metadata and save to file storage
                    game_data = sample_games_service.create_game_data(
                        video_id,
                        video_analysis,
                        html_file_path,
                        twelvelabs_video_ids=[video_id],
                        youtube_url='',
                        video_title=f'TwelveLabs Video {video_id}',
                        channel_name='TwelveLabs',
                        view_count='Generated'
                    )
                    sample_games_service.save_game(game_data)
                else:
                    # No content was generated - report error to client
                    yield f"event: error\ndata: No game content was generated\n\n"
                    return
            except Exception as e:
                print(f"Error in SambaNova streaming: {str(e)}")
                yield f"event: error\ndata: Error streaming from SambaNova: {str(e)}\n\n"
                return

            yield f"event: done\ndata: [DONE]\n\n"
        except Exception as e:
            print(f"Error in TwelveLabs processing: {str(e)}")
            yield f"event: error\ndata: Error processing TwelveLabs video: {str(e)}\n\n"

    # Handle different HTTP methods for parameter extraction
    if request.method == 'GET':
        video_id = request.args.get('video_id')
        api_key = request.args.get('api_key')
    else:
        data = request.get_json()
        video_id = data.get('video_id')
        api_key = request.headers.get('X-Twelvelabs-Api-Key')

    if not video_id:
        return jsonify({"error": "video_id is required"}), 400

    return Response(stream_with_context(event_stream(video_id, api_key)), mimetype='text/event-stream')
```
As soon as the generation process is complete and file handling is done, the generated application and its code are rendered on the portal by loading them from the stored JSON.
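If you want to smoke-test the SSE endpoint without the frontend, a rough sketch with `requests` is shown below; the `/api` prefix depends on how `api_bp` is registered, and the IDs are placeholders:

```python
import requests

# Hypothetical smoke test for the streaming endpoint defined above
resp = requests.get(
    "http://localhost:5000/api/twelvelabs/regenerate",  # prefix assumed
    params={"video_id": "67a9...", "api_key": "tlk_..."},
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if line:  # SSE frames arrive as "event: ..." and "data: ..." lines
        print(line)
```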
2 - Function calling pipeline for the generation via Video URL
For the alternative approach, using a YouTube URL to generate the interactive learning app, the overall pipeline remains largely the same. The key difference lies in the initial steps: the video is first processed and stored locally or on the server, and then indexed using the TwelveLabs API. You can incorporate a utility that handles downloading videos from any supported platform URL, after which the essential indexing step is applied.
To understand how the indexing is performed, refer to the following code snippet:
```python
def _index_video_file(self, file_path, index_id):
    try:
        # Create a new indexing task for the video file
        task = self.twelvelabs_service.client.task.create(
            index_id=index_id,
            file=file_path,
        )
        print(f"Indexing task created: {task.id}")

        max_wait_time = 900
        start_time = time.time()
        while time.time() - start_time < max_wait_time:
            task = self.twelvelabs_service.client.task.retrieve(task.id)
            # Handle different task status outcomes
            if task.status == "ready":
                print(f"Indexing completed successfully. Video ID: {task.video_id}")
                # Indexing completed successfully - video is now searchable/analyzable
                return task.video_id
            elif task.status == "failed":
                print(f"Indexing failed: {task.status}")
                return None
            elif task.status in ["processing", "pending"]:
                elapsed = int(time.time() - start_time)
                print(f"Indexing in progress... Status: {task.status} (Elapsed: {elapsed}s)")
                time.sleep(15)
            else:
                print(f"Unknown status: {task.status}")
                time.sleep(10)

        print("Indexing timed out")
        return None
    except Exception as e:
        print(f"Error during video indexing: {str(e)}")
        return None
```
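A hedged sketch of how this helper might be invoked once the download step completes (the `pipeline` object, file path, and `index_id` are placeholders for illustration):

```python
# Hypothetical call after the YouTube video has been downloaded locally
video_id = pipeline._index_video_file(
    file_path="/tmp/downloads/lecture.mp4",  # placeholder local path
    index_id="65f0...",                      # placeholder target index
)
if video_id:
    # The video is indexed and ready for analysis and game generation
    print(f"Ready for analysis: {video_id}")
```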
The complete process pipeline, from YouTube URL to interactive app generation, can be found here.
The regenerate functionality for videos from YouTube URLs works exactly the same as described earlier, since the video is already indexed and ready for re-analysis and regeneration.
Here is a quick glimpse of how the portal enables interaction, helping users better understand video content while offering a new way to experience, interact with, learn from, and play with the video.

More Ideas to Experiment with the Tutorial
Understanding how video content transforms into interactive experiences opens the door to more engaging and adaptive learning. Here are some experimental directions you can explore with the analyze engine and SambaNova:
🎮 Faster Iteration via Feedback Loops — Incorporate user feedback directly into the game code generation loop powered by SambaNova.
🔍 Deep Video Content Research — Use fast inference models to conduct in-depth research on the video analysis output. Automatically extract key insights, identify references, and surface relevant citations or external learning materials.
🧠 Video Agentic AI — Integrate a reasoning-based inference model as part of an agentic pipeline. This allows the system to plan, coordinate, and utilize the right tools for transforming and editing video content.
Conclusion
This tutorial explores how video understanding can redefine the way we learn and interact with video content. By combining TwelveLabs for video analysis and SambaNova for fast, code-based game generation, we've built a system that transforms passive educational videos into interactive, game-based learning experiences. Video2Game goes beyond traditional video playback by bridging multimodal understanding and playful learning to create immersive and personalized education.
Additional Resources
Learn more about the video analysis engine, Pegasus-1.2. To explore TwelveLabs further and enhance your understanding of video content analysis, check out these resources:
Explore More Use Cases: Visit the SambaNova model hub to explore the various models and learn how to implement similar workflows tailored to your business needs.
Join the Conversation: Share your feedback on this integration in the TwelveLabs Discord.
Explore Tutorials: Dive deeper into TwelveLabs capabilities with our comprehensive tutorials.
We encourage you to use these resources to expand your knowledge and create innovative applications using TwelveLabs video understanding technology.