CASE STUDY

How AffiliateNetwork Reviews Thousands of Creator Posts in Seconds, Not Weeks

AffiliateNetwork connects leading consumer apps and brands with 60,000+ creators producing short-form organic marketing ads, processing thousands of creator videos and millions of views daily.

By leveraging TwelveLabs’ video understanding models, AffiliateNetwork can now verify creator content against brand guidelines in seconds rather than weeks.


The Problem

The challenge? Verifying that creator content meets brand guidelines.

UGC ads aren't movies. They're 10 to 45 seconds of creativity: quick cuts, overlays, voiceovers, memes, flashing logos. Traditional general-purpose video AI pipelines were not built for high-tempo UGC, multimodal overlays, and fast iteration cycles, and simply couldn't keep up.

"As our platform scaled to tens of thousands of posts per day, we realized our existing AI content-review pipeline was not delivering the throughput or video-understanding accuracy we needed. We knew we shouldn't accept the status quo and looked for a better solution."

Sean Kim, Head of AI & ML at AffiliateNetwork

What Failed

General-Purpose AI Models: frame-by-frame vision + audio transcription

  • Too slow (minutes vs. seconds)

  • Missed logos, context, scene changes

  • Couldn't fully grasp multimodal content

  • No fast iteration

The issue: Video treated as images + audio, not a unified whole.

Why TwelveLabs

Sean researched TwelveLabs' architecture and found something different: video treated as video, not stacked frames.

Pegasus: Low-latency temporal reasoning that localizes events and actions across a video. This let the team analyze creator content for key themes, topics, and trends, and pinpoint specific details and objects, significantly streamlining the review process.

“What stood out was that TwelveLabs actually understands video as a unified fusion of media, rather than treating it as separate modalities.”

The Solution: AI Post Verifier in 4 Steps

  1. Index once → Marengo creates embeddings in a shared vector space

  2. Natural-language rules → "Show logo for 2+ seconds" or "must show app in a positive light"

  3. Instant results → Pass/fail, timestamps, explanations

  4. Real-time iteration → Campaign managers refine rules in seconds
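In practice, a rule like "show logo for 2+ seconds" reduces to checking timestamped detections against a duration threshold and returning a verdict with the supporting segments. A minimal sketch of that pass/fail logic, assuming the model has already localized logo appearances (the `Detection` shape and `check_logo_rule` helper here are hypothetical illustrations, not the TwelveLabs API):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A timestamped on-screen appearance of a labeled element."""
    label: str
    start: float  # seconds
    end: float    # seconds

def check_logo_rule(detections, label="brand_logo", min_seconds=2.0):
    """Pass/fail check: is the labeled element on screen for at
    least `min_seconds` in total? Returns the verdict plus the
    timestamped segments that justify it."""
    segments = [(d.start, d.end) for d in detections if d.label == label]
    total = sum(end - start for start, end in segments)
    return {
        "passed": total >= min_seconds,
        "total_seconds": total,
        "segments": segments,
    }
```

The timestamps in the result are what lets a campaign manager jump straight to the relevant moments instead of re-watching the whole post.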

"After indexing, querying is instant. Campaign managers iterate in real time."
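Querying stays instant because text and video land in the same embedding space: once a video is indexed, a rule phrased in natural language can be embedded and matched against pre-computed segment embeddings with nothing heavier than a similarity lookup. A toy sketch of that lookup (the 3-dimensional embeddings below are made-up illustrations; real Marengo embeddings come from the API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_segments(query_emb, segment_embs, k=2):
    """Rank indexed video segments against a query embedded in the
    same vector space; most similar segments first."""
    ranked = sorted(segment_embs.items(),
                    key=lambda kv: cosine(query_emb, kv[1]),
                    reverse=True)
    return [seg_id for seg_id, _ in ranked[:k]]
```

Because the expensive step (embedding the video) happens once at index time, refining a rule only re-runs the cheap comparison, which is why iteration feels real-time.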

Impact

Speed

  • Videos verified in seconds

  • Rules tuned without re-watching

  • Faster creator feedback

Scale

  • Lean team serves 60k+ creators and billions of views

  • One system for all clients

  • Bootstrap-friendly

Accuracy

  • Reliable logo detection in compressed UGC

  • Understands true messaging, humor, and creativity

  • Handles multiple formats such as split-screens, voiceovers, bet slips, and more

What's Next

Rolling out across:

  • More formats (texting stories, bet slips, split-screens)

  • Content discovery and pattern analysis

  • Automated creator analysis

Want to learn how we can help your business?

Reach out to our sales team or try our playground.