AssemblyAI, a San Francisco-based developer of audio intelligence APIs, has raised $30M in Series B funding.
Insight Partners led the round, with participation from existing investors Accel and Y Combinator.
The company will use the funds to expand its AI research team, which includes researchers from DeepMind, Google Brain, Meta AI, BMW, and Cisco, and to build out its AI infrastructure and accelerate its research.
AssemblyAI, led by founder and CEO Dylan Fox, builds state-of-the-art AI models for transcribing, understanding, and analyzing audio and video data, using the same AI technology behind popular models such as DALL-E 2, GPT-3, and Google’s LaMDA: Transformers, large language models, enormous GPU clusters, and large datasets.
The company’s APIs currently process millions of audio/video files every day for over 1,000 paying clients, from startups such as CallRail, Algolia, Veed, and Fathom to large corporations including the WSJ, NBC Universal, and Spotify.
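To illustrate the kind of API integration described above, here is a minimal sketch of how a developer might submit an audio file to AssemblyAI's v2 REST transcription endpoint. The endpoint URL, `audio_url` field, and `authorization` header follow AssemblyAI's public documentation at the time, but treat them as assumptions; the API key and audio URL are placeholders.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; a real AssemblyAI key is required

def build_transcript_request(audio_url: str) -> urllib.request.Request:
    """Build (but do not send) a transcription request for AssemblyAI's v2 API."""
    payload = json.dumps({"audio_url": audio_url}).encode("utf-8")
    return urllib.request.Request(
        "https://api.assemblyai.com/v2/transcript",  # assumed endpoint, per public docs
        data=payload,
        headers={"authorization": API_KEY, "content-type": "application/json"},
        method="POST",
    )

# Hypothetical audio file URL for illustration only.
req = build_transcript_request("https://example.com/meeting.mp3")
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would return a transcript job ID that the client then polls for results; the sketch stops short of the network call so it runs without credentials.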
In the last six months, the company has shipped major upgrades to its Auto Chapters and Summarization models, Real-Time Transcription models, and Content Moderation models, along with countless other product updates, and has added support for 15 new languages, including Spanish, German, French, Italian, Hindi, and Japanese.
Four months ago, AssemblyAI announced a $28 million Series A round led by Accel, with participation from Y Combinator, Stripe founders John and Patrick Collison, Nat Friedman, and Daniel Gross.