Inside the Ambistream MVP: A First Look at Multilayered Social TV
We've been building toward a simple idea: people want to curate digital environments that help them feel however they want to feel. There's usually a giant TV or projector around, and it should be as easy to throw a visual mix on those screens as it is to control music. We've done plenty of research and thrown enough parties to know what works.
We're confident that giving people access to layers, and the ability to switch whole channels or individual pieces in real time, is going to be expected in the near future. As a startup, we've had to prioritize carefully, so we've been updating and testing our MVP to include these features. We just hit a new milestone in time for our LGBTQ+ party in Lisbon during Web Summit, and we pulled it off just barely.
Most mainstream streaming devices like Chromecast or Apple TV don't understand the idea of sending more than one layer of media to the TV at a time. Our tech team built a way to broadcast and receive the layers so you can hit buttons on the screen and turn things on and off in real time. Like most technical things, this was working in one location but breaking in Lisbon just a few days before the event. It turned out to be a hardware issue, and the easiest fix was to buy a used streaming device instead of spending thousands of dollars fixing the content.
It still wasn't working until two hours before the event. But the moment I was able to cast the interactive layers onto my projector and the TV at the venue was glorious. It felt like the clouds parted and a new world of possibility opened up. What we were really testing at the event was how it felt when a user picked up the phone and could switch channels and toggle layers in real time. It's not something they expect to be able to do, but once they do it, it feels good and "like how it should be."
This post walks through what we have today, what we just unlocked, and the future visualizations we haven't fully built yet but will be testing soon.
The MVP Today: Casting with Real-Time Layer Toggling
Our MVP runs on an iPhone app. You open it, step into channels, and cast to any nearby screen (Chromecast works best right now, with Apple TV and smart TVs coming soon). You treat the phone like a remote, hitting channel up and channel down to jump between channels. Our current lineup is a mix of yoga, art, nature, and dance content, but it will soon work for all sorts of streaming options. You scan a QR code on the TV to access more info about the creators and any sponsored content.
The Big Breakthrough: Toggle Layers of Media
Most casting systems freak out when you try to play multiple types of media together. They want one audio file or one video file and freeze when you try to change layers in real time. That limitation belongs to an older world.
Our team created a new broadcaster and receiver architecture that handles many layers at once. We can stack animated GIFs, looping videos, video on top of video, graphics, real-time animations, logos, and QR codes. You can turn layers on or off instantly. This unlocks the future version of Ambistream where your content behaves more like a programmable set design than a passive stream.
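To make that concrete, here's a minimal sketch of what a receiver-side layer stack and a toggle message might look like. The names and shapes are illustrative assumptions for this post, not our actual protocol.

```typescript
// Illustrative sketch of a receiver-side layer stack and toggle message.
// Names and shapes are assumptions for explanation, not Ambistream's actual protocol.

type LayerKind = "video" | "gif" | "graphic" | "animation" | "logo" | "qr";

interface Layer {
  id: string;
  kind: LayerKind;
  src: string;      // media URL for this layer
  zIndex: number;   // stacking order on screen
  visible: boolean; // toggled from the phone
}

interface ToggleMessage {
  type: "TOGGLE_LAYER";
  layerId: string;
  visible: boolean;
}

// The receiver holds the current stack and applies toggle messages as they arrive.
class LayerStack {
  private layers = new Map<string, Layer>();

  load(layers: Layer[]): void {
    for (const layer of layers) this.layers.set(layer.id, layer);
  }

  apply(msg: ToggleMessage): void {
    const layer = this.layers.get(msg.layerId);
    if (layer) layer.visible = msg.visible;
  }

  // Layers to draw, bottom to top, skipping anything toggled off.
  visibleLayers(): Layer[] {
    return [...this.layers.values()]
      .filter((l) => l.visible)
      .sort((a, b) => a.zIndex - b.zIndex);
  }
}
```

In this sketch the phone simply sends a ToggleMessage over the casting channel and the screen re-renders whatever visibleLayers() returns, which is what makes the on/off feel instant.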
Note: we're adding more audio overlays soon, but at parties and events we've been partnering with DJs who want to control the audio themselves, and we don't blame them.
Behind the Scenes: Dashboards and Vibecoding
Our backend dashboard lets us upload and configure content quickly. We can drop in mixes, assign them to channels, and push updates in real time. Even in this early stage the workflow is smooth and the results show up immediately on connected screens.
I've had sketches and notes about dashboards scattered across various documents and slides for a while, and the work kept getting deprioritized for other things. I attended a few workshops and classes for different AI coding environments, each with a different focus. Sometimes I'm doing other work in the background and absorbing the ideas; other times I'm actively building things I've been doing manually for a while.
I started with our Data Mix page, the one that interprets the metadata of the creators, the mix, the descriptions, the comments, and, most importantly, the attribution of the video and audio contributors. I was able to specify that I needed a sponsored section with calls to action. That worked surprisingly well, so I had the tool map out a more WYSIWYG (what you see is what you get) editor with drag-and-drop elements.
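For a rough sense of the kind of record the Data Mix page works with, here's an illustrative shape. The field names are hypothetical, not our actual schema.

```typescript
// Hypothetical shape of a mix record on the Data Mix page.
// Field names are illustrative, not the production schema.

interface Contributor {
  name: string;
  role: "video" | "audio" | "graphics";
  attributionUrl?: string; // where the credit links back to
}

interface SponsoredSection {
  sponsor: string;
  callToAction: string; // e.g. "Scan to see the full class"
  linkUrl: string;
}

interface MixRecord {
  mixId: string;
  title: string;
  description: string;
  channel: string;
  contributors: Contributor[];  // attribution for video and audio contributors
  comments: string[];
  sponsored?: SponsoredSection; // optional sponsored block with calls to action
}
```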
We designed early versions of the vibecoding dashboard based on patent work, developer workshop sessions, and recent vibecoding events. These prototypes show how people can configure layers across devices (our interns actually created that interface initially), with more WYSIWYG interfaces for media logic and interactive behavior. The backend isn't fully wired to it yet, but it gives us a blueprint for planning, budgeting, and UX decisions before code gets written.
What We're Tracking: Metrics and Attribution
The MVP already tracks core activity: visits, time spent, layer changes, and channel swaps. These metrics will evolve into revenue insights for ad impressions, premium subscriptions, partner earnings, provenance and attribution, and creative payouts. Right now we're still in beta, so the revenue modeling is hypothetical, but the pipes exist and the logic is ready for expansion.
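For a sense of what those pipes carry, here's a sketch of the activity events, with illustrative names rather than the real schema.

```typescript
// Sketch of the core activity events the MVP tracks today.
// Event names and fields are illustrative, not the production schema.

type ActivityEvent =
  | { type: "visit"; channel: string; at: Date }
  | { type: "time_spent"; channel: string; seconds: number }
  | { type: "layer_change"; layerId: string; visible: boolean; at: Date }
  | { type: "channel_swap"; from: string; to: string; at: Date };

// The same pipe can later feed revenue rollups (impressions, subscriptions,
// partner earnings) without changing how events are emitted.
function record(event: ActivityEvent, sink: (e: ActivityEvent) => void): void {
  sink(event); // e.g. push onto an analytics queue
}
```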
The dashboard includes components for tracking media origins, usage, permissions, licensing, payouts, and advertiser data. This supports fractionalized licensing and allows communities, creators, and rights holders to participate in the ecosystem without being swallowed by black-box platforms.
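A simplified example of what a provenance and licensing record could look like, with fields that are assumptions for illustration rather than the dashboard's actual data model:

```typescript
// Hypothetical provenance and licensing record; fields are illustrative only.

interface PayoutSplit {
  party: string; // creator, community, or rights holder
  share: number; // fraction of earnings; all shares for a media item sum to 1.0
}

interface MediaRights {
  mediaId: string;
  originCreator: string;
  licenseType: "exclusive" | "fractional" | "open";
  permittedUses: string[];   // e.g. ["channels", "events", "remix"]
  payoutSplits: PayoutSplit[];
  advertiserTags?: string[]; // sponsor data tied to this media
}
```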
The Vision and What's Next
We envision a world where people can turn audio and visual layers of media on and off in real time. The content can be created by the original person who made the media, by your friends, or by AI that decides what would be the most interesting thing for you (while respecting your personal preferences and privacy settings, of course). You can put new commentary, graphics, scribbles, and diagrams over all of it. Switch out the music or make the teacher be quiet.
This means we're entering a fun stage where we get to play more with the media and UX, both to figure out what users respond to and to express the media in new ways. We're experimenting with outside tools for video analysis and internal tools for AI content generation. We own a patent for the AI content producer; the initial UX is rough, but the concept is strong. The goal is to build a flow where creators take a single input and generate many variations that play across Ambistream channels. This is where multilayered media becomes powerful.
The pre-seed round will go toward multilayered media infrastructure, AI content tools, dashboards, UX/UI, analytics, licensing systems, and partner integrations. The stack is ambitious. The early architecture is working. The next stage is compound growth.
Want to Test the Build?
If you have an Apple developer account, we can send you a login to test the MVP. We're looking for people who want to help shape an entirely new category of social TV, so fill out the form on the contact page if you want to be a part of that soon.