Project History

The project was first open sourced back in 2006, the same year Google purchased YouTube. It was developed by me, Doug Anarino, initially for a large broadcaster in India that wanted small local businesses to be able to edit ads over unreliable internet connections.

Over several years, the system was repurposed for many clients, all of whom helped focus the UI and features for a general web audience. Brands like MTV and Kawasaki deployed it to run ad contests. The Museum of the Moving Image based its Living Room Candidate site on it to edit political ads. Several startups built elaborate script editing applications around it. Every new feature we developed was rolled back into the open source project.

ES4

The editor was initially coded in ES4 (aka ActionScript), with PHP scripts served through Apache that constructed FFmpeg commands. Since it was a Flash application, it supported custom fonts, real-time audio mixing, and advanced compositing effects at a time when these were unavailable natively in browsers.

From the start, the editor was designed to work within low bandwidth constraints, transcoding video into a separate audio file and low-resolution frame images for use in the browser. On the server, the original video was used to encode the final output at high resolution for broadcast.
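As a rough sketch of that split (the filenames, frame rate, and sizes here are illustrative assumptions, not the project's actual pipeline), a server-side script might construct two FFmpeg commands per upload, one for the browser audio track and one for the low-resolution frame images:

```javascript
// Illustrative only: builds the FFmpeg commands a server might run to prepare
// a video for low-bandwidth browser editing. Filenames, frame rate, and
// dimensions are assumptions, not the project's actual values.
function previewCommands(input) {
  return [
    // Extract the audio track alone (-vn drops video) for browser playback.
    ['ffmpeg', '-y', '-i', input, '-vn', '-acodec', 'libmp3lame', 'audio.mp3'],
    // Extract low-resolution frame images (10 per second, 320px wide) for scrubbing.
    ['ffmpeg', '-y', '-i', input, '-r', '10', '-vf', 'scale=320:-2', 'frames/%05d.jpg'],
  ];
}

console.log(previewCommands('upload.mp4').map(args => args.join(' ')).join('\n'));
```

The full-resolution original stays on the server, so only the lightweight audio and frame assets ever cross the unreliable connection.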

ES5

The editor was ported to ES5 and the AngularJS framework in 2014, once the Web Audio API was widely supported in browsers. The backend was rewritten in Ruby and made available as an AMI in the AWS Marketplace.

The server was based on NGINX and tightly integrated with AWS services like S3, SQS, and EC2. A Docker image was made available for local development. This marked the start of the API, with initial support for encode and transcode functionality.

ES6

In 2022, the editor was ported to TypeScript (targeting ES6) and the React framework. The backend was rewritten in Node.js for hosting through Express. Support was added for a database, and the API was expanded to include media-related endpoints.

Masking of clips was introduced, along with a simpler tweening mechanism for dynamically changing position, scale, and color. A new vector-first approach was taken with the introduction of SVG shapes and a refactoring of font handling. Dramatic performance improvements were achieved in the client by switching from Canvas-based to SVG-based previews.

ES2020

In 2024, the editor was ported to ES2020 and the Lit framework (Web Components). TypeScript was abandoned in favor of JSDoc-commented JavaScript to avoid a bundling step. Extensive support for CSS variables was added so the timeline reflows dynamically without JavaScript, even when zooming. The Intersection Observer API was used to lazy-load clips and improve performance.

A powerful plugin architecture was introduced, with the most commonly overridden methods on both the client and server migrating to plugins. These are now packaged separately, so the server library has no dependencies, while the client relies only on lit.
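A minimal sketch of the registry pattern behind such an architecture might look like the following (the names `registerPlugin`, `plugin`, and the `decode` example are illustrative assumptions, not the project's actual API):

```javascript
// Illustrative plugin registry: separately packaged plugins register
// themselves by type, and the core library looks them up at call time
// instead of hard-coding the implementation.
const plugins = new Map();

function registerPlugin(type, implementation) {
  plugins.set(type, implementation);
}

function plugin(type) {
  const found = plugins.get(type);
  if (!found) throw new Error(`no plugin registered for ${type}`);
  return found;
}

// A hypothetical decode plugin, shipped as its own package.
registerPlugin('decode', { decode: file => ({ file, probed: true }) });

console.log(plugin('decode').decode('video.mp4'));
```

Because the core only depends on the registry lookup, the library itself can stay dependency-free while each plugin pulls in whatever it needs.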

The data model was extended to separate media from its underlying resources, with metadata migrating to a dedicated resource type. The API was correspondingly extended to include resource-related endpoints. The AWS Marketplace AMI was deprecated in favor of a Docker image for use with ECS, Lambda, etc.

Additionally, great effort has gone into making the codebase AI-ready by adopting strict naming conventions and more common design patterns. In many cases the names and patterns were actually suggested by AI, and most LLMs now seem to grok the codebase well.