The browser has a powerful rendering engine built in for displaying web pages written in HTML and CSS. It would be great to reuse that awesome code for styling videos.
I wondered if it was possible, and my short answer is this: Brisa Videos.
I've always been fascinated with video and audio tools. The tools look confusing but technically interesting, they make me use a creative part of my brain that's usually dormant, and what comes out of these tools is valuable and useful. There's also a weird dimensional aspect to them - in the case of video, you have a picture, and you need to describe how it changes over time while only seeing a single slice.
I've used kdenlive a lot, and I love it! I've created promo videos based on screen recordings, faded images in and out and made basic transitions. It's a fun process despite some bugs, and it always engages my mind.
But there's a problem: I hate working with kdenlive title screens! It feels so limiting. One of the main problems is that I know a decent amount of CSS, and when I try to add overlays to a video, they never seem to look very good. A secondary problem is the lack of options for animating those elements. With kdenlive, I basically only zoomed elements in to show them, then faded them out.
I want a tool like kdenlive, but with a powerful way to style and display things other than the video I import.
I decided that it was time to try to see how I could use (or abuse) the web browser to generate something other than a web page.
So I made a list of questions to find answers to:
- Can you capture HTML elements as an image?
- Are there tools to animate elements and a way to capture their state at a given time?
- Can the browser do all of this quickly?
- Is there a UI that can make sense of elements changing over time?
I did some preliminary research on each of these topics.
1. Can you capture HTML elements as an image?
A little digging revealed that you can render SVGs (scalable vector graphics) to a canvas element, and that there's a way of embedding HTML inside an SVG. This felt totally backwards to me! I need to generate HTML, then embed it in an SVG, which I can then draw on a canvas?! Oh yeah, there are also tons of limitations on the type of content and where it comes from. There are good reasons for those restrictions, but it's still a lot to overcome.
So this question was left a bit unanswered at the beginning. I knew I'd be able to get something working, but I wasn't sure how much.
But I was undeterred. I wanted HTML/CSS-based animations to work badly enough that I assumed I'd get what I needed out of this in the end.
2. Are there tools to animate elements and a way to capture their state at a given time?
This was a bit easier than my first question, although there were some question marks in my head.
I've done some basic CSS animation, and it was pretty fun and easy, but my gut told me this wouldn't be a viable option - there's no way to tell the browser to animate something a little bit, then pause.
But after a little phone boredom while out one day, I stumbled on AnimeJS. I was immediately impressed with the site, the fun documentation, and the powerful features. I wasn't sure it had everything I needed, but it was promising. A little more digging and I found a timeline feature where I could explicitly set the start times and durations of animations, and a control widget that lets you seek to any point in an animation.
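The combination of explicit offsets and seeking is what makes AnimeJS a good fit here. A minimal sketch of that idea, assuming the AnimeJS v3 API (the selectors and timings are made up for illustration):

```javascript
// Sketch of an AnimeJS v3 timeline with explicit start offsets (assumes
// anime.js is loaded and a DOM element matching '.title' exists).
import anime from 'animejs';

const tl = anime.timeline({ autoplay: false, easing: 'linear' });

tl.add({ targets: '.title', opacity: [0, 1], duration: 500 }, 0);     // fade in at t=0ms
tl.add({ targets: '.title', translateX: 200, duration: 1000 }, 500);  // slide at t=500ms
tl.add({ targets: '.title', opacity: [1, 0], duration: 500 }, 1500);  // fade out at t=1500ms

// Instead of playing live, a frame-by-frame renderer can jump to each
// frame's timestamp and snapshot the DOM in that state.
const fps = 30;
const durationMs = 2000;
for (let frame = 0; frame <= (durationMs / 1000) * fps; frame++) {
  tl.seek((frame / fps) * 1000); // seek() takes a time in milliseconds
  // ...capture the current frame here...
}
```

Because `autoplay` is off, nothing moves until `seek()` is called, which is exactly the "animate a little bit, then pause" behavior plain CSS animations don't offer.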
Great! That question is answered!
3. Can the browser do all of this quickly?
Honestly, I just assumed the answer was "yes, quickly enough." AnimeJS was already animating complex elements and styles quickly, the seek tool seemed fast, and I've seen great design UIs built in the browser. Fingers crossed - let's hope this is fine!
4. Is there a UI that can make sense of elements changing over time?
Honestly, I'm not sure I've answered this question. It makes sense to me, but I know how I wrote it, and my bad UIs always make sense to me ;).
But as an avid user of kdenlive, I already had a sense of how the UI might work. You have a preview window that shows how elements look, and then a timeline below it with a bar to seek to a specific time or play it back. Below that, a list of elements with little highlighted areas where changes are supposed to take place.
The big question for me was: can I make it easy to add animation areas and let the user change how the element looks at that time?
I think I managed to do that in the end.
Again, you can try out the end result: Brisa Videos.
So I got to work, built a little VueJS app, struggled through a simple timeline, and then a way to create elements. I added a property editor so you could type in some text, add CSS properties, and it would show them on-screen. Next, I added a way to choose an image for an element.
Cool. A very basic (and very hard-to-use) element editor.
Next, I created a test web page that builds an SVG using foreignObject tags to embed HTML, then converted my UI preview layout to SVG to make sure I could get that working. Check!
Ok, now let's try getting the SVG onto a canvas - once that's done, a video should be easy-ish. I looked at some hack-ish samples of rendering SVGs onto a canvas and was happy to see it looking great in a test (after debugging issues with drawImage()). Sweeet.
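The two steps above can be sketched together. This is a minimal version of the technique, not the app's actual code - the function names are mine, and the `xmlns` attributes are the part that's easy to get wrong:

```javascript
// Sketch: wrap arbitrary HTML in an SVG <foreignObject> so the browser's
// layout engine renders it, then paint the result onto a canvas.

function buildSvgForHtml(html, width, height) {
  // The xmlns on both the <svg> and the wrapping <div> are required,
  // or the browser will refuse to render the image.
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
         `<foreignObject width="100%" height="100%">` +
         `<div xmlns="http://www.w3.org/1999/xhtml">${html}</div>` +
         `</foreignObject></svg>`;
}

// Browser-only part: load the SVG as an image, then drawImage() it.
function renderHtmlToCanvas(html, canvas) {
  return new Promise((resolve, reject) => {
    const svg = buildSvgForHtml(html, canvas.width, canvas.height);
    const url = URL.createObjectURL(new Blob([svg], { type: 'image/svg+xml' }));
    const img = new Image();
    img.onload = () => {
      canvas.getContext('2d').drawImage(img, 0, 0);
      URL.revokeObjectURL(url);
      resolve(canvas);
    };
    img.onerror = reject;
    img.src = url;
  });
}
```

Note the content restrictions mentioned earlier: external resources referenced from inside the foreignObject generally won't load, which is where the base64-inlining pain discussed later comes from.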
The next challenge was to start animating. First, I struggled with creating "animation frames" (I think I need a better word for this): handling mouse click events, detecting where the click landed, mapping that to a time in the video, and then displaying it as a box. After getting this working, I reused/abused my style editor to make changes to values at a given time.
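The click-to-time mapping itself is just a ratio. A minimal sketch, with a hypothetical helper name (in the app, `trackRect` would come from `getBoundingClientRect()` on the timeline element):

```javascript
// Map a click's x position on the timeline track to a video timestamp.
// trackRect is a plain object here so the math is easy to test.
function clickToTime(clientX, trackRect, durationMs) {
  // Clamp so clicks just outside the track don't produce out-of-range times.
  const x = Math.min(Math.max(clientX - trackRect.left, 0), trackRect.width);
  return (x / trackRect.width) * durationMs;
}
```

For example, a click at `clientX = 150` on a track starting at `left: 50` with `width: 400` over a 10-second video maps to 2500 ms, a quarter of the way in.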
I'll spare some of the gory details, but connecting it to AnimeJS was mostly straightforward despite some bugs. So then I connected the play and seek buttons to AnimeJS calls, and wow! There was finally a moving image on the screen.
This process took about a week of coding. But here it was: a very rough (and difficult) tool that would let you add elements, move them around, and change a bunch of random CSS properties!
Since I had spent some time testing rendering, now I just had to connect it to AnimeJS and get a new snapshot after each seek.
It worked perfectly! So how do I chain all of these canvas images into a video file? I assumed this would have to be done on a backend (and that's still true for other browsers), but I found a cool little project called webm-writer-js that works in Chrome. Create a webm video object, give it some options, tell it the frame rate, then call videoWriter.addFrame(mycanvas) after each frame is drawn. Call complete() at the end, set the href of an a element to the resulting blob, and it downloaded my first video!
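The whole flow can be sketched in a few lines. This assumes webm-writer-js is loaded (it's Chrome-only because it relies on the browser encoding canvas frames as WebP), and `drawFrame` and `myCanvas` are stand-ins for the app's own seek-and-repaint step:

```javascript
// Sketch of the webm-writer-js flow: seek, repaint, addFrame, repeat.
const videoWriter = new WebMWriter({
  quality: 0.95,  // WebP quality used for each encoded frame
  frameRate: 30,  // frames per second of the output video
});

const fps = 30;
const durationMs = 2000;
for (let frame = 0; frame < (durationMs / 1000) * fps; frame++) {
  drawFrame((frame / fps) * 1000); // hypothetical: seek timeline, repaint canvas
  videoWriter.addFrame(myCanvas);  // grab the canvas contents as one frame
}

videoWriter.complete().then((blob) => {
  // Point a download link at the finished WebM blob.
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'animation.webm';
  a.click();
});
```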
I'm so happy with the result, and with all the JS projects and little snippets I was able to glue together to get this working!
Now, I'd love other people to use it and give me feedback. I'd like to keep developing this and make it something others understand and can benefit from!
All is not perfect in web video land. Here are the main issues:
- Rendering slows down quickly as you add images. To convert to an SVG that can be rendered to a canvas, image "src" attributes need to be converted to base64 data URIs. This is horribly inefficient, and I'm not sure there's a workaround (maybe browser extensions are an option). The same will apply once I add font options for text.
- There are plenty of bugs and minor UI issues that need work.
- The rendered layout is almost perfect, but text display is off by just a tiny bit.
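To make the cost in the first issue above concrete, here is a minimal sketch of the inlining step. In the app the bytes would come from fetching the image; `bytesToDataUrl` is a hypothetical helper name:

```javascript
// Inline raw image bytes as a base64 data URI so the SVG's <foreignObject>
// content has no external references.
function bytesToDataUrl(bytes, mimeType) {
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b); // byte-by-byte string build
  // base64 inflates the payload by roughly a third, and the resulting giant
  // string is re-parsed as part of the SVG on every rendered frame.
  return `data:${mimeType};base64,${btoa(binary)}`;
}
```

So every image contributes its full (inflated) contents to the markup that gets re-rendered each frame, which is why things slow down so quickly.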
In addition, I want to make rendering work in every browser, which means an ffmpeg backend to put it together.
The app needs audio support; it should let you select videos and add them to the animation; you should be able to group elements so one animation applies to all of them; and you should be able to move many animations at once (for example, if the video is moving too quickly).
Plenty of fun and painful work ahead!