Nominally, this page is a discussion of the algorithms that I'm using to create video-based renderings. However, its TRUE purpose is to give me an excuse to put up some of the neat videos that I've created in the course of developing this app.
Here's the basic video that I've been testing my algorithms on. The clip was taken quickly with a friend's digital camera, and I haven't had time to replace it with something slicker. If you have a video clip that might make for good NPR, and you're willing to allow me to present it with my Siggraph poster, send me a copy -- it will be a little embarrassing if I'm still using this clip when I get to the conference.
The key part of rendering video is stroke density control. It should be noted that I'm not actually using the quality control algorithms provided with Triangle (for some reason the object-mode version of the quality control algorithms won't work for me), so instead I just fairly naively insert new strokes at the circumcenter of every triangle violating the max-area rule, and that seems to work well enough for my purposes. Here's a video showing stroke positions advected by the optical flow field -- I think the aggregate effects of the Cinepak compression and the dense, alpha-mapped stroke points look pretty cool. Here's the same video with a fluid field controlling stroke motion in the background region. And, because the high point density does tend to obscure the behavior of individual strokes, here's a version with much lower point density.
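The naive insertion scheme described above can be sketched in a few lines: for each triangle whose area exceeds the maximum, compute its circumcenter and add a stroke there (the mesh would then be retriangulated). This is a minimal illustration, not the actual implementation; the function names are mine.

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle (a, b, c), each a pair (x, y)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def area(a, b, c):
    """Unsigned area of a triangle."""
    ax, ay = a; bx, by = b; cx, cy = c
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

def new_stroke_positions(triangles, max_area):
    """Circumcenters of all triangles violating the max-area rule;
    these become new stroke positions before retriangulating."""
    return [circumcenter(*t) for t in triangles if area(*t) > max_area]
```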
Dorin's feature-based clustering algorithms are great for still images, but I really should be using something else when working with videos -- I could probably get much better region tracking if I used something sensitive to optical flow and informed by the regions of the previous frame. Still, the results with the simple color segmentation aren't really that bad. There is clearly a real problem presented by Jonathan's striped shirt, and there are some annoying "islands" in the background region. I can get rid of the islands by increasing Ncon, but in the process I also destroy the speaker regions...
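One way to think about the island/Ncon trade-off is as a minimum-component-size filter on the segmentation's label map: any connected component smaller than the threshold gets absorbed into its most common neighboring region. This sketch (my own illustration, with min_size playing roughly the role of Ncon) shows why raising the threshold kills small-but-legitimate regions along with the islands.

```python
from collections import deque

def remove_islands(labels, min_size):
    """labels: 2D list of region ids. Reassigns any 4-connected
    component smaller than min_size to its most common neighboring
    label. Hypothetical sketch; min_size plays the role of Ncon."""
    h, w = len(labels), len(labels[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue
            lab = labels[sy][sx]
            comp, border = [], []
            q = deque([(sy, sx)]); seen[sy][sx] = True
            while q:  # flood-fill one component
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        if labels[ny][nx] == lab:
                            if not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                        else:
                            border.append(labels[ny][nx])
            if len(comp) < min_size and border:
                # absorb the island into its most common neighbor
                new = max(set(border), key=border.count)
                for y, x in comp:
                    labels[y][x] = new
    return labels
```

With a high min_size, a small speaker region bordered mostly by background would be absorbed just like a true island, which is exactly the failure mode above.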
Here are videos showing the behavior of the thinplate and velocity fields. Notice that the problems segmenting the striped shirt have left us with a really ugly thinplate field. Also notice how the thinplate field in the background region jumps around in the areas far from the region borders, which means that strokes rendered there without a fluid field will suffer from a fair amount of discontinuity between frames. Using fluid fields gets rid of the continuity problem, but if I wanted to stick with thinplate fields for a big empty region like this, I would probably insert some control points in the empty areas to ensure a bit more continuity.
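For reference, evaluating a scalar thinplate field at a point is just an affine part plus a sum of radial kernels U(r) = r^2 log r centered on the control points. This is a generic evaluation sketch (the weights and affine coefficients would come from solving the standard thin-plate interpolation system; the names are mine), and it makes clear why sparse control points leave big regions dominated by the extrapolated affine term.

```python
import math

def tps_kernel(r):
    """Thin-plate radial basis U(r) = r^2 log r, with U(0) = 0."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def tps_eval(x, y, centers, weights, affine):
    """Evaluate a scalar thinplate field at (x, y).
    centers: control-point positions; weights: per-center coefficients;
    affine: (a0, a1, a2) for the affine part a0 + a1*x + a2*y."""
    a0, a1, a2 = affine
    val = a0 + a1 * x + a2 * y
    for (cx, cy), w in zip(centers, weights):
        val += w * tps_kernel(math.hypot(x - cx, y - cy))
    return val
```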
The final issue to worry about is stroke color in the fluid region. Because the velocity field is not aligned with the contour vectors at region boundaries, using the underlying image to color strokes in the fluid region leads to some undesirable effects (the wacky stroke behavior around Jonathan's head). You can get around this by using the region color, rather than the image color, to determine the base stroke color. However, doing that highlights the "island" problems with the region segmentation. An approach that seems to give you the best of both worlds is to use a hybrid fluid/tps field for stroke alignment and advection in the background region. For example, here the background field is calculated in the following way:
Let m = the minimum distance between (x, y) and any clustered gradient position.
f(x, y) = s * vel(x, y) + t * tps(x, y)
(in this video, t_min = 0 and c = 4).
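A sketch of the blend above, under one loudly flagged assumption: the page gives the blend f = s*vel + t*tps and the constants t_min and c, but not the exact falloff, so the exponential weight t = max(t_min, exp(-m/c)) with s = 1 - t is my guess at a plausible form, chosen so that strokes near clustered gradient positions follow the tps field and strokes far away follow the fluid velocity.

```python
import math

def hybrid_field(x, y, grad_points, vel, tps, t_min=0.0, c=4.0):
    """Blend fluid velocity and thinplate fields by distance to the
    nearest clustered gradient position. The falloff
    t = max(t_min, exp(-m / c)) is an assumption, not the page's
    stated formula; only f = s*vel + t*tps, t_min = 0, c = 4 are given."""
    m = min(math.hypot(x - gx, y - gy) for gx, gy in grad_points)
    t = max(t_min, math.exp(-m / c))  # tps weight: 1 at a gradient point
    s = 1.0 - t                       # velocity weight
    vx, vy = vel(x, y)
    tx, ty = tps(x, y)
    return (s * vx + t * tx, s * vy + t * ty)
```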
Here's a video showing the clustered gradient points and relative velocity/tps contributions for the hybrid field (black means all TPS, white means all velocity).