Sven Olsen's Homepage


Bio and Contact Information

Email:

Current Project: stars-in-shadow.com

Hi! This page summarizes my academic research work. I haven't really been an "active researcher" since about 2011 -- most of my time is now spent working on a computer game called Stars in Shadow. But I do still have a strong interest in computer graphics.

In 2010 I completed my doctorate at the University of Victoria. My undergraduate degree is from Swarthmore College, and I did my master's at Northwestern University.

The above image is output from a video filter that we designed in 2006/2007. This filter is described in the paper Real-Time Video Abstraction.

Download CV. (Now about 5 years out of date.)


XDoG: An eXtended Difference-of-Gaussians Compendium (C&G, 2012)
Abstract: Recent extensions to the standard Difference-of-Gaussians (DoG) edge detection operator have rendered it less susceptible to noise and increased its aesthetic appeal. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper offers a detailed review of the DoG operator and its extensions, highlighting useful relationships to other image processing techniques. It also presents many new results spanning a variety of styles, including pencil-shading, pastel, hatching, and woodcut. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, all of which are obtained using an extended DoG formulation, or slight modifications thereof. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than that of systems dedicated to a single style.

Accepted for publication in Computers & Graphics. Preprint.

Additional notes on the filter parameters.
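
For anyone who wants to experiment with the filter, here is a rough Python sketch of the extended DoG response and its soft thresholding as they are commonly written. The parameter defaults are illustrative placeholders, not the settings discussed in the paper or the notes above.

    # A minimal XDoG sketch, not the paper's reference implementation.
    # sigma, k, tau, eps, and phi follow the usual XDoG notation; the
    # default values are illustrative guesses.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def xdog(gray, sigma=1.0, k=1.6, tau=0.98, eps=0.1, phi=10.0):
        """gray: 2D float array with values in [0, 1]."""
        g1 = gaussian_filter(gray, sigma)
        g2 = gaussian_filter(gray, k * sigma)
        d = g1 - tau * g2                  # extended difference of Gaussians
        # Soft threshold: white above eps, smooth tanh falloff below it.
        out = np.where(d >= eps, 1.0, 1.0 + np.tanh(phi * (d - eps)))
        return np.clip(out, 0.0, 1.0)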


Image Simplification and Vectorization (NPAR, 2011)
Abstract: We present an unsupervised system which takes digital photographs as input, and generates simplified, stylized vector data as output. The three component parts of our system are image-space stylization, edge tracing, and edge-based image reconstruction. The design of each of these components is specialized, relative to their state-of-the-art equivalents, in order to improve their effectiveness when used in such a combined stylization / vectorization pipeline. We demonstrate that the vector data generated by our system is often both an effective visual simplification of the input photographs and an effective simplification in the sense of memory efficiency, as judged relative to state-of-the-art lossy image compression formats.

Download Paper. Supplementary Material.
My thesis (on the same topic).
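
As a very loose illustration of the edge-tracing stage only, the fragment below extracts polylines along the zero crossings of a DoG response using off-the-shelf marching squares from skimage. This is not the tracer used in the paper, and the stylization and reconstruction stages are not sketched; sigma and k are arbitrary choices.

    # A loose stand-in for the edge-tracing stage: trace polylines along the
    # zero crossings of a DoG response using skimage's marching squares.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import measure

    def trace_dog_edges(gray, sigma=1.0, k=1.6):
        """gray: 2D float array; returns a list of (N, 2) polyline arrays."""
        d = gaussian_filter(gray, sigma) - gaussian_filter(gray, k * sigma)
        return measure.find_contours(d, 0.0)   # edges = DoG zero crossings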


Recovering Color from Black and White Photographs (ICCP, 2010)
Abstract: This paper presents a mathematical framework for recovering color information from multiple photographic sources. Such sources could include either black and white negatives or photographic plates. This paper's main technical contribution is the use of Bayesian analysis to calculate the most likely color at any sample point, along with an expected error value. We explore the limits of our approach using hyperspectral datasets, and show that in some cases, it may be possible to recover the bulk of the color information in an image from as few as two black and white sources.

Download Paper. Supplementary Material.
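
To give a feel for the basic idea (though not the paper's actual model), here is a generic linear-Gaussian MAP sketch: each black and white source measures a known weighted sum of the unknown color channels plus noise, and a Gaussian prior over colors yields both a most likely color and a posterior covariance. The sensitivity rows, prior, and noise level below are made up for illustration.

    # A generic linear-Gaussian MAP sketch of the recovery idea, not the
    # paper's model.  Each grayscale source i measures m_i = s_i . c + noise,
    # where s_i is that emulsion's (assumed known) RGB sensitivity and c is
    # the unknown color.  All numbers below are hypothetical.
    import numpy as np

    S = np.array([[0.30, 0.59, 0.11],    # hypothetical panchromatic response
                  [0.05, 0.10, 0.85]])   # hypothetical blue-sensitive plate
    mu = np.full(3, 0.5)                 # prior mean color
    Sigma = 0.08 * np.eye(3)             # prior covariance over colors
    noise = 1e-3 * np.eye(2)             # measurement noise covariance
    m = np.array([0.42, 0.20])           # the two grayscale observations

    K = Sigma @ S.T @ np.linalg.inv(S @ Sigma @ S.T + noise)
    c_map = mu + K @ (m - S @ mu)        # most likely color
    post_cov = Sigma - K @ S @ Sigma     # expected-error (posterior) covariance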


Real-Time Video Abstraction (ACM SIGGRAPH, 2006)
Abstract: We present an automatic, real-time video and image abstraction framework that abstracts imagery by modifying the contrast of visually important features, namely luminance and color opponency. We reduce contrast in low-contrast regions using an approximation to anisotropic diffusion, and artificially increase contrast in higher contrast regions with difference-of-Gaussian edges. The abstraction step is extensible and allows for artistic or data-driven control. Abstracted images can optionally be stylized using soft color quantization to create cartoon-like effects with good temporal coherence. Our framework design is highly parallel, allowing for a GPU-based, real-time implementation. We evaluate the effectiveness of our abstraction framework with a user-study and find that participants are faster at naming abstracted faces of known persons compared to photographs. Participants are also better at remembering abstracted images of arbitrary scenes in a memory task.

Download Paper. Project Page. Watch Video. Bibtex.
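
The soft quantization step can be sketched as follows: snap each luminance value to its nearest bin, then blend toward that bin with a tanh ramp so transitions stay smooth (which helps temporal coherence). The bin count and sharpness below are placeholder values, not the settings from the paper.

    # A minimal sketch of soft luminance quantization in the spirit of the
    # "soft color quantization" step above; nbins and sharpness are
    # placeholder parameters, not the paper's settings.
    import numpy as np

    def soft_quantize(lum, nbins=8, sharpness=10.0):
        """lum: 2D array of luminance values in [0, 1]."""
        step = 1.0 / nbins
        nearest = np.round(lum / step) * step               # hard bin level
        return nearest + (step / 2.0) * np.tanh(sharpness * (lum - nearest))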


Color2Gray (ACM SIGGRAPH, 2005)
Abstract: Visually important image features often disappear when color images are converted to grayscale. The algorithm introduced here reduces such losses by attempting to preserve the salient features of the color image. The Color2Gray algorithm is a 3-step process: 1) convert RGB inputs to a perceptually uniform CIE L*a*b* color space, 2) use chrominance and luminance differences to create grayscale target differences between nearby image pixels, and 3) solve an optimization problem designed to selectively modulate the grayscale representation as a function of the chroma variation of the source image. The Color2Gray results offer viewers salient information missing from previous grayscale image creation methods.

Download Paper. Project Page. Source Code and Implementation Notes. Bibtex.
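
The final optimization step has a simple closed form for small images. The toy sketch below takes the target differences delta as given (building delta from the L*a*b* chrominance and luminance differences is what the paper and the linked source code cover) and solves the least-squares problem over all pixel pairs.

    # A toy sketch of the optimization step only: given an antisymmetric
    # matrix delta of target differences between all pixel pairs of a small
    # image, find grayscale values g minimizing sum_ij ((g_i - g_j) - delta_ij)^2.
    # How delta is computed from the color differences is omitted here.
    import numpy as np

    def solve_grayscale(delta, mean_lum):
        """delta: (n, n) antisymmetric target differences; returns g of length n."""
        n = delta.shape[0]
        # Setting the objective's gradient to zero gives, for each pixel i:
        #   n * g_i - sum_j g_j = sum_j delta[i, j]
        A = n * np.eye(n) - np.ones((n, n))
        b = delta.sum(axis=1)
        g, *_ = np.linalg.lstsq(A, b, rcond=None)  # A is singular: g is only
        return g - g.mean() + mean_lum             # defined up to an additive
                                                   # constant, so pin its mean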


Interactive 3D Fluid Jet Painting (NPAR, 2006)
Abstract: We present an interactive system which allows users to create abstract paintings in the style of Jackson Pollock using three-dimensional viscous fluid jets. Pollock's paintings were created by using streams of household paint to make guided, semi-random patterns on his canvas. Our fluid jet model consists of two coupled simulations: a Navier-Stokes solver for an axisymmetric fluid column and a linked-mass system for tracking the three-dimensional motion of the jet's axis line. The paint trails left by the jets are represented using implicit surfaces. Our system also includes an algorithm for generating the splatter patterns created by the impacts of high-speed fluid drops. We allow users to analyze the fractal properties of the images they create, comparing them to those known to exist in Pollock's own paintings.

Download Paper. Project Page. Download Video. Bibtex.
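
The linked-mass part of the model can be pictured as an ordinary mass-spring chain. The sketch below steps such a chain with semi-implicit Euler and pins its first mass at the nozzle; it is only a loose stand-in, since the actual system couples the chain to an axisymmetric Navier-Stokes solver, and the stiffness, damping, and rest-length values here are arbitrary.

    # A generic mass-spring chain stepped with semi-implicit Euler, as a
    # loose stand-in for the linked-mass system tracking the jet's axis line.
    # The coupled Navier-Stokes solver is not sketched; all constants are
    # arbitrary illustrative values, and unit masses are assumed.
    import numpy as np

    def step_chain(pos, vel, dt=1e-3, rest=0.01, k=500.0, damping=0.2,
                   gravity=np.array([0.0, -9.8, 0.0])):
        """pos, vel: (n, 3) arrays for n linked masses; returns updated copies."""
        force = np.tile(gravity, (len(pos), 1)) - damping * vel
        seg = pos[1:] - pos[:-1]                  # neighbor-to-neighbor vectors
        length = np.linalg.norm(seg, axis=1, keepdims=True)
        spring = k * (length - rest) * seg / np.maximum(length, 1e-9)
        force[:-1] += spring                      # pull toward the next mass
        force[1:] -= spring                       # equal and opposite reaction
        vel = vel + dt * force
        vel[0] = 0.0                              # pin the first mass (nozzle)
        pos = pos + dt * vel
        return pos, vel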


Interactive Vector Fields for Painterly Rendering (Graphics Interface, 2005)
Abstract: We present techniques for generating and manipulating vector fields for use in the creation of painterly images and animations. Our aim is to enable casual users to create results evocative of expressionistic art. Rather than defining stroke alignment fields globally, we divide input images into regions using a colorspace clustering algorithm. Users interactively assign characteristic brush stroke alignment fields and stroke rendering parameters to each region. By combining vortex dynamics and semi-Lagrangian fluid simulation we are able to create stable, easily controlled vector fields. In addition to fluid simulations, users can align strokes in a given region using more conventional field models such as smoothed gradient fields and optical flow, or hybrid fields that combine the desirable features of fluid simulations and smoothed gradient information.

Download Paper. Bibtex.
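
For a sense of how the fluid-based alignment fields evolve, the fragment below performs one semi-Lagrangian advection step of a 2D vector field through itself: each grid sample is traced backwards along the flow and the field is re-sampled there with bilinear interpolation. Vortex seeding, region segmentation, and the hybrid fields are not sketched.

    # One semi-Lagrangian advection step for a 2D vector field, as a loose
    # illustration of the fluid-based stroke-alignment fields; u and v hold
    # the field's x and y components on a regular grid.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def advect(u, v, dt=0.5):
        h, w = u.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        back = [yy - dt * v, xx - dt * u]          # trace samples backwards
        u_new = map_coordinates(u, back, order=1, mode='nearest')
        v_new = map_coordinates(v, back, order=1, mode='nearest')
        return u_new, v_new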