Seminar by Prof. Connelly Barnes

Title: Patch-based methods, including PatchTable, which permits efficient patch queries for large datasets
Date and Time: 9:00pm - 10:00pm, Tuesday, August 18, 2015
Venue: Room 146 Research Building

First, I will present a brief summary of different patch-based methods for image synthesis that I have worked on over the years, including PatchMatch, Image Melding, and "Synthesis of Complex Image Appearance from Limited Exemplars" (presented at SIGGRAPH this summer).

Next, I will go into depth on the work "PatchTable: efficient patch queries for large datasets and applications." The abstract is below.

The PatchTable paper presents a data structure that reduces approximate nearest neighbor query times for image patches in large datasets. Previous work in texture synthesis has demonstrated real-time synthesis from small exemplar textures. However, high performance has proved elusive for modern patch-based optimization techniques, which frequently use many exemplar images in the tens of megapixels or above. Our new algorithm, PatchTable, offloads as much of the computation as possible to a precomputation stage that takes modest time, so patch queries can be as efficient as possible. There are three key insights behind our algorithm: (1) a lookup table similar to locality sensitive hashing can be precomputed and used to seed sufficiently good initial patch correspondences during querying, (2) missing entries in the table can be filled during precomputation with our fast Voronoi transform, and (3) the initially seeded correspondences can be improved with a precomputed k-nearest neighbors mapping. We show experimentally that this accelerates the patch query operation by up to 9x over k-coherence, up to 12x over TreeCANN, and up to 200x over PatchMatch. Our fast algorithm allows us to explore efficient and practical imaging and computational photography applications. We show results for artistic video stylization, light field super-resolution, and multi-image inpainting.
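To make the query pipeline from the abstract concrete, the following is a minimal toy sketch of the two online steps: seeding a correspondence from a precomputed lookup table (conceptually similar to locality-sensitive hashing), then refining the seed with a precomputed k-nearest-neighbors mapping. All names, shapes, and the toy descriptor here are illustrative assumptions, not the paper's actual data structures or API; the table densification (the fast Voronoi transform) is assumed to have happened offline.

```python
import numpy as np

def query_patch(patch, table, exemplar_patches, knn, bins_per_dim=16):
    """Toy approximate nearest neighbor lookup for one query patch.

    patch            : flat array of patch values in [0, 1)
    table            : precomputed integer array mapping a quantized
                       low-dimensional descriptor to an exemplar patch index
                       (assumed densified offline, e.g. via a Voronoi transform)
    exemplar_patches : (n, d) array of exemplar patches
    knn              : (n, k) precomputed k-NN mapping over exemplar patches
    """
    # (1) Seed a correspondence from the lookup table: quantize a
    #     low-dimensional descriptor of the patch into table cells.
    desc = patch.reshape(-1)[: table.ndim]  # toy descriptor: first few values
    cell = tuple(np.clip((desc * bins_per_dim).astype(int), 0, bins_per_dim - 1))
    best = table[cell]
    best_dist = np.sum((exemplar_patches[best] - patch) ** 2)

    # (2) Improve the seed using the precomputed k-NN mapping:
    #     test the seed's nearest neighbors in patch appearance space.
    for cand in knn[best]:
        d = np.sum((exemplar_patches[cand] - patch) ** 2)
        if d < best_dist:
            best, best_dist = cand, d
    return best, best_dist
```

The design point this illustrates is the division of labor the abstract describes: everything expensive (building and densifying the table, computing the k-NN mapping) happens once offline, so each online query is just one table read plus a handful of distance checks.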

I am an assistant professor of computer science at the University of Virginia. My research focuses on the creation and manipulation of images and video, and I am now also interested in 3D printing. I am particularly interested in by-example algorithms, machine learning in graphics, and making new creative tools that are inspired by the ways artists work.