Lecture by Prof. Irfan Essa

Title: Computational Video: Methods for Video Analysis and Enhancement

Speaker: Prof. Irfan Essa
Date and Time: 14:00–15:00, Monday, December 8, 2014
Venue: Lecture Hall, Second Floor, Office Building, Software Campus
Host: Prof. Baoquan Chen
 
Abstract:
In this talk, I will start with a discussion of the pervasiveness of image and video content, and how such content is growing with the ubiquity of cameras. I will use this to motivate the need for better tools for the analysis and enhancement of video content. I will then give an overview of some of our ongoing work on temporal modeling of video for analysis and synthesis, leading up to our current work on two main projects: (1) our approach to video stabilization, currently implemented and running on YouTube, and its extensions; and (2) a robust and scalable method for video segmentation.
 
I will describe, in some detail, our video stabilization method, which generates stabilized videos and is in wide use. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. It also supports removal of the rolling shutter distortions common in modern CMOS cameras, which capture each frame one scanline at a time, resulting in non-rigid image distortions such as shear and wobble. Our method does not rely on a priori knowledge and works on video from any camera or on legacy footage. I will showcase examples of this approach and also discuss how this method has been launched and is running on YouTube, serving millions of users.
 
Then I will describe an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. This hierarchical approach generates high-quality segmentations, and we demonstrate its use as users interact with the video, enabling efficient annotation of objects within it. I will also show some recent work on how this segmentation and annotation can be used for dynamic scene understanding.
 
Time permitting, I will also talk about Georgia Tech’s Online Master’s in Computer Science program, offered entirely online.
 
Bio:
Irfan Essa is a Professor in the School of Interactive Computing (IC) and Associate Dean in the College of Computing (CoC) at the Georgia Institute of Technology (Georgia Tech) in Atlanta, Georgia, USA. Professor Essa works in the areas of Computer Vision, Computer Graphics, Computational Perception, Robotics and Computer Animation, Machine Learning, and Social Computing, with potential impact on research in Video Analysis and Production (e.g., Computational Photography & Video, Image-based Modeling and Rendering), Human-Computer Interaction, Artificial Intelligence, Computational Behavioral/Social Sciences, and Computational Journalism. He has published over 150 scholarly articles in leading journals and conference venues on these topics, and several of his papers have won best paper awards. He has been awarded the NSF CAREER award and was elected to the grade of IEEE Fellow. He has held extended research consulting positions with Disney Research and Google Research and was also an Adjunct Faculty Member at Carnegie Mellon’s Robotics Institute. He joined the Georgia Tech faculty in 1996 after earning his M.S. (1990) and Ph.D. (1994) at, and holding a research faculty position with, the Massachusetts Institute of Technology (Media Lab) from 1988 to 1996.