Keynotes


Can Machines Learn to Generate 3D Shapes?

Hao (Richard) Zhang, Professor, Simon Fraser University, Canada.

Abstract
At heart, computer-aided design and graphics are both about synthesis and creation. Early successes have been obtained in training deep neural networks for speech and image synthesis, while similar attempts at learning generative models for 3D shapes have met with difficult challenges. In this talk, I will first go over how the sub-field of 3D shape modeling and synthesis in computer graphics has evolved, from early model-driven approaches to recent data-driven paradigms, and highlight the challenges we must tackle. I would argue that the ultimate goal of 3D shape generation is not merely for the shapes to look right; they need to serve their intended (e.g., functional) purpose with the right part connections, arrangements, and geometry. Hence, I advocate the use of structural representations of 3D shapes and show our latest work on training machines to learn one such representation and an ensuing generative model. Finally, I would like to venture into creative modeling, perhaps a new territory in machine intelligence: can machines learn to generate 3D shapes creatively?

Brief Biography
Hao (Richard) Zhang is a full professor in the School of Computing Science at Simon Fraser University (SFU), Canada, where he directs the graphics (GrUVi) lab. He obtained his Ph.D. from the Dynamic Graphics Project (DGP), University of Toronto, and his M.Math. and B.Math. degrees from the University of Waterloo, all in computer science. Richard's research is in computer graphics with a focus on geometry modeling, shape analysis, 3D content creation, and computational design and fabrication. He has published more than 100 papers on these topics. He is an editor-in-chief of Computer Graphics Forum and an associate editor of several other journals. He has served on the program committees of all major computer graphics conferences, including SIGGRAPH (and SIGGRAPH Asia), Eurographics, and the Symposium on Geometry Processing (SGP), among others, and served as the SIGGRAPH Asia 2014 course chair and a paper co-chair for SGP 2013 and Graphics Interface 2015. He received an NSERC DAS (Discovery Accelerator Supplement) Award in 2014, the Best Paper Award from SGP 2008, a Faculty of Applied Sciences Research Excellence Award at SFU in 2014, and a National Science Foundation of China (NSFC) Overseas, Hong Kong, and Macau Scholar Collaborative Research Award in 2015, and he is an IEEE Senior Member. For his university service, he received the SFU Dean of Graduate Studies Award for Excellence in Leadership in 2016. He is on sabbatical as a visiting professor at Stanford University in 2016-2017.
E-mail: haoz@cs.sfu.ca
Website: http://www.cs.sfu.ca/~haoz/.


Face Analysis for Affective Computing

Matti Pietikäinen, Professor, University of Oulu, Finland.

Abstract
Human faces contain a wealth of information, including identity, demographic characteristics, emotions, direction of attention, visual speech, and even invisible information such as micro-expressions and heart rate. This information can be used, for example, in developing systems for various applications of affective computing. In recent years, we have been investigating the methods needed for building such systems. The first part of this talk provides an introduction to the problem area and its potential applications. It then overviews our recent progress in face descriptors, face and facial expression recognition, pain analysis from facial expressions, group-level happiness analysis, micro-expression analysis, heart rate measurement from videos, visual speech analysis, and multimodal emotion analysis. Finally, some future challenges are discussed.

Brief Biography
Matti Pietikäinen received his Doctor of Science in Technology degree from the University of Oulu, Finland. He is currently a professor and senior research advisor at the Center for Machine Vision and Signal Analysis of the University of Oulu. From 1980 to 1981 and from 1984 to 1985, he visited the Computer Vision Laboratory at the University of Maryland. He has made pioneering contributions, e.g., to the local binary pattern (LBP) methodology, texture-based image and video analysis, and facial image analysis. He has authored over 340 refereed papers in international journals, books, and conferences. His research is frequently cited, and its results are used in various applications around the world. Dr. Pietikäinen was an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence, the IEEE Transactions on Information Forensics and Security, and the Pattern Recognition journal, and currently serves as an Associate Editor of the Image and Vision Computing journal. He was President of the Pattern Recognition Society of Finland from 1989 to 1992, and was named its Honorary Member in 2014. From 1989 to 2007 he served as a member of the Governing Board of the International Association for Pattern Recognition (IAPR), and he became one of the founding fellows of the IAPR in 1994. He is an IEEE Fellow for contributions to texture and facial image analysis for machine vision. In 2014, his research on LBP-based face description was awarded the Koenderink Prize for Fundamental Contributions in Computer Vision.
E-mail: mkp@ee.oulu.fi
Website: http://www.cse.oulu.fi/MattiPietikainen.

Fast Physics-Based Simulation of Anatomical Models

Ladislav Kavan, Assistant Professor, University of Utah, USA.

Abstract
Realistic modeling of the human body in computer animation and biomechanics requires us to simulate the mechanical behavior of soft tissues, such as muscles and fat. This is computationally expensive because of the complicated geometry and physics of anatomical structures. I will present our quasi-Newton methods for fast numerical simulation of deformable solids, starting from mass-spring systems and generalizing to Projective Dynamics and more general hyperelastic materials discretized using linear finite elements. These numerical techniques help us accelerate anatomically based simulations in projects such as "Computational Bodybuilding", where we simulate hypertrophy or atrophy of muscles and fat. Even more challenging is the corresponding inverse problem, where we are given surface 3D scans of the human body as input and the goal is to estimate the unobserved shapes and sizes of the bones, muscles, and fat tissues. I will conclude the talk with our recent work on anatomically based animation of the human face. Facial muscles pull on the skin and on each other to generate facial expressions. Physics-based animation of the face allows us to add the effects of external forces (gravity, contact, wind) and even to visualize the outcomes of facial surgery procedures.
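
As a point of reference for the local-global solvers mentioned above, the following Python/NumPy sketch shows one implicit time step of a mass-spring system in the Projective Dynamics style. It is only an illustrative simplification (lumped masses, no damping or pinned vertices, a dense solve instead of a prefactored sparse factorization), not the speaker's implementation, and the function and variable names are invented for this example.

    import numpy as np

    def mass_spring_step(x, v, springs, masses, h, f_ext, iters=10):
        """One implicit step of a mass-spring system via local-global iteration.
        x, v, f_ext: (n, d) positions, velocities, external forces;
        masses: (n,) lumped masses; springs: list of (i, j, rest_length, stiffness).
        Simplified sketch: dense solve, no damping, no constraints."""
        # Inertial prediction y = x + h*v + h^2 * M^{-1} * f_ext.
        y = x + h * v + (h * h) * (f_ext / masses[:, None])

        # System matrix A = M/h^2 + L (spring "Laplacian"); it depends only on
        # topology and stiffness, so in practice it would be prefactored once.
        A = np.diag(masses / (h * h))
        for i, j, _, k in springs:
            A[i, i] += k; A[j, j] += k
            A[i, j] -= k; A[j, i] -= k

        x_new = y.copy()
        for _ in range(iters):
            # Local step: project every spring vector onto its rest length.
            b = (masses[:, None] / (h * h)) * y
            for i, j, rest, k in springs:
                p = x_new[i] - x_new[j]
                d = rest * p / (np.linalg.norm(p) + 1e-12)
                b[i] += k * d
                b[j] -= k * d
            # Global step: a single linear solve updates all vertices at once.
            x_new = np.linalg.solve(A, b)

        v_new = (x_new - x) / h
        return x_new, v_new

Because the global matrix is fixed, each frame reduces to cheap per-spring projections plus solves against a single prefactorization, which is the main source of speed in this family of methods; the quasi-Newton view discussed in the talk generalizes this idea to broader classes of hyperelastic materials.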

Brief Biography
Ladislav Kavan is an assistant professor of computer science at the University of Utah. Prior to joining Utah, he was an assistant professor at the University of Pennsylvania and a research scientist at Disney Interactive Studios. Ladislav's research focuses on interactive computer graphics, physics-based animation, and geometry processing. His dual quaternion skinning algorithm has become a popular method to display animated 3D characters. More recently, he has been exploring numerical methods in physics-based animation, with applications related to simulating the human body and the face. His goal is to combine computer graphics with biomechanics and medicine. Ladislav is a member of the ACM SIGGRAPH community and serves as an Associate Editor for ACM Transactions on Graphics.
E-mail: ladislav.kavan@gmail.com
Website: https://www.cs.utah.edu/~ladislav/.


Awareness and Influence of Human Surrogates in Augmented Reality

Gregory F. Welch, Professor, University of Central Florida, USA.

Abstract
A surrogate or "stand-in" for a human can be realized by a real human such as an actor, or a technological human such as a virtual human, including "avatars" (controlled by humans) or "agents" (controlled by computer programs). While virtual humans can afford richness and flexibility, they are typically unaware of their surroundings and unable to affect them. To fully realize the potential of Augmented Reality (AR), virtual objects and people should be seamlessly integrated into the physical reality all around us, exhibiting both awareness of and the ability to affect real events, objects, and people. In this talk I will share some examples of human surrogates, including both virtual and physical-virtual surrogates, some research demonstrating the effects of surrogate awareness, and ideas for the future of human surrogates in AR.

Brief Biography
Gregory Welch is a Professor and the Florida Hospital Endowed Chair in Healthcare Simulation at the University of Central Florida College of Nursing. A computer scientist and engineer, he also has appointments in the College of Engineering and Computer Science and in the Institute for Simulation & Training. Welch earned his B.S. in Electrical Engineering Technology from Purdue University (with highest distinction), and his M.S. and Ph.D. in Computer Science from the University of North Carolina at Chapel Hill (UNC). Previously, he was a research professor at UNC. He also worked at NASA's Jet Propulsion Laboratory and at Northrop Grumman's Defense Systems Division. His research interests include human-computer interaction, human motion tracking, virtual and augmented reality, computer graphics and vision, and training-related applications.
E-mail: welch@ucf.edu
Website: http://www.ist.ucf.edu/people/gwelch/.

Contact us

Please email inquiries concerning CAD/Graphics to:
Email: cadcg2017@csu.edu.cn or cadcg2017@gmail.com
