13 November 2024, PM
Tam Wing Fan Innovation Wing Two, The University of Hong Kong, HONG KONG SAR
Ligang Liu is a professor at the University of Science and Technology of China and a recipient of the "Distinguished Young Scholar Fund" from the National Natural Science Foundation of China. He currently serves as the Director of the Anhui Provincial Key Laboratory of Graphics Computing and Perceptual Interaction and the Chair of the Geometric Design and Computing Technical Committee of the China Society for Industrial and Applied Mathematics (CSIAM GDC). His research focuses on computer graphics and CAD/CAE, with over 50 papers published in ACM Transactions on Graphics. His accolades include the Distinguished Award of Computer Graphics in China (2024) and the inaugural ACM SIGGRAPH Asia Test-of-Time Award (2023), among others.
Wolfgang Heidrich is currently a Hong Kong STEM Visiting Professor at HKU and a Professor of Computer Science and Electrical and Computer Engineering at KAUST, where he also served as director of the Visual Computing Center from 2014 to 2021. He joined KAUST in 2014 after 13 years as a faculty member in the computer science department at the University of British Columbia (UBC). Prof. Heidrich has been recognized as an AAIA Fellow, IEEE Fellow, and Eurographics Fellow, and has received the ACM SIGGRAPH Achievement Award and the Test-of-Time Award. He is best known for his pioneering work on high dynamic range (HDR) imaging and displays, which led to the creation of the technology behind Brightside Technologies. This technology was acquired by Dolby and subsequently evolved into a core component of Dolby Vision, a leading technical solution for commercial displays.
Hongbo Fu is a Professor in the Division of Emerging Interdisciplinary Areas, HKUST. Before that, he worked at the School of Creative Media, City University of Hong Kong, for over 15 years. He received postdoctoral research training at the Imager Lab, University of British Columbia, Canada, and the Department of Computer Graphics, Max-Planck-Institut für Informatik, Germany. He received a Ph.D. degree in computer science from HKUST in 2007 and a BS degree in information sciences from Peking University, China, in 2002. His primary research interests lie in computer graphics, human-computer interaction, and computer vision. His research has led to over 100 scientific publications, including 70+ papers in top graphics/vision journals and 30+ papers in top vision/HCI conferences. His recent work has received a Silver Medal at the Special Edition 2022 Inventions Geneva Evaluation Days, the Best Demo awards at the Emerging Technologies program of ACM SIGGRAPH Asia in 2013 and 2014, and the Best Paper awards from CAD/Graphics 2015 and UIST 2019.
Junjie Chen is a Research Assistant Professor in the Department of Real Estate and Construction at HKU. Before that, he was a postdoctoral fellow at HKU from 2020 to 2022. He obtained his PhD in hydraulic engineering from Tianjin University in 2020 and was a visiting researcher at the University of Tennessee, Knoxville, from 2018 to 2019. His research on “smart inspection using robotics, AI, and BIM” was recognized as a nationwide excellent doctoral dissertation in 2022 and was selected as an invited talk at the “AI and BIM” session of the Eastman Symposium in 2021. Dr. Chen has published more than 30 papers in international journals and conferences, including top journals such as Computer-Aided Civil and Infrastructure Engineering, Resources, Conservation and Recycling, Automation in Construction, and Building and Environment. He has co-authored a book and obtained five patents. He is a member of the American Society of Civil Engineers, the Institute of Electrical and Electronics Engineers, and the Chinese Hydraulic Engineering Society.
Ruizhen Hu is currently a Distinguished Professor in the College of Computer Science & Software Engineering at Shenzhen University and Deputy Director of the Visual Computing Research Center (VCC). She received her Ph.D. from the Department of Mathematics at Zhejiang University. Before that, she spent two years visiting Simon Fraser University, Canada. Her research interests encompass computer graphics and embodied AI, focusing on novel techniques for accurately modeling the 3D world and for strengthening the ability of embodied agents to interact with it. She has received several research awards, including the “Asia Graphics Young Researcher Award” and the “NSFC Excellent Young Scholar Fund”. She has served as an editorial board member of IEEE TVCG, IEEE Computer Graphics and Applications, The Visual Computer, and Computers and Graphics, as well as the program co-chair for SGP 2024, CVM 2023, and SMI 2020. She has also served seven times on the technical papers program committee for SIGGRAPH/SIGGRAPH Asia.
Taku Komura joined the University of Hong Kong in 2020. Before joining HKU, he worked at the University of Edinburgh (2006-2020), City University of Hong Kong (2002-2006), and RIKEN (2000-2002). He received his BSc, MSc, and PhD in Information Science from the University of Tokyo. His research has focused on data-driven character animation, physically-based character animation, crowd simulation, 3D modelling, cloth animation, anatomy-based modelling, and robotics. Recently, his main research interests have centred on physically-based animation and the application of machine learning techniques to animation synthesis. He received the Royal Society Industry Fellowship (2014) and the Google AR/VR Research Award (2017).
Weiwei Xu is currently a professor at the State Key Lab of CAD&CG, Zhejiang University. He was previously a Qianjiang Professor at Hangzhou Normal University and, from 2005 to 2012, a researcher in the Internet Graphics Group at Microsoft Research Asia; before that, he was a postdoctoral researcher at Ritsumeikan University in Japan for more than a year. He received a Ph.D. in Computer Graphics from Zhejiang University, Hangzhou, and B.S. and M.S. degrees in Computer Science from Hohai University in 1996 and 1999, respectively.
3D Gaussian Splatting (3DGS) has revolutionized radiance field reconstruction by achieving high-quality novel view synthesis at fast rendering speeds, using 3D Gaussian primitives to represent the radiance field. However, current 3DGS reconstruction methods may suffer from blurring, artefacts, and floaters, caused by the reconstruction of redundant and incorrect geometric structures in complex scenes. We attribute this issue to the unstable optimization of the Gaussian positions, which stems from the unequal treatment of the individual Gaussian parameters during optimization. We introduce a novel method that combines the benefits of the 3DGS representation with fluid simulation, overcoming the drawbacks of the original 3DGS optimization. Our key idea is to adapt the viscous term from fluid simulations to stabilize the optimization procedure, ensuring stable optimization of the Gaussians.
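To convey the flavor of this idea, here is a minimal, hypothetical sketch of a viscosity-damped update for Gaussian centers; the function name, the neighbor-averaging scheme, and all parameters are illustrative assumptions rather than the talk's actual formulation:

```python
import numpy as np

def viscous_position_step(pos, grad, nbr_idx, lr=1e-3, nu=0.5):
    """One illustrative optimization step for 3D Gaussian centers.

    pos:     (N, 3) Gaussian center positions
    grad:    (N, 3) gradient of the photometric loss w.r.t. the centers
    nbr_idx: (N, K) indices of each Gaussian's K nearest neighbors
    nu:      blending weight playing the role of a viscosity coefficient
    """
    raw_vel = -lr * grad                     # plain gradient-descent step
    nbr_vel = raw_vel[nbr_idx].mean(axis=1)  # average neighbor "velocity"
    # Viscosity-style damping: blend each Gaussian's update toward its
    # neighbors', diffusing the step the way the viscous term diffuses
    # momentum in a fluid solver and suppressing erratic position jumps.
    vel = (1.0 - nu) * raw_vel + nu * nbr_vel
    return pos + vel
```

With nu = 0, this reduces to plain gradient descent on the positions; larger nu couples each Gaussian's motion to its spatial neighborhood, which is the stabilizing effect the abstract attributes to the viscous term.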
Traditional game and film industries rely heavily on professional artists to produce 2D and 3D visual content. In contrast, emerging industries such as the metaverse and 3D printing demand digital content from individual users. With modern software, ordinary users can easily produce text documents, create simple drawings, build simple 3D models from primitives, take images/videos, and possibly edit them with pre-defined filters. However, creating photorealistic images from scratch, fine-grained image retouching (e.g., for body reshaping), detailed 3D modeling, vivid 3D animation, etc., often require extensive training with professional software and are time-consuming, even for skillful artists. Generative AI, exemplified by ChatGPT and Midjourney, has recently taken a big step forward and allows the easy generation of unique and high-quality images from text prompts. However, various problems, such as controllability and generation beyond images, remain to be solved. Beyond AI, recent advances in Augmented/Virtual Reality (AR/VR) software and hardware bring unique challenges and opportunities for content creation. In this talk, I will introduce my attempts to lower the barrier to content creation, making such tools more accessible to novice users. I will mainly focus on sketch-based content generation and content creation with AR/VR.
People have long been intrigued by a future in which robots co-exist side by side with human beings in the built environment. However, realizing this seemingly utopian scenario requires overcoming a major challenge: how to empower robots to effectively perceive the dynamic, unstructured, and complex environment inside a building. The Building Information Model/Modeling (BIM), initially proposed as a computer graphics tool for architectural design, has turned out to provide a digital representation of built facilities that lasts through the project lifecycle. In this talk, the presenter explores the potential of BIM as a readily available world model of the built environment to facilitate robot perception. Examples such as BIM-enabled indoor positioning, material recognition, and robot-building integrated design will be given. The presenter will conclude by sharing his vision of a fused symbiosis in which the robot and the building mutually benefit, in both a digital and a physical sense.
Crafting realistic 3D environments and animating diverse character activities are pivotal tasks within computer graphics, serving not only the traditional sectors of film animation and virtual reality but also the burgeoning field of robot learning. In this talk, I will present our recent endeavors in interaction planning and generation, bridging the virtual and real worlds. Our objective is to empower virtual agents and real robots to engage with their surroundings as humans do, accomplishing a multitude of tasks. I will outline the challenges we face and how we leverage a blend of traditional geometric computing and cutting-edge machine learning techniques to surmount them.
Simulating the deformation of soft and stiff materials is crucial for the design of garments and composite objects and for the control of robots. In this talk, I will present our recent simulation research. First, I will introduce our discovery of an analytical solution for decomposing the rotation and stretching components of the deformation gradient, which eliminates the need for numerical iterations. This advancement accelerates the computation of energies in constitutive models such as As-Rigid-As-Possible (ARAP). We extend this approach to anisotropic energies, enabling the simulation of garment deformation. Finally, I will discuss our recent work on accelerating the simulation of deformable objects using GPUs. Our method effectively handles both stiff and soft materials within a unified framework. Our research significantly accelerates the simulation of contacts and deformations while preserving precision, greatly benefiting design and robotics applications.
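As background for the quantities involved, the sketch below evaluates the ARAP energy density using the standard SVD-based polar decomposition; the SVD route is the conventional (numerically iterative) baseline, not the analytical, iteration-free decomposition presented in the talk:

```python
import numpy as np

def arap_energy(F, mu=1.0):
    """ARAP energy density mu * ||F - R||_F^2 for a deformation gradient F,
    where R is the rotation from the polar decomposition F = R S.
    Extracting R via SVD is the conventional baseline; the talk's
    analytical decomposition is a faster alternative not reproduced here."""
    U, _, Vt = np.linalg.svd(F)
    # Flip a singular vector if needed so that det(R) = +1, keeping R a
    # proper rotation rather than a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    return mu * np.sum((F - R) ** 2)

# Example: a small random deformation around the identity
F = np.eye(3) + 0.05 * np.random.randn(3, 3)
print(arap_energy(F))
```

Because this energy must be evaluated (and differentiated) at every element of a mesh in every solver iteration, replacing the SVD with a closed-form decomposition is where the reported speed-up comes from.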
End-to-end 3D reconstruction technology has significantly enhanced the automation of recovering 3D representations from posed images, driving the rapid development of image-based rendering and 3D scene reconstruction in recent years. This approach is powered by differentiable rendering and neural representation techniques, supporting virtual reality applications such as free-viewpoint rendering, volumetric video, and holographic conferencing. This talk will first introduce an end-to-end learning approach for unstructured lumigraph rendering, in which a per-view representation is automatically learned for ray interpolation. Second, we will present a Gaussian surfel representation developed to enhance the geometric accuracy of 3DGS.
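For context, the sketch below shows the classic hand-crafted blending heuristic of unstructured lumigraph rendering, which weights source views by the angular distance of their rays to the query ray; it is this kind of fixed heuristic that a learned per-view representation replaces, and the function name and parameters here are illustrative:

```python
import numpy as np

def ulr_blend_weights(query_dir, view_dirs, k=4):
    """Classic unstructured-lumigraph-style view blending (illustrative).

    query_dir: (3,) unit direction of the query ray
    view_dirs: (M, 3) unit directions of the source-view rays, with M > k
    Returns the indices of the k closest views and their blend weights.
    """
    ang = np.arccos(np.clip(view_dirs @ query_dir, -1.0, 1.0))
    order = np.argsort(ang)
    picked = order[:k]          # k angularly closest views
    thresh = ang[order[k]]      # (k+1)-th view sets an adaptive cutoff
    # Penalty-based weights: views aligned with the query ray get more
    # weight, falling smoothly to zero at the adaptive threshold.
    w = np.maximum(0.0, 1.0 - ang[picked] / max(thresh, 1e-8))
    return picked, w / (w.sum() + 1e-8)
```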