How 5G and AI are Powering Our Intelligent Future
The new generation of 5G + AI technology, together with the post-pandemic era, is driving the next wave of connected edge-computing SoCs. Fundamentally, the success of AI relies on large training datasets and powerful computing capability. Yet the rich user experiences AI enables are often accompanied by personal privacy and security concerns. To deliver a better user experience in terms of personal privacy, real-time response, and ubiquitous availability, the market strongly demands connected edge devices. In smartphones, mobile AI is evolving from face recognition and object detection toward image enhancement. In high-end TVs, AI functions are likewise evolving from scene detection and image segmentation toward pixel-level super resolution. This demand for greater computing power and resources is accelerating innovation in connected smart devices. From daily life at home and in the workplace to transportation, practical intelligence has become an irreversible trend.
In practical AI applications, deep neural network workloads must be integrated with multiple other computing processes at the same time, such as image processing, 3D graphics, and wireless transmission. This demand poses ever greater challenges to chip design, particularly in complex multiplexing, heat generated by each computing unit, and finite memory bandwidth. This presentation will discuss the new applications and opportunities brought by breakthrough 5G + AI technologies, as well as the new challenges they pose to chip design.
Dr. Lu is the Senior Director of the Computing and Artificial Intelligence Technology Group at MediaTek. He is responsible for ML/DL algorithm and tool development and deployment in phone, camera, tablet, and TV products. Applications include video and image object detection, picture-quality enhancement, and noise reduction.
Prior to MediaTek, Dr. Lu served as the General Manager of the Video IoT (iVoT) Business Unit at Novatek, where he was responsible for business and technology planning and execution. iVoT has been the leader in the dash-cam market for years. In 2017, iVoT developed the first SoC integrating a deep learning accelerator with a 4K-resolution ISP (image signal processor) and video codec for surveillance cameras. The SoC and its successors replaced discrete solutions and achieved commercial success.
Prior to Novatek, Dr. Lu founded Afatek, a fabless company developing RF, digital modulator, and demodulator chips. Afatek developed the first silicon integrating an RF front-end with a demodulator for digital TV in 2006, and subsequently developed the first silicon integrating an RF front-end with a multi-standard modulator for surveillance applications. Afatek was acquired by iTE in 2008.
Dr. Lu worked in Silicon Valley from 1997 to 2002. He joined Excess Bandwidth Corp., a startup founded by Stanford professors specializing in signal processing, where he was responsible for the communication front-end and circuit design. The startup delivered the best-performing symmetric high-speed DSL (SHDSL) modem prototype in six months, and subsequently developed an integrated chipset for SHDSL modems in twelve months. Excess Bandwidth Corp. was acquired by Virata Corp. (now Conexant) in 2000. Before Excess Bandwidth, Dr. Lu was a member of technical staff at Hewlett-Packard.
Dr. Lu received his M.S. and Ph.D. in Electrical Engineering from Stanford University. He led two projects in WDM MAN and LAN testbed development funded by Sprint and ARPA. He published more than 30 papers in the optical communication and photonic switching fields.
Faces & Emotional AI
This talk is about emotional AI: the machine learning and computer vision methods developed for various human-centric AI applications, and face-analysis technology in general.
Maja Pantic obtained her PhD degree in computer science in 2001 from Delft University of Technology, the Netherlands. Until 2005, she was an Assistant/Associate Professor at Delft University of Technology. In 2006, she joined the Department of Computing at Imperial College London, UK, where she is Professor of Affective & Behavioural Computing and Head of the iBUG group, working on machine analysis of human non-verbal behaviour. From April 2018 to April 2020, she was the Research Director of the Samsung AI Research Centre in Cambridge. In April 2020, she joined Facebook as an AI Scientific Research Lead in Facebook London.
Prof. Pantic is one of the world's leading experts in machine understanding of human behaviour, including vision-based detection, tracking, and analysis of behavioural cues such as facial expressions and body gestures, and multimodal analysis of behaviours such as laughter, social signals, and affective states. Prof. Pantic has received various awards for her work, including the BCS Roger Needham Award, given annually to a UK-based researcher for a distinguished research contribution in computer science, and the IAPR Maria Petrou Award, given biennially to a living female scientist for her contributions to the field of pattern recognition. She is a Fellow of the UK's Royal Academy of Engineering, an IEEE Fellow, and an IAPR Fellow.
TEDx CERN talk: https://www.youtube.com/watch?v=4QjZDUaDxQU
WEF 2016 talk: https://www.youtube.com/watch?v=ZHxsRpd0XjI&t=10s
Styles, Trends, and Influences from Large-Scale In-the-Wild Fashion Photos
The fashion domain is a magnet for computer vision. New vision problems are emerging in step with the fashion industry's rapid evolution towards an online, social, and personalized business. Style models, trend forecasting, and recommendation all require visual understanding with rich detail and subtlety. Importantly, not only can this visual understanding benefit individual users, but when analyzed across large-scale multi-modal data, it also can reveal how cultural factors and world events dynamically influence what people around the world wear.
I will present our work developing computer vision methods for fashion. To begin, we explore how to discover styles from Web photos, so as to optimize mix-and-match wardrobes, suggest minimal edits to make an outfit more fashionable, and recommend clothing that flatters diverse human body shapes. Next, turning to the world stage, we investigate fashion forecasting and influence. Learned directly from photos, our models forecast what styles will be popular in the future, while accounting for how trends propagate in space and time across 44 major world cities. Finally, building on this notion of fashion influence, we quantify which cultural factors (as captured by millions of news articles) most affect the clothes people choose to wear across a century of vintage photos.
Kristen Grauman is a Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist at Facebook AI Research (FAIR). Her research in computer vision and machine learning focuses on video, visual recognition, and embodied perception. Before joining UT-Austin in 2007, she received her Ph.D. at MIT. She is an IEEE Fellow, AAAI Fellow, Sloan Fellow, and recipient of the 2013 Computers and Thought Award. She was inducted into the UT Academy of Distinguished Teachers in 2017. She and her collaborators have been recognized with several Best Paper awards in computer vision, including a 2011 Marr Prize and a 2017 Helmholtz Prize (test-of-time award). She has served as an Associate Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and as a Program Chair of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2015 and Neural Information Processing Systems (NeurIPS) 2018.