Forum on Data Science and AI (DSAI)

https://www.cityu.edu.hk/sdsc_web/data-science-and-ai-2022/

Featured Speakers

John E. Hopcroft
Turing Award (1986)
Cornell University, USA

John E. Hopcroft is the IBM Professor of Engineering and Applied Mathematics in Computer Science at Cornell University. From January 1994 until June 2001, he was the Joseph Silbert Dean of Engineering. After receiving both his M.S. (1962) and Ph.D. (1964) in electrical engineering from Stanford University, he spent three years on the faculty of Princeton University. He joined the Cornell faculty in 1967, was named professor in 1972 and the Joseph C. Ford Professor of Computer Science in 1985. He served as chairman of the Department of Computer Science from 1987 to 1992 and was the associate dean for college affairs in 1993. An undergraduate alumnus of Seattle University, Hopcroft was honored with a Doctor of Humanities Degree, Honoris Causa, in 1990.

Hopcroft's research centers on theoretical aspects of computing, especially analysis of algorithms, automata theory, and graph algorithms. He has coauthored four books on formal languages and algorithms with Jeffrey D. Ullman and Alfred V. Aho. His most recent work is on the study of information capture and access.

He was honored with the A. M. Turing Award in 1986. He is a member of the National Academy of Sciences (NAS) and the National Academy of Engineering (NAE), a foreign member of the Chinese Academy of Sciences, and a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), and the Association for Computing Machinery (ACM). In 1992, he was appointed by President Bush to the National Science Board (NSB), which oversees the National Science Foundation (NSF), and served through May 1998. From 1995 to 1998, Hopcroft served on the National Research Council's Commission on Physical Sciences, Mathematics, and Applications.

In addition to these appointments, Hopcroft serves as a member of the SIAM financial management committee, IIIT New Delhi advisory board, Microsoft's technical advisory board for research Asia, and the Engineering Advisory Board, Seattle University.

Math for the Big Data Revolution

The size of data has become enormous. One needs significant mathematical tools to process and abstract information from big data collections. We are living in an information revolution in which processing larger and larger data sets will become common. As the size of data sets increases, more subtle information can be extracted. This talk will illustrate the mathematical background needed to be successful in the information age.

Kai-Fu Lee
Chairman and CEO, Sinovation Ventures
President, Sinovation Ventures Artificial Intelligence Institute

Kai-Fu Lee is the Chairman and CEO of Sinovation Ventures (www.sinovationventures.com) and President of Sinovation Ventures' Artificial Intelligence Institute. Sinovation Ventures, which manages US$3 billion in dual-currency investment funds, is a leading venture capital firm focused on developing the next generation of deep-tech companies in China. Prior to founding Sinovation in 2009, Dr. Lee was the President of Google China and held senior executive positions at Microsoft, SGI, and Apple. Dr. Lee received his bachelor's degree in computer science from Columbia University and his Ph.D. from Carnegie Mellon University, as well as honorary doctorates from both Carnegie Mellon and the City University of Hong Kong. He is the Co-Chair of the Artificial Intelligence Council for the World Economic Forum Center for the Fourth Industrial Revolution, a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), was named to the TIME 100 in 2013 and the WIRED 25 Icons, and is followed by over 50 million people on social media.

In the field of artificial intelligence, Dr. Lee built one of the first game-playing programs to defeat a world champion (1988, Othello), as well as the world's first large-vocabulary, speaker-independent continuous speech recognition system. Dr. Lee founded Microsoft Research China, later renamed Microsoft Research Asia, which MIT Technology Review named the hottest research lab. While with Apple, Dr. Lee led AI projects in speech and natural language that were featured on ABC Television's Good Morning America and the front page of the Wall Street Journal. He has authored 10 U.S. patents and more than 100 journal and conference papers. Altogether, Dr. Lee has worked in artificial intelligence research, development, and investment for more than 30 years. His New York Times and Wall Street Journal bestselling book AI Superpowers: China, Silicon Valley, and the New World Order (aisuperpowers.com) discusses US-China co-leadership in the age of AI as well as the broader societal impacts brought about by the AI technology revolution. His new co-authored book AI 2041, published in fall 2021, explores how artificial intelligence will change our world over the next twenty years.

How AI Will Transform Our World

AI is fundamentally transforming every aspect of human life on an unimaginable scale, revolutionizing how goods are made, generating unprecedented wealth, and creating brand-new forms of interaction. Join internationally renowned AI expert Dr. Kai-Fu Lee, bestselling author of AI Superpowers, as he introduces his predictions of the next five major technology trends of our century. In this illuminating talk, Dr. Lee will guide us through the most current breakthroughs in artificial intelligence, automation and robotics, life sciences, new energy, and quantum computing, and the cross-pollination possibilities across these disciplines. Based on his predictions, AI and automation will change everything from how things are produced to how business decisions are made, leading up to the "age of plenitude." AI, coupled with other technology breakthroughs, will benefit human well-being and longevity and accelerate new sources of clean energy and safer food. Dr. Lee will also decipher China's rise within global technological paradigms on its trajectory to becoming a deep-tech superpower.

Keynote Speakers

Qiang Yang
Hong Kong University of Science and Technology, China

Qiang Yang is a Fellow of the Canadian Academy of Engineering (CAE) and the Royal Society of Canada (RSC), Chief Artificial Intelligence Officer of WeBank, and Chair Professor in the CSE Department of the Hong Kong University of Science and Technology. He is the Conference Chair of AAAI-21, President of the Hong Kong Society of Artificial Intelligence and Robotics (HKSAIR), President of the Investment Technology League (ITL) and the Open Islands Privacy-Computing Open-source Community, and former President of IJCAI (2017-2019). He is a fellow of AAAI, ACM, IEEE, and AAAS. His research interests include transfer learning and federated learning. He is the founding Editor-in-Chief of two journals: IEEE Transactions on Big Data and ACM Transactions on Intelligent Systems and Technology. His latest books are Transfer Learning, Federated Learning, Privacy-Preserving Computing, and Practicing Federated Learning.

Recent Advances in Trustworthy Federated Learning

Federated learning is an important intersection of AI, big data, and privacy computing. How to make federated learning safe, trustworthy, and efficient at the same time is a focus of future industry and research. In my lecture, I will systematically review the progress and challenges of AI and introduce federated learning as a secure distributed approach to AI modeling. I will then discuss several important research and application directions.
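
The core federated averaging idea behind this line of work can be sketched in a few lines of NumPy (a toy linear-regression example for illustration only, not the privacy-preserving protocols discussed in the talk): each client takes a gradient step on its private data, and a server averages the resulting models without ever seeing the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Each client holds a private shard of (X, y); the data never leaves the client.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)          # global model held by the server
lr = 0.1
for _ in range(200):     # communication rounds
    local_models = []
    for X, y in clients:                     # run in parallel in practice
        grad = 2 * X.T @ (X @ w - y) / len(y)
        local_models.append(w - lr * grad)   # one local update per round
    w = np.mean(local_models, axis=0)        # server aggregates the models

print(w)  # close to w_true
```

With one full-batch step per round and equally sized shards, this reduces exactly to centralized gradient descent on the pooled data, which is why the averaged model converges to the global least-squares solution.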

Yi Ma
University of California, Berkeley, USA

Yi Ma is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests include computer vision, high-dimensional data analysis, and intelligent systems. Yi received his bachelor's degrees in Automation and Applied Mathematics from Tsinghua University in 1995, two master's degrees in EECS and Mathematics in 1997, and a PhD in EECS from UC Berkeley in 2000. He was on the faculty of the ECE department at UIUC from 2000 to 2011, a principal researcher and manager of the Visual Computing group at Microsoft Research Asia from 2009 to 2014, and the Executive Dean of the School of Information Science and Technology at ShanghaiTech University from 2014 to 2017. He then joined the faculty of UC Berkeley EECS in 2018. He has published about 60 journal papers, 120 conference papers, and three textbooks on computer vision, generalized principal component analysis, and high-dimensional data analysis. He received the NSF CAREER Award in 2004 and the ONR Young Investigator Award in 2005. He also received the David Marr Prize in computer vision at ICCV 1999 and best paper awards at ECCV 2004 and ACCV 2009. He served as Program Chair for ICCV 2013 and General Chair for ICCV 2015. He is a Fellow of IEEE, ACM, and SIAM.

CTRL: Closed-Loop Data Transcription via Rate Reduction

In this talk we introduce a principled computational framework for learning a compact, structured representation of real-world datasets. More specifically, we propose to learn a closed-loop transcription between the distribution of a high-dimensional multi-class dataset and an arrangement of multiple independent subspaces, known as a linear discriminative representation (LDR). We argue that the encoding and decoding mappings of the transcription naturally form a closed-loop sensing and control system. The optimality of the closed-loop transcription, in terms of parsimony and self-consistency, can be characterized in closed form by an information-theoretic measure known as the rate reduction. The optimal encoder and decoder can be naturally sought through a two-player minimax game over this principled measure. To a large extent, this new framework unifies the concepts and benefits of auto-encoders and GANs and generalizes them to the setting of learning a representation that is both discriminative and generative for multi-class visual data. This work opens many new mathematical problems regarding learning linearized representations for nonlinear submanifolds in high-dimensional spaces, and suggests potential computational mechanisms for how visual memory of multiple object classes could be formed jointly or incrementally through a purely internal closed-loop feedback process.
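
For readers unfamiliar with the measure, the rate reduction has a closed form; the following restates the formula from the related maximal-coding-rate-reduction (MCR²) papers (a paraphrase from the literature, not text from the talk), where $\boldsymbol{Z} \in \mathbb{R}^{d \times m}$ are the learned features of $m$ samples, the diagonal matrices $\boldsymbol{\Pi}_j$ encode membership in the $k$ classes, and $\epsilon$ is a prescribed distortion:

```latex
\Delta R(\boldsymbol{Z}, \boldsymbol{\Pi}, \epsilon)
= \underbrace{\frac{1}{2}\log\det\!\Big(\boldsymbol{I}
    + \frac{d}{m\epsilon^{2}}\,\boldsymbol{Z}\boldsymbol{Z}^{\top}\Big)}_{\text{coding rate of the whole ensemble}}
\;-\;
\underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\boldsymbol{\Pi}_{j})}{2m}
  \log\det\!\Big(\boldsymbol{I}
    + \frac{d}{\operatorname{tr}(\boldsymbol{\Pi}_{j})\,\epsilon^{2}}\,
      \boldsymbol{Z}\boldsymbol{\Pi}_{j}\boldsymbol{Z}^{\top}\Big)}_{\text{sum of coding rates of the $k$ classes}}
```

The two-player game mentioned in the abstract is played over this quantity: the encoder seeks to maximize it (expand the ensemble, compress each class), while the decoder closes the loop by checking self-consistency of the transcription.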

Related papers can be found at: https://arxiv.org/abs/2111.06636, https://arxiv.org/abs/2105.10446, and https://arxiv.org/abs/2202.05411.

Nick Sahinidis
Georgia Institute of Technology, USA

Nick Sahinidis is Butler Family Chair and Professor of Industrial & Systems Engineering and Chemical & Biomolecular Engineering at the Georgia Institute of Technology. Dr. Sahinidis previously taught at the University of Illinois at Urbana-Champaign (1991-2007) and Carnegie Mellon University (2007-2020). He has pioneered algorithms and developed widely used software for optimization and machine learning. He received the INFORMS Computing Society Prize in 2004, the Beale-Orchard-Hays Prize from the Mathematical Programming Society in 2006, the Computing in Chemical Engineering Award in 2010, the Constantin Carathéodory Prize in 2015, and the National Award and Gold Medal from the Hellenic Operational Research Society in 2016. He is a member of the US National Academy of Engineering, a fellow of INFORMS, a fellow of AIChE, and the Editor-in-Chief of Optimization and Engineering.

https://sahinidis.coe.gatech.edu/
nikos@gatech.edu

Data-driven Optimization

This talk presents recent theoretical, algorithmic, and methodological advances for black-box optimization problems for which optimization must be performed in the absence of an algebraic formulation, i.e., by utilizing only data originating from simulations or experiments. We investigate the relative merits of optimizing surrogate models based on generalized linear models and deep learning. Additionally, we present new optimization algorithms for direct data-driven optimization. Our approach combines model-based search with a dynamic domain partition strategy that guarantees convergence to a global optimum. Equipped with a clustering algorithm for balancing global and local search, the proposed approach outperforms existing derivative-free optimization algorithms on a large collection of problems.
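
The surrogate-model loop described above can be illustrated with a deliberately tiny sketch (a 1-D toy with a polynomial surrogate, not the production-grade algorithms from the talk): fit a model to the samples evaluated so far, minimize the surrogate over the domain, evaluate the true black box at that point, and repeat.

```python
import numpy as np

def black_box(x):
    # Stands in for an expensive simulation or experiment.
    return (x - 0.37) ** 2

xs = [0.0, 0.5, 1.0]                # initial design points
ys = [black_box(x) for x in xs]
grid = np.linspace(0.0, 1.0, 1001)  # candidate points in the domain

for _ in range(5):
    coeffs = np.polyfit(xs, ys, deg=2)                  # quadratic surrogate
    x_next = grid[np.argmin(np.polyval(coeffs, grid))]  # minimize the surrogate
    xs.append(x_next)
    ys.append(black_box(x_next))    # one new expensive evaluation

best = xs[int(np.argmin(ys))]
print(best)  # close to the true minimizer 0.37
```

Real data-driven optimizers replace the polynomial with richer surrogates (generalized linear models, deep networks) and add the domain-partitioning and clustering machinery that provides the global convergence guarantee described in the abstract.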

Dacheng Tao
Inaugural Director, JD Explore Academy, China
Senior Vice President, JD.com

Dacheng Tao is the Inaugural Director of the JD Explore Academy and a Senior Vice President of JD.com. He is also an advisor and chief scientist of the digital science institute at the University of Sydney. He mainly applies statistics and mathematics to artificial intelligence and data science, and his research is detailed in one monograph and over 200 publications in prestigious journals and proceedings of leading conferences. He received the 2015 Australian Scopus-Eureka Prize, the 2018 IEEE ICDM Research Contributions Award, and the 2021 IEEE Computer Society McCluskey Technical Achievement Award. He is a fellow of the Australian Academy of Science, the World Academy of Sciences, the Royal Society of NSW, AAAS, ACM, IAPR, and IEEE.

More Is Different: ViTAE elevates the art of computer vision

Big data contains a tremendous amount of dark knowledge. The community has realized that effectively exploring and using such knowledge is essential to achieving superior intelligence. How can we effectively distill the dark knowledge from ultra-large-scale data? One possible answer is: "through Transformers". Transformers have proven their prowess at extracting and harnessing the dark knowledge from data, because more is truly different when it comes to Transformers. In this talk, I will showcase our recent work on a family of vision transformers named ViTAE along many dimensions of "more", including model parameters, labeled and unlabeled data, prior knowledge, computing resources, tasks, and modalities. Specifically, ViTAE has more model parameters and more input modality support; ViTAE can absorb and encode more data to extract more dark knowledge; ViTAE is able to adopt more prior knowledge in the form of biases and constraints; and ViTAE can easily be adapted to larger-scale parallel computing resources to achieve faster training.

ViTAE has been applied to many computer vision tasks and has proven its promise, such as image recognition, object detection, semantic segmentation, image matting, pose estimation, scene text understanding, and remote sensing.

The source code for this work is publicly available online.

Invited Speakers

Yiran Chen
Duke University, USA

Yiran Chen received his B.S. (1998) and M.S. (2001) from Tsinghua University and his Ph.D. (2005) from Purdue University. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor and was promoted to Associate Professor with tenure in 2014, holding the Bicentennial Alumni Faculty Fellowship. He is now a Professor in the Department of Electrical and Computer Engineering at Duke University, serving as the director of the NSF AI Institute for Edge Computing Leveraging Next-generation Networks (Athena) and the NSF Industry-University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC), and as co-director of the Duke Center for Computational Evolutionary Intelligence (CEI). His group focuses on research into new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. Dr. Chen has published 1 book and about 500 technical publications and has been granted 96 US patents. He has served as an associate editor of a dozen international academic transactions/journals and on the technical and organization committees of more than 60 international conferences. He is now serving as the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He has received seven best paper awards, one best poster award, and fifteen best paper nominations from international conferences and workshops. He has received many professional awards and was a distinguished lecturer of IEEE CEDA (2018-2021). He is a Fellow of the ACM and IEEE and serves as the chair of ACM SIGDA.

Scalable, Heterogeneity-Aware and Trustworthy Federated Learning

Federated learning has become a popular distributed machine learning paradigm for developing on-device AI applications. However, the data residing across devices is intrinsically statistically heterogeneous (i.e., has a non-IID data distribution), and mobile devices usually have limited communication bandwidth for transferring local updates. Such statistical heterogeneity and communication limitations are two major bottlenecks that hinder applying federated learning in practice. In addition, recent works have demonstrated that sharing model updates makes federated learning vulnerable to inference attacks and model poisoning attacks. In this talk, we will present our recent work on novel federated learning frameworks that address the scalability and heterogeneity issues simultaneously. We will also reveal the essential reasons for the adversarial vulnerability of deep learning models and the privacy leakage in federated learning procedures, and provide corresponding defense mechanisms toward trustworthy federated learning.
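
The statistical heterogeneity mentioned above is commonly simulated in federated learning research with a Dirichlet label partition (a standard benchmark device, not necessarily the exact setup used in the speaker's work): each class's samples are split across clients with Dirichlet-sampled proportions, and a smaller concentration parameter alpha yields more skewed, non-IID client datasets.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Split sample indices across clients; per-class shares ~ Dirichlet(alpha)."""
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        # Cut points for splitting this class's samples by the sampled shares.
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=5000)   # 10-class toy label vector
parts = dirichlet_partition(labels, n_clients=8, alpha=0.1, rng=rng)
print([len(p) for p in parts])            # uneven, label-skewed shards
```

With alpha around 0.1, many clients end up holding only a few classes, which is exactly the regime where naive model averaging degrades and heterogeneity-aware methods are needed.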

Chuchu Fan
Massachusetts Institute of Technology, USA

Chuchu Fan is an Assistant Professor in the Department of Aeronautics and Astronautics at MIT. Before that, she was a postdoctoral researcher at Caltech and received her Ph.D. from the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign in 2019. She earned her bachelor's degree from the Department of Automation at Tsinghua University. Her group at MIT works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems. Chuchu's dissertation, "Formal Methods for Safe Autonomy," won the ACM Doctoral Dissertation Award in 2020.

Building Certifiably Safe and Correct Large-scale Autonomous Systems

The introduction of machine learning (ML) creates unprecedented opportunities for achieving full autonomy. However, learning-based methods in building autonomous systems can be extremely brittle in practice and are not designed to be verifiable. In this talk, I will present several of our recent efforts that combine ML with formal methods and control theory to enable the design of provably dependable and safe autonomous systems. I will introduce our techniques to generate safety certificates and certified decision and control for complex large-scale multi-agent autonomous systems, even when the agents follow nonlinear and nonholonomic dynamics and need to satisfy high-level specifications.

Yingying Fan
University of Southern California, USA

Yingying Fan is the Centennial Chair in Business Administration and Professor in the Data Sciences and Operations Department of the Marshall School of Business at the University of Southern California. She received her Ph.D. in Operations Research and Financial Engineering from Princeton University in 2007. She was a Lecturer in the Department of Statistics at Harvard University from 2007 to 2008 and Dean's Associate Professor in Business Administration at USC from 2018 to 2021. Her research interests include statistics, data science, machine learning, economics, big data, and business applications. Her latest works have focused on statistical inference for networks and on AI models empowered by some of the most recent developments in random matrix theory and statistical learning theory. She is the recipient of the Institute of Mathematical Statistics Medallion Lecture (2023), the International Congress of Chinese Mathematicians 45-Minute Invited Lecture (2022), the Centennial Chair in Business Administration (2021, inaugural holder), an NSF Focused Research Group (FRG) Grant (2021), Fellow of the Institute of Mathematical Statistics (2020), Associate Member of the USC Norris Comprehensive Cancer Center (2020), Fellow of the American Statistical Association (2019), Dean's Associate Professor in Business Administration (2018), an NIH R01 Grant (2018), the Royal Statistical Society Guy Medal in Bronze (2017), the USC Marshall Dean's Award for Research Excellence (2017), the USC Marshall Inaugural Dr. Douglas Basil Award for Junior Business Faculty (2014), the American Statistical Association Noether Young Scholar Award (2013), and the NSF Faculty Early Career Development (CAREER) Award (2012).
She has served as an associate editor of The Annals of Statistics (2022-present), Information and Inference (2022-present), Journal of the American Statistical Association (2014-present), Journal of Econometrics (2015-2018), Journal of Business & Economic Statistics (2018-present), The Econometrics Journal (2012-present), and Journal of Multivariate Analysis (2013-2016).

Asymptotic Properties of High-Dimensional Random Forests

As a flexible nonparametric learning tool, the random forests algorithm has been widely applied in various real applications with appealing empirical performance, even in the presence of a high-dimensional feature space. Unveiling the underlying mechanisms has led to some important recent theoretical results on the consistency of the random forests algorithm and its variants. However, to our knowledge, all existing works concerning random forests consistency in the high-dimensional setting were established for modified random forests models whose splitting rules are independent of the response. In light of this, in this paper we derive the consistency rates for the random forests algorithm associated with the sample CART splitting criterion, which is the one used in the original version of the algorithm (Breiman, 2001), in a general high-dimensional nonparametric regression setting through a bias-variance decomposition analysis. Our new theoretical results show that random forests can indeed adapt to high dimensionality and allow for discontinuous regression functions. Our bias analysis characterizes explicitly how the random forests bias depends on the sample size, tree height, and column subsampling parameter. Some limitations of our current results are also discussed.
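
The adaptivity claim is easy to observe empirically (a toy scikit-learn illustration, not the paper's theory): with many irrelevant features and a regression function that is discontinuous in one coordinate, a forest grown with CART splits still recovers the signal far better than a non-adaptive baseline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 1000, 100                       # 100 features, only 2 of them relevant
X = rng.uniform(size=(n, p))
# Smooth in x0, discontinuous (a jump) in x1, plus noise.
y = 2 * X[:, 0] + (X[:, 1] > 0.5) + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = X[:700], X[700:], y[:700], y[700:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mse_rf = np.mean((rf.predict(X_te) - y_te) ** 2)
mse_mean = np.mean((y_tr.mean() - y_te) ** 2)   # non-adaptive mean predictor
print(mse_rf, mse_mean)  # forest error is a small fraction of the baseline
```

The response-dependent CART splits concentrate on the two informative coordinates, which is precisely the behavior the paper's consistency rates make rigorous.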

Ruth Misener
Imperial College London, UK

Ruth Misener is Professor in Computational Optimization in the Imperial College London Department of Computing. Ruth holds the BASF / Royal Academy of Engineering Research Chair in Data-Driven Optimization (2022 - 2027) and is also an Early Career Research Fellow (2017 - 2022) of the Engineering & Physical Sciences Research Council. 

Ruth received an SB from MIT and a PhD from Princeton. Foundations of her research are in numerical optimization algorithms. Applications include decision-making under uncertainty, energy efficiency, process network design & operations, and scheduling. Ruth’s research team makes their software contributions available open source (https://github.com/cog-imperial). Ruth received the 2017 Macfarlane Medal from the Royal Academy of Engineering and the 2020 Outstanding Young Researcher Award from the AIChE Computing & Systems Technology Division.

OMLT: Optimization and Machine Learning Toolkit

This talk introduces OMLT (https://github.com/cog-imperial/OMLT), an open source software package incorporating surrogate models, which have been trained using machine learning, into larger optimisation problems. Computer science applications include maximizing a neural acquisition function and verifying neural networks. Engineering applications include the use of machine learning models to replace complicated constraints in larger design/operations problems. OMLT 1.0 supports gradient-boosted trees (GBTs) through an ONNX (https://github.com/onnx/onnx) interface and neural networks (NNs) through both ONNX and Keras interfaces. We discuss the advances in optimisation technology that made OMLT possible and show how OMLT seamlessly integrates with the Python-based algebraic modeling language Pyomo (http://www.pyomo.org). The literature often presents different optimization formulations as competitors, but in OMLT, competing formulations become alternatives: users can select the best for a specific application. We provide examples including neural network verification, autothermal reformer optimization, and Bayesian optimization.
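
To make "formulations" concrete: embedding a trained ReLU network into a mixed-integer optimisation problem amounts to constraints like the textbook big-M encoding of a single neuron, checked below with NumPy (a hand-written sketch of the kind of encoding such tools generate automatically; the weights and M are illustrative).

```python
import numpy as np

def bigM_constraints_hold(w, b, x, y, z, M=100.0, tol=1e-9):
    """Check the big-M MIP encoding of y = max(0, w@x + b) with binary z.

    z = 1 means the neuron is active (y = w@x + b >= 0);
    z = 0 means it is inactive (y = 0 and w@x + b <= 0).
    """
    pre = float(np.dot(w, x) + b)
    return (y >= pre - tol                    # y >= w@x + b
            and y <= pre + M * (1 - z) + tol  # tight when z = 1
            and y <= M * z + tol              # forces y = 0 when z = 0
            and y >= -tol)                    # y >= 0

w, b = np.array([1.0, -2.0]), 0.5
for x in (np.array([3.0, 1.0]), np.array([0.0, 1.0])):
    pre = float(w @ x + b)
    y = max(0.0, pre)            # the true ReLU output...
    z = 1 if pre >= 0 else 0     # ...with the matching binary variable
    assert bigM_constraints_hold(w, b, x, y, z)
print("big-M encoding admits the true ReLU input-output pairs")
```

Choosing M (or replacing big-M with tighter partition-based or convex-hull formulations) is exactly the kind of alternative that OMLT lets users swap per application.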

Peng Shi
University of Adelaide, Australia

Peng Shi received a PhD in Electrical Engineering from the University of Newcastle, Australia, and a PhD in Mathematics from the University of South Australia. He was awarded two higher doctorates: a Doctor of Science degree from the University of Glamorgan, UK, and a Doctor of Engineering degree from the University of Adelaide, Australia. He is now a Professor at the School of Electrical and Electronic Engineering and the Director of the Advanced Unmanned Systems Laboratory at the University of Adelaide, Australia. His research interests include systems and control theory and its applications to network systems, robotic and autonomous systems, cyber-physical systems, and intelligent systems. He has been continuously recognized as a Highly Cited Researcher in both engineering and computer science by Clarivate Analytics/Thomson Reuters from 2014 to 2021. He has also been acknowledged in the Lifetime Achiever Leader Board in engineering and information technology and honored as a Field Leader by THE AUSTRALIAN, consecutively from 2019 to 2021. He has served on the editorial boards of many journals, including Automatica, IEEE Transactions on Automatic Control, IEEE Transactions on Circuits and Systems, IEEE Transactions on Fuzzy Systems, and IEEE Control Systems Letters. He now serves as the Editor-in-Chief of IEEE Transactions on Cybernetics, Co-Editor of the Australian Journal of Electrical and Electronic Engineering, and Senior Editor of IEEE Access. His professional service also includes roles as the President of the International Academy for Systems and Cybernetic Sciences, Vice President of the IEEE SMC Society, and IEEE Distinguished Lecturer. He is a Member of the Academy of Europe and a Fellow of IEEE, IET, IEAust, and CAA.

Cyber-physical Systems: Analysis and Design

Cyber-physical systems (CPSs) are mechanisms controlled or monitored by computer-based algorithms, tightly integrated with the internet and its users. CPSs are a central research topic in the era of Industry 4.0, and will continue to be in the forthcoming Industry 5.0; they have attracted a lot of attention in the past years. Undergoing an ever-enriching cognitive process, CPSs deeply integrate control, communication, computation, cloud, and cognition. In this talk, we first review some basic knowledge of the concepts, history, and some viewpoints on CPS security. Next, some common malicious threats will be introduced.

Yang Shi
University of Victoria, Canada

Yang Shi received his B.Sc. and Ph.D. degrees in mechanical engineering and automatic control from Northwestern Polytechnical University, Xi’an, China, in 1994 and 1998, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Alberta, Edmonton, AB, Canada, in 2005. From 2005 to 2009, he was an Assistant Professor and Associate Professor in the Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK, Canada. In 2009, he joined the University of Victoria, and now he is a Professor in the Department of Mechanical Engineering, University of Victoria, Victoria, BC, Canada. His current research interests include networked and distributed systems, model predictive control (MPC), cyber-physical systems (CPS), robotics and mechatronics, navigation and control of autonomous systems (AUV and UAV), and energy system applications.

Dr. Shi received the University of Saskatchewan Student Union Teaching Excellence Award in 2007 and the Faculty of Engineering Teaching Excellence Award in 2012 at the University of Victoria (UVic). He is the recipient of the JSPS Invitation Fellowship (short-term) in 2013, the UVic Craigdarroch Silver Medal for Excellence in Research in 2015, the 2017 IEEE Transactions on Fuzzy Systems Outstanding Paper Award, and the Humboldt Research Fellowship for Experienced Researchers in 2018. He is the Vice President for Conference Activities of the IEEE Industrial Electronics Society (IES) and the Chair of the IEEE IES Technical Committee on Industrial Cyber-Physical Systems. Currently, he is Co-Editor-in-Chief of IEEE Transactions on Industrial Electronics; he also serves as an Associate Editor for Automatica, IEEE Transactions on Automatic Control, etc.

He is a Fellow of IEEE, ASME, CSME, and Engineering Institute of Canada (EIC), and a registered Professional Engineer in British Columbia, Canada.

Accelerated Dual Averaging Methods for Decentralized Constrained Optimization

Decentralized optimization techniques offer high-quality solutions to various engineering problems, such as resource allocation and distributed estimation and control. The advantages of decentralized optimization over its centralized counterpart lie in its ability to provide a flexible and robust solution framework in which only locally light computations and peer-to-peer communication are required to minimize a global objective function. In this work, we consider decentralized convex constrained optimization problems over networks. A novel decentralized dual averaging (DDA) algorithm is proposed. In the algorithm, a second-order dynamic average consensus protocol is tailored for DDA-type algorithms, which equips each agent with a provably more accurate estimate of the global dual variable than conventional schemes. Such an accurate estimate validates the use of a large constant parameter within the local inexact dual averaging step performed by individual agents. Compared to existing DDA methods, the rate of convergence is improved to $\mathcal{O}(1/t)$, where $t$ is the time counter. Finally, numerical results are presented to demonstrate the efficiency of the proposed methods.
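
The classical (non-accelerated) decentralized dual averaging scheme that this work improves upon can be sketched on a toy scalar problem (the talk's accelerated O(1/t) method adds the second-order consensus estimate and a constant step parameter, both omitted here): each agent mixes its neighbors' dual variables, adds its local gradient, and projects onto the constraint set.

```python
import numpy as np

# Three agents on a complete graph minimize f(x) = sum_i (x - a_i)^2 over [0, 1].
a = np.array([0.1, 0.5, 0.9])        # local targets; global optimum x* = 0.5
W = np.full((3, 3), 1.0 / 3.0)       # doubly stochastic mixing matrix
z = np.zeros(3)                      # dual variables (accumulated gradients)
x = np.zeros(3)                      # primal iterates, one per agent

for t in range(1, 5001):
    grads = 2 * (x - a)              # each agent's local gradient
    z = W @ z + grads                # consensus on duals + local gradient
    step = 0.5 / np.sqrt(t)          # classical O(1/sqrt(t)) step-size schedule
    x = np.clip(-step * z, 0.0, 1.0) # prox step: project onto the constraint set

print(x)  # all agents near the optimum 0.5
```

Only the dual variables travel over the network, so each agent's data stays local; the accelerated method in the talk replaces this simple mixing with a more accurate dynamic estimate of the global dual variable, which is what buys the faster O(1/t) rate.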

Kay Chen Tan
Hong Kong Polytechnic University, China

Kay Chen Tan is currently a Chair Professor (Computational Intelligence) and Associate Head (Research and Developments) of the Department of Computing, The Hong Kong Polytechnic University. He has co-authored 7 books and published over 230 peer-reviewed journal papers. Prof. Tan is currently the Vice-President (Publications) of the IEEE Computational Intelligence Society, USA. He was the Editor-in-Chief of IEEE Transactions on Evolutionary Computation from 2015 to 2020 (IF: 11.554) and of IEEE Computational Intelligence Magazine from 2010 to 2013 (IF: 11.356). Prof. Tan is an IEEE Fellow, an IEEE Distinguished Lecturer Program (DLP) speaker, and an Honorary Professor at the University of Nottingham in the UK. He also serves as the Chief Co-Editor of the Springer book series Machine Learning: Foundations, Methodologies, and Applications.

Advances in Evolutionary Transfer Optimization

It is known that learning, and the transfer of what has been learned, are important to humans for solving complex problems. However, optimization methodologies that learn from existing solutions and transfer what has been learned to related or unseen problems remain under-explored in the context of evolutionary computation. This talk will give an overview of evolutionary transfer optimization (ETO), an emerging research direction that integrates evolutionary algorithm solvers with knowledge learning and transfer across different problem domains to achieve better optimization efficiency and performance. It will present some recent research work in ETO for solving multi-objective and large-scale optimization problems via high-performance computing, and will close with a discussion of future ETO research directions, including theoretical analysis and real-world applications.
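The core transfer idea can be sketched in a few lines. The following toy is my own illustration, not a method from the talk: solutions evolved on a source task are injected into the initial population for a related target task, and the warm-started run outperforms a cold start after the same small budget. The shifted-sphere tasks and all names are assumptions.

```python
import random

def evolve(fitness, pop, generations=60, sigma=0.1):
    """Plain (mu + lambda)-style evolution on real-valued vectors."""
    for _ in range(generations):
        children = [[g + random.gauss(0, sigma) for g in random.choice(pop)]
                    for _ in range(len(pop))]
        pop = sorted(pop + children, key=fitness)[:len(pop)]
    return pop

def sphere(shift):
    """Sphere function with a shifted optimum."""
    return lambda x: sum((xi - s) ** 2 for xi, s in zip(x, shift))

random.seed(0)
dim, pop_size = 5, 10
source_task = sphere([1.0] * dim)   # previously solved problem
target_task = sphere([1.1] * dim)   # related, slightly shifted problem

# solve the source task from scratch
source_pop = evolve(source_task, [[random.uniform(-2, 2) for _ in range(dim)]
                                  for _ in range(pop_size)])

# transfer: seed half the target population with source-task elites
cold_pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
warm_pop = source_pop[:pop_size // 2] + cold_pop[:pop_size // 2]

budget = 10   # a deliberately small budget on the target task
cold_best = target_task(evolve(target_task, cold_pop, generations=budget)[0])
warm_best = target_task(evolve(target_task, warm_pop, generations=budget)[0])
```

Because the target optimum lies close to the source optimum, the transferred elites start near-optimal and the warm run converges far faster; when tasks are unrelated, such naive transfer can hurt, which is one motivation for the learning components ETO studies.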

Jun Wang
City University of Hong Kong, China
Biography

Jun Wang is the Chair Professor of Computational Intelligence in the Department of Computer Science and School of Data Science at City University of Hong Kong. Prior to this position, he held various academic positions at Dalian University of Technology, Case Western Reserve University, University of North Dakota, and the Chinese University of Hong Kong. He also held various short-term visiting positions at USAF Armstrong Laboratory, RIKEN Brain Science Institute, and Shanghai Jiao Tong University. He received a B.S. degree in electrical engineering and an M.S. degree from Dalian University of Technology and his Ph.D. degree from Case Western Reserve University. He was the Editor-in-Chief of the IEEE Transactions on Cybernetics. He is an IEEE Life Fellow, IAPR Fellow, and a foreign member of Academia Europaea. He is a recipient of the APNNA Outstanding Achievement Award, IEEE CIS Neural Networks Pioneer Award, and IEEE SMCS Norbert Wiener Award, among other distinctions.

Advances in Collaborative Neurodynamic Optimization

The past four decades have witnessed the birth and growth of neurodynamic optimization, which has emerged as a potentially powerful problem-solving tool for constrained optimization owing to its biological plausibility and its parallel, distributed information processing. Despite this success, until a few years ago almost all existing neurodynamic approaches worked well only for optimization problems with convex or generalized-convex functions; effective neurodynamic approaches to problems with nonconvex functions and discrete variables were rarely available. In this talk, a collaborative neurodynamic optimization framework will be presented. Multiple neurodynamic optimization models with different initial states are employed in the framework for scattered local search. In addition, a meta-heuristic rule from swarm intelligence (such as PSO) is used to reposition neuronal states upon their local convergence, so as to escape local minima and move toward global optima. Experimental results will be elaborated to substantiate the efficacy of several specific paradigms in this framework for nonnegative matrix factorization, supervised learning, vehicle-task assignment, portfolio selection, and energy load dispatching.
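The interplay the abstract describes, local convergence by individual models followed by swarm-style repositioning, can be illustrated with a toy. In this sketch (my own construction, not the talk's framework), plain gradient descent stands in for each neurodynamic model, the objective is a one-dimensional double well, and a PSO-style velocity update relocates converged states; all parameter values are illustrative assumptions.

```python
import random

def f(x):
    # nonconvex double-well objective; global minimum near x = -1.04
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def local_descent(x, steps=200, lr=0.02):
    """Stand-in for one neurodynamic model: descend to a local minimum."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def collaborative_search(n_models=6, rounds=10):
    random.seed(1)
    # different initial states spread over the feasible interval [-2, 2]
    xs = [-2.0 + 4.0 * i / (n_models - 1) for i in range(n_models)]
    vel = [0.0] * n_models
    pbest_x = list(xs)                 # personal best state per model
    for _ in range(rounds):
        xs = [local_descent(x) for x in xs]   # each model converges locally
        for i, x in enumerate(xs):
            if f(x) < f(pbest_x[i]):
                pbest_x[i] = x
        gbest = min(pbest_x, key=f)           # best state found so far
        for i in range(n_models):             # PSO-style repositioning
            r1, r2 = random.random(), random.random()
            vel[i] = (0.5 * vel[i] + 1.5 * r1 * (pbest_x[i] - xs[i])
                      + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = max(-2.0, min(2.0, xs[i] + vel[i]))
    return min(pbest_x, key=f)

x_star = collaborative_search()
```

Models that settle in the shallow well near x ≈ 0.96 are pulled toward the global basin by the swarm update, which is the escape mechanism the framework relies on.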

Fengqi You
Cornell University, USA
Biography

Fengqi You is the Roxanne E. and Michael J. Zak Professor at Cornell University (Ithaca, New York). He also serves as Chair of Ph.D. Studies in Cornell Systems Engineering, Associate Director of the Cornell Energy Systems Institute, and Associate Director of the Cornell Institute for Digital Agriculture. His research focuses on fundamental theory and methods in systems engineering and artificial intelligence, as well as their applications to smart manufacturing, digital agriculture, quantum computing, energy systems, and sustainability. He is an award-winning scholar and teacher, having received around 20 major national and international awards over the past six years from the American Institute of Chemical Engineers (AIChE), the American Chemical Society (ACS), the Royal Society of Chemistry (RSC), the American Society for Engineering Education (ASEE), and the American Automatic Control Council (AACC), in addition to a number of best paper awards. Fengqi is an elected Fellow of the Royal Society of Chemistry (FRSC) and a Fellow of the American Institute of Chemical Engineers (AIChE Fellow). For more information about his research group: www.peese.org

Quantum Computing for Optimization and Machine Learning: From Models and Algorithms to Use Cases

Quantum computing is attracting growing interest due to its unique capabilities and disruptive potential. This presentation will briefly introduce quantum computing and its potential applications to systems optimization and machine learning. We will introduce several new algorithms and methods that exploit the strengths of quantum computing techniques to address the computational challenges of classically intractable optimization problems, with applications including molecular design, manufacturing systems operations, and supply chain optimization. In the second half of the presentation, we will focus on quantum machine learning and the emerging hybrid classical-quantum computing paradigm, which brings the same strengths to bear on important AI-related problems. The presentation will conclude with a novel deep learning model and quantum computing algorithm for efficient and effective fault diagnosis in manufacturing and electric power systems.
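A standard first step when targeting quantum optimizers (annealers or QAOA) is casting the problem as a QUBO, minimizing x^T Q x over binary variables. The sketch below is a generic illustration of that formulation step, not a method from the talk: it builds the QUBO for max-cut on a small graph and, as a classical stand-in for the quantum sampler, finds the ground state by enumeration. The graph and all names are my own choices.

```python
from itertools import product

def maxcut_qubo(edges, n):
    """QUBO matrix Q for max-cut: minimize x^T Q x, x_i in {0, 1}.

    An edge (i, j) is cut exactly when x_i + x_j - 2*x_i*x_j == 1,
    so minimizing the negated cut gives the diagonal/coupling terms below.
    """
    Q = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        Q[i][i] -= 1.0      # linear terms live on the diagonal (x_i^2 = x_i)
        Q[j][j] -= 1.0
        Q[i][j] += 2.0      # quadratic coupling
    return Q

def brute_force_qubo(Q):
    """Classical stand-in for a quantum sampler: enumerate all bitstrings."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in product([0, 1], repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# 5-cycle graph: the optimal cut crosses 4 of the 5 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x, e = brute_force_qubo(maxcut_qubo(edges, 5))
```

The energy of the ground state equals minus the maximum cut size; on real hardware the enumeration is replaced by quantum sampling, which is where the potential advantage on classically intractable instances would come from.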

Qingfu Zhang
City University of Hong Kong, China
Biography

Qingfu Zhang is a Chair Professor of Computational Intelligence with the Department of Computer Science, City University of Hong Kong. He is an IEEE Fellow. His main research interests include evolutionary computation, optimization, neural networks, machine learning, and their applications.

His multiobjective evolutionary algorithm based on decomposition (MOEA/D) has become one of the most researched and most widely used algorithms in evolutionary computation and in many application areas.

Multiobjective Evolutionary Computation Based on Decomposition

Many optimization problems in the real world have, by nature, multiple conflicting objectives. Unlike a single-objective optimization problem, a multiobjective optimization problem has a set of Pareto-optimal solutions (the Pareto front), which is often what a decision maker requires. Evolutionary algorithms are able to generate an approximation to the Pareto front in a single run, and many traditional optimization methods have also been developed for dealing with multiple objectives. A combination of evolutionary algorithms and traditional optimization methods could form a next-generation multiobjective optimization solver. Decomposition techniques have been widely used and studied in traditional multiobjective optimization. Over the last decade, much effort has been devoted to building efficient multiobjective evolutionary algorithms based on decomposition (MOEA/D). In this talk, I will describe the main ideas and techniques in MOEA/D together with some recent developments, and I will also discuss possible research issues in multiobjective evolutionary computation.
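The decomposition idea is easy to see in code. The following is a minimal sketch of the published MOEA/D scheme under my own simplifications (one-dimensional decision variable, mutation-only reproduction, Tchebycheff scalarization); the toy bi-objective problem and all parameter values are illustrative assumptions.

```python
import random

def moead(n_sub=11, n_neigh=3, generations=100):
    """Minimal MOEA/D with Tchebycheff decomposition on the toy problem
    F(x) = (x^2, (x - 2)^2), x in [0, 2]; the whole interval is Pareto-optimal."""
    random.seed(0)
    F = lambda x: (x * x, (x - 2.0) ** 2)
    # evenly spread weight vectors: one scalar subproblem each
    W = [(i / (n_sub - 1), 1.0 - i / (n_sub - 1)) for i in range(n_sub)]
    # neighbourhood: the subproblems with the closest weight vectors
    B = [sorted(range(n_sub), key=lambda j: abs(i - j))[:n_neigh]
         for i in range(n_sub)]
    X = [random.uniform(0.0, 2.0) for _ in range(n_sub)]
    z = [min(F(x)[k] for x in X) for k in (0, 1)]      # ideal-point estimate

    def g(x, w):
        # Tchebycheff scalarizing function
        fx = F(x)
        return max(w[k] * abs(fx[k] - z[k]) for k in (0, 1))

    for _ in range(generations):
        for i in range(n_sub):
            # child from a neighbouring parent (crossover omitted), repaired
            y = random.choice([X[j] for j in B[i]]) + random.gauss(0.0, 0.1)
            y = max(0.0, min(2.0, y))
            fy = F(y)
            z = [min(z[k], fy[k]) for k in (0, 1)]     # update ideal point
            for j in B[i]:                             # update neighbours
                if g(y, W[j]) < g(X[j], W[j]):
                    X[j] = y
    return sorted(X)

pareto_xs = moead()
```

Each weight vector defines one single-objective subproblem, and mating plus replacement are restricted to neighbouring subproblems; that locality is what lets the population spread across the Pareto front instead of collapsing to a single solution.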

Qingpeng Zhang
City University of Hong Kong, China
Biography

Qingpeng Zhang is an Associate Professor with the School of Data Science at CityU. He received the B.S. degree in Automation from Huazhong University of Science and Technology in 2009, and the Ph.D. degree in Systems and Industrial Engineering from The University of Arizona in 2012. Prior to joining CityU, he worked as a Postdoctoral Research Associate with The Tetherless World Constellation at Rensselaer Polytechnic Institute. His research interests include healthcare data analytics, medical informatics, network science, and artificial intelligence. His research has been published in leading journals such as Nature Human Behaviour, Nature Communications, JAMIA, and MIS Quarterly, and has been featured in press outlets such as The Washington Post, The New York Times, New York Public Radio, The Guardian, The Daily Mail, and Global News.

GraphSynergy: A Network-inspired Deep Learning Model for Anticancer Drug Combination Prediction

In this talk, I will introduce an end-to-end deep learning framework based on a protein–protein interaction (PPI) network for predicting synergistic anticancer drug combinations. The framework, named GraphSynergy, adapts a spatial-based graph convolutional network component to encode the high-order topological relationships in the PPI network among the protein modules targeted by a pair of drugs, as well as the protein modules associated with a specific cancer cell line. The pharmacological effects of drug combinations are explicitly evaluated by their therapy and toxicity scores. An attention component in GraphSynergy captures the pivotal proteins that play a part in both the PPI network and the biomolecular interactions between drug combinations and cancer cell lines. GraphSynergy outperforms classic and state-of-the-art models in predicting synergistic drug combinations on the two latest drug combination datasets: it achieves accuracy values of 0.7553 on DrugCombDB and 0.7557 on Oncology-Screen, improvements of 11.94% and 10.95%, respectively, over DeepSynergy, the latest published drug combination prediction algorithm. Furthermore, the proteins assigned high contribution weights during the training of GraphSynergy are shown to play meaningful roles in molecular functions and biological processes, such as transcription and transcription regulation. This research indicates that introducing topological relations between drug combinations and cell lines within the PPI network can significantly improve the identification of synergistic drug combinations.
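The spatial graph-convolution step at the heart of such models is simple to sketch. The toy below is my own illustration, not the GraphSynergy implementation: a degree-normalised neighbourhood average over a hypothetical five-protein PPI network, showing how two propagation steps let a protein's representation absorb signals from a drug-target module and a cell-line module two hops away.

```python
def gcn_layer(adj, feats):
    """One spatial graph-convolution step: each protein's new feature is
    the average over itself and its PPI neighbours (self-loop included)."""
    n = len(adj)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]   # add self-loop
        out.append([sum(feats[j][k] for j in nbrs) / len(nbrs)
                    for k in range(len(feats[0]))])
    return out

# hypothetical 5-protein PPI network: chain 0-1-2 plus edges 2-3 and 3-4
adj = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [0, 0, 0, 1, 0]]
# one-hot flags: [targeted by drug A, targeted by drug B, cell-line module]
feats = [[1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]]

h1 = gcn_layer(adj, feats)
h2 = gcn_layer(adj, h1)   # two hops: the "high-order" topology in the text
```

After two layers, the drug-B target (protein 2) carries nonzero signal from both the drug-A target and the cell-line module, which is exactly the kind of multi-hop mixing the framework encodes before scoring therapy and toxicity; the real model adds learned weight matrices, nonlinearities, and attention on top.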