Workshop on Trustworthy AI

Project Ultimate is organising a remote workshop on Trustworthy AI on the morning of 2 February 2024. Experts in hybrid AI, safety and trustworthiness for robotics, ethics, and explainable AI (XAI) from industry, academia, and beyond will attend to share opinions and ideas through presentations and a round table. The workshop will run from 9:30 to 13:30 (CET). Important notice: the meeting will be remote only.

9:30 – 9:40    Introduction (Dr. Michel Barreteau)

9:45 – 10:15   "The role of Trustworthy AI / Explainable AI in the Telecom industry" (Prof. Rafia Inam)

10:15 – 10:45  "Trustworthy AI – a Legal Orientation" (Prof. Stanley Greenstein)

10:45 – 11:15  "Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation" (Dr. Natalia Díaz Rodríguez)

11:15 – 11:30  Short break

11:30 – 12:00  "Multilevel Physics Informed Methods" (Prof. Serge Gratton)

12:00 – 12:30  "AI-Powered Human-Centred Robot Interactions for Smart Manufacturing" (Dr. Francisco Fraile)

12:30 – 13:15  Round table

13:15 – 13:30  Feedback and farewell

Rafia Inam

She is a Senior Project Manager at Ericsson Research in Trustworthy AI and an Adjunct Professor at KTH. She has conducted research at Ericsson for the past nine years on 5G for industries, network slicing and network management, and AI for automation. She specializes in trustworthy AI, explainable AI, risk assessment and mitigation using AI methods, and safety for cyber-physical systems, applied to telecom and collaborative robots. She won the Ericsson Top Performance Competition 2021 for her work on AI for 5G network slice assurance and has received multiple Ericsson Key Impact Awards; two of her papers have won best paper awards. Rafia received her PhD in predictable real-time embedded software from Mälardalen University in 2014. She has co-authored 50+ refereed scientific publications and 55+ patent families, and serves as a program committee member, referee, and guest editor for several international conferences and journals.

The role of Trustworthy AI / Explainable AI in the Telecom industry

Trust in and reliance on modern telecom systems are widespread. However, the adoption of AI introduces new risks and necessitates countermeasures, and governments, companies, and standards bodies worldwide recognize the need for trustworthy AI systems. The presentation will discuss the importance of Trustworthy AI and Explainable AI for the telecom industry in enabling customer trust, and how these techniques can help the industry ensure the correctness of AI models, provide transparency to different users, enable automation of telecom use cases, and identify and describe unexplained or new behavior of the models. The talk presents telecom examples using several different explainable AI techniques.
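As a concrete, hypothetical illustration of the kind of transparency such techniques offer, the sketch below applies the SHAP library to a toy classifier that flags degraded network cells. The KPI features, data, and "degraded" label are invented for this example and are not taken from the talk.

```python
# Hypothetical sketch: per-prediction feature attributions with SHAP for a
# toy telecom model. Features and labels are synthetic, for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invented KPI columns: latency_ms, packet_loss, throughput_mbps, jitter_ms
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "degraded" flag

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input KPIs, giving a
# per-cell answer to "why was this flagged?" for operators and customers.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:10])
```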

Stanley Greenstein

He is a Senior Lecturer (Associate Professor) in Law and Information Technology (IT) at the Department of Law, Faculty of Law, Stockholm University. He is also a researcher at the Swedish Law and Informatics Research Institute (IRI, https://irilaw.org/) and a Digital Futures faculty member. Stanley's main area of interest is the interaction between technology and law; his teaching, research, and practical project work have centred on artificial intelligence (AI) and its ethical and societal implications.

Trustworthy AI – a Legal Orientation

 

One of the functions of the law is to protect society from risks. While emerging technologies such as Artificial Intelligence (AI) bring many advantages to society, they also bring risks. As the risks associated with AI have become more widely apparent, regulatory initiatives to address them have multiplied. The European Union has identified that the advantages of AI must be harnessed; however, for Europeans to embrace this technology, it needs to be trusted. The EU has therefore developed a regulatory strategy to mitigate the risks associated with AI and, in turn, encourage its use. This presentation will provide an orientation on current and forthcoming regulatory initiatives relating to trustworthy AI, thereby facilitating cooperation between the different academic disciplines working together to create trustworthy AI.

Natalia Díaz Rodríguez

Natalia Díaz Rodríguez holds a double PhD (2015) from the University of Granada (Spain) and Åbo Akademi University (Finland). She is currently a Marie Curie postdoctoral researcher and docent at the DaSCI Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI.es), Department of Computer Science and Artificial Intelligence. Earlier, she worked in Silicon Valley, at CERN, Philips Research, and the University of California Santa Cruz, and with the FDL programme with NASA. For four years she was also an Assistant Professor of Artificial Intelligence at the Autonomous Systems and Robotics Lab (U2IS) at ENSTA, Institut Polytechnique de Paris, in the INRIA Flowers team on developmental robotics, where she worked on open-ended learning and continual/lifelong learning for applications in computer vision and robotics. Her current research interests include deep learning, explainable AI (XAI), responsible and trustworthy AI, and AI for social good. Her background is in knowledge engineering, and she is interested in neural-symbolic approaches to practical applications of responsible and ethical AI.

Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation

Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI requires a wider vision that comprises the trustworthiness of all processes and actors in the system's life cycle and considers the previous aspects through different lenses. A more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: what each requirement for trustworthy AI is, why it is needed, and how it can be implemented in practice. A practical approach to implementing trustworthy AI systems, in turn, makes it possible to define the responsibility of AI-based systems before the law through a given auditing process. The responsible AI system is thus the notion we introduce in this work: a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.
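To make the "how" dimension concrete, the following hypothetical sketch shows one way the fairness requirement could be operationalized as an auditable check, via a demographic parity gap between two groups. The data, groups, and threshold are invented for illustration and are not part of the abstract.

```python
# Hypothetical sketch: one auditable check for the "diversity,
# non-discrimination and fairness" requirement. All data are synthetic.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)  # binary model decisions
group = rng.integers(0, 2, size=1000)   # protected attribute (two groups)

gap = demographic_parity_difference(y_pred, group)
print(f"demographic parity gap: {gap:.3f}")  # an audit might require gap < 0.1
```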

Serge Gratton

Serge Gratton is a Professor of Exceptional Class in Applied Mathematics at INP-IRIT. He has published over a hundred articles in leading international journals, particularly in the fields of data assimilation and numerical optimization for machine learning. He has co-supervised 25 doctoral students and is an associate editor for two leading international journals in optimization (SIOPT and OMS). He heads two AI programs: a specialized Master's in AI and a dual-degree engineering program with INSA-ENSEEIHT. Since 2019, he has held a chair in machine learning under physical constraints, within which he has developed recurrent neural architectures suited to the temporal prediction of trajectories of partially observed chaotic dynamical systems. His involvement in ANITI has steadily increased: as a chair holder, academic leader, deputy scientific director, and now scientific director.

Multilevel Physics Informed Methods

The approximation of solutions of partial differential equations (PDEs) using artificial neural networks (ANNs) dates back to the 1990s, but only in recent years has this topic emerged as an active field of research. Since their introduction, Physics-Informed Neural Networks (PINNs) have garnered increasing interest in this field, and their effectiveness in practice has been supported by theoretical results. However, training these networks remains challenging, particularly due to slow convergence when approximating solutions characterized by high frequencies. To improve this situation, we introduce two distinct approaches: first, the 'deep-ddm' method, which combines domain decomposition methods (DDM) with an innovative coarse-space correction; second, a multilevel algorithm designed for nonlinear optimization in PINNs, employing specialized neural architectures for enhanced performance. Both methods demonstrate significant improvements in solving PDEs: the deep-ddm method offers accelerated convergence and efficient information exchange between subdomains, while the multilevel algorithm yields superior solutions and computational savings. This synergy of classical and contemporary methods opens new avenues in applying PINNs to complex PDEs.
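To make the residual-loss idea behind PINNs concrete, here is a minimal sketch (not the speaker's code) that trains a small network to satisfy a 1-D Poisson problem with known solution sin(πx); the deep-ddm and multilevel methods discussed in the talk target the slow convergence that such a vanilla setup exhibits.

```python
# Minimal PINN sketch in PyTorch: solve u''(x) = -pi^2 sin(pi x) on (0, 1)
# with u(0) = u(1) = 0 (exact solution: sin(pi x)). Illustrative only.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    # u'' is obtained by differentiating the network twice with autograd.
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + torch.pi**2 * torch.sin(torch.pi * x)  # u'' - f

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x_bc = torch.tensor([[0.0], [1.0]])  # boundary points

for step in range(5000):
    opt.zero_grad()
    x_in = torch.rand(128, 1)  # random collocation points in (0, 1)
    # Loss = mean squared PDE residual + boundary-condition penalty.
    loss = (pde_residual(x_in) ** 2).mean() + (net(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
```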

Francisco Fraile

Francisco Fraile is an associate professor and senior researcher in the fields of digital manufacturing and zero-defect manufacturing at the Polytechnic University of Valencia.

AI-Powered Human-Centred Robot Interactions for Smart Manufacturing

This presentation will provide an overview of the main innovations of AI-PRISM, a human-centred, AI-based solution ecosystem (with and without physical embodiment) targeting manufacturing scenarios. It will give a high-level overview of the main innovations, pilot use cases, and technology approach of this EU-funded innovation initiative, and will serve as a catalyst to discuss with the audience synergies and potential collaborations with AI-PRISM.
