Stockholm AI Summit - AI Ethics
Event Information
About this event
Are you interested in joining the conversation on ethics in AI? Then you should definitely join our next online meetup on AI ethics. We will be discussing AI regulation, ways to operationalize AI responsibly, and methodologies that make AI-based systems trustworthy. Don't miss it!
On the agenda:
Fredrik Heintz
AI Regulation and Impact - A European Perspective
Europe has taken a clear stand that we want AI, but we do not want just any AI. We want AI that we can trust. This talk will present the European approach to AI regulation and give an overview of some of the impacts and research challenges related to this.
Fredrik Heintz is a Professor of Computer Science at Linköping University, Sweden. His research focus is artificial intelligence, especially Trustworthy AI and the intersection between knowledge representation and machine learning. He is the Director of the Graduate School for the Wallenberg AI, Autonomous Systems and Software Program (WASP), coordinator of the TAILOR ICT-48 network developing the scientific foundations of Trustworthy AI, and the President of the Swedish AI Society.
Josefin Rosén
Operationalizing Responsible AI
Organizations are now quickly moving from experimentation and isolated projects towards efforts to operationalize and scale AI. It has never been more important to make sure that we know how to do this in a responsible manner: that we operationalize Responsible AI. Responsible AI can never be an afterthought. It needs to be enabled in the AI platform, embedded in the process, and considered continuously throughout the AI lifecycle. In this talk, Josefin Rosén will share some recommendations on how to put this into practice.
Josefin Rosén has more than 15 years of experience in AI and Advanced Analytics and holds a PhD in Chemometrics from the Faculty of Pharmacy at Uppsala University. In her current role she leads a Nordic team of highly skilled and experienced AI experts, providing, among other things, strategic guidance to organizations across industries on how to unlock insight and value from data and how to operationalize Responsible AI from data to decision.
Ericsson Trustworthy AI team: Alexandros Nikou, Kristijonas Čyras, Swarup Mohalik, Alessandro Previti
Trustworthy AI - Explainability and Safety
AI-based autonomous systems have the capability to learn and evolve so that they can adapt to changing environments. However, this very capability has the potential to result in unexpected emergent behavior. Therefore, one needs methodologies to address several aspects of AI-based systems that make them trustworthy. In this talk, the Ericsson Trustworthy AI team will present an overview of two such aspects, explainability and safety, through some use cases. Explainable AI (XAI) is used to achieve transparency of AI-based systems by explaining to the stakeholder why and how an AI algorithm arrived at a specific decision. Safety techniques provide quantifiable guarantees that the system satisfies its requirements despite its evolution.
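To make the explainability idea above a little more concrete, here is a minimal sketch (not part of the speakers' material) that uses scikit-learn's permutation importance to show which input features a model actually relied on for its decisions. The synthetic dataset and random-forest model are assumptions chosen purely for illustration.

# Minimal explainability sketch: post-hoc feature attribution via
# permutation importance. Dataset and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for an AI system's inputs.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time and measuring the drop in accuracy
# indicates how strongly the model's decisions depended on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean_drop:.3f}")

In the same spirit, one would then present the highest-ranked features to a stakeholder as part of an explanation of the model's behavior.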
The presenters are researchers in the AI area at Ericsson Research and have been studying trustworthy AI for several years. Among other areas, they have been focusing on machine reasoning, explainable AI, and safe reinforcement learning deployments in telecommunication applications. A few recent publications include: "Machine Reasoning Explainability", tutorial, AAMAS 2021; "Symbolic RL for Safe RAN Control", AAMAS 2021; "On Exploiting Hitting Sets for Model Reconciliation", AAAI 2021; "Argumentative XAI: A Survey", IJCAI 2021.