Hybrid SORS: AI-related standards activities with a focus on algorithmic bias
Objectives
Abstract: The last five years have seen a rapid expansion of AI-related standardisation activity, driven in part by the precautionary principle and in part by legislative attention to the potential impacts of AI. This is not because AI is a new technology, but rather because of concerns stemming from the likely pervasive roll-out of AI technologies and the questions of responsibility and accountability that it raises. This talk will provide an overview of activities at ISO and IEEE and the resources currently available, explain how to participate, and conclude with a case study drawing on the speaker's participation in the IEEE P7003 working group on algorithmic bias considerations.
Short bio: Prof. Julian Padget (University of Bath) focuses his research on intelligent agents and how to govern (their) autonomy through ethically aligned design, norm representation and reasoning. This work comprises theoretical contributions using computational logic and inductive logic programming, with practical applications in agent architectures, legal reasoning, security analysis, privacy policies, agent-based simulation, frameworks for socio-cognitive systems, and computer games (in conjunction with industrial partners). Related and earlier work addresses distributed systems, language design, semantic web services, and computer algebra.
Speakers
Speaker: Julian Padget, University of Bath
Host: Ulises Cortés, High Performance Artificial Intelligence Group Manager, CS