Postcard: Ethical Considerations in Artificial Intelligence and Autonomous Systems
Dr Louise Dennis and Professor Mike Fisher, from the Department of Computer Science and the Centre for Autonomous Systems Technology (CAST), travelled to Austin, Texas to take part in the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems workshop.
CAST is a multi-disciplinary research centre whose members include not only Computer Scientists and Engineers but also researchers from Psychology, Law and Philosophy. One of CAST's research interests is how machines can be programmed to reason ethically.
A key aim of the Initiative is to produce a document outlining many aspects of the intersection between Ethics and Artificial Intelligence. Version 1 of this document, Ethically Aligned Design, was released in December 2016 for public comment. The annual workshop therefore looked both at the current document and at the feedback received, with a view to refining and improving it.
The Initiative is organised as a series of committees covering a wide range of issues such as Autonomous Weapons, the economic and humanitarian impact of Artificial Intelligence, the (mis)use of personal data in AI systems, and methodologies to guide ethical research and design. Professor Fisher and I both attended the sessions on Embedding Values into Autonomous Intelligent Systems.
Our research was directly relevant to these sessions, and we had many useful discussions about what is currently possible and what is not. We discussed different approaches to ethical machine reasoning, with a particular focus on considerations that should hold across all approaches: eliciting the values of the target community for any Artificial Intelligence application, and creating mechanisms to check that an implementation of ethical machine reasoning actually conforms to those community values.
There is still a lot of work to be done before version 2 of the document can be released but we left confident that the Embedding Values section will have a clearer general view of the issues in this area, whatever approach to implementation is taken.
As well as specific workshops focused on the next version of the Initiative's report, there were a number of interesting talks. One of the IEEE's roles is in the production of industry standards, and part of the Initiative's purpose is to identify areas relating to Ethics and Artificial Intelligence where standards can be produced.
Several potential standards have already been proposed (e.g., a Standard for Data Privacy and one for avoiding Algorithmic Bias), and there were reports from the groups working to develop these standards. There were also several talks from Initiative members from China, Japan and Korea, since the Initiative is seeking to strengthen its representation in these countries in order to produce a genuinely global view of ethical considerations in Artificial Intelligence and Autonomous Systems.
It was a packed and productive couple of days. I'm excited both about the new version of the report and about the possibility of developing concrete standards.
Artificial Intelligence and Autonomous Systems are going to play increasingly important roles in our lives, and it is vitally important that we try to develop them in ways that conform to our values. While the IEEE's Initiative involves a large number of committees and a great deal of time spent in discussion, this is necessary in order to build a genuine consensus on how to achieve that goal. There is a long way to go, but we are already making progress.