Ethics, social values and artificial intelligence
Date: July 11th, 2018
Place: European Science Open Forum (Toulouse)
When designing autonomous machines embedded with artificial intelligence, integrating ethical safeguards into the design can be difficult. One approach is to consider the human values at stake in the machine's behaviour and to identify when those values are promoted or infringed. In artificial intelligence, the notion of human values is left to the end-users' discretion with respect to the targeted application. The reason is that human values are abstract concepts from philosophy, the social sciences and psychology, whereas computer science needs explicit formal definitions. Although some works have tried to define what general values for intelligent systems should be, modelling those values remains a key issue.

This highly interactive session will put the audience in an ethical designer's shoes and let them experience the difficulties of designing with values. As an example, moderation of social networks is important because of the presence of racist, sexist or illegal content, and as a way to fight bullying. However, automated moderation may also constrain freedom of speech. The audience will be asked to design a moderation procedure in terms of social values, to identify situations in which those values may conflict, and to find ways to deal with such conflicts.
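As a purely illustrative sketch of what such a design exercise might look like in code, the fragment below models two hypothetical values (safety and freedom of expression), scores the impact of a moderation decision on each, and flags a conflict when the chosen decision promotes one value while infringing the other. The value names, scores and decision rule are assumptions made for illustration, not part of the session material.

```python
# Hypothetical sketch: express a moderation decision in terms of named social
# values and flag cases where those values conflict. Names, scores and the
# decision rule are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    reported_as_abusive: bool   # e.g. flagged by other users
    is_political_opinion: bool  # content whose removal would silence a viewpoint


def value_impacts(post: Post, remove: bool) -> dict:
    """Score how a decision (remove or keep) affects each value, in [-1, 1]."""
    impacts = {"safety": 0.0, "freedom_of_expression": 0.0}
    if remove:
        impacts["safety"] = 1.0 if post.reported_as_abusive else 0.0
        impacts["freedom_of_expression"] = -1.0 if post.is_political_opinion else -0.2
    else:
        impacts["safety"] = -1.0 if post.reported_as_abusive else 0.0
        impacts["freedom_of_expression"] = 0.5
    return impacts


def moderate(post: Post) -> tuple[str, bool]:
    """Pick the decision with the higher summed impact; flag value conflicts."""
    options = {"remove": value_impacts(post, remove=True),
               "keep": value_impacts(post, remove=False)}
    decision = max(options, key=lambda d: sum(options[d].values()))
    impacts = options[decision]
    # Conflict: the chosen decision promotes one value while infringing another.
    conflict = max(impacts.values()) > 0 and min(impacts.values()) < 0
    return decision, conflict


if __name__ == "__main__":
    post = Post("heated political comment", reported_as_abusive=True,
                is_political_opinion=True)
    decision, conflict = moderate(post)
    print(decision, "(values in conflict)" if conflict else "")
```

Even in this toy setting, the example makes the session's core difficulty concrete: the same decision can promote one value and infringe another, and the numbers encoding that trade-off are themselves a design choice the audience would have to justify.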