The AI FORA project aims at responsible AI use in ethically sensitive and highly normative societal domains such as social assessment for public welfare provisions. The project is therefore oriented towards the European Commission's guidelines on Trustworthy AI and towards high-level multi-stakeholder initiatives that integrate societal and ethical issues into AI research (e.g., AI4EU, Humane AI). Using insights from social research and scenario modelling, AI FORA will join forces with society to build better, i.e., impartial, fair, accurate, and context-sensitive AI for future societies.
The project will conceptualise and develop a co-creation methodology and an experimental lab infrastructure for building and critically discussing AI social assessment technologies in a user-friendly fab lab/living lab environment. To guarantee human-centered, comprehensible, and responsible AI, the Co-Creation Lab will prepare AI topics for non-AI-expert end-users drawn from the target group of the empirical study described in WP1 (i.e., clients and staff of the Agency for Labor, Social Affairs, Family and Integration, as well as interested citizens). For this purpose, the functionalities of social assessment technologies like those described in WP1 will be used to explain their decisions to end-users via Explainable AI (XAI) and VR/AR techniques. XAI refers to methods that describe a system, its internal states, and its decisions in such a way that the description can be understood by human beings (Gilpin et al., 2018). The use of XAI methods can help to improve end-users' trust in the explained system (e.g., Kizilcec et al., 2016; Weitz et al., 2019). With the help of the Co-Creation Lab, the attitudes and mental models that end-users have about AI social assessment technologies will be examined. A common concern in current XAI research is that the development of explanation methods focuses primarily on solutions for AI experts while neglecting end-users' needs (Miller et al., 2017). 3-D learning environments can offer a variety of potential benefits in learning scenarios that may also hold true for XAI applications. These advantages comprise increased user motivation/engagement, improved spatial understanding, contextualization of learning, and opportunities for experimental and collaborative learning (Dalgarno et al., 2010).
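To make the idea of XAI concrete, the following sketch illustrates one of the simplest explanation styles, per-feature attribution for a linear scoring model, in which each input's contribution (weight times value) is reported to the end-user. All feature names and weights here are hypothetical and purely illustrative; they do not represent the project's actual assessment models or data.

```python
# Minimal sketch of a feature-attribution explanation for a linear
# assessment score. All names and weights below are hypothetical.

def explain_linear_decision(weights, features):
    """Return per-feature contributions (weight * value), largest impact first."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical linear model: score = sum of weighted inputs.
weights = {"years_employed": 0.8, "dependents": 0.5, "prior_claims": -1.2}
applicant = {"years_employed": 3.0, "dependents": 2.0, "prior_claims": 1.0}

ranking = explain_linear_decision(weights, applicant)
for name, contribution in ranking:
    print(f"{name}: {contribution:+.2f}")
# years_employed: +2.40
# prior_claims: -1.20
# dependents: +1.00
```

In a co-creation setting, such a ranked list is the kind of intermediate artifact that could then be rendered for non-expert users, e.g., as an interactive visualization in a VR/AR environment, rather than shown as raw numbers.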
As such, we will investigate the potential of immersive technologies, such as shared mixed reality environments, for new XAI and collaborative design methods to increase stakeholders’ understanding of AI systems. To achieve these goals, the chair of Human-Centred Multimedia brings expertise in examining mixed reality applications for big data analysis, as well as human-centered interactive machine learning applications (e.g., Heimerl et al., 2019).
With these techniques, stakeholders will experiment with and develop AI-assisted/AI-based social assessment applications. The lab environment will provide stakeholders with a range of datasets, applications, and algorithms, as well as training and education material for co-creative development. This experience-based lab context will implement, test, and evaluate participatory technology co-design at the level of local work practices, work routines, and workplace settings to bring societal values and needs to the very heart of AI technology production.