Intermediaries

Mitigating the main risk of AI FORA: The Safe Spaces concept

The main risk of the AI FORA project is a failure to successfully engage local stakeholders in technology co-design and to appropriately organize inter- and transdisciplinary interfaces. The risk is that individual stakeholder groups might not speak their minds equally and contribute their specific perspectives and expertise because they feel cautioned or cowed by their surroundings. This risk is likely to arise when participative formats and venues are not neutral but represent the interest of one involved actor, thus failing to enable horizontal and integrative communication. The AI FORA project uses the so-called “Safe Spaces” concept to mitigate this risk as part of a dedicated stakeholder reconciliation approach in technology co-design. Multi-stakeholder workshops with specifically developed interactive and participative formats are organized at the sites of local intermediaries, i.e. network organizations specialized in interreligious, intercultural, and inter-societal communication within and between societies.

Kloster Nütschau, Germany
Representative: Br. Prior Johannes Ebbe OSB

Kylemore Abbey, Ireland
Representatives: Mother Abbess Sr. Maire Hickey OSB and Sr. Mariangela Bator OSB

New Camaldoli Hermitage, California, USA
Representative: Fr. Prior Cyprian Consiglio OSB Cam

Abbey Montserrat, Spain
Representative: Abbot Fr. Manel Gasch Hurios OSB

Saccidananda Ashram Shantivanam, India
Representative: Fr. Prior Dorathick OSB Cam

AI FORA is an example of a project with significant ethical and societal implications for the future of our global societies. Therefore, the question of “where we will end up” has to be addressed, as technology developments such as AI-based social assessment systems need to be based on social values. Organizations, but also whole societies, are challenged by complex problems whose solutions might involve drastic behavioral changes for everybody (other examples are climate change, migration, food and water supply, and health). Such problems cannot be solved by individuals or by single subsystems of society, be it science, politics, or markets. Solutions require the expertise, participation, and co-design of all societal groups. This situation calls for participatory tools. The social sciences are currently developing new participation formats (among them focus groups, dedicated workshop formats, co-design approaches, and companion modelling) that are designed to support and realize participative innovation efforts. Stakeholders need to be empowered to obtain relevant knowledge and to effectively evaluate, in terms of societal needs and moral values, the outcomes of decisions made in technological development and, in particular, potential alternatives within these developmental processes. For this purpose, they need to be involved in the early stages of technology development. Likewise, stakeholders need to be an integral part of the governance of technology development, empowered to assess the questions of whether, when, and how to regulate it.

These new participative and interactive formats require safe spaces where actors can exchange freely without being pre-configured, privileged, or discriminated against by their environment. Everybody should be perceived as an expert in his or her area of experience and everyday life and should be enabled to bring this expertise to the creation of solution approaches. The main risk of the AI FORA project is a failure to successfully engage local stakeholders and to appropriately organize inter- and transdisciplinary interfaces. Developing ethically and societally responsible AI use for social assessment requires interaction between heterogeneous stakeholders from all kinds of backgrounds, who need a “voice” to communicate at eye level without being preconfigured and constrained by the environment in which these encounters take place. The risk is that individual stakeholder groups might not speak their minds equally and contribute their specific perspectives and expertise because they feel cautioned or cowed by their surroundings. This risk is likely to arise when participative formats and venues are not neutral but represent the interest of one involved actor, thus failing to enable horizontal and integrative communication. The AI FORA project uses the so-called “safe spaces” concept to mitigate this risk. Interdisciplinary case-study coordination and multi-stakeholder workshops with specifically developed interactive and participative formats will be organized by local intermediaries, i.e. network organizations specialized in intercultural and inter-societal communication within and between societies. This model relies on a partnership between the technical and the social sciences that is mediated by designed “safe space interfaces” within all layers of the project. These interfaces provide a neutral and disinterested space for inter- and transdisciplinary activities and relieve research partners of activities outside their areas of expertise.