Country Case Study: Germany


The German case study will zoom in on a specific micro-scene of AI and data analytics in public administration. The research will study the practices of implementing such systems in the context of migration politics, focusing on how practices for integrating refugees into the German social system have changed.

Studying practices implies detailed observation of human activities and routines in interaction with the AI system, but also of whether and how the AI system impacts human actors, that is, studying the relationships within situations of AI-based local governance, taking into account all involved factors. The objective of the research is to study in microscopic detail how practices change through AI-based social assessment.

At the German Federal Office for Migration and Refugees (Bundesamt für Migration und Flüchtlinge, BAMF), AI methods are being applied in three projects: the ZPE project (central incoming mail), the EGVP project (electronic court and administrative mailbox), and the “profile analysis” project. Which methods are deployed will depend on the possible applications and the specifications of the IT architecture. Since these are new methods, external expertise is useful and will be brought in as the need arises. The BAMF sees the greatest potential in all processes in which large amounts of data are processed while each employee only ever handles a small section, so that connections and patterns are difficult to identify.

In 2019, the Bundestag’s Enquete Commission on Artificial Intelligence asked for a more detailed explanation of the AI projects of German authorities and ministries. The NGO Netzpolitik published the answers of the Ministry of the Interior on AI in the BAMF.

The BAMF is currently running a pilot project on “profile analysis”. According to the group leader for processes and IT, the profile analysis was developed “in order to be able to fulfil the BAMF’s legal reporting obligations to security authorities more easily and quickly”.

For example, the migration authority reports persons to the Office for the Protection of the Constitution if information from hearings suggests that applicants could be of interest to the intelligence service. This is the case, for example, if there could be links to terrorism, be it as a victim or suspected perpetrator. The BAMF is increasingly forwarding data to the Federal Office for the Protection of the Constitution (Verfassungsschutz): in 2015, there were just over 500 cases; two years later, the Verfassungsschutz received over 10,000 tips from the BAMF. It is not known how many actually relevant findings this produced.

At present, the software may only screen foreigners according to “security information”, but the purpose of such systems can easily be expanded. Technically, it would be easy to have computers pre-sort the applicants’ “prospects of staying”.

The BAMF has even more AI systems in use, and the Ministry of the Interior does not list all of the AI technologies the BAMF uses. Among them is software to recognise dialects and gain clues as to where refugees come from, even though this is scientifically controversial and the BAMF itself assumes a 15 per cent error rate.

From January to November 2019, the BAMF used the dialect recognition software with almost 4,000 people. More than a quarter of them (1,056) came from Syria, followed by Algerians and refugees from Morocco and Sudan. This is the result of the answer given by the Federal Ministry of the Interior (BMI) to a question posed by Ulla Jelpke, a left-wing member of the Bundestag.

In 64 per cent of the tests, there is no information on whether the results of the dialect analysis support the statements of the persons concerned, and in 4.7 per cent of the cases there were contradictions, according to the BMI. In the preceding period, the software had been used somewhat more frequently: from September 2017 to mid-November 2018, it was used 6,284 times, but the number of asylum applications was also higher during that period.

Whether the procedure is suitable for obtaining reliable information on the origin of refugees is in doubt. On the one hand, researchers note that language cannot be clearly delimited by national borders and can differ and change depending on a person’s life course and socialisation. On the other hand, according to the BAMF, the software currently has an error rate of about 15 per cent. In these cases, it is up to the decision-makers to recognise these errors so that unjustified doubts about the applicant’s information do not enter asylum decisions.
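The reported figures allow a rough back-of-the-envelope check of what a 15 per cent error rate means at this scale. The arithmetic below is purely illustrative and treats the “almost 4,000” uses as 4,000; all other numbers are taken from the text above.

```python
# Back-of-the-envelope arithmetic on the reported figures; purely illustrative.
uses_2019 = 4000              # "almost 4,000" uses, January-November 2019 (rounded)
error_rate = 0.15             # error rate the BAMF itself assumes
contradiction_share = 0.047   # share of tests with recorded contradictions

expected_errors = uses_2019 * error_rate                   # statistically expected wrong analyses
recorded_contradictions = uses_2019 * contradiction_share  # contradictions actually recorded
print(round(expected_errors), round(recorded_contradictions))  # prints: 600 188
```

If roughly 600 analyses are statistically expected to be wrong while only about 188 contradictions are recorded, the burden placed on decision-makers to spot the remaining errors becomes tangible.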

The BAMF had in fact wanted to have the use of its language analysis software scientifically monitored as early as 2018. To date, nothing has happened. In response to a press enquiry, the Federal Office replied at the end of 2019: “Scientific monitoring is not currently taking place, but is planned for the future.” Since then, there has been no mention of a specific time frame, while the programme has been running for more than two years and has been used in the asylum procedures of more than ten thousand people.

Another AI system analyses text messages on the smartphones of those seeking protection – this also involves the language used by the applicant.

The BAMF sees the greatest risk in machines having sole decision-making authority; for this reason, only assistance systems are being developed at the BAMF, systems that support employees but will never have the final decision-making authority. On the other hand, the decision-makers and other employees are under a great deal of pressure, which tempts them to place too much trust in the supposedly neutral results of the algorithms. This has already happened: asylum seekers were accused of providing false information, and their asylum applications were initially rejected.

The objectives of the German case study are to investigate the AI systems used in decisions on migrants and asylum seekers in Schleswig-Holstein and Hamburg, two federal states in Northern Germany. For this purpose, the case study will apply the common AI-FORA research approach of building a team of technical and social science partners.

Case study

The research endeavor will integrate social sciences and technical sciences, including the following elements:

  1. Desk research to get an overview of the historical development of the case since the decision to use AI and the first implementation of AI systems. Targeted desk research will be facilitated by our cooperation partner “kommunit IT-Zweckverband”, a company that develops and supports public administration software, especially for cities and municipalities in Schleswig-Holstein. This research will be done by the social science partner and serve as an input for the technical partner.
  2. In order to investigate the implementation and potential biases along the algorithmic value chain, the technical partners will investigate the functionality and decision-making processes of the software by means such as black-box analysis (Diakopoulos 2014). Subsequently, methods of Explainable AI (XAI) will be used to evaluate the impact of specific input values (features) on the software’s decisions and predictions. These insights will be used as an input for social science research that investigates the impact of the software on administrative procedures and the guiding social values.
  3. Participatory multi-stakeholder workshops organized by the social science partner will utilize specific access to migrants and asylum seekers through a current refugee project of the German Intermediary. In 2015, the Intermediary received the Prize of the Refugee Council (Landesflüchtlingsrat Schleswig-Holstein) for outstanding commitment to refugee aid.
  4. Workshops will bring together representatives from the federal government and public administration in Schleswig-Holstein, the Schleswig-Holstein Refugee Council, representatives of the BAMF, NGOs such as AlgorithmWatch and Netzpolitik, but also charity initiatives supporting refugees, as well as researchers and media dealing with the case, in order to collect data on the values, perspectives, opinions and attitudes of decision makers and agenda setters. These workshops will identify actors and possible modes of action in the cases. This provides the input for developing scenario simulations.
  5. Focus-group discussions will be undertaken with affected persons, i.e. (accepted and rejected) migrants and asylum seekers, and other BAMF clients. As the AI systems have by now been in place for several years, the focus groups will make it possible to reveal whether and how the practice of migration politics has changed and how this affects the clients of the process.
  6. Interviews with domain experts will help to understand the context dependencies of actors within these areas. Interviews will be undertaken with local decision makers, external experts in AI-based migration politics such as members of the Enquete Commission on AI, as well as practitioners such as case managers and administrative staff of the agencies. This will support the research in understanding the social, economic, political and administrative pressures that drive the debate around the case. This research will be undertaken by the social science partner.
  7. Since the media discourse strongly influences public opinion and sets agendas for public discourse, it will be analyzed which topics, arguments, and sentiments shape the media discourse in order to capture the lines of argument. A first overview of the media discourse suggests that the topic is discussed under the headings of bad project management, overspending, and complaints by the case managers working with the system, but not so much under topics specific to AI. This research will be undertaken by the social science partner.
  8. Multi-stakeholder workshops and interviews with case managers undertaken by the social science partner will support the technical science partner in identifying issues of relevance for the development of software in a co-creation lab that takes into account the values and interests of the participating stakeholders.
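To make the black-box and XAI analysis in step 2 more concrete, the following is a minimal sketch of one common technique, permutation feature importance: the audited system is treated as an opaque predict function, and each input feature is shuffled in turn to measure how strongly the outputs depend on it. Everything here is a hypothetical stand-in; the actual BAMF software, its features, and its data are not publicly available.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Hypothetical black box: flags a case whenever feature 0 exceeds a threshold.
def black_box(X):
    return (X[:, 0] > 0.5).astype(int)

rng = np.random.default_rng(1)
X = rng.random((500, 3))       # 500 synthetic cases, 3 stand-in features
y = black_box(X)               # labels the box reproduces exactly (baseline accuracy 1.0)
imp = permutation_importance(black_box, X, y)
# Expect feature 0 to dominate and features 1 and 2 to show zero importance.
```

Such a probe requires no access to the system's internals, which is why it suits an external audit; the follow-up XAI step would then examine whether the influential features are normatively acceptable ones.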

Own research of case study partners (previous projects, publications etc.) on the chosen domain:

These projects are relevant as they investigated responsible innovation, especially involving civil society organisations, and the implementation and development of AI technology.

  • 2013-2016 EU project ProGReSS: PROmoting Global Responsible research & Social and Scientific innovation (FP7, Science in Society), Petra Ahrweiler (Principal Investigator), 350.000 EUR.
  • 2013-2016 EU-Project GREAT: Governance for Responsible Innovation (FP7, Science in Society), Petra Ahrweiler (Principal Investigator), 350.000 EUR.
  • BMBF DIGISTA – Materiality and Meaning of Urban Communication Practices – Urban Places as Location of Communication: The Example of Augsburg; Duration: 9/2018 – 8/2021; Connection to AI FORA: Future communication practices based on VR/AR; Sponsor and budget (for Elisabeth André): German Federal Ministry of Education and Research (BMBF), André’s team: ca. 245.000 €, Overall funding: 924.000 €
  • BMBF EMPAT – Empathic Training Companions for Job Interviews; Duration: 3/2015 – 2/2018; Connection to AI FORA: Computer-based Assessment of People; Sponsor and budget: German Federal Ministry of Education and Research (BMBF), Elisabeth André’s team: ca. 240.000 €, overall funding: ca. 1.5 million € (excluding industry contributions)
  • DFG Research Group OC-Trust – Design of User Interfaces for trustworthy Organic Computing Systems; Duration: 10/2009 – 9/2015; Connection to AI FORA: Trustworthiness of AI Systems; Sponsor and budget (for E. André): German Science Foundation (DFG), 1 researcher, student researchers and travel over a period of six years

Some relevant publications for the case study:

  • Yue Zhang, Andrea Michi, Johannes Wagner, Elisabeth André, Björn W. Schuller, Felix Weninger: A Generic Human-Machine Annotation Framework Based on Dynamic Cooperative Learning. IEEE Trans. Cybern. 50(3): 1230-1239 (2020)
  • Kaska Porayska-Pomsta, Paola Rizzo, Ionut Damian, Tobias Baur, Elisabeth André, Nicolas Sabouret, Hazaël Jones, Keith Anderson, Evi Chryssafidou: Who’s Afraid of Job Interviews? Definitely a Question for User Modelling. UMAP 2014: 411-422
  • Ahrweiler, P., Gilbert, N., Schrempf, B., Grimpe, B. and Jirotka, M. (2019) “The role of civil society organisations in European responsible research and innovation,” Journal of Responsible Innovation, 6(1), pp. 25–49. doi: 10.1080/23299460.2018.1534508.
  • Ahrweiler, P. (2017): Simulationsexperimente realexperimenteller Politik – der Gewinn der Zukunftsdimension im Computerlabor. In: Boeschen, S., Gross, M. and W. Krohn (eds.): Experimentelle Gesellschaft. Baden-Baden, 199-237. (Simulation Experiments of Real-World Experimental Policy – Gaining the Dimension of the Future in the computational Laboratory).