Street-Level Algorithms and AI in Bureaucratic
Decision-Making: A Caseworker Perspective
ASBJØRN AMMITZBØLL FLÜGGE,
University of Copenhagen, Denmark
THOMAS HILDEBRANDT,
University of Copenhagen, Denmark
NAJA HOLTEN MØLLER,
University of Copenhagen, Denmark
Studies of algorithmic decision-making in Computer-Supported Cooperative Work (CSCW) and related
fields of research increasingly recognize an analogy between AI and bureaucracies. We elaborate this link
with an empirical study of AI in the context of decision-making in a street-level bureaucracy: job
placement. The study examines caseworkers’ perspectives on the use of AI, and contributes to an
understanding of bureaucratic decision-making, with implications for integrating AI in caseworker
systems. We report findings from a participatory workshop on AI with 35 caseworkers from different
types of public services, followed up by interviews with five caseworkers specializing in job placement.
The paper contributes an understanding of caseworkers’ collaboration around documentation as a key
aspect of bureaucratic decision-making practices. The collaborative aspects of casework are important to show because they are subject to process descriptions that make case documentation prone to an individually focused AI, with consequences for how casework develops as a practice. Examining the collaborative aspects of caseworkers’ documentation practices in the context of AI and (potentially) automation, our data show that caseworkers perceive AI as valuable when it can support their work towards management (strengthening their cause if a case requires extra resources) and towards unemployed individuals (strengthening their cause in relation to the individual’s case when deciding on and assigning a specific job placement program). We end by discussing steps to support cooperative aspects
in AI decision-support systems that are increasingly implemented into the bureaucratic context of public
services.
CCS Concepts: • Human-centered computing → Collaborative and social computing → Empirical studies in collaborative and social computing
KEYWORDS:
Algorithmic Decision-Making, Casework, Job Placement, Bureaucracy, Public Services
ACM Reference format:
Asbjørn Ammitzbøll Flügge, Thomas Hildebrandt and Naja Holten Møller. 2021. Street-Level Algorithms and AI in Bureaucratic Decision-Making: A Caseworker Perspective. In Proceedings of the ACM on Human-Computer Interaction, Vol. 5, CSCW1, Article 40 (April 2021), 23 pages, https://doi.org/10.1145/3449114
1 INTRODUCTION
Artificial Intelligence (AI) in public services, which supports or replaces human autonomy,
discretion, and decision-making capabilities, continues to attract public and scholarly attention
[58]. According to Brayne and Christin, the implementation of AI is often justified as a way to
achieve more effective and objective decisions [13]. Across disciplines, practitioners criticize AI
for being opaque, occluded, biased, discriminatory, sexist, and even racist. Recent studies of
algorithmic decision-making now draw an analogy between bureaucracy and AI [1, 46].
Alkhatib and Bernstein wrestle with the problem of “inflexible algorithms”, arguing that the
algorithm itself should have characteristics of the street-level bureaucrat bridging the gap
between policy and practice to support the work done by professionals [1]. Acting like street-
level bureaucrats on social media like YouTube or Twitter, algorithms decide what content
remains visible or is removed. For example, when using Twitter for crowdfunding, algorithms
decide who and how many people will see a post, and thereby ultimately determine who
receives financial relief, something which is usually handled by public services. Pääkkönen et al.
propose bureaucracy as a conceptual lens for understanding how human actors and AI interact
to produce powerful consequences in areas of uncertainty [46]. Although content moderation
algorithms on YouTube or Twitter act like “street-level bureaucrats”, the platforms are not
bureaucracies (public authorities) in the sense proposed by Weber almost a century ago [55]. In
his sense, bureaucracies are public administration or services, and bureaucrats are the
intermediaries between the state and the people. In his seminal work, Lipsky coined the front-
line workers of bureaucracies—teachers, police officers, judges, and caseworkers—as street-level
bureaucrats [32-34]. Their task is to balance policy and rules and exercise discretion while
meeting the needs of the individual when making decisions that affect their lives. Serving in a
public capacity, street-level bureaucrats face obligations of accountability and transparency in
their decision-making that differ from algorithms on private platforms. As AI is increasingly
implemented into public services in many Nordic countries [38], we find a need to investigate
caseworkers’ perspectives on bureaucratic decision-making — and which parts of the decision-
making process might benefit from support by AI.
Taking seriously the call in Computer-Supported Cooperative Work (CSCW) to design with the
perspective of those whose work it is to accomplish a certain task [47], our motivation in
writing this paper is to achieve a better understanding of the collaborative aspects of
caseworkers’ bureaucratic decisions when designing AI for public services. For this paper we
understand bureaucratic decision-making as decisions made in a public organization, often
through a collaborative practice such as casework, to satisfy “the bureaucratic system” (for
example, documentation of communication between the caseworker and unemployed individual
to comply with legal requirements or transparency) or determine outcomes for cases affecting
people’s lives (for example, determining eligibility for public welfare). We investigate
bureaucratic decision-making as an object for the design of AI components for caseworker
(workflow) systems. At the time of writing this paper, the National Agency for Labour Market
and Recruitment in Denmark has designed and implemented an algorithmic component
predicting newly unemployed individuals’ risk of long-term unemployment. This is a concrete
example of how AI is implemented to support decision-making in job placement [38, 44], which
is also being adopted in countries such as Austria [2] and Portugal [60].
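To give a concrete, simplified sense of what such a risk-prediction component involves, the sketch below trains a binary classifier on invented data; the features (age, years of education, prior unemployment spells), the labels, and the choice of logistic regression are our own illustrative assumptions and not the agency’s actual design.

```python
# Minimal sketch (not the agency's actual model): a binary classifier estimating
# the risk of long-term unemployment from a few hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [age, years_of_education, prior_unemployment_spells]
X = np.array([
    [25, 16, 0],
    [52, 9, 3],
    [40, 14, 1],
    [33, 12, 2],
    [58, 10, 4],
    [29, 17, 0],
])
# 1 = became long-term unemployed, 0 = did not (labels invented for illustration)
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# A caseworker-facing component might surface only the predicted risk score.
new_citizen = np.array([[40, 14, 2]])
risk = model.predict_proba(new_citizen)[0, 1]
print(f"Predicted risk of long-term unemployment: {risk:.2f}")
```

Whether and how such a score becomes useful to caseworkers, as decision-support rather than automation, is exactly the question we put to the caseworkers in this study.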
Formally, caseworkers’ main task is to assist job seekers to return to work. In practice, their role
is divided between guiding people through a bureaucratic system, enforcing the law and policy,
and advocating for the citizen’s needs [4, 32-34]. Møller et al. point out that the IT-systems in
casework often have divided priorities [40]; they can support the caseworkers and unemployed
individuals, as well as supporting the regulatory and policymaking bodies [19, 20, 40, 44]. In this
sense, the role of caseworkers, and their technical infrastructure, is in practice contradictory [5,
40]. In an example from a public service family department, the collaboration between
caseworkers became visible when caseworkers “got stuck” with a challenging case, and
therefore needed their colleagues’ input — seeking their point of view, instead of relying on
their own judgment in the case [44].
The paper reports a qualitative study of caseworkers’ understanding of the perceived potential of AI for supporting, or even automating, tasks. We bring together research on algorithmic decision-making and casework from CSCW [e.g., 5], including theories of bureaucracy from the
field of public administration [55], and studies on the social implications of algorithmic systems
[38]. Two questions guide our research:
1) What are the key aspects of bureaucratic decision-making identified by caseworkers as relevant for AI?
2) When do caseworkers perceive support by AI as valuable for their work around case documentation and decision-making?
We investigated this as part of a larger research project on public administration and algorithmic transparency.¹ Our focus here is particularly on the portion of the study set up to
amass the data about job placement that are currently available for the design of AI
components; however, this needs to be compared to the caseworkers’ understanding of AI’s
usefulness, which is not a given. For this purpose, we set up a participatory workshop with
around 35 caseworkers in the fall of 2019 in collaboration with the Danish Association for Social
Workers (in Danish, Dansk Socialrådgiverforening), which represents many caseworkers in
Denmark. Additional interviews (n=5) with caseworkers from two different job centers,
telephone interviews (n=4), and observations (n=9h) added further context and qualified the
findings from the participatory workshop. Similar to Eubanks [23], we observed that
caseworkers struggled to understand the algorithmic prediction of long-term unemployment,
and the value of the prediction was unclear both to them and the unemployed individuals.
Decision-making is registered as an individual task of the caseworker, and thus the common-sense understanding of casework in job placement may falsely be reduced to individual work. We find it critical to mitigate this common-sense understanding by drawing out the collaborative
aspects of casework. Systems design fails to fully account for the continuous negotiation that
takes place within a community of professionals, particularly if we do not articulate the need
for establishing common ground [43] and how it can be maintained when emerging
technologies shift work conditions [39].
We understand Artificial Intelligence (AI) in this paper as a computational or algorithmic
system capable of performing tasks that require intelligence. In the context of job placement, for
example, this means algorithmic decision-support on an interpretation of the law or the choice
of support offered to the citizen. Although at the start of the workshop we presented the
caseworkers with different types of AI (e.g. rule-based expert systems and different approaches
to machine learning), the purpose of the workshop was to engage caseworkers in a discussion
of when AI could support or replace their decision-making capability, and when it should do
neither. It was not our aim to determine whether caseworkers saw a specific type of AI as
¹ Public Administration and Computational Transparency in Algorithms (PACTA): https://jura.ku.dk/icourts/research/pacta/
suitable for a specific decision. Consequently, neither we nor the participants distinguished
between different types of AI and other algorithmic systems during the workshop.
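As a rough illustration of the distinction we presented, but deliberately did not enforce, at the workshop, the sketch below contrasts a rule-based check with advice derived from a learned risk score. The rule, threshold, and function names are hypothetical and not taken from Danish legislation or any deployed caseworker system.

```python
# Illustrative sketch only: two broad types of AI discussed at the workshop.

def rule_based_documentation_check(is_activity_ready: bool, has_documentation: bool) -> bool:
    """Rule-based, expert-system style: request documentation when a fixed condition holds.
    The condition is a hypothetical example, not an actual legal rule."""
    return is_activity_ready and not has_documentation

def learned_advice(risk_score: float, threshold: float = 0.7) -> str:
    """Machine-learning style: turn a model's risk score into advice for the caseworker."""
    if risk_score >= threshold:
        return "Consider discussing a 'soft start' internship"
    return "Continue standard job-search activities"

if __name__ == "__main__":
    print(rule_based_documentation_check(True, False))  # True: request documentation
    print(learned_advice(0.82))                         # advice based on a high risk score
```

Both kinds of output were treated simply as “AI” in the workshop discussions.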
The rest of the paper is structured as follows: first, we present related literature concerned with
bureaucracy, casework, and algorithms in public services. Next, we describe our research setting
and method, which is followed by our findings: the potential of algorithms for four specific
decisions. We nuance and validate this potential for AI in job placement casework, bringing
forward casework’s collaborative aspect. Finally, we discuss how our findings pose challenges
for the development and design of AI systems in public services.
2 RELATED WORK: STREET-LEVEL BUREAUCRACY AND STREET-LEVEL ALGORITHMS
CSCW has a long tradition of investigating workflow systems [8, 22, 25, 26]. The way
bureaucracies track and document work has given workflow systems, such as caseworker
systems, a prominent role to play. The design interest in this domain has often focused on how
scholars can provide meaningful representations of work [25, 26]. More recently, the focus has
shifted from designing representations and outlining models of work to an inquiry of workflows
through different forms of data mining [37]. Data has become a resource for designing decision-
support systems with AI. Mining the data about work processes, it becomes possible to
understand how professionals can work in new ways through AI that allows for new decision-
support tools. Meanwhile, bureaucracy is a complex, multifaceted phenomenon [57], and
bureaucracies are often characterized by routine tasks and a high level of formalization [50].
According to Weber, bureaucracy “refers to a particular type of administrative structure,
developed in association with the rational-legal model of authority” [50 p. 48]. On the one hand,
this provides the basis for more predictable and stable administrative decisions or outcomes. On
the other, the structure also permits public servants, such as caseworkers, to exercise “relatively
greater independence and discretion” [50 p.50, following Smith and Ross, 1978]. In what follows,
we describe street-level bureaucracy as a conceptualization of casework in job placement and
present important work on AI in public services.
2.1 The Street-Level Bureaucrats of Job Placement: Caseworkers
The work of public servants or frontline workers, such as caseworkers in job placement, is often
studied through the lens of Lipsky’s [32-34] seminal work on street-level bureaucracy.
Caseworkers are the real-world examples of street-level bureaucrats [5, 6, 19, 24, 40, 44]: public
servants who act as the intermediary between the people and the public authority. Working in
public organizations, street-level bureaucrats also must adhere to administrative law, which
demands cost-effective, transparent, and accountable decision-making [59]. According to
Lipsky, street-level bureaucrats share three distinct characteristics: interaction with citizens,
opportunity to exercise discretion within a bureaucratic structure, and decision-making power
with a potentially high impact on people’s lives [32]. Following Scott and Davis, this double-
sided relationship - independent discretion within general administrative policies and local
procedures - enables bureaucratic systems to handle complex tasks [50], for example, in the area
of job placement [40].
Caseworkers often face the challenge of sparse, overwhelming, or unreliable information.
Boulus-Rødje points out: caseworkers in job placement in Denmark often have to deal with a
“vast amount of information distributed across more than 20 IT systems and organizations,
challenging their ability to conduct an adequate evaluation” [6 p. 57]. In a recent study of job
placement in a German context, Dolata et al. find that caseworkers often experience conflict
between regulation, technological support, and citizens’ expectations. For example, what
caseworkers are required to do does not necessarily align with their own or the unemployed citizens' wants, or with the possibilities within their caseworker system [19]. Prior studies of job
placement find that supporting technologies often either prioritize the caseworkers and citizens
or the policymakers and regulatory bodies, leading to unresolvable conflicts and the creation of
parallel systems [17, 18, 53]. Møller et al. identify two main classes of systems and activities
that are often blended into a single set of practices and support systems. The first class
encompasses programs in which civic services and interfaces are transferred into digital forms
and often take on the character of policy implementation and enforcement. The second class of
systems and activities aligns with practices in which citizens’ records are used in the
instrumental role of tending to the individual and informing the activities of the care
professionals who orbit that individual’s progress toward stability [40].
Casework may come across as an individual activity, but Randall et al. remind us that very little
work is done in isolation [47]. As an example of the collaborative aspect of casework,
documentation practices involve a continuously negotiated common ground [43] across the
community of professionals, for example, determining the status of different types of documentation and whether they have been received or are within a deadline. When preparing for a
meeting, caseworkers assess the citizen’s case. This case often consists of various forms of
documentation: memos from earlier meetings from other caseworkers, documentation from a
union, descriptions of medical conditions from medical specialists, and so forth. All these
different types of documentation must be assembled across the various IT systems used in job
placement [5]. To make the right assessment and to follow up on earlier meetings, caseworkers
depend on the documentation practices of their colleagues and others [19]. This challenges the
commonsense notion of casework as involving an individual caseworker who sits across from
the unemployed individual, entering documentation into the system. In reality, the process is
much more collaborative, and the documentation that appears in the system involves
distributed work. Part of this work is negotiated in the day-to-day application of the legal
requirements, but also with other collaborators, for example, medical practitioners or other
departments in the municipality. Further, as the different classes of systems in public services are increasingly merged as part of bringing the individual into the decision-making processes of public services, it becomes all the more important to show the collaborative aspects as we move forward to use more AI.
Across contexts, caseworkers’ tasks are changing due to digitalization and technologies such as
AI [7, 44, 58]. Particularly, activities relying on discretion are under pressure, but as we learn
from the studies above, discretion is only one part of the uncertainty about processes and
decisions in casework. Therefore, it is important to be aware that complying with the process of
assembling and documenting an individual’s case involves many stakeholders, not just a single
caseworker. Increased use of digital technologies changes caseworkers’ tasks in various ways.
This includes less face-to-face time and more screen time, extensive data collecting,
documentation, and data work (the work of cleaning, tidying, and adding data into caseworker
systems) [9, 10, 12, 28, 35]. Street-level bureaucrats are being replaced or finding their tasks
changed due to street-level algorithms [1].
2.2 Street-Level Algorithms in Public Services
Viewing street-level algorithms as alternative strategies for the design of casework is important
because these systems now make decisions traditionally made by street-level bureaucrats [1].
Alkhatib and Bernstein present and apply this novel theory to three cases: content moderation
on YouTube, quality control in crowd work on the online platform Amazon Mechanical Turk,
and algorithmic bias in the US justice system. Pääkkönen et al. build on this work, relying on
two of the cases (crowd work and the justice system) while adding another case documented by
Eubanks [23] on automated housing allocation for homeless people [46]. The algorithm in the
justice system supported judges’ decisions, while the housing algorithms automatically decided
who was most in need of a house and matched homeless people with housing opportunities
based on their eligibility criteria. These street-level algorithms either supported or automated
decisions usually made by street-level bureaucrats. These algorithms are called street-level
algorithms, as they perform the tasks traditionally held by street-level bureaucrats, although
these algorithms have also been applied outside traditional bureaucratic settings. The issue
according to critics is that algorithms on private platforms make decisions that impact people’s
lives, but they do not face the same level of scrutiny to avoid harm, be transparent, or
demonstrate accountability that public institutions would. Another concern is the right to an
individual process, for example in the job center, or the right to a fair trial in the courthouse.
Human cases may have characteristics or novelty that cannot be encoded [1]. Although the
street-level algorithm is seen as the computational twin of the street-level bureaucrat, the
algorithms in the cases presented by Alkhatib and Bernstein and Pääkkönen et al. are not limited to the bureaucracy, understood here as the public organization. However, public services are, in fact, a
domain in their own right, characterized by limited consensus about the means and ends of
decisions [51]. Therefore, we find it necessary to focus our perspective on public services to
gain a better understanding of how to design AI systems within this complicated and particular
context. A key discussion within AI and public services focuses on the altering of human discretion [44]. In the housing context, Pääkkönen et al. argue that the algorithms redistributed discretionary power to locations of uncertainty, that is, places where it is hard to predict or
control the outcomes of actions. Human discretion should support algorithmic decision-making
in these places [46]. In job placement, Petersen et al. similarly find that caseworker discretion is still relevant after algorithms enter the equation, as caseworkers are the ones who decide what information to record; in doing this, they are also making a decision on how AI should support them in their work [45]. Wihlborg et al. find that automatic decision-making systems almost become “co-bureaucrats”, and public officials become mediators, rather than
decision-makers [56]. In his work, which is focused on public sector organizations, Young calls
for a direct link between the level of discretion and the value of AI (low discretion =
automation, medium discretion = decision-support, and high level of discretion = e.g., creation
of new data) [58]. The algorithmic impact on human discretion, for example in the cases
described by Eubanks [23] has strengthened the scholarly and public concern regarding
algorithms [15]. Across public services, AI is often justified by public officials as a means to make
public services more effective and less contingent on subjective judgments [13], or to ensure
fairness in traditionally opaque decision-making and discretionary practices, thus leading to
better decisions and mitigating individual caseworkers’ arbitrary prejudice or bias. From the
prediction of child harm [48], predictive policing [13], determining eligibility to receive welfare
support [23, 27], to experimenting with automated decision-making in asylum and integration
systems [36], AI and street-level algorithms are in numerous ways being implemented in public
services.
We learn from prior studies across job placement casework and AI how subtle collaboration is
in this domain. Caseworkers' decision-making is highly dependent on a variety of medical
specialists, therapists, and the citizens themselves for documentation and to move the processes
forward. That caseworkers rely on varying specialists, for example in order to comply with the requirements for how a case has to be assembled and documented, makes casework highly
unpredictable and thus hard to model. As we seek to make sense of data about casework and
use AI for decision-support, understanding how collaboration takes place becomes even more
critical. Thus, a core challenge for CSCW scholars is to empirically describe how collaborative
work functions as a basis for the responsible development of AI systems.
3 BACKGROUND AND METHOD
The initial focus of the study presented in this paper was to engage caseworkers in a discussion
about the value of AI in their daily practice in job placement. This later evolved into
characterizing collaborative aspects of decision-making in job placement casework.
3.1 Data Collection, Analysis, and Validation
The participatory workshop (2h) was organized in collaboration with the Danish Association
for Social Workers (in Danish, Dansk Socialrådgiverforening). Approximately 35-40 caseworkers
participated in the workshop, and of these 9-10 had concrete work experience in job centers. At
the workshop, the caseworkers were divided into groups, and their discussion was guided by a
prepared design artifact [following 3]: a scenario of a 40-year-old unemployed citizen going
from “job-ready” to “activity-ready” (not ready to take a job). The citizen is a persona amalgamated from scenarios that caseworkers described to us, mainly focusing on the more vulnerable job seekers. The scenario was an iteration of a commonly used tool in the public sector for process descriptions (“Servicerejsen”).² The iterations were made together with
our student researchers in the team. The final design artifact of the scenario served as a
common point for discussion with our participants at the workshop. The scenario enabled
caseworkers to vote on decisions in the scenario, inspired by the principles of Dot Voting [16].
Dot Voting is a commonly used method for decision-making and design processes [30]. In
groups, the caseworkers discussed different decisions and voted for algorithmic decision-
making, decision-support, or neither. A joint discussion about the decisions followed the voting,
where caseworkers could comment and reflect on the votes. The workshop was audio-recorded
(with permission) and transcribed, and field notes were taken. Other participatory strategies
could have been followed [e.g. 14].
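For readers unfamiliar with the method, the short sketch below shows, under our own assumptions, how the responses from such a Dot Voting exercise can be tallied per decision into the three categories used at the workshop; the raw votes shown are invented.

```python
# Illustrative sketch: tallying Dot Voting responses for one decision.
from collections import Counter

OPTIONS = ("make the decision", "advise the decision", "neither")

# Invented raw votes, one per participating caseworker.
votes = ["advise the decision", "make the decision", "advise the decision",
         "neither", "advise the decision", "make the decision"]

tally = Counter(votes)
for option in OPTIONS:
    print(f"{option}: {tally.get(option, 0)}")
```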
The workshop was followed by two rounds of interviews and observations. The first round of
interviews was conducted in January and February 2020 (n=5). All interviews were conducted as
² Description of “Servicerejsen” in Danish: https://videncenter.kl.dk/viden-og-vaerktoejer/digital-transformation/servicedesign-og-brugerinddragelse/servicedesignvaerktoejet/metode-3-servicerejsen/
individual interviews and lasted about one hour each. Four of them were both audio-recorded
and later transcribed, and the fifth was only audio-recorded and not transcribed. The examples
from the interviews are realistic caseworker experiences described to the authors by the
caseworkers. The second round of interviews, conducted in May 2020, validated preliminary findings from the study through telephone interviews with caseworkers (n=4) also working in job placement. They lasted between 15 and 30 minutes. Lastly, the first author conducted
observations (September 2020) of meetings between caseworkers and citizens in a job center
(n=9h). All citizens consented to the first author’s presence in the meeting. CSCW has a long
tradition of investigating technologies ethnographically [51]. Due to the opaque nature of
algorithms, Seaver argues that ‘scavenging’ different pieces of information, such as interviews
and observations, can be necessary when doing ethnographic studies of algorithms [49]. Thus,
we treat the workshop, interviews, and observations as ethnographic fieldwork [47]. All
interviews, observations, and the workshop were conducted in Danish; quotes in this paper
were translated by the authors. All the caseworkers were experienced working in job centers
with direct contact with unemployed individuals. They came from different municipalities, and
also different departments within the same municipality, thereby covering many different
categories of unemployed individuals (long-term unemployed, newly unemployed, with/without
medical issues, varying degrees of education, etc.).
We used open coding (NVivo 12 for Mac) for analyzing data with an iterative approach [42]. We
coded the workshop transcription (example of codes: ‘algorithm may decide,’ ‘collecting medical
documentation’ or ‘algorithmic concerns’), applied the codes, and used the coded sections as
guidelines to prepare questions and analyze the first round of interviews. For example, during
Dot Voting, 9/23 caseworkers voted that an algorithm may decide to collect medical documentation. Since
the topic of whether, when, and why to collect medical documentation had been raised, it was
then used in the interviews.
Including domain experts is a crucial step in the design process, but their presence might risk
becoming a box-checking exercise [38]. To avoid “false consensus,” [21, 38] we introduced the
caseworkers at the workshop to AI and provided examples of its use, allowing them to contest
the value of AI in job placement. Through the two rounds of interviews, we also aimed at
nuancing the findings from the workshop to further avoid “false consensus” and provide
additional complexity and context to the decision. For example, we learned in the interviews that whether or not a caseworker collects medical documentation when they suspect medical issues may be influenced by the relationship between the caseworker and the unemployed individual.
The iterative aspect became a necessary part of our data analysis, oscillating between the
findings from the workshop, the first round of interviews, and again when validating preliminary
findings through telephone interviews with other caseworkers from job placement. For
example, the caseworkers referred to the decision to collect medical documentation as “simple”.
The following rounds of interviews nuanced this, highlighting that the timing of a decision is
important for the future collaboration between the caseworker and the citizen, as some
unemployed individuals might see the wish to collect documentation as a breach of trust,
thereby harming their collaboration with the caseworker. This was supported by our
observations in job placement, where caseworkers prepared, held, and documented
consultations with unemployed citizens. Table 1 contains the data collection activities, including
duration, number of participants, and examples of questions asked.
Table 1. Data Collection

Workshop (35-40 participants; October 2019, 3 hours). Purpose: identifying caseworker perspectives on the possibilities of AI in job placement. Examples of questions: Which decisions can algorithms support, or not? Can algorithms support, e.g., the collection of medical documentation? How could AI support you in your work?

Individual interviews, first round (5 participants: 4 caseworkers and 1 manager; February 2020, 5 hours, 1 hour each). Purpose: gaining a deep understanding of decision-making practice in job placement. Examples of questions: What kind of decisions do you make? What types of information do you rely on when deciding, e.g., that an internship is the right way forward in collaboration with an unemployed individual? How could AI support you in your work?

Telephone interviews, second round (5 caseworkers; May-June 2020, 1.5 hours, 15-35 min each). Purpose: validating results, e.g., understanding the importance of timing in decision-making in job placement. Examples of questions: The timing seems to be an important factor when deciding on activities for unemployed individuals, can you elaborate on that? When do you delay a decision? When do you know which decision is the right one to make?

Observations of 6 meetings between caseworker and citizen, and interviews with caseworkers (4 caseworkers, 6 citizens; October 2020, 9 hours). Purpose: seeing casework in practice. Examples of questions (for the caseworkers): What data is the most significant when assessing a case? How do you prepare for a meeting with an unemployed citizen? How do you use the algorithm that predicts the risk of long-term unemployment in your work?
3.2 Dot Voting and the Scenario of an Unemployed Individual
The participatory workshop started with a presentation by one of the authors on algorithms and AI (rule-based expert systems and different variants of machine learning) and how these are mobilized toward solving different tasks (playing chess, recognizing handwritten letters or
PACM on Human-Computer Interaction, Vol. 5, No. CSCW1, Article 40, Publication date: April 2021.
BEU, Alm.del - 2022-23 (2. samling) - Bilag 34: Henvendelse af 20/1-23 fra Asbjørn Ammitzbøll Flügge, Ph.d.-studerende, Datalogisk Institut ved Københavns Universitet om erfaringer og forskning fra digitalisering i jobcentre (anmodning om foretræde)
2649944_0010.png
40:10
Asbjørn Ammitzbøll Flügge et al.
faces, finding the shortest route between two locations) or supporting decisions in job
placement. The caseworkers were then divided into groups to discuss whether AI (defined as a
computational or algorithmic system capable of performing tasks usually understood as
requiring intelligence) could support them in four decisions in a scenario with an unemployed
citizen (prepared as a design artifact). For each decision, the caseworkers, after a discussion,
individually “voted” for the type of AI support they wanted (Table 2 for votes).
The scenario described a 40-year-old citizen with two kids, an education in IT, and experiencing
some issues from arthritis as well as showing early signs of depression. These are common
health issues of the citizens we encountered in the study. As the caseworker opens her case, this
individual is considered “job-ready”. To comply with legal requirements, an unemployed
individual must meet with a caseworker a minimum of four times per year. During these
meetings, the caseworker and the individual work together to identify what the individual
needs to find a full-time or part-time job. During the scenario, the citizen changes from being
“ready to take a job” to “activity ready” (not ready to take a job). Different types of decisions are
illustrated in the grey boxes in Fig. 1., which is created from realistic examples of a workflow in
a municipal job center:
Fig. 1. The workflow of job placement of “Activity Ready” and “Job Ready” individuals.
4 FINDINGS
Our findings characterize caseworkers’ collaboration around documentation practices as part of negotiating the allocation of support and benefits in job placement. We show how
caseworkers perceive AI in bureaucratic decision-making as an opportunity to negotiate the
allocation of support and benefits to meet a particular individual’s needs. Understanding how AI
can become useful for casework shows the need for an orientation towards the development of
collaborative AI supporting the continuous need for negotiating common ground as part of the
day-to-day bureaucratic processes in job placement. These collaborative aspects of bureaucratic
decision-making potentially affect how to design valuable AI in public services. Discussing the
collaborative aspects of caseworkers’ documentation practices in the participatory workshop, we show how caseworkers perceived AI for decision-making as valuable for their practices when it can support 1) raising issues to management if an individual job placement case requires extra resources because it is deemed “complex”, or 2) becoming a starting point for the collective practice in relation to the individual’s case when deciding on and assigning a specific job
placement program. We begin by reporting the findings from the Dot Voting exercise and the discussion that followed on the potential of algorithmic decision-support/making for four specific types of bureaucratic decisions.
4.1 Voting for AI
The scenario that formed the basis of our discussions with caseworkers conceptualizes the four
different decisions, and caseworkers individually voted for each decision after discussing it in
their groups (see Table 2).
Table 2. “Votes” collected responses from the workshop. For each decision, counts show how many caseworkers voted that algorithms may make the decision, may advise the decision, or may neither make nor advise the decision.

#1 Decide whether to request medical documentation regarding the unemployed individual. Make: 9, advise: 14, neither: 0.
#2 Decide whether a “soft start” on an internship for the unemployed individual is the right way forward. Make: 0, advise: 20, neither: 1.
#3 Decide whether the internship is going well/if the internship is still going well. Make: 0, advise: 19, neither: 2.
#4 Decide whether the unemployed individual should join a less demanding internship based on a health evaluation. Make: 1, advise: 13, neither: 7.
Deciding whether or not to collect a medical record or other kinds of medical documentation was seen by many caseworkers attending the workshop as a trivial, almost standardized
decision, with limited need for discretion. As pointed out by one caseworker, it is often
mandatory for some groups of unemployed individuals to provide documentation if they are
unable to take a job. The caseworkers reasoned that if a health status is mandatory, an
algorithm could just as well decide to request it:
”I don’t know the criteria for when to collect a medical record, but if the criteria are very
simple, then perhaps the algorithm can make the decision… but it is also dependent on
discretion”.
(Caseworker, AI workshop, October 2019)
Collecting medical documentation is an expense in job placement: the job center needs to pay medical practitioners for conducting an evaluation and documentation of a medical issue.
Therefore, receiving advice from an AI, the caseworkers argued, could strengthen their case to management that collecting medical documentation is worth it, since management has to approve the costs. 20/21 voted that an AI could provide decision-support on whether a “soft start” on an internship would be the best way forward. Several caseworkers noted that it would
almost always be beneficial to have more advice when making decisions. Receiving advice to
match the job seeker with the right internship, or a soft start, based on their medical documentation would be valuable, and would also strengthen the caseworkers’ case in the conversation with the citizen when agreeing on a specific internship. A caseworker imagines:
“My assessment is not only based on three [cases], but it is grounded in the fact that the
majority of the people who have been in your situation would benefit from this”.
(Caseworker, AI workshop, October 2019)
According to 19/21 caseworkers, an AI component could offer advice on whether an internship
is going well. Internships are usually evaluated in conversation with the unemployed individual
and an employment consultant from the organization hosting the internship. However, if their
evaluation could be answered in a questionnaire analyzed by an AI component instead of a
telephone interview, the caseworkers would welcome the support to guide their decision.
The last decision from the scenario, whether an unemployed individual should transition to a
less demanding internship based on medical documentation, resulted in the widest spread of
votes. 1/21 was for automation, 13/21 for support, and 7/21 for neither support nor automation.
In general, as the decisions, data, and regulations become more complex, the type of AI support that caseworkers find suitable changes.
In this concrete decision, the caseworkers said that it would be valuable to have support to
understand the medical documentation or estimate the likelihood that the citizen would
complete the internship. Summarizing this part of the analysis, caseworkers’ perception of the
value of AI was only to some degree determined by the level of discretion; it was seen as more valuable if the AI component could support their decision mandate towards management (arguing for collecting medical documentation although it is costly) and towards citizens (arguing for a specific welfare program or internship).
4.2 Adding Context to “Simple” Decisions
Collecting documentation is a key task in many areas of public services. As we shall show in
the following, what might come across as a simple decision (collect medical documentation) is
complicated in some situations.
The workshop provided caseworkers’ impression of AI as valuable in certain ways, based on four types of decisions in job placement. When using methods combining Dot Voting principles with additional interviews, it is critical to avoid falling into the trap of “false consensus”. We now
introduce and analyze the interviews (n=5), in which caseworkers explained how they work and
which decisions they make in real and anonymized cases, as well as input from the second
round of interviews (n=4) and observations (n=9h) of meetings between caseworkers and
unemployed individuals at a physical job center.
A caseworker usually requests medical documentation either from a general practitioner or a
specialist, such as a psychiatrist, if the unemployed individual reports that a medical condition
prevents them from taking a job. In bureaucratic decision-making, caseworkers make decisions
to advance their understanding of the individual, such as deciding to collect an individual’s
medical records. Such decision-making in job placement takes place within a larger context of
public services and largely relies on collaboration with, for example, medical specialists,
therapists, and companies, who act as partners for individuals’ training and internships.
Caseworkers argued that if the criteria for making the decision are relatively simple, for instance, because medical documentation is required by law, then algorithms can be suitable. On the
other hand, the caseworkers also point to the need for discretion — a key element in public
services. In the fictive scenario, the decision to collect an individual’s medical records may in a
real-world bureaucratic context simply be a necessary step in the process for a caseworker to
understand the issue at hand, for example, how serious a medical condition is. As discretion
seems to remain important, we decided to further investigate this example of decision-making.
Providing more context, a caseworker explained that in practice an unemployed individual can
also be a patient in a psychiatric hospital who is about to be discharged. In this case, her first
step would be to collect a medical record. She had recently been in this situation, and her first
meeting with the individual took place in the psychiatric hospital instead of the job placement
office. However, the decision to collect the medical record depends on a variety of factors, and it
is not necessarily the right way to go, she reflected. While certain diagnoses, like arthritis, often
require documentation, which could be automatically requested, mental health issues are
often more complicated, as it can be less clear if or to what extent the challenges of the
unemployed individual stem from their condition. It may not even be clear to the individual
why they are unemployed. And in some cases, the process of diagnosing may still be ongoing.
The starting point for the caseworker also differs depending on the target group or category of
citizen (“Job Ready” vs. “Activity Ready/Not Ready to Take a Job”). The formal purpose of
collecting an individual’s medical records is so that the caseworker can assess whether anything
is preventing the citizen from taking a job. However, when meeting an individual for the first
time, medical issues are not always the first thing they would look at, as another caseworker
put it. Neither is the medical condition always the most important factor for why the citizen lost
their job, as explained by one of our interviewees:
“Often, when a citizen is ‘job ready’, you don’t look at whether the citizen is sick. You look at
why the citizen has lost their job… Is it because of downsizing in the company, or quarrels with
the boss? Perhaps it is because of a medical condition, and then it would be a good idea to
collect the medical record… but in this case, it is not [a decision] that can simply be automated”.
(Caseworker, AI workshop, October 2019)
Another caseworker reflected on the decision:
“The medical documentation nicely unfolds some of the dilemmas. It’s the automation of some
things, but not others”.
(Caseworker, AI workshop, October 2019)
The differences reflected in caseworkers’ decisions to collect an individual’s medical record
depend on the context provided in the citizen’s case. For individuals who are considered “job-
ready”, the medical record may not be relevant or the right place to start, as the example
illustrates. But the same may be true for the individual who is being discharged from the
hospital but is still being diagnosed. This complicates the suitability of AI decision-support in
the kinds of decisions that otherwise may seem “simple”. It also highlights caseworkers’
dependence on other professionals' documentation practices, for example, documentation from
a psychiatric context interpreted into an employment context. When validating what was
important when deciding to collect medical documentation in the second round of interviews
with five other caseworkers in job placement, another aspect came forward. A common-sense
understanding of casework is that the main task is to help unemployed individuals return to
work. Caseworkers’ ability to move the unemployed individual forward is influenced by their ability to create a mutually trusting relationship with the unemployed individual, they told us.
Unemployed individuals could see a caseworker’s request for medical documentation as a sign
of distrust, or an act of power.
In our workshop with caseworkers, the commonly used principles of Dot Voting helped
uncover some overall trends of whether caseworkers perceive AI as valuable. Regarding the
decision to collect medical documentation, none of the caseworkers entirely rejected the idea of
AI support. We found that only through adding context, with caseworkers thinking aloud and
providing actual examples in the interviews, did the complexity surrounding the decision come
into focus.
4.3 Balancing Legality and Uncertainty in Decision-Making
Caseworkers must determine the course of a range of different issues; when and whether to collect medical documentation on an unemployed individual is just one. Some decisions relate
more closely to their legislative framework or local procedure. For example, when the status of
an individual is changed by the caseworker from “job-ready” to “activity ready” it is
documented in their caseworker system. This decision affects the kinds of rules that apply to
the individual. Another example is a caseworker’s assignment of “sick-leave”, which exempts
the person from specific legal demands, such as the requirement to submit two job applications
each week. Assigning a sick-leave can also be a matter of balancing legal requirements with the
health and wellbeing of the citizen.
One of the caseworkers from the interviews described a case involving a woman who has
abused cannabis for several years. However, cannabis abuse or treatment for it is not in itself
enough justification to suspend the legal requirements of being “job-ready”, in contrast to
individuals with medical conditions such as depression and arthritis. Still, the caseworker
decided to suspend the legal demands and instead encourage the citizen to concentrate on
voluntary treatment. The caseworker in the situation suggested that a meeting between herself,
the unemployed woman, and the rehabilitation counselor would be the best way forward, to
ensure a suitable process and avoid, for example, double bookings. This again challenges the
assumption of casework as an individual and single-handed practice. If strictly following the
legislative framework, the caseworker could be indifferent about the process or plan at the
rehabilitation center, but to best support the unemployed individual in this concrete situation
the caseworker must collaborate. Similarly, our observations of meetings between caseworkers
and unemployed individuals (n=9h) confirm that the individual caseworker is not an island.
When assessing a case, the caseworkers we observed based their assessment on data in the system, and thereby also on colleagues’ earlier documentation practices. Documenting the content of meetings with citizens is a mandatory task for caseworkers, so future caseworkers and the citizen can see what has already been agreed on or talked about. For example,
caseworkers wrote notes to their colleagues if a citizen had particular challenges that other caseworkers might benefit from knowing about, such as mental conditions like depression, or if the unemployed individual easily became aggressive. Both in the scenario of the 40-year-
old citizen with depression and arthritis and in the case with the woman with cannabis
addiction, making the “right” decision indeed depends on the caseworker’s discretion, including
input from colleagues: what is in their opinion best for the citizens.
“Then I say to her ‘I’m exempting you from it, this means I’m registering you as being on sick-
leave until our next meeting, so you can concentrate on getting your treatment started’”.
(Caseworker, Interview at the job center, February 2020)
From the scenario of the 40-year-old citizen, it could be that a ‘soft start’ on an internship is not
“soft enough”. Perhaps an internship is not the best way forward. A caseworker explains:
“If the citizen is depressed, tired, and has arthritis, and just started on a new medical treatment
for arthritis, well then he would probably not be able to participate in an internship”.
(Caseworker, AI workshop, October 2019)
During the Dot Voting exercise in the AI workshop, all caseworkers except one voted that an
algorithm could support them when making their decisions, such as assigning sick-leave (see
Table 2). However, this “vote” contradicted the caseworkers’ reflections in the interviews. The
AI workshop abstracted the job placement context, whereas the interviews brought out more of
the “real-life” context. In the AI workshop, a caseworker connected the discussion on sick-leave
to the role of human discretion. He used the example of the former Office Assistant paperclip ‘Clippy’, used in early versions of Microsoft Office, to imagine that an AI could similarly
support him in analyzing the data available, and then come with advice he could bring forward
when collaborating with the citizen.
“[Clippy would say] ’You should just be aware that he hasn’t completed a single internship in
the last nine tries’. That [kind of advice] would be nice, but it should not be archived, I am still
the caseworker, it is still me who is the specialist, it is me who exercise discretion, [deciding]
what is the right thing to do together with the citizen”.
(Caseworker, AI workshop, October 2019)
Another caseworker brought up a similar example in the interviews. She thought of the data
about the completion of internships as her way of trying to understand a particular individual’s
unemployment. If the individual completed the internships, she took this as evidence of the
individual’s ability to show up for an ordinary job. Internships could thus be a concrete place to
start in terms of AI in job placement and how to include a caseworker perspective. Ultimately,
the examples illustrate that there is no clear-cut distinction between decisions strictly given
within a legal framework, and those made by caseworkers to mitigate the consequences for a
citizen of a concrete case. Simply reading the rules and regulations may give the impression that
job placement is mainly concerned with moving an individual closer to an ordinary job. An
important part of the caseworkers’ role from this perspective is to administer rules and make
sure that an individual is economically sanctioned if they do not meet the prescribed
requirements. However, the caseworkers also act as an advocate on behalf of the individual and
make sure that sanctions are applied with proportionality, or even decide to bend the rules
within the flexibility of the legal framework. These are decisions in which caseworkers weigh the potential consequences of an uncertain or questionable decision for the individual. For example, if the caseworker did not assign sick leave to the woman with cannabis addiction, the caseworker presumed that the woman either would not get to rehab or would fail to show up at the job center while dealing with her addiction. Both would be problematic, and if she failed to show up at the job center, the caseworker would need to sanction her.
Usually, an internship is assigned to an individual by a caseworker both to test and to develop the individual’s work capacity. As discussed at the workshop, a caseworker’s interpretation of how the internship is going can lead to a financial sanction, which illustrates the complexity of the concept of sanction in practice. A caseworker explains:
“In principle, we don’t make a written decision, but you decide as a caseworker that we will
continue the internship. However, it might be the case that in the follow-up meeting after four
weeks things are going really bad, and then we have to decide”.
(Caseworker, AI workshop, October 2019)
The internship is often set up as a collaboration with a partner company. Since there can be numerous reasons for an internship to go poorly, determining how to move forward, or perhaps sanctioning the individual for not fulfilling their agreement, is a complex endeavor. A caseworker in the second round of interviews reminded us that when deciding, whether on collecting medical documentation or on assigning a welfare program, timing is very important and not strictly defined by the legislative framework. Timing matters because new circumstances can arise for the individual, for example an unstable mental condition. An internship is marked by the uncertainty of how the unemployed individual develops over time. A caseworker illustrates:
“Should we stop or shall we continue? Again, advice could be really nice in my situation,
because often we are wondering: ‘Okay, the citizen actually says they are really tired, when
they get home from the internship, but what if we tried another 14 days, would they still be just
as tired when they got home?’”.
(Caseworker, AI workshop, October 2019)
In the Dot Voting on the decision of whether to stop or continue an internship, all caseworkers except two thought they could benefit from support by AI and algorithms. The quotes above and the votes in Table 2 on deciding whether an internship is going well (19/21 for support, 2/21 for neither automation nor support) suggest that AI could be suitable for supporting this decision. For example, a caseworker imagined an AI, like the old Microsoft Office assistant “Clippy”, that could provide some kind of overview of whether the right steps were taken in the right order, especially for newly employed caseworkers. However, we interpret these votes with caution. Our data from the interviews with caseworkers in job placement already showed how context and talking aloud are critical for interpreting the responses from the AI workshop. For example, caseworkers at the workshop imagined an AI component as “fixing” the more frustrating parts of their job. A caseworker in the AI workshop reflected:
“Based on the algorithm’s advice, I would suggest that the citizen can participate in, for
example, an internship. That would be damn nice [with this kind of advice], because sometimes
with this damn medical record… what the heck are they [medical practitioner] actually writing.
There I could see a benefit. And then you can use it as a supportive tool to talk with the
citizen…”.
(Caseworker, AI workshop, October 2019)
What do these responses tell us about the value of AI in job placement? Some caseworkers in the AI workshop were optimistic, but several expressed concern regarding issues such as the accuracy of an AI component. During our discussions, several doubted whether AI would be able to grasp the complexity of the cases they deal with every day. This doubt was often expressed as a need for human discretion. In a time-pressured environment, where caseworkers are responsible for up to 50 citizens or meet new unemployed individuals each day, some feared that AI suggestions would provide a too-easy solution, transforming critical caseworkers into lemmings. As one caseworker put it:
“It is about relying on yourself. Like using a GPS to find your way. It is a bit like walking in
Copenhagen without a GPS, then you’re completely confused if you have never walked there
before the days of the GPS… One way or another, I’m afraid that we will lose some of our
professionalism in this”.
(Caseworker, AI workshop, October 2019)
This discussion from the AI workshop illustrates how fear with regard to AI derives not only from the potential inaccuracy of the algorithm’s advice but also from the risk that it will de-skill the caseworkers. Another caseworker added that if a decision-support system provided the advice that they would usually get from colleagues, this could decrease the need for collegial relationships and collaboration.
Bringing these insights together, our analysis shows that caseworkers imagine AI in various forms as valuable, mainly for supporting them as they negotiate, for example, the allocation of benefits or support, either towards management or towards the unemployed individual. We learned how caseworkers at the AI workshop abstracted the job placement context, whereas the interviews brought in additional, complex “real-life” context, resulting in a deeper understanding of bureaucratic decision-making in job placement, with implications for the design of AI systems.
5 DISCUSSION
Scholars have called for more research on what it means to bring a CSCW perspective to the root of algorithmic research [52]. We do this while bringing forward the perspective of those whose work is affected by the deployment of AI, as advised by Randall et al. [47]. Bureaucratic decision-making is not a perfect description of the topic we have discussed. We did not address how to categorize different types of bureaucratic decision-making, or how the type of bureaucracy influences the decisions made within it, for example whether decision-making differs between a Weberian machine-like bureaucracy and Lipsky’s street-level bureaucracy, and how this materializes in practice. We also did not address the different measures available within public services. We presume that assessing whether a decision was right or wrong in the justice system, another part of public services where algorithms are being implemented, differs from doing so in job placement. Did the offender re-commit a crime? Did the offender fail to show
up in court while on parole? Whether a crime was committed, or whether an offender failed to appear in court, are relatively clear, tangible measures. In job placement, it seems more challenging to pin down whether an internship was a success, or whether assigning sick leave to an unemployed individual was the right decision.
Following CSCW scholars [19, 40, 44], we find that casework is highly collaborative, both internally in the organization, involving managers and other caseworkers, and externally with the unemployed individual. The collaborative aspect of their daily casework was important for caseworkers at the workshop when discussing the value of AI. If caseworkers have AI support, they can leverage it as an “expert opinion” or back-up when making the case, for example, for collecting medical documentation, which is costly and requires management approval. AI could also be valuable for caseworkers in the conversation with the unemployed individual, if it could provide an analysis of similar citizens matched to particular internships, thereby strengthening the caseworker’s case in relation to the unemployed individual. These examples illustrate that AI can be developed for decision-support. Although documenting is an individual activity, we need to acknowledge that decisions in casework are calibrated against other decisions, for example in former cases, in a cooperative practice. By supplementing the workshop with interviews with domain experts, our findings illustrate how collaboration in casework, especially around documentation, is a key aspect of bureaucratic decision-making.
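As an illustration of what an “analysis of similar citizens matched to particular internships” could amount to in its simplest form, the sketch below compares a citizen’s case profile to anonymized past cases and summarizes how similar citizens fared in different programs. This is our own, hypothetical sketch, not a system from the study; the features, the distance measure, and the data are all assumptions.

# Minimal sketch: finding citizens with similar case profiles and reporting
# which internship programs worked for them. Our illustration only; the
# features, weights, and data are hypothetical.

from math import sqrt
from typing import Dict, List, Tuple

# Each past case: a small feature vector plus the program that was assigned
# and whether the internship was completed.
PastCase = Tuple[List[float], str, bool]  # (features, program, completed)


def distance(a: List[float], b: List[float]) -> float:
    """Plain Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def similar_case_summary(current: List[float],
                         past_cases: List[PastCase],
                         k: int = 5) -> Dict[str, str]:
    """Summarize outcomes among the k most similar past cases, per program."""
    nearest = sorted(past_cases, key=lambda c: distance(current, c[0]))[:k]
    summary: Dict[str, List[bool]] = {}
    for _, program, completed in nearest:
        summary.setdefault(program, []).append(completed)
    return {program: f"{sum(outcomes)}/{len(outcomes)} completed"
            for program, outcomes in summary.items()}


# Hypothetical features: [age, months unemployed, self-reported health 1-5]
past = [
    ([42.0, 18.0, 2.0], "warehouse internship", False),
    ([40.0, 20.0, 2.0], "office internship", True),
    ([39.0, 16.0, 3.0], "office internship", True),
    ([55.0, 36.0, 1.0], "warehouse internship", False),
]
print(similar_case_summary([41.0, 19.0, 2.0], past, k=3))

A caseworker could use such a summary as a talking point in the conversation with the citizen, while the matching itself says nothing about what is right for this particular individual.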
Different legal frameworks in different areas of public services provide varying opportunities or demands for collaborative work, levels of discretion, or types of information, and this is important to bring into the design process of, for example, AI and algorithms. Allhutter et al. critique the development of an algorithm for profiling job seekers in Austria, amongst other things, for not considering how the algorithm is integrated into the daily work of caseworkers, including meetings with unemployed citizens [2]. Following Christin, who argues for enrolling algorithms in ethnographic research to shed light on, for example, their opacity [15], we find it critical to dive deep into the context of the domain in which we seek to deploy or design new technologies, and not to overlook important aspects of the particular situation. Scholars such as Young et al. [58] suggest that the complexity of a task is a good indication of whether and how AI can be valuable. The four decisions in the Dot Voting exercise were chosen because they are decisions caseworkers in job placement often have to make, and they mirror similar decisions elsewhere in public services. For example, the decision to collect medical documentation is both an example of a “simple” decision and of a more generic one: when to collect information about a citizen. It is important to note that even seemingly simple decisions like this one may have complex and significant consequences in the specific situation. This is the kind of decision that Young and others imagine as a starting point for the implementation of AI in public services [29]. A future step for CSCW scholars may be to carefully look for the decisions in which caseworkers, or other public servants, suggest that AI as decision-support or decision-making may be valuable.
In their study, Wihlborg et al. find that algorithmic decision-making systems almost become “co-bureaucrats”, and public servants become mediators rather than decision-makers [56]. Although not in public services, Lee finds that algorithms were perceived as less fair and trustworthy than human decision-makers when making decisions usually thought of as requiring unique human skills [31]. In our case, the participatory design workshop on AI provided us with the opportunity to engage in a discussion with a large group of caseworkers. The in-depth interviews added crucial context to even simple decisions such as “to
collect medical documentation”. Interestingly, the gap between the Dot Voting and the subsequent interviews indicates that caseworkers, too, risk oversimplifying the issues or decisions at hand when thinking about them abstractly. Combining a participatory design workshop with in-depth interviews or observational studies allowed us to approach AI and bureaucratic decision-making from different angles, which nuanced our understanding of the issues at hand. Thus, our study contributes an important methodological finding: there are limits to research methods that do not consider the specific context.
In our case, in particular, caseworkers seemed to prefer algorithmic decision-support over automation. A second step could be to carefully map different decisions to different types of AI. Applying AI to simple tasks [10, 29, 57], thereby leaving human discretion for places of uncertainty [45], seems like a good place to start implementing AI. However, it is important to consider the things that can make simple decisions complex. In our context, is collecting or not collecting medical documentation a decision meant to retrieve information more quickly, to assess the case on enlightened grounds, or to maintain a trustful relationship with the citizen? All of these can be at stake, and they are something the caseworker reflects upon before deciding. This is a challenge facing the design of AI systems for public services, and perhaps one solution could be to remove AI from the moment of decision-making.
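To indicate what such a mapping of decisions to types of AI could look like as a first, deliberately coarse artifact, we sketch it below. The decision names and the assignment of roles are hypothetical and would need to be negotiated with caseworkers in each context; the sketch is an illustration, not a recommendation.

# Minimal sketch of mapping decision types to forms of AI involvement,
# as a starting point for the "second step" discussed above. The decision
# names and categories are hypothetical and meant only as illustration.

from enum import Enum


class AIRole(Enum):
    NONE = "no AI involvement"
    DECISION_SUPPORT = "AI suggests, caseworker decides"
    AUTOMATION = "AI decides, caseworker monitors"


# A first, deliberately coarse mapping; in practice even "simple" decisions
# (e.g., collecting medical documentation) can become complex, so the
# mapping would need to be revisited with caseworkers for each context.
DECISION_MAP = {
    "collect medical documentation": AIRole.DECISION_SUPPORT,
    "assign sick leave": AIRole.DECISION_SUPPORT,
    "stop or continue internship": AIRole.DECISION_SUPPORT,
    "apply economic sanction": AIRole.NONE,  # uncertainty stays with humans
}

for decision, role in DECISION_MAP.items():
    print(f"{decision}: {role.value}")

Even a toy mapping like this makes visible where discretion is kept with the caseworker, which is precisely the kind of design decision that should be open to discussion.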
The analogy between bureaucracies and algorithms, as proposed by Pääkkönen et al. [46] and by Alkhatib and Bernstein [1], is a useful lens for analyzing AI in public service organizations. However, there is to some extent a theoretical disconnect, as highlighted earlier, when we perceive street-level algorithms as having the same capabilities as street-level bureaucrats but apply the concept to algorithms on private platforms like YouTube or Twitter. Although there are algorithms making decisions and impacting our lives in ways best conceptualized as street-level algorithms, the disconnect is that the organizations running these algorithms are not subject to democratic accountability or to legal demands of equal treatment or transparency. This is worrying. That aside, we argue that, to avoid this theoretical disconnect, the theory of street-level algorithms should focus on analyzing or explaining algorithms and AI in public services, that is, in actual bureaucracies [55]. This is necessary if we as scholars want to understand the work we affect when we design AI systems for a public service context. Following this, the broader area of HCI and CSCW could conceptualize a new theoretical contribution describing the street-level algorithms of private companies such as online platforms, banks, or insurance companies.
6 CONCLUSIONS
The study examines caseworkers’ perspectives on the use of AI in job placement and identifies key aspects of bureaucratic decision-making. This has implications for AI design: developers of AI should take the collaborative aspect of casework into account in order to support the caseworker’s decision-making. We report findings from a participatory workshop on AI with caseworkers from different types of public services, followed up by interviews with caseworkers specializing in job placement, bringing forward the collaborative aspects of bureaucratic decision-making. We validated initial findings through telephone interviews with caseworkers and observations of meetings between caseworkers and unemployed individuals at a job center. The paper contributes an understanding of caseworkers’ collaboration around documentation as a key aspect of bureaucratic decision-making practices, contesting the common-sense understanding of job placement as a practice carried out individually and single-handedly by a caseworker. Our
data show that caseworkers perceive AI for decision-making as valuable when it can support their work towards management (strengthening their cause if a case requires extra resources) and towards unemployed individuals (strengthening their cause in relation to the individual’s case when deciding on, and assigning, a specific job placement program).
ACKNOWLEDGMENTS
We would like to thank our colleagues Trine Rask Nielsen, Henrik Palmer Olsen, Anette Møller Petersen, and Finn Kelsing for providing feedback and discussing the early ideas presented in this paper. We would also like to thank former student researchers Mark Amtoft Poulsen, Brian Troelsen Nielsen, and Alexander Sørensen for assisting us at the workshop and for developing earlier versions of the design artifact, as well as Anne Sofie Thomsen from Local Government Denmark, who inspired this approach. A special thanks to Dansk Socialrådgiverforening and the municipalities. Thanks to our anonymous reviewers for their generous reviews and suggestions for improvement. Lastly, we would like to thank Ali Alkhatib for discussing the theory of street-level algorithms with us. This work has been supported by the Innovation Fund Denmark (EcoKnow: award number 7050-00034A) and the Independent Research Fund Denmark (PACTA: award number 8091-00025b).
REFERENCES
[1] Ali Alkhatib and Michael Bernstein. 2019. Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 530, 1–13. DOI: https://doi.org/10.1145/3290605.3300760
[2] Doris Allhutter, Florian Cech, Fabian Fischer, Gabriel Grill, and Astrid Mager. 2020. Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective. Front. Big Data 3:5. DOI: 10.3389/fdata.2020.00005
[3] Antonella De Angeli, Silvia Bordin, and María Menéndez Blanco. 2014. Infrastructuring participatory development in information technology. In Proceedings of the 13th Participatory Design Conference: Research Papers - Volume 1 (PDC ’14). Association for Computing Machinery, New York, NY, USA, 11–20. DOI: https://doi.org/10.1145/2661435.2661448
[4] Nikolaj Gandrup Borchorst and Susanne Bødker. 2011. “You probably shouldn’t give them too much information” — Supporting Citizen-Government Collaboration. ECSCW 2011: Proceedings of the 12th European Conference on Computer Supported Cooperative Work, 24-28 September 2011, Aarhus, Denmark.
[5] Nina Boulus-Rødje. 2018. In Search for the Perfect Pathway: Supporting Knowledge Work of Welfare Workers. Comput Supported Coop Work 27, 841–874. DOI: https://doi.org/10.1007/s10606-018-9318-0
[6] Nina Boulus-Rødje. 2019. Welfare-to-work Policies Meeting Complex Realities of Unemployed Citizens: Examining Assumptions in Welfare. Nordic Journal of Working Life Studies, Volume 9, Number 2, June 2019.
[7] Mark Bovens and Stavros Zouridis. 2002. From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control. Wiley (on behalf of the American Society for Public Administration).
[8] John Bowers, Graham Button and Wes Sharrock. 1995. Workflow from within and without: Technology and cooperative work on the print industry shopfloor. Proceedings of the Fourth European Conference on Computer-Supported Cooperative Work (ECSCW), September 10-14, Stockholm, Sweden: 51-66.
[9] Aurélien Buffat. 2013. Street-Level Bureaucracy and E-government. Public Management Review.
[10] Justin B. Bullock. 2019. Artificial Intelligence, Discretion, and Bureaucracy. American Review of Public Administration, Vol. 49(7), 751-761.
[11] Adam Burke. 2019. Occluded algorithms. Big Data & Society, July–December 2019.
[12] Peter André Busch and Helle Zinner Henriksen. 2018. Digital Discretion: A systematic literature review of ICT and street-level discretion. Information Polity 23 (2018), 3-28.
[13] Sarah Brayne and Angèle Christin. 2020. Technologies of Crime Prediction: The Reception of Algorithms in Policing and Criminal Courts. Social Problems, 2020, 0, 1–17. DOI: 10.1093/socpro/spaa004
[14] Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael Terry. 2019. “Hello AI”: Uncovering
the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proc. ACM Hum.-
Comput. Interact. 3, CSCW, Article 104 (November 2019), 24 pages. DOI: https://doi.org/10.1145/3359206
[15] Angèle Christin. 2020. The ethnographer and the algorithm: beyond the black box. Theory and Society
https://doi.org/10.1007/s11186-020-09411-3
[16] Jeff Dalton. 2019. Dot Voting. In: Great Big Agile. Apress, Berkeley, CA. DOI: https://doi.org/10.1007/978-1-4842-4206-3_27
[17] Christopher A. Le Dantec & Keith Edwards. 2008. The view from the trenches: Organization, power, and
technology at two nonprofit homeless outreach centers. In Proceedings of the ACM Conference on Computer-
Supported Cooperative Work and Social Computing (CSCW), 589–598.
[18] Christopher A. Le Dantec & Keith Edwards. 2010. Across boundaries of influence and accountability: The multiple scales of public sector information systems. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 113–122.
[19] Mateusz Dolata, Birgit Schenk, Jara Fuhrer, Alina Marti and Gerhard Schwabe. 2020. When the System Does Not
Fit: Coping Strategies of Employment Consultants. Computer Supported Cooperative Work (CSCW) © Springer
Nature B.V. 2020 DOI 10.1007/s10606-020-09377-x
[20] Lynn Dombrowski, Gillian R. Hayes, Melissa Mazmanian, and Amy Voida. 2014. E-government intermediaries and
the challenges of access and trust. ACM Trans. Comput.-Hum. Interact. 21, 2, Article 13 (February 2014), 22 pages.
DOI: https://doi.org/10.1145/2559985
[21] Paul Dourish. 2019. User Experience as Legitimacy Trap. ACM Interactions. 2019 ACM 1072-5520/19/11
[22] Paul Dourish. 2001. Process Descriptions as Organisational Accounting Devices: The Dual Use of Workflow
Technologies. In Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work
(GROUP '01). Association for Computing Machinery, New York, NY, USA, 52–60. DOI:
https://doi.org/10.1145/500286.500297
[23] Virginia Eubanks. 2018. Automating Inequality — How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
[24] Hans-Tore Hansen, Kjetil Lundberg and Liv Johanne Syltevik. 2018. Digitalization, Street-Level Bureaucracy and Welfare Users’ Experiences. Social Policy & Administration, Vol. 52, No. 1, January 2018, pp. 67–90. DOI: 10.1111/spol.12283
[25] Richard R. Harper, Michael G. Lamming & William M. Newman. 1992. Locating systems at work: Implications for
the development of active badge applications. Interacting with Computers, 4, 3: 343- 363.
[26] Richard R. Harper. 1992. Looking at ourselves: An examination of the social organization of two research
laboratories. Proceedings of the 1992 ACM conference on Computer-Supported Cooperative Work & Social
Computing, Toronto, Ontario, Canada: 330-337.
[27] Human Rights Watch. 2020. Automated Hardship - How the Tech-Driven Overhaul of the UK’s Social Security
System Worsens Poverty. September 2020. ISBN: 978-1-623138615
[28] Gabriella Jansson and Gissur Ó. Erlingsson. 2014. More E-Government, Less Street-Level Bureaucracy? On
Legitimacy and the Human Side of Public Administration, Journal of Information Technology & Politics, 11:3, 291-
308, DOI: 10.1080/19331681.2014.908155
[29] Lise Justesen and Ursula Plesner. 2018. Fra skøn til algoritme. Tidsskrift for Arbejdsliv. 20, 3 (nov. 2018), 9-23. DOI:
https://doi.org/10.7146/tfa.v20i3.110811.
[30] Eva-Sophie Katterfeldt, Anja Zeising, and Heidi Schelhowe. 2012. Designing digital media for teen-aged
apprentices: a participatory approach. In Proceedings of the 11th International Conference on Interaction Design
and Children(IDC ’12). Association for Computing Machinery, New York, NY, USA, 196–199. DOI:
https://doi.org/10.1145/2307096.2307124
[31] Min Kyung Lee. 2018. Understanding perception of algorithmic decisions: Fairness, trust and emotion in response to algorithmic management. Big Data & Society. Sage.
[32] Michael Lipsky. 2010. Street-Level Bureaucracy, 30th Ann. Ed.: Dilemmas of the Individual in Public Service. Russell Sage Foundation.
[33] Michael Lipsky. 1980. Toward a Theory of Street-level Bureaucracy. New York: Russell Sage.
[34] Michael Lipsky. 1969. Toward a theory of street-level bureaucracy.
[35] Ida Lindgren, Christian Østergaard Madsen, Sara Hofmann, Ulf Melin. 2019. Close encounters of the digital kind:
A research agenda for the digitalization of public services. Government Information Quarterly 36 (2019) 427–436
[36] Petra Molnar. 2019. Technology on the margins: AI and global migration management from a human rights
perspective. Cambridge International Law Journal, Vol. 8 No. 2, pp. 305–330
[37] Naja Holten Møller. 2018. The future of clerical work is precarious. Interactions 25, 4, 75-77.
[38] Naja Holten Møller, Irina Shklovski and Thomas Hildebrandt. 2020. Shifting Concepts of Value. Designing
Algorithmic Decision-Support Systems for Public Services. In Proceedings of the Nordic Conference on Human-
Computer Interaction (NordiCHI).
[39] Naja Holten Møller, Maren Gausdal Eriksen and Claus Bossen. 2020. A Worker-Driven Common Information
Space: Interventions into a Digital Future. Computer Supported Cooperative Work (CSCW) (2020) 29:497–531 ©
Springer Nature B.V. 2020 DOI 10.1007/s10606-020-09379-9
[40] Naja L. Holten Møller, Geraldine Fitzpatrick, and Christopher A. Le Dantec. 2019. Assembling the Case: Citizens’
Strategies for Exercising Authority and Personal Autonomy in Social Welfare. Proc. ACM Hum.-Comput. Interact.
3, GROUP, Article 244 (December 2019), 21 pages. DOI:https://doi.org/10.1145/3361125
[41] Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas Hildebrandt and Cornelius Wiesener. 2019. What’s in the
Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public
Administration. iCourts - The Danish National Research Foundation’s Centre of Excellence for International
Courts
[42] Judith S. Olson and Wendy A. Kellogg. 2014. Ways of Knowing in HCI. Springer.
[43] Gary M Olson and Judith S Olson. 2000. Distance matters. Human-Computer Interaction, vol. 15, pp. 139–178.
[44] Anette Chelina Møller Petersen, Lars Rune Christensen and Thomas Troels Hildebrandt. 2020. The Role of
Discretion in the Age of Automation. Comput Supported Coop Work. DOI: https://doi.org/10.1007/s10606-020-
09371-3
[45] Anette C. M. Petersen, Lars Rune Christensen, Richard Harper, and Thomas Hildebrandt. 2021. “We Would Never
Write That Down”: Classifications of Unemployed and Data Challenges for AI. In Proceedings of the ACM on
Human-Computer Interaction, Vol. 5, CSCW1, Article 102 (April 2021), 26 pages, https://doi.org/10.1145/3449176
[46] Juho Pääkkönen, Matti Nelimarkka, Jesse Haapoja, and Airi Lampinen. 2020. Bureaucracy as a Lens for Analyzing
and Designing Algorithmic Systems. In Proceedings of the 2020 CHI Conference on Human Factors in Computing
Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. DOI:
https://doi.org/10.1145/3313831.3376780
[47] Dave Randall, Richard Harper, and Mark Rouncefield. 2007. Fieldwork for Design: Theory and Practice (Computer
Supported Cooperative Work). Springer-Verlag, Berlin, Heidelberg.
[48] Devansh Saxena, Karla Badillo-Urquiola, Pamela Wisniewski and Shion Guha. 2020. A Human-Centered Review
of the Algorithms used within the U.S. Child Welfare System. In Proceedings of the 2020 CHI Conference on
Human Factors in Computing Systems. ACM.
[49] Nick Seaver. 2017. Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems. Big Data &
Society, (December 2017). doi: 10.1177/2053951717738104.
[50] W. Richard Scott and Gerald F. Davis. 2007. Organizations and Organizing – Rational, Natural, and Open Systems Perspective. Pearson International Edition.
[51] Lucy Suchman, Jeanette Blomberg, Julian E. Orr and Randall Trigg. 1999. Reconstructing Technologies in Social
Practice. The American Behavioral Scientist; Nov/Dec 1999; 42, 3; ABI/INFORM Global pg. 392. Sage Publications.
[52] Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the 2018 CHI
Conference on Human Factors in Computing Systems (CHI'18) doi:10.1145/3173574.3174014, ISBN: 978-1-4503-
5620-6. Available at SSRN: https://ssrn.com/abstract=3175424
[53] Amy Voida, Lynn Dombrowski, Gillian R. Hayes, and Melissa Mazmanian. 2014. Shared values/conflicting logics:
working around e-government systems. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’14). Association for Computing Machinery, New York, NY, USA, 3583–3592. DOI:
https://doi.org/10.1145/2556288.2556971
[54] Dakuo Wang, Elizabeth Churchill, Pattie Maes, Xiangmin Fan, Ben Shneiderman, Yuanchun Shi, and Qianying
Wang. 2020. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems That Can
Work Together with People. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing
Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–6. DOI:
https://doi.org/10.1145/3334480.3381069
[55] Max Weber. 2012. Makt og Byråkrati – Essay om politikk og klasse, samfunnsforskning og verdier. Gyldendal
Akademisk. 3rd edition (first published in 1971).
[56] Elin Wihlborg, Hannu Larsson and Karin Hedström. 2016. The computer says no! — A case study on automated
decision-making in public authorities. Proceedings from 49th Hawaii International Conference on Systems
Sciences (HICSS). Washington, DC: IEEE Computer Society.
[57] James Q. Wilson. Bureaucracy – What Government Agencies Do And Why They Do It. 1989. Basic Books Inc.
[58] Matthew M. Young, Justin B. Bullock and Jesse D. Lecy. 2019. Artificial Discretion as a Tool of Governance: A
Framework for Understanding the Impact of Artificial Intelligence on Public Administration. Perspectives on
Public Management and Governance, 2019, 301–313 doi:10.1093/ppmgov/gvz014
[59] Bernardo Zacka. 2017. When the State Meets the Street – Public Service and Moral Agency. The Belknap Press of Harvard University Press.
[60] Leid Zejnilović, Susana Lavado, Íñigo Martínez de Rituerto de Troya, Samantha Sim and Andrew Bell. 2020.
Algorithmic Long-Term Unemployment Risk Assessment in Use: Counselors’ Perceptions and Use Practices.
Global Perspectives 1 (1). https://doi.org/10.1525/gp.2020.12908.
Received June 2020; revised October 2020; accepted January 2021.