Academy of Management Annals
2020, Vol. 14, No. 1, 366–410.
https://doi.org/10.5465/annals.2018.0174
ALGORITHMS AT WORK: THE NEW CONTESTED TERRAIN
OF CONTROL
KATHERINE C. KELLOGG
Work and Organization Studies
MIT Sloan School of Management
MELISSA A. VALENTINE¹
Management Science and Engineering
Stanford School of Engineering
ANGÈLE CHRISTIN
Department of Communication
Stanford School of Humanities and Sciences
The widespread implementation of algorithmic technologies in organizations prompts
questions about how algorithms may reshape organizational control. We use Edwards'
(1979) perspective of contested terrain, wherein managers implement production tech-
nologies to maximize the value of labor and workers resist, to synthesize the in-
terdisciplinary research on algorithms at work. We find that algorithmic control in the
workplace operates through six main mechanisms, which we call the "6 Rs"—employers
can use algorithms to direct workers by restricting and recommending, evaluate workers by
recording and rating, and discipline workers by replacing and rewarding. We also discuss
several key insights regarding algorithmic control. First, labor process theory helps to
highlight potential problems with the largely positive view of algorithms at work. Second,
the technical capabilities of algorithmic systems facilitate a form of rational control that is
distinct from the technical and bureaucratic control used by employers for the past century.
Third, employers' use of algorithms is sparking the development of new algorithmic oc-
cupations. Finally, workers are individually and collectively resisting algorithmic control
through a set of emerging tactics we call algoactivism. These insights sketch the contested
terrain of algorithmic control and map critical areas for future research.
INTRODUCTION
Over the past decades, the use of algorithms has
transformed how firms and markets operate. We focus
in this article on algorithmic technologies, defined in
emerging social science usage as computer-programmed
procedures that transform input data into desired out-
puts in ways that tend to be more encompassing,
instantaneous, interactive, and opaque than previous
technological systems (e.g., Gillespie, 2014: 167). To
date, most research in management and economics has
emphasized the benefits of using algorithms to improve
allocation and coordination in complex markets, facili-
tate efficient decision-making within firms, and improve
organizational learning (e.g., Athey & Scott, 2002;
Hall, Horton, & Knoepfle, 2019; Liu, Brynjolfsson, &
Dowlatabadi, 2018a). These analyses primarily focus
on the impact of algorithms in terms of econo-
mic value derived from greater efficiency, revenue,
and innovation.
Here, we provide a different perspective. Drawing on
labor process theory (e.g., Braverman, 1974; Burawoy,
1979; Smith, 2015; Thompson & Smith, 2009), which
describes organizational control as contested terrain
(Edwards, 1979), we analyze algorithms as a major force
in allowing employers to reconfigure employer–worker
relations of production within and across organizations.
Author note: We gratefully thank Catherine Turco for her extremely helpful contributions from the beginning of this project, and J.P. Eggers and two anonymous reviewers for improving the article throughout the review process. The article has benefited greatly from comments by Matt Beane, Michael Bernstein, Beth Bechky, Samer Faraj, Arvind Karunakaran, Sarah Lebovitz, Vili Lehdonvirta, Hila Lifshitz-Assaf, Melissa Mazmanian, Wanda Orlikowski, Alex Rosenblat, Ryan Stice-Lusvardi, Emily Truelove, and Steve Vallas. ¹ Corresponding author.

In this view, managers implement new production
technologies and control mechanisms that maximize
the value created by workers' labor (e.g., Burawoy, 1979;
Smith, 2006). Workers, in turn, resist and defend their
autonomy in the face of tighter employer control, poten-
tially reshaping the relations of production (e.g.,
Thompson & Vincent, 2010).
We argue that organizational scholarship has not
kept pace with the ways that algorithmic technologies
have the potential to transform organizational control
in profound ways, with significant implications for
workers. Even though organizational scholars have
begun to explore the intersection between emerging
technologies and the changing nature of work and
control (e.g., Bailey, Leonardi, & Barley, 2012; Barley,
2015; Barley, Bechky, & Milliken, 2017; Barrett, Oborn,
Orlikowski, & Yates, 2012; Leonardi & Vaast, 2017),
most of the research about algorithms at work has been
published outside of management journals (for impor-
tant exceptions, see Curchod, Patriotta, Cohen, &
Neysen, 2019; Faraj, Pachidi, & Sayegh, 2018;
Orlikowski & Scott, 2014b).
Scholars across the disciplines of information
science, human–computer interaction, sociology,
communication, legal studies, and computer-supported
cooperative work have discussed the societal implica-
tions of algorithms in terms of surveillance and dis-
crimination (boyd & Crawford, 2012; Eubanks, 2018;
Noble, 2018; O'Neil, 2016; Pasquale, 2015; Scholz, 2012;
Zuboff, 2019) but have not focused on how algorithms
can reshape the control relationship between managers
and workers. In management, scholars have analyzed
the implications of big data for organizational strategy
and design (Loebbecke & Picot, 2015; Newell &
Marabelli, 2015; Puranam, Alexy, & Reitzig, 2014), and
for research methods (Agarwal & Dhar, 2014; George,
Haas, & Pentland, 2014), but have not analyzed the ef-
fects of these technological developments on manager–worker dynamics.
FIGURE 1
Review of Algorithmic Control as Contested Terrain
[Figure 1 depicts the control mechanisms (the "6 Rs") grouped by function—direction (recommending, restricting), evaluation (recording, rating), and discipline (replacing, rewarding)—alongside associated worker experiences: manipulation, disempowerment, surveillance, discrimination, precarity, and stress.]
Drawing on our review of the vast and inter-
disciplinary literature on algorithms, we offer a syn-
thesized framework of the contested terrain of
algorithmic control (Figure 1). To do so, we first de-
scribe the management and economics literature on
the use of algorithms to facilitate improved decision-
making, coordination, and organizational learning
in organizations. We next delineate the two key pre-
vious forms of rational controltechnical and bu-
reaucratic controland elaborate how the affordances
of algorithmic technologies have provided employers
with an opportunity to implement new control mec-
hanisms to activate workers efforts. Then, based on a
detailed review of algorithmic studies, we argue that
employers can use algorithms to control workers
through six main mechanisms, which we call the "6 Rs": employers can use algorithms to help direct workers by restricting and recommending, evaluate
workers by recording and rating, and discipline
workers by replacing and rewarding.
We conclude by providing a model of algorithmic
control as the new contested terrain of control and
offer a road map for future research along four main
lines. First, we discuss how labor process theory
raises importa nt questions not addressed in the
existing research on the positive economic value
of algorithms. Second, we analyze algorithmic con-
trol as distinct from previous regimes of control,
namely, technical and bureaucratic control. Third,
we highlight the emergence of novel occupations—algorithmic curators, brokers, and articulators—that
offer new avenues for control and resistance. Last, we
discuss the development of different forms of worker
resistance, which we label algoactivism, that range
from individual practical action to platform orga-
nizing, discursive framing, and legal mobilization.
ECONOMIC VALUE OF ALGORITHMS
FOR EMPLOYERS
Before reviewing the literature on rational control
and on how employers can use algorithms to reshape
the relations of production between managers and
workers, we begin by briefly reviewing the manage-
ment and economics research to date on algorithms
in organizations. Up to this point, this research has
primarily focused on the economic and operational
value of algorithms to organizations. In particular,
scholars in organizational strategy, economics,
information systems, and human-computer in-
teraction have emphasized how employers can
use algorithms to facilitate improved decision-making,
coordination, and organizational learning.
First, existing studies have documented how algo-
rithmic technologies can enable individuals to make
more accurate decisions than they did before. Some of
these improved decision-making processes stem from
the finely grained data that organizations are now col-
lecting on how customers engage with products and
marketing materials (Glynn, 2018; Hollebeek et al.,
2016); some stems from computational analyses, such
as systems that can improve doctors' interpretation and
decision-making about radiologic images (Hosny,
Parmar, Quackenbush, Schwartz, & Aerts, 2018), or
machine-learning algorithms that can predict customer
preferences (Boyle, 2018; Gomez-Uribe & Hunt, 2016).
In some cases, automated analyses remove humans almost entirely from the decision-making process, such
as systems that maintain optimized stock portfolios that
outperform human traders (Heaton, Polson, & Witte,
2017). Algorithmic systems can also change how peo-
ple produce and use evidence for decision-making. For
instance, companies can rely on sophisticated data in-
frastructures that allow them to run randomized control
trials or statistical tests (also called A/B tests) on many
of their decisions, meaning some decisions that were
previously intuition based are now subject to the sta-
tistical gold standard for establishing causality or
modeling expected impact (Bradley, 2019).
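The statistical core of such A/B testing is simple to sketch. The following illustration (with invented counts, not drawn from the cited studies) runs the two-proportion z-test that many experimentation platforms apply to a control and a treatment variant:

```python
# Minimal A/B test sketch: compare conversion rates of two variants with a
# two-proportion z-test. Counts below are invented for illustration.
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided test
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # ship variant B only if p is small
```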
Second, scholars have found that algorithmic
technologies can automate coordination pr ocesses
in ways that produce economic value for employers.
Employers have used algorithms to stitch together
or combine micro tasks (Bernstein et al., 2015;
Little, Chilton, Goldman, & Miller, 2010). For ex-
ample, studies have described how a crowd of
workers can each label a single image and then an
algorithm can combine their responses into a dataset
that provides considerable analytical value for de-
veloping computer vision (Russakovsky et al., 2015).
Such automated coordination processes have been
shown to provide economic efficiency (Puranam,
2018). For example, studies of the web-based en-
terprise have shown that an API (an interface that
a line of code can call to do things) can take a cus-
tomized customer query and automatically check
stock, combine the requested products, inform the
customer, and send customized products; each of
these interdependencies (e.g., between front-facing services and inventory management), which
previously had been coordinated by people, could
now be automatically coordinated by code, thus
lowering labor costs (Davis, 2015; Davis, 2016).
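To make concrete how code can absorb interdependencies that people once coordinated by hand, here is a hypothetical sketch; the inventory data, function names, and messages are invented stand-ins for the richer web-based systems the cited studies describe:

```python
# Hypothetical sketch of code-mediated coordination: checking stock,
# reserving items, and notifying the customer happen in one call chain,
# with no human hand-offs between front-facing service and inventory.
INVENTORY = {"widget": 12, "gadget": 0}

def check_stock(item: str, quantity: int) -> bool:
    """Stand-in for a call to an inventory service."""
    return INVENTORY.get(item, 0) >= quantity

def fulfill_order(customer: str, items: dict) -> str:
    """Coordinate the interdependent steps of a customized order in code."""
    missing = [i for i, q in items.items() if not check_stock(i, q)]
    if missing:
        return f"Notify {customer}: out of stock for {', '.join(missing)}"
    for item, qty in items.items():
        INVENTORY[item] -= qty  # reserve the stock
    return f"Notify {customer}: order shipped"

print(fulfill_order("alice", {"widget": 2}))  # shipped
print(fulfill_order("bob", {"gadget": 1}))    # out of stock
```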
Third, existing studies docum ent how employers
can use algorithmic technologies to automate orga-
nizational learning in ways that produce economic
value for them. These studies show how employers
have used algorithmic systems to identify and learn
from user patterns across individuals, and then re-
sponsively change system behavior in real time (Boyle,
2018; Liu, Mandel, Brunskill, & Popovic, 2014). For
instance, some employers have used smartphone op-
erating systems to analyze and compare user patterns
over time to recognize information that was relevant to
users across different apps, such as phone numbers or
addresses in emails or texts that users had copied to the
map or phone apps (Cipriani & Dolcourt, 2019; Yin,
Davis, & Muzyrya, 2014). Academic studies have noted
that as employers begin to use latent data collection
systems related to the int ernet of things, similar al-
gorithmic systems will be able to track what in-
formation people search or create in different rooms
or meetings, and automatically offer personalized in-
formation or ideas for different individuals, meetings,
teams, and projects (e.g., Landay, 2019). Scholars of
organizational learning suggest that these systems are
likely to lead to more efficient search and retrieval of
information, as well as better analyses of ideas or de-
cisions that impact financial or service performance for
the organizations. They argue that these benefits to or-
ganizations will unfold in automated and tightly cou-
pled feedback loops between user and system behavior
(e.g., Nikolaidis & Shah, 2012; Sachon & Boquet, 2017;
Shah, Wiken, Williams, & Breazeal, 2011).
These studies emphasize the benefits to employers
of algorithmic technologies in terms of economic
value, based on improved efficiency in decision-
making, coordination processes, and organizational
learning. What they miss is an understanding of
algorithmic systems as instrume nts of control that
are contested between employers and workers.
THE HISTORICAL CONTESTED TERRAIN OF
RATIONAL CONTROL
To set the stage for our review of algorithms and
the changing nature of rational control, we briefly lay
out the intellectual history of rational control in the
postindustrial era as a contested terrain (Edwards,
1979) between employers and workers. As noted
earlier, labor process theorists have highlighted
how managers are compelled to establish control
over workers to maximize the value created by workers' labor (e.g., Braverman, 1974; Burawoy,
1985; Thompson & Smith, 2009). In this view, con-
trol is a dialectical process in which employers
continuously innovate to maximize value captured
from workers and workers inevitably engage in re-
sistance to maintain their autonomy, dignity, and
identity (e.g., Edwards, 1979; Jaros, 2010; Thompson
& Van den Broek, 2010).
For more than a century, organizational scholars
have examined the activities of managers attempting
to control the labor process using both normative and
rational control (Barley & Kunda, 1992). Employers
use normative control when they try to obtain de-
sired behavior from workers by winning their hearts
and minds (e.g., Kunda, 1992); they use rational
control when they try to obtain desired behavior
from workers by appealing to workers' self-interest
(e.g., Taylor, 1911). In this article, we focus primarily
on algorithmic control as a new form of rational
control, considering normative control in our sug-
gestions for future research.
We suggest that Edwards' (1979) foundational typol-
ogy of control mechanisms is useful for reviewing and
organizing both the expansive past literature on rational
control and the emerging interdisciplinary literature on
algorithms in the workplace. Edwards asserts that em-
ployers obtain desired behavior from workers using
three related control mechanisms: direction, evaluation,
and discipline. Direction entails the specification of
what needs to be performed, in what order and time
period, and with what degree of accuracy. Evaluation
entails the review of workers to correct mistakes, assess
performance, and identify those who are not performing
adequately. Discipline entails the punishment and re-
ward of workers so as to elicit cooperation and enforce
compliance with the employer's direction of the labor process. Edwards' approach also emphasizes the in-
evitable resistance tactics that workers develop to de-
fend their autonomy in the face of tightening employer
control. Rather than control systems unfolding as ever-
more systematic applications of total power, workers
have the ability to resist and, in consequence, poten-
tially reshape the relations of production.
Within systems of rational control, technical control
has historica lly be en located in the physical and
technological aspects of production (Braverman, 1974;
Burawoy, 1979), whereas bureaucratic control has re-
lied on standardized rules and roles to guide worker
behavior (Blau, 1955; Weber, 1947). These different
systems of rational control should be viewed as ideal
types; in practice, models of control frequently overlap
and can be combined in hybrid forms (e.g., Barley
& Kunda, 1992; Cardinal, Kreutzer, & Miller, 2017;
Sitkin, Cardinal, & Bijlsma-Frankema, 2010).
Technical Control
Scholars have characterized technical control as
control that is exercised through organizational
technologies that substitute for the presence of direct
supervision. The development of assembly lines in
the first half of the 20th century allowed employers
to set a machine-driven pace for workers, changing
workers perception of space in the process by mak-
ing it harder for them to wander around and chat
with coworkers; over time, the worker became
nearly as much locked in place as the machinery
(Edwards, 1979: 114). With technical control, em-
ployers accomplish the direction of workers through
technologies that drive workers to do particular tasks
at a particular rate (e.g., Nussbaum & DuRivage,
1986). These modes of automated production estab-
lish specific work directions through task sequenc-
ing, specialization, and de-skilling (e.g., Braverman,
1974; Burawoy, 1979). Evaluation occurs through
the recording of frequency and length of work tasks,
and worker productivity, accuracy, response time,
and time spent away from the assembly line or
computer terminal (Aiello & Svec, 1993; Dworkin,
1990). Discipline is accomplished through the re-
cruitment of a reserve army of secondary workers
ready to take the jobs of any primary workers who do
not cooperate and comply with employer directives
(Edwards, 1979).
Scholars have demonstrated that technical control
can lead workers to experience alienation because
they can be deprived of the right to conceive of
themselves as the directors of their own actions
(Blauner, 1964). It can also create feelings of constant
surveillance that lead workers to police their own
behavior to comply with organizational expecta-
tions (e.g., Sewell, Barker, & Nyberg, 2012). Workers
have resisted technical control by sabotaging the
machines and related equipment (Haraszti, 1978;
Juravich, 1985; Ramsay, 1966), stealing supplies
or time (Anteby, 2008), developing alternative tech-
nical procedures (Bensman & Gerver, 1963), collec-
tively withholding effort (Gouldner, 1954; Roy,
1954), and creating secret social spaces in bath-
rooms and corridors (Pollert, 1981).
Bureaucratic Control
Although technical control is primarily embedded
in the technical or physical aspects of the production
process, bureaucratic control typically relies on an
impersonal and formal system of rules, procedures,
and roles to guide worker behavior (e.g., Edwards,
1979). Bureaucratic control, which many scholars
suggest emerged in the years following the Second
World War, is manifested in the organizational
structure of the firm, establishing the impersonal
force of company policy as the basis for legitimacy
(e.g., Blau, 1955; Selznick, 1943). Bureaucratic control
achieves direction, evaluation, and discipline differ-
ently than does technical control. Here, direction is
accomplished through job descriptions, rules (e.g.,
Gouldner, 1956; Weber, 1946), checklists (e.g., Grol &
Grimshaw, 2003; Pronovost & Vohr, 2010), and em-
ployee scripts (Moreo, 1980). Evaluation is accom-
plished through direct observation and subjective
judgment of supervisors (Vancil, 1982), and through the
use of metrics (Govindarajan, 1988). Discipline is ac-
complished primarily through incentives and penal-
ties; workers who exhibit desired behavior are
rewarded with promotions, higher pay, and jobs with
greater responsibility, more benefits, better work sta-
tions, or preferable tasks, whereas those who do not
exhibit desired behavior are fired according to rules or
policies (e.g., Ezzamel & Willmott, 1998; McLoughlin,
Badham, & Palmer, 2005).
Bureaucratic control can lead workers to feel as if they are in an "iron cage"—a technically ordered, rigid, and dehumanized workplace (Weber, 1968). They may
experience a loss of individuality, autonomy, and a
lack of individual freedom (e.g., Whyte, 1956). In re-
sponse, workers may use some of the same resistance
tactics they use in response to technical control, in-
cluding work stoppages or strikes (McLoughlin et al.,
2005). They may also resist by using humor, cynicism,
direct criticism, work-arounds, or pro forma compli-
ance (e.g., Bolton, 2004; Gill, 2019; Hodgson, 2004;
Lipsky, 2010).
Algorithmic Technologies: Comprehensive,
Instantaneous, Interactive, and Opaque
Technological innovation plays an important role
in facilitating employers' invention of novel control
systems (e.g., Hall, 2010). Over the past decades,
the development of algorithmic technologies has
allowed employers to transform the exercise of
rational control. Algorithms are often defined as
computer-programmed procedures for transforming input data into a desired output (Barocas et al., 2014;
Gillespie, 2014: 167). As Dourish (2016) notes,
however, since algorithms arise in practice in re-
lation to other computational forms, such as data
structures, they need to be analyzed and understood
within those systems of relation that give them
meaning and animate them (see also Christin, 2019;
Seaver, 2017; Ziewitz, 2016). In particular, the con-
nections between algorithmic systems and the data
they draw on have become more complex over time.
Algorithmic procedures became salient as early as
the 1950s, when mainframe computers and com-
puterized systems were first implemented (Hicks,
2017). By the 1980s, they were widely used in
workplaces through the development and commer-
cialization of microcomputers and information
technologies (Zuboff, 1988). Over recent decades,
employers have begun to use algorithms—in particular, data mining and machine-learning algorithms—that are more likely to rely on "big data" characterized by volume (often measured in
petabytes and involving tens of millions of observa-
tions), variety (the data have widely different formats
and structures), and velocity (data can be added in
real time and over a long time frame) (e.g., Zuboff,
2019). Here, we report four technological affordances, or potential for social action provided by technological forms (Leonardi & Vaast, 2017; Zammuto,
Griffith, Majchrzak, Dougherty, & Faraj, 2007), that
are relevant to how employers can use algorithms to
interact with managers and workers. Specifically, we
describe how algorithmic technologies can be more
comprehensive, instantaneous, interactive, and
opaque than prior workplace technologies (Table 1).
First, algorithms—and the data they process—are
now often more comprehensive than any kind of
technology mobilized for technical or bureaucratic
control. Cameras, sensors, and audio devices can
now record workers' bodily movements and speech
to provide evidence of worker adherence to or de-
parture from production routines (e.g., Austrin &
West, 2005; Beane & Orlikowski, 2015; Landay,
2019; Xu, He, & Li, 2014). Accelerometers from
smartphones can be analyzed to gauge worker
movement (e.g., Clemes, O'Connell, & Edwardson,
2014; Thorp et al., 2012). Biometric and sensor data
are being used to verify employee identities, screen
for drug and alcohol use, and collect feedback
on emotional and physiological indicators in real
time (Ball & Margulis, 2011). Text data, video-based
recognition techniques, and natural language-
processing algorithms can monitor email or chat
in real time to assess employee mood, productivity,
and turnover intent (e.g., Angrave, Charlwood,
Kirkpatrick, Lawrence, & Stuart, 2016; Goldberg,
Srivastava, Manian, Monroe, & Potts, 2016; Leonardi
& Contractor, 2018; Lix, Goldberg, Srivastava, &
Valentine, 2019).
Second, algorithms now typically provide in-
stantaneous feedback, which relates to the velocity
aspect of big data (Jacobs, 2009; Katal, Wazid, &
Goudar, 2013). Given the double ability of digital
technologies to automate and pro duce information
(Zuboff, 1988), platforms can instantaneously com-
pute, save, and communicate real-time information
with workers and managers—including client comments, completion rates, or number of page views (e.g., Etter, Kafsi, Kazemi, Grossglauser, & Thiran, 2013; Mayer-Schönberger & Cukier, 2013; Sachon &
Boquet, 2017). As a result, feedback and assessment
can be incorporated continuously into the produc-
tion process (Crowston & Bolici, 2019).
TABLE 1
New Technological Affordances of Algorithms

Comprehensive. Key insights: a wide range of devices and sensors; collecting a variety of data about workers, such as biometrics, acceleration, text messages, and online footprints. Example studies: Angrave et al. (2016), Ball & Margulis (2011), Beane & Orlikowski (2015), Goldberg et al. (2016), Harari, Müller, Aung, & Renfrow (2017), Landay (2019), Leonardi & Contractor (2018), Levy (2015), Lix et al. (2019), Xu et al. (2014).

Instantaneous. Key insights: high velocity of algorithmic computation; performance assessments incorporated in real time into the system. Example studies: Crowston & Bolici (2019), Etter et al. (2013), Jacobs (2009), Katal et al. (2013), Mayer-Schönberger & Cukier (2013), Sachon & Boquet (2017).

Interactive. Key insights: algorithmically mediated platforms allow for participation from multiple parties; interactive interfaces channel user behavior in real time. Example studies: Amershi et al. (2014), Cambo & Gergle (2018), Chalmers & MacColl (2003), Holzinger & Jurisica (2014), Kulesza et al. (2015), Valentine et al. (2017), Zhou et al. (2018a).

Opaque. Key insights: intellectual property and corporate secrecy; technical literacy; machine-learning opacity. Example studies: Bolin & Andersson Schwarz (2015), Burrell (2016), Danaher (2016), Diakopoulos (2015), Dietvorst et al. (2015), Orlikowski & Scott (2014b), Pasquale (2015), Weld & Bansal (2018).

Third, algorithms can promote interactivity, especially when used in conjunction with algorithmically mediated platforms that provide data from
multiple parties (Amershi, Cakmak, Knox, &
Kulesza, 2014; Cambo & Gergle, 2018; Chalmers &
MacColl, 2003). Employers can use algorithmically
powered chatbots to monitor chat channels and in-
teractively prompt groups to pause and take a poll
regarding next steps (Zhou, Valentine, & Bernstein,
2018b), or even adjust the team hierarchy and work-
flow depending on inputted information (Valentine,
Retelny, To, Rahmati, Doshi, & Bernstein, 2017). These
interactive changes are made possible by the affordan-
ces of platforms, which have powerful computing
power behind the scenes and interactive interfaces
that can be accessed by different categories of people in
diverse locations, through individual logins on per-
sonal devices (e.g., Holzinger & Jurisica, 2014; Kulesza,
Burnett, Wong, & Stumpf, 2015).
Last, algorithms can be opaque, for three main
reasons: intentional secrecy, required technical lit-
eracy, and machine-learning opacity (Burrell, 2016).
The data and algorithms used to collect and analyze
behavior data are usually proprietary and un-
disclosed (Orlikowski & Scott, 2014a). In addition,
given the complexity of the technologies, most
workers do not fully grasp what kind of data are being
collected about them, how they are being used, or
how to contest them (Bolin & Andersson Schwarz,
2015). Finally, in the context of machine learning
(e.g., models that perform without using explicit
instructions, relying on patterns and inference),
algorithms are particularly difficult to decipher
(Dietvorst, Simmons, & Massey, 2015; Weld &
Bansal, 2018). According to Burrell, "When a computer learns and consequently builds its own representation of a classification decision, it does so without regard for human comprehension.... The workings of machine learning algorithms can escape full understanding and interpretation by humans, even for those with specialized training, even for computer scientists" (Burrell, 2016: 10).
ALGORITHMIC CONTROL: THE NEW
CONTESTED TERRAIN OF CONTROL
Having reviewed the literature on technical and
bureaucratic control mechanisms, and explored
the technological affordances of emerging algo-
rithmic technologies, we now develop a model of
algorithmic control as the new contested terrain
between employers and workers. We draw on
Edwards' (1979) typology of managers attempting control by directing, evaluating, and disciplining workers as a conceptual lens for reviewing the re-
search on algorithms at work. Through this review,
we find that employers are using algorithms to
control workers through six main mechanisms, which we call the "6 Rs"—they are using algorithms to direct workers by restricting and recommending, evaluate workers by recording and
rating, and discipline workers by replacing and
rewarding. We identify related worker experiences
for each of the 6Rs.
Rational Control through Algorithmic Direction
Our review suggests that employers are using algorithmic control to direct workers—specify what needs to be performed, in what order and time period, and with different degrees of accuracy—in
different ways than they do when using technical
and bureaucratic control. Under technical control,
direction is primarily accomplished through tech-
nologies that drive employees to do particular tasks
at a particular rate through task sequencing, spe-
cialization, and de-skilling (e.g., Braverman, 1974;
Burawoy, 1979). Under bureaucratic control, di-
rection is accomplished through job descriptions,
rules, checklists, and scripts (e.g., Weber, 1946; Blau,
1955). By contrast, under algorithmic control, em-
ployers use two key mechanisms to direct worker
behavior: algorithmic recommending and algorith-
mic restricting (Table 2).
Algorithmic recommending. Algorithmic recom-
mending entails employers using algorithms to offer suggestions intended to prompt the targeted worker to make decisions preferred by the choice architect.
As with earlier forms of rational control, employers
can inscribe technology with prescriptions that pri-
oritize specific decisions for workers to implement
(e.g., Kellogg, 2018). Unlike previous regimes of ra-
tional control, however, algorithmic recommending
frequently guides worker decisions by automatically
finding patterns in the data, often through machine-
learning algorithms that operate without using ex-
plicit instructions, relying on patterns and inference
to present workers with choices and opportunities
preselected by the algorithm (e.g., Gabrilovich,
Dumais, & Horvitz, 2004; Goldman, Little, & Miller,
2011; Karunakaran, 2016). For example, the non-
profit organization Crisis Text Line, which con-
nected people in crisis with volunteer counselors,
used machine-learning algorithms to analyze text
data and recommend which messages should be
prioritized. Their algorithmic system identified that
the term "ibuprofen" was 16 times more likely to predict the need for emergency aid than the word "suicide." Consequently, it automatically prioritized
messages containing the word "ibuprofen," which
helped to shorten the volunteer response time for
high-risk texters from 120 seconds to 39 seconds
(Gupta, 2018).
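As a rough illustration of this kind of triage (not Crisis Text Line's actual system; the terms and weights below are invented), a keyword-weighted scorer can reorder a message queue by predicted risk:

```python
# Hypothetical keyword-weighted triage: risk terms carry learned weights,
# and incoming messages are served highest-score first.
import heapq

RISK_WEIGHTS = {"ibuprofen": 16.0, "pills": 8.0, "suicide": 1.0}

def risk_score(message: str) -> float:
    """Sum the weights of risk terms appearing in the message."""
    words = set(message.lower().split())
    return sum(w for term, w in RISK_WEIGHTS.items() if term in words)

def prioritize(messages):
    """Return messages ordered from highest to lowest estimated risk."""
    ranked = heapq.nlargest(len(messages),
                            ((risk_score(m), m) for m in messages))
    return [m for _, m in ranked]

queue = ["i took ibuprofen", "feeling down today", "thinking about suicide"]
print(prioritize(queue))  # the ibuprofen message jumps to the front
```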
In addition, employers are now using algorithmic
recommending to bypass the heuristics workers typi-
cally use to make decisions. For instance, a retail
technology company that historically depended on fashion buyers' expertise to make decisions about fu-
ture merchandising began to data mine the actual
performance of past judgments to recommend more
profitable future merchandising decisions (Valentine
& Hinds, 2019). Similarly, Uber relied on personalized
data, such as braking and acceleration speed, to ana-
lyze whether workers were driving erratically and al-
gorithmically recommend when they might need to
rest (Rosenblat & Stark, 2016). In many cases, such
recommendations came in the form of nudges (Thaler
& Sunstein, 2009) that were built into algorithmic
systems, and therefore were hard for workers to ignore.
For instance, Uber engaged in individualized and real-
time nudging to actively compel drivers to go home
whenever three passengers in a row reported feeling
unsafe (Scheiber, 2017).
TABLE 2
Algorithmic Direction

Algorithmic recommending. Prompting the worker to make decisions preferred by the choice architect; recommending specific courses of action. Key insights: can augment workers' decisions by automatically finding patterns in the data and prescribing actions based on this; can bypass the heuristics workers typically use to make decisions. Example studies: Danaher (2016), Gabrilovich et al. (2004), Goldman et al. (2011), Gupta (2018), Karunakaran (2019), Pachidi et al. (2014), Rosenblat & Stark (2016), Scheiber (2017), Valentine (2019), Veale et al. (2018).

Algorithmic restricting. Restricting access to information; restricting behavior. Key insights: can continuously and covertly restrict information available to workers; can interactively restrict the behavior of crowdworkers and online community members. Example studies: Afuah & Tucci (2012), Aneesh et al. (2014), Arazy et al. (2016), Barrett et al. (2016), Calo & Rosenblat (2017), Faraj et al. (2011), Fayard et al. (2016), Kallinikos & Tempini (2014), Kittur et al. (2019), Lakhani (2016), Lee et al. (2015), Leonardi & Vaast (2016), Lifshitz-Assaf (2018), Majchrzak et al. (2013), Muthukumaraswamy (2010), O'Mahony & Bechky (2008), Orlikowski & Scott (2014a, 2014b), Shaikh & Cornford (2010), Tempini (2015), Treem & Leonardi (2013), Truelove (2019), West & O'Mahony (2008).

Potential worker experiences. Frustration: recommendations may not be intelligible to workers. Bias: recommendations can reinforce social and racial inequalities. Overriding workers' conceptions of well-being: recommendations may negatively affect the welfare of those being nudged. Reduced voice: restrictions can prevent workers from communicating with managers and with one another. Precarity: restrictions can break jobs down into micro tasks, which can be scheduled in finely grained, opaque, and unpredictable ways. Example studies: Angwin et al. (2016), Askay (2015), Barocas & Selbst (2016), Brayne (2017), Christin (2017), Danaher (2016), Gray & Suri (2019), Lee et al. (2015), Martin et al. (2014), O'Neil (2016), Pachidi et al. (2014), Rosenblat & Stark (2016), Salehi et al. (2015), Vallas (2018), Vallas & Schor (2020), Yeung (2017).

Although the hope is that algorithms will improve the accuracy and objectivity of managerial decisions
(e.g., Brockman, 2019), these forms of algorithmic recommending may negatively affect workers' conditions and livelihoods in several ways. First, workers may be frustrated when algorithmic recommendations are not intelligible to them. Take the example of warehouse logistics. Under technical control, employers used recommendation systems that stocked warehouses so that similar items were located close to one another, which frustrated workers when employers' categories differed from the categories of the workers, but were intelligible to the workers. Algorithmic recommendation systems may exacerbate such worker frustration by relying on more opaque categories. For example, Amazon's algorithmic recommendation system stocked its large warehouses using a "chaotic storage" algorithm, which assigned shelves based on space and availability (Bumbulsky, 2013; Danaher, 2016). Because the algorithmic logic was opaque, workers could not rely on their own cognition to find items for order fulfillment and had no way to find items when the algorithm broke down (Danaher, 2016). In healthcare settings, this opacity has been shown to increase professionals' doubt and ambiguity regarding their diagnostic decisions (Lebovitz, Lifshitz-Assaf, & Levina, 2019).
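A hypothetical sketch of chaotic storage logic (names and structures invented) shows why the shelving is unintelligible to workers without the system's help:

```python
# Hypothetical "chaotic storage" sketch: items go on whatever shelf is free,
# so only the database—not any human-legible category—knows where they are.
import random

class ChaoticWarehouse:
    def __init__(self, shelf_ids):
        self.free = set(shelf_ids)   # shelves with available space
        self.locations = {}          # item id -> shelf id

    def stow(self, item_id: str) -> str:
        """Assign an incoming item to any shelf with space."""
        shelf = random.choice(sorted(self.free))
        self.free.discard(shelf)
        self.locations[item_id] = shelf
        return shelf

    def pick(self, item_id: str) -> str:
        """Only the system can answer where an item lives."""
        shelf = self.locations.pop(item_id)
        self.free.add(shelf)
        return shelf

wh = ChaoticWarehouse([f"S{i:02d}" for i in range(100)])
print(wh.stow("guitar-strings"))  # e.g., 'S17'; no category a picker can infer
print(wh.pick("guitar-strings"))
```

If the locations database is unavailable, no amount of worker knowledge recovers an item's shelf, which is the dependence Danaher (2016) describes.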
Similarly, scholars of bur eaucratic control have
long shown that bureaucratic recommendation sys-
tems can frustrate workers in sales by requiring them
to use employer-approved scripts rather than tailor-
ing their sales messages to clients as they saw fit.
Pachidi et al. (2014) demonstrate how algorithmic
recommendation systems can exacerbate such frus-
tration when scripts become unintelligible to
workers. In their study of algorithmic recommending
in a telecommunications organization, salespeople
were frustrated not only because they were expected
to model their behavior based on recommendations
provided by their employers but also because the
machine-learning model built into the algorithmic
system did not allow them to see what the recom-
mendations were based on. Because their compen-
sation depended on commissions and because the
recommendations often conflicted with the salespeople's own judgments about which customers were the best targets, workers only symbolically
complied with the recommendations. This led to
conflict between the salespeople and their em-
ployers; employers ultimately chose to fire many of
the salespeople in response. Similarly, Christin
(2017) shows that judges and prosecutors resented the opacity of predictive algorithms called risk-
assessment tools because they found them to be
unintelligible.
Second, algorithmic recommending has the po-
tential to negatively affect the welfare of those being
nudged. For example, Rosenblat and Stark (2016) describe how Uber's algorithmic recommendation
system did not let drivers see where a passenger was
going before accepting the ride, making it hard to
judge how profitable a trip would be. Similarly,
scholars showed that surge pricing was explained by
Uber to be a means to ensure positive customer ex-
perience by attracting supply to an area of high de-
mand, but that these surges and the attendant rates
were often erratic and unreliable (Lee, Kusbit,
Metsky, & Dabbish, 2015). In many cases, algorith-
mic nudges were not easily opted out of. For in-
stance, Uber and Lyft both used an algorithm called "forward dispatch" that dispatched the next ride to a driver before the current one ended. Although drivers could pause the service's automatic queuing feature, once they logged back in and accepted their next ride, the feature restarted. As a result, workers reported feeling powerless (Leicht-Deobald et al.,
2019). Beunza (2019) suggests that when workers
are directed by an algorithm that they perceive as
unfair, this may undermine their moral compass and
increase their willingness to engage in unethical
behavior.
Third, social and racial inequalities may be reinforced because algorithms may direct workers' attention to particular inferences and classes of people
in ways that may be biased (Angwin, Larson, Mattu,
& Kirchner, 2016; Harcourt, 2007). In the current
literature, the lack of counterfactuals means that it is
not clear if and when these new processes are worse
or better than the older processes. Yet, some scholars
have raised concerns that when the algorithm's training data (e.g., the data used to allow the
machine-learning algorithm to find patterns between
inputs and outcomes) are biased, it can lead to dis-
criminatory models (Barocas & Selbst, 2016; O'Neil,
2016). Training data can be biased in two main ways.
First, historical data can reflect existing patterns of
inequality and discrimination. For example, Angwin et al. (2016) compared the recidivism rates predicted
by the risk-assessment tools used in criminal justice
with the rate that actually occurred over a two-year
period. Because the algorithm had learned from
cases in which structural discrimination had played
a role, it flagged African-American defendants as
higher risk, with higher rates of false positives, than
comparable white defendants, even though the al-
gorithm was correctly calibrated regarding true
positives for African-American and white de-
fendants (Corbett-Davies, Pierson, Feller, Goel, &
Huq, 2017). Second, algorithms can draw inferences
from a biased sample of the population. In such a
case, any decision that rests on these inferences may
systematically disadvantage those who are under- or
overrepresented in the dataset. For example, Brayne
(2017) details how police organizations used pre-
dictive policing algorithms to identify high-risk
individuals and places, and employed these to direct
enforcement officials' inspection priorities. By de-
voting a large share of their attention to monitoring
the activities of individuals belonging to protected
classes, police officers observed potential issues for
these individuals at systematically higher rates than
for other individuals who did not face the same de-
gree of scrutiny.
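A small numerical illustration (invented data, not the actual COMPAS figures) shows how a risk flag can be equally precise in two groups while burdening one group's non-reoffenders far more:

```python
# Invented toy data: y = 1 means the person reoffended; flag = 1 means the
# tool labeled them "high risk." Both groups have the same base rate.
def ppv_and_fpr(labels, flags):
    tp = sum(1 for y, f in zip(labels, flags) if y == 1 and f == 1)
    fp = sum(1 for y, f in zip(labels, flags) if y == 0 and f == 1)
    negatives = sum(1 for y in labels if y == 0)
    return tp / (tp + fp), fp / negatives   # precision, false-positive rate

group_a = ([1] * 4 + [0] * 8, [1] * 4 + [1] * 4 + [0] * 4)
group_b = ([1] * 4 + [0] * 8, [1, 1, 0, 0] + [1, 1] + [0] * 6)

for name, (labels, flags) in (("A", group_a), ("B", group_b)):
    ppv, fpr = ppv_and_fpr(labels, flags)
    print(f"group {name}: precision {ppv:.2f}, false-positive rate {fpr:.2f}")
# Both groups show precision 0.50, yet group A's non-reoffenders are flagged
# at twice group B's rate—the disparity at issue in the COMPAS debate.
```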
Algorithmic restricting. Algorithmic restricting is
another mechanism that employers are using to di-
rect the work of workers. It entails the use of algo-
rithms to disp lay only certain information and allow
specific behaviors while preventing others. As with
earlier forms of rational control, employers can in-
scribe algorithms with assumptions and prescrip-
tions that restrict the activities of workers (e.g.,
Callaghan & Thompson, 2001).
Unlike past forms of rational control, however, al-
gorithmic control allows the restriction of information
to be incorporated instantaneously and covertly into
the work process. For example, platform organiza-
tions such as Uber used algorithms to narrow shift
choices, ride choices, or delivery choices to smooth
service offerings (Calo & Rosenblat, 2017; Lee et al.,
2015). Similarly, a hospital employer used algorithms
for real-time restriction of the loading requests of pharmacy assistants' robots (for replenishment of
stock in its storage) to benefit clients waiting for pre-
scription refills, despite the fact that this intensified
the work of the pharmacy assistants (Barrett et al.,
2012). Along these same lines, to discourage work-
ers from working with clients off of the platform,
Upwork used algorithmically powered chatbot warn-
ings reminding workers of their agreement to not
work outside of the platform when certain words such as "Skype," "phone," or "email" were typed into the
chat between workers and clients; Upwork sent
similar messages when workers shared email ad-
dresses or phone numbers with clients, or suggested
using other cloud-sharing platforms such as Google
Drive or Dropbox (Jarrahi, Sutherland, Nelson, &
Sawyer, 2019).
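A minimal sketch of such keyword-triggered moderation might look as follows; the patterns and warning text are invented for illustration, not Upwork's actual implementation:

```python
# Hypothetical chat moderation: flag messages that suggest moving work or
# contact off the platform, and post an automated reminder.
import re
from typing import Optional

OFF_PLATFORM = re.compile(
    r"\b(skype|phone|email)\b"              # flagged keywords
    r"|[\w.+-]+@[\w-]+\.[\w.]+"             # email addresses
    r"|\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",  # US-style phone numbers
    re.IGNORECASE,
)

def moderate(message: str) -> Optional[str]:
    """Return a warning when the message matches an off-platform signal."""
    if OFF_PLATFORM.search(message):
        return ("Reminder: your user agreement requires all work and "
                "payment to stay on the platform.")
    return None

print(moderate("can we switch to Skype?"))        # triggers the reminder
print(moderate("draft attached, please review"))  # None: no warning
```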
In addition, employers can use algorithms to in-
teractively restrict the behavior of crowds or online
community membe rs. Algorithmic systems can be
configured to constrain the activities of people who
are not formally affiliated with the organization but
still provide work that is relevant to the organization.
When firms use crowds through online platforms for
innovation, they often limit the crowd's participation to facilitate the selection and integration of in-
novative solutions. For example, in crowdsourcing
initiatives such as Topcoder and Kaggle, managers
used algorithmic restricting to limit and curate sub-
missions for quality and relevance when they made
open calls on the platforms (Afuah & Tucci, 2012;
Lakhani, 2016). To mitigate organizational and pro-
fessional barriers to adoption of crowdsourced
solutions (Fayard, Gkeredakis, & Levina, 2016;
Lifshitz-Assaf, 2018), employers have created algo-
rithms to evaluate these solutions without involving
professionals (Kittur et al., 2019). Firms also utilize
algorithmic restricting on online platforms used for
participatory production, where customers produce
and share content as they consume it (e.g., Faraj,
Jarvenpaa, & Majchrzak, 2011; Karunakaran, 2018).
For example, in journalism, managers have used
algorithms in combination with social media plat-
forms to invite the crowd to create content for
news articles, but have restricted submissions in ways
that increased compliance with company standards
(Muthukumaraswamy, 2010). Similarly, an advertising
agency enlisted social media users to create and dis-
tribute content related to the brands that the agency
represented, while at the same time strategically elic-
iting specific kinds of participation (Truelove, 2019).
Organizations such as TripAdvisor, Wikipedia, and
PatientsLikeMe, which have depended completely on
external contributors for their content, have faced par-
ticular challenges because they have needed to strike a
balance between restricting the behavior of external
contributors, on the one hand, while giving them
enough freedom that they were willing to contribute
content, on the other (Arazy, Daxenberger, Lifshitz-
Assaf, Nov, & Gurevych, 2016; Barrett, Oborn,
& Orlikowski, 2016; Kallinikos & Tempini, 2014;
Orlikowski & Scott, 2014b; Tempini, 2015).
These forms of restriction come with important
consequences for workers. As with technical control,
workers often experience alienation with algorith-
mic restricting when they lose control over their own
labor and are deprived of the right to conceive
of themselves as directors of their own actions
(Blauner, 1964). However, algorithmic restricting
can limit worker voice more extensively than before.
Askay (2015) shows how an online feedback system that interactively combined workers' experiences and ratings suppressed their expressions of negative feedback, which did not fit into the data collection interface. The ratings were known to be positively biased, which helped the company, but limited workers' feedback, which had to fit into the existing
interface. Similar restrictions on communication are
imposed in online labor markets. As Gray and Suri (2019) explain, "the API determines the dialog and communication between the programmer and the worker. The API gives each individual requester and worker their own unique identifier, a string of seemingly random letters and numbers such as 'A16HE9ETNPNONN.'" Hidden behind such ano-
nymized handles and restrictive interfaces, workers
were prevented from communicating with each
other on the platform, and from communicating with
the requesters. These restrictions often prevented
workers from ever speaking directly with a human
manager (Martin, Hanrahan, O'Neill, & Gupta, 2014;
Rosenblat & Stark, 2016; Salehi, Irani, Bernstein,
Alkhatib, Ogbe, & Milland, 2015).
Algorithmic restricting can also increase precarity
for workers. Algorithmically mediated platforms can fragment workers' efforts in several, interconnected
ways. First, on-demand workers are currently cate-
gorized as independent contractors, or users of the
platforms, rather than as employees (Rosenblat &
Stark, 2016; Vallas, 2019; Vallas & Kovalainen,
2019). Second, jobs are frequently broken down
into discrete or even micro tasks, which can be
scheduled in finely grained, opaque, and un-
predictable ways. For example, food-delivery plat-
forms restricted information about available shifts
and delivery orders, so drivers were only able to
choose from among the choices presented to them
by the algorithmic interfaces, without fully
grasping what kind of information was being re-
stricted (Ivanova, Bronowicka, Kocher, & Degner,
2018). Workers on the Upwork platform who did not
work for 30 days had their profile status changed to
private so that clients could not find them (Jarrahi
et al., 2019). And, on the Amazon Mechanical Turk
platform, requesters (e.g., employers) could rate
workers but workers could not rate requesters; this
information asymmetry made it difficult for workers
to sanction abusive clients and prevented other
workers from learning which clients to avoid (Martin
et al., 2014).
Rational Control through Algorithmic Evaluation
Employers obtain desired behavior from workers
not only through direction but also through
evaluation—the review of workers' activities to correct mistakes, assess performance, and identify those
who are not performing adequately. Our review of
the literature on algorithms at work suggests that al-
gorithmic control uses different mechanisms for
evaluation than do technical and bureaucratic con-
trol. With technical control, evaluation occurs
through the recording of frequency and length of
work tasks, and worker productivity, accuracy, re-
sponse time, and time spent away from the assembly
line or computer terminal (Aiello & Svec, 1993;
Dworkin, 1990). With bureaucratic control, evalua-
tion is accomplished through direct observation and
subjective judgment of supervisors (Vancil, 1982)
and throu gh the use of metrics (Govindarajan, 1988).
With algorithmic control, employers use two pri-
mary mechanisms for evaluating workers: algorith-
mic recording and algorithmic rating (Table 3).
Algorithmic recording. Algorithmic recording
entails the use of computational procedures to
monitor, aggregate, and report, often in real time, a
wide range of finely grained data from internal and
external sources. As with earlier forms of rational
control, employers typically use the data to quantify,
compare, and evaluate worker output regarding the
frequency and length of work tasks, quality of worker
output, and nonproductive work time (e.g., Alvesson
& Karreman, 2007; Vancil, 1982). Consequently,
there is often an asymmetry between the information
possessed by workers and managers (Zuboff, 1988).
Yet, employers frequently use algorithmic re-
cording to track a wider range of worker behaviors
than in technical and bureaucratic systems. For ex-
ample, some organizations have developed algo-
rithms to monitor collective language and analyze
sentiments in team chat interfaces (Lix et al., 2019).
Klick Health, a large Canadian healthcare consulting
firm, used a machine-learning tool to calculate the
average time it took workers to complete a variety of
tasks and to alert managers when projects appeared to
be going off track (Schweyer, 2018). The organization
tracked the activities of employees to flag and reduce
behaviors that may have impacted worker flow and
productivity (Segal, Goldstein, Goldman, & Harfoush,
2014). Many companies have also used algorithmic
recording to analyze how employees communicate with one another, using these data to "locate groups of employees who interact frequently, link employee communication groups to their business productivity, identify communication liaisons and isolates, and spot communication that may threaten the company" (Leonardi & Contractor, 2018; Watkins Allen, Coopman, Hart, & Walker, 2007: 173).
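As a stylized illustration of such monitoring (production systems use trained models rather than the toy lexicon and threshold invented here), a simple scorer can flag channels whose recent messages skew negative:

```python
# Toy lexicon-based sentiment monitor over team chat; words and threshold
# are invented for illustration only.
NEGATIVE = {"blocked", "late", "broken", "frustrated", "slipping"}

def negativity(messages):
    """Share of messages containing at least one negative-lexicon word."""
    hits = sum(any(w in m.lower().split() for w in NEGATIVE)
               for m in messages)
    return hits / len(messages)

channel = ["deploy is broken again", "standup at 10", "we are slipping"]
if negativity(channel) > 0.5:
    print("alert: project may be going off track")  # routed to a manager
```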
The development of comprehensive procedures
of data gathering has led to new modalities of
surveillance. For instance, Uber relied on the data
provided by its application—installed on drivers' and customers' smartphones—not only to monitor
the behavior of individual drivers but also to manage
its drivers and customer base as a whole (Rosenblat &
Stark, 2016). In the trucking industry, employers
have used fleet management systems to monitor a
wide range of timekeeping and performance data
about truck drivers, including a driver's fuel efficiency, idling time, speed, geolocation, lane departures, braking and acceleration patterns, cargo status, and vehicle maintenance information (Levy, 2015: 164). Similarly, UPS had a saying of "small amounts of time, large amounts of money" because
they learned that, by using finely grained data, they
could reduce even one keystroke per driver per day,
which over a year saved the company $100,000;
in addition, saving each driver one minute per day
could save almost $15 million (Davidson, 2016).
TABLE 3
Algorithmic Evaluation

Algorithmic recording. Recording and aggregating finely grained behavior and statistics from internal and external sources; providing real-time feedback. Key insights: can track a wide range of behaviors; can enable real-time adjustments of worker performance. Example studies: Alvesson & Karreman (2007), Bailey et al. (2019), Karunakaran (2016), Kittur et al. (2019), Lehdonvirta et al. (2019), Leonardi & Contractor (2018), Levy (2016), Lix et al. (2019), McClelland (2012), Rahman (2019), Rosenblat & Stark (2016), Schweyer (2018), Segal et al. (2014), Watkins et al. (2007).

Algorithmic rating. Using online rating and ranking; using predictive analytics. Key insights: can aggregate quantitative and qualitative data to measure work productivity and evaluate workers within an organization based on external and internal sources; can predict future worker performance (achievement, skills, retention, etc.). Example studies: Barrett et al. (2016), Christin (2018), Curchod et al. (2019), Horesh et al. (2016), Jharver et al. (2018), King (2016), Levy & Barocas (2018), Lix & Valentine (2019), Mallafi & Widyantoro (2016), Orlikowski & Scott (2014b), Ramamurthy et al. (2015), Rahman (2019), Rosenblat (2018), Varshney et al. (2014).

Potential worker experiences. Loss of privacy: workers may be concerned that the data collected may include their overall aptitude in various skills in work and home settings, and their physical and mental health. Data accuracy: workers may not be aware of the data being collected, so they may not be able to appeal judgments against them or correct misinformation. Discrimination: algorithmic recording and ratings can be subject to gender and race stereotyping; workers may have fewer mechanisms for contesting mechanisms they feel are unfair; consumer rating may escape legal action. Weight of ratings in hiring decisions: workers may be concerned that employers may select workers primarily based on prior ratings and may communicate with workers primarily through online tools that do not allow in-person assessments of workers. Example studies: Ahmed et al. (2016), Angwin (2014), Anteby & Chan (2018), Bock (2015), Bodie, Cherry, McCormick, & Tang (2017), Chan & Wang (2018), Fourcade & Healy (2016), Greenwood et al. (2017), Jhaver et al. (2018), Levy & Barocas (2017), Lix & Valentine (2019), Miller (2015), O'Connor (2015), Rahman (2019), Rahman & Valentine (2019), Rosenblat et al. (2017), Rosenblat & Stark (2016), Ticona & Mateescu (2018), Tufekci (2014), Valentine & Bernstein (2019), Wood et al. (2019), Wood & Lehdonvirta (2019).

In addition, as with bureaucratic control, managers are using algorithmic recording to provide feedback to workers. However, compared with bureaucratic control, which relies on subjective evaluations often months after the directed behavior to
reward or discipline workers (Alvesson & Karreman,
2007), algorithmic recording uses computational
procedures to provide real-time feedback to workers
and managers. In a large warehouse fulfillment ser-
vices organization, employees and managers received
real-time information throughout the day, showing
whether and how they were meeting their targets
(McClelland, 2012). A handheld scanner program
measured finely grained worker behaviors like being
late or searching through a bin where the correct item
was not found, and calculated a worker score based on these data; if a worker's score was consistently lower
than expected, this triggered an alert for a manager to
redirect the worker (McClelland, 2012). Similarly,
employer platforms such as Upwork have used real-time metrics to monitor workers, including variables such as "up-to-date availability" and "100 percent complete worker profile," as well as data about the freelancer's activity on the platform in the past 90 days (Rahman, 2019). Uber used real-time geolocation
information to optimize the matching of drivers and
customers and to track the percentage of canceled
trips and unaccepted trip requests for each driver.
Uber's system identified predicted areas of surge
pricing and alerted drivers through notifications
(Rosenblat & Stark, 2016).
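A stylized sketch of such a scoring-and-alerting loop (event types, weights, window, and threshold are all invented rather than taken from any vendor's system) might look like this:

```python
# Hypothetical rolling worker score: scan events carry weights, and a
# persistently low rolling average triggers a manager alert.
from collections import deque

WEIGHTS = {"on_time_scan": 1.0, "late_scan": -2.0, "empty_bin_search": -1.5}

class WorkerScore:
    def __init__(self, window: int = 50, threshold: float = 0.2):
        self.events = deque(maxlen=window)  # rolling window of event weights
        self.threshold = threshold

    def record(self, event: str) -> None:
        self.events.append(WEIGHTS[event])

    def needs_review(self) -> bool:
        """True when the average recent score falls below the threshold."""
        if not self.events:
            return False
        return sum(self.events) / len(self.events) < self.threshold

score = WorkerScore()
for e in ("on_time_scan", "late_scan", "empty_bin_search", "on_time_scan"):
    score.record(e)
print(score.needs_review())  # True: average -0.375 is below 0.2
```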
Regarding worker consequences, like with techni-
cal control through recording, algorithmic recording
can shape the subjectivity of workers so that they
come to see themselves in the ways they are defined
through surveillance (Sewell, 1998). Feelings of con-
stant surveillance, in turn, can lead workers to police
their own behavior to comply with organizational
expectations (Ahmed et al., 2016; Bailey, Erickson,
Silbey, & Teasley, 2019). Making the output of algo-
rithmic recording visible to other workers may also
lead workers to change their behavior to match their
peers (Lehdonvirta, Kässi, Hjorth, Barnard, & Graham, 2019). Unlike previous forms of recording under
technical and bureaucratic control, however, since
algorithmic recording greatly expands previous con-
trol mechanisms in scope and frequency, workers
may experience a loss of privacy (Anteby & Chan,
2018; Fourcade & Healy, 2016; Rosenblat & Stark,
2016; Tufekci, 2014). The data collected may relate
to multiple aspects of the employee as a person—including their overall aptitude in various skills and
settings, their physical and mental health, their re-
productive plans, or even what they had for breakfast
(Bock, 2015). This surveillance can extend control
beyond work hours, as some employers have given
workers wearable devices that rewarded lifestyle
choices such as exercise and sleep (O'Connor, 2015).
Algorithmic recording may also raise worker
concerns about the accuracy of the data collected.
For example, in the context of drug testing, false
positives can deprive workers of their jobs and tar-
nish their reputations for future opportunities. This
is problematic, given the fact that algorithmic re-
cording can, like previous forms of recording, be
inaccurate or biased (Angwin, 2014; boyd &
Crawford, 2012; Eubanks, 2017; Miller, 2015;
O'Neil, 2016). In larger data pools, however, bias and
inaccuracies may be harder to check than before: it
can be difficult to reverse engineer the data, or to
cross-compare it with related indicia to ensure its
accuracy (Bodie et al., 2017). Because workers may
not be aware of the data being collected about their
behavior and performance, they may not be able to
appeal judgments against them or correct missing
or mistaken information.
Algorithmic rating. Algorithmic rating is another
mechanism for guiding worker behavior through
evaluation. Managers are now often using computa-
tional technologies to gather ratings and rankings to
calculate some measure of workers' performance, as
well as predictive analytics to predict measures of
their future performance. As with earlier forms of
rational control, managers draw on a mix of quanti-
tative and qualitative data collected inside the orga-
nization to measure productivity and evaluate
workers against those measures (e.g., Karreman &
Alvesson, 2004). Yet, algorithmic rating can also
provide ongoing aggregation of quantitative and
qualitative feedback about worker performance from
both internal and external sources. For instance,
most online marketplaces and online labor markets,
such as Amazon, Craigslist, and Upwork (Rahman,
2018); Ebay (Curchod et al., 2019); Uber and Lyft
(Rosenblat, 2018); Airbnb (Jhaver et al., 2018); and
TripAdvisor (Orlikowski & Scott, 2014), and most
online health communities (Barrett et al., 2016) have
used user-generated rating systems. One company
assigned contractors a single "kharma" rating based
on manager, peer, and client ratings of their work,
skills, and personality, and on their objective com-
pliance with budgets and deadlines; workers who
had higher scores got better access to additional
projects (Lix & Valentine, 2019). In web journalism,
many newsrooms used data including ratings pro-
duced by content management systems and analyt-
ics software programs to track the preferences of
online readers to manage their staffers' workflow
(Christin, 2018). In the restaurant and hospitality
industry, crowdsourced platforms such as Yelp and
TripAdvisor provided managers with an ongoing
flow of crowdsourced data about worker behavior.
Customers could review restaurants and hotels
through ratings in a range of categories (value, ser-
vice, and room quality); they could also post com-
ments and pictures on the aggregator's website. This
ongoing flow of ratings was routinely used by man-
agers to monitor the performance of their staff
(Orlikowski & Scott, 2014a). All of these de-
velopments contribute to the institutionalization of
refractive surveillance (Levy & Barocas, 2018), in
which data such as ratings that are recorded about
external users (e.g., customers) can be repurposed to
assess internal sources (e.g., workers).
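To illustrate how such multi-source aggregation can work, consider the following sketch of a "kharma"-style score that blends internal evaluations with repurposed customer ratings. The sources, weights, and 0-5 scale are assumptions for illustration; the platforms discussed here do not disclose their actual formulas.

```python
# Hypothetical source weights (illustrative only; real weights are proprietary)
SOURCE_WEIGHTS = {
    "manager": 0.3,    # internal evaluations
    "peer": 0.2,
    "client": 0.3,     # external, customer-generated ratings repurposed for evaluation
    "compliance": 0.2, # objective budget/deadline compliance, scored 0-5
}

def aggregate_score(ratings: dict[str, list[float]]) -> float:
    """Weighted sum of per-source mean ratings (each source on a 0-5 scale).
    Missing sources contribute nothing here; real systems may renormalize."""
    total = 0.0
    for source, weight in SOURCE_WEIGHTS.items():
        values = ratings.get(source, [])
        if values:
            total += weight * (sum(values) / len(values))
    return total

# Example: one contractor's aggregated score across all four sources
print(aggregate_score({
    "manager": [4.5],
    "peer": [4.0, 4.2],
    "client": [3.8, 5.0, 4.6],
    "compliance": [5.0],
}))  # about 4.51 on a 0-5 scale
```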
In addition, and in contrast to past forms of tech-
nical and bureaucratic control, employers can use
algorithms to predict how workers are likely to per-
form in the future. For example, one consulting firm
used algorithmic rating to predict turnover intention,
identifying "high flight risk" individuals who were
likely to leave the company (King, 2016). Another
company deployed algorithms to predict the exper-
tise of their employees using data from both their
enterprise systems (resumes, explicit assessments of
employee expertise, job position histories, and foot-
prints of employees' work activities such as sales
pipeline, software documentation, and publications)
and their corporate social networking site (Horesh,
Varshney, & Yi, 2016; Varshney, Chenthamarakshan,
Fancher, Wang, Fang, & Mojsilović, 2014). Studies
have used algorithmic rating models to predict the
need for employee up-skilling based on a mismatch
between employee skills and their current job de-
mands (Ramamurthy, Singh, Davis, Kevern, Klein,
& Peran, 2015), and to predict the potential for
employees to achieve performance targets based on
historical data about the employee's achievement
orientation, adaptability, analytical thinking, com-
munication, and information seeking (Mallafi &
Widyantoro, 2016).
Algorithmic rating comes with several important
consequences for workers. First, similar to algorith-
mic recommending, algorithmic rating raises im-
portant concerns about discriminatory outcomes.
Algorithmic rating can be subject to gender and race
stereotyping (Greenwood, Adjerid, & Angst, 2017;
Levy & Barocas, 2017; Rosenblat, Levy, Barocas, &
Hwang, 2017). For example, in the case of credit
scoring, low credit scores were more likely to lead to
negative hiring and salary-related outcomes for fe-
male (versus male) and black (versus white) job ap-
plicants (O'Brien & Kiviat, 2018). With algorithmic
rating, however, online customers (instead of man-
agers) also often act as raters, with implications for
evaluations. Customers have been shown to dis-
criminate in online labor markets (Chan & Wang,
2018; Edelman, Luca, & Svirsky, 2017). But they may
not be held accountable for their ratings in the way a
manager in an ongoing employment relation would
be. Workers also have fewer mechanisms for con-
testing unfair evaluations (Rosenblat et al., 2017;
Rosenblat & Stark, 2016). Overall, the legal status of
algorithmic rating in connection to discrimination
remains unclear. Although companies are currently
prohibited from making employment-related de-
cisions based on workers' protected characteristics
under Title VII of the Civil Rights Act of 1964, con-
sumer ratings may escape legal action because they
fall under the "business necessity" argument
(Rosenblat et al., 2017; Rosenblat & Stark, 2016).
In addition, in comparison to bureaucratic rating,
algorithmic rating carries extreme weight in hiring
decisions. Some online labor platforms have used
algorithms to restrict access to jobs for contractors
with low ratings (Wood, Graham, Lehdonvirta, &
Hjorth, 2019). In addition, algorithmic ratings are
often much more public than past forms of rating
(e.g., Curchod et al., 2019). They also can be volatile
because they often dynamically draw from multiple
data sources, update frequently, and automatically
deny access based on even small variations in rating.
They also may be accidental or erroneous (Wood &
Lehdonvirta, 2019). Both in online marketplaces
(e.g., Airbnb, Amazon, Craigslist, and Ebay) and
online labor markets (e.g., Upwork, Uber, Lyft, and
Care.com), employers and customers have been
shown to select workers primarily based on prior
ratings and to communicate with workers primarily
through online tools that do not allow in-person
assessments of workers rather than through face-to-
face interviews (Chan & Wang, 2018; Rahman &
Valentine, 2019). Consequently, algorithmic ratings
have become an essential reputational asset for
workers. In the words of a freelancer on Upwork,
"ratings are our billboard, it is our PR megaphone, it is
the front door to our shop" (Rahman, 2019: 21). From
ride-sharing to care work platforms, good algorithmic
ratings ensure the visibility of online workers, which
in turn shapes their ability to find work. For instance,
on Care.com, algorithmic ratings have been used to
create different categories of workers: the label
"CarePros" indicated that workers maintained a high
star rating and responded to 75 percent of messages
within 24 hours (Ticona & Mateescu, 2018: 4395).
CarePros workers' profiles were more prominently
displayed on the platform, which increased their
likelihood of future employment.
Rational Control through Algorithmic Discipline
Finally, employers obtain desired behavior from
workers through discipline: the punishment and
reward of workers to elicit cooperation and enforce
compliance. Our review of the literature on algo-
rithms at work suggests that employers using algo-
rithmic control utilize different mechanisms for
discipline than they do when using technical and
bureaucratic control. With technical control, disci-
pline is accomplished through the recruitment of a
reserve army of secondary workers ready to take the
jobs of any primary workers who do not cooperate
and comply with employer directives (Edwards, 1979).
With bureaucratic control, discipline is accomplished
primarily through incentives and penalties; workers
who exhibit desired behavior are rewarded with
promotions, higher pay, and jobs with greater respon-
sibility, more benefits, better work stations, or prefer-
able tasks, whereas those who do not exhibit desired
behavior are fired according to rules, policies, or
schedules (Ezzamel & Willmott, 1998; McLoughlin
et al., 2005). With algorithmic control, employers
use two main mechanisms for disciplining workers:
algorithmic replacing and rewarding (Table 4).
Algorithmic replacing. Algorithmic replacing
entails rapidly or even automatically firing under-
performing workers from the organization and
replacing them with substitute workers. Although
others have addressed the macro-economic changes
associated with replacement of jobs by algorithms
(e.g., Arntz, Gregory, & Zierahn, 2016; Autor, 2015a,
2015b; Brynjolfsson & McAfee, 2014; Davenport &
Kirby, 2016; Ekbia & Nardi, 2017; Elliott, 2014;
Frey & Osborne, 2017; Mindell, 2015; Mokyr,
Vickers, & Ziebarth, 2015; Sachs & Kotlikoff, 2012;
Shestakofsky, 2017), we examine algorithmic re-
placement at the workplace level with a focus on how
it can be used by employers as a mechanism of control.
As with past forms of replacing, algorithmic
replacing is accomplished by accessing a reserve
army of workers ready to take the jobs of those who
do not comply with managerial directives. Yet, al-
gorithmic replacing differs from past forms of control
in two main ways. First, market-making platforms
can automatically kick workers off the platform if
their ratings drop below a certain level (Rosenblat &
Stark, 2016). On platforms such as Amazon Me-
chanical Turk (Irani, 2015), Uber (Rosenblat &
TABLE 4
Algorithmic Discipline

Algorithmic replacing
  Automatically replacing or removing: Can be used to fire underperforming workers and replace them with others who may better follow managerial directives
  Immediately replacing or removing: Can recruit on a greater scale and at a fraction of the time because workers are more interchangeable and labor is mainly digital
  Example studies: Ajunwa & Greene (2019), Aneesh (2009), Beunza & Millo (2015), Borch & Lange (2016), Cherry & Aloisi (2018), De Stefano (2015), Ha-Thuc et al. (2016), Irani (2015), Jackson (2019), Jarrahi et al. (2019), Kittur et al. (2011, 2013), Lange et al. (2016), Lee et al. (2015), Lenglet (2011), Lenglet & Mol (2016), MacKenzie (2018), Rahman (2019), Retelny et al. (2014), Rosenblat & Stark (2016), Shapiro (2018), Sundararajan (2016), Valentine et al. (2017)

Algorithmic rewarding
  Interactively and dynamically rewarding: Can provide rewards in real time for behaviors that comply with predefined correct behaviors
  Gamifying rewards: Can use the principles of game design to make the affective experience of work more positive and fun for employees
  Example studies: Bogost (2015), Deterding et al. (2011), Edery & Mollick (2009), Irani (2015), Ivanova et al. (2018), Kerfoot & Kissane (2014), Kim (2018), Lehdonvirta (2018), Liu et al. (2018b), Mollick & Rothbard (2014), Petre (2018), Rahman (2017), Rosenblat & Stark (2016), Shapiro (2018), Stanculescu et al. (2016), Walz & Deterding (2014)

Potential worker experiences
  Precarity: Precarity can be greater for low-skilled workers, especially if they work for organizations that use platforms that allow for automatic replacement
  Frustration and stress: Intentional secrecy of the rewarding system and rapid responsiveness of the rewards may lead to worker frustration and stress
  Example studies: Aneesh (2009), Barley et al. (2017), Bergvall-Kåreborn & Howcroft (2014), Corporaal & Lehdonvirta (2017), Dourish (2016), Graham et al. (2017), Gray et al. (2016), Irani & McClelland (2012), Kleemann et al. (2008), Kittur et al. (2011), Martin et al. (2014), Postigo (2016), Rahman (2019), Raval & Dourish (2016), Retelny et al. (2014), Schenk & Guittard (2011), Schwartz (2018), Silberman et al. (2010), Valentine et al. (2017)
Stark, 2016), and Caviar (Shapiro, 2018), workers
who did not comply with directives were either
banned from the platform or punished by making
their profiles extremely difficult to find. For exam-
ple, Upwork workers who were regularly submitting
proposals but not winning projects had their free-
lance accounts closed (Jarrahi et al., 2019). Uber
drivers were instantly penalized for rejecting orders
or not following detailed guidelines provided by
complex feedback systems (Cherry & Aloisi, 2018; De
Stefano, 2015). Drivers with a low average passenger
rating and acceptance rate were subject to immediate
deactivation on ride-sharing platforms (Lee et al.,
2015; Rosenblat & Stark, 2016).
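A minimal sketch of such an automated deactivation rule is shown below. The cutoff values are hypothetical and chosen only for illustration; actual platform thresholds and logic are proprietary and vary.

```python
# Hypothetical platform cutoffs (illustrative; real thresholds are proprietary)
MIN_AVG_RATING = 4.6       # average passenger rating on a 5-point scale
MIN_ACCEPTANCE_RATE = 0.8  # share of trip requests accepted

def should_deactivate(avg_rating: float, accepted: int, offered: int) -> bool:
    """Flag a driver for automatic deactivation when either metric falls
    below the platform cutoff, with no human review in the loop."""
    acceptance_rate = accepted / offered if offered else 1.0
    return avg_rating < MIN_AVG_RATING or acceptance_rate < MIN_ACCEPTANCE_RATE

# Example: a 4.5-rated driver who accepted 70 of 100 trip requests
print(should_deactivate(4.5, accepted=70, offered=100))  # True
```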
Second, in contrast to past forms of technical and
bureaucratic control, organizations can recruit
workers on a greater scale and in a fraction of the
time recruiting used to take (Kittur et al., 2013;
Sundararajan, 2016; Valentine et al., 2017). In terms
of the scope at which workers can be replaced, al-
gorithmic replacement can be more far-reaching,
especially on on-demand platforms, which allow for
the recruiting of workers globally as well as up and
down the occupational hierarchy (Aneesh, 2009;
Kittur, Smus, Khamkar, & Kraut, 2011; Retelny et al.,
2014; Valentine et al., 2017). Rather than relying on
managers to recruit workers, predictive analytics can
also be built into hiring tools so that replacement
is accomplished more quickly than in the past
(Salehi, Irani, Bernstein, Alkhatib, Ogbe, & Milland,
2017). For example, employers have used hiring
platforms such as Equifax, Kronos, SnagaJob, and
Recruit that required workers to submit their work history,
identification information, and schedule availabil-
ity; workers needed to agree to do background checks
and participate in lengthy personality and skill as-
sessments so that the algorithmic software could
automatically process and sort applicants according
to the employer's criteria (Ajunwa & Greene, 2019).
Algorithms can also be used to replace highly skilled
workers (Beunza & Millo, 2015; Borch & Lange, 2016;
Lange, Lenglet, & Seyfert, 2016; Lenglet, 2011;
Lenglet & Mol, 2016; MacKenzie, 2018). For in-
stance, recruiters using LinkedIn could enter the
search criteria including one or several examples
of ideal candidates for the position (e.g., existing
members of the team), instead of needing to construct
complicated queries describing hiring criteria;
LinkedIn automatically built a query from the ideal
candidates and then retrieved and ranked results for
recruiters (Ha-Thuc et al., 2016). Finally, algorithms
can be used to recruit workers in thin labor markets
(Jackson, 2019). For instance, platforms dedicated to
the recruitment of underrepresented candidates
(e.g., women and racial minorities) can help com-
panies find high-quality, high-skill workers faster
and more efficiently than the traditional recruiting
model.
In comparison to technical replacement, algo-
rithmic replacement can result in greater precarity for
less skilled workers (Aneesh, 2009; Kittur et al., 2011;
Retelny et al., 2014; Valentine et al., 2017). Workers
currently employed by organizations using platforms
such as Upwork and AMT could have their work out-
sourced at any time (Barley et al., 2017). Even tradi-
tional organizations have been shown to use platforms
such as these to source on-demand work d irectly
from freelancers, creating the threat of immediate
replacement for existing workers (Corporaal &
Lehdonvirta, 2017; Howe, 2006; Schenk & Guittard,
2011). Workers have limited options for dissent be-
cause the global supply of workers is high and because
there are currently three times as many contractors as
clients on many labor market platforms (Bergvall-
Kåreborn & Howcroft, 2014; Graham, Hjorth, &
Lehdonvirta, 2017; Silberman, Irani, & Ross, 2010).
Many platforms treat workers interchangeably, and
platforms can often sustain losing those who do not
accept the system's terms (Kleemann, Voß, & Rieder,
2008; Postigo, 2016). However, Wood, Lehdonvirta,
and Graham (2018) note that worker outcomes on these
platforms are divergent according to the type of
workers: workers with specialized skills may gain
even more opportunities, whereas workers with fewer
skills become even more powerless.
Algorithmic rewarding. Algorithmic rewarding is
another mechanism used by managers to discipline
worker behavior. It entails using algorithms to in-
teractively and dynamically reward high-performing
workers with more opportunities, higher pay, and pro-
motions. As with past forms of technical and bureau-
cratic control, algorithmic rewarding uses professional
and material incentives to guide worker behavior.
Algorithmic rewarding systems can also provide
rewards and penalties in real time, for behaviors that
comply with predefined correct behaviors. For ex-
ample, Beunza (2019) described how an algorithmic
system encoded with a set of formal rules rewarded
specialists who followed its rules with additional
stock listings. Algorithmic tools are also being used
to differentiate the performance of workers by de-
partment, who then receive differential rewards
(Kim, 2018; Liu, Huang, & Zhang, 2018b; Payne,
2018). In platform labor markets such as Amazon
Mechanical Turk (Irani, 2015), Uber (Rosenblat &
Stark, 2016), Cavi ar (Shapiro, 2018), and others
(Rahman, 2019), workers who complied with algo-
rithmic assignments were immediately rewarded
with more work, higher pay, and increased flexibil-
ity. In particular, managers have often used algo-
rithmic rewarding to enhance one of the gig
economy's main selling points: work-shift flexibil-
ity and worker self-determination in scheduling
(Ivanova et al., 2018). For instance, Amazon Me-
chanical Turk's reward structure used finely grained
contingent payment; whereas the great majority of
tasks provided modest rewards, amounting to
$1-$2/hour on average, a small fraction of tasks
provided much more, sometimes as much as $10-
$20/hour. These "jackpot" tasks appeared only oc-
casionally and tended to be quickly taken. Workers
could thus gamble with their time, foregoing modest
but certain rewards for a chance to earn bigger re-
wards (Lehdonvirta, 2018).
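A stylized expected-value calculation clarifies why this reward structure invites gambling with time. All numbers below are invented for illustration and are not drawn from Lehdonvirta (2018).

```python
# Stylized comparison of a "certain" strategy vs. gambling on jackpot tasks.
# All rates and probabilities are hypothetical, for illustration only.
CERTAIN_RATE = 1.5     # $/hour from steadily completing modest tasks
JACKPOT_RATE = 15.0    # $/hour while working a rare, well-paid task
P_FIND_JACKPOT = 0.2   # chance that an hour of searching lands a jackpot task

# Expected hourly earnings when gambling: most search hours yield nothing
expected_gamble = P_FIND_JACKPOT * JACKPOT_RATE + (1 - P_FIND_JACKPOT) * 0.0
print(expected_gamble)  # 3.0: higher in expectation than 1.5, but highly variable
print(CERTAIN_RATE)     # 1.5: lower, but certain
```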
Like previous forms of control, managers may allow
workers to game algorithmic rewards as a way to
manufacture consent (Burawoy, 1979; Roy, 1959).
Yet, in contrast to past systems of control, algorithmic
control can explicitly rely on the managerially im-
posed gamification of rewards to make the affective
experience of work more positive and fun for em-
ployees (Deterding, Khaled, Nacke, & Dixon, 2011;
Edery & Mollick, 2009; Mollick & Rothbard, 2014;
Petre, 2018; Walz & Deterding, 2014). Nike, Google,
Microsoft, Deloitte, Amazon, Samsung, Target, Dis-
ney, and many other large corporations have embed-
ded the methods of game design in their day-to-day
business processes (Kim, 2018). They have relied on
smartphone-based apps, scoreboards, and video/app
game elements such as digital points and badges to
promote the structure, look, and feel of a designed
game with the intent of advancing employer goals (Liu
et al., 2018b; Stanculescu, Bozzon, Sips, & Houben,
2016). For example, one employer used a basketball-
themed game to algorithmically reward its salespeo-
ple for closing deals with customers: warm leads
counted as "layups," whereas cold calls were "jump
shots," and large display screens throughout the
office floor showed basketball-based animations track-
ing the game status (Mollick & Rothbard, 2014).
Gamification can also be used to encourage unre-
munerated work by both external and internal work-
ers (Edery & Mollick, 2009). For example, Google used
the ESP game, which matches two players to com-
pete against one another, to motivate external workers
to label online images for free (Von Ahn, Maurer,
McMillen, Abraham, & Blum, 2008). Similarly, Lloyds
TSB bank used virtual stock market games to en-
courage bankers to develop and submit innovation
proposals (Mollick & Werbach, 2015), and IBM added
point- and level-based virtual reward systems to mo-
tivate employees to contribute to its internal knowl-
edge management system (Farzan, DiMicco, Millen,
Dugan, Geyer, & Brownholtz, 2008). U.S. hospitals
have also used gamification to motivate surgical
trainees to spend more practice hours on a simulator
to improve their skill level in minimally invasive
surgeries (Kerfoot & Kissane, 2014).
In comparison to bureaucratic rewarding, algorith-
mic rewarding through gamification may compro-
mise workers' capacity to deliberatively set moral and
practical limits for their labor. Ranganathan and
Benson (2017) demonstrate that RFID monitoring
technologies that quantify output in real time can
elicit "accidental gamification" for workers. Gamifi-
cation may also manufacture consent by subtly
transforming games from employee-generated spon-
taneous play into managerially imposed, mandatory
fun (Mollick & Rothbard, 2014). These dynamics
have led Bogost (2015) to argue that gamification is an
exploitative control system.
Algorithmic rewarding can also create greater ex-
periences of frustration and stress for workers, for
two main reasons: the intentional secrecy of the re-
warding system and the rapid responsiveness of the
rewards. Workers on labor market platforms often
expressed suspicion and frustration about opaque
and unclear guidelines regarding accessing and be-
ing paid for work (Martin et al., 2014; Rahman, 2019).
Many online platforms have been shown to keep
their rating and rewarding algorithms secret to dis-
courage manipulation and ratings inflation. For in-
stance, a prominent high-skilled online labor market
switched its rating from a transparent star system to
an opaque system: suddenly, workers had little to no
insight about what they were being rated on, how
exactly the ratings were used, why they were guar-
anteed pay at some times and not others, and why
their designs were sometimes rejected (Dourish,
2016; Rahman, 2019; Raval & Dourish, 2016). In ad-
dition, when employer payment algorithms changed
wages rapidly (Lee et al., 2015; Shapiro, 2018),
workers often did not know why they were experi-
encing the pay changes and had limited recourse to
find out (Rahman, 2017; Raval & Dourish, 2016;
Schwartz, 2018b). Algorithms may also prevent con-
tact with human managers. When an algorithm, in-
stead of a person, is on the other side of a managerial
relationship, it can create an additional obstacle for
workers to question or challenge the directions they
are given or have a say in the labor process (Graham
et al., 2017; Silberman, Irani, & Ross, 2010).
ALGORITHMIC CONTROL AS THE NEW
CONTESTED TERRAIN OF CONTROL: INSIGHTS
AND RESEARCH AGENDA
Our review has identified specific ways that em-
ployers are using algorithms to control worker be-
havior. Most generally, we see that algorithmic
control plays out familiar themes from labor process
theory around managers utilizing technological sys-
tems to pursue economic value and increase their
control over workers. In this section, we elaborate four
key insights about how algorithmic control is a new
contested terrain of rational control (see Figure 2). We
discuss 1) how labor process theory helps to prob-
lematize the predominant research focus to date on
the economic value of algorithms; 2) how algorithmic
technologies facilitate employers' constant reconfi-
guring of control systems, ushering in a novel form of
rational control that is distinct from the technical and
bureaucratic control used by employers for the past
century; 3) how algorithmic occupations represent
an emerging landscape for the control-resistance di-
alectic; and 4) how what we call "algoactivism" tac-
tics allow for individual and collective resistance of
algorithmic control. Taken together, these themes
reveal the contested terrain of algorithmic control
and chart an agenda for future research.
Problematizing the Predominant Research Focus
on the Economic Value of Algorithms
Our first insight related to algorithmic control is a
problematization of the existing research focus on
the economic value of algorithmic systems. To date,
most of the research on algorithms in organizational
strategy, economics, information systems, and
human-computer interaction has emphasized how
algorithms can facilitate and improve decision-
making, coordination, and learning. In this view,
algorithmic systems allow actors to optimize orga-
nizational and economic goals. Our application of
a labor process perspective makes three distinct
contributions.
Algorithmic systems as contested instruments
of control. Applying a labor process perspective to
the dominant understanding of algorithms draws
attention to the structurally antagonistic character
of employer-worker relations. It allows us to un-
derstand algorithmic systems not as neutral tools
that facilitate efficiency and improve communica-
tion exchanges, but as contested instruments of
control that carry specific ideological preferences
(Winner, 1980). In this view, algorithmic systems
are not merely encoded with technical information
embedded through rules and routines; instead,
FIGURE 2
New Insights and Future Directions

[Figure 2 lists the four themes developed in this section and their components: (1) Problematizing the predominant focus on the economic value of algorithms: a new view of algorithmic systems as contested instruments of control; a new mechanism for action, obscuring and securing surplus value; new important outcomes, worker experiences and livelihoods; and variation across organizations and individuals. (2) Algorithmic control in historical perspective: algorithmic comprehensiveness, instantaneity, interactivity, opacity, and the disintermediation of managers. (3) Mapping the emerging landscape of algorithmic work and occupations: algorithmic curation, algorithmic brokerage, and algorithmic articulation. (4) Algoactivism, individual and collective resistance of algorithmic control: individual resistance via practical action; platform organizing; discursive framing about algorithmic fairness, accountability, and transparency; and legal mobilization around employee privacy, managerial surveillance, discrimination, and data ownership.]
algorithms are often created and implemented based
on the interests of powerful actors. As such, algo-
rithmic systems tend to give employers dispropor-
tionate access to key resources in the workplace.
Mechanism for action: obscuring and securing
surplus value. Our application of Edwards' frame-
work of direction, evaluation, and discipline reveals
how employers may use algorithms to secure a share
of capital from workers exertions while obscuring
their methods for doing so; this may, in turn, help to
prevent or stall worker contestation. According to
labor process theory, due to the relative auton-
omy of the labor process, a key challenge for em-
ployers is the activation of labor effort. Employers
often want to keep the share of capital that labor re-
ceives low, yet also seek to secure this surplus value
with minimal conflict (e.g., Burawoy, 1979). Em-
ployers can use algorithms to obscure how they ex-
tract surplus value from workers and divert workers'
attention from the actual distribution of gains to less
contentious objects (e.g., Chai & Scully, 2019).
In this view, information asymmetries are not
random: instead, they are deliberately created by
employers to constrain workers' choices and control
workers' ability to contest the distribution of surplus
value (e.g., Felstiner, 2011; Howcroft & Bergvall-
Kåreborn, 2019). The opaque nature of algorithmic
control can allow employers to track what workers
are doing but limit workers' understanding of em-
ployers' strategies. When employers perpetuate the
narrative that algorithmic control systems are fully
automated, they may be deliberately underplaying
their role in calibrating and intervening in the sys-
tem's architecture, nudges, and sanctions; this in-
visibility may make it harder for workers to find a
relevant target for contestation (e.g., Lee et al., 2015;
Rosenblat, 2018; Veen et al., 2019).
Important outcomes: worker experiences and
livelihoods. A labor process perspective on algo-
rithms at work also draws attention to employees
working conditions and livelihoods. Scholars of
organizational strategy, economics, information sys-
tems, and human-computer interaction have primar-
ily focused on the efficiency and organizational goal
attainment made possible by the use of algorithmic
systems, but have largely ignored the topic of how
employers' use of algorithms may negatively affect
workers. In fact, when studies in these literatures have
addressed worker experiences, they have frequently
emphasized primarily the positive worker outcomes
associated with the use of algorithmic systems, high-
lighting how this use may enable geographically dis-
persed people to come together (Brabham, 2013); give
workers high levels of flexibility and autonomy
(McAfee & Brynjolfsson, 2017); create better matching
between the supply and demand of worker skills
(Kittur et al., 2013); and heighten inclusivity by offer-
ing better opportunities to workers whose availability
or mobility prevents them from working regular hours
(Valenduc & Vendramin, 2016).
Our use of labor process theory leads us to high-
light some of the negative effects that algorithmic
control may have on workers (see also Chai & Scully,
2018; Griesbach, Reich, Elliott-Negri, & Milkman,
2019; Vallas, 2019; Vallas & Kovalainen, 2019). For
example, platform workers may become hypervigi-
lant, spending many hours sorting through tasks and
being on call day and night, because most micro-task
platforms only allow workers to pick up jobs on a
first-come, first-served basis (Gray & Suri, 2019). In
addition, workers on these platforms can lose their
jobs and wages, with no explanation and no oppor-
tunity to appeal the cancellation of their accounts
(Martin et al., 2016; Rahman, 2018). Labor precarity
for low-skilled workers can increase when re-
cruitment is global and instantaneous (Brooks, 2012;
Cherry, 2015). Finally, although platforms may af-
ford workers high levels of flexibility, autonomy,
and task variety, these benefits are often coupled
with low pay, social isolation, irregular work hours,
and exhaustion (Wood et al., 2019).
Variation across organizations and individuals.
Yet, although a labor process perspective draws at-
tention to how algorithmic control can result in
negative outcomes for workers, studies have also
shown that there is variation in worker outcomes
across organizations and individuals (Christin, 2017;
Griesbach et al., 2019; Lehdonvirta, 2018). Organi-
zations can facilitate more positive outcomes for
workers both through informal managerial practices
and through formal structuring of the work process.
For example, regarding informal managerial prac-
tices, Kessinger and Kellogg (2019) demonstrate how
managers in a digital marketing agency softened the
edges of algorithmic evaluation by engaging in re-
lational work with employees who were subject to
algorithmic recording; this reduced employee stress
and encouraged employee learning.
Regarding formal structuring of the work process,
Lehdonvirta (2018) shows how three micro-work
platforms deployed different algorithmic control
regimes despite offering similar types of work. Al-
though MTurk was fashioned as a task marketplace
where unbridled competition between workers
resulted in workers having to be constantly on call,
CloudFactory was designed after a more orderly
assembly line image, applying technical controls on
workers' task throughput. This reduced competition
between workers and allowed them to choose their
working hours more freely. In another example of
deliberate structuring of the work process, Corporaal,
Windwehr, and Lehdonvirta (2019) demonstrate that
employers can use algorithmic technologies to create a
predictable and explicit means for workers to engage
in internal dispute resolution; they detail how em-
ployers using relationship-driven dispute resolution
and prevention practices can actually demonstrate
less adherence to due process criteria than do em-
ployers using algorithmic technologies. Gray and
colleagues highlight several other ways that em-
ployers can structure the work process to facilitate
more beneficial outcomes for workers. First, em-
ployers can create two distinct streams of crowd-work:
one explicitly available for group collaboration
(e.g., sales lead verification) and the other requiring
individual work (e.g., survey responses where in-
dependent results are required for validity); this can
allow workers to collaborate when collaboration does
not run counter to requesters' desired outcomes (Gray,
Suri, Ali, & Kulkarni, 2016). Second, companies can
"taskify" management by turning affirmation and
training into paid tasks. For example, the LeadGenius
platform included real-time chat tools that allowed
groups to speak directly with other crowd-workers
assigned to the same tasks. Workers were able to ask
one another for help, keep each other company, and
contact junior managers to answer questions during
their scheduled work shifts. Team leaders and junior
managers were paid for the time that they spent
checking the quality of crowd-workers' tasks and an-
swering crowd-workers' questions (Gray & Suri, 2019).
In addition to variation across organizations,
scholars have shown variation across individuals
regarding how they experience algorithmic control.
For example, Cameron (2018) finds that some Uber
drivers felt that these systems afforded them auton-
omy by allowing them to make choices at each stage
in the work process so that they could maximize
earnings and create a continuous stream of work
from discontinuous tasks. Other scholars, too, have
highlighted that some workers appreciate the high
levels of flexibility, autonomy, task variety, and task
complexity that algorithmic control can afford
(Griesbach et al., 2019; Wood et al., 2019). Workers
may also vary in how they come to understand their
new work environment in the absence of traditional
socializing agents such as managers or coworkers,
with some seeing their employers as allies rather
than adversaries (Cameron, 2019). Finally, worker
experiences may vary according to country.
Lehdonvirta et al. (2019) demonstrate that although
clients on crowd-work platforms initially often dis-
criminated against workers from lower income
countries, employer provision of data on worker
quality allowed workers to eventually prove their
quality to prospective clients and thus overcome
discrimination based on country stereotypes.
Future research on the economic value of
algorithms. This variation in worker outcomes
across organizations and individuals raises ques-
tions for future research around what employers can
do to mitigate negative worker outcomes associated
with algorithmic direction, evaluation, and disci-
pline. Because these studies demonstrate that nei-
ther the technologies themselves nor the type of work
dictates the ways that employers use algorithmic
control systems, what factors do shape this? Can
employers using algorithmic technologies imple-
ment novel informal manager practices and formal
work structures that result in more beneficial out-
comes for workers across industries and geogra-
phies? And, can employers design these systems
with an understanding of how different types of
workers may have different needs?
In addition, firms implementing new technologies
have been shown to benefit when they incorporate
worker voice during technology deployment (e.g.,
Gittell, 2016; Kellogg, 2018; Litwin, 2011; Valentine,
2018), invest in worker training to integrate the
technologies into their workflow (Adler, Goldoftas,
& Levine, 1999; Kellogg, Myers, Gainer, & Singer,
2020; Kochan, Adler, McKersie, Eaton, Segal, &
Gerhart, 2008), and partner with postsecondary edu-
cation providers to teach workers the necessary skills
to use the technologies (Lowe, Goldstein, & Donegan,
2011; Osterman, 2011). In the context of algorithmic
technologies, how can employers promote worker
voice during technology design and implementation
to shape worker experiences and livelihoods in more
positive ways? How can they provide training to give
workers the skills they need to work with these tech-
nologies? And, how can employers partner with
community colleges, apprenticeship programs, and
sectoral training programs to recruit and retain a
workforce that can skillfully use these technologies
while also helping workers to increase their long-term
employment and earnings prospects?
Algorithmic Control in Historical Perspective
Our second insight related to algorithmic control
as a new contested terrain is our elaboration of the
key similarities and differences between algorithmic
control and the two primary forms of rational
control (technical control and bureaucratic control)
that have been used by employers over the course
of modern industrial history. To synthesize these
differences, we draw on the four affordances of
algorithms introduced earlier (comprehensiveness,
instantaneity, interactivity, and opacity), to which we
add another key difference: facilitation of the disin-
termediation of managers. Although we briefly ad-
dress these five differences, we call for more research
on additional affordances of algorithms, as well as on
the relationship between the rational and normative
aspects of algorithmic control.
Algorithmic comprehensiveness. Worker activi-
ties can be more constrained under algorithmic
control than under previous regimes of rational
control because algorithmic control can be more
comprehensive in terms of how it directs, evaluates,
and disciplines workers. As in technical and bu-
reaucratic control, workers can be monitored, but as
we saw, worker behaviors that were previously not
directed can now be subject to algorithmic recom-
mendation. Consider for instance how work collab-
oration can be heavily guided using algorithms.
Under technical and bureaucratic control, social in-
teractions and peer collaboration between workers
have been hard to direct (e.g., Beane, 2019;
Bernstein, 2012). On factory floors, interactions
between workers have often served as spaces of
resistance in which workers have contested mana-
gerial goals and methods (e.g., Morrill, Zald, & Rao,
2003). And, in professional workplaces, managers
have historically relied on subjective evaluations to
reward or sanction professional workers. For in-
stance, Karreman and Alvesson (2004) describe how
a bureaucratic control system for management con-
sultants that directed workers to collaborate with
team members was only loosely coupled with eval-
uation and discipline because collaboration was
hard to measure.
Under algorithmic control, however, even collab-
oration is an activity that can be specifically evalu-
ated, directed, and disciplined, as illustrated by the
DreamTeam system (Zhou, Valentine, & Bernstein,
2018a), the GroupGroup interface (Lix et al., 2019), or
the Chorus.ai system (Bock, 2015). On these plat-
forms, algorithms and bots have measured group af-
fect and the interpretive diversity of ideas being
expressed. The bots have then directly advised the
teams to pause and have a democratic decision-
making process or to be aware that their language
use was becoming increasingly divergent. As these
examples indicate, algorithmic control can encroach
on domains that were previously used by workers for
resistance and pushback, ushering in a new contested
terrain of control. Indeed, when U.S. Transportation
Security Administration workers engaged in invi-
sibility practices to attempt to go unseen, managers
responded by heightening their surveillance, thus
creating a self-fulfilling cycle of coercive surveillance
(Anteby & Chan, 2018).
Algorithmic instantaneity. We also find that al-
gorithmic control can be more instantaneous and
individualized than previous regimes of control. As
we saw throughout the 6Rs, algorithms can pro-
vide real-time and personalized nudges, rewards,
and penalties. These affordances may transform
some of the structural mechanisms through which
control operates. Under previous regim es of techni-
cal and bureaucratic control, employers relied on
slower paced, one-size-fits-all systems to make their
workers more productive. Under technical control,
employers used machines and assembly lines to set the
pace, together with piece-rate rewards that evolved
every couple of months (Roy, 1952). Under bureau-
cratic control, firms primarily relied on in-
stitutionalized systems of rules, wage tables, and
advancement guidelines, which remained largely
stable over time (Gouldner, 1954).
Algorithmic control, where real-time and in-
dividualized nudges and penalties have become in-
creasingly common, represents a large shift. For
instance, automotive production plants now often
rely on collaborative robots (cobots), which record
data from every person in a similar role interacting
with the same robotic interface across dozens of
factories. The cobots automatically update their in-
teractions depending on patterns identified by data
mining algorithms (Sachon & Boquet, 2017), and pair
these data with constraints and rewards that tend to
be more immediate, dynamic, and personalized than
the static, one-size-fits-all rewards used under tech-
nical and bureaucratic control. This, in turn, can
transform the modalities of worker resistance.
Whereas previous systems of control allowed col-
lectives of workers to organize and share resistance
tactics over time, especially regarding shared re-
wards and penalties, algorithmic control can make
such initiatives and contestations harder to achieve.
Algorithmic interactivity. Compared with tech-
nical and bureaucratic control, algorithmic control
can tighten the power of managers over workers by
facilitating interactive and crowdsourced data and
procedures. As we saw in the 6Rs, organizations
can capture data from external as well as internal
sources; this, in turn, can affect worker experiences
in negative ways. Take the example of the hospitality
industry. Historically, under bureaucratic control,
hotel managers looked at worker productivity, bud-
get compliance, and adherence to operational effi-
ciency targets to measure efficiency, but they lacked
closed-loop analyses for controlling specific factors
that caused poor performance (Moreo, 1980). Com-
pare this with hotel managers who monitored online
comments and ratings on TripAdvisor and related
platforms to evaluate the performance of their em-
ployees (Orlikowski & Scott, 2014) or Airbnb hosts
who spent 30 minutes a day changing the name of
their profiles with the hope of showing up in more
searches by customers (Jhaver et al., 2018); under
algorithmic control, managers can get interactive
and crowdsourced data that they can use to address
variation in worker performance.
The interactive affordances of algorithms and their
ability to gather both internal and external evaluation
data can further constrain the activities of workers in
two main ways. First, because raters can be both in-
ternal and external to the organization, there are often
inconsistent criteria for ratings. Thus, workers have
been shown to multiply efforts to satisfy both external
and internal criteria that often diverge (Orlikowski &
Scott, 2014). Second, because external ratings often
depend on when customers next open the website or
app, there can be erratic time intervals between ser-
vice delivery and ratings, which can make it difficult
for workers to understand or contest their perfor-
mance assessment (e.g., Rosenblat, 2018).
Algorithmic opacity. Compared with previous
regimes of control, algorithmic control is often more
opaque in terms of how it directs, evaluates, and
disciplines workers. As we saw in the 6Rs, workers
often do not fully grasp how algorithms are being
used to direct, evaluate, and discipline them
(e.g., Burrell, 2016). Managers often rely on algo-
rithmic direction through nudges that are un-
obtrusively incorporated in interfaces, and so may
not be easily noticed by workers, even as they have
powerful effects. Similarly, managers can engage in
algorithmic evaluation by capturing data not only on
workers' workplace behaviors but also on their per-
sonal lives; workers are often not informed about the
existence and purpose of such data collection. In
terms of disciplining, platform employers can use
algorithmic replacing to automatically kick workers
off the platform if their ratings drop below a certain
level, without always making it clear to workers why
they have been removed. Finally, employers using
algorithmic rewarding often keep their algorithms
secret to discourage manipulation and ratings in-
flation, which gives workers limited transparency
into why work is rejected or why they are guaranteed
pay at some times and not others.
Because of these multiple layers of opac ity, algo-
rithmic control may encroach on procedural due
process, that is, the constitutional requirement that
any government deprivation of a liberty or property
right must be preceded, at a minimum, by notice
and the opportunity for a hearing on the matter be-
fore an impartial adjudicator (Crawford & Schultz,
2014: 111). Under the assumption of due process,
workers should be warned about changes that could
impact their liberty or property rights; they should
also have a chance to contest such decisions. With
algorithmic control, however, there is frequently no
procedure in place for workers to get access to, con-
test, or challenge algorithmic decisions (Wexler,
2018). This is different from previous instantiations
of bureaucratic control, in the sense that the mere
existence of standardized rules and publicly avail-
able guidelines typically increases the transparency,
reliability, and predictability of organizational sys-
tems; of course, whether such standardized rules
and guidelines actually increase workers rights is
another question (Blau, 1955).
Disintermediation of managers. In addition to
these four affordances, our review revealed another
key difference between algorithmic control and prior
forms of rational control: algorithmic systems en-
able the disintermediation of managers around the
direction, evaluation, and disciplining of workers.
Traditionally, scholars have pointed to how imper-
sonal rules can make bureaucratic control feel
inhumane and even imprisoning (Weber, 1947). In-
terestingly, however, many of the studies in our re-
view highlight that technical and bureaucratic
regimes of control also included human decisions
that could be made with varying degrees of discre-
tion. The ability for workers to appeal to a human
decision-maker means that bureaucratic systems, in
many ways, allowed for more leeway than algorith-
mic systems that may remove human decision-
making altogether from control structures. In many
ways, algorithmic control at its most extreme is a polar
opposite to some firms' attempts to leverage com-
munication technologies to make managers more ac-
countable to and in greater dialogue with workers
(Turco, 2016). When managerial decisions are fully
automated, there are fewer opportunities for workers
to appeal to the empathy of human decision-makers,
and often fewer rule exceptions granted (Aneesh,
2009; Lee et al., 2015; Schildt, 2017).
Gray and Suri (2019) introduce the label "algo-
rithmic cruelty" to describe fully automated
decision-making that can materially impact workers'
payment or future opportunities. Such algorithmic
cruelty comes with additional constraints on
workers' activities. In particular, when managers
are disintermediated, workers cannot question their
punishments and rewards; they have limited re-
course to find out why they are experiencing pay
changes or have been automatically replaced
(Rahman, 2019; Raval & Dourish, 2016; Schwartz,
2018b). Workers on these platforms also often have
no one to help them understand a problem they are
trying to solve, or give them any feedback on what
worked and did not work (Gray et al., 2016; Martin
et al., 2014; Schwartz, 2018). As we saw in the 6Rs,
this is often a source of worker frustration, anxiety,
and stress.
Future research on algorithms and control.
Expanding on recent work mentioning the develop-
ment of an "algorithmic cage" (Faraj et al., 2018;
Rahman, 2019), our review demonstrates that
algorithmic control can be more encompassing,
instantaneous, interactive, opaque, and disin-
termediating than the historical regimes of control
that employers have used over the past two centu-
ries. What are the consequences of removing man-
agers (and human supervision in general) from the
scene of work (Lindebaum, Vesa, & den Hond, 2020)?
Who is accountable and responsible when things go
wrong, and what are some potential mechanisms for
holding actors accountable? Future resear ch should
examine the consequences of such developments for
workers' well-being and privacy (Fox, Howell,
Wong, & Spektor, 2019). For instance, it is unclear
how algorithmic opacity affects workers' identities
and performance. Does it necessarily create a climate
of fear, passivity, and frustration? Is the effect mod-
erated by the level of support that workers perceive
to be receiving from their supervisors (Bernstein & Li,
2017)? Or can algorithmic control lead to the emer-
gence of novel "algorithmic imaginaries" (Bucher,
2017), new values, institutions, and symbols re-
lated to algorithms through which people define
their work-related identities and collectives, that
change workplace dynamics in unexpected ways?
This in turn opens up an important avenue of re-
search about the connections between the rational
and normative aspects of algorithmic control.
Whereas this review focuses on algorithmic control
as a rational form of control, many aspects also carry
normative implications. For instance, gamification,
symbolic rewards, and real-time surge dynamics
impact the affective experiences of workers, seeking
to win their hearts and minds through feelings of
fun and excitement (e.g., Gerber & Krzywdzinski,
2019; Griesbach et al., 2019). Future resear ch should
explore how such rational and normative features
may alter or reinforce algorithmic control.
Mapping the Emerging Landscape of
Algorithmic Occupations
A third insight related to algorithmic control as
a new contested terrain relates to what we refer to
as "algorithmic occupations." When employers de-
velop algorithms to automate various kinds of work,
some jobs and tasks are eliminated (e.g., Benzell,
Kotlikoff, LaGarda, & Sachs, 2015; Brynjolfsson &
McAfee, 2014; Sachs & Kotlikoff, 2012). But existing
studies consistently show that employers use of algo-
rithms can also create or reconfigure forms of work
(e.g., Anteby, Chan, & DiBenigno, 2016; Autor, 2015a,
2015b; Davenport & Kirby, 2016). Some of the new
work emerges because most computational tools are
not "off the shelf" or "plug and play" technologies,
despite the dominant rhetoric; they require consid-
erable work to develop, fine-tune, implement, main-
tain, and change over time (e.g., Sachs, 2019;
Shestakofsky, 2017). Our review draws attention to
how these occupational developments may affect the
control-resistance dialectic. Employers may develop
and fund new or reconfigured occupational work to
strengthen algorithmic control, but this work may also
become an active area for worker agency. Here, we
highlight three kinds of occupational work emerging as
part of the dialectic of algorithmic control and re-
sistance: algorithmic curation, algorithmic brokerage,
and algorithmic articulation.
Algorithmic curation. As organizations pursue
the collection, analysis, and deployment of addi-
tional varieties of data about customers and
workers' activity, they also create a novel type of
work, which is the curation of these data for them to
be useful to managers. Curation is not a new phe-
nomenon: from internal librarians to laboratory
technicians, workers have long engaged in cleaning
data and interpreting quantitative results for their
employers (e.g., Bechky, 2019; Nelson & Irwin,
2014). Yet, the kind of curation work that is emerg-
ing under algorithmic control is distinct from pre-
vious forms of curation in at least two ways.
First, many employers use rhetoric around artifi-
cial intelligence that suggests that it is fully auto-
mated, meaning that it is a technical system with no
humans in the loop (Danaher, 2016), even though
human curation remains essential to make most al-
gorithmic technologies function correctly (e.g., Pine,
Wolf, & Mazmanian, 2016). Employers tend to ex-
ternalize curation work, which is typically staffed by
contingent workers, who have been characterized as
"ghost workers" or "crowd-workers" (Gray & Suri,
2019; Kittur et al., 2013). Some employers treat these
algorithmic curators as interchangeable by setting up
systems that make the workers as replaceable as
possible, so that their particular skills or social con-
nections are not relevant. Relatedly, major social
media platforms tend to outsource the curation
of social media posts to subcontracting companies
where workers with low pay and no benefits manu-
ally delete offensive content (e.g., Common, 2019;
Gillespie, 2018; Lintott & Reed, 2013). However, in the
new contested terrain of control, just as employers
may use curation work to strengthen their control of
workers, so workers in these contingent, low-paid
jobs may push back. For instance, on one mainstream
social media platform, algorithmic curators ex-
changed and publicized guidelines and priorities
that the platform had attempted to obscure (Gray
et al., 2016; Martin et al., 2014; Schwartz, 2018a).
In addition, algorithmic curation is more in-
teractive than previous forms of curation work.
Truelove (2019) showed this in her study of an ad-
vertising firm that engaged external audiences in the
creation and distribution of content using social
media technologies; members of the advertising firm
tracked audience-generated content in real time and
continuously curated it in ways that steered the au-
dience to create content that was desired by the cli-
ent. Even as employers implem ent such interactive
algorithmic curation in an effort to bring internal and
external worker decision-making in line with orga-
nizational goals, workers may introduce consider-
able discretion and agency as they curate algorithmic
data.
Algorithmic brokerage. The adoption and de-
velopment of large data-driven and algorithmic sys -
tems often leads to the creation of another type of
work that we call algorithmic brokerage. Algorithmic
brokers typically seek to communicate the logic and
value of the algorithmic systems to various groups
in the organization. Such brokerage roles are shap-
ing the development of occupations that specialize
in interpreting algorithmic outputs (e.g., Henke,
Levine, & McInerney, 2018). Similar to traditional
brokerage work, algorithmic brokerage involves two
main sets of practices, connecting practices and
buffering practices, to bridge different groups with
disparate expertise, meanings, and status (Barley,
1996; Burt, 1992; Kellogg, 2014; Lingo & O'Mahony,
2010; Obstfeld, 2005).
However, algorithmic brokerage differs from prior
forms of brokerage in several ways. First, the success
of employers' algorithmic control attempts is de-
termined by the degree to which workers change
their workflows to consume algorithmic outputs.
Employers, thus, may hire algorithmic "trainers,"
"explainers," and "sustainers" and "data translators" to
translate, train, and sell other workers on the merits
of the algorithms (Henke et al., 2018; Wilson,
Daugherty, & Morini-Bianzino, 2017). This algorith-
mic brokerage work differs from prior forms of bro-
kerage because it involves brokers trying to sell
workers on accepting algorithmic outputs that are
often putting workers under more comprehensive
control. For example, Karunakaran (2016) demon-
strates how lower status occupations such as crime
analysts in a police department performed important
brokering roles in implementing a predictive polic-
ing technology across the organization and, in the
process, gained additional jurisdiction through their
ability to do the data janitorial work of acquiring,
cleaning, and integrating the different sources of
training data.
Because algorithmic brokerage work involves
social meanings and interactions, it provides a
new terrain for worker agency. For example, in
their ethnographic study of a police organization,
Waardenburg, Sergeeva, and Huysman (2018) find
that the introduction of predictive policing was
followed by the emergence of the occupational
role of "intelligence officer." Whereas the em-
ployer intended for intelligence officers to shape
the work of police officers to comply with the al-
gorithmic outputs, the intelligence officers began
to steer police action based on their own, largely
subjective, interpretations.
Algorithmic articulation. Employers' develop-
ment of algorithmic systems has shaped the emer-
gence of a third kind of occupational work, which we
label algorithmic articulation. Scholars have long
shown that articulation work (Star, 1995; Strauss,
1985), not the work of designing a system or pro-
ducing a product but the surrounding work that
makes it possible, involves a lot of planning and
coordinating about who will be doing what, when,
where, and how, as well as handling missed re-
sponsibilities, unfinished jobs, and all the steps
necessary so that projects do not break down. For
example, Bailey, Leonardi, and Chong (2010) dem-
onstrate how articulation work was needed to con-
nect technologies as well as people, describing it as
"minding the gaps" of technological interdependence
by navigating, bridging, crossing, expanding, and
bypassing the gaps that emerge in all sociotechnical
systems. Under algorithmic control, new occupations
related to the articulation of computational technol-
ogies are emerging. For example, many data-driven
organizations have developed novel divisions of labor
between algorithm developers, platform engineers,
nonalgorithm engineers, user interface designers, user
testing engineers, product developers, and information
technology support staff (Colner, 2018). Members of
each of these groups have performed extensive artic-
ulation work to integrate their own specialized work
with other groups' jurisdictional work. Similarly, dig-
ital consultants and project managers have engaged
in such integrative articulation work as they have
developed and maintained algorithmic systems and
workflows (Shaughnessy, 2018).
Another type of articulation work involves
addressing the failure of algorithmic technologies.
Previous technologies used to fail in relatively pre-
dictable ways, but machine-learning algorithms of-
ten fail in ways that are difficult or impossible to
forecast (Shestakofsky, 2017). Thus, a new form
of articulation work involves handling the un-
predictable failures of algorithmic technology in-
terdependence by applying flexibility, situational
adaptability, creativity, interpersonal interaction, or
persuasion. For example, Gray and Suri (2019) de-
scribe how Uber relied on articulation work to au-
thenticate their drivers. Drivers had to upload photos
of themselves each day; Uber's real-time ID check
algorithm confirmed if the uploaded photo matched
the photo ID on record. But sometimes the algorithms
could not discern if a driver who had shaved his
beard was, in fact, the same driver. In such cases,
micro workers "repaired" (Jackson, 2014) algorith-
mic failure by reviewing the content of the recorded
data to adjudicate whether the photos matched the
driver's identity.
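The repair loop that Gray and Suri describe can be pictured as a simple routing rule between a matching model and a queue of human reviewers. The sketch below is purely illustrative: the function names, the similarity interface, and the 0.90 threshold are hypothetical stand-ins rather than Uber's actual system.

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cutoff for automatic approval

def verify_driver(uploaded_photo, photo_on_record, match_model, review_queue):
    """Approve automatically when the model is confident; otherwise
    route the case to a human reviewer for adjudication."""
    score = match_model.similarity(uploaded_photo, photo_on_record)
    if score >= CONFIDENCE_THRESHOLD:
        return "approved"
    # Unpredictable failures (e.g., a newly shaved beard) fall below
    # the threshold and become micro-work tasks for human repair.
    review_queue.put((uploaded_photo, photo_on_record))
    return "pending_human_review"

The point of the sketch is structural: the articulation work lives in the branch the model cannot resolve, so the volume of human repair scales with the unpredictability of the algorithm's failures.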
For employers, articulation work is necessary to
integrate and streamline algorithmic workflows to
produce economic value in the organization. But,
these novel forms of articulation work also provide
opportunities for workers to contest algorithmic
control. Payoff for employers usually only occurs after
a substantial portion of the employer's sites have
switched to the new infrastructure. For example, a
cloud computing system designed to aggregate global
customer demand only generated analytics useful to
the employer once stores in different countries all
collected the same type of data regularly; this in-
tegration required smoothing differences in existing
employer processes across different regions (e.g.,
Tabrizi, Lam, Girard, & Irvin, 2019). In such situa-
tions, algorithmic articulators have the opportunity to
claim new jurisdictions and push back on employer
control.
Future research on algorithmic occupations.
The emergence of these new forms of algorithmic
occupational work raises several key questions for
future research. Regarding algorithmic curation,
how can workers engaged in the "ghost work" of data
curation creatively adapt or reshape algorithmic
production technologies as they do their work? Are
there policy changes required to support their eco-
nomic security and mobility given such temporary,
part-time, and potentially invisible jobs? Regarding
algorithmic brokerage, future research should ex-
plore the specific work practices involved in bro-
kering algorithmic knowledge across groups. For
example, because of the opacity of most algorithmic
systems, even brokers with specialized training in
computer science may not be able to fully interpret
how the systems work. More needs to be understood
about how such brokers make sense of these systems
and communicate their functioning across constitu-
encies. Regarding algorithmic articulation, future
research should investigate the shape that this work
takes across organizations and fields. For instance,
how can algorithmic failure be addressed proac-
tively through articulation work? Do industries learn
from their mistakes? One potential case study could
be high-frequency trading and the reconfiguration
of articulation work after different flash crashes
(Borch, 2017; Karppi & Crawford, 2016). Finally,
because many of these new occupation members
may occupy lower power "peripheral expert" roles
in organizations (DiBenigno, 2018), future studies
should examine how these experts can influence
others as they engage in such articulation work.
More broadly, future research should explore the
re-skilling involved as organizations and educa-
tional institutions create programs to train members
of these algorithmic occupations. A report by
McKinsey Global Institute estimates that, by 2026, in
the United States alone, the demand for algorithmic
translators will reach two to four million. Training
workers to be technically literate would require the
redesign of the educational system at all levels and the
expansion of on-the-job training in computational
thinking (Wing, 2006). For example, Myers and
Kellogg (2019) detail how state actors and work-
force intermediaries in four U.S. states built more
coordinated workforce development systems state-
wide by spreading career pathways that spanned
from secondary to postsecondary education and in-
volved intermediary organizations and employers.
Kaynak (2019) describes the emergence in the United
States of coding boot camps that have taught web ap-
plication development to individuals with no back-
ground in programming. Similarly, a number of
universities have created research facilitator roles for
cybersecurity experts to guide the work of an ever-
increasing set of researchers using cyberinfrastructure
(CI) resources; CI experts engaged in "care and feeding"
of these users of CI capabilities (Berente et al., 2017;
Berente, Howison, King, Cutcher-Gershenfeld, & Pen-
nington, 2014). More research is needed to understand
the structure, professionalization, and career paths of
these emerging occupations.
Algoactivism: Individual and Collective Resistance
to Algorithmic Control
A final insight related to algorithmic control is
our identification of emerging tactics of resistance,
within and beyond the workplace. Studies of tech-
nical and bureaucratic control have demonstrated
that workers can resist control in a variety of ways,
from individual strategies of resistance to collective
organizing through discursive framing and legal
mobilization (e.g., Morrill et al., 2003). Here, we ad-
vance the concept of algoactivism to both describe
emerging tactics along each of these lines and dis-
tinguish them from prior resistance tactics. We also
suggest areas for future research related to each
kind of resistance.
Individual resistance via practical action. We
find three main individual practical strategies of
resistance: noncooperation, leveraging algorithms,
and personal negotiation with clients. Regarding
noncooperation, workers have long engaged in
noncooperation under regimes of technical and bu-
reaucratic control by carving out psychological, so-
cial, temporal, or physical niches in their workplace
(e.g., Edwards, 1979; Roy, 1952). Under algorithmic
control, workers continue to engage in non-
cooperation, but can now do so in different ways
because of the instantaneous and interactive char-
acter of algorithms. One way they do so is by ignoring
algorithmic recommending or rewarding. For in-
stance, Valentine and Hinds (2019) describe how
fashion buyers resisted the algorithmic recommen-
dations stemming from employer-established rec-
ommendation systems, adapting them to be more
consistent with their own professional experience.
Mollick and Rothbard (2014) show that workers at a
sales company resisted the interactive gamification
designed by their employer by refusing to learn the rules
of the game, suggesting that the games were unfair, and
not playing the games in their daily work. And, Christin
(2017) demonstrates that web journalists and legal
professionals engaged in "foot-dragging" (ignoring
risk scores and analytics systems in their daily
work), "gaming" (manipulating the variables they en-
tered in algorithmic systems to obtain the score that
they desired), and "open critique" (contesting the data
and methods used to build algorithmic systems as
"crude" and problematic). Another way that
workers engage in noncooperation is by disrupting
algorithmic recording. For example, in a study com-
paring criminal courts and police departments,
scholars find that legal professionals and police offi-
cers developed a set of resistance strategies, which they
analyzed as "data obfuscation": making things ob-
scure either by blocking data collection or by pro-
ducing more data (Brayne & Christin, Forthcoming; see
also Levy, 2015). Similarly, Lee et al. (2015) show how
Uber drivers resisted control by turning off their driver
mode when in particular neighborhoods, staying in
residential areas to avoid bar patrons, and frequently
logging off to avoid long trips. And, Lehdonvirta et al.
(2019) find that workers on online labor platforms
assessed clients' past feedback-giving behavior before
accepting contracts, and if bad feedback ratings did
pile up, started afresh with different accounts.
Workers have also been shown to leverage algo-
rithms to resist control. They may reverse engineer
the algorithm that produced the rating to be able to
prioritize the activities that seem to impact the score
(Jhaver et al., 2018; Lix & Valentine, 2019; Rahman,
2017). For example, some Airbnb hosts participated
in online forums, read the company's technical doc-
umentation, and monitored competitors' profiles and
ratings to figure out what characteristics or behaviors
seemed to influence their ratings. Other hosts pre-
ferred long-term guests, but figured out that they
could be penalized for directly declining short-term
guests, so they set filters on their profiles to screen out
short-term guests in ways that the algorithm would
not penalize (Jhaver, Karpfen, & Antin, 2018). Along
similar lines, MTurk workers deployed their own al-
gorithms to try to gain an upper hand against the
platform's control regime. For instance, workers used
scripts that monitored the marketplace and alerted
the worker when suitable tasks became available.
Workers also applied hacks to remove distracting in-
formation from the user interface (Lehdonvirta, 2018).
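The logic of these worker-built tools can be rendered as a short polling loop. This is a generic sketch rather than any specific script's code; fetch_open_tasks, is_suitable, and notify are hypothetical callables that a worker would supply.

import time

def watch_marketplace(fetch_open_tasks, is_suitable, notify, poll_seconds=60):
    # Periodically scan the task marketplace and alert the worker
    # when a suitable task appears; remember alerts already sent.
    seen = set()
    while True:
        for task in fetch_open_tasks():
            if task["id"] not in seen and is_suitable(task):
                notify("Suitable task available: " + str(task["id"]))
                seen.add(task["id"])
        time.sleep(poll_seconds)

Even this minimal structure inverts the usual asymmetry of algorithmic recording: here it is the worker who continuously monitors the platform, rather than the reverse.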
Finally, workers have been shown to resist algo-
rithmic control by personally negotiating with cli-
ents to bypass or alter algorithmic ratings. In one
online marketplace, sellers contacted buyers who
had left a negative evaluation and tried to convince
them to withdraw it (Curchod et al., 2019). In an
online labor market, contractors preemptively asked
clients for guarantees of high ratings as part of the
terms of the contracts, rather than allowing clients to
simply rate the work at the end of the projects; when
problems arose, the contractors often offered to work
for free in exchange for good ratings (Rahman, 2017).
In addition to negotiating reciprocal five-star ratings
with clients and sometimes forgoing payment to
avoid bad ratings, contractors also complained to
platform customer support about unduly low ratings
(Lehdonvirta et al., 2019). In another study of gig
project teams, kharma ratings were negotiated and
used as ultimatums. In one case, a product manager
told his team to "just finish this milestone and I'll
immediately push the button on your kharma score!
(Lix & Valentine, 2019). Such personally negotiated
interactions around algorithmic ratings partly ex-
plain why online labor markets often have ratings
inflation (Filippas, Horton, & Golden, 2018; Horton
& Golden, 2015; Rahman, 2017).
Platform organizing. In addition to individual
strategies, workers can resist through collective ac -
tion. Workers under regimes of technical and bu-
reaucratic control have long organized to protect
their rights (e.g., Cutcher-Gershenfeld & Kochan,
2004; Kellogg, 2011; Roscigno & Hodson, 2004). Yet,
compared with the dense networks of informal social
ties that existed on production floors, workers under
algorithmic control often do not have the same con-
nections: limited, arm's-length, virtual connections
often prevail (Darr, 2018; Massa & O'Mahony, 2015).
In this context, workers have limited power to shape
face-to-face interactions and shopfloor games be-
cause of the control systems' features (Lehdonvirta,
2016). Instead, they have begun to organize using
online forums and platforms and through platform
cooperativism.
A first form of organizing involves the develop-
ment of online forums and platforms dedicated to
workers' empowerment and knowledge sharing. In
such work-oriented online communities, workers
have been shown to help each other learn new sys-
tems and practices, anticipate or avoid disciplinary
processes, regain access when locked out of plat-
forms, identify desirable clients or jobs, or learn how
to smooth their earnings (Martin et al., 2014; Wood
et al., 2019). The blog The Rideshare Guy, for in-
stance, provided guidance and instructions to
drivers around how to maximize their income in
diverse car sharing marketplaces (Campbell, 2018).
Academics and organizers have also designed de-
dicated platforms to allow workers to rate and flag
requesters who have treated them badly. These
platforms include Turkopticon (an activist system
for workers to publicize and evaluate their relation-
ships with employers on Amazon Mechanical Turk)
and Dynamo (a platform for workers to gather, gain
critical mass, and mobilize) (Gray et al., 2016; Martin
et al., 2014; Schwartz, 2018a). Along similar lines,
Peers.org offered a system for pooling multiple
accounts; Guild was an insurance group that ne-
gotiated between major insurance companies and
on-demand platforms; and Zen99 designed an
all-in-one dashboard that helped 1099 workers orga-
nize finances, taxes, and insurance policies (Aloisi,
2015).
Such forums and platforms can help workers ad-
dress the lack of voice and information asymmetries
that are often associated with algorithmic control in a
variety of ways. In some cases, workers have collec-
tively engaged in tasks that are somewhat in line with
managerial goals, such as on-boarding, sharing in-
formation on customers, and discussing tricks of the
trade for performing work effectively (Schwartz,
2018). In other cases, workers have used online fo-
rums to share resources and identify desirable cli-
ents or jobs; they have provided guidance to one
another about how to anticipate or avoid discipline;
how to regain access when locked out of platforms;
how to organize finances, taxes, and insurance pol-
icies; and how to smooth earnings and maximize
their income by switching between diverse plat-
forms. Finally, workers have used online forums to
engage in collective mobilization against platforms,
for instance, with the #slaveroo movement against
food-delivery platforms in Europe, as well as through
various strikes and mobilizing of drivers against
Uber in the United States and elsewhere.
Workers have also used platforms to engage in
reverse surveillance or sousveillance, in which
employees recorded and uploaded everything that
happened in their workplaces to make managers
accountable through full documentary evidence
in case employers acted against them (Ali & Mann,
2013; Sewell et al., 2012). Employers have been
shown to push back against worker sousveillance.
For instance, at a warehouse fulfillment service,
employees were not allowed to bring personal de-
vices onto the warehouse floor (McClelland, 2012).
And, it is an open question whether sousveillance
can restore workers power because employees do
not usually have access to the employers' large data
sets and proprietary algorithms (Danaher, 2016).
Second, activists have organized through platform
cooperativism. For instan ce, the Platform Co-op
consortium brought together a wide range of orga-
nizations that adhered to the project of having plat-
forms being owned by their members, with surplus
revenues being transferred to the members (Scholz,
2012; Scholz & Schneider, 2017). The consortium
featured a directory of 281 organizations across
the world that engaged in some version of platform
cooperativism. Scholars have suggested that in-
creasing the number of platform cooperatives could
help promote algorithmic transparency by address-
ing some of the concerns relating to opacity, bias, and
profit extraction emerging through algorithmic con-
trol (Scholz, 2016). Similarly, studies of Wikipedia,
Linux, and other peer production communities have
demonstrated how these communities relied heavily
on algorithmic control to manage their work pro-
cesses but that these controls reflected shared com-
munity values and were therefore experienced
differently than controls on corporate platforms,
which mostly reflected employer interests (Benkler,
2017; Fayard et al., 2016; Geiger, 2017; Karunakaran,
2018; O'Mahony & Ferraro, 2007).
Discursive framing about algorithmic fairness,
accountability, and transparency (FAT). Workers
subject to technical and bureaucratic control have
historically mobilized others by crafting frames
(Kaplan, 2008) to spark outrage and hope by depict-
ing existing conditions as unjust and amenable
to change using collective action (e.g., Creed, Scully,
& Austin, 2002). Social movement organizers
have begun to use social media to circulate these
kinds of frames broadly to mobilize participants
in online movements (e.g., Castells, 2015; Tufekci,
2017). In the context of algorithmic control, workers
and advocates have engaged in discursive framing by
developing novel forms of public discourse about
algorithmic Fairness, Accountability, or Trans-
parency (FAT*).
First, workers have collectively resisted algorithmic
control by engaging in public critique of algorithmic
systems, criticizing how algorithms could lead to the
reproduction or reinforcement of social and racial in-
equalities because of biased training data (Harcourt,
2007; O'Neil, 2016). For instance, in 2016, Angwin and
her colleagues at the nonprofit news organization Pro-
Publica analyzed more than 10,000 criminal defendant
files in Broward County, Florida, and published a cri-
tique of the predictive risk-assessment tool called
COMPAS. ProPublica made the data set public and
accessible to researc hers. Following this publication, a
vibrant debate emerged between Equivant (the
company that owned COMPAS), the ProPublica
journalists, and several academics and computer
scientists who analyzed the data. The different
parties offered distinct measurements of algorith-
mic fairness and conflicting justifications for u sing
them (Feller, Pierson, Corbett-Davies, & Goel, 2016).
In the aftermath of these discussions, activists
convened a wide range of stakeholders to discuss
the construction methods of their risk-assessment
tools, making some of their data and models public
to relevant experts as well as local communities
affected by the tools (Hannah-Moffat, 2018). In this
case, as in many others, activists used novel forms
of public critique and interdisciplinary dialog to
address algorithmic bias.
Second, activists and computer scientists have
begun to develop new professional codes of ethics
and documentation for computational systems
(Diakopoulos & Friedler, 2016). As noted earlier,
scholars have drawn attention to opacity as a cen-
tral concern in algorithmic control. To address
such concerns, the Association for Computing
Machinery (ACM) developed a Code of Ethics and
Professional Conduct. It also sponsored an annual
ACM FAT* Conference, in which academics and
industry members developed novel designs for al-
gorithmic fairness. For instance, at the 2019 ACM
FAT*, engineers and computer scientists from
Google, Microsoft, and other places noted that de-
spite the potential negative effects of reported bia-
ses associated with trained machine-learning and
artificial intelligence models, documentation ac-
companying these models, even when supplied,
still provided little information regarding model
performance characteristics, intended use cases,
potential pitfalls, or other benchmarks to help users
evaluate the suitability of these systems to their
context. These activists argued in favor of pro-
viding "model cards," short (one to two page) doc-
uments for trained machine-learning models that
would include core metrics about bias, fairness,
and inclusion (Gebru et al., 2017; Mitchell et al.,
2019). Mitchell et al. (2019) give the example of a
model card for a machine-learning model designed
to detect smiling in imagesa model that could be
used by employers to engage in algorithmic re-
cording by using video surveillance to monitor the
emotions of their employees. The model card de-
tailed the authors of the smiling algorithm, the
type of model built, the intended use for the model,
the main factors and metrics incorporated, and
some limitations and recommendations for future
developments.
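To give a rough sense of the genre, the smiling-detection model card can be sketched as a handful of structured fields. The schema and the field values below are simplified paraphrases of the description above, not the exact format or content proposed by Mitchell et al. (2019).

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Simplified sketch of a one-to-two-page model card.
    model_details: str  # authors and type of model built
    intended_use: str   # use cases the model was designed for
    factors: list = field(default_factory=list)  # conditions evaluated
    metrics: list = field(default_factory=list)  # core bias/fairness metrics
    limitations: str = ""  # known pitfalls and recommendations

# Illustrative values only, paraphrasing the smiling example above.
smiling_card = ModelCard(
    model_details="Image classifier for detecting smiling (authors listed on card)",
    intended_use="Research on facial attributes; not validated for workplace surveillance",
    factors=["demographic groups", "image quality"],
    metrics=["error rates reported per factor group"],
    limitations="Performance may degrade outside the training distribution.",
)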
Legal mobilization around employee privacy,
managerial surveillance, discrimination, and data
ownership. Workers and advocates have previously
created political opportunities for contesting techni-
cal and bureaucratic control by using a climate of a
supportive administration and vulnerable rivals to
alter laws in line with their own interests, skillfully
frame their projects in terms likely to be attractive to
governments and elites, and battle with rivals to
generate political support from the State for favor-
able legislation (e.g., McCann, 1992, 1994). Along
these same lines, activists have mobilized to create
political opp ortunities around employee privacy,
managerial surveillance, discrimination, and data
ownership. In doing so, they have transferred dis-
putes from an arena where the resolution of con-
flicts depends on the relative power of the workers
and employers to an arena where disputes are re-
solved by reference to legal norms and rules and are
enforced by the p ower of the state and international
institutions.
First, workers and labor organizers have advo-
cated for workplace and legal policies to protect
employee privacy, limit managerial surveillance,
prevent discrimination, and reclassify independent
contractors as employees. Regarding workplace
policy, they have resisted the lack of privacy asso-
ciated with algorithmic recording by negotiating
union agreements with employers around how
and when employers can both track employees and
use the tracking data to discipline employees
(e.g., Davidson, 2016), and by engaging in arbitration
around employees' social media posts (Lucero,
Allen, & Elzweig, 2013). For instance, one arbitra-
tion case considered whether employees' social
media posts were protected under laws that protect
employees' rights to engage in other concerted ac-
tivities for the purpose of collective bargaining or for
other mutual aid or protection (Lucero et al., 2013).
Similarly, through their union, UPS drivers de-
veloped an agreement with UPS that the company
needed to make tracking explicit in drivers' con-
tracts, could not discipline drivers using data alone,
and could not track drivers without telling them
(Davidson & Kestenbaum, 2014). Workers have also
protested against the discrimination that can arise
through algorithmic rating by raising questions
about whether consumer ratings are subject to legal
action based on the Civil Rights Act of 1964, which
prohibits employers from making employment-
related decisions based on the protected character-
istics of workers. Of particular interest are legal
regulations in the European context. The Data
Protection Impact Assessment (DPIA) clause of the
European Union's General Data Protection Regulation
(GDPR) requires preemptive assessments of the po-
tential impact of high-risk algorithmic systems on
the rights and freedoms of natural persons (GDPR,
Art. 35). Yet, the actual implementation of the DPIA
and GDPR frameworks remains uncertain, pending
ongoing case law, especially in the United States.
More broadly, legal scholars have called for a recon-
ceptualization of workers' privacy rights along the
lines of contextual or relational privacy, which
requires an articulation of a set of context-specific
norms that constrain employers regarding the in-
formation they can collect through websites, with
whom they can share it, and under what conditions it
can be shared (Bannerman, 2018; Nissenbaum, 2009).
A second important development relates to the
current employment status of workers under algo-
rithmic control. Most platforms have relied almost
exclusively on independent contractors as their
primary workforce (Rosenblat, 2018; Vallas & Schor,
2020). Workers have increasingly challenged this
legal classification, arguing that they should be
considered as employees instead of independent
contractors. Through collective organizing, they
have lobbied to implement legislative change, and in
some cases have also started to sue companies (the
ridesharing platforms Uber and Lyft and the cleaning
platform Handy, for instance) for classifying them
as contractors, but replacing them when they do not
perform the work in the strict manner required by the
platform (Aloisi, 2015). Legislative efforts took place
in California following the Dynamex decision and
the California Assembly Bill 5 (AB 5), which in 2019
restricted the use of independent contractors by im-
posing the so-called ABC test. Under the ABC test, a
worker is presumed to be an employee unless the
company proves that (A) the worker is free from the
control and direction of the hiring entity in connec -
tion with the performance of the work, both practi-
cally and contractually; (B) the worker performs
work that is outside the usual course of the com-
panys business; and (C) the worker is customarily
engaged in an independently established trade, oc-
cupation, or business of the same nature as the work
performed for the company.
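Read purely as a decision rule, the structure of the ABC test can be restated in a few lines of code. This is an expository sketch of the statute's logic only; actual classification turns on legal judgment and evidence, not on three boolean flags.

def presumed_employee(free_from_control: bool,
                      outside_usual_business: bool,
                      independent_trade: bool) -> bool:
    # The worker is presumed an employee unless the company proves
    # all three prongs: (A) freedom from control and direction,
    # (B) work outside the usual course of the company's business,
    # and (C) an independently established trade of the same nature.
    return not (free_from_control and
                outside_usual_business and
                independent_trade)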
Third, activists have begun to engage in a set of
regulatory initiatives related to pressing for worker
data ownership. As noted earlier, many employers
are engaging in comprehensive algorithmic re-
cording and finely grained algorithmic rating. Part
of why they may be doing this is that the data
are valuable, independent of the control of the
workers; indeed, many platforms have monetized
their workers' data through online advertising
(Zuboff, 2018). Activists have argued in favor of
giving people ownership of their digital data and in
favor of treating data as a form of labor that needs
to be compensated (Arrieta-Ibarra, Goff, Jiménez-
Hernández, Lanier, & Weyl, 2018; Scholz, 2012). One
version of this proposal suggested that individuals
should be allowed to rent or sell their data to tech-
nology companies through digital intermediaries,
called MIDs (Mediators of Individual Data), that
would negotiate data royalties or wages, to bring the
power of collective bargaining to the people who are
the sources of valuable data. These MIDs would also
promote standards and build a brand based on the
unique quality and identity of the data producers
they represent (Lanier & Weyl, 2018).
Future research on algoactivism. The existence
of multiple kinds of algoactivism raises fascinating
questions for future research. Throughout this review,
we have discussed the potential of employers to use
algorithmic technologies to implement a more com-
prehensive, instantaneous, interactive, and opaque
form of control. Yet, the mere existence of such a wide
range of strategies of resistance suggests that workers
continue to have agency within organizational settings.
At a broad level, how do these reactions by
workers modulate the impact of algorithmic di-
rection, evaluation, and discipline on the ground?
Regarding individual resistance using practical ac-
tion, for instance, one study showed that warehouse
workers received scores from their handheld scan-
ners that also directed their minute-by-minute paths
through the warehouse; gaming or resisting such
systems of algorithmic control was extremely diffi-
cult (McClelland, 2012). Future research should ex-
amine how employer algorithmic control and
worker resistance coproduce new work dynamics
across organizations and fields. In addition, in line
with recent research on stock exchanges (Beunza &
Millo, 2015; MacKenzie, 2018, 2019; Pardo-Guerra,
2019), further research should explore how such
practical strategies of resistance are evolving in al-
most fully automated workplaces.
It could also investigate the opportunities and
challenges that arise from platform cooperativism.
For instance, future research could explore how co-
operatives could implement iterative consultations of
their members and users when developing algorith-
mic control systems. They could make the variables,
weights, and models used to design their algorithms
transparent and available to their members and users.
Under these conditions, algorithmic data could be
used to anchor collective discussions and promote
reflexivity among members and users. Future re-
search could also investigate how traditional unions
could get involved with platform organizing (Kochan,
Yang, Kimball, & Kelly, 2019; Wood et al., 2018).
Regarding novel kinds of public discourse about
algorithms, scholars could explore the range of stake-
holders that can best engage in algorithmic framing,
the issues that are most amenable to discussion, the
ways that different stakeholders can work across
boundaries to mobilize for collective action, and how
algorithmic technologies might facilitate such mobi-
lization (Ananny & Crawford, 2018). Regarding codes
of ethics and documentation, scholars could explore
the processes through which organizations can make
their data and code more public while protecting in-
tellectual property, how new professional codes of
ethics can be taught to engineers and computer sci-
entists, and how documentation can best be used by
managers engaging in algorithmic control.
Last, the emerging legal mobilization around algo-
rithmic control provides intriguing ideas for future
research. Scholars should explore the interplay be-
tween law, managerial control, and algorithmic
technologies. How does the existing case law about
privacy rights and third-party tracking influence al-
gorithmic control within workplaces? How do the
GDPR and DPIA frameworks developed by the Euro-
pean Union affect the modalities of algorithmic con-
trol within European and U.S.-based companies?
Regarding employment classifications and the move
from independent contracting to the employer-
employee legal contract, what will be the ramifica-
tions of California AB 5 for on-demand platform labor
and the relationship between platforms and their
workers? Regarding worker data ownership, future
research should explore the role of economic in-
centives in driving some of the modalities of algo-
rithmic control. For instance, how is algorithmic
recording and rating implemented differently by
employers that sell these data and by employers that
do not? And,in pilot studiesof workerdataownership
systems, does this framework increase existing in-
equalities in terms of privacy rights, allowing a two-
tiered landscape where affluent workers can hold
on to their personal data and protect their privacy,
whereas low-income workers cannot?
CONCLUSION
This article reviews the interdisciplinary research
about algorithms at work to explore how employers
are using algorithms for organizational control and
how it affects workers. We find that employers may
utilize algorithmic control through six main mecha-
nisms, which we call the "6Rs": they may use
algorithms to direct workers by restricting and
recommending, evaluate workers by recording
and rating, and discipline workers by replacing
and rewarding. Our model suggests four important
implications for organization studies. First, our ap-
plication of labor process theory to the research on
algorithms at work problematizes the predominant
focus to date on the economic value of algorithms; we
draw attention to algorithmic systems as contested
instruments of control that allow employers to se-
cure a share of capital from workers' exertions while
obscuring their methods for doing so, and to the
important outcomes of worker experiences and
livelihoods. Second, we demonstrate that algorith-
mic control can be more comprehensive, in-
stantaneous, interactive, and opaque than prior
forms of rational control, and that it can allow for
further disintermediation of managers. Whereas
technical control leverages tech nology to limit the
need for direct supervision, and bureaucratic control
relies on standardized rules and roles for the same
purpose, algorithmic control can remove managers
(and human supervision in general) even further
from the scene of work. Third, employers' use of al-
gorithms in the workplace is sparking the emergence
of new forms of work and occupations (algorithmic
curation, algorithmic brokerage, and algorithmic
articulation) that may not only help employers to
implement algorithmic control but also become ac-
tive areas for worker agency. Finally, workers are
engaging in four main forms of algoactivism to resist
algorithmic control: individual action, collective
platform organizing, discursive framing around al-
gorithmic fairness, accountability and transparency,
and legal mobilization around employee privacy,
discrimination, worker classification, and data own-
ership. Our mapping of the contested terrain of al-
gorithmic control will enable researchers to further
explore some of the unique implications of this type
of control, and to engage in future research around
what employers and workers can do to mitigate
negative worker outcomes associated with algorith-
mic direction, evaluation, and discipline.
ROLES OF AUTHORS ON THE RESEARCH TEAM
Kate Kellogg (MIT), Melissa Valentine (Stanford),
and Angèle Christin (Stanford) are professors who
study the intersection of culture, work, and organiz-
ing technologies.
REFERENCES
Adler, P. S., Goldoftas, B., & Levine, D. I. 1999. Flexibility versus efficiency? A case study of model changeovers in the Toyota production system. Organization Science, 10(1): 43–68.
Afuah, A., & Tucci, C. 2012. Crowdsourcing as a solution to distant search. Academy of Management Review, 37(3): 355–375.
Agarwal, R., & Dhar, V. 2014. Big data, data science, and analytics: The opportunity and challenge for IS research. Information Systems Research, 25: 443–448.
Ahmed, S. I., Bidwell, N. J., Zade, H., Muralidhar, S. H., Dhareshwar, A., Karachiwala, B., Tandong, C. N., & O'Neill, J. 2016. Peer-to-peer in the workplace: A view from the road. Paper presented at the Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
Aiello, J. R., & Svec, C. M. 1993. Computer monitoring of work performance: Extending the social facilitation framework to electronic presence. Journal of Applied Social Psychology, 23(7): 537–548.
Ajunwa, I., & Greene, D. 2019. Platforms at work: Automated hiring platforms and other new intermediaries in the organization of work. In Work and Labor in the Digital Age: 61–91. Bingley, UK: Emerald Publishing Limited.
Ali, M. A., & Mann, S. 2013. The inevitability of the transition from a surveillance-society to a veillance-society: Moral and economic grounding for sousveillance. Paper presented at the 2013 IEEE International Symposium on Technology and Society (ISTAS): Social Implications of Wearable Computing and Augmediated Reality in Everyday Life.
Aloisi, A. 2015. Commoditized workers: Case study research on labor law issues arising from a set of on-demand/gig economy platforms. Comparative Labor Law and Policy Journal, 37: 653.
Alvesson, M., & Karreman, D. 2004. Interfaces of control: Technocratic and socio-ideological control in a global management consultancy firm. Accounting, Organizations and Society, 29(3–4): 423–444.
Alvesson, M., & Karreman, D. 2007. Unraveling HRM: Identity, ceremony, and control in a management consulting firm. Organization Science, 18(4): 711–723.
Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4): 105–120.
Ananny, M., & Crawford, K. 2018. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3): 973–989.
Aneesh, A. 2009. Global labor: Algocratic modes of organization. Sociological Theory, 27(4): 347–370.
Angrave, D., Charlwood, A., Kirkpatrick, I., Lawrence, M., & Stuart, M. 2016. HR and analytics: Why HR is set to fail the big data challenge. Human Resource Management Journal, 26(1): 1–11.
Angwin, J. 2014. Dragnet nation: A quest for privacy, security, and freedom in a world of relentless surveillance. New York: Henry Holt.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. 2016. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, 23.
Anteby, M. 2008. Identity incentives as an engaging form of control: Revisiting leniencies in an aeronautic plant. Organization Science, 19(2): 202–220.
Anteby, M., & Chan, C. K. 2018. A self-fulfilling cycle of coercive surveillance: Workers' invisibility practices and managerial justification. Organization Science, 29(2): 247–263.
Anteby, M., Chan, C. K., & DiBenigno, J. 2016. Three lenses on occupations and professions in organizations: Becoming, doing, and relating. The Academy of Management Annals, 10(1): 183–244.
Arazy, O., Daxenberger, J., Lifshitz-Assaf, H., Nov, O., & Gurevych, I. 2016. Turbulent stability of emergent roles: The dualistic nature of self-organizing knowledge coproduction. Information Systems Research, 27(4): 792–812.
Arntz, M., Gregory, T., & Zierahn, U. 2016. The risk of automation for jobs in OECD countries: A comparative analysis. OECD Social, Employment, and Migration Working Papers, No. 189.
Arrieta-Ibarra, I., Goff, L., Jiménez-Hernández, D., Lanier, J., & Weyl, E. G. 2018. Should we treat data as labor? Moving beyond "free." AEA Papers and Proceedings, 108: 38–42.
Askay, D. A. 2015. Silence in the crowd: The spiral of silence contributing to the positive bias of opinions in an online review system. New Media & Society, 17(11): 1811–1829.
Athey, S., & Scott, S. 2002. The impact of information technology on emergency health care outcomes. Rand Journal of Economics, 33(3): 399–432.
Austrin, T., & West, J. 2005. Skills and surveillance in casino gaming: Work, consumption and regulation. Work, Employment and Society, 19(2): 305–326.
Autor, D. 2015a. Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3): 3–30.
Autor, D. H. 2015b. The paradox of abundance: Automation anxiety returns. Performance and progress: Essays on capitalism, business and society. Oxford: OUP.
Bailey, D., Erickson, I., Silbey, S., & Teasley, S. 2019. Emerging audit cultures: Data, analytics, and rising quantification in professors' work. Paper presented at the Academy of Management. Boston.
Bailey, D. E., Leonardi, P. M., & Barley, S. R. 2012. The lure of the virtual. Organization Science, 23(5): 1485–1504.
Bailey, D. E., Leonardi, P. M., & Chong, J. 2010. Minding the gaps: Understanding technology interdependence and coordination in knowledge work. Organization Science, 21(3): 713–730.
Ball, K. S., & Margulis, S. T. 2011. Electronic monitoring and surveillance in call centres: A framework for investigation. New Technology, Work and Employment, 26(2): 113–126.
Bannerman, S. 2018. Relational privacy and the networked governance of the self. Information, Communication & Society: 1–16.
Barley, S. R. 1996. Technicians in the workplace: Ethnographic evidence for bringing work into organization studies. Administrative Science Quarterly, 41(3): 404–441.
Barley, S. R. 2015. Why the internet makes buying a car less loathsome: How technologies change role relations. Academy of Management Discoveries, 1(1): 5–35.
Barley, S. R., Bechky, B. A., & Milliken, F. J. 2017. The changing nature of work: Careers, identities, and work lives in the 21st century. Academy of Management Discoveries, 3(2): 111–115.
Barley, S. R., & Kunda, G. 1992. Design and devotion: Surges of rational and normative ideologies of control in managerial discourse. Administrative Science Quarterly, 37(3): 363–399.
Barocas, S., Rosenblat, A., Gangadharan, S. P., & Yu, C. 2014. Data & civil rights: Technology primer. Paper presented at the Data & Civil Rights Conference, October 30, 2014.
Barocas, S., & Selbst, A. D. 2016. Big data's disparate impact. California Law Review, 104: 671.
Barrett, M., Oborn, E., & Orlikowski, W. 2016. Creating value in online communities: The sociomaterial configuring of strategy, platform, and stakeholder engagement. Information Systems Research, 27(4): 704–723.
Barrett, M., Oborn, E., Orlikowski, W. J., & Yates, J. 2012. Reconfiguring boundary relations: Robotic innovations in pharmacy work. Organization Science, 23(5): 1448–1466.
Beane, M. 2019. Shadow learning: Building robotic surgical skill when approved means fail. Administrative Science Quarterly, 64(1): 87–123.
Beane, M., & Orlikowski, W. J. 2015. What difference does a robot make? The material enactment of distributed coordination. Organization Science, 26(6): 1553–1573.
Bechky, B. A. 2019. Evaluative spillovers from technological change: The effects of "DNA envy" on occupational practices in forensic science. Administrative Science Quarterly, 1–38. Available at http://dx.doi.org/10.1177/0001839219855329.
Benkler, Y. 2017. Peer production, the commons, and the future of the firm. Strategic Organization, 15(2): 264–274.
Bensman, J., & Gerver, I. 1963. Crime and punishment in the factory: The function of deviancy in maintaining the social system. American Sociological Review, 28(4): 588–598.
Benzell, S. G., Kotlikoff, L. J., LaGarda, G., & Sachs, J. D. 2015. Robots are us: Some economics of human replacement. No. 20941. National Bureau of Economic Research. Cambridge, MA.
Berente, N., Howison, J., Cutcher-Gershenfeld, J., King, J. L., Barley, S. R., & Towns, J. 2017. Professionalization in cyberinfrastructure. Available at SSRN 3138592.
Berente, N., Howison, J., King, J. L., Cutcher-Gershenfeld, J., & Pennington, R. 2014. Leading cyberinfrastructure enterprise: Value propositions, stakeholders, and measurement (March 26, 2014).
Bergvall-Kåreborn, B., & Howcroft, D. 2014. Amazon Mechanical Turk and the commodification of labour. New Technology, Work and Employment, 29(3): 213–223.
Bernstein, E. S. 2012. The transparency paradox: A role for privacy in organizational learning and operational control. Administrative Science Quarterly, 57(2): 181–216.
Bernstein, E. S., & Li, S. 2017. Seeing where you stand: From performance feedback to performance transparency. Paper presented at the Academy of Management Proceedings.
Bernstein, M. S., Little, G., Miller, R. C., Hartmann, B., Ackerman, M. S., Karger, D. R., Crowell, D., & Panovich, K. 2015. Soylent: A word processor with a crowd inside. Communications of the ACM, 58(8): 85–94.
Beunza, D. 2019. Taking the floor: Models, morals, and management in a Wall Street trading room. Princeton, NJ: Princeton University Press.
Beunza, D., & Millo, Y. 2015. Blended automation: Integrating algorithms on the floor of the New York Stock Exchange. No. 38. Systemic Risk Center. London: London School of Economics and Political Science.
Blau, P. M. 1955. The dynamics of bureaucracy: A study of interpersonal relations in two government agencies. Chicago: University of Chicago Press.
Blauner, R. 1964. Alienation and freedom: The factory worker and his industry. Oxford: Chicago University Press.
Bock, L. 2015. Work rules!: Insights from inside Google that will transform how you live and lead. New York: Grand Central Publishing.
Bodie, M. T., Cherry, M. A., McCormick, M. L., & Tang, J. 2017. The law and policy of people analytics. University of Colorado Law Review, 88(1): 961–1042.
Bogost, I. 2015. Why gamification is bullshit. The gameful world: Approaches, issues, applications: 65. Cambridge, MA: MIT Press.
Bolin, G., & Andersson Schwarz, J. 2015. Heuristics of the algorithm: Big Data, user interpretation and institutional translation. Big Data & Society, 2(2): 2–12. Available at http://dx.doi.org/10.1177/2053951715608406.
Bolton, S. C. 2004. A simple matter of control? NHS hospital nurses and new management. Journal of Management Studies, 41(2): 317–333.
Borch, C. 2017. Algorithmic finance and (limits to) governmentality: On Foucault and high-frequency trading. Le foucaldien, 3(1).
Borch, C., & Lange, A.-C. 2016. High-frequency trader subjectivity: Emotional attachment and discipline in an era of algorithms. Socio-Economic Review, 15(2): 283–306.
boyd, d., & Crawford, K. 2012. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5): 662–679.
Boyle, E. 2018. Understanding latent style. Multithreaded. Stitch Fix.
Brabham, D. C. 2013. Crowdsourcing. Cambridge, MA: MIT Press.
Bradley, A. 2019. Building our centralized experimental platform. Multithreaded, vol. 2019. Stitch Fix.
Braverman, H. 1974. Labor and monopoly capital. New York: Monthly Review.
Brayne, S. 2017. Big data surveillance: The case of policing. American Sociological Review, 82(5): 977–1008.
Brayne, S., & Christin, A. Forthcoming. Technologies of crime prediction: The reception of algorithms in policing and criminal courts. Social Problems.
Brockman, J. 2019. Possible minds: Twenty-five ways of looking at AI. Penguin Press.
Brooks, R. A. 2012. Cheaper by the hour: Temporary lawyers and the deprofessionalization of the law. Philadelphia, PA: Temple University Press.
Brynjolfsson, E., & McAfee, A. 2014. The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York: W. W. Norton.
Bucher, T. 2017. The algorithmic imaginary: Exploring the ordinary affects of the Facebook algorithms. Information, Communication & Society, 20(1): 30–44.
Bumbulsky, J. 2013. Chaotic storage lessons. Medium. Available at https://medium.com/tech-talk/e3b7de266476.
Burawoy, M. 1979. Manufacturing consent: Changes in the labor process under monopoly capitalism. Chicago: University of Chicago Press.
Burawoy, M. 1985. The politics of production: Factory regimes under capitalism and socialism. Brooklyn, NY: Verso Books.
Burrell, J. 2016. How the machine thinks: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1): 1–12. Available at http://dx.doi.org/10.1177/2053951715622512.
Burt, R. S. 1992. Structural holes: The social structure of competition. Cambridge, MA: Harvard University Press.
Callaghan, G., & Thompson, P. 2001. Edwards revisited: Technical control and call centres. Economic and Industrial Democracy, 22(1): 13–37.
Calo, R., & Rosenblat, A. 2017. The taking economy: Uber, information, and power. Columbia Law Review, 117: 1623.
Cambo, S. A., & Gergle, D. 2018. User-centred evaluation for machine learning. In J. Zhou, & F. Chen (Eds.), Human and machine learning: Visible, explainable, trustworthy and transparent: 315–339. Cham, Switzerland: Springer International Publishing.
Cameron, L. 2018. The good bad job: Autonomy and control in the algorithmic workplace. Paper presented at the Academy of Management Annual Meeting. Chicago.
Cameron, L. 2019. Allies or adversaries: Making meaning in the new gig employment relationship. Paper presented at the 9th Biennial Positive Organizational Scholarship Conference. Ann Arbor, MI.
Campbell, H. 2018. The rideshare guy 2018 reader survey. The rideshare guy: A blog and podcast for rideshare drivers. Available at therideshareguy.com.
Cardinal, L. B., Kreutzer, M., & Miller, C. C. 2017. An aspirational view of organizational control research: Re-invigorating empirical work to better meet the challenges of 21st century organizations. Academy of Management Annals, 11(2): 559–592.
Castells, M. 2015. Networks of outrage and hope: Social movements in the internet age. Hoboken, NJ: John Wiley & Sons.
Chai, S., & Scully, M. A. 2018. Using labor process theory to probe the "sharing economy." Paper presented at the Academy of Management Proceedings.
Chalmers, M., & MacColl, I. 2003. Seamful and seamless design in ubiquitous computing. Paper presented at the Workshop at the Crossroads: The Interaction of HCI and Systems Issues in UbiComp.
Chan, J., & Wang, J. 2018. Hiring preferences in online labor markets: Evidence of a female hiring bias. Management Science, 64(7): 2973–2994.
Cherry, M. A. 2015. Beyond misclassification: The digital transformation of work. Comparative Labor Law and Policy Journal, 37: 577.
Cherry, M. A., & Aloisi, A. 2018. A critical examination of a third employment category for on-demand work (in comparative perspective). In Nestor M. Davidson, Michele Finck & John J. Infranca (Eds.), Cambridge Handbook on the Law of the Sharing Economy. Forthcoming.
Christin, A. 2017. Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2): 1–14. Available at http://dx.doi.org/10.1177/2053951717718855.
Christin, A. 2018. Counting clicks: Quantification and variation in web journalism in the United States and France. American Journal of Sociology, 123(5): 1382–1415.
Christin, A. 2019. What data can do: A typology of mechanisms. International Journal of Communication. In press.
Cipriani, J., & Dolcourt, J. 2019. iOS 13 and iPadOS: Every important feature you should know. CNet|Mobile.
Clemes, S. A., O'Connell, S. E., & Edwardson, C. L. 2014. Office workers' objectively measured sedentary behavior and physical activity during and outside working hours. Journal of Occupational and Environmental Medicine, 56(3): 298–303.
Colner, E. 2018. Three years of erch engineering. Multithreaded. Stitch Fix.
Common, M. 2019. Fear the reaper: How content moderation rules are enforced on social media. SSRN Electronic Journal. Available at https://dx.doi.org/10.2139/ssrn.3405337.
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. 2017. Algorithmic decision making and the cost of fairness. Paper presented at the Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Corporaal, G. F., & Lehdonvirta, V. 2017. Platform sourcing: How Fortune 500 firms are adopting online freelancing platforms. Oxford: University of Oxford, Oxford Internet Institute.
Corporaal, G. F., Windwehr, S., & Lehdonvirta, V. 2019. How labor market intermediaries transform institutions of work: Insights from a comparative qualitative study of dispute resolution processes in contingent work. Paper presented at the Reshaping Work Conference. Amsterdam, the Netherlands.
Crawford, K., & Schultz, J. 2014. Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review, 55(1): 93–128.
Creed, W. D., Scully, M. A., & Austin, J. R. 2002. Clothes make the person? The tailoring of legitimating accounts and the social construction of identity. Organization Science, 13(5): 475–496.
Crowston, K., & Bolici, F. 2019. Impacts of machine learning on work. Paper presented at the Proceedings of the 52nd Hawaii International Conference on System Sciences.
Curchod, C., Patriotta, G., Cohen, L., & Neysen, N. 2019. Working for an algorithm: Power asymmetries and agency in online work settings. Administrative Science Quarterly, 1–33. Available at http://dx.doi.org/10.1177/0001839219867024.
Cutcher-Gershenfeld, J., & Kochan, T. 2004. Taking stock: Collective bargaining at the turn of the century. ILR Review, 58(1): 3–26.
Danaher, J. 2016. The threat of algocracy: Reality, resistance and accommodation. Philosophy & Technology, 29(3): 245–268.
Darr, A. 2018. Automatons, sales-floor control and the constitution of authority. Human Relations, 72: 889–909.
Davenport, T. H., & Kirby, J. 2016. Only humans need apply: Winners and losers in the age of smart machines. New York, NY: Harper Business.
Davidson, A. 2016. Planet money. In J. Goldstein (Ed.), The future of work looks like a UPS truck. National Public Radio. Available at https://www.npr.org/sections/money/2014/05/02/308640135/episode-536-the-future-of-work-looks-like-a-ups-truck.
Davis, G. F. 2015. What might replace the modern corporation: Uberization and the web page enterprise. Seattle University Law Review, 39(2): 501–516.
Davis, G. F. 2016. Can an economy survive without corporations? Technology and robust organizational alternatives. The Academy of Management Perspectives, 30(2): 129–140.
De Stefano, V. 2015. The rise of the just-in-time workforce: On-demand work, crowdwork, and labor protection in the gig-economy. Comparative Labor Law and Policy Journal, 37: 471.
Deterding, S., Khaled, R., Nacke, L. E., & Dixon, D. 2011. Gamification: Toward a definition. Paper presented at the CHI 2011 gamification workshop proceedings.
Diakopoulos, N. 2015. Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3): 398–415.
Diakopoulos, N., & Friedler, S. 2016. How to hold algorithms accountable. MIT Technology Review, 17(11): 2016.
DiBenigno, J. 2018. Rapid relationality: How peripheral experts build a foundation for influence with line managers. Administrative Science Quarterly, 63(3): 526–569. Available at http://dx.doi.org/10.1177/0001839219827006.
Dietvorst, B. J., Simmons, J. P., & Massey, C. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1): 114.
Dourish, P. 2016. Algorithms and their others: Algorithmic culture in context. Big Data & Society, 3(2): 1–11. Available at http://dx.doi.org/10.1177/2053951716665128.
Dworkin, T. M. 1990. Protecting private employees from enhanced monitoring: Legislative approaches. American Business Law Journal, 28: 59.
Edelman, B., Luca, M., & Svirsky, D. 2017. Racial discrimination in the sharing economy: Evidence from a field experiment. American Economic Journal: Applied Economics, 9(2): 1–22.
Edery, D., & Mollick, E. 2009. Changing the game: How video games are transforming the future of business. Upper Saddle River, NJ: FT Press.
Edwards, R. 1979. Contested terrain: The transformation of the workplace in the twentieth century. New York: Basic Books.
Ekbia, H. R., & Nardi, B. A. 2017. Heteromation, and other stories of computing and capitalism. Cambridge, MA: MIT Press.
Elliott, S. W. 2014. Anticipating a luddite revival. Issues in Science and Technology, 30(3): 27–36.
Etter, V., Kafsi, M., Kazemi, E., Grossglauser, M., & Thiran, P. 2013. Where to go from here? Mobility prediction from instantaneous information. Pervasive and Mobile Computing, 9(6): 784–797.
Eubanks, V. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin's Press.
Ezzamel, M., & Willmott, H. 1998. Accounting for teamwork: A critical study of group-based organizational control. Administrative Science Quarterly, 43(2): 358–396.
Faraj, S., Jarvenpaa, S. L., & Majchrzak, A. 2011. Knowledge collaboration in online communities. Organization Science, 22(5): 1224–1239.
Faraj, S., Pachidi, S., & Sayegh, K. 2018. Working and organizing in the age of the learning algorithm. Information and Organization, 28(1): 62–70.
Farzan, R., DiMicco, J. M., Millen, D. R., Dugan, C., Geyer, W., & Brownholtz, E. A. 2008. Results from deploying a participation incentive mechanism within the enterprise. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: 563–572. Florence, Italy: ACM.
Fayard, A.-L., Gkeredakis, E., & Levina, N. 2016. Framing innovation opportunities while staying committed to an organizational epistemic stance. Information Systems Research, 27(2): 302–323.
Feller, A., Pierson, E., Corbett-Davies, S., & Goel, S. 2016. A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear. The Washington Post.
Felstiner, A. 2011. Working the crowd: Employment and labor law in the crowdsourcing industry. Berkeley Journal of Employment and Labor Law, 32(1): 143–203.
Filippas, A., Horton, J. J., & Golden, J. 2018. Reputation inflation. Paper presented at the Proceedings of the 2018 ACM Conference on Economics and Computation.
Fourcade, M., & Healy, K. 2016. Seeing like a market. Socio-Economic Review, 15(1): 9–29.
Fox, S., Howell, N., Wong, R., & Spektor, F. 2019. Vivewell: Speculating near-future menstrual tracking through current data practices. Proceedings of the 2019 on Designing Interactive Systems Conference: 541–552. San Diego, CA: ACM.
Frey, C. B., & Osborne, M. A. 2017. The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114: 254–280.
Gabrilovich, E., Dumais, S., & Horvitz, E. 2004. Newsjunkie: Providing personalized newsfeeds via analysis of information novelty. Proceedings of the 13th International Conference on World Wide Web: 482–490. New York: ACM.
Gebru, T., Krause, J., Wang, Y., Chen, D., Deng, J., Aiden, E. L., & Fei-Fei, L. 2017. Using deep learning and Google street view to estimate the demographic makeup of neighborhoods across the United States. Proceedings of the National Academy of Sciences, 114(50): 13108–13113.
Geiger, R. S. 2017. Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. Big Data & Society, 4(2): 1–14. Available at http://dx.doi.org/10.1177/2053951717730735.
George, G., Haas, M. R., & Pentland, A. 2014. Big data and management. Briarcliff Manor, NY: Academy of Management.
Gerber, C., & Krzywdzinski, M. 2019. Brave new digital work? New forms of performance control in crowdwork. Work and labor in the digital age: 121–143. Bingley, UK: Emerald Publishing Limited.
Gill, M. J. 2019. The significance of suffering in organizations: Understanding variation in workers' responses to multiple modes of control. Academy of Management Review, 44(2): 377–404.
Gillespie, T. 2014. The relevance of algorithms. Media technologies: Essays on communication, materiality, and society: 167. Cambridge, MA: The MIT Press.
Gillespie, T. 2018. Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.
Gittell, J. H. 2016. Transforming relationships for high performance: The power of relational coordination. Palo Alto, CA: Stanford University Press.
Glynn, P. 2018. Your client engagement program isn't doing what you think it is. MultiThreaded. Stitch Fix.
Goldberg, A., Srivastava, S. B., Manian, V. G., Monroe, W., & Potts, C. 2016. Fitting in or standing out? The tradeoffs of structural and cultural embeddedness. American Sociological Review, 81(6): 1190–1222.
Goldman, M., Little, G., & Miller, R. C. 2011. Real-time collaborative coding in a web IDE. Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology: 155–164. Santa Barbara, CA: ACM.
Gomez-Uribe, C. A., & Hunt, N. 2016. The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4): 13.
Gouldner, A. W. 1954. Patterns of industrial bureaucracy. Glencoe, IL: Free Press.
Govindarajan, V. 1988. A contingency approach to strategy implementation at the business-unit level: Integrating administrative mechanisms with strategy. Academy of Management Journal, 31(4): 828–853.
Graham, M., Hjorth, I., & Lehdonvirta, V. 2017. Digital labour and development: Impacts of global digital labour platforms and the gig economy on worker livelihoods. Transfer: European Review of Labour and Research, 23(2): 135–162.
Gray, M. L., & Suri, S. 2019. Ghost work: How to stop Silicon Valley from building a new global underclass. San Francisco, CA: HMH Books.
Gray, M. L., Suri, S., Ali, S. S., & Kulkarni, D. 2016. The crowd is a collaborative network. Paper presented at the Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing.
Greenwood, B. N., Adjerid, I., & Angst, C. M. 2017. How unbecoming of you: Gender biases in perceptions of ridesharing performance. Available at http://dx.doi.org/10.24251/HICSS.2019.789.
Griesbach, K., Reich, A., Elliott-Negri, L., & Milkman, R. 2019. Algorithmic control in platform food delivery work. Socius, 5: 1–15. Available at http://dx.doi.org/10.1177/2378023119870041.
Grol, R., & Grimshaw, J. 2003. From best evidence to best practice: Effective implementation of change in patients' care. The Lancet, 362(9391): 1225–1230.
Gupta, A. 2018. Detecting crisis: An AI solution. Crisis Text Line Blog, 2019.
Hall, J. V., Horton, J. J., & Knoepfle, D. T. 2019. Pricing efficiently in designed markets: The case of ride-sharing. Available at john-joseph-horton.com.
Hall, R. 2010. Renewing and revising the engagement between labour process theory and technology. In P. Thompson & C. Smith (Eds.), Working life: Renewing labour process analysis: 159–181. New York: Palgrave Macmillan.
Hannah-Moffat, K. 2018. Algorithmic risk governance: Big data analytics, race and information activism in criminal justice debates. Theoretical Criminology, 23: 453–470.
Haraszti, M. 1978. A worker in a worker's state. New York: Universe Books.
Harcourt, B. E. 2007. Against prediction: Profiling, policing, and punishing in an actuarial age. Chicago: University of Chicago Press.
Ha-Thuc, V., Xu, Y., Kanduri, S. P., Wu, X., Dialani, V., Yan, Y., Gupta, A., & Sinha, S. 2016. Search by ideal candidates: Next generation of talent search at LinkedIn. Paper presented at the Proceedings of the 25th International Conference Companion on World Wide Web.
Heaton, J. B., Polson, N., & Witte, J. H. 2017. Rejoinder to "Deep learning for finance: Deep portfolios." Applied Stochastic Models in Business and Industry, 33(1): 19–21.
Henke, N., Levine, J., & McInerney, P. 2018. You don't have to be a data scientist to fill this must-have analytics role. Harvard Business Review. Available at https://hbr.org/2018/02/you-dont-have-to-be-a-data-scientist-to-fill-this-must-have-analytics-role.
Hicks, M. 2017. Programmed inequality: How Britain discarded women technologists and lost its edge in computing. Cambridge, MA: MIT Press.
Hodgson, D. E. 2004. Project work: The legacy of bureaucratic control in the post-bureaucratic organization. Organization, 11(1): 81–100.
Hollebeek, L. D., Conduit, J., Sweeney, J., Soutar, G., Karpen, I. O., Jarvis, W., & Chen, T. 2016. Epilogue to the Special Issue and reflections on the future of engagement research. Journal of Marketing Management, 32(5–6): 586–594.
Holzinger, A., & Jurisica, I. 2014. Knowledge discovery and data mining in biomedical informatics: The future is in integrative, interactive machine learning solutions. Interactive knowledge discovery and data mining in biomedical informatics: 1–18. Berlin: Springer.
Horesh, R., Varshney, K. R., & Yi, J. 2016. Information retrieval, fusion, completion, and clustering for employee expertise estimation. Paper presented at the 2016 IEEE International Conference on Big Data (Big Data).
Horton, J., & Golden, J. 2015. Reputation inflation: Evidence from an online labor market. Working paper, NYU, 1.
Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H., & Aerts, H. J. 2018. Artificial intelligence in radiology. Nature Reviews Cancer, 18(8): 500.
Howcroft, D., & Bergvall-Kåreborn, B. 2019. A typology of crowdwork platforms. Work, Employment and Society, 33(1): 21–38.
Howe, J. 2006. The rise of crowdsourcing. Wired Magazine, 14(6): 1–4.
Irani, L. 2015. Difference and dependence among digital workers: The case of Amazon Mechanical Turk. South Atlantic Quarterly, 114(1): 225–234.
Ivanova, M., Bronowicka, J., Kocher, E., & Degner, A. 2018. Foodora and Deliveroo: The app as a boss? Control and autonomy in app-based management: The case of food delivery riders. Working Paper: 1–51. Düsseldorf, Germany: Hans Böckler Stiftung.
Jackson, S. J. 2014. Rethinking repair. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality and society. Cambridge, MA: MIT Press.
Jackson, S. R. 2019. (Not) paying for diversity: Platform-based recruiting and the commercialization of diversity. Paper presented at the MIT Economic Sociology Working Group. Cambridge, MA.
Jacobs, A. 2009. The pathologies of big data. Communications of the ACM, 52(8): 36–44.
Jaros, S. 2010. The core theory: Critiques, defences and advances. Working life: Renewing labour process analysis: 70–90.
Jarrahi, M. H., Sutherland, W., Nelson, S., & Sawyer, S. 2019. Platformic management, boundary resources, and worker autonomy in gig work. Computer Supported Cooperative Work, (2019): 1–37.
Jhaver, S., Karpfen, Y., & Antin, J. 2018. Algorithmic anxiety and coping strategies of Airbnb hosts. Paper presented at the Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
Juravich, T. 1985. Chaos on the shop floor: A worker's view of quality, productivity, and management. Philadelphia, PA: Temple University Press.
Kallinikos, J., & Tempini, N. 2014. Patient data as medical facts: Social media practices as a foundation for medical knowledge creation. Information Systems Research, 25(4): 817–833.
Kaplan, S. 2008. Framing contests: Strategy making under uncertainty. Organization Science, 19(5): 729–752.
Karppi, T., & Crawford, K. 2016. Social media, financial algorithms and the hack crash. Theory, Culture & Society, 33(1): 73–92.
Karreman, D., & Alvesson, M. 2004. Cages in tandem: Management control, social identity, and identification in a knowledge-intensive firm. Organization, 11(1): 149–175. doi:10.1177/1350508404039662.
Karunakaran, A. 2016. Regimes of quantification: Examining how predictive analytics shape occupational jurisdictions and accountability. Paper presented at the Academy of Management Annual Meeting. Anaheim, CA.
Karunakaran, A. 2018. In cloud we trust? Normalization of uncertainties in online platform services. Paper presented at the Academy of Management Proceedings.
Karunakaran, A. 2019. The social organization of algorithmic accountability: Occupational contestations in defining what constitutes fairness during the process of auditing an algorithm. Paper presented at the 35th European Group for Organizational Studies pre-colloquium. Edinburgh, UK.
Katal, A., Wazid, M., & Goudar, R. 2013. Big data: Issues, challenges, tools and good practices. Paper presented at the 2013 Sixth International Conference on Contemporary Computing (IC3).
Kaynak, F. E. 2019. Bootcamps: A new path for occupational entry. Stanford, CA: Stanford University.
Kellogg, K. 2011. Challenging operations: Medical reform and resistance in surgery. Chicago: University of Chicago Press.
Kellogg, K. 2018. Employment recontracting for mutually beneficial role realignment around a new technology in a professional organization. Paper presented at the Oxford Professional Services Conference. Oxford.
Kellogg, K., Myers, J., Gainer, L., & Singer, S. Forthcoming. Moving violations: Trainee status mobility up an illegitimate hierarchy for learning of new techniques when traditional expertise erodes. Organization Science.
Kellogg, K. 2014. Brokerage professions and implementing reform in an age of experts. American Sociological Review, 79(5): 912–941.
Kerfoot, B. P., & Kissane, N. 2014. The use of gamification to boost residents' engagement in simulation training. JAMA Surgery, 149(11): 1208–1209.
Kessinger, R., & Kellogg, K. 2019. Softening the edges of algorithmic evaluation: Relational work to mitigate negative worker outcomes associated with algorithmic evaluation. Paper presented at the MIT Economic Sociology Working Group Seminar. Cambridge, MA.
Kim, T. W. 2018. Gamification of labor and the charge of exploitation. Journal of Business Ethics, 152(1): 27–39.
King, K. G. 2016. Data analytics in human resources: A case study and critical review. Human Resource Development Review, 15(4): 487–495.
Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., Lease, M., & Horton, J. 2013. The future of crowd work. Paper presented at the Proceedings of the 2013 Conference on Computer Supported Cooperative Work.
Kittur, A., Smus, B., Khamkar, S., & Kraut, R. E. 2011. CrowdForge: Crowdsourcing complex work. Paper presented at the Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology.
Kittur, A., Yu, L., Hope, T., Chan, J., Lifshitz-Assaf, H., Gilon, K., Ng, F., Kraut, R. E., & Shahaf, D. 2019. Scaling up analogical innovation with crowds and AI. Proceedings of the National Academy of Sciences, 116(6): 1870–1877.
Kleemann, F., Voß, G. G., & Rieder, K. 2008. Un(der)paid innovators: The commercial utilization of consumer work through crowdsourcing. Science, Technology & Innovation Studies, 4(1): 5–26.
Kochan, T. A., Adler, P. S., McKersie, R. B., Eaton, A. E., Segal, P., & Gerhart, P. 2008. The potential and precariousness of partnership: The case of the Kaiser Permanente labor management partnership. Industrial Relations, 47(1): 36–65.
Kochan, T. A., Yang, D., Kimball, W. T., & Kelly, E. L. 2019. Worker voice in America: Is there a gap between what workers expect and what they experience? ILR Review, 72(1): 3–38.
Kulesza, T., Burnett, M., Wong, W.-K., & Stumpf, S. 2015. Principles of explanatory debugging to personalize interactive machine learning. Paper presented at the Proceedings of the 20th International Conference on Intelligent User Interfaces.
Kunda, G. 1992. Engineering culture: Control and commitment in a high-tech corporation. Philadelphia: Temple University Press.
Lakhani, K. 2016. Managing communities and contests to innovate with crowds. In D. Harhoff & K. Lakhani (Eds.), Revolutionizing innovation: Users, communities and open innovation: 109–134. Cambridge, MA: MIT Press.
Landay, J. 2019. Smart interfaces for human-centered AI. Stanford University Human-Centered Artificial Intelligence. Stanford, CA: Stanford University.
Lange, A.-C., Lenglet, M., & Seyfert, R. 2016. Cultures of high-frequency trading: Mapping the landscape of algorithmic developments in contemporary financial markets. Economy and Society, 45(2): 149–165.
Lanier, J., & Weyl, E. G. 2018. A blueprint for a better digital society. Harvard Business Review.
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. 2019. Doubting the diagnosis: How artificial intelligence increases ambiguity during professional decision making. New York University.
Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. 2015. Working with machines: The impact of algorithmic and data-driven management on human workers. Paper presented at the Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems.
Lehdonvirta, V. 2016. Algorithms that divide and unite: Delocalisation, identity and collective action in microwork. Space, place and global digital work: 53–80. Berlin, Heidelberg, Germany: Springer.
Lehdonvirta, V. 2018. Flexibility in the gig economy: Managing time on three online piecework platforms. New Technology, Work and Employment, 33(1): 13–29.
Lehdonvirta, V., Kässi, O., Hjorth, I., Barnard, H., & Graham, M. 2019. The global platform economy: A new offshoring institution enabling emerging-economy microproviders. Journal of Management, 45(2): 567–599.
Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. 2019. The challenges of algorithm-based HR decision-making for personal integrity. Journal of Business Ethics, 160(2): 1–16.
Lenglet, M. 2011. Conflicting codes and codings: How algorithmic trading is reshaping financial regulation. Theory, Culture & Society, 28(6): 44–66.
Lenglet, M., & Mol, J. 2016. Squaring the speed of light? Regulating market access in algorithmic finance. Economy and Society, 45(2): 201–229.
Leonardi, P., & Contractor, N. 2018. Better people analytics: Measure who they know, not just who they are. Harvard Business Review, 96(6): 70–81.
Leonardi, P. M., & Vaast, E. 2017. Social media and their affordances for organizing: A review and agenda for research. Academy of Management Annals, 11(1): 150–188.
Levy, K. E. 2015. The contexts of control: Information, power, and truck-driving work. The Information Society, 31(2): 160–174.
Levy, K., & Barocas, S. 2017. Designing against discrimination in online markets. Berkeley Technology Law Journal, 32: 1183.
Levy, K., & Barocas, S. 2018. Privacy at the Margins | Refractive surveillance: Monitoring customers to manage workers. International Journal of Communication, 12: 23.
Lifshitz-Assaf, H. 2018. Dismantling knowledge boundaries at NASA: The critical role of professional identity in open innovation. Administrative Science Quarterly, 63(4): 746–782.
Lindebaum, D., Vesa, M., & den Hond, F. 2020. Insights from "The Machine Stops" to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45(1): 1–17.
Lingo, E. L., & O'Mahony, S. 2010. Nexus work: Brokerage on creative projects. Administrative Science Quarterly, 55(1): 47–81.
Lintott, C., & Reed, J. 2013. Human computation in citizen science. Handbook of human computation: 153–162. Berlin, Heidelberg, Germany: Springer.
Lipsky, M. 2010. Street-level bureaucracy: Dilemmas of the individual in public service. New York: Russell Sage Foundation.
Little, G., Chilton, L. B., Goldman, M., & Miller, R. C. 2010. TurKit: Human computation algorithms on Mechanical Turk. Paper presented at the Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology.
Litwin, A. S. 2011. Technological change at work: The impact of employee involvement on the effectiveness of health information technology. ILR Review, 64(5): 863–888.
Liu, M., Brynjolfsson, E., & Dowlatabadi, J. 2018a. Do digital platforms reduce moral hazard? The case of Uber and taxis. Cambridge, MA: National Bureau of Economic Research.
Liu, M., Huang, Y., & Zhang, D. 2018b. Gamification's impact on manufacturing: Enhancing job motivation, satisfaction and operational performance with smartphone-based gamified job design. Human Factors and Ergonomics in Manufacturing & Service Industries, 28(1): 38–51.
Liu, Y.-E., Mandel, T., Brunskill, E., & Popovic, Z. 2014. Trading off scientific knowledge and user learning with multi-armed bandits. Paper presented at the EDM.
Lix, K., Goldberg, A., Srivastava, S., & Valentine, M. 2019. Expressly different: Interpretive diversity and team performance. Working Paper. Stanford University.
Lix, K., & Valentine, M. 2019. Kharma scores and team learning in software development gigs. Working Paper. Stanford University.
Loebbecke, C., & Picot, A. 2015. Reflections on societal and business model transformation arising from digitization and big data analytics: A research agenda. The Journal of Strategic Information Systems, 24(3): 149–157.
Lowe, N., Goldstein, H., & Donegan, M. 2011. Patchwork intermediation: Challenges and opportunities for regionally coordinated workforce development. Economic Development Quarterly, 25(2): 158–171.
Lucero, M. A., Allen, R. E., & Elzweig, B. 2013. Managing employee social networking: Evolving views from the National Labor Relations Board. Employee Responsibilities and Rights Journal, 25(3): 143–158.
MacKenzie, D. 2018. Material signals: A historical sociology of high-frequency trading. American Journal of Sociology, 123(6): 1635–1683.
MacKenzie, D. 2019. How algorithms interact: Goffman's "interaction order" in automated trading. Theory, Culture & Society, 36(2): 39–59.
Mallafi, H., & Widyantoro, D. H. 2016. Prediction modelling in career management. Paper presented at the 2016 International Conference on Computational Intelligence and Cybernetics.
Martin, D., Hanrahan, B. V., O'Neill, J., & Gupta, N. 2014. Being a turker. Paper presented at the Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing.
Massa, F. G., & O'Mahony, S. 2015. Scaling in the dark: Explaining repertoire escalation in dark communities. Paper presented at the Academy of Management Proceedings.
Mayer-Schönberger, V., & Cukier, K. 2013. Big data: A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.
Majchrzak, A., Faraj, S., Kane, G. C., & Azad, B. 2013. The contradictory influence of social media affordances on online communal knowledge sharing. Journal of Computer-Mediated Communication, 19(1): 38–55.
Martin, D., O'Neill, J., Gupta, N., & Hanrahan, B. V. 2016. Turking in a global labour market. Computer Supported Cooperative Work (CSCW), 25(1): 39–77.
McAfee, A., & Brynjolfsson, E. 2017. Machine, platform, crowd: Harnessing our digital future. New York: W. W. Norton.
McCann, M. W. 1992. Reform litigation on trial. Law & Social Inquiry, 17(4): 715–743.
McCann, M. W. 1994. Rights at work: Pay equity reform and the politics of legal mobilization. Chicago: University of Chicago Press.
McClelland, M. 2012. I was a warehouse wage slave. Mother Jones, March.
McLoughlin, I. P., Badham, R. J., & Palmer, G. 2005. Cultures of ambiguity: Design, emergence and ambivalence in the introduction of normative control. Work, Employment and Society, 19(1): 67–89.
Miller, C. C. 2015. Can an algorithm hire better than a human? The New York Times, June 25, 2015.
Mindell, D. A. 2015. Our robots, ourselves: Robotics and the myths of autonomy. New York: Viking Adult.
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. 2019. Model cards for model reporting. Paper presented at the Proceedings of the Conference on Fairness, Accountability, and Transparency.
Mokyr, J., Vickers, C., & Ziebarth, N. L. 2015. The history of technological anxiety and the future of economic growth: Is this time different? Journal of Economic Perspectives, 29(3): 31–50.
Mollick, E. R., & Rothbard, N. 2014. Mandatory fun: Consent, gamification and the impact of games at work. The Wharton School research paper series. Philadelphia, PA: Wharton.
Mollick, E., & Werbach, K. 2015. Gamification and the enterprise. The gameful world: Approaches, issues, applications. Cambridge, MA: MIT Press.
Moreo, P. J. 1980. Control, bureaucracy, and the hospitality industry: An organizational perspective. Journal of Hospitality Education, 4(2): 21–33.
Morrill, C., Zald, M., & Rao, H. 2003. Covert political conflict in organizations: Challenges from below. Annual Review of Sociology, 29: 391–415.
Muthukumaraswamy, K. 2010. When the media meet crowds of wisdom: How journalists are tapping into audience expertise and manpower for the processes of newsgathering. Journalism Practice, 4(1): 48–65.
Myers, J., & Kellogg, K. 2019. An expanded role for state actors: Coordinating workforce intermediation across multiple geographies and industries in four U.S. states. Paper presented at LERA 71st Annual Meeting, Cleveland, OH.
Nelson, A. J., & Irwin, J. 2014. "Defining what we do–all over again": Occupational identity, technological change, and the librarian/internet-search relationship. Academy of Management Journal, 57(3): 892–928.
Newell, S., & Marabelli, M. 2015. Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of datification. The Journal of Strategic Information Systems, 24(1): 3–14.
Nikolaidis, S., & Shah, J. 2012. Human-robot teaming using shared mental models. ACM/IEEE HRI.
Nissenbaum, H. 2009. Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford University Press.
Noble, S. U. 2018. Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.
Nussbaum, K., & DuRivage, V. 1986. Computer monitoring: Mismanagement by remote control. Business and Society Review, (56): 16–20.
O'Brien, R. L., & Kiviat, B. 2018. Disparate impact? Race, sex, and credit reports in hiring. Socius, 4: 1–20. Available at http://dx.doi.org/10.1177/2378023118770069.
O'Mahony, S., & Bechky, B. A. 2008. Boundary organizations: Enabling collaboration among unexpected allies. Administrative Science Quarterly, 53(3): 422–459. Available at https://doi.org/10.2189/asqu.53.3.422.
Obstfeld, D. 2005. Social networks, the tertius iungens orientation, and involvement in innovation. Administrative Science Quarterly, 50(1): 100–130.
O'Connor, S. 2015. Wearables at work: The new frontier of employee surveillance. Financial Times, 8.
O'Mahony, S., & Ferraro, F. 2007. The emergence of governance in an open source community. Academy of Management Journal, 50(5): 1079–1106.
O'Neil, C. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books.
Orlikowski, W., & Scott, S. V. 2014a. The algorithm and the crowd: Considering the materiality of service innovation.
Orlikowski, W. J., & Scott, S. V. 2014b. What happens when evaluation goes online? Exploring apparatuses of valuation in the travel sector. Organization Science, 25(3): 868–891.
Osterman, P. 2011. The promise, performance, and policies of community colleges. Reinventing higher education: The promise of innovation: 129–158. Cambridge, MA: Harvard Education Press.
Pachidi, S., Berends, H., Faraj, S., Huysman, M., & van de Weerd, I. 2014. What happens when analytics lands in the organization? Studying epistemologies in clash. Paper presented at the Academy of Management Proceedings.
Pardo-Guerra, J. P. 2019. Automating finance: Infrastructures, engineers, and the making of electronic markets. Cambridge, UK: Cambridge University Press.
Pasquale, F. 2015. The algorithmic self. The Hedgehog Review, 17(1): 30–46.
Pasquale, F. 2015. The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.
Payne, J. 2018. Manufacturing masculinity: Exploring gender and workplace surveillance. Work and Occupations, 45(3): 346–383.
Petre, C. 2018. Engineering consent: How the design and marketing of newsroom analytics tools rationalize journalists' labor. Digital Journalism, 6: 509–527.
Pine, K. H., Wolf, C., & Mazmanian, M. 2016. The work of reuse: Birth certificate data and healthcare accountability measurements. iConference 2016 Proceedings.
Pollert, A. 1981. Girls, wives, factory lives. London: Macmillan Press.
Postigo, H. 2016. The socio-technical architecture of digital labor: Converting play into YouTube money. New Media & Society, 18(2): 332–349.
Pronovost, P., & Vohr, E. 2010. Safe patients, smart hospitals: How one doctor's checklist can help us change health care from the inside out. London: Penguin.
Puranam, P. 2018. The microstructure of organizations. Oxford: OUP.
Puranam, P., Alexy, O., & Reitzig, M. 2014. What's new about new forms of organizing? Academy of Management Review, 39(2): 162–180.
Rahman, H. 2017. Reputational ploys: Reputation and ratings in online labor markets. Working Paper. Stanford University.
Rahman, H. A. 2018. Reputational ploys: Reputation and ratings in online markets. Paper presented at the Academy of Management Proceedings.
Rahman, H. 2019. From iron cages to invisible cages: Algorithmic evaluations in online labor markets. Working Paper. Stanford University.
Rahman, H., & Valentine, M. A. 2019. How client managers restrain control to keep control: Evidence from technologically-mediated gigs. Working Paper. Stanford University.
Ramamurthy, K. N., Singh, M., Davis, M., Kevern, J. A., Klein, U., & Peran, M. 2015. Identifying employees for re-skilling using an analytics-based approach. Paper presented at the 2015 IEEE International Conference on Data Mining Workshop (ICDMW).
Ramsay, R. A. 1966. Managers and men: Adventures in industry. Sydney, Australia: Ure Smith.
Ranganathan, A., & Benson, A. 2017. A numbers game: Quantification of work, accidental gamification, and worker productivity. Paper presented at the Academy of Management.
Raval, N., & Dourish, P. 2016. Standing out from the crowd: Emotional labor, body labor, and temporal labor in ridesharing. Paper presented at the Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing.
Retelny, D., Robaszkiewicz, S., To, A., Lasecki, W. S., Patel, J., Rahmati, N., Doshi, T., Valentine, M., & Bernstein, M. S. 2014. Expert crowdsourcing with flash teams. Paper presented at the Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology.
Roscigno, V. J., & Hodson, R. 2004. The organizational and social foundations of worker resistance. American Sociological Review, 69(1): 14–39.
Rosenblat, A. 2018. Uberland: How algorithms are rewriting the rules of work. Berkeley, CA: University of California Press.
Rosenblat, A., Levy, K. E., Barocas, S., & Hwang, T. 2017. Discriminating tastes: Uber's customer ratings as vehicles for workplace discrimination. Policy & Internet, 9(3): 256–279.
Rosenblat, A., & Stark, L. 2016. Algorithmic labor and information asymmetries: A case study of Uber's drivers. International Journal of Communication, 10: 3758–3784.
Roy, D. 1952. Quota restriction and goldbricking in a machine shop. American Journal of Sociology, 57(5): 427–442.
Roy, D. 1954. Efficiency and "the fix": Informal intergroup relations in a piecework machine shop. American Journal of Sociology, 60(3): 255–266.
Roy, D. 1959. "Banana time": Job satisfaction and informal interaction. Human Organization, 18(4): 158–168. Available at https://doi.org/10.17730/humo.18.4.07j88hr1p4074605.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., & Bernstein, M. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3): 211–252.
Sachon, M., & Boquet, I. 2017. KUKA: Planning for the future of automation. IESE Business School Case. Barcelona, Spain: Universidad de Navarra.
Sachs, S. E. 2019. The algorithm at work? Explanation and repair in the enactment of similarity in art data. Information, Communication & Society, 1–17.
Sachs, J. D., & Kotlikoff, L. J. 2012. Smart machines and long-term misery. Cambridge, MA: National Bureau of Economic Research.
Salehi, N., Irani, L. C., Bernstein, M. S., Alkhatib, A., Ogbe, E., & Milland, K. 2015. We are Dynamo: Overcoming stalling and friction in collective action for crowd workers. Paper presented at the Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems.
Salehi, N., McCabe, A., Valentine, M., & Bernstein, M. 2017. Huddler: Convening stable and familiar crowd teams despite unpredictable availability. Paper presented at the Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing.
Scheiber, N. 2017. How Uber uses psychological tricks to push its drivers' buttons. The New York Times, 2.
Schenk, E., & Guittard, C. 2011. Towards a characterization of crowdsourcing practices. Journal of Innovation Economics Management, 7(1): 93–107.
Schildt, H. 2017. Big data and organizational design – the brave new world of algorithmic management and computer augmented transparency. Innovation, 19(1): 23–30.
Scholz, T. 2012. Digital labor: The internet as playground and factory. Thames, UK: Routledge.
Scholz, T. 2016. Platform cooperativism. Challenging the corporate sharing economy. New York: Rosa Luxemburg Foundation.
Scholz, T., & Schneider, N. 2017. Ours to hack and to own: The rise of platform cooperativism, a new vision for the future of work and a fairer internet. New York: OR Books.
Schwartz, D. 2018a. Embedded in the crowd: Creative freelancers, crowdsourced work, and occupational community. Work and Occupations, 45(3): 247–282.
Schwartz, D. 2018b. Embedded in the crowd: Creative freelancers, crowdsourced work, and occupational community. Work and Occupations, 45: 247–282.
Schweyer, A. 2018. Predictive analytics and artificial intelligence in people management: 1–18. Incentive Research Foundation.
Seaver, N. 2017. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2): 1–12. Available at http://dx.doi.org/10.1177/2053951717738104.
Segal, L., Goldstein, A., Goldman, J., & Harfoush, R. 2014. The decoded company: Know your talent better than you know your customers. London: Penguin.
Selznick, P. 1943. An approach to a theory of bureaucracy. American Sociological Review, 8(1): 47–54.
Sewell, G. 1998. The discipline of teams: The control of team-based industrial work through electronic and peer surveillance. Administrative Science Quarterly, 43(2): 397–428.
Sewell, G., Barker, J. R., & Nyberg, D. 2012. Working under intensive surveillance: When does measuring everything that moves become intolerable? Human Relations, 65(2): 189–215.
Shah, J., Wiken, J., Williams, B., & Breazeal, C. 2011. Improved human-robot team performance using Chaski, a human-inspired plan execution system. Paper presented at the Proceedings of the 6th International Conference on Human-Robot Interaction.
Shaikh, M., & Cornford, T. 2010. Letting go of control to embrace open source: Implications for company and community. Paper presented at the 2010 43rd Hawaii International Conference on System Sciences.
Shapiro, A. 2018. Between autonomy and control: Strategies of arbitrage in the on-demand economy. New Media & Society, 20(8): 2954–2971.
Shaughnessy, H. 2018. Creating digital transformation: Strategies and steps. Strategy & Leadership, 46(2): 19–25.
Shestakofsky, B. 2017. Working algorithms: Software automation and the future of work. Work and Occupations, 44(4): 376–423.
Silberman, M., Irani, L., & Ross, J. 2010. Ethics and tactics of professional crowdwork. XRDS: Crossroads, The ACM Magazine for Students, 17(2): 39–43.
Sitkin, S. B., Cardinal, L. B., & Bijlsma-Frankema, K. M. 2010. Organizational control. Cambridge, UK: Cambridge University Press.
Smith, C. 2006. The double indeterminacy of labour power: Labour effort and labour mobility. Work, Employment and Society, 20(2): 389–402.
Smith, C. 2015. Continuity and change in labor process analysis forty years after Labor and Monopoly Capital. Labor Studies Journal, 40(3): 222–242.
Stanculescu, L. C., Bozzon, A., Sips, R.-J., & Houben, G.-J. 2016. Work and play: An experiment in enterprise gamification. Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing: 346–358. San Francisco, CA: ACM.
Star, S. L. 1995. Work and practice in social studies of science, medicine, and technology. Science, Technology & Human Values, 20(4): 501–507.
Stempien, R. J. 1984. The industrial normative consensus: A field study of machine operators in industry.
Strauss, A. 1985. Work and the division of labor. The Sociological Quarterly, 26(1): 1–19.
Sundararajan, A. 2016. The sharing economy: The end of employment and the rise of crowd-based capitalism. Cambridge, MA: MIT Press.
Tabrizi, B. N., Lam, E., Girard, K., & Irvin, V. 2019. Digital transformation is not about technology. Harvard Business Review, 13. Available at https://bluecirclemarketing.com/wp-content/uploads/2019/07/Digital-Transformation-Is-Not-About-Technology.pdf.
Taylor, F. W. 1911. The principles of scientific management. New York & London: Harper & Brothers.
Tempini, N. 2015. Governing PatientsLikeMe: Information production and research through an open, distributed, and data-based social media network. The Information Society, 31(2): 193–211.
Thaler, R. H., & Sunstein, C. R. 2009. Nudge: Improving decisions about health, wealth, and happiness. London: Penguin.
Thompson, P., & Smith, C. 2009. Labour power and labour process: Contesting the marginality of the sociology of work. Sociology, 43(5): 913–930.
Thompson, P., & Van den Broek, D. 2010. Managerial control and workplace regimes: An introduction. Work, Employment and Society, 24(3): 1–12.
Thompson, P., & Vincent, S. 2010. Labour process theory and critical realism. Working life: Renewing labour process analysis: 47–69. UK: Macmillan Education.
Thorp, A. A., Healy, G. N., Winkler, E., Clark, B. K., Gardiner, P. A., Owen, N., & Dunstan, D. W. 2012. Prolonged sedentary time and physical activity in workplace and non-work contexts: A cross-sectional study of office, customer service and call centre employees. International Journal of Behavioral Nutrition and Physical Activity, 9(128): 1–10.
Ticona, J., & Mateescu, A. 2018. Trusted strangers: Carework platforms' cultural entrepreneurship in the on-demand economy. New Media & Society, 20(11): 4384–4404.
Treem, J. W., & Leonardi, P. M. 2013. Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association. Annals of the International Communication Association, 36(1): 143–189.
Truelove, E. 2019. Integrating the crowd into the firm production process: The critical role of guided mobilization. Working Paper. Massachusetts Institute of Technology.
Tufekci, Z. 2014. Big questions for social media big data: Representativeness, validity and other methodological pitfalls. ICWSM, 14: 505–514.
Tufekci, Z. 2017. Twitter and tear gas: The power and fragility of networked protest. New Haven, CT: Yale University Press.
Turco, C. J. 2016. The conversational firm: Rethinking bureaucracy in the age of social media. New York: Columbia University Press.
Valenduc, G., & Vendramin, P. 2016. Work in the digital economy: Sorting the old from the new. Brussels, Belgium: European Trade Union Institute.
Valentine, M., & Hinds, R. 2019. Algorithms and the org chart. Working Paper. Stanford University.
Valentine, M. A. 2018. Renegotiating spheres of obligation: The role of hierarchy in organizational learning. Administrative Science Quarterly, 63(3): 570–606.
Valentine, M. A., Retelny, D., To, A., Rahmati, N., Doshi, T., & Bernstein, M. S. 2017. Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. Paper presented at the Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.
Vallas, S. P. 2019. Platform capitalism: What's at stake for workers? New Labor Forum, 28(1): 48–59.
Vallas, S. P., & Kovalainen, A. 2019. Taking stock of the digital revolution. Work and labor in the digital age: 1–12. Bingley, UK: Emerald Publishing Limited.
Vancil, R. F. 1982. Implementing strategy: The role of top management. Boston: Division of Research, Harvard Business School.
Varshney, K. R., Chenthamarakshan, V., Fancher, S. W., Wang, J., Fang, D., & Mojsilović, A. 2014. Predicting employee expertise for talent management in the enterprise. Paper presented at the Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Veale, M., Van Kleek, M., & Binns, R. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Paper presented at the Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
Veen, A., Barratt, T., & Goods, C. 2019. Platform-capital's appetite for control: A labour process analysis of food-delivery work in Australia. Work, Employment and Society. Available at https://doi.org/10.1177/0950017019836911.
Von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. 2008. reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895): 1465–1468.
Waardenburg, L., Sergeeva, A., & Huysman, M. 2018. Digitizing crime: How the use of predictive policing influences police work practices. Paper presented at the 34th European Group for Organizational Studies (EGOS) Colloquium: Surprise in and around Organizations: Journeys to the Unexpected.
Walz, S. P., & Deterding, S. 2014. The gameful world: Approaches, issues, applications. Cambridge, MA: The MIT Press.
Watkins Allen, M., Coopman, S. J., Hart, J. L., & Walker, K. L. 2007. Workplace surveillance and managing privacy boundaries. Management Communication Quarterly, 21(2): 172–200.
Weber, M. 1947. The theory of social and economic organization. New York: Oxford University Press.
Weber, M. 1968. Bureaucracy. In G. Roth & C. Wittich (Eds.), Economy and society: An outline of interpretive sociology. Berkeley, CA: University of California Press.
Weld, D. S., & Bansal, G. 2018. Intelligible artificial intelligence. Available at arXiv.org/abs/1803.04263.
West, J., & O'Mahony, S. 2008. The role of participation architecture in growing sponsored open source communities. Industry and Innovation, 15(2): 145–168. Available at https://doi.org/10.1080/13662710801970142.
Wexler, R. 2018. The odds of justice: Code of silence: How private companies hide flaws in the software that governments use to decide who goes to prison and who gets out. CHANCE, 31(3): 67–72.
Whyte, W. H. 1956. The organization man. New York: Simon & Schuster.
Wilson, H. J., Daugherty, P., & Morini-Bianzino, N. 2017. The jobs that artificial intelligence will create. MIT Sloan Management Review (Summer 2017).
Wing, J. M. 2006. Computational thinking. Communications of the ACM, 49(3): 33–35.
Winner, L. 1980. Do artifacts have politics? Daedalus, 109(1): 121–136.
Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. 2019. Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1): 56–75.
Wood, A., & Lehdonvirta, V. 2019. Platform precarity: Surviving economic insecurity in the gig economy. Paper presented at the SASE. New York.
Wood, A. J., Lehdonvirta, V., & Graham, M. 2018. Workers of the internet unite? Online freelancer organisation among remote gig economy workers in six Asian and African countries. New Technology, Work and Employment, 33(2): 95–112.
Xu, L. D., He, W., & Li, S. C. 2014. Internet of things in industries: A survey. IEEE Transactions on Industrial Informatics, 10(4): 2233–2243.
Yeung, K. 2017. "Hypernudge": Big Data as a mode of regulation by design. Information, Communication & Society, 20(1): 118–136.
Yin, P.-L., Davis, J. P., & Muzyrya, Y. 2014. Entrepreneurial innovation: Killer apps in the iPhone ecosystem. American Economic Review, 104(5): 255–259.
Zammuto, R. F., Griffith, T. L., Majchrzak, A., Dougherty, D. J., & Faraj, S. 2007. Information technology and the changing fabric of organization. Organization Science, 18(5): 749–762.
Zhou, S., Valentine, M., & Bernstein, M. S. 2018a. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems: 1–13. Montreal, QC, Canada: ACM.
Zhou, S., Valentine, M. A., & Bernstein, M. S. 2018b. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. Paper presented at the Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
Ziewitz, M. 2016. Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1): 3–16.
Zuboff, S. 1988. In the age of the smart machine: The future of work and power. New York: Basic Books.
Zuboff, S. 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile Books.
APPENDIX: METHODS
We based our analysis on a review of more than 1,100 articles that reported an empirical study of algorithmic, crowd, or platform technologies. We identified the articles through multiple stages. First, we ran a search on the Web of Science database and Google Scholar for the following keywords: "algorithm*," "automation," "crowd*," or "platform*." We selected 2005 as the loose starting point, a period that represented an inflection point in algorithmic capabilities. Consistent with the motivation of our review, the search included peer-reviewed conference proceedings or journals in any social science field, including interdisciplinary social science fields such as human–computer interaction; science, technology, and society; and critical algorithms studies. We next skimmed the abstracts of all of these articles to identify studies that reported empirical studies of work contexts. We included empirical articles (i.e., articles reporting some kind of data, such as observational, archival or trace, or survey data). At this stage, we excluded studies of leisure or home contexts, theoretical pieces, and review articles, although we reviewed the citations of the review articles to find additional articles to include. In our final review, we realized that some technologies were developing more quickly than reflected in peer-reviewed articles, so we also included case studies and practitioner journal pieces as motivating examples. Finally, we circulated the article to two experts in each of the interdisciplinary fields to solicit additional citations.
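For readers who want a concrete picture of the first-stage screen described above, the minimal Python sketch below filters a set of article records by the four search terms and the 2005 starting point. It is an illustration only: the record fields, function names, and example data are assumptions made for exposition, not the scripts we used, and the later stages (abstract screening for empirical work contexts and expert consultation) were manual judgment calls that do not reduce to code.

    import re

    # The four search terms from the keyword query; "\w*" stands in for
    # the wildcard "*" in the original search strings.
    KEYWORDS = [r"algorithm\w*", r"automation", r"crowd\w*", r"platform\w*"]
    PATTERN = re.compile("|".join(KEYWORDS), re.IGNORECASE)

    def keyword_match(record):
        # Check the title and abstract for any of the search terms.
        text = record.get("title", "") + " " + record.get("abstract", "")
        return bool(PATTERN.search(text))

    def screen(records, start_year=2005):
        # Keep records from the loose 2005 starting point onward whose title
        # or abstract matches a search term; the empirical and work-context
        # screening happened in a later, manual pass.
        return [r for r in records
                if r.get("year", 0) >= start_year and keyword_match(r)]

    # Hypothetical usage with one illustrative record.
    articles = [{"title": "Working for an algorithm", "abstract": "", "year": 2019}]
    print(screen(articles))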