Accommodation-through-Bypassing: Overcoming Professionals’ Resistance to the

Implementation of Algorithmic Technology

by

Ari Brendan Galper
B.A. Sociology, Reed College, 2014

SUBMITTED TO THE SLOAN SCHOOL OF MANAGEMENT IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE IN MANAGEMENT RESEARCH

at the

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

May 2020

©2020 Massachusetts Institute of Technology. All rights reserved.

Signature of Author: __________________________________________________________
Department of Management
May 8, 2020

Certified by: _________________________________________________________________
Katherine C. Kellogg
David J. McGrath Jr. (1959) Professor of Management and Innovation
Professor, Work and Organization Studies
Thesis Supervisor

Accepted by: _________________________________________________________________
Catherine Tucker
Sloan Distinguished Professor of Management
Professor, Marketing
Faculty Chair, MIT Sloan PhD Program


Accommodation-through-Bypassing: Overcoming Professionals’ Resistance to the

Implementation of Algorithmic Technology

by

Ari Brendan Galper

Submitted to the Department of Management on May 8, 2020 in Partial Fulfillment of the Requirements for the Degree of Master of Science in Management Research

ABSTRACT

While algorithmic technologies are rapidly changing how work is performed in professional organizations, professional workers are resisting the implementation of these technologies in their workplaces. Previous studies of the development and implementation of workplace technologies suggest that managers or technology developers respond to workers’ resistance in a way that is intended to make workers more amenable to using the technology, and that professionals’ recalcitrance can ultimately impede the adoption of new technology despite managers’ or developers’ efforts. In a 21-month field study of the development and implementation of three cases of machine learning-based algorithmic technology, I find that the development and implementation of algorithmic technologies can proceed in the face of professionals’ resistance when developers bypass the professional workers and repurpose the technology for use by another group of actors that is present in professional work settings: managers, administrators, and other “central” actors. Using the post-humanist concept of tuning—which views technology development as a dialectic between resistance and accommodation—I show that in order to carry out accommodation-through-bypassing, developers engage in a series of practices whereby they strategically manage relations with the resistant “local” actors, the technology itself, and the “central” actors. These findings highlight that the dialectic of resistance and accommodation that characterizes the technology development process can occur even in the face of strong professional recalcitrance, that accommodation can be strategically geared toward workers who are not the originally-intended users of a technology, and that technology developers can play a key role in influencing workplace relations in professional organizations.

Thesis Supervisor: Katherine C. Kellogg

Title: David J. McGrath Jr. (1959) Professor of Management and Innovation; Professor, Work and Organization Studies


Introduction

The theme of resistance plays a critical role in studies of technology development and implementation in the workplace. Scholars have documented the reasons why workers resist new technologies (e.g. Barley 1986; Bechky 2003; Carlile 2004; Christin 2017; Huising 2014; Orlikowski 1993), the methods by which workers resist (e.g. Burawoy 1979; Christin 2017; Edwards 1979; Noble 1984; Prasad and Prasad 1998, 2000), as well as the implications of workers’ resistance for broader workplace dynamics (e.g. Barley 1986; Barrett et al. 2012; Edwards 1979; Pachidi et al. 2020; Zuboff 1988). However, resistance is not simply a means by which workers counter the development and implementation of workplace technology. Studies that view technology through a post-humanist perspective extend agency to the material realm and conceptualize resistance as a key component of the process by which workers and technology interact with each other. In particular, Pickering (1993, 1995) introduces the concept of tuning to describe the interplay between resistance—or the failure of human actors to achieve the intended capture of material agency—and accommodation, or the active human strategy of responding to material resistance; and Barrett et al. (2012) extend tuning to allow for multiple forms of resistance and accommodation from different agencies—human and material. It is this dialectic of resistance and accommodation that comprises the interplay between workers and workplace technology.

Within firms, managers and technology developers can accommodate workers’ resistance to new technology in a number of ways, including by revising goals, modifying the material form of the technology, shifting human frames and activities, and adjusting the social or political relations associated with the technology (Barrett et al. 2012:1450; Pickering 1993, 1995). The activities that comprise accommodation are geared toward making the technology’s intended users less resistant and more amenable to using the technology. For instance, managers and developers may employ discursive strategies to convince intended users to buy in to a technology (e.g. Bloomfield et al. 1992), or they may alter a technology in order to make it more useful to users (e.g. Leonardi 2011). Additionally, workers may alter their routines and develop “workarounds” to what they perceive as the inconvenience or shortcomings of a technology (e.g. Azad and King 2012; Boudreau and Robey 2005).

Though the literature on technology and organizations has not explicitly investigated whether tuning occurs differently in different types of work contexts, there are a number of reasons why we might expect the dialectic of resistance and accommodation to proceed differently in professional work contexts than it does in non-professional work contexts. Because of the discretion and autonomy that professionals are afforded in their work, and because of the professions’ ability to control the distribution and practice of their expert knowledge (Abbott 1988), professionals may be able to resist the implementation of new workplace technologies in ways that their non-professional counterparts cannot. Thus, professional workers like police officers (Brayne 2017), judges (Christin 2017, 2018), and healthcare professionals (Bloomfield et al. 1992; Constantinides and Barrett 2005; Lapointe and Rivard 2005; Markus 2004) have a greater ability than their non-professional counterparts to actively resist—and even reject outright—new workplace technologies without the threat of reprimand. In cases of strong professional recalcitrance, managers and technology developers may be unable to accommodate resistance in a way that leads to the intended users’ adoption of the technology. As such, in many accounts of professional workers’ resistance, the tuning process effectively stalls at this point, and the technology in question is not implemented in the workplace.

We might expect professionals’ resistance to manifest particularly strongly against technologies that represent clear threats to professional work. Such technologies include algorithmic technologies, which following Kellogg et al. (2020:366) I define as “computer-programmed procedures that transform input data into desired outputs in ways that tend to be more encompassing, instantaneous, interactive, and opaque than previous technological systems.” Algorithmic technologies raise doubts about the reliability of expert judgement by revealing patterns of inefficiency and discrimination in professionals’ work. Certain algorithmic technologies, like decision support tools, are designed to counteract these patterns by intervening at the site of expert judgement and correcting for the subjective nature of professional decision-making with purportedly objective guidance (Timmermans and Berg 2003). Such interventions threaten users’ discretion and autonomy—two of the defining characteristics of professional work (Abbott 1988)—and tend to elicit strong resistance from professional workers. Given that algorithmic technologies are increasingly commonplace in professional work settings, the issue of professional workers’ resistance to technology carries particular relevance today.

While the existing literature is critical to explaining the processes of technology development, implementation, and resistance in work settings, it cannot account for the outcomes that I observed in my 21-month ethnographic study of the development and implementation of three algorithmic technologies in three healthcare organizations. In each of my cases, a development team sought to develop and implement an algorithmic technology—specifically, a machine learning (ML)-based clinical decision support tool—in a hospital setting; however, the intended users of the technology—professional healthcare workers, including doctors, nurses, and case managers—actively resisted its implementation. The literature would lead us to expect first that the development teams would attempt to accommodate the healthcare workers’ resistance in a way that would make the workers more amenable to using the technologies, and second that healthcare professionals’ recalcitrance would likely impede the adoption of the tools in the hospitals. However, neither of these expectations was borne out: developers did not aim their accommodation efforts toward the technologies’ resistant users, and the algorithmic technologies nonetheless continued being developed and planned for implementation within the hospitals. The persistence of the technologies’ development and implementation in each of the three settings thus raises the question: How do the development and implementation of algorithmic technologies occur in the face of resistance by powerful professionals?

I found that technology development and implementation can proceed in the face of professionals’ resistance when developers bypass the professionals and repurpose their technology for use by another group of workers that is present in professional work settings: managers, administrators, and other “central” actors. In order to overcome professionals’ resistance and carry out this accommodation-through-bypassing, the development teams that I studied engaged in a series of practices whereby they strategically managed relations with the resistant “local” technology users, the technology itself, and the central technology users. Managing relations with local users involved observing local users’ engagement with the technology and obtaining local users’ expert knowledge; managing relations with the technology involved incorporating local users’ expert knowledge and redesigning around technological resistance; and managing relations with central users involved demonstrating the technology to central users and obtaining new data by establishing trust with central users.

These findings have multiple implications for our understanding of the development and implementation of algorithmic technology in professional settings. First, they show that the dialectic of resistance and accommodation that characterizes the technology development process can occur even in the face of strong professional recalcitrance. Second, these findings show that accommodation does not necessarily involve accommodating a technology’s initially-intended users, but can instead be strategically geared toward another, less-resistant group of users. Finally, these findings highlight the critical role that technology developers play in facilitating relations between workers, managers, and workplace technology.

Technology and Resistance in the Workplace

Resistance by and against Technology

While a number of theoretical perspectives within the literature on technology and organizations examine the role of resistance in human-technology interaction, the post-humanist perspective recognizes material agency as a force that works alongside human agency to influence social relations. For instance, Pickering (1993, 1995) contends that the human and material realms shape and transform each other, with each restructuring and reconfiguring the other during encounters between a technology and its users or developers. He describes this idea as a process of tuning that occurs through the dialectical interplay between technological resistance—the failure of human actors to achieve the intended capture of material agency—and human accommodation—the active human strategy of responding to material resistance. Pickering illustrates this dialectic with the example of a scientist developing a piece of equipment for conducting particle physics experiments. The equipment resists the scientist by not producing the desired type of experimental outcomes; the scientist accommodates the technology’s resistance by altering the construction of the equipment, modifying his own experimental techniques, and revising his theoretical assumptions about the physics underlying his experiments (Barrett et al. 2012:1451). With this example, Pickering highlights how resistance and accommodation are at the heart of the process by which the material and human realms—technologies and technology users—are interactively restructured with respect to each other (Pickering 1993:585).

It is not only material agency that acts as a force of resistance in the tuning process: Barrett et al. (2012) extend tuning to allow for multiple forms of resistance and accommodation from different agencies—human and material. Accordingly, scholars of workplace dynamics have documented a number of reasons why workers might resist the implementation of new technologies or other types of organizational changes, including challenges to workers’ identity, expertise, and status (Barley 1986; Barrett et al. 2012; Beane 2019; Bechky 2003; Christin 2017; Huising 2014; Kellogg 2011; Timmermans and Berg 2003); differences in interests, meanings, and information between relevant parties (Carlile 2004; Kellogg 2014; Pachidi et al. 2020); and concerns about the usefulness of a change or the workflow disruption that it may cause (Orlikowski 1993).

These reasons for worker resistance correspond to a variety of types of resistance acts. Protests, strikes, and other forms of organized collective resistance have historically played a significant role in shaping workplace relations (Edwards 1979; Friedman 1977); however, more common are acts of routine resistance, which refer to less visible and more indirect forms of opposition that can take place within the everyday contexts of work and organizations (Prasad and Prasad 1998, 2000; Scott 1985). Workers can resist changes to the workplace through a number of routine means, including foot-dragging (Christin 2017; Scott 1985); gaming, or “manipulating rules and numbers in ways that are unconnected to, or even undermine, the motivation behind them” (Espeland and Sauder 2007:29; see also Christin 2017); and open critique (Christin 2017; Prasad and Prasad 2000). Workers can also develop “workarounds,” using a technology in ways that resist managers’ or developers’ intentions (Azad and King 2012; Boudreau and Robey 2005).

Within firms, managers or technology developers can accommodate workers’ resistance to new technology in a number of ways, including by revising goals, modifying the material form of the technology, shifting human frames and activities, or adjusting the social or political relations associated with the technology (Barrett et al. 2012:1450; cf. Pickering 1993, 1995). For instance, managers and developers may employ discursive strategies to convince intended users to buy in to a technology (e.g. Bloomfield et al. 1992), or they may alter a technology in order to make it more useful to users (e.g. Leonardi 2011). Workers might also develop workarounds as a form of accommodation, altering their routines in order to accommodate the perceived inconvenience or shortcomings of a technology (e.g. Azad and King 2012; Boudreau and Robey 2005). Following these examples, we would expect the activities that comprise accommodation to be geared toward making a technology’s intended users less resistant and more amenable to the technology.

Technology and Professional Work

There are a number of reasons why we might also expect technology development and implementation to proceed differently (and encounter different types of resistance) in professional settings than in non-professional settings. First, given the discretion that professionals are afforded in their work, and given the professions’ ability to control the distribution and practice of their expert knowledge (Abbott 1988), professionals have the ability to resist threatening technologies in ways that are less viable for non-professionals. Implicit in studies of professional resistance is the notion that workers like police officers (Brayne 2017), judges (Christin 2017), or physicians (Bloomfield et al. 1992; Constantinides and Barrett 2005; Lapointe and Rivard 2005; Markus 2004) are able to reject a new workplace technology outright without the threat of reprimand; the same does not seem to be true for their non-professional counterparts (cf. Pachidi et al. 2020). Accordingly, while some of the doctors, nurses, and other healthcare professionals who were the intended users of the algorithmic technologies that I studied engaged in various forms of “foot-dragging,” others simply stopped participating in the interventions soon after they were introduced.

Second, professionals face a variety of unique threats to the nature of their work and the jurisdiction that demarcates it. Expert knowledge is increasingly under attack and subject to growing critique; professionals are increasingly blamed for the perpetuation of “broken” systems that are deemed both inefficient and discriminatory, and many areas of expert work that used to be protected from quantitative evaluation are now asked to comply with a growing number of metrics and standards (Christin 2017:2–3; see also Espeland and Sauder 2016; Eyal 2019; Strathern 2000). Increasing public skepticism toward expert knowledge primes professionals to put up defensive barriers against such incursions into their jurisdiction (Timmermans and Berg 2003). While professional workers surely share with non-professionals the desire to maintain whatever autonomy and identity they possess in the workplace, the distinctive threats that are befalling professional workers (as well as the distinctive status-positions from which professional workers are receiving these threats) make it critical to acknowledge that professionals likely resist the implementation of new technology in the workplace in distinctive ways.

Algorithmic Technologies

One driver of the increasing skepticism toward expert knowledge is the proliferation of algorithmic technologies, which following Kellogg et al. (2020:366) I define as “computer-programmed procedures that transform input data into desired outputs in ways that tend to be more encompassing, instantaneous, interactive, and opaque than previous technological systems.” Certain types of algorithmic technologies are used to augment workers’ performance of work tasks by making recommendations for action that are based on underlying patterns found in large quantities of data. Despite the intentions of their developers, these technologies also constitute a new form of managerial control over the work process: algorithmic control, whereby algorithmic technologies prompt workers to make specific decisions while continuously and covertly restricting the information made available to workers (Kellogg, Valentine, and Christin 2020:373). Workers can experience this new form of control in a variety of negative ways: because algorithmic recommendations may not be intelligible to workers and can reinforce social inequalities, and because information restrictions can prevent workers from communicating with managers and with one another, workers may experience frustration and diminished well-being under algorithmic control (Kellogg et al. 2020; see also Orlikowski and Scott 2014; Pachidi et al. 2020; Rosenblat and Stark 2016).

While various kinds of workers may have reason to resist algorithmic control, algorithmic technologies are uniquely threatening to professionals because they can reveal patterns of inefficiency and discrimination in professional work, thereby raising doubts about the reliability of expert judgement. Certain algorithmic technologies—including the machine learning-based decision support tools in the current study—are designed to counteract patterns of inefficiency and discrimination by intervening at the site of expert judgement and correcting for the subjective nature of professional decision-making with purportedly objective guidance (Timmermans and Berg 2003). Such interventions threaten users’ discretion and autonomy—two of the defining characteristics of professional work—and are likely to elicit strong resistance from professional workers (Abbott 1988; Timmermans and Berg 2003). Given that algorithmic technologies are increasingly commonplace in a wide variety of work settings, the issue of professional workers’ resistance to algorithmic technology begs investigation.

The literature on technology development and algorithmic technologies would lead us to expect that the developers of the algorithmic technologies that I studied would attempt to accommodate healthcare professionals’ resistance in a way that would lower the professionals’ barriers to using the technologies. The literature would also lead us to expect that healthcare professionals’ recalcitrance would ultimately impede the adoption of the technologies in their respective hospitals. However, neither of these expectations was borne out: developers did not aim their accommodation efforts toward the technologies’ resistant users, and the technologies nonetheless continued being developed and planned for implementation within the hospitals. The persistence of the technologies’ development and implementation in each of the three settings thus raises the question of how the development and implementation of the algorithmic technologies occurred in the face of resistance by powerful professionals.

Methods

I conducted a 21-month field study of the development and implementation of three algorithmic technologies designed for use in US hospitals. Using data from longitudinal observations as well as formal and real-time interviews (Barley and Kunda 2001) with technology development teams, healthcare professionals, and clinical managers and administrators, I inductively analyzed field notes and interview transcripts to understand the processes by which these groups interact with an algorithmic technology that is being developed for and implemented in their workplace.

Background and Cases

Three Cases of Algorithmic Technology

The specific cases of algorithmic technology that I studied were initially designed as algorithmic recommendation systems, or decision support systems that utilize ML techniques to offer suggestions that prompt a targeted user to make a decision preferred by choice architects (Kellogg et al. 2020:372). Algorithmic recommendation systems are becoming increasingly commonplace in healthcare organizations, with medical-image diagnostic systems and other complex decision support systems already beginning to transform clinical and operations processes in hospitals and clinics (Yu, Beam, and Kohane 2018).

Each of the three algorithmic recommendation systems that I studied (which, for purposes of brevity, I will refer to as ‘ML tools’) began as a clinical decision support tool planned for implementation in a particular hospital. During the initial implementation period for each tool, the intended users—doctors, nurses, and case managers—largely resisted integrating the technology into their work. Despite the differences between the ML tools’ development teams as well as the differences in the tools’ use-cases and implementation settings (which I will detail below), the development teams all responded to workers’ resistance in a similar way: the development teams changed their plans to implement the ML tools locally—that is, with doctors, nurses, and other individual providers who work on the hospital floor at the point of care—and instead decided to implement the tools centrally—that is, in administrative offices overseen by managers and top-brass clinical researchers. Below, I provide detailed descriptions of the initial, local versions of the tools, as well as their later, centralized versions (see Table 1).

Discharge Risk Tool

The initial idea for the Discharge Risk Tool was born out of a collaboration between a clinical research team at a major academic medical center in New England consisting of physician-researchers, non-clinical project managers, and IT staff; and an academic research team at a nearby university consisting of a professor and a number of postdoctoral researchers who specialize in machine learning methods. Together, the members of these research teams comprised the development team for the Discharge Risk Tool.

The local version of the Discharge Risk Tool was developed to reduce clinically-unnecessary length-of-hospital-stay of surgical patients by providing a risk score indicating the likelihood of readiness-for-discharge for each inpatient in the medical center’s surgical units. The risk scores were intended to prompt select case managers, nurses, and physicians to make decisions that would increase bed availability by reducing unnecessary patient length-of-stay.

While the local version of the Discharge Risk Tool was a decision support tool meant to be used at the point of care, the central version of the tool was meant to be used as part of a new “Centralized Capacity Center” that had been recently established in the hospital in an effort to centrally coordinate bed capacity issues across units. The basic functionality of the tool was to remain similar to its local functionality; however, rather than being used by individual nurses, doctors, and case managers to assess the likelihood of discharge of their own patients, the tool would be used by the admitting staff and resource clinicians who work in the new Center. These workers would use the information that the tool provides in order to guide their regular communication with clinical staff regarding patient discharge.
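Although the thesis does not describe the Discharge Risk Tool’s internals, the following minimal sketch illustrates the general shape of such a readiness-for-discharge score. The feature names, the logistic-regression model, and the synthetic training data are all assumptions made for illustration only and are not drawn from the tool studied here.

```python
# Minimal sketch of a discharge-readiness risk score. All feature names,
# the modeling choice, and the synthetic data are hypothetical; the actual
# Discharge Risk Tool's internals are not documented in this study.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical inpatient features for a surgical unit.
X = pd.DataFrame({
    "days_since_surgery": rng.integers(0, 10, size=n),
    "has_tubes_or_drains": rng.integers(0, 2, size=n),
    "is_npo": rng.integers(0, 2, size=n),  # nothing-by-mouth status
})
# Hypothetical label: was the patient actually discharged within 24 hours?
y = (X["days_since_surgery"] > 3) & (X["has_tubes_or_drains"] == 0)

model = LogisticRegression().fit(X, y)

# The "risk score" surfaced to users: probability of readiness-for-discharge.
scores = model.predict_proba(X)[:, 1]
print(pd.Series(scores, name="discharge_risk_score").head())
```

In the local version, per-patient scores of this kind would have been surfaced to nurses and case managers at the point of care; in the central version, the same kind of score would feed the hospital-wide patient lists reviewed in the Centralized Capacity Center.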

Surgery Risk Tool

The Surgery Risk Tool was developed by a small healthcare technology start-up. The tool’s development team consisted of the start-up’s few key employees: the “business owner” and two developers. This core group was in occasional communication with members of an advisory board consisting of high-level clinical and non-clinical industry representatives from a variety of health systems. The development team also worked in collaboration with two physician-researchers who held leadership positions in the large academic medical center where the tool was being implemented.

The local version of the Surgery Risk Tool was developed to assist surgeons in understanding the risk and potential outcomes of patients with acute surgical problems in need of an emergency operation, and to help surgeons counsel patients and families prior to surgery. When determining whether to perform an operation on a surgical inpatient, surgeons would access a mobile phone-based application that generated a risk estimation of the likelihood of the patient’s post-operative mortality. These risk estimations were intended to prompt surgeons, patients, and families to make decisions that would decrease incidence of patient mortality and increase their understanding of the risk associated with emergency surgery operations.

While the local version of the Surgery Risk Tool was a clinical decision support tool meant to be used at the point of care, the central version of the tool is a benchmarking tool designed for use by clinical and administrative leaders to aid with assessing program, physician-team, and individual physician-level performance relative to averages. The vision for the use of the central version of the tool was that clinical leaders would use the tool as a reference guide during the briefs and debriefs that occur during regularly scheduled forums for patient case review, including grand rounds and morbidity and mortality (M&M) conferences.
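The thesis does not specify how the benchmarking version computes performance relative to averages. One common approach, sketched below purely as an illustrative assumption, is a risk-adjusted observed-to-expected comparison in which the model’s predicted risks supply the “expected” outcomes for each team’s particular patient mix; the team names and numbers are invented.

```python
# Hypothetical sketch of risk-adjusted benchmarking: compare each team's
# observed mortality to what a risk model would expect for its patient mix.
# Team names and numbers are invented; this is not the actual tool's method.
import pandas as pd

cases = pd.DataFrame({
    "team":           ["A", "A", "A", "B", "B", "B"],
    "died":           [0,   1,   0,   0,   0,   1],
    "predicted_risk": [0.05, 0.40, 0.10, 0.02, 0.03, 0.20],  # model outputs
})

benchmark = cases.groupby("team").agg(
    observed=("died", "sum"),
    expected=("predicted_risk", "sum"),
)
# Ratios above 1.0 flag performance worse than the risk model would expect.
benchmark["o_to_e_ratio"] = benchmark["observed"] / benchmark["expected"]
print(benchmark)
```

A comparison of this kind is one plausible way a leadership team could review performance relative to averages during forums such as M&M conferences, as described above.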

C-section Risk Tool

The C-section Risk Tool was developed by the same start-up company that developed the Surgery Risk Tool, and the functionality of the local and central versions of the two tools—both in terms of the types of modeling techniques that they used as well as how healthcare workers were intended to interact with them—was quite similar. The C-section Risk Tool’s development team consisted of the start-up’s few key employees as well as a clinician-researcher who held a leadership role at a large academic medical center in the Midwest where the tool was initially piloted.

Data Collection

Over the course of 21 months (June 2018-March 2020), I collected data about the development and use of these three ML tools from in-person observations as well as in-person, phone, and video interviews with development team members, local and central end-users, and hospital clinical leadership.

Data collection for the case of the Discharge Risk Tool began in June 2018 (shortly after development of the tool began) and ended in March 2020. For the first 12 months of data collection, I attended development meetings and conducted observation sessions of developers and users at the hospital where the Discharge Risk Model was being implemented.

Table 1. Three Cases of Algorithmic Technology

Discharge Risk Tool
Local use: Clinical decision support tool introduced to physicians, nurses, and case managers to predict patients’ readiness for discharge
Centralized use: Tool planned for use by admitting staff and clinical leaders in a “Centralized Capacity Center” during communication with clinical staff regarding patient discharge

Surgery Risk Tool
Local use: Clinical decision support tool introduced to surgeons to predict risk of morbidity in case of emergency surgery
Centralized use: Tool planned for use by hospital leadership as a benchmarking tool for assessing hospital and physician performance relative to national averages

C-section Risk Tool
Local use: Clinical decision support tool introduced to OB/GYNs to predict patients’ risk of complications in case of C-section
Centralized use: Tool planned for use by hospital leadership as a benchmarking tool for assessing hospital and physician-group performance relative to national guidelines


Meetings included the weekly developers’ meeting, where a group of developers working on various projects in the hospital met to update each other on their progress, solve issues, and define their work tasks for the following week; and a bi-weekly meeting between developers, researchers, and the clinician-researchers who were championing the Model in the hospital. During this time, I also attended morning clinical rounds approximately once per week on the units where the Discharge Risk Model was to be used, and shadowed the targeted end-users of the model as they conducted their work on the units. I also conducted “real-time” interviews (Barley and Kunda 2001) with these end-users during the shadowing sessions, in addition to a series of private, semi-structured interviews with end-users, developers, and clinical managers. Once the local tool was pulled from the floor and development on the centralized version of the tool began, I continued to attend weekly developers’ meetings on a regular basis and conducted regular interviews with key members of the development team through the end of the data collection period.

Data collection for the cases of the Surgery Risk Tool and C-Section Risk Tool began in October 2019, shortly after development of the tools had begun, and lasted through April 2020. For the Surgery Risk Tool, I attended weekly project-specific conference calls with the development team, and conducted regular in-person, phone, and video interviews with members of the development team. Additionally, I attended a number of in-person meetings and conference calls between development team members and key clinical stakeholders at the hospital where the tool was being implemented. Data collection for the C-section Risk Tool also involved weekly project-specific conference calls with the development team (for this tool, weekly calls often included key clinical stakeholders), as well as regular in-person, phone, and video interviews with members of the development team.

Across all three cases, I observed 38 development team meetings and conducted 32 interviews. These interviews do not include the “real-time” interviews that I conducted with tool users during 39 observation sessions on the hospital floor in the case of the Discharge Risk Tool. All participants were assured of anonymity. Interviews were audio-recorded only when the informant provided consent. The structure of interviews varied according to the informant’s role and the phase of data collection, with most interviews being semi-structured and lasting between thirty minutes and one hour.

Data Analysis

My inductive analysis consisted of repeated coding of field notes and interview transcripts, as well as the writing of bi-weekly memos in which I iterated between observations from my data and existing theory. Since data collection for the Discharge Risk Tool began months earlier than it did for the two other cases of ML tools, initial phases of analysis focused on relations between the resistant local users of that tool and the tool’s development team. However, as data collection for the Surgery Risk Tool and C-section Risk Tool began, and as the implementation of the local version of all three tools was suspended in the face of user resistance, I began to focus data collection and analysis on the ways in which the development teams were reacting to these constraints and altering their plans accordingly.

Given that the literature on technology development and implementation led me to expect that a technology may not be implemented at all within a particular setting if it was resisted by professional workers, I was surprised to find that all three development teams sought to salvage their technology and implement it within the same organization where it had initially been resisted. In order to trace the ways in which development teams were continuing their development and implementation efforts despite professionals’ resistance, I began to open-code examples of the specific practices that the team within each case was using with respect to the other groups of actors involved in ML tool development and implementation—the local users, the technology, and the central users. As the trajectories of the three cases continued to converge—all three teams planned to “bypass” local workers’ resistance by implementing a version of their ML tool at a higher level within each hospital—I began comparing data across cases, noting similarities in the practices that each development team was using as they engaged with the other groups of actors in their respective hospital setting. Through successive iterations of re-analyzing data, identifying patterns across cases, and combining codes, I identified six key practices through which development teams managed relations with local users, the technology, and central users—two practices within each relational dyad.

Findings

Initial Reactions to Technology Implementation: Worker Resistance

Each of the three ML tools encountered resistance from the outset of its implementation. The underlying reasons for workers’ resistance generally accorded with what the literature would lead us to expect, including concerns about technology usefulness and workflow disruption (Orlikowski 1993). However, workers’ resistance was also a response to the particular characteristics of the technology being implemented.

For instance, some doctors, nurses, and other workers across the cases resisted the ML tool being implemented in their workplace on the basis of the threat to their professional identity, expertise, and status that the tool represented. Workers’ resistance to this threat often manifested through concerns about the accuracy or bias of the ML tools, which the social scientific literature on algorithms identifies as a major impediment to the responsible implementation of algorithmic technology (Angwin et al. 2016; Barocas and Selbst 2016; Noble 2018; O’Neil 2016). Concerns about accuracy and bias were particularly evident in the case of the Discharge Risk Tool, as when a nurse in the surgery unit pointed to the Model’s predictions on her computer screen and noted, “I look at this and see that this patient is predicted to go home. But there is no way that this patient is leaving today. […] These are obvious cases, and there are wrong predictions like this almost every day.”

The intended users of the Discharge Risk Tool felt that they ought to maintain the authority to determine whether a patient should be discharged because they are able to more accurately and efficiently assess a patient’s readiness-for-discharge than an algorithm. For instance, when referring to the tool, one of the case managers noted that “when I know that a patient just had a surgery yesterday, is full of tubes, is NPO… I don’t need this thing to tell me that they’re not going home today. Duh! It’s a waste of time!” In the eyes of some users, the tool also represented a symbolic affront to the “human touch” involved in the practice of healthcare. This was clear at the end of morning rounds one day early in the tool’s implementation, when, as the nurse manager and the resource nurse were assessing the tool’s discharge predictions, a long-time nurse exclaimed, “This is ridiculous! The computer is not human!” Thus, the tools represented threats to the healthcare workers’ technical expertise and professional status, as well as to the symbolic aspects of their professional identity. One of the developers of the Surgery Risk Tool and C-section Risk Tool summed up the sentiment thusly: “There are a lot of egos in healthcare. We’re coming in with clinical decision support, so we’re directly challenging doctors’ prowess, and that’s not easy for them to get on board behind.”

Healthcare workers also resisted using the ML tools because of differences in interests, meanings, and information between themselves, the tool developers, and the tools themselves.


When different professional groups perceive different types of information to be meaningful and relevant to their processes of diagnosis, inference, and treatment, the groups may be unable to collaborate, or discouraged from doing so, to the extent that collaboration may compromise their own professional values and identities (Carlile 2004; Kellogg 2014) or their epistemological orientations toward information (Pachidi et al. 2020). In the context of the ML tools, such differences manifested in worker resistance through the issue of interpretability, or the ability to understand how the model underlying the tools processes data (see Burrell 2016). For instance, during a meeting between the developers of the Surgery Risk Tool and two physician representatives, one of the physicians noted the difficulty of reconciling his own methods of gathering and processing information with those of the ML tool:

I’m just trying to compare the tool with the human mind here. After thirty-plus years of experience as a trauma surgeon, I have a series of questions that I ask when I have a patient in front of me, and I know that these are the right questions: heart rate, white blood count, urine output... But when I open this tool, it asks me totally out-of-field questions. Who cares about INR or bilirubin? […] Listen, I get why it asks me these questions in terms of the statistics. But if I were to try this tool out in the surgical auditorium at M&M and you [developers] weren’t here to explain it, the surgeons would think it’s absolute crap. So the whole argument about interpretability is not true.
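One hedged illustration of what making such a model’s reasoning legible might involve: the sketch below ranks the contributions of individual inputs to a single prediction using logistic-regression coefficients. The feature names echo the ones mentioned in the quote above, but the data, the model, and the approach are assumptions for illustration and are not how the tools studied here actually worked.

```python
# Hypothetical sketch: ranking which inputs drive one patient's risk
# prediction, a simple route to the kind of interpretability discussed above.
# Feature names echo the quote; the data and model are synthetic assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["heart_rate", "white_blood_count", "urine_output", "inr", "bilirubin"]
X = pd.DataFrame(rng.normal(size=(300, len(features))), columns=features)
y = (X["inr"] + 0.5 * X["bilirubin"] + rng.normal(size=300)) > 1.0  # synthetic label

model = LogisticRegression().fit(X, y)

# Per-patient contribution of each input: coefficient times the feature value.
patient = X.iloc[0]
contributions = pd.Series(model.coef_[0] * patient.to_numpy(), index=features)
print(contributions.sort_values(key=abs, ascending=False))
```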

Bypassing Resistance

When confronted with the resistance of professional healthcare workers, the ML tool development teams did not give up on trying to implement their technology. Instead, they aimed to alter the tools based on the feedback provided by the original, resistant users, while salvaging the underlying purpose of the technology. One lesson that the developers claimed to learn was that the effort and cost involved in implementing a clinical decision support ML tool was not worth the payoff; the necessary incentives were not present at the level of the hospital or the individual healthcare worker. As one of the developers of the Surgery Risk Tool and C-Section Risk Tool noted, “The pivot away from the decision support tool has to do with moving toward a more concrete value proposition that we can offer to stakeholders. We have to ask from the beginning, ‘Can you lay out concretely how this is helping you in your work?’”

The development teams for each of the three tools decided to move away from their vision of decision support technology functioning “locally,” or at the point of care, and toward adaptations of the tools that operate more centrally within the hospitals. In the case of the Discharge Risk Tool, the central application of the tool was meant for use by admitting staff and clinical leaders in a “Centralized Capacity Center” during their communication with floor staff regarding patient discharge; in the case of the Surgery Risk Tool and C-section Risk Tool, the central applications of the ML technology took the form of benchmarking tools meant for use by hospital leadership in assessments of performance relative to national averages.

While this pivot from local implementation to central implementation was understood by the tool development teams as a response to their own shifting understandings of usefulness and usability, in practice, the pivot in each case involved effectively “bypassing” workers’ resistance by placing the technology directly in the hands of management. For developers, bypassing workers’ resistance by pivoting their plans for the ML tools involved a set of practices through which they managed their relations with each of the other parties involved in the process: the resistant, local users of the tools; the technology itself; and the new, central users of the tools. These practices are explained in detail below, and are summarized in Table 2.

Managing relations with local users

Somewhat counterintuitively, pivoting from local to central implementation of the tools by bypassing local workers involved developers managing relations with those same local workers. Managing relations with local, resistant workers included the practices of observing local users’ engagement with technology and obtaining local users’ knowledge. Though developers were engaging in these practices even before the decision to switch from local to central implementation of the tool, the practices ended up playing an important role in developers’ ability to bypass workers.

Observing local users’ engagement with technology

The first practice by which development teams managed relations with local users of the ML tools was observing local users’ engagement with the technology. Development teams began engaging in this practice during the initial implementations of the tools with the aim of determining how users would integrate the tools into their workflow and whether workers were using the tool as intended.

Workers’ disregard toward the tools was present from the outset. For instance, in the case of the Discharge Risk Tool, a project manager from the development team would regularly attend morning surgical rounds, where nurse managers and resource nurses were expected to review the discharge predictions that the tool had generated overnight. Referring to the project manager, one of the nurse managers stated that “He must get really bored. Sometimes we don’t have time to go through the list of patients [that the tool provides], and when we do, it’s super quick, just yes no, yes no.”

Despite local users’ disregard, the practice of observing users’ engagement with the tools was useful for developers because it provided them with an understanding of why users were resisting the tools in the first place, as well as an understanding of what they would have to do in order to generate users’ buy-in. This new understanding translated into the realization that users’ demands for the decision-support technology were not compatible with the development teams’ abilities. For instance, during a meeting between the users of the Surgery Risk Tool and the tool’s development team, the users made the case that in order for the tool to be useful to them, it would have to be trained on more comprehensive data and better integrated into their existing workflow. The business owner of the Surgery Risk Tool reflected on this exchange during an interview:

You saw what happened when we had [two trauma surgeons] worked through the app. They understand the logic of the decision trees perfectly fine. What they have trouble with is understanding why the logic of the tool doesn’t agree with their own logic. And they don’t want to use the tool until we’re working with all the hospital data and get this into the EHR? Well that’s not going to happen.

The difficulty of implementing a clinical decision support tool also became evident to this business owner in the case of the C-section Risk Tool:

When we talked to a few of the OB residents, they told us that they don't actually like this idea [of basing their actions on a model], because they use so much intuition in their practice. To give you an example, I just learned that when they're measuring where the head of a baby is, most of them don't actually measure. They just use their hands to estimate. And everybody has different sized hands. So using a decision support tool just doesn’t make sense to a lot of these doctors.

Obtaining local users’ knowledge

The second practice by which development teams managed relations with local users of the ML tools was obtaining local users’ knowledge about their domain of expertise despite their resistance. During the initial development phases of the tools, development teams sought out users’ feedback in order to validate their understanding of certain variables in the model and determine what available data was not being taken into account. As one of the developers of the Surgery Risk Tool and C-section Risk Tool noted, “Getting users’ feedback is helpful, because we can see what in the model agrees with users’ intuition, and we might be missing some things that we don’t know about.” Separately, another of the developers explained that “there are plenty of times where the doctors will say something about the model like ‘the way that these lab tests are being used isn’t right at all,’ or ‘something’s gone wrong and this variable isn’t being used as you intended.’”

From the healthcare workers’ perspective, this active participation in the development process occurred somewhat begrudgingly, particularly in the case of the Discharge Risk Tool. Here, the lead clinical researcher on the development team—Stephen—asked participating users to correspond with him on a daily basis and provide him with explanations of which tool-generated predictions they thought were incorrect, and why they were incorrect. As one of the participating case managers recounted: “Stephen is super smart, and he seems like a really nice guy, so I try to help him out and respond to his emails [with my feedback about the accuracy of the Discharge Risk Tool’s predictions] when I can. But some days I just don’t have time or I can’t be bothered.”

Managing relations with technology

In addition to managing relations with the resistant, local users who were the original intended users of the ML tools, pivoting from local to central implementation also involved development teams managing relations with the technology itself. The practices that comprised managing relations with the technology included incorporating local users’ feedback into the continued development of the tools, and redesigning the tools around technical resistance.

Incorporating local users’ expert knowledge

Development teams gathered the input that they obtained through their relation management practices with resistant local users, and used this feedback to help them make targeted adjustments to the tools. As a result, the central versions of the tools represented improvements over the initial local versions both in terms of technical performance and appropriateness for their context. At a general level, it was in part users’ feedback that pointed development teams toward pursuing centralization as a solution to the issues of resistance and technical underperformance that they encountered during the initial implementations of the tools. For instance, the business owner of the C-section Risk Tool stated:

A few of the residents gave us feedback about why [the C-section risk tool] doesn’t work, and based on that, David [top clinical researcher] wanted to make a benchmark tool for the OB teams. He knows that physicians are never going to use a decision support tool, and a benchmark presents a more concrete value proposition that he can offer to the hospital.

Although the idea for a benchmarking tool had been floated within the development team since early in the development processes for the Surgery Risk Tool and C-section Risk Tool, it wasn’t until the teams got a sense of users’ discouraging feedback about the decision support versions of the tools that they decided to pivot toward central versions and alter the technology accordingly.

Local users’ knowledge and feedback also influenced the specific ways in which developers managed changing the technology. As one of the developers of the Discharge Risk Tool recounted: “After the initial roll out, implementation-wise we took a step back. This involved putting together the know-how for different ways of representing features [in the discharge risk model] that we learned from nurse managers and case managers.” In particular, one of the complaints that the resistant users had about the local version of the tool was that some of the recommendations for action that were generated along with the quantitative discharge risk scores were not actionable; that is, they didn’t correspond to concrete actions that individual healthcare workers could take to assist in a patient’s discharge. Regarding this complaint, the developer noted during an interview that “The interface of the tool will be a lot simpler in the Centralized Capacity Center. You probably remember how a lot of the feedback that we got was about the actionability of the recommendations? From the list of hundreds of barriers to discharge that showed up [when local users viewed the predictions], now only three or four of the most important actionable barriers will be listed.” The development teams thus managed their relations with their technology by using the feedback provided by resistant local workers to optimize the tools for their new central context of use.
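As a rough illustration of the kind of filtering the developer describes, the sketch below ranks a list of discharge barriers and keeps only the few most important ones that are flagged as actionable. The barrier names, importance weights, and actionability flags are invented for the example; the actual tool’s logic is not documented here.

```python
# Hypothetical sketch of surfacing only the top actionable discharge barriers.
# Barrier names, importance weights, and actionability flags are invented.
TOP_K = 3

barriers = [
    # (barrier, model importance, actionable by a central coordinator?)
    ("awaiting physical therapy evaluation", 0.31, True),
    ("pending final imaging read",           0.24, True),
    ("no rehab bed identified",              0.18, True),
    ("patient age over 80",                  0.15, False),  # not actionable
    ("awaiting family transport",            0.07, True),
]

actionable = sorted(
    (b for b in barriers if b[2]),      # keep only actionable barriers
    key=lambda b: b[1], reverse=True,   # rank by model importance
)
for name, weight, _ in actionable[:TOP_K]:
    print(f"{name} (importance {weight:.2f})")
```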

Redesigning around technological resistance

However, developers’ relations with their tools were not deterministic, and developers could not alter the technology in any way they wanted; instead, they had to contend with the various ways in which the technology and the data on which it relied presented their own resistance to developers’ efforts. Resistance here describes the failure in practice of human actors to achieve their intended technological goals (Barrett et al. 2012; Pickering 1993). The underlying technology of the tools and the data on which they relied presented various limitations to the development teams—limitations that conditioned the ways in which the development teams executed the pivot from a local tool to a central tool. As such, the second practice by which members of the tool development teams managed their relations with the technology was by redesigning the tools around technological resistance.

In the case of the C-section Risk Tool, one way in which the technology and data influenced how developers pivoted from a local to a central version of the tool was by constraining the type of input data that they would use. The business owner of the tool explained:

We totally changed the data inputs. For the benchmarking tool, we’ve landed on using data from the time of admission, because we found that this is the best for predicting whether someone will need a procedure. Once a patient is admitted, there is so much information that gets passed between providers doesn’t show up in the data, so the post-admissions data [that was used for the local decision support tool] isn’t as reliable.

The resistance posed by the other ML tools led the respective development teams to make similar moves in their own efforts to pivot from a local decision support tool to a tool that would operate more centrally in a hospital. For instance, in the case of the Discharge Risk Tool, developers consolidated models that had been separate when the tool operated in different local contexts within the hospital:


For the Centralized Command Center, since it’s taking a hospital-wide view of bed capacity, we’ve had to spend a lot of time converging the medicine and surgery models [that were separate when the tool was implemented locally], giving them the same code structure. […] The focus has been on what a central body can do to act on a list of patient predictions.

By managing relations with the technology itself through the practices of incorporating users’ expert knowledge and redesigning around technological and data constraints, development teams were able to make the technical adjustments necessary to bypass resistant local users and implement the tools centrally.

Managing relations with central users

The third set of relations that development teams actively managed as part of their pivot from local to central ML tools was their relations with the new, central stakeholders. Unlike the other relations that development teams worked to manage—relations with local users and relations with the technology—managing relations with central users was oriented toward generating buy-in among a stakeholder group. The relational practices through which the development teams sought to generate buy-in among central users included demonstrating the technology to central users with their own data and obtaining this new data by establishing trust.

Demonstrating technology to central users

The first of developers’ practices involved in managing relations with central users was demonstrating to central users updated ML tools that were trained on data from the users’ own hospital. The importance of this practice in generating central users’ buy-in was made clear by the business owner of the Surgery Risk Tool and C-section Risk Tool, who noted: “There’s a saying in healthcare IT: ‘if you’ve seen one floor, you’ve seen one floor,’ because everywhere is different. So clients always want to see their own data.”

One lesson that developers became keenly aware of after the failure of the local implementations of the tools was that displaying technical ability alone is not a reliable way to generate buy-in among stakeholders; in other words, optimizing relations with technology alone is not sufficient. Development teams applied this lesson in their management of relations with central users. As one of the developers of the Discharge Risk Tool noted:

The take home message from the first implementation [of the discharge prediction tool] is that it is not about accuracy. […] No matter how well you explain things at the beginning, or how accurate you can prove the tool to be, this information will not make people want to use the tool. […] You have to convince people that replication can be done first, and that the tool works how they work; otherwise, they won’t want to use it.

A key component of showing central users that “the tool works how they work” involved generating test predictions based on those users’ own patient data; until development teams undertook this relation-management practice, central users were resistant. This pattern was particularly prominent in the case of the Surgery Risk Tool. One of the developers of the tool recalled:

We had the idea for a benchmarking tool for a while, but it wasn’t until we put something in front of [the central users] that was based on their data that they clicked and said this was a good idea. Previously they were just along for the ride, but now they’re all-in. So for me, more than the technical stuff, it’s the politics that matters.
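
What “putting something in front of the central users that was based on their data” might look like can be sketched as follows. The function simply scores a hospital’s own (possibly small) data extract with an already-trained model, such as the one sketched earlier, and summarizes the predictions at a level a central user could act on. The column names, grouping variable, and report format are assumptions for illustration, not features of the actual tools.

```python
# Illustrative sketch only: a hypothetical way of turning a client hospital's
# own data extract into concrete demonstration outputs. Column names and the
# summary format are assumptions.
import pandas as pd

def demonstration_report(model, client_extract: pd.DataFrame, feature_cols, group_col="service_line"):
    """Score a hospital's own data sample and summarize predicted procedure
    risk by service line for an in-person demonstration."""
    scored = client_extract.copy()
    scored["predicted_risk"] = model.predict_proba(scored[feature_cols])[:, 1]
    return (
        scored.groupby(group_col)["predicted_risk"]
        .agg(["mean", "count"])
        .rename(columns={"mean": "avg_predicted_risk", "count": "n_patients"})
    )
```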

However, before they could demonstrate updated versions of the tools to central users with data from the users’ own hospital, the development teams had to acquire this data. As noted above, different data were used for the benchmarking versions of the Surgery Risk Tool and the C-section Risk Tool than for the clinical decision support version of the tools. Given that the clinical decision support versions had not worked out according to plan, the development teams had no guarantee that they would be granted access to additional proprietary data for repurposing the tools; development teams first had to generate some level of buy-in from the new central stakeholder group. This situation, however, presented a kind of paradox: development teams needed to generate buy-in in order to obtain new data, but in order to generate buy-in, they needed to demonstrate the tools’ capabilities using that same new data. A second developer of the tools put it this way:

The biggest issue we have is the data, far and away. […] Having discussions with concrete outputs is so important for this, because with just hypothetical numbers, it’s hard to convince anyone that our tool works. But in order to generate concrete outputs, we need their buy-in and their data. So it’s a difficult circle.

Breaking this circle required a second practice for managing relations with central users: obtaining new data by establishing trust.

Obtaining new data by establishing trust

The process of establishing trust consisted of a number of components. As the business owner of the Surgery Risk Tool and the C-section Risk Tool explained:

Data exposes gaps. So typically someone who holds personal data wants to keep it to themselves unless there’s a trust-based relationship, and you have to build this relationship. It’s like a financial advisor. Usually your financial advisor is someone you’ve known for a while, maybe someone who works for your whole family. We’re in a similar position. At the beginning, [the clients] are not giving us all of the data—they give us part of the data, so we can show them what we can do and they can get a sense of what we’re all about. We have to find clever ways to get data and we have to be able to show clients what we can do, but they also just have to trust us as people.

For the business owner of these tools, the “clever” strategy for obtaining data included gaining some of the developers’ technical literacy and learning how to respond in person and in real time to some of the central users’ curiosities and concerns. He continued: “I had [our two developers] give me a quick training on their code, because I need the ability to be on-site and run analyses in person to show clients what the technology can do and that I know what I’m talking about.” The practice of establishing trust in order to obtain data thus partially overlaps with the practice of demonstrating the technology to central users with their own data. Together, these practices enabled the development teams to manage their relations with central users in such a way as to successfully bypass resistant local users and move from developing clinical decision support ML tools to developing ML tools that function at higher levels of the hospital.


Discussion

This study poses the research question: How do the development and implementation of algorithmic technology occur in the face of resistance by powerful professionals? The existing literature on technology development and professional work provides a number of answers to this question, none of which can account for the outcomes that I observed in my ethnographic study of the development and implementation of three algorithmic technologies in three US hospitals.

Following the work of Pickering (1993, 1995) and Barrett et al. (2012), post-humanist perspectives on technology and organizations conceptualize technology development as a process of tuning, which occurs through a dialectical interplay between the resistance of technology and technology users, and human accommodation to this resistance. Within firms, managers and technology developers can accommodate workers’ resistance to new technology in a number of ways that lower workers’ barriers to using the technology, including revising goals, modifying the material form of the technology, shifting human frames and activities, or adjusting the social or political relations associated with the technology (Barrett et al. 2012:1450; cf. Pickering 1993, 1995). How a given process of tuning unfolds within a workplace is contingent on the human and material actors involved, and the literature on worker resistance suggests that professional workers may be able to resist the implementation of new technology in the workplace in ways that non-professional workers cannot: given the discretion that professionals are afforded in their work, and given the professions’ ability to control the distribution and practice of their expert knowledge (Abbott 1988), workers like police officers (Brayne 2017), judges (Christin 2017, 2018), and healthcare professionals (Bloomfield et al. 1992; Constantinides and Barrett 2005; Lapointe and Rivard 2005; Markus 2004) are often able to reject a new workplace technology outright without the threat of reprimand; the same cannot be assumed for their non-professional counterparts (cf. Pachidi et al. 2020). Additionally, professionals face a variety of jurisdictional threats that do not apply with equal force to non-professional workers, including increasing public skepticism of expert knowledge (Christin 2017:2–3; see also Espeland and Sauder 2016; Eyal 2019; Strathern 2000). These threats to professionals are compounded by the emergence of technologies that threaten the autonomy and discretion that define professional work, including algorithmic technologies (Kellogg et al. 2020).

Table 2. Development Teams’ Relation-Management Practices

Development Teams’ Practices | Representative Interactions

Managing relations with local users
Observing local users’ engagement with technology | Gaining an understanding of how users incorporated the tools into their workflow and/or why users were unwilling or unable to do so
Obtaining local users’ expert knowledge | Consulting resistant users about the meaning of clinical measurements and whether certain data should be included in the modeling process

Managing relations with technology
Incorporating local users’ expert knowledge | Using local users’ knowledge of EHR data and feedback about the tool interface to make adjustments to the tools
Redesigning around technological resistance | Altering the tools’ input data in order to accommodate the new context of central use

Managing relations with central users
Demonstrating technology to central users | Generating and sharing concrete outputs from models trained on central users’ own data
Obtaining new data by establishing trust | Using small samples of the available data to exhibit technical know-how and responsiveness to central users’ concerns

This existing literature would lead us to hold two key expectations about how the process of tuning may unfold during the development and implementation of algorithmic technologies. First, we would expect that managers and technology developers seek to accommodate professional workers’ resistance in a way that makes the workers more amenable to using the technology; second, we would expect that professionals’ recalcitrance could ultimately impede the adoption of the tools. However, neither of these expectations was borne out in my 21-month ethnographic study of the development and implementation of three algorithmic technologies in healthcare organizations: developers of the tools did not aim their accommodation efforts toward the technology’s resistant users, and healthcare workers’ recalcitrance did not ultimately impede the adoption of the tools. Instead, I found that technology development and implementation can proceed in the face of professionals’ resistance when developers bypass the workers and repurpose their technology for use by another group of actors that is present in professional work settings: managers, administrators, and other “central” actors. In order to carry out this accommodation-through-bypassing, the developers that I studied engaged in a series of practices whereby they strategically managed relations with the resistant “local” actors, the technology itself, and the central actors. Managing relations with local users involved observing local users’ engagement with the technology and obtaining local users’ expert knowledge; managing relations with the technology involved incorporating local users’ expert knowledge and redesigning around technological resistance; and managing relations with central users involved demonstrating the technology to central users and obtaining new data by establishing trust with central users.

These findings contribute to our understanding of technology development and implementation in professional settings in a number of ways. First, they show that the dialectic of resistance and accommodation that characterizes the technology development process can occur even in the face of strong professional recalcitrance. Second, these findings show that accommodation does not necessarily involve accommodating a technology’s initially-intended users, but can instead be strategically geared toward another, less-resistant group of users. Finally, these findings highlight the critical role that technology developers can play in facilitating relations between workers, managers, and workplace technology.

Pivoting

This study also contributes to our understanding of pivoting, or strategic shifting by a firm or other type of actor (Kirtley and O’Mahony 2020), in a number of ways. The literature on pivoting shows that actors undertake this kind of “structural course correction” (Ries 2011:149) under a number of conditions, including when new information conflicts with or expands decision-makers’ beliefs (Kirtley and O’Mahony 2020), or when a maturing market changes an actor’s environment in a way that requires altering strategy (Gavetti and Rivkin 2007; McDonald and Eisenhardt 2020). This study adds to these conditions, showing that organizational outsiders may pivot when confronted with the resistance of organizational insiders, and that such pivoting can be contained within a single organization. More particularly, this study shows that developers or managers may pivot when confronted with powerful workers’ resistance to their efforts.

Second, my findings contribute to our understanding of how firms or other actors pivot. Current literature establishes that the process of pivoting can involve managers using rhetorical strategies to manage relationships with stakeholders whose identity may be threatened by the pivot (Hampel, Tracey, and Weber 2019; McDonald and Gao 2019), relinquishing of psychological

