
1. Introduction

Artificial intelligence (AI) has dramatically affected certain workplaces. Researchers speculate that it will bring a new industrial revolution (Industry 4.0, e.g., Rutherford & Frangi, 2021) and may soon replace between 9% (Arntz et al., 2016) and 47% of all jobs (Frey & Osborne, 2013). While AI research is beginning to affect organizations and productivity across fields such as accounting, radiology and marketing services (Chartrand et al., 2017; Syam & Sharma, 2018), little is known about how employees experience its impact on their work and how they adapt to the changes it entails (Phan et al., 2017).

Although AI has aroused great interest among researchers and the general public, there is some confusion over how to define AI, all the more so because its impact varies from one application to another (Raj & Seamans, 2019). The term ‘AI’ is in fact a broad umbrella for a wide range of learning algorithm applications. It has been defined as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Kaplan & Haenlein, 2019, p. 15). We will focus here on learning algorithms: an “emergent family of technologies that build on machine learning, computation, and statistical techniques, as well as rely on large data sets to generate responses, classifications, or dynamic predictions that resemble those of a knowledge worker” (Faraj et al., 2018, p. 1).

Recent research suggests that learning algorithms profoundly shape job design and related concepts (Faraj et al., 2018; Parent-Rocheleau & Parker, 2021; Parker & Grote, 2020). Job design refers to the content and organization of work tasks, activities and relationships (Parker, 2014). Our interest here is whether learning algorithms are antecedent to job crafting, that is, the ways employees proactively craft the tasks, activities and relationships that make up their jobs.

There is little research on the antecedents to job crafting (Niessen et al., 2016). Researchers have investigated antecedents that range across personality traits, organizational and contextual features, and job characteristics (e.g., Rudolph et al., 2017; Zhang et al., 2019). A common finding is that job crafting occurs especially in certain contexts, such as organizational change (Demerouti et al., 2017). However, it is unclear how AI and learning algorithms impact job crafting behaviours (Parker & Grote, 2020), although recent conceptual reviews argue that learning algorithms crucially affect job design, job crafting, and antecedents to job crafting, such as employee autonomy (Parent-Rocheleau & Parker, 2021; Parker & Grote, 2020). There is an urgent need for research, specifically on how new technologies affect the constraints and freedoms of work and how employees craft their jobs (Parker & Grote, 2020). In this case study, we ask how the introduction of learning algorithms has affected the jobs of bank customer advisors who sell financial products and services and how they responded by means of job crafting behaviours.

Specifically, we will show how learning algorithms motivated the employees to engage in job crafting. Employee autonomy and striving for greater meaning of work are noted antecedents to job crafting (Petrou et al., 2017; Wrzesniewski & Dutton, 2001). Conceptual reviews argue that the opacity of algorithms and their ‘black box’ effects may undermine the meaning of work for employees (Burrell, 2016), and that learning algorithms limit employee autonomy by prescribing the tasks they need to perform (Murray et al., 2020) and, more fundamentally, by subjecting employment relationships to greater organizational control (Kellogg et al., 2019). Yet there has been little empirical research from the employees’ perspective, and it is unclear how AI is related to their motivations. For example, are they motivated to job craft because AI enables their autonomy or because it threatens their autonomy? Given the broadness of the meaning of work, in what specific sense does AI challenge the meaning of their work?

We will further show how employees respond to learning algorithms through job crafting practices, thereby providing insight into whether job crafting is a useful response. We will thus contribute to recent findings that job crafting behaviours are not uniform across dimensions but are instead adapted to the context. On this point, Petrou et al. (2017) showed how such behaviours are differentiated across types of organizational change. Our interest here is to find out whether workers engage in certain job crafting practices before they engage in others, or whether they use these practices in tandem to maintain their autonomy and produce new meanings of work.

In summary, through interviews with a sample of bank customer advisors, we will show how the introduction of learning algorithms motivated them to job craft as a means to defend their job autonomy and meaning of work, and how they engaged in job crafting.

1.1 Learning Algorithms as Antecedents to Job Crafting: Their Impact on Autonomy and the Meaning of Work

We will focus on how learning algorithms shape employee autonomy and the meaning of work through their role as motivators of employee job crafting (Wrzesniewski & Dutton, 2001). Autonomy and decision-making are both impacted by learning algorithms. Algorithmic management can vary from decision support to judgment substitution (Parent-Rocheleau & Parker, 2021). Learning algorithms shape the choices available to employees by automating decision-making, specifically by imposing predefined rules and by rapidly processing massive amounts of data. Researchers have debated whether algorithms help employees make decisions or help employers exercise greater control over the employment relationship (Kellogg et al., 2019). Parent-Rocheleau and Parker (2021) acknowledge the possibility that algorithms exercise control over employees’ work through automatic, remote, and constant monitoring, and through gathering information that may be further used to prescribe tasks (although such practices are often driven by managers, see Autor et al., 2003). Employees may not understand such algorithmic decision-making, i.e., the so-called ‘black box’ and opacity effects (Burrell, 2016; Kellogg et al., 2019). Such effects nevertheless affect how employees respond to decisions made by the learning algorithm and how they organize their work (Faraj et al., 2018).

Some researchers have considered how AI facilitates decision-making and improves team performance. In medical imaging, deep-learning techniques facilitate image recognition and support decision-making (Tajmir & Alkasab, 2018). During recruitment, AI algorithms help recruiters identify certain human biases (Newman et al., 2020).

In theory, then, learning algorithms affect employee autonomy and decision-making, on the common assumption that they cause employees to feel “out of the loop” and without control (Parker & Grote, 2020). There has not been sufficient investigation into whether employees view or experience such changes as desirable or needing a response.

We will now turn to the meaning of work to focus on how learning algorithms affect the way employees view their own expertise, a factor so important to their identity (Faraj et al., 2018). If we take the common distinction between routine and non-routine tasks, we see that learning algorithms change the boundaries between the two. While algorithms often replace simple cognitive or manual tasks (Autor et al., 2003), it is more problematic for them to replace complex tasks that require understanding of the context (Faraj et al., 2018). In health care, AI supports decision-making, but doctors must also consider the patient’s clinical context. In radiology, AI has altered expertise boundaries by taking over certain non-complex aspects of patient cases, thus enabling radiologists to focus on the more complex ones (Faraj et al., 2018).

AI therefore threatens employees in their autonomy and in the meaning of their work while offering them opportunities for betterment in both areas. Thus, when confronted with learning algorithms, they are unlikely to be passive actors, and we will consider how they proactively respond through job crafting.

1.2 Employee Job Crafting Responses to Learning Algorithms

Classic job design theory has been a means to identify job characteristics and provide job designers in organizations—commonly assumed to be managers—with a top-down approach to job design. It is less useful for understanding how employees proactively shape their jobs and respond to new ways of working (Oldham & Hackman, 2010). By changing the characteristics of work, learning algorithms certainly have implications for job design, and the recent literature on the subject, particularly on job crafting, may help us understand how individuals proactively respond to algorithms. Recent research has shown how employees respond to organizational change by behaving proactively (Walk & Handy, 2018), and such research may provide a model for investigating their responses to the introduction of learning algorithms into the workplace. Few studies have specifically considered the impact of such algorithms on job crafting (Parker & Grote, 2020).

Wrzesniewski and Dutton (2001) define job crafting as the physical and cognitive changes that individuals make to their task and relationship work boundaries in order to align them with their preferences and thus enhance the meaning of their work. These authors define three job crafting practices. First, there is task crafting: changing task boundaries and altering the types or numbers of tasks. Second, there is cognitive crafting: changing the cognitive boundaries and how the job is perceived. Third, there is relational crafting: changing the relationship boundaries, such as the nature and number of social interactions at work. Job crafting is motivated by a need to gain control over work, to find meaning in work and to secure a human connection and positive self-image (Wrzesniewski & Dutton, 2001). Once enacted, job crafting may meet the need for autonomy and promote the meaning of work. Employees therefore craft their jobs as part of an ongoing process of regulating their needs for autonomy and meaning. For example, they may engage in job crafting to support the meaning of their work during organizational change (Berg, Dutton, & Wrzesniewski, 2008).

As discussed earlier, learning algorithms challenge the autonomy of employees and the meaning of their work. According to job crafting theory, we would therefore expect them to engage in job crafting to regain some control. For example, although learning algorithms may select and recommend tasks to employees, it is the employees who ultimately decide which ones to perform. Employees may also reduce the perceived importance of the tasks they have lost to AI algorithms and increase the perceived importance of those that remain, perhaps by creating new or enhanced relationships with the people connected to their jobs (patients, customers, clients, etc.). Such a relationship shift is a way of reclaiming autonomy and recasting the meaning of work. It is a shift from the expertise required for the tasks toward the development of social relationships (which is indeed what we later found).

In summary, learning algorithms shape employee decision-making and autonomy. The challenge to employee autonomy is particularly concerning as it is one of the most important job characteristics and affects the meaningfulness of work (Oldham & Hackman, 2010). Employees may proactively respond to learning algorithms through job crafting, which has three dimensions: task boundaries, cognitive boundaries, and relationship boundaries. Our research questions are:

  • RQ1: How do learning algorithms affect employees in terms of their job autonomy and the meaning of their work?

  • RQ2: How do employees use job crafting to respond to the introduction of learning algorithms? Specifically, how do they change their task boundaries, their work relationship boundaries, and the cognitive framing of their jobs?

2. Methodology

We interviewed a sample of 27 employees in a French bank to examine how they dealt with the introduction of AI in their jobs and their thoughts on the change two to three years afterwards. We used a qualitative approach toward learning algorithms and job crafting because these topics had been under-investigated. By choosing an open approach without a priori assumptions we could let the interviewees present their experiences. The banking sector was an early adopter of AI in such areas as chatbots, virtual assistance, fraud detection, anti-money laundering and predictive analysis. These kinds of predictive systems have considerably changed employees’ work by altering their tasks, their relationship with the customer, and what their work means to them.

2.1 Research Context

The interviews were conducted in regional branches of a French bank (2,300 employees), referred to here as BankCo, which had recently transformed its operations by introducing digital customer interfaces (self-service machines, Internet applications and smartphone apps). Like many other banks, it had experienced massive digitalization in recent years, including the introduction of advanced predictive analysis with machine learning.

The sample consisted of customer advisors who developed sales and services for the bank by identifying the needs of customers and advising them on those products and services that best suited their needs (investments, loans, savings, etc.). These advisors were also expected to monitor and prevent risks (for loans and operations) and to guide customers in the use of digital services during daily banking interactions. Such interactions had changed in recent years as customers now rarely travelled physically to their local branch. Banks thus became more proactive in their commercial operations, one of which was to identify customers for sales staff to target.

The interviews were conducted in 2018 and 2019. Since 2016, BankCo has used machine learning to prepare customer-targeting lists and proposals by using many variables about customer characteristics and account movements. The learning algorithms are designed to predict how customers will behave (e.g., taking out a loan or closing an account) and then determine the appropriate products or services to offer them. The bank had long been using customer-targeting lists before the introduction of learning algorithms. The algorithms, however, greatly increased the volume of available data it could use to predict customer needs and the products best suited to meet them.
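
BankCo’s actual system was opaque even to its managers, so the following is purely an illustrative sketch of the kind of propensity-scoring pipeline described above; the column names, file names, product label and choice of model are our own assumptions, not details of BankCo’s tool.

```python
# Illustrative sketch only: a hypothetical propensity model that ranks
# customers for a weekly targeting list. Column names, file names, the
# product label and the model are assumptions, not BankCo's system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

features = ["age", "avg_balance", "n_transactions", "salary_inflow", "tenure_years"]

# Hypothetical historical data: customer characteristics and account
# movements, labelled with whether the customer took out a loan.
history = pd.read_csv("customer_history.csv")  # assumed file
model = GradientBoostingClassifier()
model.fit(history[features], history["took_loan"])

# Score the current customer base and build a ranked contact list,
# pairing each high-propensity customer with a topic to discuss.
customers = pd.read_csv("customers.csv")  # assumed file
customers["loan_propensity"] = model.predict_proba(customers[features])[:, 1]
weekly_list = (customers.sort_values("loan_propensity", ascending=False)
                        .head(20)  # roughly the weekly appointment target
                        .assign(suggested_topic="home loan"))
print(weekly_list[["customer_id", "loan_propensity", "suggested_topic"]])
```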

Work was greatly changed by the learning algorithms. Previously, the customer advisors contacted whomever they saw fit, asked the customer to meet them at the bank, offered him or her products and services and made suggestions based on their analysis of various indicators (e.g., loans, special anniversary dates of the customer, bank account tracking, etc.). After machine learning was introduced, they no longer chose whom they would meet. Instead, each of them received an AI-generated list of customers to be called and topics to be discussed (products to offer, loans, risk of leaving the bank, etc.). The bank initially expected them to strictly follow the list, along with the suggested products and services, but, as we shall see, it later relaxed this expectation, as managers realized that predicted objectives were sometimes problematic and variable in quality. The AI list was nevertheless an integral part of the job, as sales necessarily depended on it. It was the basis for the target of around twenty appointments per week.

The employees first phoned the customers on the list to schedule appointments to promote financial services, and they then had to report their activity using the appropriate software. Their managers assessed their performance in terms of the objectives (e.g., percentage of listed customers who were contacted, percentage of contacts that led to sales). For example, 30-50% of the appointments had to be made with the listed customers. The remaining appointments were with regular customers or with those who requested meetings without being prompted. The employees were not paid directly or awarded bonuses if they achieved their AI-suggested goals, but the managers did take such achievements into account when conducting the annual performance evaluations.
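
For concreteness, the monitoring rates described above could be computed as in this small sketch; all figures are invented for illustration and are not BankCo data.

```python
# Invented figures illustrating the rates managers tracked; not BankCo data.
listed_customers = 100        # customers on the weekly AI-generated list
contacted = 62                # listed customers actually contacted
contacts_with_sale = 18       # contacts that led to a sale
appointments_total = 20       # weekly appointment target
appointments_from_list = 8    # appointments drawn from the AI list

contact_rate = contacted / listed_customers                    # 62%
conversion_rate = contacts_with_sale / contacted               # ~29%
share_from_list = appointments_from_list / appointments_total  # 40%, within 30-50%
print(f"{contact_rate:.0%} contacted, {conversion_rate:.0%} converted, "
      f"{share_from_list:.0%} of appointments from the list")
```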

2.2 Sample

The sample encompassed a variety of bank employees who had to work with the AI tool. There were 15 women and 12 men, aged between 30 and 60 (average 42), with tenure at the bank ranging from 7 to 35 years (average 16) (see Table 1). Twenty-three of them were direct users of the AI tool (customer advisors), and their performance was evaluated accordingly. The four remaining interviewees were two branch heads, the HR manager and the chief bank manager. The branch heads were direct users who also monitored their employees’ use of the AI tool and their achievement of objectives. The managers and the employees were spread across different branches. The managers provided us with contact information for other branches and for individuals within those branches whom we could invite to take part in the study. This approach gave us a diverse sample across a total of five branches; thus, the results were not dominated by any one branch. We sought roughly equal numbers of men and women and a wide age range. BankCo was selected because it had adopted AI tools as a key part of its sales staff operations and because it was similar to other banks in its adoption of AI tools. Before the interviews, the interviewees had agreed to the ethical requirements of our study (informed consent, anonymity, confidentiality, etc.).

Table 1

Sample Characteristics


2.3 Interview Procedure and Data Analysis

We conducted semi-structured interviews, using an interview guide with relevant themes. This approach ensured that relevant topics would be covered, while allowing the interviewer to discover and explore unanticipated themes, and allowing the interviewees to express their views in their own terms and emphasize what was important to them (Bryman, 1989).

Our research questions informed the themes of the interview guide. The interview questions were based on those themes and were asked in an open manner with no assumptions made as to whether learning algorithms were a threat or an opportunity. The interview questions covered whether the learning algorithm tool had changed the interviewees’ job in general terms and then more specifically in terms of their autonomy, their decision-making, and what they valued, liked and disliked in their job. The questions covered the three job crafting dimensions (how the tool affected their tasks, their relationships with the customers and the bank, their understanding of their job), and in each case how they responded and the impact of their responses. The interviewees were probed to examine their views in depth. The interview guide evolved over time and was open to emerging themes (e.g., the mutual reinforcement between their changing customer relationships and how they viewed the meaning of their work).

The interviews took place at the employees’ workplace (their bank branch), thus providing an opportunity to observe how they interacted with colleagues, managers and customers. Interviews lasted from 20 minutes to 1 hour 45 minutes, with an average of 43 minutes. The number of interviews was not defined in advance; instead, following the principle of theoretical saturation (Bowen, 2008), we conducted them until no more significant information could be obtained. All the interviews were recorded and transcribed.

We used Braun and Clarke’s (2006) thematic coding approach and followed their six stages: first, familiarizing ourselves with the data (transcriptions); second, generating the initial codes to understand how the tool impacted the employees’ work and how the employees responded; third, searching for themes to encapsulate the data, while remaining open to new categories of response; fourth, defining and interpreting the themes with reference to the literature and classifying the responses by type of job crafting (task boundaries, cognitive boundaries, relationship boundaries); fifth, reviewing the findings to see whether they answered the research questions; and sixth, selecting representative quotes to illustrate the findings.

3. Findings

3.1 How Learning Algorithms Affect Job Autonomy and the Meaning of Work

We organized our findings in terms of our two research questions. First, how did the learning algorithms affect the employees in their job autonomy and the meaning of their work? Management initially told the employees to follow the algorithm predictions, thus reducing their perceived autonomy and affecting the meaning of their work mainly by changing how they saw their expertise. (About two years after introducing the algorithms, management somewhat relaxed their expectations and accepted the employees’ job crafting responses, having realized the limitations of the algorithm and how it had adversely affected the employees’ jobs.)

3.1.1 Increased Standardization and Quantification of Work Due to AI’s Impacts on Employee Objectives, Autonomy and Decision-Making

The algorithm dramatically influenced the customer advisors. It standardized and quantified their work by generating rank-ordered lists of customers to contact, on which the listed customers’ names were accompanied by objectives and recommendations that the employees were expected to follow (e.g., 20 customers per week with customized offers for each customer), along with a suggested script. The AI-generated script included ‘if-then’ suggestions for the sales employees. In other words, the AI tool predicted the customer’s questions and provided recommended responses for the employee to follow.

“The work is standardized by the way everything is controlled today; the work is more standardized than before.”

Interviewee B9

“When you see the list prescription, there is a script to contact the customer, so that you cannot get out of its way.”

Interviewee B17
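
To make the ‘if-then’ script described above concrete, a minimal hypothetical sketch follows; the questions and responses are our own inventions, as the interviewees did not describe the actual rules used by BankCo’s tool.

```python
# Hypothetical sketch of an AI-generated 'if-then' call script pairing a
# predicted customer question with a recommended response. The content is
# invented; interviewees did not reveal the actual rules of BankCo's tool.
script = [
    {"if_customer_says": "why are you calling",
     "then_respond": "Your account activity suggests a home loan may interest you."},
    {"if_customer_says": "what rate",
     "then_respond": "Propose an appointment to review a personalized rate."},
    {"if_customer_says": "leaving the bank",
     "then_respond": "Offer a retention meeting with the branch head."},
]

def suggested_response(customer_utterance: str) -> str:
    """Return the scripted response for a predicted question, if any."""
    for rule in script:
        if rule["if_customer_says"] in customer_utterance.lower():
            return rule["then_respond"]
    return "No scripted suggestion; advisor judgment applies."

print(suggested_response("So, why are you calling me today?"))
```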

The employees had to report their customer activities to their managers weekly and monthly, using a specific software application and knowing that their interactions with the learning algorithm were already being automatically tracked and quantified. The BankCo managers initially believed in the algorithm’s predictions, a supposedly ‘one best way’ to work, and strongly encouraged the employees to follow the generated list of customers.

“We are seen on all indicators, on everything. We no longer know where to turn. Our managers also have indicators on which they must give feedback […]. Everything is quantified.”

Interviewee B16

Processing the list was an additional task on the employees’ busy schedule, which included serving and maintaining existing customers who were not on it. While the learning algorithm was initially supposed to support the employees, most of them saw the increase in standardization and quantification as a burden. The strict rules around objectives were viewed negatively, with more than half the interviewees reporting a loss of autonomy in their work schedule and decision-making freedom, and a loss in their perceived usefulness to the organization. Previously, they were the ones who had chosen their customers and objectives.

The employees perceived their reduced autonomy as signalling a broader employment relationship issue. The managers seemed to trust the algorithm more than they trusted their employees; they seemed to see it as an opportunity to shift the balance of control from the employees to themselves. The learning algorithm led to greater monitoring by management (e.g., through the weekly and monthly employee reports) and increased pressure on the employees, who reported working beyond their contractual working hours to reach the new set of objectives. Some of them mentioned that the list was supposed to help them increase their efficiency (which was measured by the algorithm), but most perceived the new objectives as a means by management to control their work:

“They [the bank managers] introduced a processing rate for scheduled appointments [how many should be processed within a time frame], a rate of appointments that staff had to confirm by a text message, a sales rate for appointments. All this monitoring gives the impression that the list is not in our interests”.

Interviewee B10

The managers trusted the algorithm even though they—as well as the employees—did not understand how it produced the list and the predictions. The employees reported being baffled by the lists of customers, the predictions and the variables used to produce them. This ‘black box’ did however offer them an opportunity to bring some autonomy back into their jobs. They argued they had to make the final judgment because the algorithm’s proposals were not always relevant:

“That said, just because the tool tells us that we must propose such a solution [a business proposal to customers, such as a home loan], that does not mean that we will do it, because sometimes the algorithm is not 100% efficient”.

Interviewee B19

3.1.2 Change in the Meaning of Work Due to the Learning Algorithm Challenging Employee Expertise

Previously, the employees usually learned about their customers and made recommendations by meeting them in person and investing a lot of time in building relationships through multiple meetings, conversations and exchanges. The employees thus had extensive knowledge of and rapport with their customers, and that knowledge and rapport was a source of pride and meaning in their work. That expertise was now replaced with the learning algorithms’ prescribed analysis. According to the employees, the organization and its managers also believed that the algorithm made better predictions and better understood the customer’s situation, thus eclipsing employees’ personal knowledge. This led employees to think that their expertise was not perceived by their managers as valuable or as good as the algorithm’s. They wondered about the value they added to the process and consequently felt that they did not have to think in depth about the customer’s needs anymore. Once they had the list, they just had to contact the customers and schedule meetings. So while the employees and the managers recognized the tool’s usefulness, both groups initially reported a significant loss in the value of the employees’ expertise due to the AI tools, and a consequent loss of meaning for the employees.

“It's true that it's a concern for customer advisors who wonder what added value they're going to give. They will be concerned that the artificial intelligence tool will be better than their discovery, their questioning, their formulations as a human being.”

Interviewee B18

Although the employees acknowledged the value of the algorithm’s analysis, most disputed the superiority of its knowledge. In fact, there were instances where the algorithm’s predictions were said to clash with the employee’s knowledge of the customer, and where its recommendations were regarded as inappropriate. Those shortcomings were a way for the employees to reclaim the value of their expertise.

“The list is statistics; it is not necessarily something that is customized. We are not necessarily going to ask [the customers] what is noted on the list. We will not necessarily offer it [the proposed services] to the customer. Our offer will depend on the relationship we have with customers.”

Interviewee B25

The hallmark of machine learning systems is their ability to learn. If the employees ignore the algorithm’s proposals and do not provide it with feedback, it will be less able to learn and supply adequate proposals, and the employees will further question its value.
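
This feedback loop can be illustrated with a small sketch: if outcomes are reported only for the proposals that advisors act on, the labelled data available for retraining shrinks and skews. The sketch below is a hypothetical illustration of the mechanism, not BankCo’s pipeline.

```python
# Hypothetical illustration of the feedback loop: the algorithm can only
# retrain on proposals whose outcomes the advisors act on and report back.
proposals = [
    {"customer": "C1", "acted_on": True,  "outcome": "sale"},
    {"customer": "C2", "acted_on": True,  "outcome": "no sale"},
    {"customer": "C3", "acted_on": False, "outcome": None},  # no label produced
    {"customer": "C4", "acted_on": False, "outcome": None},  # no label produced
]

# Only acted-on proposals yield labelled training examples.
training_examples = [p for p in proposals if p["acted_on"]]
coverage = len(training_examples) / len(proposals)
print(f"Labelled feedback covers {coverage:.0%} of proposals")
# With half the proposals unlabelled, the model retrains on a biased
# subset, weakening future predictions and, in turn, employees' trust.
```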

3.2 Crafting Jobs in Response to the Learning Algorithm

Job crafting theory predicts that job crafting behaviour is motivated by challenges to autonomy and the meaning of work. We will now turn to our second research question. How did the employees respond to the introduction of learning algorithms by means of job crafting practices, and what changes did they make to their tasks, their social relationships and how they conceived their jobs?

3.2.1 Task Crafting

Managing Contacts with the Customers on the List

As mentioned, the bank initially expected the employees to follow the algorithm proposals. When the demands of that work were perceived as being too hard, the employees would change their tasks to bring them more into line with their preferences. As they were evaluated on the number of appointments they made from the list, they developed shortcuts to meet their objectives. For instance, if, on the current week’s list, they saw the name of a customer they had met in previous weeks, they would write it down as an appointment for their current weekly target. This was an easy way for them to record an appointment, to remove someone from the list, and to advance toward achieving their objectives, but it was not how the bank intended the weekly targets to be used.

“You have to organize yourself; you have to work intelligently. That is to say that there are certain customers who came without having made an appointment, who are part of the [AI] listing. I do an automatic search. If a customer came to see me in the previous weeks, I go to the list and see if he appears. Then I note a new appointment for the week to come. So it's a method that is DIY, but we are forced to tinker!”

Interviewee B5

To ‘tinker’ here is a euphemism for manipulating the system to gain control over the schedule.

Remaining the Decision-Maker with the Customer

The employees did not understand how the learning algorithm worked; nor did they understand how it produced its predictions (cf. the ‘black box’ effect). The algorithm supposedly offered the most rational and targeted recommendations on the basis of many variables and much data. Nevertheless, while the employees often adhered to the prescribed list of whom to contact, they would often deviate from the algorithm’s recommendations about what to discuss with the customer, thereby maintaining control over their decisions. They did so because they wished to assert their expertise and gain a better understanding of the customer’s needs.

“When you have a customer in front of you, you really try to discover the customer by asking a lot of questions, exchanging a lot with the customer to offer them services tailored to their needs.”

Interviewee B16

Delegating Mundane Tasks

The employees found ways to circumvent unwanted tasks assigned by the algorithm. One task required the employee to call the customers on the list to arrange appointments. This was time-consuming, and most employees preferred the more complex discussions that took place with the customer during an appointment. To better control their schedule some employees delegated the phone-calling to trainees or temporary workers.

“There is a moment when we cannot do everything. It is not possible, and we also have to keep a little fun in our work; otherwise it is complicated to motivate people, so we take advantage of our temps or trainees to work on the listing, to make appointments.”

Interviewee B9

The option of delegating the initial calling of the customer was not open to all customer advisors; it depended on whether their managers had recruited temporary workers or trainees/interns to reduce the workload. Managers thus helped transform the job with a view to keeping their employees motivated and satisfied. Above all, they recognized their employees’ need for latitude in decision-making and scheduling.

Task crafting of the selling role thus included both delegating tasks and managing the customer proposals. Both are efforts by employees to establish meaningful and personal customer relationships; they are thus also forms of relational crafting. Both can also be seen as efforts by employees to regain their role as providers of expertise.

3.2.2 Relational Crafting

The managers did not directly provide their employees with job crafting strategies. They did, however, understand and accept that their employees were engaged in crafting. As we saw with task crafting, the managers gave their employees some support in that direction and allowed them some autonomy in how they responded to the customer list, particularly the AI-generated recommendations. With the introduction of AI, the overall direction of manager support and employee response was to work toward building personal customer relationships, an effort that both groups viewed as being key to the process of delivering advice—in essence, the human aspect of relationships. The algorithms offered rational recommendations based on data analysis, while the employees provided human (and non-algorithmic) ways of connecting to the customer.

Prosocial Relational Crafting: Prioritizing Humans Over Technology

Employees went beyond their job description to develop stronger personal relationships; for example, by visiting elderly customers at home or by helping the customer in ways unrelated to bank products (such as offering tax advice). Such prosocial crafting impacted the customer in a manner not prescribed by the learning algorithm.

“I go to see my customers at their home because they are old. I do it as I think that's normal, so they are faithful, why would I not do that for them?”

Interviewee B1

In response to the learning algorithms, which threatened to minimize their role as experts and depersonalize their customer relationships, the employees sought meaning and expertise in their jobs. They shifted away from the lists and objectives and toward the human aspect and expertise embedded in the employee-customer relationship, thereby subtly shifting the meaning of their work toward their human attachments with the customer and their role in offering expert recommendations.

“I think that the customer still needs to have a bank counsellor […] the customer is very attached to his advisor, he confides.”

Interviewee B5

“Indeed, I think that human beings can be better than computers and that their customer knowledge can effectively enable them to make better proposals.”

Interviewee B18

The employees emphasized the depth of their personal customer relationships, due to their knowledge of the customer’s life and plans—knowledge built up over years—and they stressed the importance of this role.

“There is a closeness when they are called immediately by their first name. We customize the relationship, and they value the fact that we know them. I realize that it's something the customers love. They love knowing that they are known and recognized.”

Interviewee B16

The customer advisors’ expertise was thus redefined away from technical banking knowledge and toward the management of human relationships. They moved into areas that AI could not reach, namely trust, accountability and emotional intelligence, areas that they perceived as being the weaknesses of AI. Their responses can be seen not just as reactions to challenges to their autonomy and to the meaning of their work but also as an intelligent tactic of repositioning their strengths in human connectedness, well beyond the reach of AI.

3.2.3 Cognitive Crafting

Cognitive crafting can be interpreted as an effort to reposition the meaning of work with respect to AI technology. Two themes emerged: how the employees saw themselves, and how they saw the technology.

Redefining the Meaning of Their Work Through the Customer’s Life Over and Above AI Technology

When asked about their job priorities following the introduction of the learning algorithm, the employees stressed their importance in the customer’s life, saying the customer relationship went further than a mere business relationship. They said they understood the customer better than an algorithm ever could. Their job was perceived not only as commercial but also as real assistance in the customer’s life, akin to a personal role. Cognitive crafting is thus strongly linked to relational crafting. By developing strong personalized relationships with the customer, the employees emphasized their usefulness in human terms, in contrast to impersonal algorithms, and in a manner that would not have been evident prior to the algorithms.

“Keep in mind that customers are not just algorithms. We know everything about our customers, so it's true that it creates a certain closeness because we know them. We know what they are going through; sometimes we are a little shrink. They tell us about their lives, so they trust us. They confide in us about things they might not tell anyone. And it's true that that's what I like.”

Interviewee B21

Such closeness cannot be provided by AI technology and AI-generated recommendations. It is the employee’s human knowledge and highly personalized relationship with the customer that makes the difference, something that AI was perceived as unable to do.

Understanding AI Technology as Complementing and Not Replacing Their Role

The interviewees’ perception of AI technology had evolved since its introduction, away from the fear that it would replace their role and toward a view of it as a sometimes useful but not essential tool. Their fears of being replaced by the algorithm became less pronounced as they came to understand the weaknesses of the AI-generated lists and predictions and how they could manage the algorithm and remain relevant to the customer.

“In my opinion, we have to get the best out of digital technology, while getting the best out of human beings. That's what will be decisive.”

Interviewee B18

By repositioning the learning algorithm as a tool and by discussing its limitations, the employees may be viewing themselves as being in control of the algorithm and accepting that it has a complementary role, rather than one of replacing employees. They contrasted its impersonality with the personal connection they brought to customer relationships.

4. Discussion

We have contributed to research on the antecedents to job crafting by showing how learning algorithms have a twofold impact: one on the motivation to job craft (here, autonomy and meaning of work), and the other on job crafting behaviours. Our findings provide empirical support for recent conceptual reviews that point to the relevance of job design approaches to the introduction of AI technology (e.g., Parent-Rocheleau & Parker, 2021).

We have found that learning algorithms initially reduced the autonomy of employees by telling them which customers to contact and what to propose. In addition, they now had to report their behaviour to managers by means of a software application, while also having their behaviour automatically monitored by the algorithms. Our findings are in line with pessimistic assessments of how AI technology limits employee autonomy and increases management control (e.g., Murray et al., 2020; Parent-Rocheleau & Parker, 2021). Our findings also show that autonomy changed dynamically over time and that the initial reductions in autonomy were reversed—a trend overlooked in previous research—because initial challenges to employee autonomy were met with employee job crafting practices that were accepted by the managers. For autonomy to change dynamically, managers must be flexible in accepting job crafting and must not rigidly insist on the validity of AI predictions.

While previous research has shown that AI undermines employees’ meaning of work (e.g., Pasquale, 2015), there have been few attempts to understand what such undermining means in concrete empirical terms. We found that the employees’ perceived expertise was undermined by the introduction of AI technology, which was intended—at least in part—to replace their experiential knowledge about customers with predictions based on AI analysis of customer transactions. The managers acknowledged that intention even though they did not understand how the AI predictions were actually produced. As with their autonomy, the employees saw the meaning of their work change dynamically over time. They initially felt that AI technology was threatening their expertise. Then, through their job crafting behaviour, they changed the meaning of their work over time: away from expertise grounded in technical banking knowledge and toward more in-depth knowledge of the customer’s needs and life plan. We therefore identify the employees’ perceived expertise as a specific facet of the meaning of work affected by AI, and we expect that this finding will generalize to many other occupations, given the common rationale for using AI technology to replace tasks.

Turning to how learning algorithms lead to job crafting behaviours, we have contributed to debates as to how job crafting is adapted to fit the context (e.g., Petrou et al., 2017). First, the employees used job crafting behaviours to respond effectively to the learning algorithm, specifically by rebalancing their autonomy and shifting to a different meaning of work. They also adopted the three dimensions of job crafting in a highly coherent and internally consistent manner. Finally, they changed the boundaries of their work by shifting away from technical and mundane tasks and toward deeper relationships with the customer, while changing how they saw their role (as customer ‘counsellors’ rather than as advisors). These relationship aspects cannot be automated as they are non-routine (cf. the Routine-Biased Technical Change thesis, Autor et al., 2003). We also found that job crafting practices must align in a common direction if the employees wish to oppose the potential existential threat to their jobs from AI technology. This is an advance on research that has viewed Wrzesniewski and Dutton’s (2001) three dimensions as operating in a relatively non-systematic manner (de Gennaro, 2019), without a clear employee strategy or a theory that can explain how those dimensions operate in tandem and synergistically. While job crafting research based on approach/avoidance behaviours suggests clearer inter-relationships between the three dimensions (Lazazzara et al., 2020), it fails to explain the dynamic changes in those dimensions. In the case of the customer advisors, their strategy was to develop deeper human contact with the customer and to reframe their understanding of their work identity toward personal relationships with customers, while also repositioning their value beyond the reach of AI technology (i.e., by creating trust and by meeting the customer’s deeper human needs, such as feeling cared for).

Finally, we have shown how job crafting behaviours changed the way the employees understood the meaning of AI technology, which shifted from being a threat to more of a tool that complemented their work. With practice, the AI tool could improve its predictions, but this learning ability was somewhat compromised by the employees selectively acting on its predictions, which in turn led them to further question its value. As a type of organizational change, AI implementation shapes job crafting (the typically assumed direction of influence, e.g., Petrou et al., 2017). But the converse is also true: job crafting shapes AI implementation, or rather the meaning given to it.

Our findings are limited to a single case study in a bank. Nevertheless, we expect them to be of value to all organizations in the banking sector. Furthermore, as learning algorithms were applied in our research context in a manner similar to what is seen in contexts beyond the banking sector (to replace tasks, to guide the recommendations of professionals/service workers), we expect the core findings to apply elsewhere (e.g., learning algorithms disrupt antecedents to job crafting and motivate job crafting behaviours). Nevertheless, future research should investigate other banks and organizations in other sectors. Of particular interest is how managers respond to employee job crafting in contexts where they are variously supportive, accepting, or opposed.

We asked our interviewees to report their views retrospectively, two to three years after the AI implementation, and we let them discuss how the preceding events had shaped their lives; however, important points may have been overlooked or forgotten. Future researchers should consider doing a longitudinal study to track the implementation of AI technology from its introduction to its mature operation and differences over time in the way its users understand and make sense of it.

To conclude, at a time when AI is a subject of controversial debates and sometimes hyperbolic claims, we have helped demystify this sometimes poorly defined and misunderstood subject by studying a concrete application from the perspective of its users—in this case, customer advisors in a bank. Our findings have implications not only for them but also for bank managers, for bank customers, and for the broader context of understanding the meaning of AI tools and how they are used. Research on other AI contexts may benefit from our findings, particularly the employees’ concerted and possibly synergistic response across the three job crafting dimensions.