Labor automation for fair cooperation: Why and how machines should provide meaningful work for all

Received: 14 November 2022 | Revised: 27 July 2023 | Accepted: 7 August 2023


1 | INTRODUCTION
By affecting work, resources, organizations, and people's lives, automation processes can be disruptive of the basic structure of society. Nonetheless, we may benefit from this disruption, as automation may offer opportunities to make social cooperation fairer. Just as philosophers have addressed the problem of which values and principles should regulate the distribution of goods, so we may consider the problem of the values and principles guiding technological change with regard to work. Indeed, automation is often addressed from a distributive perspective. A prevailing concern in the debate is about making sure that the technologically unemployed will not lose access to income through unconditional redistributive policies, while some have suggested policies like a "robot tax" to disincentivize companies' investment in labor-saving devices. While crucial given the massive increase in profits afforded by automation and the inequalities that go with it, concerns about income are not the only ones raised by automation. Without underestimating their relevance, in this article I leave aside problems about income to focus on automation from the perspective of work. That is, my concern here is with social cooperation from the perspective of contribution instead of distribution, within a framework that may be called technological contributive justice. If advocates of universal basic income (UBI) expect everyone to benefit from automation in their income, the contributive perspective postulates that everyone should benefit from automation in their work.
There are three main reasons behind this shift. First, even in a world in which income were unconditionally accessible to all, there would be the problem of how to fairly organize the unautomated socially necessary labor (e.g., waste collection, care work, etc.). I call this the "somebody's got to do it" problem. It cannot be solved by merely reallocating income, because it concerns the division of labor itself and its norms. Second, by conceptualizing social cooperation only as a matter of markets and distribution but not production, we are not able to see what happens with regard to what people do besides what they own. But this matters too when it comes to pursuing our life plans (see Section 4) as well as the effects on our aims, aspirations, and character (see Section 3.1). Finally, even if work were completely automatable, we would most likely still consider it undesirable to fully automate certain tasks, such as child care or teaching.
On the other hand, normative thinking about automation often takes the form of utopias of "full automation." Recent examples include ideas of "fully automated luxury communism" (Bastani, 2019) or views of a "post-work" world. A fully automated world, Danaher (2019b) argues, would allow us to pursue a life free from the pressures of economic demands and to enjoy activities for their own sake, much like playing games. From the premise that work is "structurally bad," he draws the conclusion that we should retreat from work, and the prospect of full automation is convenient to this purpose. This is what he calls a "withdrawal" strategy that disengages from work and its demands altogether.
While captivating, post-work arguments rely on a debatable premise: that work will, in fact, end. This is by no means certain and, as such, is not falsifiable. There is much controversy around this issue and there are several nuances to consider. In this article, I develop the point that, given this uncertainty, we should redirect our attention to the opportunities that hybrid cooperation may bring about for a fairer society. This requires us to reframe the problem of automation from one of whether future joblessness will take place and is desirable, to one of the preferable ways to realize hybrid cooperation.
Thus, in this article I pursue an alternative "transformative" strategy, aimed at changing the structures of work instead of merely withdrawing from them. To this end, I explore an alternative ideal, which I call "fair hybrid cooperation," according to which we should arrange the automation processes that affect labor in such a way as to provide meaningful work for all. This ideal assumes an always-evolving, hybrid cooperation scenario between humans and machines rather than one of "human obsolescence" (Danaher, 2019a). In this ideal, machines help us realize forms of cooperation that make meaningful work possible for all, rather than reserving it only for the few, as is currently the case, or anticipating a mere replacement scenario.
Against this background, this article addresses two main questions. One is concerned with the why of automation (i), that is, the motives driving it. The other is concerned with the how (ii), meaning the quality of automation-driven organizational changes. In what follows, I briefly outline the arguments.
1. Why should automation change work? Despite being sometimes represented as a natural-like phenomenon outside of our control, labor automation results from human choices and social processes. These choices and processes impact the basic structure of society and ultimately people's chances to pursue their conceptions of the good life. Thus, the values and ends of automation fall under the scope of normative scrutiny. The value currently orienting most labor automation is that of economic efficiency: investments in labor-saving devices are meant to reduce labor costs and increase productivity. The fact that automation overarchingly serves this value requires further investigation. This article develops the intuition that automation should serve certain societal values besides efficiency. I argue that fair cooperation in the division of labor should be one of them. In other words, when it comes to labor automation, we should consider not merely technical and economic motivations but also benefits for contributors to cooperation themselves.
2. Technological change affects not only the quantity of jobs available but also the quality of work and its organization. Besides considering the quantitative problem of how many jobs will be lost and whether work will "end" at all, we should address the qualitative question: How should automation change work? Just as there are preferable and objectionable ways to redistribute income and goods, so there are preferable and objectionable ways to organize work and cooperation. As such, I propose a normative criterion to guide automation priorities in a hybrid cooperation scenario, based on the idea of "contributive primary qualities." The latter are characteristics of the relation between the worker and the work activity that affect workers' pursuit of their life plans. This criterion requires that automation-and any cooperative reorganization resulting from it-does not hinder, and preferably enables, workers' experience of those qualities. The case of nurse bots-social robots performing tasks of care work-will be helpful to this inquiry, but other examples will illustrate it as well.
The article proceeds as follows. First, I discuss the efficiency motives driving current automation processes (Section 2). Then, I explain why efficiency should not be the overriding value orienting automation choices and why the motives of automation deserve further normative scrutiny (Section 3). Automation processes, I argue, are part of the basic structure of society, to which considerations of justice apply. Given the structural interdependence between humans and technology in social cooperation, we may refer to it as a system of "hybrid cooperation." Following on from this, I consider "fair hybrid cooperation" as an alternative value co-orienting automation priorities (Section 3). Fair hybrid cooperation is achieved when the organizational arrangements between humans and machines-and between humans themselves as a result of technological disruption-do not hinder, and preferably enable, workers' experience of certain primary qualities in their activity (Section 4). Finally (Section 5), I provide some practical examples through a "fair hybrid cooperation test" to show how this ideal may play out in the real world. I then consider alternative cooperative imaginaries inspired by this ideal, focusing on the case of nurse bots. Section 6 concludes.

2 | CURRENT VALUES OF AUTOMATION
What values are current automation choices prioritizing? What values should be pursued instead? Labor automation processes are mostly driven by investments aimed at increasing productivity and maximizing profits by cutting the costs of labor. In Keynes' words (1963, p. 364), the driving force behind automation is the "discovery of means of economizing the use of labour outrunning the pace at which we can find new uses for labour." In a similar vein, Weber (1978, pp. 1-67) observed that if "economic action" shapes the goals of profit-making, technology provides the "appropriate means" for it: "The fact that what is called the technological development of modern times has been so largely oriented economically to profit-making is one of the fundamental facts of the history of technology."
The costs of automation have dropped dramatically since the beginning of the computing era. As Nordhaus (2007, p. 1) points out, "depending on the standard used, computer performance has improved since manual computing by a factor between 1.7 trillion and 76 trillion." Companies thus have quite a strong incentive to substitute machines for human labor: they cost less and produce more.
Technological innovation has historically served ends of efficiency, which are often in conflict with the quality of workers' cooperation. Think of Adam Smith's (1776) classic reference to the pin factory. The detailed division of labor allowed by technological innovation in the 18th century determined a dramatic increase in productivity. There was a price to pay, though: what Smith called the "stultification" of workers, trapped in a series of mindless tasks ultimately degrading their intelligence and autonomy. Later on, Frederick Taylor's "scientific management" introduced the meticulous quantification of workers' input to maximize the productive output, thereby reducing workers to cogs in a machine. Notoriously, no consideration for human benefits formed part of the Taylorist experiment. As captured by Taylor's (1919) own words, the goal was different: "in the past the man has been first; in the future the system must be first." Despite massive organizational transformations, automation's core rationale has not changed since that time. Efficiency benefits workers only derivatively: whether efficiency will benefit workers does not automatically follow from the general concern for efficiency in itself.
To be sure, this is not to deny that many technological innovations already serve other values as well. Doctors rely on sophisticated AI devices to optimize diagnostics and surgery, for example. In such cases (and others could be cited), automation serves other purposes, such as better healthcare services for patients. Therefore, strictly speaking, economic efficiency is not the only value being pursued. Thus, my argument is not entirely foreign to certain existing automation practices; rather, it explicitly articulates that economic efficiency should not be the exclusive, overarching value being pursued by automation choices. The idea is that distinctive concerns for the benefits of workers ought to be considered, which are often left out of the picture. When faced with opportunities for technological change, we ought to consider not merely productivity gains and performance optimization but also whether such changes will make cooperation fairer for workers.

3 | DENATURALIZING AUTOMATION: WHY SHOULD WE QUESTION AUTOMATION'S VALUES?
A tendency can sometimes be identified in public debates whereby automation is naturalized, that is, technological change is presented as a sort of natural process, much like an uncontrollable calamity, despite its being the result of human choices (and of socio-structural processes cumulatively perpetuated by human choices). Naturalizing automation entails keeping it outside of the realm of moral inquiry and demands of justice. We consider as worthy of normative inquiry what we acknowledge as resulting from human choices and social processes, much as we do with taxes, social biases, and all sorts of policies. Theories have argued for the redistribution of goods based on values such as equality, fairness, and human capabilities, presupposing that the way in which goods are distributed depends on human decisions. There is no inherent reason why the same should not be done in the context of work and technological change, in terms of preferable ways to realize labor automation. This requires that we de-naturalize our discourses around automation, fully recognizing the human drive and social genesis of technological change and thereby making space to question its driving motives and to expand their scope. Before addressing the idea of fair cooperation, let me articulate why economic efficiency should not be the overarching value and, more broadly, why it is appropriate to include automation in normative considerations.
To begin with, automation has a fully human and social genesis. Hence, it falls within the scope of normative inquiry, in which we question motives and ends and deliberate among the most desirable ones based on reasons. Furthermore, as I will argue shortly, automation belongs to the basic structure of society, to which-following John Rawls-considerations of justice apply. Automation affects how we organize cooperation and as such alters organizational forms, which are themselves also part of the basic structure. Thus, automation affects workers' chances to pursue their life plans, as much as other institutions of the basic structure do. A further reason concerns the consequences of automation: allowing automation to be driven merely by economic efficiency can exacerbate social inequalities and power imbalances. While I will not expand on this point here, arguments could be made that these undesirable outcomes matter as well (e.g., Marmot et al., 1997).

3.1 | Automation, work, and the basic structure: The idea of hybrid cooperation
To support my claim that labor automation processes should be part of normative considerations, I develop the following argument:

1. the basic structure includes the division of labor;
2. social cooperation is structurally dependent on technology;
3. automation alters current forms of the division of labor between humans and machines;
4. therefore, considerations of fairness are appropriate when it comes to the impact of automation on work.
Before expanding on these points, let me clarify that I have no exegetic intent with regard to the work of John Rawls. I do not wish to suggest a more accurate interpretation of his theory but rather to use his work as a conceptual toolbox for independent reasoning. I depart from Rawls' view in at least three respects. First, I argue that not only what we own affects our chances to pursue our life plans but also what we do. Second, I emphasize that technology is part of social cooperation, which is thus to be deemed structurally hybrid. Third, I include organizations in the notion of the basic structure. In what follows, I expand on each step of the argument.

3.1.1 | The basic structure includes the division of labor
Rawls' definition of the basic structure is centered on the concept of social cooperation. Justice, it is maintained, applies to the basic structure, which is "[t]he way in which the main political and social institutions of a society fit together into one system of social cooperation, and the way they assign basic rights and duties and regulate the division of advantages that arise from social cooperation over time" (Rawls, 2001, p. 10). When Rawls includes the "structure of the economy" (Rawls, 2001, p. 10) in the basic structure, he mostly refers to markets and the ownership of the means of production, thereby addressing social cooperation in a distributive sense. Yet beyond distribution, social cooperation comprises non-distributive relations (Young, 1990, 2006) as well as a contributive side (Gomberg, 2007). The distributive side of social cooperation can be said to concern the outcomes of work. The contributive side can be said to concern work itself, at both the individual and the organizational level. In other words, social cooperation is not just about the outcomes of society's cooperative efforts; it is also about the cooperative efforts themselves.
While much of the critical literature on the basic structure has focused on the family, there are reasons to consider work and the division of labor as part of the basic structure too. In this regard, it is helpful to look at the criteria of inclusion in the basic structure. This is a long-debated, difficult problem, which Rawls himself acknowledges. On his account, the criteria of inclusion in the basic structure are the "effects on citizens' aims, aspirations, and character, as well as on their opportunities and ability to take advantage of them," which are "pervasive and present from the beginning of life" (Rawls, 2001, p. 11). It would be hard to deny that work and the way we organize it have a significant impact on people's aims, aspirations, character, and the related opportunities in a rather pervasive way. Think of menial labor, reserved for the most socially vulnerable segments of the population. How can being confined to repetitive, merely executory tasks not impact workers' aims, aspirations, character, and opportunities as pervasively as other institutions explicitly recognized as part of the basic structure? As for the requirement of being "present from the beginning of life," the relative lack of social mobility in developed countries like the US makes it hard for many people to escape their "occupational fate" (Hughes, 2017), as jobs often follow generational lines and "pedigrees" (Rivera, 2015). I will expand more on the impact of work in Section 4.1; for now, it suffices to note, following Young (2006, p. 92), that the social division of labor "is a fundamental aspect of a system of social cooperation." Besides capital and wages, relations of production "refer primarily to the processes of investment, production, marketing, technological invention, and so on." Given that the case is compellingly made by Young (1990, 2006), I will not expand further on this point here. I am interested in the implications of this argument with regard to technology-driven changes to labor. In what follows, I develop Young's argument about the division of labor further, toward including processes of automation as fully-fledged parts of the basic structure.

3.1.2 | Human cooperation is structurally dependent on technology
If the distributive conceptualization of social cooperation has tended to prevail over concerns for work, the same can be said of technological change: as Suchman (2007) puts it, technology "enchants" by "masking" the "labors of production."
Throughout human history, technology has defined the range of what we can and cannot do. Recent labor automation processes via the digitalization and algorithmization of tasks can be seen as the latest transformation of a long-standing, historically varied cooperative relation between humans and machines, one that began in the third century B.C., when the first water clock was invented (Beniger, 1986). Over time, the invention of new technologies has expanded the realm of what we have conceived of as possible. It might be an algorithm that extracts news articles from raw data or the sending of a spacecraft to Mars; it might be a car or a dishwasher; it might be the Industrial Revolution. We tend to think of technology as an entity that is externally added to our human environment. But if we break the macro-system of social cooperation down into its smaller components, we find that, in direct or indirect ways, we rely on technology at every step of the productive process. Technology is thus not a mere juxtaposition to a purely human cooperative scheme; it can be considered a fully-fledged nonhuman cooperator (it is, in a sense, accumulated labor itself).
On the other hand, if humans depend on technology to work, they are also the condition of possibility of technology-not only in terms of knowledge, invention (cultural capital), and investments (economic capital), but also of work and cooperation. Hence, regardless of the specific features of this ever-evolving relationship, it makes sense to qualify the division of labor between humans and technology as interdependent. We cannot really think of social cooperation without including technology as a substantial part of the structural processes of value creation. These features make cooperation hybrid (Coeckelbergh, 2009). Even if it may not directly affect all work at every moment, broadly speaking, we can say that, directly or indirectly, technology is structurally involved in society's overall cooperative practices. We depend on technology; technology depends on us (to a certain extent); and technology co-defines the scope of what we can, and cannot, do. My argument expands on Mark Coeckelbergh's (2009) point that nonhuman entities are to be deemed part of a hybrid cooperative scheme and, as such, are subject to considerations of justice. Unlike Coeckelbergh, though, I do not confine my concerns to distributive justice but shift to contributive concerns, and I address the problem of fairness in work cooperation itself. And while Coeckelbergh seems mostly concerned with justice in terms of what is owed to nonhuman cooperators-both animals and robots-my reflection distinctively focuses on what machines can do with us. That is, if we are to determine what we owe to each other, given that technology is structurally part of social cooperation, we cannot leave the problem of its organization outside of our normative concerns.

3.1.3 | Automation alters the division of labor between humans and machines
If the division of labor is part of the basic structure and social cooperation is structurally hybrid, the final step of the reasoning is that automation alters current forms of the division of labor crystallized in organizational forms. Organizations can be considered the units of the social division of labor. Together, they can be said to constitute the very tissue of social cooperation. They are where work mostly takes place. Lying at the "meso-level of social life," between individual morality and socio-political institutions (Herzog, 2018, p. 5), organizations determine the way in which resources are redistributed along with the positions of individuals in the socioeconomic hierarchy, their room for claim-making and power, and even the scope of their aspirations (Tomaskovic-Devey & Avent-Holt, 2019). Since they crucially contribute to making inequalities systemic (Tilly, 1998), leaving organizational forms outside of the scope of normative scrutiny seems to have little justification. If they affect individuals' aims, aspirations, and character along with their opportunities, given the pervasiveness and importance of organizations as sources and perpetrators of structural inequalities, they fit the criteria of inclusion in the basic structure. Now, automation-driven organizational changes (e.g., Zuboff, 1988) show that it is at this meso-level of social life that some of the effects of automation are most visible. In this context, labor automation may be described as a process that disrupts organizational forms. The technological contributive justice perspective assesses whether and how these changes clash with suitable normative expectations of organizational forms.
To be sure, Rawls thought of the basic structure and justice as referring to society as a whole, not to the internal life of associations and organizations. Likewise, Rawls explicitly referred to the basic structure as including the "structure" or "organization of the economy," and by that he meant markets and ownership of the means of production. But why markets, which redistribute goods, and not organizations, which create those very goods? Scholars have explored whether companies should be considered part of the basic structure of society or rather as belonging to the sphere of private unions and associations. O'Neill (2009), for example, takes it as "uncontroversial" that the former is the case. Blanc (2014) and Taylor (2004) reach similar conclusions. There seems to be no inherent reason why organizations should not count as parts of the structure of the economy, given the impact they have on people's lives, which is as pervasive and relevant as that of other institutions already explicitly considered part of the basic structure.
To conclude, if automation is disruptive of organizational forms, and if organizational forms deserve normative scrutiny, it is appropriate to explore preferable ways to arrange organizational forms affected by automation.
3.2 | Fair cooperation in the real world: Automation's "labor aristocracy" versus the "precariat"

What is meant by "fair cooperation," then? To answer this question, it is helpful to look at Rawls' three essential features of fair social cooperation. (i) Unlike mere "coordination"-which is compatible with "absolute central authority," as in slavery for example-cooperation is "guided by publicly recognized rules and procedures" (Rawls, 2001, p. 6). (ii) Cooperation "includes the idea of fair terms of cooperation" that participants "may reasonably accept" within an idea of "reciprocity, or mutuality" (Rawls, 2001, p. 6), where reciprocity requires that "all who do their part as the recognized rules require are to benefit as specified by a public and agreed-upon standard" (Rawls, 2001, p. 6). Finally, (iii) it includes each participant's mutual benefit, "rational advantage, or good."

Unsurprisingly, the current forms of the division of labor between humans and machines do not embody these normative features. Judging from the way in which automation processes are currently unfolding, the benefits are reserved for a minority. Workers in routine occupations are the most impacted by automation (Autor et al., 2003). Others perform "taskified" labor (Gray, 2016), their activity being reduced to sequences of unrelated tasks (e.g., "clicking"). Hence cooperation does not meet the standards of mutuality and rational advantage. Workers in non-routine jobs, by contrast, are benefiting from automation in that they can reorganize their work by focusing on more interesting activities. I refer to this divergent access to work's benefits and the highly stratified nature of contribution as "contributive inequality." It recalls the difference between what Gorz (1989) calls the "labor aristocracy" and what Standing (2011) refers to as "the precariat." By extension, there seems to be a labor aristocracy and a precariat of automation as well. If automation continues to exclusively pursue productivity without considering mutual benefit and fairness for workers, it will not resemble cooperation but will be more akin to mere coordination. Coordination is cooperation without the normative quality of fairness. In coordination, some workers benefit from automation while others are consistently penalized by it, the whole process being governed by the interests of a restricted group of people (much like the "centralized authority"). As I will explain, according to the fair hybrid cooperation ideal, automation should instead be oriented toward making sure that all workers benefit from it in their work.
I have addressed the problem of the why of automation. I will now turn to the problem of the how. What should the division of labor between humans and machines look like in order for hybrid social cooperation to meet the requirements of fairness? To answer this question, I propose the concept of "contributive primary qualities" (CPQs).
CPQs refer to qualities of the relation between workers and their work activity that workers should experience in order to pursue their conception of the good life. As is well known, Rawls' primary goods are all-purpose means necessary for everyone to pursue their life plans. No less than primary goods, however, what we do and how we do it affect our ability to pursue our life plans too.
Research has shown that work also significantly affects us outside of the workplace, including our cognitive abilities and overall personality (see for instance Kohn & Schooler, 1978, 1982; Marmot et al., 1997). If work affects our being, it has the power to impact our ability to pursue our life plans overall. The rationale behind CPQs is that when our work activity hinders the experience of these qualities, our ability to pursue ends is undermined. Therefore, we should prefer organizational forms involving divisions of labor between humans and machines that do not hinder, and preferably enable, CPQs.
These qualities are primarily relational in nature and organizationally embedded, rather than "goods" to be redistributed or "possessed." They emerge from the relation between workers, their work activity, and the organizational form. Examples of CPQs are security, self-direction, self-development, dignity, and recognition. In what follows, I articulate the essential features and rationale of each CPQ.

4.1 | Security
Job security includes physical and contractual-social safety. Without job security, workers are entirely in charge of dealing with the risks of sickness, unemployment, and care responsibilities. If physical security has to do with safety and protection from health risks, job security in its contractual and social aspects amounts to the overall consistency and predictability of work prospects. Welfare benefits such as paid healthcare, paid vacation, parental leave, and stable contractual prospects are currently reserved for a restricted elite of workers. Standing (2011) defines the "precariat" precisely by using job security as a main criterion of class determination. On his account, unlike the traditional working class, which is still protected by a system of rights and wider opportunities for unionization, this growing class of migrants, millennials, and workers with low education shares a lack of security. Precarious occupations compel people to engage in exhausting job hunting, not to mention the emotional burden of what has been called "hope labor," which consumes their nonworking hours. This applies not only on a material and practical level, but also on a self-conceptual level, as it were. Precarity has corroding effects on character, operating against a consistent, linear self-narration for positive self-identity (Sennett, 1998). For these reasons, it is hard to deny that without job security one's pursuit of life plans is severely hampered. Hence, fair forms of cooperation should ensure that all workers enjoy job security.

| Self-direction
Self-direction refers to having room for conceiving the tasks one performs besides merely executing them. Self-direction is a CPQ because devoting all of one's working energy to merely executing tasks one has not contributed to co-defining has been shown to degrade individuals' overall cognitive abilities and personality, for example, flexibility and the capacity to deal with complexity (Kohn & Schooler, 1978, 1982). Such abilities are essential for workers' exercise of autonomy in their life as a whole, not merely during work (Schwartz, 1982). If the exercise of self-direction impacts our cognitive abilities and personality, these undoubtedly matter for the formation and pursuit of our life plans. Besides participation in task conception, I take self-direction to also include time control: we need free time and some degree of control over our time to pursue our conception of the good life.

| Self-development
Over our lifespan, work takes up most of our waking time, and the activities to which we devote our attention tend to affect our abilities overall, which in turn affect the range of things we can aspire to do and become in our life plans. Monotonous and repetitive work with no opportunities for learning leaves little room for the development of one's capacities. Workers in higher positions have more training opportunities and benefit from more task variety. The contributively disadvantaged, by contrast, tend to be reserved for routine tasks, with no prospect of training and self-development. As sources of self-development, variety and relative task complexity should thus be features of the work activity.

| Dignity and recognition
Recognition is essential for a positive relation to the self (Honneth, 1996). In current organizational settings, though, opportunities for recognition are highly segregated. People from low status groups are more likely to get low status jobs, and their social position is perpetuated by labor structures in turn (Tomaskovic-Devey, 1993). The social standing of our occupation continues to produce effects after we clock out, in terms of access to what Scanlon (2018, p. 26) has referred to as segregated "associational goods." In short, the structures of work crucially participate in the perpetuation of a status trap. While recognition is performance-based status, dignity is unconditional status: namely, "a non-conventional normative status of persons such that certain forms of respect and concern are owed to them" (Gilabert, 2020). As Gilabert (2020) puts it, "Dignitarian norms specify the appropriate treatment-the forms of respect and concern-that responding to such status-dignity requires." In the context of labor, I take these dignitarian norms to require the arrangement of the organizational means to accommodate dignity. While dignity is more fundamental than recognition, in the sense that dignity is owed to everyone regardless of what they do, both matter for workers' overall belief in their worth. Research has shown that a positive relation to the self, in the form of self-esteem and belief in one's worth, fundamentally affects people's overall ability to function in life (e.g., Ryff, 1989) and, therefore, to pursue their life plans.
Note that CPQs resonate with three characteristics of the detailed division of labor in a well-ordered society: "[N]o one need be servilely dependent on others and made to choose between monotonous and routine occupations which are deadening to human thought and sensibility. Each can be offered a variety of tasks so that the different elements of his nature find a suitable expression" (Rawls, 1999, §79). Here Rawls condemns the most degrading effects of the detailed division of labor in a way that recalls Smith's criticism of the stultified worker in the pin manufacturing industry (1776). While the recipe he suggests against the bads of the division of labor-servile dependency, monotony, and routine-is "variety" (along with "meaningful work"), the CPQs mentioned above provide a more complete way to avoid them. Servile dependency is prevented by self-direction, and monotony and routine by self-development, for example. These aspects are mentioned en passant in Rawls' work, with no systematic development. And yet, they are no less relevant for the pursuit of our life plans than the primary goods.

| Fair hybrid cooperation
I have argued that in order to pursue their conception of the good life, workers should be able to experience certain contributive primary qualities. This requires the division of labor between humans and technology-and among human workers themselves following technological change-to be arranged to make sure that workers will experience these qualities. This criterion may help us to assess existing automation patterns, in the sense that between divisions of labor that enable CPQs and divisions of labor that hinder them, the former are to be preferred. It might be objected that not all tasks can be arranged to meet the fair hybrid cooperation criterion grounded on the CPQs. Yet the criterion holds regardless: if automation cannot entirely absorb the tasks that hinder the CPQs, these tasks ought to be divided among the workers so as to minimize the amount of nonmeaningful work per worker and enable the experience of CPQs as much as possible. While providing a sort of regulative ideal, fair hybrid cooperation can be understood as a normative standard for practical orientation operating in degrees, helping us distinguish between more and less preferable automation routes and related organizational forms. The CPQs can also be framed in the more traditional terms of "meaningful work" (more on this in the following section), which could be referred to as "hybrid meaningful work" in this context. The ideal of fair hybrid cooperation requires that, in order for meaningful work to be experienced by all workers, the organizational means for these qualities be provided. In positive terms: preferable organizational forms enable these CPQs, besides merely refraining from hindering them.
To frame the fair hybrid cooperation requirements in terms of a contributive-technological criterion: divisions of labor between humans and machines ought to be arranged so as not to hinder, and preferably to enable, workers' experience of the contributive primary qualities. This criterion is meant to co-orient decisions about which tasks to automate, to help determine which automation priorities should be pursued, and, more generally, to provide guidance about how best to arrange the division of labor between humans and machines. This implies that when designing the future steps of labor automation, decision-makers and investors should automate tasks so as to enable human workers' access to these qualities, beyond mere efficiency motives. Ideally, the realization of the ideal would make meaningful work no longer reserved for the few, and technological change an ally toward that goal. In a sense, this framework is meant to reverse Taylorism: that is, to resist humans being reduced to cogs in a machine. In this ideal, humans work with technology and benefit from it. In the Taylorist ideal, humans worked for technology and were undermined by it.
Before engaging with the "fair hybrid cooperation test," two clarifications are in order. First, (i) what do the CPQs have to do with "meaningful work"? Second, (ii) is work necessary to experience those qualities?
1. The question of what counts as meaningful work has inspired a rich philosophical debate, with some arguing for meaningful work on the grounds of autonomy, non-alienation, or flourishing (Schwartz, 1982; Roessler, 2012; Veltman, 2016), and others arguing that demands of meaningful work would unacceptably conflict with liberal requirements of value pluralism (Kymlicka, 2002). It is beyond the scope of this article to review these theories; here it suffices to note that a concept of meaningful work grounded on the idea of CPQs has the advantage of not having to commit to any substantive idea of happiness, nor make claims about the ultimate ethical role of work in a good life. It rather requires that work be characterized by certain organizationally embedded qualities for workers to be able to pursue their life plans, regardless of the nature of these plans. Thus, instead of choosing between meaningful work and value pluralism as if they were mutually exclusive, this account centered on universally valuable qualities aims at being compatible with different substantive views (much like the idea of primary goods).
2. Does this imply that work is necessary to experience those qualities? Not really. In principle, we may get recognition and experience self-direction in other ways-by playing games, in personal relationships, by doing sports, and so on. The ideal pragmatically starts from the assumption that work still occupies a big chunk of our lives, the prospect of future joblessness being quite uncertain, and that work is one of the most relevant ways in which we meet ineliminable social needs. Recall the "somebody's got to do it" problem and the other considerations above on the persistence of socially necessary un-automated tasks. Hence, we are to make sure that the conditions of work do not prevent us from pursuing our life plans, and to de-segregate meaningful work. While the CPQs are not exclusively or inherently tied to work, in a system in which individuals work most of the time and in which work is a major way to meet social needs, CPQs do have a special tie to work (see previous section) given the latter's impact on people's pursuit of their life plans. The nature of this tie is factual and pragmatic rather than substantive. In light of the persistence of socially necessary unautomated tasks, this view demands that everyone experience meaningful work even in a world with less work.

| A FAIR HYBRID COOPERATION TEST
In this section I take up a few examples to give a clearer picture of how this ideal might play out in the real world. I show how the criterion provided can be used to assess existing organizational forms. In the next section, I show how it can be used to shape new possible cooperative imaginaries.
In current organizational arrangements, CPQs are reserved for a small portion of workers, being highly segregated. Think of "ghost work" (Gray & Suri, 2019): invisible human labor operating behind the scenes of AI. It includes figures such as "data janitors" (Irani, 2019), who spend long hours labeling images and cleansing the internet of inappropriate content. Click-farms and crowd-work are living examples of how cooperative arrangements prioritizing the system over the human are by no means confined to the past. As a rather taskified form of work, benefiting from no security, with little to no room for self-development and self-direction (except, to an extent, in terms of time management), invisible and therefore not susceptible to recognition, crowd-work does not pass the fair hybrid cooperation test. According to our standard, it is thus objectionable and should be changed in a way that is more conducive to the CPQs.
Let us now consider automated management in the gig economy, particularly in the ride-hailing sector, to see whether it meets the criteria of fair hybrid cooperation. To begin with, most drivers and riders do not benefit from any kind of job security, as in several cases they are not even recognized as workers. In fact, in most countries, companies such as Uber frame them as "partners" or "independent contractors." Hence, workers are entirely in charge of the burden of dealing with the risks associated with the service they provide. They have, however, some room for self-direction. The aspect in which they enjoy most self-direction tends to be time management: they decide when and for how long to work. Nonetheless, the algorithm nudges them to work longer hours, via notifications that promise higher earnings in certain areas and at certain times. Likewise, workers are constantly tracked, and their data is used to both monitor and control their behavior, besides being a source of value extraction itself. By declining a few orders in a row, they risk being banned from the app or being ranked lower by the algorithm. A few negative reviews by passengers may lead to similar sanctions. These aspects suggest high degrees of control hindering self-direction. Finally, these highly controlling features do not seem to fit well with the dignitarian norms mentioned above. A slogan used by protestors-"We are drivers, not Uber's tools!"-is telling. It suggests a sense of being treated as "mere means" by the company. As for self-development, complaints about the repetitiveness and monotony of this job seem not particularly prominent, so this CPQ might not be lacking.
While it may be very profitable for the company and convenient for customers, the automation of management here is not arranged in a way that enables fair hybrid cooperation. In order to pass the test, this organizational form should be rearranged so as to enable job security, by formally recognizing gig labor as work and therefore providing workers with contractual and social protections, and room for self-direction, for example by limiting datafication, nudges, and sanctions. Such changes may benefit the relational qualities as well.
Besides assessing existing organizational forms, the fair hybrid cooperation ideal can help us build alternative organizational arrangements, yet to be realized. It can serve our organizational imagination to pursue fairer forms of cooperation in an increasingly hybrid world. As an example of a positive exploration of the ideal, in what follows I consider the potential of care work automation to fulfill this purpose.

| Alternative cooperative imaginaries: A nurse bots case for meaningful work
"Nurse bots" (or "care bots") are social robots that perform care work tasks for children, the elderly, or the disabled, as well as household chores. Equipped with sensors and cameras, speech and face recognition, they are able to speak several languages and hold a conversation. They can assist doctors in providing diagnoses and treatments, and they can also be companions for patients. Far from being science fiction, several countries are currently investing in nurse bots, particularly Japan and, more recently, the European Union as well. Even though the cost of these devices is prohibitively high at present, it is expected to drop in the future.
Nurse bots provide a particularly interesting case of inquiry for a number of reasons. To begin with, care work is notoriously very gender segregated, thereby raising concerns of fairness in terms of equality of opportunity in the contributive realm. Second, unsurprisingly, the countries that are investing most in these robots are dealing with an aging population. According to the UN, by 2050 the number of people over 60 will more than double to 2.1 billion, which goes hand in hand with the problem of a relative shortage of care workers. Traditionally, better-off countries have solved the latter challenge by outsourcing care work to migrants from developing countries, resulting in "global care chains" (Hochschild, 2015). Countries such as Japan, traditionally strict on immigration, are explicitly willing to rely on these robots to fulfill their increasing care needs. In 2009, the Japanese trade ministry official Motoki Korenaga publicly stated that "Japan wants to become an advanced country in the area of addressing the aging society with the use of robots" (Sharkey & Sharkey, 2011, p. 267). It seems likely that social robots will increasingly become a part of our lives.
While relatively recent, the philosophical interest in care bots is not new. Philosophers have focused on the impact of care bots on the ethical quality of the care relation (Vallor, 2011). The debate has mostly centered on warning against the risks of "de-humanized care" and concerns for the safety of vulnerable persons (Sharkey & Sharkey, 2011). Sparrow (2016, p. 1), for instance, argues that nurse bots are part of a "trajectory that leads towards a dystopian future even when this is not the intention of the engineers." Folbre (2006, p. 351), addressing the pressing problems raised by care automation and care-related immigration in a globalized world, wonders: "Do we really want a cold metallic hand on our pulse?"
As often occurs with the advent of new technologies, there are indeed several risks, spanning from cognitive and physical safety to data protection, including "emotional data" (Fong & Dautenhahn, 2003). Yet, care automation being at the crossroads of several challenges of this century-from gender inequality to care worker shortages, from global migrations to aging populations-there are also opportunities worth considering, which ought to be weighed against the risks. Indeed, de-humanized care does not necessarily have to be the most likely scenario (Sequeira, 2018), and there seem to be advantages for the cared-for too, such as some freedom from shame in sharing intimate details with healthcare personnel (Pugh, 2018). While certainly not denying the risks, I wish to explore the opportunities that nurse bots may provide for realizing the ideal of fair hybrid cooperation. To that end, I shift attention from the cared-for to the caregivers. How could the automation of care work ensure fairer hybrid cooperation, or hybrid meaningful work?
The coronavirus pandemic has dramatically demonstrated the critical value of nurses to society. Job listings advertising positions paying $8000 a week were not unusual (Hilgers, 2022). The New York Times (Hilgers, 2022) reports on a hospital in Texas losing 80% of its nursing personnel in 2020 alone, a striking example of the much-touted "Great Resignation." This is perhaps unsurprising if we consider the "punishing routines of Covid nursing-the isolation rooms, the angry families and unceasing drumbeat of death." Nursing has been labeled a "burnout profession," being both physically and emotionally draining (Hilgers, 2022). In this context, nurse bots may be able to lend a valuable hand. We may imagine them working with nurses, as opposed to merely replacing them, and making their jobs more bearable, which in turn may benefit the cared-for as well. The most meaningful part of a nurse's job could be preserved and possibly enhanced by social robots. Let me expand on this point in light of the criteria discussed above.
The contributive primary quality of job security, particularly the component of physical safety, has been especially under threat in hospitals and care homes all over the world during the pandemic. The social division of risk was such that human nurses were especially exposed at the front line, not only to the risk of contracting deadly infections but also to that of being physically and emotionally overwhelmed (such risks, of course, not being exclusive to pandemics). Nurse bots may minimize these risks of exposure to infection, for example by taking up several front-line tasks and mediating access to care facilities. They may minimize the risks of burnout by taking up the bulk of routine work-say, part of the work of night-time assistance-while leaving more room for the more meaningful aspects of the job, such as human connection. The CPQs of self-direction and self-development may be enhanced by reducing the workload and externalizing the most monotonous tasks, such as cleaning, to robots. Opportunities for recognition would be enhanced by nurse bots taking over the tasks that are less socially rewarding. From the perspective of hybrid cooperation, in this scenario robots would not merely replace nurses but would work with them in a way that enables their experience of the CPQs. Ideally, following the framework discussed in this article, nurses would in this way benefit from better chances to pursue their life plans.
To be sure, the risks of "ghost work" should not be underestimated. Fair hybrid cooperation requires that the experience of CPQs be available to all contributors to cooperation and that the division of labor with machines be arranged with this priority in mind. Hence, the work of training the robot, fixing it, monitoring it, and integrating its tasks should by no means be kept invisible and unacknowledged, which would risk saddling human nurses with extra AI-related shadow work, as it were. This residue of AI-supporting tasks should be minimized, for example by sharing it among human workers through rotation or other organizational arrangements. Alternative organizational strategies may be envisaged too with the criterion of fair hybrid cooperation in mind. For instance, as Folbre (2006) suggests, human nurses may be directly involved in decisions concerning the very design of these robots, which would indeed further enable the CPQ of self-direction. This would be important also when it comes to determining what specifically should count as drudgery and what as meaningful. Human nurses involved firsthand in the activities are likely better equipped than anyone else to draw the line between these two types of tasks.
Imagine a future in which public institutions invested in robotized care with the purpose of complementing human nurses in a way that enhances meaningful work. This could be an example of labor automation pursuing fair cooperation rather than mere economic efficiency.

| The problem of a highly competitive global market
It might be objected that even though automation is not a natural phenomenon, precisely because it is a structural process we have little room for maneuver. A highly competitive global market, pressuring companies to seek ways to reduce costs in order to survive, shows that automation choices are constrained. On this view, investors and decision-makers should not be blamed, nor carry any particular responsibility, for failing to pursue values other than economic efficiency: not keeping up with these pressures would in fact entail high costs-for example, failure or outsourcing. It would be naïve to deny that, in the real world, the global forces behind automation may easily outweigh normative considerations such as those defended in this article. While this tension is somehow inherent to any attempt to expand the normative scope of business beyond concerns for efficiency, some observations can be made to at least mitigate the objection.
First, it is worth noting that when it comes to distributive justice, the structural nature of socio-economic processes and the pressures of a highly competitive global economy do not prevent us from seeking ways to counterbalance them regardless. Theorists, activists, and policy-makers argue for the regulation of societies' distribution of material resources and the elimination of poverty, for example. By analogy, it is hard to see any inherent reason why this should not also happen with regard to work and automation.
Second, actors involved in decision-making processes behind automation can be said to hold a form of responsibility despite operating within highly constraining structures. As Young (2010) argued, actors whose decisions are intertwined with structural processes hold degrees of political responsibility, depending on their level of power, privilege, and connection with these processes. Depending on the degree and nature of their involvement, they bear at least a proactive form of political responsibility-oriented not merely to blame and sanctions, but to forward-looking actions (Young, 2010). The fair hybrid cooperation account can be read as a forward-looking way to minimize the costs for workers involved in technology-driven disruptions.
Finally, it is nowadays a broadly accepted view that businesses hold some form of responsibility toward society. Against the "Friedman doctrine" that the only responsibility of business is to make profits (Friedman, 2007), we recognize that there are values that outweigh economic efficiency and that certain things ought not to be sacrificed to the goal of productivity, even when doing so is convenient from an economic standpoint. We hold businesses accountable when it comes to pollution emissions, for instance, as well as for bad working conditions. This does not mean that tensions between conflicting values and trade-offs in a highly competitive market are resolved. Yet, the fact that there are other competing values is not a good reason not to question them. Furthermore, the ideal of fair hybrid cooperation is not to be understood as an abrupt, all-or-nothing change, but rather as a gradually unfolding practice. Think of the eight-hour working day and other labor rights. They were unthinkable in the 19th century and have now become reality (at least in some parts of the world). At the time, efficiency arguments were made against them, but over time, concurring values have gradually changed the public's sensibility and threshold of moral tolerance on the topic.

| CONCLUSION
To develop an ideal of "fair hybrid cooperation," I have argued for the de-naturalization of automation and for the importance of questioning its driving motives. While economic efficiency is one of the main drivers of automation, this article has discussed fair hybrid cooperation as an alternative value to orient labor automation choices. In fact, labor automation processes are part of the basic structure of society, to which considerations of justice apply. Given its structural interdependence with technology, social cooperation may be said to be hybrid. As a process altering this relation, automation raises normative considerations. The contributive primary qualities provide a criterion to normatively assess existing organizational forms and to envisage preferable cooperative arrangements. Fair hybrid cooperation is meant to expand the normative vocabulary at our disposal and to provide practical orientation when it comes to labor automation decisions. This perspective shifts the focus of the debate from ethically desirable lifestyles in a supposedly workless future to the enabling potential of technology for fair cooperation. In its current forms, technology-driven change in work practices does not benefit everyone. Some workers are left with the crumbs of automation, perpetuating a scenario of meaningful work for the few. The fair hybrid cooperation ideal aims instead at reconciling technological change with the goal of making meaningful work available to all.