Teaming with industrial cobots: A socio‐technical perspective on safety analysis

Collaborative human–machine interaction will be progressively intensified in industrial applications. The aim of this article is to examine current approaches to cobot safety and to show that these approaches can additionally benefit from systems thinking methods. The first part of this article covers a narrative literature review of predominantly techno-centric robot safety approaches, with a strong focus on containing kinetic energy and ensuring separation from humans. The second part introduces systems thinking methods to analyze a socio-technical perspective on cobot safety, including joint cognitive systems and distributed cognition perspectives. This explorative research dimension is expected to overcome an overly narrow interpretation of safety issues, anticipating the challenges ahead in ever more complex cobot applications. This article embraces a socio-technical perspective to explore the potential of Joint Cognitive Systems to manage risk and safety in cobot applications. Three systemic safety analysis approaches are presented and tested with a demonstrator case study concerning their feasibility for cobot applications: the System-Theoretic Accident Model and Processes (STAMP); the Functional Resonance Analysis Method (FRAM); and the Event Analysis of Systemic Teamwork (EAST). These methods each provide interesting extensions that complement the traditional understanding of risk as required by current and future industrial cobot implementations. The power of systemic methods for safer and more efficient cobot operations lies in revealing the distributed and emergent results of joint actions and in overcoming the reductionist view of individual failures or single-agent responsibilities. The safe operation of cobot applications can only be achieved through the alignment of design, training, and operation of such applications.


| INTRODUCTION
Collaborative robots perform tasks in collaboration with human workers within the scope of an industrial setting (Gualtieri et al., 2021). Different definitions of collaborative robots, also called cobots, have been proposed, of which many adopt the following: any robot operating alongside humans without the presence of a fence is a collaborative robot (El Zaatari et al., 2019). Other definitions do not consider the absence of a fence but define cobots in terms of proximity or the intention to physically interact with humans in a shared workspace (El Zaatari et al., 2019; Hentout et al., 2019).
Besides the element of increased proximity, there can also be an element of increased robot autonomy (Hentout et al., 2019), although the latter by itself does not define a collaborative robot.
It should additionally be noted that the collaborative robot as such does not exist and it is actually the application that makes the robot collaborative (Malik & Bilberg, 2019a). For the remainder of the article, we will simply use the term cobots, when in reality we mean "collaborative robot applications."

| Growing complexity in collaborative robot safety
Cobot applications remain underrepresented in present-day industry relative to their growing potential in academic research (El Zaatari et al., 2019; Saenz et al., 2018). In today's industry, collaborative robots are still used relatively independently of their human colleagues (Malik & Bilberg, 2019b), despite ambitions for an increased collaborative potential concerning this technology. Before cobots were introduced, traditional industrial robots were governed by regulations that rigidly prescribed separation between robots and humans. This conflicts with the very nature of collaborative workspaces. Unger et al. (2018) report that the uncertainty surrounding safety certification reduces the economic attractiveness of collaborative solutions in comparison with traditional robots. The lack of engineering tools for safety analysis of cobot applications also causes a relatively slow uptake of this emerging technology (Saenz et al., 2018). Years after cobots were introduced, several normative standards have been updated in an attempt to fill the standardization void concerning this new technology. Yet several authors have reported that it remains unclear how to meet the standards' hazard and risk analysis requirements, as the normative standards do not prescribe specific safety assessment methods (Chemweno et al., 2020; Delang et al., 2017; Guiochet et al., 2017). The challenge is twofold: simultaneously assuring worker safety while adapting to the complexity of increasingly versatile applications.

| Degree of collaboration in current industrial applications
Collaborative robots still conservatively adhere to relatively fixed actions and motions and often remain restricted to pre-determined positions on the work floor (IFR, 2018). Reasons for using collaborative robots in industrial settings are saving floor space by giving up physical separation; allocating tasks to collaborative robots that are either ergonomically or psychologically inconvenient for humans; or increasing accuracy, speed, and repeatability beyond human capability (El Zaatari et al., 2019; Galin & Meshcheryakov, 2019). In other words, the current ambition for versatile collaboration between robots and humans remains restricted to tasks where cobots replace humans, rather than engaging in genuinely supportive collaboration between them. Academic research is already concerned with developing more mutually supportive collaborative applications, highly suited for industrial tasks. Some examples (El Zaatari et al., 2019) are tasks such as (i) co-manipulation, where a human guides an object's path while the cobot supports the weight of the object; (ii) humans inserting bolts in a plate while a cobot tightens these bolts from the opposite side of the plate; or (iii) assembly actions that are dynamically distributed between humans and cobots according to workload and energy consumption. Such intensified mutual support of tasks will require further advances in perception, human awareness, and decision-making capabilities (El Zaatari et al., 2019). Safety is considered a main challenge in much of the literature regarding cobot systems (Chemweno et al., 2020; Lasota et al., 2017; Vicentini, 2020; Villani et al., 2018; Zacharaki et al., 2020). Intensified mutual support with increased task versatility applies to several EU projects, which display clear aspirations for higher degrees of human-machine collaboration for industrial applications in the near future (cf. Table 1).
Additionally, new forms of collaboration emerge, for example through the combination of mobile bases with collaborative manipulation robots (Hentout et al., 2019; Unger et al., 2018). These combinations introduce more versatility, which confronts designers with the need to understand the joint behavior of both technologies.

| Aims of the study
We reviewed the available literature on cobot applications, showing the limited degree of truly mutual cooperation between humans and robots. The latter are frequently relegated to sequential tasks or the substitution of tasks previously performed by humans themselves. This also explains the limited scope of safety management nowadays, which is restricted to a techno-centric dimension, inherently focused on physical quantities such as speed, kinetic energy, and physical separation.
It has been acknowledged that tasks involving both humans and technical artifacts cannot be studied independently from the agents involved (Trist & Bamforth, 1951). The notion of "socio-technical systems" indicates the symbiotic relationships between social and technical counterparts. This perspective requires a systemic point of view to ensure a joint understanding, exploration, and analysis (Patriarca, Bergström, et al., 2018). A research dimension relying on systems thinking implies a focus on interconnections between components and on causal links that are distant in space and time from the actions under investigation. It is frequently sparked by research concerns in relation to context, interactions, emergence, and multiple perspectives (Engler Bridi et al., 2021; Wilson, 2014). Modern adaptive systems thus demand systemic methods to overcome the limitations imposed by the linearity and reductionism inherent in traditional approaches to safety management (Hollnagel, 2018). Systems thinking is currently dominant in many safety-critical domains such as space operations (C. W. Johnson & de Almeida, 2008), aviation (Adriaensen et al., 2019), road (Newnam et al., 2017) and rail transport (Salmon & Read, 2019), and construction (Saurin, 2016).
The explorative research question of this article has a double motivation: first, the inherent features of cobot operations recall the fundamental aspects of socio-technical systems; second, the literature documents successful contributions of systems thinking applied to safety management (Dekker, 2011). Despite the widespread range of applications of systems thinking, cobot safety is indeed still only limitedly explored from a socio-technical systemic view (Jones et al., 2018). Nonetheless, systemic risk analysis would ensure that the whole system within which risks occur is studied, rather than focusing on work and tasks in separation, or on individual agents.
In line with modern safety management, we then provide an overview of the potential usage of systemic methods for cobot safety management to extend the technocentric view toward the inclusion of interactive socio-technical and organizational contexts from multiple perspectives. Based on evidence from the literature on safety management for socio-technical systems, we finally suggest three systemic approaches, that is, the Systems-Theoretic Accident Model and Processes (STAMP) (Leveson, 2011b), the Functional Resonance Analysis Method (FRAM) (Hollnagel, 2012), and the Event Analysis of Systemic Teamwork (EAST), subsequently applied to a real cobot, used as a staging area for development. The aim of the article is twofold. First, we examine the governing safety perspective encountered in the literature on industrial cobot safety. Second, we introduce systems thinking methods to analyze a socio-technical perspective on cobot safety, including joint cognitive systems and distributed cognition perspectives. This explorative research dimension is expected to overcome an overly narrow interpretation of safety issues, anticipating the challenges ahead in ever more complex cobot applications. Here, we use the notion of a joint cognitive system (Hollnagel & Woods, 2005) to indicate the focus shift from the interactions between humans and machines toward a proper human-machine symbiosis (Tzafestas, 2006). This shift in research focus is characterized by goal orientation, control, and co-agency. In this sense, cognition needs to be studied not just as a situated or embedded entity, but in terms of how it is extended and distributed in the world (Blomberg, 2011).
The remainder of this article is organized as follows. Section 2 provides an explorative literature review about the substitution approaches of functional allocation (Section 2.1), the traditional technocentric paradigm for cobots (Section 2.2), and the socio-technical view on cobots (Section 2.3). Section 3 will introduce STAMP, FRAM, and EAST as three systemic safety analysis approaches that will subsequently be applied to a demonstration case study in Section 4, including an overview of their potential for industrial cobots' safety. Finally, a discussion and conclusion will be presented in Sections 5 and 6.

TABLE 1 EU projects with a concern for safety and an aspiration for higher degrees of human-machine collaboration

SHERLOCK: "SHERLOCK project aims to introduce the latest safe robotic technologies including high payload collaborative arms, exoskeletons and mobile manipulators in diverse production environments, enhancing them with smart mechatronics and AI based cognition"

COROMA (Cognitively enhanced robot for flexible manufacturing of metal and composite parts): "COROMA project proposes to develop a modular robotic system to perform multiple manufacturing operations, including safe human-robot collaboration, automatic manufacturing scene understanding, increased autonomy with self-learning and knowledge sharing capability"

COLLABORATE: "This project aims to equip robots with collaborative skills so that they can learn from the human and become valuable assistants for assembly operations, in an effective and safe manner"

ROSSINI (Robot Enhanced Sensing, Intelligence and Actuation to Improve Job Quality in Manufacturing): "The project aims to develop a disruptive, inherently safe hardware-software platform for the design and deployment of human-robot collaboration (HRC) applications in manufacturing"

THOMAS: "The project aims to create a dynamically reconfigurable shopfloor utilizing autonomous, mobile dual arm workers. These workers are able to perceive their environment and through reasoning, cooperate with each other and with other production resources including human operators"

SHAREWORK (Effective and safe Human-Robot Collaboration): "Europe-wide smart modular solution integrated by different software and hardware modules to allow robots to physically interact with humans within a collaborative production environment without the need for physical protection barriers"

| The substitution approaches in functional allocation
Functional allocation is an area in human factors safety concerned with deciding whether a task in a work system will be apportioned to humans, to automation, or to both. The literature mainly provides two traditional approaches to perform function allocation in automated systems. The oldest is known as the "Men-are-better-at/Machines-are-better-at" (MABA-MABA) classification scheme (Fitts, 1951), introduced in 1951. It consists of allocating tasks to either human or machine agents simply by studying their respective strengths and limitations, derived from a pre-defined inventory of capabilities (Table 2). Another traditional approach, which appeared much later, in 1978, is the Levels of Automation (LoA) approach (Endsley, 1999; Parasuraman et al., 2000; Roth et al., 2019). It introduces an objective basis for making human-automation allocation choices by assigning recommended levels of automation to technologies.
Finally, it could be argued that there is a third approach: a body of literature that dismisses the MABA-MABA and LoA approaches as oversimplifications of the problem space (de Winter & Dodou, 2014; Dekker & Woods, 2002; Jordan, 1963; Roth et al., 2019). These critics argue that both approaches treat functional allocation as a simple act of substitution, whereas what is needed is a transformation of the interdependencies through which humans and autonomous technologies interact, additionally embedded in changes of operational context. For industrial cobot applications, an agreed methodology for functional allocation is still unavailable (Delang et al., 2017), or allocation is often produced by ad hoc decisions rather than by fully informed, well-defined strategies (Lindström & Winroth, 2010).

| MABA-MABA classification scheme
Named after its inventor Paul Fitts, the MABA-MABA approach is also known as the Fitts list (Fitts, 1951). There is great merit in the Fitts list as the first systematic attempt to map the strengths and weaknesses of human versus machine capabilities (Table 2), even if critics correctly observed that the comparison remained static (de Winter & Dodou, 2014; Jordan, 1963). Empirical data about human-machine interaction in aviation, robotics, and car driving has confirmed many of Fitts's predictions (de Winter & Dodou, 2014).
But the "who does what" question does not necessarily provide a good answer to the challenge of "what needs to be done" (Roth et al., 2019). The MABA-MABA approach has been critiqued for its risk of focusing on technological capabilities and leaving the humans with the "leftover tasks" (Norman, 2015; Roth et al., 2019), and for preferring comparability of human and machine over a more goal-oriented human-machine complementarity (Jordan, 1963; T. B. Sheridan, 2000).
The MABA-MABA approach is often tacitly assumed, but can be recognized in the cobot literature: "sensitive tasks are carried out by the human, while strenuous tasks are executed automatically by a small payload robot" (Hägele et al., 2016). Other examples can be found in Hentout et al. (2019), reporting that human skills include "high availability," "handling of complex parts and processes," "high task flexibility," and so forth, whereas machines are better at "exact playback of paths," "reliable performance of repetitive tasks," and so forth. Ranz et al. (2017) shifted the focus of functional allocation in industrial settings from mere task execution capability toward efficiency indicators such as cost minimization of allocation, suitability, availability, and operation time. They used algorithms that produce capability indicators for tasks that are unique to either humans or machines, while leaving other tasks to be performed by either humans or robots.

TABLE 2 Fitts list or MABA-MABA classification scheme: "Humans appear to surpass present-day machines in respect to the following" versus "Present-day machines appear to surpass humans in respect to the following"
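To make the substitution logic concrete, an indicator-based allocation of this kind can be sketched in a few lines of Python. The task names, indicator values, and weights below are hypothetical illustrations, not data or algorithms from Ranz et al. (2017):

```python
# Illustrative sketch of capability-indicator-based task allocation.
# Suitability scores and costs per agent are invented example values.
TASKS = {
    "insert_bolts":  {"human": {"suitability": 0.9, "cost": 4.0},
                      "robot": {"suitability": 0.3, "cost": 6.0}},
    "tighten_bolts": {"human": {"suitability": 0.5, "cost": 5.0},
                      "robot": {"suitability": 0.9, "cost": 2.0}},
    "inspect_part":  {"human": {"suitability": 0.8, "cost": 3.0},
                      "robot": {"suitability": 0.8, "cost": 3.0}},
}

def allocate(tasks, w_suitability=1.0, w_cost=0.1):
    """Assign each task to the agent with the best weighted score.
    Tasks whose agent scores tie are flagged 'either' (shareable)."""
    allocation = {}
    for task, agents in tasks.items():
        scores = {a: w_suitability * v["suitability"] - w_cost * v["cost"]
                  for a, v in agents.items()}
        best = max(scores.values())
        winners = [a for a, s in scores.items() if abs(s - best) < 1e-9]
        allocation[task] = winners[0] if len(winners) == 1 else "either"
    return allocation

print(allocate(TASKS))
```

The "either" outcome is where the substitution view breaks down: such shareable tasks are exactly the ones that call for an interdependency analysis rather than a binary assignment.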

| Levels of automation
The earliest account of the LoA perspective can be found in a 1978 Naval Research Report about tele-controlled undersea operations for vessels with robotic manipulation arms, written by Sheridan and Verplank (1978). The LoA approach provides taxonomies to specify the cognitive aspects involved in automation (Roth et al., 2019) on a continuum from nonautomated to fully automated systems (Table 3). Parasuraman et al. (2000) further refined the idea that entire tasks can simply be substituted by breaking them down into four types of activity (acquisition, analysis, decision, and action selection) associated with the 10 levels of automation. The LoA approach has been adopted as the typical allocation perspective in the design of self-driving cars and unmanned aerial systems (Roth et al., 2019; SAE International, 2018), and has had a significant impact on the design of robots (M. Johnson et al., 2011). An important critique of the LoA approach is that, apart from labeling, it does not provide principles or guidelines for the designers of autonomous human-machine systems (M. Johnson et al., 2011; Norman, 2015).
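As a rough illustration of the LoA idea, the sketch below pairs Sheridan and Verplank's ten levels with the four information-processing stages of Parasuraman et al.; the level wordings are paraphrased, and the cobot profile shown is a hypothetical example, not a recommendation from the cited works:

```python
# Minimal sketch of a per-stage LoA profile: each of the four
# information-processing stages gets its own automation level (1-10).
SHERIDAN_LEVELS = {
    1: "human does everything",
    2: "computer offers a complete set of alternatives",
    3: "computer narrows the selection down to a few",
    4: "computer suggests one alternative",
    5: "computer executes the suggestion if the human approves",
    6: "computer allows a restricted veto time before execution",
    7: "computer executes, then necessarily informs the human",
    8: "computer informs the human only if asked",
    9: "computer informs the human only if it decides to",
    10: "computer decides and acts fully autonomously",
}

STAGES = ("acquisition", "analysis", "decision", "action")

def describe_profile(profile):
    """Render a per-stage LoA profile as readable text."""
    assert set(profile) == set(STAGES), "profile must cover all four stages"
    return {stage: f"L{lvl}: {SHERIDAN_LEVELS[lvl]}"
            for stage, lvl in profile.items()}

# Hypothetical bolt-tightening cobot: highly automated sensing and
# action, but decisions stay largely with the human operator.
cobot_profile = {"acquisition": 7, "analysis": 5, "decision": 3, "action": 7}
for stage, text in describe_profile(cobot_profile).items():
    print(f"{stage:12s} {text}")
```

The sketch makes the critique visible: the taxonomy labels the system but says nothing about how the human and the automation should coordinate at each stage.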
Even though the original approach was designed for the cognitive control of computerized systems, the LoA approach has meanwhile been adapted to manufacturing (Frohm et al., 2008; Lindström & Winroth, 2010) by proposing a double LoA taxonomy for both computerized and mechanized tasks. The mechanization perspective ranges from no interference, over measuring and correcting, to finally anticipating mechanical outcomes (Frohm et al., 2008). LoA can also be reported as a minimum-maximum range of tasks to be automated (Lindström & Winroth, 2010), resulting in a flexible range of LoA that reflects a potential area of automation for the manufacturer. Coactive design (M. Johnson et al., 2011, 2018) replaces such level-based allocation with an analysis of interdependencies to build a theory of joint activity. An interdependency analysis would logically be determined by a functional understanding of the work system. We indeed propose a potential way forward in this regard, through the specific approaches (STAMP, FRAM, and EAST) described in Section 3, to analyze interdependencies between human, technical, and organizational functions and controllers.

| Coactive design
Coactive design aims to advance beyond the limitations of LoA and expresses the view that the choice for automating system elements is anything but a binary choice (M. Johnson et al., 2011). It departs from the idea that complete manual or automated control does not apply to many systems. Coactive design is based on joint activity theory and considers the effects of coordination as an essential trait of nearly all activities that involve more than one agent. It reconsiders the question of allocating functions by transforming it into the question of how to support agent interdependencies.
Whereas in LoA the human is primarily considered with respect to the machine's actions, the fundamental principle of coactive design is that interdependence must shape automation.
Coactive design proposes observability, predictability, and directability as the three most fundamental interdependence relations, although others can be admitted into the analysis to balance the trade-off between automation and interaction (M. Johnson et al., 2018). A method used to achieve this is to make an interdependency analysis by modeling the human, the machine (through its algorithms and interface elements), and the work (by dividing it into tasks and further into capacities) (M. Johnson et al., 2011). Each task capacity is then compared against the human tasks or the automated alternative elements, which are assessed for observability, predictability, and directability. Because the task has been decomposed into multiple capacities, manual and automated elements can be combined into a coordinated whole. The interdependency analysis informs the engineering design as an approach that resists the substitution fallacy.
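A minimal version of such an interdependency table can be sketched as follows; the task capacities, performers, and observability/predictability/directability (O/P/D) ratings are hypothetical, and real analyses use richer, graded ratings rather than booleans:

```python
# Sketch of a coactive-design interdependency table: each required
# capacity of a task is assessed for observability (O), predictability
# (P), and directability (D) between the human and the machine.
INTERDEPENDENCY_TABLE = {
    # performer = agent currently executing the capacity; O/P/D = can the
    # OTHER agent observe, predict, and direct that performance?
    "locate_bolt":  {"performer": "human", "O": True,  "P": True,  "D": True},
    "align_tool":   {"performer": "robot", "O": True,  "P": False, "D": True},
    "apply_torque": {"performer": "robot", "O": False, "P": True,  "D": False},
}

def weak_interdependencies(table):
    """Return capacities whose O/P/D support is incomplete; these are
    candidate points where teaming support (not reallocation) is the
    design question."""
    return sorted(cap for cap, row in table.items()
                  if not (row["O"] and row["P"] and row["D"]))

print(weak_interdependencies(INTERDEPENDENCY_TABLE))
```

The output lists the capacities where the joint activity is fragile, which is exactly the information a substitution-based allocation scheme never produces.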

| Transformed work and interaction-based considerations
Automation produces qualitative shifts in work systems, which will force people to adapt their previous practices in novel ways (Dekker & Woods, 2002). The safety literature describes these substantial effects of automation as "transformed work practice" (Bradshaw et al., 2013). In human-computer interaction Carroll and Long (1991)
In the literature reviews, there is a strong focus on hardware-related safeguards and generic collision avoidance strategies to prevent unsafe human-robot interaction. By contrast, hazards embedded in the broader work system or hazards generated by added decisional complexity receive little attention (Chemweno et al., 2020; Guiochet et al., 2017). Psychological and societal impacts have received some attention (Galin & Meshcheryakov, 2019; Lasota et al., 2017; Zacharaki et al., 2020), but mainly concern postimplementation influences on work quality. The prediction and cognitive aspects of human or cobot are almost exclusively used to the benefit of predicting motion and avoiding collisions (Gualtieri et al., 2021; Hentout et al., 2019; Lasota et al., 2017; Vicentini, 2020), while there is a lack of attention in the literature to dependability, task design, context, and environment; Chemweno et al. (2020) and Guiochet et al. (2017) are notable exceptions. Most safety methods focus on the assessment of collision risks, often assumed to be known a priori by manufacturers, integrators, and users (Chemweno et al., 2020). This critique is acknowledged by the review data from Gualtieri et al. (2021) on HRI in industrial collaborative robotics, covering both intrinsically safe design and active strategies. We consequently propose Figure 1 to structure the current research focus in the collaborative safety literature, starting with reactive approaches to the far left and gradually moving toward increased system anticipation with progressively more complex safety behaviors to the far right. More complex and anticipative safety behaviors come at the cost of increasingly complex implementation (Lasota et al., 2017).

FIGURE 1 Collaborative robot application safety methods in order of progressively more complex safety behaviors, ordered from reactive to proactive approaches (pre-collision, post-collision, motion planning, prediction, intrinsically safe design)
Figure 1 is based on the granularity used by Lasota et al. (2017) and the comprehensiveness available in Hentout et al. (2019).
Intrinsically safe design can be achieved by reducing the kinetic energy of the moving parts, increasing the energy-absorbing properties of protective layers, installing airbags and soft rounded covers around potential contact surfaces, or limiting the robot's velocity or maximum system energy (ISO, 2016; Hentout et al., 2019). Some reviews exclusively focus on inherently safe design through compliant actuators.
Such actuators provide varying stiffness, gear ratios, and damping properties, reviewed in Ham et al. (2009) and Wolf et al. (2016).
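To illustrate the kind of velocity/energy-limiting calculation this involves, the sketch below implements the reduced-mass contact relation used in power- and force-limiting approaches such as ISO/TS 15066. All numeric values are illustrative examples only; a real assessment must use the normative body-region tables and the actual robot's effective mass:

```python
# Illustrative body-region speed limit in the spirit of power- and
# force-limiting: the transferable energy at contact is bounded by a
# force limit and a spring constant for the body region struck.
import math

def max_relative_speed(f_max, k, m_human, m_robot):
    """v_max = F_max / sqrt(mu * k), with mu the reduced mass of the
    two-body human-robot contact model."""
    mu = 1.0 / (1.0 / m_human + 1.0 / m_robot)  # reduced mass [kg]
    return f_max / math.sqrt(mu * k)

# Example numbers (illustrative): force limit 140 N, spring constant
# 75,000 N/m, effective human mass 0.6 kg, robot moving mass 12 kg.
v = max_relative_speed(f_max=140.0, k=75_000.0, m_human=0.6, m_robot=12.0)
print(f"max relative speed = {v:.2f} m/s")  # prints 0.68 m/s
```

The point of the sketch is the structure of the reasoning: the permissible speed is not a property of the robot alone but of the human-robot contact pair, which already hints at the joint-system perspective developed later in this article.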
Compliant actuators can also be actively controlled by software, as additionally reviewed by Grioli et al. (2015) and Vanderborght et al. (2013). For safety in industrial cobot applications, the nature of the collaboration intent is usually taken as the a priori point of departure (Vicentini, 2020):

• Coexistence: humans and robots share the dynamic workspace while operating on dissimilar tasks. This is generally linked to collision avoidance strategies (Hentout et al., 2019), at the left side of Figure 1. The majority of industrial tasks are to be found in this category (IFR, 2018; Malik & Bilberg, 2019b).
• Cooperation: humans and robots work toward the same purpose in the same workspace simultaneously. Cooperative tasks require force-feedback sensing and advanced collision detection and avoidance sensing (Hentout et al., 2019).
• Hand guiding: the robot is allowed to work in a noncollaborative mode without the presence of an operator. After the robot has achieved a safety-rated monitored stop, the operator is allowed to enter the workspace and control the robot through a hand-guiding device to lead the robot to a specific point of application. This is linked to a limited set of cooperative and collaborative tasks suited for hand guiding.

It is questionable, however, to base normative safety requirements on a nonnormative taxonomy.
What is currently missing in the academic literature are safety analysis methods that extend to answer such questions from a socio-technical perspective. In Sections 3 and 4 we propose three systemic methods that provide nonreductionistic, nontaxonomic, and nonnormative analysis perspectives. To objectively assess and challenge design choices of intentional or unintentional contact, methods should also take into account unexpected behavior from degraded systems and reverberations from cobot integration into the context of a working system. Such a perspective is offered by the Joint Cognitive Systems (JCS) paradigm, which takes coagency as the basic unit of analysis, in which human and machine need to be considered together (Woods & Hollnagel, 2006), as opposed to the classical perspective of understanding humans and technology in isolation, connected through interfaces. The nature of collaborative work, where both human and machine engage in joint behavior through a shared mental image, motivates this study to take an agent-neutral perspective in terms of pure functional exchanges of task-relevant information, made possible with the methods from Section 3.

| Socio-technical view on cobots
The JCS paradigm belongs to the discipline of Cognitive Systems Engineering (CSE), which is concerned with "the analysis and design of factors, processes, and relationships that emerge at the intersections of people, technology and work" (Woods & Hollnagel, 2006). CSE recognizes that mental models are not the only basis for understanding cognition (Cognition in the Mind), and thus the understanding of safe designs is not restricted to controlled experimental conditions. CSE indeed studies the actual features of a work domain close to the operating conditions in the field, embedded in actual fields of practice (Hollnagel & Woods, 2005), also known as Cognition in the Wild (Hutchins, 1995). In this view, the central role of the human operator as the problem holder receives a different emphasis: human-machine interaction deficiencies cannot be understood as deficiencies in an absolute sense but depend on the system characteristics "because of the way that they shape practitioner cognition and collaboration in their field of activity" (Woods et al., 2017, p. 152). Therefore, cognition is said to be "situated." Applied to the example of collaborative robots, risk cannot be understood solely from the techno-centric perspective of mere energy containment in terms of managing speed, force, and separation.
A JCS perspective further extends that situated interaction with the world, which inevitably involves interactions with other agents and dynamic contexts, and it forces the analysis to include the wider system across which joint activity is distributed. Although cobots as an applied technology only started to emerge around 2008 (Hentout et al., 2019), scholars from other domains had previously studied human-technical joint performance, embedded in purposeful socio-technical systems (Le Coze, 2013; Leveson, 2011b; Rasmussen, 1997; Waterson et al., 2015). Much of JCS research has been concerned with the identification of recurring patterns (Woods & Hollnagel, 2006; Woods, 2002) in automation-induced problems, often in contrast with the putative benefits that designers proposed before implementation.
The literature draws from experience that machines do not always act as team players (Bradshaw et al., 2013; Hollnagel & Woods, 2005; Klein et al., 2004; Norros & Salo, 2009), for example by doing things that humans do not anticipate or understand. Automation surprises occur when the actual system behavior is not in line with the user's expectations (Hoffman & Militello, 2008). Such surprises generally emerge because of a divergence of mental models and low system observability or feedback failures, especially when managing dynamic and nonroutine operations (Hoffman & Militello, 2008). It has been demonstrated that although high levels of automation enhance routine performance, performance under system failure is negatively affected by higher automation levels (Onnasch et al., 2014). Managing systems under nonroutine operations or demanding circumstances is a field of inquiry that has so far received little attention in the cobot literature and requires more research effort (Guiochet et al., 2017).
Mutual prediction of both human and robot behavior will play an increasingly important role in safe collaboration tasks and is a frequently researched topic in academic research (Gualtieri et al., 2021; Hentout et al., 2019; Lasota et al., 2017). Whereas the prediction of motion paths and imminent collision has received considerable attention in the literature (see Section 2.2), such prediction additionally depends on the operator's mode awareness. Mode error occurs when the operator misinterprets the different meanings of automation functions resulting from multiple device mode settings. Mode awareness has received little coverage in cobot applications but was recently applied to cobot case studies by Gopinath and Johansen (2019).
To the best of our knowledge, Chacón et al. (2020). Essentially, demands for the management of automation create the fundamental questions for socio-technical safety analysis: "What does it mean to be in control in a Joint Cognitive System?" and "How is control distributed across such systems?" In light of the JCS principles derived from the literature, this generates sub-questions that should take into account how control is embedded in the situated cognition of the work system as a whole, and how control is affected by disruptions and nonroutine situations. In the next section, we present three socio-technical safety analysis approaches that provide different ways to answer these research questions.

A further candidate was the Human Factors Analysis and Classification System (HFACS), which was not used in this study as it is not considered to align with systems theory. We have also disqualified Accimap, as it is a retrospective method only, and because its principles, based on the hierarchical control-based model of its originator Rasmussen, have to a great extent been encompassed by STAMP (Leveson, 2011a). The selected systemic approaches have previously demonstrated their usefulness in several other socio-technical systems: evidence is available from recent literature on FRAM (Salehi et al., 2021), from several recent cases in various safety and ergonomics domains applying STAMP and its associated techniques (Patriarca et al., 2019; Stanton et al., 2019), and from applications of EAST. FRAM and STAMP are in essence qualitative safety analysis approaches, although for FRAM some quantitative extensions (Patriarca, Falegnami, et al., 2018), including the application of fuzzy logic (Hirose & Sawaragi, 2020; Slim & Nadeau, 2020), have been described. STAMP has been extended with system dynamics (Bugalia et al., 2020; Kontogiannis & Malakis, 2012) and model checking tools (Han et al., 2019; Yang et al., 2019). EAST already has a quantitative element built into its framework in the form of network metrics.
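As a minimal illustration of EAST's built-in quantitative element, the sketch below computes two common network metrics over a hypothetical three-agent communication network; the agents and links are invented for illustration:

```python
# Toy EAST-style network metrics over a small directed network of
# coordination links between agents in a cobot cell (hypothetical).
EDGES = {
    ("operator", "cobot"),
    ("cobot", "operator"),
    ("operator", "supervisor"),
    ("supervisor", "operator"),
}
NODES = {"operator", "cobot", "supervisor"}

def density(nodes, edges):
    """Directed network density: observed links / possible links."""
    n = len(nodes)
    return len(edges) / (n * (n - 1))

def emission_degree(node, edges):
    """Out-degree: how many links this agent sends information over."""
    return sum(1 for src, _ in edges if src == node)

print(density(NODES, EDGES))            # 4 links out of 6 possible
print(emission_degree("operator", EDGES))
```

Metrics like density and emission degree make the distribution of information and control across the joint system explicit, rather than attributing it to any single agent.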

| ABOUT SYSTEMIC SAFETY ANALYSIS
STAMP (Leveson, 2011b; Leveson & Thomas, 2018) is an accident causality model in which a system is regarded as a dynamic process made up of interrelated components, kept in states of safe equilibrium by control loops. Whereas in many traditional causation models the most basic element is an event, STAMP uses constraints applied to different levels of control in a process model as the basis for analysis (Leveson, 2011b).

FRAM (Hollnagel, 2012), in contrast, models a system as a set of interdependent functions. The potential for variability in the system is assessed by both endogenous and exogenous couplings and their upstream or downstream reverberations relative to a specific function. This potential is called performance variability. Unlike in many traditional safety methods, performance variability is not per se regarded as negative but is a necessary system property to achieve work in light of trade-offs, finite resources, and time constraints. The performance variability of the model and its emergent behavior, as a result of upstream-downstream couplings, is called functional resonance. To manage variability, positive resonance should be amplified, while negative resonance should be dampened. This is achieved, for example, by inserting barriers, closing feedback loops, rearranging the order of functions, assigning roles to other agents, creating redundancies, or reorganizing the work system.
The methodological steps that are required in a FRAM analysis are as follows: (i) identification of functions; (ii) identification of variability; (iii) aggregation of variability; and (iv) assessing the consequences of the analysis or the management of the system's performance variability.
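The function-and-aspect structure underlying steps (i)-(iii) can be sketched as a minimal data model. The six aspect names follow FRAM; the function names and states below are hypothetical illustrations for the drilling scenario, and the code is not part of any FRAM tooling.

```python
from dataclasses import dataclass, field

# The six FRAM aspects through which functions couple.
ASPECTS = ("input", "output", "precondition", "resource", "control", "time")

@dataclass
class Function:
    name: str
    # Each aspect maps to a set of named states this function exposes or consumes.
    aspects: dict = field(default_factory=lambda: {a: set() for a in ASPECTS})

def couplings(upstream: Function, downstream: Function):
    """Potential couplings: upstream outputs that feed any downstream aspect."""
    found = []
    for aspect in ("input", "precondition", "resource", "control", "time"):
        shared = upstream.aspects["output"] & downstream.aspects[aspect]
        for state in shared:
            found.append((upstream.name, state, aspect, downstream.name))
    return found

# Hypothetical functions from the drilling scenario.
sense = Function("Sense obstacle")
sense.aspects["output"].add("obstacle detected")
lock = Function("Lock AGV")
lock.aspects["input"].add("obstacle detected")

print(couplings(sense, lock))
# One potential coupling: the obstacle-detection output feeds the AGV lock function.
```

Aggregating such couplings over all functions (step iii) yields the set of potential dependencies that a FRAM instantiation then evaluates for a particular scenario.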
It is important to understand that the resulting FRAM model depicts the potential couplings in a representation of work as normally performed; it is not possible to determine from the model alone whether a function will always be performed in relation to other functions. Instead, an instantiation of a FRAM model represents the actual couplings or dependencies that have occurred or might occur under favorable or unfavorable conditions in a particular scenario. The focus of the FRAM is on the interplay of the dependencies. Therefore, the question of what it means to be in control will depend in the first place on how that control is distributed over the system.

| EAST
A comprehensive and recent overview of the different domain applications of EAST, with several methodological variations, is available in the literature. EAST considers the overall system as the unit of analysis, studying the interactions between humans and between humans and artifacts within the system itself. EAST is best described as a framework: it combines several tools and methods that are specific to EAST but derives its data from techniques that exist independently of EAST.
At the core of the overall approach, EAST describes, analyses, and integrates activity by a multiple network representation: task, social, and information networks are first developed individually and are subsequently evaluated in an integrated network of networks, supported by additional representations such as an operation sequence diagram. A shortened form of EAST has been proposed to derive task, social, and information networks directly from the raw data.
EAST outputs can be analyzed either qualitatively or quantitatively. The latter is achieved by applying network analysis metrics, whereas the qualitative data can be derived from network representations and additional supporting representational diagrams as described above. By assessing a distributed inter-agent representation of information and tasks, a JCS analysis is developed.
The outcome of the analysis typically consists of a graphical presentation of distinct information, task, and social networks. This is followed by an integrated network combination of the individual networks, and finally, an interpretation of the metrics analysis that emerges from these networks. We refer to Table 4 for an overview of the Social Network Analysis metrics. The recently proposed broken-links extension still involves the modeling of task, social, and information networks, but subsequently enables EAST networks to be used for predictive risk assessment by examining the effects of "alternative circuits," "short circuits," "long circuits," and "no circuits."

| EXEMPLAR CASE STUDY
We have based our demonstration case study on a real cobot application consisting of an already existing manipulating arm and gripper for heavy loads (David et al., 2014), newly mounted on an AGV-type mobile base. We created virtual data in an imaginary scenario for the joint behavior of manipulation and mobility, based on generic capabilities derived from a press release and accompanying material. We isolated a single scenario related to the cobot's drilling function, taking into account that this function requires a coordination challenge between the operator, the cobot's manipulating arm, and the mobile platform. Analyses of such systems applied to real-world examples would inevitably need to be extended to take into account the effects of multiple cobots and operators in a single workspace.

TABLE 4 Social Network Analysis metrics

Emission degree: The number of ties emanating from each agent in the network.
Reception degree: The number of ties going to each agent in the network.
Eccentricity: The largest number of hops an agent has to make to get from one side of the network to another.
Sociometric status: The number of communications received and emitted by each agent, relative to the number of nodes in the network.
Agent centrality: Calculated to determine the central or key agent(s) within the network. A number of different centrality calculations can be made; for example, agent centrality can be calculated using the Bavelas-Leavitt index.
Closeness: The inverse of the sum of the shortest distances between each individual and every other person in the network. It reflects the ability to access information through the "grapevine" of network members.
Farness: An index of centrality for each node in the network, computed as the sum of the shortest-path distances from that node to all other nodes.
Betweenness: The presence of an agent between two other agents, which may be able to exert power through its role as an information broker.
Eigenvector: Identifies those nodes connected to important nodes, which may provide a discreet intervention target.

In the latter case, the cobot drives parallel to a structure or object from one hole to the next hole but does not drive or steer toward the object. Otherwise, this would balance out the direction and forces of the drill action, and additionally the cobot-operator separation could be violated. This is a serious hazard, as the operator is positioned with his/her back to the cobot holding the drill hanging on the swing-arm.

| STAMP application
We first applied STAMP as an example to answer the questions of how control is distributed and maintained in the demonstrated cobot JCS. We generated two hierarchical control structures (HCS; Figures 3 and 4) with the controllers and their control action-feedback loops at different granularities. Following STAMP theory, inadequate control may result from missing constraints, inadequate safety controls, missing lower-level commands, or inadequate feedback to enforce constraints (Leveson, 2011b). Although a systematic and comprehensive examination of control requires subsequently performing the steps provided by STPA as a complement to a STAMP analysis, the HCS developed here summarizes the system's architecture, which provides the basis for the examination of the control's dependencies.
The mode controller accepts multiple inputs (MC1, MC2, MC3). In this particular case, mode priority is initiated by a socio-technical context, because the drilling is one specific task in the organization of the work system. The mobile base navigation behavior subsequently results from a particular combination of mode selector, enabling device, and drill extension activity. It is essential that the operator is aware of why the system behaves as it does, described earlier as mode awareness. See Adriaensen et al. (2021) for an STPA application of the scenario presented in this article. Even without performing STPA, the HCS provides a means to identify several instances of inadequate control and inadequate feedback. Similar systems thinking requirements and interactions can be identified by extending the scope and adding supplementary inquiries. <Task assignment> is, therefore, an example of a contextual factor related to the situated cognition of a work-specific system, whereby drilling, for example, creates dust which can affect the safety sensor(s), a condition not encountered in the cobot task modes A and B in Figure 3.
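The mode-dependent navigation behavior described above can be sketched as a small arbitration function. All mode names, priority rules, and the function signature below are illustrative assumptions for this demonstration scenario, not the actual controller logic.

```python
# Hypothetical mode-arbitration sketch for the mobile base controller.
def agv_navigation_mode(mode_selector: str, enabling_device: bool, drill_armed: bool) -> str:
    """Resolve the mobile base behavior from a combination of inputs (cf. MC1-MC3)."""
    if drill_armed:
        # Drilling takes priority: navigation restricted to parallel moves.
        return "drill-restricted navigation"
    if mode_selector == "collaboration" and enabling_device:
        return "autonomous navigation"
    return "navigation locked"

# The same selector position yields different behavior depending on the drill
# state -- precisely the ambiguity that makes mode awareness essential.
print(agv_navigation_mode("collaboration", True, False))  # autonomous navigation
print(agv_navigation_mode("collaboration", True, True))   # drill-restricted navigation
```

The point of the sketch is that the operator cannot infer system behavior from the mode selector alone: the resulting mode emerges from the combination of inputs.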

| FRAM application
The FRAM model was made with myFRAM, an open add-on for Microsoft Excel. Different mode management conditions from Table 5 (not highlighted in Figure 6) are depicted by differences in the aspects that arrive in <Activate autonomous AGV navigation>, <Activate drill restricted AGV navigation>, and <De-activate AGV navigation>, such as the pre-condition |enabling device activated|, |enabling device de-activated|, |drill armed/activated|, |collaboration mode on|, or |collaboration mode off|. These aspects are generally not restricted to mode management but are interrelated with several other functions that use them as a resource, control, or precondition.
Further examples of contextual negative functional resonance can arise when, for example, an <Obstacle emerges (human or artifact)> (Figure 5) as the consequence of a falling object or an object overlooked in a previous work task. Alternatively, the previously mentioned negative propagation of drill dust on the safety sensor (Section 4.1) can be traced downstream of the <Sense obstacle> function. In terms of functional propagation, the output of this function is connected to the input of <Lock AGV>, which is identical to the stop authority feature in the HCS representation from the STAMP perspective. Needless to say, other socio-technical system variabilities are contained in this data and deserve to be explored in a full-fledged analysis.

| EAST application
Even if EAST is best suited to analyze multiagent networks with the simultaneous engagement of multiple human operators and cobots, the EAST scenario used in this study remains restricted to the joint behavior of an individual operator's inputs and a mobile platform with an integrated manipulator (cf. 4.1 and 4.2) and drilling extension. This restriction of scope enables the comparison of the three systemic methods through a similar restricted case study. The reader should keep in mind that a full analysis of real-world variables with multiple agents would yield different results from those represented in this article and would even influence the centrality and distance measures that resulted from this restricted case study. We also want to emphasize that we applied this shortened data collection process for demonstration of the method only. A full-fledged EAST analysis requires researchers to triangulate observational data with information collected from subject matter experts, for example, by applying the Critical Decision Method.
In practical terms, we generated the information, task, and social network data with KUMU, an online network analysis tool with built-in SNA capabilities and versatile graphical network options (KUMU, 2021). We started by building an information network, which is represented by the circled network elements in the integrated network representation in Figure 7. We applied the JCS approach by combining both human and technical agents in a nonhierarchical perspective.
Each element produces an information token that is connected to another element. Links between elements, also called nodes, can be uni-directional or bidirectional depending on the way information is emitted to or received from neighboring elements. The two "system behavior" functions in the middle of Figure 7 produce the observable behavior by the cobot navigation and the cobot manipulator.
Together they produce the salient cues for the human operator in terms of expected or unexpected cobot movements in space and time. Two other networks are superimposed on the information network. First, the task network can be interpreted from the boxed labels, which also correspond to the colors of the circled elements.
Information elements that belong to the same task are grouped together in clusters. The reason for the multicolor taxonomy for the two elements that concern "system behavior" is that they emerge from multiple task clusters. Second, the social network coding can be derived from the additional color-coded elements attached to the information elements (cf. legend social network in Figure 7). Some information elements involve multiple agents. One example is where increased human-cobot separation is produced as the dynamic outcome of both the human agent's and the cobot navigation's reactions to physical separation. Figure 8 presents another perspective on coding by agents: a graphical representation of which agents are conjointly involved in the execution of a specific task.
Additionally, Figure 8 graphically represents some of the SNA metrics produced by the task network. Ranking numbers and values have been assigned to the tasks, with the information network links taken into account for the calculation of the metrics. Table 6 presents the results. The list of SNA metrics is not comprehensive; we have instead concentrated on those metrics that are useful in the context of this case study to meet our demonstration purposes. First of all, we did not include metrics that concern the whole network, such as size, density, or cohesion, as these metrics are calculated with respect to the total number of elements. They would therefore not produce meaningful results in a restricted demonstration case study with a limited scope. For similar reasons, an individual node metric like sociometric status has not been included, because it also relies on the total number of nodes in the network. Closeness, as the inverse of the sum of the shortest distances, tells something about control of access to information through the network and its members. It is an important measure of how well an element is indirectly connected to others.
With "mode management" at the highest-ranked position, this task plays a critical role through its relation to many other tasks, showing similar undesirable "mode management" consequences as those explained for emission. Farness, not explicitly added in Table 6, is the mathematical reciprocal of closeness (Bavelas, 1950). Table 6 also applies to eigenvector and betweenness, selected as two metrics that are less intuitive to interpret. Whereas eigenvector is an index measure of the influence of a node in terms of being connected to other well-connected nodes (Falegnami et al., 2020), betweenness (cf. 3.3) provides a measure of the number of times an element stands on the shortest path between two other elements, which can also indicate a potential for failure (KUMU, 2021). "System behavior" shows low centrality in terms of betweenness but has a high eigenvector value. "Mode management" in our case study shows the opposite result. The fact that "system behavior" scores high on eigenvector can be explained by the fact that the observable cobot behavior emerges as the product of all tasks. Manipulator-related tasks also score high because these too are connected to other well-connected nodes. The "mode management" task, which is a shared responsibility between the human operator, the cobot navigation, and the cobot manipulator agent, is central in terms of betweenness because it is involved in many short element paths. Correct or incorrect "mode management" will immediately affect all neighboring functions for both navigation and manipulator handling as a direct consequence of the system layout, in which "mode management" plays a fundamental role.

FIGURE 7 EAST integrated information, task, and social network model. EAST, Event Analysis of Systemic Teamwork.
Hence, the graphical support of differently sized elements in terms of specific metric values helps to understand differences in centrality interpretations for metrics such as eigenvector and betweenness.
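The contrast between the two metrics can be reproduced on a toy network: a bridging node scores highest on betweenness, while hub nodes inside dense clusters score highest on eigenvector. The node names below are illustrative stand-ins for the case study's tasks, and the graph is invented for this sketch (using the third-party networkx library).

```python
import networkx as nx

# Toy undirected network: "system behavior" is a hub inside a dense cluster,
# while "mode management" bridges the navigation and manipulator clusters.
G = nx.Graph([
    ("system behavior", "nav task 1"), ("system behavior", "nav task 2"),
    ("nav task 1", "nav task 2"),
    ("system behavior", "mode management"),
    ("mode management", "manip hub"),
    ("manip hub", "manip task 1"), ("manip hub", "manip task 2"),
    ("manip task 1", "manip task 2"),
])

bet = nx.betweenness_centrality(G)
eig = nx.eigenvector_centrality(G, max_iter=1000)

# The bridge dominates betweenness; the cluster hub dominates eigenvector.
assert bet["mode management"] > bet["system behavior"]
assert eig["system behavior"] > eig["mode management"]
```

Every shortest path between the two clusters passes through the bridge, which drives its betweenness up, while the hub inherits eigenvector weight from its well-connected neighbors.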
The coding of the social agents in Figures 7 and 8 shows how functions are distributed across agents. In all three systemic safety methods, "mode management," for example, was considered as an array of functions distributed throughout, or connected to, human and technical agents.
Each method highlighted the importance and centrality of mode management for efficient system performance, to emphasize but one example, and there is merit in using a combination of methods to understand the complexity and diversity of sociotechnical systems (Salmon & Read, 2019).
A summary of how the different methods respond to the research questions "What does it mean to be in control in a Joint Cognitive System?" and "How is control distributed across such systems?" is provided in Table 7 and is based on the method properties described above. In STAMP, dependencies are expressed through the control action-feedback loops of the hierarchical control structure (Figures 3 and 4). Contrarily, in FRAM dependencies consist of couplings connecting the six potential aspects of functions (Input, Output, Precondition, Resource, Control, Time), and it is also possible to assign phenotype values to the aspects, which permits attaching a qualitative evaluation to dependencies (e.g., timing, precision, accuracy, etc.). FRAM is thereby the approach that provides more rigidity in defining how dependencies (called couplings) influence system behavior. In FRAM, a precondition, for example, needs to be satisfied before the next action can start, whereas a resource defines a coupling that is consumed by the next function. EAST, on the other hand, does not distinguish between aspect types but essentially differentiates between information, task, and agent networks.
EAST provides an alternative perspective to assess the quality of dependencies (sometimes called ties or edges in EAST, but in most studies simply named relationships) using SNA metrics to assess the centrality, position, or efficiency of a node. This quantitative assessment of the network and its nodes is not offered by the other two approaches. The EAST framework recently introduced the EAST broken-links approach for predictive risk assessment (Lane et al., 2019; Stanton & Harvey, 2017). In the broken-links extension, EAST assesses series of dependencies by evaluating them in the context of "alternative circuits," "short circuits," "long circuits," and "no circuits," introducing an additional qualitative propagation potential. FRAM, on the other hand, verifies to what extent a series of dependencies produces positive or negative resonances with other couplings upstream or downstream of the functions under investigation, with a strong emphasis on the nonlinear propagation potential.
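In the spirit of the broken-links idea, a "no circuit" can be approximated as an edge removed from an information network, after which reachability is re-checked. This is a strong simplification of the actual method, and the network below is hypothetical (using the third-party networkx library).

```python
import networkx as nx

# Hypothetical information network for the drilling scenario.
info = nx.DiGraph([
    ("operator", "enabling device"), ("enabling device", "AGV controller"),
    ("safety sensor", "AGV controller"), ("AGV controller", "AGV motion"),
])

broken = info.copy()
# Simulate a broken link, e.g., drill dust degrading the safety sensor output.
broken.remove_edge("safety sensor", "AGV controller")

# Does obstacle information still reach the moving platform?
print(nx.has_path(broken, "safety sensor", "AGV motion"))  # False
```

A systematic broken-links analysis would iterate such removals over every link and assess the operational consequence of each lost information path.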
Which method is better ultimately depends on the research subject.
EAST can be especially useful in smart factories which contain data information networks in combination with networked technologies.
FRAM on the other hand has the advantage of being a method-sine-model (Hollnagel, 2012), which makes it highly adaptable to different contexts.
Concerning the focus of investigation and outcome, one important difference between STAMP and the other two approaches is STAMP's focus on negative outcomes and countermeasures. Contrarily, FRAM has been described as applying a more descriptive resilience engineering perspective, and EAST has similarly been described as taking advantage of a "non-reductionistic, non-taxonomic method for analyzing non-normative behavior of systems" (Stanton & Bessell, 2014). Improving system safety through the FRAM and EAST approaches depends in great part on gaining a better understanding of distributed cognition and control and exposing the implicit functioning of the system. Safety mitigation is not strictly instructed by the FRAM and EAST methodologies. Contrarily, STAMP provides a top-down model and takes a systems-engineering approach with predefined risk mitigation steps incorporated in its methodology. All approaches acknowledge the role of normal performance in accident causation (Hollnagel, 2012; Leveson, 2011b). FRAM and EAST therefore tend to be more suitable for describing operational systems, including emergent relationships initially not foreseen in the design, whereas the hierarchical control structure approach of STAMP can be preferable for engineering approaches, especially in early design phases, where design is based on the logic of controllers and anticipated contextual parameters. STAMP as a causality model can be complemented with STPA as a hazard analysis extension. STPA results in system control constraints derived from the HCS through the identification of control actions, unsafe control actions, loss scenarios, and contextual parameters. See Adriaensen et al. (2021) for an extended STPA analysis related to the STAMP analysis in this publication, in which the system control constraints for the AGV controller were systematically studied.
By applying STPA, we widened the scope to predictive risk analysis. Likewise, we recommend future research to examine cobot applications through the EAST broken-links approach as a predictive risk analysis extension.
In essence, several methods can mutually support the understanding of the system or the scenarios and instantiations under investigation. The functional distribution from both the EAST and FRAM approaches can subsequently be verified and contrasted with the control structure of the STAMP representation. Additionally, the focus of the three systemic methods differs. STAMP delivers a stepwise approach to derive the HCS from losses, hazards, and system control constraints, which provides an opportunity to demonstrate compatibility with more traditional safety analysis requirements. FRAM has strong theoretical underpinnings that do not prescribe specific data collection methods but require the researcher to represent strong ontological models of the work systems under consideration. In comparison to the other approaches, EAST has a greater focus on a comprehensive data collection framework, which increases the scientific reliability of the resulting models.
A limitation of this article is that an actual case study requires interviews and observational data to build more accurate models with support from subject matter experts. The robust initial data gathering methods from EAST can yield data that can in turn be re-utilized in any of the other remaining systemic methods. Future research could provide full-fledged FRAM and EAST analyses of multiagent networks.
Another limitation is that we used the methods for a limited demonstration case and did not provide a systematic analysis of all data that could be gained from this case study. Future research could also investigate new configurations of the various method strengths to be used in combination. From several possible configurations, at least the combination of EAST and STAMP has been described, as well as a combination of network metrics and FRAM (Falegnami et al., 2020). We would also like to point out that we applied only a selection of methods for this study; ideally, future research would draw a full comparison of the strengths and weaknesses of several other available systemic methods, such as CWA (Naikar, 2017), system dynamics (Ibrahim Shire et al., 2018), or Net-HARMS, to name just a few. During the writing of this manuscript, one publication in particular deserves attention, because it compares the reliability and validity of STPA and the EAST broken-links approach, with the recommendation to further test extensions to enhance the reliability and validity of these methods in the future.

| CONCLUSION
The literature review on collaborative robots presented in this publication revealed a great emphasis on a techno-centric perspective, whereby risk was narrowly defined in terms of uncontained energy, with a typical focus on safety mitigation in terms of speed, kinetic energy, and separation. The contribution of this article is, first, to draw attention to a paradigm shift from a merely techno-centric perspective toward a socio-technical safety perspective, and second, to provide and demonstrate the feasibility of different systemic safety analysis methods to complement the traditional energy-barrier perspective for cobot safety analysis. Collaborative robot applications purposefully use the principle of distributed cognition to the advantage of a joint action that is stronger than the sum of its parts, which additionally motivated us to examine the problem domain from a JCS perspective. The findings from such an approach can support a systemic human factors design perspective and provide insights about implicit cues and effects that can be important for training purposes. We believe this is the first study to explore the joint possibilities of the three systemic approaches STAMP, FRAM, and EAST or to highlight their specific benefits.
The controller-constraint view from STAMP, the network analyses from EAST, and the variability-resonance perspective from FRAM provide complementary lenses to analyze collaborative work in human-machine collectives. Regardless of the specific approach to be applied, with its respective pros and cons, we believe that a socio-technical research perspective is required to deal with issues related to modern and future cobot systems.