Evaluating the complexity of engineered systems: A framework informed by a user case study

Evaluating the complexity of an engineered system is challenging for any organization, even more so when operating in a System‐of‐Systems (SoS) context. Here, we analyze one particular decision support tool as an illustrative case study. This tool has been used for several years by Thales Group to evaluate system complexity across a variety of industrial engineering projects. The case study is informed by analysis of semistructured interviews with systems engineering experts within the Thales Group. This analysis reveals a number of positive and negative aspects of (i) the tool itself and (ii) the way in which the tool is embedded operationally within the wider organization. While the first set of issues may be solved by making improvements to the tool itself, informed by further comparative analysis and a growing literature on complexity evaluation, the second "embedding challenge" is distinct, seemingly receiving less attention in the literature. In this paper, we focus on addressing this embedding challenge by introducing a complexity evaluation framework, designed according to a set of principles derived from the case study analysis; namely, that any effective complexity evaluation activity should feature collaborative effort toward building an evaluation informed by a shared understanding of contextually relevant complexity factors, iterative (re‐)evaluation over the course of a project, and progressive refinement of the complexity evaluation tools and processes themselves through linking project evaluations to project outcomes via a wider organizational learning cycle. The paper concludes by considering next steps, including the challenge of assuring that such a framework is being implemented effectively.

In System-of-Systems (SoS) engineering, organizations face a compounding challenge of engineering systems that operate in conjunction with other diverse systems, often with some level of autonomy and emergent capabilities. [10][11][12][13] SoS engineering also confronts a broad scope of nontechnical challenges including political, economic, social, legislative, and environmental considerations. [10][11][12][14][15][16][17][18][19][20] In such contexts, complexity also presents management challenges that must be overcome in order to successfully navigate the delivery of such systems. [21][22][23] Further, several systems engineering contexts confront additional domain-specific challenges, for example, working with novel and cutting-edge technologies in the defense and space domains, 24 or having to meet exacting certification demands in the aerospace and healthcare domains. [25][26][27][28] Organizations wishing to successfully engineer such systems may find themselves having to make several difficult technical and operational decisions at the start of the defined system life cycle, such as: Do we wish to bid on a "Request for Proposal"? If we do, how much risk are we exposing ourselves to? If we go on to design, deliver, and qualify the solution, how can we be confident that we have engineered the right system, and engineered the system right? In answering such questions, one important consideration for organizations may be to evaluate the complexity of their candidate systems, and assess the implications of this complexity for understanding 29 a System of Interest (SoI) and/or realizing it. 6,16,30 Organizations may rely on guidelines, instructions, and decision support tools to help inform this type of evaluation. Here, one particular industrial complexity evaluation decision support tool is reviewed as an illustrative case study in order to identify challenges for SoS complexity evaluation throughout a development life cycle.
In this paper, we are particularly interested in distinguishing between the challenges involved in designing an effective tool, and the related challenges involved in operationally embedding this tool such that it is effective within an organization.
The purpose of this paper is twofold: (i) to distinguish the different hurdles facing organizations hoping to successfully evaluate system complexity, so that they can enter into such a process with "eyes open," and (ii) to advance the development, refinement, and validation of a holistic SoS complexity evaluation framework that deals explicitly with the "embedding challenge." The primary research question addressed by this research is therefore: "How should organizations embed complexity evaluation tools in order to derive benefits from system complexity evaluation and understand these benefits?"

LITERATURE REVIEW
Increases in the complexity of an engineered system may have challenging consequences that include increases in the system's life cycle costs and increased difficulty in repairing and maintaining the system. 31 Evaluating the complexity of a system can usefully inform decision management and risk management processes throughout a system life cycle, and contribute to architecture evaluation and system analysis processes. 26 However, a significant challenge for those wishing to evaluate system complexity, and one that persists despite considerable research effort, is finding a single, agreed definition of system complexity. 6,7,[30][31][32][33][34][35] There are a range of perspectives on the term, 8 with some researchers arguing that engineering efforts should be concerned, primarily, with structural complexity [36][37][38] (see also descriptive complexity 39), while others emphasize dynamic complexity 40,41 or sociopolitical complexity. 42 The relationships between these types of complexity are described by Sheard and Mostashari. 8 For a systems engineer or architect, structural complexity can quantify the complexity of a system or product architecture. [36][37][38] In such approaches, the structural complexity of an architecture depends on the heterogeneity and quantity of different architectural elements and their connectivity. The structural complexity metric (Equation 1, where $C$ represents structural complexity) includes three terms: $C_1$ represents "component complexity," which is the sum of the complexities of individual components; $C_2$ represents the total number of pair-wise interfaces and is pertinent to interface design activity; and $C_3$ represents the topological complexity of the architecture and is pertinent to systems integration activity. The terms in the expanded structural complexity metric (Equation 2) are defined in more detail in Refs. 36-38 and rely on adjacency matrices representing the architecture (e.g., design structure matrices [DSMs]) to quantify the number of components ($n$) in the system and the connectivity between them.
$$C = C_1 + C_2 \times C_3 \tag{1}$$

$$C = \sum_{i=1}^{n} \alpha_i + \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} A_{ij} \right) \gamma\, E(A) \tag{2}$$

where $A_{ij}$ captures the connectivity between elements $i$ and $j$, and interface complexity ($\beta_{ij}$) depends on the complexities of the pair-wise interfacing components ($\alpha_i$ and $\alpha_j$) and a coefficient characteristic of the interface type ($f_{ij}$); $\alpha_i, \beta_{ij} \neq 0$, $\gamma = 1/n$, and $E(A) = \sum_{i=1}^{n} \sigma_i$ is the graph energy of the adjacency matrix, where $\sigma_i$ represents its $i$th singular value.
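To make the structural complexity metric concrete, the following minimal sketch computes Equation (2) from a DSM-style adjacency matrix. It is an illustration only: the component complexity estimates, the interface-type coefficient f, and the particular functional form assumed for β_ij are placeholder assumptions, not values prescribed by Refs. 36-38.

```python
import numpy as np

def structural_complexity(alpha, beta, A):
    """Structural complexity C = C1 + C2 * C3 (Equation 2).

    alpha: (n,) per-component complexities (e.g., expert estimates).
    beta:  (n, n) pair-wise interface complexities (beta_ij).
    A:     (n, n) binary adjacency matrix (A_ij = 1 if i and j interface).
    """
    n = len(alpha)
    c1 = alpha.sum()                                    # component complexity
    c2 = (beta * A).sum()                               # summed interface complexity
    c3 = np.linalg.svd(A, compute_uv=False).sum() / n   # gamma * E(A): graph energy term
    return c1 + c2 * c3

# Toy 4-component chain architecture, DSM-style.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
alpha = np.array([1.0, 2.0, 1.5, 1.0])      # assumed component complexity estimates
f = 0.3                                     # assumed single interface-type coefficient
beta = f * np.sqrt(np.outer(alpha, alpha))  # one plausible form for beta_ij(alpha_i, alpha_j, f_ij)

print(round(structural_complexity(alpha, beta, A), 3))
```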
However, care must be taken with such an approach as "the map is not the territory." While an adjacency matrix is certainly a useful representation of an engineered system for systems engineers, the complexity of this representation is not necessarily the complexity of the system itself. 43,44 Further, the constituent terms of Equation (1) themselves rely on estimations, impacting the objectivity of the quantification; for example, $n$ is the number of components in the system and $\alpha_i$ is the complexity of each component, suggested to be estimated using judgments of technology readiness levels, which have their own limitations and challenges. 45 Further, the implication of the proposed structural complexity metric is that a distributed architecture is inherently more complex than a centralized architecture. When measured in terms of the number of different system components and how connected these components are, such an implication makes sense. However, by not considering the behavior of the system, systems architects and engineers may be neglecting vital information in their evaluation of candidate architectures.
A complementary approach is to evaluate the "dynamic complexity" of a system, which principally concerns evaluating the difficulty of predicting a system's behavior. 41 In such an approach, three elements determine the overall behavioral complexity of an engineered system: the system being observed, the capabilities of the observer, and the behavior the observer is attempting to predict. Behavioral complexity 41 at time $t$ is then computed by aggregating, over the behaviors of interest, the terms $(1 - p_{b_i})$, where $(1 - p_{b_i})$ is the estimated probability of failing to correctly predict behavior $b_i$. By including confidence intervals, such an approach includes consideration of the capabilities of the observer. While such an approach can include considerations of behavior relating to mission performance (e.g., component failures) and the wider system context (e.g., technology supportability), there is a challenge for systems engineers not only in determining which aspects are particularly important for evaluation, but also, fundamentally, in the reliance on subjective judgments of probabilities relating to all three of these elements (the system itself, the behavior of the system, and the capabilities of the observer). Further, by this definition of dynamic complexity, as systems become more dynamically complex, the confidence and accuracy with which this property can be estimated necessarily reduce.
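As a rough illustration of how such an evaluation might be operationalized, the sketch below aggregates the per-behavior failure-of-prediction terms (1 − p_bi). The simple additive aggregation and the example probabilities are assumptions for demonstration only, and the confidence-interval machinery described in Ref. 41 is omitted.

```python
def behavioral_complexity(p_predict):
    """Aggregate the terms (1 - p_bi) over the behaviors of interest.

    p_predict: estimated probabilities p_bi that the observer correctly
    predicts each behavior b_i. Additive aggregation is assumed here
    purely for illustration.
    """
    return sum(1.0 - p for p in p_predict)

# An observer confident about mission behaviors (0.95, 0.80) but far less
# so about technology supportability (0.40): the hard-to-predict behavior
# dominates the resulting score.
print(behavioral_complexity([0.95, 0.80, 0.40]))  # -> 0.85
```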
While evaluating structural or dynamic complexity is nonetheless likely to be a useful approach for system architects and can provide important insights during system design and system analysis, to what extent can such approaches usefully inform other activities in a life cycle, for example, prebid/bid activity, or support evaluations of candidate projects, stakeholder, and operational contexts?
Contrast these two quantifiable perspectives on system complexity, "structural complexity" and "dynamic complexity," with "sociopolitical complexity," which emphasizes the effect of people on the complexity of a system. 42 This includes competing perceptions of how complex a system is, arising from the multiple diverse viewpoints that stakeholders hold of the system and its wider context. It also includes the behaviors of people, agents, or the system in relation to these, and the difficulty, relative to simple systems, of predicting outcomes from inputs. How can this kind of complexity be measured or quantified? While one can count the number of different stakeholders (or types of different stakeholders), or estimate metrics such as the degree to which stakeholders are aligned in their perceptions, a challenge remains for organizations to make sense of this subjective, contested property.
Further, how is a complex system distinguished from a complicated system? Some argue a complicated system is one where "one can model and predict outcomes in a way that cannot be done with a complex system," 5 while others, also recognizing emergent behavior as a distinguishing feature, instead emphasize the distinction in terms of how difficult the system is to understand and predict.

For organizations wishing to evaluate system complexity, this myriad of definitions suggests that system complexity is dependent on perspective: on which aspects of a system are deemed important and for what reasons. For example, is system complexity considered from the perspective of "the system being observed," "the capabilities of the observer," or "the behavior the observer is attempting to predict"? 41 Or is the complexity of a system considered in terms of how difficult the system is to comprehend ("cognitive complexity") or how difficult the behavior of the system is to predict ("behavioral complexity")? 47,[50][51][52] Does it include the processes of utilizing the system once deployed, or the user's perceptions of how complex the system is (e.g., how familiar users of the system are with important features of the system)? [53][54][55] What is the boundary of the SoI: is it the physical context of the implemented system, or does it also include the more extended strategic/business context? 56,57

A complication for organizations engaged in system complexity evaluation is that these concepts may in fact be interrelated; Sheard provides a useful chart showing how a large number of complexity concepts relate to systems engineering activity (SEA) and to each other (the Systems Engineering Complexity Contexts [SECC]). 7 In the context of SoS engineering, opaque "authorities" and the managerial and operational independence of constituent systems compound the perspective challenge identified here. 26,34,58 Again, clear-cut distinctions between different perspectives on SoS complexity are difficult to maintain, exemplified in the overlapping "classifying dimensions" that discriminate between the "Complexity," "Dynamicity," and "Connectivity" of an SoS but which are, in reality, interrelated terms. 59

As a consequence, the development of unambiguous and reliable measures of system complexity is a considerable challenge. Metrics such as cyclomatic complexity 60 and lines of code 61 have been used in software engineering to measure software complexity, and the number and connectivity of physical system components and interfaces are used to measure the complexity of a product architecture topology. 32,36,[62][63][64][65] However, developing metrics for a diverse system as a whole remains a challenge. 66 While "the number of difficult requirements," the amount of "cognitive fog" present in the project, and the "relationships among stakeholders" 30 have been used as metrics of system complexity, accurately measuring and reporting them remain nontrivial tasks.
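As a concrete instance of such a software metric, cyclomatic complexity is computed from a program's control-flow graph as $M = E - N + 2P$, where $E$ is the number of edges, $N$ the number of nodes, and $P$ the number of connected components; for example, a single function containing one if-else branch has $M = 2$, reflecting its two linearly independent execution paths. The tractability of this calculation for software stands in contrast to the difficulty of defining analogous graph-based measures for a diverse SoS.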
The "Cynefin framework" 67 from Snowden and Boone is often used to categorize operational context into "simple," "complicated," "chaotic," "complex," and "disordered." Here, the assumption is that simple and complicated contexts allow cause-and-effect relationships to be known, whereas complex and chaotic contexts have no immediately apparent cause-and-effect relationships. The suggestion is that different contexts have different characteristics and require different approaches. However, while a necessary first step for an organization is to acknowledge that their operational context is complex, the extent to which the suggested guiding principles can usefully inform systems engineering methodological approaches is not yet fully known.
Stevens 56 goes further in categorizing engineered systems complexity into the "system context," the "strategic context," the "implementation context," and the "stakeholder context." A complementary view to Stevens' Profiler is provided by the SEA profiler. 68 The SEA profiler advocates adapting SEA (typified as nine activities, e.g., analysis of alternatives, defining the system problem) based on the perceived complexity, utilizing a sliding scale to help systems engineering practitioners determine appropriate approaches for these activities. 68 An aggregate assessment can also be considered across all nine typical SEAs to help a team identify whether they should approach a problem using more traditional systems engineering approaches (e.g., emphasizing the establishment of system requirements and adapting to changes) or complex systems engineering approaches (e.g., trying to predict future enterprise needs and emphasizing discovery of needed mission capabilities). This profiler can be used by project teams to discuss and check whether their approach to the engineering of a system seems appropriate for the kind of challenges they are likely to face. While these decision support tools (and the Cynefin framework) may be useful for complexity evaluation and for characterizing the system or the project, there remains a challenge in combining several perspectives, categorizations, and measures into a coherent whole. Complexity assessment tools and complexity categorization frameworks are discussed in more detail in Refs. 69, 70.
The "Complex Adaptive Systems Engineering (CASE)" methodology provides guidance on additional activities that support the engineering of complex SEAs (originally a set of 8, later updated to 25 activities). These activities can collectively supplement, re-enforce, and reemphasize traditional activity, potentially contributing differentially to tackling the kind of challenges that engineering complex systems or SoS presents. 71,72 Practitioners also have at their disposal "principles for complex systems engineering" (e.g., embrace political, operational, economic and technical aspects, nurture discussions, enforce layered architecture), which provide additional useful mechanisms for organizations to manage system complexity. 22,73 Similarly, the "Complexity Primer for Systems Engineers" suggests candidate approaches to address complexity in the problem context or environment and to address system complexity. 6 Considering that no single perspective is likely to address all the concerns of an organization and its stakeholders, it makes sense to recognize that there is a complicated landscape of complexity definitions and approaches. [74][75][76] Indeed, the "Evolving toolbox for complex project management" provides a rich guide to the various toolsets that aid the successful realization of complex systems, including, inter alia, the use of the aforementioned profilers and methodologies, cost estimation, systems thinking, and the use of social network analysis. 77 .
Overall, while different toolsets and approaches continue to proliferate and evolve throughout the systems engineering literature, there is value in pursuing empirical questions related to the effective deployment of these ideas in order to shed more light on relevant enablers and obstacles to improved engineering practice.

THE THALES GROUP "COMPLEXITY PROFILER"
Here, we introduce a version of the Thales Group proprietary "Complexity Profiler," 78,79 a spreadsheet-based tool used by teams and individuals to evaluate the complexity of systems of interest and their operational environments during prebid/bid stages and also throughout a project, in support of technical governance actions.
The Thales Group "Complexity Profiler" was inspired by the work of Stevens 56 and developed by four senior experts within Thales Group, each having around 30 years of experience in systems engineering. The tool was introduced to encourage explicit evaluation of both technical and nontechnical risk as a result of system complexity, particularly early in a system life cycle (i.e., prebid, bid phase). It was intended to support not only risk and opportunity identification and evaluation, but also mitigation activity and to identify expertise and competency requirements specific to a particular project. During development of the tool, the system complexity factors, shown in Table 1, were amended from those in the Stevens Profiler 56 to be oriented toward the supply and provision of systems as opposed to the acquisition of systems. TA B L E 1 Description of the complexity factors used in the Thales Group Complexity Profiler, ©Thales Group 2020

Impact of environment on solution: Impact of physical environment on the properties of the solution (which includes operational processes).

Operational concept stability: Operational concept includes concept of operation, concept of use, and concept of employment. This factor is intended to evaluate the stability and predictability of each concept (purpose, goals, mission, activity objectives) along the solution life cycle (from solution conception to disposal).

User diversity: Expected number of users and their role diversity.

External stakeholder involvement: Level of confidence regarding stakeholder support during the execution of the contract.

Life cycle interlacing: Number of system/solution life cycles possibly interlaced in a global Programme shared between several contractors.

Systems engineering effort and criticality: Level of innovation and criticality of engineered parts.

System behavior stability and determinism: The ability to define system modes, system functions, system states, and system performances, and to predict their evolution according to well-defined mathematical laws.

Engineering organization: Level of cooperation and subcontracting due to team size and number of organizational units.
The aim of the Complexity Profiler is to provide a "synthetic view that assists a team in quantifying the complexity of a particular solution. The profile helps to frame the decisions and direction that a bid/project has to take. Furthermore it helps to recognise the important differences in difficulty of a project, providing ability to compare the level of difficulty of one aspect of the project against another." 78 Using the tool is intended to guard against the risk that the organization underestimates the level of challenge inherent in developing a particular system, and hence underestimates the resource requirements needed to successfully realize and deliver the system within imposed constraints.
The Thales Group Complexity Profiler is used across a wide range of systems covering the diverse portfolio of Thales Group solution offers, including, but not limited to, Optronics systems, Command and Control (C2) systems, radar systems, and radio systems. The user guide for the Complexity Profiler states that "System" covers a system, equipment, platform, product, or service, and that "Solution" covers "system" and any enabling "system" necessary to sustain the "system" of interest during its life cycle. Systems engineering practitioners are likely well aware that every level of this hierarchy of systems can be complex. While the emphasis of this paper is on the nature of the challenges that SoS present, as these kinds of systems bring an additional layer of challenge, the findings may nonetheless be useful for systems engineering practitioners operating at other levels within such a hierarchy.
The profiling is performed in three stages: (i) assess the complexity of the SoI using the factors detailed in Table 1, (ii) conduct action analysis to define an action plan to manage complexity, and, finally, (iii) support effective decisions and implement action plans.
For each of the complexity factors detailed in Table 1, the team assigns a complexity score. Depending on the resultant overall complexity of the SoI, the Complexity Profiler will mandate that, as a minimum, teams discuss certain actions (e.g., a high score for "Impact of Environment on Solution" will mandate that teams discuss "Physical simulation (Mechanical, thermal, EMC, etc)"; a high score for "User Diversity" will mandate that teams discuss "Value & Cost analysis," "Concept of Operation," "Concept of Employment," and "Concept of Use"). However, actions suggested by the Complexity Profiler are only mandated to be discussed, not necessarily implemented.
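The following sketch illustrates the shape of this scoring stage and the mandated-discussion mechanism. The factor names follow Table 1, but the 1-4 scale, the threshold, and the factor-to-action mapping are illustrative assumptions rather than the proprietary tool's actual values.

```python
# Assumed mapping from high-scoring factors to actions a team must discuss.
ACTIONS = {
    "Impact of environment on solution": [
        "Physical simulation (mechanical, thermal, EMC, etc.)"],
    "User diversity": [
        "Value & cost analysis", "Concept of Operation",
        "Concept of Employment", "Concept of Use"],
}

def mandated_discussions(scores, threshold=3):
    """Return the actions mandated for discussion, given factor scores (1-4)."""
    return {factor: actions
            for factor, actions in ACTIONS.items()
            if scores.get(factor, 0) >= threshold}

scores = {"Impact of environment on solution": 4,
          "User diversity": 2,
          "External stakeholder involvement": 3}
print(mandated_discussions(scores))
# -> {'Impact of environment on solution': ['Physical simulation (mechanical, thermal, EMC, etc.)']}
```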
The Complexity Profiler includes a section to annotate in free text the identified risks and the proposed actions to mitigate them. Finally, the team are expected to use the information they have captured to support relevant decisions for the particular development phase the team finds themselves in (prebid, bid, etc.) and to implement an agreed action plan, perhaps launching dedicated investigations and interventions in order to gain more knowledge about the SoI and its environment. As an example, consider a SoI at bid stage that scores highly for "Operational Concept Stability" and "User Diversity." As a result of using the Complexity Profiler, the project team decides to undertake a dedicated work package to understand the operational complexity in more detail to support their bid, such as conducting mission analysis and producing or refining operational concept documents. As a further example, consider a synthetic system for which a complexity profile has been completed, shown in Figures 1 and 2. The complexity of this synthetic system appears to be predominately nontechnical, with low scores for "Impact of Environment on Solution" and "System Behaviour Stability" but high scores for factors such as "External Stakeholder Involvement."

Figure 1: View of a completed complexity profile for a synthetic system demonstrating complexity that appears to be predominately nontechnical, suggesting care and attention should be focused on project management, contracting, and commercial arrangements.

Figure 2: View of complexity factor scores for the same synthetic system as shown in Figure 1.

METHODOLOGY
Semistructured qualitative interviews were undertaken in order to collect information on complexity evaluation within Thales Group and the utilization of the proprietary "Complexity Profiler," exploring perceptions of its strengths and weaknesses, opportunities for improvement, and challenges for its exploitation. A semistructured interview format involves using a number of open and closed questions that provide a formal structure, while also allowing for further discourse as required to establish a depth of understanding.
The target population was personnel with over 10 years' experience working in a systems engineering context within Thales Group who have experience evaluating the complexity of systems. The sample population is predominately systems engineering managers, systems architects, and enterprise architects, although the roles and jobs they undertake within Thales Group vary. The population was sampled using theoretical sampling; individuals were chosen as those who were in the best position to provide answers that were well informed and relevant. A sample size was not predetermined; instead, interviews were conducted until a minimum number (10) had been exceeded and no substantially new information was emerging. The interviews were audio recorded and then transcribed.
A thematic analysis was conducted on the interview transcripts. 83

RESULTS
The pertinent findings of the interviews are presented under two subheadings: (i) the tool itself (the Thales Group "Complexity Profiler") and (ii) the tool's embedding. From this analysis, key features of a complexity evaluation framework, within which any complexity evaluation tool might be embedded operationally, are derived.

The tool itself

Positives
Respondents generally found the "Complexity Profiler" easy to use. Further, several respondents claimed the "Complexity Profiler" is useful to them for different reasons: for some it was useful for surfacing risks that may otherwise go unnoticed (Section B), for others it could be useful for justifying project resources to mitigate identified risks (Section C), while others claimed it aided communication between technical and nontechnical personnel (Section D). Others suggested using the "Complexity Profiler" helped to demonstrate that a project team had considered the complexity of a candidate system prior to project reviews (Section E).
One interviewee (IH) reported that the most important feature of the "Complexity Profiler" was the identification of risk areas, which they felt was done well, acknowledging that the tool will not manage risk for an organization on its own without further effort from personnel, but that it will help with the identification of risk. Another interviewee (IE) described how the "Complexity Profiler" has proven useful for justifying resources in a project to conduct system modeling and simulation activity to de-risk the systems development project: "It's quite often useful certainly for doing system modelling. . . "

Negatives
Respondents also identified challenges arising in SoS contexts, as one described:

"Increasingly we have to deal with flowing down complex requirements to suppliers to supply something to integrate into our systems, so we're not only having to manage the complexity of our own activities but also manage the complexity of the things we flow down to our suppliers to do for us and our ability to manage our supply chain from a technical perspective not from a pure procurement. . . perspective, is not so easy. I see more and more complexity and problems coming from the fact that we are a conductor of an orchestra rather than the guys that play all the instruments and that is not an easy. . . "

Further, the literature surveyed identified the following potentially relevant system complexity factors that could be suitable additions to the Thales Group "Complexity Profiler": "requirement difficulty," "cognitive fog," and "stable stakeholder relationships"; 30 structural complexity (the number and diversity of components, subsystems, and systems, alongside their connectivity); [36][37][38] "dynamic complexity" (the difficulty in predicting behavior); 41 difficulty conducting functional analysis and allocation; 4 and technology maturity. 45 The issue is also applicable to organizations that make use of other complexity assessment tools surveyed in the literature, such as the "SEA Profiler" 68 and Stevens' Profiler. 56 A previous study, which collected judgments from current systems engineering practitioners on the relative and absolute importance of several different system complexity factors, identified that additional relevant factors could include consideration of the number and diversity of system interfaces and dependencies, nonfunctional requirements, and "client/customer/user complexity (e.g., their understanding of the system, novelty of the system to them, willingness to accept change)." 86 Moreover, the managerial and operational independence of constituent systems may result in a lack of common authority for the SoS, incomplete consideration of constituent system constraints, or limited end-to-end testing and validation of the SoS. 26,58 See Section I. Including these additional SoS complexity factors appears to be a straightforward improvement for the Thales Group "Complexity Profiler," although care must be taken to ensure these are carefully described within the tool to avoid the aforementioned challenge of ambiguous, subjective system complexity factors. These issues can be addressed, but a wider comparative study considering a set of similar evaluation tools is needed in order to perform a full analysis and make detailed recommendations.

The tool's embedding

Positives
Participants felt that asking questions such as "how does the complexity of a candidate system affect our methodologies, processes, and outcomes?" or "what is the point of taking a complex systems perspective?" starts a deliberate act of thinking and discourse within the organization. Without starting to ask these questions, they would not have the "Complexity Profiler" in place and would not be asking personnel to consider certain issues relating to system complexity during systems development activity. That many respondents found the "Complexity Profiler" to be useful is a positive indicator, and while the tool is flawed, it is encouraging that the organization has started this line of enquiry and can make improvements in the future. It is a positive feature that the organization has invested effort in understanding the impact of complexity on their systems, given that many systems engineers have not been systematically engaged in an evaluation activity like this as part of their practice or training. 86

Negatives
The Complexity Profiler takes an inherently "divide-and-conquer" approach, evaluating a fixed set of complexity factors independently. While such an approach could be reasonable in a context where the evaluation being attempted was grounded in a mature and consensually agreed set of principles or theories (e.g., evaluating the load that a device could be expected to tolerate), this is not currently the case for complexity and is unlikely to be the case for the foreseeable future. This is due to the diversity of systems, domains, operating environments, and contexts within which system complexity evaluation is undertaken, and the current relative lack of maturity in the underpinning theory. While software engineers have developed principles and measures for complexity evaluation, 60,87 determining overall principles and theories for a diverse SoS as a whole remains a significant challenge. See Section K.

Without a wider learning system, new, relevant aspects of complexity may go unnoticed, and the process may not be tailored and adapted to respect the role context plays in system and project outcomes, which may in turn prevent the sharing of lessons and better practice across different projects. Although the Complexity Profiler is completed as a standard part of executing design and engineering projects, and the profiler itself mandates that certain mitigating actions be discussed, there is no process or structure to support the evolution of the evaluation process or to connect evaluations with project outcomes, either positive or negative. As a consequence, the use of such tools does not straightforwardly result in improvements to the organization's ability to make the most effective decisions. If an organization were to adopt an approach similar to the Thales Group Complexity Profiler or other complexity assessment tools, such as the "SEA Profiler" 68 or Stevens' Profiler, 56 careful consideration must be given to how they are embedded.
Finally, the Complexity Profiler is a fire-and-forget system that sits inside business processes rather than spanning them. The opportunity for the tool to impact the evolution of business processes would be more effectively realized if it were embedded in a wider organizational learning system; the lack of such a process with which to learn from project outcomes limits the long-term value of the evaluations. These issues are going to be relevant to any complexity evaluation tool, no matter how well it solves the kind of challenges described in the previous section. Any tool will only be effective and valuable if it is embedded effectively within the operation that is deploying it; as such, there is a need for a framework within which complexity evaluation tools should reside.

FOUNDATIONS OF A COMPLEXITY EVALUATION FRAMEWORK
While the analysis above centered on the positive and negative features of the tool, here we use the analysis to derive several features of an effective complexity evaluation framework. An effective complexity evaluation framework should provide clarity on language where significant ambiguity is present, as found in both the surveyed literature and the case study. It should enable long-term organizational learning and should evolve as a result where necessary. It should dovetail with good governance to ensure it is executed effectively as part of an iterative, whole-life cycle approach. A multiperspective evaluation of system complexity, and the risks that this complexity presents, should be enabled and integrated. Mitigating strategies appropriate to these risks should be mandated in support of organizational decision making. A preliminary sketch of such a framework is provided in Figure 3.
The framework has objectives to: (i) promote discussions of the role of system complexity, and (ii) facilitate shared understanding, in order to (iii) provide enhanced decision support at every life cycle phase. Every design principle of the complexity evaluation framework (Figure 3) is informed by the earlier analysis.
By encouraging an evaluation of the complexity of the SoI, organizations can gain additional insight into decisions during prebid and bid phases, such as "Do we wish to bid on a 'Request for Proposal'? If we do, how much risk are we exposing ourselves to?," or support analysis of alternatives, system architecture evaluation, and system design evaluation. Further, the evaluation of system complexity may be useful to help scale the level of effort required on operational concept development or technical derisking activity such as modeling and simulation.
For the purposes of this paper, a framework is defined as "a structure . . . that can be used as a tool to structure thinking, ensuring consistency and completeness." 88 A complexity evaluation framework should define and support a standardized way to go about evaluating system complexity within an organization, one that promotes effective decision making, improves project outcomes, supports communication between stakeholders (internal and external), and also enhances an organization's understanding of the evaluation process itself, its strengths and weaknesses, and its value or impact for the organization. 88 The benefit of such a framework is in ensuring that relevant decision makers engage appropriately with considerations of system complexity and can be shown to have so engaged. Further, the cyclical exchange of information and collaboration promote understanding of the SoI, which can reduce errors caused by the misinterpretation of system interfaces. We argue that while it may be useful to have proformas, decision support tools, user guides, and processes, their value will be largely determined by their place within a wider process that ensures personnel interact with them appropriately. A framework in this sense is essentially a way to make people stop and think in a carefully structured way. 89

Figure 3: A preliminary complexity evaluation framework comprising an inner, iterative cycle of sense making that will be completed multiple times over the course of a single project, embedded within an outer cycle of organizational learning that will be completed multiple times across successive projects. By defining the sense making as a collaborative and iterative process, the framework avoids the challenges associated with the contested definition of system complexity. By embedding complexity evaluation within a wider learning cycle, an organization can revisit or redefine the aspects of complexity that are most pertinent to their contexts, enabling lessons to be identified, learned, and shared across system domains and contexts. Organizations can then monitor the costs/benefits of conducting complexity evaluation activity and understand how evaluations relate to eventual project and system outcomes.

Collaborative
The core of the framework is an emphasis on iteratively building a shared understanding to handle the necessarily subjective, contested, and evolving definitions of system complexity, system boundary, level of abstraction, etc. 8,49,56 Multiple perspectives need to be drawn together and integrated in a collaborative manner in order to develop an accurate and actionable view of the system and its realization project. The need for multiple perspectives when evaluating the complexity of an engineered system is also well advocated in the academic literature. 23 One mechanism by which personnel could achieve this is through collaborating on completing and maintaining a "Complexity Register" (discussed under Future Work), rather than evaluating complexity factors in isolation. This approach also bakes in an assumption that complexity is an operational concept that is likely to evolve. Rather than culminating in a set of advisory mitigating actions that must merely be discussed, the planning and subsequent implementation of mitigations is designed to inform (and be tracked by) the next cycle of complexity evaluation.

Iterative
Each of the five steps in the framework's inner cycle could be prefixed with "re-" to indicate that these steps are taken repeatedly: reidentify, re-evaluate, etc. An evaluation of system complexity needs to evolve alongside the project it relates to by being revisited periodically. Moreover, an organization should be concerned with monitoring the project charged with developing an SoS, and it is only in applying the complexity evaluation framework throughout a system development life cycle that this can be achieved. The evidence from the case study suggests that, despite the intention at the creation of the "Complexity Profiler" tool that it should be used as an iterative tool, the reality is that the tool has largely become a "fire-and-forget" activity. While mandating revisions to the "Complexity Profiler" (or, in the case of other organizations, other complexity assessment tools) throughout an SoI life cycle may encourage activity in this regard, organizations must acknowledge potential reticence toward this, as one respondent described feelings of "process for the sake of process." Instead, organizations may need to establish a robust cost-benefit assessment, or value proposition, for system complexity evaluation, which can be achieved through the final design principle: the need for organizational learning.

Progressive
Finally, complexity evaluation only has the potential to improve business practices if it is embedded within a wider organizational learning cycle, depicted in the outer loop of the framework. Evaluations of system complexity must be conducted in a contextually sensitive manner; otherwise, collected evaluations are unlikely to be useful in sharing lessons identified and better practice long term. In this way, organizations can continually explore which aspects of complexity are pertinent in their context, and how they impact system and project outcomes. 73 It is during this broader activity of embedding complexity evaluations in organizational activity that focus can turn to recording and ensuring that the "stop and think" process is taking place effectively, and that the framework is enabling consistency and completeness. While this framework requires deployment and operational validation, it takes the necessary first steps toward addressing the challenges and opportunities that were derived from analysis of the case study presented here. The intention is to provide useful insights to practitioners who currently conduct complexity evaluation, or those who wish to instigate their own evaluations, and to establish a frame within which further academic analyses of the role of complexity in SoS engineering can take place. The future work section discusses how progress may be made with this complexity evaluation framework.
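To summarize the framework's two-loop structure in executable form, the sketch below models the inner sense-making cycle and the outer learning loop. Every name, and the averaging and threshold logic, are hypothetical simplifications of Figure 3, not a prescribed implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Evaluation:
    """One pass of the inner sense-making cycle (one project phase)."""
    factors: dict        # factor name -> collaboratively agreed score
    mitigations: list    # planned mitigating actions feeding the next cycle

def inner_cycle(team_views, threshold=3.0):
    """(Re-)identify and (re-)evaluate factors from multiple perspectives,
    then plan mitigations for high-scoring factors."""
    names = {f for view in team_views for f in view}
    factors = {f: mean(v.get(f, 0.0) for v in team_views) for f in names}
    mitigations = [f"mitigate: {f}" for f, s in factors.items() if s >= threshold]
    return Evaluation(factors, mitigations)

# Outer learning loop: pair each project's evaluations with its eventual
# outcome, so the organization can refine factors, thresholds, and process.
org_memory = []

def record_project(evaluations, outcome):
    org_memory.append((evaluations, outcome))

e1 = inner_cycle([{"user diversity": 4, "stakeholder involvement": 2},
                  {"user diversity": 3, "stakeholder involvement": 3}])
e2 = inner_cycle([{"user diversity": 2, "stakeholder involvement": 4}])  # later re-evaluation
record_project([e1, e2], outcome="delivered late; stakeholder churn")
print(e1.mitigations)  # -> ['mitigate: user diversity']
```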

DISCUSSION
There are several hurdles that tend to interfere with an organization's ability to realize the intended benefits of the Thales Group "Complexity Profiler" or similar complexity assessment tools. These challenges divide into tool-specific problems that can be improved by making the tool better (e.g., avoiding using a limited number of poorly defined criteria rated against a crude scale, ensuring the tool is sensitive to compounding risks and SoS considerations), and organizational embedding challenges that must be addressed separately (e.g., avoiding a divide-and-conquer approach to system complexity evaluation, and ensuring that the tool itself is updated and integrated within a wider organizational learning cycle). Addressing the second set of challenges is not a totally new idea; organizations will have been solving it in various ways. However, more of the literature seems to be concerned with addressing the first category of challenges (how to approach evaluation, etc.), while there seems to be less work on the second category, concerning how such tools are embedded. It is therefore valuable to draw attention to the second category of challenges as a research question distinct from the question of how organizations define, measure, and evaluate complexity.
While individual decision support tools, such as the Thales Group "Complexity Profiler," Stevens' Profiler, 56 or the "SEA Profiler," 68 have the potential to be useful to organizations, they need to be embedded within a wider framework in order to mitigate the challenges described here. Further, given the contested definition and subjectivity of the term system complexity evidenced in the literature, organizations need to think about system complexity in a more holistic way. An instantiation of one such complexity evaluation framework is introduced here. It seeks to achieve robust and effective evaluations through the collaborative identification and evaluation of contextually relevant system complexity factors, with continuous reevaluations encouraged and supported by a wider emphasis on organizational learning. These design principles are well aligned, although not fully mapped, with wider principles for the engineering of complex adaptive systems. 5,71 There are several challenges that remain for organizations wishing to evaluate the complexity of their candidate systems. First, identifying risks arising from the complexity of a system cannot be taken to mean these risks have been managed. Second, how can organizations ensure their guiding tools, processes, and frameworks are engaged with in an appropriate way? Finally, and perhaps most importantly, how can organizations be sure that the "stop and think" process is actually happening? Appropriate engagement with the Thales Group "Complexity Profiler" was difficult to ensure, for example, avoiding biases when completing the "Complexity Profiler," or ensuring it was completed at all; the same challenges also apply to the framework proposed here.
Evaluating the complexity of a candidate system should not be an end in itself; rather, we argue that it must directly inform an organization's substantive decision making. Complexity evaluation must trigger mitigating action to reduce the likelihood and/or impact of the identified complexity. While the proposed framework encourages organizations to relate their evaluations to suitable mitigating actions and eventual project outcomes, the challenge of implementing this should not be underestimated. Additionally, given the apparent relationship between system complexity evaluation and project risk, care must be taken to ensure complexity evaluation activity is integrated with an organization's wider through-life Risk Management process. 26,90 Organizations should also be cognizant that, while we have emphasized the treatment of risk here, consideration should also explicitly include the treatment of opportunities. 91 As others have suggested: "In complex (enterprise) environments like that of an SoS, it is better to have an opportunity exploration mindset as opposed to a risk mitigation mindset." 23 Aligning system complexity evaluation within a broader suite of organizational decision management processes is a further challenge, as such evaluations are only one aspect of these processes, whether at the concept or development stages of a life cycle. 26 Various other factors also need to be considered, such as, inter alia, strategic direction, technology roadmaps, development strategy, and risk exposure.
Ensuring that system complexity evaluation is complementary to other existing system analyses remains a further challenge for organizations.

FUTURE WORK
In order to further refine the proposed framework, principles for system complexity evaluation need to be developed in order to deal with, inter alia, SoS-specific considerations, life cycle tailoring, defining complexity within and between the contexts of different individual business units, providing clarity on terminology and governance. The problem structuring methods associated with Soft Systems Methodology (SSM) appear to be particularly well suited to developing these strands of future work. 89 An SSM investigation would aid understanding of the human activity system that undertakes complexity evaluation within an organization in more detail and would identify and inform the implementation of desirable and feasible improvements to the framework. Such an investigation may also offer insight into how the framework can be assured, that is, how an organization could establish confidence that the framework is being implemented effectively.
With a mature framework specified, attention could turn to deploying it within organizations in order to further refine, and eventually validate it.
One way to mobilize the complexity evaluation framework is to create a "Complexity Register" with which to support the collaborative identification of contextually relevant complexity factors and provide a store of data supporting system complexity evaluation. Developing such a tool, along with guidance for users and prompts to support complexity evaluation, and ensuring its integration with other engineering management artifacts and risk management processes, remains further work. Such a mobilization would also require deployment and eventual validation.
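As a sketch of what one "Complexity Register" entry might hold, consider the hypothetical structure below; every field name here is an assumption about the kind of data such a register could store, not a specification of the proposed tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComplexityRegisterEntry:
    """Hypothetical record supporting collaborative complexity evaluation."""
    factor: str                  # e.g., a Table 1 factor or a new, contextual one
    rationale: str               # why this factor is relevant in this context
    score: int                   # collaboratively agreed score on the org's scale
    linked_risks: list = field(default_factory=list)   # IDs in the risk register
    mitigations: list = field(default_factory=list)    # agreed mitigating actions
    last_reviewed: date = field(default_factory=date.today)  # supports re-evaluation

entry = ComplexityRegisterEntry(
    factor="Life cycle interlacing",
    rationale="Programme shared between several contractors",
    score=4,
    linked_risks=["RSK-042"],
    mitigations=["Agree interface control documents early"],
)
print(entry.factor, entry.score)
```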
Empirical evidence should be sought to determine if the expected value of conducting system complexity evaluation throughout a life cycle can be realized and at what cost. While gathering empirical data to support this is fraught with challenges, it is necessary in order to demonstrate the utility of system complexity evaluation.

CONCLUSION
Despite efforts by scholars and practitioners to provide clarity on the term "system complexity," the term is still loaded with ambiguity and opacity, making any efforts to implement tools and processes for SoS complexity evaluation challenging. One particular tool is evaluated here, informed by semistructured interviews with senior personnel within Thales Group.
While organizations may use such decision support tools to aid complexity evaluation, we argue that such tools may not deliver value unless they are embedded within an appropriate complexity evaluation framework. Such a framework must support structured thinking that respects the organizational context within which complexity evaluation sits.
A preliminary framework is introduced here, combining the three key features derived from analysis of the use case explored in this paper: being collaborative, iterative, and progressive. It is centered on an iterative five-step complexity evaluation process (identification of system complexity factors, collaborative evaluation of their impact, communication of the resultant shared understanding, planning mitigations, and implementing these mitigations) embedded within a larger organizational learning cycle that interrogates and progressively improves the process of complexity evaluation and monitors its net benefit.
Notwithstanding the significant challenges that currently exist, if organizations can establish the utility of complexity evaluation in this fashion, they stand to gain, at the very least, a greater awareness of likely risks for their SoS development projects, and, more optimistically, may articulate new accurate predictors of project outcomes.

DATA ACCESS STATEMENT
The data supporting this research (introductory email, participant information sheet, participant consent form, interview questions, anonymized biography of participants, textual excerpts) are available at the University of Bristol data repository, data.bris, at https://doi.org/10.5523/bris.pji8xwa0q6ue27lcu8gp62k0q. 85