A resilience‐based framework for assessing the evolution of open source software projects

Open source software (OSS) has been developing for more than two decades. It originated as a movement with the introduction of the first free/libre OSS operating system, became a popular trend in the developer community, led to enterprise solutions widely embraced by the global market, and began garnering attention from major players in the software industry (such as IBM's acquisition of Red Hat). Throughout the years, numerous software assessment models have been suggested, some created specifically for OSS projects. Most of these assessment models focus on software quality and maintainability, while some take into account health aspects of OSS projects. Despite the multitude of these models, there is not yet a universally accepted model for assessing OSS projects. In this work, we aim to adapt the City Resilience Framework (CRF) for use in OSS projects, establishing a strong theoretical foundation for OSS evaluation that focuses on a project's resilience as it evolves over time. We would like to highlight that our goal with the proposed assessment model is not to compare two OSS solutions in terms of resilience, or to rank the available OSS tools by resilience. We aim to investigate the resilience of an OSS project as it evolves and to identify possible opportunities for improvement in the four dimensions we define. These dimensions are as follows: source code, business and legal, integration and reuse, and social (community). The CRF is a framework introduced to measure urban resilience and, more specifically, how a city's resilience changes as it evolves. We believe that a software evaluation model that focuses on resilience can complement the pre-existing models based on software quality and software health.
Although concepts related to resilience, such as sustainability or viability, already appear in the literature, to the best of our knowledge there is no OSS assessment model that evaluates the resilience of an OSS project. We argue that cities and OSS projects are both dynamically evolving systems with similar characteristics. The proposed framework utilizes both quantitative and qualitative indicators, which we view as an advantage. Lastly, we would like to emphasize that, as part of this study, the framework has been tested on the enterprise software domain by evaluating five major versions of six OSS projects: Laravel, Composer, PHPMyAdmin, OKApi, PatternalPHP, and PHPExcel. The first three are intuitively considered resilient and the latter three nonresilient, providing a preliminary validation of the model's ability to distinguish between resilient and nonresilient projects.

technical aspects. 9 Lenarduzzi et al 10 have been working on the design of similar models. In their work, we find a wide review of models and approaches for selecting OSS projects that have been published since 2019. Analyzing 60 relevant studies, the authors pinpointed the criteria categories that have been most frequently used, that is, economic data, licensing, community characteristics, application adoption (installability), and support, among others. As Wasserman et al 11 state, it is important for OSS evaluation models to include, apart from numerical scores and metrics, qualitative criteria as well. Moreover, IT managers need compelling evidence for the resilience of an IT solution before committing themselves and adopting it for their IT architecture. Informal discussions with IT managers revealed that a system is expected to be sufficiently maintained over a period of at least 10 years to become eligible for adoption. Finally, this evaluation should be performed frequently as the OSS project evolves (for example, after each major release) to be able to observe how its resilience changes over time.
TABLE 1 Literature related to success factors and assessment models in OSS.

Reference | Topic | Year of publication
Wasserman et al 6 | OSS evaluation | 2021
Fang 14 | Trust in software ecosystem | 2022
Laila and Khan 15 | Mission critical OSS | 2023

Abbreviation: OSS, open source software.
In Table 1, we summarize the literature by referencing research work related to OSS success factors and OSS assessment and evaluation from 1977 to 2022. We would like to highlight that references 7, 10, and 11 are systematic literature reviews published in 2014, 2020, and 2022, respectively.

| OSS: The concepts of quality, health, and resilience
From the literature review summarized in the table of the previous section, we can see that the main focus of the evaluation and assessment models for OSS projects revolves around the concept of software quality. There is also a limited number of works around the concepts of software health and software trust. Andrade and Saraiva 9 highlight how software health is connected to the longevity of an OSS ecosystem, and they observe that "health is typically looked at from a project scope". In CHAOSS metrics, 16 OSS health is associated with social-related aspects.
Axelrode 17 provides a definition of a resilient software system as one that "can take a hit to a critical component and recover and come back for more in a known, bounded and generally acceptable period of times". In the Urban Planning and Architecture research field, the concept of City Resilience 18 is defined as "the ability [of a system] to cope with change". The City Resilience Framework (CRF) 19 defines city resilience as "the capacity of cities to function, so that the people living and working in cities-particularly the poor and vulnerable-survive and thrive no matter what stresses or shocks they encounter".
OSS projects are dynamic systems that are constantly evolving and face changes, be it on the technology level (e.g., changes in the development stack they are using), on the governance level (e.g., changes in the leadership of the project), or on the social level (e.g., the project's community shifts to another OSS project). Therefore, we find the definition of the CRF, "resilience lies in the ability of a system to suffer stresses and crises and, nevertheless, survive them", to be conceptually relevant to the OSS domain as well.

| Stressors and crises in OSS
Developer or user base loss to a competitive project, unsuccessful major releases, migration or forking of the project by the development team or parts of it, the appearance of new competitive software applications, hostile behavior by commercial rival solutions, technology evolution that the project fails to follow, or project sustainability issues are only some of the potential crises and stresses an OSS project may face during its life cycle. Here are some examples of OSS projects that have faced crises and stressors.
With Oracle acquiring Sun in 2010, the OpenOffice suite, which had previously been acquired by Sun as part of its StarDivision acquisition, became Oracle's property. The OpenOffice community saw this as a threat and created a nonprofit organization, The Document Foundation. They also forked OpenOffice and created LibreOffice, as a failsafe in case Oracle chose to discontinue OpenOffice as it did with the OpenSolaris operating system. 20 In this example, we see how a change in an OSS project's governance triggers a stressor for the project: because a for-profit company acquired another company alongside its OSS project OpenOffice, the community of OpenOffice chose to fork the project and work independently. In addition, the community created a nonprofit foundation to ensure that the newly created fork, LibreOffice, would remain an OSS project. Gamalielsson and Lundell 21 investigate the case of LibreOffice and how forking OpenOffice helped it evolve.
Core-js is a well-known OSS universal polyfill of the JavaScript standard library, which provides support for the latest ECMAScript standard and proposals. It is used by companies of significant size like LinkedIn, Netflix, Binance, and Spotify. The project is maintained by a community of 112 contributors, of which the founder contributes the majority of the commits, based on the contributors' insights statistics on the GitHub repository of the project. 22 Recently, the founder of the project published a post 23 in the project's GitHub repository expressing his concern that, with core-js facing sustainability issues, the future of the OSS project is compromised. We consider this example indicative of a crisis an OSS project can face. The project lacks a growing community that can ensure its maintainability and evolution. Right now, it seems that the founder of the project is the most active developer, which makes him a single point of failure for the longevity of the project.
At this point, we would like to clarify why, although the two aforementioned projects face stressors and could be considered strong candidates for evaluation with our OSS Resilience Framework, they are not part of the enterprise testing. In this work, we emphasize presenting the reasoning behind the design of the proposed assessment model and its connection with the CRF. Therefore, we chose to test the framework with a series of OSS projects that we intuitively classify as resilient and nonresilient, to validate that our model succeeds in distinguishing resilience. We document the limitations and threats to validity of our work in the last sections of this manuscript; they will allow us to build incrementally, in future iterations of our model, on the foundation provided by this research.
In this scientific work, we base our proposed framework on the CRF and present a possible adaptation of it to the OSS domain.
The rest of our work is organized as follows. In Section 2, we present the usage and potential impact of a resilience-based framework for OSS stakeholders. In Section 3, we briefly present the CRF. This framework was designed to assess urban resilience, and it has been the inspiration for our resilience-based framework for OSS projects. In Section 4, we analyze our proposed adaptation of the CRF to OSS projects. In Section 5, we present the resilience determination mechanism of the open source software resilience framework (OSSRF). In Section 6, we present the different types of indicators OSSRF incorporates, alongside an investigation of sensitivity and veto principles. Section 7 provides, as proof of concept, the application of the proposed framework to six open source projects, three of which are considered intuitively resilient and three intuitively nonresilient. For each of these projects, we study five consecutive major releases in order to observe how resilience changes over time. Section 8 presents possible limitations and threats to validity. Section 9 concludes our work by summarizing our findings and presenting ideas for future work. The final section acknowledges the help we received during this work and presents the relevant literature.

| CITY RESILIENCE FRAMEWORK (CRF)
The CRF, as presented in Da Silva and Morera, 19 is the result of research conducted by the Arup Institute and the Rockefeller Foundation with the aim of establishing an accessible, evidence-based definition of urban resilience. It takes into consideration the need for cities, as dynamic systems, to be able to adapt and work through challenges while, at the same time, building resilience in order "to survive in a continuously evolving, uncertain world". It studies the role of the city's stakeholders and how their actions may or may not promote the resilience of the city.
The fact that every city is unique creates a challenge in studying resilience. The authors approach this challenge by defining the City Resilience Index (CRI), a set of indicators and variables that allows cities to understand and measure their relative performance regarding resilience.
It is worth mentioning, as the authors of the CRF also stress, that the CRI does not aim to provide a world ranking of cities based on their resilience, nor a mechanism for comparing cities. Its main aim is to provide a framework by which a city can better marshal its resources, knowledge, and processes to become more resilient over time.
The CRF is, as of the time of writing, actively applied to cities via 100 Resilient Cities, 24 a nonprofit organization, primarily to evaluate the urban resilience of more than 90 cities around the world and, additionally, to assist cities in crises with tailor-made resilience strategies.
The CRI suggests four dimensions, which are analyzed into 12 goals. The goals are further decomposed into indicators that serve as KPIs when assessing the resilience of a city. The structure of dimensions and goals, which inspired the adaptation to OSS, is the following: 1. Health and well-being: Related to the people working and living in the city. Goals: (1) Minimal human vulnerability, (2) Diverse livelihoods and employment, (3) Effective safeguards to human health and life. Kritikos and Stamelos 25 offer preliminary work that presented the CRF, analyzing the dimensions and goals and arguing for the conceptual connection of the framework to the OSS domain. In terms of indicators, only the indicators for the source code and business and legal dimensions have been presented in Kritikos and Stamelos. 25 In the next section, we present, in further detail, the choices we made in adapting the CRF and CRI to the proposed OSSRF.

| USAGE AND POTENTIAL IMPACT OF A RESILIENCE-BASED FRAMEWORK FOR OSS STAKEHOLDERS
OSSRF was created with several stakeholders in mind. Because it focuses on the resilience aspect of an OSS project's evolution, it is a good companion to existing OSS assessment models focusing on quality, health, or trust. The use of both quantitative and qualitative indicators also allows experts to be involved in the assessment process, which we consider a benefit.
OSS communities and companies that practice OSS can use OSSRF to frequently monitor their OSS projects for resilience changes. This way, they can proactively identify stressors that could hurt the project. For example, if a decrease in resilience on the social level is identified, the organization or company can look for recent decisions that might have led members of the team to move away from the OSS project's community.
An inner source environment could also make use of the OSSRF model. Since inner source works similarly to open source, an inner source company could use the model to identify resilience changes in its projects. The metrics could be calculated the same way they would be in an OSS project community (using code analysis tools and code repository metrics). OSS consultants can also benefit from OSSRF: they can assess the resilience of specific OSS solutions as they evolve, in order to advocate these solutions to their clients. Governments and the public sector can use OSSRF to validate a proposed third-party OSS solution in terms of its resilience. The research community could use resilience as an extra factor when assessing OSS projects. Finally, individual contributors can use OSSRF to assess the resilience of an OSS solution before they join its community, or when they want to introduce an OSS tool into the development stack of the company they work for.
We were inspired by the CRF because we believe that OSS projects share conceptual similarities with cities.They are, as well, dynamic and continuously evolving systems.They have their own structural properties, which affect their robustness and ultimately their ability to last; hence, they affect their resilience.They attract people around them who form communities.When those communities flourish, they usually need, as it happens with cities, a governance model.Both cities and OSS projects might face stresses and crises.Sometimes, these challenges might endanger their very survival, depending on the severity and duration of the challenge.
In this section, we present our attempt to adapt the CRF to OSS projects. We aim to utilize the models and metrics proposed in the extensive literature on software quality, metrics, and evaluation to propose a framework that will assess the relative performance of an OSS project with respect to resilience. At this point, we would like to note that this scientific work provides one way of adapting the CRF to OSS and that it was designed under the subjective lens of the authors. Other interpretations and adaptations of the CRF are possible. However, each one must be validated with actual resilient or nonresilient OSS projects.
OSSRF follows the architecture of the CRF. Its structure consists of three layers. At the first layer, we have the four key dimensions: source code, business and legal, integration and reuse, and social (community). These dimensions are further analyzed into 12 goals. Finally, the goals are decomposed into indicators that provide a way of measuring the performance of the OSS project in each of the goals and dimensions, bottom-up. A visual representation of the first two levels of the OSSRF is given in Figure 1.
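The three-layer, bottom-up structure can be sketched in code. This is a hypothetical illustration only: the class names, the normalization of indicator values to [0, 1], and the equal-weight averaging are our assumptions for the sketch; the paper defines the layers but not a specific aggregation formula.

```python
from dataclasses import dataclass

# Hypothetical sketch of the OSSRF three-layer structure (indicators -> goals
# -> dimensions). Equal-weight averaging is an assumption, not the paper's rule.

@dataclass
class Indicator:
    code: str      # e.g., "I01"
    name: str
    score: float   # normalized to [0, 1] before aggregation

@dataclass
class Goal:
    code: str      # e.g., "G01"
    name: str
    indicators: list

    def score(self) -> float:
        # A goal's score is the mean of its indicators' normalized scores.
        return sum(i.score for i in self.indicators) / len(self.indicators)

@dataclass
class Dimension:
    code: str      # e.g., "D01"
    name: str
    goals: list

    def score(self) -> float:
        # A dimension's score is the mean of its goals' scores.
        return sum(g.score() for g in self.goals) / len(self.goals)

# Bottom-up: indicator scores roll up to goals, goals roll up to dimensions.
robustness = Indicator("I01", "Robustness", 4 / 5)   # Likert 4 mapped to [0, 1]
scalability = Indicator("I02", "Scalability", 3 / 5)
architecture = Goal("G01", "Architecture", [robustness, scalability])
source_code = Dimension("D01", "Source code", [architecture])
print(round(source_code.score(), 2))  # 0.7
```

In practice, each of the four dimensions would hold its own goals and indicators, and the per-release scores could then be tracked across major versions to observe resilience trends.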

| The source code dimension (D01)
We argue that for an OSS project, the source code (e.g., files and classes) is the structural unit of the project. Given the vast range of OSS applications, source code-related aspects, such as its architecture, maintainability, security, and testing, can be affected by a series of factors like the source code language or the software development style. Ampatzoglou et al 26 study the design quality of OSS projects in several domains and find that design quality does indeed vary between software domains. In Figure 2, we can see a detailed analysis of the dimension, including the relevant goals and indicators.

| Goals
In OSSRF, we propose the following goals for the source code dimension:
• Architecture (G01): This goal relates to the aspects of the source code that structurally strengthen the project and promote functionality and scaling. We propose this goal in alignment with the "minimal human vulnerability" goal found in the CRF, which considers cities more resilient when their infrastructures provide stability, effectiveness, sanitation, robustness, and access to energy supply or drinking water.
• Maintainability (G02): This goal relates to the maintainability of the source code. OSS projects, being collaboratively and voluntarily evolving projects, often need to go through phases of refactoring, correction, or necessary improvement. Especially after a crisis (e.g., a competitive open source solution wins the attention of the majority of the contributor base), an OSS project needs to regroup as soon as possible. In the CRF, we find the "diverse livelihoods and employment" goal, which similarly aims at maintaining social capital after a shock (e.g., with supportive financing mechanisms) and at improving or correcting by training and by promoting business development and innovation.
• Security and testing (G03): This goal relates to the aspects that promote the security and correctness of the OSS project. As with "effective safeguards to human health and life", a goal found in the CRF, this goal is about foundational structures that ensure a tested and fully functioning system.

| Indicators
The goals are further decomposed into the following indicators (we base our definitions mainly on Miguel et al.'s literature review; 7 where an indicator's definition is not based on that work, we provide references accordingly):
• Robustness (I01): It is defined as "the degree to which an executable work product continues to function properly under abnormal conditions or circumstances". We propose this as a qualitative indicator (Likert scale) described with the following values: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
• Scalability (I02): It is defined as "the ease with which an application or component can be modified to expand its existing capabilities. It includes the ability to accommodate major volumes of data". We propose that this should be a qualitative indicator (Likert scale) described with the following values: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
• Usability (I03): It is defined as "the degree to which the software product makes it easy for users to operate and control it". We propose that this should be a qualitative indicator (Likert scale) described with the following values: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
When the OSSRF is applied, the aforementioned indicators (robustness, scalability, and usability) should be provided as qualitative variables by an expert. We base this decision on Wasserman et al, 11 who treat these indicators the same way in OSSpal.
• Effectiveness (I04): It is defined as the percentage of critical bugs fixed in the last 6 months relative to all bugs fixed in the last 6 months. This indicator derives from the SQO-OSS quality model as published in Samoladas et al. 27 At this point, we would like to clarify that the effectiveness indicator can follow this definition only if the OSS project's issue tracker offers a category for critical bugs, separating them from the rest. In any other case, this indicator can be treated as qualitative (Likert scale) with the following values: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
• Corrections (I05): It is proposed as part of the maintainability goal to "try and capture the degree to which the software can be modified to serve correction purposes".
• Improvements (I06): It is proposed as part of the maintainability goal to "try and capture the degree to which the software can be modified to serve improvement purposes".
• Security (I07): It is defined as "the protection of system items from accidental or malicious access, use, modification, destruction, or disclosure".
We propose that this should be a qualitative indicator (Likert scale) with the following values: 1-limited, 2-little, 3-moderate, 4-good, and 5-great. We base our choice for this indicator on Wasserman et al, 11 who treat this indicator the same way in OSSpal.
• Testing process (I08): It is proposed as a Boolean indicator to verify that the OSS project under assessment follows a typical process as far as testing is concerned (i.e., unit testing, test-driven design techniques). Madeyski 28 provides an empirical study that shows the importance of test-driven techniques in software development.
• Coverage (I09): It is defined as "the ratio of basic code blocks that were exercised by some test, to the total number of code blocks in the system under test". 29 Therefore, this is proposed as a percentage indicator [0, 100%].
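The two quantitative indicators in this list, effectiveness (I04) and coverage (I09), can be sketched as small computations. The sketch below is illustrative only: the issue-record layout (dicts with "severity" and "fixed_days_ago" keys) and the block identifiers are hypothetical examples, not a real issue-tracker or coverage-tool API.

```python
# Illustrative computations for two OSSRF indicators; data shapes are assumed.

def effectiveness(fixed_issues, window_days=183):
    """I04: percentage of critical bugs among all bugs fixed in the last ~6 months."""
    recent = [i for i in fixed_issues if i["fixed_days_ago"] <= window_days]
    if not recent:
        return 0.0
    critical = sum(1 for i in recent if i["severity"] == "critical")
    return 100.0 * critical / len(recent)

def coverage(exercised_blocks, all_blocks):
    """I09: exercised basic blocks over total basic blocks, as a percentage."""
    total = set(all_blocks)
    if not total:
        return 0.0
    return 100.0 * len(set(exercised_blocks) & total) / len(total)

issues = [
    {"severity": "critical", "fixed_days_ago": 30},
    {"severity": "minor", "fixed_days_ago": 90},
    {"severity": "critical", "fixed_days_ago": 400},  # outside the 6-month window
    {"severity": "major", "fixed_days_ago": 10},
]
print(round(effectiveness(issues), 1))               # 1 critical out of 3 recent fixes
print(coverage(["b1", "b3"], ["b1", "b2", "b3", "b4"]))  # half the blocks exercised
```

In a real assessment, the inputs would come from the project's issue tracker and from a coverage tool's report for the release under evaluation.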

| The business and legal dimension (D02)
The business and legal dimension of an OSS project has become extremely important nowadays, as it provides the legal framework under which an OSS project can be used, reused, and even commercialized. OSS projects are mainly dependent on voluntary work, but as Popp 30 argues, we often see more mature projects utilizing open source business models to offer commercial services. The equivalent dimension of the CRF, economy and society, similarly focuses on the aspects of a sustainable economy, the rule of law, and community support. In Figure 3, we can see a detailed analysis of the dimension, including the relevant goals and indicators.

| Goals
We propose the following goals for the business and legal dimension:
• License (G04): This goal aims to investigate the legal aspects of an OSS project. In alignment with the CRF goal "comprehensive security and rule of law", this goal describes the legal framework under which the OSS project is published in order to proactively secure its openness and its availability to be used, reused, and shared according to the license terms.
• Market (G05): This goal is proposed in alignment with the "sustainable economy" goal of the CRF, which takes into consideration the aspects of the business environment, a diverse economic base, and business continuity planning. Following this paradigm, in OSS we study the corresponding aspects related to the market and commercial use of an OSS project. Red Hat, Inc., for example, is a well-known company that uses a dual-licensing model for some of its products. MongoDB, similarly, combines the Server Side Public License (SSPL), a source license that was introduced by MongoDB, and a commercial license. Under the SSPL, users are free to use, modify, and distribute the software for any purpose, as long as they comply with the license's terms. The commercial license, on the other hand, provides customers with additional features, support, and services.
• Support (G06): This goal relates to a rather controversial subject in OSS. Daffara 31 refers to the myth that OSS "is not reliable or supported" and argues against it. With the adoption of OSS software in vital parts of companies and organizations (e.g., web servers running Apache and Linux), it has become evident that professional support is key for an OSS project to become a success. Support helps end users feel safe (e.g., when a crisis strikes or during a shock) and provides a sense of belonging. The same holds for the "collective identity and community support" goal that we find in the CRF, referring to the beneficial role of collective identity and local community support, especially in times of crisis.

| Indicators
We further decompose the goals into the following indicators:
• License type (I10): Using a license for an OSS project, along with the type of this license, plays a significant role in the evolution and success of OSS. Välimäki and Oksanen 32 study how licenses, depending on the level of permissiveness (i.e., copyright versus copyleft) or the level of persistence (GPL versus LGPL), can affect the OSS project in terms of adoption and commercialization. Lindman et al 33 argue that licensing can often be a complex task for OSS teams, which is why structured license selection processes are found mainly in big OSS projects. Taking the above into consideration, we propose the following values for this indicator: 1-all restrictive, 2-not licensed, 3-mixed license, 4-persistent license (i.e., GPL), and 5-all permissive license (i.e., MIT).
• Dual licensing (I11): Whereas dual licensing is not necessarily a success factor for an OSS project, based on the literature, studies like Valimaki 34 and Daffara 31 argue that dual licensing is a key factor when it comes to commercialization of an OSS project.Therefore, we consider it a plus, when it comes to the market goal.We propose this indicator as Boolean: 0-for nondual licensed projects and 1-for dual licensed ones.
• Commercial resources (I12): Providing commercial resources (i.e., user guides or merchandise) is a known business model for OSS projects. We propose that this indicator is Boolean: 0-for projects with no commercial resources and 1-for projects with commercial resources.
• Commercial training (I13): Providing commercial training (i.e., video tutorials or Massive Open Online Courses [MOOCs]) is another known business model for OSS projects. We propose that this indicator is Boolean: 0-for projects with no commercial training and 1-for projects with commercial training.
Regarding commercial resources and commercial training, in Munga et al, 35 the authors study how well-known companies like IBM and Red Hat achieved competitiveness and economic growth by providing added-value services to their open source solutions.
• Industry adoption (I14): An OSS project that manages to attract the interest of the industry is more likely to become successful in the market.
Daffara 31 argues that OSS boosts both innovation and software development speed, whereas Saguy and Sirotinskaya, 36 in a scientific work about open innovation and the SME food industry, highlight that "open innovation offers SMEs a special avenue to better compete in the marketplace". We propose this indicator as Boolean: 0-indicating projects with no industry adoption and 1-indicating projects that have been adopted by the industry.
• Nonprofit / foundation support (I15): Many successful OSS projects are supported by nonprofit organizations. Sometimes, these organizations are created specifically to support an OSS project (i.e., the Free Software Foundation, Linux Foundation, WordPress Foundation, and Blender Foundation), as mentioned in Izquierdo and Cabot. 37 We define this indicator as Boolean: 0-for projects not supported by a nonprofit organization and 1-for projects supported by a nonprofit organization.
• For profit / company support (I16): As with the nonprofit / foundation support indicator, the "for profit / company support" indicator takes into consideration the existence of a company "attached" to or supporting an OSS project. There are examples of well-known projects that have helped companies build business models around them (i.e., Red Hat offering paid services for Linux installations or Automattic for WordPress). We propose this indicator as Boolean: 0-for projects not supported by a company and 1-for projects supported by a company.
• Donations (I17): Donations have been one of the best-known ways for OSS projects to earn money since the early days of OSS. Jansen 38 refers to donations as an "indicator of acceptance". We propose that this indicator is Boolean: 0-for projects not accepting donations and 1-for projects that accept donations.
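Collecting the business and legal indicators for one release can be sketched as follows. The example values and the simple normalization are illustrative assumptions: the paper defines the value scales (a 1-5 scale for I10, Booleans for I11-I17) but does not prescribe a specific aggregation formula.

```python
# Hypothetical snapshot of the business and legal indicators (I10-I17) for one
# release of some OSS project; the values shown are invented for illustration.
business_legal = {
    "I10_license_type": 5,         # 1-5 scale: 5 = all-permissive (e.g., MIT)
    "I11_dual_licensing": 0,       # Boolean indicators: 0 or 1
    "I12_commercial_resources": 1,
    "I13_commercial_training": 1,
    "I14_industry_adoption": 1,
    "I15_nonprofit_support": 0,
    "I16_company_support": 1,
    "I17_donations": 1,
}

def normalize(indicators):
    """Map each indicator to [0, 1]: the 1-5 license scale is rescaled,
    Boolean indicators pass through unchanged."""
    out = {}
    for key, value in indicators.items():
        out[key] = (value - 1) / 4 if key.startswith("I10") else float(value)
    return out

scores = normalize(business_legal)
# Equal-weight mean across the dimension's indicators (an assumption).
print(round(sum(scores.values()) / len(scores), 2))  # 0.75
```

Repeating this snapshot per major release would let an assessor observe how the dimension's score moves over time, which is the framework's stated aim.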

| The integration and reuse dimension (D03)
The third dimension of the CRF, related to place, was designed to study the connection and communication of the city's ecosystems. We designed OSSRF's respective dimension to study the levels of integration and reuse of the OSS project. Since OSS projects usually reuse components of other OSS projects or are themselves reused, it is critical to be able to measure the performance of a project in this respect. In Figure 4, we can see a detailed analysis of the dimension, including the relevant goals and indicators.

| Goals
We propose the following goals for the integration and reuse dimension:
• Initialization (G07): This goal is proposed with the ability of the project to be initialized in such a way as to support its uninterrupted functionality in mind. The same goal also relates to the agility of an OSS project regarding its configuration. We argue that this goal shares conceptual similarities with the "effective provision of critical services" goal of the CRF, which takes into consideration all those factors that predefine and protect critical assets, services, and ecosystems within a city.
• Dependencies (G08): This goal takes into consideration the dependencies that an OSS project uses in order to function properly. Dependencies are only as good for the project as their own quality and resilience, which is why this goal is aligned with the "reduced exposure and fragility" goal of the CRF.
• Reuse (G09): This goal is about the ability of the OSS project, as a whole or at the component level, to be reused by other OSS projects. Reusability of a project, apart from making it a good candidate to fulfill another software's requirements, is also an indicator of high-quality architecture and source code. In our framework, this goal aligns with the "reliable mobility and communications" goal of the CRF model, in the sense that reusable components promote mobility and tend to integrate or be integrated easily with other OSS projects.

| Indicators
We further decompose the goals into the following indicators (we base our definitions mainly on Miguel et al.'s literature review; 7 where an indicator's definition is not based on that work, it is followed by a separate reference to the respective source): 1. Installability (I18): It is defined as "the degree to which the software product can be successfully installed and uninstalled in a specified environment". We propose that this should be a qualitative indicator described with the following values, provided by an expert: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
We base our choice to treat the aforementioned indicator, installability, as qualitative (Likert scale) on Wasserman et al, 11 where Wasserman treats such indicators the same way in OSSpal.

Configurability (I19):
It is defined as "the ability of the component to be configurable". Meinicke et al 39 argue that highly configurable systems lead to exponentially growing configuration spaces, making quality assurance challenging. Based on that, we propose that this should be a qualitative indicator described with the following values provided by an expert: 1-limited, 2-little, 3-moderate, 4-good, and 5-great. 4. Resource utilization (I21): It is defined as "the degree to which the software product uses appropriate amounts and types of resources when the software performs its function under stated conditions". Silberschatz et al 41 study operating systems and highlight that, often, an OSS project is designed with the end user in mind and thus the focus is mainly on ease of use, performance, and security rather than resource utilization.
This difficulty in having a clear metric on resource utilization led us to propose that this should be a qualitative indicator described with the following values provided by an expert: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
5. Complexity (I22): or CC, is defined as "a quantitative measure of the number of linearly independent paths through a program's source code" by McCabe. 42 Bray 43 and Watson et al 44 provide the following groups of values for the CC metric, as seen in Table 2.
In those studies, there is a debate on whether the first group of values should stop at 10 or 15, the first group being described as without much risk and the second as of moderate risk. We propose an indicator that provides a fifth tier of complexity, taking into consideration the threshold of 15, as seen in Table 3. Without loss of generality, the originally proposed tier of 0-15, described as "simple program, without much risk", is further decomposed into a trivial program with not much risk and a simple program with little risk.
So, depending on McCabe's CC metric, this indicator's values are described as follows, based on the risk deriving from the complexity of the product: 1-very high risk, 2-high risk, 3-moderate risk, 4-little risk, and 5-not much risk.
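The tiering above can be sketched as a simple mapping. Since Table 3 is not reproduced in this excerpt, the boundaries beyond 15 below are our reading of the classical McCabe/Watson groupings and should be treated as assumptions:

```python
def cc_risk_tier(cc: int) -> int:
    """Map a McCabe cyclomatic complexity (CC) value to the 1-5 risk score.

    The 0-15 band is split at 10 into 'trivial' and 'simple' tiers as
    proposed above; the remaining boundaries (20, 50) are assumed from the
    classical McCabe/Watson groupings, since Table 3 is not shown here.
    """
    if cc <= 10:
        return 5  # trivial program, not much risk
    if cc <= 15:
        return 4  # simple program, little risk
    if cc <= 20:
        return 3  # moderate risk (assumed boundary)
    if cc <= 50:
        return 2  # high risk (assumed boundary)
    return 1      # very high risk
```

A higher score thus means lower risk, in line with the 1-to-5 scale used by the other indicators.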

Modularity (I23):
It is defined as "the degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components". As Viseur 45 states, high modularity of an OSS project is a competitive advantage for developers and, at the same time, allows users to gradually discover and use functionality (e.g., Mozilla Firefox add-ons). To the best of our knowledge, there is no well-established metric for assessing the modularity of an OSS project; therefore, we propose that this should be a qualitative indicator described with the following values (Likert scale) provided by an expert: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.

Instability (I24):
This metric has a range [0,1], where I = 0 indicates a maximally stable category and I = 1 indicates a maximally unstable category. The lower the value, the more stable the project; therefore, for this indicator, the final value that we use in the framework calculations is 1 − I.
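A minimal sketch of this indicator, assuming the metric referred to is Martin's instability I = Ce/(Ca + Ce), which is consistent with the "maximally stable/unstable category" language above:

```python
def instability(ce: int, ca: int) -> float:
    """Instability I = Ce / (Ca + Ce), where Ce is efferent (outgoing)
    coupling and Ca is afferent (incoming) coupling.

    Assumption: the paper's metric is Martin's instability metric."""
    if ca + ce == 0:
        return 0.0  # no couplings at all: treat as maximally stable
    return ce / (ca + ce)

def instability_indicator(ce: int, ca: int) -> float:
    """Value used by the framework: 1 - I, so that higher means more stable."""
    return 1.0 - instability(ce, ca)
```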

Cohesion (I25):
It is measured by Chidamber and Kemerer 47 using the lack of cohesion in methods (LCOM). We then use the thresholds provided by Ferreira et al 48 to evaluate the resulting value of LCOM, based on the size of the software, as shown in Table 4. Therefore, our cohesion indicator ranges between [1,3].
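A sketch of how an LCOM value could be ranked on the [1,3] scale. The threshold values per size class below are hypothetical placeholders; the actual size-dependent thresholds of Ferreira et al appear in Table 4:

```python
# Hypothetical (good, regular) LCOM upper bounds per project size class.
# The real values are the size-dependent thresholds of Ferreira et al.
LCOM_THRESHOLDS = {"small": (0, 20), "medium": (0, 30), "large": (0, 35)}

def cohesion_indicator(lcom: float, size_class: str) -> int:
    """Rank LCOM on the 1-3 cohesion scale (I25): higher LCOM means a
    greater lack of cohesion, so 3 = good, 2 = regular, 1 = bad."""
    good, regular = LCOM_THRESHOLDS[size_class]
    if lcom <= good:
        return 3
    if lcom <= regular:
        return 2
    return 1
```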

| The social (community) dimension (D04)
The last dimension of the CRF is about leadership and strategy. In OSS projects, extensive work is done by the community that is built around the software. Leadership, strategy, and knowledge acquisition usually derive from an OSS project's community (e.g., feature proposals, bug reports, translations, documentation, or testing). In the social (community) dimension, we are proposing goals and indicators regarding the development process, the governance, and the developer and user base, with the aim of studying how strong the social capital of the OSS project is. In Figure 5, we can see a detailed analysis of the dimension, including the relevant goals and indicators.

| Goals
We are proposing the following goals for the social (community) dimension: • Development process and governance (G10): This goal was designed to fully align with the "effective leadership and management" goal of the CRF. Its indicators verify that the project has all the necessary information to guide its users and developers through the process of evolving the software. It also provides the necessary mechanisms to ensure a friendly and open environment in which everyone can contribute to the project in the context of a governance model. We suggest that this specific indicator takes the value 1 if the governance model used is one of the state-of-the-art OSS governance models as seen in RedHat. 49
• Developer base (G11): This goal is related to the development community of the project. In order for an OSS project to be successful, this part of the community needs to always stay motivated and active. This goal is similar to the "integrated development planning" goal of the CRF.
• User base (G12): This goal is related to the end users of the OSS project. It can potentially include members of the development community that are also users of the software or have undertaken the role of a tester. This part of the community is the "customers" of the OSS project, and hence, it is really important for them to be engaged and motivated, as their feedback (e.g., feature proposals, bug reports, and so forth) is invaluable.
This goal is aligned with the "empowered stakeholders" of CRF.

| Indicators
We are further decomposing the goals to the following indicators:
FIGURE 5 Social (community) dimension, goals and indicators.
• Governance model (I26): The existence of a governance model for an OSS project is considered mandatory, especially if it wants to become self-sustainable using one or more of the business models discussed in the business and legal dimension. We propose this indicator as Boolean with the following values: 0-for projects that do not utilize a governance model and 1-for projects that utilize one.
• Project roadmap (I27): The project roadmap, as with the governance model, is an indicator of a well-organized project with clear goals and milestones that it wants to clearly share with its community. We propose this indicator as Boolean with the following values: 0-for projects that do not use roadmaps and 1-for projects that use roadmaps.
Mäenpää et al 50 study community aspects of well-known, hybrid OSS projects with commercial success and highlight both governance and roadmap existence as indicators of healthy OSS communities.
• Code of conduct (I28): Because OSS projects form global, diverse communities that work asynchronously, they need to set the rules of communication and interaction between their members. We propose this indicator as Boolean with the following values: 0-for projects that do not use a code of conduct and 1-for projects that use one.
• Documentation standards (I29): Another critical indicator for the success of community-driven projects, as OSS projects are, is standards for the documentation of the source code. These help newcomers easily understand the existing code base and smoothly become a part of the team. We propose this indicator as Boolean with the following values: 0-for projects that do not follow documentation standards and 1-for projects that follow them.
Butler et al 51 investigate work practices used by contributors to well-established OSS projects and highlight the use of both code of conduct and documentation standards.
• Coding standards (I30): Coding standards have always been a part of OSS projects' documentation. They serve as the source code development manual for the developers in the community of the OSS project, and they have been adopted by the leaders of the free / OSS movement (Linux Kernel, GNU, and so forth). Coding standards indicate professionalism and maturity for the OSS project. We propose this indicator as Boolean with the following values: 0-for projects that do not use coding standards and 1-for projects that use coding standards.
The following indicators were inspired mainly by the works of Robles et al 52 and Wasserman et al. 11 We will be providing respective references per indicator, where necessary.
• Developers attracted (I31): It is proposed as the rate of developers who joined the project in the last 6 months to the total number of developers.
• Open versus closed issues (I34): It is proposed as the number of issues closed in the past 6 months to the number of issues opened in the past 6 months. This indicator gives us a perspective of the activity of the community regarding bug fixing. This indicator ranges between [0,1].
• Source code documentation (I35): It is defined as the ratio of the number of comment lines of code (CLOC) to the number of lines of code (LOC). This indicator gives us a perspective of the documentation effort regarding the source code. This indicator ranges between [0,1].
• Localization process (I36): An OSS project localization process (i.e., translation of the software and / or project resources) is a best practice that is backed by the literature. Subramaniam et al 53 argue that software translations benefit the evolution and growth of OSS and thus should be one of the project leaders' priorities. We propose that this indicator has the following values: 0-no localization process is defined and 1-a localization process is defined.
• Issue tracking activity (reporting bugs) (I37): It is defined as the number of bugs reported in the past 6 months to the number of bugs reported since the beginning of the project. This indicator ranges between [0,1].
We would like to clarify that the coexistence of bug reports and open / closed issues serves the need to separately assess the bug reports that come from the end users from the technical issues that are usually reported by the developers of the project. We acknowledge that sometimes the developers of an OSS project also act as end users but, as Herzig et al 54 state, oftentimes end users' reports are misclassified as bugs when they are really features (i.e., code enhancement requests or customization requests).
• User guide (completeness) (I38): This indicator has the goal of evaluating the maturity of an OSS project's user guide. User guides have been adopted by the most evolved and well-known OSS projects (e.g., the GNU Emacs user guides 55 ). We propose the indicator's values as follows: 1-nonexistent user guide, 2-on hiatus / discontinued, 3-prerelease (alpha / beta / release candidate), 4-released (version 1.0+), and 5-commercial versions of the guide.
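Several of the community indicators above (I31, I34, I37, and so forth) are simple ratios over a 6-month window. A minimal sketch follows; the clamp to [0,1] is our assumption for ratios that could otherwise exceed 1 (e.g., more issues closed than opened in the window), and the counts used are hypothetical:

```python
def rate_indicator(part: float, whole: float) -> float:
    """Generic [0,1] ratio used by indicators such as developers attracted
    (I31) or open versus closed issues (I34): part / whole.

    Assumption: the result is clamped to [0,1], matching the stated range."""
    if whole == 0:
        return 0.0  # no denominator data: treat as zero activity
    return min(part / whole, 1.0)

# Hypothetical counts:
attracted = rate_indicator(4, 20)        # 4 of 20 developers joined recently
open_vs_closed = rate_indicator(30, 40)  # 30 closed vs 40 opened issues
```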
Since the assessment of a project regarding its resilience is based on indicators, we need a mechanism to determine whether the OSS project under review is resilient and how its resilience changes as the project evolves over time. Starting from the indicators' level, we will consider an OSS project successful towards a resilience goal when at least 50% of the goal's indicators are considered resilient.
Moving to the dimensions level, an OSS project will be considered resilient towards a dimension when at least 50% of the goals of this specific dimension are considered resilient. Finally, overall, a project will be considered resilient when at least two out of the four dimensions (50%) are considered resilient.
To assist this mechanism, we express all the values at the indicators level as percentages. More specifically, the framework has four types of indicators: Boolean, Likert scale, [0,1]-ranged, and percentage indicators, all of which are first transformed to percentages. On the next level of the framework, the goals level, we calculate the average of the indicators for each goal, and the resulting percentage is the value of the goal. Finally, for the upper level, the dimensions level, we follow the same process as with the goals: we calculate the average of the goals for each dimension, and the resulting percentage is the value of the dimension.
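One possible reading of the 50% mechanism described above can be sketched as follows; the data layout and function names are ours, not part of OSSRF:

```python
def project_resilient(dimensions: dict) -> bool:
    """dimensions maps dimension name -> {goal name: [indicator values 0..1]}.

    Reading of the mechanism (an assumption where the text is ambiguous):
    an indicator passes at >= 0.5; a goal is resilient when at least 50% of
    its indicators pass; a dimension when at least 50% of its goals are
    resilient; the project when at least 50% of dimensions (2 of 4) are."""
    resilient_dims = 0
    for goals in dimensions.values():
        resilient_goals = 0
        for indicators in goals.values():
            passing = sum(1 for v in indicators if v >= 0.5)
            if passing >= len(indicators) / 2:
                resilient_goals += 1
        if resilient_goals >= len(goals) / 2:
            resilient_dims += 1
    return resilient_dims >= len(dimensions) / 2
```

The same threshold rule is applied uniformly at every level, which mirrors the unweighted nature of the framework.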
At this point, we would like to point out that the proposed framework considers all the indicators, goals, and dimensions equal regarding their importance in the resilience framework (there are no weights). This decision, along with the aforementioned 50% threshold, is reported among the threats to validity, and it is in our future goals to study both and try to approach them empirically.

| APPLYING RESILIENCE FRAMEWORK ON OSS: INDICATORS AND TOOLS
In this section, we are applying the OSSRF to six OSS projects with the aim of providing a proof of concept that OSSRF can distinguish projects that are intuitively resilient from projects that are intuitively nonresilient. From the selected projects, three are intuitively resilient and three are intuitively nonresilient. In order to present the concept of an OSS project's resilience evolution over time, for each of the aforementioned projects (resilient and nonresilient), we will be assessing their resilience using OSSRF for a number of major, consecutive releases.
In the next part, we are going to analyze the application of the framework, providing, where necessary, the tools and information used for the proposed framework's application. For brevity, we will not be presenting the numbers for all 38 indicators for all six projects. Instead, we are presenting the results at the goals and dimensions levels. The readers of the work are encouraged to find the raw data online (https://doi.org/10.5281/zenodo.5576580).

| Qualitative indicators
The following indicators are qualitative and are evaluated by an expert: robustness, scalability, usability, corrections, improvements, security, installability, configurability, dependability, resource utilization, and modularity.
In the absence of an expert, and in order to keep the experiment as unbiased as possible, we will be using the middle value (3) for the aforementioned indicators for the resilient group of projects, expecting that the remaining, quantitative indicators will highlight the resilience of the project. For the nonresilient projects, we will adopt the value of (2) for the qualitative indicators. The reason is that, percentage-wise, the middle value gives a 60% score to each qualitative indicator, boosting the average above 50%. Since most of the nonresilient projects have a lifespan of about 2 years, little activity, and a small contributor community, we believe that, without loss of generality, we can inject a small penalty into qualitative indicators such as robustness, scalability, and usability.
To verify our decision, we conducted interviews with five experts. The background of the experts is as follows: 1. Expert #1: has working experience as an OSS practitioner (software engineering) and researcher for more than 10 years and holds a master's degree in Computer Science.
We presented the six projects, as seen in Section 7, to the experts (identified as resilient and nonresilient, which is exactly the way we ran our tests for this work), and we presented them with the definition of resilience as adopted from the CRF for the purposes of this manuscript. We also presented to them the qualitative indicators (and their definitions) as defined in this work. Then we asked them to independently provide, in their expert opinion, the appropriate values for the qualitative indicators (scoring them from 1 to 5, following the Likert scale).
For the nonresilient projects, across all the indicators, we have an average score of 2 from our experts, with the exception of the scalability indicator (I02), which scored an average of 1. This validates that, for the qualitative indicators, it was reasonable to inject the penalty we chose. For the resilient projects, across all the indicators, we have an average score of 4 from our experts, with the exception of security (I07), which got an average of 5. This validates that using the median value (3) in our tests was more conservative than an expert would probably be.
For brevity reasons, we chose not to share the raw data of the aforementioned analysis in the manuscript.The readers of the work are encouraged to find the raw data online (https://doi.org/10.5281/zenodo.5576580).

| Mixed indicators
There are also mixed indicators. Complexity, instability, and cohesion will be measured for object-oriented projects, whereas for non-object-oriented projects, these indicators will be considered qualitative and will be treated as described in the qualitative indicators section. Effectiveness is also a mixed indicator. If the project's issue tracker provides categorization for the critical bugs, then the indicator can be measured. Otherwise, it will be treated as described in the qualitative indicators section, taking an average value which, since it is a percentage indicator, will be 50%.

| Sensitivity and veto principles investigation
Our assessment model in its current version is unweighted, which means that all the indicators contribute equally to the decision on whether an assessment concludes with a resilient or nonresilient result. This, and the fact that 14 of our model's indicators are Boolean, leads to a concern that a specific value of a specific indicator (in the absence of weights) might be able to independently impact the decision of our assessment model.
To address that concern, we have conducted a one-factor-at-a-time sensitivity analysis. More specifically, we experimented by applying changes to one indicator at a time, keeping all the other indicators of the model at their baseline values. This analysis led to the following discoveries: 1. Factors with high sensitivity: The only factor that presents high sensitivity is the testing process (I08). More specifically, if this Boolean factor gets the value of 1 (true), it significantly increases the resilience score for the source code dimension (D01). We have added this finding to our limitations and threats to validity section.
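The one-factor-at-a-time procedure can be sketched as follows; the scoring function and indicator values below are placeholders, not the exact OSSRF computation:

```python
def ofat_sensitivity(score, baseline: dict, candidates: dict) -> dict:
    """One-factor-at-a-time (OFAT) sensitivity analysis: vary each indicator
    over its candidate values while all others stay at baseline, and report,
    per indicator, the swing (max - min) of the model score."""
    swings = {}
    for name, values in candidates.items():
        scores = [score({**baseline, name: v}) for v in values]
        swings[name] = max(scores) - min(scores)
    return swings

# Placeholder model: unweighted mean of (already normalized) indicators.
mean_score = lambda ind: sum(ind.values()) / len(ind)
baseline = {"I08": 0.0, "I18": 0.6}  # hypothetical baseline values
swings = ofat_sensitivity(mean_score, baseline,
                          {"I08": [0.0, 1.0], "I18": [0.2, 1.0]})
# A Boolean indicator such as I08 flips across its full 0..1 range, so it
# shows the largest swing in this toy setup.
```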

2. There are no indicators in our model that function as veto principles: Apart from the one-factor-at-a-time sensitivity analysis with baseline values, we repeated the analysis on sets of indicator values that lead to resilient and nonresilient projects, respectively. This way, we wanted to ensure that a single indicator cannot independently alter the result of our model, assessing a nonresilient project as resilient and vice versa.
For brevity reasons, we chose not to share the raw data of the aforementioned analysis in the manuscript.

| Tools used for indicator measurement
In order to be able to apply the OSSRF to the selected projects, the following tools were used:

| APPLYING RESILIENCE FRAMEWORK ON OSS: RESILIENT AND NONRESILIENT PROJECTS
In this section, we will be sharing the application of the OSSRF assessment model to five consecutive versions of three intuitively resilient and three intuitively nonresilient projects. We selected these projects in order to show that the model discussed in this manuscript can successfully distinguish between resilient and nonresilient projects as they evolve in time (hence, the five consecutive versions).
First, we will provide a brief description of each project, sharing some context that presents our reasoning for choosing it as intuitively resilient or nonresilient. Next, we present the scores at the goal and dimension levels for all the versions to which we applied the OSSRF. To better assist the interpretation of the results, we will also share charts showing the trend of the four dimensions of OSSRF as each project evolves from one version to the next. For clarity, we present the six projects and their respective versions in Table 5.

| Resilient projects
In this section, we present the application of the OSSRF to three projects we intuitively classify as resilient.

| Laravel
Laravel 58 is a well-known open source project in the domain of PHP web frameworks.According to this report, 59 it was the top solution for 2021.
Laravel was first released in 2011 (a 10-year lifespan) and, according to Github, 60 it has hundreds of contributors, is watched by more than 4000 users, and has been forked more than 22,000 times as of the time of writing. In Table 6, you can see the values of the OSSRF analysis for each version.
In Figure 6, we can see that all four dimensions score above 60% for all consecutive major releases.Therefore, following the resilience determination mechanism of our framework, the project is considered resilient.

| Composer
Composer 61 is a dependency manager for the PHP programming language. It has been evolving for 10 years now, has been forked nearly 6500 times, and has a community of nearly 1000 developers on Github. 62 In Table 7, you can see the values of the OSSRF analysis for each version.
In Figure 7, we can see that all four dimensions score above 50% for all consecutive major releases. Therefore, following the resilience determination mechanism of our framework, the project is considered resilient. It is worth mentioning that we see a slow decline over the last versions.

| PatternalPHP
PatternalPHP is written in PHP and is hosted on Github. It has a lifetime of less than 2 years; however, there are five consecutive releases that we studied. As we already mentioned, all the qualitative indicators for this project are proposed with the value of 2.
The rest of the indicators are measured using the tools described in Section 6.2.The results of the application of OSSRF to PatternalPHP are presented in Table 10.
TABLE 9 OSSRF assessment for the OKApi project.

| LIMITATIONS AND THREATS TO VALIDITY
In this section, we will be presenting possible limitations and threats to validity at the construct, internal, and external levels. Applying OSSRF to OSS projects should be done after taking into consideration the maturity of the project's community and its age. Intuitively, we would suggest selecting projects that have been active for at least 1 year and have formed a community of at least 10 contributors. Applying OSSRF to projects that do not meet the aforementioned criteria or that fall under the category of solo-maintained OSS projects may be misleading.
We proposed OSSRF as an adaptation of the CRF from the urban architecture domain to the OSS engineering domain. OSSRF aims to assess the evolution of the resilience of a project in time (e.g., from one major version to the next). We mapped the original framework with the aim of holistically assessing an OSS project, but despite the conceptual similarities, this mapping of the levels of the two frameworks, dimension-, goal-, and indicator-wise, was done under the subjective lens of the authors. It should be considered as one way of adapting the CRF to OSS.
For the adaptation of the model, we used indicators that are proposed in other assessment models that are based on software quality.
Since our model tries to investigate changes at the resilience level, the use of metrics from the quality assessment domain may also pose a threat to validity. For the full version of the OSSRF proposed in this work, some indicators are proposed as qualitative indicators evaluated by an expert.
TABLE 11 OSSRF assessment for the PHPExcel project.
We performed a sensitivity analysis on the proposed indicators of OSSRF. Some indicators present higher sensitivity. More specifically, the testing process (I08) indicator, which is a Boolean indicator, seems to present very high sensitivity. There are also different or alternative indicators to the ones selected by the authors of this work that could have been chosen. Apart from indicators, there are also best practices in the Agile methodologies, or related to OSS software engineering in general, that could be relevant to the resilience of a project. Some examples could be continuous integration, release early-release often, CI/CD best practices, or privacy-related decisions (especially in the light of the GDPR).
Moreover, for the measurement of the aforementioned six projects, we used specific tools to measure certain indicators. Those tools, commercial or open source, are referenced, and the interested reader can find more information on their respective websites about their functionality or use them to recreate the experiment (since they are available either for free or as evaluation demos). The selection of these specific projects was based on them intuitively being classified as resilient or nonresilient, in order to be able to validate that OSSRF can successfully distinguish between a resilient and a nonresilient project as they evolve in time.
The qualitative values for the six projects are preselected by the authors, and the reasoning behind those decisions is analyzed in Section 6.1.1. In order to provide validation for our decisions, we interviewed five experts. The number of participants in the interviews or the demographic or professional background of the experts might also pose a threat to validity. The tools selected for the application of the model are all written in the PHP programming language. In addition, the tools come from five distinct domains.

| CONCLUSIONS AND FUTURE WORK
In this work, we are trying to adapt the concept of urban resilience to OSS. Resilience, following the evolution of systems that are dynamic (i.e., OSS projects) and face stresses and crises, allows us to study how and whether those stresses and crises impact the survival of the projects. The proposed framework (OSSRF) is an evaluation approach that aims at providing a strong theoretical basis for OSS evaluation. We fully applied OSSRF to six open source projects, three of which are intuitively nonresilient and three intuitively resilient. We consider OSSRF a resilience-based model focusing on assessing the resilience of an OSS project as it evolves over time (from version to version). In this sense, we would argue that it is another way of assessing an OSS project, and that OSSRF and other quality assessment models for OSS should not be considered mutually exclusive.
From the results of the application, we see that the resilient and nonresilient projects are assessed as resilient and nonresilient, respectively, by OSSRF. The resilient projects show high scores in the business and legal and community dimensions, which validates the fact that the more active the projects, the more well-organized they are (in terms of licenses, code of conduct, contribution guidelines, and so forth).
The nonresilient projects show low scores in the business and legal and social (community) dimensions. This is probably due to the fact that those projects started as solo-maintained projects, not trying to attract a community, and/or because their initial intention was not to become commercial or sustainable. In any case, they were slowly abandoned and deprecated.
Another interesting finding, coming from the analysis of the nonresilient projects, is that our framework managed to identify as nonresilient not only projects with a small lifespan or a small number of contributors but also PHPExcel, a project that for a certain period of time (in the first half of its life cycle) attracted enough contributors and had a prolonged lifetime before it became archived. We argue that the fact that OSSRF follows consecutive releases of a project over time gives it the agility to pinpoint decreases in resilience and connect them with one or more of the dimensions of the model.
For future work, we intend to experimentally evaluate the qualitative indicators used in this version of the framework with an even bigger sample of independent OSS expert evaluators. In addition, we would also like to investigate whether domain-specific experts evaluate the qualitative indicators differently. We will also try to investigate whether the code repository, programming language, age, domain, or development philosophy affect the selected indicators. This would be a good opportunity to also consider adding weights at the indicator, goal, or dimension levels. We will also try to experiment with alternative indicators and revisit the sensitivity analysis of our model.
Right now, our model has some mixed indicators. As future work, we would like to experiment with two separate versions of our model: one targeting specifically OOP OSS projects and another one focusing on non-OOP OSS projects. In general, researching possible variations of the model that will work better with specific OSS application domains, programming languages, or even development stacks is something that our team is highly motivated to pursue in the future. Moreover, we would like to apply OSSRF to projects that faced specific crises or stresses over the years. We will try to identify if and how those crises affected the resilience levels of the projects and perhaps try to organize OSS project-related stresses and crises in a systematic way, for example, as a taxonomy.
We are also working on solutions that will allow us to semi-automate our assessment process, providing assistance to the interested stakeholders to more easily conduct experiments related to the resilience of OSS projects. Finally, it is among our most immediate goals to get feedback from key players of the international OSS community. We are interested in the opinions of developers, academics, stakeholders, project managers, and OSS-related company owners regarding OSSRF.

2. Economy and society: Related to the organization of cities on a social and economic level. Goals: (1) Sustainable economy, (2) Comprehensive security and rule of law, (3) Collective identity and community support.
3. Infrastructure and environment: Related to place and the quality of infrastructure and ecosystems. Goals: (1) Reliable mobility and communications, (2) Effective provision of critical services, (3) Reduced exposure and fragility.
4. Leadership and strategy: Related to knowledge of the past and adapting appropriately for the future. Goals: (1) Effective leadership and management, (2) Empowered stakeholders, (3) Integrated development planning.

FIGURE 2 Source code dimension, goals and indicators.
Since both (corrections and improvements) are indicators that can apply to several different aspects of the software (i.e., changes in the environment of the software, the requirements, and the functional specification), as Miguel et al 7 state, we propose them as qualitative indicators (Likert scale) with the following values: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
The company provides enterprise-level support and services for open source software, and it uses a dual-licensing model to provide both open source and commercial licenses for its products. Red Hat's most well-known product is Red Hat Enterprise Linux (RHEL), which is an open source operating system based on the Linux kernel. RHEL is available under a dual-licensing model, with a free, open source version that is licensed under the GPL and a commercial version that is licensed under a proprietary license. Customers who use the commercial version of RHEL receive access to enterprise-level support, security updates, and other services, while users of the open source version can access the source code and make modifications. MongoDB is another popular, cross-platform document-oriented database system that uses a dual-licensing model. The software is available under the Server Side Public License (SSPL), an open source license.
FIGURE 3 Business and legal dimension, goals and indicators.

3. Self-contained (I20): It is defined as "the function that the component performs must be fully performed within itself". McColl et al 40 conduct a performance evaluation of open source graph database projects and conclude that self-containment makes a project a better candidate over competitive ones. To the best of our knowledge, there is no well-established metric for the self-containment of an OSS project; therefore, we propose that this should be a qualitative indicator described with the following values provided by an expert: 1-limited, 2-little, 3-moderate, 4-good, and 5-great.
FIGURE 4 Integration and reuse dimension, goals and indicators.

TABLE 4
Abbreviation: LCOM, lack of cohesion in methods.

The indicator's value ranges between [0,1].

Active developers (I32): It is proposed as the ratio of developers that have been active, contributing to the project, during the last 6 months to the total number of developers. Depending on the version control system (VCS) the project uses (e.g., Trac, Git, Mercurial), active developers are the ones that have contributed commits to the VCS within the timeframe defined above. The indicator's value ranges between [0,1].

Number of open issues (I33): It is proposed as the ratio of the currently open issues to the total issues reported since the beginning of the project. This indicator gives us a perspective on the community's activity regarding bug reporting. Its value ranges between [0,1]. The lower the value, the fewer open issues there are; therefore, the final value used in the framework calculations is 1 - (number of open issues).
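The two community indicators above can be sketched in code. This is a minimal illustration, not part of the published framework: the function names and the per-developer commit-date input are our own hypothetical choices, and the 6-month window is approximated as 183 days.

```python
from datetime import datetime, timedelta, timezone

def active_developers_rate(commit_dates_by_dev, now=None, window_days=183):
    """I32 (sketch): share of developers with at least one commit in the
    last ~6 months, out of all developers known to the VCS."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    if not commit_dates_by_dev:
        return 0.0
    active = sum(
        1 for dates in commit_dates_by_dev.values() if any(d >= cutoff for d in dates)
    )
    return active / len(commit_dates_by_dev)

def open_issues_indicator(open_issues, total_issues):
    """I33 (sketch): framework value is 1 - (open issues / total issues
    reported), so a higher value means fewer open issues."""
    if total_issues == 0:
        return 1.0
    return 1.0 - open_issues / total_issues
```

Both functions return values in [0,1], matching the ranges stated above.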
indicators: Boolean indicators, Likert scale indicators, indicators ranging between [0,1], and percentage indicators. To simplify the visualization of decisions and results, we first express all indicator levels as percentages. For Boolean indicators, the 0/1 values are transformed to 0%/100%. For Likert scale indicators, depending on the number of available options (either 3 or 5 in our case), the value is divided by the number of possible answers (e.g., on a Likert scale indicator with 5 possible values, a score of 3 is expressed as 3/5 = 60%). For percentage indicators and indicators with values between [0,1], no transformation is needed, as they are already expressed as percentages.

2. Expert #2: Has working experience as a researcher in Computer Science for more than 3 years. Expert #2 holds a master's degree in Computer Science.

3. Expert #3: Has working experience as an OSS practitioner (software engineering) for more than 5 years. Expert #3 holds a master's degree in Computer Science.

4. Expert #4: Has working experience as an OSS practitioner for more than 5 years and as a researcher for more than 2 years. Expert #4 is a PhD candidate in the field of Computer Science.

5. Expert #5: Has working experience as a researcher for more than 15 years. Expert #5 is a postdoctoral researcher in Computer Science.

1. OSS project's official website: It was used to provide information for the following indicators: all the market-related indicators, all the support-related indicators, the coding standards and documentation standards indicators, and the user guide indicator.

2. OSS code repositories (e.g., GitHub): They were used to provide information for the following indicators: effectiveness (if the source code repository's issue tracker is used by the project), testing process, license, governance model, project roadmap, and code of conduct indicators, all of the developer base-related indicators (if the source code repository's issue tracker is used by the project), and the issue tracking activity indicator.

3. PHPCoverage Tool56: This open source tool was used to measure the coverage indicator for object-oriented projects written in PHP.

4. PHPQA57: This open source tool was used to measure the following indicators for object-oriented projects written in PHP: complexity, instability, cohesion, and documentation.

FIGURE 11 Resilience evolution between releases for PHPExcel at the open source software resilience framework (OSSRF) dimensions level.

and others are mixed indicators (i.e., measurable by tools for object-oriented projects, or treated as qualitative indicators to be evaluated by an expert for non-object-oriented projects). We considered each indicator, goal, and dimension equally important (no weights are applied in the current version of OSSRF), and the threshold for classifying a project as resilient or nonresilient is fixed at 50%. Aggregation from the indicators level to the dimensions level (bottom-up) is performed by calculating the average of the peer factors (indicators, goals, and dimensions, respectively), since they are equally valued.
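The unweighted bottom-up aggregation can be sketched as follows. The function name and the nested-dictionary input shape are our own hypothetical choices; the equal-weight averaging at each level and the fixed 50% threshold come from the text above.

```python
def _mean(values):
    """Unweighted average, used at every aggregation level."""
    return sum(values) / len(values)

def ossrf_scores(dimensions):
    """Aggregate indicator percentages bottom-up (sketch).

    dimensions maps a dimension name to {goal name: [indicator percentages]}.
    Each goal score is the mean of its indicators, each dimension score the
    mean of its goals, and the overall score the mean of the dimensions.
    A project is classed as resilient when the overall score is >= 50%.
    """
    dim_scores = {
        dim: _mean([_mean(indicators) for indicators in goals.values()])
        for dim, goals in dimensions.items()
    }
    overall = _mean(list(dim_scores.values()))
    return dim_scores, overall, overall >= 50.0
```

Because no weights are applied, every indicator within a goal, every goal within a dimension, and every dimension within the project contributes equally to the final score.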
Projects and versions where OSSRF was applied.
Abbreviations: OSS, open source software; OSSRF, open source software resilience framework.