MIT Tool Will Help Companies Assess Weak Links in Supply Chains

Researchers say the platform will help manufacturers more quickly rebound from natural disasters and political unrest.

By Joel Schectman

Massachusetts Institute of Technology researchers are building a tool that will help companies visualize and assess the vulnerability of their supply chains in the event of a catastrophe.

Companies are grappling with increasingly far-flung networks of suppliers and manufacturers. At the same time, many companies have reduced the number of backup suppliers in order to cut costs, said Thomas Dinges, an analyst at the research firm IHS Inc. This means that losing a single supplier can cause millions in lost revenue. Even worse, a natural disaster or political unrest can wipe out manufacturing in a critical regional supply hub. For example, the 2011 Japan earthquake and tsunami exposed the connection between certain regions in Japan and the dizzying array of manufacturers, including some quite small suppliers, responsible for critical auto and electronics components. The dual disasters caused shortages for automakers and electronics companies felt worldwide.

Reacting to the disruptions can take companies weeks as they untangle how the disruption from a supplier, which might be one among a network of hundreds, affects a product line, said Bruce Arntzen, a senior research director at MIT's supply chain management program. “When a disaster happens, we want companies to instantly see a map of the potential impacts and how they might be mitigated,” Mr. Arntzen said.

The new MIT system will allow companies to view a global map of supplier locations, and the tool will assign a dollar amount to the damage if a calamity took them offline, Mr. Arntzen said. Companies will feed product and supplier data, such as which parts come from which plant, into the new platform from existing logistics databases. Companies will also input the value of revenue from each product line. That data will feed into a Web-based platform built by Sourcemap Inc., a Cambridge, Massachusetts-based data visualization company. For each supplier, Sourcemap will then display the amount of revenue the company would stand to lose if that partner were taken offline. To calculate the revenue at risk with each supplier, the system will account for the number of days it would take to find a new partner and how long existing inventory will last. Mr. Arntzen says parts of his team’s system will be available commercially in six months.

Beyond alerting executives to the fragility of a supply chain before a disaster happens, the system will also help companies react more quickly as a disruption occurs. The platform will pull in real-time feeds from news events and alert company executives to the potential impact on a supply chain as a disaster or political event unfolds. The software will then help business leaders determine alternate supply networks. “We want companies to know the recovery time from the beginning. If they can replace a supplier in two days it might not be a big deal,” Mr. Arntzen said. “If it takes a year it’s very bad.”

The team’s biggest obstacle is finding a way to automatically feed in critical pieces of information not included in most corporate databases, Mr. Arntzen said. For example, while logistics databases usually include the corporate headquarters of suppliers, they often don’t have the physical address of plants, he said. And if companies have to continually enter the data manually, the system will lose much of its utility. “If you have to take two weeks to call a bunch of people, it defeats the whole purpose,” Mr. Arntzen said.
“Most data we can get automatically from corporate database but we’re trying to pull some in that the systems weren’t designed to capture.”
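As a rough illustration of the revenue-at-risk arithmetic described above, the sketch below multiplies a product line's daily revenue by the number of days a supplier gap would go uncovered. The function name, inputs, and formula are illustrative assumptions, not Sourcemap's or MIT's actual model.

```python
# Hypothetical sketch of the revenue-at-risk arithmetic described in the article.
# Field names and the formula are illustrative assumptions, not the actual model.

def revenue_at_risk(daily_revenue, recovery_days, inventory_days):
    """Estimate revenue lost if a supplier goes offline.

    daily_revenue   -- revenue per day of the product line fed by the supplier
    recovery_days   -- days needed to qualify and switch to a replacement supplier
    inventory_days  -- days of existing inventory that can cover the gap
    """
    uncovered_days = max(0, recovery_days - inventory_days)
    return daily_revenue * uncovered_days

# Example: $200k/day product line, 90 days to replace the supplier, 30 days of stock.
print(revenue_at_risk(200_000, recovery_days=90, inventory_days=30))  # 12,000,000
```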

Heliyon. 2021 Mar; 7(3)


Human factor, a critical weak point in the information security of an organization's Internet of things

Kwesi Hughes-Lartey

a School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China

d Computer Science Department, Koforidua Technical University, Koforidua, Ghana

b Institute of Electronic and Information Engineering UESTC in Guangdong, China

Francis E. Botchey

c Network and Data Security Key Laboratory of Sichuan Province, China

Associated Data

Data associated with this study has been deposited at https://www.kaggle.com/archangell/hipaa-breaches-from-20092017 .

Internet of Things (IoT) presents opportunities for designing new technologies for organizations. Many organizations are beginning to accept these technologies for their daily work, where employees can be connected, both on the organization's premises and on the “outside”, for business continuity. However, organizations continue to experience data breach incidents. Even though there is a wealth of research in Information Security, there seems to be little interest from the research community in human factors and their relationship to data breach incidents. The focus is usually on the technological component of Information Technology systems. Regardless of any technological solutions introduced, human factors continue to be an area that lacks the required attention. The assumption that people will follow expected secure behavioral patterns, and that system security expectations will therefore be satisfied, may not necessarily hold. Security is not something that can simply be purchased; human factors will always prove to be an important space to explore. Hence, human factors are without a doubt a critical point in Information Security. In this study, we propose an Organizational Information Security Framework for Human Factors applicable to the Internet of Things, which includes countermeasures that can help prevent or reduce data breach incidents that result from human factors. Using linear regression on data breach incidents reported in the United States of America from 2009 to 2017, the study validates human factors as a weak point in information security that can be extended to the Internet of Things, by predicting the relationship between human factors and data breach incidents and the strength of these relationships. Our results show that human factors statistically and significantly predicted five of the seven data breach incident types. Furthermore, the results also show a positive correlation between human factors and these data breach incidents.

Keywords: Data breach; Human behavior; Human factors; Information security; Internet of things

1. Introduction

Internet of Things (IoT) has been gaining ground rapidly over the years in relation to the Internet and computer networks [1], and according to [2] IoT mainly refers to the augmentation of physical objects and devices, where these objects and devices have sensing, computing and communicating abilities and are connected in a network to utilize them collectively. Over the years there has been a lot of research on IoT, mainly from a thing-oriented perspective. This research has covered areas such as identification of objects, tracking of objects, privacy control, sensing data visualization and object networking [1]. However, the interaction between humans and IoT is an arena where much remains to be explored [2], not to mention information security vulnerabilities in IoT that result from human factors. The concept of IoT was conceived by the Auto-ID Center at the Massachusetts Institute of Technology (MIT), which began to design and propagate a cross-company radio frequency identification (RFID) infrastructure. The concept was to make all objects in the world network-connected and to represent a vision in which the Internet extends into the real world of everyday objects [3]. The whole idea of ‘things’ in a network structure refers to either real or virtual actors, such as real-world objects, virtual data, intelligent software and human beings as participants. This is to create an environment that provides access to basic information from one object to another, to facilitate effective information sharing with others in the real world [4]. Nicolescu et al. [5] provide a better and more current working definition of IoT: the term IoT can mean different things to different actors, and the values associated with IoT do not merely vary with the more obvious technological, economic, and political factors, but also with behavioral patterns and cultural practices across individuals, communities, and demographics.

It must be noted that IoT does not have a unique definition. Nevertheless, a broad understanding of IoT is that it provides any service over the traditional Internet which enables or provisions human-to-thing, thing-to-thing or thing-to-things communications. Due to the connections and communications among these actors, there are many potential threats and attacks against the security and privacy of information or things, which has become a source of concern. Hence, these concerns need to be investigated and addressed appropriately. Such studies should simplify the design and development of IoT objects that will enable a plethora of services for human beings in different sectors, such as the health sector, in which different end-user devices (smartphones, laptops, tablets, etc.) and their peripherals participate.

Security breaches of sensitive data, such as personal data, have become a phenomenon that has occurred regularly over the last few years. These acts are perpetrated by malicious cyber actors who attack organizations' information systems through different means for information exfiltration [6]. They affect almost every sector, especially those with ‘valuable’ data such as the health sector. Today cyber threats keep increasing, and the vulnerability of industries such as healthcare is apparent. In 2015, Anthem Inc. was hacked, and this led to the exposure of millions of records in which individuals lost their Protected Health Information (PHI), potentially putting them at risk of identity theft and fraud [7]. Modern research shows how the focus of information security is mostly geared towards providing solutions through technology, such as a deep learning network architecture proposed for human activity recognition based on mobile sensor data [8]; a collaborative privacy-preserving deep neural network architecture (dubbed MSCryptoNet) based on a fully homomorphic cryptosystem [9]; a deep learning framework to identify smartphone users based on original smartphone sensor data acquired when users shake their smartphones or perform some daily actions [10]; predicting demographic information by leveraging smartphone application usage [11]; a lightweight device authentication protocol, speaker-to-microphone (S2M), which leverages the frequency response of a speaker and a microphone from two wireless IoT devices as an acoustic hardware fingerprint [12]; a partially hidden policy to protect private information in an access policy [13]; attribute-hiding predicate encryption with equality test, formulated to provide privacy preservation of user attributes and flexible search capability on ciphertexts simultaneously [14]; a semi-supervised generative adversarial network (GAN) for channel state information (CSI)-based activity recognition (CsiGAN) built on general semi-supervised GANs [15]; and a hybrid framework, Super-Recognition of Pedestrian Re-Identification (SRPRID), to strengthen pedestrian re-identification based on multi-resolution images captured by disparate cameras [16].

People, just like information technology, are part and parcel of information security. Even though there have been many technological advances in information technology, with a sophistication that makes it difficult for data breach incidents to occur on a technological level, the same cannot be said about people. Data breach offenders are becoming more and more aware of the fact that human factors, whether error or behavior, may be the weak point of an information security structure that enables their success [17]. This is a clear indication that the strength of any good information security system is in the hands of those who use it, and not just the technology. Even though the popularity of mobile devices and other sensors can be used to collect biometric information to ensure security [18], human behavior is still critical. The ideology that good technological systems can solve an organization's security problem is an indication of a lack of understanding of the problem and also a lack of understanding of technology [19]. [20] claims that the major type of information breach that organizations face is to some extent related to the exploitation of human resources, in terms of the errors they commit or their user behavior. The staff of an organization are seen as the Achilles heel of information security breaches [20], [21], [22], [23]. Research also shows that, most of the time, organizations have the ‘habit’ of consistently overlooking human factors as a key cause of security breaches and would rather prioritize their resources on technological controls and solutions [24]. The inescapable tie between information security and humans cannot simply be overlooked. When it comes to information security system evaluations, organizations will mostly evaluate the technological aspect of the system and will perform very little or no evaluation of human factors, which can greatly impact the vulnerability of the security system [25]. The protection of data or information depends on a good information security plan, one that does not overlook human factors and the various controls on user behavior and habits, in addition to the usual practice of technological controls [26]. This work makes use of data made available by [27] on Kaggle's official website, which contains the official dataset from the Department of Health and Human Services (DHHS) on all reported Protected Health Information (PHI) data breaches from medical centres, including dental centres, in the United States of America. The data provides an archive of breaches reported under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA is a US federal law that sets standards for how healthcare plans, healthcare clearing houses and healthcare providers protect the privacy of their patients' health information. The HIPAA privacy rule recognizes that it is not practicable to remove all risk of incidental disclosure; when there are policies with reasonable safeguards and appropriate limits on how PHI is used and disclosed, incidental disclosure does not violate the rule [28].

According to the National Conference of State Legislatures [29], all 50 states, Puerto Rico, the District of Columbia, and the U.S. Virgin Islands have passed legislation requiring government and private entities to notify individuals who have been impacted by information security breaches that may compromise their personally identifiable information. Typically the laws define what is classified as personally identifiable information in each state, which entities are required to comply, what specifically constitutes a breach, the timing and method of notice required to individuals, regulatory agencies and consumer credit reporting agencies, and any exemptions that apply, such as exemptions for encrypted data. Business entities or organizations also report information about information security incidents, threats, and vulnerabilities to an entity designated by US Homeland Security. These entities shall promptly notify and provide that same information to the United States Secret Service, the Federal Bureau of Investigation, and the Commission for civil law enforcement purposes, and shall make it available as appropriate to other federal agencies for law enforcement, national security, or computer security purposes [29].

The motivation of this study is to make available to organizations and business entities information about the potential connections between data breach incident types and human factors. The study provides an analysis covering nine years of data breach incidents. This is done by predicting the relationship between human factors and five data breach incident types. This paper also examines the relationship between human factors and the ‘other’ and ‘unknown’ breach types. These breach types are explained in section 4. To the best of our knowledge, the dataset used was last updated in October 2017, before and during its analysis. The classification of the dataset is discussed in section 4. The three main contributions of this paper are summarized as follows:

  • To propose a holistic information security framework for human factors and technology based on the literature. The proposed framework may be useful in reducing the number of data breach incidents attributable to human factors.
  • To predict the relationship between data breach incidents and human factors. The study takes into consideration different types of data breach incidents that happened as a result of human action and models the relationship between incidents and human factors. The predicted relationship will help us understand the extent to which human factors affect data breach incidents and information security at large. By doing this, the paper explains, from the dataset, the variation in data breach incidents that can be attributed to human factors, or determines whether human factors have no relationship with data breach incidents.
  • To evaluate the strength of the relationship between data breach incidents and human factors. By establishing the dependence or the association of data breach incidents and human factors, the analysis will not determine causation but rather the strength of the relationship.

The rest of this paper is arranged as follows: section 2 reviews related work. In section 3, the study proposes a framework for human factors and technology. In section 4, the classification of the dataset used is explained and an evaluation of the linear regression is provided. Section 5 validates human factors as a weak link in information security by providing an analysis of different data breach incidents with human factors underpinning them and by predicting the relationship between human factors and data breach incidents. A discussion of the regression models is given in section 6 and the conclusion in section 7.

2. Related work

In the world of IoT, information communication and collaboration among hospital staff is a problem in the healthcare industry. One of the factors that contributes to this is the lack of computer and information security knowledge on how to use health information, together with the lack of controls over the transmission and receipt of information during high-workload periods; this most often leads to many difficulties in the use of electronic health information, hence compromising the security of the system. The study further indicated that nurses, pharmacists and public health workers used health information more than physicians [30]. [31] elaborates on the importance of preserving privacy in IoT, where human behavior in the use of IoT cannot be downplayed.

2.1. Developments in cyber risk assessment for Internet of things

According to [5], IoT technology is associated with an entire spectrum of values that is yet to be assessed, and should be assessed on three main domains: economic, technical and social. The value of IoT cannot be reduced to one or two of these domains, even though such practices go on in today's world. It is important to note that social and cultural customs can shape and limit not only the economic aspects of IoT but also the technological part, and this becomes critical for good information security. This implies that research into the security of IoT must be broadened to social aspects, a domain where human factors can be addressed. They adapted to IoT both the Cyber Value-at-Risk model, a well-established model for measuring the maximum possible loss over a given time, and the MicroMort model, a widely used model for predicting uncertainty through units of mortality risk. The resulting new IoT MicroMort for calculating IoT risk was tested and validated with real data from BullGuard's IoT Scanner, covering over 310,000 scans, and the Gartner report on IoT connected devices. With these two calculations they developed the current state of IoT cyber risk and future forecasts of IoT cyber risk. Their work therefore focused on advances in the effort to integrate cyber risk impact assessments and offers a better understanding of economic impact assessment for IoT cyber risk. [32] proposed a decomposed cyber security risk assessment standard that combines concepts for building a model toward the standardization of impact assessments. The proposed model addresses two problems: new design principles for assessing cyber risk, and the identification of different risk vectors. Their paper focused on the analysis of the best approach for quantifying the impact of cyber risk in the IoT space. The model and the documented process represent a new design for mapping IoT risk vectors and optimizing IoT risk impact assessment. Radanliev et al. [33] argue that designing a holistic model for IoT risk assessment and risk management remains a challenge. The design of any assessment model must focus on IoT economic impact, IoT machine ethics, IoT sensor networks, IoT safety, IoT cyber security and IoT equipment. They discussed how interdisciplinary research could prove very beneficial in helping more people understand and consider the many issues around risk in IoT systems and, ultimately, contribute to the design of a holistic approach to IoT risk assessment.

These trends and advances in managing risks associated with Internet security provide few or no parameters or vectors for human factors as a major component of the assessments.

2.2. Behavior

According to [26], the notion of informal behavior is a central theme in describing those characteristics of people, organizations, and acts of communication which affect information. This means that the management of information security connotes the management of the integrity of communication. The argument then continues that behavior and communication should be considered as two sides of the same coin and that any kind of discordance in behavioral patterns could potentially lead to security breaches. There is, hence, an understanding that a cause-and-effect relationship between unwarranted behavior and a breakdown in communication may lead to a security breach. Furthermore, information systems and communication must be considered together; the idea that the two are essentially the same thing is not a new one. Organizations must recognize that an information system facilitates communication and is interlaced with threads of communication. It is therefore not far-fetched that any problem with the system of communication will most certainly affect the information security systems that facilitate it directly, and vice versa. Organizations and businesses that need to protect themselves against attacks often do so with the wealth of resources at their disposal for protecting their information technology. However, these resources barely have any link whatsoever to investment in making their staff immune to data breach incidents. Because of this, attackers know that people are likely to be the weakest link in the chain of information security and invest many hours to track down and exploit their vulnerability and carelessness, even without any guarantee of success. This is because, when dealing with people, they can be confident of discovering any number of vulnerabilities or careless behaviors while being just a little creative [34].

2.3. Human factors

In the academic literature, several theories have been postulated after investigations by researchers, reporting human factors to have had an impact on user behavior, both negative and positive. According to [35], as cited by [36], one of these theories is security culture (a cultural factor). This is a human factor that is associated positively with an employee's willingness to follow laid-down security procedures. In every organization a corporate culture can exist whether employees are aware of it or not; in other words, they may not be consciously aware of such a culture but may still be operating in it [37]. Organizational culture is not the only parameter to be considered when dealing with the cultural aspect of human factors; factors such as national culture, regional location and religion also matter. National culture has a direct effect on the level of information protection and on behavior. Studies conducted mostly on Western and Asian cultures indicate that Western organizational cultures are more individualist while Asian organizational cultures are more collectivist [38]. Another human factor to consider is personality. [36] claims that five traits are often used to describe people psychologically: openness, agreeableness, extraversion, conscientiousness and neuroticism. They further explain the relationships of these personality traits to information security compliance behavior based on a study conducted by [39]. This research sampled 120 users with a research model based on the five major personality traits just mentioned. Their results revealed that conscientiousness and agreeableness have a significant impact on user compliance with information security policy. Alotaibi et al. also describe another study [40], which was designed to understand the personality traits that underpin behavior and the extent to which they affect users' intention to comply with information security policy. They did this by implementing and empirically validating a comprehensive theoretical model that aimed to assess the impact of the personality factors. The results of their research on 481 participants showed that more open, conscientious and agreeable participants were likely to comply with information security policy. Conversely, participants who were more extroverted and neurotic tended to violate information security policy [40]. Alotaibi et al. consider perception as yet another human factor, which was investigated by [41]. That study considered perception to be a key component of human behavior and a major part of intelligence. Proctor argues that human interpretation or recognition of sensory information has a substantial impact on user behavior, so employees' perception of information technology has a great impact on their behavior and decisions [36]. This concept is complemented by [42], whose investigation showed that when organizations are dealing with users' perception of information security, that perception is determined by several factors, such as awareness, knowledge, controllability, severity and possibility, which in turn influence their behavior and decisions. It is very important to understand that when users have a complete picture and full awareness of what is happening in an information security policy space, it will positively impact their ability to recognize potential threats. Perception can therefore be considered as knowledge about a particular domain, and as such employees should keep up to date with the latest threat patterns and the consequent security requirements. Another human factor that influences user behavior, cited in the academic literature in the form of reported incidents, is gender. Hanley et al. [43] found in their study that 94 percent of insider incidents were associated with males, while a technical report by [44] also found that the majority of insider incidents were male-initiated. However, [45] makes a counter-argument that both genders pose an equal threat to information security: their study found that 50 percent of insider threats were associated with females and 50 percent with males. An examination of habit theory proposes that humans perform many actions without making conscious decisions and then become familiar with executing these actions. The argument is that information technology usage is directly related to habits: the actual behavior of users is highly influenced by their technology usage habits, and so some researchers think that habitual behavior explains information security policy non-compliance [36]. Pahnila et al. studied factors that impact users' compliance using a theoretical model, one of the factors being users' habits. The study was an empirical one, with a model covering over 245 participants from a Finnish company. The results revealed that users' habits have a significant impact on the intention to comply with information security policy [46]. Employee satisfaction, a component of human factors as defined by [35], is the employee's overall feeling of well-being while at work. It is widely accepted that an employee who is satisfied with his or her employer is most likely to conform to the organization's information security policy. Users who report positive feelings about their organization are expected to have a good sense of their responsibilities, especially in terms of conforming to information security policy. [35] further notes that some studies have investigated the relationship between job satisfaction and employee conformity and provided empirical support for the claim that job satisfaction has a positive impact on compliance with security policy. Their examples examined the influence of job satisfaction on users' information security policy compliance decisions, and in their theoretical research model they hypothesized that satisfaction is positively associated with security compliance intention. The research model was tested on 223 survey participants, and the results suggested that job satisfaction contributes to security policy compliance. The results further found a strong relationship between users' intention to conform to information security and job satisfaction.

The last human factor discussed is technology democracy. [47] explains that the systems and applications used at work and at home have converged and become intertwined over the years. Applications that are used in home environments are now used in business systems as well, which potentially challenges the status quo of technology use in many organizations. Users today demand more freedom to use a wider variety of applications and devices to do their work more effectively. This is classified as ‘technology democracy’. Again, when work and home environments are mixed, employees are more likely to demonstrate unintelligent behavior towards security [47].

3. Organizational information security framework for human factors in an Internet of things

In this section, an information security framework for human factors is proposed for an IoT, as shown in Fig. 1. The framework mainly focuses on the non-technological aspects, because information technology is much better protected than the users who use it [34]. Therefore, in this framework the discussion is centered on countermeasures to the breaches mentioned in section 4. The IoT part mainly represents the technological aspect and is divided into four parts, all requiring appropriate security:

  • Technology: Technology represents the type of processor chips, sensors, Radio-Frequency Identification (RFID), Near Field Communication (NFC), and cyber-physical systems [48].
  • ‘Things’: These are objects such as wearables, televisions, laptops, tablets, smartphones, cars, etc. [48]
  • Infrastructure: IoT infrastructure consists of access technologies, data storage and processing, data analytics, and security. These are the pillars that enable growth for future IoT solutions [49].
  • Software: A complete IoT system requires software. It addresses the domains of networking and action through platforms, embedded systems, middleware, and partner systems. The individual and master applications are accountable for data acquisition, device integration, real-time analytics, and process extension in an IoT network [49].
  • Security: Security for IoT is found in all the domains mentioned above. It is designed to ensure the steady working of all the functionalities in an operational system, so that connected devices can give a business a real boost. Any thing or device connected to the Internet can be exposed to cyber-attacks. IoT security is the technology area that provides ‘safety’ for the connected devices and networks in the Internet of Things [48], [49].

Figure 1

Organizational Information Security Framework For Human Factors in an IoT.

3.1. Hacking

3.1.1. User awareness

Organizations need to have a consistent policy that focuses on having their employees trained or educated, and updated on the best practices in protecting themselves from hackers which will in turn be a protection for the organization. To ensure that users don't become weak links in an IoT, they must be equipped with the relevant knowledge. This must be done to shield and reduce user susceptibility to hacking activities. [26] , [36] , [50] all highlight the impact of security awareness on employee behavior and its significance in influencing their intention to comply with the best practices. Employees are likely to violate security policies when there is a lack of awareness and knowledge.

3.1.2. User habits

User habits in an IoT are crucial to its security. Once again education is essential. Educating and training users on how to behave online will help modify their online habits. This will particularly help when a hacker is tracking users' habits online. For example, when users are contacted to verify an account, they need to be well informed not to comply, but to contact the appropriate entity in the organization from which the apparent verification request came to have its legitimacy verified. Habits such as clicking on hyperlinks will have to be discouraged, since hackers may use these links deceptively to obtain user credentials. Rather, users need to be well informed, as much as possible, to always type the organization's correct web address directly into a web browser's address bar, in order not to be vulnerable to phishing [51].
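As a small illustration of the link-checking habit described above, the sketch below flags hyperlinks whose host is not a known organizational domain; the domain list and helper function are hypothetical and purely illustrative.

```python
# Illustrative sketch only: flag links whose host is not a known organizational domain,
# nudging users to type the official address instead of clicking. The domain list is assumed.
from urllib.parse import urlparse

KNOWN_DOMAINS = {"example-org.com", "intranet.example-org.com"}  # hypothetical

def looks_suspicious(link: str) -> bool:
    host = urlparse(link).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in KNOWN_DOMAINS)

print(looks_suspicious("https://example-org.com/login"))         # False
print(looks_suspicious("http://example-org.com.evil.io/login"))  # True
```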

3.1.3. Password management

[34] argues that many people have an insufficient understanding of the inner workings of a computer, and because of that it is quite difficult for many to appreciate computer security principles. Often, user understanding is fuzzy on rather basic things, such as password management. Again, education remains the fundamental theme. User training should be aimed at helping end-users understand some of the best practices for ‘simple but complex’ password credentials. Password management must include the following (a minimal sketch of how such rules could be checked follows the list):

  • • Not writing down passwords anywhere
  • • Changing password at least every 3 months
  • • Not using words or phrases associated with the user
  • • Not using the same password for every account
  • • Every password must be at least 8 characters
  • • Passwords must be a mix of letters (uppercase and lowercase), numbers and special characters
  • • Not using default passwords

Users must be made aware that they are the weakest link in the chain of security and that hackers are willing to spend many hours tracking, monitoring, and exploiting new vulnerabilities without guaranteed success. Hackers are confident of discovering a number of passwords because people are involved.

3.2. Loss and theft

The loss and theft of mobile or portable devices are threats that could result in data loss. When it comes to data loss, one must ascertain whether it concerns data-at-rest or data-in-motion in order to ensure the confidentiality and integrity of the data. Portable devices, however, require more than that: protection must cover data on the device, data in the applications, and data over the network [52]. Organizations can set out the following to help mitigate device loss or theft.

3.2.1. Installing security software on portable or mobile devices

This is an important countermeasure: an organization's IT experts should pay the same attention to portable or mobile devices as they do to other pieces of hardware, such as servers on the corporate network.

3.2.2. Monitoring user behavior

Employees are most of the time oblivious to their devices being compromised, hence putting themselves at risk. Consistent monitoring of user behavior can therefore reveal anomalies indicating that an attack is under way. Furthermore, automated monitoring may also prove crucial in making sure an organization's IoT security policies are not infringed upon.

3.2.3. Establishing a clear and concise portable or mobile device usage policy

Organizational security policies must include a mobile or portable device usage policy. This should sufficiently cover the acceptable use of anti-loss and anti-theft procedures and guidelines and mandatory security settings. The guidelines should also provide for compliance monitoring and remediation of deficiencies.

3.2.4. Encrypting and reducing visibility into devices that have access to the organizational network

In cases where a device gets lost or stolen, it is best if a malicious user cannot easily access the data on the device. Taking over a lost or stolen device should also not allow the malicious user a ‘walk in the network’ of the organization. To achieve this, user and device identities must be placed in a comprehensive identity and access management (IAM) system.

3.2.5. Segmenting data and software in user devices that participate on the organizational network

To minimize the exposed attack surface when a device is lost or stolen, data segmentation can be employed by placing users with mobile or portable devices into role-based groups with different levels of access privileges. When device software is segmented, it prevents users from installing unwanted software that might interfere with the corporate network.

3.3. Unauthorized access or disclosure

To safeguard the system from unauthorized access or disclosure, logical access control must be implemented over an organization's critical and confidential information, to reduce the impact when there is a security breach. This controls who and what can access a specific IoT resource, as well as the type of access permitted. To do this, the control must be embedded into software such as applications, database management systems and operating systems, or implemented in network devices like routers [53]. For logical access control to be effective, the following must be in place:

3.3.1. Access control model

According to [53], the access control model is one that must define the rules and guidelines for how objects are accessed by subjects. It must provide confidentiality and integrity while ensuring accountability, and it can be implemented in three main ways: discretionary, mandatory, and role-based.

3.3.2. Information flow model

This is a model that must enforce the direction of information flow across security levels to ensure the confidentiality of information. It prevents the flow of higher-security-level information down to a lower level: a read permission allows a subject at a higher level to read an object at an equal or lower level, while a write permission allows a subject to write only to an object at an equal or higher level, and changes to a resource's security label are tightly restricted. The model hence ensures confidentiality but not integrity.

3.3.3. Integrity model

Unlike the information flow model, which ensures confidentiality, the integrity model ensures data integrity instead. For read permission, integrity is preserved by allowing a subject to read an object only when the object's integrity level is equal to or higher than the subject's; for write permission, the subject is allowed to write only to objects of equal or lower integrity.
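The read and write rules sketched in the two models above correspond closely to the classic Bell-LaPadula (confidentiality) and Biba (integrity) formulations; the snippet below is a simplified sketch of those level checks under that assumption, with illustrative labels, not an implementation of any particular access control product.

```python
# Simplified sketch of the level checks described above, following the classic
# Bell-LaPadula (confidentiality) and Biba (integrity) formulations; the levels
# and labels are illustrative assumptions.

CONF_LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_read_confidential(subject_level: str, object_level: str) -> bool:
    # "Read down": a subject may read objects at an equal or lower confidentiality level.
    return CONF_LEVELS[subject_level] >= CONF_LEVELS[object_level]

def can_write_confidential(subject_level: str, object_level: str) -> bool:
    # "Write up": a subject may write only to objects at an equal or higher level,
    # preventing higher-level information from flowing down.
    return CONF_LEVELS[subject_level] <= CONF_LEVELS[object_level]

def can_read_integrity(subject_integrity: int, object_integrity: int) -> bool:
    # "No read down": read only objects of equal or higher integrity.
    return object_integrity >= subject_integrity

def can_write_integrity(subject_integrity: int, object_integrity: int) -> bool:
    # "No write up": write only to objects of equal or lower integrity.
    return object_integrity <= subject_integrity

print(can_read_confidential("secret", "internal"))   # True
print(can_write_confidential("secret", "internal"))  # False
```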

3.4. Improper disposal

Proper device disposal is critical for every organization. Improper disposal could potentially lead to data confidentiality issues with both legal and ethical implications. It is therefore important to have a policy that provides for the proper cleaning or destruction of devices holding sensitive and confidential data and licensed software; policies can be developed around the following areas:

3.4.1. Sanitization

Organizations must ensure that when devices are to be disposed of, the data on them is removed using methods such as overwriting and erasing, as prescribed by the National Institute of Standards and Technology (NIST) special publication 800-88 [54].

3.4.2. Degaussing

Proper methods must be used when storage media are subjected to a powerful magnetic field, which removes the data by rearranging the magnetic domains on the electronic media and completely erasing their content. For example, computer hard drives and other electronic storage devices such as computer tapes store data within magnetic fields on layers of magnetic materials [54].

4. Classification of breach incident types of dataset

The dataset used in this work consists of over 1600 recorded cases of data breaches from October 2009 to November 2017, specifying the name of the covered entity (CE), the state the entity is located in, the number of individuals affected, the date of submission of the breach, the type of breach, the location of the breach, whether a business associate was present, and a description of the breach. To stay within the objective of predicting how human factors influence data breaches in organizations, only a selected number of parameters are considered: the date of submission of the breach, the type of breach, and the description. The descriptive parameter narrates what led to the breach. A few of the records had missing values in all the columns except for the year (date of submission of the breach); such records were removed and not considered in this study. To clean the data in a way that supports quantitative analysis, the descriptive column, which is in string format, was examined record by record, case by case; where it was indicative of human factors, that is, if the underlying cause of the breach was directly due to human error or behavior, a score of 1 was assigned, otherwise 0. The data was then extracted according to the type of breach, the year the breach happened, and the number of human factors associated with it for that particular year. We assume that, even though undetected and unreported data breach incidents may be significant to the findings of this study, the reported data breaches provide confidence that they typify data breach incidents in general.
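A rough sketch of this cleaning and scoring step is shown below. The column names and the keyword heuristic are assumptions for illustration only; the scoring in this study was performed manually, record by record, on the breach descriptions.

```python
import pandas as pd

# Hypothetical local copy of the Kaggle dataset; column names are assumptions.
df = pd.read_csv("hipaa-breaches-2009-2017.csv")
df = df.dropna(subset=["Type of Breach", "Web Description"])  # drop records with missing values

HINTS = ["error", "mistakenly", "lost", "stolen", "disclosed", "misdirected"]

def score_human_factor(description) -> int:
    """1 if the narrative suggests human error or behavior, else 0 (the paper scored this manually)."""
    return int(any(h in str(description).lower() for h in HINTS))

df["HF"] = df["Web Description"].apply(score_human_factor)
df["Year"] = pd.to_datetime(df["Breach Submission Date"]).dt.year

breaches_per_type_year = pd.crosstab(df["Year"], df["Type of Breach"])  # counts per type per year
hf_per_year = df.groupby("Year")["HF"].sum()                            # human-factor total per year
```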

An analysis of variance (ANOVA) for linear regression is used for the analysis in this study, together with Pearson's r, which measures the linear relationship between two continuous variables. The regression decomposition used is DATA = FIT + RESIDUAL, that is:

$$ (y_i - \bar{y}) = (\hat{y}_i - \bar{y}) + (y_i - \hat{y}_i) \tag{1} $$

where the first term is the total variation of the dependent variable y about its mean, the second term is the variation of the fitted values about the mean, and the third term is the residual. Squaring each of the terms in equation (1) and summing them over all the observations n gives the equation

$$ \sum_{i=1}^{n}(y_i - \bar{y})^2 = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 \tag{2} $$

Equation (2) can be rewritten as $SST = SSM + SSE$, where SST is the notation for the total sum of squares, SSM the model sum of squares and SSE the error sum of squares. The coefficient of determination is the ratio of the model sum of squares to the total sum of squares, $r^2 = SSM/SST$; this formalizes the interpretation of $r^2$ as the fraction of the variability in the data that is explained by the regression model. The variance $s_y^2$ is given by:

$$ s_y^2 = \frac{SST}{DFT} = \frac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{n-1} \tag{3} $$

where DFT is the total degrees of freedom. The mean square for the model is

$$ MSM = \frac{SSM}{DFM} \tag{4} $$

where DFM is the model degrees of freedom; in equation (4), DFM = 1 because the regression model has one explanatory variable x. The corresponding mean square error (MSE) is the estimate of the variance about the population regression line ($\sigma^2$):

$$ MSE = \frac{SSE}{DFE} = \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{n-2} \tag{5} $$

The ANOVA calculations for the regression are shown in Table 1 .

Equation (6) is used to compute the correlation matrix of all the dependent variables. It is the Pearson correlation matrix between the variables $x_j$ and $x_k$:

$$ r_{jk} = \frac{\sum_{i=1}^{n}(x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)}{\sqrt{\sum_{i=1}^{n}(x_{ij} - \bar{x}_j)^2}\ \sqrt{\sum_{i=1}^{n}(x_{ik} - \bar{x}_k)^2}} \tag{6} $$

Table 1. ANOVA for Regression of Human Factors and Types of Breach.
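As an illustration of how the quantities in equations (1)–(5) combine, the sketch below computes the sums of squares, r², and the F statistic for a simple linear regression on synthetic data; it is not the pipeline that produced Table 1.

```python
# Sketch of the ANOVA quantities in equations (1)-(5) for a simple linear regression.
# The data here are synthetic; this is not the pipeline that produced Table 1.
import numpy as np

x = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)   # e.g. human-factor counts
y = np.array([5, 9, 14, 18, 25, 27, 33, 38, 41], dtype=float)     # e.g. breach counts

n = len(y)
b1, b0 = np.polyfit(x, y, 1)           # slope and intercept of the least-squares line
y_hat = b0 + b1 * x

sst = np.sum((y - y.mean()) ** 2)      # total sum of squares
ssm = np.sum((y_hat - y.mean()) ** 2)  # model sum of squares
sse = np.sum((y - y_hat) ** 2)         # error sum of squares

dfm, dfe = 1, n - 2
msm, mse = ssm / dfm, sse / dfe
f_stat = msm / mse                     # F statistic of the kind reported in Table 1
r_squared = ssm / sst                  # fraction of variability explained

print(round(f_stat, 3), round(r_squared, 3))
```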

4.1. Characterization of breach incident types

The study characterizes the different types of breaches according to the breach type and its description as reported in the dataset:

4.1.1. Theft

These are breaches that occurred as a result of an electronic device being physically stolen, subsequently leading to the breach of information. Devices that were stolen included desktop computers from front-desk areas, backup tapes, records stolen from an entity's office, laptops from offices and employee vehicles, USB drives, and external hard drives containing the PHI of several individuals.

4.1.2. Loss

A breach classified as loss is one which involved the misplacement of data that may have led to the data being compromised. It is important to note that the dataset does not explicitly rule out the possibility of the data having been stolen, which would mean classifying it as theft; nor does it imply that the loss itself caused other types of attacks. As a result, this work classifies loss as a type of breach based on the cases reported under HIPAA: where it is not known how data went missing before later being compromised, the breach is classified as loss.

4.1.3. Unauthorized access or disclosure

Breaches that happened as a result of former workforce members, while still employed, downloading the names and certain personal information of clients are classified as Unauthorized Access or Disclosure (UAD). UAD also includes employees or the CE sharing PHI with unauthorized people. In one case, software vendors and business associates (BA) of the CE failed to disable a software switch, which allowed Google to index files on the CE's hosted website containing the electronic Protected Health Information (ePHI) of thousands of individuals. The ePHI included individuals' names, addresses, zip codes, Medicaid numbers, and primary care physicians' names and addresses. Other cases of unauthorized access included employees sending Medicaid reports to their email, leading to a breach that affected over 270,000 individuals; the types of protected health information (PHI) involved in that breach included names, addresses, phone numbers, social security numbers, and Medicaid identification numbers.

4.1.4. Improper disposal

A breach that happened as a result of the CE mailing envelopes containing PHI that arrived at the contracted provider's address damaged, with the contents missing, is classified as Improper Disposal (ImD). Envelopes that were damaged at the postal facility where they were processed, and that contained member claim information including members' names, identification numbers, claim numbers, dates of service, procedure codes, charges, and provider information, are also ImD. Breaches that occurred as a result of an employee erroneously distributing emails containing the ePHI of thousands of individuals to the wrong recipients are classified the same way. The last category of ImD covers cases where electronic devices classified as “spoilt” were trashed with data still accessible on them, which led to a breach; after an investigation by the CE, it was found that the way the devices were disposed of was the cause of the breach.

4.1.5. Hacking or IT incident

Breaches classified as hacking or IT incidents (HITi) include events such as a foreign Internet Protocol (IP) address accessing a CE's website that contained a database holding the PHI of clients, or an unknown assailant associated with a foreign IP address attempting to bypass the security mechanisms of a computer server of a former third-party administrator and BA. Many individuals were affected by such breaches. The servers contained PHI of some of the CE's participants, such as names, addresses, social security numbers, and clinical information, including information about healthcare providers and types of service. Cases where file servers at an entity's offices are compromised and impermissibly accessed, potentially exposing the prescription records of thousands of individuals to an unauthorized source via electronic transmission, are also classified as HITi. In such cases the PHI involved in the breach included names, addresses, diagnostic codes, names of medication prescribed, medication costs, and some social security numbers. Cases that involved computer malware detected on a CE's unencrypted billing software program are also classified as HITi. In these incidents the CE did not know when the malware entered its system, and thousands of individuals were potentially affected. The types of PHI involved included demographic, financial (claims information), and clinical information (diagnoses/conditions, medications, lab results, and other treatment information). Finally, instances where database web servers containing the ePHI of many clients were breached by unknown external person(s) for use as game servers also fall under HITi. The ePHI on the database web servers included names, dates of birth, types of x-rays, and dates of x-rays.

4.1.6. Other and unknown

Other and Unknown are also types of breaches. The former are breaches that do not fit, or are close to two or more of, the aforementioned types; the latter are breaches where, even though it was detected that a breach had taken place, there was no way of knowing the actual nature of the breach or its proximity to any of the previous classifications. In other words, breaches that were reported with most of the parameters given but with a missing type of breach were classified in this study as unknown.

5. Validation of human factors as a weakness in information security

Nine (9) yearly observations were extracted from the 1722 records reported from 2009 to 2017, a period of nine (9) years. The number of reported cases for each breach incident type was computed for each year in separate columns, and the overall number of human factors associated with that year's breach incidents was computed in one column based on the descriptive column of the dataset. Human factors (HF) are used as the independent variable, while the different types of breaches are used as dependent variables in a linear regression computation.
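A hedged sketch of this set-up is shown below: one linear regression per breach type, with the yearly human-factor count as the single predictor. The yearly values are synthetic placeholders, not the counts extracted from the dataset.

```python
# Sketch of the regression set-up: one linear regression per breach type, with the
# yearly human-factor count (HF) as the single predictor. The numbers below are
# synthetic placeholders, not the paper's actual yearly counts.
from scipy import stats

hf_per_year = [30, 55, 70, 85, 100, 120, 140, 150, 40]  # nine yearly HF totals, 2009-2017 (synthetic)
breach_counts = {
    "HITi":  [2, 4, 6, 9, 12, 14, 30, 20, 1],
    "UAD":   [1, 4, 10, 18, 35, 45, 55, 50, 7],
    "Theft": [12, 80, 60, 60, 65, 58, 55, 30, 2],
}

for breach_type, counts in breach_counts.items():
    result = stats.linregress(hf_per_year, counts)
    dfe = len(counts) - 2
    f_stat = result.rvalue ** 2 / (1 - result.rvalue ** 2) * dfe  # F = t^2 for a single predictor
    print(f"{breach_type}: slope={result.slope:.3f}, r={result.rvalue:.3f}, "
          f"F(1,{dfe})={f_stat:.2f}, p={result.pvalue:.4f}")
```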

5.1. Results

5.1.1. Analysis of data breach incidents

The distribution of data breaches shown in Fig. 2 typifies the weakness that human factors pose in an information security set-up. Of the breaches in which human factors led to the breach, 48.02% were attributed to theft and 27.11% to UAD, giving them a combined share of 75.13%. Thus, theft and UAD are the two most common breaches to occur when a data breach is a result of human factors. HITi attacks formed 10.36% of breach cases in which human factors were at the center of the breach; this may not be as large as theft and UAD, but it is still reasonably high, clearly showing how human factors can easily make a ‘secure’ information security set-up vulnerable to such attacks. The study also revealed that 7.61% of breaches caused by human factors were attributed to loss, 2.13% to ImD and 4.37% to others.

Figure 2

Distribution of Breaches applied to Human Factors.

The distribution illustrated in Fig. 2 only indicates the overall percentages from 2009 to 2017. Fig. 3 shows the yearly distribution of human factors applied to the different types of breaches for each year as reported from 2009 to 2017.

The study revealed that human factors underpinning a breach of theft were closely distributed across the years, with the lowest shares in 2016, 2009, and 2017, accounting for 6.77%, 2.54%, and 0.21% respectively, while 2010, 2013, 2014 and 2015 recorded 18.18%, 14.59%, 12.47% and 12.47% in that order, and 2012 and 2011 had 13.11% each.

The breach of loss caused by human factors was highest in 2015, accounting for 28%, followed by 2014 with 18.67% and 2013 with 14.67%. The remaining years' results were 13.33% for 2016 and 12% for 2012, while 5.33%, 4%, 2.67% and 1.33% were recorded for 2010, 2011, 2017 and 2009 respectively.

In 2016 there was a huge rise in human factors concerning information security. The study revealed that 42.16% of the human elements that led to HITi occurred in 2016, while 2013, 2015, and 2014 had 14.71%, 13.73%, and 12.75% respectively. The results also showed 2012 with 7.84%, 2011 and 2010 with 3.92% each, and 2017 accounting for 0.98%.

The largest percentages of ImD breaches were recorded in 2015 and 2013, at 28.57% and 23.81% respectively, while 19.05% of ImD breaches resulting from human factors occurred in 2016. The year 2014 accounted for 14.29%, and the remaining years, 2012 and 2010, for 9.52% and 4.76% respectively.

Except for 2009, the study revealed that from 2009 to 2017, for breaches of UAD whose descriptive parameter alluded to human factors as the problem, 2015 accounted for 27.34%, 2016 for 23.60% and 2014 for 17.98%. 2013 accounted for 16.48%, while 2012, 2011, 2017 and 2010 comprised 6.74%, 3.75%, 2.62% and 1.50% respectively.

Our study further showed that, in the period under consideration, 2009 to 2017, five (5) of the years had other breaches that were caused by underlying human factors as reported in the descriptive parameters of the dataset, with 2014 at 39.53%, 2010 at 29.91%, 2012 at 18.60%, 2013 at 11.63% and 2009 at 2.33%.

Figure 3

Distribution of Yearly Human Factors Applied to Breaches 2009 to 2017.

5.2. Relationship between human factors and data breach incidents

5.2.1. Human factors statistically predict breach types

From the ANOVA for the linear regression in Table 1, the following observations can be made about HF and the dependent variables. Firstly, with HITi, it can be established that HF could statistically and significantly predict HITi: the F statistic is equal to 13.259 on (1, 7) degrees of freedom, and the probability of observing a value greater than or equal to 13.259 is less than 0.01. Secondly, the ImD computation showed that HF could statistically and significantly predict ImD, giving an F statistic of 8.173 on (1, 7) degrees of freedom, with a probability of observing a value greater than or equal to 8.173 of less than 0.05. Next, the independent variable HF was regressed on the breach type loss as the dependent variable. The analysis shows that HF could statistically and significantly predict loss, with an F statistic equal to 12.406 on (1, 7) degrees of freedom, and a probability of observing a value greater than or equal to 12.406 of less than 0.05. The analysis then shows that HF could statistically and significantly predict UAD, giving an F statistic of 21.530, again on (1, 7) degrees of freedom; the probability of observing a value greater than or equal to 21.530 is less than 0.005. The next dependent variable measured against HF is theft. Unlike the previous analyses, HF could not statistically and significantly predict theft, with an F statistic of 3.788 on (1, 7) degrees of freedom; the probability of observing a value greater than or equal to 3.788 is greater than 0.05. Likewise, with an F statistic of 1.159 on (1, 7) degrees of freedom, HF could not statistically and significantly predict the other breach type; the probability of observing a value greater than or equal to 1.159 is greater than 0.05. Finally, the ANOVA for the linear regression between HF and unknown indicated that HF could not statistically and significantly predict unknown, with F(1, 7) = 0.988 and a probability of observing a value greater than or equal to 0.988 of greater than 0.05.
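As a sketch, the significance statements above can be reproduced by converting the reported F statistics to p-values with the F(1, 7) survival function; the F values below are copied from the text, and the 0.05 threshold is used purely for illustration.

```python
# Sketch: converting the reported F statistics (1 and 7 degrees of freedom)
# into p-values using the F survival function. The F values are copied from the text.
from scipy.stats import f

reported_f = {"HITi": 13.259, "ImD": 8.173, "Loss": 12.406, "UAD": 21.530,
              "Theft": 3.788, "Other": 1.159, "Unknown": 0.988}

for breach_type, f_value in reported_f.items():
    p = f.sf(f_value, dfn=1, dfd=7)   # P(F >= f_value) under F(1, 7)
    print(f"{breach_type}: F(1,7)={f_value}, p={p:.4f}, significant at 0.05: {p < 0.05}")
```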

5.2.2. Variation explained by human factors

Table 2 reports the proportion of the variation in each dependent variable that is explained by the independent variable HF. HF accounted for 60.5%, 47.3%, 58.8%, 72.0%, 25.8%, 1.9% and −0.01% of the variability in HITi, ImD, Loss, UAD, Theft, Other and Unknown respectively. In other words, the remaining 39.5%, 52.7%, 41.2%, 28%, 74.2% and 98.1% of the variability in HITi, ImD, Loss, UAD, Theft and Other is attributable to non-human factors, and a separate empirical study would be needed to ascertain the degree to which those factors affect these breach types. This result indicates that the success of attacks such as hacking and unauthorized access or disclosure is strongly influenced by human factors, and so organizations must adopt an information security framework that comprehensively covers them.
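As a minimal sketch of the quantity reported in Table 2 (assuming a one-predictor regression and using hypothetical yearly counts), R-squared is simply the squared correlation between HF and the breach count; its adjusted version can dip slightly below zero, which is consistent with the small negative value reported for the unknown category.

```python
# Sketch of the Table 2 quantities under a one-predictor assumption:
# R-squared is the squared Pearson correlation between HF and the breach count.
import numpy as np

hf   = np.array([3, 6, 10, 18, 22, 27, 41, 35, 4], dtype=float)   # hypothetical counts
hiti = np.array([1, 2, 4, 6, 9, 12, 24, 18, 2], dtype=float)

r = np.corrcoef(hf, hiti)[0, 1]
r2 = r ** 2                                      # share of variability explained by HF
n, k = len(hf), 1
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)    # adjusted R^2 can fall slightly below zero
print(round(r2, 3), round(r2_adj, 3), round(1 - r2, 3))   # explained vs. remaining share
```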

Model Summary of Human Factors and Types of Breach.

5.2.3. Regression of human factors and the breach types

The regression coefficients in Table 3 yield a prediction equation for each dependent variable as a function of HF. For hacking or IT incidents, predicted HITi = −4.716 + 0.270·HF, so for every unit increase in human factors the mean of HITi rises by about 0.270. The equation for improper disposal, predicted ImD = 1.995 + 0.049·HF, indicates an average change in the mean of ImD of about 0.049 per unit increase in human factors. Loss shows an average change in its mean of about 0.120 per unit change in human factors, given by predicted Loss = 4.453 + 0.120·HF. The remaining equations are predicted UAD = −7.380 + 0.534·HF, predicted Theft = 32.902 + 0.505·HF, predicted Other = 2.870 + 0.074·HF and predicted Unknown = 2.104 + 0.042·HF.
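The sketch below illustrates how a prediction equation of this form is obtained from an ordinary least-squares fit; the yearly counts are hypothetical placeholders, so the coefficients it produces will not match Table 3.

```python
# Sketch of fitting y = intercept + slope*HF; the slope is the average change in
# the breach count per unit increase in HF. Data are hypothetical.
import numpy as np

hf  = np.array([3, 6, 10, 18, 22, 27, 41, 35, 4], dtype=float)
uad = np.array([2, 3, 5, 10, 14, 18, 30, 25, 1], dtype=float)

slope, intercept = np.polyfit(hf, uad, 1)    # degree-1 least-squares fit
predicted_uad = intercept + slope * 20.0     # e.g. predicted UAD when HF = 20

print(round(intercept, 3), round(slope, 3), round(predicted_uad, 3))
```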

Coefficients of Human Factors and Types of Breach.

5.2.4. Evaluation of the strength of the predictions

Pearson correlation coefficients were computed, as shown in Table 4, to evaluate the strength of the relationship between HF and each dependent variable when a breach occurred. There was a positive correlation between HF and HITi, r = 0.809, significant at the 0.01 level, and between HF and ImD, r = 0.734, significant at the 0.05 level. The relationship between HF and Loss was also positive, r = 0.800, significant at the 0.01 level, while HF and UAD showed a strong positive correlation, r = 0.869, significant at the 0.005 level. The correlation between HF and Theft was positive, r = 0.593, but not significant. Likewise, HF showed only weak positive correlations with Other, r = 0.377, and with Unknown, r = 0.352, neither of which was significant. Therefore, increases in four variables, HITi, ImD, Loss and UAD, are positively correlated with increases in HF, whereas increases in the remaining three variables, Theft, Other and Unknown, are not.
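A small sketch of this kind of test follows: scipy's pearsonr returns both the coefficient and a two-tailed p-value, which is how a positive but non-significant result such as the one for theft can arise. The yearly counts used here are invented for illustration.

```python
# Sketch of a Pearson correlation test on hypothetical yearly counts.
import numpy as np
from scipy.stats import pearsonr

hf    = np.array([3, 6, 10, 18, 22, 27, 41, 35, 4], dtype=float)
theft = np.array([60, 90, 70, 120, 80, 150, 95, 110, 65], dtype=float)

r, p = pearsonr(hf, theft)
print(round(r, 3), round(p, 3))   # a modest positive r with p > 0.05 is not significant
```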

Correlation Matrix.

5.2.5. T-test

Table 6 shows the results of one-sample t-tests on the breach types with underlying human factors across all reported breach incidents. HITi (M = 24, SD = 20.664) gave t(8) = 3.484, p = 0.008; ImD (M = 7.222, SD = 4.147) gave t(8) = 5.225, p = 0.02; Loss (M = 17.22, SD = 9.298) gave t(8) = 5.557, p = 0.01; and UAD (M = 49.33, SD = 38) gave t(8) = 3.894, p = 0.05. Similarly, Theft (M = 86.556, SD = 52.712) gave t(8) = 4.926, p = 0.01, Other (M = 10.78, SD = 12.215) gave t(8) = 2.647, p = 0.029, and Unknown (M = 6.56, SD = 7.367) gave t(8) = 2.669, p = 0.028.
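The following sketch shows the shape of a one-sample t-test with t(8) degrees of freedom (nine yearly observations). The data are hypothetical, and testing against a population mean of zero is an assumption on our part about how such values could be obtained.

```python
# Sketch of a one-sample t-test with eight degrees of freedom (nine yearly values).
# The data and the choice of a zero population mean are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_1samp

imd = np.array([2, 4, 6, 14, 8, 9, 12, 7, 3], dtype=float)   # hypothetical yearly ImD counts

t_stat, p_value = ttest_1samp(imd, popmean=0.0)
print(len(imd) - 1, round(t_stat, 3), round(p_value, 3))     # df = 8, t statistic, p-value
```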

T-Test of Human Factors in Types of Breach.

6. Discussion

6.1. Socio-technical systems

According to Shin [55] , investigations of a system and its applications usually focus on the technical parts: the typical approach highlights the technological interactions while ignoring the people who use them. It is important to note that working conditions affect the whole system, and environmental factors may include laws and regulations, market competition, or human factors. A holistic approach to system analysis is therefore fundamental, since both technology and people define the overall performance of a system [56] .

Sommerville and Dewsbury [57] also argue that there needs to be a cross-disciplinary framework that represents all the aspects of technological systems. This should include the technical equipment, the market, the people, and the society for which the system was created or adopted. Hence, failure is inevitable if all aspects of the system are not adequately examined.

The growth of IoT has major socio-technical implications not only for individuals but also for organizations and society. IoT has developed in ways that enable new ways of working, increase safety and facilitate coordination. It may, however, interfere with established work practices and undermine security, productivity and individual satisfaction, creating unforeseen impacts on relations of behavior, power and control. These are questions of socio-technical perspective, yet such perspectives are rarely addressed in IoT development and research [55] . This study contributes a practical view of the conceptualization of IoT as a human-centered system by examining a series of data breach incidents and how they affect computing, including IoT.

6.2. Vulnerability

The findings of this study provide evidence that human factors present a serious threat to organizations' information security; in this case the effect is significant at the 0.05 level, as shown in Table 5 . Human factors create an avenue by which an organization's security becomes vulnerable, ultimately making it easier for information to be compromised. There is an ever-increasing threat of data ex-filtration through loss, improper disposal, unauthorized access or disclosure, and hacking or other information technology incidents. However, there is no evidence that human factors are a major contributor to breaches classified as theft, other, or unknown, as depicted in Table 5 . While the perceived value of data on the black market remains high, the threat to organizational data is unlikely to decrease any time soon. Organizations may use ‘modern’ techniques to frustrate breaches of a network or information system, but there will always be dedicated attacks on valuable data because of its worth on the black market [6] . Thus, human factors become a critical point in the prevention of data breach and data ex-filtration.

Significance Level of Breach Types.

Sensitive data is also shared among various participants and actors. Data sharing and external collaboration with other entities, which have become increasingly common in today's businesses, make data ex-filtration issues worse. Furthermore, as the workforce becomes more mobile and employees are allowed to work outside the organization's premises, the potential for data to be breached increases [58] .

According to [50] , it is natural to want to make people behave in ways that result in more security. In reality, however, increased user awareness does not often lead to sufficiently secure behavior, and behavioral change is a complex phenomenon altogether. People find it difficult to quit habits that are detrimental to their health, despite abundant information on the associated risks, because of the short-term gratification those habits provide; the same is true in the world of information security. This does not mean changing habits or behaviors is impossible, but it usually requires choosing the right intervention for the job at hand: one that changes behavior by directly targeting the actor, or one that indirectly affects the actor's behavior through technological or organizational solutions, fitting both purpose and use.

6.3. Increasing user security awareness

User security awareness is critical to the overall security of any organization. Information security awareness should be a preventive measure used by organizations to firmly establish correct security procedures and principles in the minds of all employees. It is essential because any security technique can be misused or misconstrued and thereby lose its real value. Increased awareness minimizes user-related security threats and maximizes the efficiency of security techniques from the human point of view [26] .

To increase user security awareness, McLean proposes ‘selling’ information security to people via campaigns. Such campaigns can prove very useful for security education and can give information security a positive impetus, thereby maintaining the importance of security in the eyes of all employees; they are good measures for improving attitudes in organizations [59] . Campaigns can also be based on the Hammer theory, which aims to make information security an ‘in’ topic in an organization: when a new concept is properly introduced, everybody is interested in using it [60] . Campaigns and ‘in’ topics can be used together within awareness programs, and they are critical for providing incentives to end-users and for reinforcing the importance of these factors in people's minds.

7. Limitations

A notable limitation of this paper is its sole focus on breaches reported under HIPAA. The study also does not attempt to identify the behavioral elements that may play a critical role in the conduct that leads to a breach, so these factors have not been discussed here. Furthermore, sociological forces that may shape an individual's perceptions of organizational abuse and discipline have not been considered.

8. Conclusion

Even though there are very good technologies that organizations can employ to protect sensitive data from breaches on their networks, technology solves only one part of the problem. As long as human beings are part of IoT, good information security solutions must incorporate human factors. Through its analysis, this paper has modeled the relationship between data breach incidents and human factors, providing an understanding of how human factors affect information security, and has shown the strength of the relationship between human factors and the different types of data breach incidents. The paper also proposed a framework that integrates technology and human factors, which may be useful in reducing the number of data breach incidents that arise from human factors.

Author contribution statement

K. Hughes-Lartey: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

M. Li, F. E. Botchey: Conceived and designed the experiments; Analyzed and interpreted the data.

Z. Qin: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.

Funding statement

This work was supported in part by the National Natural Science Foundation of China (No. 61672135), the Frontier Science and Technology Innovation Projects of National Key R&D Program (No. 2019QY1405), the Sichuan Science and Technology Innovation Platform and Talent Plan (No. 20JCQN0256), and the Fundamental Research Funds for the Central Universities (No. 2672018ZYGX2018J057).

Data availability statement

Declaration of interests statement.

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61672135), the Frontier Science and Technology Innovation Projects of National Key R&D Program (No. 2019QY1405), the Sichuan Science and Technology Innovation Platform and Talent Plan (No. 20JCQN0256), and the Fundamental Research Funds for the Central Universities (No. 2672018ZYGX2018J057).


Finding the weak link in the chain: an integrated performance measurement framework for complex logistics systems

Human Resource Management International Digest

ISSN : 0967-0734

Article publication date: 5 October 2020

Issue publication date: 7 January 2021

This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.

This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.

The study develops an integrated performance management framework for complex, multi-role organizations that incorporates performance management design, systems thinking and problem-solving (theory of constraint). The authors test the applicability of their framework through an oil and gas industry case study, demonstrating its usability.

Originality

The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.

  • Performance management
  • Systems dynamics
  • Logistic performance
  • Transport logistics
  • Theory of constraint

(2021), "Finding the weak link in the chain: an integrated performance measurement framework for complex logistics systems", Human Resource Management International Digest , Vol. 29 No. 1, pp. 18-22. https://doi.org/10.1108/HRMID-08-2020-0191

Copyright © 2020, Emerald Publishing Limited



January 21, 2016

Is a team only as strong as its weakest link?

Scott Searle


The best coaches and leaders are the ones who reflect on their practice and strive to improve. In many respects, reflecting is the easy part.  If you watch sports on television, visit a book store, or listen to talk radio after a big game you will be inundated with coaching advice. The challenge is figuring out what pieces of advice will work in your context with your athletes.

For example, nearly every coaching book I have read includes some variation of the cliché that “a team is only as strong as its weakest link.” But as a fan of sports and a student of coaching, I have been left to wonder: is this true for all teams in all sports? There is no doubt that the Chicago Bulls started to win championships when Michael Jordan learned to work more effectively with his teammates. But does that mean the success of Lionel Messi is determined by a player who sits on the bench? Is the success of Tom Brady and the Patriots a result of the back-up punter? I remain unconvinced.

Sports are not all equal in their reliance on teamwork. To find out whether your team is reliant on its weakest link, you must first analyze the sport or context in which you coach. To help, Robert W. Keidel wrote a book entitled “Game Plans: Sports Strategies for Business”. In it, Keidel presents three sports as archetypes for team dynamics: American football, baseball, and basketball. He argues that the unique nature of each sport presents different needs for team selection and development. For example, in football, the most important player is the system itself. As a casual observer of the NFL, I am frequently surprised at the ability of many teams to recover and move forward when injuries occur to top players. Keidel explains this by suggesting:

A football team is a lot like a machine. It's made up of parts. I like to think of it as a Cadillac. A Cadillac's a pretty good car. All the refined parts working together make the team. If one part doesn't work, one player pulling against you and not doing his job, the whole machine fails. Nobody is indispensable. - Robert W. Keidel

A basketball team, on the other hand, requires all of the players to work in harmony, and the coach must manage the flow. Perhaps the best professional example of this is the “Triangle Offense” made famous by Phil Jackson and the Chicago Bulls. This system relied on perfect harmony within the team, with each player knowing what the others would do. An injury to a key player would require the team to slow down to accommodate the replacement. In this system, movement of the ball is critical and winning requires a five-man coordinated effort. Phil Jackson was frequently the subject of derision for only coaching the best athletes. Jackson defended himself in an article called “Triangle Offense” when he wrote,

Yes, Michael Jordan, Scottie Pippen, Shaquille O’Neal, and Kobe Bryant have thrived in the system, but those four all-time greats would excel and score in any system. What the triangle really does is help players who aren’t so gifted contribute to a team’s success at the offensive end. The system...uniquely offers an offense the option to play unselfishly as a unit while still allowing players creative individuality in the offensive decisions.

'Triangle Offense' in operation

The key factor for the basketball coach becomes “how do I influence the flow of the game?”. The basketball coach is responsible for teaching a system, then allowing the athletes the freedom to execute it.

Keidel’s final metaphor is baseball. Baseball teams are made up of a group of individuals with diverse skill sets as well as skill levels. A baseball player who is a great hitter, but has poor fitness and is a poor fielder, will still find a place on nearly every team they try out for. The work of baseball players is also nonsequential: the work of the shortstop is largely independent of the work of the right fielder. Baseball teams are also characterized by “infrequent and brief interactions among team members”; each player's contribution to the team and to the outcome of the game is made autonomously. In baseball, the most important decisions for the coach are who you want on the team and who you put in the line-up. Earl Weaver is a Hall of Fame manager who managed the Baltimore Orioles to the third-highest winning percentage in the history of Major League Baseball. When asked about the key to success, he responded, “Get the guy up there you want”.

A coach must know what game they are playing before they decide on their approach to developing individual technique. In my sport of softball, our context is most similar to baseball. At the highest level, the team with the best players almost always wins. I have had the good fortune to coach at a number of levels and in a number of different contexts, and this remains true regardless of gender, level or age. In the Ontario Intercollegiate Women’s Fastpitch Association, the University of Western Ontario has recruited three athletes from the Provincial Team and one from the National Team. Correspondingly, they have won the last five Provincial Championships, beating the team I coach in the final in three of those championship games. At the International Softball Congress World Championships, the highest level of men's fastpitch in the world, the Hill United Chiefs boast a line-up of top hitters from around the world and the best pitcher in Adam Folkard. Not surprisingly, they have won the last three World Championships.

These facts highlight the importance of athlete recruitment and selection for baseball and softball coaches, but they should not excuse coaches from helping individual athletes improve their technique. Coaches must be mindful of their responsibility to develop individual technique while remaining aware of the limitations their sport presents. For most coaches, their most valuable asset, and biggest limitation, is time. Given these constraints, successful coaches must be mindful of where they invest their time during training. In Canada, the National Coaching Certification Program reminds coaches of a rule of thirds: 1/3 of the athletes on most teams will be above average, 1/3 will be below average, and 1/3 will be average.

While this might seem obvious, many coaches fall into a trap of spending too much time with the best athletes on the team. It is certainly enjoyable to see an athlete master a skill, and elite athletes can provide a quick result which helps the team and boosts the ego of the coach.  However, in a team sport, you cannot always guarantee that it will be the best hitter at the plate with the game on the line.

A successful coach must ensure all athletes have the capacity to be successful and that they are making a contribution to team success. The challenge for coaches of team sports becomes: how can they work with one athlete while keeping the others engaged?

I have been involved with the University of Ottawa Softball team for 13 years, and it presents an interesting case study. Everyone involved with our program is proud of our success, winning 11 medals in our 13-year history. We have accomplished those results with a huge spectrum of ability within our own team. Our team has included athletes who have competed at World Championships, won Canadian and Provincial Championships, and been named to national-level all-star teams. Those athletes are very easy to coach. On the flip side, our team has also included athletes who are trying softball for the first time, have played at a recreational level, or have only played slow-pitch. Perhaps the best illustration of this was two years ago, when an athlete who told us she had never successfully made it as far as second base was hitting in front of an athlete who had been named to the National All-Star team. Clearly, we had to take different approaches with these two athletes.

We have had a great deal of success by differentiating practice time, and encouraging athletes to work on something specific to their skill set.  In our experience, higher level and more experienced athletes are usually self-aware about the skills they need to develop.  This has presented a problem because those athletes are not always the ones who need the extra assistance.  In designing training sessions, our staff has worked with athletes to identify gaps in performance, and paired those athletes with teammates who have mastered that skill.

Our most successful practices have been ones where athletes take turns teaching a skill and being taught by a teammate. This engages all of the athletes and makes them feel invested in each other's success. Once athletes have a clear understanding of their abilities and limitations, and are engaged in closing the gap between themselves and their teammates, training sessions become fun and easy for the coach to manage. In this environment, all athletes are working towards improving themselves, helping their teammates, and increasing the chances of victory in competition. This allows the softball coach to manage and, as Earl Weaver suggested, make sure the right player is at bat when the game is on the line.

[1] Keidel, Robert W. Game Plans: Sports Strategies for Business. New York: Dutton, 1985.
[2] Ibid., p. 8.
[3] Gandolfi, Giorgio, and Phil Jackson. "Triangle Offense." In NBA Coaches Playbook: Techniques, Tactics, and Teaching Points. Champaign, IL: Human Kinetics, 2009, p. 89.
[4] Keidel, Robert W. Game Plans: Sports Strategies for Business. New York: Dutton, 1985, p. 59.
[5] Ibid., p. 20.
[6] Ibid., p. 22.


Identification of Weak Links in Active Distribution Network Based on Vulnerability Assessment

  • Conference paper
  • First Online: 03 September 2022

  • Yongchun Yu,
  • Shu Mao,
  • Hailei Meng,
  • Chenyu Zhao,
  • Xiankai Chen &
  • Chaoqun Zhou

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 899)


With large amounts of distributed energy and intermittent load access, the operational risk of the distribution system is increasing day by day. To improve the reliability of power supply in the distribution network, weak link identification is indispensable. Aiming at this problem, this paper proposes a weak link identification method for the active distribution network (ADN) based on vulnerability assessment. First, considering the correlation of distributed generators’ output, vulnerability assessment indices based on complex network theory are proposed. Then, considering the N−1+1 condition of the line and combined with the repeated power flow calculation method, a line power supply weakness index is proposed. Finally, an example is given to demonstrate the effectiveness of the proposed method. The results show that the proposed vulnerability-assessment-based method is effective: weak links in the distribution network can be quickly identified, and the identification results can assist the scheduler in making decisions.



Acknowledgements

Funding: State Grid Shandong Electric Power Company Science and Technology Project Funding “Research on Key Technologies of Highly Reliable Power Supply in Guzhenkou Innovation Demonstration Zone” (5206002000T2).

Author information

Authors and affiliations.

Power Reliability Management and Project Quality Supervision Center, National Energy Administration, Beijing, 100031, China

Yongchun Yu & Shu Mao

State Grid Shandong Electric Power Company, Jinan, 250001, China

Hailei Meng & Chenyu Zhao

Qingdao Power Supply Company, State Grid Shandong Electric Power Company, Qingdao, 266002, China

Xiankai Chen & Chaoqun Zhou


Corresponding author

Correspondence to Yongchun Yu .

Editor information

Editors and affiliations.

School of Electrical Engineering and Automation, Anhui University, Hefei, Anhui, China

Wenping Cao

Department of Electrical Engineering, Tsinghua University, Beijing, China

Pinjia Zhang

School of Electrical Engineering, Shandong University, Jinan, Shandong, China

Zhenbin Zhang

School of Electrical Engineering and Automation, Anhui University, Hefei, China


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper.

Yu, Y., Mao, S., Meng, H., Zhao, C., Chen, X., Zhou, C. (2022). Identification of Weak Links in Active Distribution Network Based on Vulnerability Assessment. In: Hu, C., Cao, W., Zhang, P., Zhang, Z., Tang, X. (eds) Conference Proceedings of 2021 International Joint Conference on Energy, Electrical and Power Engineering. Lecture Notes in Electrical Engineering, vol 899. Springer, Singapore. https://doi.org/10.1007/978-981-19-1922-0_37


DOI : https://doi.org/10.1007/978-981-19-1922-0_37

Published : 03 September 2022

Publisher Name : Springer, Singapore

Print ISBN : 978-981-19-1921-3

Online ISBN : 978-981-19-1922-0


Research Report: Strengthening Weak Links in the PDF Trust Chain



Business school teaching case study: can green hydrogen’s potential be realised?

Close-up of a green and white sign featuring the chemical symbol for hydrogen, ‘H2’


Jennifer Howard-Grenville and Ujjwal Pandey


Hydrogen is often hyped as the “Swiss army knife” of the energy transition because of its potential versatility in decarbonising fossil fuel-intensive energy production and industries. Making use of that versatility, however, will require hydrogen producers and distributors to cut costs, manage technology risks, and obtain support from policymakers.

To cut carbon dioxide emissions, hydrogen production must shift from its current reliance on fossil fuels. The most common method yields “grey hydrogen”, made from natural gas without emissions capture. “Blue hydrogen”, also made from natural gas but with the associated carbon emissions captured and stored, is preferable.

But “green hydrogen” uses renewable energy sources, including wind and solar, to split water into hydrogen and oxygen via electrolysis. And, because there are no carbon emissions during production or combustion, green hydrogen can help to decarbonise energy generation as well as industry sectors — such as steel, chemicals and transport — that rely heavily on fossil fuels.

Ultimately, though, the promise of green hydrogen will hinge on how businesses and policymakers weigh several questions, trade-offs, and potential long-term consequences. We know from previous innovations that progress can be far from straightforward.

Offshore wind turbines

Wind power, for example, is a mature renewable energy technology and a key enabler in green hydrogen production, but it suffers vulnerabilities on several fronts. Even Denmark’s Ørsted — the world’s largest developer of offshore wind power and a beacon for renewable energy — recently said it was struggling to deliver new offshore wind projects profitably in the UK.

Generally, the challenge arises from interdependencies between macroeconomic conditions — such as energy costs and interest rates — and business decision-making around investments. In the case of Ørsted, it said the escalating costs of turbines, labour, and financing have exceeded the inflation-linked fixed price for electricity set by regulators.

Business leaders will also need to steer through uncertainties — such as market demand, technological risks, regulatory ambiguity, and investment risks — as they seek to incorporate green hydrogen.

Test yourself

This is the third in a series of monthly business school-style teaching case studies devoted to responsible-business dilemmas faced by organisations. Read the piece and FT articles suggested at the end before considering the questions raised.

About the authors: Jennifer Howard-Grenville is Diageo professor of organisation studies at Cambridge Judge Business School; Ujjwal Pandey is an MBA candidate at Cambridge Judge and a former consultant at McKinsey.

The series forms part of a wide-ranging collection of FT ‘instant teaching case studies ’ that explore business challenges.

Two factors could help business leaders gain more clarity.

The first factor will be where, and how quickly, costs fall and enable the necessary increase to large-scale production. For instance, the cost of the electrolysers needed to split water into hydrogen and oxygen remains high because levels of production are too low. These costs and slow progress in expanding the availability and affordability of renewable energy sources have made green hydrogen much more expensive than grey hydrogen, so far — currently, two to three times the cost.

The FT’s Lex column calculated last year that a net zero energy system would create global demand for hydrogen of 500mn tonnes, annually, by 2050 — which would require an investment of $20tn. However, only $29bn had been committed by potential investors, Lex noted, despite some 1,000 new projects being announced globally and estimated to require total investment of $320bn.
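For a sense of scale, here is some back-of-envelope arithmetic on the figures quoted above (our own calculation, not Lex's):

```python
# Back-of-envelope arithmetic on the figures quoted above (our calculation, not Lex's).
required_total  = 20_000e9   # $20tn said to be needed by 2050
announced_total = 320e9      # ~$320bn required by the ~1,000 announced projects
committed_total = 29e9       # $29bn actually committed so far
annual_demand_t = 500e6      # 500mn tonnes of hydrogen per year by 2050

print(required_total / annual_demand_t)         # about $40,000 of investment per tonne of annual capacity
print(announced_total / required_total * 100)   # announced projects cover about 1.6% of the required capital
print(committed_total / announced_total * 100)  # committed funds are about 9% of what those projects need
```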

A worker in a cleanroom suit inspects a large flexible solar panel in a high-tech manufacturing setting, with the panel’s reflection visible on a shiny surface below

Solar power faced similar challenges a decade ago. Thanks to low-cost manufacturing in China and supportive government policies, the sector has grown and is, within a very few years, expected to surpass gas-fired power plant installed capacity, globally. Green hydrogen requires a similar concerted effort. With the right policies and technological improvements, the cost of green hydrogen could fall below the cost of grey hydrogen in the next decade, enabling widespread adoption of the former.

Countries around the world are introducing new and varied incentives to address this gap between the expected demand and supply of green hydrogen. In Canada, for instance, Belgium’s Tree Energy Solutions plans to build a $4bn plant in Quebec, to produce synthetic natural gas from green hydrogen and captured carbon, attracted partly by a C$17.7bn ($12.8bn) tax credit and the availability of hydropower.

Such moves sound like good news for champions of green hydrogen, but companies still need to manage the short-term risks from potential policy and energy price swings. The US Inflation Reduction Act, which offers tax credits of up to $3 per kilogramme for producing low-carbon hydrogen, has already brought in limits, and may not survive a change of government.

Against such a backdrop, how should companies such as Hystar — a Norwegian maker of electrolysers already looking to expand capacity from 50 megawatts to 4 gigawatts a year in Europe — decide where and when to open a North American production facility?

The second factor that will shape hydrogen’s future is how and where it is adopted across different industries. Will it be central to the energy sector, where it can be used to produce synthetic fuels, or to help store the energy generated by intermittent renewables, such as wind and solar? Or will it find its best use in hard-to-abate sectors — so-called because cutting their fossil fuel use, and their CO₂ emissions, is difficult — such as aviation and steelmaking?

Steel producers are already seeking to pivot to hydrogen, both as an energy source and to replace the use of coal in reducing iron ore. In a bold development in Sweden, H2 Green Steel says it plans to decarbonise by incorporating hydrogen in both these ways, targeting 2.5mn tonnes of green steel production annually.

Meanwhile, the global aviation industry is exploring the use of hydrogen to replace petroleum-based aviation fuels and in fuel cell technologies that transform hydrogen into electricity. In January 2023, for instance, Anglo-US start-up ZeroAvia conducted a successful test flight of a hydrogen fuel cell-powered aircraft.

A propeller-driven aircraft with the inscription ‘ZEROAVIA’ is seen ascending above a grassy airfield with buildings and trees in the background

The path to widespread adoption, and the transformation required for hydrogen’s range of potential applications, will rely heavily on who invests, where and how. Backers have to be willing to pay a higher initial price to secure and build a green hydrogen supply in the early phases of their investment.

It will also depend on how other technologies evolve. No industry is looking only to green hydrogen to achieve its decarbonisation aims. Other, more mature technologies — such as battery storage for renewable energy — may instead dominate, leaving green hydrogen to fulfil niche applications that can bear high costs.

As with any transition, there will be unintended consequences. Natural resources (sun, wind, hydropower) and other assets (storage, distribution, shipping) that support the green hydrogen economy are unevenly distributed around the globe. There will be new exporters — countries with abundant renewables in the form of sun, wind or hydropower, such as Australia or some African countries — and new importers, such as Germany, with existing industry that relies on hydrogen but has relatively low levels of renewable energy sourced domestically.

How will the associated social and environmental costs be borne, and how will the economic and development benefits be shared? Tackling climate change through decarbonisation is urgent and essential, but there are also trade-offs and long-term consequences to the choices made today.

Questions for discussion

Lex in depth: the staggering cost of a green hydrogen economy

How Germany’s steelmakers plan to go green

Hydrogen-electric aircraft start-up secures UK Infrastructure Bank backing

Aviation start-ups test potential of green hydrogen

Consider these questions:

Are the trajectories for cost/scale-up of other renewable energy technologies (eg solar, wind) applicable to green hydrogen? Are there features of the current economic, policy, and business landscape that point to certain directions for green hydrogen’s development and application?

Take the perspective of someone from a key industry that is part of, or will be affected by, the development of green hydrogen. How should you think about the technology and business opportunities and risks in the near term, and longer term? How might you retain flexibility while still participating in these key shifts?

Solving one problem often creates or obscures new ones. For example, many technologies that decarbonise (such as electric vehicles) have other impacts (such as heavy reliance on certain minerals and materials). How should those participating in the emerging green hydrogen economy anticipate, and address, potential environmental and social impacts? Can we learn from energy transitions of the past?




Determining the true value of a website: A GSA case study


Cleaning up: A hypothetical scenario

Consider this scenario: you’ve been told to clean up a giant room full of Things Your Agency Has Made in the Past and Now Maintains for Public Use. This means disposing of the Things that no longer add value, and sprucing up the Things that are still useful. How do you determine which Things belong in which category, especially when all the Things in that giant room have been used by the public, and are available for all to see?

When the “things” we’re talking about are websites, this determination is often much more complicated than it might appear on the surface. This scenario is one facing web teams across the government, including at the U.S. General Services Administration (GSA), every single day. If you’re in this situation, consider all the ways you might begin to tackle this cleanup job.

Evaluating by visits

You decide to start by determining how many people visit each website each month. Delighted, you pull those numbers together and produce a chart that looks something like this:


The chart states that the 10 least-visited GSA websites had only about 66 visits in the past 30 days, whereas the top 10 websites averaged over 629,000 visits and GSA websites overall averaged over 244,000 monthly visits. So there you have it: clearly, it appears the websites with only 66 visits are the least useful and should be decommissioned. (Note that the low-traffic websites all show 66 visits because of the analytics tool’s statistical sampling methodology.)

However, you stop to examine one of the low-traffic sites. In studying it, you realize that it was never designed to have many visitors. Instead, it was designed to support a very small audience that only appears at random, unpredictable intervals; say, when a natural disaster strikes. Clearly, you don’t want to get rid of that website, since it’s meeting a specific need of a small but well-defined and important audience.

Through this consideration, you realize that using the number of visitors to determine the usefulness of a website incorrectly assumes:

  • Each visit across all your websites is of the same value.
  • Each audience, whether 66 people or 629,000, has the same level of urgency and need for each website, even if one website is intended to serve a large, continuous audience, while another is designed to serve a small, irregular audience.

Since both of these assumptions are false, visitor numbers are not enough to determine the usefulness of a website. You need another evaluation tactic.

Evaluating by accessibility

After some consideration, you realize that all the websites have to be fully accessible to everyone, regardless of ability. You also have the tools and processes to help determine whether that standard has been reached. Excited, you start by assembling and running your automated accessibility tests.


Five websites stand out as having the worst accessibility errors, according to your tests. Clearly, these websites must go. As you prepare to get rid of them, however, you notice that the vast majority of the errors in the worst website are identical and all seem to originate from the same part of the website. You look closer and realize that the problem causing all those errors is actually quite basic and can be fixed easily, taking the worst website out of the bottom ranking. Looking at the other websites in your list, you realize that other errors that have surfaced are only errors in an automatic test, not a human one. Many of them aren’t on critical paths for the website’s use, so while they should be addressed, they are not meaningfully blocking access to the website.

That throws your entire evaluation into question: how can you possibly batch and judge the usefulness of a website by accessibility, if the severity and impact of each accessibility error varies so much? Instead, you must pair automated accessibility tests with manual testing to reach conclusions on the least accessible websites. That won’t help you quickly get rid of the lowest value websites, so yet another evaluation tactic is needed.

Evaluating by speed and performance

After considering the number of visits and the accessibility, you realize that an evaluation of usefulness needs to consider a basic question: is the performance and speed of the website reasonable? If a product is so frustratingly slow that people don’t use it, then nothing else matters.

To figure out which websites are so slow as to be essentially non-functional, you find a free online tool that tests website performance. Additionally, you get smart based on your previous experiments: this tool tests for a few different parameters, not just one element of performance. It then compiles these parameters into a single index score, so its results are compelling.
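The idea of rolling several parameters into one index can be sketched as follows; the metric names and weights here are hypothetical and are not the actual formula used by any particular testing tool.

```python
# Hypothetical single-index score: a weighted average of 0-1 performance metrics,
# reported on a 0-100 scale. Metric names and weights are invented for illustration.
def performance_index(metrics: dict, weights: dict) -> float:
    total_weight = sum(weights.values())
    score = sum(metrics[name] * w for name, w in weights.items()) / total_weight
    return round(score * 100, 1)

site_metrics = {"first_paint": 0.62, "interactive": 0.41, "layout_stability": 0.90}
weights      = {"first_paint": 0.3,  "interactive": 0.5,  "layout_stability": 0.2}

print(performance_index(site_metrics, weights))   # e.g. 57.1
```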


This performance metric shows you that, on average, your websites perform at 84% of a perfect 100% score, and there are a few low-performing websites at 26% performance or lower. This works for you; you know you need to get rid of your agency’s low-performing websites. As you’re planning to decommission these sites, however, a user visits one of them to complete a task and provides some feedback.

Evaluating by customer research

The user waits while the website slowly loads. Then, they interact with the website and exit the page. To gauge their satisfaction, you prompt them to give you feedback on the page by asking, “Was this page helpful?” The user shares:

“This website does work; it just works slowly. I’m willing to wait, though, because I need the information. There’s nowhere else to get this information, so please don’t get rid of this website; I have to come back and get information from it every month.”

After taking this customer research into account, you realize that visits, accessibility, performance, and speed do not, on their own, fully reflect the website’s value, so you still don’t know which websites to decommission.

At this point, you’ve discovered that evaluating websites is a multidimensional problem — one that cannot be determined by a single, simple metric. Indeed, even when you consider several metrics, your conclusions lack a customer’s perspective.

Determining the value of agency websites therefore must use an index that is not just composed of similar metrics (like the performance index) but is in fact a composite index of different datasets of different data types. This approach will allow you to evaluate the website’s purpose, function, and ultimately, value, to your agency and your customers. This aggregation of dataset types is known as a composite indicator.
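As a minimal sketch of the idea (with invented dimension names and scores, not GSA's actual measures or weights), a composite indicator can be as simple as normalising each input to a common scale and averaging:

```python
# Minimal composite indicator: normalise each dimension to a 0-1 scale, then take an
# equal-weight average. Dimension names and scores are hypothetical.
from statistics import mean

def composite_indicator(dimensions: dict) -> float:
    """Equal-weight mean of dimension scores already normalised to the 0-1 range."""
    return round(mean(dimensions.values()), 3)

website = {
    "accessibility": 0.88,          # automated scan score, rescaled to 0-1
    "customer_centricity": 4 / 6,   # e.g. 4 of 6 interview criteria met
    "performance_seo": 0.84,
    "required_links": 1.0,
    "non_duplication": 0.7,
    "design_system": 0.5,
}

print(composite_indicator(website))   # e.g. 0.764
```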

Methodology: The Enterprise Digital Experience composite indicator

This is the story of evaluating websites in GSA. Websites seem simple to evaluate: do they work or not? But in truth, they are a multidimensional problem. In taking on the definition and evaluation of GSA public-facing websites, the Service Design team in GSA’s Office of Customer Experience researched and designed a composite indicator of multiple data sets of different types to evaluate the value of websites in GSA. Since 2021, we’ve been doing this by examining six things:

Accessibility, scored by our agency standard accessibility tool (quantitative data, 21st Century IDEA Section 3A.1)

Customer-centricity, scored by a human-centered design interview (qualitative data, 21st Century IDEA Section 3A.6 and OMB Circular A-11 280.1 and 280.8)

  • Stated audience: Can the website team succinctly and precisely name their website’s primary audience?
  • Stated purpose: Can the website team succinctly and precisely name their website’s primary purpose?
  • Measurement of purpose: Does the website have a replicable means to measure if the website’s purpose is being achieved?
  • Repeatable customer feedback mechanism: Does the website team have a repeatable customer feedback mechanism in place, such as an embedded survey, or recurring, well-promoted and attended meetings, or focus groups with customers? (Receiving ad hoc feedback from customer call centers or email submissions does not meet this mark.)
  • Ability to action: Does the website team have a skillset that can contribute to rapidly improving the website based on feedback and need, such as human-centered design research, user experience, writing, or programming skills?
  • Ability to measure impact: Does the website team have the ability to measure the impact of the improvements they implement? Have they devised and implemented a measurement methodology specifically for their changes (an ability to measure impact) or do they rely solely on blanket measures such as Digital Analytics Program data (no ability to measure impact)?

Performance and search engine optimization, scored by Google Lighthouse (quantitative data, 21st Century IDEA Section 3A.8)

Required links, scored by the Site Scanning Program’s website scan (quantitative data, 21st Century IDEA Section 3A.1 & 3E)

User behavior, non-duplication, scored by Google Analytics with related sites (qualitative + quantitative data, 21st Century IDEA Section 3A.3)

U.S. Web Design System implementation, scored by the Site Scanning Program’s website scan (qualitative + quantitative data, 21st Century IDEA Section 3A.1 & 3E)

View all sections of the law and the circular mentioned above:

  • 21st Century IDEA (Public Law No. 115-336)
  • OMB Circular A-11 (PDF, 385 KB, 14 pages, 2023)

We visualize this evaluation in website maps, rendered as charts that are available internally to GSA employees. This helps us see examples of good performers, such as Website A (on the left), and not-so-good performers, like Website B (on the right).


In addition, these charts, like all maps [1] , contain some decisions that prioritize how the information is rendered. They include:

  • An equal weight to all datasets and data types, regardless of fidelity. In the charts above, the slices spread out from 0 along even increments. Our measurement of customer-centricity gives equal weight to whether a site proactively listens to its customers, as well as to whether it has the resources to implement change.
  • A direct comparison by slice. For example, our customer-centricity slice gives the same amount of distance from the center for listening to customers as our required links slice gives for including information about privacy, regardless of the fact that customer listening is foundationally different (and more complicated) as an activity than including required links.

We made these decisions because to weight all of the metrics would be to travel down the coastline paradox [2] , meaning: we had to identify a stopping point for measurement and comparison that is somewhat arbitrary because, paradoxically, the more closely we measure and compare, the less clear the GSA digital ecosystem would become. These measures are the baseline because, broadly, they are fair in their unfairness: some things are easier to do, and some things are harder, but what is “easier” and what is “harder” differs depending on the resources available to each website team.

But even in comparing websites using charts and maps containing multiple dataset types, we’re missing some nuance. “Website A” is a simple, informational site, whereas “Website B” contains a pricing feature, which introduces additional complexities that are more difficult to manage than simple textual information. To give visibility to this nuance, the Service Design team uses these maps as part of a broader website evaluation package, which includes qualitative research interviews and subsequent evaluation write-ups. These are sent to every website team within three weeks after we conduct the research interview. Taken together, the quantitative and qualitative data in the website evaluation packages allow GSA staff to consistently measure how digital properties are functioning, and what their impact is on customers.

Concluding which websites should exist

The reality is: value exists in dimensions, not in single data points, or even in single datasets. To further complicate things, the closer you look at single datasets, the more your decision-making process is complicated, rather than clarified. This is because each data type and each data point in complex systems can be broken down into infinitely smaller pieces, rendering decisions made based on these pieces more accurate, but also of smaller and smaller impact. [3]

None of the measures in the Enterprise Digital Experience composite indicator or their use as a whole pie results in an affirmation or denial of the value of a digital property to the agency or to the public; value will always exist as an interpretation of these datasets. The indicator can tell us how existing sites are doing, but not whether we should continue supporting them.

To understand whether a website is worth supporting and how to evolve it, the Service Design team pairs qualitative and quantitative data with mission and strategic priorities, evaluating which websites to improve and which to stop supporting. To achieve this pairing, three elements must come together:

  • Technical evaluations
  • Regular dialogue with each website’s customers, including internal stakeholders and leadership
  • Enterprise-level meta-analysis of a digital property’s functions in comparison to other digital properties

Customer dialogue is the responsibility of each team, and technical evaluations are readily available, thanks to tools like the Digital Analytics Program (DAP), but enterprise-level meta-analyses require a cross-functional view. This view can be attained through matrixed initiatives like GSA’s Service Design program, or cross-functional groups like GSA’s Digital Council, in collaboration with program teams and leadership.

From an enterprise perspective, the next phase in our evaluation of GSA properties is to apply service categories to each website, to better understand how GSA is working along categorical lines, instead of businesses or brands. Taxonomical work like this is the domain of enterprise architecture. Our service category taxonomy was compiled by using the Federal Enterprise Architecture Framework (FEAF) [4] as a starting point, and crosswalks a website’s designed function with its practical function, evaluated through general and agency use.
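As a purely illustrative sketch of what a crosswalk between designed and practical function could look like in data, the snippet below uses placeholder sites and category names loosely inspired by FEAF-style service categories; it is not GSA's actual taxonomy or tooling.

```python
# A hypothetical crosswalk: each website's designed function vs. the practical
# function observed in use, mapped to a placeholder service category.
from dataclasses import dataclass

@dataclass
class WebsiteRecord:
    site: str
    designed_function: str   # what the site was built to do
    practical_function: str  # what general and agency use shows it does
    service_category: str    # FEAF-inspired placeholder category

records = [
    WebsiteRecord("Website A", "publish program information",
                  "publish program information", "Information Sharing"),
    WebsiteRecord("Website B", "publish pricing information",
                  "support purchasing decisions", "Customer Services"),
]

# Flag sites whose practical use has drifted from their designed purpose.
drifted = [r.site for r in records if r.designed_function != r.practical_function]
print(drifted)  # ['Website B']
```

Flagging sites whose practical function has drifted from their designed function is one way a categorical view can surface duplication or gaps across businesses and brands.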

We’re starting to leverage service categories, and working with teams to create a more unified view of website value as we do so.

What can I do next?

Review an introduction to analytics to learn how metrics and data can improve understanding of how people use your website.

If you work at a U.S. federal government agency, and would like to learn more about this work, reach out to GSA’s Service Design team at [email protected].

Disclaimer: All references to specific brands, products, and/or companies are used only for illustrative purposes and do not imply endorsement by the U.S. federal government or any federal government agency.




A popular YouTuber's negative video of Humane's AI Pin raises questions about critical reviews in the age of innovation

This post originally appeared in the Insider Today newsletter.

Insider Today

Hello there! If you're struggling to decide which foods are worth buying organic, best-selling author Michael Pollan has some suggestions for the ones worth splurging on to avoid harmful chemicals.

In today's big story, we're looking at a critical tech review that caused a bit of a stir on social media.

What's on deck:

Markets: Goldman Sachs quiets the haters with a monster earnings report.

Tech: Leaked docs show one of Prime Video's biggest issues, forcing customers to abandon shows.

Business: The best bet in business these days? Targeting young men who like to gamble.

But first, the review is in!


The big story

Up for review.

"The Worst Product I've Ever Reviewed… For Now"

Marques Brownlee, the YouTuber better known as MKBHD, didn't mince words with the title of his review of Humane's AI Pin.

In a 25-minute video, Brownlee details all the issues he encountered using the AI device. (Spoiler alert: There were a lot.)

Brownlee's review aligns with other criticisms of the device. But not all of those came from someone with as much sway. His YouTube channel has more than 18 million subscribers.

One user on X pointed that out, calling the review "almost unethical" for "potentially killing someone else's nascent project" in a post reposted over 2,000 times.

Most of the internet disagreed, and a Humane exec even thanked Brownlee on X for the "fair and valid critiques."

But it highlights the power of Brownlee's reviews. Earlier this year, a negative video of Fisker's Ocean SUV by Brownlee also made waves on social media.

Critical reviews in the age of innovation raise some interesting questions.

To be clear, there was nothing wrong with Brownlee's review. Humane's AI Pin costs $700. Watering down his review to ease the blow would be a disservice to the millions of fans relying on his perspective before making such a significant purchase.

Too often, companies view potential customers as an extension of their research and development. They are happy to sell a product that is still a work in progress on the promise they'll fix it on the fly. ("Updates are coming!")

But in a world of instant gratification, it can be hard to appreciate that innovation takes time. 

Even Apple can run into this conundrum. Take the Apple Vision Pro. Reviewers are impressed with the technology behind the much-anticipated gadget — but are still struggling to figure out what they can do with it. Maybe, over time, that will get sorted out. It's also worth remembering how cool tech can be, as Business Insider's Peter Kafka wrote following a bunch of trips in Waymo's software-powered taxis in San Francisco. Sure, robotaxis have their issues, Peter said, but they also elicit that "golly-gee-can-you-believe-it" sense.

As for Humane, America loves a comeback story. Just look at "Cyberpunk 2077." The highly anticipated video game had a disastrous launch in 2020, but redeemed itself three years later, ultimately winning a major award.

Still, Humane shouldn't get a pass for releasing a product that didn't seem ready for primetime, according to the reviews.

And its issue could be bigger than glitchy tech. Humane's broader thesis about reducing screen time might not be as applicable as the company hopes. As BI's Katie Notopoulos put it: "I love staring at my iPhone."

3 things in markets

1. Goldman finally strikes gold. After a rough stretch, the vaunted investment bank crushed earnings expectations, sending its stock soaring. A big tailwind, according to CEO David Solomon, is AI spawning "enormous opportunities" for the bank.

2. Buy the dip, Wedbush says. Last week's drop among tech stocks shouldn't scare away investors, according to Wedbush. A strong earnings report, buoyed by the ongoing AI craze, should keep them soaring, strategists said. But JPMorgan doesn't see it that way, saying prices are already stretched.

3. China's economy beat analysts' expectations. The country's GDP grew 5.3% in the first quarter of 2024, according to data published by the National Bureau of Statistics on Tuesday. It's a welcome return to form for the world's second-largest economy, although below-par new home and retail sales remain a cause for concern.

3 things in tech

1. Amazon Prime Video viewers are giving up on its shows. Leaked documents show viewers are fed up with the streamer's error-ridden catalog system, which often has incomplete titles and missing episodes. In 2021, 60% of all content-related complaints were about Prime Video's catalog.

2. Eric Newcomer is bringing his Cerebral Valley AI Summit to New York. The conference, originally held in San Francisco, is famous for producing one of the largest generative AI acquisitions ever. Now, it's coming to New York in June.

3. OpenAI is plotting an expansion to NYC. Two people familiar with the plans told BI that the ChatGPT developer is looking to open a New York office next year. That would be the company's fifth office, alongside its current headquarters in San Francisco, a just-opened site in Tokyo, and spots in London and Dublin.

3 things in business

1. America's young men are spending their money like never before. From sports betting to meme coins, young men are more willing than ever to blow money in the hopes of making a fortune.

2. Investors are getting into women's sports. With women like Caitlin Clark dominating March Madness headlines, investors see a big opportunity. BI compiled a list of 13 investors and fund managers pouring money into the next big thing in sports.

3. Bad news for Live Nation. The Wall Street Journal reports that the Justice Department could hit the concert giant with an antitrust lawsuit as soon as next month. Live Nation, which owns Ticketmaster, has long faced criticism over its high fees.

In other news

Blackstone hires Walmart AI whiz to supercharge its portfolio companies.

Taylor Swift, Rihanna, Blackpink's Lisa: Celebrities spotted at Coachella 2024.

NYC's rat czar says stop feeding the pigeons if you want the vermin gone.

A major Tesla executive left after 18 years at the company amid mass layoffs.

Some Tesla factory workers realized they were laid off when security scanned their badges and sent them back on shuttles, sources say.

New York is in, San Francisco is very much out for tech workers relocating.

AI could split workers in two: those whose jobs get better and those who lose them completely.

Oh look at that! Now Google is using AI to answer search queries.

A longtime banker gives a rare inside look at how he is thinking about his next career move, from compensation to WFH.

Clarence Thomas didn't show up for work today.

What's happening today

Today's earnings: United Airlines, Bank of America, Morgan Stanley, and others are reporting.

It's Free Cone Day at participating Ben & Jerry's stores. 

The Insider Today team: Dan DeFrancesco, deputy editor and anchor, in New York. Jordan Parker Erb, editor, in New York. Hallam Bullock, senior editor, in London. George Glover, reporter, in London.


An iPhone displays a photo of a woman at a music festival.

What a Terror Attack in Israel Might Reveal About Psychedelics and Trauma

Thousands of Israelis were using mind-altering substances when Hamas-led fighters attacked a desert festival on Oct. 7. Now, scientists are studying the ravers to determine the effects of such drugs at a moment of extreme trauma.

This photo of Yuval Tapuhi was taken at the Tribe of Nova festival on Oct. 7, before the Hamas-led terrorist attack. Credit: Avishag Shaar-Yashuv for The New York Times


By Natan Odenheimer, Aaron Boxerman and Gal Koplewitz

April 11, 2024

One Israeli said that being high on LSD during the Hamas-led attack on Oct. 7 prompted a spiritual revelation that helped him escape the carnage at a desert rave. Another is certain the drug MDMA made him more decisive and gave him the strength to carry his girlfriend as they fled the scene. A third said that experiencing the assault during a psychedelic trip has helped him more fully process the trauma.


Some 4,000 revelers gathered on the night of Oct. 6 at a field in southern Israel, mere miles from the Gaza border, for the Tribe of Nova music festival. At dawn, thousands of Hamas-led terrorists stormed Israel’s defenses under the cover of a rocket barrage.

About 1,200 people were killed that day, the deadliest in Israeli history according to the Israeli authorities, including 360 at the rave alone. Many of the ravers were under the influence of mind-altering substances like LSD, MDMA and ketamine as they witnessed the carnage or fled for their lives.

For a group of Israeli researchers at the University of Haifa, the attack has created a rare opportunity to study the intersection of trauma and psychedelics, a field that has drawn increased interest from scientists in recent years.

The survivors of the Nova festival present a case study that would be impossible to replicate in a lab: a large group of people who endured trauma while under the influence of substances that render the brain more receptive and malleable.

Illegal in most countries, including Israel, these substances are now on the cusp of entering the psychiatric mainstream. Recent research suggests that careful doses of drugs like MDMA and psilocybin, the active ingredient in “magic mushrooms,” might be useful in treating post-traumatic stress disorder.

The festival participants were under the influence during their trauma, not in a controlled clinical setting, but researchers say studying them could help scientists better understand how psychedelics might be used to treat patients after a traumatic event.

The researchers surveyed more than 650 Nova survivors. Roughly 23 percent said they took hallucinogens like LSD, also known as acid, and about 27 percent used MDMA, a stimulant and psychedelic commonly called molly or ecstasy. Many attendees used more than one substance.

Rubbish litters the ground in a stand of trees, including a sign that reads, “Chill Out Zone.”

Participants in the survey described a variety of experiences while using drugs on Oct. 7, ranging from hallucinations to extreme clarity, from panic to resolve and from paralysis to action.

“Even though people were dropping on the ground screaming next to me, I felt a growing sense of confidence, that I was invincible,” said Yarin Reichenthal, 26, a judo coach who experienced the attack while on LSD. “I felt enlightened. I felt no fear at all.”

In many instances, according to preliminary results of the researchers’ survey, even festivalgoers using the same drugs experienced the attack in different ways — variances that might have meant the difference between life and death.

The scientists cautioned that the study was not a comprehensive review of how every participant at the rave fared because so many were killed.

“We only hear the stories of those who made it out alive,” said Roy Salomon, a cognitive science professor at the University of Haifa and a co-author of the study. “So our understanding is influenced by survivors’ bias.”

Witnesses said that for many attendees, drug use appeared to hamper their ability to flee for safety. Some ravers were too zoned out on psychedelics to realize what was happening and escape. The researchers said that those experiences were also important to their findings.

“There are two main questions,” said Roee Admon, a University of Haifa psychology professor and a co-author of the study. “How is the traumatic event experienced under different psychedelics, and what might the long-term clinical impact be?”

Professor Admon and Professor Salomon, who are leading the survey, are studying the survivors in the hopes of gleaning information about how drug use affected their experience of trauma. They are also studying how the attendees appear to be recovering and coping. A graduate student, Ophir Netzer, also helped write the study.

Among those who made it out alive, some appeared to be recovering well, while others reported feeling numb and detached. Some said they had increased their drug use since the attack to cope.

“We were all in such a heightened emotional state, which made us all the more vulnerable when the attack began,” said Tal Avneri, 18, who said he stayed relatively lucid on Oct. 7 after taking MDMA. “And when you’re hurt at your most fragile, you can later become numb.”

For devotees of Israel’s trance scene, a festival like Nova is more than just a way to let loose. Many view the raves — often held in forests and deserts, with pounding electronic beats and mind-altering substances — as spiritual journeys amid a like-minded community.

“The love I felt on the dance floor, the raves, the psychedelics — they helped me cope with my mother’s death,” said Yuval Tapuhi, a 27-year-old Nova survivor from Tel Aviv.

Around 6:30 a.m. on Oct. 7, as the sky turned pink and many revelers were beginning the most intense part of their trips, rockets from Gaza suddenly streaked through the sky. Air-raid sirens and loud explosions cut through the music.

Some people fell to the ground and burst out crying, multiple survivors said. Some attendees scrambled to evade the terrorists by hiding in bushes, behind trees or in riverbeds. Others sprinted through open fields, running for hours before reaching safety.

Still others fled in their cars, creating a huge traffic jam at the rave’s main exit, where they became easy targets for Palestinian gunmen swarming across the border.

Amid the gunfire and rocket barrage, Mr. Reichenthal, the judo coach, had what he describes as a transcendent experience, which he credits with his survival. The LSD trip, he said, made it feel as if his fear had been stripped away, and he murmured Bible verses as he ran to safety.

Many survivors described their initial panic being replaced with a coolheaded resolve — a function, one expert said, of stress counteracting the effects of the drugs.

Sebastian Podzamczer, 28, attributed his survival, at least in part, to a huge rush of energy and clarity he experienced while using MDMA. The drug’s influence, he said, gave him what he believes was the strength to carry his girlfriend, who had been paralyzed by fear.

Mr. Podzamczer, a former combat medic in the Israeli military, had PTSD after his service. Taking psychedelics recreationally, he said, helped him unravel some of that pain, allowing him to speak about his military service without shaking and panicking.

“But I always thought that if I was caught in an extreme situation like that, I’d be paralyzed by panic from my PTSD,” Mr. Podzamczer said. Instead, he found that the MDMA he took at the rave “helped me stay afloat, to act more quickly and decisively.”

High levels of stress can almost “overwhelm” the effects of a drug and jolt people back to reality, said Rick Doblin, the founder of the Multidisciplinary Association for Psychedelic Studies, a nonprofit organization in California that finances scientific research but is not involved in the Nova survivor study.

Almog Arad, 28, said that her acid trip kicked in after the attack began but that the circumstances quickly “minimized” the drug’s effects. While she continued to see intense colors and patterns as she fled, her decision-making remained relatively sound, she said.

“Adrenaline was the strongest drug I took that day,” she said.

The University of Haifa researchers plan to follow the survivors for years, tracking their neural activity with functional magnetic resonance imaging, or fMRI.

They have presented their preliminary findings in a preprint paper, a scientific manuscript undergoing peer review.

Compared with survivors who used other substances, attendees who used MDMA are recovering better and showing less severe symptoms of PTSD, according to the study’s preliminary conclusions.

Many MDMA users in particular, the researchers said, believe that using the drug helped them survive. That perception, the scientists added, could have influenced their ability to cope with their trauma.

“The way in which we remember the trauma has a great impact on how we process it,” Professor Admon said. “So even if a victim’s perception is subjective, it will still have a great impact on their recovery.”

The researchers said it was difficult to assess the exact doses that the festivalgoers used, making it hard to analyze how different quantities of drugs affected people.

Mr. Reichenthal said he witnessed one man at the rave who appeared to be so out of it that, as gunfire sounded and another raver tried to help him escape, the man instead began to flirt with her. “How lucky it is that destiny brought us together,” Mr. Reichenthal recalled the man saying. He does not believe the man survived the attack.

Psychologists and survivors said those ravers who took ketamine, a psychedelic with an intense tranquilizing and dissociative effect, appeared to be one of the groups hit hardest.

Immediately after the Nova massacre, a group of therapists and experts established a volunteer relief network for survivors, known as Safe Heart, that provided psychological support for more than 2,200 people. The group has collaborated with the University of Haifa researchers as well as with a separate, qualitative study led by Guy Simon, a psychotherapist and doctoral candidate at Bar-Ilan University.

“Most people who undergo a traumatic experience do not develop PTSD,” Professor Admon said. “Identifying those who do and treating them as early as possible is critical to their healing.”


Aaron Boxerman is a Times reporting fellow with a focus on international news.


What to know about the crisis of violence, politics and hunger engulfing Haiti

A woman carrying two bags of rice walks past burning tires

A long-simmering crisis over Haiti’s ability to govern itself, particularly after a series of natural disasters and an increasingly dire humanitarian emergency, has come to a head in the Caribbean nation, as its de facto president remains stranded in Puerto Rico and its people starve and live in fear of rampant violence. 

The chaos engulfing the country has been bubbling for more than a year, only for it to spill over on the global stage on Monday night, as Haiti’s unpopular prime minister, Ariel Henry, agreed to resign once a transitional government is brokered by other Caribbean nations and parties, including the U.S.

But the very idea of a transitional government brokered not by Haitians but by outsiders is one of the main reasons Haiti, a nation of 11 million, is on the brink, according to humanitarian workers and residents who have called for Haitian-led solutions. 

“What we’re seeing in Haiti has been building since the 2010 earthquake,” said Greg Beckett, an associate professor of anthropology at Western University in Canada. 

Haitians take shelter in the Delmas 4 Olympic Boxing Arena

What is happening in Haiti and why?

In the power vacuum that followed the assassination of democratically elected President Jovenel Moïse in 2021, Henry, who was prime minister under Moïse, assumed power, with the support of several nations, including the U.S. 

When Haiti failed to hold elections multiple times — Henry said it was due to logistical problems or violence — protests rang out against him. By the time Henry announced last year that elections would be postponed again, to 2025, armed groups that were already active in Port-au-Prince, the capital, dialed up the violence.

Even before Moïse’s assassination, these militias and armed groups existed alongside politicians who used them to do their bidding, including everything from intimidating the opposition to collecting votes. With the dwindling of the country’s elected officials, though, many of these rebel forces have engaged in excessively violent acts, and have taken control of at least 80% of the capital, according to a United Nations estimate.

Those groups, which include paramilitary and former police officers who pose as community leaders, have been responsible for the increase in killings, kidnappings and rapes since Moïse’s death, according to the Uppsala Conflict Data Program at Uppsala University in Sweden. According to a report from the U.N. released in January, more than 8,400 people were killed, injured or kidnapped in 2023, a 122% increase from 2022.

“January and February have been the most violent months in the recent crisis, with thousands of people killed, or injured, or raped,” Beckett said.

Image: Ariel Henry

Armed groups who had been calling for Henry’s resignation have already attacked airports, police stations, sea ports, the Central Bank and the country’s national soccer stadium. The situation reached a breaking point earlier this month when the country’s two main prisons were raided, leading to the escape of about 4,000 prisoners. The beleaguered government declared a 72-hour state of emergency, including a night-time curfew — but its authority had evaporated by then.

Aside from human-made catastrophes, Haiti still has not fully recovered from the devastating earthquake in 2010 that killed about 220,000 people and left 1.5 million homeless, many of them living in poorly built and exposed housing. More earthquakes, hurricanes and floods have followed, hampering efforts to rebuild infrastructure and a sense of national unity.

Since the earthquake, “there have been groups in Haiti trying to control that reconstruction process and the funding, the billions of dollars coming into the country to rebuild it,” said Beckett, who specializes in the Caribbean, particularly Haiti. 

Beckett said that control initially came from politicians and subsequently from armed groups supported by those politicians. Political “parties that controlled the government used the government for corruption to steal that money. We’re seeing the fallout from that.”

Haiti Experiences Surge Of Gang Violence

Many armed groups have formed in recent years claiming to be community groups carrying out essential work in underprivileged neighborhoods, but they have instead been accused of violence, even murder. One of the two main groups, G-9, is led by a former elite police officer, Jimmy Chérizier — also known as “Barbecue” — who has become the public face of the unrest and claimed credit for various attacks on public institutions. He has openly called for Henry to step down and called his campaign an “armed revolution.”

But caught in the crossfire are the residents of Haiti. In just one week, 15,000 people have been displaced from Port-au-Prince, according to a U.N. estimate. But people have been trying to flee the capital for well over a year, with one woman telling NBC News that she is currently hiding in a church with her three children and another family with eight children. The U.N. said about 160,000 people have left Port-au-Prince because of the swell of violence in the last several months. 

Deep poverty and famine are also a serious danger. Gangs have cut off access to the country’s largest port, Autorité Portuaire Nationale, and food could soon become scarce.

Haiti's uncertain future

A new transitional government may dismay the Haitians and their supporters who call for Haitian-led solutions to the crisis. 

But the creation of such a government would come after years of democratic disruption and the crumbling of Haiti’s political leadership. The country hasn’t held an election in eight years. 

Haitian advocates and scholars like Jemima Pierre, a professor at the University of British Columbia, Vancouver, say foreign intervention, including from the U.S., is partially to blame for Haiti’s turmoil. The U.S. has routinely sent thousands of troops to Haiti, intervened in its government and supported unpopular leaders like Henry.

“What you have over the last 20 years is the consistent dismantling of the Haitian state,” Pierre said. “What intervention means for Haiti, what it has always meant, is death and destruction.”

Image: Workers unload humanitarian aid from a U.S. helicopter at Les Cayes airport in Haiti, Aug. 18, 2021.

In fact, the country’s situation was so dire that Henry was forced to travel abroad in the hope of securing a U.N. peacekeeping deal. He went to Kenya, which had agreed last October to send 1,000 troops to coordinate an East African and U.N.-backed alliance to help restore order in Haiti, but Kenya’s courts decided the plan was unconstitutional, and it is now on hold. The result has been Haiti fending for itself.

“A force like Kenya, they don’t speak Kreyòl, they don’t speak French,” Pierre said. “The Kenyan police are known for human rights abuses. So what does it tell us as Haitians that the only thing that you see that we deserve are not schools, not reparations for the cholera the U.N. brought, but more military with the mandate to use all kinds of force on our population? That is unacceptable.”

Henry was forced to announce his planned resignation from Puerto Rico, as threats of violence — and armed groups taking over the airports — have prevented him from returning to his country.  

An elderly woman runs in front of the damaged police station building with tires burning in front of it

Now that Henry is to stand down, it is far from clear what the armed groups will do or demand next, aside from the right to govern. 

“It’s the Haitian people who know what they’re going through. It’s the Haitian people who are going to take destiny into their own hands. Haitian people will choose who will govern them,” Chérizier said recently, according to The Associated Press .

Haitians and their supporters have put forth their own solutions over the years, holding that foreign intervention routinely ignores the voices and desires of Haitians. 

In 2021, both Haitian and non-Haitian church leaders, women’s rights groups, lawyers, humanitarian workers, the Voodoo Sector and more created the Commission to Search for a Haitian Solution to the Crisis . The commission has proposed the “ Montana Accord ,” outlining a two-year interim government with oversight committees tasked with restoring order, eradicating corruption and establishing fair elections. 


CORRECTION (March 15, 2024, 9:58 a.m. ET): An earlier version of this article misstated which university Jemima Pierre is affiliated with. She is a professor at the University of British Columbia, Vancouver, not the University of California, Los Angeles, (or Columbia University, as an earlier correction misstated).


Patrick Smith is a London-based editor and reporter for NBC News Digital.


Char Adams is a reporter for NBC BLK who writes about race.


Computer Science > Computation and Language

Title: Pretraining and Updating Language- and Domain-Specific Large Language Model: A Case Study in Japanese Business Domain

Abstract: Several previous studies have considered language- and domain-specific large language models (LLMs) as separate topics. This study explores the combination of a non-English language and a high-demand industry domain, focusing on a Japanese business-specific LLM. This type of model requires expertise in the business domain, strong language skills, and regular updates of its knowledge. We trained a 13-billion-parameter LLM from scratch using a new dataset of business texts and patents, and continually pretrained it with the latest business documents. Further, we propose a new benchmark for Japanese business domain question answering (QA) and evaluate our models on it. The results show that our pretrained model improves QA accuracy without losing general knowledge, and that continual pretraining enhances adaptation to new information. Our pretrained model and business domain benchmark are publicly available.
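The paper's actual training pipeline is not reproduced here, but the sketch below shows, under assumed model and data names, what continual pretraining of a causal language model on newer business documents can look like using the Hugging Face transformers library.

```python
# A rough sketch of continual pretraining on newer domain text (causal LM
# objective) with Hugging Face transformers. The model name and data file
# are placeholders, not the authors' actual checkpoint or corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "your-org/japanese-business-13b"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Latest business documents, one plain-text example per line (placeholder path).
dataset = load_dataset("text", data_files={"train": "latest_business_docs.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="continual-pretrain",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1,
                           learning_rate=1e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continue training the pretrained model on the new documents
```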


