Social Engineering in Cybersecurity: Effect Mechanisms, Human Vulnerabilities and Attack Methods



International Conference on Advanced Research in Technologies, Information, Innovation and Sustainability

ARTIIS 2021: Advanced Research in Technologies, Information, Innovation and Sustainability, pp. 474–483

Social Engineering: The Art of Attacks

  • Nelson Duarte (ORCID: orcid.org/0000-0001-6650-0778)
  • Nuno Coelho (ORCID: orcid.org/0000-0001-5517-9181)
  • Teresa Guarda (ORCID: orcid.org/0000-0002-9602-0692)
  • Conference paper
  • First Online: 17 November 2021


Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1485))

The correct management of information systems security is often overlooked in technological measures and management efforts, and although many tools now exist to address security threats, the human aspect has been neglected. This paper discusses the human factors that can lead to intrusions through social engineering. Social engineering is a method attackers use to gain access to systems by exploiting flaws in human behaviour, known as mental preconceptions. Social engineering is a risk to information security and must be considered just as important as the technological measures. The paper gives a brief introduction to the history of social engineering, discusses psychological manipulation and human weaknesses, describes social engineering attacks and their use of authority and fear, and explains how a social engineering attack is executed, from monetizing the scam to exploiting identities.


Ballagas, R., Rohs, M., Sheridan, J.G., Borchers, J.: BYOD: bring your own device. In: Proceedings of the Workshop on Ubiquitous Display Environments (2004)


Krombholz, K., Hobel, H., Huber, M., Weippl, E.: Advanced social engineering attacks. J. Inf. Secur. Appl. 22, 113–122 (2015)

Drucker, P.F.: Landmarks of Tomorrow: A Report on the New “Post-Modern” World, 1st edn. Harper, New York (1959)

RSA: Anatomia de um ataque. RSA, 17 Julho 2013. http://blogs.rsa.com/anatomy-of-an-attack/

Schwartz, M.J.: Microsoft Hacked: Joins Apple, Facebook, Twitter. InformationWeek, 25 February 2013. https://www.darkreading.com/attacks-and-breaches/microsoft-hacked-joins-apple-facebook-twitter/d/d-id/1108800. Accessed 26 Feb 2021

Perlroth, N.: Hackers in China attacked the times for last 4 months. N. Y. Times (2013). https://www.nytimes.com/2013/01/31/technology/chinese-hackers-infiltrate-new-york-times-computers.html . Accessed 26 Feb 2021

Huber, M., Mulazzani, M., Leithner, M., Schrittwieser, S., Wondracek, G., Weippl, E.: Social snapshots: digital forensics for online social networks. In: 27th Annual Computer Security Applications Conference (2011)

Maurya, R.: Social Engineering: Manipulating the Human, vol. 1. Scorpio Net Security Services (2013)

Kamis, A.: Behavior Decision Theory, istheory.byu.edu (2011). http://istheory.byu.edu/wiki/Behavioral_ . Accessed 1 Sept 2017

Jackson, S.: Research Methods and Statistics: A Critical Thinking Approach. Wadsworth, Cengage Learning, Belmont, CA (2008)

Qin, T., Burgoon, J.K.: An investigation of heuristics of human judgment in detecting deception and potential implications in countering social engineering. Intell. Secur. Inf., 152–159 (2007)

Peltier, T.: Social engineering: concepts and solutions. Inf. Syst. Secur. 5 (15), 13–21 (2006)


Granger, S.: Social engineering fundamentals, Part I: hacker tactics. SecurityFocus (2001)

Foozy, C.F.M., Ahmad, R., Abdollah, M.F., Yusof, R., Mas’ud, M.Z.: Generic taxonomy of social engineering attack and defence mechanism for handheld computer study. In: Malaysian Technical Universities International Conference on Engineering & Technology, Batu Pahat, Johor (2011)

Wagner, A.: Social Engineering Attacks, Techniques & Prevention. Lightning Source, UK (2019)

Parsons, K., McCormac, A., Pattinson, M., Butavicius, M., Jerram, C.: Phishing for the truth: a scenario-based experiment of users’ behavioural response to emails. In: IFIP Advances in Information and Communication Technology (2013)

Tam, L., Glassman, M., Vandenwauver, M.: The psychology of password management: a tradeoff between security and convenience. Behav. Inf. Technol., 233–244 (2010)

Workman, M.: A Test of Interventions for Security Threats from Social Engineering. Emerald Group Publishing Limited (2008)

Mitnick, K.D., Simon, W.L.: The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders, & Deceivers. Wiley, Indianapolis (2006)

Huber, M., Kowalski, S., Nohlberg, M., Tjoa, S.: Towards automating social engineering using social networking site. In: CSE 2009 International Conference on Computational Science and Engineering, vol. 3, pp. 117–124 (2009)


Silva, F.: Classificação Taxonómica dos Ataques de Engenharia Social (2013)

Kharraz, A., Robertson, W., Balzarotti, D., Bilge, L., Kirda, E.: Cutting the gordian knot: a look under the hood of ransomware attacks. In: International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, pp. 3–24, July 2015

Feizollah, A., Anuar, N.B., Salleh, R., Wahab, A.W.A.: A review on feature selection in mobile malware detection. Digit. Investig. 13, 22–37 (2015)

Sahs, J., Khan, L.: A machine learning approach to android malware detection. In: 2012 European Intelligence and Security Informatics Conference, pp. 141–147, August 2012

Haley, K.: Symantec’s Cloud Security Threat Report Shines a Light on the Cloud’s Real Risks, 24 June 2019. https://symantec-enterprise-blogs.security.com/blogs/feature-stories/symantecs-cloud-security-threat-report-shines-light-clouds-real-risks . Accessed 9 Mar 2021


Author information

Authors and Affiliations

ISLA Santarém, Santarém, Portugal

Nelson Duarte & Teresa Guarda

ISLA Gaia, Santarém, Portugal

Nuno Coelho

Universidad Estatal Peninsula de Santa Elena, Santa Elena, Ecuador

Teresa Guarda

CIST – Centro de Investigación en Sistemas y Telecomunicaciones, Universidad Estatal Península de Santa Elena, La Libertad, Ecuador

Algoritmi Centre, Minho University, Guimarães, Portugal


Editor information

Editors and Affiliations

Universidad Estatal Península de Santa Elena, La Libertad, Ecuador

Teresa Guarda

Universidade do Minho, Guimarães, Portugal

Filipe Portela

Manuel Filipe Santos


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper.

Duarte, N., Coelho, N., Guarda, T. (2021). Social Engineering: The Art of Attacks. In: Guarda, T., Portela, F., Santos, M.F. (eds) Advanced Research in Technologies, Information, Innovation and Sustainability. ARTIIS 2021. Communications in Computer and Information Science, vol 1485. Springer, Cham. https://doi.org/10.1007/978-3-030-90241-4_36


DOI: https://doi.org/10.1007/978-3-030-90241-4_36

Published: 17 November 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-90240-7

Online ISBN: 978-3-030-90241-4

eBook Packages: Computer Science, Computer Science (R0)

  • Open access
  • Published: 05 March 2020

Predicting individuals’ vulnerability to social engineering in social networks

  • Samar Muslah Albladi (ORCID: orcid.org/0000-0001-9246-9540)
  • George R. S. Weir

Cybersecurity, volume 3, Article number: 7 (2020)


The popularity of social networking sites has attracted billions of users to engage and share their information on these networks. The vast amount of circulating data and information exposes these networks to several security risks. Social engineering is one of the most common types of threat that social network users may face. Training and increasing users’ awareness of such threats is essential for maintaining continuous and safe use of social networking services. Identifying the most vulnerable users in order to target them with such training programs is desirable for increasing their effectiveness. Few studies have investigated the effect of individuals’ characteristics on predicting their vulnerability to social engineering in the context of social networks. To address this gap, the present study developed a novel model to predict user vulnerability based on several perspectives of user characteristics. The proposed model includes interactions between different social network-oriented factors such as level of involvement in the network, motivation to use the network, and competence in dealing with threats on the network. The results indicate that most of the considered user characteristics influence user vulnerability either directly or indirectly. Furthermore, the present study provides evidence that individuals’ characteristics can be used to identify vulnerable users so that these weaknesses can be considered when designing training and awareness programs.

Introduction

Individuals and organisations are becoming increasingly dependent on working with computers, accessing the Internet, and, more importantly, sharing data through virtual communication. This makes cybersecurity one of today’s most significant issues. Protecting people and organisations from being targeted by cybercriminals is becoming a priority for industry and academia (Gupta et al. 2018). This is due to the substantial damage that may result from losing valuable data and documents in such attacks. Rather than exploiting technical means to reach their victims, cybercriminals may instead use deceptive social engineering (SE) strategies to convince their targets to accept the lure. Social engineers exploit individuals’ motives, habits, and behaviour to manipulate their victims (Mitnick and Simon 2003).

Security practitioners often still rely on technical measures to protect against online threats, overlooking the fact that cybercriminals target human weak points to spread and conduct their attacks (Krombholz et al. 2015). According to the human-factor report (Proofpoint 2018), the number of social engineering attacks that exploit human vulnerabilities increased dramatically over the year examined. This raises the need for solutions that guide users toward acceptable defensive behaviour in the social network (SN) setting. Identifying the user characteristics that make people more or less vulnerable to social engineering threats is a major step toward protecting against such threats (Albladi and Weir 2018). Knowing where weakness resides can help focus awareness-raising and target training sessions at those individuals, with the aim of reducing their likely victimisation.

With such objectives in mind, the present research developed a conceptual model that integrates user-related factors and dimensions as a means of predicting users’ vulnerability to social engineering-based attacks. This study used a scenario-based experiment to examine the relationships between the behavioural constructs in the conceptual model and the model’s ability to predict user vulnerability to SE victimisation.

The organisation of this paper is as follows. The Theoretical background section briefly analyses the related literature considered in developing the proposed model. The methods used to evaluate this model are described in the Methods section. Following this, the results of the analysis are summarised in the Results section. The Discussion section discusses the findings, while the Theoretical and practical implications section presents the theoretical and practical implications. An outline approach to a semi-automated advisory system is proposed in the A semi-automated security advisory system section. Finally, the Conclusion section draws conclusions from this work.

Theoretical background

People’s vulnerability to cyber-attacks, and particularly to social engineering-based attacks, is not a newly emerging problem. Social engineering issues have been studied in email environments (Alseadoon et al. 2015; Halevi et al. 2013; Vishwanath et al. 2016), organisational environments (Flores et al. 2014, 2015), and recently in social network environments (Algarni et al. 2017; Saridakis et al. 2016; Vishwanath 2015). Yet, the present research argues that the context of these exploits affects people’s ability to detect them, and that these contextual influences create new characteristics and elements which warrant further investigation.

The present study investigated user characteristics in social networks, particularly Facebook, from different angles such as people’s behaviour, perceptions, and socio-emotions, in an attempt to identify the factors that could predict individuals’ vulnerability to SE threats. People’s vulnerability level is identified based on their response to a variety of social engineering scenarios. The following sub-sections address in detail the relationship between each factor of the three perspectives and user susceptibility to SE victimisation.

Habitual perspective

Due to the importance of understanding the impact of people’s habitual factors on their susceptibility to SE in SNs, this study aims to measure the effect of level of involvement, number of SN connections, percentage of known friends among the network’s connections, and SN experience on predicting user susceptibility to SE in the conceptual model.

Level of involvement

This construct is intended to measure the extent to which a user engages in Facebook activities. When people are highly involved with a communication service, they tend to be relaxed and ignore any cues associated with such service that warn of deception risk (Vishwanath et al. 2016 ). User involvement in a social network can be measured by the number of minutes spent on the network every day and the frequency of commenting on other people’s status updates or pictures (Vishwanath 2015 ). Time spent on Facebook is positively associated with disclosing highly sensitive information (Chang and Heo 2014 ). Furthermore, people who are more involved in the network are believed to be more exposed to social engineering victimisation (Saridakis et al. 2016 ; Vishwanath 2015 ).

Conversely, highly involved users are supposed to have more experience with the different types of threat that could occur online. Yet, it has been observed that active Facebook users are less concerned about sharing their private information as they usually have less restrictive privacy settings (Halevi et al. 2013 ). Users’ tendency to share private information could relate to the fact that individuals who spend a lot of time using the network usually exhibit high trust in the network (Sherchan et al. 2013 ). Therefore, the following hypotheses have been proposed.

Ha1. Users with a higher level of involvement will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb1. The user’s level of involvement positively influences the user’s experience with cybercrime.

◦ Hb2. The user’s level of involvement positively influences the user’s trust.

Number of connections

Despite the fact that having a large number of SN connections can increase people’s life satisfaction if they are motivated to engage in the network to maintain friendships (Rae and Lonborg 2015), a high number of contacts in the network is claimed to increase vulnerability to online risks (Buglass et al. 2016; Vishwanath 2015). Risky behaviour such as disclosing personal information on Facebook is closely associated with users’ desire to maintain and increase the number of existing friends (Chang and Heo 2014; Cheung et al. 2015). Users with a high number of social network connections are motivated to be more involved in the network by spending more time sharing information and maintaining their profiles (Madden et al. 2013).

Furthermore, a high number of connections might suggest that users are connected not only with their friends but also with strangers. Vishwanath (2015) has claimed that connecting with strangers on Facebook can be considered the first level of cyber-attack victimisation, as such individuals are usually less suspicious of the possible threats that can result from connecting with strangers in the network. Alqarni et al. (2016) adopted this view to test the relationship between connecting with strangers (assumed to provide the basis for phishing attacks) and users’ perceived severity of, and vulnerability to, phishing attacks. Their study indicated a negative relationship between the number of strangers the user is connected to and the user’s perception of the severity of, and their vulnerability to, phishing attacks on Facebook. Therefore, if users are connected mostly with known friends on Facebook, this could be seen as a mark of less vulnerable individuals. With all of these points in mind, the following hypotheses are generated.

Ha2: Users with a higher number of connections will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb3: The user’s number of connections positively influences the user’s level of involvement.

Ha3: Users with higher connections with known friends will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

Social network experience

People’s experience in using information communication technologies makes them more competent to detect online deception in SNs (Tsikerdekis and Zeadally 2014). For instance, it has been found that the more time that has elapsed since joining Facebook, the more capable the user is of detecting SE attacks (Algarni et al. 2017). Furthermore, although some researchers argue that computer experience has no significant impact on phishing susceptibility (Halevi et al. 2013; Saridakis et al. 2016), other research on email phishing found that the number of years of using the Internet and the number of years of using email have a positive impact on people’s ability to detect email phishing (Alseadoon 2014; Sheng et al. 2010). Therefore, the present study suggests that the more experienced users are with SNs, the less vulnerable they are to SE victimisation.

Additionally, in the context of the social network, Internet experience has been found to predict precautionary behaviour and to foster greater sensitivity to the risks associated with using Facebook (Van Schaik et al. 2018). Thus, years of experience in using the network could increase the individual’s awareness of the risk associated with connecting with strangers. Accordingly, the present study postulates that more experienced users will have a higher percentage of connections with known friends in the network.

Ha4: Users with a higher level of experience with social network will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

◦ Hb4: The user’s social network experience positively influences the user’s connections with known friends.

Perceptual perspective

People’s risk perception, competence, and cybercrime experience are the three perceptual factors believed to influence susceptibility to social engineering attacks. The strength and direction of each factor’s impact are discussed below.

Risk perception

Facebook users have differing levels of risk perception that might affect their decisions in times of risk. Vishwanath et al. (2016) described risk perception as the bridge between users’ previous knowledge of an expected risk and their competence to deal with that risk. Many studies have found that perceiving the risk associated with engaging in online activities directly influences avoidance of online services (Riek et al. 2016) and, more importantly, decreases vulnerability to online threats (Vishwanath et al. 2016). Facebook users’ perceived risk of privacy and security threats significantly predicts their strict privacy and security settings (Van Schaik et al. 2018). Thus, if online users are aware of the potential risks they might encounter on Facebook and of their consequences, they will probably avoid clicking on malicious links and communicating with strangers on the network. This indicates that risk perception contributes to the user’s competence in dealing with online threats and should lead to a decrease in susceptibility to SE. Therefore, the following relationships have been proposed.

Ha5: Users with a higher level of risk perception will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

◦ Hb5: The user’s perceived risk positively influences the user’s competence.

Competence

User competence has been considered an essential determinant of end-user capability to accomplish tasks in many different fields. In the realm of information systems, user competence can be defined as the individual’s knowledge of the intended technology and the ability to use it effectively (Munro et al. 1997). To gain insight into user competence in detecting security threats in the context of online social networks, it is fundamental to investigate the multidimensional space that determines this competence level (Albladi and Weir 2017). The role of user competence and its dimensions in facilitating the detection of online threats is still a controversial topic in the information security field. The dimensions used in the present study to measure the concept are security awareness, privacy awareness, and self-efficacy. The scales used to measure these factors can determine the level of user competence in evaluating risks associated with social network usage.

User competence in dealing with risky situations in a social network setting is a major predictor of the user’s response to online threats. When individuals feel competent to control their information in social networks, they have been found to be less vulnerable to victimisation (Saridakis et al. 2016). Furthermore, self-efficacy, one of the competence dimensions, has been found to play a critical role in users’ safe and protective behaviour online (Milne et al. 2009). People who have confidence in their ability to protect themselves online, as well as high security awareness, can be perceived as highly competent users when facing cyber-attacks (Wright and Marett 2010). This study hypothesised that highly competent users are less susceptible to SE victimisation.

Ha6: Users with a higher level of competence will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

Cybercrime experience

Past victimisation has been observed to profoundly affect a person’s view of happiness and safety in general (Mahuteau and Zhu 2016). Such unpleasant experiences also tend to change behaviour, for example by reducing the likelihood of engaging in online shopping (Bohme and Moore 2012) or even increasing antisocial behaviour (Cao and Lin 2015). Furthermore, previous email phishing victimisation has been claimed to raise user awareness and vigilance and thus prevent users from being victimised again (Workman 2007), although recent studies found this effect to be non-significant (Iuga et al. 2016; Wang et al. 2017). Experience with cybercrime could therefore also serve as an indicator of people’s weakness in protecting themselves from such threats.

Experience with cybercrime has been found to increase people’s perceived risk of social network services (Riek et al. 2016). Those who are knowledgeable and have previous experience with online threats can be assumed to have high risk perception (Vishwanath et al. 2016). However, unlike the context of email phishing, little is known about the role of prior knowledge and experience with cybercrime in preventing people from being vulnerable to social engineering attacks in the context of social networks. Thus, this study proposes that past experience could raise the user’s risk perception but could also serve as a predictor of the user’s risk of being victimised again. To this end, the following hypotheses have been proposed.

Ha7: Users with a previous experience with cybercrime will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb6: The user’s experience with cybercrime positively influences the user’s perceived risk.

Socio-emotional perspective

Little is known regarding the impact that this perspective has on SE victimisation in a SN context. However, previous research has highlighted the positive effect of people’s general disposition to trust on their victimisation in the email phishing context (Alseadoon et al. 2015), which encourages the present study to investigate further socio-emotional factors, such as the dimensions of user trust and motivation, and their possible impact on users’ risky behaviour.

Some studies in email phishing (e.g., Alseadoon et al. 2015 ; Workman 2008 ) stress that the disposition to trust is a predictor of the user’s probability of being deceived by cyber-attacks. In the context of social networks, trust can be derived from the members’ trust for each other as well as trusting the network provider. These two dimensions of trust have been indicated to negatively influence people’s perceived risk in disclosing personal information (Cheung et al. 2015 ). Trust has also been found to strongly increase disclosing personal information among social networks users (Beldad and Hegner 2017 ; Chang and Heo 2014 ). With all of this in mind, the present study hypothesised that trusting the social network provider as well as other members may cause higher susceptibility to cyber-attacks.

Ha8: Users with a higher level of trust will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

According to uses and gratifications theory, people use the communication technologies that fulfil their needs (Joinson 2008). Users’ motivation to use communication technologies must therefore be taken into consideration in order to understand online user behaviour. This construct has been acknowledged by researchers in fields such as marketing (Chiu et al. 2014) and mobile technology (Kim et al. 2013) in order to understand their target users, but information security research has rarely adopted this view when seeking to understand online users’ risky behaviour. Users can be motivated by different stimuli to engage in social networks, such as entertainment or information seeking (Basak and Calisir 2015). Additionally, people use Facebook for social reasons such as maintaining existing relationships and making new friends (Rae and Lonborg 2015). With regard to SE victimisation, these motivations can shed light on the user’s behaviour at times of risk. For example, hedonically motivated users, who usually seek enjoyment, may be persuaded to click on links that offer new games or apps, while socially motivated users, who generally look to meet new people online, are more likely to connect with strangers. Such connections with strangers are nowadays considered risky behaviour (Alqarni et al. 2016). Therefore, this study predicts that users’ vulnerability to social engineering-based attacks will differ based on their motives for accessing the social network.

Users’ differing motivations for using social networking sites can explain their online attitudes, such as the tendency to disclose personal information in social networks (Chang and Heo 2014). Additionally, people’s perceived benefit of network engagement has a positive impact on their willingness to share their photos online (Beldad and Hegner 2017). Thus, the present study assumes that motivated users are more vulnerable to SE victimisation than others. Additionally, motivated users may be inclined to be more trusting when using technology (Baabdullah 2018). This motivation could lead individuals to spend more time and show higher involvement in the network (Ross et al. 2009). This involvement could ultimately lead motivated individuals to experience, or at least become familiar with, the different types of cybercrime that can occur in the network. Hence, the following hypotheses have been postulated.

Ha9: Users with a higher level of motivation will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb7: The user’s motivation positively influences the user’s trust.

◦ Hb8: The user’s motivation positively influences the user’s level of involvement.

◦ Hb9: The user’s motivation positively influences the user’s experience with cybercrime.

The previous sub-sections explain the nature and the directions of the relationships among the constructs in the present study. Based on these 18 proposed hypotheses, a novel conceptual model has been developed and presented in Fig.  1 . This conceptual model relies on three different perspectives which are believed to predict user behaviour toward SE victimisation on Facebook. Developing and validating such a holistic model gives a clear indication of the contribution of the present study.

Figure 1: Research Model

Methods

To evaluate the hypotheses of the conceptual model, an online questionnaire was designed using the Qualtrics online survey tool. The questionnaire incorporated three main parts, starting with questions about participants’ demographics, followed by questions measuring the constructs of the proposed model, and finally a scenario-based experiment. An invitation email was sent to a number of faculty staff in two universities, asking them to distribute the online questionnaire among their students and staff.

Hair et al. (2017) suggested using a guideline that relies on Cohen’s (1988) recommendations to calculate the required sample size using power estimates. In this case, for 9 predictors (the number of independent variables in the conceptual model) with an estimated medium effect size of 0.15, the target sample size should be at least 113 to achieve a power level of 0.80 at a significance level of 0.05 (Soper 2012). In this study, 316 participants completed the questionnaire (after primary data screening). The descriptive analysis of participants’ demographics in Table 1 revealed a variety of profiles in terms of gender (39% male, 61% female), education level, and education major. The majority of participants were younger adults (aged 18–24), representing 76% of the total. However, this was expected, as the survey was undertaken in two universities where students form the bulk of the population.
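
As a rough illustration of this kind of power-based sample-size calculation (the authors used Soper's calculator, not this code), the following Python sketch searches for the smallest sample size whose overall regression F-test reaches 80% power with 9 predictors, f² = 0.15 and α = 0.05; the search loop and the noncentrality convention λ = f²·N are assumptions of this sketch.

```python
# A rough re-creation (not the authors' code) of a power-based sample size
# calculation for the overall F-test in multiple regression.
# Assumed inputs: 9 predictors, medium effect size f^2 = 0.15,
# alpha = 0.05, target power = 0.80.
from scipy.stats import f as f_dist, ncf

def power_for_n(n, predictors=9, f2=0.15, alpha=0.05):
    """Power of the overall regression F-test for a sample of size n."""
    df_num = predictors
    df_den = n - predictors - 1
    if df_den <= 0:
        return 0.0
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)   # critical F under H0
    lam = f2 * n                                     # noncentrality parameter
    return 1 - ncf.cdf(f_crit, df_num, df_den, lam)  # P(reject H0 | effect f2)

n = 20
while power_for_n(n) < 0.80:
    n += 1
print(n, round(power_for_n(n), 3))  # lands close to the ~113 reported above
```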

Measurement scales

The proposed conceptual model includes five reflective factors and four second-order formative constructs: risk, competence, trust, and motivation. The repeated indicator approach was used to compute the values of the formative constructs. This method recommends using the same number of items for all first-order factors in order to guarantee that all first-order factors carry the same weight on the second-order factors and to ensure that no weight bias exists (Ringle et al. 2012).

The scales used to measure user habits in SNs were adopted from Fogel and Nehmad (2009). To measure the risk perception dimensions, scales were adapted from Milne et al. (2009), with some modifications to fit the present study context. The scales used to measure the three dimensions of user competence were adopted from Albladi and Weir (2017). Motivation items were adopted from previous literature (Al Omoush et al. 2012; Basak and Calisir 2015; Orchard et al. 2014; Yang and Lin 2014). The scale used to measure users’ trust was adopted, with some modification, from Fogel and Nehmad (2009) and Chiu et al. (2006). Appendix 1 presents a summary of the measurement items.

A scenario-based experiment was chosen as the empirical approach to examining users’ susceptibility to SE victimisation. In such experiments, participants are asked to review a set of scripted information, which can be in the form of text or images, and then to react or respond to this predetermined information (Rungtusanatham et al. 2011). This method is considered suitable and realistic for many social engineering studies (e.g., Algarni et al. 2017; Iuga et al. 2016) because of the ethical concerns associated with conducting real attacks. Our scenario-based experiment includes 6 images of Facebook posts (4 high-risk scenarios and 2 low-risk scenarios). Each post contains a type of cyber-attack chosen from the most prominent cyber-attacks that occur in social networks (Gao et al. 2011).

In the study model, only the high-risk scenarios (phishing, clickjacking with an executable file, malware, and a phishing scam) were used to measure user susceptibility to SE attacks. Comparing individuals’ responses to the high-risk attacks with their responses to the low-risk attacks examines whether users rely on their characteristics when judging the different scenarios rather than on other influencing factors such as visual message triggers (Wang et al. 2012). Participants were asked to indicate their response to these Facebook posts, as if they had encountered them in their real accounts, by rating a number of statements such as “I would click on this button to read the file” on a 5-point Likert scale from 1 “strongly disagree” to 5 “strongly agree”. Appendix 2 includes a summary of the scenarios used in this study.
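
The exact scoring scheme is not reproduced here; the sketch below illustrates one plausible way to turn Likert responses to the four high-risk scenarios into a susceptibility score (the column names and the averaging rule are illustrative assumptions, not the authors' scheme).

```python
# Illustrative scoring of the scenario-based experiment: susceptibility as the
# mean agreement (1-5 Likert) across the four high-risk scenarios.
import pandas as pd

responses = pd.DataFrame({
    "phishing":      [5, 2, 1],
    "clickjacking":  [4, 1, 2],
    "malware":       [5, 2, 1],
    "phishing_scam": [4, 3, 1],
})  # 1 = strongly disagree ... 5 = strongly agree (would click / comply)

responses["susceptibility"] = responses.mean(axis=1)
print(responses)
```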

Analysis approach

To evaluate the proposed model, partial least squares structural equation modelling (PLS-SEM) was used because of its suitability for complex predictive models that consist of a combination of formative and reflective constructs (Götz et al. 2010), even with some limitations regarding data normality and sample size (Hair et al. 2012). The SmartPLS v3 software package (Ringle et al. 2015) was used to analyse the model and its associated hypotheses.

To evaluate the study model, three procedures were conducted. First, the PLS algorithm was used to provide standard model estimations such as path coefficients, the coefficient of determination (R² values), effect sizes, and collinearity statistics. Second, a bootstrapping approach was used to test the significance of the structural model relationships. In this approach, the collected data sample is treated as the population, and cases are drawn with replacement to generate a large number of random bootstrap samples (a recommended default is 5000), each with the same number of cases as the original sample (Henseler et al. 2009). The present study conducted the bootstrapping procedure with 5000 bootstrap samples, two-tailed testing, and a 5% significance level.
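
For intuition, the following sketch shows a bootstrap significance test in the spirit described above; an OLS slope stands in for the PLS path estimate and the data are simulated, so this is an illustration, not the SmartPLS algorithm itself.

```python
# Bootstrap significance test: resample cases with replacement, re-estimate the
# coefficient, and use the bootstrap standard error for a t-value.
import numpy as np

rng = np.random.default_rng(0)
n = 316                                   # sample size reported in the paper
trust = rng.normal(size=n)                # illustrative construct scores
suscept = 0.3 * trust + rng.normal(size=n)

def path_coef(x, y):
    return np.polyfit(x, y, 1)[0]         # slope used as the "path coefficient"

boot = np.empty(5000)                     # 5000 resamples, as recommended
for b in range(5000):
    idx = rng.integers(0, n, size=n)      # resample cases with replacement
    boot[b] = path_coef(trust[idx], suscept[idx])

estimate = path_coef(trust, suscept)
t_value = estimate / boot.std(ddof=1)     # estimate / bootstrap standard error
ci = np.percentile(boot, [2.5, 97.5])     # two-tailed 95% interval
print(round(estimate, 3), round(t_value, 2), ci)
```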

Finally, a blindfolding procedure was used to evaluate the predictive relevance (Q²) of the structural model. In this approach, part of the data points are omitted and treated as missing from the constructs’ indicators, and the parameters are estimated using the remaining data points (Hair et al. 2017). These estimates are then used to predict the omitted data points, which are compared with the real omitted values to compute the Q² value. Blindfolding is a sample reuse approach that is applied only to endogenous constructs (Henseler et al. 2009). Endogenous constructs are the variables that are affected by other variables in the study model (Götz et al. 2010), such as user susceptibility, involvement, and trust.
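
A heavily simplified sketch of the blindfolding idea follows: every D-th observation of an endogenous variable is omitted in turn, predicted from the remaining cases, and Q² is computed as 1 − SSE/SSO. A plain linear regression replaces the PLS-SEM re-estimation, and the omission distance D = 7 and the simulated data are assumptions of this sketch.

```python
# Simplified blindfolding for Stone-Geisser Q^2 = 1 - SSE/SSO.
import numpy as np

def q_squared(X, y, D=7):
    n = len(y)
    sse, sso = 0.0, 0.0
    for offset in range(D):
        omit = np.arange(offset, n, D)                  # cases treated as missing
        keep = np.setdiff1d(np.arange(n), omit)
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        pred = X[omit] @ beta
        sse += np.sum((y[omit] - pred) ** 2)            # prediction error
        sso += np.sum((y[omit] - y[keep].mean()) ** 2)  # error of a mean-only model
    return 1 - sse / sso                                # > 0 => predictive relevance

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(316), rng.normal(size=(316, 3))])
y = X @ np.array([0.2, 0.4, -0.3, 0.1]) + rng.normal(size=316)
print(round(q_squared(X, y), 3))
```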

Results

The part of the conceptual model that covers the relations between the measurement items and their associated factors is called the measurement model, while the hypothesised relationships among the different factors constitute the structural model (Tabachnick and Fidel 2013). The present study’s measurement model, which includes all the constructs along with their indicators’ outer loadings, can be found in Appendix 3. The measurement model analysis in Table 2 reveals that Cronbach’s alpha and the composite reliability were acceptable for all constructs, as they were above the threshold of 0.70. Additionally, since the average variance extracted (AVE) for all constructs was above the threshold of 0.5 (Hair et al. 2017), the convergent validity of the model’s reflective constructs was confirmed.
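
For reference, the three measurement-model statistics mentioned above can be computed with their standard formulas as in the sketch below; the item data and loadings are illustrative, not taken from the study.

```python
# Standard formulas for the measurement-model statistics
# (thresholds: alpha and CR > 0.70, AVE > 0.5).
import numpy as np

def cronbach_alpha(items):
    """items: (n_cases, k_items) responses for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def ave(loadings):
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

rng = np.random.default_rng(2)
factor = rng.normal(size=300)
items = np.column_stack([0.8 * factor + 0.4 * rng.normal(size=300) for _ in range(4)])
loadings = [0.80, 0.85, 0.78, 0.82]
print(cronbach_alpha(items), composite_reliability(loadings), ave(loadings))
```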

However, in order to assess the model’s predictive ability and to examine the significance of relationships between the model’s constructs, the structural model should be tested. The assessment of the structural model involves the following testing steps.

Assessing collinearity

This step is vital to determine whether there are any collinearity issues among the predictors of each endogenous construct; failing to do so could lead to biased path coefficient estimates if a critical collinearity issue exists among the construct predictors (Hair et al. 2017). Table 3 presents all the endogenous constructs (columns) and indicates that the variance inflation factor (VIF) values for all predictors of each endogenous construct (rows) are below the threshold of 5. Thus, no collinearity issues exist in the structural model.
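
A minimal sketch of such a VIF check, assuming illustrative predictor names and simulated data rather than the study's actual construct scores:

```python
# VIF check for the predictors of one endogenous construct (threshold: VIF < 5).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
predictors = pd.DataFrame(
    rng.normal(size=(316, 4)),
    columns=["involvement", "trust", "competence", "perceived_risk"],
)
exog = sm.add_constant(predictors)            # include a constant, as in regression
vifs = {col: variance_inflation_factor(exog.values, i)
        for i, col in enumerate(exog.columns) if col != "const"}
print(vifs)                                   # all values should be below 5
```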

Assessing path coefficients (hypotheses testing)

The path coefficients were calculated using the bootstrap re-sampling procedure (Hair et al. 2017). This procedure provides estimates of the direct impact that each construct has on user susceptibility to cyber-attack. The results of the direct effect test in Table 4 show that trust (t = 5.202, p < 0.01) is the strongest predictor of the user’s susceptibility to SE victimisation, followed by the user’s involvement (t = 5.002, p < 0.01), cybercrime experience (t = 3.736, p < 0.01), social network experience (t = − 3.015, p < 0.01), and the percentage of known friends among Facebook connections (t = − 2.735, p < 0.01). The direct effects of user competence in dealing with threats (t = − 2.474, p < 0.05) and the number of connections (t = − 2.428, p < 0.05) were relatively small, yet still statistically significant in explaining the target variable. However, the impact of the number of connections on users’ susceptibility was negative, which contradicts hypothesis Ha2, which claimed that this relationship is positive.

Most importantly, the results indicated that perceived risk and motivation have no direct effect on users’ vulnerability (p > 0.05). This could be because both factors are second-order formative variables whose first-order factors affect users’ susceptibility in different directions. As can be seen from the regression analysis in Table 5, perceived risk is the second-order factor formed by perceived severity of threat, which has a significant negative effect on users’ susceptibility, and perceived likelihood of threat, which has a positive impact on users’ susceptibility. Their joint effect is therefore not significant, because the opposite effects of the two dimensions of perceived risk cancel each other out. Thus, Ha5 can be considered partially supported.

The situation with motivation is similar, as it is also a second-order formative factor whose first-order factors (hedonic and social) have opposite effects on users’ susceptibility. Table 5 presents the regression analysis of the first-order factors for the motivation construct. The results provide evidence that hedonic motivation is negatively related to users’ susceptibility, while social motivation is positively associated with it. However, when the two dimensions of motivation were aggregated into one index to measure the total effect of users’ motivation (both direct and indirect), as illustrated in Table 6, the model revealed a significant predictor of users’ susceptibility (t = 3.854, p < 0.01). Thus, the direct effect of motivation on user susceptibility is statistically rejected, while its total effect is statistically significant and ranks among the strongest predictors in the study model.

Evaluating the total effect of a particular construct on user susceptibility is considered useful, especially if the goal of the study is to explore the impact of the relationships between different drivers to predict one latent construct (Hair et al. 2017). The total effect includes both the construct’s direct effect and its indirect effects through mediating constructs in the model. The total effect analysis in Table 6 revealed that most of the constructs have a significant overall impact on user susceptibility (p < 0.05). Although the number of connections was shown to have a significant negative direct effect on user susceptibility, its total effect, when all direct and indirect relationships are considered, is very low and not significant (t = − 0.837, p > 0.05). Furthermore, both the direct and total effects of perceived risk were found to be not substantial (t = − 1.559, p > 0.05).

The remaining hypotheses (group b) examine the relationships between the independent constructs of the study model, tested according to the path coefficient estimates between the related constructs. Table 7 shows that all nine hypotheses are statistically significant (p < 0.05). The most substantial relationship was between social network experience and the percentage of known friends among Facebook connections (t = 6.091, p < 0.01), followed by the favourable impacts of motivation and level of involvement on increasing users’ trust (t = 4.821 and t = 3.914, respectively).

Furthermore, motivation (t = 3.640, p  < 0.01) and the number of connections (t = 3.106, p  < 0.01) are two factors found to increase users’ level of involvement in the network. Level of involvement also plays a notable role in raising people’s previous experience with cybercrime (t = 2.532, p  < 0.05), while past cybercrime expertise significantly increases people’s perceived risk associated with using Facebook (t = 2.968, p  < 0.01). Nevertheless, the contribution of perceived risk in raising user competence level to deal with online threats was not very strong, although considered statistically significant (t = 2.241, p  < 0.05).

Finally, there was no significant difference with regard to the user characteristics that affect people’s susceptibility or resistance to the high-risk scenarios and low-risk scenarios. This means that participants rely on their perceptions and experience to judge those scenarios.

The coefficient of determination (R²)

The coefficient of determination is a traditional criterion used to evaluate the structural model’s predictive power. In this study, it represents the joint effect of all the model variables in explaining the variance in people’s susceptibility to SE attacks. According to Hair et al. (2017), an acceptable R² value is hard to determine, as it may vary depending on the study discipline and model complexity. Cohen (1988) suggested a rule of thumb for assessing R² values in models with several independent variables: 0.26, 0.13, and 0.02 are considered substantial, moderate, and weak, respectively. Table 8 shows the coefficient of determination for the endogenous variables in the study model. The R² values indicate that the nine prediction variables together have substantial predictive power, explaining 33.5% of the variation in users’ susceptibility to SE attacks. Furthermore, the combined effect of users’ involvement and motivation on users’ trust is considered moderate, explaining 13.2% of the variation in users’ trust.
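
As a small helper, Cohen's rule of thumb quoted above can be encoded directly; the function name and the thresholds-as-code are just an illustration of the classification, not part of the study's tooling.

```python
# Cohen's (1988) rule of thumb for R^2, encoded as a small helper.
def classify_r2(r2):
    if r2 >= 0.26:
        return "substantial"
    if r2 >= 0.13:
        return "moderate"
    if r2 >= 0.02:
        return "weak"
    return "negligible"

print(classify_r2(0.335), classify_r2(0.132))  # susceptibility, trust (values above)
```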

Predictive relevance (Q²)

To measure the model’s predictive capability, a blindfolding procedure was used to obtain the model’s predictive relevance (Q² value). Stone-Geisser’s Q² value, which assesses how well a model predicts the data of omitted cases, should be higher than zero to indicate that the path model has cross-validated predictive relevance (Hair et al. 2017). Table 8 presents the results of the predictive relevance test and shows that all of the endogenous constructs in the research model have predictive relevance greater than zero, which means that the model has appropriate predictive ability.

Hair et al. (2017) and Henseler et al. (2014) recommend using SRMR and RMS theta as indices of a model’s goodness of fit. SRMR represents the discrepancy between the observed correlations and the model-implied correlations, and its value should be less than 0.08 (Hu and Bentler 1998), while an RMS theta value of less than 0.12 indicates an appropriate model fit (Hair et al. 2017; Henseler et al. 2014). The Normed Fit Index (NFI) is an incremental fit measure that compares the structural model with a null model of entirely uncorrelated variables; an NFI value of more than 0.90 represents good model fit (Bentler and Bonett 1980). Additionally, Dijkstra and Henseler (2015) recommend using the squared Euclidean distance (d_LS) and the geodesic distance (d_G) to assess model fit by comparing the distance between the sample covariance matrix and the model-implied covariance matrix. Comparing the original values of d_LS and d_G with their confidence intervals indicates a good model fit if the values are less than the upper bound of the 95% confidence interval.

Table 9 shows the model fit indices obtained from the SmartPLS report. The empirical test of the structural model revealed a good model fit: the SRMR value was 0.05, the RMS theta value was 0.099, the NFI was 0.858 (which rounds to 0.9), and the values of d_LS and d_G were less than the upper bounds of their confidence intervals. The results of all the considered fit indices therefore reflect a satisfactory model fit, given the complexity of the present study model.
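
To make the SRMR definition concrete, the sketch below computes it from two small correlation matrices; the matrices are placeholders, not the study's observed and implied correlations.

```python
# SRMR as the root mean square of the residuals between the observed and the
# model-implied correlation matrices (cut-off < 0.08).
import numpy as np

def srmr(observed_corr, implied_corr):
    observed_corr = np.asarray(observed_corr)
    implied_corr = np.asarray(implied_corr)
    idx = np.tril_indices_from(observed_corr)   # lower triangle incl. diagonal
    resid = observed_corr[idx] - implied_corr[idx]
    return np.sqrt(np.mean(resid ** 2))

obs = np.array([[1.00, 0.42, 0.31],
                [0.42, 1.00, 0.25],
                [0.31, 0.25, 1.00]])
imp = np.array([[1.00, 0.40, 0.33],
                [0.40, 1.00, 0.22],
                [0.33, 0.22, 1.00]])
print(round(srmr(obs, imp), 4))
```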

Demographic variables effect

One of the present study’s goals is to examine whether specific user demographics (age, gender, education, and major) are associated with users’ susceptibility to social engineering attacks. To explore this, regression analysis and variance tests (t-test and ANOVA) were conducted. Table 10 summarises the results.

Gender was found to affect users’ susceptibility to SE victimisation (std. beta = 0.133, p < 0.05), and the t-test indicates that women are more vulnerable to victimisation (t(271.95) = 2.415, p < 0.05). The user’s major also has a significant effect on vulnerability (std. beta = 0.112, p < 0.05). When comparing the groups via an ANOVA test, users specialised in technical majors such as computing and engineering were found to be less susceptible to social engineering attacks than those specialised in humanities and business (F(6) = 5.164, p < 0.001). Furthermore, the results show that age has no significant impact on user vulnerability (std. beta = 0.096, p > 0.05), although a comparison of the age group means shows that younger adults (M = 1.97, SD = 0.99) are less susceptible than older adults (M = 2.56, SD = 0.92). Moreover, educational level has no significant impact on users’ vulnerability, as revealed by the regression analysis (std. beta = 0.068, p > 0.05).
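
The following sketch reproduces the type of tests described above (Welch's t-test for gender, one-way ANOVA for major) on simulated data; the group sizes, means, and variances are illustrative assumptions, not the study's data.

```python
# Demographic comparisons: Welch's t-test for gender, one-way ANOVA for major.
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(4)
suscept_female = rng.normal(2.3, 0.9, size=190)
suscept_male = rng.normal(2.0, 0.9, size=126)
print(ttest_ind(suscept_female, suscept_male, equal_var=False))  # Welch's t-test

technical = rng.normal(1.8, 0.8, size=100)
humanities = rng.normal(2.4, 0.9, size=110)
business = rng.normal(2.3, 0.9, size=106)
print(f_oneway(technical, humanities, business))                 # one-way ANOVA
```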

Discussion

Facebook users’ involvement level is revealed in the present study to have a strong significant effect on their susceptibility to SE victimisation. This finding confirms the results of previous research (Saridakis et al. 2016; Vishwanath 2015). Since most social network users are highly involved in online networks, it is hard to generalise that all involved people are vulnerable. However, high involvement affects other critical factors in the present model, namely experience with cybercrime and trust, which in turn have powerful impacts on users’ susceptibility to victimisation.

The number of friends was found to have a direct negative impact on people’s vulnerability, contrary to the present study’s hypothesis, which assumed the relationship would be positive in line with previous claims that a large network size makes individuals more vulnerable to SN risks (Buglass et al. 2016; Vishwanath 2015). Facebook users seem to accept friend requests from strangers to expand their friendship network. Around 48% of the participants in this study stated that they personally know less than 10% of their Facebook network. Connecting with strangers on the network has previously been seen as the first step in falling prey to social engineering attacks (Vishwanath 2015), while also being regarded as a measure of risky behaviour on social networks (Alqarni et al. 2016). A high percentage of strangers among a user’s connections can be seen as a sign of the user’s low level of suspicion.

Furthermore, social network experience has been found to significantly predict people’s susceptibility to social engineering in the present study. People’s ability to detect social network deception has been said to depend on information communication technology literacy (Tsikerdekis and Zeadally 2014 ). Thus, experienced users are more familiar with cyber-attacks such as phishing and clickjacking, and easily detect them. This is further supported by Algarni et al. ( 2017 ), who pointed out that the less time that has elapsed since the user joined Facebook, the more susceptible he or she is to social engineering. Yet, their research treated user experience with social networks as a demographic variable and did not examine whether this factor might affect other aspects of user behaviour. For instance, results from the present study reveal that users who are considered more experienced in social networks have fewer connections with strangers (t = 6.091, p  < 0.01), which further explains why they are less susceptible than novice users.

Perception of risk has no direct influence on people’s vulnerability, but the present study found perceived risk to significantly increase people’s level of competence to deal with social engineering attacks. This also accords with the Van Schaik et al. ( 2018 ) study, which found that Facebook users with high risk perception adopt precautionary behaviours such as restrictive privacy and security-related settings. Most importantly, perceived cybercrime risk has also been indicated as influencing people to take precautions and avoid using online social networks (Riek et al. 2016 ).

Measuring user competence levels would contribute to our understanding of the reasons behind user weakness in detecting online security or privacy threats. In the present study, the measure of an individual’s competence level in dealing with cybercrime was based upon three dimensions: security awareness, privacy awareness, and self-efficacy. The empirical results show that this competence measure can significantly predict the individual’s ability to detect SE attacks on Facebook. Individuals’ perception of their self-ability to control the content shared on social network websites has been previously considered a predictor of their ability to detect social network threats (Saridakis et al. 2016 ), as individuals who have this confidence in their self-ability as well as in their security knowledge seem to be competent in dealing with cyber threats (Flores et al. 2015 ; Wright and Marett 2010 ).

Furthermore, our results accord with the finding of Riek et al. (2016) that previous cybercrime experience has a positive and substantial impact on users’ perceived risk. Yet, this higher risk perception did not decrease users’ vulnerability in the present study. This could be because experience and knowledge of the existence of threats are not necessarily reflected in people’s behaviour. For example, individuals who had previously undertaken security awareness training still underestimated the importance of some security practices, such as frequently changing passwords (Kim 2013).

The present research found that people’s trust in the social network’s provider and members was the strongest determinant of their vulnerability to social engineering attacks (t = 5.202, p < 0.01). Previous email phishing research (e.g., Alseadoon et al. 2015; Workman 2008) has also stressed that people’s disposition to trust has a significant impact on their weakness in detecting phishing emails. Yet, little was known about the impact of trust in providers and other members of social networks on people’s vulnerability to cyber-attacks. These two types of trust have been found to decrease users’ perception of the risks associated with disclosing private information on SNs (Cheung et al. 2015). Similarly, trusting social network providers to protect members’ private information has made Facebook users (especially females) more willing to share their photos on the network (Beldad and Hegner 2017). These findings draw attention to the huge responsibility that social network providers have to protect their users. In parallel, users should be encouraged to be cautious about their privacy and security.

People’s motivation to use social networks has no direct influence on their vulnerability to SE victimisation, as evidenced by the results of this study. Yet, this motivation significantly affects different essential aspects of user behaviour and perception, such as user involvement, trust, and previous experience with cybercrime, which in turn substantially predict user vulnerability. This result accords with the claim that people’s motivation for using SNs increases their disclosure of private information (Beldad and Hegner 2017; Chang and Heo 2014).

Theoretical and practical implications

Most of the proposed measures to mitigate SE threats in the literature (e.g. (Fu et al. 2018 ; Gupta et al. 2018 )) are focused on technical solutions. Despite the importance and effectiveness of these proposed technical solutions, social engineers try to exploit human vulnerabilities; hence we require solutions that understand and guard against human weaknesses. Given the limited number of studies that investigate the impact of human characteristics on predicting vulnerability to social network security threats, the present study can be considered useful, having critical practical implications that should be acknowledged in this section.

The developed conceptual model shows an acceptable prediction ability of people’s vulnerability to social engineering in social networks as revealed by the results of this study. The proposed model could be used by information security researchers (or researchers from different fields) to predict responses to different security-oriented risks. For instance, decision-making research could benefit from the proposed framework and model as they indicate new perspectives on user-related characteristics that could affect decision-making abilities in times of risk.

Protecting users' personal information is an essential element in promoting sustainable use of social networks (Kayes and Iamnitchi 2017). SN providers should offer better privacy rules and policies and develop more effective security and privacy settings. A live threat-reporting channel should also be provided in SNs, so that specific threatening posts or accounts can be reported quickly and the number of potential victims reduced. Providing security and privacy-related tools could also help increase users' satisfaction with social networks.

Despite the importance of online awareness campaigns as well as the rich training programs that organisations adopt, problems persist because humans are still the weakest link (Aldawood and Skinner 2018 ). Changing beliefs and behaviour is a complex procedure that needs more research. However, the present study offers clear insight into specific individual characteristics that make people more vulnerable to cybercrimes. Using these characteristics to design training programs is a sensible approach to the tuning of security awareness messages. Similarly, our results will be helpful in conducting more successful training programs that incorporate the identified essential attributes from the proposed perspectives, as educational elements to increase people’s awareness. While these identified factors might reflect a user’s weak points, the factors could also be targeted by enforcing behavioural security strategies in order to mitigate social engineering threats.

The developed conceptual model could be used in the assessment process for an organisation’s employees, especially those working in sensitive positions. Also, the model and associated scales could be of help in employment evaluation tests, particularly in security-critical institutions, since the proposed model may predict those weak aspects of an individual that could increase his/her vulnerability to social engineering.

A semi-automated security advisory system

One practical use of the proposed prediction model can be demonstrated by integrating it into a semi-automated advisory system (Fig. 2). Based on the idea of user profiling, this research has established a practical solution which can semi-automatically predict users' vulnerability to various types of social engineering attacks.

Figure 2. A semi-automated advisory system

The designed semi-automated advisory system could be used to classify social network users according to their vulnerability type and level after they complete an assessment survey. The local administrator can determine the threshold and the priority for each type of attack based on their knowledge. The network provider could then send awareness posts to each segment that target the particular group's needs. Assessing social network users and segmenting them based on their behaviour and vulnerabilities is essential in order to design relevant advice that meets users' needs. Yet, since social engineering techniques are rapidly changing and improving, the attack scenarios used in the assessment step should be updated from time to time. Users registered in the semi-automated advisory system also need to be reassessed regularly in order to capture any changes in their vulnerability.
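The segmentation step of this workflow can be illustrated with a small sketch. The attack types, thresholds, and 0-1 vulnerability scores below are hypothetical placeholders; in practice the scores would come from the assessment survey driven by the prediction model, and the thresholds from the local administrator.

```python
# Illustrative sketch of the segmentation step in the advisory workflow described
# above. Attack types, thresholds, and scores are hypothetical examples.
from dataclasses import dataclass

ADMIN_THRESHOLDS = {        # administrator-chosen cut-offs per attack type (0-1 scale)
    "phishing": 0.6,
    "identity_theft": 0.7,
    "fraud": 0.5,
}

@dataclass
class UserAssessment:
    user_id: str
    scores: dict  # predicted vulnerability per attack type, 0 (robust) to 1 (vulnerable)

def segment(users):
    """Group users by the attack types for which they exceed the admin threshold."""
    segments = {attack: [] for attack in ADMIN_THRESHOLDS}
    for user in users:
        for attack, threshold in ADMIN_THRESHOLDS.items():
            if user.scores.get(attack, 0.0) >= threshold:
                segments[attack].append(user.user_id)
    return segments

users = [
    UserAssessment("u1", {"phishing": 0.8, "identity_theft": 0.2, "fraud": 0.4}),
    UserAssessment("u2", {"phishing": 0.3, "identity_theft": 0.9, "fraud": 0.7}),
]
# Each segment would then receive awareness posts targeting that attack type,
# and users would be reassessed periodically as attack scenarios are updated.
print(segment(users))
```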

Significant outcomes were noted with practical implications for how social network users could be assessed and segmented based on their characteristics, behaviour, and vulnerabilities, in turn facilitating their protection from such threats by targeting them with relevant advice and education that meets users’ needs. This system is considered cost and time effective, as integrating individuals’ needs with the administrator’s knowledge of existing threats could avoid the overhead and inconvenience of sending blanket advice to all users.

The study develops a conceptual model to test the factors that influence social network users' judgement of social engineering-based attacks, in order to identify the weakest points of users' detection behaviour and thereby predict vulnerable individuals. Proposing such a novel conceptual model helps to bridge the gap between theory and practice by providing a better understanding of how to predict vulnerable users. The findings of this research indicate that most of the considered user characteristics influence users' vulnerability either directly or indirectly. This research also contributes to the existing knowledge of social engineering in social networks, particularly the research area of predicting user behaviour toward security threats, by proposing a new influencing perspective, the socio-emotional, which has not been satisfactorily reported in the literature before as a dimension affecting user vulnerability. This new perspective could also be incorporated to investigate user behaviour in several other contexts.

Using a scenario-based experiment instead of conducting a real attack study is one of the main limitations of the present study but was considered unavoidable due to ethical considerations. However, the selected attack scenarios were designed carefully to match recent and real social engineering-based attacks on Facebook. Additionally, the present study was undertaken in full consciousness of the fact that when measuring people’s previous experience with cybercrime, some participants might be unaware of their previous victimisation and so might respond inaccurately. In order to mitigate this limitation, different types of SE attacks have been considered in the scale that measures previous experience with cybercrime, such as phishing, identity theft, harassment, and fraud.

Furthermore, this research has focused only on academic communities, as all the participants in this study were students, academics, and administrative staff of two universities. This could be seen as a limitation, as the results may not reflect the behaviour of the general public. The university context is important, however, as cyber-criminals have recently targeted universities because of their importance in providing online resources to their students and community (Öğütçü et al. 2016). Additionally, while several steps have been taken to ensure the inclusion of all influential factors in the model, it is not feasible to guarantee that all possibly influencing attributes are included in this study. Further efforts are needed in this sphere, as predicting human behaviour is a complex task.

The conceptual study model could be used to test user vulnerability to different types of privacy or security hazards associated with the use of social networks: for instance, by measuring users' response to the risk related to loose privacy restrictions, or to sharing private information on the network. Furthermore, investigating whether social network users have different levels of vulnerability to privacy-related and security-related risks is another area of potential future research. The proposed model's prediction efficiency could also be compared across different types of security and privacy threats; this comparison would offer a reasonable future direction for researchers to consider. Future research could focus on improving the proposed model by giving perceived trust greater attention, as this factor was the strongest behaviour predictor in the present model. The novel conceptualisation of users' competence in the conceptual model has proved to have a profound influence on their behaviour toward social engineering victimisation, a finding which can offer additional new insight for future investigations.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Al Omoush KS, Yaseen SG, Atwah Alma’Aitah M (2012) The impact of Arab cultural values on online social networking: the case of Facebook. Comput Hum Behav 28(6):2387–2399. https://doi.org/10.1016/j.chb.2012.07.010


Albladi SM, Weir GRS (2017) Competence measure in social networks. In: 2017 International Carnahan Conference on Security Technology (ICCST). IEEE, pp 1–6. https://doi.org/10.1109/CCST.2017.8167845

Albladi SM, Weir GRS (2018) User characteristics that influence judgment of social engineering attacks in social networks. Hum-Cent Comput Info Sci 8(1):5. https://doi.org/10.1186/s13673-018-0128-7

Aldawood H, Skinner G (2018) Educating and raising awareness on cyber security social engineering: a literature review. In: 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering. IEEE, pp 62–68. https://doi.org/10.1109/TALE.2018.8615162

Algarni A, Xu Y, Chan T (2017) An empirical study on the susceptibility to social engineering in social networking sites: the case of Facebook. Eur J Inf Syst 26(6):661–687. https://doi.org/10.1057/s41303-017-0057-y

Alqarni Z, Algarni A, Xu Y (2016) Toward predicting susceptibility to phishing victimization on Facebook. In: 2016 IEEE International Conference on Services Computing (SCC). IEEE, pp 419–426. https://doi.org/10.1109/SCC.2016.61

Alseadoon IMA (2014) The impact of users’ characteristics on their ability to detect phishing emails. Doctoral Thesis. Queensland University of Technology. https://eprints.qut.edu.au/72873/ .

Alseadoon I, Othman MFI, Chan T (2015) What is the influence of users’ characteristics on their ability to detect phishing emails? In: Sulaiman HA, Othman MA, Othman MFI, Rahim YA, Pee NC (eds) Advanced computer and communication engineering technology, vol 315. Springer International Publishing, Cham, pp 949–962. https://doi.org/10.1007/978-3-319-07674-4_89


Baabdullah AM (2018) Consumer adoption of Mobile Social Network Games (M-SNGs) in Saudi Arabia: the role of social influence, hedonic motivation and trust. Technol Soc 53:91–102. https://doi.org/10.1016/j.techsoc.2018.01.004

Basak E, Calisir F (2015) An empirical study on factors affecting continuance intention of using Facebook. Comput Hum Behav 48:181–189. https://doi.org/10.1016/j.chb.2015.01.055

Beldad AD, Hegner SM (2017) More photos from me to thee: factors influencing the intention to continue sharing personal photos on an Online Social Networking (OSN) site among young adults in the Netherlands. Int J Hum–Comput Interact 33(5):410–422. https://doi.org/10.1080/10447318.2016.1254890

Bentler PM, Bonett DG (1980) Significance tests and goodness of fit in the analysis of covariance structures. Psychol Bull 88(3):588–606. https://doi.org/10.1037//0033-2909.88.3.588

Bohme R, Moore T (2012) How do consumers react to cybercrime? In: 2012 eCrime Researchers Summit. IEEE, pp 1–12. https://doi.org/10.1109/eCrime.2012.6489519

Buglass SL, Binder JF, Betts LR, Underwood JDM (2016) When ‘friends’ collide: social heterogeneity and user vulnerability on social network sites. Comput Hum Behav 54:62–72. https://doi.org/10.1016/j.chb.2015.07.039

Cao B, Lin W-Y (2015) How do victims react to cyberbullying on social networking sites? The influence of previous cyberbullying victimization experiences. Comput Hum Behav 52:458–465. https://doi.org/10.1016/j.chb.2015.06.009

Chang C-W, Heo J (2014) Visiting theories that predict college students’ self-disclosure on Facebook. Comput Hum Behav 30:79–86. https://doi.org/10.1016/j.chb.2013.07.059

Cheung C, Lee ZWY, Chan TKH (2015) Self-disclosure in social networking sites: the role of perceived cost, perceived benefits and social influence. Internet Res 25(2):279–299. https://doi.org/10.1108/IntR-09-2013-0192

Chiu C-M, Hsu M-H, Wang ETG (2006) Understanding knowledge sharing in virtual communities: an integration of social capital and social cognitive theories. Decis Support Syst 42(3):1872–1888. https://doi.org/10.1016/j.dss.2006.04.001

Chiu C-M, Wang ETG, Fang Y-H, Huang H-Y (2014) Understanding customers’ repeat purchase intentions in B2C e-commerce: the roles of utilitarian value, hedonic value and perceived risk. Inf Syst J 24(1):85–114. https://doi.org/10.1111/j.1365-2575.2012.00407.x

Cohen J (1988) Statistical power analysis for the behavioral sciences, 2nd edn


Dijkstra TK, Henseler J (2015) Consistent and asymptotically normal PLS estimators for linear structural equations. Comput Stat Data Anal 81:10–23. https://doi.org/10.1016/j.csda.2014.07.008


Flores WR, Holm H, Nohlberg M, Ekstedt M (2015) Investigating personal determinants of phishing and the effect of national culture. Inf Comput Secur 23(2):178–199. https://doi.org/10.1108/ICS-05-2014-0029

Flores WR, Holm H, Svensson G, Ericsson G (2014) Using phishing experiments and scenario-based surveys to understand security behaviours in practice. Inf Manag Comput Secur 22(4):393–406. https://doi.org/10.1108/IMCS-11-2013-0083

Fogel J, Nehmad E (2009) Internet social network communities: risk taking, trust, and privacy concerns. Comput Hum Behav 25(1):153–160. https://doi.org/10.1016/j.chb.2008.08.006

Fu Q, Feng B, Guo D, Li Q (2018) Combating the evolving spammers in online social networks. Comput Secur 72:60–73. https://doi.org/10.1016/j.cose.2017.08.014

Gao H, Hu J, Huang T, Wang J, Chen Y (2011) Security issues in online social networks. IEEE Internet Comput 15(4):56–63. https://doi.org/10.1109/MIC.2011.50

Götz O, Liehr-Gobbers K, Krafft M (2010) Evaluation of structural equation models using the partial least squares (PLS) approach. In: Esposito Vinzi V, Chin W, Henseler J, Wang H (eds) Handbook of partial least squares. Springer Berlin Heidelberg, pp 691–711. https://doi.org/10.1007/978-3-540-32827-8_30

Gupta BB, Arachchilage NAG, Psannis KE (2018) Defending against phishing attacks: taxonomy of methods, current issues and future directions. Telecommun Syst 67(2):247–267. https://doi.org/10.1007/s11235-017-0334-z

Hair JF, Hult GTM, Ringle CM, Sarstedt M (2017) A primer on partial least squares structural equation modeling (PLS-SEM), 2nd edn. SAGE Publications

Hair JF, Sarstedt M, Ringle CM, Mena JA (2012) An assessment of the use of partial least squares structural equation modeling in marketing research. J Acad Mark Sci 40(3):414–433. https://doi.org/10.1007/s11747-011-0261-6

Halevi T, Lewis J, Memon N (2013) Phishing, personality traits and Facebook. ArXiv preprint. http://arxiv.org/abs/1301.7643


Henseler J, Dijkstra TK, Sarstedt M, Ringle CM, Diamantopoulos A, Straub DW et al (2014) Common beliefs and reality about PLS. Organ Res Methods 17(2):182–209. https://doi.org/10.1177/1094428114526928

Henseler J, Ringle CM, Sinkovics RR (2009) The use of partial least squares path modeling in international marketing. Adv Int Mark 20(1):277–319. https://doi.org/10.1108/S1474-7979(2009)0000020014

Hu L, Bentler PM (1998) Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods 3(4):424–453. https://doi.org/10.1037/1082-989X.3.4.424

Iuga C, Nurse JRC, Erola A (2016) Baiting the hook: factors impacting susceptibility to phishing attacks. Hum-Cent Comput Info Sci 6(1):8. https://doi.org/10.1186/s13673-016-0065-2

Joinson AN (2008) Looking at, looking up or keeping up with people? Motives and uses of Facebook. In: Proceeding of the twenty-sixth annual CHI conference on human factors in computing systems. ACM Press, New York, pp 1027–1036. https://doi.org/10.1145/1357054.1357213

Kayes I, Iamnitchi A (2017) Privacy and security in online social networks: a survey. Online Soc Netw Media 3–4:1–21. https://doi.org/10.1016/j.osnem.2017.09.001

Kim EB (2013) Information security awareness status of business college: undergraduate students. Inf Secur J 22(4):171–179. https://doi.org/10.1080/19393555.2013.828803

Kim YH, Kim DJ, Wachter K (2013) A study of mobile user engagement (MoEN): engagement motivations, perceived value, satisfaction, and continued engagement intention. Decis Support Syst 56(1):361–370. https://doi.org/10.1016/j.dss.2013.07.002

Krombholz K, Hobel H, Huber M, Weippl E (2015) Advanced social engineering attacks. J Inf Secur Appl 22:113–122. https://doi.org/10.1016/j.jisa.2014.09.005

Madden M, Lenhart A, Cortesi S, Gasser U, Duggan M, Smith A, Beaton M (2013) Teens, social media, and privacy. Pew Research Center Retrieved from http://www.pewinternet.org/2013/05/21/teens-social-media-and-privacy/

Mahuteau S, Zhu R (2016) Crime victimisation and subjective well-being: panel evidence from Australia. Health Econ 25(11):1448–1463. https://doi.org/10.1002/hec.3230

Milne GR, Labrecque LI, Cromer C (2009) Toward an understanding of the online consumer’s risky behavior and protection practices. J Consum Aff 43(3):449–473. https://doi.org/10.1111/j.1745-6606.2009.01148.x

Mitnick KD, Simon WL (2003) The art of deception: controlling the human element in security. Wiley

Munro MC, Huff SL, Marcolin BL, Compeau DR (1997) Understanding and measuring user competence. Inf Manag 33(1):45–57. https://doi.org/10.1016/S0378-7206(97)00035-9

Öğütçü G, Testik ÖM, Chouseinoglou O (2016) Analysis of personal information security behavior and awareness. Comput Secur 56:83–93. https://doi.org/10.1016/j.cose.2015.10.002

Orchard LJ, Fullwood C, Galbraith N, Morris N (2014) Individual differences as predictors of social networking. J Comput-Mediat Commun 19(3):388–402. https://doi.org/10.1111/jcc4.12068

Proofpoint (2018) The human factor 2018 report. https://www.proofpoint.com/sites/default/files/pfpt-us-wp-human-factor-report-2018-180425.pdf

Rae JR, Lonborg SD (2015) Do motivations for using Facebook moderate the association between Facebook use and psychological well-being? Front Psychol 6:771. https://doi.org/10.3389/fpsyg.2015.00771

Riek M, Bohme R, Moore T (2016) Measuring the influence of perceived cybercrime risk on online service avoidance. IEEE Trans Dependable Secure Comput 13(2):261–273. https://doi.org/10.1109/TDSC.2015.2410795

Ringle CM, Sarstedt M, Straub D (2012) A critical look at the use of PLS-SEM in MIS quarterly. MIS Q 36(1) Retrieved from https://ssrn.com/abstract=2176426

Ringle CM, Wende S, Becker J-M (2015) SmartPLS 3. SmartPLS, Bönningstedt Retrieved from http://www.smartpls.com

Ross C, Orr ES, Sisic M, Arseneault JM, Simmering MG, Orr RR (2009) Personality and motivations associated with Facebook use. Comput Hum Behav 25(2):578–586. https://doi.org/10.1016/j.chb.2008.12.024

Rungtusanatham M, Wallin C, Eckerd S (2011) The vignette in a scenario-based role-playing experiment. J Supply Chain Manag 47(3):9–16. https://doi.org/10.1111/j.1745-493X.2011.03232.x

Saridakis G, Benson V, Ezingeard J-N, Tennakoon H (2016) Individual information security, user behaviour and cyber victimisation: an empirical study of social networking users. Technol Forecast Soc Chang 102:320–330. https://doi.org/10.1016/j.techfore.2015.08.012

Sheng S, Holbrook M, Kumaraguru P, Cranor LF, Downs J (2010) Who falls for phish? In: Proceedings of the 28th international conference on human factors in computing systems - CHI ‘10. ACM Press, New York, pp 373–382. https://doi.org/10.1145/1753326.1753383

Sherchan W, Nepal S, Paris C (2013) A survey of trust in social networks. ACM Comput Surv 45(4):1–33. https://doi.org/10.1145/2501654.2501661

Soper D (2012) A-priori sample size calculator. https://www.danielsoper.com/statcalc/calculator.aspx?id=1

Tabachnick BG, Fidel LS (2013) Using multivariate statistics, 6th edn. Pearson, Boston

Tsikerdekis M, Zeadally S (2014) Online deception in social media. Commun ACM 57(9):72–80. https://doi.org/10.1145/2629612

Van Schaik P, Jansen J, Onibokun J, Camp J, Kusev P (2018) Security and privacy in online social networking: risk perceptions and precautionary behaviour. Comput Hum Behav 78:283–297. https://doi.org/10.1016/j.chb.2017.10.007

Vishwanath A (2015) Habitual Facebook use and its impact on getting deceived on social media. J Comput-Mediat Commun 20(1):83–98. https://doi.org/10.1111/jcc4.12100


Vishwanath A, Harrison B, Ng YJ (2016) Suspicion, cognition, and automaticity model of phishing susceptibility. Commun Res. https://doi.org/10.1177/0093650215627483

Wang J, Herath T, Chen R, Vishwanath A, Rao HR (2012) Research article phishing susceptibility: an investigation into the processing of a targeted spear phishing email. IEEE Trans Prof Commun 55(4):345–362. https://doi.org/10.1109/TPC.2012.2208392

Wang J, Li Y, Rao HR (2017) Coping responses in phishing detection: an investigation of antecedents and consequences. Inf Syst Res 28(2):378–396. https://doi.org/10.1287/isre.2016.0680

Workman M (2007) Gaining access with social engineering: an empirical study of the threat. Inf Syst Secur 16(6):315–331. https://doi.org/10.1080/10658980701788165

Workman M (2008) A test of interventions for security threats from social engineering. Inf Manag Comput Secur 16(5):463–483. https://doi.org/10.1108/09685220810920549

Wright RT, Marett K (2010) The influence of experiential and dispositional factors in phishing: an empirical investigation of the deceived. J Manag Inf Syst 27(1):273–303. https://doi.org/10.2753/MIS0742-1222270111

Yang H-L, Lin C-L (2014) Why do people stick to Facebook web site? A value theory-based view. Inf Technol People 27(1):21–37. https://doi.org/10.1108/ITP-11-2012-0130


Acknowledgements

We are sincerely grateful to the many individuals who voluntarily participated in this research.

This work is supported by the University of Jeddah, Kingdom of Saudi Arabia as part of the first author’s research conducted at the University of Strathclyde in Glasgow, UK.

Author information

Authors and affiliations

College of Computer Science and Engineering, University of Jeddah, Jeddah, Kingdom of Saudi Arabia

Samar Muslah Albladi

Department of Computer and Information Sciences, University of Strathclyde, Glasgow, UK

George R. S. Weir


Contributions

SMA conducted the study, analysed the collected data, and drafted the manuscript. GRSW participated in drafting the manuscript. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Samar Muslah Albladi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Figure 3. Measurement model

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Albladi, S.M., Weir, G.R.S. Predicting individuals' vulnerability to social engineering in social networks. Cybersecur 3, 7 (2020). https://doi.org/10.1186/s42400-020-00047-5


Received : 08 October 2019

Accepted : 20 February 2020

Published : 05 March 2020

DOI : https://doi.org/10.1186/s42400-020-00047-5


  • Information security
  • Social engineering
  • Social network
  • Vulnerability




Phishing Attacks: A Recent Comprehensive Study and a New Anatomy (Review Article)


  • Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff, United Kingdom

With the significant growth of internet usage, people increasingly share their personal information online. As a result, an enormous amount of personal information and financial transactions become vulnerable to cybercriminals. Phishing is an example of a highly effective form of cybercrime that enables criminals to deceive users and steal important data. Since the first reported phishing attack in 1990, it has evolved into a more sophisticated attack vector. At present, phishing is considered one of the most frequent examples of fraud activity on the Internet. Phishing attacks can lead to severe losses for their victims, including stolen sensitive information, identity theft, and compromised company and government secrets. This article aims to evaluate these attacks by identifying the current state of phishing and reviewing existing phishing techniques. Existing studies have classified phishing attacks according to fundamental phishing mechanisms and countermeasures, while discarding the importance of the end-to-end lifecycle of phishing. This article proposes a new detailed anatomy of phishing which involves attack phases, attacker types, vulnerabilities, threats, targets, attack mediums, and attacking techniques. The proposed anatomy will help readers understand the process lifecycle of a phishing attack, which in turn will increase awareness of these phishing attacks and the techniques being used, and will also help in developing a holistic anti-phishing system. Furthermore, some precautionary countermeasures are investigated, and new strategies are suggested.

Introduction

The digital world is rapidly expanding and evolving, and likewise, so are cybercriminals who rely on the illegal use of digital assets, especially personal information, to inflict damage on individuals. One of the crimes most threatening to internet users is 'identity theft' (Ramanathan and Wechsler, 2012), defined as an attacker impersonating a person in order to steal and use their personal information (i.e., bank details, social security number, or credit card numbers) for the attacker's own gain, not just for stealing money but also for committing other crimes (Arachchilage and Love, 2014). Cybercriminals have continued to develop their methods for stealing such information, but social-engineering-based attacks remain their favorite approach. One of the social engineering crimes that allows the attacker to perform identity theft is the phishing attack. Phishing has been one of the biggest concerns, as many internet users fall victim to it. It is a social engineering attack in which a phisher attempts to lure users into revealing their sensitive information by impersonating a public or trustworthy organization in an automated pattern, so that the internet user trusts the message and reveals sensitive information to the attacker (Jakobsson and Myers, 2006). In phishing attacks, phishers use social engineering techniques to redirect users to malicious websites after receiving an email and following an embedded link (Gupta et al., 2015). Alternatively, attackers could exploit other mediums to execute their attacks, such as Voice over IP (VoIP), Short Message Service (SMS), and Instant Messaging (IM) (Gupta et al., 2015). Phishers have also turned from sending mass-email messages, which target unspecified victims, to more selective phishing, sending their emails to specific victims, a technique called "spear-phishing."

To reach their goals, cybercriminals usually exploit users who lack digital/cyber ethics or who are poorly trained, in addition to exploiting technical vulnerabilities. Susceptibility to phishing varies between individuals according to their attributes and awareness level; therefore, in most attacks, phishers exploit human nature rather than sophisticated technologies. Even though the weakness in the information security chain is attributed to humans more than to technology, there is a lack of understanding about which link in this chain is penetrated first. Studies have found that certain personal characteristics make some people more receptive to various lures (Iuga et al., 2016; Ovelgönne et al., 2017; Crane, 2019). For example, individuals who tend to obey authority more than others are more likely to fall victim to a Business Email Compromise (BEC) that pretends to be from a financial institution and requests immediate action, seeing it as a legitimate email (Barracuda, 2020). Greed is another human weakness that could be exploited by an attacker, for example through emails offering large discounts, free gift cards, and the like (Workman, 2008).

Various channels are used by the attacker to lure the victim through a scam or through an indirect manner to deliver a payload for gaining sensitive and personal information from the victim ( Ollmann, 2004 ). However, phishing attacks have already led to damaging losses and could affect the victim not only through a financial context but could also have other serious consequences such as loss of reputation, or compromise of national security ( Ollmann, 2004 ; Herley and Florêncio, 2008 ). Cybercrime damages have been expected to cost the world $6 trillion annually by 2021, up from $3 trillion in 2015 according to Cybersecurity Ventures ( Morgan, 2019 ). Phishing attacks are the most common type of cybersecurity breaches as stated by the official statistics from the cybersecurity breaches survey 2020 in the United Kingdom ( GOV.UK, 2020 ). Although these attacks affect organizations and individuals alike, the loss for the organizations is significant, which includes the cost for recovery, the loss of reputation, fines from information laws/regulations, and reduced productivity ( Medvet et al., 2008 ).

Phishing is a field of study that merges social psychology, technical systems, security subjects, and politics. Phishing attacks are increasingly prevalent: a recent study (Proofpoint, 2020) found that nearly 90% of organizations faced targeted phishing attacks in 2019. Of these, 88% experienced spear-phishing attacks, 83% faced voice phishing (Vishing), 86% dealt with social media attacks, 84% reported SMS/text phishing (SMishing), and 81% reported malicious USB drops. The 2018 Proofpoint annual report (Proofpoint, 2019a) stated that phishing attacks jumped from 76% in 2017 to 83% in 2018, with all phishing types occurring more frequently than in 2017. The number of phishing attacks identified in the second quarter of 2019 was notably higher than the number recorded in the previous three quarters, and the number in the first quarter of 2020 was higher than in the quarter before it, according to a report from the Anti-Phishing Working Group (APWG) (APWG, 2018), which confirms that phishing attacks are on the rise. These findings show that phishing attacks have increased continuously in recent years, have become more sophisticated, and have gained more attention from cyber researchers and developers seeking to detect and mitigate their impact. This article aims to determine the severity of the phishing problem by providing detailed insights into the phishing phenomenon in terms of phishing definitions, current statistics, anatomy, and potential countermeasures.

The rest of the article is organized as follows. Phishing Definitions provides a number of phishing definitions as well as some real-world examples of phishing. The evolution and development of phishing attacks are discussed in Developing a Phishing Campaign . What Attributes Make Some People More Susceptible to Phishing Attacks Than Others explores the susceptibility to these attacks. The proposed phishing anatomy and types of phishing attacks are elaborated in Proposed Phishing Anatomy . In Countermeasures , various anti-phishing countermeasures are discussed. The conclusions of this study are drawn in Conclusion .

Phishing Definitions

Various definitions for the term "phishing" have been proposed and discussed by experts, researchers, and cybersecurity institutions. Although there is no established definition for the term "phishing" due to its continuous evolution, the term has been defined in numerous ways based on its use and context. The process of tricking the recipient into taking the attacker's desired action is considered the de facto definition of phishing attacks in general. Some definitions name websites as the only possible medium to conduct attacks. The study (Merwe et al., 2005, p. 1) defines phishing as "a fraudulent activity that involves the creation of a replica of an existing web page to fool a user into submitting personal, financial, or password data." This definition describes phishing as an attempt to scam the user into revealing sensitive information such as bank details and credit card numbers by sending malicious links that lead the user to a fake website. Others name emails as the only attack vector. For instance, PhishTank (2006) defines phishing as "a fraudulent attempt, usually made through email, to steal your personal information." A description of phishing stated by (Kirda and Kruegel, 2005, p.1) defines it as "a form of online identity theft that aims to steal sensitive information such as online banking passwords and credit card information from users." Some definitions highlight the usage of combined social and technical skills. For instance, APWG defines phishing as "a criminal mechanism employing both social engineering and technical subterfuge to steal consumers' personal identity data and financial account credentials" (APWG, 2018, p. 1). Moreover, the definition from the United States Computer Emergency Readiness Team (US-CERT) states that phishing is "a form of social engineering that uses email or malicious websites (among other channels) to solicit personal information from an individual or company by posing as a trustworthy organization or entity" (CISA, 2018). A detailed definition has been presented in (Jakobsson and Myers, 2006, p. 1), which describes phishing as "a form of social engineering in which an attacker, also known as a phisher, attempts to fraudulently retrieve legitimate users' confidential or sensitive credentials by mimicking electronic communications from a trustworthy or public organization in an automated fashion. Such communications are most frequently done through emails that direct users to fraudulent websites that in turn collect the credentials in question."

In order to understand the anatomy of the phishing attack, there is a necessity for a clear and detailed definition that underpins previous existing definitions. Since a phishing attack constitutes a mix of technical and social engineering tactics, a new definition (i.e., anatomy) is proposed in this article, which describes the complete process of a phishing attack. This provides a better understanding for readers, as it covers phishing attacks in depth from a range of perspectives and angles, which may help beginner readers or researchers in this field. To this end, we define phishing as a socio-technical attack, in which the attacker targets specific valuables by exploiting an existing vulnerability to pass a specific threat via a selected medium into the victim's system, utilizing social engineering tricks or some other techniques to convince the victim into taking a specific action that causes various types of damages.

Figure 1 depicts the general process flow for a phishing attack that contains four phases; these phases are elaborated in Proposed Phishing Anatomy . However, as shown in Figure 1 , in most attacks, the phishing process is initiated by gathering information about the target. Then the phisher decides which attack method is to be used in the attack as initial steps within the planning phase. The second phase is the preparation phase, in which the phisher starts to search for vulnerabilities through which he could trap the victim. The phisher conducts his attack in the third phase and waits for a response from the victim. In turn, the attacker could collect the spoils in the valuables acquisition phase, which is the last step in the phishing process. To elaborate the above phishing process using an example, an attacker may send a fraudulent email to an internet user pretending to be from the victim’s bank, requesting the user to confirm the bank account details, or else the account may be suspended. The user may think this email is legitimate since it uses the same graphic elements, trademarks, and colors of their legitimate bank. Submitted information will then be directly transmitted to the phisher who will use it for different malicious purposes such as money withdrawal, blackmailing, or committing further frauds.

FIGURE 1. General phishing attack process.
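As a purely illustrative aid (an interpretation of the four-phase process described above, not code from the article), the sketch below lists the phases and maps each one to the bank-email example; the concrete steps attributed to the example are assumptions for demonstration.

```python
# Illustrative sketch only: the four phishing phases described above, mapped to
# the bank-email example from the text. The example steps are assumed details.
from collections import OrderedDict

PHASES = OrderedDict([
    ("Planning", "gather information about the target and choose the attack method"),
    ("Preparation", "search for a vulnerability through which to trap the victim"),
    ("Attack", "deliver the lure and wait for the victim's response"),
    ("Valuables acquisition", "collect and monetise the stolen information"),
])

BANK_EMAIL_EXAMPLE = {
    "Planning": "identify a bank customer and decide to use a spoofed email",
    "Preparation": "copy the bank's branding and set up a fake login page",
    "Attack": "email the victim asking them to 'confirm' their account details",
    "Valuables acquisition": "use the submitted credentials for withdrawal or fraud",
}

for phase, description in PHASES.items():
    print(f"{phase}: {description}\n  e.g. {BANK_EMAIL_EXAMPLE[phase]}")
```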

Real-World Phishing Examples

Some real-world examples of phishing attacks are discussed in this section to illustrate the complexity of some recent phishing attacks. Figure 2 shows the screenshot of a suspicious phishing email that passed a University's spam filters and reached the recipient's mailbox. As shown in Figure 2, the phisher creates a sense of importance or urgency in the subject through the word 'important,' so that the email can trigger a psychological reaction in the user and prompt them into clicking the "View message" button. The email contains a suspicious embedded button; indeed, when hovering over this button, the link it points to does not match the Uniform Resource Locator (URL) shown in the status bar. Another clue in this example is that the sender's address is questionable and not known to the receiver. Clicking on the fake button will result either in the installation of a virus or worm onto the computer or in handing over the user's credentials by redirecting the victim to a fake login page.

FIGURE 2. Screenshot of a real suspicious phishing email received by the authors' institution in February 2019.
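The clues described in this example, a link whose visible text does not match its real target and an unfamiliar sender address, can be expressed as a simple heuristic. The sketch below is illustrative only: the known-sender list and the sample message are assumptions, and real mail filters rely on far richer signals.

```python
# Illustrative heuristic only: flag links whose visible text names a different
# host than their real target, plus senders from unknown domains. The known
# sender domain and sample message below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

KNOWN_SENDER_DOMAINS = {"example-university.edu"}  # assumption for illustration

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []              # (href, visible text) pairs
        self._current_href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._text).strip()))
            self._current_href = None

def red_flags(sender, html_body):
    flags = []
    if sender.split("@")[-1].lower() not in KNOWN_SENDER_DOMAINS:
        flags.append(f"unknown sender domain: {sender}")
    parser = LinkExtractor()
    parser.feed(html_body)
    for href, text in parser.links:
        # Flag links whose visible text claims a different host than the real target.
        if text.startswith("http") and urlparse(text).netloc != urlparse(href).netloc:
            flags.append(f"link text {text!r} hides real target {href!r}")
    return flags

print(red_flags("alerts@suspicious-host.example",
                '<a href="http://phish.example.net/login">http://mail.example-university.edu</a>'))
```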

More recently, phishers have taken advantage of the Coronavirus pandemic (COVID-19) to fool their prey. Many Coronavirus-themed scam messages sent by attackers exploited people's fear of contracting COVID-19 and the urgency to look for information related to Coronavirus (e.g., some of these attacks related to Personal Protective Equipment (PPE) such as facemasks); the WHO stated that COVID-19 has created an "infodemic" which is favorable for phishers (Hewage, 2020). Cybercriminals also lured people into opening attachments claiming to contain information about people with Coronavirus within the local area.

Figure 3 shows an example of a phishing e-mail in which the attacker claimed to be the recipient's neighbor, pretending to be dying from the virus and threatening to infect the victim unless a ransom was paid (Ksepersky, 2020).

FIGURE 3. Screenshot of a coronavirus related phishing email (Ksepersky, 2020).

Another example is the phishing attack spotted by a security researcher at Akamai in January 2019. The attack attempted to use Google Translate to mask suspicious URLs, prefacing them with the legitimate-looking "www.translate.google.com" address to dupe users into logging in (Rhett, 2019). That attack was followed by phishing scams asking for Netflix payment details, for example, or embedded in promoted tweets that redirected users to genuine-looking PayPal login pages. Although the bogus page was very well designed in the latter case, the lack of a Hypertext Transfer Protocol Secure (HTTPS) lock and misspellings in the URL were key red flags (or giveaways) that this was actually a phishing attempt (Keck, 2018). Figure 4A shows a screenshot of a phishing email received by the Federal Trade Commission (FTC). The email prompts the user to update their payment method by clicking on a link, pretending that Netflix is having a problem with the user's billing information (FTC, 2018).

FIGURE 4. Screenshot of the (A) Netflix scam email and (B) fraudulent text message (Apple) (Keck, 2018; Rhett, 2019).

Figure 4B shows another example: a text message that is difficult to spot as fake (Pompon et al., 2018). The message appears to come from Apple, asking the customer to update their account. A sense of urgency is used in the message as a lure to motivate the user to respond.
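The red flags in the Netflix and PayPal examples above, a missing HTTPS scheme and a familiar brand name appearing in a URL that is not actually hosted on that brand's domain, can be checked mechanically. The sketch below is a simplified illustration; the brand-to-domain map and the sample URL are assumptions rather than data from the incidents described.

```python
# Illustrative URL red-flag check only: a missing HTTPS scheme, and a brand name
# mentioned in the URL while the host is not that brand's domain (as in the
# masked-URL trick described above). The brand map and sample URL are assumed.
from urllib.parse import urlparse

BRAND_DOMAINS = {"paypal": "paypal.com", "netflix": "netflix.com", "google": "google.com"}

def url_red_flags(url):
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("no HTTPS")
    host = parsed.netloc.lower()
    for brand, legit_domain in BRAND_DOMAINS.items():
        mentions_brand = brand in url.lower()
        on_legit_domain = host == legit_domain or host.endswith("." + legit_domain)
        if mentions_brand and not on_legit_domain:
            flags.append(f"mentions '{brand}' but is hosted on {host}")
    return flags

# The familiar name appears in the subdomain and path, not in the registered domain.
print(url_red_flags("http://translate.google.com.evil.example/paypal-login"))
```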

Developing a Phishing Campaign

Today, phishing is considered one of the most pressing cybersecurity threats for all internet users, regardless of their technical understanding and how cautious they are. These attacks are getting more sophisticated by the day and can cause severe losses to the victims. Although the attacker’s first motivation is stealing money, stolen sensitive data can be used for other malicious purposes such as infiltrating sensitive infrastructures for espionage purposes. Therefore, phishers keep on developing their techniques over time with the development of electronic media. The following sub-sections discuss phishing evolution and the latest statistics.

Historical Overview

Cybersecurity has been a major concern since the beginning of ARPANET, which is considered to be the first wide-area packet-switching network with distributed control and one of the first networks to implement the TCP/IP protocol suite. The term "phishing", which was also called carding or brand spoofing, was coined for the first time in 1996, when hackers created randomized credit card numbers using an algorithm to steal users' passwords from America Online (AOL) (Whitman and Mattord, 2012; Cui et al., 2017). Phishers then used instant messages or emails to reach users by posing as AOL employees to convince users to reveal their passwords. Attackers believed that requesting customers to update their account would be an effective way to get them to disclose their sensitive information; thereafter, phishers started to target larger financial companies. The author in (Ollmann, 2004) believes that the "ph" in phishing comes from the term "phreaks", which was coined by John Draper, also known as Captain Crunch, and was used by early Internet criminals who "phreaked" telephone systems. The "f" in "fishing" was replaced with "ph" because the two carry the same meaning: phishing for passwords and sensitive information in the sea of internet users. Over time, phishers developed various and more advanced types of scams for launching their attacks. Sometimes the purpose of the attack is not limited to stealing sensitive information; it could also involve injecting viruses or downloading malicious programs onto a victim's computer. Phishers make use of a trusted source (for instance a bank helpdesk) to deceive victims so that they disclose their sensitive information (Ollmann, 2004).

Phishing attacks are rapidly evolving, and spoofing methods are continuously changing as a response to new corresponding countermeasures. Hackers take advantage of new tool-kits and technologies to exploit systems’ vulnerabilities and also use social engineering techniques to fool unsuspecting users. Therefore, phishing attacks continue to be one of the most successful cybercrime attacks.

The Latest Statistics of Phishing Attacks

Phishing attacks are becoming more common and are significantly increasing in both sophistication and frequency. Lately, phishing attacks have appeared in various forms. Different channels and threats are exploited by attackers to trap more victims. These channels could be social networks or VoIP, which can carry various types of threats such as malicious attachments, embedded links within an email, instant messages, scam calls, or others. Criminals know that social-engineering-based methods are effective and profitable; therefore, they keep focusing on social engineering attacks, their favorite weapon, instead of concentrating on sophisticated techniques and toolkits. Phishing attacks have reached unprecedented levels, especially with emerging technologies such as mobile and social media (Marforio et al., 2015). For instance, from 2017 to 2020, phishing attacks among businesses in the United Kingdom increased from 72 to 86%, with a large proportion of the attacks originating from social media (GOV.UK, 2020).

The APWG Phishing Activity Trends Report analyzes and measures the evolution, proliferation, and propagation of phishing attacks reported to the APWG. Figure 5 shows the growth in phishing attacks from 2015 to 2020 by quarters based on APWG annual reports ( APWG, 2020 ). As demonstrated in Figure 5 , in the third quarter of 2019, the number of phishing attacks rose to 266,387, which is the highest level in three years since late 2016. This was up 46% from the 182,465 for the second quarter, and almost double the 138,328 seen in the fourth quarter of 2018. The number of unique phishing e-mails reported to APWG in the same quarter was 118,260. Furthermore, it was found that the number of brands targeted by phishing campaigns was 1,283.

FIGURE 5. The growth in phishing attacks 2015–2020 by quarters based on data collected from APWG annual reports.

Cybercriminals are always taking advantage of disasters and hot events for their own gain. With the beginning of the COVID-19 crisis, a variety of themed phishing and malware attacks were launched by phishers against workers, healthcare facilities, and even the general public. A report from Microsoft (Microsoft, 2020) showed that cyber-attacks related to COVID-19 spiked to an unprecedented level in March; most of these scams were fake COVID-19 websites, according to security company RiskIQ (RISKIQ, 2020). However, the total number of phishing attacks observed by APWG in the first quarter of 2020 was 165,772, up from the 162,155 observed in the fourth quarter of 2019. The number of unique phishing reports submitted to APWG during the first quarter of 2020 was 139,685, up from 132,553 in the fourth quarter of 2019, 122,359 in the third quarter of 2019, and 112,163 in the second quarter of 2019 (APWG, 2020).

A study ( KeepnetLABS, 2018 ) confirmed that more than 91% of system breaches are caused by attacks initiated by email. Although cybercriminals use email as the main medium for leveraging their attacks, many organizations faced a high volume of different social engineering attacks in 2019 such as Social Media Attacks, Smishing Attacks, Vishing Attacks, USB-based Attacks (for example by hiding and delivering malware to smartphones via USB phone chargers and distributing malware-laden free USBs) ( Proofpoint, 2020 ). However, info-security professionals reported a higher frequency of all types of social engineering attacks year-on-year according to a report presented by Proofpoint. Spear phishing increased to 64% in 2018 from 53% in 2017, Vishing and/or SMishing increased to 49% from 45%, and USB attacks increased to 4% from 3%. The positive side shown in this study is that 59% of suspicious emails reported by end-users were classified as potential phishing, indicating that employees are being more security-aware, diligent, and thoughtful about the emails they receive ( Proofpoint, 2019a ). In all its forms, phishing can be one of the easiest cyber attacks to fall for. With the increasing levels of different phishing types, a survey was conducted by Proofpoint to identify the strengths and weaknesses of particular regions in terms of specific fundamental cybersecurity concepts. In this study, several questions were asked of 7,000 end-users about the identification of multiple terms like phishing, ransomware, SMishing, and Vishing across seven countries; the US, United Kingdom, France, Germany, Italy, Australia, and Japan. The response was different from country to country, where respondents from the United Kingdom recorded the highest knowledge with the term phishing at 70% and the same with the term ransomware at 60%. In contrast, the results showed that the United Kingdom recorded only 18% for each Vishing and SMishing ( Proofpoint, 2019a ), as shown in Table 1 .

TABLE 1. Percentage of respondents understanding multiple cybersecurity terms from different countries.

On the other hand, a report by Wombat Security reflects responses from more than 6,000 working adults about receiving fraudulent solicitations across six countries: the US, United Kingdom, Germany, France, Italy, and Australia (Ksepersky, 2020). Respondents from the United Kingdom stated that they had received fraudulent solicitations through the following sources: email 62%, phone call 27%, text message 16%, mailed letter 8%, and social media 10%, while 17% confirmed that they had been the victim of identity theft (Ksepersky, 2020). The consequences of responding to phishing are serious and costly. For instance, United Kingdom losses from financial fraud across payment cards, remote banking, and cheques totaled £768.8 million in 2016 (Financial Fraud Action UK, 2017). Indeed, the losses resulting from phishing attacks are not limited to financial losses that might exceed millions of pounds, but also include loss of customers and reputation. According to the 2020 State of the Phish report (Proofpoint, 2020), damages from successful phishing attacks can range from lost productivity to cash outlay. The costs can include lost employee hours, remediation time for information security teams responding to the incident, damage to reputation, lost intellectual property, direct monetary losses, compliance fines, lost customers, legal fees, and so on.

There are many targets for phishing, including end-users, businesses, financial services (e.g., banks, credit card companies, and PayPal), retail (e.g., eBay, Amazon), and Internet Service Providers (wombatsecurity.com, 2018). The organizations affected by phishing, as detected by Kaspersky Labs globally in the first quarter of 2020, are shown in Figure 6. As shown in the figure, online stores were at the top of the targeted list (18.12%), followed by global Internet portals (16.44%) and social networks in third place (13.07%) (Ksepersky, 2020). The most impersonated brands overall for the first quarter of 2020 were Apple, Netflix, Yahoo, WhatsApp, PayPal, Chase, Facebook, Microsoft, eBay, and Amazon (Checkpoint, 2020).

FIGURE 6. Distribution of organizations affected by phishing attacks detected by Kaspersky in quarter one of 2020.

Phishing attacks can take a variety of forms to target people and steal sensitive information from them. Current data shows that phishing attacks are still effective, which indicates that the available existing countermeasures are not enough to detect and prevent these attacks especially on smart devices. The social engineering element of the phishing attack has been effective in bypassing the existing defenses to date. Therefore, it is essential to understand what makes people fall victim to phishing attacks. What Attributes Make Some People More Susceptible to Phishing Attacks Than Others discusses the human attributes that are exploited by the phishers.

What Attributes Make Some People More Susceptible to Phishing Attacks Than Others

Why do most existing defenses against phishing not work? What personal and contextual attributes make some users more susceptible to phishing attacks than others? Different studies have discussed these two questions and examined the factors affecting susceptibility to a phishing attack and the reasons why people get phished. Human nature is considered one of the most influential factors in the process of phishing. Everyone is susceptible to phishing attacks because phishers play on an individual's specific psychological/emotional triggers as well as technical vulnerabilities (KeepnetLABS, 2018; Crane, 2019). For instance, individuals are likely to click on a link within an email when they see authority cues (Furnell, 2007). In 2017, a report by PhishMe (2017) found that curiosity and urgency were the most common triggers that encourage people to respond to the attack; later these triggers were replaced by entertainment, social media, and reward/recognition as the top emotional motivators. However, in the context of a phishing attack, psychological triggers often override people's conscious decisions. For instance, when people are working under stress, they tend to make decisions without thinking of the possible consequences and options (Lininger and Vines, 2005). Moreover, everyday stress can damage the areas of the brain that control emotions (Keinan, 1987). Several studies have addressed the association between susceptibility to phishing and demographic variables (e.g., age and gender) in an attempt to identify the reasons behind phishing success in different population groups. Although everyone is susceptible to phishing, studies show that different age groups are more susceptible to certain lures than others. For example, participants aged between 18 and 25 are more susceptible to phishing than other age groups (Williams et al., 2018). One reason that younger adults are more likely to fall for phishing is that they are more trusting when it comes to online communication and are also more likely to click on unsolicited e-mails (Getsafeonline, 2017). Moreover, older participants are less susceptible because they tend to be less impulsive (Arnsten et al., 2012). Some studies have found that women are more susceptible than men to phishing, as they click on links in phishing emails and enter information into phishing websites more often than men do. The study published by Getsafeonline (2017) identifies a lack of technical know-how and experience among women compared with men as the main reason for this. In contrast, a survey conducted by the antivirus company Avast found that men are more susceptible to smartphone malware attacks than women (Ong, 2014). These findings match the results of the study (Hadlington, 2017), which found men to be more susceptible to mobile phishing attacks than women; the main reason, according to Hadlington (2017), is that men are more comfortable and trusting when using mobile online services. The relationships between the demographic characteristics of individuals and their ability to correctly detect a phishing attack have been studied in (Iuga et al., 2016). The study showed that participants with high Personal Computer (PC) usage tend to identify phishing efforts more accurately and faster than other participants.
Another study (Hadlington, 2017) showed that internet addiction, attentional impulsivity, and motor impulsivity were significant positive predictors of risky cybersecurity behaviors, while a positive attitude toward cybersecurity in business was negatively related to risky cybersecurity behaviors. On the other hand, people's trust in some websites/platforms is one of the holes that scammers or crackers exploit, especially when that trust is based on visual appearance, which can fool the user (Hadlington, 2017). For example, fraudsters take advantage of people's trust in a website by replacing a letter of the legitimate site with a number, such as goog1e.com instead of google.com. Another study (Yeboah-Boateng and Amanor, 2014) demonstrates that although college students are unlikely to disclose personal information in response to an email, they can nonetheless easily be tricked by other tactics, making them alarmingly susceptible to email phishing attacks. The reason for this is that most college students do not have a grounding in ICT, especially in terms of security. Although security terms like viruses, online scams, and worms are known to some end-users, these users may have no knowledge of phishing, SMishing, Vishing, and others (Lin et al., 2012). However, the study (Yeboah-Boateng and Amanor, 2014) shows that younger students are more susceptible than older students, and students who worked full-time were less likely to fall for phishing.
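The goog1e.com trick mentioned above, substituting a digit for a letter in a well-known domain, can be partially detected by normalising common character substitutions before comparing against a list of trusted domains. The sketch below is illustrative; the substitution map and the trusted-domain list are small assumptions.

```python
# Illustrative look-alike-domain check only: undo a few common character
# substitutions (e.g. '1' for 'l') and compare against an assumed trusted list.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "7": "t"})
TRUSTED_DOMAINS = {"google.com", "paypal.com", "amazon.com"}  # assumption

def looks_like_trusted(domain):
    normalized = domain.lower().translate(SUBSTITUTIONS)
    if domain.lower() in TRUSTED_DOMAINS:
        return None              # genuinely trusted domain
    if normalized in TRUSTED_DOMAINS:
        return normalized        # look-alike of a trusted domain
    return None

print(looks_like_trusted("goog1e.com"))  # -> 'google.com', i.e. a likely spoof
```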

The study reported in (Diaz et al., 2020) examines user click rates and demographics among undergraduates by sending phishing attacks to 1,350 randomly selected students. Students from various disciplines were involved in the test, from engineering and mathematics to arts and social sciences. The study observed that student susceptibility was affected by a range of factors such as phishing awareness, time spent on the computer, cyber training, age, academic year, and college affiliation. The most surprising finding is that those with greater phishing knowledge were more susceptible to phishing scams. The authors offer two speculations for this unexpected result. First, users' awareness of phishing might have increased as a result of repeatedly falling for phishing scams. Second, users who fell for the phish might have less knowledge about phishing than they claim. Other findings from this study agreed with those of earlier work: older students were better able to detect a phishing email, and engineering and IT majors had some of the lowest click rates, as shown in Figure 7, which indicates that some academic disciplines are more susceptible to phishing than others (Bailey et al., 2008).


FIGURE 7 . The number of clicks on phishing emails by students in the College of Arts, Humanities, and Social Sciences (AHSS), the College of Engineering and Information Technology (EIT), and the College of Natural and Mathematical Sciences (NMS) at the University of Maryland, Baltimore County (UMBC) ( Diaz et al., 2020 ).

Psychological studies have also illustrated that the user's ability to avoid phishing attacks is affected by different factors, such as browser security indicators and the user's awareness of phishing. The authors of (Dhamija et al., 2006) conducted an experimental study with 22 participants to test the user's ability to recognize phishing websites. The study showed that 90% of these participants became victims of phishing websites and 23% of them ignored security indicators such as the status bar and address bar. In 2015, another study was conducted for the same purpose, in which a number of fake web pages were shown to the participants (Alsharnouby et al., 2015). The results of this study showed that participants successfully detected only 53% of phishing websites. The authors also observed that the time spent looking at browser elements affected the ability to detect phishing. Lack of knowledge or awareness and carelessness are common reasons people fall into a phishing trap; most people have unknowingly opened a suspicious attachment or clicked a fake link that could lead to different levels of compromise. Therefore, focusing on training and preparing users to deal with such attacks is essential to minimizing the impact of phishing.

Given the above discussion, susceptibility to phishing varies according to different factors such as age, gender, education level, and internet and PC addiction. Although each person has a trigger that phishers can exploit, even highly experienced people may fall prey to phishing because sophisticated attacks are difficult to recognize. Therefore, it is unfair to always blame the user for falling for these attacks; developers must also improve anti-phishing systems so that the attack never reaches the user. Understanding the susceptibility of individuals to phishing attacks will help in developing better prevention and detection techniques and solutions.

Proposed Phishing Anatomy

Phishing Process Overview

Generally, most phishing attacks start with an email (Jagatic et al., 2007). The phishing mail can be sent randomly to potential users or targeted at a specific group or individual. Many other vectors can also be used to initiate the attack, such as phone calls, instant messaging, or physical letters. The steps of the phishing process have been discussed by many researchers, because understanding these steps is important for developing anti-phishing solutions. The author of the study (Rouse, 2013) divides the phishing attack process into five phases: planning, setup, attack, collection, and cash. The study (Jakobsson and Myers, 2006) discusses the phishing process in detail and explains it as step-by-step phases: preparing the attack, sending a malicious program using the selected vector, obtaining the user's reaction to the attack, tricking the user into disclosing confidential information that is transmitted to the phisher, and finally obtaining the targeted money. The study (Abad, 2005) describes a phishing attack in three phases: the early phase, which includes initializing the attack, creating the phishing email, and sending it to the victim; the second phase, in which the victim receives the email and discloses their information (if they respond); and the final phase, in which the fraud succeeds. In essence, all phishing scams include three primary phases: the phisher requests sensitive valuables from the target, the target gives away these valuables to the phisher, and the phisher misuses them for malicious purposes. These phases can be further divided into sub-processes according to phishing trends. Thus, a new anatomy for phishing attacks is proposed in this article, which expands and integrates previous definitions to cover the full life cycle of a phishing attack. The proposed anatomy, which consists of four phases, is shown in Figure 8. It provides a reference structure for examining phishing attacks in more detail and for understanding potential countermeasures to prevent them. The explanations for each phase and its components are presented as follows:


FIGURE 8 . The proposed anatomy of phishing, built upon the phishing definition proposed in this article, which was derived from our understanding of a phishing attack.

Figure 8 depicts the proposed anatomy of the phishing attack process, its phases, and its components, drawn from the definition proposed in this article. The anatomy details each phase of the phishing process, including attacker and target types, examples of the information the attacker may collect about the victim, and examples of attack methods. As shown in the figure, it illustrates the vulnerabilities the attacker can exploit and the mediums used to conduct the attack. Possible threats are also listed, as well as the data collection methods, examples of how targets respond, the types of spoils the attacker can gain, and how the stolen valuables can be used. This anatomy elaborates on phishing attacks in depth, which helps readers better understand the complete phishing process (i.e., the end-to-end phishing life cycle) and raises awareness. It also provides insights into the potential solutions for phishing attacks that we should focus on. Instead of always placing the user in the dock as the only reason behind phishing success, developers must focus on solutions that mitigate the initiation of the attack by preventing the bait from reaching the user in the first place. For instance, to reach the target's system, the threat has to pass through many layers of technology or defenses by exploiting one or more vulnerabilities, such as web and software vulnerabilities.
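
As a purely illustrative aid (not part of the proposed anatomy itself), the four phases could be modeled as a small data structure, which makes the reference structure easy to reuse in tooling; the field names and example values below are assumptions, not a transcription of Figure 8.

```python
# Illustrative sketch: the four phases of the proposed phishing anatomy
# expressed as a simple data structure. Field values are hypothetical
# examples, not an exact transcription of Figure 8.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Phase(Enum):
    PLANNING = "planning"
    PREPARATION = "attack preparation"
    CONDUCTING = "attack conducting"
    ACQUISITION = "valuables acquisition"

@dataclass
class PhaseRecord:
    phase: Phase
    actors: List[str] = field(default_factory=list)      # e.g., attacker/target types
    techniques: List[str] = field(default_factory=list)  # e.g., mediums, threats, collection methods

campaign = [
    PhaseRecord(Phase.PLANNING, ["organized crime"], ["target selection", "information gathering"]),
    PhaseRecord(Phase.PREPARATION, ["organized crime"], ["fake website", "phishing email"]),
    PhaseRecord(Phase.CONDUCTING, ["victim"], ["email delivery", "credential entry"]),
    PhaseRecord(Phase.ACQUISITION, ["phisher"], ["credential resale", "fraudulent transfer"]),
]

for record in campaign:
    print(record.phase.value, "->", ", ".join(record.techniques))
```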

Planning Phase

This is the first stage of the attack, in which the phisher decides on the targets and starts gathering information about them (individuals or a company). Phishers gather information about the victims in order to lure them based on psychological vulnerabilities. This information can be anything from the names and e-mail addresses of individuals to the customers of a targeted company. Victims can be selected randomly, for example through mass mailings, or deliberately, by harvesting their information from social media or any other source. Potential targets include any user with a bank account and a computer connected to the Internet. Phishers target businesses such as financial services, retail sites such as eBay and Amazon, and internet service providers such as MSN/Hotmail and Yahoo (Ollmann, 2004; Ramzan and Wuest, 2007). This phase also includes devising attack methods, such as building fake websites (sometimes phishers reuse a scam page that has already been designed or used), designing malware, and constructing phishing emails. Attackers can be categorized based on their motivation; there are four types of attackers, as mentioned in studies (Vishwanath, 2005; Okin, 2009; EDUCBA, 2017; APWG, 2020):

▪ Script kiddies: the term script kiddies describes attackers with no technical background or knowledge of writing sophisticated programs or developing phishing tools; instead, they use scripts developed by others in their phishing attacks. Although the term originates from juveniles who use available phishing kits and virus toolkits to crack codes and spread malware, it does not relate precisely to the actual age of the phisher. Script kiddies can gain access to website administration privileges and commit "web cracking" attacks. They can also use hacking tools to compromise remote computers into a so-called "botnet," in which each compromised machine is called a "zombie computer." These attackers do not merely sit back and enjoy phishing; they can cause serious damage such as stealing information or uploading Trojans or viruses. In February 2000, an attack launched by the Canadian teenager Mike Calce resulted in $1.7 million US Dollars (USD) in damages from Distributed Denial of Service (DDoS) attacks on CNN, eBay, Dell, Yahoo, and Amazon (Leyden, 2001).

▪ Serious crackers: also known as black hats. These attackers can execute sophisticated attacks and develop worms and Trojans for their attacks. They maliciously hijack people's accounts and steal credit card information, destroy important files, or sell compromised credentials for personal gain.

▪ Organized crime: this is the most organized and effective type of attacker, capable of inflicting significant damage on victims. These groups hire serious crackers to conduct phishing attacks. Moreover, they can thoroughly trash the victim's identity and commit devastating frauds, as they have the skills, tools, and manpower. An organized cybercrime group is a team of expert hackers who share their skills to build complex attacks and launch phishing campaigns against individuals and organizations. These groups offer their work as "crime as a service" and can be hired by terrorist groups, organizations, or individuals.

▪ Terrorists: because of our dependency on the internet for most activities, terrorist groups can easily conduct acts of terror remotely, which can have a severe impact. These attacks are dangerous because the attackers do not fear consequences such as going to jail. Terrorists can use the internet to maximum effect to create fear and violence, as it requires limited funds, resources, and effort compared with, for example, buying bombs and weapons for a traditional attack. Terrorists often use spear phishing to launch attacks for different purposes such as inflicting damage, cyber espionage, gathering information, locating individuals, and other vandalism. Cyber espionage has been used extensively by cyber terrorists to steal sensitive information on national security, commercial information, and trade secrets, which can then be used for terrorist activities. These crimes may target governments, organizations, or individuals.

Attack Preparation

After deciding on the targets and gathering information about them, phishers start to set up the attack by scanning for vulnerabilities to exploit. The following are some examples of vulnerabilities exploited by phishers. An attacker might exploit a buffer overflow vulnerability to take control of target applications, create a DoS attack, or compromise computers. Moreover, "zero-day" software vulnerabilities, which refer to newly discovered vulnerabilities in software programs or operating systems, can be exploited directly before they are fixed (Kayne, 2019). Another example is browser vulnerabilities; adding new features and updates to a browser might introduce new vulnerabilities into the browser software (Ollmann, 2004). In 2005, attackers exploited a cross-domain vulnerability in Internet Explorer (IE) (Symantic, 2019). The cross-domain barrier is used in Microsoft IE to separate content from different sources; attackers exploited a flaw in it that enabled them to execute programs on a user's computer after running IE. According to US-CERT, hackers were actively exploiting this vulnerability. To carry out a phishing attack, attackers also need a medium through which they can reach their target. Therefore, apart from planning the attack to exploit potential vulnerabilities, attackers choose the medium that will be used to deliver the threat to the victim and carry out the attack. These mediums can be the internet (social networks, websites, emails, cloud computing, e-banking, mobile systems), VoIP (phone calls), or text messages. For example, one actively used medium is Cloud Computing (CC). CC has become one of the most promising technologies and has widely replaced conventional computing technologies. Despite the considerable advantages of CC, its adoption faces several controversial obstacles, including privacy and security issues (CVEdetails, 2005). Because different customers can share the same resources in the cloud, virtualization vulnerabilities may be exploited by a malicious customer to perform security attacks on other customers' applications and data (Zissis and Lekkas, 2012). For example, in September 2014, private photos of several celebrities spread across the internet in one of the worst data breaches of its kind; the investigation revealed that the celebrities' iCloud accounts had been breached (Lehman and Vajpayee, 2011). According to Proofpoint, in 2017 attackers used Microsoft SharePoint in hundreds of campaigns to deliver malware through messages.

Attack Conducting Phase

This phase involves using attack techniques to deliver the threat to the victim, as well as the victim's interaction with the attack in terms of responding or not. After the victim responds, the attacker may compromise the system to collect the user's information using techniques such as injecting client-side scripts into webpages (Johnson, 2016). Phishers can also compromise hosts without any technical knowledge by purchasing access from hackers (Abad, 2005). A threat is a possible danger that might exploit a vulnerability to compromise people's security and privacy or cause harm to a computer system for malicious purposes. Threats include malware, botnets, eavesdropping, unsolicited emails, and viral links. Several phishing techniques are discussed in the sub-section Types and Techniques of Phishing Attacks.

Valuables Acquisition Phase

In this stage, the phisher collects information or valuables from victims and uses them illegally for making purchases, transferring funds without the user's knowledge, or selling the credentials on the black market. Attackers target a wide range of valuables from their victims, ranging from money to people's lives; for example, attacks on online medical systems may lead to loss of life. Victims' data can be collected by phishers manually or through automated techniques (Jakobsson et al., 2007).

Data collection can be conducted either during or after the victim's interaction with the attacker. To collect data manually, simple techniques are used in which victims interact directly with the phisher, relying on relationships within social networks or other human deception techniques (Ollmann, 2004). In automated data collection, several techniques can be used, such as the fake web forms used in web spoofing (Dhamija et al., 2006). Additionally, a victim's public data, such as their profile on social networks, can be used to collect the background information required to initiate social engineering attacks (Wenyin et al., 2005). In VoIP or phone attacks, techniques such as recorded messages are used to harvest users' data (Huber et al., 2009).
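
The fake web forms mentioned above typically submit the harvested data to a server the attacker controls. A minimal defensive sketch, based on a simple heuristic assumption rather than any method from the cited studies, is to flag forms whose action posts to a host other than the page that serves them:

```python
# Minimal sketch: flag HTML forms whose "action" posts credentials to a
# different domain than the page itself -- a common trait of spoofed
# login pages used for automated data collection. Illustrative only;
# the page and collector URLs below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

class FormAuditor(HTMLParser):
    def __init__(self, page_url: str):
        super().__init__()
        self.page_url = page_url
        self.page_host = urlparse(page_url).hostname
        self.suspicious_actions = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        action = dict(attrs).get("action", "")
        target = urljoin(self.page_url, action)          # resolve relative actions
        target_host = urlparse(target).hostname
        if target_host and target_host != self.page_host:
            self.suspicious_actions.append(target)

if __name__ == "__main__":
    html = '<form action="http://collector.example.net/login" method="post">...</form>'
    auditor = FormAuditor("https://www.mybank.example.com/login")
    auditor.feed(html)
    print(auditor.suspicious_actions)  # ['http://collector.example.net/login']
```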

Types and Techniques of Phishing Attacks

Phishers conduct their attacks either by psychologically manipulating individuals into disclosing personal information (i.e., deceptive attacks, a form of social engineering) or by using technical methods. Phishers usually prefer deceptive attacks that exploit human psychology rather than technical methods. Figure 9 illustrates the types of phishing and the techniques used by phishers to conduct an attack; each type and technique is explained in the following sections and subsections.


FIGURE 9 . Phishing attack types and techniques drawing upon existing phishing attacks.

Deceptive Phishing

Deceptive phishing is the most common type of phishing attack, in which the attacker uses social engineering techniques to deceive victims. The phisher either uses social engineering tricks, making up scenarios such as a false account update or a security upgrade, or technical means such as legitimate trademarks, images, and logos to lure the victim and convince them of the legitimacy of the forged email (Jakobsson and Myers, 2006). Believing these scenarios, the user falls prey and follows the given link, which leads them to disclose personal information to the phisher.

Deceptive phishing is performed through phishing emails, fake websites, phone phishing (scam calls and IM), social media, and many other mediums. The most common deceptive phishing types are discussed below.

Phishing e-Mail

Deceiving people via email remains the most popular type of phishing to date. A phishing or spoofed email is a forged email sent from an untrusted source to thousands of victims at random. These fake emails claim to come from a person or financial institution that the recipient trusts, in order to convince recipients to take actions that lead them to disclose sensitive information. A more targeted phishing email aimed at a particular group or at individuals within the same organization is called spear phishing. In this type, the attacker may gather information related to the victim, such as name and address, so that the email appears to come credibly from a trusted source (Wang et al., 2008); this corresponds to the planning phase of the phishing anatomy proposed in this article. A more sophisticated form of spear phishing is whaling, which targets high-ranking people such as CEOs and CFOs. One example of a spear-phishing victim in early 2016 is the Clinton campaign chairman John Podesta, whose Gmail account was hacked through a phishing email (Parmar, 2012). Clone phishing is another type of email phishing, in which the attacker clones a legitimate and previously delivered email by spoofing the email address and reusing information related to the recipient, such as addresses from the legitimate email, with replaced links or malicious attachments (Krawchenko, 2016). The basic scenario for this attack was illustrated previously in Figure 4 and can be described in the following steps.

1. The phisher sets up a fraudulent email containing a link or an attachment (planning phase).

2. The phisher executes the attack by sending a phishing email to the potential victim using an appropriate medium (attack conducting phase).

3. If clicked, the link directs the user to a fraudulent website, or, if the attachment is opened, malware is downloaded (interaction phase).

4. The malicious website prompts users to provide confidential information or credentials, which are then collected by the attacker and used for fraudulent activities. (Valuables acquisition phase).

Often, the phisher does not use the credentials directly; instead, they resell the obtained credentials or information on a secondary market ( Jakobsson and Myers, 2006 ), for instance, script kiddies might sell the credentials on the dark web.
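
Two of the traits described above, a forged sender and links whose visible text does not match their real destination, can be checked mechanically. The sketch below is a minimal illustration under those assumptions, not a filter proposed by the cited works; the example email and domains are hypothetical.

```python
# Minimal sketch (illustrative heuristics only): flag emails whose Reply-To
# domain differs from the From domain, or whose visible link text names a
# different host than the actual href. Example values are hypothetical.
import re
from email import message_from_string
from email.utils import parseaddr
from urllib.parse import urlparse

def domain_of(addr: str) -> str:
    return parseaddr(addr)[1].rsplit("@", 1)[-1].lower()

def header_mismatch(raw_email: str) -> bool:
    msg = message_from_string(raw_email)
    from_dom = domain_of(msg.get("From", ""))
    reply_dom = domain_of(msg.get("Reply-To", msg.get("From", "")))
    return from_dom != reply_dom

def link_text_mismatch(html_body: str) -> list:
    """Return hrefs whose anchor text displays a different host name."""
    suspicious = []
    for href, text in re.findall(r'<a[^>]+href="([^"]+)"[^>]*>([^<]+)</a>', html_body, re.I):
        shown = urlparse(text.strip()).hostname or ""
        actual = urlparse(href).hostname or ""
        if shown and shown != actual:
            suspicious.append(href)
    return suspicious

raw = "From: Support <support@paypal.com>\nReply-To: help@collector.example.net\nSubject: Verify\n\nbody"
print(header_mismatch(raw))  # True
print(link_text_mismatch('<a href="http://evil.example.net/login">https://www.paypal.com</a>'))
```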

Spoofed Website

These are also called phishing websites: the phisher forges a website that appears genuine and looks similar to the legitimate site. An unsuspecting user is redirected to this website after clicking a link embedded within an email, through an advertisement (clickjacking), or by any other means. If the user continues to interact with the spoofed website, sensitive information will be disclosed and harvested by the phisher (CSIOnsite, 2012).

Phone Phishing (Vishing and SMishing)

This type of phishing is conducted through phone calls or text messages, in which the attacker pretends to be someone the victim knows or any other trusted source the victim deals with. A user may receive a convincing security alert message from a bank urging the victim to contact a given phone number, with the aim of getting the victim to share passwords, PIN numbers, or other Personally Identifiable Information (PII). The victim may also be duped into clicking on an embedded link in the text message. The phisher can then take the credentials entered by the victim and use them to log in to the victim's instant messaging service to phish other people from the victim's contact list. A phisher can also use Caller IDentification (CID) 3 spoofing to dupe the victim into believing that the call comes from a trusted source, or leverage open-source, software-based internet protocol private branch exchange (IP PBX) 4 tools that support VoIP (Aburrous et al., 2008). A report from FraudWatch International about phishing attack trends for 2019 anticipated an increase in SMishing, where the text message content is viewable only on a mobile device (FraudWatchInternational, 2019).

Social Media Attack (Soshing, Social Media Phishing)

Social media is the new favorite medium for cybercriminals to conduct their phishing attacks. Social media threats include account hijacking, impersonation attacks, scams, and malware distribution. However, detecting and mitigating these threats takes longer than with traditional methods because social media exists outside the network perimeter. For example, nation-state threat actors conducted an extensive series of social media attacks on Microsoft in 2014; multiple Twitter accounts were affected, and the passwords and emails of dozens of Microsoft employees were revealed (Ramzan, 2010). According to Kaspersky Lab, there were more than 3.7 million attempts to visit fraudulent social network pages in the first quarter of 2018, of which 60% involved fake Facebook pages (Raggo, 2016).

A report from the predictive email defense company Vade Secure on phishers' favorite targets for the first and second quarters of 2019 stated that Soshing, primarily on Facebook and Instagram, saw a 74.7% increase, the highest quarter-over-quarter growth of any industry (VadeSecure, 2021).

Technical Subterfuge

Technical subterfuge is the act of obtaining individuals' sensitive information by downloading malicious code onto the victim's system, rather than by deceiving the victim directly. Technical subterfuge can be classified into the following types:

Malware-Based Phishing

As the name suggests, this type of phishing attack is conducted by running malicious software on a user's machine. The malware is downloaded to the victim's machine either through one of the social engineering tricks or technically, by exploiting vulnerabilities in the security system (e.g., browser vulnerabilities) (Jakobsson and Myers, 2006). Panda malware, discovered by the Fox-IT company in 2016, is one example of a successful malware program. It targets Windows Operating Systems (OS) and spreads through phishing campaigns; its main attack vectors include web injects, screenshots of user activity (up to 100 per mouse click), keyboard logging, clipboard pastes (to grab passwords and paste them into form fields), and exploits of the Virtual Network Computing (VNC) desktop sharing system. In 2018, Panda malware expanded its targets to include cryptocurrency exchanges and social media sites (F5Networks, 2018). There are many forms of malware-based phishing attacks; some of them are discussed below:

Key Loggers and Screen Loggers

Loggers are a type of malware used by phishers, installed either through Trojan horse email attachments or through direct download to the user's personal computer. This software monitors data and records user keystrokes, then sends them to the phisher. Phishers use key loggers to capture sensitive information related to victims, such as names, addresses, passwords, and other confidential data. Key loggers can also be used for non-phishing purposes, such as monitoring a child's use of the internet. They can be implemented in many other ways as well, for example as a Browser Helper Object (BHO) that detects URL changes and logs information, enabling the attacker to take control of IE's features; as a device driver that monitors keyboard and mouse input; or as a screen logger that monitors the user's input and display (Jakobsson and Myers, 2006).

Viruses and Worms

A virus is a type of malware: a piece of code that spreads inside another application or program by making copies of itself in a self-automated manner (Jakobsson and Myers, 2006; F5Networks, 2018). Worms are similar to viruses but differ in how they execute, as worms run by exploiting operating system vulnerabilities without the need to modify another program. Viruses transfer from one computer to another with the document they are attached to, while worms transfer through the infected host file. Both viruses and worms can cause damage to data and software or create Denial-of-Service (DoS) conditions (F5Networks, 2018).

Spyware

Spying software (spyware) is malicious code designed to track the websites visited by users in order to steal sensitive information and conduct phishing attacks. Spyware can be delivered through an email and, once installed on the computer, can take control of the device and either change its settings or gather information such as passwords, credit card numbers, or banking records that can be used for identity theft (Jakobsson and Myers, 2006).

Adware

Adware, also known as advertising-supported software (Jakobsson and Myers, 2006), is a type of malware that shows the user endless pop-up windows with ads that can harm the performance of the device. Adware can be annoying but most of it is benign; however, some adware can be used for malicious purposes, such as tracking the internet sites the user visits or even recording the user's keystrokes (cisco, 2018).

Ransomware

Ransomware is a type of malware that encrypts the user's data after they run an executable program on the device. In this type of attack, the decryption key is withheld until the user pays a ransom (cisco, 2018). Ransomware is responsible for tens of millions of dollars in extortion annually. Worse still, new variants are continually being developed, making ransomware hard to detect and facilitating evasion of many antivirus and intrusion detection systems (Latto, 2020). Ransomware is usually delivered to the victim's device through phishing emails; according to a report (PhishMe, 2016), 93% of all phishing emails contained encryption ransomware. Phishing, as a social engineering attack, convinces victims into executing actions without knowing about the malicious program.

Rootkits

A rootkit is a collection of programs, typically malicious, that enables access to a computer or computer network. These toolsets are used by intruders to hide their actions from system administrators by modifying the code of system calls and changing their functionality (Belcic, 2020). The term "rootkit" has negative connotations through its association with malware; attackers use rootkits to alter existing system tools and escape detection. Such kits also enable individuals with little or no technical knowledge to launch phishing exploits: they contain code, mass-emailing software (possibly with thousands of email addresses included), web development software, and graphic design tools. An example is the kernel rootkit: kernel-level rootkits are created by replacing portions of the core operating system or adding new code via Loadable Kernel Modules (in Linux) or device drivers (in Windows) (Jakobsson and Myers, 2006).

Session Hijackers

In this type, the attacker monitors the user's activities by embedding malicious software within a browser component or via network sniffing. The goal of the monitoring is to hijack the session, so that the attacker can perform unauthorized actions with the hijacked session, such as transferring funds, without the user's permission (Jakobsson and Myers, 2006).

Web Trojans

Web Trojans are malicious programs that collect users' credentials by popping up invisibly over the login screen (Jakobsson and Myers, 2006). When the user enters the credentials, these programs capture and transmit the stolen credentials directly to the attacker (Jakobsson et al., 2007).

Hosts File Poisoning

This is a way to trick the user into visiting the phisher's site by poisoning (changing) the hosts file. When the user types a website address into the URL bar, the address is translated into a numeric (IP) address before the site is visited. To take the user to a fake website for phishing purposes, the attacker modifies this file, which acts much like a local DNS cache. This type of phishing is hard to detect, even for smart and perceptive users (Ollmann, 2004).
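
A minimal defensive sketch of the idea described above: scan the local hosts file and warn when a well-known domain has been statically pinned to an IP address. The list of sensitive domains is an illustrative assumption, not drawn from the cited sources.

```python
# Minimal sketch: scan the hosts file for entries that override well-known
# domains, which is the symptom of hosts-file poisoning described above.
# The sensitive-domain list is an illustrative assumption.
import platform

SENSITIVE_DOMAINS = {"www.paypal.com", "www.google.com", "login.microsoftonline.com"}

def hosts_path() -> str:
    if platform.system() == "Windows":
        return r"C:\Windows\System32\drivers\etc\hosts"
    return "/etc/hosts"

def suspicious_hosts_entries(path=None):
    findings = []
    with open(path or hosts_path(), encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()      # drop comments
            if not line:
                continue
            parts = line.split()
            ip, names = parts[0], parts[1:]
            for name in names:
                if name.lower() in SENSITIVE_DOMAINS:
                    findings.append((name, ip))        # domain statically pinned to an IP
    return findings

if __name__ == "__main__":
    for name, ip in suspicious_hosts_entries():
        print(f"WARNING: {name} is statically mapped to {ip} in the hosts file")
```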

System Reconfiguration Attack

In this form of phishing attack, the phisher manipulates the settings on a user's computer for malicious activities so that the information on the PC can be compromised. System reconfiguration can be achieved using different methods, such as reconfiguring the operating system or modifying the user's Domain Name System (DNS) server address. The wireless evil twin is an example of a system reconfiguration attack in which all of the user's traffic is monitored via a malicious wireless Access Point (AP) (Jakobsson and Myers, 2006).

Data Theft

Data theft is the unauthorized accessing and stealing of confidential information from a business or individual. Data theft can be performed through a phishing email that leads to the download of malicious code onto the user's computer, which in turn steals confidential information stored on that computer (Jakobsson and Myers, 2006). Stolen information such as passwords, social security numbers, credit card details, sensitive emails, and other personal data can be used directly by the phisher or sold on for different purposes.

Domain Name System Based Phishing (Pharming)

Any form of phishing that interferes with the domain name system, so that the user is redirected to a malicious website by polluting the user's DNS cache with wrong information, is called DNS-based phishing. Although the hosts file is not part of the DNS, hosts file poisoning is another form of DNS-based phishing. Alternatively, by compromising the DNS server itself, the genuine IP addresses can be modified, unwillingly taking the user to a fake location. A user can fall prey to pharming even when clicking on a legitimate link, because the website's domain name system (DNS) can be hijacked by cybercriminals (Jakobsson and Myers, 2006).

Content Injection Phishing

Content-injection phishing refers to inserting false content into a legitimate site. This malicious content can misdirect the user to fake websites, leading them to disclose sensitive information to the hacker, or it can lead to malware being downloaded onto the user's device (Jakobsson and Myers, 2006). The malicious content can be injected into a legitimate site in three primary ways:

1. The hacker exploits a security vulnerability and compromises a web server.

2. The hacker exploits a Cross-Site Scripting (XSS) vulnerability, a programming flaw that enables attackers to insert client-side scripts into web pages, which will then be viewed by visitors to the targeted site.

3. The hacker exploits a Structured Query Language (SQL) injection vulnerability, which allows attackers to steal information from the website's database by executing database commands on a remote server. (Standard defenses against the last two injection routes are sketched after this list.)
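
The sketch below illustrates the standard defenses against the last two injection routes: output escaping for XSS and parameterized queries for SQL injection. It is a generic example, not taken from the cited sources; the table layout and sample data are hypothetical.

```python
# Minimal sketch of standard defenses against the injection routes above:
# escape untrusted text before rendering it (XSS) and use parameterized
# queries instead of string concatenation (SQL injection). The table and
# column names are illustrative.
import html
import sqlite3

def render_comment(user_text: str) -> str:
    # html.escape neutralizes <script> payloads before the text reaches the page
    return f"<p>{html.escape(user_text)}</p>"

def find_user(conn: sqlite3.Connection, username: str):
    # The "?" placeholder keeps attacker-supplied input out of the SQL syntax
    return conn.execute("SELECT id, email FROM users WHERE name = ?", (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(render_comment('<script>steal()</script>'))
    print(find_user(conn, "alice' OR '1'='1"))   # returns [] instead of dumping the table
```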

Man-In-The-Middle Phishing

A Man-In-The-Middle (MITM) attack is a form of phishing in which the phisher inserts themselves into the communication between two parties (i.e., the user and the legitimate website) and tries to obtain information from both by intercepting the victim's communications, so that messages go to the attacker instead of directly to the legitimate recipients. The attacker records the information and misuses it later. The MITM attack is conducted by redirecting the user to a malicious server through techniques such as Address Resolution Protocol (ARP) poisoning, DNS spoofing, Trojan key loggers, and URL obfuscation (Jakobsson and Myers, 2006).

Search Engine Phishing

In this phishing technique, the phisher creates malicious websites with attractive offers and uses Search Engine Optimization (SEO) tactics to have them indexed legitimately, so that they appear to the user when searching for products or services. This is also known as black hat SEO (Jakobsson and Myers, 2006).

URL and HTML Obfuscation Attacks

In most phishing attacks, phishers aim to convince the user to click on a given link that connects the victim to a malicious phishing server instead of the intended destination server. This is the most popular technique used by today's phishers. The attack is performed by obfuscating the real link (URL) that the user intends to connect to; in other words, the attacker makes their web address look like the legitimate one. Bad domain names and host name obfuscation are common methods used by attackers to fake an address (Ollmann, 2004).
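
The obfuscation tricks described above leave measurable traces in the URL itself. The following sketch applies a few simple heuristic checks; the specific rules and thresholds are illustrative assumptions, not a published detector.

```python
# Minimal sketch of rule-based checks for the obfuscation tricks described
# above: "@" in the URL, a raw IP address instead of a host name, punycode
# host names, and an excessive number of subdomain labels. Thresholds are
# illustrative assumptions.
import ipaddress
from urllib.parse import urlparse

def obfuscation_flags(url: str) -> list:
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if "@" in (parsed.netloc or ""):
        flags.append("credentials-in-url")          # e.g., http://bank.com@evil.example.net/
    try:
        ipaddress.ip_address(host)
        flags.append("ip-address-host")
    except ValueError:
        pass
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode-host")
    if host.count(".") >= 4:
        flags.append("many-subdomains")             # e.g., login.bank.com.secure.example.net
    return flags

if __name__ == "__main__":
    for u in ["http://www.bank.com@203.0.113.7/login",
              "https://login.bank.com.secure.example.net/session",
              "https://www.bank.com/login"]:
        print(u, "->", obfuscation_flags(u) or "no flags")
```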

Countermeasures

A range of solutions have been discussed and proposed by researchers to overcome the problem of phishing, but no single solution can yet be trusted to mitigate all of these attacks (Hong, 2012; Boddy, 2018; Chanti and Chithralekha, 2020). The phishing countermeasures proposed in the literature can be categorized into three major defense strategies. The first line of defense is human-based: educating end-users to recognize phishing and avoid taking the bait. The second line of defense is technical solutions, which involve preventing the attack at an early stage, such as at the vulnerability level, to stop the threat from materializing on the user's device (thereby decreasing human exposure), and detecting the attack once it is launched, at the network level or on the end-user device. This also includes applying specific techniques to track down the source of the attack (for example, identifying newly registered domains that closely match well-known domain names). The third line of defense is the use of law enforcement as a deterrent control. These approaches can be combined to create much stronger anti-phishing solutions and are discussed in detail below.

Human Education (Improving User Awareness About Phishing)

Human education is by far an effective countermeasure for avoiding and preventing phishing attacks. Awareness and human training are the first defense approach in the proposed methodology for fighting phishing, even though they do not guarantee complete protection (Hong, 2012). End-user education reduces users' susceptibility to phishing attacks and complements other technical solutions. According to the analysis carried out in (Bailey et al., 2008), 95% of phishing attacks are caused by human error; nonetheless, existing phishing detection training is not enough to combat current sophisticated attacks. In the study presented by Khonji et al. (2013), security experts disagree about the effectiveness and usability of user education. Some security experts claim that user education is not effective, because security is not users' primary goal and users lack the motivation to educate themselves about phishing (Scaife et al., 2016), while others confirm that user education can be effective if designed properly (Evers, 2006; Whitman and Mattord, 2012). Moreover, user training has been mentioned by many researchers as an effective way to protect users when they use online services (Dodge et al., 2007; Salem et al., 2010; Chanti and Chithralekha, 2020). To detect and avoid phishing emails, a combined training approach was proposed in the study (Salem et al., 2010). The proposed solution uses a combination of tools and human learning: a security awareness program is introduced to the user as a first step, an intelligent system then detects attacks at the email level, and finally the emails are classified by a fuzzy-logic-based expert system. The main criticism of this method is that the study selects only a limited set of email characteristics as distinguishing features (Kumaraguru et al., 2010; CybintCyberSolutions, 2018). Moreover, the majority of phishing training programs focus on how to recognize and avoid phishing emails and websites, while other threatening phishing types, such as voice phishing and malware or adware phishing, receive less attention. The authors in (Salem et al., 2010) also found that the most commonly used educational solutions are not useful if users ignore the notifications and warnings about fake websites. Training users should follow three major directions. The first is awareness training through seminars or online courses for employees within organizations or for individuals. The second is using mock phishing attacks to test users' vulnerability and allow them to assess their own knowledge about phishing; however, only 38% of global organizations claim they are prepared to handle a sophisticated cyber-attack (Kumaraguru et al., 2010). Wombat Security's State of the Phish™ Report 2018 showed that approximately two-fifths of American companies use computer-based online awareness training and simulated phishing attacks as educational tools on a monthly basis, while just 15% of United Kingdom firms do so (CybintCyberSolutions, 2018). The third direction is educating people by developing games that teach them about phishing. The game developer should take different aspects into consideration before designing the game, such as the audience's age and gender, because people's susceptibility to phishing varies.
The authors of the study (Sheng et al., 2007) developed a game called Anti-Phishing Phil that trains users to identify phishing web pages and then tested the efficiency and effectiveness of the game. The results showed that game participants improved their ability to identify phishing by 61%, indicating that interactive games may be an enjoyable way of educating people. Although user education and training can be very effective in mitigating security threats, phishing is becoming more complex, and cybercriminals can fool even security experts by crafting convincing spear phishing emails via social media. Therefore, individual users and employees must have at least basic knowledge about dealing with suspicious emails and should report them to IT staff and the relevant authorities. In addition, phishers change their strategies continuously, which makes it harder for organizations, especially small and medium enterprises, to afford the cost of employee education. With millions of people logging on to their social media accounts every day, social media phishing is phishers' favorite medium for deceiving their victims. For example, phishers take advantage of the pervasiveness of Facebook to set up creative phishing attacks utilizing the Facebook Login feature, enabling the phisher to compromise all of the user's accounts that use the same credentials (VadeSecure). Social networks have taken some countermeasures to reduce suspicious activities, such as the two-factor authentication for logging in required by Facebook and the machine-learning techniques used by Snapchat to detect and prevent suspicious links sent within the app (Corrata, 2018). Countermeasures against Soshing and phone phishing attacks might also include:

• Install anti-virus and anti-spam software as a first line of defense and keep it up to date to detect and prevent unauthorized access.

• Educate yourself about recent information on phishing, the latest trends, and countermeasures.

• Never click on hyperlinks attached to a suspicious email, post, tweet, or direct message.

• Never trust social media blindly: do not give sensitive information over the phone or to non-trusted accounts, and do not accept friend requests from people you do not know.

• Use a unique password for each account.

Training and educating users is an effective anti-phishing countermeasure and has already shown promising initial results. The main downside of this solution is its high cost (Dodge et al., 2007). Moreover, it requires trained users to have basic knowledge of computer security.

Technical Solutions

The proposed technical solutions for detecting and blocking phishing attacks can be divided into two major approaches: non-content-based and content-based solutions (Le et al., 2006; Bin et al., 2010; Boddy, 2018). Both approaches are briefly described in this section. Non-content-based methods include blacklists and whitelists, which classify fake emails or webpages based on information that is not part of the email or webpage itself, such as URL and domain name features (Dodge et al., 2007; Ma et al., 2009; Bin et al., 2010; Salem et al., 2010). In blacklist and whitelist approaches, a list of known URLs and sites is maintained, and the website under scrutiny is checked against this list in order to be classified as a phishing or legitimate site. The downside of this approach is that it will not identify all phishing websites, because once a phishing site is taken down, the phisher can easily register a new domain (Miyamoto et al., 2009). Content-based methods classify the page or email based on the information within its content, such as text, images, and also HTML, JavaScript, and Cascading Style Sheets (CSS) code (Zhang et al., 2007; Maurer and Herzner, 2012). Content-based solutions involve Machine Learning (ML), heuristics, visual similarity, and image processing methods (Miyamoto et al., 2009; Chanti and Chithralekha, 2020). Finally, multifaceted methods apply a combination of the previous approaches to detect and prevent phishing attacks (Afroz and Greenstadt, 2009). For email filtering, ML techniques are commonly used; for example, the first email phishing filter was developed in 2007 by the authors of (Fette et al., 2007). This technique uses a set of features such as URLs that use different domain names. Spam filtering techniques (Cormack et al., 2011) and statistical classifiers (Bergholz et al., 2010) are also used to identify phishing emails. Authentication and verification technologies are used in spam email filtering as well, as an alternative to heuristic methods; for example, the Sender Policy Framework (SPF) verifies whether a sender is valid when accepting mail from a remote mail server or email client (Deshmukh and raddha Popat, 2017).
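
As a minimal illustration of content-based email filtering, the sketch below extracts a few features of the kind mentioned above (IP-based URLs, link counts, dots in the host name, mismatching anchor text). In practice such features would feed a trained classifier; the simple threshold score used here is only an illustrative stand-in, not any method from the cited works.

```python
# Minimal sketch of content-based feature extraction for an email phishing
# filter, loosely inspired by the kinds of features cited above. The
# threshold score is an illustrative stand-in for a trained classifier.
import re
from urllib.parse import urlparse

LINK_RE = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>([^<]*)</a>', re.I)

def email_features(html_body: str) -> dict:
    links = LINK_RE.findall(html_body)
    hosts = [urlparse(href).hostname or "" for href, _ in links]
    return {
        "num_links": len(links),
        "has_ip_url": any(re.fullmatch(r"[\d.]+", h) for h in hosts),
        "max_dots": max((h.count(".") for h in hosts), default=0),
        "mismatching_anchor": any(
            (urlparse(text.strip()).hostname or "") not in ("", urlparse(href).hostname)
            for href, text in links
        ),
        "has_script": "<script" in html_body.lower(),
    }

def looks_phishy(features: dict) -> bool:
    score = (2 * features["has_ip_url"] + 2 * features["mismatching_anchor"]
             + features["has_script"] + (features["max_dots"] >= 4))
    return score >= 2

body = '<a href="http://203.0.113.7/verify">https://www.bank.com</a>'
print(email_features(body), looks_phishy(email_features(body)))
```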

Technical anti-phishing solutions are available at different levels of the delivery chain, such as mail servers and clients, Internet Service Providers (ISPs), and web browser tools. Drawing on the anatomy proposed in Proposed Phishing Anatomy, the authors categorize technical solutions into the following approaches:

1. Techniques to detect the attack after it has been launched, such as scanning the web to find fake websites. For example, content-based phishing detection approaches are heavily deployed on the Internet. Features of website elements such as images, URLs, and text content are analyzed using rule-based approaches and machine learning that examine the presence of special characters (@), IP addresses instead of the domain name, prefixes/suffixes, HTTPS in the domain part, and other features (Jeeva and Rajsingh, 2016). Fuzzy Logic (FL) has also been used as an anti-phishing model to help classify websites as legitimate or "phishy," as this model deals with intervals rather than specific numeric values (Aburrous et al., 2008).

2. Techniques to prevent the attack from reaching the user's system. Phishing prevention is an important step in defending against phishing by blocking the user from seeing and dealing with the attack at all. In email phishing, anti-spam software tools can block suspicious emails. Phishers usually send genuine-looking emails that dupe the user into opening an attachment or clicking on a link. Some of these emails pass the spam filter because phishers use misspelled words, so techniques that detect fake emails by checking spelling and grammar are increasingly used to prevent such emails from reaching the user's mailbox. The authors of the study (Fette et al., 2007) explored email phishing using the C4.5 decision tree generator algorithm and then developed a new classification algorithm based on the Random Forest algorithm. The developed method, called "Phishing Identification by Learning on Features of Email Received" (PILFER), classifies phishing email based on various features, such as IP-based URLs, the number of links in the HTML part(s) of an email, the number of domains, the number of dots, nonmatching URLs, and the presence of JavaScript. The method showed high accuracy in detecting phishing emails (Afroz and Greenstadt, 2009).

3. Corrective techniques that take down the compromised website by requesting the website's Internet Service Provider (ISP) to shut down the fake site in order to prevent more users from falling victim (Moore and Clayton, 2007; Chanti and Chithralekha, 2020). ISPs are responsible for taking down fake websites. Removing compromised and illegal websites is a complex process involving many entities, including private companies, self-regulatory bodies, government agencies, volunteer organizations, law enforcement, and service providers. Usually, illegal websites are taken down by takedown orders issued by courts or, in some jurisdictions, by law enforcement; they can also be taken down voluntarily by the providers themselves as a result of takedown notices (Moore and Clayton, 2007; Hutchings et al., 2016). According to a PhishLabs report (PhishLabs, 2019), taking down phishing sites is helpful but not completely effective, as these sites can remain alive for days, stealing customers' credentials before the attack is detected.

4. Warning tools or security indicators embedded in the web browser to inform the user after the attack is detected. For example, the eBay Toolbar and Account Guard (eBay Toolbar and Account Guard, 2009) protect customers' eBay and PayPal passwords, respectively, by alerting users about the authenticity of the sites into which they are about to type their password. Numerous anti-phishing solutions rely mainly on warnings displayed on a security toolbar; in addition, some toolbars, such as those from McAfee and Netscape, block suspicious sites and warn about them. A study presented in (Robichaux and Ganger, 2006) evaluated the performance of eight anti-phishing solutions, including Microsoft Internet Explorer 7, EarthLink, eBay, McAfee, GeoTrust, Google (using Firefox), Netscape, and Netcraft. These are warning and blocking tools that allow legitimate sites while blocking and warning about known phishing sites. The study found that Internet Explorer and the Netcraft Toolbar showed the most effective results among the tested anti-phishing tools. However, security toolbars still fail to prevent people from falling victim to phishing, despite improving internet security in general (Abu-Nimeh and Nair, 2008).

5. Authentication (Moore and Clayton, 2007) and authorization (Hutchings et al., 2016) techniques that protect against phishing by verifying the identity of the legitimate person. This prevents phishers from accessing a protected resource and conducting their attack. There are three types of authentication: single-factor authentication requires only a username and password; two-factor authentication requires additional information on top of the username and password, such as a One-Time Password (OTP) sent to the user's email address or phone; and multi-factor authentication uses more than one form of identity (i.e., a combination of something you know, something you are, and something you have). Widely used methods in the authorization process include API authorization keys and OAuth 2.0, which allow a previously authorized application to access the system. (A minimal sketch of OTP verification is given after this list.)
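
As a concrete illustration of the OTP step in two-factor authentication referenced in item 5, the sketch below follows the standard TOTP construction (RFC 6238) using only the Python standard library; the shared secret is a hypothetical example value.

```python
# Minimal sketch of the one-time-password (OTP) step in two-factor
# authentication, following the standard TOTP construction (RFC 6238).
# The shared secret below is a hypothetical example value.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, step: int = 30, at: float = None) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current step and +/- `window` steps for clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, at=now + drift * 30), submitted)
               for drift in range(-window, window + 1))

if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"                           # example base32 secret
    code = totp(secret)
    print("server accepts code:", verify(secret, code))   # True
```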

However, the continued increase in phishing attacks shows that previous methods do not provide the required protection against most existing phishing attacks, because no single solution or technology can prevent all of them. An effective anti-phishing solution should be based on a combination of technical solutions and increased user awareness (Boddy, 2018).

Solutions Provided by Legislations as a Deterrent Control

A cyber-attack is considered a crime when an individual intentionally accesses personal information on a computer without permission, even if the individual does not steal information or damage the system (Mince-Didier, 2020). Since the sole objective of almost all phishing attacks is to obtain sensitive information with the intent of committing identity theft, and since there are currently no federal laws in the United States aimed specifically at phishing, phishing crimes are usually covered under identity theft laws. Phishing is considered a crime even if the victim does not actually fall for the scam; punishments depend on the circumstances and usually include jail, fines, restitution, and probation (Nathan, 2020). Phishing attacks cause different levels of damage to victims, such as financial and reputational losses. Therefore, law enforcement authorities should track down these attacks in order to punish the criminals, as with real-world crimes. As a complement to technical solutions and human education, the support provided by applicable laws and regulations can play a vital role as a deterrent control. Authorities around the world have increasingly created regulations to mitigate the rise of phishing attacks and their impact. The first anti-phishing laws were enacted by the United States, where the FTC added phishing attacks to the computer crime list in January 2004; a year later, the "Anti-Phishing Act" was introduced in the US Congress in March 2005 (Mohammad et al., 2014). Meanwhile, in the United Kingdom, legislation is gradually being adapted to address phishing and other forms of cyber-crime. In 2006, the United Kingdom government amended the Computer Misuse Act 1990, intending to bring it up to date with developments in computer crime and to increase penalties for breaches, including prison sentences of up to 10 years (eBay Toolbar and Account Guard, 2009; PhishLabs, 2019). In this regard, a student in the United Kingdom who made hundreds of thousands of pounds blackmailing pornography website users was jailed in April 2019 for six years and five months; according to the National Crime Agency (NCA), this attacker was the most prolific cybercriminal to be sentenced in the United Kingdom (Casciani, 2019). Moreover, organizations bear part of the responsibility for protecting personal information, as stated in the Data Protection Act 2018 and the EU General Data Protection Regulation (GDPR). Phishing websites can also be taken down through law enforcement action. In the United Kingdom, websites can be taken down by the National Crime Agency (NCA), which includes the National Cyber Crime Unit, and by the City of London Police, which includes the Police Intellectual Property Crime Unit (PIPCU) and the National Fraud Intelligence Bureau (NFIB) (Hutchings et al., 2016).

However, anti-phishing law enforcement still faces numerous challenges and limitations. Firstly, after perpetrating the phishing attack, the phisher can vanish into cyberspace, making it difficult to prove the guilt of the offender and to recover the damages caused by the attack, which limits the effectiveness of law enforcement. Secondly, even if the attacker's identity is disclosed, in the case of international attackers it can be difficult to bring them to justice because of differences in countries' legislation (e.g., extradition treaties). Also, the attack may be conducted within a short time span; for instance, the average lifetime of a phishing website is about 54 h, as stated by the APWG, so governments and authorities must respond quickly to detect, control, and identify the perpetrators of the attack (Ollmann, 2004).

Conclusion

Phishing attacks remain one of the major threats to individuals and organizations to date. As highlighted in this article, this is mainly driven by human involvement in the phishing cycle. Phishers often exploit human vulnerabilities in addition to favorable technological conditions (i.e., technical vulnerabilities). It has been identified that age, gender, internet addiction, user stress, and many other attributes affect susceptibility to phishing among people. In addition to traditional phishing channels (e.g., email and web), new phishing mediums such as voice and SMS phishing are on the increase. Furthermore, social media-based phishing has grown in parallel with the growth of social media. Concomitantly, phishing has developed beyond obtaining sensitive information and financial crimes to cyber terrorism, hacktivism, damaging reputations, espionage, and nation-state attacks. Research has been conducted to identify the motivations, techniques, and countermeasures for these new crimes; however, there is no single solution for the phishing problem due to the heterogeneous nature of the attack vector. This article has investigated the problems presented by phishing and proposed a new anatomy that describes the complete life cycle of phishing attacks. This anatomy provides a wider outlook on phishing attacks and an accurate definition covering end-to-end exclusion and realization of the attack.

Although human education is the most effective defense against phishing, it is difficult to remove the threat completely due to the sophistication of the attacks and their social engineering elements. Although continual security awareness training is the key to avoiding phishing attacks and reducing their impact, developing efficient anti-phishing techniques that prevent users from being exposed to the attack in the first place is an essential step in mitigating these attacks. To this end, this article discussed the importance of developing anti-phishing techniques that detect and block the attack. Furthermore, techniques that determine the source of the attack could provide a stronger anti-phishing solution, as discussed in this article.

Furthermore, this article identified the importance of law enforcement as a deterrent mechanism. Further investigations and research are necessary as discussed below.

1. Further research is necessary to study and investigate susceptibility to phishing among users, which would assist in designing stronger and self-learning anti-phishing security systems.

2. Research on social media-based phishing, voice phishing, and SMS phishing is sparse, and these emerging threats are predicted to increase significantly over the coming years.

3. Laws and legislation that apply to phishing are still in their infancy; in fact, there are no specific phishing laws in many countries. Most phishing attacks are covered under traditional criminal laws, such as those on identity theft and computer crime. Therefore, drafting specific anti-phishing laws is an important step in mitigating these attacks at a time when such crimes are becoming more common.

4. Determining the source of the attack before the end of the phishing lifecycle and enforcing legislation against the offender could restrict phishing attacks drastically and would benefit from further research.

It can be observed that the mediums used for phishing attacks have changed from traditional emails to social media-based phishing. There is a clear lag between sophisticated phishing attacks and existing countermeasures. The emerging countermeasures should be multidimensional to tackle both human and technical elements of the attack. This article provides valuable information about current phishing attacks and countermeasures whilst the proposed anatomy provides a clear taxonomy to understand the complete life cycle of phishing.

Author Contributions

This work was carried out by our PhD student ZA, supported by her supervisory team.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

AOL America Online

APWG Anti-Phishing Working Group

ARPANET Advanced Research Projects Agency Network

ARP address resolution protocol.

BHO Browser Helper Object

BEC business email compromise

COVID-19 Coronavirus disease 2019

CSS cascading style sheets

DDoS distributed denial of service

DNS Domain Name System

DoS Denial of Service

FTC Federal Trade Commission

FL Fuzzy Logic

HTTPS Hypertext Transfer Protocol Secure

IE Internet Explorer

ICT Information and Communications Technology

IM Instant Message

IT Information Technology

IP Internet Protocol

MITM Man-in-the-Middle

NCA National Crime Agency

NFIB National Fraud Intelligence Bureau

PIPCU Police Intellectual Property Crime Unit

OS Operating Systems

PBX Private Branch Exchange

SMishing Text Message Phishing

SPF Sender Policy Framework

SMTP Simple Mail Transfer Protocol

SMS Short Message Service

Soshing Social Media Phishing

SQL structured query language

URL Uniform Resource Locator

UK United Kingdom

US United States

USB Universal Serial Bus

US-CERT United States Computer Emergency Readiness Team.

Vishing Voice Phishing

VNC Virtual Network Computing

VoIP Voice over Internet Protocol

XSS Cross-Site Scripting

1 Proofpoint is “a leading cybersecurity company that protects organizations’ greatest assets and biggest risks: their people. With an integrated suite of cloud-based solutions” (Proofpoint, 2019b).

2 APWG is “the international coalition unifying the global response to cybercrime across industry, government and law-enforcement sectors and NGO communities” (APWG, 2020).

3 Caller ID is “a telephone facility that displays a caller’s phone number on the recipient's phone device before the call is answered” (Techpedia, 2021).

4 An IPPBX is “a telephone switching system within an enterprise that switches calls between VoIP users on local lines while allowing all users to share a certain number of external phone lines” ( Margaret, 2008 ).

Abad, C. (2005). The economy of phishing: a survey of the operations of the phishing market. First Monday 10, 1–11. doi:10.5210/fm.v10i9.1272


Abu-Nimeh, S., and Nair, S. (2008). “Bypassing security toolbars and phishing filters via dns poisoning,” in IEEE GLOBECOM 2008–2008 IEEE global telecommunications conference , New Orleans, LA , November 30–December 2, 2008 ( IEEE) , 1–6. doi:10.1109/GLOCOM.2008.ECP.386

Aburrous, M., Hossain, M. A., Thabatah, F., and Dahal, K. (2008). “Intelligent phishing website detection system using fuzzy techniques,” in 2008 3rd international conference on information and communication technologies: from theory to applications (New York, NY: IEEE), 1–6. doi:10.1109/ICTTA.2008.4530019

Afroz, S., and Greenstadt, R. (2009). “Phishzoo: an automated web phishing detection approach based on profiling and fuzzy matching,” in Proceeding 5th IEEE international conference semantic computing (ICSC) , 1–11.


Alsharnouby, M., Alaca, F., and Chiasson, S. (2015). Why phishing still works: user strategies for combating phishing attacks. Int. J. Human-Computer Stud. 82, 69–82. doi:10.1016/j.ijhcs.2015.05.005

APWG (2018). Phishing activity trends report 3rd quarter 2018 . US. 1–11.

APWG (2020). APWG phishing attack trends reports. 2020 anti-phishing work. Group, Inc Available at: https://apwg.org/trendsreports/ (Accessed September 20, 2020).

Arachchilage, N. A. G., and Love, S. (2014). Security awareness of computer users: a phishing threat avoidance perspective. Comput. Hum. Behav. 38, 304–312. doi:10.1016/j.chb.2014.05.046

Arnsten, B. A., Mazure, C. M., and April, R. S. (2012). Everyday stress can shut down the brain’s chief command center. Sci. Am. 306, 1–6. Available at: https://www.scientificamerican.com/article/this-is-your-brain-in-meltdown/ (Accessed October 15, 2019).

Bailey, J. L., Mitchell, R. B., and Jensen, B. k. (2008). “Analysis of student vulnerabilities to phishing,” in 14th americas conference on information systems, AMCIS 2008 , 75–84. Available at: https://aisel.aisnet.org/amcis2008/271 .

Barracuda (2020). Business email compromise (BEC). Available at: https://www.barracuda.com/glossary/business-email-compromise (Accessed November 15, 2020).

Belcic, I. (2020). Rootkits defined: what they do, how they work, and how to remove them. Available at: https://www.avast.com/c-rootkit (Accessed November 7, 2020).

Bergholz, A., De Beer, J., Glahn, S., Moens, M.-F., Paaß, G., and Strobel, S. (2010). New filtering approaches for phishing email. JCS 18, 7–35. doi:10.3233/JCS-2010-0371

Bin, S., Qiaoyan, W., and Xiaoying, L. (2010). “A DNS based anti-phishing approach.” in 2010 second international conference on networks security, wireless communications and trusted computing , Wuhan, China , April 24–25, 2010 . ( IEEE ), 262–265. doi:10.1109/NSWCTC.2010.196

Boddy, M. (2018). Phishing 2.0: the new evolution in cybercrime. Comput. Fraud Secur. 2018, 8–10. doi:10.1016/S1361-3723(18)30108-8

Casciani, D. (2019). Zain Qaiser: student jailed for blackmailing porn users worldwide. Available at: https://www.bbc.co.uk/news/uk-47800378 (Accessed April 9, 2019).

Chanti, S., and Chithralekha, T. (2020). Classification of anti-phishing solutions. SN Comput. Sci. 1, 11. doi:10.1007/s42979-019-0011-2

Checkpoint (2020). Check point research’s Q1 2020 brand phishing report. Available at: https://www.checkpoint.com/press/2020/apple-is-most-imitated-brand-for-phishing-attempts-check-point-researchs-q1-2020-brand-phishing-report/ (Accessed August 6, 2020).

cisco (2018). What is the difference: viruses, worms, Trojans, and bots? Available at: https://www.cisco.com/c/en/us/about/security-center/virus-differences.html (Accessed January 20, 2020).

CISA (2018). What is phishing. Available at: https://www.us-cert.gov/report-phishing (Accessed June 10, 2019).

Cormack, G. V., Smucker, M. D., and Clarke, C. L. A. (2011). Efficient and effective spam filtering and re-ranking for large web datasets. Inf. Retrieval 14, 441–465. doi:10.1007/s10791-011-9162-z

Corrata (2018). The rising threat of social media phishing attacks. Available at: https://corrata.com/the-rising-threat-of-social-media-phishing-attacks/%0D (Accessed October 29, 2019).

Crane, C. (2019). The dirty dozen: the 12 most costly phishing attack examples. Available at: https://www.thesslstore.com/blog/the-dirty-dozen-the-12-most-costly-phishing-attack-examples/#:∼:text=At some level%2C everyone is susceptible to phishing,outright trick you into performing a particular task (Accessed August 2, 2020).

CSI Onsite (2012). Phishing. Available at: http://csionsite.com/2012/phishing/ (Accessed May 8, 2019).

Cui, Q., Jourdan, G.-V., Bochmann, G. V., Couturier, R., and Onut, I.-V. (2017). Tracking phishing attacks over time. Proc. 26th Int. Conf. World Wide Web - WWW ’17 , Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee . 667–676. doi:10.1145/3038912.3052654

CVEdetails (2005). Vulnerability in microsoft internet explorer. Available at: https://www.cvedetails.com/cve/CVE-2005-4089/ (Accessed August 20, 2019).

Cybint Cyber Solutions (2018). 13 alarming cyber security facts and stats. Available at: https://www.cybintsolutions.com/cyber-security-facts-stats/ (Accessed July 20, 2019).

Deshmukh, M., and raddha Popat, S. (2017). Different techniques for detection of phishing attack. Int. J. Eng. Sci. Comput. 7, 10201–10204. Available at: http://ijesc.org/ .

Dhamija, R., Tygar, J. D., and Hearst, M. (2006). “Why phishing works,” in Proceedings of the SIGCHI conference on human factors in computing systems - CHI ’06 , Montréal Québec, Canada , (New York, NY: ACM Press ), 581. doi:10.1145/1124772.1124861

Diaz, A., Sherman, A. T., and Joshi, A. (2020). Phishing in an academic community: a study of user susceptibility and behavior. Cryptologia 44, 53–67. doi:10.1080/01611194.2019.1623343

Dodge, R. C., Carver, C., and Ferguson, A. J. (2007). Phishing for user security awareness. Comput. Security 26, 73–80. doi:10.1016/j.cose.2006.10.009

eBay Toolbar and Account Guard (2009). Available at: https://download.cnet.com/eBay-Toolbar/3000-12512_4-10153544.html (Accessed August 7, 2020).

EDUCBA (2017). Hackers vs crackers: easy to understand exclusive difference. Available at: https://www.educba.com/hackers-vs-crackers/ (Accessed July 17, 2019).

Evers, J. (2006). Security expert: user education is pointless. Available at: https://www.cnet.com/news/security-expert-user-education-is-pointless/ (Accessed June 25, 2019).

F5Networks (2018). Panda malware broadens targets to cryptocurrency exchanges and social media. Available at: https://www.f5.com/labs/articles/threat-intelligence/panda-malware-broadens-targets-to-cryptocurrency-exchanges-and-social-media (Accessed April 23, 2019).

Fette, I., Sadeh, N., and Tomasic, A. (2007). “Learning to detect phishing emails,” in Proceedings of the 16th international conference on world wide web - WWW ’07 , Banff Alberta, Canada , (New York, NY: ACM Press) , 649–656. doi:10.1145/1242572.1242660

Financial Fraud Action UK (2017). Fraud the facts 2017: the definitive overview of payment industry fraud. London. Available at: https://www.financialfraudaction.org.uk/fraudfacts17/assets/fraud_the_facts.pdf .

Fraud Watch International (2019). Phishing attack trends for 2019. Available at: https://fraudwatchinternational.com/phishing/phishing-attack-trends-for-2019/ (Accessed October 29, 2019).

FTC (2018). Netflix scam email. Available at: https://www.ftc.gov/tips-advice/business-center/small-businesses/cybersecurity/phishing (Accessed May 8, 2019).

Furnell, S. (2007). An assessment of website password practices). Comput. Secur. 26, 445–451. doi:10.1016/j.cose.2007.09.001

Getsafeonline (2017). Caught on the net. Available at: https://www.getsafeonline.org/news/caught-on-the-net/%0D (Accessed August 1, 2020).

GOV.UK (2020). Cyber security breaches survey 2020. Available at: https://www.gov.uk/government/publications/cyber-security-breaches-survey-2020/cyber-security-breaches-survey-2020 (Accessed August 6, 2020).

Gupta, P., Srinivasan, B., Balasubramaniyan, V., and Ahamad, M. (2015). “Phoneypot: data-driven understanding of telephony threats,” in Proceedings 2015 network and distributed system security symposium , (Reston, VA: Internet Society ), 8–11. doi:10.14722/ndss.2015.23176

Hadlington, L. (2017). Human factors in cybersecurity; examining the link between internet addiction, impulsivity, attitudes towards cybersecurity, and risky cybersecurity behaviours. Heliyon 3, e00346-18. doi:10.1016/j.heliyon.2017.e00346

Herley, C., and Florêncio, D. (2008). “A profitless endeavor,” in New security paradigms workshop (NSPW ’08) , New Hampshire, United States , October 25–28, 2021 , 1–12. doi:10.1145/1595676.1595686

Hewage, C. (2020). Coronavirus pandemic has unleashed a wave of cyber attacks – here’s how to protect yourself. Conversat . Available at: https://theconversation.com/coronavirus-pandemic-has-unleashed-a-wave-of-cyber-attacks-heres-how-to-protect-yourself-135057 (Accessed November 16, 2020).

Hong, J. (2012). The state of phishing attacks. Commun. ACM 55, 74–81. doi:10.1145/2063176.2063197

Huber, M., Kowalski, S., Nohlberg, M., and Tjoa, S. (2009). “Towards automating social engineering using social networking sites,” in 2009 international conference on computational science and engineering, Vancouver, BC, August 29–31, 2009 (IEEE), 117–124. doi:10.1109/CSE.2009.205

Hutchings, A., Clayton, R., and Anderson, R. (2016). “Taking down websites to prevent crime,” in 2016 APWG symposium on electronic crime research (eCrime) ( IEEE ), 1–10. doi:10.1109/ECRIME.2016.7487947

Iuga, C., Nurse, J. R. C., and Erola, A. (2016). Baiting the hook: factors impacting susceptibility to phishing attacks. Hum. Cent. Comput. Inf. Sci. 6, 8. doi:10.1186/s13673-016-0065-2

Jagatic, T. N., Johnson, N. A., Jakobsson, M., and Menczer, F. (2007). Social phishing. Commun. ACM 50, 94–100. doi:10.1145/1290958.1290968

Jakobsson, M., and Myers, S. (2006). Phishing and countermeasures: understanding the increasing problems of electronic identity theft . New Jersey: John Wiley and Sons .

Jakobsson, M., Tsow, A., Shah, A., Blevis, E., and Lim, Y. K. (2007). “What instills trust? A qualitative study of phishing,” in Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics) , (Berlin, Heidelberg: Springer ), 356–361. doi:10.1007/978-3-540-77366-5_32

Jeeva, S. C., and Rajsingh, E. B. (2016). Intelligent phishing url detection using association rule mining. Hum. Cent. Comput. Inf. Sci. 6, 10. doi:10.1186/s13673-016-0064-3

Johnson, A. (2016). Almost 600 accounts breached in “celebgate” nude photo hack, FBI says. Available at: http://www.cnbc.com/id/102747765 (Accessed: February 17, 2020).

Kayne, R. (2019). What are script kiddies? Wisegeek. Available at: https://www.wisegeek.com/what-are-script-kiddies.htm (Accessed February 19, 2020).

Keck, C. (2018). FTC warns of sketchy Netflix phishing scam asking for payment details. Available at: https://gizmodo.com/ftc-warns-of-sketchy-netflix-phishing-scam-asking-for-p-1831372416 (Accessed April 23, 2019).

Keepnet LABS (2018). Statistical analysis of 126,000 phishing simulations carried out in 128 companies around the world. USA, France. Available at: www.keepnetlabs.com .

Keinan, G. (1987). Decision making under stress: scanning of alternatives under controllable and uncontrollable threats. J. Personal. Soc. Psychol. 52, 639–644. doi:10.1037/0022-3514.52.3.639

Khonji, M., Iraqi, Y., and Jones, A. (2013). Phishing detection: a literature survey. IEEE Commun. Surv. Tutorials 15, 2091–2121. doi:10.1109/SURV.2013.032213.00009

Kirda, E., and Kruegel, C. (2005). Protecting users against phishing attacks with AntiPhish. Proc. - Int. Comput. Softw. Appl. Conf. 1, 517–524. doi:10.1109/COMPSAC.2005.126

Krawchenko, K. (2016). The phishing email that hacked the account of John Podesta. CBSNEWS Available at: https://www.cbsnews.com/news/the-phishing-email-that-hacked-the-account-of-john-podesta/ (Accessed April 13, 2019).

Ksepersky (2020). Spam and phishing in Q1 2020. Available at: https://securelist.com/spam-and-phishing-in-q1-2020/97091/ (Accessed July 27, 2020).

Kumaraguru, P., Sheng, S., Acquisti, A., Cranor, L. F., and Hong, J. (2010). Teaching Johnny not to fall for phish. ACM Trans. Internet Technol. 10, 1–31. doi:10.1145/1754393.1754396

Latto, N. (2020). What is adware and how can you prevent it? Avast. Available at: https://www.avast.com/c-adware (Accessed May 8, 2020).

Le, D., Fu, X., and Hogrefe, D. (2006). A review of mobility support paradigms for the internet. IEEE Commun. Surv. Tutorials 8, 38–51. doi:10.1109/COMST.2006.323441

Lehman, T. J., and Vajpayee, S. (2011). “We’ve looked at clouds from both sides now,” in 2011 annual SRII global conference, San Jose, CA, March 20–April 2, 2011 (IEEE), 342–348. doi:10.1109/SRII.2011.46

Leyden, J. (2001). Virus toolkits are s’kiddie menace. Regist . Available at: https://www.theregister.co.uk/2001/02/21/virus_toolkits_are_skiddie_menace/%0D (Accessed June 15, 2019).

Lin, J., Sadeh, N., Amini, S., Lindqvist, J., Hong, J. I., and Zhang, J. (2012). “Expectation and purpose,” in Proceedings of the 2012 ACM conference on ubiquitous computing - UbiComp ’12 (New York, New York, USA: ACM Press ), 1625. doi:10.1145/2370216.2370290

Lininger, R., and Vines, D. R. (2005). Phishing: cutting the identity theft line. Print book . Indiana: Wiley Publishing, Inc .

Ma, J., Saul, L. K., Savage, S., and Voelker, G. M. (2009). “Identifying suspicious URLs.” in Proceedings of the 26th annual international conference on machine learning - ICML ’09 (New York, NY: ACM Press ), 1–8. doi:10.1145/1553374.1553462

Marforio, C., Masti, R. J., Soriente, C., Kostiainen, K., and Capkun, S. (2015). Personalized security indicators to detect application phishing attacks in mobile platforms. Available at: http://arxiv.org/abs/1502.06824 .

Margaret, R. I. P. (2008). PBX (private branch exchange). Available at: https://searchunifiedcommunications.techtarget.com/definition/IP-PBX (Accessed June 19, 2019).

Maurer, M.-E., and Herzner, D. (2012). Using visual website similarity for phishing detection and reporting. 1625–1630. doi:10.1145/2212776.2223683

Medvet, E., Kirda, E., and Kruegel, C. (2008). “Visual-similarity-based phishing detection,” in Proceedings of the 4th international conference on Security and privacy in communication netowrks - SecureComm ’08 (New York, NY: ACM Press ), 1. doi:10.1145/1460877.1460905

Merwe, A. v. d., Marianne, L., and Marek, D. (2005). “Characteristics and responsibilities involved in a Phishing attack,” in WISICT ’05: proceedings of the 4th international symposium on information and communication technologies. Trinity College Dublin, 249–254.

Microsoft (2020). Exploiting a crisis: how cybercriminals behaved during the outbreak. Available at: https://www.microsoft.com/security/blog/2020/06/16/exploiting-a-crisis-how-cybercriminals-behaved-during-the-outbreak/ (Accessed August 1, 2020).

Mince-Didier, A. (2020). Hacking a computer or computer network. Available at: https://www.criminaldefenselawyer.com/resources/hacking-computer.html (Accessed August 7, 2020).

Miyamoto, D., Hazeyama, H., and Kadobayashi, Y. (2009). “An evaluation of machine learning-based methods for detection of phishing sites,” in international conference on neural information processing ICONIP 2008: advances in neuro-information processing lecture notes in computer science . Editors M. Köppen, N. Kasabov, and G. Coghill (Berlin, Heidelberg: Springer Berlin Heidelberg ), 539–546. doi:10.1007/978-3-642-02490-0_66

Mohammad, R. M., Thabtah, F., and McCluskey, L. (2014). Predicting phishing websites based on self-structuring neural network. Neural Comput. Applic 25, 443–458. doi:10.1007/s00521-013-1490-z

Moore, T., and Clayton, R. (2007). “Examining the impact of website take-down on phishing,” in Proceedings of the anti-phishing working groups 2nd annual eCrime researchers summit on - eCrime ’07 (New York, NY: ACM Press ), 1–13. doi:10.1145/1299015.1299016

Morgan, S. (2019). 2019 official annual cybercrime report. USA, UK, Canada. Available at: https://www.herjavecgroup.com/wp-content/uploads/2018/12/CV-HG-2019-Official-Annual-Cybercrime-Report.pdf .

Nathan, G. (2020). What is phishing? + laws, charges & statute of limitations. Available at: https://www.federalcharges.com/phishing-laws-charges/ (Accessed August 7, 2020).

Okin, S. (2009). From script kiddies to organised cybercrime. Available at: https://comsecglobal.com/from-script-kiddies-to-organised-cybercrime-things-are-getting-nasty-out-there/ (Accessed August 12, 2019).

Ollmann, G. (2004). The phishing guide understanding & preventing phishing attacks abstract. USA. Available at: http://www.ngsconsulting.com .

Ong, S. (2014). Avast survey shows men more susceptible to mobile malware. Available at: https://www.mirekusoft.com/avast-survey-shows-men-more-susceptible-to-mobile-malware/ (Accessed November 5, 2020).

Ovelgönne, M., Dumitraş, T., Prakash, B. A., Subrahmanian, V. S., and Wang, B. (2017). Understanding the relationship between human behavior and susceptibility to cyber attacks. ACM Trans. Intell. Syst. Technol. 8, 1–25. doi:10.1080/00207284.1985.11491413

Parmar, B. (2012). Protecting against spear-phishing. Computer Fraud Security , 2012, 8–11. doi:10.1016/S1361-3723(12)70007-6

Phish Labs (2019). 2019 phishing trends and intelligence report the growing social engineering threat. Available at: https://info.phishlabs.com/hubfs/2019 PTI Report/2019 Phishing Trends and Intelligence Report.pdf .

PhishMe (2016). Q1 2016 malware review. Available at: WWW.PHISHME.COM .

PhishMe (2017). Human phishing defense enterprise phishing resiliency and defense report 2017 analysis of susceptibility, resiliency and defense against simulated and real phishing attacks. Available at: https://cofense.com/wp-content/uploads/2017/11/Enterprise-Phishing-Resiliency-and-Defense-Report-2017.pdf .

PishTank (2006). What is phishing. Available at: http://www.phishtank.com/what_is_phishing.php?view=website&annotated=true (Accessed June 19, 2019).

Pompon, A. R., Walkowski, D., and Boddy, S. (2018). Phishing and Fraud Report attacks peak during the holidays. US .

Proofpoint (2019a). State of the phish 2019 report. Sport Mark. Q. 14, 4. doi:10.1038/sj.jp.7211019

Proofpoint (2019b). What is Proofpoint. Available at: https://www.proofpoint.com/us/company/about (Accessed September 25, 2019).

Proofpoint (2020). 2020 state of the phish. Available at: https://www.proofpoint.com/sites/default/files/gtd-pfpt-us-tr-state-of-the-phish-2020.pdf .

Raggo, M. (2016). Anatomy of a social media attack. Available at: https://www.darkreading.com/analytics/anatomy-of-a-social-media-attack/a/d-id/1326680 (Accessed March 14, 2019).

Ramanathan, V., and Wechsler, H. (2012). PhishGILLNET-phishing detection methodology using probabilistic latent semantic analysis, AdaBoost, and co-training. EURASIP J. Info. Secur. 2012, 1–22. doi:10.1186/1687-417X-2012-1

Ramzan, Z. (2010). “Phishing attacks and countermeasures,” in Handbook of Information and communication security (Berlin, Heidelberg: Springer Berlin Heidelberg ), 433–448. doi:10.1007/978-3-642-04117-4_23

Ramzan, Z., and Wuest, C. (2007). “Phishing Attacks: analyzing trends in 2006,” in Fourth conference on email and anti-Spam, Mountain View, California, United States.

Rhett, J. (2019). Don’t fall for this new Google translate phishing attack. Available at: https://www.gizmodo.co.uk/2019/02/dont-fall-for-this-new-google-translate-phishing-attack/ (Accessed April 23, 2019). doi:10.5040/9781350073272

RISKIQ (2020). Investigate | COVID-19 cybercrime weekly update. Available at: https://www.riskiq.com/blog/analyst/covid19-cybercrime-update/%0D (Accessed August 1, 2020).

Robichaux, P., and Ganger, D. L. (2006). Gone phishing: evaluating anti-phishing tools for windows. Available at: http://www.3sharp.com/projects/antiphishing/gonephishing.pdf .

Rouse, M. (2013). Phishing defintion. Available at: https://searchsecurity.techtarget.com/definition/phishing (Accessed April 10, 2019).

Salem, O., Hossain, A., and Kamala, M. (2010). “Awareness program and AI based tool to reduce risk of phishing attacks,” in 2010 10th IEEE international conference on computer and information technology, Bradford, United Kingdom, June 29–July 1, 2010 (IEEE), 1418–1423. doi:10.1109/CIT.2010.254

Scaife, N., Carter, H., Traynor, P., and Butler, K. R. B. (2016). “Crypto lock (and drop it): stopping ransomware attacks on user data,” in 2016 IEEE 36th international conference on distributed computing systems (ICDCS) (IEEE), 303–312. doi:10.1109/ICDCS.2016.46

Sheng, S., Magnien, B., Kumaraguru, P., Acquisti, A., Cranor, L. F., Hong, J., et al. (2007). “Anti-Phishing Phil: the design and evaluation of a game that teaches people not to fall for phish,” in Proceedings of the 3rd symposium on usable privacy and security - SOUPS ’07 (New York, NY: ACM Press ), 88–99. doi:10.1145/1280680.1280692

Symantec (2019). Internet security threat report volume 24 | February 2019. USA.

Techpedia (2021). Caller ID. Available at: https://www.techopedia.com/definition/24222/caller-id (Accessed June 19, 2019).

VadeSecure (2021). Phishers favorites 2019. Available at: https://www.vadesecure.com/en/ (Accessed October 29, 2019).

Vishwanath, A. (2005). “Spear phishing: the tip of the spear used by cyber terrorists,” in deconstruction machines (United States: University of Minnesota Press ), 469–484. doi:10.4018/978-1-5225-0156-5.ch023

Wang, X., Zhang, R., Yang, X., Jiang, X., and Wijesekera, D. (2008). “Voice pharming attack and the trust of VoIP,” in Proceedings of the 4th international conference on security and privacy in communication networks, SecureComm’08 , 1–11. doi:10.1145/1460877.1460908

Wenyin, L., Huang, G., Xiaoyue, L., Min, Z., and Deng, X. (2005). “Detection of phishing webpages based on visual similarity,” in 14th international world wide web conference, WWW2005 , Chiba, Japan , May 10–14, 2005 , 1060–1061. doi:10.1145/1062745.1062868

Whitman, M. E., and Mattord, H. J. (2012). Principles of information security. Course Technol. 1–617. doi:10.1016/B978-0-12-381972-7.00002-6

Williams, E. J., Hinds, J., and Joinson, A. N. (2018). Exploring susceptibility to phishing in the workplace. Int. J. Human-Computer Stud. 120, 1–13. doi:10.1016/j.ijhcs.2018.06.004

wombatsecurity.com (2018). Wombat security user risk report. USA. Available at: https://info.wombatsecurity.com/hubfs/WombatProofpoint-UserRiskSurveyReport2018_US.pdf .

Workman, M. (2008). Wisecrackers: a theory-grounded investigation of phishing and pretext social engineering threats to information security. J. Am. Soc. Inf. Sci. 59 (4), 662–674. doi:10.1002/asi.20779

Yeboah-Boateng, E. O., and Amanor, P. M. (2014). Phishing , SMiShing & vishing: an assessment of threats against mobile devices. J. Emerg. Trends Comput. Inf. Sci. 5 (4), 297–307.

Zhang, Y., Hong, J. I., and Cranor, L. F. (2007). “Cantina,” in Proceedings of the 16th international conference on World Wide Web - WWW ’07 (New York, NY: ACM Press ), 639. doi:10.1145/1242572.1242659

Zissis, D., and Lekkas, D. (2012). Addressing cloud computing security issues. Future Generat. Comput. Syst. 28, 583–592. doi:10.1016/j.future.2010.12.006

Keywords: phishing anatomy, precautionary countermeasures, phishing targets, phishing attack mediums, phishing attacks, attack phases, phishing techniques

Citation: Alkhalil Z, Hewage C, Nawaf L and Khan I (2021) Phishing Attacks: A Recent Comprehensive Study and a New Anatomy. Front. Comput. Sci. 3:563060. doi: 10.3389/fcomp.2021.563060

Received: 17 May 2020; Accepted: 18 January 2021; Published: 09 March 2021.

Copyright © 2021 Alkhalil, Hewage, Nawaf and Khan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Chaminda Hewage, [email protected]


Preventing Social Engineering: A Phenomenological Inquiry

Information and Computer Security

ISSN : 2056-4961

Article publication date: 9 June 2022

Issue publication date: 9 February 2023

Purpose

The purpose of this transcendental phenomenological qualitative research study is to understand the essence of what it is like to be an information systems professional working in the USA while managing and defending against social engineering attacks on an organization. The findings add to the information system (IS) body of literature by uncovering commonly shared attitudes, motivations, experiences and beliefs held by IS professionals who are responsible for protecting their company from social engineering attacks.

Design/methodology/approach

This is a qualitative, transcendental phenomenological study that was developed to gain a deeper understanding about the essence of what it is like to be an IS professional defending a US business against social engineering attacks. This research design is used when sharing the experiences of study participants is more important than presenting the interpretations of the researcher. To target participants from the industries identified as regularly targeted by social engineers, purposive sampling was used in conjunction with the snowball sampling technique to find additional participants until saturation was reached.

Findings

Ten themes emerged from the data analysis: (1) foster a security culture, (2) prevention means education, (3) layered security means better protection, (4) prepare, defend and move on, (5) wide-ranging responsibilities, (6) laying the pipes, (7) all hands on deck, (8) continuous improvement, (9) attacks will never be eliminated and (10) moving pieces makes it harder. Together, the ten themes reveal the essence of the participants' shared experiences with the phenomenon.

Originality/value

Understanding how to defend an enterprise from social engineering attacks is an international issue with implications for businesses and IS professionals across the world. The findings revealed that to prevent social engineering attacks, all employees – IS and non-IS professionals alike – must be unified in their desire to protect the organization. This means IS professionals and organizational leadership must establish a strong security culture, not only through layered technology and electronic controls but also through open communication between all departments and by continuously engaging, training and reinforcing social engineering education, policies, procedures and practices with all employees.

Keywords: Social engineering, Security culture, Vulnerability, Weak human link

Pharris, L. and Perez-Mira, B. (2023), "Preventing social engineering: a phenomenological inquiry", Information and Computer Security , Vol. 31 No. 1, pp. 1-31. https://doi.org/10.1108/ICS-09-2021-0137

Emerald Publishing Limited

Copyright © 2022, Emerald Publishing Limited


Social Engineering Attacks During the COVID-19 Pandemic

Sushruth Venkatesha, K. Rahul Reddy and B. R. Chandavarkar

Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, Mangalore, India

The prevailing conditions surrounding the COVID-19 pandemic have shifted a variety of everyday activities onto platforms on the Internet. This has increased the number of people present on these platforms and also led to a jump in the time spent online by existing participants. This increase in the presence of people on the Internet is almost never preceded by education about cyber-security and the various types of attacks that an everyday user of the Internet may be subjected to, which makes the prevailing situation a ripe one for cyber-criminals to exploit; the most common type of attack made is the Social Engineering Attack. Social Engineering Attacks are a group of sophisticated cyber-security attacks that exploit innate human nature to breach secure systems and thus have some of the highest rates of success. This paper delves into the particulars of how the COVID-19 pandemic has set the stage for an increase in Social Engineering Attacks, the consequences of this, and some techniques to thwart such attacks.

Introduction

The twenty-first century has seen an accelerated move of business, media, social interaction, education, etc. onto platforms on the Internet. As a result, the amount and importance of information flowing through the digital landscape has increased exponentially. This has led to increased criminal activity in cyberspace, which has materialized as data breaches, malware, ransomware, and phishing-type attacks. There have been concentrated efforts by private organizations and governmental agencies to guard against these attacks, and as a result, in the past few years, traditional modes of attack such as hacking have proven marginally less successful, but alternate areas of vulnerability have been exposed. One such area gaining attention is social engineering and its utilization in all modes of cyber-attacks [1].

Over the years, there have been a number of attempts in the literature to provide a concrete definition of the term 'Social Engineering Attacks', each slightly different but with the same overarching meaning. The term has been described by Conteh and Schmick as 'Human Hacking', an art of tricking people into disclosing their credentials and then using them to gain access to networks or accounts [2]. Ghafir et al. have defined the term as a breach of organizational security via interaction with people to trick them into breaking normal security procedures [3]. The lack of a structured definition has led to works that focus solely on defining the term. One such work, by Wang, Sun and Zhu, has defined social engineering in cyber-security as a type of attack wherein the attacker(s) exploit human vulnerabilities by means of social interaction to breach cyber security, with or without the use of technical means and technical vulnerabilities [4]. Going forward, this paper uses this definition by Wang et al. as the standard definition of the term 'Social Engineering Attack' (SEA).

SEAs have traditionally followed a template made up of four steps: (1) research the target; (2) form a relationship with the target; (3) exploit the relationship and formulate an attack; (4) exit without leaving behind traces [5]. With time, the tools used in each step have evolved. Target research has gone from searching through the target's dumpster to extracting information from the target on social media platforms [6]. With the development of machine learning, steps (2) and (3) have become automated, with 'social bots' tracking and engaging with targets on social media [7]. The wide array of technologies and methodologies used for SEAs, and the speed with which they evolve, make developing software solutions such as spam and bot detectors a game of cat-and-mouse.

The global situation that has come into existence due to the COVID-19 pandemic has changed the equation in this space. The increase in work-from-home arrangements, online education, and entertainment via online platforms has created a sharp uptick in the number of Internet users worldwide and in the amount of time users spend on the Internet. Industry analysts have reported an increase of over 47 percent in worldwide broadband data usage during the March to May period of 2020, as seen in Fig. 1 below [8]. This has translated into an increase in social engineering attacks that use the Internet as a medium for contacting the target. Phishing, one of the staples of the SEA arsenal, has seen a huge increase, with technology companies such as Google and Microsoft recording trends where the attackers masquerade as officials from organizations working on COVID-19, such as the World Health Organization [9, 10].

Fig. 1: Data usage growth year-on-year in percentage terms [8]

In this unprecedented situation brought on by the pandemic, a deeper understanding of the various SEA strategies being deployed is valuable to both organizations and private citizens. With this theme in mind, the second section of this paper gives an overview of how the participation of individuals on the Internet has changed and how SEAs have evolved in response to the pandemic. The third section then provides guidelines for tackling these SEAs, presented according to the demographics most affected by the attacks; guidelines particular to SEAs popular during the COVID-19 pandemic are also covered. The pandemic is an evolving situation, and this paper aims to serve as a checkpoint from within this period that will aid the research and studies that will eventually be carried out after this unprecedented time has ended.

Social Engineering Attacks During COVID-19

Humans are social creatures. Our society is built upon the cooperation of various groups of people communicating, understanding each other, and building on each other's strengths. Throughout the history of the human species, this cooperation was only possible when all the parties were present at the same place at the same time. This changed with the invention of the Internet in the late 1960s and the launch of the World Wide Web in 1989. The commercial Internet allowed multitudes of online platforms that provided instant communication with anyone, at any time, from anywhere. Technologies like instant messaging, video calling and video conferencing shrank the world and allowed greater cooperation. Video conferencing in particular is a technology that can truly replace the need for physical interaction, as it allows the communicating parties to view each other's facial expressions, and the ability to see facial expressions has been shown to be the most important aspect of communication [11]. Even with such technologies, however, the primary mode of human communication remained physical contact. This status quo is now being forcefully changed by the effects of the COVID-19 pandemic.

There has been a mass migration of human activities and interactions to platforms on the Internet. This sudden increase in online activity, coupled with the mental anguish brought on by the pandemic, creates a perfect storm of conditions for bad actors to carry out SEAs. These bad actors can be single individuals, organizations of cybercriminals, or even government-backed entities from various countries around the world. While the main incentive for the first two categories is financial, the incentive for government-backed entities is usually geopolitical in nature [12]. Google has been tracking phishing attacks deduced to have originated from such government-backed entities; Figure 2 shows the scale of such attacks based on the number of accounts flagged by Google.

Fig. 2: Accounts that received a "government-backed attacker" warning each month of 2020 [9]

This section first looks at how the COVID-19 pandemic has affected human activities and driven the migration to platforms on the Internet. It then looks at the changes seen in SEAs as a result of the pandemic, and finally at SEAs themed around COVID-19.

Migration of Activities to Online Platforms

The three main activities that lead humans to venture out of their homes in the modern world are business/work, commerce/shopping, and leisure. Business can be people going to their educational institutions or to their places of work. Commerce can be producers and sellers going to the marketplace to sell their products and consumers going there to procure their needs. Leisure can be a multitude of activities, ranging from international tourism to strolls in the park. The COVID-19 global pandemic and the resultant lockdown measures enacted by various governments worldwide have disrupted all such activities. At the height of the lockdown measures in April 2020, roughly a third of the world's population was estimated to be under various degrees of quarantine and social distancing measures.

The International Labour Organization (ILO) estimated that 7.9 percent of the global workforce, that is, nearly 260 million people, worked from home on a permanent basis before the COVID-19 pandemic. With the disruption of normal work schedules brought on by the pandemic, the ILO calculates that the share of the global workforce working from home permanently has the potential to reach 18 percent. The report published by the ILO in April 2020 suggests that this number is mainly made up of artisans, the self-employed, business owners, freelancers, knowledge-based workers, and high-income earners [13]. This increase in remote work can easily be visualized through the spending patterns of various companies, as shown in Fig. 3 below: businesses worldwide have been investing in software products that support remote work and make the process more convenient.

Fig. 3: Spending by businesses on software during the early stages of the COVID-19 pandemic [14]

Education is another area that has seen major disruption, with most countries closing their educational institutions in the early stages of the pandemic. In late April 2020, the World Economic Forum estimated that a total of 1.2 billion students from 186 countries had seen their education disrupted by the pandemic [15]. This has brought on a digital education revolution in which classes have been shifted onto platforms that allow remote learning, whether as lectures on the radio, as television programs, or as full-fledged classroom education on platforms on the Internet. These platforms were used in over 110 countries to provide remote education during the pandemic, as seen in the UNESCO-UNICEF-World Bank Survey on National Education Responses to COVID-19 School Closures [16]. The survey shows that pre-teenage children from over 70 countries have had to attend classes on platforms on the Internet. This is a worrying statistic because, without supervision, these children can become easy victims of Social Engineering Attacks.

Commerce has also made the big jump onto online platforms, a shift that had been slowly progressing in the background before the pandemic. The COVID-19 pandemic and the resulting social distancing measures have become a catalyst for this migration of commerce onto the Internet. A joint survey of nine representative countries conducted by the United Nations Conference on Trade and Development (UNCTAD) and the Netcomm Suisse E-commerce Association in October 2020 [17] gives an insight into the situation. The survey showed that consumers in emerging economies saw a greater shift to online shopping, with an increase in the number of customers for most product categories, as shown in Fig. 4 below. This increase in participation on various Internet platforms also makes them a more potent medium for Social Engineering Attacks.

Fig. 4: The number of customers on online platforms increased for most product categories due to COVID-19 [17]

Leisure activities, including any traveling, have been severely curtailed by the pandemic. In its early stages, when strict quarantine and lockdown measures were enacted, even the simple act of going out for a walk was prohibited in many heavily affected areas around the world. This lack of entertainment via outdoor activities has largely been compensated for by entertainment via digital media such as television, social media platforms, and on-demand entertainment platforms on the Internet. Brightcove's Q2 2020 Global Video Index [18] reported a staggering 40 percent increase in video content consumption over the three-month period from April to June 2020. Reports have also shown a 30 percent increase in the time spent by Indians on over-the-top entertainment platforms on the Internet during the pandemic compared to before.

The statistics highlighted so far in this section provide a glimpse of the extent of the online migration caused by the pandemic and the reaction to it. Next, we will see how this change has impacted SEAs.

Changes to Social Engineering Attacks During the Pandemic

Cybercrime can be thought of as a business, one with bad intentions. But like any other business entity, the ultimate aim of any cybercriminal is to turn a profit for the work they put in. When cybercrime is seen from such a perspective, the COVID-19 pandemic can be seen as a new business opportunity. This unique opportunity is exciting to cybercriminals because the pandemic pushes more people onto the Internet, increasing the number of users that can be targeted.

As per reports from Google, most SEAs are now carried out as phishing attacks via emails or websites [9]. These attacks deploy multiple tactics using the brand identity of well-known entities, such as their company names and logos, to develop phishing websites or emails that appear authentic and lure users into entering sensitive information such as usernames, passwords, banking details, and other details that can help identify them. Google has reported that it blocks more than 100 million phishing emails every day, with a claimed accuracy of 99.9 percent [19].
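To make the brand-impersonation tactic concrete, the sketch below shows one simple heuristic of the kind automated filters can build on: comparing the domain of a suspicious link against a small watch list of protected brands. This is purely illustrative and is not the detection method used by Google or any other vendor; the brand list, the similarity threshold, and the function names are assumptions introduced for the example (Python standard library only).

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical watch list; a real deployment would cover far more brands.
PROTECTED_BRANDS = ["paypal.com", "microsoft.com", "who.int"]

def registrable_domain(url: str) -> str:
    """Crude last-two-labels extraction; real systems consult the Public Suffix List."""
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:]) if host else ""

def looks_like_brand_impersonation(url: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not equal, a protected brand domain."""
    domain = registrable_domain(url)
    if not domain or domain in PROTECTED_BRANDS:
        return False  # exact brand domains are treated as legitimate in this sketch
    return any(SequenceMatcher(None, domain, brand).ratio() >= threshold
               for brand in PROTECTED_BRANDS)

print(looks_like_brand_impersonation("https://paypa1.com/login"))       # True: lookalike domain
print(looks_like_brand_impersonation("https://www.paypal.com/signin"))  # False: exact brand domain
```

A production filter would weigh many such signals, for example URL reputation, page content, and sender authentication, rather than rely on string similarity alone.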

Microsoft, another major industry player, has reported an increase in email phishing activity, stating that phishing-type attacks make up almost 70 percent of all attacks. Its Digital Defense Report, published in September 2020 [20], notes that attackers deploying SEAs now invest significant time, money, and effort in developing scams and attack strategies that can trick even users who are wary of such attacks. This development can be attributed to the increase in information available to users about such attacks, heightened awareness among users, and technological advancements in attack detection. Based on telemetry from its business software offering "Office 365", Microsoft reports [20] that users face three main types of phishing attacks: credential phishing, business email compromise, or a mix of both.

Credential phishing is carried out by a cybercriminal posing as a well-known service in an email template and trying to lure users into clicking on a link that takes them to a fake login page. When users enter their credentials on the page, those credentials can be used to launch deeper and more complex attacks that build a presence inside the organization using cloud-only APIs and systems. This presence is then used to move around laterally to steal data or money, or otherwise breach the organization. An illustration of the process is shown in Fig. 5 below.

Fig. 5: Outline of a credential phishing attack [20]
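The lure link described above suggests another simple screening rule: flag messages whose visible link text names one domain while the underlying href points somewhere else. The sketch below is only an illustration of that heuristic, under the assumption that the HTML body of the email is available as a string; it is not part of the tooling described in the cited reports, and the class name and example URLs are invented for the demonstration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
import re

class LinkAuditor(HTMLParser):
    """Collect anchors whose visible text shows a domain that the href does not match."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible text, actual href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            text = "".join(self._text)
            shown = re.search(r"([a-z0-9-]+\.)+[a-z]{2,}", text.lower())
            actual = (urlparse(self._href).hostname or "").lower()
            if shown and shown.group(0) not in actual:
                self.suspicious.append((text.strip(), self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<p>Verify now: <a href="http://login.example-phish.net">https://accounts.google.com</a></p>')
print(auditor.suspicious)  # [('https://accounts.google.com', 'http://login.example-phish.net')]
```

Mail gateways typically apply this kind of check alongside URL rewriting and time-of-click analysis, since attackers can also hide the mismatch behind shortened or redirected links.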

Business email compromise (BEC) phishing attacks specifically target businesses and are in the limelight in this era of remote work. This type of attack is characterized by techniques that masquerade as someone the target usually takes notice of, such as the company CEO, CFO, or HR personnel. The attack can also involve a business-to-business transaction; for example, the attacker might fraudulently access a company's system and then act as that company to criminally request payment from another company. An illustration of the attack is shown in Fig. 6 below.

Fig. 6: Outline of a business email compromise attack [20]
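One commonly described screening rule for this masquerading pattern is to flag messages whose display name matches an internal executive while the sending address belongs to an external domain. The sketch below illustrates the idea; the executive roster, the internal domain, and the function name are assumptions made for the example and are not drawn from the cited report.

```python
from email.utils import parseaddr

EXECUTIVES = {"jane doe", "john smith"}   # hypothetical internal roster
INTERNAL_DOMAINS = {"example-corp.com"}   # hypothetical corporate domain

def flag_possible_bec(from_header: str) -> bool:
    """Return True when an executive's name is paired with an external sender address."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EXECUTIVES and domain not in INTERNAL_DOMAINS

print(flag_possible_bec('"Jane Doe" <jane.doe@example-corp.com>'))      # False: internal sender
print(flag_possible_bec('"Jane Doe" <ceo.office.payments@gmail.com>'))  # True: likely name spoofing
```

Because attackers also compromise genuine internal mailboxes, display-name checks of this kind are usually combined with payment-control procedures such as the callback verification discussed later in this document.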

Using phishing attacks as a vehicle to deliver malware or ransomware is also common. During the pandemic period, ransomware attacks are reported to have been more common than malware attacks, because a ransomware attack can perform the same information-collection job while also extracting monetary payments from the target, who must pay to decrypt the files that the ransomware encrypts.

The pandemic and the resultant shift of work and education online can be seen as a catalyst for the increase in both credential phishing and business email compromise attacks. Ransomware attacks have also overtaken malware attacks as a result of increased online activity during the pandemic. Although these changes have been recorded during the pandemic, the highlight is the shift in attack strategies from generic subjects to themes related to the pandemic, which is covered in detail in the next section.

COVID-19 Themed Attacks

The public health emergency brought on by the COVID-19 pandemic, the fear, anxiety, and uncertainty present in the general public, and the desire for information on the pandemic present an ideal opportunity for exploitation by cybercriminals deploying SEAs. When news about the pandemic made headlines in the early months of 2020, SEAs with an over-arching pandemic theme shot up in frequency [21]. This trend is illustrated below in Fig. 6. Another report, produced by the consultancy firm Deloitte, noted a 254 percent increase in new COVID-19 themed web domains and sub-domains registered per day in the early stages of the pandemic [22]. This has led government organizations, such as the Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency in the USA, to issue warnings about the trend.

Phishing emails that used the pandemic as their lure were seen to have subject lines such as “2020 Coronavirus Updates”, “Coronavirus Updates”, “2019-nCov: New confirmed cases in your City”, and “2019-nCov: Coronavirus outbreak in your city (Emergency)”. Emails of this type target the human traits of curiosity, the instinct to gather information, and the fear of missing out. Such emails contained attachments that deployed malware or ransomware, or led to fake sites built to harvest user credentials. The contents were worded in a manner that encouraged users to visit websites that the attackers used to harvest valuable data, such as usernames and passwords, credit card information, and other personal information from the targets [23]. An example of how phishing emails changed after the pandemic can be seen below in Fig. 7.

Fig. 7: COVID-19 themed SEAs [10]
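The lure subjects quoted above lend themselves to a very simple keyword screen, sketched below purely as an illustration; the keyword list and the score threshold are assumptions for the example, and real mail filters weigh such lexical signals alongside sender reputation, authentication results, and attachment analysis.

```python
# Hypothetical COVID-19 lure terms drawn from the subject lines quoted above.
COVID_LURE_TERMS = ["coronavirus", "covid-19", "2019-ncov", "confirmed cases",
                    "outbreak in your city", "emergency"]

def covid_lure_score(subject: str) -> int:
    """Count how many known lure terms appear in a message subject."""
    s = subject.lower()
    return sum(1 for term in COVID_LURE_TERMS if term in s)

def is_probable_covid_lure(subject: str, threshold: int = 2) -> bool:
    return covid_lure_score(subject) >= threshold

print(is_probable_covid_lure("2019-nCov: New confirmed cases in your City"))  # True (score 2)
print(is_probable_covid_lure("Quarterly planning meeting"))                   # False (score 0)
```

Keyword screens of this kind are easy to evade on their own, so they are best treated as one signal among many.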

Another common tactic used in phishing emails was the impersonation of trusted sources of information about the pandemic, such as the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC). As a result of this trend, the US Federal Bureau of Investigation released a notice about impersonation of the US CDC by attackers [23]. Impersonation of the WHO was a common trend picked up by the security services in Google's email service [19]. An example of a phishing email impersonating the WHO can be seen in Fig. 8.

Fig. 8: Modifications to phishing emails in response to the pandemic [24]

Multiple governments worldwide enacted measures that provided financial respite to their citizens through cash payments carried out via platforms on the Internet. This was another area where phishing emails, and in some cases phishing SMS messages, were deployed. Figure 9 depicts one such phishing email example.

Fig. 9: Phishing email impersonating the WHO [19]

Other common subject matters used in phishing campaigns included notices about mandatory COVID-19 testing, news related to remote work arrangements, and news regarding social distancing and stay-at-home or quarantine rules and regulations. There was also a marked increase in social media posts about the pandemic carrying misinformation (Fig. 10).

Fig. 10: Phishing SMS [23]

Tackling Social Engineering Attacks

This section surveys the demographics affected by Social Engineering Attacks and the factors that make them targets of Social Engineering attackers. Based on this, it provides guidelines to be followed by each of these groups. It also covers the affected demographics and guidelines for attacks during COVID-19. These guidelines are slightly different because such attacks mainly use the fear of the population to their advantage [25]. The section notes how these attacks target vulnerable demographics with tactics involving pandemic panic and fear, and presents guidelines that take all these factors into account.

General Guidelines

These guidelines are aimed towards social engineering attacks in general.

Social Engineering Attacks are most successful against those who were not made aware at an early age [26] of the potential threats and attacks posed by fraudulent and malicious actors. The four major steps in a Social Engineering Attack are [27]:

  • Gathering of information
  • Developing a relationship with the target
  • Exploiting the target based on a targeted attack strategy
  • Executing the attack.

Affected Demographic

A demographic analysis can be made based on the four common steps of Social Engineering Attacks mentioned above.

Gathering of Information

Most Social Engineering Attacks are directed towards a very large audience. The information gathered by attackers is of two types. The first is the general contact information of as many targets as possible, which is used to make the initial approach; only after some reciprocal activity by the targets do the attackers go further with the responding subset. Once they have a lock on specific targets, the attackers gather the second type of information, which is usually specific to the target [27] and includes jargon and other terms that the target associates with. This information often helps the attacker in the subsequent steps of the Social Engineering Attack.

At this step of the Social Engineering Attack, the affected population is the one unaware of the security consequences of information disclosure. The main mistake made by targets is failing to understand the value of the data to the social engineer, which leads to its disclosure. This can involve revealing information that, from the target's point of view, appears harmless from a security standpoint but is beneficial to the attacker.

Developing a Relationship

The willingness of the target to share information and reciprocate plays a crucial part in the attack. Only the subset of the population that reciprocates to the initial approach made by the attackers is specifically targeted further. Based on the information gathered in the previous step, the attackers try to establish a relationship with the target. After footprinting the target, attackers often use information about the target or about other contacts of the target.

When forming the relationship, the attacker exploits the target's inherent willingness to trust and manoeuvres themselves into a position of confidence which they can then manipulate. Human beings are naturally inclined to trust and care for others, and social engineers use these attributes to create a sense of authenticity and gain confidence. The contact may be physical or digital; digital contact is interaction through media such as the telephone, e-mail, or even social media. Manipulation techniques such as overloading, reciprocation, and projecting an appearance of transparency build on universal psychological principles.

Exploiting the Target

By using the relationship and confidence built with the target through coercion in the previous stage, or by similar tactics, Social Engineering Attackers now have access to the target's information, such as their location or other crucial details. After gaining trust, the attacker exploits the target to obtain passwords or to make them perform acts that would not occur under normal circumstances. At this point, the attacker can either end the attack or carry it forward to the next level.

At this stage, the affected demographic is the population who are unaware of these kinds of attacks and those who are gullible enough to trust the attackers based on the foot-printed information.

Executing the Attack

At this stage, the attackers utilize all the information available to make the final move and cash in from the target. If all the previous stages were successful, the attackers almost always get away with the attack.

We have identified two major factors which must be focused upon to mitigate these attacks: Contact information and Bank transactions. This section contains general guidelines for Social Engineering Attacks:

  • Be extra cautious about controls on payments and be wary of emails containing an attachment or a link. When in doubt about a questionable message, contact the information security or information technology department.
  • Reconcile your accounts regularly and confirm by calling a verified number that business partners have received payments. Be vigilant with demands for payment and account changes, and pay particular attention to whom you pay.
  • With many workers now working from home, keep contact details up to date so that your bank can reach you quickly if it spots a suspicious payment.
  • Don't trust requests for payments or account changes that arrive by email alone. Always call back the person making the request on a recognized phone number taken from a system of record (see the sketch after this list).
  • Also conduct call-backs when changing business partners' contact details; don't simply trust an email demanding that a trusted call-back number be updated.

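To make the call-back guidance above concrete, here is a minimal sketch of how a finance workflow might refuse to act on an emailed payment-detail change until it is confirmed on a number already held on file. The vendor directory, request fields, and phone numbers are hypothetical examples, not part of the surveyed paper.

```python
# Hypothetical directory of numbers that were verified out-of-band and kept on file.
KNOWN_VENDOR_PHONES = {
    "acme-supplies": "+1-555-0100",
    "globex-logistics": "+1-555-0142",
}


def requires_callback(request: dict) -> bool:
    """Any emailed request that changes payment or account details needs a call-back."""
    return request.get("type") in {"bank_detail_change", "payment_instruction"}


def callback_number(request: dict):
    """Return only the number on file; a number supplied in the email is never used."""
    return KNOWN_VENDOR_PHONES.get(request.get("vendor_id", ""))


if __name__ == "__main__":
    email_request = {
        "vendor_id": "acme-supplies",
        "type": "bank_detail_change",
        "phone_in_email": "+1-555-9999",  # attacker-controlled; deliberately ignored
    }
    if requires_callback(email_request):
        number = callback_number(email_request)
        if number is None:
            print("No verified number on file - escalate to the security/IT department.")
        else:
            print(f"Hold the change and confirm by calling {number} (number on file).")
```
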
There is also considerable scope for organisational employees to be targets of these attacks. A study [ 23 ] shows that Social Engineering attackers also spend time developing techniques to victimize top officials and experts. The following guidelines can be followed to mitigate those attacks:

  • Wherever possible, enable multi-factor authentication to add another layer of protection to the applications you use (a minimal verification sketch follows this list). In addition, a password manager can help deter risky behaviour such as storing or sharing passwords.
  • Use a VPN (Virtual Private Network) solution for an encrypted network connection, so that workers can securely access IT resources inside the organization and from anywhere on the internet.
  • Companies should revise their cybersecurity strategy to include home and remote work. As the company adjusts to having more individuals outside the workplace, make sure the strategy remains appropriate: it needs to cover remote-working access management for employee access to documents and other information, the use of personal devices, and revised data-privacy considerations.
  • Employees should use employer-provided IT equipment to communicate with colleagues on official matters. Business IT typically has a variety of protective software installed; if a security incident occurs on an employee's personal computer, neither the company nor the employee may be fully protected.
  • Without the right protections, personal devices used to access work networks leave organizations vulnerable to hacking. If information is leaked from, or a breach occurs on, a personal computer, the company may still be held responsible.

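To illustrate the multi-factor authentication point in the first bullet above, the following sketch verifies a time-based one-time password (TOTP, RFC 6238) using only the Python standard library. The base32 secret and the one-step clock-drift window are illustrative choices; a real deployment would rely on a vetted authentication service and secure secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def verify_totp(secret_b32, submitted, window=1, step=30):
    """Accept the code for the current step or one step either side, to allow clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + offset * step), submitted)
        for offset in range(-window, window + 1)
    )


if __name__ == "__main__":
    SECRET = "JBSWY3DPEHPK3PXP"  # illustrative secret shared with the authenticator app
    print("Code shown in the authenticator app:", totp(SECRET))
    print("Login accepted:", verify_totp(SECRET, totp(SECRET)))
```
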
Tackling Attacks During COVID-19

These guidelines are aimed at Social Engineering attacks during the pandemic, which have a different focus because so much activity has moved online.

Although the general demographic that is vulnerable to Social Engineering Attacks remains almost the same, the attackers are focusing more on victims who can be manipulated using health data. As discussed in the previous sections, gathering information and developing a relationship are crucial parts of any Social Engineering Attack. During this global pandemic, fear has become widespread, which allows the attackers to look for and target victims with existing health conditions and medical histories [ 28 ]. These people are more susceptible to the attack and give out information willingly.

The paper presents a few necessary guidelines to be followed by these demographics to overcome Social Engineering attacks during COVID-19-like times:

  • For phishing attacks: Strong authentication protects users from a large share of identity attacks and reduces the likelihood of a security breach. For the best protection and user experience, passwordless authentication options are recommended, and an authentication app is always preferred over SMS [ 23 ] or voice authentication.
  • For healthcare-related fraudulent calls: Verify the authority behind all incoming calls, and do not share any information unless the contact from the other person was expected (a sketch of this check follows the list). This is essentially to avoid reciprocation: most attackers only proceed against those who reciprocate their initial attempts at contact.
  • Avoid falling for targeted attack strategies: As discussed in the previous sections, attackers gather (foot-print) information about targets, including personal information, details about close family members, and essentially anything else they can get their hands on. If a suspicious contact quotes your past health history or that of a family member, the attackers may be targeting you based on health information they have obtained.
  • Health-history-based attacks: If a malicious attempt quotes the health history or medical records of the target or their family, validating and verifying that data through official channels will help avoid these attacks.

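As a small illustration of the "verify incoming calls and avoid reciprocation" advice in the list above, the sketch below only treats a call as legitimate if it matches a contact the user arranged in advance. The contact records, numbers, and dates are hypothetical and chosen purely for the example.

```python
from datetime import datetime

# Hypothetical contacts the user is actually expecting (e.g. a booked test-result call).
EXPECTED_CONTACTS = [
    {
        "caller": "+1-555-0173",
        "purpose": "covid-19 test results",
        "valid_until": datetime(2021, 4, 30),
    },
]


def handle_incoming_call(caller_id, claimed_purpose, now=None):
    """Advise the callee: share nothing unless this exact call was expected."""
    now = now or datetime.now()
    for contact in EXPECTED_CONTACTS:
        if (contact["caller"] == caller_id
                and contact["purpose"] == claimed_purpose.lower()
                and now <= contact["valid_until"]):
            return "Expected call: proceed, but never read out passwords or one-time codes."
    return ("Unexpected call: share no personal or health information; "
            "hang up and call the organisation back on its published number.")


if __name__ == "__main__":
    print(handle_incoming_call("+1-555-0199", "Health insurance update"))
```
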
This paper has discussed how the global pandemic has affected Social Engineering Attacks. The pandemic has moved a range of daily practices to the Internet and online platforms, increasing the fraction of the population that is online. This rise in people's presence on the Internet has not been accompanied by education on cyber security and on the different forms of attack that an Internet user can be exposed to on a daily basis. We discuss a variety of these attacks and propose guidelines on how to avoid and counter them. We present an analysis of the steps taken by attackers, from knowing (foot-printing) a target to successfully executing the attack, and we base the guidelines on the four major steps discussed.

This paper also presents a detailed analysis of COVID-19-themed attacks, together with guidelines for avoiding these targeted attacks, such as phishing attacks, healthcare-related fraudulent calls, and health-history-based attacks.

Compliance with Ethical Standards

Conflict of Interest: The authors declare that they have no conflict of interest. On behalf of all authors, the corresponding author states that there is no conflict of interest.

This article is part of the topical collection “Cyber Security and Privacy in Communication Networks” guest edited by Rajiv Misra, R K Shyamsunder, Alexiei Dingli, Natalie Denk, Omer Rana, Alexander Pfeiffer, Ashok Patel and Nishtha Kesswani.

The original online version of this article was revised because the figure captions of Figs. 2 to 10 were incorrect; they have now been corrected.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

A Correction to this paper has been published: 10.1007/s42979-021-00550-7

Contributor Information

Sushruth Venkatesha, Email: vsushruth21@gmail.com

K. Rahul Reddy, Email: k_rahul_reddy@outlook.com

B. R. Chandavarkar, Email: brcnitk@gmail.com


