McCombs School of Business


Case Study

The Costco Model

How can companies promote positive treatment of employees and benefit from leading with best practices? Costco offers a model.


Costco is often cited as one of the world’s most ethical companies. It has been called a “testimony to ethical capitalism” in large part due to its company practices and treatment of employees. Costco maintains a company code of ethics which states:

“The continued success of our company depends on how well each of Costco’s employees adheres to the high standards mandated by our Code of Ethics… By always choosing to do the right thing, you will build your own self-esteem, increase your chances for success and make Costco more successful, too.”

In debates over minimum wage in the United States, many commentators see Costco as an example of how higher wages can yield greater company success, often pointing to competitors such as Walmart and Target as examples that fall short in providing for their employees. Other commentators do not see Costco’s model as being easily replicable for different types of businesses, citing wages as only one of many factors to consider in companies’ best practices.

Costco tends to pay around 40% more than Walmart and Target and provides more comprehensive health and retirement benefits, which saves it large amounts in employee-turnover costs. The company resists layoffs, invests in training its employees, and grants them substantial autonomy to solve problems. U.S. Secretary of Labor Thomas Perez stated:

“And the remarkable loyalty that [employees] have to [Costco cofounder Jim Sinegal] is a function of the fact that he categorically rejects the notion that, ‘I either take care of my shareholders or my workers.’ That is a false choice.”

While few disagree with the benefits of fair treatment of employees, some commentators credit the success of Costco to its broader business model that favors higher productivity, not employee satisfaction. Columnist and economist Megan McArdle explains:

“A typical Costco store has around 4,000 SKUs [stock keeping units], most of which are stacked on pallets so that you can be your own stockboy. A Walmart has 140,000 SKUs, which have to be tediously sorted, replaced on shelves, reordered, delivered, and so forth. People tend to radically underestimate the costs imposed by complexity, because the management problems do not simply add up; they multiply.”

Furthermore, McArdle notes that Costco operates mainly as a grocer rather than a department store and caters to a generally affluent, suburban customer base.

Discussion Questions

1. How does Costco, as described, match up to the “best practices” explained in the video? Where does Costco fall short? Where does Costco succeed?

2. Walmart pays its employees substantially less than does Costco, even though the two companies often compete head-to-head. How can Costco stay in business when it pays up to 40% more to its employees than its direct competitors?

3. What do you think are the most important practices for a retail company to pursue to foster an ethical environment for workers and consumers? Why?

4. A stock analyst criticized Costco, saying: “Costco continues to be a company that is better at serving the club member and employee than the shareholder.” Do you think this is a fair critique? Why or why not?

5. Another analyst complained that Jim Sinegal “has been too benevolent. He’s right that a happy employee is a productive long-term employee, but he could force employees to pick up a little more of the burden.” Again, do you think this is a fair criticism? Why or why not?

6. Is a company that does not follow the Costco model a “bad” company? Explain.

Related Videos

Ethical Leadership, Part 2: Best Practices

Psychological research provides guidance as to how leaders can create a workplace culture that encourages ethical behavior by employees.

Bibliography

Unselfishness: The World’s Most Ethical Company & Why Collaboration Works http://www.rohitbhargava.com/2012/05/unselfishness-the-worlds-most-ethical-company-why-collaboration-works.html

How Costco Became the Anti-Wal-Mart http://www.nytimes.com/2005/07/17/business/yourmoney/how-costco-became-the-antiwalmart.html

Connecting the Dots Between Leadership, Ethics and Corporate Culture http://iveybusinessjournal.com/publication/connecting-the-dots-between-leadership-ethics-and-corporate-culture/

Why Be an Ethical Company? They’re Stronger and Last Longer http://www.bloomberg.com/news/articles/2009-08-17/why-be-an-ethical-company-theyre-stronger-and-last-longer

Labor Secretary Thomas Perez Says More Employers Need To Follow Costco’s Example http://www.huffingtonpost.com/2013/10/29/thomas-perez-costco-minimum-wage_n_4174249.html

Costco’s Profit Soars To $537 Million Just Days After CEO Endorses Minimum Wage Increase http://www.huffingtonpost.com/2013/03/12/costco-profit_n_2859250.html

Why Can’t Walmart Be More Like Costco? http://www.thedailybeast.com/articles/2012/11/26/why-can-t-walmart-be-more-like-costco.html

Why Costco and Other Warehouse Club Retailers Matter http://www.lek.com/sites/default/files/lek-why_costco_and_other_warehouse_club_retailers_matter.pdf

Ethical Leadership: A Primer on Ethical Responsibility in Management http://www.wiley.com/college/sc/scherm/ethicsfinal.pdf

Firms of Endearment: How World-Class Companies Profit From Passion And Purpose http://www.worldcat.org/title/firms-of-endearment-how-world-class-companies-profit-from-passion-and-purpose/oclc/70167640


  • 15 Apr 2024

Struggling With a Big Management Decision? Start by Asking What Really Matters

Leaders must face hard choices, from cutting a budget to adopting a strategy to grow. To make the right call, they should start by following their own “true moral compass,” says Joseph Badaracco.


  • 26 Mar 2024
  • Cold Call Podcast

How Do Great Leaders Overcome Adversity?

In the spring of 2021, Raymond Jefferson (MBA 2000) applied for a job in President Joseph Biden’s administration. Ten years earlier, false allegations were used to force him to resign from his prior US government position as assistant secretary of labor for veterans’ employment and training in the Department of Labor. Two employees had accused him of ethical violations in hiring and procurement decisions, including pressuring subordinates into extending contracts to his alleged personal associates. The Deputy Secretary of Labor gave Jefferson four hours to resign or be terminated. Jefferson filed a federal lawsuit against the US government to clear his name, which he pursued for eight years at the expense of his entire life savings. Why, after such a traumatic and debilitating experience, would Jefferson want to pursue a career in government again? Harvard Business School Senior Lecturer Anthony Mayo explores Jefferson’s personal and professional journey from upstate New York to West Point to the Obama administration, how he faced adversity at several junctures in his life, and how resilience and vulnerability shaped his leadership style in the case, "Raymond Jefferson: Trial by Fire."


  • 02 Jan 2024

Should Businesses Take a Stand on Societal Issues?

Should businesses take a stand for or against particular societal issues? And how should leaders determine when and how to engage on these sensitive matters? Harvard Business School Senior Lecturer Hubert Joly, who led the electronics retailer Best Buy for almost a decade, discusses examples of corporate leaders who had to determine whether and how to engage with humanitarian crises, geopolitical conflict, racial justice, climate change, and more in the case, “Deciding When to Engage on Societal Issues.”


  • 12 Dec 2023

Can Sustainability Drive Innovation at Ferrari?

When Ferrari, the Italian luxury sports car manufacturer, committed to achieving carbon neutrality and to electrifying a large part of its car fleet, investors and employees applauded the new strategy. But among the company’s suppliers, the reaction was mixed. Many were nervous about how this shift would affect their bottom lines. Professor Raffaella Sadun and Ferrari CEO Benedetto Vigna discuss how Ferrari collaborated with suppliers to work toward achieving the company’s goal. They also explore how sustainability can be a catalyst for innovation in the case, “Ferrari: Shifting to Carbon Neutrality.” This episode was recorded live December 4, 2023 in front of a remote studio audience in the Live Online Classroom at Harvard Business School.


  • 11 Dec 2023
  • Research & Ideas

Doing Well by Doing Good? One Industry’s Struggle to Balance Values and Profits

Few companies wrestle with their moral mission and financial goals like those in journalism. Research by Lakshmi Ramarajan explores how a disrupted industry upholds its values even as the bottom line is at stake.


  • 27 Nov 2023

Voting Democrat or Republican? The Critical Childhood Influence That's Tough to Shake

Candidates might fixate on red, blue, or swing states, but the neighborhoods where voters spend their teen years play a key role in shaping their political outlook, says research by Vincent Pons. What do the findings mean for the upcoming US elections?


  • 21 Nov 2023

The Beauty Industry: Products for a Healthy Glow or a Compact for Harm?

Many cosmetics and skincare companies present an image of social consciousness and transformative potential, while profiting from insecurity and excluding broad swaths of people. Geoffrey Jones examines the unsightly reality of the beauty industry.


  • 09 Nov 2023

What Will It Take to Confront the Invisible Mental Health Crisis in Business?

The pressure to do more, to be more, is fueling its own silent epidemic. Lauren Cohen discusses the common misperceptions that get in the way of supporting employees' well-being, drawing on case studies about people who have been deeply affected by mental illness.


  • 07 Nov 2023

How Should Meta Be Governed for the Good of Society?

Julie Owono is executive director of Internet Sans Frontières and a member of the Oversight Board, an outside entity with the authority to make binding decisions on tricky moderation questions for Meta’s companies, including Facebook and Instagram. Harvard Business School visiting professor Jesse Shapiro and Owono break down how the Board governs Meta’s social and political power to ensure that it’s used responsibly, and discuss the Board’s impact, as an alternative to government regulation, in the case, “Independent Governance of Meta’s Social Spaces: The Oversight Board.”


  • 24 Oct 2023

From P.T. Barnum to Mary Kay: Lessons From 5 Leaders Who Changed the World

What do Steve Jobs and Sarah Breedlove have in common? Through a series of case studies, Robert Simons explores the unique qualities of visionary leaders and what today's managers can learn from their journeys.


  • 03 Oct 2023
  • Research Event

Build the Life You Want: Arthur Brooks and Oprah Winfrey Share Happiness Tips

"Happiness is not a destination. It's a direction." In this video, Arthur C. Brooks and Oprah Winfrey reflect on mistakes, emotions, and contentment, sharing lessons from their new book.


  • 12 Sep 2023

Successful, But Still Feel Empty? A Happiness Scholar and Oprah Have Advice for You

So many executives spend decades reaching the pinnacles of their careers only to find themselves unfulfilled at the top. In the book Build the Life You Want, Arthur Brooks and Oprah Winfrey offer high achievers a guide to becoming better leaders—of their lives.


  • 10 Jul 2023
  • In Practice

The Harvard Business School Faculty Summer Reader 2023

Need a book recommendation for your summer vacation? HBS faculty members share their reading lists, which include titles that explore spirituality, design, suspense, and more.


  • 01 Jun 2023

A Nike Executive Hid His Criminal Past to Turn His Life Around. What If He Didn't Have To?

Larry Miller committed murder as a teenager, but earned a college degree while serving time and set out to start a new life. Still, he had to conceal his record to get a job that would ultimately take him to the heights of sports marketing. A case study by Francesca Gino, Hise Gibson, and Frances Frei shows the barriers that formerly incarcerated Black men are up against and the potential talent they could bring to business.


  • 04 Apr 2023

Two Centuries of Business Leaders Who Took a Stand on Social Issues

Executives going back to George Cadbury and J. N. Tata have been trying to improve life for their workers and communities, according to the book Deeply Responsible Business: A Global History of Values-Driven Leadership by Geoffrey Jones. He highlights three practices that deeply responsible companies share.


  • 14 Mar 2023

Can AI and Machine Learning Help Park Rangers Prevent Poaching?

Globally there are too few park rangers to prevent the illegal trade of wildlife across borders, or poaching. In response, Spatial Monitoring and Reporting Tool (SMART) was created by a coalition of conservation organizations to take historical data and create geospatial mapping tools that enable more efficient deployment of rangers. SMART had demonstrated significant improvements in patrol coverage, with some observed reductions in poaching. Then a new predictive analytic tool, the Protection Assistant for Wildlife Security (PAWS), was created to use artificial intelligence (AI) and machine learning (ML) to try to predict where poachers would be likely to strike. Jonathan Palmer, Executive Director of Conservation Technology for the Wildlife Conservation Society, already had a good data analytics tool to help park rangers manage their patrols. Would adding an AI- and ML-based tool improve outcomes or introduce new problems? Harvard Business School senior lecturer Brian Trelstad discusses the importance of focusing on the use case when determining the value of adding a complex technology solution in his case, “SMART: AI and Machine Learning for Wildlife Conservation.”


  • 14 Feb 2023

Does It Pay to Be a Whistleblower?

In 2013, soon after the US Securities and Exchange Commission (SEC) had started a massive whistleblowing program with the potential for large monetary rewards, two employees of a US bank’s asset management business debated whether to blow the whistle on their employer after completing an internal review that revealed undisclosed conflicts of interest. The bank’s asset management business disproportionately invested clients’ money in its own mutual funds over funds managed by other banks, letting it collect additional fees—and the bank had not disclosed this conflict of interest to clients. Both employees agreed that failing to disclose the conflict was a problem, but beyond that, they saw the situation very differently. One employee, Neel, perceived the internal review as a good-faith effort by senior management to identify and address the problem. The other, Akash, thought that the entire business model was problematic, even with a disclosure, and believed that the bank may have even broken the law. Should they escalate the issue internally or report their findings to the US Securities and Exchange Commission? Harvard Business School associate professor Jonas Heese discusses the potential risks and rewards of whistleblowing in his case, “Conflicts of Interest at Uptown Bank.”


  • 17 Jan 2023

Good Companies Commit Crimes, But Great Leaders Can Prevent Them

It's time for leaders to go beyond "check the box" compliance programs. Through corporate cases involving Walmart, Wells Fargo, and others, Eugene Soltes explores the thorny legal issues executives today must navigate in his book Corporate Criminal Investigations and Prosecutions.


  • 29 Nov 2022

How Will Gamers and Investors Respond to Microsoft’s Acquisition of Activision Blizzard?

In January 2022, Microsoft announced its acquisition of the video game company Activision Blizzard for $68.7 billion. The deal would make Microsoft the world’s third largest video game company, but it also exposes the company to several risks. First, the all-cash deal would require Microsoft to use a large portion of its cash reserves. Second, the acquisition was announced as Activision Blizzard faced gender pay disparity and sexual harassment allegations. That opened Microsoft up to potential reputational damage, employee turnover, and lost sales. Do the potential benefits of the acquisition outweigh the risks for Microsoft and its shareholders? Harvard Business School associate professor Joseph Pacelli discusses the ongoing controversies around the merger and how gamers and investors have responded in the case, “Call of Fiduciary Duty: Microsoft Acquires Activision Blizzard.”


  • 15 Nov 2022

Stop Ignoring Bad Behavior: 6 Tips for Better Ethics at Work

People routinely overlook wrongdoing, even in situations that cause significant harm. In his book Complicit: How We Enable the Unethical and How to Stop, Max Bazerman shares strategies that help people do the right thing even when those around them aren't.


Ethical Business Practices: Case Studies and Lessons Learned

Introduction

Ethical business practices are a cornerstone of any successful company, influencing not only the public perception of a brand but also its long-term profitability. However, understanding what constitutes ethical behavior and how to implement it can be a complex process. This article explores some case studies that shine a light on ethical business practices, offering valuable lessons for businesses in any industry.

Case Study 1: Patagonia’s Commitment to Environmental Ethics

Patagonia, the outdoor clothing and gear company, has long set a standard for environmental responsibility. The company uses eco-friendly materials, promotes recycling of its products, and actively engages in various environmental causes.

Lessons Learned

  • Transparency: Patagonia is vocal about its ethical practices and even provides information on the environmental impact of individual products.
  • Consistency: Ethics are not an “add-on” for Patagonia; they are integrated into the very fabric of the company’s operations, from sourcing to production to marketing.
  • Engagement: The company doesn’t just focus on its practices; it encourages consumers to get involved in the causes it supports.

Case Study 2: Salesforce and Equal Pay

Salesforce, the cloud-based software company, took a stand on the gender pay gap issue. They conducted an internal audit and found that there was indeed a significant wage disparity between male and female employees for similar roles. To address this, Salesforce spent over $6 million to balance the scales.

Lessons Learned

  • Self-Audit: It’s crucial for companies to actively review their practices. What you don’t know can indeed hurt you, and ignorance is not an excuse.
  • Taking Responsibility: Rather than sweeping the issue under the rug, Salesforce openly acknowledged the problem and took immediate corrective action.
  • Long-Term Benefits: Fair treatment boosts employee morale and productivity, leading to long-term profitability.

Case Study 3: Starbucks and Racial Sensitivity Training

In 2018, Starbucks faced a public relations crisis when two Black men were wrongfully arrested at one of their Philadelphia stores. Instead of issuing just a public apology, Starbucks closed down 8,000 of its stores for an afternoon to conduct racial sensitivity training.

Lessons Learned

  • Immediate Action: Swift and meaningful action is critical in showing commitment to ethical behavior.
  • Education: Sometimes, the problem is a lack of awareness. Investing in employee education can avoid repeated instances of unethical behavior.
  • Public Accountability: Starbucks made their training materials available to the public, showing a level of transparency and accountability that helped regain public trust.

Why Ethics Matter

Ethical business practices are not just morally correct; they have a direct impact on a company’s bottom line. Customers today are more informed and more sensitive to ethical considerations. They often make purchasing decisions based on a company’s ethical standing, and word-of-mouth (or the digital equivalent) travels fast.

The case studies above show that ethical business practices should be a top priority for companies of all sizes and industries. These are not isolated examples but are representative of a broader trend in consumer expectations and regulatory frameworks. The lessons gleaned from these cases—transparency, consistency, engagement, self-audit, taking responsibility, and education—are universally applicable and offer a robust roadmap for any business seeking to bolster its ethical standing.

By implementing ethical business practices sincerely and not as a marketing gimmick, companies not only stand to improve their public image but also set themselves up for long-term success, characterized by a loyal customer base and a motivated, satisfied workforce.



A discussion around the use of cases in teaching RCR, part of the Instructor's Guide to Prepare Research Group Leaders as RCR Mentors.

NOTES TO THE INSTRUCTOR:

  • You should feel free to choose your own case for this section, or choose several, giving each small group a distinct case to discuss. Given the time constraints of both this workshop and most lab meetings, it would be best for the cases to be relatively uncomplicated, though still nuanced.
  • While this curriculum provides a basic case analysis scheme, if you use case analyses regularly, you likely know there are several ways of analyzing cases, and many frameworks available to assist your students, depending on how you use the cases and what you want the students to learn from them. Some of these are included in the resources section of this curriculum; you could provide a couple of different evaluation schemas to determine whether one is more appropriate for a particular discipline, or career stage, than another.
  • If you’re using an agenda that includes an over-lunch discussion of a case, as the agenda in this instructor’s manual shows, note that we used the 15-minute window just before lunch to go over the case studies section of the syllabus, coming back to the question “How might cases be introduced into the research environment?” in the after-lunch discussion.
  • It is important that the larger group discussion about the case(s) not become simply a discussion of the case per se, but that it also include a conversation about how useful this kind of discussion can be with their students. We found that our groups were eager to discuss the elements of the case, but we had to explicitly articulate the usefulness of such case discussions as tools for integrating ethics into their research environments.
  • You might also ask your workshop participants if other kinds of “cases” – those drawn from current events, for instance, or those written as “two minute challenges” [https://nationalethicscenter.org/resources/146/download/2MC%20methodology.pdf] – might also work in the research environment.
  • One of the evaluators of an earlier version of the curriculum noted that these workshops “could include tips on how to identify and choose in‐the‐news cases, challenges in discussing them, and bringing closure to such discussions. Of course an in‐the‐news case discussion would be modeled in the workshop as well. Alternatively, the workshop could promote the idea of providing case study (either created or found) discussion in a context similar to a journal club, or even as an occasional event in existing journal clubs.” This underscores the idea we had when creating this curriculum that all of those venues are considered “the research environment.”

What are case studies?

Based on real or contrived scenarios, case studies are a tool for discussing scientific integrity. Cases are designed to confront the readers with a specific problem that does not lend itself to easy answers. By providing a focus for discussion, cases help researchers to define or refine their own standards, to appreciate alternative approaches to identifying and resolving ethical problems, and to develop skills for dealing with hard problems on their own.

How should cases be analyzed?

Many of the skills necessary to analyze case studies can become tools for responding to real world problems. Cases, like the real world, contain uncertainties and ambiguities. Readers are encouraged to identify key issues, make assumptions as needed, and articulate various options for resolution. In addition to the specific questions accompanying some cases, an effective analysis will typically address the following criteria:

  • Affected parties and interests: Who is affected (individuals, institutions, a field, society)? What significant interest(s) (material, financial, ethical, other) do those affected have in the situation? Which interests are in conflict?

  • Applicable principles: What specific, generalizable, and consistent principles (e.g., to tell the truth, to do no harm) are applicable to this case?

  • Alternate answers: What other courses of action are open to each of those affected? What is the likely outcome of each course of action? What actions could have been taken to avoid the conflict?

  • Public defensibility: Are the final choice and its consequences defensible in public (e.g., reported through the media)?

Is there a right answer?

  • Acceptable Solutions:

Most problems will have several acceptable solutions or answers, but a single perfect solution often cannot be found. At times, even the best solution will have unsatisfactory consequences.

  • Unacceptable Solutions:

While more than one acceptable solution may be possible, not all solutions are acceptable. For example, obvious violations of specific rules, regulations, or generally accepted standards of conduct would typically be unacceptable. However, it is also plausible that blind adherence to accepted rules or standards would sometimes be an unacceptable course of action.

  • Ethical Decision-making:

Ethical decision-making is a process rather than an outcome. The clearest instance of a wrong answer is the failure to engage in that process. Not trying to define a consistent and defensible basis for decisions or conduct is unacceptable.

How might cases be introduced into the research environment?

Cases are best seen as an opportunity to foster discussion among several individuals. As such, they might be most appropriate as an exercise to be used in the context of a research group meeting, journal club, or as part of a research lecture series.

During the lunch break, workshop participants will be assigned to small groups to review a case (scenario) describing a research ethics challenge. Ideally, discussion group members should come from diverse disciplines and should not already know one another well. This diversity increases the chances of seeing the challenges in the case from multiple angles and finding workable solutions. It also helps build personal connections among diverse members of the institution, who can then turn to one another with future questions or challenges about ethics and ethics training.

Case for Discussion

How much is too much?

Qiao Zhi recently arrived in the United States from China to work as a postdoctoral researcher. She studied English for many years as part of her schooling in China, but she had little real-world experience conversing or writing in English. Qiao Zhi is a very talented scientist in her field and quickly found a position in a research group led by Professor Wang, who was also trained in China, and consisting largely of other Chinese researchers. During her first year of work, Qiao Zhi was extraordinarily lucky to make an interesting finding, and Professor Wang encouraged her to write the work up for publication in the journal Science. Qiao Zhi struggled to write the paper in English, but soon found that, with the help of the Internet, she could easily find well-written English phrases to express concepts she wasn't sure of. Professor Wang lightly edited the paper written by Qiao Zhi, they submitted it to Science, and it was accepted for publication. Six months later, one of Wang's colleagues was looking at the Déjà vu website (http://dejavu.vbi.vt.edu/dejavu) and discovered that Qiao Zhi's paper received a very high score for text duplicated from other papers. Wang took the concern of possible plagiarism to the Research Integrity Officer (RIO) at his institution. The RIO appointed a committee to determine whether Qiao Zhi should be found guilty of plagiarism, a form of research misconduct. You are a member of that committee and have been asked to decide whether frequent use of phrases from other papers constitutes plagiarism and, if so, whether it should result in sanctions or penalties.
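Text-similarity tools such as Déjà vu flag manuscripts whose wording overlaps heavily with earlier papers. The sketch below illustrates the general idea behind such scoring (comparing word n-gram "shingles" between a manuscript and a source); it is not Déjà vu's actual algorithm, and the function names and example sentences are invented for illustration.

```python
# Illustrative sketch of n-gram overlap scoring, NOT Déjà vu's actual method.

def shingles(text, n=5):
    """Return the set of n-word shingles in `text` (case-insensitive)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def duplication_score(manuscript, source, n=5):
    """Fraction of the manuscript's shingles that also appear in `source`."""
    m, s = shingles(manuscript, n), shingles(source, n)
    return len(m & s) / len(m) if m else 0.0

original = "the results suggest a novel mechanism of protein folding under thermal stress"
reused = "our data indicate a novel mechanism of protein folding under thermal stress conditions"
print(round(duplication_score(reused, original), 2))  # → 0.56
```

A committee might note that a high score of this kind shows reused wording, but deciding whether that reuse rises to plagiarism still requires judgment about context, intent, and community standards.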

Recommended timetable:

During lunch:

  • Introductions (5 mins):

Introduce yourselves to one another, pick someone to serve as discussion leader (responsible for keeping discussion on track and on time), and someone to keep a written summary of key conclusions. If not all members of the group have already been introduced to the case, the group leader should read the case aloud.

  • Case Discussion (20 mins):

Collectively consider (1) the interests of individuals and groups in how this case is handled; (2) the ethical principles or values at stake; (3) the alternative answers that might be considered as solutions; and (4) the rationale for selecting a course of action agreeable to all.

  • Summary (10 mins):

As a group, decide how best to articulate the interests and principles at stake, the alternative answers considered, your recommended answer, and the rationale for choosing it.

After lunch:

  • Presentation (variable):

Choose one member of your group to present your analysis, paying attention not just to the case itself, but also to how this kind of exercise could benefit your trainees.

Related Resources


This material is based upon work supported by the National Science Foundation under Award No. 2055332. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


Internet Ethics Cases (Markkula Center for Applied Ethics)

Find ethics case studies on topics in Internet ethics including privacy, hacking, social media, the right to be forgotten, and hashtag activism. (For permission to reprint articles, submit requests to [email protected].)

  • Ethical questions arise in interactions among students, instructors, administrators, and providers of AI tools.
  • What can we learn from the Tay experience about AI and social media ethics more broadly?
  • Who should be consulted before using emotion-recognition AI to report on constituents’ sentiments?
  • When 'algorithm alchemy' wrongly accuses people of fraud, who is accountable?
  • Which stakeholders might benefit from a new age of VR “travel”? Which stakeholders might be harmed?
  • Ethical questions about data collection, data-sharing, access, use, and privacy.
  • As PunkSpider awaits re-release, ethical questions surround a tool that can spot and share vulnerabilities on the web, opening those results to the public.
  • With URVR, recipients can capture and share 360° 3D moments and live them out together.
  • VR rage rooms may provide therapeutic and inexpensive benefits while also raising ethical questions.
  • A VR dating app intended to ease the stress and awkwardness of early dating in a safe and comfortable way.




NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

O'Mathúna D, Iphofen R, editors. Ethics, Integrity and Policymaking: The Value of the Case Study [Internet]. Cham (CH): Springer; 2022. doi: 10.1007/978-3-031-15746-2_1


Chapter 1. Making a Case for the Case: An Introduction

Dónal O’Mathúna and Ron Iphofen.


Published online: November 3, 2022.

This chapter argues for the importance of case studies in generating evidence to guide and/or support policymaking across a variety of fields. Case studies can offer the kind of depth and detail vital to understanding the nuances of context, which may be important in securing effective policies that take account of influences not easily identified in more generalised studies. Case studies can be written in a variety of ways, which are overviewed in this chapter, and can be written with different purposes in mind. At the same time, case studies have limitations, particularly when evidence of causation is sought. Understanding these limitations can help to ensure that case studies are appropriately used to assist in policymaking. This chapter also provides an overview of the types of case studies found in the rest of this volume, and briefly summarises the themes and topics addressed in each of the other chapters.

1.1. Judging the Ethics of Research

When asked to judge the ethical issues involved in research or any evidence-gathering activity, any research ethicist worth their salt will (or should) reply, at least initially: ‘It depends’. This is neither sophistry nor evasive legalism. Instead, it is a specific form of casuistry used in ethics in which general ethical principles are applied to the specifics of actual cases and inferences made through analogy. It is valued as a structured yet flexible approach to real-world ethical challenges. Case study methods recognise the complexities of depth and detail involved in assessing research activities. Another way of putting this is to say: ‘Don’t ask me to make a judgement about a piece of research until I have the details of the project and the context in which it will or did take place.’ Understanding and fully explicating a context is vital as far as ethical research (and evidence-gathering) is concerned, along with taking account of the complex interrelationship between context and method (Miller and Dingwall 1997 ).

This rationale lies behind this collection of case studies, which is one outcome of the EU-funded PRO-RES Project. One aim of this project was to establish the virtues, values, principles and standards most commonly held as supportive of ethical practice by researchers, scientists, and evidence generators and users. The project team conducted desk research and workshops, and consulted throughout the project with a wide range of stakeholders (PRO-RES 2021a). The resulting Scientific, Trustworthy, and Ethical evidence for Policy (STEP) ACCORD was devised, which all stakeholders could sign up to and endorse in the interest of ensuring that any policies arising from research findings are based on ethical evidence (PRO-RES 2021b).

By ‘ethical evidence’ we mean results and findings that have been generated by research and other activities during which the standards of research ethics and integrity have been upheld (Iphofen and O’Mathúna 2022 ). The first statement of the STEP ACCORD is that policy should be evidence-based, meaning that it is underpinned by high-quality research, analysis and evidence (PRO-RES 2021b ). While our topic could be said to be research ethics, we have chosen to refer more broadly to evidence-generating activities. Much debate has occurred over the precise definition of research under the apparent assumption that ‘non-research projects’ fall outside the purview of requirements to obtain ethics approval from an ethics review body. This debate is more about the regulation of research than the ethics of research and has contributed to an unbalanced approach to the ethics of research (O’Mathúna 2018 ). Research and evidence-generating activities raise many ethical concerns, some similar and some distinct. When the focus is primarily on which projects need to obtain what sort of ethics approval from which type of committee, the ethical issues raised by those activities themselves can receive insufficient attention. This can leave everyone involved with these activities either struggling to figure out how to manage complex and challenging ethical dilemmas or pushing ahead with those activities confident that their approval letter means they have fulfilled all their ethical responsibilities. Unfortunately, this can lead to a view that research ethics is an impediment and burden that must be overcome so that the important work in the research itself can get going.

The alternative perspective advocated by PRO-RES, and the authors of the chapters in this volume, is that ethics underpins all phases of research, from when the idea for a project is conceived, all the way through its design and implementation, and on to how its findings are disseminated and put into practice in individual decisions or in policy. Given the range of activities involved in all these phases, multiple types of ethical issues can arise. Each occurs in its own context of time and place, and this must be taken into account. While ethical principles and theories have important contributions to make at each of these points, case studies are also very important. These allow for the normative effects of various assumptions and declarations to be judged in context. We therefore asked the authors of this volume’s chapters to identify various case studies which would demonstrate the ethical challenges entailed in various types of research and evidence-generating activities. These illustrative case studies explore various innovative topics and fields that raise challenges requiring ethical reflection and careful policymaking responses. The cases highlight diverse ethical issues and provide lessons for the various options available for policymaking (see Sect.  1.6 . below). Cases are drawn from many fields, including artificial intelligence, space science, energy, data protection, professional research practice and pandemic planning. The issues are examined in different locations, including Europe, India, Africa and in global contexts. Each case is examined in detail and also helps to anticipate lessons that could be learned and applied in other situations where ethical evidence is needed to inform evidence-based policymaking.

1.2. The Case for Cases

Case studies have increasingly been used, particularly in social science (Exworthy and Powell 2012 ). Many reasons underlie this trend, one being the movement towards evidence-based practice. Case studies provide a methodology by which a detailed study can be conducted of a social unit, whether that unit is a person, an organization, a policy or a larger group or system (Exworthy and Powell 2012 ). The case study is amenable to various methodologies, mostly qualitative, which allow investigations via documentary analyses, interviews, focus groups, observations, and more.

At the same time, consensus is lacking over the precise nature of a case study. Various definitions have been offered, but Yin ( 2017 ) provides a widely cited definition with two parts. One is that a case study is an in-depth inquiry into a real-life phenomenon where the context is highly pertinent. The second part of Yin’s definition addresses the many variables involved in the case, the multiple sources of evidence explored, and the inclusion of theoretical propositions to guide the analysis. While Yin’s emphasis is on the case study as a research method, he identifies important elements of broader relevance that point to the particular value of the case study for examining ethical issues.

Other definitions of case studies emphasize their story or narrative aspects (Gwee 2018 ). These stories frequently highlight a dilemma in contextually rich ways, with an emphasis on how decisions can be or need to be made. Case studies are particularly helpful with ethical issues to provide crucial context and explore (and evaluate) how ethical decisions have been made or need to be made. Classic cases include the Tuskegee public health syphilis study, the Henrietta Lacks human cell line case, the Milgram and Zimbardo psychology cases, the Tea Room Trade case, and the Belfast Project in oral history research (examined here in Chap. 10 ). Cases exemplify core ethical principles, and how they were applied or misapplied; in addition, they examine how policies have worked well or not (Chaps. 2 , 3 and 5 ). Cases can examine ethics in long-standing issues (like research misconduct (Chap. 7 ), energy production (Chap. 8 ), or Chap. 11 ’s consideration of researchers breaking the law), or with innovations in need of further ethical reflection because of their novelty (like extended space flight (Chap. 9 ) and AI (Chaps. 13 and 14 ), with the latter looking at automation in legal systems). These case studies help to situate the innovations within the context of widely regarded ethical principles and theories, and allow comparisons to be made with other technologies or practices where ethical positions have been developed. In doing so, these case studies offer pointers and suggestions for policymakers given that they are the ones who will develop applicable policies.

1.3. Research Design and Causal Inference

Not everyone is convinced of the value of the case study. It must be admitted that case studies have limitations, which we will reflect on shortly. Yet we believe that others go too far in their criticisms, revealing instead some prejudices against the value of the case (Yin 2017). In what has become a classic text for research design, Campbell and Stanley (1963) have few good words for what they call the ‘One Shot Case Study.’ They rank it below two other ‘pre-experimental’ designs—the One-Group Pretest–Posttest and the Static-Group Comparison—and conclude that case studies “have such a total absence of control as to be of almost no scientific value” (Campbell and Stanley 1963, 6). The other designs have, in turn, a baseline and outcome measure and some degree of comparative analysis, which provide them some validity. Such a criticism is legitimate if one prioritises the experimental method as superior in terms of effectiveness evidence and, as for Campbell and Stanley, one is striving to assess the effectiveness of educational interventions.

What is missing from that assessment is that different methodologies are more appropriate for different kinds of questions. Questions of causation and whether a particular treatment, policy or educational strategy is more effective than another are best answered by experimental methods. While experimental designs are better suited to explore causal relationships, case studies are more suited to explore “how” and “why” questions (Yin 2017 ). It can be more productive to view different methodologies as complementing one another, rather than examining them in hierarchical terms.

The case study approach draws on a long tradition in ethnography and anthropology: “It stresses the importance of holistic perspectives and so has more of a ‘humanistic’ emphasis. It recognises that there are multiple influences on any single individual or group and that most other methods neglect the thorough understanding of this range of influences. They usually focus on a chosen variable or variables which are tested in terms of their influence. A case study tends to make no initial assumptions about which are the key variables—preferring to allow the case to ‘speak for itself’” (Iphofen et al. 2009 , 275). This tradition has sometimes discouraged people from conducting or using case studies on the assumption that they take massive amounts of time and lead to huge reports. This is the case with ethnography, but the case study method can be applied in more limited settings and can lead to high-quality, concise reports.

Another criticism of case studies is that they cannot be used to make generalizations. Certainly, there are limits to their generalisability, but the same is true of experimental studies. One randomized controlled trial cannot be generalised to the whole population without ensuring that its details are evaluated in the context of how it was conducted.

Similarly, it should not be assumed that generalisability can adequately guide practice or policy when it comes to the specifics of an individual case. A case study should not be used to support statistical generalizations (that the same percentage found in the case will be found in the general public). But a case study can be used to expand and generalize theories and thus have much usefulness. It affords a method of examining the specific (complex) interactions occurring in a case which can only be known from the details. Such an analysis can be carried out for individuals, policies or interventions.

The current COVID-19 pandemic demonstrates the dangers of generalising in the wrong context. Some people have very mild cases of COVID-19 or are asymptomatic; others get seriously ill and even die. Sometimes people generalise from cases they know, assume they will have mild symptoms, and on that basis refuse the COVID-19 vaccine. Mass vaccination is recommended for the sake of the health of the public (generalised health) and to limit the spread of a deadly virus. Cases are reported of people having adverse reactions to COVID-19 vaccines, and some people generalise from these that they will not take whatever risks might be involved in receiving the vaccine themselves. It might be theoretically possible to discover, at a population level, which individuals will react adversely to immunisation. But doing so is highly complex and expensive, and takes an extensive period of time. Given the urgency of benefitting the health of ‘the public’, policymakers have decided that the risks to a sub-group are warranted. Only after the emergence of epidemiological data disclosing the negative effects of some vaccines on some individuals will it become clearer which characteristics typify the cases likely to experience adverse effects, and only then can the risks of those effects be quantified more accurately.

Much literature now points to the advantages and disadvantages of case studies (Gomm et al. 2000 ), and how to use them and conduct them with adequate rigour to ensure the validity of the evidence generated (Schell 1992 ; Yin 2011 , 2017 ). At the same time, legitimate critiques have been made of some case studies because they have been conducted without adequate rigor, in unsystematic ways, or in ways that allowed bias to have more influence than evidence (Hammersley 2001 ). Part of the problem here is similar to interviewing, where some will assume that since interviews are a form of conversation, anyone can do it. Case studies have some similarities to stories, but that doesn’t mean they are quick and easy ways to report on events. That view can lead to the situation where “most people feel that they can prepare a case study, and nearly all of us believe we can understand one. Since neither view is well founded, the case study receives a lot of approbation it does not deserve” (Hoaglin et al., cited in Yin 2017 , 16).

Case studies can be conducted and used in a wide range of ways (Gwee 2018 ). Case studies can be used as a research method, as a teaching tool, as a way of recording events so that learning can be applied to practice, and to facilitate practical problem-solving skills (Luck et al. 2006 ). Significant differences exist between a case study that was developed and used in research compared to one used for teaching (Yin 2017 ). A valid rationale for studying a ‘case’ should be provided so that it is clear that the proposed method is suitable to the topic and subject being studied. The unit of study for a case could be an individual person, social group, community, or society. Sometimes that specific case alone will constitute the actual research project. Thus, the study could be of one individual’s experience, with insights and understanding gained of the individual’s situation which could be of use to understand others’ experiences. Often there will be attempts made at a comparison between cases—one organisation being compared to another, with both being studied in some detail, and in terms of the same or similar criteria. Given this variety, it is important to use cases in ways appropriate to how they were generated.

The case study continues to be an important piece of evidence in clinical decision-making in medicine and healthcare. Here, case studies do not demonstrate causation or effectiveness, but are used as an important step in understanding the experiences of patients, particularly with a new or confusing set of symptoms. This was clearly seen as clinicians published case studies describing a new respiratory infection which the world now knows to be COVID-19. Only as case studies were generated, and the patterns brought together in larger collections of cases, did the characteristics of the illness come to inform those seeking to diagnose at the bedside (Borges do Nascimento et al. 2020 ). Indeed case studies are frequently favoured in nursing, healthcare and social work research where professional missions require a focus on the care of the individual and where cases facilitate making use of the range of research paradigms (Galatzer-Levy et al. 2000 ; Mattaini 1996 ; Gray 1998 ; Luck et al. 2006 ).

1.4. Devil’s in the Detail

Our main concern in this collection is not with case study aetiology but rather to draw on the advantages of the method to highlight key ethical issues related to the use of evidence in influencing policy. Thus, we make no claim to causal ‘generalisation’ on the basis of these reports; instead we seek to elucidate ethics issues, even if only theoretically, and to anticipate responses and obstacles in similar situations and contexts that might help decision-making in novel circumstances. A key strength of case studies is their capacity to connect abstract theoretical concepts to the complex realities of practice and the real world (Luck et al. 2006). Ethics cases clearly fit this description and allow the contextual details of issues and dilemmas to be included in discussions of how ethical principles apply as policy is being developed.

Since cases are highly focussed on the specifics of the situation, more time can be given over to data gathering which may be of both qualitative and quantitative natures. Given the many variables involved in the ‘real life’ setting, increased methodological flexibility is required (Yin 2017 ). This means seeking to maximise the data sources—such as archives (personal and public), records (such as personal diaries), observations (participant and covert) and interviews (face-to-face and online)—and revisiting all sources when necessary and as case participants and time allows.

1.5. Cases and Policymaking

Case studies allow researchers and practitioners to learn from the specifics of a situation and apply that learning in similar situations. Ethics case studies allow such reflection to facilitate the development of ethical decision-making skills. This volume has major interests in ethics and evidence-generation (research), but also in a third area: policymaking. Cases can influence policymaking, as when one case receives widespread attention and becomes the impetus to create policy aimed at preventing similar cases. For example, the US federal Brady Law was enacted in 1993 to require background checks on people before they purchase a gun (ATF 2021). The law was named for White House Press Secretary James Brady, whose case became widely known in the US after he was shot and paralyzed during John Hinckley, Jr.’s 1981 assassination attempt on President Ronald Reagan. Another example, this time in a research context, was how the Tuskegee Syphilis Study led, after its public exposure in 1972, to the US Department of Health, Education and Welfare appointing an expert panel to examine the ethics of that case. This resulted in federal policymakers enacting the National Research Act in 1974, which set up a national commission that published the Belmont Report in 1978. This report continues to strongly influence research ethics practice around the world. These examples highlight the power of a case study to influence policymaking.

One of the challenges for policymakers, though, is that compelling cases can often be provided for opposite sides of an issue. Also, while the Belmont Report has been praised for articulating a small number of key ethical principles, how those principles should be applied in specific instances of research remains an ongoing challenge and a point of much discussion. This is particularly relevant for innovative techniques and technologies. Hence the importance of cases interacting with general principles and leading to ongoing reflection and debate over the applicable cases. At the same time, new areas of research and evidence generation activities will lead to questions about how existing ethical principles and values apply. New case studies can help to facilitate that reflection, which can then allow policymakers to consider whether existing policy should be adapted or whether whole new areas of policy are needed.

Case studies also can play an important role in learning from and evaluating policy. Policymakers tend to focus on practical, day-to-day concerns and on the introduction of new programmes (Exworthy and Peckham 2012). Time and resources may be scant when it comes to evaluating how well existing policies are performing or reflecting on how policies can be adapted to overcome shortcomings (Hunter 2003). Effective policies may already exist elsewhere (historically or geographically) and may be more easily adapted to a new context than starting policymaking from scratch. Case studies permit learning from past policies (or situations where policies did not exist), and they can illuminate factors that should be explored in more detail in the context of the current issue or situation. Chaps. 2 , 3 and 5 in this volume are examples of this type of case study.

1.6. The Moral Gain

This volume reflects the ambiguity of ethical dilemmas in contemporary policymaking. Its analyses reflect current debates in which consensus has not yet been achieved. These cases illustrate key points made throughout the PRO-RES project: that ethical decision-making is a fluid enterprise, in which values, principles and standards must constantly be applied to new situations, new events and new research developments. The cases illustrate how no ‘one point’ exists in the research process at which judgements about ethics can be regarded as ‘final.’ Case studies provide excellent ways for readers to develop important decision-making skills.

Research produces novel products and processes which can have broad implications for society, the environment and relationships. Research methods themselves are modified or applied in new ways and places, requiring further ethical reflection. New topics and whole fields of research develop and require careful evaluation and thoughtful responses. New case studies are needed because research constantly generates new issues and new ethics questions for policymaking.

The cases found in this volume address a wide range of topics and involve several disciplines. The cases were selected according to the parameters of the PRO-RES project and the Horizon 2020 funding call to which it responded. First, the call was concerned with both research ethics and scientific integrity, and each of the cases addresses one or both of these areas. The call sought projects that addressed non-medical research, and the cases here address disciplines such as social sciences, engineering, artificial intelligence and One Health. The call also sought particular attention to (a) covert research, (b) working in dangerous areas/conflict zones and (c) behavioral research collecting data from social media/internet sources. Hence, we included cases that addressed each of these areas. Finally, while an EU-funded project can be expected to have a European focus, the issues addressed have global implications. We therefore wanted to include case studies from outside Europe, and did so by involving authors from India and Africa to reflect on the volume’s areas of interest.

The first case study offered in this volume (Chap. 2 ) examines a significant policy approach taken by the European Union to address ethics and integrity in research and innovation: Responsible Research and Innovation (RRI). This chapter examines the lessons that can be learned from RRI in a European context. Chapter 3 elaborates on this topic with another policy learning case study, but this time examining RRI in India. One of the critiques made of RRI is that it can be Euro-centric. This case study examines this claim, and also describes how a distinctively Indian concept, Scientific Temper, can add to and contextualise RRI. Chapter 4 takes a different approach in being a case study of the development of research ethics guidance in the United Kingdom (UK). It explores the history underlying the research ethics framework commissioned by the UK Research Integrity Office (UKRIO) and the Association of Research Managers and Administrators (ARMA), and points to lessons that can be learned about the policy-development process itself.

While staying focused on policy related to research ethics, the chapters that follow include case studies that address more targeted concerns. Chapter 5 examines the impact of the European Union’s (EU) General Data Protection Regulation (GDPR) in the Republic of Croatia. Research data collected in Croatia is used to explore the handling of personal data before and after the introduction of GDPR. This case study aims to provide lessons learned that could contribute to research ethics policies and procedures in other European Member States.

Chapter 6 moves from policy itself to the role of policy advisors in policymaking. This case study explores the distinct responsibilities of those elevated to the role of “policy advisor,” especially given the current lack of policy to regulate this field or how its advice is used by policymakers. Next, Chap. 7 straddles the previous chapters’ focus on policy and its evaluation while introducing the focus of the next section on historical case studies. This chapter uses the so-called “race for the superconductor” as a case study by which the PRO-RES ethics framework is used to explore specific ethical dilemmas (PRO-RES 2021b). This case study is especially useful for policymakers because of how it reveals the multiple difficulties in balancing economic, political, institutional and professional requirements and values.

The next case study continues the use of historical cases, but here to explore the challenges facing innovative research into unorthodox energy technology that has the potential to displace traditional energy suppliers. The wave power case in Chap. 8 highlights how conducting research with integrity can have serious consequences and come with considerable cost. The case also points to the importance of transparency in how evidence is used in policymaking, so that trust in science and scientists is promoted at the same time as science is used in the public interest. Another area of cutting-edge scientific innovation is explored in Chap. 9, but this time looking to the future. This case study examines space exploration, and specifically the ethical issues around establishing safe exposure standards for astronauts embarking on extended-duration spaceflights. This case highlights the ethical challenges in policymaking focused on an elite group of people (astronauts) who embark on extremely risky activities in the name of science and humanity.

Chapter 10 moves from the physical sciences to the social sciences. The Belfast Project provides a case study to explore the ethical challenges of conducting research after violent conflict. In this case, researchers promised anonymity and confidentiality to research participants, yet that promise was overturned through legal proceedings, which highlighted the limits of confidentiality in research. This case points to the difficulty of balancing the value of research archives in understanding conflict against the value of providing juridical evidence to promote justice. Another social science case is examined in Chap. 11, this time in ethnography. This so-called ‘urban explorer’ case study explores the justifications that might exist for undertaking covert research where researchers break the law (in this case by trespassing) in order to investigate a topic that would otherwise remain poorly understood. This case raises a number of important questions for policymakers: the freedoms that researchers should be given to act in the public interest; when researchers are justified in breaking the law; and what responsibilities and consequences researchers should accept if they believe they are justified in doing so.

Further complexity in research and evidence generation is introduced in Chap. 12. A case study in One Health is used to explore ethical issues at the intersection of animal, human and environmental ethics. The pertinence of such studies has been highlighted by COVID-19, yet policies lag behind in recognising the urgency and complexity of initiating investigations into novel outbreaks, such as the one discussed here that occurred among animals in Ethiopia. Chapter 13 retains the COVID-19 setting, but returns the attention to technological innovation. Artificial intelligence (AI) is the focus of this chapter and the next, here examining the ethical challenges arising from the emergency authorisation of AI to respond to the public health needs created by the COVID-19 pandemic. Chapter 14 addresses a longer-term use of AI in addressing problems and challenges in the legal system. Using the so-called Robodebt case, the chapter explores the reasons why legal systems are turning to AI and other automated procedures. The Robodebt case highlights the problems that arise when AI algorithms are built on inaccurate assumptions and implemented with little human oversight. This case shows the massive problems created for hundreds of thousands of Australians who became victims of poorly conceived AI, and makes recommendations to assist policymakers in avoiding similar debacles. The last chapter (Chap. 15) draws some general conclusions from all the cases that are relevant when using case studies.

1.7. Into the Future

This volume focuses on ethics in research and professional integrity, and on how we can be clear about the lessons that can be drawn to assist policymakers. The cases provided cover a wide range of situations, settings and disciplines. They cover international, national, organisational, group and individual levels of concern. Each case raises distinct issues, yet also points to some general features of research, evidence-generation, ethics and policymaking. All the studies illustrate the difficulties of drawing clear ‘boundaries’ between the research and its context. All these case studies show how, in real situations, dynamic judgements have to be made about many different issues. Guidelines and policies do help and are needed. But at the same time, researchers, policymakers and everyone else involved in evidence generation and evidence implementation need to embody the virtues that are central to good research. Judgements will need to be made in many areas: for example, about how much transparency can be allowed, or is ethically justified; how much risk can be taken, both with participants’ safety and also with researchers’ safety; how much information can be disclosed to or withheld from participants in their own interests and for the benefit of the ‘science’; and many others. All of these point to just how difficult it can be to apply common standards across disciplines, professions, cultures and countries. That difficulty must be acknowledged and must lead to open discussions with the aim of improving practice. The cases presented here point to efforts that have been made towards this. None of them is perfect. Lessons must be learned from all of them, and Chap. 15 aims to be a starting point for doing so. Only by openly discussing and reflecting on past practice can lessons be learned that can inform policymaking aimed at improving future practice. In this way, ethical progress can become an essential aspect of innovation in research and evidence-generation.

  • ATF (Bureau of Alcohol, Tobacco, Firearms and Explosives). 2021. Brady law. https://www.atf.gov/rules-and-regulations/brady-law. Accessed 1 Jan 2022.
  • Borges do Nascimento, Israel J., Thilo C. von Groote, Dónal P. O’Mathúna, Hebatullah M. Abdulazeem, Catherine Henderson, Umesh Jayarajah, et al. 2020. Clinical, laboratory and radiological characteristics and outcomes of novel coronavirus (SARS-CoV-2) infection in humans: a systematic review and series of meta-analyses. PLoS ONE 15(9): e0239235. https://doi.org/10.1371/journal.pone.0239235.
  • Campbell, D.T., and J.C. Stanley. 1963. Experimental and quasi-experimental designs for research. Chicago: Rand McNally and Company.
  • Exworthy, Mark, and Stephen Peckham. 2012. Policy learning from case studies in health policy: taking forward the debate. In Shaping health policy: case study methods and analysis, ed. Mark Exworthy, Stephen Peckham, Martin Powell, and Alison Hann, 313–328. Bristol, UK: Policy Press.
  • Exworthy, Mark, and Martin Powell. 2012. Case studies in health policy: an introduction. In Shaping health policy: case study methods and analysis, ed. Mark Exworthy, Stephen Peckham, Martin Powell, and Alison Hann, 3–20. Bristol, UK: Policy Press.
  • Galatzer-Levy, R.M., H. Bachrach, A. Skolnikoff, and S. Waldron Jr. 2000. The single case method. In Does psychoanalysis work?, 230–242. New Haven and London: Yale University Press.
  • Gomm, R., M. Hammersley, and P. Foster, eds. 2000. Case study method: key issues, key texts. London: Sage.
  • Gray, M. 1998. Introducing single case study research design: an overview. Nurse Researcher 5(4): 15–24.
  • Gwee, June. 2018. The case writer’s toolkit. Singapore: Palgrave Macmillan.
  • Hammersley, M. 2001. Which side was Becker on? Questioning political and epistemological radicalism. Qualitative Research 1(1): 91–110.
  • Hunter, D.J. 2003. Evidence-based policy and practice: riding for a fall? Journal of the Royal Society of Medicine 96(4): 194–196.
  • Iphofen, R., and D. O’Mathúna, eds. 2022. Ethical evidence and policymaking: interdisciplinary and international research. Bristol, UK: Policy Press.
  • Iphofen, R., A. Krayer, and C.A. Robinson. 2009. Reviewing and reading social care research: from ideas to findings. Bangor: Bangor University.
  • Luck, L., D. Jackson, and K. Usher. 2006. Case study: a bridge across the paradigms. Nursing Inquiry 13(2): 103–109.
  • Mattaini, M.A. 1996. The abuse and neglect of single-case designs. Research on Social Work Practice 6(1): 83–90.
  • Miller, G., and R. Dingwall. 1997. Context and method in qualitative research. London: Sage.
  • O’Mathúna, Dónal. 2018. The dual imperative in disaster research ethics. In SAGE handbook of qualitative research ethics, ed. Ron Iphofen and Martin Tolich, 441–454. London: SAGE.
  • PRO-RES. 2021a. The foundational statements for ethical research. http://prores-project.eu/the-foundational-statements-for-ethical-research-practice/. Accessed 1 Jan 2022.
  • PRO-RES. 2021b. Accord. https://prores-project.eu/#Accord. Accessed 1 Jan 2022.
  • Schell, C. 1992. The value of the case study as a research strategy. Manchester: Manchester Business School.
  • Yin, Robert K. 2011. Applications of case study research, 3rd ed. London: Sage.
  • Yin, Robert K. 2017. Case study research and applications: design and methods, 6th ed. London: Sage.

PRO-RES is a European Commission-funded project aiming to PROmote ethics and integrity in non-medical RESearch by building a supported guidance framework for all non-medical sciences and humanities disciplines adopting social science methodologies. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 788352. Open access fees for this volume were paid for through the PRO-RES funding.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

  • Cite this Page O’Mathúna D, Iphofen R. Making a Case for the Case: An Introduction. 2022 Nov 3. In: O'Mathúna D, Iphofen R, editors. Ethics, Integrity and Policymaking: The Value of the Case Study [Internet]. Cham (CH): Springer; 2022. Chapter 1. doi: 10.1007/978-3-031-15746-2_1

Princeton Dialogues on AI and Ethics Case Studies

Princeton University

The development of artificial intelligence (AI) systems and their deployment in society gives rise to ethical dilemmas and hard questions. By situating ethical considerations in terms of real-world scenarios, case studies facilitate in-depth and multi-faceted explorations of complex philosophical questions about what is right, good and feasible. Case studies provide a useful jumping-off point for considering the various moral and practical trade-offs inherent in the study of practical ethics.

Case Study PDFs: The Princeton Dialogues on AI and Ethics has released six long-format case studies exploring issues at the intersection of AI, ethics and society. Three additional case studies are scheduled for release in spring 2019.

Methodology: The Princeton Dialogues on AI and Ethics case studies are unique in their adherence to five guiding principles: 1) empirical foundations, 2) broad accessibility, 3) interactiveness, 4) multiple viewpoints and 5) depth over brevity.


Society of Professional Journalists


Ethics Case Studies

The SPJ Code of Ethics is voluntarily embraced by thousands of journalists, regardless of place or platform, and is widely used in newsrooms and classrooms as a guide for ethical behavior. The code is intended not as a set of "rules" but as a resource for ethical decision-making. It is not — nor can it be under the First Amendment — legally enforceable. For an expanded explanation, please follow this link.


For journalism instructors and others interested in presenting ethical dilemmas for debate and discussion, SPJ has a useful resource. We've been collecting a number of case studies for use in workshops. The Ethics AdviceLine operated by the Chicago Headline Club and Loyola University also has provided a number of examples. There seems to be no shortage of ethical issues in journalism these days. Please feel free to use these examples in your classes, speeches, columns, workshops or other modes of communication.

Kobe Bryant’s Past: A Tweet Too Soon? On January 26, 2020, Kobe Bryant died at the age of 41 in a helicopter crash in the Los Angeles area. While the majority of social media praised Bryant after his death, within a few hours after the story broke, Felicia Sonmez, a reporter for The Washington Post , tweeted a link to an article from 2003 about the allegations of sexual assault against Bryant. The question: Is there a limit to truth-telling? How long (if at all) should a journalist wait after a person’s death before resurfacing sensitive information about their past?

A controversial apology After photographs of a speech and protests at Northwestern University appeared on the university newspaper's website, some of the participants contacted the newspaper to complain. It became a "firestorm": first from students who felt victimized, and then, after the newspaper apologized, from journalists and others who accused the newspaper of apologizing for simply doing its job. The question: Is an apology the appropriate response? Is there something else the student journalists should have done?

Using the ‘Holocaust’ Metaphor People for the Ethical Treatment of Animals, or PETA, is a nonprofit animal rights organization known for its controversial approach to communications and public relations. In 2003, PETA launched a new campaign, named “Holocaust on Your Plate,” that compares the slaughter of animals for human use to the murder of 6 million Jews in WWII. The question: Is “Holocaust on Your Plate” ethically wrong or a truthful comparison?

Aaargh! Pirates! (and the Press) When collections of songs, whether studio recordings from an upcoming album or merely unreleased demos, are leaked online, music outlets cover the leak with a breaking story or a blog post. But they don’t stop there. Rolling Stone and Billboard often also will include a link within the story to listen to the songs that were leaked. The question: If Billboard and Rolling Stone are essentially pointing readers in the right direction, to the leaked music, are they not aiding in helping the Internet community find the material and consume it?

Reigning on the Parade Frank Whelan, a features writer who also wrote a history column for the Allentown, Pennsylvania, Morning Call , took part in a gay rights parade in June 2006 and stirred up a classic ethical dilemma. The situation raises any number of questions about what is and isn’t a conflict of interest. The question: What should the “consequences” be for Frank Whelan?

Controversy over a Concert Three former members of the Eagles rock band came to Denver during the 2004 election campaign to raise money for a U.S. Senate candidate, Democrat Ken Salazar. John Temple, editor and publisher of the Rocky Mountain News, advised his reporters not to go to the fundraising concerts. The question: Is it fair to ask newspaper staffers — or employees at other news media, for that matter — not to attend events that may have a political purpose? Are the rules different for different jobs at the news outlet?

Deep Throat, and His Motive The Watergate story is considered perhaps American journalism’s defining accomplishment. Two intrepid young reporters for The Washington Post , carefully verifying and expanding upon information given to them by sources they went to great lengths to protect, revealed brutally damaging information about one of the most powerful figures on Earth, the American president. The question: Is protecting a source more important than revealing all the relevant information about a news story?

When Sources Won’t Talk The SPJ Code of Ethics offers guidance on at least three aspects of this dilemma. “Test the accuracy of information from all sources and exercise care to avoid inadvertent error.” One source was not sufficient in revealing this information. The question: How could the editors maintain credibility and remain fair to both sides yet find solid sources for a news tip with inflammatory allegations?

A Suspect “Confession” John Mark Karr, 41, was arrested in mid-August in Bangkok, Thailand, at the request of Colorado and U.S. officials. During questioning, he confessed to the murder of JonBenet Ramsey. Karr was arrested after Michael Tracey, a journalism professor at the University of Colorado, alerted authorities to information he had drawn from e-mails Karr had sent him over the past four years. The question: Do you break a confidence with your source if you think it can solve a murder — or protect children half a world away?

Who’s the “Predator”? “To Catch a Predator,” the ratings-grabbing series on NBC’s Dateline, appeared to catch on with the public. But it also raised serious ethical questions for journalists. The question: If your newspaper or television station were approached by Perverted Justice to participate in a “sting” designed to identify real and potential perverts, should you go along, or say, “No thanks”? Was NBC reporting the news or creating it?

The Media’s Foul Ball The Chicago Cubs in 2003 were five outs from advancing to the World Series for the first time since 1945 when a 26-year-old fan tried to grab a foul ball, preventing outfielder Moises Alou from catching it. The hapless fan's identity was unknown. But he became recognizable through televised replays as the young baby-faced man in glasses, a Cubs baseball cap and earphones who bobbled the ball and was blamed for costing the Cubs a trip to the World Series. The question: Given the potential danger to the man, should he be identified by the media?

Publishing Drunk Drivers’ Photos When readers of The Anderson News picked up the Dec. 31, 1997, issue of the newspaper, stripped across the top of the front page was a New Year’s greeting and a warning. “HAVE A HAPPY NEW YEAR,” the banner read. “But please don’t drink and drive and risk having your picture published.” Readers were referred to the editorial page where White explained that starting in January 1998 the newspaper would publish photographs of all persons convicted of drunken driving in Anderson County. The question: Is this an appropriate policy for a newspaper?

Naming Victims of Sex Crimes On January 8, 2007, 13-year-old Ben Ownby disappeared while walking home from school in Beaufort, Missouri. A tip from a school friend led police on a frantic four-day search that ended unusually happily: the police discovered not only Ben, but another boy as well—15-year-old Shawn Hornbeck, who, four years earlier, had disappeared while riding his bike at the age of 11. Media scrutiny on Shawn’s years of captivity became intense. The question: Should children who are thought to be the victims of sexual abuse ever be named in the media? What should be done about the continued use of names of kidnap victims who are later found to be sexual assault victims? Should use of their names be discontinued at that point?

A Self-Serving Leak San Francisco Chronicle reporters Mark Fainaru-Wada and Lance Williams were widely praised for their stories about sports figures involved with steroids. They turned their investigation into a very successful book, Game of Shadows. And they won the admiration of fellow journalists because they were willing to go to prison to protect the source who had leaked testimony to them from the grand jury investigating the BALCO sports-and-steroids case. Their source, however, was not quite so noble. The question: Should the two reporters have continued to protect this key source even after he admitted to lying? Should they have promised confidentiality in the first place?

The Times and Jayson Blair Jayson Blair advanced quickly during his tenure at The New York Times , where he was hired as a full-time staff writer after his internship there and others at The Boston Globe and The Washington Post . Even accusations of inaccuracy and a series of corrections to his reports on Washington, D.C.-area sniper attacks did not stop Blair from moving on to national coverage of the war in Iraq. But when suspicions arose over his reports on military families, an internal review found that he was fabricating material and communicating with editors from his Brooklyn apartment — or within the Times building — rather than from outside New York. The question: How does the Times investigate problems and correct policies that allowed the Blair scandal to happen?

Cooperating with the Government It began on Jan. 18, 2005, and ended two weeks later after the longest prison standoff in recent U.S. history. The question: Should your media outlet go along with the state’s request not to release the information?

Offensive Images Caricatures of the Prophet Muhammad didn’t cause much of a stir when they were first published in September 2005. But when they were republished in early 2006, after Muslim leaders called attention to the 12 images, it set off rioting throughout the Islamic world. Embassies were burned; people were killed. After the rioting and killing started, it was difficult to ignore the cartoons. Question: Do we publish the cartoons or not?

The Sting Perverted-Justice.com is a Web site that can be very convenient for a reporter looking for a good story. But the tactic raises some ethical questions. The Web site scans Internet chat rooms looking for men who can be lured into sexually explicit conversations with invented underage correspondents. Perverted-Justice posts the men’s pictures on its Web site. Is it ethically defensible to employ such a sting tactic? Should you buy into the agenda of an advocacy group — even if it’s an agenda as worthy as this one?

A Media-Savvy Killer Since his first murder in 1974, the “BTK” killer — his own acronym, for “bind, torture, kill” — has sent the Wichita Eagle four letters and one poem. How should a newspaper, or other media outlet, handle communications from someone who says he’s guilty of multiple sensational crimes? And how much should it cooperate with law enforcement authorities?

A Congressman’s Past The (Portland) Oregonian learned that a Democratic member of the U.S. Congress, up for re-election to his fourth term, had been accused by an ex-girlfriend of a sexual assault some 28 years previously. But criminal charges never were filed, and neither the congressman, David Wu, nor his accuser wanted to discuss the case now, only weeks before the 2004 election. Question: Should The Oregonian publish this story?

Using this Process to Craft a Policy It used to be that a reporter would absolutely NEVER let a source check out a story before it appeared. But there has been growing acceptance of the idea that it’s more important to be accurate than to be independent. Do we let sources see what we’re planning to write? And if we do, when?


Ethical Considerations in Research | Types & Examples

Published on October 18, 2021 by Pritha Bhandari. Revised on June 22, 2023.

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviors, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to

  • protect the rights of research participants
  • enhance research validity
  • maintain scientific or academic integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Other interesting articles
  • Frequently asked questions about research ethics

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research objectives with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB).

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process , so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.


Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study’s benefits, risks, funding, and institutional approval.

You make sure to provide all potential participants with all the relevant information about

  • what the study is about
  • the risks and benefits of taking part
  • how long the study will take
  • your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymize data collection . For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymization is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants but it’s harder to do so because you separate personal information from the study data.
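As a sketch of the idea (the `pseudonymize` helper and field names are hypothetical, not from the text), pseudonymization can be as simple as swapping each direct identifier for a random code and keeping the code-to-identity key table separate from the study data, under stricter access controls:

```python
import secrets

def pseudonymize(records, id_field="name"):
    """Replace a direct identifier with a random pseudonym.

    Returns (pseudonymized_records, key_table). The key table links
    pseudonyms back to identities, so it must be stored separately
    from the study data, with tighter access controls.
    """
    key_table = {}
    output = []
    for record in records:
        pseudonym = "P-" + secrets.token_hex(4)  # e.g. "P-9f2c1a7b"
        key_table[pseudonym] = record[id_field]
        cleaned = dict(record)          # copy so the input is untouched
        cleaned[id_field] = pseudonym   # identifier replaced in the copy
        output.append(cleaned)
    return output, key_table

records = [{"name": "Ada Smith", "score": 7}, {"name": "Li Wei", "score": 9}]
data, keys = pseudonymize(records)
```

The study data (`data`) can then be shared with analysts, while `keys` stays locked away; destroying the key table later converts the data set from pseudonymous to effectively anonymous.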

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources or counseling or medical services if needed.

If your study includes sensitive questions that may bring up negative emotions, inform participants about the sensitive nature of the survey in advance and assure them that their responses will be confidential.

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit at other researchers’ expense.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years: their sample sizes, locations, treatments, and results are highly similar, and the studies share one author in common.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine academic integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

Later investigations revealed that Andrew Wakefield and his co-authors had fabricated and manipulated their data to show a nonexistent link between vaccines and autism. Wakefield also failed to disclose important conflicts of interest, and his medical license was revoked.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants to investigate research problems at any cost. These participants were prisoners, patients under the researchers’ care, or people who otherwise trusted them to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

The first, the Nazi medical experiments performed on concentration camp prisoners, were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the second study, the U.S. Public Health Service’s Tuskegee syphilis study, participants were told they were receiving medical care; in reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
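The aggregate-reporting idea can be sketched as follows; the field names and the minimum cell size of 5 are illustrative assumptions, not values from the text. The safeguard is to publish only group-level averages and to suppress any group too small for an average to hide an individual:

```python
from collections import defaultdict

def aggregate_report(responses, group_field, value_field, min_cell=5):
    """Report only per-group averages, suppressing groups so small
    that an average could point to an individual respondent.
    The threshold of 5 is a common illustrative choice."""
    groups = defaultdict(list)
    for r in responses:
        groups[r[group_field]].append(r[value_field])
    report = {}
    for group, values in groups.items():
        if len(values) >= min_cell:
            report[group] = sum(values) / len(values)
        else:
            report[group] = None  # suppressed: cell too small to report
    return report

responses = (
    [{"dept": "nursing", "stress": s} for s in [6, 7, 5, 8, 6]]
    + [{"dept": "admin", "stress": 9}]  # only one respondent
)
report = aggregate_report(responses, "dept", "stress")
print(report)  # the lone 'admin' respondent is suppressed, not reported
```

Reporting `None` (or omitting the row) for the single-respondent group means no reader can infer that individual’s answer from the published figures.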

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.


Bhandari, P. (2023, June 22). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/methodology/research-ethics/

Clinical Researcher

Lessons Learned from Challenging Cases in Clinical Research Ethics

Clinical Researcher April 12, 2024


Clinical Researcher—April 2024 (Volume 38, Issue 2)

RESOURCES & REVIEWS

Lindsay McNair, MD, MPH, MSB

[A review of Challenging Cases in Clinical Research Ethics. 2024. Wilfond BS, Johnson L-M, Duenas DM, Taylor HA (editors). CRC Press (Boca Raton, Fla.)]

Challenging Cases in Clinical Research Ethics may not be a book you take to the beach for a light read, but if you have a role, or an interest, in how we analyze the complex ethical challenges that are an integral part of conducting clinical research, it may be a good book for you. This is a reference book, a teaching tool, and, in some ways, a historical record.

While healthcare institutions have long had ethics committees or even trained clinical ethicists to provide consultation to staff and families during difficult situations in clinical care settings, the specialized practice of clinical research ethics consultation is much more recent. As described in the foreword of the book, the development of this kind of resource was spurred by the National Institutes of Health’s (NIH’s) Clinical and Translational Science Awards (CTSA) program, a funding mechanism which supports a network of almost 60 medical institutions across the United States to facilitate collaboration that expedites the design and dissemination of new medical advances. Since a requirement of the funding program is that the institutions must have ethical support services, the CTSA-funded institutions created ethics consultation services that focused on the research ethics issues likely to arise from the CTSA-funded work.

In 2014, the leaders of the clinical research consultation services across the organizations formed a group to share information and best practices, called the Clinical Research Ethics Consultation Collaborative (CRECC). The CRECC continues to be an active group, and membership is open to anyone who is in a role related to clinical research ethics practice, including representatives not just from the CTSA-funded institutions, but also from biopharmaceutical companies and independent contributors.

This book arose from the work of the CRECC. The cases discussed in the book are real situations at research institutions across the U.S. for which the persons involved sought advice from their local consultation services, and the consultants brought the case to CRECC for discussion. The editors make a point of saying that by the time of the finished case discussion, each case involved 30 to 50 consultants, and they recognize almost 170 contributors to the book, including most of the best-known and most well-respected research bioethicists.

Each year, the American Journal of Bioethics has published up to four of these case presentations, along with two to four commentaries on the case from different ethicists to provide a variety of approaches, perspectives, and opinions. These cases and the accompanying commentaries comprise this book.

The editors have organized the book around the ethical principles for research ethics that were described in a seminal paper by Emanuel, Wendler, and Grady in 2000,{1} resulting in five main sections focused on collaborative partnerships, respect for participants, fair participant selection, favorable risk-benefit ratio, and informed consent. Because they also recognize that there were many possible ways to organize the material and that someone looking for discussion of a specific topic may want to be able to search in more detail, the book includes three separate appendices: one that lists cases by primary and secondary ethical principles involved, one that lists cases by topic keywords (e.g., pediatrics, Phase I trials, social media), and one that lists cases by values relevant to the discussion (social value, equity, and trustworthiness), as well as a standard index which lists topics, people, policies, and keywords and the pages on which the terms appear or are discussed.

In each section, an editor presents a brief description of the unifying theme of that section, and then short summaries of each of the five to eight cases under that theme. The section then delves into each case in more detail with an introduction that includes any necessary background context (disease details, standard of care framing, existing policy), a case description (often just a page or two), references, and then one to four commentaries.

The commentaries, each by different authors, approach different considerations or aspects of the case, together providing a variety of opinions and a well-rounded discussion. For example, there is a case focused on a request from a study team to unblind a participant’s treatment assignment after an adverse event (to help determine relationship to study drug and whether other participants were also at risk, or whether the event was a symptom of the underlying condition). The commentaries are presented by two ethicists from a sponsor company discussing the ethical issues of unblinding and the impact on study data; an ethicist from the NIH discussing considerations of a data monitoring committee in making decisions that will impact studies; and an ethicist involved in health monitoring programs for chronic illness who discusses issues of community trust and communication. The editors and commentators are careful to focus on the relevant ethical issues and conflicts, and not on operational or regulatory requirements, although they do address those considerations.

Although the cases all stem from situations that developed at research institutions, almost all of the content is relevant to other audiences in the clinical research ecosystem, including situations encountered in biopharmaceutical-sponsored studies that industry leaders have to think about. For example, there are cases that discuss ethical implications of advertising for research participants on social media, whether compensation for participation can (or should) be withheld from a participant who was intentionally deceptive to get enrolled in the study, how extensive the “alternative options” presented in a consent form should be, and whether a patient with advanced cancer must exhaust all possible treatment options before being allowed to enroll in a Phase I study of a new immunotherapy.

There are a number of ways that teachers, trainers, and leaders could use the content of this book both for education, and as the basis for case-based discussions. Overall, I would recommend this book as a resource for anyone in a training or leadership role, both for personal education and as a useful tool for developing training content that will likely prompt thoughtful discussion.

  • Emanuel EJ, Wendler D, Grady C. 2000. What makes clinical research ethical? JAMA 283(20):2701–11. doi:10.1001/jama.283.20.2701. PMID:10819955.


Lindsay McNair, MD, MPH, MSB, is a physician, research ethicist, and Founder and Principal Consultant of Equipoise Consulting LLC, which provides consulting for projects related to the scientific and ethical conduct of research studies and drug development programs. She joined the Clinical Research Ethics Consultation Collaborative (CRECC), from which the authors of the reviewed book drew their case discussions, in 2023, when the book was already in the process of publication.


Measuring adherence to AI ethics: a methodology for assessing adherence to ethical principles in the use case of AI-enabled credit scoring application

  • Original Research
  • Open access
  • Published: 15 April 2024


  • Maria Pokholkova (ORCID: 0000-0002-6294-0669)
  • Auxane Boch
  • Ellen Hohma
  • Christoph Lütge

This article discusses the critical need to find solutions for ethically assessing artificial intelligence systems, underlining the importance of ethical principles in designing, developing, and employing these systems to enhance their acceptance in society. In particular, measuring AI applications’ adherence to ethical principles is identified as a major concern. This research proposes a methodology for measuring an application’s adherence to acknowledged ethical principles. The proposed concept is grounded in existing research on quantification, specifically the Expert Workshop, which serves as a foundation of this study. The suggested method is tested on the use case of AI-enabled credit scoring applications, using the ethical principle of transparency as an example. Experts in AI development, AI ethics, finance, and regulation were invited to a workshop. The study’s findings underscore the importance of ethical AI implementation and highlight the benefits and limitations of measuring ethical adherence. The proposed methodology thus offers insights into a foundation for future AI ethics assessments within and outside the financial industry, promoting responsible AI practices and constructive dialogue.


1 Introduction

The misuse of decision-making artificial intelligence (AI) systems leads to unintended consequences stemming from the computational techniques and AI infrastructure employed in their development. Footnote 1 According to Ayling and Chapman [ 7 ], who focus on the epistemic concerns of technologies, Footnote 2 misuse stems from traditional data harms: non-intentional harms that result in individuals’ problems associated with privacy violations, Footnote 3 discrimination, Footnote 4 and automatic consent. Footnote 5 Recently, a growing body of research has addressed ethical compliance in the global AI landscape, encompassing Europe and beyond. Footnote 6 The authors also stress the need for practical tools that go beyond high-level ethical principles and focus on applying these principles in AI production and deployment, emphasizing the importance of addressing the “how” of applied ethics rather than just the “what.” Footnote 7 Those efforts are translated, for example, into frameworks Footnote 8 in the public domain (recruitment, education, enforcement), which aim to systematically ensure that the high-level principles are operationalised. Footnote 9 However, Attard-Frost et al. [ 6 ] state that few AI ethical guidelines focus on fairness, accountability, and transparency within technical systems. Footnote 10 Jobin et al. [ 34 ] note, importantly, that the public tends to hold a polarized view of AI algorithms, perceiving them as either bad or good; at the same time, the ethical implications of AI technologies should be addressed at the level of design. That explains why assessing AI technology at every step of the AI lifecycle is important for tackling the problem of misuse.

Many frameworks, principles, protocols, and guidelines aim to evaluate the impact of AI technology and even provide standards for its quality in different domains. For instance, Value-Based Engineering (VBE), published by IEEE, prioritizes ethical considerations in designing AI systems. Footnote 11 Technical experts from the United States and China collaborate on AI technical standards globally. Footnote 12 However, governments struggle to cooperate on ethical AI standards, namely on the issues addressed, actors involved, and strategies used. Footnote 13 This fact postpones the establishment of global governance frameworks and results in a lack of interpretation of ethical AI rules and their operationalization for more detailed and concrete cases. This absence of a standardized approach to ethical AI has contributed to a global disparity in consumer trust in AI systems. Footnote 14 However, according to Omrani et al. [ 46 ], this trust can be enhanced by maximizing the technological features of AI systems. Hooks et al. [ 26 ] claim that the technological acceptance of various newly emerged AI-specific applications directly correlates to levels of trust in AI in general. Footnote 15 The research on trust in AI lacks in-depth examinations of specific AI cases and, specifically, of the impact of the underlying trust factors on those cases. An analogy to this research problem can be illustrated with an example from research on evaluating the impact of Environment, Social, and Governance (ESG) reporting. Namely, research demonstrates that employing qualitative and quantitative methods effectively reveals the direct positive effect of ESG reporting on consumer trust in a company’s brand, product, and service. Footnote 16 Consistent methods for quantifying compliance with the declared principles and values of AI systems’ developers and deployers, including the mentioned high-level principles of ethics, become desirable and essential in establishing trust and promoting the adoption of AI systems.

Besides the challenges associated with integrating AI infrastructure within business organizations, which encompass both AI developed by businesses and AI employed within businesses, there are critical concerns related to ownership rights, cybersecurity, and data protection. Footnote 17 Moreover, those concerns should be tackled while achieving economic beneficence. According to data collected from Western sources, at least in Western literature, neither AI developers nor businesses using AI are obliged to follow jurisdictional rules or international AI principles. Footnote 18 Globally, the fragmented regulatory landscape, the variation of ethical considerations across countries, and the inherent complexity of AI raise the probability of emerging AI systems that risk harming humans. On national levels, the lack of a clear interpretation of political acts regarding the requirements for the ethics of AI widens the gap between public policy and the practice of using AI systems. Footnote 19 In these circumstances, assessing the level of trustworthiness of tools using AI systems is therefore rather difficult due to organizational and structural risks. Footnote 20 In terms of businesses, the lack of a proactive strategy for integrating AI ethics into the corporate structure is explained by the “wait-and-see” policies arising from uncertainty. Footnote 21 This is why the current state of “ethical governance” Footnote 22 is underdeveloped. At the same time, the ability to effectively control the ethicality of AI systems constitutes a competitive advantage for such businesses, as it improves overall product quality and consumer trust. Footnote 23

This paper seeks to contribute to current research on the implementability of AI ethics by using expert elicitation, a technique already applied to problems in political science, government, statistics, management science, and psychology. Expert elicitation asks a group of qualified individuals to express their opinions and judgments regarding uncertain events in terms of probabilities. According to scientists who have applied statistical methods to elicit expert knowledge, elicitation allows subjective beliefs and opinions to be incorporated into probabilistic models. Despite the strong opinion that expert judgment cannot be quantified, and that a category such as ethics cannot or should not be assessed quantitatively, statistical modeling in eliciting expert judgments has proved effective in predicting complex physical phenomena. Footnote 24

Due to the rapid entry of AI systems into the market, there is a pressing need for a self-consistent assessment method that would allow us to decompose the ethical characteristics of AI products and assess them. This problem can be addressed by involving qualified experts who can not only select features upon which to quantitatively evaluate the components of the overall composition but also consider the collective contribution of each feature to the overall assessment. Expert Workshop (EW), one of the approaches that employ statistical modeling, namely weighted sums, in expert judgment elicitation, fulfills these requirements. Publishing results from EW could attract and engage a broader range of experts in establishing quantitative assessment criteria for AI systems. Ultimately, this effort seeks to improve both the quality of AI systems and the performance of the financial companies utilizing them.

Therefore, in this paper, EW will be used to quantify the expert judgment of compliance with an ethical principle by an AI system that will be used for testing. This study aims to create a digital image of one of the characteristics of AI ethics used in the financial sector using data from a selected group of qualified experts. This paper illustrates the discussion using a case study involving an Expert Workshop where experts proposed a system of numerical criteria to assess the compliance of the AI Credit Scoring system with the principle of transparency.

Considering this, the hypothesis of this research can be formulated as follows: “Quantitative assessment of the constituent elements of AI ethics can be carried out based on a generalized expert opinion using statistical modeling (weighted sums) in expert judgment elicitation.” Although our proposed metrics involve weighted sums of expert judgment, it is essential to recognize that this approach may have limitations compared to other potential methods. This paper will address these issues throughout the article, providing a comprehensive analysis of the strengths and weaknesses of the proposed methodology.
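The weighted-sum aggregation the authors refer to can be illustrated with a minimal sketch. The transparency criteria, scores, and weights below are invented for illustration and are not the workshop's actual output:

```python
def transparency_score(criterion_scores, weights):
    """Weighted-sum aggregation of elicited expert ratings.

    criterion_scores: per-criterion scores (e.g. on a 0-10 scale),
    already averaged across experts; weights: expert-assigned
    importance weights that must sum to 1.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * criterion_scores[c] for c in weights)

# Hypothetical transparency criteria for a credit-scoring application
scores = {"explainability": 6.0, "data_provenance": 8.0, "disclosure": 4.0}
weights = {"explainability": 0.5, "data_provenance": 0.3, "disclosure": 0.2}
overall = transparency_score(scores, weights)
print(overall)  # ≈ 6.2, i.e. 0.5*6 + 0.3*8 + 0.2*4
```

The weights encode the experts' consensus on each criterion's relative importance, so the single aggregate number remains traceable back to the individual judgments it summarizes.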

This paper will first provide a theoretical background defending the proposed methodology of the Expert Workshop for defining quantifiable measures of AI ethics principles. Subsequently, the outcomes of the proof of concept Expert Workshop will be presented, followed by a discussion on the usability and effectiveness of the proposed methodology.

2 Theoretical framework

2.1 Literature review on AI ethics measurement techniques

The analysis of AI ethics research can be presented in two categories: one that examines principles by comparing and classifying them without contextualizing them to a specific industry, and another that analyzes techniques tailored to address the challenges of AI ethics integration in particular sectors. An example of the first category is the AI regulation strategy document, “Ethics Guidelines for Trustworthy AI,” designed and published on behalf of the European Commission’s AI High-Level Expert Group (AI HLEG). Footnote 25 The document is a non-obligatory framework within the European Union that implies the implementation of procedures into businesses that guarantee seven ethical principles: fairness, transparency, privacy, security, accountability, reliability, and safety. Footnote 26 Unlike other prominent high-level frameworks like the IEEE Global Initiative on Ethics Footnote 27 and the Montreal Declaration on Responsible AI, Footnote 28 AI HLEG focuses on the scope of Europe and promotes a regulatory approach that involves influencing regulatory bodies and policymakers when designing AI ethics implementation procedures. The second category of research encompasses an engineering approach of integrating ethics into the design of a concrete AI tool, Footnote 29 i.e., algorithmic decision-making. The research findings reveal that the ethical principles of fairness, transparency, and accountability are underrepresented in AI ethical business practices and, due to the disciplinary scope of ethics, are being replaced by speculative norms, i.e., corporate secrecy. Footnote 30 This suggests the need for more transparent methods to align AI ethics considerations with the business practices of organizations that deploy AI.

Understanding the difference between different quantification approaches that aim to assess the ethicality of AI systems is essential, as the optimal synergy between the most helpful assessment techniques is a must for strengthening the use of AI systems. For instance, literature distinguishes existing quantification frameworks that concern AI ethicality based on their efficacy, scope, focus, purpose, and manner in which they connect the cause and effect of the AI systems. Footnote 31 Among them are impact assessments, technology assessments, audits Footnote 32 tailored to an industry that involves AI ethics aspects, and design toolkits like value-centered design. Footnote 33 The impact assessments are also used for industry and business-specific purposes like achieving sustainable goals or ensuring stakeholder participation. Footnote 34 At the same time, there is no universally accepted rating of the most efficient or least efficient quantification methodologies for evaluating the integration of AI ethics principles into business practices. Given the evolving nature of the AI ethics field, adaptable and context-specific quantification methods play a valuable role. The method proposed in this paper aims to contribute to the ongoing dialogue and practical application of ethical principles in AI business contexts.

When defining methods, AI research often focuses on qualitative evaluations of AI systems' adherence to ethical principles or legal standards. Footnote 35 Such assessment is also crucial for monitoring product quality, giving businesses insights for enhancing product competitiveness in the market, and ensuring that consumers' rights are respected. Footnote 36 Options for a comprehensive evaluation of the ethicality of AI systems include ethical audits and assessments, frameworks, and guidelines developed by international, Footnote 37 national, Footnote 38 and industry-led initiatives Footnote 39 using interdisciplinary approaches. In addition, some practical guidelines or systems designed to measure AI systems' ethical qualities are proposed by ethics consultancies to ensure impartiality. For example, some publications evaluate AI systems' ethicality using a labeling approach introduced by the AI Ethics Impact Group, Footnote 40 tailored to a concrete tool's specific context.

Another example is the TÜV SÜD AI Quality Framework, Footnote 41 which proposes to measure the risk of non-compliance with the legal framework for AI systems by calculating the severity of the AI system's ethical implications and the scope of the corresponding industry. These qualitative approaches are often regarded as practical decision-making tools with the potential to serve as monitoring tools for AI system characteristics. They can be helpful to various stakeholders with different needs, including policymakers, regulators, and business owners.

The quantification these models allow is a competitive advantage: it provides control over the ethical quality of an AI system and thereby simplifies organizations' harmonization with the AI standards to be outlined in the EU AI Act. Footnote 42 However, the frameworks mentioned above can also be considered too complex to understand and implement. Moreover, the current generation of AI ethics quantification frameworks has yet to offer compelling evaluation examples: the quality of these assessments remains unclear, shielded behind non-disclosure agreements (NDAs) and corporate confidentiality. Finally, there is no evidence that these frameworks consider the importance of bringing various stakeholders to a consensus on the definition of ethical assessment.

A fully quantitative assessment of ethics in general, and of AI ethics and an AI application's ethical adherence in particular, is impossible given the complexity and abstractness of these concepts as philosophical categories. At the same time, when AI systems are used in practice, a measurable level of trustworthiness, as described in the HLEG [ 24 ] work and in experts' opinions on such systems, is helpful. Undoubtedly, the factors that characterize trustworthy AI, including AI used in the financial sector, include ethics and integrity. At the current stage of development and practical use of AI systems, it seems appropriate to decompose the general concept of ethics into components, each of which can be assessed quantitatively through the elicitation of expert knowledge. Merging the databases of component evaluations, weighted by their importance, into a single database yields an "image" of the ethics of an AI system that can be considered when assessing its trustworthiness. An Expert Workshop (EW), a seminar-style approach that combines individual and group-based techniques to address complex ethical challenges or other problematic situations by leveraging the collective expertise of participants, seems a suitable methodology in this case.

2.2 Reusing the concept of expert workshop for quantification of adherence to ethical principles

The Expert Workshop (EW) is a seminar-based method that offers a strategy for understanding and quantifying complex phenomena by involving 10–25 professionals specialized in the phenomena concerned. Footnote 43 The method was developed in the doctoral dissertation of Tolkacheva, Footnote 44 whose idea was to systematize a set of well-known practices and techniques for problem-oriented training of specialists in the field of engineering and technology. Two workshops were conducted using the EW methodology to assess students' adherence to Sustainable Development (SD) values Footnote 45 with Russian and international higher education institution (HEI) stakeholders, namely engineering students and educators. The experts were selected and invited to assess the level of Sustainable Development mindset formation among engineering students. The characteristics chosen by both groups of experts were quantitative, relative, and applicable to any university and its engineering students. The authors also categorized the characteristics into three distinct groups: the level of the university's commitment to SD goals, the SD mindset in the student community, and individuals' adherence to SD values. All characteristics represented a percentage or a share of the entity (e.g., "average % of study time within engineering courses devoted to SD issues").

Based on the expert assessment, the level of sustainability development (SD) mindset formation among engineering students at the investigated universities was found to be low: 73% of the evaluation criteria fell into the "low" category. At the same time, according to the authors, Footnote 46 a comparison between the initial intuitive assessment and the subsequent quantitative assessment revealed that defining quantitative criteria and applying quantitative scales led to a more comprehensive analysis and a more critical evaluation. The initial assessment indicated a much higher level of SD mindset formation, with 43.7–52.8% of responses suggesting a level above "low," whereas the later evaluation showed no indication of a level above "low."

The authors argue that a high-quality expert selection process is crucial for building a correct and comprehensive digital "image" of the problem. If a repeated EW is conducted, the probability of new characteristics emerging is close to zero: if the experts were selected correctly, they are highly qualified, and the criteria they propose will likely align with, or be similar to, those suggested by other potential future experts. Nonetheless, with each successive EW, the digital "image" of the phenomenon becomes increasingly detailed. This process not only helps stakeholders undergo a conscious transformation but also fosters their inner motivation and understanding of the transformation problem being addressed. A prerequisite of the EW is the experts' competence, based not on their position or level of qualifications but on their experience, direct involvement, and knowledge of the practical aspects of the problem.

The process for conducting expert research follows a structured sequence of steps (see Fig. 1). Preparation starts with selecting a seminar topic, subject, and research problem, followed by formulating requirements for experts and, finally, inviting the chosen participants. Once selected, the experts are briefed on the seminar's goals.

Fig. 1 The procedure of the expert workshop

The workshop begins with the qualitative phase, during which experts collaborate to adopt a definition, make assumptions, formulate the main question, and select a qualitative assessment scale. Individual surveys gather qualitative expert opinions, and expert teams nominate characteristics for quantitative assessment. In this phase, moderation plays a key role during the nomination of characteristics. The facilitator ensures that the participants reach a consensus: moderated deliberation makes the selection of characteristics fairer and helps the discussion conclude with formulations the participants agree on. The moderator's task is to guide the discussion, maximize consensus within the group, facilitate precise phrasing of the characteristics, and help the group select the best-argued and most informative ones. At the same time, by participating in the discussion, experts refine their opinions and formulate improved characteristics. If deliberation and opinion formation take too long, voting can help conclude the consensus process. Additionally, if all participants jointly formulate new characteristics based on those proposed by the groups, these are written down and included in the vote.

The most informative characteristics are selected during a participants' discussion, leading to the construction of a 5 × 5 matrix. This is followed by the quantitative phase of the workshop, in which criteria for the object's condition are established and the comparative information content of each criterion is determined. Finally, experts provide quantitative assessments, which are mathematically processed to construct a model describing the subject of research based on the selected criteria and their contributions. This process guides the transition from problem formulation to quantification and model creation.

2.3 Quantitative assessment and calculation process in expert workshop methodology

First, the "Aggregated Quantified Assessment" of the researched quality of a subject is calculated by multiplying each Status Quo value by its corresponding Ratio of Importance and summing the results. This assessment provides a numerical value representing the level of the subject's quality. It uses relative values (KSQ1 to KSQ5) assigned to the characteristics selected by experts to evaluate the general quality (each ranging from 0 to 1). Each characteristic's value is weighted by a ratio of importance (ɣ1 to ɣ5), where the sum of these ratios equals 1. The result is a generalized quantitative assessment of the subject's quality level that, according to the experts' perception, adequately represents present reality. It is calculated as follows:

$$\mathrm{AQA} = \sum_{i=1}^{5} K_{SQ_i} \cdot \gamma_i,$$

where KSQ1 … KSQ5 are the calculated relative values of the characteristics selected by experts to assess the current level of adherence of AI Credit Scoring to the principle of transparency (0–1), and ɣ1 … ɣ5 are the ratios of importance, i.e., relative assessments of the specific weight of the selected criteria, within (0–1).
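The aggregated assessment just described can be sketched in a few lines of Python; all values below are illustrative, not taken from the workshop:

```python
# Sketch of the Aggregated Quantified Assessment (AQA): a weighted sum of
# expert-assigned characteristic values KSQ_i (each in [0, 1]) and their
# importance ratios gamma_i (which must sum to 1). Values are illustrative.

def aggregated_assessment(ksq, gamma):
    """Weighted sum of characteristic values; returns a value in [0, 1]."""
    assert len(ksq) == len(gamma)
    assert abs(sum(gamma) - 1.0) < 1e-9, "importance ratios must sum to 1"
    return sum(k * g for k, g in zip(ksq, gamma))

# Five hypothetical characteristics (KSQ1..KSQ5), e.g. for the transparency
# of an AI credit-scoring system, with their importance ratios:
ksq = [0.6, 0.4, 0.7, 0.5, 0.3]
gamma = [0.3, 0.25, 0.2, 0.15, 0.1]

print(round(aggregated_assessment(ksq, gamma), 3))  # → 0.525
```

The assertion on the ratios mirrors the methodological requirement that the ɣ values sum to 1; dropping it would let an ill-formed weighting pass silently.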

The Quantitative Assessment of the Levels of Qualitative States (QALQS) is calculated from K i, the value of the i th criterion, and ɣ i, the specific weight of the i th criterion. Five qualitative states of quality are defined: "Critically Low," "Low," "Satisfactory," "Good," and "Excellent." For each state, a numerical value is calculated as the weighted sum of the characteristic values ( K i ) for that state using the specific weights (ɣ i ), i.e., $\mathrm{QALQS}_s = \sum_{i=1}^{5} K_i^{(s)} \cdot \gamma_i$. Equations ( 3 ), ( 4 ), ( 5 ), ( 6 ), and ( 7 ) represent the thresholds of the different quality states, and the calculated values classify the subject's quality into one of these qualitative states.

This calculation not only allows for a generalized quantified number for each of the states of quality but also accounts for the importance of each of the characteristics for the general quality of the subject.

The third calculated result, the Qualitative Expert Judgement (QEJ), comes from an intuitive survey on the subject's current state. A scale from 0 to 1 is used to represent the qualitative judgments obtained from the survey. These judgments are expressed as shares, indicating the percentage of respondents who selected each qualitative category (e.g., critically low, low, satisfactory, good, excellent). An example survey question is: "What is, in your opinion, the current quality state of subject X?" For coherence with the numeric framework, the survey's answers are coded as E, and the five E results, expressed as shares, match the quality states already used (critically low, low, satisfactory, good, excellent).

Finally, step four, the "Quantified Assessment of Average Collective Judgment of Experts" (QAACJE), is performed by multiplying the QEJ values by the generalized scale values QALQS of the respective quality states (Fig. 2).

Fig. 2 Assessment of subject quality and quantitative evaluation by experts

The final result is calculated in Eq. ( 8 ):

$$\mathrm{QAACJE} = \sum_{j=1}^{5} E_j \cdot \mathrm{QALQS}_j. \tag{8}$$

This multiplication forms a new scale uniting the experts' qualitative and quantitative perceptions, and summing those values yields a number that summarizes the experts' assessment of the subject's quality. It is important to clarify that QAACJE is a valuable component of the Expert Workshop methodology, as it enhances the method's flexibility and effectiveness in transforming opinions into quantifiable data. However, the method of weighted sums does not eliminate the subjectivity of expert opinions. The strengths and weaknesses of the EW procedure and its metrics are discussed in the next chapter.
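The final aggregation of Eq. (8) can be sketched as follows; the QALQS values and survey shares are illustrative placeholders, not workshop data:

```python
# Sketch of Eq. (8): the Quantified Assessment of Average Collective
# Judgment of Experts (QAACJE). The survey shares E_j (fractions of experts
# choosing each qualitative state) are multiplied by the quantified level
# QALQS_j of that state and summed. All numbers below are illustrative.

states = ["critically low", "low", "satisfactory", "good", "excellent"]

# QALQS_j: weighted-sum value previously computed for each qualitative state
qalqs = [0.1, 0.3, 0.5, 0.7, 0.9]

# E_j: share of experts who picked each state in the intuitive survey
e_shares = [0.0, 0.2, 0.5, 0.3, 0.0]
assert abs(sum(e_shares) - 1.0) < 1e-9  # shares must sum to 1

qaacje = sum(e * q for e, q in zip(e_shares, qalqs))
print(round(qaacje, 3))  # → 0.52
```

The result lands between the "low" and "satisfactory" anchor values, which is exactly the kind of blended reading the combined qualitative–quantitative scale is meant to produce.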

2.4 Exploring the nuances of expert workshop (EW)

According to Morgan [ 42 ], poorly done expert elicitation, when used for applied decision analysis, can discredit the whole approach and lead to useless or deceptive results. Moreover, the elicitation procedure should account for inherent biases and minimize them both within the process and in the results. Therefore, certain principles, and clear interpretations of the methods, techniques, and underlying concept, are necessary for the reproducibility of the EW methodology. Finally, the methodology's weaknesses should be minimized by raising awareness of its shortcomings and explaining the measures for controlling the method.

2.4.1 The principles of EW preparation and conduction

One of the most crucial steps in organizing an EW is the selection of experts. Experts are professionals in a specific field with accumulated knowledge and expertise, complemented by a deep understanding of the subject's constraints and advantages. Selecting appropriate experts rests on three principles: qualifications in the relevant field of research, a high level of engagement, and professional interest in finding a solution to the problematic situation addressed in the workshop. Organizers of the EW ensure that at least two of the following principles are satisfied when selecting experts:

The principle of Relevance is implemented through the study of publications and information about conferences, seminars, and other events, which allows the identification of a pool of qualified researchers on the relevant issue who may subsequently be invited to participate in the EW. For example, invited experts should have at least one publication on the subject of investigation in the last three years or a minimum of two years of work experience in the context of the subject.

The principle of Engagement is realized by inviting experienced individuals who have knowledge about the phenomenon studied in the EW from their professional activities or everyday lives. Often, the expert opinions of such individuals are no less valuable than those of qualified expert researchers.

The principle of Motivation : individuals show motivation to resolve the problem of the phenomenon under study. This is particularly important due to the necessity of collectively finding ways to resolve the researched problem during the EW seminar.

Regarding the principles of EW conduction, facilitators play a crucial role in maintaining neutrality towards the experts' various perspectives. This vital principle ensures that experts feel comfortable expressing their opinions, without pressure to adopt a dominant viewpoint, and that conditions are created for experts to express their ideas.

2.4.2 Comparison with other expert judgment elicitation methods

Several categories of methods can be distinguished among the numerous publications on expert elicitation, i.e., methods for gathering the insights and opinions of knowledgeable individuals in a particular field under uncertainty. Many are expert elicitation methods explicitly tailored to the public sector, Footnote 47 environmental science and risk assessment, Footnote 48 policy analysis, Footnote 49 etc. Even though some techniques claim the ability to assess phenomena, Footnote 50 it is unclear whether any method comparable to the EW specializes in assessing a phenomenon's state.

In the context of the Expert Judgment Elicitation (EJE) taxonomy, a categorization system that organizes the various methods and approaches used in expert judgment elicitation, the EW can be attributed to both qualitative and quantitative methods, using fluent as well as numerate techniques. Footnote 51 Fluent methods gather qualitative or descriptive information from experts, aiming to capture their subjective insights, opinions, or experiences without quantifying them into numerical values. Numerate methods aim to provide more precise assessments and can include probability estimation. The EJE concept provides a foundation for describing the EW method, which combines direct and indirect elicitation as well as individual and consensus aggregation. Footnote 52 Therefore, according to the EJE taxonomy, the EW is a mixed-method group elicitation approach that combines qualitative expert judgment methods with quantitative, mathematical methods such as the weighted factor method. Footnote 53

The EW can be compared to the Delphi method, a consensus-building technique that uses questionnaires to collect participant data. Footnote 54 However, the Delphi method typically draws on the opinions of geographically dispersed experts Footnote 55 and therefore builds on electronic and anonymous communication, which leaves no room for clarification when interpreting the results. In contrast, the EW allows for collaborative face-to-face interactions among experts, facilitating the development of agreed-upon judgments and the selection of informative numerical criteria. Delphi consists of 3–4 iteration rounds in which experts give their statements and then reassess them to reach a consensus at the end of the process. Compared to the EW, the Delphi method presents design vulnerabilities: it has no requirement that participants be present and engaged, and it obliges participants to commit a large block of time (e.g., two weeks). Footnote 56

Another method, the Analytic Hierarchy Process (AHP), is a mixed-method approach that uses pairwise comparisons to derive weights. This method compares better to the EW, as both use literate and numeric metrics. AHP is a practical decision-making method that divides complex problems into hierarchical structures, allowing elements to be compared and weights to be calculated from expert judgments and the relationships between factors. Footnote 57 Specifically, the similarity is that both AHP and the EW, unlike freeform methods such as brainstorming, use scaling methods with discrete and continuous ratings (e.g., 0–1).
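For contrast with the EW's direct 0–1 ratings, the way AHP turns pairwise comparisons into criterion weights can be sketched as follows; the comparison matrix is made up, and the row geometric mean is a standard approximation of the principal-eigenvector weights:

```python
# Sketch of AHP weight derivation from a pairwise comparison matrix, using
# the row geometric-mean approximation of the principal eigenvector.
# The matrix entries are illustrative expert judgments.

import math

# pairwise[i][j] = how much more important criterion i is than criterion j
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

# Geometric mean of each row, then normalize so the weights sum to 1.
gmeans = [math.prod(row) ** (1 / len(row)) for row in pairwise]
weights = [g / sum(gmeans) for g in gmeans]

print([round(w, 3) for w in weights])
```

Note that AHP outputs relative priorities among alternatives or criteria, whereas the EW's weighted sums are meant to characterize the state of a phenomenon; the sketch highlights how different the elicitation inputs are even though both end in normalized weights.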

According to the literature review, the EW is a unique method for gathering expert judgments and selecting informative numerical criteria. It is distinct from established methods like AHP in its focus on assessing the state of a problem or phenomenon rather than choosing among alternatives. Thus far, evidence suggests that expert elicitation methods primarily address selecting among multiple alternatives. While the EW can be formally compared to AHP, as both employ quantitative metrics, the applicability of weighted-sum metrics for expert judgment elicitation requires further research.

2.4.3 Accuracy and metrics (weighted sum method)

Human brains struggle to process large amounts of data or perform intricate statistical computations; Footnote 58 therefore, there is no technical possibility of validating the accuracy of results obtained by eliciting expert judgment. Uncertainty must be accepted when judging the probability of events, as must the inherent cognitive biases. The EW addresses the statistical representation of expert knowledge: the accuracy and statistical significance of the results depend on the competence and the number of experts involved. Expanding the number of experts and raising their competence level would increase the accuracy of the group result. However, participation should be limited to approximately 25 people to keep the discussion engaging and manageable.

The structure of the matrix approach illustrates why the number of participants must be limited. The accuracy of the digital portrait of the investigated phenomenon depends on the number of characteristics selected to describe it and on the numerical criteria for evaluation. More characteristics mean a more detailed and accurate digital portrait; the same applies to the range of the qualitative assessment scale. The qualitative scale used in the Expert Workshop is a Likert scale with five states of the phenomenon, providing standardization and accuracy across assessments. Footnote 59 At the same time, the greater the number of features and the wider the range of qualitative assessments, the more work the experts must do during the seminar. A 10 × 10 matrix, used to obtain a digital image of the investigated phenomenon, may allow maximum and minimum values to be excluded during statistical processing, but it significantly increases the duration of the seminar: a 5 × 5 matrix allows a workshop with 15–20 participants to be conducted in 3.5–4.0 h, whereas a 10 × 10 matrix would take at least 8 h. According to previous tests of the methodology and the experts' feedback, an acceptable level of accuracy of the digital image of the subject under investigation is achieved with a 5 × 5 matrix. Footnote 60

Apart from the quality of the experts, the selection of metrics directly impacts the accuracy and reliability of the results obtained through expert judgment elicitation and helps reduce bias. Multi-criteria problems are fundamentally more complex than single-criteria problems and require dedicated solution methods. Footnote 61 The Expert Workshop conducts a multi-criteria assessment, for which the weighted sum method (WSM) is convenient. The WSM is a decision-making approach that aggregates multiple criteria into a single composite criterion, typically represented as a weighted sum of the individual criteria. Footnote 62 Such calculations apply to decision-making tasks in various scenarios, such as selecting the best option or several best options, ordering all options by preference, and assessing characteristics. However, solving multi-criteria problems, as in multi-criteria decision analysis (MCDA), requires significant effort to gather and process decision-makers' preferences, which can be resource-intensive and time-consuming. Moreover, since MCDA relies on subjective decision-maker preferences, there are no objective solutions for comparison, which poses challenges in evaluating results against benchmarks.

The simplicity of the 0–1 continuous scale used in the Expert Workshop's quantification phase is beneficial because it allows for an intuitive assessment process. Despite the complexity of the studied problem, which involves ethical considerations, this approach streamlines the evaluation, making it easier for experts to provide their insights. In ethical quality assessment, where multiple factors and perspectives are at play, a simple system condenses the core ideas of experts' qualitative judgments into quantifiable measures, facilitating a clearer understanding of the overall ethical landscape. However, the WSM's dependence on expert judgment, namely on subjective opinion, can introduce bias or error into the subject's assessment. Footnote 63 At the same time, although biased, the method allows subjectivity to be quantified, capturing the direction in which a specific group of experts' subjectivity points.

Understanding this subjectivity can reveal the stakeholders' priorities in the topic under development.

2.4.4 Group subjectivity in expert judgment

In an Expert Workshop (EW) scenario, the subjective probability expressed by an expert reflects their personal belief, which is influenced by both formal evidence and informal knowledge or experience. Despite being biased, subjective probability distributions (SPD) are more effective than other statistical methods for eliciting uncertain expert knowledge. Footnote 64 In a subjectivist or Bayesian perspective, individuals assess the likelihood of uncertain events or quantities based on their subjective judgments about the present or future state of the world and the underlying governing processes. Footnote 65 In group elicitation, this subjectivity extends to the group dynamics of the EW.

As a result, collective judgment is influenced by groupthink, where group members feel peer pressure to conform to a dominant opinion. This is reinforced by cultural stereotypes, such as attributing undue weight to the opinions of members with higher social status (age, gender, authority, professional achievements) during the workshop. A further consideration is the moderator's awareness of group dynamics, ensuring that each expert carries equal weight in the discussion.

At the same time, consensus building, an essential activity of the Expert Workshop enabled by its in-person setup, aims to reach agreement through open dialogue, negotiation, and compromise among participants. The methods and techniques employed in the EW prioritize avoiding groupthink and achieving consensus. Revisiting the group-work results in a subsequent critical analysis by the whole group of experts contributes to a more precise understanding of the elicitation concepts, notions, and questions. Notably, the multi-criteria decision analysis process helps ensure that all relevant factors are named, formulated, and considered. The success of Expert Workshops hinges on balancing individual subjectivity, group dynamics, and effective consensus-building strategies to ensure accurate and reliable collective judgments.

2.4.5 Challenges associated with engaging experts

Contacting experts via email through the organizers' professional and personal networks, making "cold calls" via online databases and social media, and confirming experts' interest can be challenging. Reaching suitable experts and securing their participation requires extensive effort, as it is usually difficult for a sufficient number of experts to be physically present at a specific date, time, and place. Additionally, hosting workshops in a fixed location may limit involvement to those who can physically attend. This limitation can be mitigated by coordinating workshops with other events that draw experts from diverse locations, and it matters less when a localized perspective on the investigation is sought.

At the same time, the challenge remains to find experts who fit the investigated subject. The methodology is explained in the invitation, which allows experts to decide on participation. However, even with detailed explanations in the invitation, some experts may not fully agree with the contextual definition provided by the organizers, which can lead to disagreements during the workshop. For instance, according to past use cases of the EW, Footnote 66 it is likely that at least one expert in the group will refuse to accept the contextual definition proposed by the organizer. In this case, the moderator asks the expert to provide their own definition; so far, there has been no case in which an expert formulated a contextual definition of the EW's investigated subject on the spot.

2.4.6 Stakeholder dynamics

Another challenge associated with stakeholder dynamics arises when stakeholders play different roles in the process (e.g., seller and buyer, user and creator). Achieving consensus during the workshop can be difficult because of the risk of polarized discussions among stakeholders. Moreover, briefing and moderation must mobilize the participants so that experts remain engaged, thoughtful, and productive. It can be hard for a moderator to manage the group while accommodating participants' diverse backgrounds and preferences, and a moderator should be experienced and knowledgeable enough to coordinate smoothly and promptly throughout all stages of the EW, reacting appropriately to experts' questions, comments, and objections. During the group-work stage, where experts collaborate to formulate characteristics, the moderator must monitor multiple groups simultaneously to ensure that discussions progress in the right direction. For example, developing quantifiable characteristics can challenge the participants; if a group encounters difficulties, the moderator should react proactively and provide guidance by naming correctly formulated characteristics. Several instructed moderators, instead of one, can simplify the task when an EW hosts a large number of experts or when the organizers lack experience in workshop conduction.

Since the EW method relies on qualitative assessments and expert consensus rather than complex mathematical models, it is geared toward real-life problems and practical challenges with complex, multifaceted contextual nuances, such as the ethics of AI. Because this article aims to find a suitable methodology for hands-on, functional, measurable characteristics of AI systems' ethics, the preferred method should be interdisciplinary enough to provide diverse perspectives. In this context, credit scoring is selected as the AI use case for testing because of its significant relevance in the financial sector. Credit scoring systems are widely used in lending and financial decision-making, affecting individuals and businesses alike. Given the potential implications of biased or unethical AI algorithms in this domain, choosing AI CS as the use case allows a focused examination of a real-world, high-stakes application of AI ethics.

3 Procedure of the proof-of-concept workshop

3.1 The use case of credit scoring

The integration of Artificial Financial Intelligence (AFI) into the operations of modern fintech companies and traditional financial organizations is a rapidly growing trend. AFI refers to AI techniques and technologies that automate financial processes while complementing existing human financial expertise. Footnote 67 Fintech companies and traditional financial organizations that aim to upscale their financial operations are undergoing a snowballing process of AFI integration Footnote 68 into their business models. One example of AFI is AI-enabled credit scoring (AI CS). This type of AI program automates and replicates aspects of human financial expertise through a combination of machine learning (ML), Footnote 69 a subcategory of AI algorithms, and other AI techniques. From an economic and technical perspective, Credit Scoring (CS) using AI has brought forth a spectrum of applications, from Machine Learning (ML) to sophisticated Deep Learning (DL) methodologies such as Neural Networks, known for their proficiency in deciphering complex data relationships. Footnote 70 The selection of a particular model hinges on several critical factors, including the scale and quality of the available data, the intricacy of the credit decisions, and the specific requirements of the lending institution. Footnote 71 AI CS can use financial and non-financial data sources, including social media activity, textual data, and online behavioral patterns. These AI models outperform humans in assessing the probability of loan repayment. Footnote 72 Lastly, AI CS can autonomously decide whether to grant individuals or entities a loan or other financial services.
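The core of an ML-based scoring model can be illustrated with a minimal sketch: a logistic regression trained by gradient descent on synthetic applicant data. The feature names, data, and hyperparameters are all hypothetical; production AI CS systems use far richer data and models such as gradient boosting or neural networks:

```python
# Minimal sketch of an ML-based credit-scoring model: logistic regression
# trained by gradient descent on synthetic applicant features. All data and
# parameters are illustrative, not from any real lender.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic features, e.g. [normalized income, debt ratio, years of history]
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 1.0])
# Synthetic repayment labels generated from a known linear rule plus noise
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

w = np.zeros(3)
for _ in range(500):                   # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w)))     # predicted repayment probability
    w -= 0.1 * X.T @ (p - y) / len(y)  # logistic-loss gradient step

scores = 1 / (1 + np.exp(-(X @ w)))    # 0-1 "credit scores"
accuracy = float(((scores > 0.5) == y.astype(bool)).mean())
print(round(accuracy, 2))
```

The 0–1 score that falls out of the sigmoid is the quantity a lender would threshold to approve or reject an application, and it is exactly the kind of opaque numerical decision the ethical concerns below are about.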

The advantages and disadvantages of AI CS, including its potential for improved credit risk assessment, enhanced financial policies, and concerns about unconscious bias, have been extensively discussed in the recent literature on trustworthy AI. Footnote 73 From an academic perspective, AI CS is described as a powerful tool for financial inclusion, providing affordable services to vulnerable members of society through algorithmic decision-making that mimics intelligent human behavior. Footnote 74 At the same time, while AI CS demonstrates cost-efficiency compared to traditional human creditworthiness assessments and the potential for financial inclusion for borrowers, it also raises significant ethical concerns that need careful consideration Footnote 75 : potentially discriminatory decisions based on biased algorithms, and a lack of interpretability regarding how a decision was made. The literature describes some methods for evaluating AI CS effectiveness, predictability, and the justification mechanisms underlying the tool’s decisions. Footnote 76 Specifically, the literature examining AI CS from an economic angle pays more attention to economic aspects, such as the accuracy of AI CS risk evaluations. Footnote 77 However, in several specific cases, using AI in the financial sector raises serious ethical issues that require careful consideration, such as compliance with fairness, accountability, and transparency. Footnote 78 Recently, many researchers have emphasized prioritizing the principles of fairness and transparency over other moral principles. Footnote 79 This suggests that the shift of question from “what” to “how” in the applicability of AI ethics is especially relevant for the AI industry in finance.

Regarding the impact assessment of AI technologies in finance, solutions have been proposed in both academic and industry dimensions. Organizations and fintech companies across various industries, from technical equipment providers offering AI-powered platforms to financial consultancies involved in the entire AI CS production cycle, have devised their own practices, guidelines, and metrics for adhering to ethical AI principles. Such frameworks have been developed, for instance, by IBM (AI Fairness 360), Footnote 80 Ernst and Young (Trusted AI Framework), Footnote 81 and JP Morgan Chase (Explainable AI Centre of Excellence). Footnote 82 These initiatives were not developed explicitly in response to the AI HLEG’s recommendations, but they share the broader objective of promoting responsible and ethical AI practices. The frameworks are committed to addressing principles such as fairness, transparency, and accountability in the context of each company’s services and product characteristics. For instance, the IBM framework is declared to detect bias in the machine learning models used to train AFI and AI CS and to remove it, as reliance on biased algorithms potentially leads to unfair or discriminatory decisions. Such bias can manifest in various forms, including prioritizing certain user groups based on ethnicity, gender, or income. Even though AI CS application scenarios have employed quantitative assessments to measure how fair the decisions made by AI CS are across different demographic groups, there are no signs that those metrics are widely applicable. Footnote 83 Moreover, there is only limited evidence that these policy frameworks comprehensively address the organizational challenges in ensuring AFI’s compliance with the ethical criteria established by the EU AI HLEG [ 24 ].
At the same time, considering the speed of AFI integration, there is a pressing need for an understandable, unified methodology with metrics that can evaluate adherence to the most challenging ethical concerns surrounding AFI. The need for transparency in AFI was also articulated by a preliminary survey conducted at the preparatory stage of the Expert Workshop: transparency was selected as the second most important ethical principle for AFI after security. The choice of this principle for the proof-of-concept workshop therefore underscores its importance.

In summary, AI-enabled credit scoring (AI CS) is rapidly reshaping the financial industry, offering improved credit risk assessment while raising critical ethical concerns. Given the paper’s objective to demonstrate the feasibility of the Expert Workshop (EW) method in quantifying adherence to ethical principles, AI CS is an appropriate choice for the proof of concept: it provides an ideal example of a combination of financial and technical aspects. Also, considering the necessity to decompose the general concept of ethics into its underlying principles and test them separately, it was decided to methodologically test the principle of transparency based on its definition in the AI HLEG framework. Furthermore, the choice of test principle was informed by the results of a preliminary survey conducted remotely among the invited experts before the EW took place. This survey demonstrated the overarching importance of the transparency principle in AFI. Finally, the interdisciplinarity of the AI CS use case matches the Expert Workshop methodology, which fosters an interdisciplinary approach for effectiveness in assessing ethical concerns in a specific domain.

3.2 Proof-of-concept workshop

3.2.1 Participants

Experts were pre-selected by searching professional social networks such as LinkedIn, thematic forums, academic search platforms, and databases focused on AI in finance. Candidates were required to be professionally employed in the AI financial sector. AI developers, ethicists, financial professionals, regulators, consumer advocates, and business and academia representatives who possess specific knowledge about the decision-making process of AI in finance were considered suitable candidates for the EW on AI CS. Additionally, experts with knowledge of decision-making in traditional finance and individuals who work with the organizational risks of ethical AI implementation were welcomed. Of 55 invited experts, thirteen confirmed their participation in the Workshop: five women and eight men (Fig. 3).

Fig. 3: Composition of Expert Workshop Participants

3.2.2 Preliminary survey on the ethical principles of AI systems in finance

The choice to test the adherence of the AI CS system to only the single most important ethical principle, rather than a set of principles, follows from the structure of the Expert Workshop methodology, which allows for the decomposition of complex phenomena into smaller elements. In the workshop context, assessing the ethical quality of AI systems in finance proves challenging due to the multifaceted nature of the ethical principles involved in this domain. By focusing on one principle, one can thoroughly analyze and evaluate its application in the context of AI systems in finance. The purpose of the preliminary survey was to identify the most pressing ethical problem concerning the AI systems employed in the financial industry and to test it using expert knowledge.

Selected candidates for the workshop were invited via email to vote on the most important ethical principle of AI in finance from among five principles (transparency, fairness, privacy, security, and accountability). The survey question “Which two characteristics are the most important ones in AI systems when applied in the finance industry?” was answered by six people. Security scored four votes, the highest number; fairness and transparency received three votes each, privacy two, and accountability one. Considering the totality of factors, such as the limited access to experts with the necessary knowledge, the need to align the workshop research focus with the contextual knowledge of the confirmed participants, and the prevailing literature discussions on AI ethics in AFI, transparency was chosen as the principle to be tested.

3.2.3 Expert workshop

As a first step, participants were presented with background research on the ethical aspects of AI Credit Scoring and the methodology of the Expert Workshop. Additionally, experts were shown the results of the preliminary survey on the most important ethical principles, and it was proposed to use this contextual understanding of ethicality during the workshop. They were also asked to consent to predefined definitions and conditions for common usage in the context of the EW. Namely, participants had to agree with the validity of the following statements: “An AI CS is considered ethical if it has adhered to the principles of ethical AI, particularly transparency” and “The ethicality of the AI Credit Scoring tool can be qualitatively assessed by measuring its adherence to the qualitative characteristics inherent to AI CS products.”

The first step of the Expert Workshop (EW) also involved simplifying the concept of AI ethicality by equating it with transparency. This simplification was deliberately designed to enhance contextual comprehension of ethicality; it is rooted in the intricate nature of ethical considerations, which originate from the realm of philosophy and are prone to individual and, consequently, biased interpretations. The introduction of these contextual definitions and conditions marked a pivotal initial stage in the methodology, aimed at testing both the validity of the hypothesis and the validity of the eventual results. Establishing these conditions is critical, as it streamlines the intricate landscape of ethical considerations and fosters consensus among the workshop participants.

After agreeing on the use of contextual definitions within the EW, experts were invited to individually share their opinion on the current state of AI Credit Scoring adherence to transparency via an online multiple-choice survey. They were offered five possible answers to characterize the subject: excellent, good, satisfactory, low, and critically low. The results were displayed to all experts in the form of a diagram. The majority of experts considered AI CS transparency to be low.

As a next step, the Expert Workshop (EW) participants were divided into teams: two groups of four experts each and one group of five. Each group was tasked with naming five measurable characteristics that would allow for a qualitative assessment of the AI Credit Scoring tool’s transparency. For that, participants received handouts Footnote 84 providing a context for ideation of the characteristics of AI CS. Namely, participants were given the example of an abstract AI CS tool developed and tested in the EU and based on Artificial Neural Network (ANN) models. It was also mentioned that the producer company claims that its tool expands access to capital and financial services for marginalized communities and uses financial and non-specified alternative data for decision-making when clients consent to disclose their data, as required to comply with the GDPR.

Each group of experts was asked to name characteristics or features that could be evaluated on a scale from 0 to 1. After 15 characteristics had been named in total (see Appendix B), Footnote 85 participants were invited to a quorum discussion with the other groups to select the five most relevant of the fifteen characteristics and to formulate them so as to be scalable from 0 to 1. As part of the process, group representatives had to defend their formulation of characteristics and consider the criticism of other groups. This stage took the largest share of the total duration of the EW. Specifically, all 13 participants were challenged to agree on the formulation of scalable characteristics, as their perspectives on what constitutes transparency factors for AI CS did not align.

With that selection made, the five best characteristics were inserted into a matrix table in Microsoft Excel, Footnote 86 and the table was shared with all participants individually. Experts were asked to work individually, filling in the matrix using their expert knowledge. Specifically, participants were tasked with assigning values on a scale of 0 to 1 for the five characteristics, a ratio indicating the importance of each characteristic, and a scale representing the status quo of AI CS transparency. The personal Excel table was shared with each participant via a weblink, allowing them to input the numbers they considered adequate and consistent with their expert knowledge. The values proposed by each expert were processed in thirteen Microsoft Excel sheets connected to one common matrix programmed to calculate the arithmetic mean of each criterion. Due to technical and organizational challenges, only ten participants could complete the matrix. As a result, the quantitative results for the group are based on the assessments provided by these ten experts, as shown in Fig. 4.
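The aggregation step described above can be sketched in code. This is an illustrative reconstruction, not the authors’ actual spreadsheet: the expert scores and the `aggregate` helper are invented for demonstration; only the column-wise arithmetic mean mirrors the procedure the matrix was programmed to perform.

```python
from statistics import mean

# Hypothetical inputs: one row of values in [0, 1] per completing expert,
# one column per selected characteristic (five in the workshop).
expert_matrices = [
    [0.4, 0.3, 0.5, 0.2, 0.6],
    [0.5, 0.2, 0.4, 0.3, 0.5],
    [0.3, 0.4, 0.6, 0.2, 0.4],
]

def aggregate(rows):
    """Arithmetic mean of each characteristic (column) across all experts."""
    return [round(mean(col), 3) for col in zip(*rows)]

print(aggregate(expert_matrices))  # [0.4, 0.3, 0.5, 0.233, 0.5]
```

In the workshop itself, ten such rows (one per completing expert) would feed the common matrix.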

Fig. 4: Matrix of criteria for assessing the level of transparency of AI Credit Scoring (on a scale of 0–1)

4 Results of the quantitative assessment of AI Credit Scoring transparency

The first step involves calculating an Aggregated Quantified Assessment (AQA) of the transparency level of an AI Credit Scoring tool, for which participants individually propose numerical criteria. In the proof-of-concept workshop, experts opted not to provide an AQA or quantitative assessments of the current state of a specific AI Credit Scoring (AI CS) tool due to concerns about the accuracy of such assessments. Although the experts expressed confidence in their ability to assess AI CS competencies, this was their first attempt to evaluate transparency, and instead of quantifiable data they offered individual intuitive opinions and insights. In the second step, the Quantitative Assessment of the Levels of Qualitative States (QALQS) is calculated based on Eqs. (3), (4), (5), (6), and (7). The results show a critically low state of 0.26, a low state of 0.36, a satisfactory state of 0.48, a good state of 0.61, and an excellent state of 0.75 (refer to Appendix D for a detailed calculation). These results form a generalized scale of states, as illustrated in Fig. 4.
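The QALQS idea can be sketched as an importance-weighted sum. This is a hedged illustration only: the paper’s exact computation is given in Eqs. (3)–(7) and Appendix D, which are not reproduced here, and the weights and per-state levels below are hypothetical numbers chosen for demonstration.

```python
# Hypothetical importance ratios for the five characteristics (sum to 1).
weights = [0.3, 0.25, 0.2, 0.15, 0.1]

# Hypothetical levels assigned to two qualitative states per characteristic.
state_levels = {
    "critically low": [0.20, 0.30, 0.30, 0.24, 0.30],
    "low":            [0.35, 0.40, 0.35, 0.34, 0.35],
}

def qalqs(levels, w=weights):
    """Importance-weighted sum of the per-characteristic levels for one state."""
    return round(sum(wi * li for wi, li in zip(w, levels)), 2)

for state, levels in state_levels.items():
    print(state, qalqs(levels))
```

With these invented inputs the two states evaluate to 0.26 and 0.36; the correspondence with the reported scale values is illustrative, not a reconstruction of Appendix D.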

Step three is the Qualitative Expert Judgement (QEJ), or Intuitive Survey Results (%), collected from 11 participants and charted in Fig. 5. The survey question was: “What do you think is the current state of AI Credit Scoring adherence to the ethical principle of transparency?” The majority of experts evaluated the state of AI Credit Scoring transparency as low (six votes); two experts evaluated the state as satisfactory, two as critically low, and one as good (Fig. 5).

Fig. 5: Intuitive survey results

The Quantitative Assessment of the Average Collective Judgement of Experts (QAACJE), signifying the level of AI Credit Scoring transparency, is based on the expert judgments obtained from the intuitive survey and the matrix table. The process results in a new scale (E*S), which combines the qualitative and quantitative judgments of experts (see Fig. 4). For the tested group of experts, the average collective judgment resulted in a score of 0.38 (Fig. 6).
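A naive version of this combination can be sketched by mapping each qualitative vote from the intuitive survey (Fig. 5) onto its QALQS scale value and averaging. This is an illustration, not the paper’s exact E*S formula, which also incorporates the matrix-table results; accordingly, the naive average lands near, but not exactly at, the reported 0.38.

```python
# QALQS scale values reported in the paper.
qalqs_scale = {
    "critically low": 0.26,
    "low": 0.36,
    "satisfactory": 0.48,
    "good": 0.61,
    "excellent": 0.75,
}

# Vote counts from the intuitive survey (11 participants, Fig. 5).
votes = ["low"] * 6 + ["satisfactory"] * 2 + ["critically low"] * 2 + ["good"]

collective = sum(qalqs_scale[v] for v in votes) / len(votes)
print(round(collective, 2))  # 0.39 under this naive averaging
```

The small gap between this 0.39 and the reported 0.38 reflects the additional matrix-table input in the actual QAACJE computation.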

Fig. 6: Assessment of AI CS transparency and quantitative evaluation by experts

5 Findings, feasibility of the method, limitations, and outlook

The Expert Workshop (EW) serves as a valuable tool for conducting an in-depth analysis of the transparency level of AI-based credit scoring systems. Before the study, a clear definition and a proposed assumption of the concept of AI Credit Scoring transparency were established for the expert group, who then had to confirm their understanding of AI ethics concepts and transparency. The following key steps were taken during the study:

Experts assessed the problem intuitively using the proposed scale from excellent to critically low; the methodology allowed this assessment to be expressed quantitatively.

The experts identified and selected the five most informative characteristics, which served as a basis for establishing criteria to assess the transparency of AI-based credit scoring systems.

The characteristics enabled experts to designate appropriate criteria levels for the qualitative assessments of critically low, low, satisfactory, good, and excellent for each of the selected characteristics.

By applying the criteria selected by experts to indicate states of transparency, a generalized scale was derived considering all five indicators and their respective importance ratios.

The generalized scale enabled a quantitative assessment of a generalized qualitative judgment from a specific group of experts.

The collaborative efforts of the selected group of experts yielded results expressing the opinions and judgments of this group. As per the QALQS developed by the participating experts, the consensus regarding the adherence of AI Credit Scoring (AI CS) to the ethical principle of transparency falls within the range of “low” (0.36) to “satisfactory” (0.48), leaning more towards the “low” end of the spectrum.

The quantified assessment allows for easy comparison between different subjects or situations, which is particularly important when evaluating AI systems, especially their ethical qualities. Moreover, if an Expert Workshop is conducted periodically, QAACJE allows for monitoring how the transparency of AI CS changes over time. This gives evaluators and stakeholders full disclosure during the evaluation process, as they can see the numeric values and understand how the assessment was reached. The quantitative aspect of the workshop methodology serves as a competitive advantage, enabling constructive dialogue among experts by translating theoretical considerations into practical activities.

Additionally, the five unique characteristics formulated by the expert group, along with the initial fifteen characteristics proposed by the three groups during the second phase of the Expert Workshop (EW), hold significant importance. These characteristics shed light on perspectives that sometimes clash due to variations in participants’ background knowledge of AI system implementations. Differences in the granularity of the initially formulated characteristics became apparent. Achieving consensus on characteristic definitions proved challenging due to disagreements among participants regarding the actors responsible for AI credit scoring transparency. Concerns were raised about the transferability of these characteristics to different jurisdictional settings, a crucial aspect when regulating AI within responsible business contexts. Experts also noted the absence of a real-life AI Credit Scoring example for evaluation and identified a need for contextual settings in the use case description.

5.1 Feasibility of the method

Prior research on the landscape of AI ethical assessment frameworks identified their abundance Footnote 87 and, at the same time, the need for practical, quantifiable methods to evaluate AI adherence to ethical principles. The complexities associated with assessing AI ethicality were highlighted, especially considering the intricate nature of ethical concepts and the often opaque quality of existing assessments due to corporate confidentiality and non-disclosure agreements. In these terms, integrating various stakeholders’ perspectives into ethical assessments was identified as a critical aspect of making ethical frameworks practicable. Within this context, the feasibility of the Expert Workshop (EW) as a methodology to address these challenges was explored.

The study validated the feasibility of the EW methodology for assessing AI ethicality and shed light on the complexities and challenges involved in evaluating the ethicality of AI systems. Moreover, it became evident that the specific conditions characterizing a concrete AI Credit Scoring tool depend on various factors, including the host company’s corporate and business goals, industry conditions, market dynamics, and prevailing rules and regulations at a given point in time. This underscores the need to tailor an Expert Workshop (EW) to the specific requirements of particular AI tools. Finally, the participation of experts who are deeply involved in developing specific AI systems and strongly motivated to ensure their compliance with ethical AI standards is essential for conducting a high-quality assessment of such phenomena.

The Expert Workshop (EW) has demonstrated that its structured methodology provides valuable insights into the challenges and considerations associated with implementing responsible practices in business models utilizing AI. This conclusion is supported by the coherence observed across different results, emphasizing the methodology’s potential effectiveness in evaluating the ethicality of AI systems. It provides a systematic way of developing quantifiable attributes to evaluate ethical compliance with the trustworthiness principles of AI systems.

5.2 Limitations

The limitations of the Expert Workshop method stem from the lack of comparable methods in research on expert elicitation. The metric system of weighted sums allows for statistical representation; however, it can still produce biased results. Moreover, psychological phenomena such as groupthink might cause inaccurate or biased responses and, consequently, biased quantified results. The requirement that characteristics be expressed as shares limits the evaluation to quantifiable aspects, which may overlook non-quantifiable ethical considerations. Technical and organizational challenges meant that only ten of the thirteen participants could complete the matrix, which negatively affected the accuracy of the group result. Further limitations arose from the discrepancy between the fast development of AI technologies and the lack of widespread certainty among experts in the field, made apparent by experts who declined to provide a judgment for lack of concrete knowledge. As AI becomes increasingly implemented, this discrepancy may be bridged through expanded research.

5.3 Outlook

This paper has demonstrated that the assessment of AI Credit Scoring via the Expert Workshop (EW) can be achieved by obtaining quantified general estimates of the problem based on the expert opinion of a group. A set of measures is advisable for conducting a quality expert assessment, namely:

Data collection on characteristics could be improved by expanding the pool of AFI experts and soliciting opinions from a broader range of individuals. For example, further distribution of the questionnaires and the developed scales could help collect more data from experts who could not attend the current EW. Footnote 88

Involving more types of stakeholders in the AI industry, such as policymakers, academics, start-up representatives, and public sector members, would be beneficial. Additionally, considering experts from other communities, cities, and countries would provide more diverse perspectives and enhance objectivity in problem-solving.

The quality of a given use case’s “image” could be improved by increasing the number of characteristics considered, leading to a more comprehensive understanding of specific issues and potential solutions.

Testing other important AI ethical principles outlined by the AI HLEG in the context of Expert Workshops (EW) could enable comparisons between ethical qualities and deepen understanding of the relationships between the principles.

Analyzing and comparing the results of multiple EWs could reveal similarities or differences in perceptions. Patterns that evolve from these comparisons could lead to the formation of a map that explores the perception of given use cases depending on a set of factors, such as stakeholder characteristics.

This approach could be instrumental in addressing the organizational challenges associated with implementing AI ethics. This is particularly relevant for rapidly evolving industries, where it can be challenging for self-identified experts to reach a consensus.

6 Conclusion

This study utilized the Expert Workshop (EW) methodology to define quantifiable characteristics of adherence to ethical principles, focusing on transparency in AI-based credit scoring systems. Through a proof-of-concept EW, the study aimed to evaluate the effectiveness of the EW method in assessing the ethics of AI systems, particularly in the financial sector. Because the expert elicitation method used provided relative estimates, numeric results were obtained through mathematical models. These results support the hypothesis that the Expert Workshop (EW) methodology is a viable approach for assessing the ethicality of AI systems.

Regarding the proof-of-concept results, experts’ subjective opinions indicate low transparency in AI CS technology. Experts provided a tentative scale for quantifying the adherence of such tools to the transparency principle and revealed concerns about transparency in AI CS technology. Despite the nuances of the workshop design and the inherent subjectivity of the weighted-sum metric, the study exemplifies the effectiveness of the methodology in this domain. All in all, the results of this initial stage demonstrate an exemplary application of the EW methodology for assessing components of AI ethics. At the same time, there is identified potential for evaluating a variety of ethical principles through the EW methodology and for assessing the comparative importance of principles in the context of AI. The proposed methodology can serve as a foundational framework for developing a map of ethical principles, offering innovative insights into the ethical landscape of AI systems.

Data availability

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Dwivedi et al. [ 14 ], Mills [ 40 ].

Ayling and Chapman [ 7 ].

Solove [ 56 ].

Wachter et al. [ 67 ].

Andreotta et al. [ 5 ].

Amugongo et al. [ 3 , 4 ], Fontes et al. [ 18 ], Corrigan [ 10 ].

Ayling and Chapman [ 7 ], Morley et al. [ 43 ].

Jobin et al. [ 34 ], Amugongo et al. [ 3 , 4 ], Lütge et al. [36].

Zhou et al. [ 69 ].

Attard-Frost et al. [ 6 ].

Spiekermann and Winkler [ 57 ].

von Ingersleben-Seip (2023).

Hagendorff [ 22 ], Hagendorff [ 23 ], Morley et al. [ 43 ], Hohma et al. [ 25 ].

Omrani et al. [ 46 ].

Hooks et al. [ 26 ].

Tripopsakul and Puriwat [ 61 ], Koh et al. [ 35 ].

Truby et al. [ 62 ].

Stix [ 58 ].

Koefer et al. [ 32 ].

Framework Summary: Establishing a practical organizational framework for AI Accountability.

Winfield and Jirotka [ 68 ].

Morley et al. [ 43 ].

Garthwaite et al. [ 20 ].

HLEG [ 24 ].

Radclyffe et al. [ 51 ].

IEEE Global Initiative [ 30 ].

Morandín-Ahuerma [ 41 ].

Jobin et al. [ 34 ].

IEEE Standards (2019).

Morrison-Saunders and Retief [ 44 ], Vakkuri et al. [ 66 ].

Dolganova [ 12 ].

HLEG [ 24 ], IEEE Global Initiative [ 30 ].

UK Parliament Committee [ 64 ], Executive Office of the President National Science and Technology Council [ 16 ].

TUV SÜD [ 63 ], Hallensleben et al. [ 21 ].

Hallensleben et al. [ 21 ].

TUV SÜD [ 63 ].

Floridi [ 17 ].

Savinova [ 53 ].

Tolkacheva [ 60 ].

Pokholkov et al. [ 49 ].

Butler et al. [ 9 ].

Usher and Strachan [ 65 ].

Morgan [ 42 ].

Hsu [ 27 ].

Szwed [ 59 ].

Szwed [ 59 ], Satybaldiyeva et al. [ 52 ].

Adams [ 1 ].

Delbecq et al. [ 13 ], Hsu and Sandford [ 28 ].

Saaty [ 54 ].

Pasman and Rogers [ 48 ].

Podinovski and Potapov [ 50 ].

Podinovski and Potapov [ 50 ], Garthwaite et al. [ 20 ].

Lenthe [ 33 ].

Pokholkov et al. [ 49 ]

Solanki [ 55 ].

Kumar et al. [ 37 ].

Huang et al. [ 29 ].

Eddy and Bakar [ 15 ].

Ben-David and Frank [ 8 ].

Curto et al. [ 11 ].

Ozili [ 47 ].

Maree et al. [ 38 ].

Ghodselahi and Amirmadhi [ 19 ].

Max et al. [ 39 ], Ahmed [ 2 ], Nowakowski and Waliszewski [ 45 ], Kozodoi et al. [ 36 ].

Kozodoi et al. [ 36 ].

IBM Developer Staff, “AI Fairness 360” https://www.ibm.com/opensource/open/projects/ai-fairness-360/ (2018).

Ernst and Young Staff, “Responsible AI”, https://www.ey.com/en_ch/ai/responsible-ai , n.d.

JP Morgan Chase, Explainable AI Centre of Excellence, https://www.jpmorgan.com/technology/artificial-intelligence/initiatives/explainable-ai-center-of-excellence , 2023.

Jammalamadaka and Itapu [ 31 ].

Appendix A : Handout for Participants.

Appendix B : 15 Characteristics.

Appendix C : Microsoft Excel Table.

Adams, S.J.: Projecting the next decade in safety management: a Delphi technique study. Prof. Saf. 46 (10), 26–29 (2001)


Ahmed, F.: Ethical aspects of artificial intelligence in banking. J. Res. Econ. Fin. Manage. 1 (2), 55–63 (2022). https://doi.org/10.56596/jrefm.v1i2.7


Amugongo, L.M., Bidwell, N.J., Corrigan, C.C.: Invigorating ubuntu ethics in AI for healthcare: enabling equitable care. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023, FAccT ‘23, pp. 583–592. Association for Computing Machinery, New York, NY, USA (2023)

Amugongo, L.M., Kriebitz, A., Boch, A., Lütge, C.: Operationalising AI ethics through the agile software development lifecycle: a case study of AI-enabled mobile health applications. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00331-3


Andreotta, A.J., Kirkham, N., Rizzi, M.: AI, big data, and the future of consent. AI Soc. 37 (4), 1715–1728 (2022). https://doi.org/10.1007/s00146-021-01262-5

Attard-Frost, B., De los Ríos, A., Walters, D.R.: The ethics of AI business practices: a review of 47 AI ethics guidelines. AI Ethics 3 (2), 389–406 (2023). https://doi.org/10.1007/s43681-022-00156-6

Ayling, J., Chapman, A.: Putting AI ethics to work: are the tools fit for purpose? AI Ethics (2022). https://doi.org/10.1007/s43681-021-00084-x

Ben-David, A., Frank, E.: Accuracy of machine learning models versus “hand crafted” expert systems—a credit scoring case study. Expert Syst. Appl. 36 (3, Part 1), 5264–5271 (2009). https://doi.org/10.1016/j.eswa.2008.06.07

Butler, A.J., Thomas, M.K., Pintar, K.D.M.: Systematic review of expert elicitation methods as a tool for source attribution of enteric illness. Foodborne Pathog. Dis. 12 (5), 367–382 (2015). https://doi.org/10.1089/fpd.2014.1844

Corrigan, C.C.: Lessons learned from co-governance approaches—developing effective AI policy in Europe. In: The 21 Yearbook of the Digital Ethics Lab, p. 2546. Springer International Publishing, Cham (2022)

Curto, G., Jojoa Acosta, M.F., Comim, F., Garcia-Zapirain, B.: Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. AI Soc. (2022). https://doi.org/10.1007/s00146-022-01494-z

Dolganova, O.: Improving customer experience with artificial intelligence by adhering to ethical principles. Bus. Inform. 15 (2), 34–46 (2021). https://doi.org/10.17323/2587-814X.2021.2.34.46

Delbecq, A.L., Van de Ven, A.H., Gustafson, D.H.: Group Techniques for Program Planning. Scott, Foresman, and Co., Glenview, IL (1975)

Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P.V., Janssen, M., Jones, P., Kar, A.K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., et al.: Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manage. 57 , 101994 (2021). https://doi.org/10.1016/j.ijinfomgt.2019.08.002

Eddy, Y.L., Bakar, E.M.N.E.A.: Credit scoring models: techniques and issues. J. Adv. Res. Bus. Manage. Stud. 7 (2), 2 (2017)

Executive Office of the President National Science and Technology Council: Preparing for the future of Artificial Intelligence [Online] (2016). https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf . Accessed 2 Mar 2024

Floridi, L.: Establishing the rules for building trustworthy AI. Nat. Mach. Intell. 1 (6), 261–262 (2019). https://doi.org/10.1038/s42256-019-0055-y

Fontes, C., Corrigan, C., Lütge, C.: Governing AI during a pandemic crisis: initiatives at the EU level. Technol. Soc. 72 , 102204 (2023)

Ghodselahi, A., Amirmadhi, A.: Application of artificial intelligence techniques for credit risk evaluation. Int. J. Model. Optim. (2011). https://doi.org/10.7763/IJMO.2011.V1.43

Garthwaite, P.H., Kadane, J.B., O’Hagan, A.: Statistical methods for eliciting probability distributions. J. Am. Stat. Assoc. 100 (470), 680–701 (2005). https://doi.org/10.1198/016214505000000105

Hallensleben, S., Fetic, L., Fleischer, T., Grünke, P., Hagendorff, T., Hauer, M., Hauschke, A., Heesen, J., Herrmann, M., Hillerbrand, R., Hubig, C., Kaminski, A., Krafft, T., Loh, W., Otto, P., Puntschuh, M., Hustedt, C.: From Principles to Practice: An Interdisciplinary Framework to Operationalize AI Ethics (2020). https://publikationen.bibliothek.kit.edu/1000121427

Hagendorff, T.: AI virtues—the missing link in putting AI ethics into practice. Philos. Technol. 35 (3), 55 (2022). https://doi.org/10.1007/s13347-022-00553-z

Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30 (1), 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8

HLEG: Ethics guidelines for trustworthy AI. Shaping Europe’s digital future [Online] (2019). https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai . Accessed 2 Mar 2024

Hohma, E., Boch, A., Trauth, R., Lütge, C.: Investigating accountability for Artificial Intelligence through risk governance: a workshop-based exploratory study. Front. Psychol. 14 , 1073686 (2023)

Hooks, D., Davis, Z., Agrawal, V., Li, Z.: Exploring factors influencing technology adoption rate at the macro level: a predictive model. Technol. Soc. 68 , 101826 (2022). https://doi.org/10.1016/j.techsoc.2021.101826

Hsu, C.C.: The Delphi technique: making sense of consensus. Pract. Assess. Res. Eval. 12 (1), 1–8 (2007)

Hsu, C.-C., Sandford, B.A.: The Delphi technique: making sense of consensus. Pract. Assess. Res. Eval. 12 , 10 (2019). https://doi.org/10.7275/pdz9-th90

Huang, Z., Chen, H., Hsu, C.-J., Chen, W.-H., Wu, S.: Credit rating analysis with support vector machines and neural networks: a market comparative study. Decis. Support Syst. 37 (4), 543–558 (2004). https://doi.org/10.1016/S0167-9236(03)00086-1

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Ethically Aligned Design [Online] (2019). https://standards.ieee.org/industry-connections/ec/autonomous-systems.html . Accessed 2 Mar 2024

Jammalamadaka, K.R., Itapu, S.: Responsible AI in automated credit scoring systems. AI Ethics 3 (2), 485–495 (2023). https://doi.org/10.1007/s43681-022-00175-3

Koefer, F., Lemken, I., Pauls, J.: Realizing fair outcomes from algorithm-enabled decision systems: an exploratory case study. In: Lecture Notes in Business Information Processing, vol. 467 LNBIP, pp. 52–67 (2023). https://doi.org/10.1007/978-3-031-31671-5_4

Van Lenthe, J.: ELI: an interactive elicitation technique for subjective probability distributions. Organ. Behav. Hum. Decis. Process. 55 (3), 379–413 (1993). https://doi.org/10.1006/obhd.1993.1037

Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1 (9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

Koh, H.-K., Burnasheva, R., Suh, Y.G.: Perceived ESG (environmental, social, governance) and consumers’ responses: the mediating role of brand credibility, brand image and perceived quality. Sustainability (2022). https://doi.org/10.3390/su14084515

Kozodoi, N., Jacob, J., Lessmann, S.: Fairness in credit scoring: assessment, implementation and profit implications. Eur. J. Oper. Res. 297 (3), 1083–1094 (2022). https://doi.org/10.1016/j.ejor.2021.06.023

Kumar, A., Sharma, S., Mahdavi, M.: Machine learning (ML) technologies for digital credit scoring in rural finance: a literature review. Risks 9 (11), 192 (2021). https://doi.org/10.3390/risks9110192

Maree, C., Modal, J.E., Omlin, C.W.: Towards responsible AI for financial transactions. In: 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 16–21 (2020). https://doi.org/10.1109/SSCI47803.2020.9308456

Max, R., Kriebitz, A., VonWebsky, C.: Ethical considerations about the implications of artificial intelligence in finance. In: San-Jose, L., Retolaza, J.L., van Liedekerke, L. (eds.) Handbook on Ethics in Finance, pp. 577–592. Springer International Publishing, Cham (2021)


Mills, S.: The misuse of algorithms in society (SSRN Scholarly Paper 4400026). SSRN J. (2023). https://doi.org/10.2139/ssrn.4400026

Morandín-Ahuerma, F.: Montreal Declaration for Responsible AI: 10 Principles and 59 Recommendations. OSF Preprints (2023). https://doi.org/10.31219/osf.io/sj2z5

Morgan, M.G.: Use (and abuse) of expert elicitation in support of decision-making for public policy. Proc. Natl. Acad. Sci. 111 (20), 7176–7184 (2014). https://doi.org/10.1073/pnas.1319946111

Morley, J., Kinsey, L., Elhalal, A., Garcia, F., Ziosi, M., Floridi, L.: Operationalising AI ethics: barriers, enablers and next steps. AI Soc. 38 (1), 411–423 (2023). https://doi.org/10.1007/s00146-021-01308-8

Morrison-Saunders, A., Retief, F.: Walking the sustainability assessment talk—progressing the practice of environmental impact assessment (EIA). Environ. Impact Assess. Rev. 36 , 34–41 (2012). https://doi.org/10.1016/j.eiar.2012.04.001

Nowakowski, M., Waliszewski, K.: Ethics of artificial intelligence in the financial sector. Przegląd Ustawodawstwa Gospodarczego 2022 , 2–9 (2022). https://doi.org/10.33226/0137-5490.2022.1.1

Omrani, N., Rivieccio, G., Fiore, U., Schiavone, F., Agreda, S.G.: To trust or not to trust? An assessment of trust in AI-based systems: concerns, ethics and contexts. Technol. Forecast. Soc. Chang. 181 , 121763 (2022). https://doi.org/10.1016/j.techfore.2022.121763

Ozili, P.K.: Big data and artificial intelligence for financial inclusion: benefits and issues (SSRN Scholarly Paper 3766097). SSRN J. (2021). https://doi.org/10.2139/ssrn.3766097

Pasman, H.J., Rogers, W.J.: How to treat expert judgment? With certainty it contains uncertainty! J. Loss Prev. Process Ind. 66 , 104200 (2020). https://doi.org/10.1016/j.jlp.2020.104200

Pokholkov, Y., Horvat, M., Quadrado, J.C., Chervach, M., Zaitseva, K.: Approaches to assessing the level of engineering students’ sustainable development mindset. In: 2020 IEEE Global Engineering Education Conference (EDUCON), pp. 1102–1109 (2020). https://doi.org/10.1109/EDUCON45650.2020.9125292

Podinovski, V., Potapov, M.: Weighted sum method in the analysis of multicriterial decisions: pro et contra. Bus. Inf. 3 (25), 41–48 (2013)

Radclyffe, C., Ribeiro, M., Wortham, R.H.: The assessment list for trustworthy artificial intelligence: A review and recommendations. Front. Artif. Intell. (2023). https://doi.org/10.3389/frai.2023.1020592

Satybaldiyeva, E., et al.: Applying the expert method to determine a company. Transp. Probl. 18 (2), 123–132 (2023). https://doi.org/10.20858/tp.2023.18.2.11

Savinova, O.V.: Approbation of an expert seminar on “students’ involvement in research work during studying”. Inzhener Obrazov 29, 34–44 (2021). https://doi.org/10.4835/18102883_2021_29_3

Saaty, R.W.: The analytic hierarchy process—what it is and how it is used. Math. Modell. 9 (3), 161–176 (1987). https://doi.org/10.1016/0270-0255(87)90473-8

Solanki, R.: Fintech: a disruptive innovation of the 21st century, or is it? Glob. Bus. Manage. Res. 14 (2), 76–87 (2022)


Solove, D.J.: A taxonomy of privacy. Univ. Pa. Law Rev. 154 (3), 477 (2006). https://doi.org/10.2307/40041279

Spiekermann, S., Winkler, T.: Value-based engineering for ethics by design. SSRN Electron. J. (2020). https://doi.org/10.2139/ssrn.3598911

Stix, C.: Actionable principles for artificial intelligence policy: three pathways. Sci. Eng. Ethics 27 (1), 15 (2021). https://doi.org/10.1007/s11948-020-00277-3

Szwed, P.: Working Paper: Establishing a Theoretically Sound Baseline for Expert Judgment in Project Management – Part I. [Online] (2014). https://www.researchgate.net/publication/259948022_Working_Paper_Establishing_a_Theoretically_Sound_Baseline_for_Expert_Judgment_in_Project_Management_-_Part_I . Accessed 2 Mar 2024

Tolkacheva, K.: Expert seminar as a form of realizing the goals of problem-oriented training of specialists in engineering and technology. National Research Tomsk Polytechnic University (TPU) (2015). http://catalog.lib.tpu.ru/catalogue/simple/document/RU/TPU/book/336904

Tripopsakul, S., Puriwat, W.: Understanding the impact of ESG on brand trust and customer engagement. J. Hum. Earth Future 3 (4), 430–440 (2022). https://doi.org/10.28991/HEF-2022-03-04-03

Truby, J., Brown, R., Dahdal, A.: Banking on AI: mandating a proactive approach to AI regulation in the financial sector. Law Fin. Markets Rev. 14 (2), 110–120 (2020). https://doi.org/10.1080/17521440.2020.1760454

TUV SÜD: Artificial Intelligence. [Online] (2023). https://www.tuvsud.com/en/themes/artificial-intelligence . Accessed 2 Mar 2024

UK Parliament Committee: Written Evidence Submitted by Committee on Standards in Public Life (GAI0110). [Online] (2022). https://committees.parliament.uk/writtenevidence/114057/html/ . Accessed 2 Mar 2024

Usher, W., Strachan, N.: An expert elicitation of climate, energy, and economic uncertainties. Energy Policy 61 , 811–821 (2013). https://doi.org/10.1016/j.enpol.2013.06.110

Vakkuri, V., Kemell, K.-K., Jantunen, M., Halme, E., Abrahamsson, P.: ECCOLA—a method for implementing ethically aligned AI systems. J. Syst. Softw. 182, 111067 (2021). https://doi.org/10.1016/j.jss.2021.111067

Wachter, S., Mittelstadt, B., Russell, C.: Why fairness cannot be automated: bridging the gap between EU non-discrimination law and AI. Comput. Law Secur. Rev. 41, 105567 (2021). https://doi.org/10.1016/j.clsr.2021.105567

Winfield, A.F.T., Jirotka, M.: Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 376 (2133), 20180085 (2018). https://doi.org/10.1098/rsta.2018.0085

Zhou, J., Chen, F., Berry, A., Reed, M., Zhang, S., Savage, S.: A survey on ethical principles of AI and implementations. In: 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 3010–3017 (2020). https://doi.org/10.1109/SSCI47803.2020.9308437


Open Access funding enabled and organized by Projekt DEAL. This work was supported by Fujitsu Limited and the Technical University of Munich’s Institute for Ethics in Artificial Intelligence (IEAI). All authors declare no other competing interests.

Author information

Authors and affiliations

Institute for Ethics in Artificial Intelligence, School of Social Sciences and Technology, Technical University of Munich, Arcistrasse 21, 80333, Munich, Germany

Maria Pokholkova, Auxane Boch, Ellen Hohma & Christoph Lütge


Contributions

CL, EH, AB and MP contributed to the conception and design of the overall research project and the conception and planning of the conducted workshop. MP, EH, CL and AB contributed to the preparation, realization, post-processing and analysis of the workshop. Further, MP, AB, EH and CL contributed to writing, revising and approving the manuscript. All authors contributed to the article and approved the submitted version.

Corresponding author

Correspondence to Maria Pokholkova .

Ethics declarations

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ethics statement

Ethical review and approval were not required for this study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was likewise not required under the national legislation and the institutional requirements. Before the workshop, participants were informed of the data collection, its use for research purposes, and the research questions and interests. All data have been anonymised to protect participants’ privacy.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1.1 Appendix A: Handout for Participants

(Figure: handout provided to workshop participants.)

1.2 Appendix B: 15 Characteristics

  • Share of relevant data points used in decision-making of the AI CS that was disclosed and explained to the customer
  • Share of AI CS decisions that a credit-analysis domain expert reviewed
  • Share of reviewed AI CS decisions whose explanations a domain expert found satisfactory
  • Share of predictions correctly explained by a local interpretation method
  • Share of complaints/incidents raised about an AI CS decision after a customer asked for clarification of his/her decision
  • Weight of data source and type
  • Share of cases where human intervention was needed
  • Share of (sensitive) features used
  • Model metrics (accuracy, confidence level, fairness metrics)
  • Number of different data sources/share of trustworthy data sources
  • Share of documented relevant steps in the AI tool lifecycle (defined by standards and including post-hoc adjustments)
  • Share of cases for which output is reproducible within acceptable standards (defined by standards)
  • Share of user groups (reporting) understanding of the tool (UX research)
  • Share of known potential limitations presented to the public
  • Share of information about the system that is publicly available (based on internal documentation)
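Most of these characteristics are simple ratios. As a minimal illustration (the counts and helper name below are entirely hypothetical, not taken from the study), two of the review-related shares could be computed as:

```python
# Hypothetical sketch: computing "share" characteristics from raw counts.
def share(part: int, whole: int) -> float:
    """Return part/whole as a fraction in [0, 1]; 0.0 if whole is 0."""
    return part / whole if whole else 0.0

total_decisions = 1200            # AI credit-scoring (CS) decisions issued
expert_reviewed = 300             # decisions reviewed by a domain expert
satisfactory_explanations = 255   # reviewed decisions with satisfactory explanations

# Share of AI CS decisions that a domain expert reviewed
share_reviewed = share(expert_reviewed, total_decisions)       # 0.25
# Share of reviewed decisions with satisfactory explanations
share_satisfactory = share(satisfactory_explanations, expert_reviewed)  # 0.85

print(share_reviewed, share_satisfactory)
```

Each such fraction can then be mapped onto the qualitative levels discussed in the article.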

1.3 Appendix C: Microsoft Excel Table

(Figure: Microsoft Excel table used for the calculations.)

1.4 Appendix D: Detailed Calculations

The calculation of the Quantitative Assessment of the Levels of Qualitative States (QALQS) in Step 2, based on Eqs. (3), (4), (5), (6), and (7), is presented below.

critically low:

\((0.2*0.27)+(0.3*0.25)+(0.34*0.18)+(0.4*0.13)+(0.0*0.17)\) = 0.2554 ≈ 0.26;

\((0.4*0.27)+(0.5*0.25)+(0.43*0.18)+(0.5*0.13)+(0.1*0.17)\) = 0.3581 ≈ 0.36;

satisfactory:

\((0.5*0.27)+(0.6*0.25)+(0.52*0.18)+(0.7*0.13)+(0.1*0.17)\) = 0.4788 ≈ 0.48;

\((0.6*0.27)+(0.7*0.25)+(0.62*0.18)+(0.8*0.13)+(0.2*0.17)\) = 0.6136 ≈ 0.61;

\((0.8*0.27)+(0.9*0.25)+(0.77*0.18)+(0.9*0.13)+(0.3*0.17)\) = 0.7528 ≈ 0.75.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Pokholkova, M., Boch, A., Hohma, E. et al. Measuring adherence to AI ethics: a methodology for assessing adherence to ethical principles in the use case of AI-enabled credit scoring application. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00468-9

Download citation

Received : 29 November 2023

Accepted : 07 March 2024

Published : 15 April 2024

DOI : https://doi.org/10.1007/s43681-024-00468-9


Keywords

  • Artificial intelligence ethics
  • Ethical assessment
  • Expert workshop methodology
  • AI-enabled credit-scoring
  • Transparency
  • AI ethics in finance
