Ethical concerns mount as AI takes bigger decision-making role in more industries

By Christina Pazzanese, Harvard Staff Writer

Second in a four-part series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the rising age of artificial intelligence and machine learning, and how to humanize them.

For decades, artificial intelligence, or AI, was the engine of high-level STEM research. Most consumers became aware of the technology’s power and potential through internet platforms like Google and Facebook, and retailer Amazon. Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing.

But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.

Its growing appeal and utility are undeniable. Worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, even after the global economic slump caused by the COVID-19 pandemic, according to a forecast released in August by technology research firm IDC. Retail and banking industries spent the most this year, at more than $5 billion each. The company expects the media industry and federal and central governments will invest most heavily between 2018 and 2023 and predicts that AI will be “the disrupting influence changing entire industries over the next decade.”

“Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,” said Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies, in part, the development and implementation of AI, including machine learning, robotics, sensors, and industrial automation, in business and the work world.

Early on, it was popularly assumed that the future of AI would involve the automation of simple repetitive tasks requiring low-level decision-making. But AI has rapidly grown in sophistication, owing to more powerful computers and the compilation of huge data sets. One branch, machine learning, notable for its ability to sort and analyze massive amounts of data and to learn over time, has transformed countless fields, including education.
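
As a rough, self-contained illustration of that “learning over time” property (a sketch on synthetic data, not anything drawn from the systems the article describes), consider a tiny classifier whose accuracy improves as it sees more labeled examples:

```python
# A minimal sketch of "learning over time": a nearest-centroid classifier
# whose predictions improve as more labeled examples arrive.
# All data here is synthetic and purely illustrative.
import random

random.seed(0)

def make_example():
    """Two synthetic classes of 2-D points centered at (0,0) and (2,2)."""
    label = random.choice([0, 1])
    center = (0.0, 0.0) if label == 0 else (2.0, 2.0)
    point = (center[0] + random.gauss(0, 1), center[1] + random.gauss(0, 1))
    return point, label

class NearestCentroid:
    def __init__(self):
        self.sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
        self.counts = {0: 0, 1: 0}

    def learn(self, point, label):
        self.sums[label][0] += point[0]
        self.sums[label][1] += point[1]
        self.counts[label] += 1

    def predict(self, point):
        def dist2(label):
            n = max(self.counts[label], 1)
            cx, cy = self.sums[label][0] / n, self.sums[label][1] / n
            return (point[0] - cx) ** 2 + (point[1] - cy) ** 2
        return min((0, 1), key=dist2)

model = NearestCentroid()
test = [make_example() for _ in range(500)]
for n in (10, 100, 1000):
    while sum(model.counts.values()) < n:
        model.learn(*make_example())
    acc = sum(model.predict(p) == y for p, y in test) / len(test)
    print(f"after {n:>4} examples: accuracy {acc:.2f}")
```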

Firms now use AI to manage the sourcing of materials and products from suppliers and to integrate vast troves of information to aid in strategic decision-making. And because AI can process data so quickly, it is helping to minimize the pricey trial and error of product development, a critical advance for an industry like pharmaceuticals, where it costs $1 billion to bring a new pill to market, Fuller said.

Health care experts see many possible uses for AI, including with billing and processing necessary paperwork. And medical professionals expect that the biggest, most immediate impact will be in analysis of data, imaging, and diagnosis. Imagine, they say, having the ability to bring all of the medical knowledge available on a disease to any given treatment decision.

In employment, AI software culls and processes resumes and analyzes job interviewees’ voices and facial expressions in hiring, and it is driving the growth of what’s known as “hybrid” jobs. Rather than replacing employees, AI takes on important technical tasks of their work, like routing for package delivery trucks, which potentially frees workers to focus on other responsibilities, making them more productive and therefore more valuable to employers.
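
The routing example is easy to make concrete. Below is a minimal sketch of a nearest-neighbor routing heuristic; real fleet-dispatch systems are far more sophisticated, and the stop coordinates are invented:

```python
# A minimal sketch of the routing task mentioned above: ordering delivery
# stops with a greedy nearest-neighbor heuristic. Coordinates are made up.
from math import dist

def greedy_route(depot, stops):
    """Visit the nearest unvisited stop next, starting from the depot."""
    route, current = [], depot
    remaining = list(stops)
    while remaining:
        nearest = min(remaining, key=lambda s: dist(current, s))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    return route

depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 1.0), (4.0, 4.0)]
print(greedy_route(depot, stops))
# The driver then applies the judgment the model lacks: road closures,
# customer instructions, and so on.
```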

“It’s allowing them to do more stuff better, or to make fewer errors, or to capture their expertise and disseminate it more effectively in the organization,” said Fuller, who has studied the effects and attitudes of workers who have lost or are likeliest to lose their jobs to AI.

“Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”

— Michael Sandel, political philosopher and Anne T. and Robert M. Bass Professor of Government

Though automation is here to stay, the elimination of entire job categories, like highway toll-takers who were replaced by sensors because of AI’s proliferation, is not likely, according to Fuller.

“What we’re going to see is jobs that require human interaction, empathy, that require applying judgment to what the machine is creating [will] have robustness,” he said.

While big business already has a huge head start, small businesses could also potentially be transformed by AI, says Karen Mills ’75, M.B.A. ’77, who ran the U.S. Small Business Administration from 2009 to 2013. With half the country employed by small businesses before the COVID-19 pandemic, that could have major implications for the national economy over the long haul.

Rather than hamper small businesses, the technology could give their owners detailed new insights into sales trends, cash flow, ordering, and other important financial information in real time so they can better understand how the business is doing and where problem areas might loom without having to hire anyone, become a financial expert, or spend hours laboring over the books every week, Mills said.
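
A toy sketch of the kind of early-warning insight Mills describes, assuming made-up monthly figures, might flag any month whose cash flow falls far below the trailing average:

```python
# A toy sketch of real-time small-business insight: flag months where cash
# flow drops well below the trailing 3-month average. Figures are invented.
monthly_cash_flow = {
    "Jan": 12_000, "Feb": 11_500, "Mar": 12_400,
    "Apr": 4_200,  # a problem area an owner would want surfaced early
    "May": 11_900,
}

window = []
for month, amount in monthly_cash_flow.items():
    if len(window) >= 3:
        avg = sum(window[-3:]) / 3
        if amount < 0.6 * avg:  # 40% or more below the trailing average
            print(f"{month}: cash flow {amount} is far below trend ({avg:.0f})")
    window.append(amount)
```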

One area where AI could “completely change the game” is lending, where access to capital is difficult in part because banks often struggle to get an accurate picture of a small business’s viability and creditworthiness.

“It’s much harder to look inside a business operation and know what’s going on” than it is to assess an individual, she said.

Information opacity makes the lending process laborious and expensive for both would-be borrowers and lenders, and applications are designed to analyze larger companies or those who’ve already borrowed, a built-in disadvantage for certain types of businesses and for historically underserved borrowers, like women and minority business owners, said Mills, a senior fellow at HBS.

But with AI-powered software pulling information from a business’s bank account, taxes, and online bookkeeping records and comparing it with data from thousands of similar businesses, even small community banks will be able to make informed assessments in minutes, without the agony of paperwork and delays, and, like blind auditions for musicians, without fear that any inequity crept into the decision-making.
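
A hedged sketch of that peer-comparison idea follows; the field name and numbers are illustrative assumptions, not a description of any actual underwriting system:

```python
# Sketch of peer comparison in lending: score an applicant's financial
# signal against thousands of similar businesses by percentile rank.
import random

random.seed(1)

# Stand-in for aggregated data on similar businesses (revenue growth rate).
peer_growth = [random.gauss(0.05, 0.08) for _ in range(5000)]

def percentile_rank(value, population):
    """Share of the peer population this value meets or exceeds."""
    return sum(1 for p in population if p <= value) / len(population)

applicant_growth = 0.11  # in Mills' scenario, pulled from bank/bookkeeping data
rank = percentile_rank(applicant_growth, peer_growth)
print(f"Applicant's growth beats {rank:.0%} of comparable businesses")
```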

“All of that goes away,” she said.

A veneer of objectivity

Not everyone sees blue skies on the horizon, however. Many worry that the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale.

“Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said political philosopher Michael Sandel, Anne T. and Robert M. Bass Professor of Government. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.”
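
The mechanism Sandel describes can be demonstrated with synthetic data. In this minimal sketch, a “model” fit to historically biased approval decisions simply recovers the higher bar that one group was held to; flexible real-world models can do the same implicitly through features correlated with group membership:

```python
# A minimal sketch of bias replication: a model fit to historically biased
# decisions reproduces the disparity. The data is synthetic; "group" stands
# in for any protected attribute.
import random

random.seed(2)

def historical_decision(skill, group):
    """Biased past process: group B needed more skill to be approved."""
    threshold = 0.5 if group == "A" else 0.7
    return skill > threshold

train = [(random.random(), g) for g in ("A", "B") for _ in range(5000)]
labels = [historical_decision(skill, g) for skill, g in train]

# "Training" here just memorizes the per-group approval threshold, which is
# what a flexible model with access to group-correlated features can learn
# implicitly, even if "group" is never an explicit input.
learned = {}
for (skill, g), y in zip(train, labels):
    if y:
        learned[g] = min(learned.get(g, 1.0), skill)

print(learned)  # the model has learned to hold group B to a higher bar
```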

“If we’re not thoughtful and careful, we’re going to end up with redlining again.”

— Karen Mills, senior fellow at the Business School and head of the U.S. Small Business Administration from 2009 to 2013

AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment, said Sandel, who teaches a course in the moral, social, and political implications of new technologies.

“Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar,” said Sandel, referring to conscious and unconscious prejudices of program developers and those built into datasets used to train the software. “But we’ve not yet wrapped our minds around the hardest question: Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”

Panic over AI suddenly injecting bias into everyday life en masse is overstated, says Fuller. First, the business world and the workplace, rife with human decision-making, have always been riddled with “all sorts” of biases that prevent people from making deals or landing contracts and jobs.

When calibrated carefully and deployed thoughtfully, resume-screening software allows a wider pool of applicants to be considered than could be done otherwise, and should minimize the potential for favoritism that comes with human gatekeepers, Fuller said.
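
One safeguard behind that claim can be sketched in a few lines: score candidates only on job-relevant fields, withholding identifying ones, much like a blind audition. The field names and weights below are invented for illustration:

```python
# Sketch of blind screening: the scorer only ever sees job-relevant fields.
candidates = [
    {"name": "A. Rivera", "school": "State U", "years_python": 6, "shipped_ml": True},
    {"name": "B. Chen",   "school": "Ivy U",   "years_python": 2, "shipped_ml": False},
]

RELEVANT = ("years_python", "shipped_ml")  # name/school are never consulted

def score(candidate):
    blind = {k: candidate[k] for k in RELEVANT}
    return blind["years_python"] + (3 if blind["shipped_ml"] else 0)

for c in sorted(candidates, key=score, reverse=True):
    print(c["name"], score(c))
```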

Sandel disagrees. “AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status,” he said.

In the world of lending, algorithm-driven decisions do have a potential “dark side,” Mills said. As machines learn from data sets they’re fed, chances are “pretty high” they may replicate many of the banking industry’s past failings that resulted in systematic disparate treatment of African Americans and other marginalized consumers.

“If we’re not thoughtful and careful, we’re going to end up with redlining again,” she said.

Banking is a highly regulated industry, and banks are legally on the hook if the algorithms they use to evaluate loan applications end up inappropriately discriminating against classes of consumers, so those “at the top levels” in the field are “very focused” right now on this issue, said Mills, who closely studies the rapid changes in financial technology, or “fintech.”
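
One concrete check such a compliance team might run is the “four-fifths” disparate-impact test long used in U.S. fair-hiring and fair-lending analysis. A minimal sketch, with invented approval counts:

```python
# Sketch of the "four-fifths" disparate-impact test: flag any group whose
# approval rate falls below 80% of the highest group's rate.
def approval_rate(approved, total):
    return approved / total

rates = {
    "group_a": approval_rate(480, 1000),
    "group_b": approval_rate(330, 1000),
}

best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval {rate:.0%}, ratio to top group {ratio:.2f} [{flag}]")
```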

“They really don’t want to discriminate. They want to get access to capital to the most creditworthy borrowers,” she said. “That’s good business for them, too.”

Oversight overwhelmed

Given its power and expected ubiquity, some argue that the use of AI should be tightly regulated. But there’s little consensus on how that should be done and who should make the rules.

Thus far, companies that develop or use AI systems largely self-police, relying on existing laws and market forces, like negative reactions from consumers and shareholders or the demands of highly prized AI technical talent, to keep them in line.

“There’s no businessperson on the planet at an enterprise of any size that isn’t concerned about this and trying to reflect on what’s going to be politically, legally, regulatorily, [or] ethically acceptable,” said Fuller.

Firms already consider their own potential liability from misuse before a product launch, but it’s not realistic to expect companies to anticipate and prevent every possible unintended consequence of their product, he said.

Few think the federal government is up to the job, or will ever be.

“The regulatory bodies are not equipped with the expertise in artificial intelligence to engage in [oversight] without some real focus and investment,” said Fuller, noting the rapid rate of technological change means even the most informed legislators can’t keep pace. Requiring every new product using AI to be prescreened for potential social harms is not only impractical, but would create a huge drag on innovation.

“I wouldn’t have a central AI group that has a division that does cars, I would have the car people have a division of people who are really good at AI.”

— Jason Furman, a professor of the practice of economic policy at the Kennedy School and a former top economic adviser to President Barack Obama

Jason Furman, a professor of the practice of economic policy at Harvard Kennedy School, agrees that government regulators need “a much better technical understanding of artificial intelligence to do that job well,” but says they could do it.

Existing bodies like the National Highway Traffic Safety Administration, which oversees vehicle safety, could handle potential AI issues in autonomous vehicles, for example, rather than a single watchdog agency, he said.

“I wouldn’t have a central AI group that has a division that does cars, I would have the car people have a division of people who are really good at AI,” said Furman, a former top economic adviser to President Barack Obama.

Though keeping AI regulation within industries does leave open the possibility of co-opted enforcement, Furman said industry-specific panels would be far more knowledgeable about the overarching technology of which AI is simply one piece, making for more thorough oversight.

While the European Union already has rigorous data-privacy laws and the European Commission is considering a formal regulatory framework for ethical use of AI, the U.S. government has historically been late when it comes to tech regulation.

“I think we should’ve started three decades ago, but better late than never,” said Furman, who thinks there needs to be a “greater sense of urgency” to make lawmakers act.

Business leaders “can’t have it both ways,” refusing responsibility for AI’s harmful consequences while also fighting government oversight, Sandel maintains.

“The problem is these big tech companies are neither self-regulating, nor subject to adequate government regulation. I think there needs to be more of both,” he said, later adding: “We can’t assume that market forces by themselves will sort it out. That’s a mistake, as we’ve seen with Facebook and other tech giants.”

Last fall, Sandel taught “Tech Ethics,” a popular new Gen Ed course with Doug Melton, co-director of Harvard’s Stem Cell Institute. As in his legendary “Justice” course, students consider and debate the big questions about new technologies, everything from gene editing and robots to privacy and surveillance.

“Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications — not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives,” said Sandel.

Doing that will require a major educational intervention, both at Harvard and in higher education more broadly, he said.

“We have to enable all students to learn enough about tech and about the ethical implications of new technologies so that when they are running companies or when they are acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermines a decent civic life.”

Next: The AI revolution in medicine may lift personalized treatment, fill gaps in access to care, and cut red tape. Yet risks abound.

FAQs

What are the ethical concerns about AI?

The legal and ethical issues that AI raises for society include privacy and surveillance, bias and discrimination, and, perhaps the deepest philosophical question, the role of human judgment.

What is the most serious AI ethical concern?

Automated decisions / AI bias

Biased AI algorithms can lead to discrimination against minority groups.

What is the biggest AI ethical concern related to data?

At the moment, the biggest ethical concern relating to data is bias in the data itself. For example, if an AI system uses historical data to suggest where police officers should patrol next, it may pick up on records of racist policing practices and conclude that police should always look to Black neighborhoods for crime.
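
That feedback loop can be simulated in a few lines. In this toy sketch the two neighborhoods have identical true crime rates, but because arrests can only occur where police patrol, the model’s initial skew compounds; all numbers are invented:

```python
# Toy simulation of a predictive-policing feedback loop: patrols follow past
# arrest data, and arrests happen only where patrols are.
import random

random.seed(3)

true_crime_rate = {"neighborhood_1": 0.10, "neighborhood_2": 0.10}  # identical
arrests = {"neighborhood_1": 12, "neighborhood_2": 3}  # skewed historical data

for day in range(1000):
    # "Model": patrol wherever past arrests are highest.
    patrol = max(arrests, key=arrests.get)
    # Arrests can only occur where police actually are.
    if random.random() < true_crime_rate[patrol]:
        arrests[patrol] += 1

print(arrests)  # the initial skew compounds; neighborhood_2 never catches up
```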

What are the biggest concerns with AI?

These are the most common problems with AI development and implementation you might encounter and ways in which you can manage them:
  • Determining the right data set. ...
  • The bias problem. ...
  • Data security and storage. ...
  • Infrastructure. ...
  • AI integration. ...
  • Computation. ...
  • Niche skillset. ...
  • Expensive and rare.

Can AI make ethical decisions?

For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

What is AI ethics? Explain with an example.

AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology. As AI has become integral to products and services, organizations are starting to develop AI codes of ethics.

Can we have an AI system without ethical concerns?

You cannot simply have AI ethics. It requires real ethical due diligence at the organizational level—perhaps, in some cases, even industry-wide reflection. Until this does occur, we can look forward to many future AI scandals and failures.

Should you follow AI ethics?

Algorithms can enhance already existing biases. They can discriminate. They can threaten our security, manipulate us and have lethal consequences. For these reasons, people need to explore the ethical, social and legal aspects of AI systems.

What are some concerns about AI?

Risks of Artificial Intelligence
  • Automation-spurred job loss.
  • Privacy violations.
  • 'Deepfakes'
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Market volatility.
  • Weapons automation.

Why is ethics in AI so important?

Ethics is as important in the world of AI as it is in human society, if not more so. To keep AI from going rogue and out of our control, we need to build ethics into the systems themselves, so that the movie I, Robot does not one day become reality.

What are the top 10 principles for ethical artificial intelligence?

Knowledge and behaviour: the 10 principles of ethical AI
  • Interpretability. ...
  • Reliability and robustness. ...
  • Security. ...
  • Accountability. ...
  • Beneficiality. ...
  • Privacy. ...
  • Human agency. ...
  • Lawfulness.

How do you ensure ethical use of AI?

  1. Start with education and awareness about AI. Communicate clearly with people (externally and internally) about what AI can do and its challenges. ...
  2. Be transparent. This is one of the biggest things I stress with every organization I work with. ...
  3. Control for bias. ...
  4. Make it explainable (see the sketch after this list). ...
  5. Make it inclusive. ...
  6. Follow the rules.
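
On “make it explainable”: here is a minimal sketch of what an explanation can look like for a simple additive score, with invented weights and features. Real models need per-feature attribution tools, but the goal is the same, a decision a human can trace:

```python
# Sketch of explainability: report how much each input contributed to a
# simple additive credit score. Weights and features are invented.
WEIGHTS = {"on_time_payments": 2.0, "debt_ratio": -3.0, "years_in_business": 0.5}

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    for f, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {f}: {c:+.2f}")

applicant = {"on_time_payments": 0.9, "debt_ratio": 0.6, "years_in_business": 4}
print("score:", round(score(applicant), 2))
explain(applicant)
```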

How do you overcome ethical issues in AI?

How to Operationalize Ethical AI?
  1. Ethics Council. There should be a committee, such as a governance board, that can oversee fairness, privacy, cyber risk, and other data-related risks and issues. ...
  2. Ethical AI Framework. ...
  3. Optimize guidance and tools. ...
  4. Awareness.

How is AI affecting decision-making?

Artificial intelligence adds a great deal to decision-making. It makes the process clearer, faster, and more data-driven. Empowered with AI, you can make small decisions on the go, solve complex problems, initiate strategic changes, evaluate risks, and assess your entire business performance.

How does AI affect human decision-making?

AI can handle anomaly detection, data crunching, complex analysis, and spotting trends. The final decisions are then either fully automated or handed back to a human.
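
A small sketch of that anomaly-detection role, using only the standard library: flag readings more than two standard deviations from the mean and leave the final call to a person. The readings are synthetic:

```python
# Sketch of anomaly detection: flag readings far from the mean.
from statistics import mean, stdev

readings = [100, 102, 98, 101, 99, 100, 97, 165, 103, 100]
mu, sigma = mean(readings), stdev(readings)

for i, r in enumerate(readings):
    if abs(r - mu) > 2 * sigma:
        print(f"reading {i} = {r} is anomalous (mean {mu:.1f}, sd {sigma:.1f})")
```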

Who is responsible for AI ethics?

Which function is primarily responsible for AI ethics? CEOs (28%) are viewed as most accountable for AI ethics by those surveyed, but board members (10%), general counsels (10%), privacy officers (8%), and risk and compliance officers (6%) are also named.

Is artificial intelligence ethical or unethical? Why?

AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment, said Sandel, who teaches a course in the moral, social, and political implications of new technologies.

What are the Top 5 ethical issues in Information Technology?

These are explained, with their effects, below:
  • Personal Privacy: It is an important aspect of ethical issues in information technology. ...
  • Access Right: The second aspect of ethical issues in information technology is access right. ...
  • Harmful Actions: ...
  • Patents: ...
  • Copyright: ...
  • Trade Secrets: ...
  • Liability: ...
  • Piracy:

What are the 3 AI ethics?

For example, Mastercard's Jha said he is currently working with the following tenets to help develop the company's current AI code of ethics: An ethical AI system must be inclusive, explainable, have a positive purpose and use data responsibly.

What are the 3 major AI issues?

Humans built every form of intelligence they could into AI systems, and some now feel threatened by what they created:
  • Threat to Privacy. ...
  • Threat to Human Dignity. ...
  • Threat to Safety.

What is an ethical concern?

An ethical issue is a circumstance in which a moral conflict arises in the workplace; thus, it is a situation in which a moral standard is being challenged. Ethical issues in the workplace occur when a moral dilemma emerges and must be resolved within a corporation.

What is the impact of AI on society?

Artificial intelligence's impact on society is widely debated. Many argue that AI improves the quality of everyday life by doing routine and even complicated tasks better than humans can, making life simpler, safer, and more efficient.
