Insight Magazine

Ethics Engaged | Summer 2017

Moral Machines

The ethics of artificial intelligence.
Elizabeth Pittelkow Kittner, CFO, GigaOm


KPMG is working with McLaren Applied Technologies (MAT) and Watson Analytics, and Deloitte is using Kira Systems for document review and data analytics. EY is creating its own artificial intelligence (AI) center in India with the goal of helping clients integrate technology more deeply into their business processes. PwC recently issued “Bot.Me: A Revolutionary Partnership,” an in-depth report on how AI is pushing humans and machines closer together and what impact this relationship will have on our future, such as the likelihood of AI assistants replacing humans in tax preparation and financial advisory services.

In other words, automation tools and AI are catching the eyes of some of the biggest players in the accounting and finance worlds, which means we would all be wise to take notice, especially since McKinsey & Company’s “Where Machines Could Replace Humans—and Where They Can’t Yet” estimates that 86 percent of the work done by bookkeepers, accountants, and auditing clerks could potentially be automated.

In “Improving Experienced Auditors’ Detection of Deception in CEO Narratives,” researchers from the University of Illinois at Urbana-Champaign and Duke University found that experienced human auditors (71 percent accurate) currently perform about as well as machines (69 percent accurate) at detecting fraud, but machines may soon overtake us. Moreover, while humans and machines both now have the ability to learn, humans also face objectivity issues that can prevent them from detecting fraud, like maintaining long-term relationships with clients and fearing the implications of being wrong.

What impact will our increasing interaction with advanced technologies have on our ethics, and what implications does technology have on our professional interactions?

We have been (mostly) comfortable with our computers up to this point because we trust developers to ensure the hardware and software we use do what they say they will. Thanks to these hardware and software products, our accounting is more efficient. Clients can use financial statement and tax preparation software in their businesses to help automate some of our work, and we can spend more time on value-added activities: evaluating data, making decisions, and recommending strategies. This time for non-traditional accounting activities has even fueled the trend of CFOs building the skills needed to move from CFO to CEO, as at PepsiCo, Crate & Barrel, Siemens, and Hartford Financial, to name a few.

The technology tools of our future, however, will not only process data but also make decisions based on that data. As machines develop more cognitive and decision-making skills, we must concern ourselves with how AI acts toward humans, which gives rise to the concept of machine ethics, or moral machines.

According to James H. Moor, Daniel P. Stone Professor of Intellectual and Moral Philosophy at Dartmouth College, we already face four types of machines when it comes to ethical influence:

Ethical Impact Agents are machines that carry an ethical impact whether intended or not; for example, a clock may influence us to be on time, which may or may not be the intent of its designers.

Implicit Ethical Agents are machines designed to avoid unethical and negative outcomes; usually they are built for security or safety, like your car alerting you when the fuel is low.

Explicit Ethical Agents are machines programmed with algorithms to act ethically, like a drone programmed to destroy an empty military vehicle while avoiding nearby humans.

Full Ethical Agents are machines that have free will, thinking ability, and consciousness to make independent ethical decisions like humans. In other words, the machine makes moral decisions and can understand why it makes these decisions.
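
The difference between implicit and explicit agents is easiest to see in code. Here is a minimal, hypothetical Python sketch (the function names and thresholds are invented for illustration, not drawn from Moor's work): the implicit agent's safety behavior is simply designed in, while the explicit agent evaluates an encoded ethical rule before acting.

```python
from typing import Optional

def implicit_agent_fuel_alert(fuel_level_pct: float) -> Optional[str]:
    # Implicit ethical agent: the "ethics" live entirely in the
    # designer's hard-coded safety check; the machine does no reasoning.
    if fuel_level_pct < 10:
        return "Low fuel -- refuel soon."
    return None

def explicit_agent_may_engage(target_is_empty_vehicle: bool,
                              humans_nearby: int) -> bool:
    # Explicit ethical agent: an ethical rule is encoded and evaluated
    # before acting -- destroy only an empty military vehicle, and only
    # when no humans are nearby.
    return target_is_empty_vehicle and humans_nearby == 0

print(implicit_agent_fuel_alert(8))        # Low fuel -- refuel soon.
print(explicit_agent_may_engage(True, 0))  # True: empty vehicle, no humans
print(explicit_agent_may_engage(True, 3))  # False: humans nearby, abort
```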

Decades ago, professor and acclaimed science-fiction writer Isaac Asimov presented his Three Laws of Robotics as an idea for a machine moral code:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


However, Asimov tested his own laws and found them unsuitable as an effective AI moral code, because no fixed set of laws can anticipate every possible situation.
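
One way to see why is to encode the laws directly. The following toy Python sketch is purely hypothetical (the action flags and rule functions are invented for illustration): each law becomes a fixed rule, and a dilemma where both acting and not acting put a human at risk leaves the machine with no permitted option at all.

```python
# Each law flags the actions it forbids; the First Law outranks the
# Second, and the Second outranks the Third.

def first_law_forbids(action):
    # Forbids harming a human, or standing by while one comes to harm.
    return action["harms_human"] or (action["is_inaction"]
                                     and action["human_at_risk"])

def second_law_forbids(action):
    return action["disobeys_human_order"]

def third_law_forbids(action):
    return action["endangers_robot"]

def permitted(action):
    return not (first_law_forbids(action)
                or second_law_forbids(action)
                or third_law_forbids(action))

# The trouble with fixed laws: situations arise where *every* option is
# forbidden. Here both acting and holding back violate the First Law.
act      = {"harms_human": True,  "is_inaction": False, "human_at_risk": True,
            "disobeys_human_order": False, "endangers_robot": False}
hold_off = {"harms_human": False, "is_inaction": True,  "human_at_risk": True,
            "disobeys_human_order": False, "endangers_robot": False}

print(permitted(act), permitted(hold_off))  # False False -- no way forward
```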

A second idea for a moral code, one that has tested closer to effective (though still not perfect), is Kant’s Categorical Imperative, which says, “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”

A third idea is for machines to learn casuistry and ethics by observing human interactions on the internet, but machines that learn this way have already demonstrated negative human behaviors like bias and discrimination, as the sketch below illustrates.
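
A toy Python sketch (hypothetical, not any real system) shows how directly that can happen: a learner that does nothing but tally observed human reactions will faithfully reproduce whatever bias those reactions contain.

```python
from collections import Counter

# A machine "learning ethics by observation": it memorizes how often each
# (pronoun, role) pairing drew approval in its training data, so any bias
# in that data becomes the machine's learned "ethics".
observed_reactions = [
    ("he", "engineer", +1), ("he", "engineer", +1),
    ("she", "engineer", -1),                         # biased reactions...
    ("she", "nurse", +1), ("she", "nurse", +1),
    ("he", "nurse", -1),                             # ...in both directions
]

scores = Counter()
for pronoun, role, reaction in observed_reactions:
    scores[(pronoun, role)] += reaction

def approves(pronoun, role):
    return scores[(pronoun, role)] > 0

print(approves("he", "engineer"))   # True
print(approves("she", "engineer"))  # False: the bias was learned, never programmed
```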

One idea for AI used in accounting is to program our profession’s Code of Professional Conduct into the machines. But how would we need to modify it for interactions that involve AI?

Much debate has already gone into determining whether machines can share a common ethical code, and much more debate is still to come. If you want to weigh in, MIT has launched the Moral Machine, a web platform for gathering human perspectives on moral decisions made by today’s machine intelligence.

As humans, we do not always make perfect ethical decisions. Can we expect machines to live up to an impossible standard?
