Microsoft Wrote a Book on the Ethics of AI

The good folks at Microsoft have published a book [PDF] (“The Future Computed: Artificial Intelligence and its role in society”) and an associated web site on the ethics of AI.

The Future Computed

The executive summary cites the following proposed principles:

  • Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.
  • Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
  • Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from theft.
  • Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
  • Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
  • Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed both during system design and in an ongoing manner as systems operate in the world.
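To make the fairness principle concrete: it implies something measurable. The sketch below is a minimal illustration (mine, not from the Microsoft book) of one narrow fairness check, demographic parity, which compares a model’s positive-recommendation rate across groups of similarly qualified people. All names and data in it are hypothetical.

```python
# A toy demographic-parity check: does the system recommend positively
# at the same rate for similarly qualified people across groups?
# Everything here (group labels, data) is illustrative, not from the book.

from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, recommended) pairs; recommended is a bool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, recommended in decisions:
        counts[group][1] += 1
        if recommended:
            counts[group][0] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(decisions):
    """Largest difference in positive-recommendation rate between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Six equally qualified applicants, differing only in group label.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

print(positive_rates(decisions))  # {'A': 0.666..., 'B': 0.333...}
print(parity_gap(decisions))      # 0.333... -- a gap worth investigating
```

This captures only one narrow notion of fairness, and a real audit would go much further, but it shows why the book pairs the principle with a call to understand how bias can affect AI systems: gaps like this one usually trace back to the data or the design, not to intent.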

Technology itself, of course, is just a tool, and is inherently amoral (in the sense of lacking any concept of morals).

Those of us who create it and use it are a different story.

We can choose whether to use a technology and how to use it, and all these choices have very real implications for society, both short and long term. What we can’t do is un-invent it. (Much as that occasionally seems like a good idea.)

What we can, and must, do as a society is decide on a set of ethical principles for how we will use any such technology.

This is complicated further when something is brand new or when, as in the case of AI, a technology reaches a level of critical mass it had not previously achieved.

Because of that newness, as a society we usually find that we haven’t developed these principles yet. And in our world of software the rate of change tends to be very high; it can feel like we have gone from something not existing to it being pervasive in a heartbeat.

Many people in the AI community have been debating the ethics of AI for decades, so, like AI itself as a discipline, this is not a new topic, and in many ways the Microsoft book does not break new ground. With the soon-to-be pervasiveness of AI, though, Microsoft has a global platform to bring much greater awareness to the issues, and that is to be applauded.

Some of our brightest minds, notably Elon Musk and Stephen Hawking, have very publicly raised concerns about the potential problems with AI. We should use their opinions, and books like this one, to have informed debates as we continue to push AI forward.

