Just earlier today, Google's Sundar Pichai laid down the company's new AI manifesto. While it may seem like little more than a verbose response to the recent military AI contracts, the new set of principles is more accurately an answer to questions raised last year by Sergey Brin in Alphabet's 2017 Founders' Letter. In it, Brin speculated on the impact of machine learning and AI, as well as the problems and expectations Google faced in developing the new technology. Now, a year later, the company has revealed its objectives and its limits.

In the wake of AI fear (mongering?), Google has set down its own rules.

Machine learning and artificial intelligence present a difficult moral situation. Google recognizes that AI can tackle serious problems in ways nothing else can, literally saving lives in many applications, but it also understands that specific questions and concerns have been raised about its use. To that end, Google has established seven "principles" that will allegedly guide it going forward. And they're not just pie-in-the-sky science fiction abstractions like the "Laws of Robotics"; they're real guidelines the company believes it can use for both engineering and business decisions.

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles, based on the following additional factors:
    1. What is the primary use?
    2. Is it unique?
    3. Will it have a significant impact?
    4. What will Google's involvement be?

The TL;DR here is that there is no TL;DR. Google's new rules are actually fairly detailed and, at the same time, fairly short: just read them. While they could be summarized with something generic like "Google resolves to do no harm with AI," that would be a gross oversimplification of the specificity behind these new guidelines.

To be clear, Google understands that, at some level, any new technology, including AI, could introduce "harm" in the abstract: harm to vested interests disrupted by a new technology, harm to maladapted economies or groups, harm from errors in oversight, and unavoidable or unforeseen "harm" stemming from unexpected use. To believe otherwise would be shortsighted, and the only way to mitigate any and all harm would be to avoid pursuing new technologies like AI entirely.

In addition to these new rules, Google has established a set of AI use cases that it explicitly will not pursue, including technologies likely to cause "overall harm" (such as weapons), technologies that enable surveillance beyond "accepted norms" (whatever doublespeak that means), and any application that violates the principles of international law or human rights. However, while Google won't be working on AI for use in weapons, it will continue working with governments and militaries in other, already disclosed areas, including cybersecurity, healthcare, and training.

Google also understands that establishing limits for AI isn't a one-time declaration; it's an ongoing conversation that will evolve as understanding changes, and it plans to draw on perspectives from multiple fields and disciplines going forward. But whatever changes it may make in the future, Google's first game plan for AI-related ethics has been established.

Source: Google