Responding to the understandable concern that such technology could be used for more efficient killing of human beings, Pichai made it clear that the firm will not design or deploy AI in technologies that are likely to cause overall harm.
"Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," Pichai writes.
He went on to specifically rule out the use of Google AI in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
In addition to addressing concerns about military uses of AI, Pichai wrote that Google's software will not be used in technologies that gather or use information for surveillance in ways that violate internationally accepted norms.
Read the blog post at: blog.google