SAN FRANCISCO — Google, reeling from an employee protest over the use of artificial intelligence for military purposes, said Thursday that it would not use A.I. for weapons or for surveillance that violates human rights. But it will continue to work with governments and the military.
The new rules were part of a set of principles Google unveiled relating to the use of artificial intelligence. In a company blog post, Sundar Pichai, the chief executive, laid out seven objectives for its A.I. technology, including "avoid creating or reinforcing unfair bias" and "be socially beneficial."
Google also detailed applications of the technology that the company will not pursue, including A.I. for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and "technologies that gather or use information for surveillance violating internationally accepted norms of human rights."
But Google said it would continue to work with governments and the military using A.I. in areas including cybersecurity, training and military recruitment.
"We recognize that such powerful technology raises equally powerful questions about its use. How A.I. is developed and used will have a significant impact on society for many years to come," Mr. Pichai wrote.
Tensions over the potential uses of artificial intelligence bubbled over at Google after the company secured a contract to work on the Pentagon's Project Maven program, which uses A.I. to interpret video images and could be used to improve the targeting of drone strikes.
More than 4,000 Google employees signed a petition protesting the contract, and some employees resigned.
In response, Google said it would not seek to renew the Maven contract when it expires next year and pledged to draft a set of guidelines for appropriate uses of A.I.
Mr. Pichai did not address the Maven program or the pressure from employees. It is not clear whether the guidelines would have precluded Google from pursuing the Maven contract, since the company has repeatedly said its work for the Pentagon was not for "offensive purposes."
Google has bet its future on artificial intelligence, and company executives believe the technology could have an impact comparable to the development of the internet.
Google has promoted the benefits of artificial intelligence for tasks such as the early diagnosis of disease and the reduction of spam in email. It has also experienced some of the perils associated with A.I., including YouTube recommendations pushing users toward extremist videos and Google Photos image-recognition software categorizing black people as gorillas.
"We will reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles," the company said.
Like most of the top corporate A.I. labs, which are loaded with current and former academics, Google openly publishes much of its A.I. research. That means others can recreate and reuse many of its methods and ideas. But Google is joining other labs in saying it may hold back certain research if it believes others will misuse it.
DeepMind, a top A.I. lab owned by Google's parent company, Alphabet, is considering whether it should refrain from publishing certain research because it may be dangerous.
OpenAI, a lab founded by the Tesla chief executive Elon Musk and others, recently released a new charter indicating it could do much the same, even though it was founded on the principle that it would openly share all of its research.
Content retrieved from: The New York Times