Engineers, scientists and others working for major US tech platforms have been in a rebellious mood lately. They do not want their firms to work for the Pentagon, nor to be dragooned into the fight against illegal immigration through facial recognition technology. In recent weeks they have forced their CEOs to take a stand and lay down red lines in various areas of artificial intelligence (AI).
Many of the technologies we carry around in our pockets today, and from which Silicon Valley derives a steady income, have a military origin: the Internet, GPS and touch screens, to name just a few, not to mention older developments such as microwaves and the early impetus given to computing. DARPA (the Defense Advanced Research Projects Agency), which reports to the Pentagon and had a budget of US$3.18 billion in 2018, plays a major role in this regard. Have things changed?
Pressure groups such as the Tech Workers Coalition and Coworker.org have enabled a degree of mobilisation that is unprecedented in terms of speed and scope, with a level of self-organisation that the large corporations will from now on have to contend with. At Google, thousands of workers signed a public letter calling on its CEO, Sundar Pichai, to end its participation in the Algorithmic Warfare Cross-Functional Team, known as Project Maven, a contract with the Pentagon to create a ‘customised AI surveillance engine’ for military drones. They argued that Google ‘should not be in the business of war’, warning that the ‘Google brand’ and its ‘ability to compete for talent’ would otherwise be damaged.
At Amazon, in the wake of the separation of illegal immigrants’ young children from their parents ordered by the Trump Administration, thousands of employees asked its CEO, Jeff Bezos, to halt all sales of facial recognition software to the government, because its Rekognition tool could be used unjustly against immigrants. Employees at Microsoft wrote their own letter to protest against its contract with ICE (Immigration and Customs Enforcement). But all major corporations in the US (as well as in Europe and China, where the most advanced surveillance state is being developed) are investing in these technologies, which also have clear civil applications.
Those who have possibly gone furthest in responding to these employee rebellions are the CEOs of Google and Microsoft, Sundar Pichai and Satya Nadella. Last June, Pichai published on his blog a set of ‘principles’ for implementing the AI that Google is developing. The first sentence of the Google Code of Conduct released in 2000 was ‘don’t be evil’; it was withdrawn last May. Under the new principles, however, the AI applications that Google will not design or deploy include:
- ‘Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- ‘Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- ‘Technologies that gather or use information for surveillance, violating internationally accepted norms.
- ‘Technologies whose purpose contravenes widely accepted principles of international law and human rights.’
Eventually the employees succeeded in persuading Google to abandon Project Maven.
Two weeks ago the President of Microsoft, Bradford L. Smith, became one of the first industry leaders to call for an entire area to be regulated, something unusual in itself: government rules on facial recognition technology. In addition to the benefits they promise, he argued, these technologies have a sinister side: they enable surveillance by the state, can be used without the explicit consent of those they are applied to, and can even ‘perpetuate racial discrimination’. The EU has made far more progress on the issue of consent (although its actual impact in practice remains to be seen). Facebook, which has been criticised for its role in fostering hatred in countries such as Sri Lanka and Myanmar, has launched a plan to block content that incites hatred, although not to the extent of censoring Holocaust denial on its platform.
To what can we ascribe this outbreak of ethics in Silicon Valley, a place where secrecy generally prevails, and elsewhere? The political awakening spread and went viral. The spark may have been the anti-immigration policies of President Trump, which Nadella himself described as ‘cruel and abusive’. More than half of the most valuable tech companies in Silicon Valley were founded and/or are run by first- or second-generation immigrants, and many of the engineers and other employees who work for the major tech firms were not born in the US. Talent counts above all else. The idea that working there should help change the world for the better is also prevalent.
There is also a degree of pacifism in the air, not unlike that which has arisen among Japanese technology researchers, especially amid the growing militarisation of AI, above all when it is not defensive. The original Internet itself had this defensive dimension: a decentralised communications network designed to withstand a nuclear attack. All this despite the fact that facial recognition, for example, has reduced, although not eradicated, the collateral casualties caused by US drone strikes.
There is arguably a certain parallel with the aftermath of the Manhattan Project, which was entrusted with building the atomic bomb during World War II; after the bomb was dropped on Hiroshima and Nagasaki, many of the scientists involved criticised its use. Many in Silicon Valley likewise baulk at taking part in battlefield programmes, or at fostering military technology or excessive security. The movement against autonomous weapons, the so-called ‘killer robots’, is growing apace, and not only in the US.
It has been argued that the opposition to Project Maven could be the ‘#MeToo moment’ for tech employees in the US. Ethics is becoming ever more prominent in the debate about the impact of new technologies, and that debate has only just begun. Autonomous weapons, the ideological bias of algorithms and blanket surveillance, to stay within the field of security, have taken on great significance, together with more positive aspects such as the capacity to save the lives of civilians and soldiers, and the ever more pressing issue of cybersecurity. Companies and their employees are no longer on the sidelines. Robert McGinn put them on notice with his book The Ethical Engineer, a must-read for such professionals.