Man Machine Rules

Man-Machine Rules are necessary if we want to survive as a species.

This blog post of mine was published by the Mad Scientist Initiative at the U.S. Army Training and Doctrine Command (TRADOC), a marketplace of ideas in dialogue with academia, industry, and government. Focused on the Deep Future (2035-2050), the group’s broad spectrum of interests makes it an exceptional community of forecasters, among them those interested in urbanism.

For convenience, the text of the blog appears below. It makes the case for establishing Man-Machine Rules — applicable to all technologies, from humankind’s first Paleolithic hand axe to the ultimate future of strong Artificial Intelligence (AI).

Man Machine Rules

Two hundred years of massive collateral impacts from technology have brought to the forefront of society’s consciousness the idea that some sort of rules for man-machine interaction are necessary, similar to the rules in place for gun safety, nuclear power, and biological agents. But where the physical effects of those technologies are plain to see, the power of computing is veiled in virtuality and anthropomorphization. It appears harmless, even familiar, and often virtuous.

[Image: Computing originated in the punched cards of Jacquard looms.]

Computing originated in the punched cards of Jacquard looms. Today it carries the promise of a cloud of electrons from which we make our Emperor’s New Clothes. As far back as 1842, the brilliant mathematician Ada Augusta, Countess of Lovelace (1815-1852), foresaw the potential of computers. A protégée and associate of Charles Babbage (1791-1871), conceptual originator of the programmable digital computer, she realized the “almost incalculable” ultimate potential of such machines. She also recognized that, as in all extensions of human power or knowledge, “collateral influences” occur.[i]

[Image: Avid mathematician Ada Augusta Lovelace, daughter of Lord Byron, is often called the first computer programmer. 1836 portrait by British painter Margaret Sarah Carpenter (1793-1872).]

AI confronts us with such “collateral influences.”[ii] The question is not whether machine systems can mimic human abilities and nature, but when. Will the world become dependent on ungoverned algorithms? Should there be limits to mankind’s connection to machines?[iii] As concerns mount, well-meaning politicians, government officials, and some in the field are trying to forge ethical guidelines to address the collateral challenges of data use, robotics, and AI.[iv]

A Hippocratic Oath for AI?

[Image: Asimov’s I, Robot series lists his “Three Laws of Robotics.”]

Asimov’s Laws of Robotics are merely a literary ploy to animate his storylines.[v] In the real world, Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft founded www.partnershiponai.org [vi] to ensure “… the safety and trustworthiness of AI technologies, the fairness and transparency of systems.” Data scientists from tech companies, governments, and nonprofits have gathered to draft a voluntary digital charter for their profession.[vii] Oren Etzioni, CEO of the Allen Institute for AI and a professor in the University of Washington’s Computer Science Department, has proposed a Hippocratic Oath for AI.

But such codes are composed of hard-to-enforce terms and vague goals such as using AI “responsibly and ethically, with the aim of reducing bias and discrimination.” They pay lip service to privacy and human priority over machines. They appear to sugarcoat a culture which passes the buck to the lowliest soldier.[viii]

We know that good intentions are inadequate when enforcing confidentiality. Well-meant but unenforceable ideas don’t meet business standards. It is unlikely that techies and their bosses, caught up in the magic of coding, will shepherd society through the challenges of the petabyte AI world.[ix]  Vague principles, underwriting a non-binding code, cannot counter the cynical drive for profit.[x]

Indeed, in an area that lacks authorities or legislation to enforce rules, the Association for Computing Machinery (ACM) is itself backpedaling from its own Code of Ethics and Professional Conduct. Its document weakly defines notions of “public good” and “prioritizing the least advantaged.”[xi] Microsoft’s President Brad Smith admits that his company wouldn’t expect customers of its services to meet even these standards.


In the wake of the Cambridge Analytica scandal, it is clear that coders are not morally superior to other people and that voluntary, unenforceable codes and oaths are inadequate.[xii] Programming and algorithms clearly reflect ethical, philosophical, and moral positions.[xiii] It is false to assume that the so-called “openness” trait of programmers reflects a broad mindfulness. There is nothing heroic about “disruption for disruption’s sake” or hiding behind “black box computing.”[xiv] The future cannot be left up to an adolescent-centric culture in an economic system that rests on greed.[xv] The society that adopts “electronic personhood” deserves it.

Machines are Machines, People are People

After 200 years of the technology tail wagging the humanity dog, it is now apparent that we are replaying history – and don’t know it. Most human cultures have been intensively engaged with technology since before the Iron Age, 3,000 years ago. We have been keenly aware of technology’s collateral effects mostly since the Industrial Revolution, but have not yet created general rules for how we want machines to impact individuals and society. The blurring of reality and virtuality that AI brings to the table might finally prompt us to do so.

Distinctions between the real and the virtual must be maintained if the behavior of even the most sophisticated computing machines and robots is to be captured by legal systems. Nothing in the virtual world should be considered real, any more than we believe that the hallucinations of a drunk or drugged person are real.

The simplest way to maintain the distinction is to remember that the real IS, the virtual ISN’T, and that virtual mimesis is produced by machines. Lovelace reminded us that machines are just machines. While giving machines personhood might lead to the collapse of humanity only in a dark, distant future, Harari’s Homo Deus warns us that AI, robotics, and automation are quickly bringing the economic value of humans to zero.[xvi]

[Image: From MIT Technology Review]

From the start of civilization, tools and machines have been used to reduce human drudgery and increase production efficiency. But while tools and machines obviate the physical aspects of human work in producing goods or processing information, they in no way affect the truth of humans as sentient and emotional living beings, nor the value of the transactions among them.

The man-machine line is further blurred by our anthropomorphizing of machinery, computing, and programming. We speak of machines in terms of human traits and make programming analogous to human behavior. But there is nothing amusing about GIGO experiments like MIT’s psychotic bot Norman or Microsoft’s fascist Tay.[xvii] Technologists who fall into the trap of believing that AI systems can make decisions are like children playing with dolls, marveling that “their dolly is speaking.”

[Image: Microsoft’s Tay AI Chatter Bot]

Machines don’t make decisions. Humans do. They may accept suggestions made by machines, and when they do, they are responsible for the decisions made. People are and must be held accountable, especially those hiding behind machines. The Holocaust taught us that one can never say, “I was just following orders.”

Nothing less than enforceable operational rules is required for any technical activity, including programming. This is especially important for tech companies, since evidence suggests that they take ethical questions to heart only under direct threats to their balance sheets.[xviii]

When virtuality offers experiences that humans perceive as real, the outcomes are the responsibility of the creators and distributors, no less than those of tobacco companies selling cigarettes or of pharmaceutical companies and cartels selling addictive drugs. Individuals do not have the right to risk the well-being of others to satisfy their need to comply with clichés such as “innovation” and “disruption.”

Nuclear, chemical, biological, gun, aviation, machine, and automobile safety rules do not rely on human nature. They are based on technical rules and procedures. They are enforceable, and moral responsibility is typically carried by the hierarchies of their organizations.[xix]

As we master artificial intelligence, human intelligence must take charge.[xx] The highest values known to mankind remain human life and the qualities and quantities necessary for the best individual life experience.[xxi] For the transactions and transformations in which technology assists, we need simple operational rules to regulate the actions and manners of individuals. Moving the focus to human interactions empowers individuals and society.

Man-Machine Rules


Man-Machine Rules should address any tool or machine ever made or to be made. They would be equally applicable to any technology of any period, from the first flaked stone to the ultimate predictive “emotion machines.” They would be adjudicated by common law.[xxii]

  1. All material transformations and human transactions are to be conducted by humans.
  2. Humans may directly employ hand/desktop/workstation devices in the above.
  3. At all times, an individual human is responsible for the activity of any machine or program.
  4. Responsibility for errors, omissions, negligence, mischief, or criminal-like activity is shared by every person in the organizational hierarchical chain, from the lowliest coder or operator, to the CEO of the organization, and its last shareholder.
  5. Any person can shut off any machine at any time.
  6. All computing is visible to anyone [No Black Box].
  7. Personal Data are things. They belong to the individual who owns them, and any use of them by a third party requires permission and compensation.
  8. Technology must age before common use, until an Appropriate Technology is selected.
  9. Disputes must be adjudicated according to Common Law.
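
To make the intent of Rules 3, 5, and 6 more concrete, here is a minimal illustrative sketch in Python. It is not part of the proposed rules, and every name in it (DecisionRecord, AdvisoryMachine, and so on) is hypothetical; it merely shows one way software could record every machine suggestion against a named, accountable human, keep that record open to anyone, and let any person shut the machine off at any time.

```python
# Hypothetical sketch of Rules 3, 5, and 6 in code; names and structure are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class DecisionRecord:
    """One machine suggestion and the named human who accepted or rejected it (Rule 3)."""
    suggestion: str
    responsible_human: str  # always an individual person, never "the system"
    accepted: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


class AdvisoryMachine:
    """A machine that only advises; its full log is readable by anyone (Rule 6)."""

    def __init__(self) -> None:
        self.audit_log: List[DecisionRecord] = []
        self.running = True

    def suggest(self, suggestion: str, responsible_human: str, accepted: bool) -> None:
        """Record a suggestion together with the human decision about it."""
        if not self.running:
            raise RuntimeError("Machine has been shut off.")
        self.audit_log.append(DecisionRecord(suggestion, responsible_human, accepted))

    def shut_off(self, person: str) -> None:
        """Any person may stop the machine at any time (Rule 5)."""
        self.running = False
        print(f"Machine shut off by {person}.")


if __name__ == "__main__":
    machine = AdvisoryMachine()
    machine.suggest("Approve loan application #1", responsible_human="J. Smith", accepted=True)
    machine.shut_off("any bystander")
    for record in machine.audit_log:  # no black box: the log is open to all
        print(record)
```

The design choice, not the code, is the point of the sketch: the machine only records and advises, while responsibility stays attached to a person and to the chain above that person.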

Machines are here to help and advise humans, not replace them, and humans may exhibit a spectrum of responses to them.  Some may ignore a robot’s advice and put others at risk. Some may follow recommendations to the point of becoming a zombie. But either way, Man-Machine Rules are based on and meant to support free, individual human choices.

Man-Machine Rules can help organize dialogue around questions such as how to secure personal data. Do we need hardcopy and analog formats? How ethical are chips embedded in people and in their belongings? What degrees of personal freedom and personal risk are acceptable, and what controls can be contemplated? Will consumer rights and government organizations audit algorithms?[xxiii] Should equipment sabbaticals be enacted to maintain societal and economic balance?

The idea that we can fix the tech world through a voluntary ethical code emerging from within it paradoxically expects that the people who created the problems will fix them.[xxiv] The question is not whether the focus should shift to human interactions, leaving more humans in touch with their destiny, but at what cost. If not now, when? If not by us, by whom?

The FY18 Mad Scientist Laboratory Anthology offers readers articles on futures-oriented technology, trends, and potentially game-changing innovations. Each has a wealth of links to relevant videos, podcasts, conference proceedings, and presentations. Given the 100-year perspective of Classic Planning, understanding technology is essential to urbanism.

Notes

[i] Lovelace, Ada Augusta, Countess, Sketch of The Analytical Engine Invented by Charles Babbage by L. F. Menabrea of Turin, Officer of the Military Engineers, With notes upon the Memoir by the Translator, Bibliothèque Universelle de Genève, October, 1842, No. 82.

[ii] Oliveira, Arlindo, in Pereira, Vitor, Hippocratic Oath for Algorithms and Artificial Intelligence, Medium.com (website), 23 August 2018, https://medium.com/predict/hippocratic-oath-for-algorithms-and-artificial-intelligence-5836e14fb540; Middleton, Chris, Make AI developers sign Hippocratic Oath, urges ethics report: Industry backs RSA/YouGov report urging the development of ethical robotics and AI, computing.co.uk (website), 22 September 2017, https://www.computing.co.uk/ctg/news/3017891/make-ai-developers-sign-a-hippocratic-oath-urges-ethics-report; N.A., Do AI programmers need a Hippocratic oath?, Techhq.com (website), 15 August 2018,  https://techhq.com/2018/08/do-ai-programmers-need-a-hippocratic-oath/

[iii] Oliveira, 2018; Dellot, Benedict, A Hippocratic Oath for AI Developers? It May Only Be a Matter of Time, Thersa.org (website), 13 February 2017, https://www.thersa.org/discover/publications-and-articles/rsa-blogs/2017/02/a-hippocratic-oath-for-ai-developers-it-may-only-be-a-matter-of-time; See also: Clifford, Catherine, Expert says graduates in A.I. should take oath: ‘I must not play at God nor let my technology do so’, Cnbc.com (website), 14 March 2018, https://www.cnbc.com/2018/03/14/allen-institute-ceo-says-a-i-graduates-should-take-oath.html; Johnson, Khari, AI Weekly: For the sake of us all, AI practitioners need a Hippocratic oath, Venturebeat.com (website), 23 March 2018, https://venturebeat.com/2018/03/23/ai-weekly-for-the-sake-of-us-all-ai-practitioners-need-a-hippocratic-oath/; Work, Robert O., former deputy secretary of defense, in Metz, Cade, Pentagon Wants Silicon Valley’s Help on A.I., New York Times, 15 March 2018.

[iv] Schotz, Mai, Should Data Scientists Adhere To A Hippocratic Oath?, Wired.com (website), 8 February 2018, https://www.wired.com/story/should-data-scientists-adhere-to-a-hippocratic-oath/; du Preez, Derek, MPs debate ‘hippocratic oath’ for those working with AI, Government.diginomica.com (website), 19 January 2018, https://government.diginomica.com/2018/01/19/mps-debate-hippocratic-oath-working-ai/

[v] 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov, Isaac, Runaround, in I, Robot, The Isaac Asimov Collection ed., Doubleday, New York City, p. 40.

[vi] Middleton, 2017.

[vii] Etzioni, Oren, A Hippocratic Oath for artificial intelligence practitioners, Techcrunch.com (website), 14 March 2018. https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/?platform=hootsuite

[viii] Do AI programmers need a Hippocratic oath?, Techhq, 2018.

[ix] Goodsmith, Dave, quoted in Schotz, 2018.

[x] Schotz, 2018.

[xi] Do AI programmers need a Hippocratic oath?, Techhq, 2018. Wheeler, Schaun, in Schotz, 2018.

[xii] Gnambs, T., What makes a computer wiz? Linking personality traits and programming aptitude, Journal of Research in Personality, 58, 2015, pp. 31-34.

[xiii] Oliveira, 2018.

[xiv] Jarrett, Christian, The surprising truth about which personality traits do and don’t correlate with computer programming skills, Digest.bps.org.uk (website), British Psychological Society, 26 October 2015, https://digest.bps.org.uk/2015/10/26/the-surprising-truth-about-which-personality-traits-do-and-dont-correlate-with-computer-programming-skills/; Johnson, 2018.

[xv] Do AI programmers need a Hippocratic oath?, Techhq, 2018.

[xvi] Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker, 2015.

[xvii] That Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study in the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms, is not an excuse. Microsoft’s AI Twitter bot Tay had to be deleted after it started making sexual references and declarations such as “Hitler did nothing wrong.”

[xviii] Schotz, 2018.

[xix] See the example of Dr. Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, who claims no responsibility in determining the application of the work she creates. She might as well be feeding children shards of glass, saying, “It is their choice to eat it or not.” In Middleton, 2017. The principle is that the risk of an unfavorable outcome lies with the individual as well as the entire chain of command, direction, and/or ownership of their organization, including shareholders of public companies and citizens of states. Everybody has responsibility the moment they engage in anything that could affect others. Regulatory “sandboxes” for AI developer experiments – equivalent to pathogen or nuclear labs – should have the same types of controls and restrictions. Dellot, 2017.

[xx] Oliveira, 2018.

[xxi] The sentience and sensibilities of other beings are recognized here, but not addressed.

[xxii]  The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976), part of the International Bill of Human Rights, which include the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). International Covenant on Economic, Social and Cultural Rights, http://www.refworld.org.; EISIL International Covenant on Economic, Social and Cultural Rights, http://www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN. 3 January 1976; Fact Sheet No.2 (Rev.1), The International Bill of Human Rights, UN OHCHR. June 1996.

[xxiii] Dellot, 2017.

[xxiv] Schotz, 2018.
