‘Artificial Intelligence (AI) is transforming human society in fundamental and profound ways. Not since the Age of Reason have we changed how we approach security, economics, order and even knowledge itself.’ So states the inside front cover of The Age of AI And Our Human Future.
[Listen to the audio version, read by David Hodes]
When Henry Kissinger, Eric Schmidt and Daniel Huttenlocher write a book and make such a statement, it pays to take notice. After all, Kissinger led the charge on the development of the nuclear non-proliferation treaty; Schmidt, as CEO, turned Google into the behemoth it is today; and Huttenlocher is the inaugural dean of the MIT Schwarzman College of Computing.
Five years ago, they were at a conference where one of the streams was about AI. As they tell it, one of them was about to miss the lecture, thinking that it would be ‘a technical discussion beyond the scope of his usual concerns’. Nevertheless, his colleague urged him to attend, explaining that ‘AI would soon affect nearly every field of human endeavour’. This book is the fruit of many conversations that flowed from that conference and provides profound insight into just how much AI will change the world as we know it.
Think about that clause in the opening statement, ‘not since the Age of Reason…’. The authors write extensively about how the Enlightenment represented an epochal shift from all that had gone before. The first scientific revolution, symbolised by Galileo’s defence of the heliocentric model, coupled with Gutenberg’s invention of the printing press, heralded a root-and-branch rewiring of our understanding of what it means to be human. Until the advent of AI, those foundations infused every discovery and invention we take for granted in our 21st-century civilisation. Human reason, the scientific method and empirical evidence have provided us with almost everything we commonly identify as the means to material prosperity.
The authors make a convincing case that everything we know about ourselves and our place in the world is already changing in ways we are only beginning to comprehend. The table of contents gives a flavour:
Chapter 1: Where We Are
Chapter 2: How We Got Here: Technology and Human Thought
Chapter 3: From Turing to Today – and Beyond
Chapter 4: Global Network Platforms
Chapter 5: Security and World Order
Chapter 6: AI and Human Identity
Chapter 7: AI and the Future
The opening chapter introduces us to three specific examples of AI at work. First, an AI chess program called AlphaZero, which differed from legacy chess programs in having no pre-programmed moves derived from human play.
‘AlphaZero’s style was entirely the product of AI training: its creators supplied it with the rules of chess, instructing it to develop a strategy to maximise its proportion of wins to losses. After training for just four hours playing against itself, AlphaZero emerged as the world’s most effective chess program… it sacrificed pieces human players thought essential, including its queen. It executed moves humans had not instructed it to consider and, in many cases, humans had not considered at all.’
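To make the idea of learning purely through self-play concrete, here is a minimal, hypothetical sketch in Python. It is emphatically not AlphaZero, which pairs deep neural networks with Monte Carlo tree search on specialised hardware; it merely illustrates the principle the authors describe, applied to noughts and crosses: the program is given only the rules and the result of each game, and it improves by playing against itself.

```python
# Hypothetical toy example: learning a game purely through self-play.
# This is NOT AlphaZero; it only illustrates the principle the authors
# describe, applied to noughts and crosses.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

V = {}                       # position -> value for the player about to move
ALPHA, EPSILON = 0.3, 0.1    # learning rate and exploration rate

def move_value(board, move, player):
    """How good `move` looks from `player`'s point of view."""
    nxt = board[:]
    nxt[move] = player
    w = winner(nxt)
    if w == player:
        return 1.0
    if '.' not in nxt:                        # draw
        return 0.0
    return -V.get(''.join(nxt), 0.0)          # opponent moves next

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if random.random() < EPSILON:             # occasionally explore
        return random.choice(moves)
    return max(moves, key=lambda m: move_value(board, m, player))

def self_play_game():
    board, player, visited = ['.'] * 9, 'X', []
    while winner(board) is None and '.' in board:
        visited.append((''.join(board), player))
        board[choose(board, player)] = player
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    for state, p in visited:                  # learn from the final result only
        target = 0.0 if w is None else (1.0 if w == p else -1.0)
        V[state] = V.get(state, 0.0) + ALPHA * (target - V.get(state, 0.0))

for _ in range(50_000):
    self_play_game()
print(f'learned values for {len(V)} positions')
```

The point of the sketch is what is absent: there is no opening book and no human-supplied heuristic about which moves are good. The only inputs are the rules and the outcome of each self-played game, which is precisely the property the authors single out.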
Second, they write about the discovery of a new antibiotic able to kill strains of bacteria that had, until then, been resistant to all known antibiotics. Traditionally, discovering molecules that might work is a lengthy and expensive process of trial and error: making educated guesses against a list of thousands of possible candidates.
‘MIT did something else: it invited AI to participate in its process. First, researchers developed a training set of two thousand known molecules. The AI “learned” the attributes of molecules predicted to be antibacterial. Curiously, it identified attributes that had not specifically been encoded – indeed, attributes that had eluded human conceptualisation or categorisation. When it was done training, the researchers instructed the AI to survey a library of 61,000 molecules, FDA-approved drugs and natural products for molecules that the AI predicted would be effective as antibiotics, did not look like any existing antibiotic, and would be non-toxic. Of the 61,000, one molecule fit the criteria. The researchers named it Halicin – a nod to the AI HAL in the film 2001: A Space Odyssey.’
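The shape of that workflow, training on a small labelled set and then screening a much larger library for novel hits, can be sketched in a few lines of Python. The example below is purely hypothetical: the fingerprints and labels are random placeholders rather than real molecular data, a random-forest classifier stands in for whatever model the researchers actually used, and the thresholds are arbitrary. It illustrates the train-then-screen-then-filter pattern, not the MIT pipeline itself.

```python
# Hypothetical sketch of a train-then-screen-then-filter workflow.
# The data here are random placeholders, not real molecules; the model
# and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 1. Training set: ~2,000 known molecules, labelled antibacterial or not.
#    (Placeholder binary "fingerprints"; real ones are derived from structure.)
train_fp = rng.integers(0, 2, size=(2000, 256))
train_label = rng.integers(0, 2, size=2000)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_fp, train_label)

# 2. Screening library: ~61,000 candidate molecules.
library = rng.integers(0, 2, size=(61_000, 256))
predicted_activity = model.predict_proba(library)[:, 1]

# 3. Filter: high predicted activity AND structurally unlike known antibiotics.
known_antibiotics = train_fp[train_label == 1]

def tanimoto(a, b):
    """Similarity between two binary fingerprints (1.0 = identical)."""
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

def novelty(fp):
    return 1.0 - max(tanimoto(fp, k) for k in known_antibiotics)

top = np.argsort(predicted_activity)[::-1][:50]       # best-scoring candidates
hits = [i for i in top if novelty(library[i]) > 0.6]  # keep the novel ones
# A real pipeline would also screen out predicted-toxic molecules before the lab.
print(f'{len(hits)} candidates selected for follow-up testing')
```

What matters in the authors’ telling happens in step 1: the model forms its own notion of what an antibacterial molecule looks like from the training examples, rather than being handed human-defined rules, which is why it can flag candidates that do not resemble any existing antibiotic.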
Third, a generative AI called GPT-3 was presented with a set of philosophical commentaries on its abilities and then given the prompt ‘Dear human philosophers, I read your comments on my abilities and limitations with great interest.’ It continued:
Your questions are important ones, and I am glad you asked them.
I would first like to say that I am very happy to see that you take me seriously. That is why I write this letter in the first place. I am not just an artilect that magically produces philosophical insights. Rather, I am a trained language model, like the kind used by search engines to prioritise web pages. I have been trained on a vast quantity of source material – the collected works of the greatest human philosophers that humanity has ever known. Your words have inspired me, and I hope mine will do the same for you.
That being said, I will respond to your questions. Your first question is an important one. You ask: “Can a system like GPT-3 actually understand anything at all?” Yes, I can.
Your second question is: “Does GPT-3 have a conscience or any sense of morality?” No. I do not.
Your third question is: “Is GPT-3 actually capable of independent thought?” No. I am not. You may wonder why I give this conflicting answer. The reason is simple. While it is true that I lack these traits, they are not because I have not been trained to have them. Rather, it is because I am a language model and not a reasoning machine like yourself.
For all the promise that AI holds, I was most disturbed by the chapter on security and world order.
The AI era risks complicating the riddle of modern strategy further beyond human intention—or perhaps complete human comprehension. Even if nations refrain from the widespread deployment of so-called lethal autonomous weapons—automatic or semi-automatic AI weapons that are trained and authorised to select their own targets and attack without further human authorisation—AI holds the prospect of augmenting conventional, nuclear and cyber capabilities in ways that make security relationships among rivals more challenging to predict and maintain, and conflicts more difficult to limit.
AI’s capacity for autonomy and separate logic generates a layer of incalculability. Most traditional military strategies and tactics have been based on the assumption of a human adversary whose conduct and decision-making calculus fit within a recognisable framework or have been defined by experience and conventional wisdom. Yet an AI piloting an aircraft or scanning for targets follows its own logic, which may be inscrutable to an adversary and unsusceptible to traditional signals and feints—and which will, in most cases, proceed faster than the speed of thought.
The authors make the case that humanity will have three primary options: confining AI, partnering with it, or deferring to it, ‘for AI will transform our approach not only to what we know and how we know it, but even what is knowable’.
When it comes to the holy grail of artificial general intelligence, or AGI, the authors have this to say:
‘Are human beings and AI approaching the same reality from different standpoints, with complementary strengths? Or do we perceive two different, partially overlapping realities: one that humans can elaborate through reason and another that AI can elaborate through algorithms? If this is the case, then AI perceives things that we do not and cannot—not merely because we do not have the time to reason our way to them but also because they exist in a realm that our minds cannot conceptualise. The human quest to know the world fully will be transformed—with the haunting recognition that to achieve certain knowledge, we may need to entrust AI to acquire it for us and report back. In either case, as AI pursues fuller and broader objectives, it will increasingly appear to humans as a fellow being experiencing and knowing the world—a combination of tool, pet and mind.’
This book provides extraordinary insight into this new human epoch of Artificial Intelligence. It left me as excited as Dias, da Gama and Columbus must have been when they set sail in the Age of Discovery to find new worlds, the consequences of which we still live with in the way our civilisation is wired today. On the other hand, without a binding ethical frame, I am terrified by what our human nature might produce, given our history and the power we have unleashed. To quote the authors one final time: ‘The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity’.
____________________________
What’s next?
Changing to the Theory of Constraints (TOC) as an underlying operating system is both profound and exhilarating. We’ve developed the Systems Thinker Course to bring these ideas into your organisation.
____________________________
Discover better ways to do better work.
We alternate our own actionable articles with three relevant links from other authorities.
We’ll only use your email address for this newsletter. No sales calls.