At this time of year I’ve often used this column to reflect on doom-laden prophecies about what to expect in the year ahead. After all, it’s that magnificent time when we can wallow in the short dark days and add to the gloom with blood-curdling predictions of disasters.
For those of us who do wallow in predictions of the dreadful prospects that the future holds, I recommend my favourite book of all time: The Coffee Table Book of Doom, which was published a few years ago by Art Lester and Steven Appleby. Here is a flavour from the advertising blurb:
“…with the apocalypse at hand, don’t fret about dying uninformed. The Coffee Table Book of Doom is a revelatory, brilliantly funny, superbly illustrated and erudite compendium of all the 27 doom-laden horsemen we need to worry about – personal doom, gender erosion, asteroid impact, pandemics, super storms, sexual ruin – and much more besides.”
Anyway, a new ‘horseman’ has arisen in the form of Artificial Intelligence.
I previously opined on AI in this column last July (Artificial Intelligence).
Parliament’s Joint Committee on Human Rights, of which I am a member (‘joint’ because it comprises both MPs and peers), is currently conducting an inquiry into the human rights implications of the technology.
The principal near-term threat is to employment opportunities. Technology has always changed the nature of work, replacing all kinds of labour while opening up the possibility of new and better opportunities elsewhere. The jury is still out on whether AI will replace most of us or create enough new employment to keep us satisfied. The need to do useful work is part of our basic psychology. We learn in the Bible that even Almighty God worked on the creation and, seeing that it was good, rested.
Nevertheless, a new and much more dreadful prospect has arisen: that AI will develop into a ‘general intelligence’ and threaten our very existence. I’ve just started reading The New York Times instant bestseller If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. The blurb states that “the scramble to create superhuman AI has put us on the path to extinction – but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity”, which “may prove to be the most important book of our time.”
I’m only one chapter into it, but my understanding of the basic thesis is that we don’t understand how AI works and will therefore not be able to control it.
At our last evidence session of the Human Rights Committee, I asked one of the world’s leading experts how scared, on a scale of one to ten, we should be of the technology. His answer was eleven.
I’ll read the rest of the book, and reflect.
