I don’t know how I came across the above “Singularity is Nonsense” piece, but I did find it interesting. I’m not sure who wrote it, either. Maybe Tim Tyler; his name’s at the bottom.
I like the page because it’s a quick explanation of what the Singularity is. A nice summation, even though the writer complains vehemently about the use of the term “Singularity.” The problem is summed up best by Nick Bostrom’s quote about halfway down the page: “The Singularity has been taken to mean different things by different authors…” There seems to be no general consensus on what the Singularity is or what it will mean. Or even whether we’ll know when and if it happens.
To be honest, it doesn’t matter if the term is appropriate. Many of our tools are incorrectly labeled, especially if they’re part of a pop phenomenon. We call copiers Xerox machines. Our disposable hankies are Kleenexes even if they carry the Walmart brand. Lots of things are misnamed, or even misleading. We still call it the United States of America even though we aren’t really. (I think, though, that folks are starting to forgive Texas, so there’s hope.)
Once a name becomes popular, it’s going to stick. Especially if it helps you get your head around a difficult or just plain weird concept. This is true of the Singularity. We may not understand what a hyper-ramped-up world of technology with incredibly advanced artificial intelligence means, but at least with a term that’s Googleable, we can find out.
Beyond that, though, the author feels we’ll never be able to look back and decide when the Singularity happened. There probably won’t be one defining moment when machine intelligence overtakes human intelligence anyway. It’s going to be a gradual co-opting of higher-being status. Truly, if machine intelligence does take over gradually, there can be no singularity, at least not if the word “singularity” implies a single point in time.
The author includes a graph from Kurzweil depicting the advancing intelligence of machines, i.e. the march to the Singularity. What if that graph is wrong? What if there’s an inflection point and increasing intelligence eventually levels off like so:
There’s certainly an identifiable point in time with this model, a singularity. And we should be able to recognize it when it happens. But what is there for us to recognize?
For the record, most sf writers refer to the Singularity as the point in time when we figure out how to digitize the human mind. Supposedly that event would lead to immortality for the human race. Whether or not this will ever be possible is impossible to say at this time. I wonder: if something is immortal, is it truly alive? I think there’s a paradox there: it’s not life if death has no meaning for it. At any rate, we can’t imagine existence where there is no death, so that’s why we can’t predict what life for the posthumans will be like. Not just because of chaos, as the author states. Chaos is a current problem only. After the Singularity, there may be no more chaos.
If digitization of the brain becomes possible, we’re definitely going to know about it. It will not slip by like a gradual increase of artificial intelligence beyond human capabilities. The moment will be momentous, and it will be recorded. We’ll probably videotape the new intelligent order reciting “Mary had a little lamb” or something like that. The point is, it will be remarked upon. Champagne corks will be popped, ribbons cut, and ships launched. Nothing will be the same after that.