I've yet to see a consensus around any meaningful definition of (i) intelligence or (ii) consciousness. I certainly encourage investigation of these phenomena, but we're at a very early stage, with many different interpretations, each of which has consequences both for research and for the interpretation of results. What we seem to have learned so far is that we humans are occasionally semi-conscious but mostly operate beneath that threshold, and then post-hoc rationalize our actions to ourselves.

With regard to computing, there's a huge difference between improvements in processors and memory versus improvements in instructions. Although we can now cram millions of transistors into a tiny space, the logic gates are fundamentally little different from those once implemented with wires and valves. Likewise, although languages offer higher levels of abstraction than in the days of assembler and Fortran, we're still implementing hard-coded instructions for very narrow problem sets. While techniques like linear regression, logistic regression, naive Bayes, k-NN, and random forests can produce useful outputs, we're a very long way from the kind of connectivity exhibited by even a fairly primitive organic brain. Hence I see no reason to imagine we'll have a Frankenstein moment, any more than Victorian steam engines and Babbage machines could have teamed up to form all-conquering automata.
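To make the "hard-coded instructions" point concrete, here is a minimal sketch of one of the techniques named above, k-NN, in plain Python. The toy data and the choice of k=3 are my own illustration, not from the text; the point is that every step is an explicit, pre-specified instruction: memorize the data, measure distances, count votes. Nothing resembling organic connectivity is involved.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (features, label) pairs; distance is plain
    Euclidean. There is no "learning" beyond memorizing the data --
    the whole procedure is a fixed sequence of hard-coded steps.
    """
    # Sort every training point by its distance to the query.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Tally the labels of the k closest points and return the winner.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical toy data: two clusters on a line.
train = [((0.0,), "a"), ((0.1,), "a"), ((0.2,), "a"),
         ((1.0,), "b"), ((1.1,), "b"), ((1.2,), "b")]
print(knn_predict(train, (0.15,)))  # near the "a" cluster -> "a"
print(knn_predict(train, (1.05,)))  # near the "b" cluster -> "b"
```

Useful, certainly, but the narrowness is the point: the algorithm does exactly what its instructions say and nothing more.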

Anyone who enjoys my articles here on Medium may be interested in my books Why Democracy Failed and The Praying Ape, both available from Amazon.
