Allan Milne Lees
1 min readDec 11, 2023


It seems to me that the closer algorithms come to mimicking human thought processes, the more error-prone they will become. Our brains are the result of millions of years of evolution, and our thought processes are adaptations that were "good enough" on average during the 99.8% of our history in which we lived in relatively simple and unchanging environments such as the savanna of Africa and the primordial forests of Eurasia. As such, we "think" in distinctly simple ways, and unfortunately these ways frequently lead us into logical errors. People believe in all manner of invisible magical pixies (deities) and in all manner of superstitions. People are automatically swayed by irrelevant inputs (give someone a warm cup to hold for a moment and they'll judge you as having a "warmer" personality than if you hand them a cold cup). Most of the time, humans don't think at all but merely repeat simple patterns.

So either our algorithms will, via a process of continuous improvement, out-think us in an increasingly wide range of domains, or they will become as fallible as we are and thus generate outputs that match average human outputs and are thereby not of much value.

Anyone who enjoys my articles here on Medium may be interested in my books Why Democracy Failed and The Praying Ape, both available from Amazon.
