An excellent primer on LLMs, but one whose summary section falls into the same trap as nearly every other commentary on the topic. We use the word "conscious" as though it named a binary quality, yet studies suggest humans are only partially conscious much of the time. The question of whether an AI combined with the sensors of a physical robot could become "conscious" is therefore ill-posed, because we have never properly defined what "conscious" actually means. Instead, we blithely assume humans possess this property (which, most of the time, we clearly don't), and that assumption distorts the entire domain. Personally, I hope that research into "consciousness" eventually drives us toward a more precise definition of the word.