I see absolutely zero possibility of the kind of AI routinely depicted in scare stories. First, we don't even have a working definition of "intelligence" or "consciousness." Second, we humans, however we define those two terms, possess only fragments of either, so attempting to replicate those fragments is a fool's errand. Furthermore, nearly all of our partial and occasional "consciousness" depends heavily on kinaesthesia, which by definition an algorithm running on a server cannot possess.

More importantly, AI isn't a single thing; it's a label we apply to many different algorithms, each designed to perform a different and highly restricted task. There is no known way to link these algorithms together into a supra-algorithmic entity. As for designing a general-purpose AI, efforts in that direction have consistently failed for very basic reasons and are unlikely to succeed any time soon. With careful selection of outputs we can make claims about "intelligent" systems, but when we examine the details we see the cracks in the façade.

We're better served thinking of AI systems as tools, in the same way we think of fuel injection systems: much better than humans at the specific task, but entirely limited to the task they were designed for. At best we'll create not a god but a simulacrum of a trained dog.