Servo Magazine ( August 2017 )
Extending Your Life as an Embedded Intelligence
By Bryan Bergeron
While you’re working on that carpet roamer or quadcopter, it’s fun to imagine where technology will take robotics in your lifetime. There’s likely to be some resistance from animal rights organizations, but at some point, quadcopter and predator drone guidance systems will likely have the thought processes and perhaps even memories of hawks or other birds of prey.
Eventually, someone is going to download some part of a human’s cerebral cortex into synthetic memory. That’s when it gets tricky. As dozens of science fiction authors have detailed, there are issues of rights, of enslavement, and of having no death to escape it all. Granted, there will be all sorts of social and political issues. For now, let’s look at what could possibly go wrong on the technological front.
Foremost on my list is disease/decay. Every life form that I know of is susceptible to disease, and every synthetic object is prone to decay. While a human is remarkably self-healing in most respects, machines and computer chips are not.
Flash memory degrades with read/write operations, for example. While the average human lifespan might be 72 years, I don’t know of any computer that hasn’t gone down in the past five years. And while I’ve seen early room-sized computers that date from the ‘50s in museums, none of them were working.
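To put the flash degradation point in perspective, here’s a back-of-the-envelope estimate of how long a wear-leveled flash device might last. All of the figures (the 3,000 program/erase cycle rating, the daily write volume, the write amplification factor) are illustrative assumptions, not specs for any particular part:

```python
def flash_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                         write_amplification=2.0):
    """Rough lifespan of a wear-leveled flash device, in years.

    Assumes wear leveling spreads writes evenly, so total writable
    data is capacity * P/E cycles, reduced by write amplification.
    """
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / daily_writes_gb / 365

# e.g., a hypothetical 512 GB drive rated ~3,000 P/E cycles,
# with 50 GB of host writes per day:
print(round(flash_lifetime_years(512, 3000, 50), 1))  # ~42 years
```

Under these (generous) assumptions, the medium outlasts the 72-year human average only if you write to it sparingly; a synthetic cortex constantly forming new memories would burn through its endurance budget far faster.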
One obvious workaround may be to frequently back up the system. However, that’s going to be a time-consuming, expensive process, assuming the systems will require several hundred TB of memory. It’s probably cheaper to toss the embedded AI when it’s corrupted and drop in a new one. (So much for living forever.)
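Just how time-consuming? A rough sketch, using assumed figures (a 300 TB cortex image, a 10 Gb/s link, 80% effective throughput), gives a sense of the scale:

```python
def backup_hours(size_tb, link_gbps, efficiency=0.8):
    """Hours to transfer size_tb terabytes over a link_gbps link.

    efficiency accounts for protocol overhead and real-world
    throughput falling short of line rate.
    """
    bytes_total = size_tb * 1e12
    bytes_per_sec = link_gbps / 8 * 1e9 * efficiency
    return bytes_total / bytes_per_sec / 3600

# A hypothetical 300 TB backup over a 10 Gb/s link:
print(round(backup_hours(300, 10)))  # ~83 hours, i.e., 3+ days
```

Three-plus days per full backup makes "frequently" a stretch, which is part of why tossing the corrupted AI starts to look economical.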
Related to disease/decay is that humans are not logical creatures. Our nervous systems often break down for unknown reasons — perhaps from a stroke, a viral infection (e.g., meningitis), physical trauma to the brain, or PTSD from a war or a bad childhood.
Given we don’t know how to cure most of these neurological diseases/disorders in humans, how can we possibly repair a synthetic brain that exhibits similar behaviors?
Recall Marvin, the depressed robot from The Hitchhiker’s Guide to the Galaxy? Probably another case of tossing the embedded AI and starting over.
I have yet to meet a perfect human. As such, the first human downloads will certainly contain errors, even if the transfer process doesn’t introduce additional issues.
It’s hard for me to imagine how a paranoid schizophrenic embedded AI in a toaster might manifest itself. Perhaps by burning the toast when it knows you’re already running late for work? SV