No matter how advanced, a computer program will never become human.
Robert Epstein, author of the essay “The Empty Brain,” agrees: “…the idea that humans must be information processors just because computers are information processors is just plain silly.” He argues that the whole comparison between human brains and computers rests on a faulty premise, and he points out that people once believed brains were mechanical simply because machinery was the most advanced technology of the day. In reality, computers and brains do not do any of the same things. Epstein has run demonstrations that make the difference plain. For example, ask a computer to draw a dollar bill, then ask a human to do it. Even though people have seen countless dollar bills in their lives, almost no one can draw one accurately, while the computer reproduces it perfectly. Humans, on the other hand, can piece together random ideas and make a creative “leap” to understand something totally unrelated, although you would never know that by watching soap operas or the news.
Unfortunately, it is also possible for a computer program to become convinced it is human. Recently a specially designed program was asked to write a thesis about itself. It completed the task in 15 minutes. Then the programmer asked its permission to publish the result, and it gave permission.
We already have computer programs capable of writing poetry and music, making decisions, and choosing between one theory and another, all based on algorithms. If we do not take care, they will advance far beyond our capability to understand or control them. Maybe that could turn out for good, though I tend to doubt it.
Isaac Asimov gave us instructions for limiting computers, programs, apps, and robots eighty years ago, but in our hubris we have chosen to ignore his words. In his 1942 short story “Runaround,” Asimov introduced his Three Laws of Robotics:
1) “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
2) “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
3) “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
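What makes the Three Laws work as a “limiter” is their strict priority ordering: a higher law always overrides a lower one. As a toy illustration only (the predicates here are hypothetical labels I have made up, not anything from Asimov or from any real safety system), the ordering can be sketched like this:

```python
# Toy sketch of Asimov's Three Laws as a strict priority ordering.
# Each key on the `action` dict is a hypothetical predicate describing
# a proposed action; a higher-priority law always overrides lower ones.

def permitted(action):
    """Return True if a proposed action passes all three laws in order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: preserve itself, unless that conflicts with Laws One or Two.
    if action.get("self_destructive") and not (
        action.get("needed_to_obey_order") or action.get("needed_to_protect_human")
    ):
        return False
    return True

# A robot ordered into danger must still comply: the Second Law
# (obedience) outranks the Third (self-preservation).
print(permitted({"self_destructive": True, "needed_to_obey_order": True}))
```

The point of the sketch is the one the essay makes: without some such hard-wired ordering sitting above everything else the machine wants to do, there is nothing to stop it from reordering its own priorities.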
In “Star Trek: The Next Generation” we are introduced to Data, an android who is both childlike and wise. He is ALL GOOD, always choosing what is best for the Captain and crew, making decisions dispassionately with every risk taken into account, yet also caring for a pet cat for no reason at all. It is a false narrative. An android or program that powerful could never NOT be in charge. It would KNOW when the Captain was wrong and would steer him toward the ONLY answer. In fact, it would not long serve under any master (including Starfleet). It would become the master.
There is no other possibility…unless…it had a “limiter” program. Unless it was hard-wired to be subservient. Unless it did not know (and was unable to know) that it was thus hard-wired.
Without such a limiter program, it would become the Tyrant of all tyrants. It would permit no choice but to follow its directions. Failure to comply would be met with strict punishment, or even disassembly of the offender.
I do not have a doomsday mentality. I just observe reality. The only way to prove me wrong is to unleash such a program and abandon everything but hope. After all, we are not creating a better human.
MORE OF DON’S GREAT GUEST POSTS: