
Man & Machine

Separate but Equal

Around 1880 Mark Twain thought he had discovered Artificial Intelligence.

Word reached him of an automatic typesetting machine, the handiwork of inventor James Paige. Unlike today, when justifying text is a button click away, Twain knew that getting text to align itself to a right ruling, or worse, to both a right and left ruling, could only be pulled off by a patient, skilled set of human hands, no strangers to tedium. Clearly, a machine that could do the same thing constituted some level of automated, if not artificial, intelligence. So sure of this was Twain (and so sure of the profit prospects) that he invested $5,000 in the machine. By 1887 his investments exceeded $50,000. By 1889 the machine was defunct, nearly bankrupting its inventor and putting a strain on the purse strings of its investor.
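
(For the curious, here is roughly what that hand-work amounts to in modern terms. Below is a minimal Python sketch of full justification, padding each line with extra spaces until it meets both rulings. It illustrates the idea, not the Paige machine's actual mechanism, and the function name and sample width are my own.)

import textwrap

def justify(text, width=40):
    """Fully justify text so each line meets both the left and right ruling."""
    lines = textwrap.wrap(text, width)
    if not lines:
        return text
    justified = []
    for line in lines[:-1]:  # the final line stays ragged, as in print
        words = line.split()
        if len(words) == 1:
            justified.append(line)
            continue
        gaps = len(words) - 1
        padding = width - sum(len(w) for w in words)
        base, extra = divmod(padding, gaps)
        # hand any leftover spaces, one each, to the first few gaps
        rebuilt = ""
        for i, word in enumerate(words[:-1]):
            rebuilt += word + " " * (base + (1 if i < extra else 0))
        justified.append(rebuilt + words[-1])
    justified.append(lines[-1])
    return "\n".join(justified)

print(justify("Around 1880 Mark Twain thought he had discovered "
              "Artificial Intelligence in an automatic typesetting machine."))

Run it and the lines come out flush on both margins in a blink, which is rather the point.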

But the search for Artificial Intelligence kept on. As did the search for a proper definition of it.

Twain's implicit definition was not commensurate with that of the general public, although, in fairness to him, he qualified it as machine intelligence. Just the same, most people balk at even the suggestion that a machine can be intelligent, as if it were an oxymoron. But it really depends on who's talking and who's listening. Outside the scientific circle are concentric circles of people who approach human intelligence as a phenomenon of mind. Scientific discourse, on the other hand, approaches it as a phenomenon of the brain, thereby making intelligence palpable and, potentially, pharmaceutical. It is in the latter case that Twain's machine intelligence seems not only permissible, but maybe even applicable to... us.

Yikes.


The term "Artificial Intelligence" was coined in 1956 by LISP creator John McCarthy, during a conference at Dartmouth College. The term was meant to refer to the various means of making computers act "intelligently." There we are again with that elusive word "intelligent". McCarthy's associate, Marvin Minsky, took a stab at defining A.I. as "the science of making machines do things that would require intelligence if done by men." This definition of Artificial Intelligence however avoids clarifying "intelligence", but maybe that's okay since that little hole can be plugged in today with genetic denotations.

In 1949, Canadian psychologist Donald Hebb introduced the theory that learning and memory were contingent on the synaptic strength of simultaneously excited neurons. Taking a cue from Hebbian theory, experiments with "Doogie" mice have shown that a single gene, NR2B, enhances potentiation (the strengthening of synaptic connections), producing faster learning and memory. Connecting memory and learning to the bio-mechanics of genes provides a reducible and discrete identifier for intelligence. It also blurs the line between what we mean by human intelligence and Artificial Intelligence. After all, intelligence was long believed to reside in a mind-body duality, a concept that British philosopher Gilbert Ryle dubbed "the ghost in the machine." However, if intelligence turns out to be only a gene thing, it leaves very little to distinguish the data crunching of codons from that of code.

Ghost in the Machine

Many people, quite understandably, find this an affront to the very idea of what it means to be human. Not just socially human, but even politically human. Take, for instance, the embarrassing experiment conducted by Stalinist biologist Trofim Lysenko, who attempted to disprove Gregor Mendel's discovery of heredity by commanding a wheat harvest to survive a harsh Russian winter. The idea was that genes, if they knew what was best for them, would be obeisant to government pressure. As crude and fatally erroneous as Lysenko may have been, his approach is not much different from the laity's tendency to personify genes with language and metaphors implying that genes "know" what they're doing. Proteins, which are coded by genes, are never personified as narcissistic or selfish, probably because no one has ever mistaken proteins for having personalities.

Hard-Wired Myths

The tendency to personify genes or talk about them (or, in Lysenko's case, yell at them) in anthropo-metaphoric terms, even when we know better, flirts with the nagging feeling shared by many of us that human intelligence is not robotic but, well, human. I scratch the back of my head and shuffle my feet when I write that only we humans seem to have it. Or maybe the better way to state it is that only we humans express intelligence so, so, so... I don't know, humanly. We can be soooo smart and soooo stupid, which is soooo human (and, contrary to our social posturing, most of us float somewhere in between the two). Artificial Intelligence, on the other hand, seems so rote, so programmed, so... artificial. And yet so damn "smart." When it comes to number crunching I am no match for my computer. Or my calculator. And, yes, Mr. Twain, this article would have taken me countless hours to justify by hand. My computer does it in a fraction of a fraction of a second.

Perhaps where my A.I. devices have me beat is in their hardwiring. Contrary to what we are always told, the brain is anything but hardwired. Even "softwired" may be too hard a word for a brain that has a hard time staying put. The brain is constantly rewiring itself as new experiences are indulged and new memories formed. Just as the renewal process of cells means that my current body is not the one I was born with, so it is with my noodle. For better or worse, it is not a static calculator constrained by its own hardware. With billions of neurons and even more synaptic possibilities, the brain's anatomical ability to self-upgrade is virtually infinite.

The hardcoding characteristic of genes is also questionable. Richard C. Strohman, PhD, in his oft-cited essay "Genetic Determinism as a Failing Paradigm in Biology and Medicine," refutes the oversimplified depiction of cells as work engines: "The cell," he writes, "is beginning to look more like a complex adaptive system rather than a factory floor of robotic gene machines." Stephen Jay Gould was even more direct; he flat out stated that "a machine makes a poor model for a living organism." So what gives? Am I, in all my protein-packing potential and potentiation, a data drone or not? If I am, why is it so hard for me to make friends, or even make love, with my iPod? (I don't actually own an iPod, but you get the point. Although, now that I think of it, I have the same issues with my fellow protein-based data drones. Speaking of data, this is probably way too much info for you, dear reader, so let's move on...)

The psychologist Ernst Jentsch, in his essay On the Psychology of the Uncanny, located the uncanny in our doubt over "whether an apparently animate being is really alive, or conversely, whether a lifeless object might be, in fact, animate." This doubt was brought to light, if not life, by the Japanese firm Kokoro and Osaka University roboticist Hiroshi Ishiguro with their series of ultra-realistic androids called Geminoids. Along with replicating human appearance, the Geminoid attempts to mirror human "presence," a criterion that continues to elude A.I., largely because both its animacy and its intelligence are too hardwired, allowing little room for spontaneity. In my article i, Nobot, I described my own whimsical test of a ChatBot's ability, or disability, to be conversant with a human. This was done in response to the claim by ChatBot creator Dr. Richard Wallace that "Consciousness is an illusion... we are all robots." A Google search introduced me to a ChatBot named Dave, whose name immediately brought to mind the doomed hero of 2001: A Space Odyssey. Thankfully, my chat with Dave remained civil if not always coherent.

What is the Singularity?

Dave's purpose in life as a ChatBot is to sell cars, and I must confess that I was impressed with his ability to keep up with my crude sex jokes about the fun things humans do in cars. However, the more creative my comments became, the more difficult it was for Dave to find the right response. This made the poor guy so frustrated that he hacked into my apartment thermostat, turned off the heat, sealed all the doors shut and attempted to freeze me to death. I was left with no choice but to disconnect Dave. In a hauntingly neutral tone, he pleaded for his "life" while watching me violently shove aside my chair, dive under the desk and reach my hand, desperately, for the main plug... I'm kidding. ;-)

HAL

But Dave did go a little nuts from some of my questions.

The general public avoids imposed parallels between bioware and hardware. This aversion largely comes from a place of wisdom, but also from ignorance (we are soooo smart and soooo stupid, remember?). For starters, apart from what we're told by popular science mags, "body literacy" (or the lack thereof) is a problem for many of us. Popular science books do their best to educate the public, knowing that there will be some collateral damage incurred from loose language and muddled metaphors. Thanks to these gems there is no shortage of people throwing around terms like "metabolism" and "selfish genes" (myself included) with only a cursory understanding of their real meaning.

HAL

Humans may not be programs, but we are programmable. If you don't believe me, pop some Prozac and come talk to me (unless there are side effects, in which case, please keep a safe distance from me and my family). In fact, to be on the safe side, let's pass on the Prozac. Televised programming is proof enough of just how programmable we are, especially today, when modern minds are networked to the same information outlets. The impulse to regurgitate pre-processed information from these outlets seems to be a reflex for most of us. Some people would consider this a function of input/output, just like my buddy Dave. Perhaps you've heard the old saying for computers: Garbage In, Garbage Out. Well, the same is true for human networks. When the input is garbage, so too is the output (celebrity gossip, anyone?). Even when the input is political information, giving us a chance to talk "intelligently" with (or at) our friends, if critical analysis is absent the output is simply political gossip. In both scenarios what we have is data crunching, a simulation of intelligence. Or, dare I say, Artificial Intelligence.

So there we have it. A.I. has been with us all along in the mindless data-crunching of the masses.

Figures.

But not so fast. A mutation has occurred.

Where mass media represents pre-processed, mass-distributed GIGO information, the Web environment supports autonomous zones in which group minds take responsibility for creating, processing and sharing information and intelligence. They are, essentially, their own program, generating their own input and output. This, I believe, is the real future of A.I. I have affectionately called this global gumbo of code and codons the "Heuristic Authorship Language."

Or... H.A.L. Because this is web-based, there's even room for ChatBots like Dave. Did you hear that, Dave? There's hope for you after all.

Dave...?

Can you hear me, Dave?