Not a news item, really, but somewhat related to my last post regarding consciousness.
http://nautil.us/blog/heres-how-well-know-an-ai-is-conscious

This is one of those things that I am interested in but don't really have the education to fully grasp. These are slippery topics that are difficult to get a grip on.
The alt-right crowd recently came up with a meme they call "the NPC"... "non-player character", referring to video game characters who are controlled by computer scripts rather than by human players. The premise is that liberals aren't really sentient beings, just simplistic scripts that can't engage in rational conversation and are only capable of a few predictable responses: "that's racist", "Trump supporters are Nazis", and so on. It's dumb, but that's how the meme goes. The meme is actually a new formulation of a really old question, though. How can you tell whether somebody else is actually sentient? What if they're not? What if I'm the only sentient, self-aware being alive, and everybody else is a robot, or a figment of my imagination, or so on?
That puzzle leads to something called a Turing Test, proposed by famous cryptographer and early computing theorist Alan Turing: the notion that a real intelligence could be distinguished from an emulation by asking suitable questions. Some questions may be grammatically correct but make no contextual sense, for example; one would expect a thinking observer to reply that the question doesn't make any sense, while a non-thinking observer might attempt to answer anyway, producing inane responses that reveal it didn't really understand the question.
But if an artificial intelligence is able to scan the entire internet for similar questions, parse the results of its search, and pick out popular answers, or determine that the question is nonsense by observing a lack of search results, then a Turing Test becomes more challenging. The author suggests that a Turing Test would have to be conducted with the subject disconnected from the internet.
The author discusses "qualia"-- aspects of something that can only really be understood through direct experience-- you can't describe color to a blind person, you can't describe pain to something with no physical sensations, and so on. This raises a question I thought was interesting: is a chess computer actually playing chess, or is it just solving a series of mathematical problems? (Are humans even playing chess, or are we also just solving a series of mathematical problems?) Does any understanding of what "chess" even is factor into whether chess is being played? Could someone be a champion chess player with no concept of chess other than a series of math problems to work out? I dunno. Anyway, the author of this article talks about qualia as a possible means of identifying sentience.
What might we ask a potential mind born of silicon? How the AI responds to questions like “What if my red is your blue?” or “Could there be a color greener than green?” should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities suggested by these questions, perhaps replying, “Yes, and I sometimes wonder if there might also exist a color that mixes the redness of red with the coolness of blue.” On the other hand, an AI lacking any visual qualia might respond with, “That is impossible, red, green, and blue each exist as different wavelengths.” Even if the AI attempts to play along or deceive us, answers like, “Interesting, and what if my red is your hamburger?” would show that it missed the point.
But that's also problematic, because an artificial intelligence could potentially have qualia that are much different from our own, while having no concept of qualia that are meaningful to humans. An artificial intelligence might fail to grasp color as anything other than different wavelengths, while experiencing mathematics (for example) in a way that humans are simply incapable of grasping, or perceive qualia in some other respect that we can't even conceive of. So trying to discern sentience using qualia that are meaningful to humans would result in a very anthropocentric concept of sentience, and one that might be very incomplete.
So the author moves on to more abstract concepts, which is where I kind of lost the plot. Anyway, I still thought it was interesting.
-k