I Am a Robot! Maybe...

I was watching a recent TED talk asking whether a computer can write poetry. There are many articles on this subject, and the Turing Test is the basis for most of them. The premise is simple: if a computer, communicating with a human only through written text, can convince that human that it too is human, then it has passed the test. In many cases, as the aforementioned TED talk points out, computers can write poetry.

Well, at least the computer can analyze words and then produce something similar enough to convince you it was written by a human. This is all very interesting stuff to read about, especially for those interested in artificial intelligence.

While I was watching the video, I started to think of a different direction to take these computer/human tests. I am not proposing a new Turing Test, but rather its opposite: can a human convince another human that they are a computer?

Several times in my life, after receiving a text message from someone I'd rather not respond to, I decided with some friends to come up with a "computer generated" response. I was trying to trick the texter (I don't think that's a word) into thinking that the number they were texting (mine) had been disconnected. Most of my attempts failed, though.

I can see how humans can be deceived by computers into thinking they are human too, but how difficult would it be to convince someone that you are really a robot? I know you couldn't walk up to just anyone on the street and convince them in a few minutes that you really were a computer and not a human. If you did, I'm sure they would freak out, and I can think of several ways that could go badly in the moment.

What if we had a test similar to the Turing Test, but in reverse? Using only text-based communication, would you be able to convince someone that you really are a robot? I think there would have to be some minimum amount of time involved in the attempt. I suspect that consistency in formatting, response style, and the like would be almost essential if you wanted to convince someone. We would also need to put some requirements on the writers; if they stopped responding or just sent garbled messages, that wouldn't be helpful to the tester.

Maybe we could run this test by putting together a mixed group of computers and humans, each trying to convince the human testers that they were the opposite of what they really were. The humans would try to convince the other humans that they were computers, while holding a conversation much like the computers' conversations.
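To make the idea a little more concrete, here is a minimal sketch (in Python, and purely hypothetical, since nothing like this exists in the talk or articles I mentioned) of how such a reversed test might be scored. Each session records what a participant really was, what the tester guessed after a text-only chat, and how long the conversation ran; the interesting number is how often the humans were mistaken for computers.

```python
# A rough sketch of scoring the reversed test, assuming each participant is
# either a human pretending to be a computer or a computer just being itself,
# and a human tester labels each one after a text-only conversation.
# All names and numbers here are made up for illustration.

from dataclasses import dataclass

@dataclass
class Session:
    actual: str    # what the participant really is: "human" or "computer"
    guess: str     # what the tester decided after the conversation
    messages: int  # how many exchanges took place

MIN_MESSAGES = 20  # arbitrary minimum so nobody "wins" with a two-line chat


def reverse_pass_rate(sessions):
    """Fraction of humans who convinced the tester they were computers."""
    valid = [s for s in sessions if s.messages >= MIN_MESSAGES]
    humans = [s for s in valid if s.actual == "human"]
    if not humans:
        return 0.0
    fooled = sum(1 for s in humans if s.guess == "computer")
    return fooled / len(humans)


# Example: three humans tried to pass as computers; one succeeded.
results = [
    Session("human", "computer", 25),
    Session("human", "human", 30),
    Session("human", "human", 22),
    Session("computer", "computer", 40),
]
print(f"Reverse pass rate: {reverse_pass_rate(results):.0%}")
```

The minimum-message rule is just my guess at how to enforce the "minimum amount of time" requirement mentioned above; the real constraint could just as easily be measured in minutes.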

I don't know what would be proved by such a test, but I think it would be interesting to see the results! While we are dealing with uncanny-valley robots and attempts at AI, I wonder if it's time to see whether we can recognize the real thing when it tries to present itself otherwise.