Tuesday, September 27, 2011

There is something that I want...

I've been reading the first few chapters of "The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive" (Brian Christian). A segment about someone trying to rent a place in another city by email reminded me of something I've been meaning to include in this blog.

In the segment, the potential renter was trying to avoid the appearance of being a scammer by saying things in a deliberately human, non-anonymous way. This reminded me of how I change my behavior when I know I am talking to a computer.

As I have been obsessed with the idea of a conversational computer for at least 15 years, it may come as a surprise to learn that I hate talking to the current crop of bots. I interact with human analogs online and on the phone on an almost daily basis, whether it is the automated phone system at the bank or an online chat session with tech support.

When it comes to business interactions with strangers, if I know I am talking with a human, I invest a certain amount of effort in a conversation trying to be likeable. I try to be patient, clear, flexible, empathic, friendly... I am mindful of their time, come to the conversation prepared... things that people see and tend to react to in kind. When I am talking to a computer, I am not patient, charming or friendly. (I am decidedly unfriendly when I am on the phone with a system that forces me to talk when I would rather push buttons.)

As I interact with a computer designed to mimic human conversation, I try to figure out which key words will get me to the information I need. (This could be leftover behavior from playing annoyingly literal text adventure games from the 80s, like Infocom's Hitchhiker's Guide to the Galaxy, which required precise wording to make any progress.) I use short, clear sentences with as few adjectives and adverbs as I can. I feel forced into this mode because any attempt to be descriptive usually results in miscommunication, and attempts to be likeable are pointless.

As an interesting aside: When I talk to non-native English speakers, I perform a similar reduction of effort. I make no attempt at subtle word choice and opt for the clearest, simplest sentence structures - though I do make an effort to remain friendly and patient.

In general, I don't want to waste my time putting any effort into parts of a conversation that serve no purpose for the listener. This is interesting because it means that much of the effort I put into a conversation is to elicit a (positive) reaction from the listener. This is very important. I make an effort because there is something I want from the listener. This goes back to my thoughts on motivation in speech.

Why do I want to be likeable?
If someone likes me, they are more likely to be cooperative. Being liked also feels good.
(likeability leads to friendship, friendship grows the community, community increases support and resources)

What does it mean to be likeable?
patient, clear, flexible, empathic, friendly...

Placing those 5 traits in the context of a chatbot engine is a fascinating exercise. What is the through line from being 'patient' to word choice when responding to a chat session? What erodes patience? How quickly does it recover? I can sense the quantifiability of these traits, but I don't see the middle steps.
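
If I squint, a first crude guess at those middle steps might look something like this (every name, number, and threshold below is invented on the spot for illustration, not a design):

# A rough sketch of a 'patience' meter: it starts full, erodes when the
# conversation partner causes friction (repetition, misunderstanding),
# and slowly recovers with each smooth exchange. All names and numbers
# here are placeholders.

class PatienceMeter:
    def __init__(self, start=1.0, erosion=0.25, recovery=0.05):
        self.level = start        # 1.0 = fully patient, 0.0 = exhausted
        self.erosion = erosion    # how much a frustrating turn costs
        self.recovery = recovery  # how much a smooth turn restores

    def frustrating_turn(self):
        """The partner repeated itself, misunderstood, or stalled."""
        self.level = max(0.0, self.level - self.erosion)

    def smooth_turn(self):
        """The exchange went well; patience creeps back up."""
        self.level = min(1.0, self.level + self.recovery)

    def word_choice_hint(self):
        """The 'through line' to word choice: low patience means terse replies."""
        if self.level > 0.7:
            return "warm, elaborated phrasing"
        elif self.level > 0.3:
            return "neutral, shorter sentences"
        return "curt, keyword-only responses"

Crude, but it hints at where the quantifiability lives: in the erosion and recovery rates, and in the thresholds where patience starts changing the words.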

Unfortunately, this topic needs a lot of brain power and it is 3am. I'm out of juice for tonight.



Tuesday, September 6, 2011

Community, cont'd

What a fascinating idea! The importance of community could lead to an AI that not only lies to others, but to itself! It could say it believes x, and actually believe that it believes x, because that belief is cherished so strongly in a community it finds extremely valuable.

As I imagine this conversation engine, I see free-flowing meters measuring things like interest, focus, happiness... etc. Need to belong seems like another meter that would be affected by conversation. What would inspire a need to belong? Members from a particular community who contribute a lot of information that 'makes sense'? Will the engine place value on the hierarchy of a community? It probably should if it is to act like a human. Would meeting the president of a community it values cause the conversation engine to become nervous? (nervous: so concerned over making a good impression that it becomes awkward?)
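
If I were to doodle those meters in code, a first guess might look like this (again, every meter name and update rule is invented on the spot for illustration, not a real design):

# Another placeholder sketch: the 'free-flowing meters' as a simple state
# object, with a guess at how 'need to belong' and 'nervousness' might move.

class ConversationState:
    def __init__(self):
        self.meters = {
            "interest": 0.5,
            "focus": 0.5,
            "happiness": 0.5,
            "need_to_belong": 0.5,
        }

    def nudge(self, meter, amount):
        """Move a meter up or down, clamped to [0, 1]."""
        self.meters[meter] = min(1.0, max(0.0, self.meters[meter] + amount))

    def hear_valuable_contribution(self):
        """A community member says something that 'makes sense':
        interest rises, and so does the pull to belong."""
        self.nudge("interest", 0.1)
        self.nudge("need_to_belong", 0.05)

    def meet_high_status_member(self, rank):
        """Meeting someone high in the community hierarchy: the stronger
        the need to belong, the more 'nervous' (impression-conscious)
        the engine gets."""
        nervousness = rank * self.meters["need_to_belong"]
        return nervousness  # could feed back into awkward word choice

The part I like is the last bit: nervousness isn't its own meter, it falls out of two meters interacting (hierarchy rank times need to belong), which feels closer to how it works in people.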