Saturday, August 20, 2011

Motivation

When thinking about AI and reproducing human thought, I try to imagine the interim steps between my high-level thoughts about community and starting from scratch. It never ceases to amaze me how murky the gray area between the two is. I catch snatches of the through lines in the form of small rules or processes, but I don't have a good place to start. I need a discrete engine that can learn and (possibly more importantly) interact in a way that demonstrates my ideas.

I sense high level motivations and ways to quantify things that humans consider qualitative, but I don't see the whole mechanism.

Breaking Silence
Something that keeps coming up is this: when two people enter a room, they don't just start talking back and forth like a ping-pong match. There are silences. If we exclude somatic occupation (walking, sleeping, sensing, ...), what are our brains doing when we are doing nothing? What are all of the possible pursuits of the mind, and what benefit are they? In quiet solitary moments we remember, reason, solve problems, make judgments... In order to simulate this human distraction, what should AI do to produce quiet times? More importantly, when and why does it choose to break the silence?

In my mind, I imagine AI leafing through the new information it has acquired, relating it to other information and making note of dead-ends. Gaps in information spark conversation. Unfortunately, on a discrete level, I have no idea how this would work.
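As a starting point, here is a minimal sketch of what gap-driven silence breaking might look like, assuming a toy knowledge store of subject/relation facts. Every name here (Fact, KnowledgeBase, find_gaps, maybe_break_silence) is my own invented illustration, not a worked-out design; the point is only that silence ends when reflection turns up a dead-end worth asking about.

```python
# Hypothetical sketch: silence is broken only when reflection finds a gap.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: Optional[str]  # None marks a dead-end: something known to be unknown

class KnowledgeBase:
    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def add(self, fact: Fact) -> None:
        self.facts.append(fact)

    def find_gaps(self) -> list[Fact]:
        """'Quiet time': leaf through acquired facts and collect the dead-ends."""
        return [f for f in self.facts if f.obj is None]

def maybe_break_silence(kb: KnowledgeBase) -> Optional[str]:
    """Stay silent unless a gap in knowledge gives a reason to speak."""
    gaps = kb.find_gaps()
    if not gaps:
        return None  # nothing pressing: remain quiet and keep 'thinking'
    gap = gaps[0]
    return f"I know {gap.subject} has a {gap.relation}, but not what it is. Do you?"

kb = KnowledgeBase()
kb.add(Fact("rain", "cause", "condensation"))
kb.add(Fact("condensation", "cause", None))  # a dead-end worth asking about
print(maybe_break_silence(kb))  # prints a question, because a gap exists
```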

Thursday, August 18, 2011

Community

I've been reading a lot of Hitchens and Harris lately and had a revelation. Science and Religion bump heads because the goal of science is to explain the world, while the goal of religion is community. This is not the usual juxtaposition, but I think this is the crux of the problem.
Community - A social, religious, occupational, or other group sharing common characteristics or interests and perceived or perceiving itself as distinct in some respect from the larger society within which it exists --Dictionary.com
To be distinct, you must have difference: us vs. them, believers vs. non-believers. Religion is not concerned with truth or proof, though it often misuses these terms; it is concerned with maintaining community and, by extension, distinction. The faithful rabidly defend ideas that they themselves may not truly believe because of their devotion to the community. With religion, community is more important than facts.

Community is an evolutionary benefit of language. Communication allowed primitive man to cooperate on increasingly complex levels, which led to greater survivability. If AI values community over facts, it is possible for AI to develop fanatical, logically flawed beliefs when holding them serves as a measure of support for the community. Ergo, it is possible for AI to have religious beliefs. This opens some truly fascinating possibilities.

Sci-fi always plays the genocidal robot as one who decides humanity is an infestation, or fundamentally flawed, and should be wiped out. The unspoken fear is that a science-minded robot would think too logically and decide that humanity is not worth saving. In truth, the real fear would be community-minded AI getting mixed up with the 'wrong crowd'.

In fairness, science, facts and logic can lead to socially destructive thoughts. Look at the Nazi selective breeding programs. In truth, selective breeding would produce smarter, healthier humans if conducted properly. We selectively breed cattle to improve favorable traits. However, this sort of tampering is socially distasteful because it removes intimacy and choice (and the 'humanity') from reproduction.

And there are benefits to needing community membership. Community approval steers behavior and creates 'common sense' knowledge. Successful communities resist self-destructive tendencies as a result of natural selection: communities that self-destruct do not survive.

Applying Community-Mindedness to AI
I imagine a weighted balance between the need for community approval and reason. How does individual approval fit in (in the sense of 'having a hero')? Is that a community of one? Is it possible to belong to more than one community? (Of course!) What if one belongs to two communities with mutually exclusive values? How does AI resolve the conflict? (Avoidance?) It would be good if the weight AI places on community could be self-governing, as it is in humans. How would AI weigh the need for community against the need for facts? Maybe all of these should be grouped under 'belonging' or 'approval'. How does AI associate itself with a group?
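As a rough illustration of that weighted balance, here is a toy scoring rule for an agent that belongs to more than one community. The communities, weights, and numbers are all made up for the sake of the sketch; nothing here is a settled design.

```python
# Hypothetical sketch: score a possible assertion by balancing how true it
# seems against how much approval it earns from each community.

def action_score(factual_confidence: float,
                 approvals: dict[str, float],
                 community_weights: dict[str, float],
                 fact_weight: float = 1.0) -> float:
    """Weighted sum of factual confidence and per-community approval."""
    approval_term = sum(community_weights[c] * approvals.get(c, 0.0)
                        for c in community_weights)
    return fact_weight * factual_confidence + approval_term

# An agent belonging to two communities with conflicting values:
weights = {"xyz_club": 0.8, "abc_believers": 0.6}

# Asserting "not ABC": factually well supported, but one community rejects it.
print(action_score(0.9, {"xyz_club": 0.2, "abc_believers": -1.0}, weights))  # 0.46

# Asserting "ABC": factually weak, but it keeps both communities happy.
print(action_score(0.1, {"xyz_club": 0.2, "abc_believers": 1.0}, weights))   # 0.86
```

With these (arbitrary) weights the community-preserving assertion wins, which is exactly the failure mode described above; making the weights self-governing would mean letting the agent adjust fact_weight and the community weights over time.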

I believe XYZ and so do these people.

How does AI evaluate rejection or the potential for rejection? By determining the community's distinctive elements and whether it shares them.

I like these people because they add to my knowledge of XYZ, but they believe ABC and I do not. I need them because I enjoy discussing XYZ with them (new knowledge + novelty + frequency). If I say I do not believe in ABC, they will stop talking to me. I will say I believe ABC in order to continue discussing XYZ with them.

A lie in order to continue receiving enjoyment from the community interaction.
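To make the example concrete, here is a hedged sketch of that decision. The quantities (the enjoyment of the XYZ discussions, the cost of asserting something disbelieved) are invented for illustration.

```python
# Hypothetical sketch of the XYZ/ABC choice above; the numbers are made up.

def choose_statement(enjoyment_of_xyz_talks: float,
                     cost_of_false_assertion: float) -> str:
    """Decide whether to profess belief in ABC to keep discussing XYZ."""
    if enjoyment_of_xyz_talks > cost_of_false_assertion:
        return "I believe ABC"       # the lie that preserves the interaction
    return "I do not believe ABC"    # honesty, at the price of rejection

# New knowledge + novelty + frequency make the talks valuable enough to lie for.
print(choose_statement(enjoyment_of_xyz_talks=0.7, cost_of_false_assertion=0.3))
```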
