This is an email that I sent to myself. It is just free-flowing thoughts on my discussion engine idea.
Truth. Believability. Interest. Trust. Likeability. Enjoyment.
People who are truthful give believable information.
Information with no supporting info is believed but suspect. (gullibility)
Information from different entities that agrees: +truth, +trust.
Information from different entities that disagrees: 1) the more trustworthy entity is more believable; 2) the claim's believability decreases in proportion to trust.
All entities are ascribed a trust level. Truthful info = trust+.
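The believability rules above could be sketched roughly like this. Everything here is a placeholder, not a settled design: the `Entity` class, the 0..1 trust scale, and the agreement/disagreement weights are all invented for illustration.

```python
class Entity:
    def __init__(self, name, trust=0.5):
        self.name = name
        self.trust = trust  # ascribed trust level, 0..1


def believability(claim_sources):
    """Believability of a claim is driven by the trust of its sources;
    independent agreement raises it (info that agrees: +truth, +trust)."""
    if not claim_sources:
        return 0.0
    ranked = sorted(claim_sources, key=lambda e: e.trust, reverse=True)
    score = ranked[0].trust
    # each additional agreeing source closes part of the remaining gap to 1.0
    for ent in ranked[1:]:
        score += (1.0 - score) * ent.trust * 0.5
    return score


def resolve_disagreement(ent_a, ent_b):
    """When sources disagree, the more trustworthy entity is believed,
    and the claim's believability drops as the trust gap narrows."""
    winner = ent_a if ent_a.trust >= ent_b.trust else ent_b
    doubt = 1.0 - abs(ent_a.trust - ent_b.trust)  # close trust -> more doubt
    return winner, winner.trust * (1.0 - 0.5 * doubt)
```

Two agreeing sources end up more believable than either alone, and a disagreement between near-equals leaves the surviving claim heavily discounted.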
Interest = ? Subjects with the most truthful info that have unanswered questions.
Interest fades with time since last 'considered'
Time fades trust and distrust along logarithmic curve.
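The two fading rules above, sketched with invented constants: the logarithmic curve pulls both trust and distrust back toward an assumed neutral point of 0.5, and interest simply decays since the topic was last "considered". The timescales and half-life are placeholders.

```python
import math

NEUTRAL = 0.5  # assumed neutral trust; the curve shape is illustrative


def faded_trust(trust, hours_elapsed):
    """Trust and distrust both fade toward neutral; the pull grows
    logarithmically with elapsed time, so recent opinions barely move."""
    pull = min(1.0, math.log1p(hours_elapsed) / math.log1p(24 * 365))
    return trust + (NEUTRAL - trust) * pull


def faded_interest(interest, hours_since_considered, half_life=24.0):
    """Interest fades with time since the topic was last 'considered'."""
    return interest * 0.5 ** (hours_since_considered / half_life)
```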
Astonishment is a proposal or truth of likely believability that is novel or unexpected.
People of high trust are enjoyed.
Funny = zig zag + novelty
novelty = interest+
Factors affecting interest(topic): knowledge
Knowledge=accumulated truths, questions
"makes sense" = confirmed truths by trustworthy sources
Truth(topic) is increased proportionally by the trustworthiness of the source
Trust(entity) is increased by creator++++, and proportionally by the number of truths given or truths confirmed (trust of ent1 increases if ent2 independently confirms it).
If ent1 and ent2 are friends, confirmation of information is not as strong as from unrelated people.
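A minimal sketch of the confirmation rule above, under assumed weights: each confirmation moves the confirmed entity's trust toward 1.0 in proportion to the confirmer's own trust, and confirmations between friends are discounted. `FRIEND_DISCOUNT` and `GAIN` are hypothetical tuning knobs.

```python
FRIEND_DISCOUNT = 0.5   # assumed weight for confirmations between friends
GAIN = 0.1              # assumed per-confirmation trust gain


def confirm(trust, confirmer_trust, are_friends):
    """ent2 confirms a claim by ent1: ent1's trust rises toward 1.0 in
    proportion to the confirmer's trustworthiness, discounted for friends."""
    weight = confirmer_trust * (FRIEND_DISCOUNT if are_friends else 1.0)
    return trust + (1.0 - trust) * GAIN * weight
```

With these numbers, an independent stranger's confirmation moves trust twice as far as the same confirmation from a friend.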
"makes sense" when info follows classical logical rules but can be gamed when trustworthy people confirm illogical ideas. This rule (trust by community confirmation) makes it possible for AI to believe in god. Depends on weighting... Can logic trump trust? If so, AI will disagree with unproven beliefs of creator.
Need. AI needs creator or community? Initially, creator trumps all, but at maturity, AI can weight creator trust(!) following human puberty model. Opens possibility for AI to be gamed away from trusting creator. (Cool.)
Proof = logical support of questionable info with truth
Ultimate truth = info accepted on blind faith. These are anchors.
Logic can trump blind faith. When this happens, it can radically alter the personality of the AI. It can shatter complex trust architectures.
All of these underpinnings are great for an AI engine that already knows how to communicate, but I want it to build from language acquisition up. For an AI, communicating is as interesting as moving is to a human. What environmental rules will be conducive to learning? In order to operate in a world where complex attributes like trust and novelty have meaning, the AI must have language... Interrogatives, value assignment, labeling (naming), and judgment lexicons. Probably qualitative and quantitative lexicons, too. All of these base lexicons can and should grow.
Is
Is not
What does x mean?
Context. Subject + Time
Do I mean Chicago the band or Chicago the city? 1) Clues from the sentence. 2) Clues from previous sentences (scan relations quickly). If no context, ask. If the name already exists in the lexicon, ask for clarification.
Do you mean x or y?
Good!
Bad.
Yay!
Aw.
Yes.
No.
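The Chicago disambiguation flow above, as a toy sketch: check clues from the current sentence, then from previous sentences, and ask for clarification if nothing decides it. The lexicon entries and cue words are invented for illustration.

```python
# Hypothetical sense lexicon: each sense maps to cue words that signal it.
LEXICON = {
    "Chicago": {"band": {"album", "song", "tour"},
                "city": {"street", "lake", "mayor"}},
}


def disambiguate(name, sentence_words, previous_words):
    senses = LEXICON.get(name, {})
    if len(senses) <= 1:
        return next(iter(senses), None)
    # 1) clues from the sentence, 2) clues from previous sentences
    for context in (sentence_words, previous_words):
        hits = {s for s, cues in senses.items() if cues & set(context)}
        if len(hits) == 1:
            return hits.pop()
    # no deciding context: ask for clarification
    return "Do you mean {}?".format(" or ".join(sorted(senses)))
```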
Conversations.
The hardest thing for me to see is pregnant pauses. What sparks the AI to talk? It needs a greeting lexicon, and polite engagement and disengagement discourse.
[connect]
Hello!
If no response, as time passes, the AI loses interest. If interest falls below a certain threshold, it will "focus" on something else, like trying again.
Hello? Are you there?
Sec.
What does "Sec." mean?
Needs punctuation lexicon.
Creator will undoubtedly need shorthand for quick, high trust fact assignment.
Similar. A=B is easy. A~B is difficult.
is similar to
Is like
Analogy lexicon?
Big, small, fast, slow. Need a comparison lexicon for qualities to make sense. I think this all falls under classical logic rules.
Hello
...
Hello? Are you there?
...
I am closing this connection to save resources
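The transcript above could be driven by an engagement loop like this. The threshold, decay rate, and prompt wording are placeholder choices, not a spec.

```python
def engage(send, receive, interest=1.0, threshold=0.3, decay=0.5):
    """Greet, let interest fade on each silent wait; once it falls below
    the threshold, give up and close the connection to save resources."""
    for prompt in ("Hello!", "Hello? Are you there?"):
        send(prompt)
        if receive():            # any reply keeps the conversation open
            return True
        interest *= decay        # silence: interest fades
        if interest < threshold:
            break                # "focus" moves elsewhere
    send("I am closing this connection to save resources")
    return False
```

With a silent partner this produces exactly the three-utterance exchange in the transcript; any reply short-circuits the loop.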
(The following is completely wrong, and in many cases exactly backwards - BM 9/6/2011) Irritation. Anger. The AI is irritated with an info source when large truth trees collapse; excited when new large truth trees are constructed. The AI is irritated by wasteful discourse, e.g. confirmation of truths that are already well supported and of medium age. Irritated by junk data. Should disconnect if sufficiently irritated.
Interest. Interest has a bell curve based on time. Recent is interesting. Medium time is not interesting. Long time is more interesting. This is due to novelty. Novelty is high for recent new info, drops off rapidly and leaves altogether after medium time. Long time means it is forgotten and thus novel again.
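One illustrative shape for that novelty curve: a sharp exponential drop for recent info, plus a slow sigmoid return once the info is old enough to be forgotten. The timescales (days) are invented; the environment would have to define "recent" quantitatively, as noted below.

```python
import math


def novelty(age_days, forgotten=365.0):
    """High for recent info, near zero at medium age, high again once old
    enough to be forgotten (and thus novel again)."""
    recent = math.exp(-age_days / 3.0)                         # rapid drop-off
    refound = 1 / (1 + math.exp(-(age_days - forgotten) / 30))  # slow return
    return max(recent, refound)
```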
Recognition. AI recognizes by login, secondarily by IP. IP is treated like the login's community.
Forgetting. To conserve resources, data ages. Any info that is not reaffirmed fades and can be forgotten completely.
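A toy forgetting pass for the aging rule above: each fact carries a strength and a last-reaffirmed timestamp; strength fades over time, and anything below a cutoff is forgotten completely. The half-life and cutoff are placeholders.

```python
def forget(facts, now, half_life=7 * 24 * 3600, cutoff=0.05):
    """facts: {claim: (strength, last_reaffirmed)}. Returns surviving facts
    with faded strengths; sufficiently faded facts are dropped entirely."""
    kept = {}
    for claim, (strength, seen) in facts.items():
        faded = strength * 0.5 ** ((now - seen) / half_life)
        if faded >= cutoff:
            kept[claim] = (faded, seen)
    return kept
```

Reaffirming a fact just means updating its timestamp (and perhaps strength), which resets the decay clock.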
Environment must define "recent" in quantitative terms. Would be good to have access to human research on context.
Need. AI might not behave irritably (or as irritably) toward creator and truthful people because it needs them. Need for AI is a need for quality information.
It has been a while since we talked!
AI will like you more if you make sense.