Tuesday, February 1, 2011

Leatherbound AI notes p16 (2003)

Leatherbound AI notes p16
Sunday, November 20, 2005
8:02 PM
5/1/2003
I have tried to narrow the focus of my AI project to communication. Should the brain be one that evolves into language? Letters are innate knowledge, words and rules are learned. So what rules govern letters?

All AI knows is that a string of characters is communication. Limited character count will help limit scope of communication in much the same way our larynx and mouth pronounce a finite range of sounds - ears only hear a limited spectrum.

A computer must know you are there to communicate
What do we do when we think?
Problem solve: Traverse cept trees: collate compare contrast condense

[11/20/2005 After reading over the last few pages, I feel like screaming at my 2003 self. Good lord. Get out of these minutiae. Letters are just cepts used to describe words. Should AI evolve into language? Before I am too hard on myself, I have to explain where I was going at the time. I was imagining a system that could be taught much like an infant…
In this romantic vision, the program would see words on the screen and imitate them back. It would study the back-and-forth relationship of words out and words in to learn to speak. This system would need an administrative layer that would be used to monitor and teach the brain of AI while it worked. Actually, this tool will be necessary anyway… But I think the system can start with an essentially blank slate, pre-programmed only with knowledge that will constitute learning tools. It will understand statements of assignment when phrased a certain way.]

Leatherbound AI notes p15 (2003)

Leatherbound AI notes p15
Sunday, November 20, 2005
7:42 PM
AI needs to be able to communicate back to its instructor (External monitoring is an option instead). I'm trying to avoid a conditioned response for not understanding. The actual response should be learned behavior. I keep thinking back to training animals - if you want to train an animal what "sit" means, you say the word and push on their hindquarters until they are in a sitting position. (This is an oversimplification.)

The point is that core language evolves out of physical interaction. You cannot physically interact with a computer on the same level as a human, so language instruction will be difficult at best.

Teaching language by way of the language, without other assistance, may be impossible without a lot of innate traits. Imagine instructing a child by using only words on a screen.

I can see AI evolving from a point of knowledge of words, but right now I'm having trouble seeing it evolve -into- words.

What would motivate it to communicate?

[11/20/2005 At this point I'm WAY overthinking the problem. Yes, it would be great if I could come up with a mechanism that would allow AI to learn words by starting from zero, but that would take too long. I think it will be necessary to start AI off with a small vocabulary that will allow instruction. The vocabulary will be soft as opposed to hardwired, so it can be learned, unlearned, and can even evolve over time.]
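
A quick sketch of what a "soft" vocabulary might look like in code, as opposed to hardwired definitions - this is only an illustration, and all of the names and numbers here are mine, not anything from the notes:

    # Minimal sketch: a seed vocabulary whose entries can be reinforced,
    # weakened, and eventually forgotten -- soft rather than hardwired.
    seed_vocabulary = {"yes": 1.0, "no": 1.0, "is": 1.0, "what": 1.0}

    def reinforce(vocab, word, amount=0.1):
        """Strengthen a word each time it is used successfully."""
        vocab[word] = min(vocab.get(word, 0.0) + amount, 1.0)

    def fade(vocab, amount=0.05, floor=0.01):
        """Weaken every word a little with disuse; drop entries that fade out."""
        for word in list(vocab):
            vocab[word] -= amount
            if vocab[word] < floor:
                del vocab[word]   # unlearned: the entry was soft, not hardwired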

Leatherbound AI notes p14 (2003)

Leatherbound AI notes p14
Sunday, November 20, 2005
7:35 PM

 image
The AI's world is "letter" based.

A SMALL TREE
On awakening, this will look like X_QVTEE XRYY until words are understood.

Question: do spaces qualify as characters or are they preprogrammed breaks?
I believe they should be considered characters like any other.

If the AI learns to interpret these itself, then it will be that much more flexible.

So establishing the inherited traits vs learned behavior should be fairly simple

Innate Abilities
  • Recognition of characters
  • Recognition of incoming vs outgoing characters

Learned
  • Recognition of groups of characters
  • Note: not "words" this is to encompass phrases… which allows cepts to be more than just  words - they can be complete conceptual entities. For example: The phrase "I don't know"
  • Is a single conceptual entity

[11/20/2005 This page, like the last, is a bit off the rails. White space will be treated like any other character, but it should be taught that white-space = word break right away to allow for greater ability to learn.

Innate abilities vs learned behavior should be established, but there is a third layer which is "primary learning", or learned behavior that is taught in order to facilitate learning. It's like grade school for AI.]
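
As a rough illustration of that three-layer split (innate, primary learning, learned), here is a sketch in which character recognition is innate, the break character is taught rather than hardcoded, and grouping follows from whatever break rule has been taught. The function names are my own:

    # Innate: the system can only recognize individual characters.
    def read_characters(stream):
        return list(stream)

    # Primary learning: the break character is taught, not built in, so a
    # different separator (or none at all) could be learned instead.
    taught_breaks = {" "}

    # Learned: grouping characters into cepts using the taught break rule.
    def group(characters, breaks=taught_breaks):
        groups, current = [], []
        for ch in characters:
            if ch in breaks:
                if current:
                    groups.append("".join(current))
                    current = []
            else:
                current.append(ch)
        if current:
            groups.append("".join(current))
        return groups

    print(group(read_characters("A SMALL TREE")))   # ['A', 'SMALL', 'TREE']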

Leatherbound AI notes p13 (2003)

Leatherbound AI notes p13
Sunday, November 20, 2005
7:29 PM
Refinement: cepts are not the container, cepts are the data
 image

Is there a practical way to remove the second level of environment?

HARDWARE/OS+Interpreter/AI environment host

… meaning - rather than building a word interpreter - should the learning engine start with letters as its building block? I think it should. Our basic level of comprehension in learning to deal with the world is not a molecule - it is an aggregate, though we can add the unseen, smaller world later.
Human: tangible obj. - manipulation
AI: character obj & …?

We interact with the world
Sight - object manipulation or communication and observe reaction
What is a computer assertion?
What motivates it to interact?
In order to explore capabilities one must be able to gauge reactions to actions.

[11/20/2005 Major step backward with the last 2 paragraphs… shocking after such a large leap forward with the first sentence. Cepts are the data, not the container. Why in the world did I go to 'letters as building blocks'??? A cept can be a letter, word, phrase, or small sentence. Very strange backpedaling. The last sentence is of slight interest: I think gauging reactions will be a function of the learning process.]
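
For what it's worth, "cepts are the data, not the container" suggests a very small structure: the cept holds its content (a letter, a word, or a whole phrase like "I don't know") plus relational links, and any weighting or evaluation lives in the surrounding engine. A hypothetical sketch:

    # Sketch only: a cept is just data plus relational links; evaluation and
    # link engineering belong to the engine, not to the cept itself.
    class Cept:
        def __init__(self, content):
            self.content = content   # a letter, word, phrase, or short sentence
            self.links = {}          # related cept -> bond strength

        def link(self, other, strength=0.1):
            self.links[other] = self.links.get(other, 0.0) + strength

    ball, round_ = Cept("ball"), Cept("round")
    dont_know = Cept("I don't know")   # a whole phrase can be a single cept
    ball.link(round_)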

Leatherbound AI notes p12 (2003)

Leatherbound AI notes p12
Sunday, November 20, 2005
7:14 PM
4/20/03
I just joined AAAI yesterday. They cover a variety of topics that I have touched on in my journals. Amazingly, they even have a section on the mechanics of common sense. I haven't read far into the current theory - but I still believe that they are missing the point that "common sense" is the result of our own experimentation over time. If you distill the mechanics of thought & enable a machine to process, learn, and explore, common sense will follow naturally.

This brings me back to my original musings on the nature of thought
 image

Avoid thinking of nouns and verbs. Everything is a cept with associated interactive & combinatory rules; in fact, these rules are cepts themselves.

Evaluation:
How does the evaluative process figure into the cept model?

 image
[11/20/2005 - It's good to see that I correct myself in mid-thought sometimes. The evaluation and link engineering should be part of the system, not the cept model itself. I think I was trying to establish a system whose rules were flexible enough to change themselves… but the truth is, our brains treat words and thoughts (cepts) in very consistent, predictable ways. Even if this system turns out to be flexible, it doesn't matter because we're working within an environment where the system doesn't require that level of flexibility. (Think: using Newtonian physics for everyday planet-earth situations as opposed to quantum mechanics.) The goal is to mimic, not mirror.

Ignore the drawings - they're crap.]

Leatherbound AI notes p11 (2003)

Leatherbound AI notes p11
Sunday, November 20, 2005
6:52 PM
A cept is the smallest detail unit - a conceptual construct, which is an impossibility in reality. This is why we have so much semantic ambiguity. We keep hoping for a true parent node but never find one. Why? Because the neural field is not bounded by concrete borders.

Faith is the act of relying on individual cepts without pursuing the trails further back (causing ambiguity)

[11/20/2005 Wow. This is a can of worms. First, a cept is a construct that houses a single idea. What I meant by "an impossibility in reality" is that this is not going to be the organic solution; this is a simple representation of a more complex structure. I believe semantic ambiguity will be inherent in the system as we use words to define words. The rest of that sentence is probably correct…

I do have to change my definition of Faith per my journal entry early this morning. Faith isn't the act of failing to follow a definition back to discover the untruths; it has to do with a rather radical, complicated cept - believing in that for which there is no proof. This cept changes the way we process information. "I know what I see, and what you say makes sense, but I can't believe it because God is the answer and his answer trumps all." It is a true mental weighting system at work. This particular cept has been given authority to always be higher in truth than all other facts. Fascinating.]
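
That "mental weighting system" can be sketched concretely: a cept that has been granted authority outranks ordinary evidence no matter how much of it accumulates. This is only an illustration of the idea, with made-up numbers:

    # Sketch: a cept granted authority always outranks ordinary evidence.
    def evaluate(evidence_scores, authoritative_verdict=None):
        if authoritative_verdict is not None:
            return authoritative_verdict   # given authority, it trumps all other facts
        if not evidence_scores:
            return 0.0
        return sum(evidence_scores) / len(evidence_scores)

    # Strong supporting evidence, but the authoritative cept overrides it.
    print(evaluate([0.9, 0.8, 0.95], authoritative_verdict=0.0))   # -> 0.0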

Leatherbound AI notes p10 (2003)

Leatherbound AI notes p10
Sunday, November 20, 2005
6:50 PM

image

[11/20/05 Actually, I think this definition holds up! See previous notes.]


Leatherbound AI notes p9 (2003)

Leatherbound AI notes p9
Sunday, November 20, 2005
6:12 PM

 image

[11/20/2005 Well, the first figure still holds: bond strength between concepts (which governs retention and recall) is stronger if the path is traversed frequently. This bond still fades over time, but should bear the marks of having had high bond strength (perhaps a max?)

#2 is almost completely right. Memorizing by rote simply runs over paths in short-term memory which don't stick. From experience you will often recall the first fact and the last one, but without any strong relational thread, you'll lose all the stuff in the middle. There has to be some way of figuring in interest level… I wonder how to represent interest. I think it almost has to be an artifact of the system. Perhaps interest comes in when lots of facts make sense? I don't know.

#3 is a weak statement that isn't worth much… Recall is just the act of retrieving information. I'm not sure what I meant by 'forced'.

#4 I'm not sure what I meant by fast tree creation - perhaps I was trying to illustrate the disposability of the short-term memory tree. I think relation to existing cepts helps move things into long-term memory, then repetition over time. I have defined cepts earlier, and I believe those definitions supersede the one above.]
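
The first figure's claim translates fairly directly into a toy update rule: traversing a path strengthens the bond up to some maximum, and every bond fades a little with time, so a well-worn bond fades from a much higher starting point than one visited once by rote. A hedged sketch, with arbitrary constants:

    MAX_BOND = 1.0

    def traverse(bonds, a, b, gain=0.1):
        """Strengthen the bond between two cepts, up to a maximum."""
        key = (a, b)
        bonds[key] = min(bonds.get(key, 0.0) + gain, MAX_BOND)

    def fade(bonds, rate=0.02):
        """All bonds weaken with time; frequently traversed ones started higher."""
        for key in bonds:
            bonds[key] = max(bonds[key] - rate, 0.0)

    bonds = {}
    for _ in range(20):
        traverse(bonds, "stove", "hot")          # frequent traversal hits the cap
    traverse(bonds, "first fact", "middle fact") # rote, single pass -> weak bond
    fade(bonds)
    print(bonds)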

Leatherbound AI notes p8 (2003)

Leatherbound AI notes p8
Sunday, November 20, 2005
5:24 PM
Sunday 2/2/03

I'm letting too much time slip by without developing my AI project. Maybe now that work will be more regimented, I will be able to devote more time to this:
 image
In the broader scheme you can see evidence of this in the way we write, organize, compute, drive… everything we do (page) is an extrapolation of our thought process. So why then is it so hard for the system to examine itself?

[11/20/2005 HA! The first paragraph was written when I was first told about having to work only 8 hours and report that in Time & Labor. I would soon be promoted to management and quickly overwhelmed by work once again.

The image of a neuron is useful in one major way: it shows the relational lines between cepts. As for the essence of thought, I'm not so sure my list covers the bases adequately. This may be a description of one of the background processes for storing cepts, but since I don't elaborate on any of the one-word bullets, it is hard to be sure.

Why is it so hard for the system to examine itself, indeed. A friend of mine introduced me to the idea that "You can't read the label from inside the bottle." That's a fantastic analogy.]

Leatherbound AI notes p7 (2003)

Leatherbound AI notes p7
Sunday, November 20, 2005
5:17 PM
Language & Interpretation
image


[11/20/2005 Wow. Another vague entry. I think I was trying to divine the understanding of interjections, but I sure didn't write much.

I just imagined a dialogue where the word HEY! is explained.

Heh - I even thought about explaining that ALL CAPS IS SHOUTING SO AMP UP THE IMPORTANCE, then figuring out how to teach AI to spot people who cry wolf with caps. That would be a lot of fun to explore. For another time, perhaps.]
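
The all-caps idea is easy enough to sketch, and the cry-wolf detection falls out of it if each shout is discounted by how often that user shouts. Purely illustrative, with invented numbers:

    # Sketch: amp up importance for shouted input, discounted for chronic shouters.
    shout_counts = {}

    def importance(user, message, base=1.0):
        if message.isupper():
            shout_counts[user] = shout_counts.get(user, 0) + 1
            return base * (2.0 / shout_counts[user])   # crying wolf erodes the boost
        return base

    print(importance("user_a", "HEY!"))         # 2.0 -- amped up
    print(importance("user_a", "HEY AGAIN!"))   # 1.0 -- the boost is already gone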

Leatherbound AI notes p6 (2003)

Leatherbound AI notes p6
Sunday, November 20, 2005
5:07 PM
1/22/03
Perhaps an object model of thought would provide some insight.
What is the core object? Fact? Word? A Truth? I think I'm using a fact weighted by faith in that fact.
image
[11/20/2005 That's a good question. I still don't have an answer. I know at some point whatever structure is used should be capable of containing a phrase which constitutes a single idea. Perhaps "Idea" is the root object. (It's good that we don't have to mess with sensory information!) I'm only modeling thought and communication.

The drawing above is pretty useless… doesn't say much. The comment about comparing facts to augment faith is also useless. I wonder if I should introduce a drive to find similarities that will surpass the desire to be factual. (Modeling the behavior of astronomers who are looking so hard for another planet with life that they think they see life everywhere… or the religious devout who see/feel/hear God.)

cept = idea = concept. A concept can be as simple as a letter or as complex as a phrase. If it is a sentence, it should be short and must embody a single modular concept.

So what defines the modularity of a concept? If parts of it can be stripped away and applied to another cept in some analogous way, then the stripped version is stored as a cept.]
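
That last rule about modularity reads almost like an algorithm: when part of one cept also turns up in another, the shared part is stripped out and stored as a cept of its own. A minimal sketch of that reading (the word-level matching here is my own simplification):

    # Sketch: if part of one cept also appears in another, store the shared
    # part as a cept in its own right.
    def shared_part(phrase_a, phrase_b):
        tokens_b = set(phrase_b.split())
        return " ".join(t for t in phrase_a.split() if t in tokens_b)

    cepts = {"I don't know", "I don't care"}
    common = shared_part("I don't know", "I don't care")
    if common:
        cepts.add(common)   # "I don't" becomes its own cept
    print(cepts)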

Leatherbound AI notes p5 (2003)

Leatherbound AI notes p5
Sunday, November 20, 2005
4:52 PM

Believability fades with disuse of information

Importance fades with disuse

Relevance
Also fades with disuse
Low relevance items fall off the tree first as memory runs low

Knowledge Storage and Retrieval

Language & language usage is a reflection of how the brain stores and organizes information

[11/20/2005 I agree with the edit up there: importance fades with disuse. Importance of information will be weighted by three factors: intensity(?), time, and repetition. For example, I don't ever touch a lit stove because I know that I can be seriously hurt. This is an immutable fact, but if I were whisked away for 30 years to a planet which had no fire, I could see this information fading away. On my return I might remember something about a stove being hot and dangerous, so I'd avoid it, but the fear wouldn't be as intense. That was probably a poor example. A better one would be: You have to carry exact change for the bus because they don't make change. If you stop riding buses and enough time passes, you may forget about that completely.

An interesting secondary note - this is the sort of memory that can be revived when the idea is reintroduced. (e.g. you take the bus again for the first time in years and say "OH YEAH! I forgot!") You remember what buses are for, how they work, etc., but forgot a detail.

I don't know if I agree with the line about language being a reflection of how the brain stores and organizes information. I think it is a clue to how the brain stores and organizes information, but not a complete reflection. Also in the car today I thought about the smallest unit of thought (in fact, I was up at 3am this morning searching the internet for this topic… no luck). I'm still not solid on how I want to approach the storage of information apart from a barely formed notion that I need to have near-infinite relational links to 'cepts'.]
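
The bus-fare example suggests two distinct behaviors worth separating: the importance of an unused fact decays with time, and reintroducing the idea revives it rather than forcing it to be relearned from scratch. A toy sketch with arbitrary numbers (it covers intensity and time; repetition would layer on top):

    # Sketch: importance decays with disuse but revives on reintroduction.
    class Fact:
        def __init__(self, text, intensity=0.5):
            self.text = text
            self.importance = intensity

        def disuse(self, years, rate=0.03):
            """Importance fades a little for every year the fact goes unused."""
            self.importance = max(self.importance - rate * years, 0.0)

        def reintroduce(self, boost=0.4):
            """'OH YEAH! I forgot!' -- the detail comes back quickly."""
            self.importance = min(self.importance + boost, 1.0)

    fare = Fact("carry exact change for the bus", intensity=0.6)
    fare.disuse(years=15)    # years without riding the bus
    fare.reintroduce()       # riding again revives the detail
    print(round(fare.importance, 2))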

Leatherbound AI notes p4 (2003)

Leatherbound AI notes p4
Sunday, November 20, 2005
4:37 PM
V "Everyone says that you lie to me"
Introduces Doubt
B "Do they? Do I give you a lot of contradictory information?"
V"No" Doubt Reduced… (a little)

Importance
Frequency of use
Very important - as told by several people w/ high believability scores

Expertise
Increases believability in a particular tree. If a user gives a lot of non-contradictory info under a particular node, their expertise level goes up.

[11/20/2005 I'm amazed by how much went unsaid here. There is enough to trigger the right memories, though. The exchange above highlights the attribute by which no truth is unshakable. AI questions the believability of his master. I'm imagining all sorts of scenarios going on in the real world. For example, I release this AI brain and it skyrockets in popularity. Mob mentality can kick in and people could gang up to tell the AI brain that I lie. (This isn't paranoia, just fascinating fuel for consideration - I'd welcome that challenge)

The importance section is a carry-over from the core attribute that memories and facts solidify with repetition. I was talking about this with Amy in the car today when I realized that repetition in the immediate memory doesn't make a lasting impression because only the summary will make it to the long term storage. Long term storage will be invigorated by review later on down the road. I imagine a clock ticking away on short term memory which lasts just long enough to allow context but not long enough to take up valuable system resources. I wonder if I can use computer memory as short term memory and hard drive as long term. That would be interesting. The hard drive only stores summaries and analogies…. That is a story for another time.

Another idea triggered by the importance section has to do with weighting. Since AI won't have any life-threatening hazards to contend with (it will work through positive reinforcement), I have to find some rule for highlighting the importance of certain concepts. "Don't tell anyone my phone number," for example, could be bypassed if the computer could be bamboozled easily. A user could say "Oh, Brien told me to tell you to give it to me."

Expertise should probably be a side effect of learning and interaction rather than a weighted attribute. If someone gives a lot of good discussion that makes sense on a particular branch of "cepts", they could be considered an expert. The concept of 'expert' can simply be defined as a cept within AI.]
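
That last point (expertise as a side effect rather than a stored attribute) could be sketched as nothing more than a query over a user's past contributions on a branch of cepts, rather than a weight the system maintains. Hypothetical names and data throughout:

    # Sketch: expertise is derived from a user's track record on a branch of
    # cepts, not stored as a separate weighted attribute.
    contributions = [
        # (user, branch, contradicted_existing_knowledge)
        ("alice", "astronomy", False),
        ("alice", "astronomy", False),
        ("alice", "astronomy", True),
        ("bob",   "astronomy", True),
    ]

    def expertise(user, branch):
        record = [c for u, b, c in contributions if u == user and b == branch]
        if not record:
            return 0.0
        return record.count(False) / len(record)   # share of non-contradictory info

    print(expertise("alice", "astronomy"))   # ~0.67 -- mostly consistent
    print(expertise("bob", "astronomy"))     # 0.0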

Leatherbound AI notes p3 (2003)

Leatherbound AI notes p3
Sunday, November 20, 2005
4:30 PM
TRUTH & Believability

Users all have a core level of believability. People whose information is usually contradictory (w/ self, w/ group, or w/ well-established facts), wrong, or inadequate = low belief score

People who confirm facts with low truth scores increase truth and their own believability.

People who confirm well established truths do not gain much believability nor does their input increase the fact's truth as much as it would a lesser known truth.

[11/20/2005 Once again I see my brain outpacing my ability to write. The second paragraph means: People who are able to provide adequate supporting evidence to bolster their low-scoring fact will impress the AI engine, increasing their likeability and believability. This is the equivalent of convincing someone that using "you and I" in the predicate is incorrect by showing the rule to them from a grammar book.

The scenario handled by the third paragraph is the one in which someone bores you by telling you things you already know.]
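
Those three paragraphs read like a single update rule: confirming a poorly established fact raises both the fact's truth and the confirmer's believability, while confirming a well-established fact barely moves either. A rough sketch of that rule, with invented constants:

    # Sketch: confirming a shaky fact earns more than confirming an obvious one.
    def confirm(fact_truth, user_believability, rate=0.2):
        """Both scores rise in proportion to how unestablished the fact was."""
        room = 1.0 - fact_truth                 # little room left on well-known truths
        fact_truth += rate * room * user_believability
        user_believability += rate * room
        return min(fact_truth, 1.0), min(user_believability, 1.0)

    print(confirm(fact_truth=0.2, user_believability=0.5))    # both rise noticeably
    print(confirm(fact_truth=0.95, user_believability=0.5))   # barely moves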

Leatherbound AI notes p2 (2003)

Leatherbound AI notes p2
Sunday, November 20, 2005
4:29 PM
    Dialogue:
    Brien: "A ball is round." (a statement of assignment)
    VGER: Ball  tans round (Brien's truth weight is given to the assignment)
    No truth is unshakable. The evaluation system needs weights that can shift easily.
    [11/20/2005 I actually crossed that last part out in my notes, but it is true. I think my 'truth weight' represents the level of trust the AI engine has for me. (I'm also a little embarrassed by "VGER" up there, but I've been trying to find a good name for my AI brain for years and have so far come up empty.)]
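
That exchange can be sketched as one mechanism: a statement phrased as an assignment ("A ball is round") creates a weighted link between two cepts, and the weight is simply the speaker's current trust score, which is itself allowed to shift since no truth is unshakable. The "X is Y" pattern and the names here are mine, not the notes':

    import re

    # Sketch: a statement of assignment becomes a cept link weighted by the
    # speaker's trust score; weights can always be revised later.
    trust = {"Brien": 0.9}
    links = {}   # (subject, attribute) -> truth weight

    def assign(speaker, statement):
        match = re.match(r"(?:a |an |the )?(\w+) is (\w+)", statement.lower())
        if match:
            subject, attribute = match.groups()
            links[(subject, attribute)] = trust.get(speaker, 0.1)

    assign("Brien", "A ball is round")
    print(links)   # {('ball', 'round'): 0.9}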

Leatherbound AI notes p1 (2003)

Leatherbound AI notes p1
Sunday, November 20, 2005
4:18 PM
This is a collection of my AI notes as scribbled in a nice leatherbound book with one of the nicest pens I have ever owned. I'm not sure what became of the pen or the leather book enclosure, but as of November 2005, I have the actual book (one of them anyway… weren't there more?)

This is a faithful transcription of the notes and drawings in my book (no matter how ludicrous I find them now).

January 05, 2003

Pinker "How the Mind Works"
p.14 "An intelligent system, then, cannot be stuffed with trillions of facts. It must be equipped with a smaller list of core truths and a set of rules to deduce their implications."

No:

This is the way to build today's machines. A truly intelligent system will be a machine which evaluates truth. The first truly cognizant machines will best be employed as judges - this is something even sci-fi writers have missed.

To be a thinking machine is to evaluate levels of truth

[11/20/2005 Pinker is right, and I am actually agreeing with him. In fact, I am kind of laughing at how immature and excited I sound in this passage - it was only 3 years ago. The idea of computer judges may be on track, but this assumes that AI will have reached a sufficient level of understanding. I believe I was hinting at the idea that AI would parse words and meaning without human bias.]

Leatherbound notebook

I just came across a 2005 transcription of some handwritten notes that I made in 2003. They are interesting in that they are my initial thoughts, followed by commentary a few years later. I will post each page separately.