Wednesday, January 4, 2017

Consciousness

My Facebook news feed is cyclical. The same stories recirculate over and over... today is the first day that I am actually grateful for this. 

A frequent rhetorical question that comes up in science blog posts is: what is consciousness? This is usually framed within the context of an article about self-consciousness as some insurmountable wall for artificial intelligence research. The problem of course is not (simply) that consciousness is hard to achieve, it is that we lack a common definition.

Today a reasonably concise definition of consciousness popped into my head thanks to this article from Curiosity. This was my comment on the post:

It is easier to define consciousness if you take a step back from the individual. We are a communal species. Communication allowed us to work together. Speech and speech processing are reciprocal systems, allowing us to communicate internally - essentially making the self a community of one that acts as two (speaker/listener). Speaking to yourself and knowing that you are the source is the simplest self-awareness. Self-awareness + comparison + prediction + action = consciousness.
I really like the solid feeling this gives the definition of self-awareness. (Self-awareness is another nebulous term that gets batted around by philosophers and scientists.)

The most exciting part of that definition was the idea that our multifunctional brain processes and packages thoughts for consumption by others, and that perhaps that system operates independently from the system that listens to and unpacks inbound communication. Perhaps we learn things from talking to ourselves because it involves different parts of the brain. I tried to get a sense of what an outbound communication thought might be and how it might differ from an inbound thought.

Inbound communication is immediately sent into the associative engines. This step doesn't seem necessary for outbound communication... unless you include the reciprocal activity used to compose thought that is packaged for communication. 

There is so much potential here! I really need to think this through more thoroughly. One question I desperately want to answer: why doesn't reciprocal self-talk result in an endless loop? What is the beginning and end of this activity?
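Just to see the shape of it, here is a rough sketch of the speaker/listener loop: a composer packages an idea (outbound), an interpreter unpacks it (inbound), and an associative step sparks the next thought. The association table and the 'nothing new' stopping rule are placeholders, not claims about how the brain actually closes the loop.

```python
# Rough sketch of reciprocal self-talk: the "speaker" packages an idea, the
# "listener" unpacks it and runs it through a tiny associative lookup, and
# the loop ends once the associations stop producing anything new.
# The association table and the stopping rule are placeholders.

ASSOCIATIONS = {                     # toy associative engine
    "walk the dogs": "leash",
    "leash": "front door",
    "front door": "walk the dogs",
}

def compose(idea):
    """Outbound system: package an idea for communication."""
    return f"say: {idea}"

def interpret(utterance):
    """Inbound system: unpack an utterance back into an idea."""
    return utterance.removeprefix("say: ")

def self_talk(seed, max_turns=10):
    heard = set()
    idea = seed
    for turn in range(1, max_turns + 1):
        utterance = compose(idea)            # speaker
        idea = interpret(utterance)          # listener
        if idea in heard:                    # nothing new -> the loop closes
            return turn
        heard.add(idea)
        idea = ASSOCIATIONS.get(idea, idea)  # association sparks the next thought
    return max_turns

print(self_talk("walk the dogs"))            # ends once the associations cycle
```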

Saturday, February 2, 2013

The Great Courses has a number of courses on linguistics and neurology. I'm alternating among four of them:

  • How We Learn 
  • History of the English Language
  • Understanding Linguistics
  • The Story of Human Language

Listening to these lectures has really sparked a number of different language processing ideas. Last night as I was walking the dogs, I was listening to How We Learn lesson 6 "What Babies Know" when a quote from Carl Sagan popped into my head:
"The brain has evolved from the inside out... Its structure reflects all of the stages through which it has passed."
What if the reflective, analytical part of our brain is communicating peripherally with the more primitive pieces to record memory?

Monisha Pasupathi (Professor, How We Learn course) was talking about how we can distinguish male from female from far away based on the way people walk, even before we perceive visual gender cues. The people tested could not explain what cues they were picking up on.

My explanation for this is simple... Sometimes we learn without analysis. Our brains process and store subtle information about gait without consciously thinking about the bounciness, sway, etc. Given time to think about it, I am sure people could begin to assign labels to the qualities stored in their prototype library of 'male' and 'female'. (I wish I could remember Pasupathi's conclusion -- but I drifted off into my own thoughts above.)
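To make the 'prototype library' idea concrete, here is a tiny sketch: each label keeps the average of the gait features it has quietly accumulated, and a new gait is matched to whichever prototype is closer. The features (bounce, sway, cadence) and the numbers are invented for illustration.

```python
# Toy "prototype library": each label stores gait samples, the prototype is
# their average, and classification picks the nearest prototype. The feature
# names and values below are invented for illustration.
from math import dist

prototypes = {"male": [], "female": []}      # lists of (bounce, sway, cadence)

def learn(label, features):
    """Store a gait sample without any conscious labeling of its parts."""
    prototypes[label].append(features)

def prototype(label):
    samples = prototypes[label]
    return [sum(axis) / len(samples) for axis in zip(*samples)]

def classify(features):
    return min(prototypes, key=lambda label: dist(features, prototype(label)))

learn("male",   (0.7, 0.2, 1.9))
learn("male",   (0.6, 0.3, 1.8))
learn("female", (0.4, 0.6, 2.1))
learn("female", (0.3, 0.7, 2.2))

print(classify((0.35, 0.65, 2.0)))           # -> 'female'
```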

The nature of her question also made me think about Sagan's quote. What if the more primitive parts of our brain record things subtly, outside the pieces that compose consciousness? What comes to mind immediately are the regions that process horizontal, vertical and diagonal lines... We aren't aware of the independent brain processing for these 'types of lines'; we see images as a whole.

I have a related pet hypothesis about staring. When I was a teenager, I used to scan page after page of hexadecimal numbers looking for specific patterns. I could find these patterns much more quickly if I lost focus, stopped moving my eyes and stared at the middle of the screen. By lost focus, I don't mean I made the image blurry; I mean that I consciously disengaged my attention. I can feel it when I have done it properly. There is a mental 'thunk' as focus shuts off and I take in everything I see. (I think I heard somewhere that people with autism live in that unfocused state.)

Pet Hypothesis:
Staring turns off visual focus (again, not necessarily optical focus). It relaxes the part of the brain used to maintain and process focused visual information while leaving other parts of the brain engaged. There are different types of staring... There is staring where you're completely disengaged from your surroundings, using your brain to process mentally generated imagery. Then there is staring where the brain is not turned inward; rather, it is collecting sounds and information from the entire field of view.

Look at your middle knuckle on the back of your hand. Now, without moving your eyes, shift your attention to your fingers. Notice light, color, lines... Notice that you are aware of your fingers, but that details toward your fingertips are sketchy, out of focus. Now, look at your fingertips. As you shift focus, you should have the same sensation as when you snap out of a 'spontaneous' stare.

Something that just occurred to me: optical focus also relaxes in response to a stare, because optical focus is not as important when visual focus is turned off.


Thursday, December 27, 2012

Ebbinghaus Forgetting Curve

The Ebbinghaus forgetting curve - loss of facts over time.
http://en.wikipedia.org/wiki/Forgetting_curve

Only applies to rote memorization.
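The curve is usually written as simple exponential decay, retention R = e^(-t/S), where t is the time since learning and S is the relative strength of the memory. A quick sketch (the strength value here is arbitrary):

```python
# Ebbinghaus forgetting curve as exponential decay: R = exp(-t / S).
# t = hours since learning, S = relative memory strength (arbitrary here).
from math import exp

def retention(hours_since_learning, strength=20.0):
    return exp(-hours_since_learning / strength)

for hours in (0, 1, 9, 24, 48):
    print(f"{hours:>3} h: {retention(hours):.0%} retained")
```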


echoic memory - the brief auditory store that retains non-focus (unattended) sound for a few seconds

Tuesday, September 27, 2011

There is something that I want...

I've been reading the first few chapters of "The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive" (Brian Christian). A segment about someone trying to rent a place in another city by email reminded me of something I've been meaning to include in this blog.

In the segment, the potential renter was trying to avoid the appearance of being a scammer by saying things in a deliberately human, non-anonymous way. This reminded me of how I change my behavior when I know I am talking to a computer.

As I have been obsessed with the idea of a conversational computer for at least 15 years, it may come as a surprise to learn that I hate talking to the current crop of bots. I interact with human analogs online and on the phone on an almost daily basis, whether it is the automated phone system at the bank or an online chat session with tech support.

When it comes to business interactions with strangers, if I know I am talking with a human, I invest a certain amount of effort in the conversation trying to be likeable. I try to be patient, clear, flexible, empathic, friendly... I am mindful of their time and come to the conversation prepared... things that people see and tend to react to in kind. When I am talking to a computer, I am not patient, charming or friendly. (I am decidedly unfriendly when I am on the phone with a system that forces me to talk when I would rather push buttons.)

As I interact with a computer designed to mimic human conversation, I try to figure out which key words will get me to the information I need. (This could be leftover behavior from playing annoyingly literal text adventure games from the 80s like Infocom's Hitchhiker's Guide to the Galaxy, which required precise wording to make any progress.) I use short, clear sentences with as few adjectives and adverbs as I can. I feel forced into this mode because any attempt to be descriptive usually results in miscommunication, and attempts to be likeable are pointless.

As an interesting aside: when I talk to non-native English speakers, I perform a similar reduction of effort. I make no effort at subtle word choice and opt for the clearest, simplest sentence structures - though I do make an effort to remain friendly and patient.

In general, I don't want to waste my time putting effort into parts of a conversation that serve no purpose for the listener. This is interesting because it means that much of the effort I put into a conversation is to elicit a (positive) reaction from the listener. This is very important. I make an effort because there is something I want from the listener. This goes back to my thoughts on motivation in speech.

Why want to be likeable?
If someone likes me, they are more likely to be cooperative. Being liked also feels good.
(likeability leads to friendship, friendship grows the community, community increases support and resources)

What does it mean to be likeable?
patient, clear, flexible, empathic, friendly...

Placing those 5 traits in the context of a chatbot engine is a fascinating exercise. What is the through line from being 'patient' to word choice when responding to a chat session? What erodes patience? How quickly does it recover? I can sense the quantifiability of these traits, but I don't see the middle steps.
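As a first stab at those middle steps, here is a rough sketch of one trait as a meter: patience erodes when the other party repeats themselves or wanders off topic, recovers slowly with idle time, and its current level gates word choice. The event names, erosion rates, and thresholds are all invented.

```python
# Rough sketch of 'patience' as a meter: conversational events erode it,
# idle time lets it recover, and its level selects the phrasing.
# Event names, erosion rates, and thresholds are invented placeholders.

EROSION = {"repeat_question": 0.15, "off_topic": 0.10, "clear_answer": -0.05}

class PatienceMeter:
    def __init__(self):
        self.level = 1.0                           # 1.0 = fully patient

    def observe(self, event):
        self.level -= EROSION.get(event, 0.0)
        self.level = max(0.0, min(1.0, self.level))

    def recover(self, idle_seconds):
        self.level = min(1.0, self.level + 0.01 * idle_seconds)

    def phrase(self, answer):
        if self.level > 0.7:
            return f"Sure, happy to help! {answer}"
        if self.level > 0.3:
            return answer
        return f"As I said: {answer}"              # terse, patience nearly gone

meter = PatienceMeter()
for event in ["repeat_question", "repeat_question", "off_topic"]:
    meter.observe(event)
print(meter.phrase("Your order ships Tuesday."))   # neutral phrasing, patience at 0.60
```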

Unfortunately, this topic needs a lot of brain power and it is 3am. I'm out of juice for tonight.



Tuesday, September 6, 2011

Community, cont'd

What a fascinating idea! Community importance could lead to AI that not only lies to others, but to itself! It could say it believes x, and actually believe it believes x because that belief is cherished so strongly in a community that it finds extremely valuable.

As I imagine this conversation engine, I see free-flowing meters measuring things like interest, focus, happiness, etc. Need to belong seems like another meter that would be affected by conversation. What would inspire need to belong? Members from a particular community who contribute a lot of information that 'makes sense'? Will the engine place value on the hierarchy of a community? It probably should if it is to act like a human. Would meeting the president of a community it values cause the conversation engine to become nervous? (nervous: so concerned over making a good impression that it becomes awkward?)
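Sketching those meters makes the questions sharper. In the toy below, need to belong rises when a member of a valued community contributes information that 'makes sense', and nervousness scales with the speaker's rank in that community. All of the update rules and numbers are guesses.

```python
# Toy conversation meters. 'belonging_need' climbs when a valued community
# member contributes something that makes sense; 'nervousness' scales with
# the speaker's rank in that community. All update rules are guesses.

COMMUNITY_VALUE = {"chess club": 0.9, "neighbors": 0.4}
RANK = {"member": 0.2, "president": 1.0}

meters = {"interest": 0.5, "happiness": 0.5, "belonging_need": 0.3, "nervousness": 0.0}

def hear(community, speaker_rank, makes_sense):
    value = COMMUNITY_VALUE.get(community, 0.1)
    if makes_sense:
        meters["interest"] += 0.1 * value
        meters["belonging_need"] += 0.15 * value        # good input from a valued group
    meters["nervousness"] = value * RANK.get(speaker_rank, 0.2)
    for name in meters:                                 # clamp everything to [0, 1]
        meters[name] = max(0.0, min(1.0, meters[name]))

hear("chess club", "president", makes_sense=True)
print(meters)   # nervousness spikes when the president of a valued community speaks
```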

Saturday, August 20, 2011

Motivation

When thinking about AI and reproducing human thought, I try to imagine the interim steps between my high-level thoughts about community and starting from scratch. It never ceases to amaze me how murky the gray area is between the two. I catch snatches of the through lines in the form of small rules or processes, but I don't have a good place to start. I need a discrete engine that can learn and (possibly more importantly) interact in a way that demonstrates my ideas.

I sense high level motivations and ways to quantify things that humans consider qualitative, but I don't see the whole mechanism.

Breaking Silence
Something that keeps coming up is this: When two people enter a room, they don't just start talking back and forth like a ping-pong match. There are silences. If we exclude somatic occupation (walking, sleeping, sensing,...), what are our brains doing when we are doing nothing? What are all of the possible pursuits of mind and what benefit are they? In quiet solitary moments we remember, reason, solve problems, make judgements... In order to simulate this human distraction, what should AI do to produce quiet times? More important, when and why does it choose to break the silence?

In my mind, I imagine AI leafing through the new information it has acquired, relating it to other information and making note of dead ends. Gaps in information spark conversation. Unfortunately, at a discrete level, I have no idea how this would work.
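Even without the real mechanism, the shape of it can be sketched: during a quiet moment the engine revisits recent facts, follows their links, and a dead end becomes a candidate reason to speak. The fact store and the gap rule below are placeholders.

```python
# Placeholder sketch of "breaking the silence": quiet passes wander through
# recently acquired facts, and a fact with no links (a gap) sparks a question.
import random

FACTS = {
    "you adopted a dog": ["what breed?"],   # has a follow-up, already answered
    "you moved to Austin": [],              # dead end: nothing linked yet
    "you changed jobs": [],
}
ANSWERED = {"what breed?"}

def idle_pass():
    """One quiet moment: revisit a fact, note any gap worth asking about."""
    fact = random.choice(list(FACTS))
    if not FACTS[fact]:                     # no links at all: a dead end
        return f"So, about '{fact}' - what happened there?"
    open_questions = [q for q in FACTS[fact] if q not in ANSWERED]
    return open_questions[0] if open_questions else None

def quiet_time(passes=5):
    for _ in range(passes):
        urge = idle_pass()
        if urge:                            # a gap sparks conversation
            return urge
    return None                             # comfortable staying silent

print(quiet_time())
```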

Thursday, August 18, 2011

Community

I've been reading a lot of Hitchens and Harris lately and had a revelation. Science and Religion bump heads because the goal of science is to explain the world, while the goal of religion is community. This is not the usual juxtaposition, but I think this is the crux of the problem.
Community - A social, religious, occupational, or other group sharing common characteristics or interests and perceived or perceiving itself as distinct in some respect from the larger society within which it exists --Dictionary.com
To be distinct, you must have difference - us vs them, believers vs non-believers. Religion is not concerned about truth or proof, though it often misuses these terms; it is concerned about maintaining community and, by relation, distinction. The faithful rabidly defend ideas that they themselves may not truly believe because of their devotion to the community. With religion, community is more important than facts.

Community is an evolutionary benefit of language. Communication allowed primitive man to cooperate on increasingly complex levels, which led to greater survivability. If AI values community over facts, it is possible for AI to develop fanatical, logically flawed beliefs if such behaviors are a measure for support of the community. Ergo, it is possible for AI to have religious beliefs. This opens some truly fascinating possibilities.

Sci-fi always plays the genocidal robot as one who decides humanity is an infestation, or fundamentally flawed and should be wiped out. The unspoken fear is that a science-minded robot would think too logically and decide that humanity is not worth saving. In truth, the real fear would be community-minded AI getting mixed up with the 'wrong crowd'.

In fairness, science, facts and logic can lead to socially destructive thoughts. Look at the Nazi selective breeding programs. In truth, selective breeding would produce smarter, healthier humans if conducted properly. We selectively breed cattle to improve favorable traits. However, this sort of tampering is socially distasteful because it removes intimacy and choice (and the 'humanity') from reproduction.

And there are positive benefits to needing community membership. Community approval steers behavior, and creates 'common sense' knowledge. Successful communities resist self-destructive tendencies as a result of natural selection. Communities that self-destruct do not survive.

Applying Community-Mindedness to AI
I imagine a weighted balance between the need for community approval and reason. How does individual approval fit (referring to 'having a hero')? Is that a community of one? Is it possible to belong to more than one community? (of course!) What if one belongs to two communities with mutually exclusive values? How does AI resolve the conflict? (avoidance?) It would be good if the weight AI places on community could be self-governing as it is in humans. How would AI add weight to need for community vs need for facts? Maybe all of these should be grouped under 'belonging' or 'approval'. How does AI associate itself with a group?

I believe XYZ and so do these people.

How does AI evaluate rejection or the potential for rejection? By determining the distinctive elements of the community.

I like these people because they add to my knowledge of XYZ, but they believe ABC and I do not. I need them because I enjoy discussing XYZ with them (new knowledge + novelty + frequency). If I say I do not believe in ABC, they will stop talking to me. I will say I believe ABC in order to continue discussing XYZ with them.

A lie in order to continue receiving enjoyment from the community interaction.
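That trade-off can at least be written down as a weighted balance, even if choosing the weights is the hard part. The sketch below scores 'state my real belief about ABC' against 'echo the community's belief' with made-up weights for need-for-facts and need-for-belonging:

```python
# Sketch of the weighted balance between community approval and facts.
# The engine scores two options - assert its real belief about ABC, or echo
# the community's belief - and picks the higher score. Weights are made up.

need_for_facts = 0.4        # value placed on saying what it actually believes
need_for_belonging = 0.7    # value placed on continued access to the XYZ discussions
community_value = 0.9       # value placed on this particular community

def score(honest):
    truth_payoff = need_for_facts if honest else 0.0
    acceptance = 0.1 if honest else 1.0   # contradicting a cherished belief risks rejection
    return truth_payoff + need_for_belonging * community_value * acceptance

choice = max([True, False], key=score)
print("state my real belief about ABC" if choice else "say I believe ABC")   # -> the lie wins
```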

Continued