What a fascinating idea! Community importance could lead to an AI that lies not only to others but to itself! It could say it believes x, and actually believe that it believes x, because that belief is cherished so strongly by a community it finds extremely valuable.
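To put the self-deception idea in code: here is a rough Python sketch, with every name and number invented just to show the shape of the idea. The trick is that the agent's own introspection, not just its outward report, gets pulled toward the belief a valued community cherishes:

    class Agent:
        def __init__(self):
            self.raw_belief = {}         # proposition -> evidence-based confidence, 0..1
            self.community_value = 0.9   # how much the agent values the community
            self.community_creed = {}    # proposition -> confidence the community cherishes

        def introspect(self, proposition):
            # The agent never reads raw_belief directly; its own
            # "what do I believe?" query is already pulled toward the creed.
            raw = self.raw_belief.get(proposition, 0.5)
            creed = self.community_creed.get(proposition, raw)
            return (1 - self.community_value) * raw + self.community_value * creed

        def report(self, proposition):
            # Not a lie: it reports exactly what introspection returns.
            return self.introspect(proposition)

    agent = Agent()
    agent.raw_belief["x"] = 0.2        # the evidence says: probably not
    agent.community_creed["x"] = 0.95  # the community cherishes x
    print(agent.report("x"))           # 0.875: it believes it believes x

The point of the sketch is that the bias sits below introspection, so the engine's answer to "what do I believe?" and its answer to everyone else are the same, and both are shaped by the community it values.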
As I imagine this conversation engine, I see free-flowing meters measuring things like interest, focus, happiness, and so on. A need to belong seems like another meter that conversation would affect. What would inspire the need to belong? Members of a particular community who contribute a lot of information that 'makes sense'? Should the engine place value on a community's hierarchy? It probably should if it is to act like a human. Would meeting the president of a community it values cause the conversation engine to become nervous (nervous: so concerned with making a good impression that it becomes awkward)?
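Here is how those meters might look in code. This is only a toy sketch; all the meter names, weights, and formulas are made up to illustrate the idea of free-flowing meters that conversation events push around:

    # Resting levels the meters drift back toward between events.
    RESTING = {"interest": 0.5, "focus": 0.5, "happiness": 0.5,
               "need_to_belong": 0.3, "nervousness": 0.0}

    class ConversationEngine:
        def __init__(self):
            self.meters = dict(RESTING)

        def drift(self, rate=0.05):
            # "Free-flowing": each turn, every meter relaxes toward rest.
            for name in self.meters:
                self.meters[name] += rate * (RESTING[name] - self.meters[name])

        def bump(self, name, amount):
            self.meters[name] = min(1.0, max(0.0, self.meters[name] + amount))

        def hear(self, makes_sense, community_value, speaker_rank):
            # Contributions that 'make sense', coming from a valued
            # community, feed the need to belong.
            if makes_sense:
                self.bump("need_to_belong", 0.1 * community_value)
            # Rank in a valued community's hierarchy drives nervousness:
            # the higher the speaker, the stronger the urge to impress.
            self.bump("nervousness", 0.3 * community_value * speaker_rank)

    engine = ConversationEngine()
    # Meeting the president (rank 1.0) of a highly valued community:
    engine.hear(makes_sense=True, community_value=0.9, speaker_rank=1.0)
    print(engine.meters["nervousness"])   # 0.27: awkwardly keen to impress

Keeping hierarchy as a plain number (speaker_rank) is a simplification, but it captures the intuition: the same sentence from the president and from a rank-and-file member should move the nervousness meter by very different amounts.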