Imagination

Propose new ideas

  1. For action
  2. For imagining novel thoughts

When motivational intensity is high, thought transition temperature is low. Less novelty.
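
A minimal sketch of that coupling, assuming thought transitions are sampled from a softmax whose temperature shrinks as motivational intensity grows (the coupling function and all names are made up):

```python
import numpy as np

def sample_thought(logits, motivation, base_temp=1.0):
    """Sample the next thought; higher motivation -> lower temperature -> less novelty."""
    temp = base_temp / (1.0 + motivation)   # hypothetical coupling
    scaled = logits / temp
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(logits), p=probs)

logits = np.array([2.0, 1.0, 0.5])
print(sample_thought(logits, motivation=0.0))  # exploratory
print(sample_thought(logits, motivation=9.0))  # near-greedy, little novelty
```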

[Figure: Sketch.png]

I see that data can increase or decrease the information content of other data both in time and space
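
One way to make this concrete is conditional entropy: observing one variable can raise or lower the effective information content of another. A toy sketch with a made-up joint distribution:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Made-up joint distribution of two binary variables X (rows) and Y (cols).
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])

h_y = entropy(joint.sum(axis=0))                       # H(Y) = 1 bit
p_x = joint.sum(axis=1)
h_y_given_x = sum(p_x[i] * entropy(joint[i] / p_x[i])  # H(Y|X) ~= 0.72 bits
                  for i in range(len(p_x)))
print(h_y, h_y_given_x)
```

On average, conditioning never increases entropy, but a particular observation can raise it, which matches the "increase or decrease" intuition.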

[Figure: Sketch.png]

Spatial priors

Information relations among nodes in the visual lattice should be similar to information relations in the thought space
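
One way to operationalize this, as a sketch: compare the pairwise relation matrices of the two spaces and penalize their disagreement. Here absolute correlation stands in (loosely) for pairwise mutual information, and all names are hypothetical:

```python
import numpy as np

def relation_matrix(acts):
    """Pairwise |correlation|, a cheap stand-in for pairwise mutual information."""
    return np.abs(np.corrcoef(acts.T))

visual = np.random.randn(1000, 16)    # samples x visual-lattice nodes
thought = np.random.randn(1000, 16)   # samples x thought-space nodes
mismatch = np.mean((relation_matrix(visual) - relation_matrix(thought)) ** 2)
print(mismatch)  # candidate auxiliary loss for aligning the two spaces
```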

It needs some complex, chaotic system to strive to maintain homeostasis with. Instead of just making up a chaotic system, have it always carry around another agent that it speaks to.
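
A minimal sketch of carrying around another agent, with a stub standing in for a real conversational model (everything here is hypothetical):

```python
import random

def companion_reply(message):
    """Stub for the carried agent; a real system would put a learned model here."""
    return message + random.choice(["?", "!", " and then?"])

state, setpoint = 1.0, 1.0   # one homeostatic variable and its setpoint
message = "hello"
for _ in range(5):
    message = companion_reply(message)   # the conversation itself...
    state += random.uniform(-0.3, 0.3)   # ...perturbs internal state
    state += 0.5 * (setpoint - state)    # the agent acts to restore homeostasis
    print(f"{message!r} state={state:.2f}")
```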

Should I have a salience network that switches between the default mode and central executive networks? If so, would a symbolic planner be useful? Or what if a much more expensive AI service, like the OpenAI API, were used when executive mode was activated? It should cost energy.
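
A sketch of that routing, assuming a salience score gates between a cheap local model and an expensive executive call, and the executive call debits an energy budget (the threshold, costs, and stub models are all assumptions):

```python
def default_mode(x):
    return f"habitual response to {x}"    # cheap local model (stub)

def central_executive(x):
    return f"deliberate response to {x}"  # expensive model, e.g. a remote API (stub)

def step(x, salience, energy, threshold=0.7, exec_cost=5.0, default_cost=0.1):
    """Salience network: route to the executive only if it is worth the energy."""
    if salience > threshold and energy >= exec_cost:
        return central_executive(x), energy - exec_cost
    return default_mode(x), energy - default_cost

energy = 10.0
for salience in (0.2, 0.9, 0.95):
    out, energy = step("stimulus", salience, energy)
    print(out, f"energy={energy:.1f}")
```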

Homeostasis must have a mathematical definition, so that the reward function will have something to optimize. It cannot depend on what the brain predicted, so just having another internal monologue is not the answer; the complex system should use an energy currency that is consumed both by acting in the environment and by paying for the compute infrastructure.
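
A minimal candidate definition, assuming homeostasis is measured as squared deviation of internal variables from setpoints plus metered energy costs (every term here is an assumption):

```python
import numpy as np

def homeostatic_reward(internals, setpoints, action_energy, compute_energy, lam=0.1):
    """r = -||x - x*||^2 - lam * (energy spent acting + energy spent on compute)."""
    drive = np.sum((np.asarray(internals) - np.asarray(setpoints)) ** 2)
    return -drive - lam * (action_energy + compute_energy)

print(homeostatic_reward([0.9, 1.3], [1.0, 1.0], action_energy=2.0, compute_energy=5.0))
```

Nothing in it depends on what the brain predicted; the signal comes only from measured internal state and metered energy.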

Maybe executive mode is top-down: directed activity happens with much higher probability. Also, System 2 does not care about reward, just probability, using action and perception as the divergence model.
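
A toy sketch of a System 2 that ranks actions purely by model log-probability, with no reward term (the transition table is made up):

```python
import numpy as np

# Made-up transition log-probabilities: log p(outcome | state, action).
log_p = {("s0", "a"): np.log(0.8), ("s0", "b"): np.log(0.2)}

def system2_plan(state, actions):
    """Pick the action whose outcome the model finds most probable; no reward term."""
    return max(actions, key=lambda a: log_p[(state, a)])

print(system2_plan("s0", ["a", "b"]))  # -> 'a'
```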

During sleep, let the network make its most likely transitions while occasionally (with probability proportional to the cross-entropy, so it doesn't train on completely wacky data) resetting the internal state from the experience buffer, sampling states in proportion to the novelty they had when experienced. Then use those collected state transitions to actually fit the network and minimize energy.
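
A sketch of that sleep loop, with a stub model standing in for the real network; the reset rule, cap, and scales are all assumptions:

```python
import numpy as np

class StubModel:
    """Stand-in for the real network; sample() returns (next_state, cross_entropy)."""
    def sample(self, state):
        return state + 0.1 * np.random.randn(*state.shape), float(np.random.exponential())

def sleep_replay(model, buffer, novelty, steps=1000, reset_scale=0.05, ce_cap=3.0):
    transitions = []
    state = buffer[np.random.randint(len(buffer))]
    for _ in range(steps):
        nxt, ce = model.sample(state)     # follow a likely transition
        if ce < ce_cap:                   # don't train on completely wacky data
            transitions.append((state, nxt))
        if np.random.rand() < min(1.0, reset_scale * ce):
            p = novelty / novelty.sum()   # reset weighted by past novelty
            state = buffer[np.random.choice(len(buffer), p=p)]
        else:
            state = nxt
    return transitions                    # fit the network on these afterwards

buffer = [np.random.randn(8) for _ in range(32)]
novelty = np.random.rand(32)
print(len(sleep_replay(StubModel(), buffer, novelty)))
```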

Also, during REM sleep, excessively propagated information should be minimized as a consequence of the fast learning that takes place at high energy.

Also, constantly train in real time with a learning rate proportional to the node-wise information content (this allows fast compositional learning without forcing the entire brain to change).
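
A sketch of node-wise learning rates, using each node's surprise under a running Gaussian as a crude proxy for information content (the proxy and the scaling are assumptions):

```python
import numpy as np

def node_lr(activations, mean, var, base_lr=1e-3, eps=1e-6):
    """Per-node learning rate proportional to surprise under a running Gaussian."""
    surprise = 0.5 * (activations - mean) ** 2 / (var + eps)  # nats, up to a constant
    return base_lr * surprise / (surprise.mean() + eps)       # average lr ~= base_lr

acts = np.random.randn(16)
lrs = node_lr(acts, mean=np.zeros(16), var=np.ones(16))
print(lrs.round(5))  # surprising nodes learn fast; the rest barely move
```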

Information sparsity regularization
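
One concrete reading, as a sketch: add a penalty on total activation magnitude (an L1 term, a cheap proxy for the information the activations carry) to the training loss; the form and coefficient are assumptions:

```python
import numpy as np

def sparsity_penalty(activations, coef=1e-4):
    """L1 penalty on activations: a cheap proxy for information sparsity."""
    return coef * np.sum(np.abs(activations))

acts = np.random.randn(128)
print(sparsity_penalty(acts))  # add this to the training loss
```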