I may just have to use two separate calls to tape.gradient

Do some JupyterLab experiments

If it's possible though, I'd like to only make one call to tape.gradient. That way I don't have to make a persistent tape
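A quick sketch of the two options, assuming TensorFlow and throwaway variables x and y. Passing a list of targets to a single tape.gradient call sums the gradients over those targets, so the single-call version only works if that combined gradient is what I actually want:

```python
import tensorflow as tf

x = tf.Variable(2.0)
y = tf.Variable(3.0)

# Option A: persistent tape, two separate gradient calls.
with tf.GradientTape(persistent=True) as tape:
    loss_a = x * y
    loss_b = x + y
grad_a = tape.gradient(loss_a, [x, y])
grad_b = tape.gradient(loss_b, [x, y])
del tape  # release the resources held by the persistent tape

# Option B: one non-persistent tape, one gradient call.
# The list of targets is summed, so this gives d(loss_a + loss_b)/d[x, y].
with tf.GradientTape() as tape:
    loss_a = x * y
    loss_b = x + y
grads = tape.gradient([loss_a, loss_b], [x, y])
```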

I need to add e and ebu and etd

Combine the split and concat with the dense network
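A minimal sketch of what I mean, assuming the Keras functional API and made-up split sizes; the split and concat live inside the same model as the dense layers, so one call does everything:

```python
import tensorflow as tf

# Hypothetical sizes; the real split points come from the node layout.
inp = tf.keras.Input(shape=(8,))
# Split the input inside the model instead of outside it.
top_down, bottom_up = tf.split(inp, num_or_size_splits=[3, 5], axis=-1)
h_td = tf.keras.layers.Dense(4, activation="tanh")(top_down)
h_bu = tf.keras.layers.Dense(4, activation="tanh")(bottom_up)
# The concat is also part of the graph, so a single model call covers it all.
out = tf.keras.layers.Concatenate(axis=-1)([h_td, h_bu])
model = tf.keras.Model(inp, out)
```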

Add RNNs to every location. The hippocampus has no lateral fns, only one big RNN. The RNN runs during the gradient-generation part, so this is how errors get carried forward in time. Actually, use Hopfield networks
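A minimal classical Hopfield sketch (Hebbian outer-product weights, asynchronous sign updates) as a stand-in for that one big RNN; the patterns and sizes are just placeholders:

```python
import numpy as np

# Patterns are +/-1 vectors; weights come from the Hebbian outer-product rule.
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]], dtype=float)
n = patterns.shape[1]
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(state, sweeps=5):
    """Asynchronous sign updates; each sweep lowers the Hopfield energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            h = W[i] @ state
            if h != 0:
                state[i] = np.sign(h)
    return state

noisy = np.array([-1, 1, 1, -1, -1, -1], dtype=float)  # pattern 0 with one bit flipped
print(recall(noisy))  # settles back on [ 1.  1.  1. -1. -1. -1.]
```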

Add parameter to make lateral updates optional

I should actually train connection fns on the t-2 to t-1 transition, then run them (and backprop errors) to generate t from t-1
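A rough sketch of that loop, assuming a single Dense layer stands in for a connection fn and squared error is the loss; it fits the connection on the observed t-2 → t-1 transition, then runs it forward to propose t from t-1 (the activation-level error backprop is left out here):

```python
import tensorflow as tf

# Hypothetical single connection fn; the real model has one per location.
connection = tf.keras.layers.Dense(16)
opt = tf.keras.optimizers.Adam(1e-3)

def step(x_tm2, x_tm1):
    """Train on the (t-2 -> t-1) transition, then generate t from t-1."""
    # 1. Fit the connection on the transition we have already observed.
    with tf.GradientTape() as tape:
        pred_tm1 = connection(x_tm2)
        loss = tf.reduce_mean(tf.square(pred_tm1 - x_tm1))
    grads = tape.gradient(loss, connection.trainable_variables)
    opt.apply_gradients(zip(grads, connection.trainable_variables))
    # 2. Run the just-updated connection forward to propose x_t.
    x_t = connection(x_tm1)
    return x_t, loss
```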

Only carry gradient updates from the previous activation to the new activation when a new refractory period begins
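A minimal sketch of that gating, assuming activations live in a tf.Variable and a boolean mask marks which nodes just started a new refractory period:

```python
import tensorflow as tf

activations = tf.Variable(tf.zeros([8]))
lr = 0.1  # placeholder step size

def gated_update(grads, new_refractory):
    """Only move a node's activation when its refractory period has just restarted;
    all other nodes ignore the gradient this step."""
    mask = tf.cast(new_refractory, grads.dtype)
    activations.assign_sub(lr * grads * mask)
```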

I want to create nonlinear updates

Energy is consumed by error.

The energy of an upstream node is consumed by the error of its connected downstream nodes.

Actually a node's own energy is consumed when updating it. This creates a refractory period

A refractory period creates a time minimizing constraint
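A toy sketch of the energy bookkeeping for one node, with made-up constants; updating the node drains its own energy, and the drained stretch that follows is the refractory period:

```python
# Toy per-node energy bookkeeping; all constants are placeholders.
energy = 1.0
UPDATE_COST = 0.6     # a node's own energy consumed when its activation updates
RECOVERY_RATE = 0.05  # energy regained per step while the node sits idle

def tick(error, error_threshold=0.5):
    """Update the node only if it has enough energy and enough error.
    Updating drains its energy, which is what produces the refractory period."""
    global energy
    updated = energy >= UPDATE_COST and error > error_threshold
    if updated:
        energy -= UPDATE_COST
    energy = min(1.0, energy + RECOVERY_RATE)
    return updated
```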

To make the refractory period discontinuous, both the energy and the error must surpass a critical level before a location updates; I may use a sharp sigmoid to achieve this. The refractory period should be tuned to maximize information throughput when change is demanded, but to minimize high-energy time when there is not a critical amount of error. The refractory period normally restarts only when a neuron's activation changes. Connections are only active when all of their inputs and at least some of their outputs are non-refractory
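A sketch of that sharp-sigmoid gate, with placeholder critical levels and sharpness; both factors have to be near 1 for an update to go through, which keeps the gate nearly discontinuous while staying differentiable:

```python
import tensorflow as tf

def update_gate(energy, error, e_crit=0.5, err_crit=0.5, sharpness=50.0):
    """Near-binary gate: a node updates only when BOTH its energy and its
    error exceed their critical levels. The large sharpness makes each
    sigmoid approximately a step function while keeping gradients defined."""
    g_energy = tf.sigmoid(sharpness * (energy - e_crit))
    g_error = tf.sigmoid(sharpness * (error - err_crit))
    return g_energy * g_error
```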

But there should be inhibitory neurons to synchronize the refractory periods across diverse regions of the brain

How would I train inhibitory neurons?