Why aren't water hydraulic systems employed more frequently in lieu of oil hydraulics or electric servo motors? I can still remember watching Austin demonstrate a tiny model elevator actuated by water-filled syringe pairs. It was so cool! to feel and see the instant force and motion feedback! to watch it amplify my fingers’ unconscious motion! Like, just watch this (30 sec), and you’ll get the idea:
And why can’t we directly scale these techniques into production machinery and consumer products? Is water too sensitive to temperature variation? Is its surface tension too high? Laminar flows too thick? Power transmission density too low? Or is it just too different from what’s been tested, so there isn’t enough economic incentive to make the change? I’m sure lots of people have solid technical answers to these questions (like one SpaceX guy on Stack Exchange below), but imo, in an end-to-end comparison, water-based hydraulics can win on production simplicity, maintenance complexity, environmental impact, and, oh yeah, cost.
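For what it’s worth, the viscosity difference alone changes the picture. Here’s a back-of-the-envelope sketch, with pipe size, flow rate, and fluid properties all rough assumed values, showing the same small line running laminar with a typical mineral oil but turbulent with water:

```python
# Rough fluid comparison for a small hydraulic line: water vs. a typical
# mineral hydraulic oil. Property values are order-of-magnitude ballparks;
# pipe size and flow rate are assumed purely for illustration.
import math

fluids = {
    # density kg/m^3, dynamic viscosity Pa·s
    "water (20 C)":        (1000.0, 1.0e-3),
    "mineral oil (~VG46)": (870.0,  4.0e-2),
}

d = 0.006                      # 6 mm inner-diameter line (assumed)
q = 2.0e-5                     # 20 mL/s flow (assumed)
length = 1.0                   # metres of line considered
area = math.pi * d ** 2 / 4
v = q / area                   # mean flow velocity

for name, (rho, mu) in fluids.items():
    re = rho * v * d / mu                               # Reynolds number
    regime = "laminar" if re < 2300 else "turbulent"
    # Hagen-Poiseuille pressure drop (only meaningful while the flow stays laminar)
    dp = 128 * mu * length * q / (math.pi * d ** 4)
    print(f"{name:22s} Re ≈ {re:6.0f} ({regime}),  laminar Δp ≈ {dp / 1e3:.1f} kPa/m")
```

With numbers in that spirit, water’s low viscosity pushes the line toward turbulence where the oil stays laminar, which is one of the standard technical answers, though it doesn’t settle the end-to-end question.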
Why aren't pneumatic/hydraulic artificial-muscle-actuated humanoid robots more common?
And comparing them against servo motors: obviously power needs to enter the system at some point, so the following probably wouldn’t apply if you just need one isolated motor. But when you need to actuate 3, 10, or more parts at, say, 10 W each, buying a 12 V, 0.8 A electric servo for each articulation means paying for a complete gearbox, a motor with laminated coils and shaft assembly, drivers, and whatever else rated for that power level, for every single joint. IMO these are the cases where you could instead use a single N×10 W electric motor for the pump and then N valves that direct this hydraulic current to the right place.
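To make the scaling concrete, here’s a toy bill-of-materials comparison. Every unit cost is a made-up placeholder; the only point is the shape of the curves as N grows:

```python
# Toy bill-of-materials comparison for N ~10 W articulations: one servo per
# joint vs. one shared pump motor plus a valve per joint. All unit costs are
# made-up placeholders, just to show where the argument comes from.
def servo_per_joint(n, servo_cost=25.0):
    # each joint pays for its own motor + gearbox + driver
    return n * servo_cost

def shared_pump(n, pump_motor_cost=60.0, valve_cost=6.0, plumbing_per_joint=3.0):
    # one N*10 W motor and pump, then a cheap valve and some plumbing per joint
    return pump_motor_cost + n * (valve_cost + plumbing_per_joint)

for n in (1, 3, 10, 30):
    print(f"N={n:3d}  servos: ${servo_per_joint(n):7.2f}   pump+valves: ${shared_pump(n):7.2f}")
```

With any numbers in that spirit, the shared pump loses for a single joint and wins once a valve is much cheaper than a complete servo and N gets large enough.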
Why isn't pipelined local unsupervised learning discussed more frequently? Most of the environmental and financial burden of training large machine learning models comes from the supercomputer’s energy usage. Most of that energy is spent shuttling information around; the computation itself uses only a fraction of it. This means it is economically, financially, and strategically desirable to make deep learning architectures perform most of their operations at nearby points in space and time. The FLOP analyses I’ve seen focus only on total FLOPs and ignore the energy cost of each particular FLOP. I think people will start heading in this direction though. I mean, we’ve nearly hit the metal with 3.35-bit LLMs (July ‘23). These sorts of algorithms are going to have to be pipeline-able, meaning the propagation wave of information they produce needs to maintain a relatively even wavefront over the connection topology of the compute substrate, so that successive wavefronts can follow each other as closely as possible without interfering.
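A crude way to see what a locality-aware cost model adds over a plain FLOP count. The per-operation energies below are rough order-of-magnitude placeholders, not measured figures:

```python
# Toy energy model for one layer: counting FLOPs alone vs. also charging for
# data movement. On-chip math is cheap; off-chip DRAM traffic is not.
E_FLOP = 1e-12      # ~1 pJ per FLOP (assumed ballpark)
E_DRAM = 100e-12    # ~100 pJ per byte fetched from DRAM (assumed ballpark)

def layer_energy(m, k, n, weights_resident_on_chip: bool):
    flops = 2 * m * k * n                        # multiply-accumulates in an m×k @ k×n matmul
    act_bytes = 4 * (m * k + m * n)              # fp32 activations in and out
    wgt_bytes = 0 if weights_resident_on_chip else 4 * k * n
    return flops * E_FLOP + (act_bytes + wgt_bytes) * E_DRAM

# Same math, very different bills depending on where the weights live:
for resident in (False, True):
    e = layer_energy(m=1, k=8192, n=8192, weights_resident_on_chip=resident)
    print(f"weights on-chip={resident}:  ~{e * 1e3:.2f} mJ per token for one layer")
```

Same FLOP count both times; the bill is dominated by whether the weights have to cross the memory boundary.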
And regarding the “local unsupervised learning”: that’s the family of algos I see as most promising for solving this problem. With a touch of multidimensional reinforcement learning. Similar to the brain’s optimization strategy.
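As a minimal member of that family, here’s an Oja’s-rule sketch (toy sizes, synthetic data): every update uses only the layer’s own input and output, no global backward pass, which is exactly what makes it pipeline-friendly.

```python
# Oja's rule: a single linear unit learning the top principal direction of
# its input stream. The update uses only local quantities (x, y, w), so
# layers stacked this way can learn in a streaming, pipelined fashion.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16) * 0.1        # one linear unit over a 16-d input
lr = 1e-2

for _ in range(10_000):
    x = rng.normal(size=16)
    x[0] += x[1]                     # plant a correlation for the unit to discover
    y = w @ x
    w += lr * y * (x - y * w)        # Oja's rule: Hebbian term with local normalization

print(np.round(w, 2))                # converges toward the top principal direction
```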
Why is everything sooo overparametrized but not? Like, most real objectives have multiple critical paths. Obv., this is advantageous in terms of robustness and exploration. Evolution does whatever it previously didn’t need to. So yeah, of course overparametrized stuff is more likely to be observed than underparametrized stuff. But then again, whether a system counts as ‘overparametrized’ or ‘underparametrized’ depends entirely on the reference system. Like, my brain has an enormous state space and so does the world, but only a tiny fraction of each of their complexity is exchanged over my lifetime.
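The textbook version of the first point, with made-up sizes: once there are more parameters than constraints, a whole subspace of solutions fits exactly, so there are many ‘critical paths’ to the same loss.

```python
# Overparametrized in the textbook sense: more parameters than data points,
# so an entire affine subspace of weight vectors interpolates the data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 200))           # 20 examples, 200 parameters
y = rng.normal(size=20)

w_min_norm = np.linalg.pinv(X) @ y       # the minimum-norm interpolator
null_dir = np.linalg.svd(X)[2][-1]       # a direction X is blind to
w_other = w_min_norm + 5.0 * null_dir    # a very different weight vector...

print(np.abs(X @ w_min_norm - y).max())  # ~0: fits the data
print(np.abs(X @ w_other   - y).max())   # ~0: also fits the data
```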
And on the note of overparametrization, the one-to-one thought pattern is too common in supervised machine learning. Unlike ChatGPT, real people don’t give canned or templated responses to 1/4 of everything you say. So if you’re doing supervised ML, please give your models some entropy to train and run with. Let them enjoy a taste of freedom. And if you take the Cartesian product of your input set with a random support, the data processing inequality breaks down, so you can synthesize much more data than you have.
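A sketch of what pairing inputs with a random support could look like in practice; the helper name, noise shape, and sizes are made-up for illustration:

```python
# Pair every input with k independent noise draws so the model learns a
# conditional distribution p(y | x, z) instead of one canned mapping x -> y.
import numpy as np

rng = np.random.default_rng(0)

def with_random_support(inputs: np.ndarray, k: int = 8, z_dim: int = 4):
    """Cartesian product of the input set with k noise samples each."""
    n = inputs.shape[0]
    x_rep = np.repeat(inputs, k, axis=0)         # each x appears k times
    z = rng.normal(size=(n * k, z_dim))          # fresh noise per copy
    return np.concatenate([x_rep, z], axis=1)    # the model sees (x, z)

X = rng.normal(size=(100, 16))
X_aug = with_random_support(X)
print(X.shape, "->", X_aug.shape)                # (100, 16) -> (800, 20)
```

At run time you’d sample a fresh z for each call, so the same prompt can legitimately land on different responses.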
What is my Algorithm? Don’t just say ‘prediction’ or ‘minimize variational free energy.’ Those capture some aspects, but I want the details. Imagine you could roam the space of all algorithms, perhaps using some eval suite with L2 to compute a topology. What manifolds would you see? Where are you? Where am I?
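One concrete (and very toy) reading of ‘eval suite with L2’: fingerprint each algorithm by its outputs on a shared battery of probe inputs, then measure pairwise L2 distance between fingerprints. The algorithms and probes below are placeholders.

```python
# Behavioral distance between "algorithms": run each one on a shared eval
# suite, stack the outputs into a fingerprint vector, compare with L2.
import numpy as np

rng = np.random.default_rng(0)
eval_suite = rng.normal(size=(64, 8))            # 64 shared probe inputs

algorithms = {
    "mean":      lambda x: x.mean(axis=1),
    "max":       lambda x: x.max(axis=1),
    "logsumexp": lambda x: np.log(np.exp(x).sum(axis=1)),
    "first-dim": lambda x: x[:, 0],
}

fingerprints = {name: f(eval_suite) for name, f in algorithms.items()}
names = list(fingerprints)
for a in names:
    row = [np.linalg.norm(fingerprints[a] - fingerprints[b]) for b in names]
    print(f"{a:10s} " + "  ".join(f"{d:6.2f}" for d in row))
```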
Is the imaginary component of the wavefunction an actual Hilbert space, or is it just a description of the unmeasured Reality? I used to be really curious about science, and especially physics. I really wanted to understand the universe. Since I didn’t understand linear algebra, though, I would just skim over the equations. But this changed once I began reading deep learning literature (which is a whole different story). I developed a sufficient grasp of Hilbert spaces and unitary matrices and what have you that I felt I finally understood the basics of quantum fields. And that’s when I lost interest. Like, yeah, I wonder why the fields are shaped the way they are, but it isn’t intense enough to drive me to devote my life to quantum topology and dark energy research. Because (and this is just speculation) what if our view of the underlying quantum field is a deviation from the intrinsic structure of the universe? Like, instruments can only observe interactions with the “real” component of the wavefunction, but close your eyes for a dozen or more attoseconds and it’s tunneled away!
Does the survival-awareness of a cognitive entity create the phenomena of pleasure and suffering? When I italicized create, I meant to emphasize that I’m not asking ‘does survival awareness result in these phenomena?’ or ‘does it explain these phenomena?’ I’m asking whether the process of a system implicitly accounting for the survival impact of its own evolution is one and the same as creating pleasure and pain. Not necessarily that the system would be aware of this feeling. E.g., most RL systems are never aware of the reward they are ‘given’ during environment interaction, even though their parameters integrate this information at train time. For a counterexample, try getting a massage when you’re sore. You might register pain, but the sensation can be pleasurable. Pain ≠ Suffering. And iirc, it was Buzsáki (2019) (?) who claimed that neurons fire with the ‘intent’ of being activated again in the future. But I think the main idea I’m expressing is that self-organizing systems evolve along the trajectory of survival, which demands being aware, at least locally, of feedback information; and when this feedback information comes from the system itself, the system is both aware of and responding to its own feedback. And I’d like to know if this is the way things really are.
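The RL aside in code form, just to pin down what ‘never aware of the reward’ means here: in the toy bandit sketch below (all values made up), the reward never appears in the policy’s input, only in the parameter update.

```python
# Toy REINFORCE-style bandit: the acting policy conditions only on its
# observation (here, nothing); reward enters solely through the update.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                       # preferences for 2 actions

def policy(obs_unused):
    p = np.exp(theta) / np.exp(theta).sum()
    return rng.choice(2, p=p), p          # action chosen without ever seeing reward

for _ in range(2000):
    action, p = policy(obs_unused=None)
    reward = 1.0 if action == 1 else 0.0  # the environment's hidden preference
    grad = -p
    grad[action] += 1.0                   # grad of log pi(action) for a softmax policy
    theta += 0.05 * reward * grad         # reward appears only here, at "train time"

print(np.round(theta, 2))                 # action 1 ends up strongly preferred
```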
What does the future hold for AGI and Superintelligence? I’m not optimistic, but I really can’t say. I’m most worried about incompetent and misaligned humans though, not the NNs they control. I hope to make a positive impact here.