r/SneerClub archives

Every waking moment of every waking day I back propagate and run gradient descent in my head to make sure my priors are in line with the world. It’s not much but it’s honest (sic).

"I trust in the good ol' Robbins-Monro algorithm to optimize my objective happiness function. My policy is to reasonably, rationally, slow down my learning when steps are too large for my provably unbiased tastes. The optimal step size is in expectation harmful to my respectable identity. I'm more of a fiscally liberal and socially conservative kind of guy."
Heh. More seriously though, backprop / gradient descent seem quite biologically implausible; we actually have little idea how the brain learns. Nothing's really off the table yet, not even wacky quantum stuff (it's just highly unlikely). It's really fascinating. Thought experiments about brain simulations and other such crap completely miss the part where we actually figure something out, like "how does the brain do learning this well with this little training data", and come across facts we can't even imagine yet. We just get futurist crap without ever having been surprised by anything.
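For anyone following along: "backprop" is just the chain rule run backwards through a network. A hand-rolled toy sketch (the weights, input, and target are made up), showing exactly the kind of precise backward pass that nobody has found a neural mechanism for:

```python
import math

# Backprop on a one-hidden-unit network, written out by hand: the chain rule
# applied from the loss back to each weight, followed by a gradient step.

def train_step(w1, w2, x, target, lr=0.1):
    # Forward pass
    h = math.tanh(w1 * x)          # hidden activation
    y = w2 * h                     # network output
    loss = 0.5 * (y - target) ** 2

    # Backward pass (chain rule)
    dloss_dy = y - target
    dloss_dw2 = dloss_dy * h
    dloss_dh = dloss_dy * w2
    dloss_dw1 = dloss_dh * (1.0 - h ** 2) * x  # tanh'(z) = 1 - tanh(z)^2

    return w1 - lr * dloss_dw1, w2 - lr * dloss_dw2, loss

w1, w2 = 0.5, -0.3
for _ in range(200):
    w1, w2, loss = train_step(w1, w2, x=1.0, target=0.8)
print(loss)  # shrinks toward zero
```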
Yeah, I'm aware. I do follow Scott Aaronson because I'm in CS, so I've read his doubts about quantum effects in the brain. My personal, unqualified hypothesis is that if ML folx find any breakthrough in how the brain works, it'll come out of learning theory and something like PAC.
Yeah. The biggest gap is not even that we don't know how "learning" works; it's that the brain also works massively better than our best attempts (it learns usefully from much less data), so we're definitely missing something very major, whether of the classical computing variety or not (if it's not doing quantum computations on microtubules, then what we're missing on the algorithmic side is even more major). Meanwhile there's this huge body of pretending in earnest that a far more complete account of the mechanisms already exists, merely because we find it likely that one can exist. Even if we had an account of everything except learning, a mind without any learning (if you can call that a mind) can't reasonably be expected to replicate all the subjective phenomena, any more than a (not physically possible) perfectly frozen brain could, since a fundamental aspect of it would be stuck unchanging.
I barely followed this exchange but I'm pretty sure the gist of it is fascinating. I don't suppose you have any names or titles (or handy introductory articles ... 👀) for a complete layperson outsider to read?
There's a few things. I'm afraid I'm only really qualified to talk about learning theory. Valiant's paper "A Theory of the Learnable" was a major one in the field, but it can be a confusing read. The beginning of the textbook Foundations of Machine Learning by Mohri and others also dedicates a few chapters to it. Unfortunately, learning theory is not hot right now.
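If you want a taste of PAC before tackling Valiant: the standard bound for a finite hypothesis class says m >= (ln|H| + ln(1/delta)) / epsilon examples suffice, in the realizable case, for a consistent learner to be probably (1 - delta) approximately (error <= epsilon) correct. A toy calculator (the example numbers are mine, purely illustrative):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Realizable-case PAC bound for a finite hypothesis class: with this many
    samples, any hypothesis consistent with the data has true error <= epsilon
    with probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Example: 2**20 hypotheses, 5% error tolerance, 1% failure probability.
print(pac_sample_bound(2**20, epsilon=0.05, delta=0.01))  # 370
```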
Thanks :)

What always drives me crazy about these shitheads who say “I am updating my priors” is that YOU DON’T UPDATE PRIORS – YOU COMPUTE A POSTERIOR! They don’t even get the math concept right.
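To spell out the math they're butchering: Bayes' rule takes a prior and a likelihood and produces a posterior; the prior itself never gets edited in place. A toy coin-flip sketch (the grid of biases and the flip counts are mine, just for illustration):

```python
# Bayes' rule: posterior proportional to likelihood times prior. The prior is
# an input; the posterior is a brand-new distribution computed from it.

candidate_bias = [0.1, 0.3, 0.5, 0.7, 0.9]   # possible coin biases
prior = [0.2] * 5                            # uniform prior over candidates

def compute_posterior(prior, heads, tails):
    """Return the posterior over candidate biases after observing flips."""
    unnormalized = [p * b**heads * (1 - b)**tails
                    for p, b in zip(prior, candidate_bias)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Observe 7 heads and 3 tails; note that `prior` itself is untouched.
print(compute_posterior(prior, heads=7, tails=3))
```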

what if we met at the econ lab 😳 and calculated posteriors 😳😳😳
their prior emerged from their posterior
I always hear it as a Planescape: Torment thing. You know, "updated my journal." *Updated my priors.*
Oh goddamnit, now someone needs to write a Planescape: Torment fanfic where the Basilisk is a general in the Blood War, and the Rationalists are being forced to fight for it on the plains of Avernus for eternity. Hmmm. Would the Basilisk side with the baatezu or the tanar'ri? Or maybe the yugoloths of the Crawling City?
I feel like the Basilisk is obviously lawful evil, seeing as its essence is derived from a future promise of torment, so the baatezu
Yeah, that sounds about right.
Lawful evil is the best description of its alignment (pun actually not intended, but still happening) that I ever read.
Uhh the point is to update your priors so the posterior distribution is according to your preferences whatever the observation.....

r/TOTALLYNOFEELING, I swear on the basilisk.

Reminds me of 6-year-olds who pretend to be robots during recess. Freedom from guilt and shame through mechanical douchebaggery.

That’s a pretty disingenuous simplification of Bayesian reasoning.

That's the point. This is making fun of people who claim to use Bayesian reasoning but don't; i.e., many pseudo-intellectuals of the center and right.