Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

the orange site decides that the reason why the output isn’t very interesting is because the AI isn’t a robot:

What we see from AI is what you get when you remove the “muscle module”, and directly apply the representations onto the paper. There’s no considering of how to fill in a pixel; there’s just a filling of the pixel directly from the latent space.

It’s intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. all claims to the contrary are paid conspirators in the pocket of Big Dick Blick

Of course, the AIs can’t wake up if we use that analogy. They are not capable of anything more than this state right now.

But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

only 10x lucid dreamers wake up after falling asleep

we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)
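what Wolfram is describing amounts to multiplying a trained network's weights by an ever-larger factor and watching the output degrade. a toy sketch of that idea in plain Python (the tiny two-layer "generator", its random weights, and the sizes here are hypothetical stand-ins, not Wolfram's actual model):

```python
# Toy sketch of the weight-scaling experiment: a tiny fixed "generator"
# network whose weights we multiply by a growing factor. Everything here
# (layer sizes, random weights, the latent vector) is made up for illustration.
import math
import random

random.seed(0)

# A tiny 2-layer generator: 4-dim latent vector -> 3 "pixel" values in (-1, 1).
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
W2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]

def generate(z, scale=1.0):
    # scale plays the role of "progressively increasing the weights"
    hidden = [math.tanh(scale * sum(w * x for w, x in zip(row, z))) for row in W1]
    return [math.tanh(scale * sum(w * h for w, h in zip(row, hidden))) for row in W2]

z = [0.5, -0.2, 0.1, 0.9]
for scale in (1.0, 2.0, 8.0, 64.0):
    print(scale, [round(v, 3) for v in generate(z, scale)])
# As the scale grows, tanh saturates and every output collapses toward ±1:
# the network's fine distinctions get blown out, which is the whole trick.
```

nothing "psychedelic" about it: you're just driving the activations into saturation.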

I wonder if there’s a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

I always find it interesting how a hero dose of LSD gives similar visuals to what these image AIs do to achieve a coherent image.

[more nonsense]

I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free association words into ChatGPT

  • Phil
    1 year ago

    Ah, that’s why you were asking about it?

    & yeah, a naïve digital physics model is going to run slap into issues with both Bohmian constraints on locality & the fact that it’s unlikely your lovely new model predicts anything that the current model, taken as-is with all the philosophical problems ignored (because they don’t actually matter), fails to predict. I would be very surprised if Wolfram isn’t aware of this.

    There are some intriguing hints of something deeper in the holographic principle, which suggests that everything that happens inside a bounded region of space can be inscribed on the surface of that volume: You could imagine some kind of process occurring on the surface of a volume that’s connected to the interior in ways that would be non-local from the interior’s POV, but might be entirely local on that surface.

    But none of this is developed anywhere near the point that you can wave at one of Wolfram’s favourite automata & go “well, if you run that on the surface of a volume then Quantum Physics & Gravity drop out naturally.” It’s all handwaving & “it might work like this” & a bunch of very theoretical physics that doesn’t, at the current time, actually correspond to real world physics very much, if at all.