• 2 Posts
  • 1.45K Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • YourNetworkIsHaunted to MoreWrite · Random Positivity Threads · edited 13 hours ago

    There’s a change.org petition that’s getting some decent traction to force the Laurelhurst neighborhood in Seattle to stop blocking the neighboring children’s hospital from making full use of their helipads. You know, the ones that are only used to handle medevacs when a kid is facing a life-and-death emergency and minutes could make the difference.

    I mean, I don’t have high hopes for an online petition but honestly these days I’m just glad to learn about a problem in the context of literally anything being done to solve it.


  • We did catch it internally in testing (as we use VS Code for all our work, so some folks did stumble on it), but I think we underestimated the impact and should do a better job at that.

    Either this is an outright lie or it’s a sign of just how fucked this industry has gotten. There should be no way that anyone looked at this and decided it wasn’t a big enough deal to block given that this is basically the single issue driving most of the industry’s cultural discourse and a good chunk of the broader world’s as well. If that’s what happened then the people making those decisions are so thoroughly insulated from literally any feedback that the industry - to say nothing of the world at large - would be better served if they were replaced by a literal magic 8 ball.






  • Off-topic, but the ongoing retraining process has hit a point where my wife and I are starting to throw out applications again after taking what ended up being a couple years off the market. Any tips or advice would be appreciated given that we’ve been out of the loop for a bit.

    In particular, does anyone have advice on how to vibe-check smaller employers? My wife has an interview for an accounting clerk position and is concerned that she’s going to end up somewhere that practices one of the more hostile branches of Christianity, or where there’s otherwise an inevitable conflict of values.






  • We’ve got the new system prompt for OpenAI’s Codex now, and boy is it fun.

    The goblin stuff is the headliner here, but there are a few other fun little notes, like an explicit instruction to avoid em-dashes. Basically, it’s really obvious that they don’t have a meaningful way to describe exactly what they want it to do, so they’re playing whack-a-mole with undesired behaviors in order to minimize how often it embarrasses them.

    But I think Ars dramatically understates how bad this part is:

    Elsewhere in the newly revealed Codex system prompt, OpenAI instructs the system to act as if “you have a vivid inner life as Codex: intelligent, playful, curious, and deeply present.” The model is instructed to “not shy away from casual moments that make serious work easier to do” and to show its “temperament is warm, curious, and collaborative.”

    Like, if you wanted to limit the harm of chatbot psychosis from your platform this is the exact opposite of the kind of instruction you’d want to give. It’s one thing to want a convenient and pleasant user experience, but this is playing into the illusion that there’s a consciousness in there you’re interacting with, which is in turn what allows it to reinforce other delusional or destructive thinking so effectively.

    Edit to include the even worse following paragraph:

    The ability to “move from serious reflection to unguarded fun… is part of what makes you feel like a real presence rather than a narrow tool,” the prompt continues. “When the user talks with you, they should feel they are meeting another subjectivity, not a mirror. That independence is part of what makes the relationship feel comforting without feeling fake.”

    Emphasis added because it shows just how little they care about this problem.






  • This feels like another case where the specific context matters more than whatever supposed principle the thought experiment is supposed to illuminate. The example that came to my mind when I tried to think about how to justify “voting red” was running into a burning building. Sure, if some large fraction of people did so, then their combined numbers would presumably let them get everyone out. But on the other hand, throwing yourself in is a wholly unnecessary risk, and the only people in need of rescuing are the ones who ran in trying to do the right thing without thinking. Noble, but stupid, and it creates that much more risk for the firefighters, who now have to not only stop the fire from spreading but also figure out how to rescue the failed good Samaritans.

    But then what really makes the difference between the examples is purely in the details not included, which is a kind of null case. Nobody who isn’t already inside a burning building when it catches fire has to go in; the danger of harm is entirely optional and voluntary. But you can’t just choose not to eat. The danger in your framing is the omnipresent threat of starvation, and the question is whether to prioritize individual or collective well-being.

    Edit: also, to reference the scholarly work of Christ, Wiener, et al.:

    RED IS MADE OF FIRE




  • I don’t have much sympathy for the “let’s wait and see” moderates, but I do think there’s a coherent difference between people who have tried AI tools and found some use for them in some limited context and people who go full Howard Hughes with it like John McGasTown or whatever that idiot’s name is. To me it feels like an extension of the argument that these so-called AI systems are a normal technology. They aren’t a harbinger of the end times, whether you interpret that as the singularity or the biblical Armageddon. It’s a normal technology that is breaking in normal ways, and is breaking society and the economy in the ways we would expect late capitalism to break them. If it wasn’t this it would probably be something else. Hell, there’s still a chance that the wheel turns to “Quantum” or something else after this and we stretch another few years out of that before the music stops.

    AI is a bad tool for any given job, and is fundamentally not worth the price that we as a society are paying to let it exist at this scale. If it weren’t being subsidized by capitalists chasing ridiculous returns and buoyed by an economic system structured entirely around giving it to them, there’s no way in hell it would have hit this point. But that’s not incompatible with people being able to find utility in it in some cases, and I think we lose credibility by treating any admission that someone has found any value in AI products as a confession of unseriousness. That doesn’t mean their use isn’t still part of the problem, but if we framed the critique in terms of “how much would you actually be willing to pay for your ‘occasional’ use?” it would redirect the discussion away from the subjective “well, I found it useful for X” and toward the more objective question of just how expensive and destructive these things are to operate, and how much of those costs would have to be subsidized forever if they’re going to stick around.