

Pessimistically I think this scourge will be with us for as long as there are people willing to put code “that-mostly-works” in production. It won’t be making decisions, but we’ll get a new faucet of poor code sludge to enjoy and repair.


In French, ChatGPT sounds like « Chatte, j’ai pété » meaning “Pussy, I farted”.


Maybe he doesn’t (can’t?) understand how much of Software Development is filled with time-wasting and pursuing dead-ends. I’m not sure if there’s a good analogy for a law practice; pouring hours into trying to apply a legal code that is no longer in force?
He does seem very sure of his own righteousness: the problem is “Developer Burnout”, not “Substandard Submissions”. What does he even envision when expressing a desire for more “Human-centric” development that incorporates “LLMs”? Is it just grandstanding word salad?


The power, of words:
Is all but naught, if not heard.
And a bot, cannot.


Of course! It’s to know less and less, until truly, the only thing they know is that they know nothing.


It’s clearly meant to mean /HalleluJah


To be fair though, it’s not just their brains turning to mush; Google has genuinely been getting worse too.


Ahh, the missing period: an even worse tone indicator than /hj (youtube).


I’ll gladly endorse most of what the author is saying.
This isn’t really a debate club, and I’m not really trying to change your mind. I will just end on a note that:
I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.
Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely (nothing so stark as inherent) that Turing Machines cannot. “Computable” in the essay means something specific.
Simulation != Simulacrum.
And because I can’t resist, I’ll just clarify that when I said:
Even if you (or anyone) can’t design a statistical test that can detect the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.
It means that the test does (or possibly can) exist; it’s just not achievable by humans. [Although I will also note that for methods that don’t rely on measuring the physical world (pseudo-random number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]
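As a concrete illustration of that bracketed point (my own sketch, not from the original discussion): the lowest bit of a classic power-of-two-modulus LCG alternates deterministically, so a trivial serial test that any human can write separates its output from a fair coin at a glance. The constants below are the well-known glibc-style ones; the `alternation_rate` helper is my own hypothetical name.

```python
def lcg_bits(seed: int, n: int) -> list[int]:
    """Lowest bit of a power-of-two-modulus LCG.

    With odd multiplier and odd increment, the low bit provably
    alternates 0,1,0,1,... (period 2), which no fair coin does.
    """
    bits, x = [], seed
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31
        bits.append(x & 1)
    return bits

def alternation_rate(bits: list[int]) -> float:
    """Fraction of adjacent pairs that differ; about 0.5 for a fair coin."""
    changes = sum(a != b for a, b in zip(bits, bits[1:]))
    return changes / (len(bits) - 1)

rate = alternation_rate(lcg_bits(seed=42, n=10_000))
print(rate)  # 1.0: every pair differs, instantly distinguishable from a coin
```

A real coin would land near 0.5 here; the LCG’s low bit sits at exactly 1.0, which is the sense in which human-designed tests are “more than adequate” against weak generators.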


Even if true, why couldn’t the electrochemical processes be simulated too?
But even if it is, it’s “just” a matter of scale.
I do know how to write a program that produces results indistinguishable from a real coin, for a simulation.
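A minimal sketch of what such a program could look like (my own example, not the commenter’s actual code): drawing flips from a cryptographically secure RNG, whose output is computationally indistinguishable from true randomness for any feasible statistical test.

```python
import secrets  # CSPRNG from the Python standard library

def flip_coin(n: int) -> list[str]:
    """Return n flips of a simulated fair coin as 'H'/'T'."""
    return ["H" if secrets.randbits(1) else "T" for _ in range(n)]

flips = flip_coin(20)
print("".join(flips))
```

The design choice matters: `secrets` rather than `random`, since the Mersenne Twister behind `random` is explicitly not suitable when indistinguishability is the whole point.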
As a summary,


Assuming they have any amount of good faith, I would offer the illustration that using AI is like the Dunning-Kruger effect on steroids. It’s especially dangerous when you think you know enough, but don’t know enough to know that you don’t.


That’s because there’s absolutely reams of writing out there about Sonnet 18—it could draw from thousands of student essays and cheap study guides, which allowed it to remain at least vaguely coherent. But when forced away from a topic for which it has ample data to plagiarize, the illusion disintegrates.
Indeed, any intelligence present is that of the pilfered commons, and that of the reader.
I had the same thought about the few occasions where LLMs appear to succeed at translation (where proper translation requires understanding): it’s not exactly doing nothing, but a lot of the work is done by the reader striving to make sense of what he reads. Because humans are clever, they can sometimes glimpse the meaning through the filter of AI mapping one set of words onto another, given enough context. (Until they really can’t, or the subtleties of language completely reverse the meaning when not handled with the proper care.)


TIHI
I reiterate the hope that AI slop will eventually push us, as a society, towards better sourcing of resources and articles going forwards, but yikes in the meantime.


On this topic, I’ve been seeing more 503s lately; are the servers running into issues, or am I getting caught in anti-scraper cross-fire?


Some changes to Advent of Code this year: it will only have 12 days of puzzles, and no longer has a global leaderboard, according to the FAQ:
Why did the number of days per event change?
It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
Scaling it down a bit rather than completely burning out is nice, I think.
What happened to the global leaderboard?
The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)
While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc?
If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.
Probably the most positive change here. It’s a bit of a shame we can’t have nice things, and there’s no real way to police stuff like people using AI for leaderboard times. Still, keeping the private leaderboards, for smaller groups of people that can set expectations, is unfortunately the only pragmatic thing to do.
Should I use AI to solve Advent of Code puzzles?
No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
It’s nice to know the creator (Eric Wastl) has a good head on his shoulders.


It’s also incredibly unimaginative to try to Frankenstein together the very same “remake” concept as one year ago.


Some juicy extracts:
Soon enough then the appointed day came to pass, that Mr. Assi began playing some of the town’s players, defeating them all without exception. Mr. Assi did sometimes let some of the youngest children take a piece or two, of his, and get very excited about that, but he did not go so far as to let them win. It wasn’t even so much that Mr. Assi had his pride, although he did, but that he also had his honesty; Mr. Assi would have felt bad about deceiving anyone in that way, even a child, almost as if children were people.
Yud: “Woe is me, a child who was lied to!”
Tessa sighed performatively. “It really is a classic midwit trap, Mr. Humman, to be smart enough to spout out words about possible complications, until you’ve counterargued any truth you don’t want to hear. But not smart enough to know how to think through those complications, and see how the unpleasant truth is true anyways, after all the realistic details are taken into account.” […] “Why, of course it’s the same,” said Mr. Humman. “You’d know that for yourself, if you were a top-tier chess-player. The thing you’re not realizing, young lady, is that no matter how many fancy words you use, they won’t be as complicated as real reality, which is infinitely complicated. And therefore, all these things you are saying, which are less than infinitely complicated, must be wrong.”
Your flaw, dear Yud, isn’t that your thoughts cannot out-compete the complexity of reality; it’s that yours is a new complexity untethered from the original. Retorts to your wild sci-fi speculations are treated as just minor complications brought by midwits, yet you very often get the science critically wrong, but expect to still be taken seriously! (One might say you share a lot with Humman, misquoting and misapplying “econ 101”.)
“Look, Mr. Humman. You may not be the best chess-player in the world, but you are above average. [… Blah blah IQ blah blah …] You ought to be smart enough to understand this idea.”
Funnily enough, the very best chess players, like Nakamura or Carlsen, will readily call themselves dumbasses outside of chess.
“Well, by coincidence, that is sort of the topic of the book I’m reading now,” said Tessa. “It’s about Artificial Intelligence – artificial super-intelligence, rather. The authors say that if anyone on Earth builds anything like that, everyone everywhere will die. All at the same time, they obviously mean. And that book is a few years old, now! I’m a little worried about all the things the news is saying, about AI and AI companies, and I think everyone else should be a little worried too.”
Of course this is a meandering plug for his book!
“The authors don’t mean it as a joke, and I don’t think everyone dying is actually funny,” said the woman, allowing just enough emotion into her voice to make it clear that the early death of her and her family and everyone she knew was not a socially acceptable thing to find funny. “Why is it obviously wrong?”
They aren’t laughing at everyone dying; they’re laughing at you. I would be more charitable with you if the religion you cultivate were not so dangerous; most of your anguish is self-inflicted.
“So there’s no sense in which you’re smarter than a squirrel?” she said. “Because by default, any vaguely plausible sequence of words that sounds like it can prove that machine superintelligence can’t possibly be smarter than a human, will prove too much, and will also argue that a human can’t be smarter than a squirrel.”
Importantly, you often portray ASI as being able to manipulate humans into doing any number of random shit, and you have an unhealthy association of intelligence with manipulation. I’m quite certain I couldn’t get a squirrel to do anything I wanted.
"You’re not worried about how an ASI […] beyond what humans have in the way of vision and hearing and spatial visualization of 3D rotating shapes.
Is that… an incel shape-rotator reference?
You do realize that—within reason, of course—you’re describing sealioning, one of the more toxic anti-social internet behaviours? [Not the worst exactly, but one where moderation often tarries much before taking action.]
I guess my P(Doom|Bathroom) should have been higher.