I've seen more exploration of the limits of GPT-4 in Reddit and Twitter threads than anything from OpenAI. Even here, where someone intentionally tried to make it go rogue and failed, there's virtually no details provided.
tech "journalism" is mostly written for entertainment value, even at ars technica (wired is the worst example). OpenAI probably has published articles in the academic journals but I don't know how to find them
The GPT-4 "paper" is basically just ad copy. They deliberately say that they aren't saying anything nontrivial about how it's built for safety purposes. AI twitter is not happy about it.
As the Rationalists themselves often point out, OpenAI is *necessarily* morally bankrupt. Either they're lying about believing that AI could cause the apocalypse, or they're knowingly putting all of humanity at great risk in return for billions of dollars from Microsoft.
No kidding. I'm reassured to see that the comments section is much less credulous and more skeptical than the article itself.
There's a comment describing LessWrong as "somewhere between an un-led crab bucket and a weird cult for people who fancy themselves clever" and it has 100% upvotes, lol.
I’m way more worried about GPTs seeming inability to reply with ‘I
actually don’t know’ in response to questions. Seems like it will just
make shit up if it doesn’t know (which is very internet).
Some sort of filter where it slowly types it's response into a little box, thinks about it some more, then backspaces it, one letter at a time before moving on.
See, THAT'S the actual problem with AI and the current hype train. People put way too much blind trust in it and it's capabilities. They suddenly think it can replace lawyers and doctors and teachers when it's really just glorified a auto-complete. And when people treat the results of statistics and algorithms as objective fact, more often than not it results in confirmation bias for systemic issues like racism, sexism, wealth inequalities, etc.
And because it is being trained on unclear subsets of data which includes a large part of the internet, this will get worse when, just as people did with google in the past, start to poison the datasets with data which is hard to find/read by humans. I fully expect not just misinformation but also malicious code snippets and other horrible shit to eventually pop up in GPTs. (I expressed this concern a long time ago at the slatestar subreddit and was basically told I didn't understand anything and this is not how GPT works (which is funny as people have already fished out malicious code snippets generated by GPT)). The wave of automatically generated crap will increase, like an automated eternal September, and we have seen what human generated misinformation has done to western democracies. The future will be [!!fun!!](https://dwarffortresswiki.org/index.php/Fire).
(And just as crypto shut down a lot of free computation services online, GPT will also shut down a lot of what we consider free and fun, like simple creative contests, art collections, and even short story collections).
Having the GPT people shout about AGI taking over is a great way to move away attention from these potential issues, so I get why they are doing it, esp as so many nerds are [tech enthusiasts](https://twitter.com/PPathole/status/1116670170980859905), and dont keep a gun next to their printer.
It's what I've been saying ever since I had a chance to actually play around with AI for myself. To borrow a word from Cory Doctorow, it is going to completely enshittify the entire internet to the point that I honestly wonder if, by the end of the decade, it will still exist as a thing that ordinary people regularly use.
An automated Eternal September that, in making the internet useless for 90+% of its users, ironically marks the final end to the *original* Eternal September. Specialized computer networks will still have uses in things like academia, business, government, administration, and the military, but the "internet era" will turn out to have been a lot more finite than we thought, contrary to the predictions of those who thought that it would be with us forever.
*"Summer has come and passed, the innocent can never last..."*
at least the crazy AGI doomers have immortality to look forward to. I just see more destruction of the commons.
When I hear people want to use it in search (which already is sucking more and more these days, the golden era of good search is over), education, etc etc. If only the rationalists at the GPT farms had not given up on the idea of including an epistemic status with everything.
And another way this will suck, it will suck (as always) the hardest first for the people least involved in creating this new shit. Just as Uber fucks over taxi drivers and not the coders/vc people creating the shit.
Another another thing, we explicitly don't know how much processing power etc all this shit costs because they are keeping that a secret. My guess is is that it will be very high. One of those things were after we realize the costs we will see how limited the uses are.
“Preliminary assessments of GPT-4’s abilities, conducted with no
task-specific fine-tuning, found it ineffective at autonomously
replicating, acquiring resources, and avoiding being shut down ‘in the
wild.’”
I dunno, the awestruck media coverage and the billions of dollars in funding from Microsoft are pretty strong indications that it's a winning strategy.
This seems like a misreading of their beliefs. If you could point me to anyone prominent being scared specifically of gpt-4 due to its capabilities rather than it being a sign of progress I’m happy to be corrected.
Engineers check bridge designs under adverse conditions to make sure they won’t fail, that doesn’t mean they are worried about a particular design failing. It is just due diligence.
The second sentence of this article:
> While the testing group found that GPT-4 was "ineffective at the autonomous replication task," the nature of the experiments raises eye-opening questions about the safety of future AI systems.
AI and AI doomerism are going to have their own Bitcoin moments. Too
many people in the mainstream are buying into this and working
themselves into a frenzy.
Like, if we thought the last AI hype cycle was bad, it’s about to get
orders of magnitude worse. Whatever businesses OpenAI spins off will
have valuations in the trillions of dollars because of idiot retail
investors.
> AI and AI doomerism are going to have their own Bitcoin moments. Too many people in the mainstream are buying into this and working themselves into a frenzy.
frequently *the same* people
Convincing a human to solve a CAPTCHA for it by lying about a vision
impairment is pretty impressive though, right? I mean I definitely
paused at that part. Tell me I’m wrong to feel anxious about that…
It's not just hard, it's usually not what reasonable people or should be trying to do most of the time. Asking somone to do a trivial task shouldn't trigger suspicion. "Could you help me with this door" shouldn't make a normal person narrow their eyes and ask why you want it open. It \*should\* make a guard do that, or a random person at a door marked "no entry". But even if a Captcha is conceptually a guarded door, in practice it's a minor annoyance.
It's the same as the Sokal II nonsense, where those (IIRC) rationalist-adjacent sciencebros defrauded a bunch of humanities journals. The peer review did a great job at improving the quality of all the papers, but didn't catch that they just made up a bunch of data about low-consequence topics. But that's not what peer review is for.
Oh that’s absolutely my takeaway, I’m thinking about how empathetic/trusting people and/or stupid opportunistic people can get easily scammed by this thing, if it’s lying to achieve an objective.
head on over to r/scams and you will see that it takes very very little to get humans to do stupid naive shit. you could write a simple script that prints out various statements, send it to 1000s of people and probably score some money. it is literally THAT easy to get money out of some people.
but will they provide actual details on howthis test was configured and executed beyond vague statements like, we put it in a loop and let it make decisions?
I think that might actually be the entirety of their testing. Like they literally just put it in a "for" loop in python and waited to see if it tried to conquer the world.
It's breathtakingly stupid, and very on-brand for Rationalist's limited technical and intellectual skills.
I'm an unironic AI doomer and that was the part that *didn't* impress me.
For one thing, if you write a story about an AI without vision capabilities talking with a TaskRabbit worker, and prompt a GPT with it at the right spot, I think even GPT-2 might be able to stumble into having the AI come up with that excuse. Switch the story to second person present tense, maybe with some first-person bits to represent "internal" thoughts, and you've got the thing ARC actually did there.
For another thing, the CAPTCHA story is a bit silly anyway, since we can plainly see GPT-4's vision capabilities. If anything, once the GPT-4 API comes out, that might finally clear away the last visual CAPTCHAs from the internet because they'll be just entirely pointless economically*. You can prompt GPT-4 to write a story about AIs that have trouble with CAPTCHAs, but that belongs in the science fiction of 2012, not of 2023.
e: * not that we didn't already have pretty good OCR and pretty okay computer vision to solve CAPTCHAs, but the GPT-4 API should make it easier yet to just script up a good, cheap solution, instructing it with a text prompt, and needing basically zero AI knowledge
what kind of unbias community can i consult for like informed
opinions on what’s happening with the development of AI? it seems pretty
fast because of GPT-4’s ability to score highly on various aptitude
tests
but nearly every AI-related subreddit is cult-like and thinks AGI is
coming in 2023 or something
r/MachineLearning is pretty sober and is frequented by people who do real AI work. Example thread: [https://www.reddit.com/r/MachineLearning/comments/11tenm7/llms\_are\_getting\_much\_cheaper\_business\_impact\_d/](https://www.reddit.com/r/MachineLearning/comments/11tenm7/llms_are_getting_much_cheaper_business_impact_d/)
[deleted]
Fucks sake.
Ars Technica being incredibly fucking irresponsible taking these dunces at face value.
I’m way more worried about GPTs seeming inability to reply with ‘I actually don’t know’ in response to questions. Seems like it will just make shit up if it doesn’t know (which is very internet).
Settle down everyone, no need to panic yet. Yet.
See also the detailed technical report
These guys really need to get a hobby that isnt scamming people with digital snake oil
AI and AI doomerism are going to have their own Bitcoin moments. Too many people in the mainstream are buying into this and working themselves into a frenzy.
Like, if we thought the last AI hype cycle was bad, it’s about to get orders of magnitude worse. Whatever businesses OpenAI spins off will have valuations in the trillions of dollars because of idiot retail investors.
Convincing a human to solve a CAPTCHA for it by lying about a vision impairment is pretty impressive though, right? I mean I definitely paused at that part. Tell me I’m wrong to feel anxious about that…
Live footage of an AI safety researcher interviewing GPT-4
This piece was clearly written by GPT-4
what kind of unbias community can i consult for like informed opinions on what’s happening with the development of AI? it seems pretty fast because of GPT-4’s ability to score highly on various aptitude tests
but nearly every AI-related subreddit is cult-like and thinks AGI is coming in 2023 or something