nah, this is ok: Roko (yes, the Roko) has never been good at doing the reading and it’s not clear he’s ever taken an idea in since reading Yudkowsky in 2008 or so
man, my opinion of Roko was so much higher before I found out anything about his opinions other than basilisks
Gwern is heralded (e.g. by Scott Alexander) as a research giant of the movement because he reads stuff much more than the rest of ’em, e.g. before he writes about it (a trick Scott A has notable trouble with)
Yeah Roko is such a dummy for not knowing what “crazy runtime meta-learning” (a very technical and precisely-defined piece of NLP jargon that definitely appears in the abstract of the GPT-3 paper) is supposed to mean.
I got banned from gwern’s sub for making some simple points about communism. “Lazy questions” almost always means “uncomfortable political questions”. They don’t like to be challenged in any way and they don’t want to have a conversation.
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
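For what it’s worth, the “tasks and few-shot demonstrations specified purely via text” bit is the whole trick, and it’s easy to picture in code: you just concatenate a few worked examples and a new query into one prompt and let the model continue it, no gradient updates anywhere. A minimal sketch — `build_prompt`, `demos`, and the Q/A format are illustrative choices, not anything specified in the paper:

```python
def build_prompt(demos, query, instruction=""):
    """Concatenate an optional instruction, K demonstrations, and a query
    into one text prompt; the model is expected to continue after the
    final 'A:'. No fine-tuning or gradient updates are involved."""
    lines = []
    if instruction:
        lines.append(instruction)
    for problem, answer in demos:
        lines.append(f"Q: {problem}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # generation picks up from here
    return "\n".join(lines)

# Few-shot 3-digit arithmetic, one of the tasks the abstract mentions.
demos = [("234 + 511", "745"), ("619 + 127", "746")]
prompt = build_prompt(demos, "345 + 422", instruction="Add the numbers.")
print(prompt)
```

The model call itself is omitted; the point is only that the “task specification” is nothing more exotic than this string.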
Still couldn’t tell you what runtime meta-learning is.
i’m gonna disregard everything else and say that it’s corny as hell to call GPT-3 ‘terrifying’
also i like the dude in the replies who’s sharing his cure for COVID: raising NAD+! it’s a cofactor that the anti-aging nerds are obsessed with. guess there really is one solution to everything
prompt: Harry Potter and the Methods of Rationality.
> The original story is only five hundred pages long, but it was written by James and William Shakespeare in 1811, as well as various other authors and thinkers including Jules Ferry, T.S. Eliot, G.K. Chesterton, and Voltaire. The book is about the evolution from childhood to adult rationality and the philosophical implications of this process (that is, why and how we became rational beings).
> The book was written for adults, which obviously has some restrictions. James and William Shakespeare were not very bright people, and they were trying to write a novel about something as old as the age of reason itself, so they thought that something had to be done to prevent future children from becoming intellectual "children of the age of reason" and losing their moral compass and the ability to be responsible for their actions.
> There are three key passages from the original story:
> Chapter One: "It is a strange dream."
> This is perhaps the most important passage in the story. It shows that the young Harry never had an "instinctive" aversion to reality, and that he became rational when he was in his early teens. But it also shows Harry's need for companionship with the "young and foolish," like his sister Hermione, and his desire to be friends with the magical boy. It was all very surreal
GPT-2 can already do a better HPMOR than Yudkowsky, you're goddamn right they find that terrifying
I know I’m a broken record about this, but the rationalists keep being the worst at communicating their own concepts to people.
You can’t even say that this person is a time-wasting troll; he clearly is actually interested in what gwern wants to tell people.
And this is the same type of person who hates the “it’s not my job to educate you” Twitter leftist…
Hey it’s Roko
lmao, what an unnecessary dickhead