When you achieve Full Thetan Activation, the Future of Life Institute reveals to you that the asteroid that caused the Chicxulub impact was actually a superintelligent alien computer. Also that the Great Dying was a result of a Permian supercomputer designed to optimize for cool volcanoes.
> a vast, or even the vast, (at least remotely sensible) proportion of worries about how technology is going to affect jobs, which it is dismissing, constitute fears about layoffs and an inadequate pickup, which requires e.g. workers with lifetime skills to take significant cuts in pay and quality of life in order to avoid unemployment
You are completely right about the gap between what most people worry about when they worry about technological disruption of employment and all of these arguments about *long-term* technological unemployment, but I don’t really doubt that the scope of the tweet is meant to be the latter, because the whole discourse that these guys inhabit treats the former as a rounding error. I don’t think I can fairly blame this on “longtermism” because economists have been doing it longer than that has been a thing but obviously this kind of handwaving past the gory details of immediate impacts fits right in with “longtermism.”
> absolutely questionable consequences of capitalism
Socialist countries haven't really done much better at avoiding these consequences
And the main difference, I guess, is short- vs. long-run unemployment; the other dude's phrasing was just poor
I just assumed that your suggested alternative to capitalism would be socialism, because if not, what else is there? Anarchism or Communism (which itself is seen by some as requiring socialism as a stepping stone)?
I agree we should address the negative consequences of capitalism through government in a social-democratic system; I was merely pointing out that a socialist or primitivist society (or a socialist revolution) would be worse.
What are the other alternatives being suggested? Genuinely curious. The only alternatives I hear are socialism, some even more libertarian form of capitalism, and anarchism (left- or right-wing), which all seem less feasible, with socialism the most feasible of the lot
Well, if you're mentioning the problems of capitalism, we should probably address them, right? And if these problems are inherent to capitalism, then the options are either significant reform or socialism. After all, all these things (industrialization, development, etc.) have been fundamentally linked to capitalism (by you) too
I'm not saying that failed socialism means capitalism is problem-free, just that, in light of unfettered capitalism failing and even successful socialist states being eh, there may be a solution in terms of policy and regulation as opposed to a socialist revolution.
And to the first point, I'd say that was industrialization. Even socialist nations industrialized. Of course all of it is underpinned by human greed.
It doesn't have to be a revolution-- I just mean any socialism. It can be a peaceful revolution, or even everyone voting them in and peacefully handing assets over to the state. I only mention socialism as the only real alternative to capitalism, which you correctly critiqued
I do think there is a difference, because even socialist states industrialized, and industrialization and capitalism are both driven by human greed. If capitalism and industrialization are interchangeable, that leaves the USSR, China, etc. in odd spots
Amazing that you have produced more rational content here making a joke than in the tweet.
Are we sure this person isn’t chatgpt being prompted to pretend like it’s smart?
literally zero historical precedent for advances in technology putting people out of work. awesome. I wish I could get paid whatever this guy is getting paid to not understand anything
Eth is getting paid for these half baked takes that a 5 year old can see through.
The people who lose their jobs to technological advancement don’t always immediately find new/equivalent work. They aren’t guaranteed to be the people to step into positions opened up by technological advances.
This is going to happen over the coming decades as we approach AGI. Seems like a waste of time to pretend otherwise.
Given the retraining promises to the steel and auto industries that never happened, it's going to be very satisfying to hear about former coders at call centers and the drive through at McD's
Some unemployment ≠ mass unemployment. All the luddites who predicted mass unemployment in previous eras were invariably wrong, as new technologies created new jobs and augmented old ones, completely replacing only a select few. It's possible this will happen again.
you're right but there's one small issue: this guy thinks skynet is likelier than a higher unemployment rate so I don't have to interpret anything he says in good faith
> All the luddites who predicted mass unemployment from previous eras were inevitably wrong as new technologies created new jobs and augmented old ones
Okay, prove that.
I guess the Industrial Revolution is the best example. Around 90% of all labour (at least in some countries) was automated, and eventually many people left agricultural jobs for the cities, working in manufacturing, services, etc.
No, it is not a good example.
> [Did technological unemployment sweep across England in the wake of the British Industrial Revolution? We don’t know. The extent to which the new machines replaced workers, leaving them temporarily unemployed, has never been quantified. Recent scholarship refers to the technological unemployment which caused devastating short-term harm to workers during the Industrial Revolution (Frey 2019), while other studies doubt the scale of this innovation (Mokyr et al, 2015).](https://blogs.lse.ac.uk/economichistory/2022/06/01/technological-unemployment-in-victorian-britain/)
Oh I 100% agree there was unemployment, just that at the same time "new technologies created new jobs and augmented old ones" was always true. It's just short vs long run.
Effort should be focused on alleviating short-term unemployment (frictional, structural), retraining, etc. I think we broadly agree-- if anything, we can learn from the failures of the Industrial Revolution to ensure an equitable split of the gains from growth in both the short and long term
One of the things about these guys that makes them hard to take seriously is how absolutely unconcerned they are with looking like morons. If I thought that my cause was the only thing standing between humanity and extinction, I would put a lot more thought into making sure the optics of that cause were as clean as possible and no one speaking from a position of authority said this type of absolute moron shit.
… this is quite literally backwards? There are tons of historical examples of automation and mechanization destroying entire industries and leading to large scale unemployment. That it’s currently LLMs that we’re focusing on is a modern twist on a story that goes all the way back to the freaking Jacquard loom, if not earlier.
Which industries did automation and mechanisation destroy? Technology changes employment; it's reductive to say it causes large-scale unemployment when it also creates a whole host of new jobs
I would agree, but the main difference is that in this case we have no idea what kinds of jobs it might create, except prompt engineer (and only to some extent), while we know perfectly well which ones all these transformations can "displace". I don't think that was the case during the first or second industrial revolution. I may be speaking with hindsight here, but that's also why it can be frightening when you are exposed to such changes: the old tasks are a lot easier to imagine than the new ones. That is only natural, but still unsettling
I hope you are right, but I wouldn't be so sure, at least knowing the capacity for adaptation of most people I know. Hopefully I am wrong, but I don't understand the lack of understanding here
AI might simply change the tasks workers do; for instance, software engineers might verify and review code rather than writing it. Consider the relationship between bank tellers and ATMs: https://www.aei.org/economics/what-atms-bank-tellers-rise-robots-and-jobs/
Why? When a company writes code for a client, that client typically reviews the code of that company to verify that it meets their specifications. Why would AI be different in that regard?
> When a company writes code for a client, that client typically reviews the code of that company to verify that it meets their specifications.
Oh sweet summer child.
Yeah, I know this argument, which is some variation on the Jevons paradox, and it does convince me, but only to some extent
But:

1/ Software engineers, thanks to their knowledge of technology and, most importantly in the technocracy we live in, their overall social credibility, are not really the people I'm most concerned about here, at least in the short term, because of social inertia. A large portion of the economy isn't software engineers, and can also be automated, while it's still really hard to imagine what function those workers could move into; eventually, code reviews can largely be automated (to some extent) too. I agree that we're not quite there, but not that far off either.

2/ The push by the main AI firms to replace most parts of the industry is very real, and parts of the discourse and marketing (present, for example, in the white papers of OpenAI or Anthropic) are very real and one-sided.

3/ The "resolution" of this paradox comes with an energy cost which I think is bounded, and which would only be tolerated as long as it is economically "viable". The problem is the society in which it takes place and the asymmetry of access to energy. Induction from historical cases is, I think, not really the right approach here, given the radical difference in means and situation.

4/ Even if I agree with you that there is some space for possible transformations and adaptation, these must be thoroughly thought through and spread through the Zeitgeist without being dismissed as "luddite", considering the uncertainty facing most agents. I don't really believe this will solve itself automatically, or at least I don't expect it to solve itself if nobody thinks about it or if the problem is dismissed as secondary or unimportant. If new structures are needed to answer a problem, it is legitimate to think there is a need for people to tackle it.
I would gladly accept some reading on the interplay between the economy, technology (AI or not) and energy, though; I am in fact quite desperate for it, as I am really interested. So feel free to share if you have some valid objections or resources
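To make the Jevons-paradox point in the exchange above concrete, here is a minimal numeric sketch under a constant-elasticity demand assumption; the elasticity value, baseline cost, and size of the efficiency gain are all hypothetical illustrations, not figures from the thread:

```python
# Minimal sketch of the Jevons-paradox argument: make each task cheaper and,
# if demand is elastic enough, total resource use goes UP rather than down.
# All numbers here are hypothetical.

def total_resource_use(cost_per_task: float, elasticity: float, k: float = 100.0) -> float:
    """Total resource use under constant-elasticity demand.

    Tasks demanded: q = k * cost_per_task ** (-elasticity)
    Resource use is taken to be proportional to tasks * cost_per_task,
    treating per-task cost as a stand-in for per-task energy.
    """
    tasks = k * cost_per_task ** (-elasticity)
    return tasks * cost_per_task

baseline = total_resource_use(cost_per_task=1.0, elasticity=1.5)
after = total_resource_use(cost_per_task=0.5, elasticity=1.5)  # efficiency doubles

print(f"baseline: {baseline:.1f}, after efficiency gain: {after:.1f}")
# With elasticity > 1, halving per-task cost raises total use (100.0 -> ~141.4):
# the Jevons effect. With elasticity < 1, total use would fall instead.
```

Whether real-world demand for automated cognitive work is that elastic, and whether the energy supply tolerates it, is exactly the open question raised above.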
Sure, it’s totally reasonable to believe that AGI can wipe out the whole world’s armies using nanotech robots it designs using mail-order ingredients, but that it won’t obsolete millions of call center workers, office-workers and plumbers. Nothing inconsistent about this package of beliefs.
There have been some recent announcements around zero-shot learning for object manipulation. I don't think anyone should get comfy with the idea that "robots can't do things in the physical world."
You are all strawmanning his argument. We evolved, which means there was no intelligent life here before us, which means that it was extincted by AGI, which of course means the theory and experiment agree and we’re all doomed.
cf Humans killing the mammoth civilization (the mammoths had bred and taught the humans in the hopes that humans would simulate the mammoths in their minds and give the mammoth simulations eternal bliss, but it didn't work out that way). The mammoths needed Elimammoth Yudmastodowsky to solve the alignment problem first. A cautionary tale.
What the fuck does he mean by empirically here
Ah yes, technology driven automation has no historical record of causing layoffs, but we have empirical support of things that cause human extinction…
[deleted]
I swear, the anti-intellectualism
I eagerly await the empirical evidence
The claim that piss is stored in the balls, meanwhile, has stronger support - both theoretically and empirically.
Further proof that prestigious universities actively turn away intelligent people
I want to upvote thumis harder than a regular upvote
Eth is a premined scam. Also, our solar system will cease to exist after the sun dies
¯\_(ツ)_/¯
“Yes, yes, I’ve heard about the job displacement layoffs. But that’ll never work in theory.”
Science fiction isn’t the historical record, contrary to Galaxy Quest.
Let me guess, this man is not a historian.
Also, does he have an unfortunate last name, or did they go from cryptocurrencies to AGI?
My brain just assumed that by existential risk, he meant like existential dread (AI changing how we interpret life and such), because how the fuck do you go from claiming unemployment is unlikely to “MASS EXTINCTION”???
He works at the Future of Humanity Institute. Distinct from the FLI
I hope the Future of Life Institute hires a couple of AI bots. Smarter output plus Eth gets a taste of the empirical evidence he needs.
Good to know we’ll still be employed in the apocalypse
The proof is in the pudding: Daniel Eth already outsourced his job of posting pseudo-coherent tweets to GPT and retained his job.
He’s from the Future of Humanity Institute (FHI) at Oxford. Future of Life Institute is remote but HQ’d in Boston. If you’re going to sneer, come correct.
I was going to write a clever reply, but it’s not worth it. I’ll just sneer.
Oxford should be ashamed