r/SneerClub archives

[deleted]

CS is the least science-y of sciences when you consider how hackily many of the things we rely on are implemented. What a misnomer. Source: I did a Bachelor's in CS, too LMAO
hey, Machine Learning counts as Science! and is definitely not just poorly-justified statistics
the virgin estimating y vs the Chad estimating beta
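For anyone who doesn't speak meme: it's the prediction-vs-inference split. A minimal sketch in Python, on made-up data, of "estimating y" (all that matters is predictive error) versus "estimating beta" (the coefficients themselves are the object of interest):

```python
import numpy as np

# made-up data: y depends linearly on two features plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=200)

# "estimating y": the ML framing -- only predictive error matters
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_pred = X @ beta_hat
print("mean squared prediction error:", np.mean((y - y_pred) ** 2))

# "estimating beta": the statistics framing -- the coefficients themselves
# (and, in a real analysis, their uncertainty) are the object of interest
print("estimated coefficients:", beta_hat)
print("true coefficients:     ", beta_true)
```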
Only filthy humans see patterns in noise! Algorithms? Nope nope nope!
It should probably be regrouped back into maths.
All the theoretical stuff in algorithms? Okay. Just remember that in no other engineering discipline will you find anyone advocating "move fast and break things" over "measure twice, cut once" in practice...
Algorithms themselves are probably the least math-y part of CS though. I mean, it's like saying "formulas" are math: algorithms are given in constructive proofs, but "data structures and algorithms" as taught in CS courses aren't math.
"Algorithms" is also the name of an active field of research which essentially counts as applied mathematics, in my opinion. The main venue for algorithms publications is the SODA conference. To give you an idea of the type of research, here are the recipients of the best paper awards in SODA 2018: [Approaching 3/2 for the s-t path TSP](https://arxiv.org/pdf/1707.03992.pdf) [Online bipartite matching with amortized O(log^2 n) replacements](https://arxiv.org/pdf/1707.06063.pdf) The main contributions in these papers have the form of theorems and their rigorous proofs. This makes these papers math, at least in my mind.
Word — I definitely wasn’t referring to the kind of stuff you’d find in a standard SWE interview/hazing ritual and more like the proof-based crap.
Look I have a theoretical degree in physics, could we not gatekeep science? ;)
Took me a second.
I think "scientist" refers to Christiano. He did do some quality science back when he was in academia.
>Paul just completed a PhD in the theory of computing group at UC Berkeley.

[https://www.fhi.ox.ac.uk/team/paul-christiano/](https://www.fhi.ox.ac.uk/team/paul-christiano/)

There's a list of publications on his website: [https://paulfchristiano.com/publications/](https://paulfchristiano.com/publications/)

It's easy to find this stuff on Google BTW

Finding the best ways to do good. Made possible by The Rockefeller Foundation.

¯\\\_(ツ)\_/¯

Future Perfect is their EA branch, yeah.

That being said, yeah, there are lots of scary things about AI. Notably “Wait, why are we letting a bunch of rich white guys code crime profiling and hiring software?”

[Automatic lights](https://www.youtube.com/watch?v=jqG1fX3ZaLQ)
lmao incredible
wow this tv series is now a decade old. We learned nothing! Nothing!
honestly that 50 second clip is a better summary of the problem with AI than the entire rationalist oeuvre
yes, we need intersectional coding bootcamps
👏Hire👏more👏female👏crime👏profiler👏coders👏
I, for one, am incredibly mad that their database of exploitable, identifying information (which they've pinky-sweared won't be used for no-good) doesn't include more marginalized people! We're trying to ~~predict behavior to sell products~~ solve problems here!
> That being said, yeah, there are lots of scary things about AI. Notably "Wait, why are we letting a bunch of rich white guys code crime profiling and hiring software?"

yep
It's the potential for automated weapons which really scares me.
not much scarier than human-operated weapons, frankly. i don't particularly care who pressed the button while i'm being blown up by a hellfire missile
https://en.wikipedia.org/wiki/Slaughterbots

> ...a 2017 arms-control advocacy video presenting a dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition to assassinate political opponents based on preprogrammed criteria. ...

> ### Feasibility
> Overall The Economist agreed that "slaughterbots" may become feasible in the foreseeable future: "In 2008, a spy drone that you could hold in the palm of your hand was an idea from science fiction. Such drones are now commonplace... When DCIST wraps up in 2022, the idea of Slaughterbots may seem a lot less fictional than it does now." The Economist is skeptical that arms control could prevent such a militarization of drone swarms: "As someone said of nuclear weapons after the first one was detonated, the only secret worth keeping is now out: the damn things work"
yeah, spooky, but not any different from a person piloting a drone with some thermite attached. the hardware's the scary part, not the "wOOoooOOoo it's controlled by a resnet" part
An autonomous drone is cheaper, more attentive, unlikely to identify with its targets, and will unquestioningly follow crazy policies like targeting ethnic characteristics.
good thing soldiers are expensive, always identify with their targets, and never target ethnic minorities

e: removed the article link, wasn't actually relevant on a reread
That argument would also apply to Zyklon B.
right. zyklon b is terrifying, and tiny drones with blobs of thermite attached are also terrifying. they're just not *more* terrifying for being "autonomous". they wouldn't last 2 days without human maintenance anyway.

i'm scared of AI because it concentrates wealth and disenfranchises people, and i'm scared of war because it kills people in horrible ways. but i'm not scared of war because there are some python scripts running a few of the weapons.

the thing about AI-warfare-scaremongering is that it's a magic eye trick. it's propaganda that:

- draws attention away from the very real systems of oppression that AI and tech in general are already responsible for (because racist algorithms are less scary than "killer robots")
- [justifies spending lots of money on "ethical" (read: expensive) warfare techniques, like fighter jets](https://www.nsfwcorp.com/dispatch/cheap-drones/)
>they're just not more terrifying for being "autonomous".

This is exactly why I think MIRI is a grift -- if they're truly scared shitless of evil AGI, you'd think they'd AT LEAST be eager to demonstrate via destructive testing what technological innovations from their research can do against the lesser human bad actors, in a way that infosec people won't scoff at. You'd also think that as a Jew and someone eager to look like he's made some kind of contribution to mankind, Yudkowsky would be more vigorous about keeping bad actor types from ever touching AI, erring on the side of, say, purging the Rationalist community of reactionaries. They sure as hell don't behave like people experiencing any kind of existential threat.
yeah lol. like publish some papers on minimizing collateral damage from a high-altitude EMP blast and then get back to me
No, I mean that argument would have suggested in 1941 that Zyklon B was no big deal because humans were already genociding other humans with guns and starvation.
i mean. that's true. most of the people killed in the holocaust were killed with guns and starvation. zyklon b was used for 1 million out of the 17 million murders performed by the german government. the thing to be scared of was the *system that produced the holocaust* -- which was, by the way, entirely run by regular people -- and not the particular knickknacks they used to do their evil deeds.
Those systems are already in operation, and we should struggle to stop them but it's unlikely to happen quickly. Efficiency gains in murder and oppression are terrifying in their own right, particularly large efficiency gains.
but. there aren't any. guns and starvation are pretty damn efficient already. now, if we were talking about nukes, i'd agree. or maybe bioengineered plagues. but face-recognition-driven thermite drones?? pretty specialized and inefficient. not really any efficiency gains over good old "lock people in an area without food and wait for them to die". or "shoot them with a bullet that you bought in bulk for 30 cents a pop"
Automating a process tends to result in massive efficiency gains, because you no longer have to persuade people to execute it for you. Military automation is likely to work the same way.
so explain to me what sort of "massive efficiency gains" are going to happen because of AI in the military. in the past year hundreds of thousands of people have died in Yemen, including [85,000 children](https://www.nytimes.com/2018/11/21/world/middleeast/yemen-famine-children.html). there's a bombing campaign there, but you know what's doing most of the killing? starvation. and all it took to keep food out of Yemen was a couple of sanctions imposed by the U.S. and Saudi Arabia. Hard to beat hundreds of thousands of deaths for the cost of a piece of paper + a couple of boats and planes sitting outside the border.
My fear is that drones are going to be used as flying landmines, which take off and fly towards human forms from hundreds of feet away, allowing sparse distribution; and as really stupid but perfectly dedicated police, enforcing simple policies like "no ," or "the people on this list must be killed." Blockades are only really effective after you've already wiped out your target's defenses. SA spent over $10B to get to that point. In a few years, that kind of money will get you enough drones to cause Yemenis to starve to death because they're afraid to go outside, with no infrastructure damage, no telegenic explosions, no soldiers wracked by guilt, and with a system whose owners can enable and disable it at will.
we already have those. they're called UAVs and/or hellfire missiles, and they work perfectly well human-operated.

> In a few years, that kind of money will get you enough drones to cause Yemenis to starve to death because they're afraid to go outside, with no infrastructure damage, no telegenic explosions, no soldiers wracked by guilt, and with a system whose owners can enable and disable it at will.

human-operated UAVs. you're describing human-operated UAVs.

also lmao at "telegenic explosions" and "soldiers wracked by guilt". Yemen's been happening for years -- since 2014 -- and people only really started giving a shit a few months ago.
The thing is you won't find, in the actually existing world, instances where low-level enforcers are doing things that they would refuse to do. But you can still acknowledge that the possibility of mutiny serves as a limit on what can be demanded of them.
soldiers mostly mutiny when you don't feed them, not when you tell them to kill people. also, your argument applies to the drone operators too
>soldiers mostly mutiny when you don't feed them, not when you tell them to kill people.

Mostly.

>also, your argument applies to the drone operators too

Yes, but there are fewer of them.
well, there's also the other side of the coin, don't forget: you aren't trying to kill all Yemenis, you're only trying to maintain your power. you don't want to kill them all, that wouldn't be pure efficiency. pure efficiency would be something like: kill 85% of them, capture 5% for medical testing keeping them in cages in deep underground laboratories, use 8% for blended-up human goo that you can reconstitute into anything like iron for cereal and bone marrow for cattle feed, and use 2% in a 24/7 AI Virtual Reality top secret mind control lab where you find their memories and make them relive them, track their biodata, keep them living in the matrix, introducing new stimuli and seeing what happens, etc.

**TLDR;** scientific testing with POWs would be near pure efficiency.
what the fuck are you talking about
Oh wow, that's a weird argument. They could have just used some other method of killing. Like, the whole technological-industrial-organizational apparatus maybe is the technology that enabled the madness, but no particular chemical was an important technological enabler.
Sure, it's not Zyklon B specifically, it's the capacity for efficiently killing people with chemical weapons.
Eh, you're kind of making a point but not really. E.g. consider Oskar Schindler. And he wasn't the only one working the other side. There is no robot Schindler. (Until the singularity I mean, then all AI does whatever it wants.)
This is the scary thing IMO. A system in which the wealthy and powerful pay people (soldiers, police, politicians) to maintain their position has an inherent degree of instability. Automating the monopoly on violence removes that step, and the potential consequences of wealth disparity for the rich could rapidly become unshackled.
*Smart* land mines.

This article isn’t even completely terrible; it does vaguely point in the direction of real problems that real scientists do real work on. That makes it better than all other articles related to MIRI & co.

Have you read it? It openly praises MIRI, Nate Soares, and AI apocalypse eschatology ("with a bang").
I did, and the absurd scaremongering is indeed detrimental. But unlike other articles, this one actually contains one and a half paragraphs of good and meaningful text:

>Human institutions are, already, better at maximizing easy-to-measure outcomes than hard-to-measure outcomes. It’s easier to increase standardized math test scores than it is to increase students’ actual math knowledge. It’s easier to cut reported robberies than it is to prevent actual robberies.
>
>Machine-learning algorithms share this flaw, and exaggerate it in some ways. They are incredibly good at figuring out through trial and error how to achieve a human-specified quantitative goal.

It's good that they're right on one point, however accidental that might be, since it beats being consistently wrong all the time.
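To make the quoted point concrete, here's a toy, entirely made-up Python sketch of that failure mode: a greedy optimizer that only ever sees an easy-to-measure proxy (test scores) will drive the proxy up while the hard-to-measure goal (actual knowledge) goes nowhere. The action names and numbers are invented purely for illustration.

```python
# Toy illustration of the quoted passage: an optimizer that is only shown an
# easy-to-measure proxy will improve the proxy while the real goal stagnates.
# The actions and numbers below are made up purely for illustration.

state = {"actual_knowledge": 0.0, "test_score": 0.0}

def apply_action(s, action):
    s = dict(s)
    if action == "teach":         # improves the real goal, shows up weakly in the proxy
        s["actual_knowledge"] += 1.0
        s["test_score"] += 0.3
    elif action == "drill_test":  # "teaching to the test": proxy up, real goal untouched
        s["test_score"] += 1.0
    return s

def proxy(s):
    return s["test_score"]        # the only signal the optimizer ever sees

# greedy trial-and-error on the proxy -- exactly the dynamic described above
for _ in range(20):
    best = max(["teach", "drill_test"], key=lambda a: proxy(apply_action(state, a)))
    state = apply_action(state, best)

print(state)  # {'actual_knowledge': 0.0, 'test_score': 20.0}
```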