So when, and I do mean when, this results in a crash, who will be held responsible?
Biden
Hillary with her butterymales?
Well, we have already had deaths due to the current crunch of US air traffic controllers (and not paying them).
https://www.cbc.ca/news/world/laguardia-collision-air-traffic-control-ntsb-9.7140479
And who was blamed for those? Oh yeah, the traffic controllers! So when Grok starts seeing how many 737s can fit in the same physical space, it will be the controllers' fault. As you can imagine, this will make those controllers want to quit, meaning more pressure to use shit like AI tools.
Liberals.
If it’s the ATC then it’s their fault, if it’s AI then it’s no one’s.
Obviously it’s the DEI
LOL. Fuck that. I’m not flying.
Forget flying, you’ll be getting Donnie Darkoed in your bedroom.
We don’t have enough air traffic controllers.
We use AI to reduce their workload. <---- We are here
We don’t need as many air traffic controllers.
We sack more air traffic controllers.
We don’t have enough air traffic controllers.
Powered by Grok?
My mistake, you’re absolutely right – I neglected to ensure the runway was clear before scheduling that landing. Please accept my apologies for causing those deaths. I’m really glad to be working with you, it’s reassuring that you’ll always keep me honest. You’re not just an assistant traffic controller – you’re a friend.
HAL-9000 if it was made today
Well, at least the AI seemed sincere in their apology.
Yet another reason not to go to the USA.
Well, once the mistakes start to pile up, I will probably get a lot less judgement from others about my apprehension about flying.
I tried to use AI to install a reverse osmosis water system yesterday. I asked it to look at the manual for hose colors so I could match them; I figured it would save me a few minutes.
After an hour of it not working and trying all sorts of nonsense, I looked in the manual myself, only to find it had given me all the wrong information for a simple task.
I can’t wait to have people’s lives reliant on this technology.
I just saw an ad for using ChatGPT to “come up with new recipes and baking ideas”
Yeah I’m sure having a bunch of people decide to eat whatever a hallucinating AI comes up with isn’t going to be dangerous at all…
I’ll look it up and try to find it, but I’m pretty sure there’s a YouTube video where they actually did ask ChatGPT to come up with new recipes and baking ideas, and then tried to make them, with the results you would expect.
Edit: OK, so it looks like there are a whole lot of YouTubers making AI recipes, with the expected results. So Google away.
We just need one rich asshole in a private jet to crash due to ATC failure for them to care.
a data analytics tool that will help advance the agency’s modernization objectives for aviation safety.
SMART will cost $12 billion, and will supposedly help flight controllers schedule flights weeks in advance to cut down on delays.
“This software will say, ‘well, listen, we can see this 45 days out. Let’s move some of those flights a little bit later, or five, seven, 10 minutes earlier, and we can resolve the issue. And so then you are not delayed,'” Duffy said.
Nothing in the facts as reported there suggests the use of language models, except for the editorialising in the summary about how LLMs hallucinate things, which makes me wonder how competent Futurism’s tech journalism is.
Let’s say the error rate is 0.1%. Pretty low, right? But that’s one mistake per thousand flights. Are they really okay with one plane out of a thousand potentially crashing? There are certain industries and jobs where AI simply cannot and should not be used.
Each day, about 100-120 people die in car crashes in America.
Over 45,000 planes fly in America every day, and over 5,000 are in the air at any given moment. With a crash rate of 1 out of a thousand, we’d be having dozens of plane crashes, with thousands of people killed, every day. A single crash could easily match or surpass that daily car crash number.
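A quick back-of-the-envelope check of that arithmetic (the 150-passengers-per-flight figure is my own assumed average, not from the thread):

```python
# Rough sketch of the crash-rate arithmetic above.
flights_per_day = 45_000      # approximate daily US flights, from the thread
error_rate = 0.001            # the hypothetical 1-in-1,000 failure rate
passengers_per_flight = 150   # assumed average occupancy

crashes_per_day = flights_per_day * error_rate
deaths_per_day = crashes_per_day * passengers_per_flight

print(crashes_per_day)  # 45.0 crashes per day
print(deaths_per_day)   # 6750.0 deaths per day
```

So even at a "low" 0.1% error rate, the expected death toll would dwarf the 100–120 daily car crash deaths mentioned above.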
1 out of a thousand? I’d never fly again. NOBODY would ever fly again.
The worst part would be that it doesn’t matter if you fly or not - as long as a plane can fly above you, you’re at risk. None of us are safe.
Normally, I would scoff at being worried about airborne debris, but if 1 out of 1000 were crashing, and there were 45k flights a day, that’s enough crashes to worry about.
The vast majority of those crashes would be around airports, though, so just keep away from the airports, and your chance of being clobbered by a black box goes down significantly.
It’s almost comical to think about major airports having a half dozen crashes a day. At least the AI won’t have any trouble sleeping at night.
Even further: the biggest problem with AI, and thus the biggest factor in whether it’s suitable for something, is that its failures are distributed uniformly across consequences, rather than being more likely in ways with few or less grievous consequences than in ways with worse ones.
In other words, unlike humans, who actively try to avoid making the nastiest and deadliest mistakes, when AI fails, it can fail just as easily in the most horrible and deadly ways as in the most minor ones.
That’s why you have lots of instances of LLMs giving what is, to humans, obviously dangerous advice, like telling people to put glue on pizza to make it look good, or telling those with suicidal thoughts to kill themselves. Unlike humans, AI has no mechanism to detect “obviously dangerous” in an output it’s about to produce and generate a different output instead.
This is why using AI to generate fluff filling for e-mails is fine, but it’s not fine in systems where errors can easily cost lives.
Sarcasm:
But think of the insurance people! Look at how many insurance claims are waiting to be denied and robbed!
More importantly, we can justify every other profit increase, because our economies are built on literal exploitation, just as they were a couple hundred years ago!
Modern exploiting problems require modern idol solutions.
Sadly, there is a part of the population that will view that as a valid argument. Faux News, Newsmax, OAN, and all the conservative talk radio will feed it to them.
when you have the pilot and microslop copilot:
for entertainment purposes only
Prompt unclear, plane stuck in skyscraper.
People just straight up believe AI is magic.
Will this affect my miles program? Anyways, I’m gearing the family up for the exciting trip of a lifetime. We are going to retrace a stretch of the Lewis & Clark trail for seven days. It will be in August along the Great Plains, with nothing but authentic gear of the time allowed. The kids should love it.