• Blackmist@feddit.uk
    30 points · 2 days ago

    We don’t have enough air traffic controllers.

    We use AI to reduce their workload. <---- We are here

    We don’t need as many air traffic controllers.

    We sack more air traffic controllers.

    We don’t have enough air traffic controllers.

  • bearboiblake [he/him]@pawb.social
    45 points · 2 days ago (edited)

    My mistake, you’re absolutely right – I neglected to ensure the runway was clear before scheduling that landing. Please accept my apologies for causing those deaths. I’m really glad to be working with you, it’s reassuring that you’ll always keep me honest. You’re not just an assistant traffic controller – you’re a friend.

  • flop_leash_973@lemmy.world
    9 points · 2 days ago

    Well, once the mistakes start to pile up, I will probably get a lot less judgement from others about my apprehension about flying.

  • skozzii@lemmy.ca
    11 points · 2 days ago

    I tried to use AI to install a reverse osmosis water system yesterday. I asked it to look at the manual for the hose colors and match them up, figuring it would save me a few minutes.

    After an hour of it not working and trying all sorts of nonsense, I looked in the manual myself, only to find it had given me all the wrong information for a simple task.

    I can’t wait to have people’s lives reliant on this technology.

    • phx@lemmy.world
      3 points · 2 days ago

      I just saw an ad for using ChatGPT to “come up with new recipes and baking ideas”

      Yeah, I’m sure having a bunch of people decide to eat whatever a hallucinating AI comes up with isn’t going to be dangerous at all…

      • buddascrayon@lemmy.world
        2 points · 2 days ago (edited)

        I’ll look it up and try to find it, but I’m pretty sure there’s a YouTube video where they actually did ask ChatGPT to come up with new recipes and baking ideas, and then tried to make them, with the results you would expect.

        Edit: ok, so it looks like there are a whole lot of YouTubers making AI recipes, with the expected results. So Google away.

  • skisnow@lemmy.ca
    4 points · 1 day ago

    a data analytics tool that will help advance the agency’s modernization objectives for aviation safety.

    SMART will cost $12 billion, and will supposedly help flight controllers schedule flights weeks in advance to cut down on delays.

    “This software will say, ‘well, listen, we can see this 45 days out. Let’s move some of those flights a little bit later, or five, seven, 10 minutes earlier, and we can resolve the issue. And so then you are not delayed,'” Duffy said.

    Nothing in any of the facts as reported there suggests the use of language models, except for the editorialising in the summary about how LLMs hallucinate things, which makes me wonder how competent Futurism’s tech journalism is.

  • GreenBeanMachine@lemmy.world
    30 points · 2 days ago (edited)

    Let’s say the error rate is 0.1%. Pretty low, right? But that’s one mistake per thousand flights. Are they really okay with one plane out of a thousand potentially crashing? There are certain industries and jobs where AI simply cannot and should not be used.

    • BarneyPiccolo@lemmy.today
      10 points · 2 days ago (edited)

      Each day, about 100-120 people die in car crashes in America.

      Over 45,000 flights operate in America every day, and over 5,000 planes are in the air at any given moment. With a crash rate of 1 out of a thousand, that’s dozens of plane crashes a day, with thousands of people killed. A single plane crash could easily match or surpass that daily car crash number, and we’d be having many of them every day.

      1 out of a thousand? I’d never fly again. NOBODY would ever fly again.
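      The back-of-the-envelope numbers in this comment can be checked with a quick sketch (the passengers-per-flight figure is a rough assumption for illustration, not from the thread):

      ```python
      # Sanity-check the thread's arithmetic: ~45,000 US flights per day
      # and a hypothetical 0.1% per-flight failure rate.
      flights_per_day = 45_000
      failure_rate = 1 / 1_000         # the 0.1% rate from the parent comment
      passengers_per_flight = 100      # rough assumption for illustration

      crashes_per_day = flights_per_day * failure_rate
      deaths_per_day = crashes_per_day * passengers_per_flight

      print(crashes_per_day)  # 45.0 — dozens of crashes every single day
      print(deaths_per_day)   # 4500.0 — thousands of deaths per day
      ```

      Even with far fewer passengers per flight, the death toll would dwarf the daily car-crash figure cited above.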

      • Whats_your_reasoning@lemmy.world
        2 points · 2 days ago

        The worst part would be that it doesn’t matter if you fly or not - as long as a plane can fly above you, you’re at risk. None of us are safe.

        • BarneyPiccolo@lemmy.today
          1 point · 2 days ago

          Normally, I would scoff at being worried about airborne debris, but if 1 out of 1000 were crashing, and there were 45k flights a day, that’s enough crashes to worry about.

          The vast majority of those crashes would be around airports, though, so just keep away from the airports, and your chance of being clobbered by a black box goes down significantly.

          It’s almost comical to think about major airports having a half dozen crashes a day. At least the AI won’t have any trouble sleeping at night.

    • Aceticon@lemmy.dbzer0.com
      5 points · 2 days ago (edited)

      Even further: the biggest problem with AI, and thus the biggest factor in its suitability for a task, is that its failures are distributed uniformly with respect to consequence. It is no more likely to err in ways with few or less grievous consequences than in ways with more or worse ones.

      In other words, unlike humans, who actively try to avoid making the nastiest and deadliest mistakes, when AI fails it can fail just as easily in the most horrible and deadly ways as in the most minor of ways.

      That’s why there are so many instances of LLMs giving what is, to a human, obviously dangerous advice, like telling people to put glue on pizza to make it look good, or telling those with suicidal thoughts to kill themselves. Unlike a human, the AI has no mechanism for detecting “obviously dangerous” in an output it’s about to produce and generating a different output instead.

      This is why using AI to generate fluff filler for e-mails is fine, but it’s not fine in systems where errors can easily cost lives.

    • Napster153@lemmy.world
      7 points · 2 days ago

      Sarcasm:

      But think of the insurance people! Look at how many insurance claims are waiting to be denied and robbed!

      More importantly, we can justify every other profit increase, because our economies are built on literal exploitation, just as they were a couple hundred years ago!

      Modern exploitation problems require modern idol solutions.

      • Heikki2@lemmy.world
        2 points · 2 days ago

        Sadly, there is a part of the population that will view that as a valid argument. Faux News, Newsmax, OAN, and all of conservative talk radio will feed it to them.

  • 6stringringer@lemmy.zip
    3 points · 1 day ago

    Will this affect my miles program? Anyways, I’m gearing the family up for the exciting trip of a lifetime. We are going to reenact a stretch of the Lewis & Clark trail for seven days. It will be in August, along the Great Plains, with nothing but authentic gear of the era allowed. The kids should love it.