It’s not always easy to distinguish between existentialism and a bad mood.

  • 19 Posts
  • 600 Comments
Joined 2 years ago
Cake day: July 2nd, 2023



  • Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”

    Fun article.

    Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would come either through genetic engineering or by plugging “an electrode into the brain”.

    Occasionally I feel that Altman may be plugged into something that’s even dumber and more under the radar than vanilla rationalism.




  • Architeuthis to TechTakes · KeePassXC doubles down on AI use
    English · 6 points · 6 days ago

    I feel the devs should just ask the chatbot themselves before submitting, if they feel it helps; automating the procedure invites a slippery slope in an environment where doing it the wrong way is being pushed extremely strongly and executives’ careers are made on ‘I was the one who led AI adoption in company x (but left before any long term issues became apparent)’.

    Plus the fact that it’s always weirdos like the ‘hating AI is xenophobia’ person who are willing to go to bat for AI doesn’t inspire much confidence.





  • So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.

    Eh, local LLMs don’t really scale: you can’t do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren’t currently on work laptops and VMs.

    Spark-type machines will do better eventually, but for now they’re supposedly geared more towards training than inference; it says here that running a 70B model on one returns around one word per second (three tokens), which is a snail’s pace (rough ceiling sketch below).
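    For scale: single-stream decode on this class of hardware is roughly memory-bandwidth-bound, so a crude ceiling is bandwidth divided by model size. A minimal sketch, assuming a ~273 GB/s unified-memory figure for a Spark-class box (an assumption taken from the published spec, not a measurement) – the numbers are illustrative, not benchmarks:

    ```python
    # Back-of-envelope: each generated token streams (roughly) every weight
    # from memory once, so tokens/s <= bandwidth / model size in bytes.
    # All figures below are assumptions for illustration, not measured numbers.

    def decode_tokens_per_sec(params_billions: float,
                              bytes_per_param: float,
                              bandwidth_gb_s: float) -> float:
        """Upper bound on single-stream decode speed."""
        model_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB
        return bandwidth_gb_s / model_gb

    # Assumed ~273 GB/s for a Spark-class machine; 70B params at fp16 vs 4-bit.
    for label, bytes_per_param in [("fp16", 2.0), ("4-bit", 0.5)]:
        rate = decode_tokens_per_sec(70, bytes_per_param, 273)
        print(f"70B @ {label}: ceiling ~= {rate:.1f} tokens/s")
    ```

    Even the quantized ceiling is single-digit tokens per second, and concurrent users share the same memory bandwidth, which is the scaling problem above.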




  • What’s a government backstop, and does it happen often? It sounds like they’re asking for a preemptive bail-out.

    I checked the rest of Zitron’s feed before posting and it’s weirder in context:

    Interview:

    She also hinted at a role for the US government “to backstop the guarantee that allows the financing to happen”, but did not elaborate on how this would work.

    Later at the jobsite:

    I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word “backstop” and it muddled the point.

    She then proceeds to explain she just meant that the government ‘should play its part’.

    Zitron says she might have been testing the waters, or it’s just the cherry on top of an interview where she said plenty of bizarre shit.


  • it often obfuscates from the real problems that exist and are harming people now.

    I am firmly on the side of ‘it’s possible to pay attention to more than one problem at a time’, but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.

    Yudkowsky and his ilk are cranks.

    That ‘Yud is the Neil Breen of AI’ is the best thing ever written about rationalism in a YouTube comment.