It’s not always easy to distinguish between existentialism and a bad mood.

  • 13 Posts
  • 272 Comments
Joined 2 years ago
Cake day: July 2nd, 2023


  • Base open source model just means some company commanding a great deal of capital and compute made the weights public to fuck with LLMaaS providers it can’t directly compete with yet. It’s not some guy in a garage training and RLHF-ing models for months on end just to hand the result over to you to fine-tune for writing Ciaphas Cain fanfiction.


  • That’s some wildly disingenuous goalpost-moving when describing what was meant to be The Future of Finance™ at the time.

    Like saying yeah, AGI was a pipedream and there’s no disruption of technical professions to be seen anywhere, but you can’t deny LLMs made it way easier for bad actors to actively fuck with elections, and the people posting autogenerated YouTube slop 5,000 times a day sure did make some legitimate ad money.




  • Zero interest rate period, when the taps of investor money were wide open and spraying at full volume because literally any investment promising some sort of return was a better proposition than having your assets slowly diminished by e.g. inflation in the usually safe investment vehicles.

    Or something to that effect, I am not an economist.








  • 22-2 commentary

    I got a different solution than the one given on the site for the example data: the sequence starting with 2 did not yield the expected pattern at all, and the one I actually got gave more bananas anyway.

    The algorithm gave the correct result for the actual puzzle data though, so I’m leaving it well alone.

    Also, the problem had a strong map/reduce vibe, so I started out with the sequence generation and subsequent transformations parallelized already from Pt1, but ultimately it wasn’t that intensive a problem.

    Toddler’s sick (but getting better!) so I’ve been falling behind, oh well. Doubt I’ll be doing 24 & 25 on their release days either as the off-days and festivities start kicking in.
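    In outline, the part-2 approach is: evolve each buyer’s secret 2000 times, take the last digits as prices, and credit each 4-change window at its first occurrence per buyer. A minimal single-threaded sketch, not my parallelized version, assuming the standard day-22 mix/prune rules from the puzzle text:

```python
from collections import defaultdict

MOD = 16777216  # pruning modulus from the puzzle text, i.e. 2**24

def evolve(secret: int) -> int:
    """One step of the day-22 secret-number sequence (mix then prune, three times)."""
    secret = (secret ^ (secret * 64)) % MOD
    secret = (secret ^ (secret // 32)) % MOD
    secret = (secret ^ (secret * 2048)) % MOD
    return secret

def best_bananas(initials, steps=2000):
    """Total bananas from the best 4-price-change sequence across all buyers."""
    totals = defaultdict(int)
    for secret in initials:
        prices = [secret % 10]
        for _ in range(steps):
            secret = evolve(secret)
            prices.append(secret % 10)
        seen = set()
        for i in range(4, len(prices)):
            changes = tuple(prices[j] - prices[j - 1] for j in range(i - 3, i + 1))
            if changes not in seen:  # the monkey sells at the first occurrence only
                seen.add(changes)
                totals[changes] += prices[i]
    return max(totals.values())
```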




  • Slate Scott just wrote about a billion words of extra-rigorous prompt-anthropomorphizing fanfiction on the subject of the paper; he called the article When Claude Fights Back.

    Can’t help but wonder if he’s just a critihype-enabling useful idiot who refuses to know better, or if he’s being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is even meaningful.

    edit: The Claude syllogistic scratchpad also makes an appearance. It’s that thing where we pretend they have a module that gives you access to the LLM’s inner monologue, complete with privacy settings, instead of just recording the result of someone prompting a variation of “So what were you thinking when you wrote so and so, remember no one can read what you reply here”. Cue a bunch of people in the comments moving straight into wondering if Claude has qualia.




  • 16 commentary

    DFS (it’s all DFS all the time now, this is my life now, thanks AoC) pruned by unless-I-ever-passed-through-here-with-a-smaller-score-before worked well enough for Pt1. In Pt2, in order to get all the paths, I only had to loosen the filter by a) not pruning for equal scores and b) only pruning if the direction also matched.

    Pt2 was easier for me: while it took me a bit to land on lifting stuff from Dijkstra’s algo to solve the challenge maze before the sun turns supernova, I tend to store the paths for debugging anyway, so it was trivial to group them by score and count the distinct tiles.
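    The state-space idea boils down to something like this, a minimal Dijkstra-flavored sketch rather than my actual DFS, assuming the day-16 costs of 1 per step and 1000 per 90° turn, a wall-bordered grid, and a hypothetical `solve` helper name:

```python
import heapq

def solve(grid: str):
    """Search over (position, direction) states; return (best score, tiles on best paths)."""
    rows = grid.strip().splitlines()
    find = lambda ch: next((i, j) for i, r in enumerate(rows) for j, c in enumerate(r) if c == ch)
    start, end = find('S'), find('E')
    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N; the reindeer starts facing East
    best = {(start, 0): 0}   # (pos, dir) -> lowest score seen so far
    parents = {}             # (pos, dir) -> predecessor states on equally good paths
    pq = [(0, start, 0)]
    while pq:
        score, pos, d = heapq.heappop(pq)
        if score > best.get((pos, d), float('inf')):
            continue  # stale queue entry
        step = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
        for nscore, npos, nd in ((score + 1, step, d),
                                 (score + 1000, pos, (d + 1) % 4),
                                 (score + 1000, pos, (d - 1) % 4)):
            if rows[npos[0]][npos[1]] == '#':
                continue
            if nscore < best.get((npos, nd), float('inf')):
                best[(npos, nd)] = nscore
                parents[(npos, nd)] = {(pos, d)}
                heapq.heappush(pq, (nscore, npos, nd))
            elif nscore == best[(npos, nd)]:
                parents[(npos, nd)].add((pos, d))  # another equally good way in
    target = min(best.get((end, d), float('inf')) for d in range(4))
    # walk the predecessor sets back from every optimal end state
    stack = [(end, d) for d in range(4) if best.get((end, d)) == target]
    on_best = set(stack)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in on_best:
                on_best.add(p)
                stack.append(p)
    return target, len({pos for pos, _ in on_best})
```

    Keeping the predecessor sets around plays the same role as storing whole paths: grouping by score comes for free, and the distinct-tile count falls out of the back-walk.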


  • @Architeuthis to SneerClub · Casey Newton drinks the kool-aid · 24 days ago

    “And all that stuff just turned out to be true”

    Literally what stuff, that AI would get somewhat better as technology progresses?

    I seem to remember Yud specifically wasn’t that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.