• 23 Posts
  • 64 Comments
Joined 6 months ago
Cake day: June 23rd, 2025


  • They could both be right… From a certain point of view.

    Within FAIR, LeCun has instead focused on developing world models that can truly plan and reason. Over the past year, though, Meta’s AI research groups have seen growing tension and mass layoffs as Zuckerberg has shifted the company’s AI strategy away from long-term research and toward the rapid deployment of commercial products.

    LeCun says current AI models are a dead end for progress. I think he’s correct.

    Zuckerberg appears to believe that long-term development of alternative models will be a bigger money drain than pushing current ones. I think he’s correct too.

    It looks like two guys arguing about which dead end to pursue.



  • Alex Karp thinks people only care about one kind of surveillance. And he thinks he will alleviate our fears if he gives us a pinky promise not to surveil us in that one way.

    That way is cheating.

    He later brings this up again, saying that most surveillance technology isn’t determining, “Am I shagging too many people on the side and lying to my partner?” Your guess is as good as any as to what that’s all about.

    Well, thanks for clearing that up, Alex. That was indeed my sole concern.

    (The rest of the article is full of indecipherable quotes from Alex, which demonstrates you don’t need to be smart to be rich.)



  • Incredible article. To start, they are only banning minors from the chats (although their other features are unpopular), and they are tapering children off by limiting chats to two hours for now.

    the company says it’s rolling out a new in-house “age assurance model” that classifies a user’s age based on the type of characters they choose to chat with…

    If nobody under 18 is allowed on the chats, why would you have characters geared towards minors?!

    Adults mistaken for minors can prove their age to the third-party verification site Persona

    The surveillance company. That’s great.

    And then there’s this quote from the guy who runs the suicide bot company:

    “When we started making the changes of under 18 experiences earlier in the year, our under 18 user base did shrink, because those users went into other platforms, which are not as safe,” Anand said.

    Give me a break.



  • This is good writing. Register-level, even.

    In promoting their developer registration program, Google purports:

    Our recent analysis found over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.

    We haven’t seen this recent analysis — or any other supporting evidence — but the “50 times” multiple does certainly sound like great cause for distress (even if it is a surprisingly round number). But given the recent news of “224 malicious apps removed from the Google Play Store after ad fraud campaign discovered”, we are left to wonder whether their energies might better be spent assessing and improving their own safeguards rather than casting vague disparagements against the software development communities that thrive outside their walled garden.


  • The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to “health and curing diseases” and “AI resilience” to counteract some of the risks presented by the deployment of AI.

    Paying yourself to promote your own product. Promising to fix vague “risks” that make the product sound more powerful than it is, with “fixes” that won’t be measurable.

    In other words, Sam is cutting a $25 billion check to himself.