• 0 Posts
  • 154 Comments
Joined 4 years ago
Cake day: October 2nd, 2020

  • excellent write-up with some high-quality referencing.

    minor quibble

    Firefox is insecure

    i’m not sure many people would disagree with you that FF is less secure than Chromium (hardly a surprise given the disparity in their budgets and resources)

    though i’m not sure it’s fair to say FF is insecure if by comparison we’re implying Chromium is secure? ofc Chromium is more secure than FF, as your reference shows.


    another minor quibble

    projects like linux-libre and Libreboot are worse for security than their counterparts (see coreboot)

    does this read like coreboot is proprietary? isn’t it GPL2? i might’ve misunderstood something.


    you make some great points about open vs closed source vs proprietary etc. again, it shouldn’t surprise us that many proprietary projects or Global500 funded opensource projects, with considerably greater access to resources, often arrive at more robust solutions.

    i definitely agree you made a good case for the currently available community privacy-enhanced versions based on open source projects from highly commercial entities (Chromium->Vanadium, Android/Pixel->GrapheneOS) etc. something worth noting here is that without these base projects actually being open source, i’m not sure eg. the graphene team would’ve been able to achieve their technical goals in the time they have, and legally they’d likely have had even less success.

    so in essence, in their current forms at least, we have to make some kind of compromise: choose the technically more robust option and blindly trust the upstream organisation’s (likely malicious) incentives. therefore, as you identify, the best answer is to privacy-enhance the project, which still involves semi-blind trust in the extent of the privacy-enhancement process. even assuming good faith from the organisation providing the enhancement, there is an implicit arms race: privacy-corroding features can be introduced at various layers and degrees of opacity, while an inevitably less-resourced team tries to counter them.

    is there some additional semi-blind ‘faith’ we’re also employing, where we assume the corporate entity currently has little financial incentive to undermine the open source base project because it can simply bolt on whatever nastiness it wants downstream? that’s probably not a bad assumption overall, though i often wonder how long it will remain the case.

    and ofc on the other hand, we have organisations whose motivation we supposedly trust (mostly…for now), but where we know we have to compromise on technical robustness. eg. while FF lags behind the latest hardening methods, it’s somewhat visible to the dedicated user where they stand from a technical perspective (it’s all documented, somewhere). so then the blind trust is in the purity of the organisation’s incentives, which is where i think the politically-motivated, wilfully-technically-ignorant mindset can sometimes step in. meanwhile mozilla’s credibility will likely continue to be gradually eroded, unless we as a community step up and fund them sufficiently. and even then, who knows.

    there’s certainly no clear single answer for every person’s use-case, and i think you did a great job delineating the different camps. just wanted to add some discussion. i doubt i’m as up to date on these facets as OP, so i welcome your thoughts.


    I’m sick of privacy being at odds with security

    fucking well said.


  • everyone in here gleefully shitting on op (in a rather unfriendly fashion btw)

    getting hung up on the 1:99 thing, when what they actually said was

    As long as the percentage is not 100%

    obviously i’m not saying op has presented firm evidence of the supernatural. but there’s an irony in supposedly espousing the scientific method while completely ignoring the critical part of op’s argument.

    who here is claiming to know that 100.000000% of all supernatural evidence is absolutely disproven? that would be an unscientific claim to make, so why imply it?

    is the remaining (100-x)% guaranteed “proof” of ghosts/aliens? imo no, but it isn’t unreasonable to consider that it may suggest something beyond our current reproducible measurement capacity (which has, eg. historically, often been filed under “ghosts”). therefore the ridicule in this thread, rather than friendly/educational discussion, is quite disappointing.

    it’s not exactly reasonable to assume we’re at the apex of human sensory capability; history is full of this kind of misplaced hubris.

    until the invention of the microscope, germs were just “vibes” and “spirits”



  • imo

    Main Points

    1. most people (including most men) do not actually give a fuck.

    2. a tiny insignificant group mumbling in a dark corner probably do care, but no one should give a shit or listen to them.

    3. instead, their voice is amplified in social/legacy media as a typical divide and conquer tactic (men vs women is ‘powerful’ as it’s half the planet vs the other half).

    4. unoriginal drones parrot those amplifications because they’ll get angry about whatever their screens tell them to this week.

    5. society has leaned male-dominant for too long, so genuine efforts to be fair are perceived by some idiots (see #2,#4) as “unfair”.

    6. corporations don’t actually give a shit about equality, so their maliciously half-arsed pretense at fairness rings hollow, adding more fuel to the flames.

    Bonus

    If you want to know more about this problem in general, see the Bechdel test; once you see it, you’ll spot it everywhere you go and can’t unsee it:

    The test asks whether a work features at least two female characters who have a conversation about something other than a man.
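
    just to make the criterion concrete, here’s a rough sketch of the test as a yes/no predicate. the Scene structure, field names, and sample data below are made up for illustration (not from any real library); a real checker would obviously have to work from scripts/subtitles:

    ```python
    # hypothetical sketch of the Bechdel test as a predicate -- Scene and its
    # fields are invented for illustration, not a real library's API
    from dataclasses import dataclass

    @dataclass
    class Scene:
        speakers: list[str]    # characters who talk to each other in this scene
        topic_is_a_man: bool   # does the conversation revolve around a man?

    def passes_bechdel(scenes: list[Scene], women: set[str]) -> bool:
        """True if at least one scene has two or more named women
        talking to each other about something other than a man."""
        return any(
            len(set(scene.speakers) & women) >= 2 and not scene.topic_is_a_man
            for scene in scenes
        )

    # usage with hypothetical data
    scenes = [
        Scene(speakers=["Alice", "Bob"], topic_is_a_man=False),
        Scene(speakers=["Alice", "Carol"], topic_is_a_man=True),
        Scene(speakers=["Alice", "Carol"], topic_is_a_man=False),
    ]
    print(passes_bechdel(scenes, women={"Alice", "Carol"}))  # True
    ```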



  • Glad to see everyone agrees this is

    1. funny cos they’re crying over stealing what they stole

    2. an acknowledgement that this means the weights are actually open sourced (which is how it fuckin should be)

    also discussion i’ve seen elsewhere:

    1. when considering the energy footprint of chatgpt, also consider the energy footprint of running the internet for 30 years to accumulate all that data they stole. therefore the most ecological option is to extract the weights and then opensource them.

    just want to add

    1. if the accusations aren’t true (still a possibility), oai is probably deliberately buying time/stock recovery by keeping this discussion in the news rather than everyone discussing how much they suck

    2. if large entities are going to capture and then open source each other’s proprietary weights, that may actually be one of the best outcomes for global humanity amidst this “AI” craze



  • I wonder if the context of ‘tech person’ vs average person is what they meant?

    A genx tech person in their field is going to be, on avg, further along than a millennial in the same field, because they’ve literally been doing it longer: more experience, more learning, more exposure to the fundamentals.

    imo the distinction is that the average (non-tech) genx will probably have less tech exposure than the avg millennial; millennials were coming up during the shift from the average person thinking “computers are for geeks” to “tech is cool”.

    disclaimer: generation names are kind of arbitrary divide and conquer bs anyway.