Some machine learning isn’t inherently harmful. The issue is when techbros take it to the extremes they have with “AI” and cause the level of damage they’ve done: pushing the idea of replacing creatives with slop-generating hallucination engines, and compromising systems with inherently insecure “AI” tools. In most cases, a user isn’t even given the option to refuse these tools, since corps are desperate for people to use them. The stolen data used for training and the extreme power and water usage per prompt honestly disgust me in a visceral way. It’s not even profitable, which confuses me as to why they’d shill this tech swill so desperately.
Eh, Harper does seem to be powered by machine learning, but nothing inherently malicious. A lot of the rules seem to have been written by people with an actual understanding of grammar and spelling.