The company I work for (we make scientific instruments mostly) has been pushing hard to get us to use AI literally anywhere we can. Every time you talk to IT about a project they come back with 10 proposals for how to add AI to it. It’s a nightmare.
I got an email from a supplier today that acknowledged that “76% of CFOs believe AI will be a game-changer, [but] 86% say it still hasn’t delivered meaningful value. The issue isn’t the technology, it’s the foundation it’s built on.”
Like, come on, no it isn’t. The technology is not ready for the kind of applications it’s being used for. It makes a half-decent search engine alternative, and if you’re OK with taking care not to trust every word it says, it can be quite good at identifying things from descriptions and finding obscure stuff… But otherwise, until the hallucination problem is solved, it’s just not ready for large-scale use.
I think you’re underselling it a bit though. It is far better than a modern search engine, although that is in part because of all of the SEO slop that Google has ingested. The fact that you need to think critically is not something new and it’s never going to go away either. If you were paying real-life human experts to answer your every question you would still need to think for yourself.
Still, I think the C-suite doesn’t really have a good grasp of the limits of LLMs. This could be partly because they themselves work a lot with words and visualization, areas where LLMs show promise. It’s much less useful if you’re in engineering, although I think ultimately AI will transform engineering too. It is of course annoying and potentially destructive that they’re trying to force-push it into areas where it’s not useful (yet).
Yeah, because the market is run by morons and all anyone wants to do is get the stock price up long enough to get a good bonus and cash out after the quarter. It’s pretty telling that these tools still haven’t generated a profit yet.
Then they can kick rocks. Anyone who claims you “need” to use the Bullshit Machine to achieve productivity is a moron who is setting themselves up to lose. If any interviewer tries to tell me this is required I’m picking up my stuff and walking out right then and there.
If nothing else, those people are outright admitting that they’re not offering stable employment, because the corporate dream is that these LLM schemes will let them eliminate all of their coders, tech writers, artists, and marketing departments. Not only is this anathema to anybody earning a living, it’s also mathematically impossible. So why would I even want to work for them in the first place?
When the inevitable collapse occurs, these idiots will have to pay the dwindling number of competent people left to come back and bail their stupid asses out, and that’s assuming any of us deign to do so.
I don’t use generative “AI” and I never have. Not even once. What I create is my own, I can understand and document all of it, and I can maintain it in perpetuity. Every pixel I’ve pushed, every line I’ve written. All of it, without exception. That’s not changing.
Every company I’ve interviewed with in the last year wants experience with these tools.