In shocking news, popular plausible BS machine ChatGPT gets medical diagnoses wrong more than half the time. But don’t worry — it’s very confident in them! A team at the University of Western Ontario…
The annoying bit is that CV and ML are genuinely useful (or could be, where they aren’t used yet) for improving the accuracy of doctors reading scans and making diagnoses in general: not as “the answer”, but as “have you considered…?”.
But bullshit like throwing data at an LLM is going to hurt investment in and adoption of the actually useful shit.
I vaguely recall hearing how Theranos’ fraud getting revealed set back the field of bloodwork a fair bit - seems we may be seeing history repeat itself.
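For what it’s worth, the “have you considered…?” pattern described above is essentially a second-reader tool: a task-specific classifier scores a scan, and anything above a threshold is surfaced to the clinician as a prompt to take another look, never as a diagnosis. Here’s a minimal sketch of that shape; the model is a stub, and the findings, scores, threshold, and file name are all made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    finding: str
    score: float  # model confidence in [0, 1]

def stub_model(scan_path: str) -> dict[str, float]:
    """Stand-in for a real CV model (e.g. a CNN over the scan pixels).
    Scores per finding are hard-coded here so the sketch actually runs."""
    return {
        "pulmonary nodule": 0.83,
        "pleural effusion": 0.12,
        "pneumothorax": 0.02,
    }

def second_reader(scan_path: str, threshold: float = 0.5) -> list[Suggestion]:
    """Flag only high-scoring findings as prompts for the radiologist.
    The tool never emits a diagnosis, just a nudge to look again."""
    scores = stub_model(scan_path)
    flagged = [Suggestion(f, s) for f, s in scores.items() if s >= threshold]
    return sorted(flagged, key=lambda s: s.score, reverse=True)

if __name__ == "__main__":
    for s in second_reader("chest_xray_0001.png"):
        # Phrased as a question for a human reader, not an answer.
        print(f"Have you considered {s.finding}? (model score {s.score:.2f})")
```

The point being that the output is an attention cue wired into the clinician’s existing workflow, which is a very different product from asking a general-purpose chatbot for a diagnosis.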