Deputy Travis Junior said:
riverrataggie said:
I don't mind a doctor using AI to verify what they are saying or speed things up if you will. But a doctor who thinks they will be replaced by AI isn't performing the duties of a doctor that I want in the first place.
You're underestimating AI and the size of the data pools it's analyzing. Some of its capacities dwarf those of human doctors: for example, AI can jointly analyze your MRI, clinical history, and genetic data to not just make a diagnosis but predict the medicines to which you'd be most responsive. This sort of deep analysis is far beyond unassisted humans.
But an underlying and fundamental problem is that you can train unintended, superfluous, or outright erroneous patterns into a model without ever knowing they were in the training set. Sometimes that's good because you pick up on new things you didn't know were there, but sometimes it's bad because it's something that shouldn't be there and leads to bad outputs. You can never just assume it is right.
Perfect example is image generation AI not being able to make analog clocks with accurate times, because the training sets for those image generators are largely composed of marketing pictures and descriptions scraped from the internet. Marketers long ago determined the optimal, most aesthetically pleasing hand position for clocks to sell more of them, and it's so ubiquitous that it is ingrained in the image generators. The dominance of that one specific time in the marketing photos went largely unnoticed, and the data set creators had no idea it was there, until it caused an issue and someone went digging into the nuances of the data.
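You can sketch that failure mode in a few lines. This is a toy illustration with made-up numbers (the 95/5 skew and the "model" are hypothetical, not how a real image generator works): if one pattern dominates the training data, a model that simply learns the majority behavior will reproduce it no matter what you ask for.

```python
import random
from collections import Counter

random.seed(0)

# Simulated training set: the time shown on each "clock" photo.
# Marketing shots overwhelmingly use one hand position, so the data is skewed.
training_times = ["10:10"] * 95 + [
    f"{random.randint(1, 12)}:{random.randint(0, 59):02d}" for _ in range(5)
]

# A naive "model" that just memorizes the dominant pattern in its data.
mode_time = Counter(training_times).most_common(1)[0][0]

def generate_clock(requested_time: str) -> str:
    # The request is ignored; the model falls back on what it saw most often.
    return mode_time

print(generate_clock("3:45"))  # prints "10:10", not the requested time
```

The point isn't the code itself but the shape of the problem: nothing in the training pipeline flags the skew, so the bias only surfaces when someone asks for a time the data rarely contains.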
While AI can do a lot of things, it is still subject to the fallibility of those who set it up. It requires A LOT of thought and consideration, and even more validation. I don't think doctors are going to be replaced anytime in the near future because of those limitations.