AI Predicts Cancer Treatment Responses from Tumor Images
A.I. & Cancer treatment
"Since the algorithm uses spatial information, Beker doesn’t think ENLIGHT-PT is very useful for liquid tumors". [emphasis mine] insideprecisionmedicine.com...
“It took us some time to understand how a noisy inference could still produce such an accurate prediction of the treatment response.” “Beker stated that they cannot fully explain what DeepPT is doing because they are not using explainable artificial intelligence, implying that DeepPT is somewhat of a black box.” [same source]
Remember the buzzwords "fuzzy logic / math"? It seems that this is exactly where it belongs: the research lab, and only there. My biggest misgiving about the experimental mRNA Covid-19 vaccinations we gambled on during the pandemic is the old "letting the camel put his nose in the tent to keep it warm" slippery slope. Optimism is never a replacement for understanding.
🤔
Spark_Plug -
I have much more confidence in transcriptomic and imaging AI than in LLM-based AI such as ChatGPT, Gemini, Copilot, Apple Intelligence, etc.
AI is a marketing buzzword now, and means very different things in different contexts. You could call a spellchecker function AI, even though it's hard-coded and has been around for three or four decades. Don't let some "AI" app on your cell phone or PC color your expectations of all things AI.
Transcriptomic AI can be verified - make the prediction, try the treatment or not, and measure the outcome. Instead of training on all transcriptomes observed, training on selected transcriptomes whose outcomes are well known sharpens the model. There's no equivalent to LLM AI misreading satire as fact. Similar transcriptomic AI is also used for custom drug design, and tests in vitro, in animals, and in humans validate it. This is a fairly mature technology.
en.wikipedia.org/wiki/Trans...
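To make that verification loop concrete, here's a rough Python sketch - purely illustrative, with synthetic numbers and made-up names, not anything from an actual transcriptomic model:

```python
# Illustrative only: a made-up verification loop for a response predictor.
# Nothing here is from DeepPT/ENLIGHT-PT; the data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in model output: predicted probability of response for 200 patients.
predicted_prob = rng.uniform(0.0, 1.0, size=200)

# Stand-in measured outcomes (1 = responded), loosely tracking the
# predictions so the sketch has something to score.
observed = (predicted_prob + rng.normal(0.0, 0.3, size=200) > 0.5).astype(int)

predicted = (predicted_prob > 0.5).astype(int)

# Ordinary, auditable bookkeeping: every prediction is scored against
# a measured outcome.
tp = int(np.sum((predicted == 1) & (observed == 1)))
fp = int(np.sum((predicted == 1) & (observed == 0)))
fn = int(np.sum((predicted == 0) & (observed == 1)))
tn = int(np.sum((predicted == 0) & (observed == 0)))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```

That's the whole point: each prediction either matches the measured outcome or it doesn't, and the score is there for anyone to audit.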
Transcriptomics in general has been used in CLL for a decade:
pubmed.ncbi.nlm.nih.gov/?te...
The mRNA vaccines were not as experimental as many have tried to make them appear. They had been tried and observed in animal studies for years with good effects. Their ultimate clinical trials in humans validated their general safety. They saved millions of lives during a global crisis, and did not demonstrably harm a significant number of people. We know the mechanisms of most of the harmful outcomes and who is vulnerable now. Virtually all of the harms alleged to exist in mRNA vaccines exist also in actual infections to a much greater degree. Counts of allegations are not counts of harm - that's a prime example of a Gish Gallop strategy.
en.wikipedia.org/wiki/Gish_...
Such strategies do not result in additional knowledge, truth, or safety. Many more people will die from disease due to the illusion of danger in vaccination, especially in the next pandemic.
=seymour=
The mRNA vaccine was an example (it's not a fear; I'm aware the technology has been around for decades). It's about precedent and public perception: Covid-19 called for desperate measures, and as you state, mRNA work wasn't the shot in the dark many thought it was.
My worry is leveraging a good case like the mRNA vaccines' success, and in turn presenting other technologies as presumably just as safe, but without the time and study behind them.
I'm not saying we have to have idealistic knowledge of every single fact in place; heck, we don't understand the brain itself yet. My point, though maybe not stated well, was first that solid tumors and CLL are not equal, and second that amazement at a "black box" technique shouldn't turn into endorsement/validation. That's not fear-mongering or irrational - look at the misuse of X-rays in their infancy. To this day, on this forum, how many times has radiation from scans been a factor in conversation, and how old is that technology?
I'm not saying don't run and explore DeepPT, or any other emerging technology with "unexplainable" "intelligence"; just keep it in its proper place until they can explain it.
The world economy is in the dumper, and too many are looking for the next economic boost to solve all the problems; that is just not the way the universe works. Every time society cuts a corner, there is a loss somewhere else.
Spark_Plug -
I don't see this as much of a black box as text-based (LLM) AI. The training data is the DNA and RNA sequences of actual unusual tissue samples from actual patients, and the images of slides that match them. It's verifiable stuff: take an unknown image from a tissue slide, see what the algorithm predicts the mRNA will be, unblind the original sequence, and compare. The big issue is in setting our acceptable limits of accuracy.
We still have the older method of prescribing drugs based on guidelines to compare with, and we can ask in a large study, "Do more people get better outcomes using this algorithm than with treatment by the original guidelines?" Compare and count the adverse events (AEs).
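As a rough illustration of that blinded comparison - synthetic data and a hypothetical threshold, not DeepPT's actual pipeline:

```python
# Illustrative only: score predicted expression profiles against the
# unblinded, measured sequences, per sample. Data and threshold are made up.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_samples, n_genes = 50, 1000

# Unblinded ground truth: measured expression for each held-out sample.
measured = rng.normal(size=(n_samples, n_genes))
# Stand-in model output: the truth plus noise, so agreement is imperfect.
predicted = measured + rng.normal(0.0, 0.8, size=measured.shape)

# The "acceptable limit of accuracy" is a policy choice, not a technical one.
ACCEPTABLE_RHO = 0.5

rhos = []
for i in range(n_samples):
    rho, _ = spearmanr(predicted[i], measured[i])  # rank correlation per sample
    rhos.append(rho)

passing = sum(r >= ACCEPTABLE_RHO for r in rhos)
print(f"median rho = {np.median(rhos):.2f}; "
      f"{passing}/{n_samples} samples meet the threshold")
```

Where you set that threshold decides whether the model is "good enough" - a clinical and regulatory question, not a software one.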
We should certainly not blindly worship at the altar of AI. It's marketing hype. Everyone should make fun of it. It's like calling something "turbo." Maybe even worse.
We should not use Google Gemini, Microsoft Copilot, or Apple Intelligence for anything that matters. At least Copilot provides some references - read them and judge how well it did. Ask it something you know a lot about, and prepare to be disappointed. It's good entertainment for the elderly.
I think biochemical engineers are less trusting than social science scholars or non-engineers who use large language models to grab a headline. I think AI imaging apps in medicine have a good track record so far. But will the execs controlling software purchases fall for a hyped application that lacks actual verification of its training data? Will they fall for other promises the software makes, unrelated to the imaging aspect?
A recent Health Care Management class that I attended had a module on AI. The biggest win so far has been in speech-to-text transcription of doctors' notes - while still forcing doctors to take responsibility for the result. It's had failures in diagnosing some things, where it's clear the result must be reviewed. It's done well in radiological image analysis. I do see a trend toward excuses: "That's what the system said to do, so we did it." I don't see that as much different from doctors deciding things with far too little info, prescribing antibiotics based on guesses, or ordering tests that are not often needed.
=seymour=
All good points, Seymour, and if it were you making the decisions I'd rest easy.
I guess I got a little jittery when a site called Inside Precision Medicine has an author editorializing on a quote, framing an as-yet-unexplained technique as a black box.
You're absolutely right, it's a convergence of hype. 🙂
Here is another use of AI to predict cancer treatment response.
I wouldn't want insurance companies to own or influence something like this, especially as the technology gets more widespread and powerful.
One bit at a time. It's nothing like the LLMs that the public plays with; those are just toys.
nature.com/articles/s41598-...
In the field.
england.nhs.uk/long-read/ar...
They like it.
england.nhs.uk/2023/06/nhs-...
They really do like it.