AI Predicts Cancer Treatment Responses from Tumor Images
A.I. & Cancer treatment
"Since the algorithm uses spatial information, Beker doesn’t think ENLIGHT-PT is very useful for liquid tumors." [emphasis mine] insideprecisionmedicine.com...
“It took us some time to understand how a noisy inference could still produce such an accurate prediction of the treatment response.” Beker stated that they cannot fully explain what DeepPT is doing because they are not using explainable artificial intelligence, implying that DeepPT is somewhat of a black box. [same source]
Remember the buzzwords "fuzzy logic / math"? It seems that this is exactly where such techniques belong: the research lab, and there only. My biggest misgiving about the experimental mRNA COVID-19 vaccinations we gambled on during the pandemic is the old "letting the camel put his nose in the tent to keep it warm" slippery slope. Optimism is never a replacement for understanding.
🤔
Spark_Plug -
I have much more confidence in transcriptomic and imaging AI than in LLM-based AI such as ChatGPT, Gemini, Copilot, Apple Intelligence, etc.
AI is a marketing buzzword now, and means very different things in different contexts. You could call a spellchecker AI, even though that's hard-coded and has been around for three or four decades. Don't let some "AI" app on your cell phone or PC color your expectations of all things AI.
Transcriptomic AI can be verified: make the prediction, try the treatment or not, and measure the outcome. Rather than training on every transcriptome ever observed, training on selected transcriptomes whose outcomes are well known improves the model. There's no equivalent to an LLM misreading satire as fact. Similar transcriptomic AI is also used for custom drug design, and testing in vitro, in animals, and in humans validates it. This is a fairly mature technology.
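That verify-by-outcome loop can be sketched in a few lines. Everything below is illustrative only (synthetic expression data, a made-up gene count, a simple nearest-centroid model); it is not how DeepPT or ENLIGHT actually work. The point is just the workflow: train on patients with known outcomes, predict for held-out patients, then score the predictions against their real, later-observed outcomes.

```python
import numpy as np

# Illustrative sketch: 200 synthetic "patients" x 50 "genes".
# Assumption (not from any real dataset): responders over-express
# the first 5 genes relative to non-responders.
rng = np.random.default_rng(0)
n_patients, n_genes = 200, 50
X = rng.normal(0.0, 1.0, size=(n_patients, n_genes))
y = rng.integers(0, 2, size=n_patients)   # 1 = responded to treatment
X[y == 1, :5] += 2.0                      # signal genes for responders

# Hold out patients whose outcomes are only "observed" after predicting.
train, test = np.arange(0, 150), np.arange(150, 200)

# Nearest-centroid classifier fit on patients with known outcomes.
c0 = X[train][y[train] == 0].mean(axis=0)
c1 = X[train][y[train] == 1].mean(axis=0)

def predict(x):
    # Closer to the responder centroid -> predict "responds".
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

pred = np.array([predict(x) for x in X[test]])
accuracy = (pred == y[test]).mean()       # scored against real outcomes
print(f"held-out accuracy: {accuracy:.2f}")
```

The held-out accuracy is the verification step the post describes: unlike an LLM's output, the prediction has a measurable ground truth to be checked against.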
en.wikipedia.org/wiki/Trans...
Transcriptomics in general has been used in CLL for a decade:
pubmed.ncbi.nlm.nih.gov/?te...
The mRNA vaccines were not as experimental as many have tried to make them appear. They had been tested in animal studies for years with good results. Their clinical trials in humans validated their general safety. They saved millions of lives during a global crisis, and did not demonstrably harm a significant number of people. We now know the mechanisms of most of the harmful outcomes and who is vulnerable. Virtually all of the harms alleged to exist in mRNA vaccines also exist in actual infections, to a much greater degree. Counts of allegations are not counts of harm - that's a prime example of a Gish Gallop strategy.
en.wikipedia.org/wiki/Gish_...
Such strategies do not result in additional knowledge, truth, or safety. Many more people will die from disease due to the illusion of danger in vaccination, especially in the next pandemic.
=seymour=
The mRNA vaccine was an example (not a fear; I'm aware the technology has been around for decades). My concern is the precedent and the public perception: COVID-19 called for desperate measures, and, as you state, the mRNA work wasn't the shot in the dark many thought it was.
The worry is leveraging a success story like the mRNA vaccines, and in turn presenting other technologies as just as safe, but without the time and study behind them.
I'm not saying we need exhaustive knowledge of every single fact in place; heck, we don't understand the brain itself yet. My point, although maybe not stated clearly, was first that solid tumors and CLL are not equal. Second, that amazement at a "black box" technique is turning into endorsement and validation. This is neither fear-mongering nor irrational: look at the misuse of X-rays in their infancy. To this day, on this forum, how many times has radiation from scans come up in conversation, and how old is that technology?
I'm not saying don't run and explore DeepPT, or any other emerging technology with "unexplainable" "intelligence"; just keep it in its proper place until they can explain it.
The world economy is in the dumper, and too many are looking for the next economic boost to solve all the problems; that is just not the way the universe works. Every time society cuts a corner, there is a loss somewhere else.
Here is another use of AI to predict cancer treatment.
I wouldn’t want insurance companies to own or influence something like this, especially as the technology gets more widespread and powerful.
One bit at a time. It's nothing like the LLMs the public plays with; those are just toys.
nature.com/articles/s41598-...
In the field.
england.nhs.uk/long-read/ar...
They like it.
england.nhs.uk/2023/06/nhs-...
They really do like it.