I saw your reply to JolleySprings. Is there something significant about the 20 month time frame (having PSA under 0.10)?
I have 16 months of undetectable PSA, with about five more months of treatment with Keytruda and Lupron. That will finish up my two years of treatment. I am wondering what happens next.
Written by tennis8285
I didn't say anything about 20 months of undetectable PSA so I'm not sure what you are asking about. I only addressed the futility of treating individual lymph nodes. Your situation is entirely different from JolleySpring's so I don't understand why you are even asking about it.
In your case, you will probably continue on Lupron, perhaps intermittently.
Tall_Allen, it was Nalakrats that replied to JolleySprings. However, I also wanted your opinion because of your expertise. If this is confusing, don't worry about it; I just wanted to know if 20 months of undetectable PSA is a significant milestone or just a guideline for taking a vacation from ADT. Thanks, tennis8285
It is not a milestone. It is the time they are giving you Keytruda. There is no evidence that giving it to you longer has any benefit. I assume they will continue with Lupron alone or give you a vacation to see if PSA increases. Your doctor is following a protocol for MSI-hi patients. You should not depart from it or take supplements that may interfere with immune killing of PCa cells.
Thanks for the info. So that explains why they only plan to administer Keytruda for two years. What supplements are you referring to that may hinder immune killing of PCa cells?
Hello Tall_Allen! I’ve been reading a few articles on the possible effects that taking various antioxidants could have on the killing of cancer cells, and I remain unclear. I do take a number of them, mostly for cardiovascular health. Are you saying the antioxidants could be inhibiting my immune system from killing cancer cells, or at least suppressing it, given that my only treatment is ADT (45 mg of Eligard biannually plus 50 mg of Casodex daily), which has reduced and kept my PSA at <0.02 for the last 36 months? With Gleason 4+5 dx in Oct 2017 and RP Dec 2017 (one seminal vesicle invasion, 11 negative lymph nodes, and negative margins), I did no radiological treatment and started ADT about a year after RP, after PSA had moved over 9 months from <0.04 to 0.1. The ADT has kept the PSA at 0.02 since, which I assume will not last forever. But, again, if I may: are you saying that daily ingestion of antioxidants could hasten BCR?
I don't have it on hand now, but I believe your newsletter that warned against antioxidant use concerned their effect of deceptively masking PSA rises. Is there really evidence of antioxidants interfering with immune killing of prostate cancer cells?
You are confusing two different things. One is about supplements that interfere with PSA tests. The other is avoiding supplements that interfere with ROS killing by radiation, chemo, and the immune system.
Thanks for the clarification, TA. Is there any study or list pointing to which supplements interfere with the immune system, ROS killing, etc.? I have followed your newsletter re interference with PSA tests, which means stopping the respective supplements 1-7 days before a test, after which they are no longer present in blood.
I fear the immune interference may mean more permanent interference.
Anything advertised as an antioxidant. Especially Vitamin E. That was the huge learning from the SELECT trial. Vitamin E seemed to be beneficial in small trials, the kind of trials often posted here and on the internet to justify supplements. When Vitamin E was given in a large randomized trial, they found the opposite: it was a contributing factor to prostate cancer. That's because our bodies create "reactive oxygen species" (ROS) to destroy cells that have something wrong with them. Cells with "wrong" DNA self-destruct (apoptosis) using ROS. Killer T cells use ROS to kill "non-self" entities in our bodies. ROS is part of the mechanism of why radiation and chemotherapy work so well. OTOH, too much ROS can cause mistakes in DNA replication. Our bodies maintain a careful balance to keep everything working smoothly. My guess is that our microbiomes maintain their balance of bacteria with ROS too. Interfering with that balance by overloading on antioxidant supplements can make things worse. If antioxidants enter the body with food, our bodies can take what they need and excrete the excess.
Our biochemistry is amazingly complex and was built over millions of years of evolution. We have to have some humility in the face of that, and acknowledge that we have no idea what we're doing to ourselves when we take supplements. Only randomized clinical trials can tell us if, on balance, a drug (and supplements are drugs) is harmful or helpful.
Indeed, high-dose Vitamin E and beta-carotene have those effects. For the supplements I take that have some antioxidant effect (curcumin, zinc, boron, garlic, green tea, quercetin), there is some clinical evidence of positive effects and much preclinical evidence, but, I believe, not the large RCTs. I don't know how unique the SELECT trial was as an RCT overturning smaller positive studies.
"there is some clinical evidence of positive effects" No, there has never been a single prospective clinical trial for any of those except EGCG. There have only been retrospective case-control studies and lab studies, which are worthless as proof (although they may be good for screening drugs OUT). You are taking drugs that have never been proven safe or effective in prospective clinical trials. They may be harmless or dangerous. There may be interactions with other drugs and with tests.
Thank you, TA, this is certainly food for thought. From the EGCG study: "(PolyE), a proprietary mixture of GTCs, containing 400 mg (−)-epigallocatechin-3-gallate (EGCG) per day, in 97 men with high-grade prostatic intraepithelial neoplasia (HGPIN) and/or atypical small acinar proliferation (ASAP). The primary study endpoint was a comparison of the cumulative one-year prostate cancer rates on the two study arms. No differences in the number of prostate cancer cases were observed: 5 of 49 (PolyE) versus 9 of 48 (placebo), P = 0.25"
As far as I can see, nothing in the following text contradicted this.
To me, 5 out of 49 vs 9 out of 48 seems a fairly positive outcome for EGCG but apparently not statistically conclusive.
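For anyone who wants to check the arithmetic on that 2x2 table, here is a sketch of a one-sided Fisher-style exact computation using only Python's standard library. The paper does not say which test produced its P = 0.25, so the value printed here is illustrative and need not match the published figure exactly:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for a 2x2 table: the probability,
    with all margins fixed, of seeing a or fewer events in group 1."""
    n = a + b + c + d          # total subjects (97)
    row1 = a + b               # size of group 1 (49)
    col1 = a + c               # total events across both groups (14)
    denom = comb(n, row1)
    # Sum hypergeometric P(X = k) for k = 0 .. a
    return sum(comb(col1, k) * comb(n - col1, row1 - k)
               for k in range(a + 1)) / denom

# EGCG trial: 5 events among 49 (PolyE) vs 9 events among 48 (placebo)
p = fisher_exact_one_sided(5, 44, 9, 39)
print(round(p, 3))
```

Whatever the exact figure, it lands well above the conventional 0.05 cutoff, which is why the authors reported no difference.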
I draw a different conclusion. "5 of 49 with EGCG vs 9 of 48 placebo, P = 0.25". I understand this to mean that there is a 25% probability that the difference is by chance, and a 75% probability that it is not by chance and that EGCG has an effect. Not statistically significant, since that requires a much lower P, perhaps 0.05, but IMHO 75% is enough to go for EGCG, IF side effects are benign.
That's a common misinterpretation of p values. Here's a partial explanation:
Common misinterpretations of single P values
(1) The P value is the probability that the test hypothesis is true; for example, if a test of the null hypothesis gave P = 0.01, the null hypothesis has only a 1 % chance of being true; if instead it gave P = 0.40, the null hypothesis has a 40 % chance of being true. No! The P value assumes the test hypothesis is true—it is not a hypothesis probability and may be far from any reasonable probability for the test hypothesis. The P value simply indicates the degree to which the data conform to the pattern predicted by the test hypothesis and all the other assumptions used in the test (the underlying statistical model). Thus P = 0.01 would indicate that the data are not very close to what the statistical model (including the test hypothesis) predicted they should be, while P = 0.40 would indicate that the data are much closer to the model prediction, allowing for chance variation.
(2) The P value for the null hypothesis is the probability that chance alone produced the observed association; for example, if the P value for the null hypothesis is 0.08, there is an 8 % probability that chance alone produced the association. No! This is a common variation of the first fallacy and it is just as false. To say that chance alone produced the observed association is logically equivalent to asserting that every assumption used to compute the P value is correct, including the null hypothesis. Thus to claim that the null P value is the probability that chance alone produced the observed association is completely backwards: The P value is a probability computed assuming chance was operating alone. The absurdity of the common backwards interpretation might be appreciated by pondering how the P value, which is a probability deduced from a set of assumptions (the statistical model), can possibly refer to the probability of those assumptions.
Note: One often sees “alone” dropped from this description (becoming “the P value for the null hypothesis is the probability that chance produced the observed association”), so that the statement is more ambiguous, but just as wrong.
The p-value of various data sets can prove an important component in many facets of the software industry. Learn how to use p-values in easy-to-understand language.
by Tim Ojo · Sep. 05, 18 · DZone Big Data Zone · Tutorial
For the stats novice like me, understanding what p-value is can be difficult. This is because when asked, professional statisticians tend to try to give a complete and accurate description of what p-value is and how it is derived. For example, here is the definition from the American Statistical Association:
In statistical hypothesis testing, the p-value or probability value or asymptotic significance is the probability for a given statistical model that, when the null hypothesis is true, the statistical summary (such as the sample mean difference between two compared groups) would be the same as or of greater magnitude than the actual observed results.
I also have heard descriptions that start with an example of a coin that is flipped 1000 times. At that point I go: "buckle up Tim, it's going to be a bumpy ride."
While these are very accurate descriptions of p-value, as an engineer looking from the outside into the stats world, I just want a simple definition that gives me some idea as to what I'm looking at when I see a reported p-value.
So here we go:
To understand what p-value is, you first need to understand what a null hypothesis is. When running a hypothesis test/experiment, the null hypothesis says that there is no difference or no change between the two tests. The alternate hypothesis is the opposite of the null hypothesis and states that there is a difference between the two tests. The goal of the experiment is usually to disprove the null hypothesis, and to prove/test the alternate hypothesis. Let me illustrate this with some examples.
If you are trying to test whether a new marketing campaign generates more revenue, the null hypothesis is that there is no change in the revenue as a result of the new marketing campaign. And the alternate hypothesis is that the new marketing campaign performs better (or worse) than the previous campaign. If you are trying to prove that a new drug lowers cholesterol, the null hypothesis states that there is no difference in cholesterol between the group with the drug and the group without, while the alternate hypothesis states that the new drug does have an effect on cholesterol levels. If you are trying to test whether a new server version has better or worse performance than the previous version, the null hypothesis is that both server versions have equal performance. And the alternate hypothesis is that there is a meaningful difference in the performance of the old and new server.
So what is the simple layman's definition of p-value? The p-value is the probability that the null hypothesis is true. That's it.
In the example where we are trying to test whether a new marketing campaign generates more revenue, the p-value is the probability that the null hypothesis, which states that there is no change in the revenue as a result of the new marketing campaign, is true. If the value of the p-value is 0.25, then there is a 25% probability that there is no real increase or decrease in revenue as a result of the new marketing campaign. If the value of the p-value is 0.04 then there is a 4% probability that there is no real increase or decrease in revenue as a result of the new marketing campaign. As you can surmise, the lower the p-value, the more confident we are that the alternate hypothesis is true, which, in this case, means that the new marketing campaign causes an increase or decrease in revenue.
So what do p-values really tell us? p-values tell us whether an observation is as a result of a change that was made or is a result of random occurrences. In order to accept a test result we want the p-value to be low. How low you ask? Well, that depends on what standard you want to set/follow. In most fields, acceptable p-values should be under 0.05 while in other fields a p-value of under 0.01 is required. This cut-off number is known in statistics as the alpha, and results from experiments with p-values below this threshold are considered to be statistically significant. So when a result has a p-value of 0.05 or lower we can say that we are 95% confident that there is an actual difference between the two observations as opposed to just differences due to random variations. And as a result, we have reasonable grounds to support the alternate hypothesis and reject the null hypothesis.
I am sure what you have sent me is correct and I am thankful you sent it, but it is hard to understand for the layman and it gives limited guidance from a p value.
In my reply I quoted what P-value is in layman terms from Big Data Zone.
From that: "In the example where we are trying to test whether a new marketing campaign generates more revenue, the p-value is the probability that the null hypothesis, which states that there is no change in the revenue as a result of the new marketing campaign, is true. If the value of the p-value is 0.25, then there is a 25% probability that there is no real increase or decrease in revenue as a result of the new marketing campaign. If the value of the p-value is 0.04 then there is a 4% probability that there is no real increase or decrease in revenue as a result of the new marketing campaign. As you can surmise, the lower the p-value, the more confident we are that the alternate hypothesis is true, which, in this case, means that the new marketing campaign causes an increase or decrease in revenue".
What you sent shows that this is a gross simplification but it is what I have to guide me in understanding the probability of benefit from, in this case, EGCG. I have seen similar texts elsewhere on how to interpret p for the layman. With all its limitations and in the absence of alternatives, I will let the p value guide me in this way.
I included an easy to understand video with music and actors, because I thought it would be easier for the layman.
You are parroting exactly what the statistician wrote was INCORRECT. As an example of an incorrect interpretation, he wrote: "for example, if the P value for the null hypothesis is 0.08, there is an 8 % probability that chance alone produced the association."
The correct way to interpret that example is: There is an 8% probability that the data would have turned out that way or more extreme even if there were no real association.
In the EGCG example, where p=0.25, the correct interpretation is that there is a 1 in 4 chance that the data would have turned out that way or more extreme even if there were no association. So, there is a pretty good chance that the data is not telling you the true story. That's why p values only tell you how reliable your story (hypothesis) is. It says nothing about the probability of the events.
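That reading can be illustrated with a small Monte Carlo sketch: assume the null hypothesis (treatment is irrelevant), shuffle the 14 cancers among the 97 men at random, and count how often the EGCG arm ends up with 5 or fewer by luck alone. The counts come from the trial quoted above; this is a one-sided simulation, so its result need not match the paper's reported P = 0.25 (whose exact test is unspecified):

```python
import random

random.seed(0)

N_SIMS = 100_000
patients = [1] * 14 + [0] * 83   # 14 cancers among 97 men, labels only
extreme = 0

for _ in range(N_SIMS):
    random.shuffle(patients)      # null hypothesis: assignment is irrelevant
    egcg_events = sum(patients[:49])
    if egcg_events <= 5:          # as extreme as the observed 5/49, or more so
        extreme += 1

p_sim = extreme / N_SIMS
print(round(p_sim, 3))
```

The point of the simulation is the interpretation: it answers "how often would data this extreme arise if there were no real association", not "what is the probability the null hypothesis is true".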
People who get it wrong drive statisticians nuts, but your misunderstanding is common -- just completely wrong. If you can't understand it, for your own health, consider the possibility that you are misinterpreting it. After 20 years in statistics, I do understand it. If you can't understand it, just understand that your "layman's understanding" gets it completely backwards. The statistician wrote: "The absurdity of the common backwards interpretation might be appreciated by pondering how the P value, which is a probability deduced from a set of assumptions (the statistical model), can possibly refer to the probability of those assumptions."
OK, TA, I get your message. It is incorrect to say that with P = 0.25 there is a 25% probability that chance alone produced the association (between EGCG supplement and reduced prostate cancer risk).
It should be stated as a 25 % probability that the data would have turned out that way, or more extreme, even if there were no real association.
For me this makes no real difference as to what to do. With P = 0.25, I am willing to take the risk that there is no real association, IF side effects are benign.
This is a personal choice. What I have learned thanks to your patience in giving a statistical lesson, is that I will not recommend any others to take a substance with P = 0.25. As the researchers wrote, there is no (statistical) difference between 5 out of 49 getting PC with EGCG and 9 out of 48 getting PC with placebo.
My take is similar with the retrospective case-control studies of boron, zinc etc mentioned in this thread, which I had wrongly called clinical and which you rightly responded to as lack of proof. But your response here and elsewhere, which I am most grateful for, will make me look at the studies and side-effects of each substance more closely to the best of my ability, and reexamine my decision on each one.
I think you now understand. But I thought of a way of explaining p's more simply, so let me know if this is a way I should use with others.
Say there is a drug that may cure headaches. We give it to 4 random people and 3 (75%) are cured. Pretty good -- we think it's a cure!
But let's test the null hypothesis: The drug does not cure headaches
We'll designate H if it cured the headache, T if it didn't. All the 16 ways it could have worked out for the 4 subjects if it were purely random are:
HHHH HHHT HHTH HHTT HTHH HTHT HTTH HTTT
THHH THHT THTH THTT TTHH TTHT TTTH TTTT
You can count, the probability it cured the headache if totally random was:
4H= 1/16
3H= 4/16=1/4
2H= 6/16=3/8
1H= 4/16=1/4
0H=1/16
The probability (p) is 5/16 = 0.31 that at least 3 would have been cured even if the null hypothesis were correct. So we fail to reject the null hypothesis and conclude that, based on this evidence, we cannot say it cures headaches.
It does NOT mean that the odds are 69% that it really does cure a headache.
If you choose to take that drug, the experiment provides no evidence that it works.
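The enumeration above is small enough to verify by brute force; a minimal Python sketch:

```python
from itertools import product
from fractions import Fraction

# Enumerate all 2**4 = 16 equally likely outcomes for the 4 subjects
outcomes = list(product("HT", repeat=4))

# Under the null (pure chance), count outcomes with at least 3 cures (H)
at_least_3 = sum(1 for o in outcomes if o.count("H") >= 3)

p = Fraction(at_least_3, len(outcomes))
print(at_least_3, p, float(p))   # 5 outcomes, p = 5/16 = 0.3125
```

That 0.3125 matches the 5/16 counted by hand above: the chance of seeing 3 or more "cures" even if the drug does nothing at all.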
Very true! And in this case the evidence we do have, a randomized trial, showed no statistically significant difference. Statistics can only falsify, never prove (Popper). In this case, the null hypothesis (no effect) was not falsified.
What you write makes it clear, although I suspect it can still meet with an automatic recoil from even looking at what those numbers mean. Numbers and statistics don't come easy.
I agree! They are not at all intuitive. It gets even harder to understand with Bayesian (conditional) probabilities, where prior results affect future probabilities. If you really want your head to spin, try understanding the Monty Hall Problem: learning what's behind a curtain you didn't choose changes your best guess about the right curtain (hint: always switch curtains).
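The Monty Hall result is easy to convince yourself of by simulation; a short sketch (door numbering, seed, and trial count are arbitrary choices):

```python
import random

random.seed(1)

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game; return win rates for staying vs switching."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Host opens a door that is neither the player's pick nor the car
        host = random.choice([d for d in doors if d != pick and d != car])
        # Switching means taking the one remaining unopened door
        switched = next(d for d in doors if d != pick and d != host)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(round(stay, 2), round(switch, 2))   # roughly 1/3 vs 2/3
```

Staying wins about a third of the time; switching wins about two thirds, because the host's reveal carries information about the doors you did not pick.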
You are both missing the big picture, which is that this test is far too underpowered to lead to any conclusive result. People are counted in integers; there are no fractional values like 4.23 patients. A quantization (or rounding) error therefore applies to the measured values, and it is on the order of ~20% for the 5-man "strong" group and ~11% for the 9-man one. Starting with such an error-prone data set, any attempt at finding statistical significance is futile. It is simply an underpowered data set. If there had been ten times more samples in each group, then we could start discussing the nitty-gritty of the p value.
Just can't! In statistics, when a subset is taken from a primary set, its statistical parameters are adjusted by the fraction (N+1)/N. But let us leave that aside for the moment and think about what the situation would have been if, instead of a group of 49 people, 4900 were included: anything from 451 to 549 patients could have been observed. When making calculations with integers, there are applicable constraints. For example, it is just silly to punch the numbers into a calculator and say that the frequency of incidence of the first group is 5/49 = 10.2%. The frequency variable only assumes quantized values like 4/49, 5/49, 6/49, etc. It can't be treated as a continuous variable just because the calculator printed the digits. They should have enrolled more people. This is what the high p signifies. We didn't get any wiser after this test. It is equally wrong to assert that EGCG didn't do any good. This test was just inconclusive.
That's why small trials are used for screening. If a treatment fails to show a statistically significant difference in a small clinical trial, it will be screened OUT. If it had shown a significant difference, it would be screened IN. That is what we call Phase 2 and Phase 3 studies. Phase 2 studies, if significant, are used to decide on the sample size for a sufficiently powered Phase 3 study.
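As a rough illustration of the sample-size point, the standard normal-approximation formula for comparing two proportions suggests how many men per arm a properly powered trial would need at the event rates seen in the EGCG study. The 80% power and two-sided alpha = 0.05 here are my assumed conventions, not something stated in the thread:

```python
from math import sqrt

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-proportion comparison
    (defaults: two-sided alpha = 0.05, 80% power), normal approximation."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Event rates observed in the EGCG trial: 5/49 vs 9/48
n = n_per_arm(5 / 49, 9 / 48)
print(round(n))
```

The answer comes out in the hundreds per arm, far more than the ~49 actually enrolled, which is consistent with the trial being a screening-sized study rather than a confirmatory one.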
There are so many supplements, and so many have had different effects. Such a vague statement could cause a lot of unintended noise.
So an oxidative environment is beneficial to apoptosis, and antioxidants the reverse? So many studies ultimately point to both sides of the coin, some even showing both effects, all depending on unknown mechanisms that can either help kill cells or help them along the way. It's crazy! But I would welcome a more lively discussion of these mechanisms and causations. For instance, that old Swedish study showed natural antioxidant consumption as beneficial, but supplementation as beneficial to the PCa. Crazy stuff... It does get confusing! For instance, when undergoing salvage RT, I was a 40-oz-per-day green tea drinker, and I kept that up post-therapy. Might it have helped protect those dastardly cancer cells during therapy? Or was the micro-cellular spread already beyond the reach of local therapy? Could we ever know...? I do know I didn't deal with much of the side effects from ADT and radiation that I had expected at the time. Antioxidants like Vit C, E, and more; oxidative stress; ATP production; etc., and more & more.
Like I said, such a deep dive! Apologies for the brain spin & fog on this subject.
I think we may be overreacting to the supplement issue. We already know that we should get our vitamins and minerals from whole foods. Also, we should not overdo the supplements. Everyone will have their own interpretation of how to do that for themselves. Some of us are vegetarians, others are keto, etc. To each his own.
There are so many calls on this. My renowned oncologist recently encouraged me to quit ADT after 10.5 months, despite GS 9 and a single zapped met, with PSA less than 0.1 for the last 8-9 months and OK LDH and ALP. After hesitation I followed his advice, so I am on no SOC medication now. If and when PSA rises (0.2?), I will give it the double or triple treatment as per the latest studies.
Yes, thanks. As my onc advises, I am having a vacation after only 4 monthly shots of Firmagon, with PSA below 1. My concern is: when the PSA starts rising again, as it always does, will I be resistant to that treatment?
Live for today, brother. Normal may not return. I am permanently altered. I did the orch. No going back on that one. Best of luck in whatever you do! 👏🏼👏🏼👍
Yes, thanks for the reply, Nal. I see the onc this Tuesday and expect the PSA to be below 1, as it was 1.6 at the last monthly visit and injection. My opinion also is just to keep it as low as possible and not let it rise again; it was 18.6 only 4 months ago.
QOL was reduced more by the second AZ COVID injection than by the Firmagon.