I am an advocate of people being given the tools to do their own research on the health disability they live with. One of those tools is the ability to stand up to psychologists and psychiatrists. Hence this article.
I recently went to a University public engagement meeting on Health and Well Being issues. At this meeting was an experienced psychology researcher who said they had provided treatment to people with psychological issues, and that they had done this without any knowledge of what happiness was. The psychologist, as an expert, now wanted to produce a research proposal to explore what happiness is, and asked for suggestions on how to explore the subject. Their conduct gave me the impression that they were very dismissive of people’s behaviour and realisations outside their own narrow cultural experience.
Many people on Painconcern have been on the receiving end of the psychological profession at pain clinics. I wonder how much of the psychological presentation is actually based on the whole person in pain, and how much is based on theory that has never been adequately checked against the various coping and management strategies that people may have developed. The pain clinic concept of Pacing is something I have problems with. Pacing is a theory which concentrates on activity and the length of activity rather than on which body movements are involved and how we move at a particular time. It tends to ignore the need for careful observation of what is actually taking place at the time it is taking place. I have met a number of psychologists who have worked for pain clinics. Some of them could not grasp that pain can vary in a pain sufferer for good physical reasons; they held to the theory that pain nerves are not able to switch off.
I have done some research into what is known about the problems with published psychology research. The problem is far worse than I could possibly have imagined before I started.
WEIRD Psychology
Many of us on Painconcern have experienced issues regarding psychology and the psychiatric labels that have become attached to our medical records. I have come across the acronym WEIRD, which stands for Western, Educated, Industrialized, Rich, and Democratic. I will keep referring to it. The concept of WEIRD psychology is useful for challenging the professionals.
Behavioural scientists routinely publish broad claims about human psychology and behaviour in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies.
“A 2008 survey of the top psychology journals found that 96% of subjects were from Western industrialized countries — which house just 12% of the world’s population. Strange, then, that research articles routinely assume that their results are broadly representative, rarely adding even a cautionary footnote on how far their findings can be generalized. The evidence that basic cognitive and motivational processes vary across populations has become increasingly difficult to ignore. For example, many studies have shown that Americans, Canadians and western Europeans rely on analytical reasoning strategies — which separate objects from their contexts and rely on rules to explain and predict behaviour”
Taken from:
“The Neglected 95%: Why American Psychology Needs to Become Less American” by Jeffrey J. Arnett of Clark University. Full article at:
people.auc.ca/brodbeck/4007...
“When it comes to replicating studies, context matters” is an article published on May 23, 2016.
It says: “The results showed that context ratings predicted replication success even after statistically adjusting for methodological factors such as effect size and statistical power. Specifically, studies with higher contextual sensitivity ratings—where, for instance, altering the race or geographical location of study participants could alter the results—were less likely to be reproduced by the Reproducibility Project researchers.”
at medicalxpress.com/news/2016...
Reproducibility Project
“Brian Nosek of the University of Virginia and colleagues sought to replicate 100 different studies that were all published in 2008. The project pulled these studies from three different journals, Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition, to see if they could get the same results as the initial findings. In their initial publications, 97 of these 100 studies claimed to have significant results. The group went through extensive measures to remain true to the original studies, to the extent of consulting the original authors. Even with all the extra steps taken to ensure the same conditions as the original 97 studies, only 35 of the studies replicated (36.1%), and where they did replicate their effects were smaller than the initial studies’ effects. The authors emphasized that the findings reflect a problem that affects all of science, not just psychology, and that there is room to improve reproducibility in psychology.”
Taken from
en.wikipedia.org/wiki/Repro...
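For anyone who wants to check the headline figure in that quote, the 36.1% is simply the 35 successful replications divided by the 97 studies whose original papers reported significant results. Below is a minimal sketch of that arithmetic in Python; it is my own illustration, not part of the project, and the variable names are mine.

# My own illustration of the replication-rate arithmetic quoted above.
significant_originals = 97    # studies whose original papers reported significant results
successful_replications = 35  # of those, the number whose effects replicated

replication_rate = successful_replications / significant_originals
print(f"Replication rate: {replication_rate:.1%}")  # prints: Replication rate: 36.1%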
The following was published in Nature, volume 466, page 29, on 01 July 2010.
“Experimental findings from several disciplines indicate considerable variation among human populations in diverse domains, such as visual perception, analytic reasoning, fairness, cooperation, memory and the heritability of IQ. This is in line with what anthropologists have long suggested: that people from Western, educated, industrialized, rich and democratic (WEIRD) societies — and particularly American undergraduates — are some of the most psychologically unusual people on Earth. So the fact that the vast majority of studies use WEIRD participants presents a challenge to the understanding of human psychology and behaviour. … The evidence that basic cognitive and motivational processes vary across populations has become increasingly difficult to ignore. For example, many studies have shown that Americans, Canadians and western Europeans rely on analytical reasoning strategies — which separate objects from their contexts and rely on rules to explain and predict behaviour”
www2.psych.ubc.ca/~heine/do...
This applies to many on Painconcern. We are living with particular health disabilities in particular environmental circumstances which are possibly unique to each person. We report our health disability in a context-sensitive environment - the medical consultant’s room. We manage our pain and discomfort in environmental contexts - our homes, our families and our preferred social groupings. These contexts are often ignored by professionals who may rely on analytical reasoning strategies which separate objects from their contexts. I think I have met this time and time again without realising that this was what was taking place during the various consultations I have had.
These contexts provide a unique set of issues that interact with each other in unknown and unpredictable ways.
I also suggest that a number of medical consultants have used psychological theories they have come across to make a clinical decision to modify what we tell them. These modifications are recorded in our medical notes, and we often do not know about them.
Difference between Qualitative Research and Quantitative Research
Qualitative Research is primarily exploratory research. It is used to gain an understanding of underlying reasons, opinions, and motivations. It provides insights into the problem or helps to develop ideas or hypotheses for potential quantitative research. Qualitative Research is also used to uncover trends in thought and opinions, and dive deeper into the problem. ... The sample size is typically small, and respondents are selected to fulfil a given quota.
Quantitative Research is used to quantify the problem by way of generating numerical data or data that can be transformed into usable statistics. It is used to quantify attitudes, opinions, behaviours, and other defined variables – and generalize results from a larger sample population. Quantitative Research uses measurable data to formulate facts and uncover patterns in research.
Taken from
snapsurveys.com/blog/qualit...
Quantitative Research can only report on data that can have numbers attached to them. A lot of what I have experienced cannot be given a number, let alone a description that can be shared across the various medical disciplines. In addition, how the numbers are recorded is carefully defined, so issues which fall outside the data-recording strategy are ignored and go unrecorded.
What are the implications? We are being defined by psychological and psychiatric papers that have not studied, in a qualitative manner, enough people who have lived experience of the issues we are living with. Qualitative research may be in depth, but not enough people are being investigated qualitatively to capture the range of environmental conditions that exist. Qualitative research also tends to ignore the different language usage of the invited participants. Thus wrong conclusions are often being applied to people with a health disability. We are getting treatment regimes that are not based on the reality we may experience, and if we question the psychological treatment regime we are regarded as non-experts who lack the expertise to make sensible treatment proposals.
References to longer articles
What follows are references to longer articles. Many of us will not need to read them, but some of us, in order to defend ourselves from the “all in our head” accusations, might find some of what is said in them useful - particularly those of us who live with the exhausting side effects of pain, discomfort and lack of sleep. These side effects create a sub-group of people who have very different reactions to things from the WEIRD population that a lot of psychology is based on.
The following extract:
“Behavioral scientists routinely publish broad claims about human psychology and behavior in the world's top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans.”
was taken from
cambridge.org/core/journals...
which in turn was reporting on
www2.psych.ubc.ca/~henrich/...
This is a copy of the article in BEHAVIORAL AND BRAIN SCIENCES (2010); it is 75 pages long and discusses many different society groups.
doi:10.1017/S0140525X0999152X
researchgate.net/publicatio...
is a PDF of the article “Estimating the reproducibility of psychological science”, published in Science on 28 Aug 2015.
Part of its conclusion reads as follows:
“After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. Humans desire certainty, and science infrequently provides it. As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation. The original studies examined here offered tentative evidence; the replications we conducted offered additional, confirmatory evidence. In some cases, the replications increase confidence in the reliability of the original results; in other cases, the replications suggest that more investigation is needed to establish the validity of the original findings. Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.”
plato.stanford.edu/entries/...
“Reproducibility of Scientific Results”, first published Mon Dec 3, 2018. It discusses the claim that science is facing a “replication crisis”.
“The crisis often refers collectively to at least the following things:
a. The virtual absence of replication studies in the published literature in many scientific fields. ...
b. The widespread failure to reproduce results of published studies in large systematic replication projects. …
c. The evidence of publication bias. ...
d. A high prevalence of “questionable research practices”, which inflate the rate of false positives in the literature, and the documented lack of transparency and completeness in the reporting of methods, data and analysis in scientific publication.”
ebm.bmj.com/content/ebmed/2...
“Catalogue of bias: publication bias”, BMJ Evidence-Based Medicine, April 2019, volume 24, number 2, by Nicholas J DeVito and Ben Goldacre.
“Dickersin and Min define publication bias as the failure to publish the results of a study ‘on the basis of the direction or strength of the study findings’. This non-publication introduces a bias which impacts the ability to accurately synthesise and describe the evidence in a given area. Publication bias is a type of reporting bias and closely related to dissemination bias, although dissemination bias generally applies to all forms of results dissemination, not simply journal publications. A variety of distinct biases are often grouped into the overall definition of publication bias.”