Definition Argument – thathawkman

The Truth Can Be Skewed

Scientific studies allow science to expand its knowledge, from finding relationships between two seemingly different entities to testing and explaining phenomena that the world doesn’t quite understand. With the correct use of scientific studies, scientists can achieve feats that would not have been deemed possible without the newly found knowledge. More cures can be found, larger trends can be identified, and knowledge of a field can grow as further studies elaborate on what came before. However, the massive influence that scientific studies carry is a double-edged sword. Because studies hold such influence, they can determine what counts as the truth. Yet studies are still fallible, and studies that push false claims can skew what the common people believe is the truth.

As expected, scientific studies follow a very rigid system that dictates what a study must accomplish to make a claim. For a scientific study to support a claim, scientifically known as a hypothesis, the study must show that the hypothesized relationship is unlikely to be a fluke. To do this, the scientists first form what is known as a null hypothesis, which assumes that there is no relationship between the two entities. For example, if the hypothesis is that a newly made drug increases dopamine levels, the null hypothesis would be that the drug does not produce any change in dopamine levels. The scientists then attempt to support the actual hypothesis by rejecting the null hypothesis.
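As a concrete sketch of this framing, the snippet below sets up the drug-and-dopamine example in Python. All of the readings are invented for illustration; the hypothesis says the drug group’s mean is higher, and the null hypothesis says any difference is chance alone.

```python
import random
import statistics

random.seed(0)

# Invented dopamine readings (arbitrary units) for a drug group and a
# placebo group -- these numbers are made up purely for illustration.
drug    = [random.gauss(55, 5) for _ in range(30)]
placebo = [random.gauss(50, 5) for _ in range(30)]

# Hypothesis (H1): the drug raises dopamine levels.
# Null hypothesis (H0): the drug has no effect, so any difference
# between the group means is due to chance alone.
observed_diff = statistics.mean(drug) - statistics.mean(placebo)
print(f"observed difference in means: {observed_diff:.2f}")
```

The observed difference by itself proves nothing; the whole question, taken up next, is whether a gap this large could plausibly appear by chance under the null hypothesis.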

The data, collected under the carefully thought-out tests and conditions that the researchers set, is then analyzed to determine whether it is statistically significant, which decides whether the scientists can or cannot reject the null hypothesis. Testing for statistical significance essentially asks whether the data could easily have arisen by random chance or whether the claimed effect is the reason behind it. The researchers use various methods to calculate the probability of obtaining data at least as extreme as what they observed if the null hypothesis were true, a probability known as the p-value. To call a result statistically significant, this probability must be lower than 5 percent. This magic number of 5 percent is the key part, or the bane, of science: any study that produces a p-value below 5 percent is treated as validated, since the null hypothesis is deemed statistically improbable and rejected, which is taken as support for the actual hypothesis. Any p-value of 5 percent or higher cannot reject the null hypothesis and cannot support the study’s claim, which forces the scientists to either reattempt the study or change the claim altogether.
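One simple way to put a number on “could this have happened by chance” is a permutation test, sketched below with invented dopamine readings. If the null hypothesis were true, the group labels would be meaningless, so we reshuffle them thousands of times and count how often chance alone produces a difference at least as large as the one observed.

```python
import random
import statistics

random.seed(1)

# Invented dopamine readings for a drug group and a placebo group.
drug    = [58.1, 54.3, 60.2, 57.5, 55.8, 59.0, 56.4, 61.3]
placebo = [51.2, 49.8, 53.4, 50.6, 52.1, 48.9, 50.3, 52.7]

observed = statistics.mean(drug) - statistics.mean(placebo)

# Permutation test: under H0 the labels are arbitrary, so shuffle them
# many times and ask how often chance alone matches the observed gap.
pooled = drug + placebo
n_drug = len(drug)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_drug]) - statistics.mean(pooled[n_drug:])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"p = {p_value:.4f}")
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```

For this made-up data the two groups barely overlap, so almost no reshuffling reproduces a gap that large and the p-value lands far below the 5 percent threshold, letting the null hypothesis be rejected.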

This system is not perfect by any means. Natural errors can still occur when validating the claim. Because the data always carries an element of chance, some errors arise without any influence from the scientists. These errors are known as Type I and Type II errors. A Type I error occurs when you reject the null hypothesis and declare the claim true even though it is false; for example, a Type I error would be stating that someone has a disease even though the person does not. A Type II error is the exact opposite: you fail to reject the null hypothesis and treat the claim as false even though it is true. In the same scenario, a Type II error would state that someone did not have a disease even though the person did. Both errors are bad, but scientists account for them. The real issue comes when scientists intentionally publish what is, in effect, a Type I error.
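Both error rates can be watched directly by simulation. The sketch below (invented numbers and a small permutation test, purely for illustration) runs many experiments where the null hypothesis is actually true and counts how often it is wrongly rejected (Type I), then runs many where a real effect exists and counts how often it is missed (Type II).

```python
import random
import statistics

random.seed(2)

ALPHA = 0.05
N, TRIALS, PERMS = 10, 200, 200

def perm_p(a, b):
    """One-sided permutation p-value for mean(a) > mean(b)."""
    obs = statistics.mean(a) - statistics.mean(b)
    pooled = a + b
    hits = 0
    for _ in range(PERMS):
        random.shuffle(pooled)
        if statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]) >= obs:
            hits += 1
    return hits / PERMS

# Type I errors: both groups come from the SAME distribution (H0 true),
# yet some tests still come out "significant" by chance.
false_positives = 0
for _ in range(TRIALS):
    a = [random.gauss(50, 5) for _ in range(N)]
    b = [random.gauss(50, 5) for _ in range(N)]
    if perm_p(a, b) < ALPHA:
        false_positives += 1
type1_rate = false_positives / TRIALS

# Type II errors: a REAL effect exists (H0 false), yet some tests fail
# to reject the null and miss it.
misses = 0
for _ in range(TRIALS):
    a = [random.gauss(54, 5) for _ in range(N)]  # real +4 effect
    b = [random.gauss(50, 5) for _ in range(N)]
    if perm_p(a, b) >= ALPHA:
        misses += 1
type2_rate = misses / TRIALS

print(f"Type I error rate  ~ {type1_rate:.2f}")
print(f"Type II error rate ~ {type2_rate:.2f}")
```

The Type I rate hovers near the chosen 5 percent threshold by construction; the Type II rate depends on how large the real effect is compared to the noise, which is why small, noisy studies miss real effects so often.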

Intentional errors become a major issue as the scientific studies that people take at face value flood the scientific journals while being either misleading or entirely untrue. Two effects, publication bias and the file-drawer effect, describe how this happens. The author Megan L. Head, in the article “The Extent and Consequences of P-Hacking in Science,” defines publication bias as “the phenomenon in which studies with positive results are more likely to be published than studies with negative results.” The file-drawer effect is the related tendency for scientists to refrain from publishing negative studies, since such studies bring in little money or attention. These effects are very detrimental, as there is a noticeable underrepresentation of negative results among published studies. In “The File Drawer Problem and Tolerance for Null Results,” Robert Rosenthal describes the effect by saying, “the extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.” This is a direct result of scientists pushing the studies that innately get more attention, as positive, intriguing results will always be more popular than negative ones.
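Rosenthal’s extreme view can be acted out in a few lines. The sketch below (invented setup: a treatment with no real effect, a one-sided z-test with known noise, and a crude rule that journals “publish” only p < 0.05) shows how the published record ends up reporting a sizable “effect” that does not exist, while the file drawer quietly holds the truth.

```python
from statistics import NormalDist, mean
import math
import random

random.seed(3)

N, STUDIES, SIGMA = 20, 500, 5.0
se = SIGMA * math.sqrt(2 / N)  # standard error of the difference in means

# Simulate 500 studies of a treatment with NO real effect; "publish"
# only the significant ones, file-drawer the rest.
published, drawer = [], []
for _ in range(STUDIES):
    treatment = [random.gauss(50, SIGMA) for _ in range(N)]
    control   = [random.gauss(50, SIGMA) for _ in range(N)]
    effect = mean(treatment) - mean(control)
    p = 1 - NormalDist().cdf(effect / se)  # one-sided z-test, sigma known
    (published if p < 0.05 else drawer).append(effect)

print(f"published:   {len(published)} studies, mean 'effect' {mean(published):.2f}")
print(f"file drawer: {len(drawer)} studies, mean effect {mean(drawer):.2f}")
```

Roughly 5 percent of the simulated studies get published, every one of them a Type I error, and because only the lucky extremes clear the significance bar, the average published effect is strongly positive even though the true effect is exactly zero.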

However, the bias can be even more direct through something known as p-hacking. In p-hacking, scientists alter how they compute the p-value from any given data, exploiting the fact that the crucial part of a study rests on the comparison against the p-value threshold, until they find something that is statistically significant. In the web article “Science Isn’t Broken,” author Christie Aschwanden simulated how easy it is to find a statistically significant result for many different hypotheses within the same data. In her simulation, we choose which political party, Republican or Democratic, we want the hypothesis to support. Aschwanden then demonstrated that, by choosing which parts of the data to keep and which to omit, such as which officeholders count as politicians and whether recessions are included, different combinations of the same data can support hypotheses on both sides. The fact that p-hacking can prove completely opposite ideologies from the same data shows the massive influence it can have.
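The mechanics of p-hacking can be sketched in the same spirit as Aschwanden’s simulation (the setup below is invented for illustration, not her actual tool): on pure-noise data, an honest researcher runs one test, while a p-hacker tries twenty different cuts of the data and keeps the best p-value.

```python
from statistics import NormalDist, mean
import math
import random

random.seed(4)

N, SIGMA, TRIALS, OUTCOMES = 20, 5.0, 1000, 20
se = SIGMA * math.sqrt(2 / N)

def one_p():
    """One-sided p-value for one comparison of two groups with NO real difference."""
    a = [random.gauss(50, SIGMA) for _ in range(N)]
    b = [random.gauss(50, SIGMA) for _ in range(N)]
    return 1 - NormalDist().cdf((mean(a) - mean(b)) / se)

# Honest: one pre-registered test per study.
honest = sum(one_p() < 0.05 for _ in range(TRIALS)) / TRIALS

# Hacked: try OUTCOMES different cuts of the data, report only the best.
hacked = sum(min(one_p() for _ in range(OUTCOMES)) < 0.05
             for _ in range(TRIALS)) / TRIALS

print(f"honest false-positive rate: {honest:.2f}")
print(f"p-hacked rate (best of {OUTCOMES} tries): {hacked:.2f}")
```

The honest rate stays near 5 percent, but taking the best of twenty tries pushes the chance of a spurious “significant” finding above 60 percent, which is exactly why the same dataset can be made to support opposite conclusions.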

Works Cited:

Head, Megan L. “The Extent and Consequences of P-Hacking in Science.” PLoS Biology, n.d. Web. 18 Nov. 2016.

Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight. N.p., 19 Aug. 2016. Web. 15 Nov. 2016.

Rosenthal, Robert. “The File Drawer Problem and Tolerance for Null Results.” Psychological Bulletin 86.3 (1979): 638-641. PsycARTICLES. Web. 15 Nov. 2016.
