Definition Rewrite – thathawkman

The Truth Can Be Skewed

Scientific studies allow science to expand its knowledge, from finding connections between two seemingly different entities to testing and explaining phenomena that the world does not yet understand. With the correct use of these studies, scientists can achieve feats that would have been deemed impossible without the newly found knowledge: more cures can be found, larger realizations and trends can be identified, and knowledge of a field can keep growing as further studies elaborate on it. However, the massive influence of scientific studies is a double-edged sword. Because studies can determine what counts as the truth, studies that push false claims can skew that truth and advance an agenda. This trend is detrimental to both the scientific community and the public.

As one might expect, scientific studies follow a rigid system that details what a study must accomplish to make a claim. For a study to prove a claim (scientifically known as a hypothesis), it must show an undeniable relationship between the hypothesis and the data that is collected. To do so, the scientists first form what is known as a null hypothesis, which assumes that there is no correlation between the two entities being studied. For example, if the hypothesis is that a newly made drug increases dopamine levels, the null hypothesis would be that the drug does not change dopamine levels at all. The scientists then attempt to prove the actual hypothesis by rejecting the null hypothesis.

The data, which is produced under the carefully thought-out tests and conditions set in place by the researchers, is then analyzed to see whether it is statistically significant enough to reject the null hypothesis. This test essentially determines whether the data arose from random chance or whether the claim is the real reason behind it. The researchers then use one of many methods to calculate the probability of obtaining data at least as extreme as what was observed, assuming the null hypothesis is true; this probability is known as the p-value. To call a result statistically significant, the p-value must be lower than 5 percent. This magic number of 5 percent is key, as any study that produces a p-value below it is conventionally deemed valid. Because the null hypothesis is then statistically improbable and can be rejected, the scientist can conclude that the actual hypothesis is true. A p-value of 5 percent or higher cannot reject the null hypothesis and cannot prove the claim the study was trying to make, which forces the scientists to either retry the study or change the claim altogether.
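To make the p-value concrete, here is a minimal Python sketch of one common way to compute it, a permutation test. The dopamine numbers are made up for illustration and do not come from any real study; the logic simply asks how often shuffled group labels produce a difference as large as the one observed.

```python
import random

random.seed(0)

# Hypothetical dopamine measurements (arbitrary units) for a control
# group and a group given the new drug; illustrative numbers only.
control = [48, 52, 50, 49, 51, 47, 50, 53, 49, 51]
drug = [55, 58, 53, 57, 54, 59, 56, 52, 57, 55]

observed = sum(drug) / len(drug) - sum(control) / len(control)

# Permutation test: under the null hypothesis the group labels are
# interchangeable, so shuffle them many times and count how often a
# difference at least as large as the observed one appears by chance.
pooled = control + drug
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    shuffled_control = pooled[:len(control)]
    shuffled_drug = pooled[len(control):]
    diff = sum(shuffled_drug) / len(drug) - sum(shuffled_control) / len(control)
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 5 percent level.")
```

With these invented numbers the observed difference is far larger than almost any shuffled difference, so the p-value lands well under 5 percent and the null hypothesis would be rejected.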

This is not a perfect system by any means; natural errors can still occur when validating the claim. Because the data still contains an element of chance, some errors can occur without any influence from the scientists. These are known as Type I and Type II errors. A Type I error occurs when the null hypothesis is rejected and the claim is declared true even though it is actually false. For example, a Type I error would be a test stating that someone has a disease when the person does not. A Type II error is the exact opposite: the null hypothesis is not rejected, so the actual claim is declared false even though it is true. In the same scenario, a Type II error would state that someone does not have the disease when the person actually does. Both errors are bad, but scientists account for them. The real issue arises when scientists intentionally publish what is, in effect, a Type I error.
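A short simulation shows why Type I errors are unavoidable even with honest scientists. This sketch (my own illustration, not from the cited sources) repeatedly tests a fair coin, so the null hypothesis of "no bias" is always true, yet a roughly 5 percent cutoff still wrongly rejects it about 3 to 4 percent of the time.

```python
import random

random.seed(1)

def is_significant(heads):
    # Two-sided rejection region for 100 flips of a fair coin:
    # fewer than 40 or more than 60 heads occurs with probability
    # of roughly 0.035, so this is slightly stricter than a 5% test.
    return heads < 40 or heads > 60

false_positives = 0
experiments = 2_000
for _ in range(experiments):
    # The coin really is fair, so every rejection is a Type I error.
    heads = sum(random.random() < 0.5 for _ in range(100))
    if is_significant(heads):
        false_positives += 1

rate = false_positives / experiments
print(f"Type I error rate: {rate:.3f}")
```

The rate hovers near the chosen cutoff no matter how careful the experimenters are, which is exactly why a single significant study is never conclusive on its own.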

Intentional errors have become a major issue as scientific studies, which people take at face value, become misleading or entirely untrue and flood the scientific journals. The published literature is distorted by effects such as publication bias and the file-drawer effect. The author Megan L. Head, in the article “The Extent and Consequences of P-Hacking in Science,” defines publication bias as “the phenomenon in which studies with positive results are more likely to be published than studies with negative results.” The file-drawer effect is the related tendency for scientists to refrain from publishing negative studies, since such studies attract little funding or attention. These effects are very detrimental, as negative studies are noticeably underrepresented in publication. In “The File Drawer Problem and Tolerance for Null Results,” Robert Rosenthal describes this effect by saying, “the extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.” This is a direct result of scientists pushing the studies that innately get more attention, as positive, intriguing results will always be more popular than negative ones.
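Rosenthal's extreme view can be simulated directly. In this sketch (an illustration of my own, under the deliberately worst-case assumption that every hypothesis being tested is actually false), each "study" tests a fair coin, and only significant results get published; every published study is then a Type I error while the true negatives sit in the file drawer.

```python
import random

random.seed(2)

published = 0
filed_away = 0
for study in range(5_000):
    # The null hypothesis is true in every study: the coin is fair.
    heads = sum(random.random() < 0.5 for _ in range(100))
    significant = heads < 40 or heads > 60  # roughly a 5% test
    if significant:
        published += 1   # a Type I error that reaches the journals
    else:
        filed_away += 1  # a true negative that never gets submitted

print(f"published: {published}, filed away: {filed_away}")
```

In this worst case the journals contain nothing but false positives, which is precisely the distortion the file-drawer effect makes possible.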

However, the bias can be even more direct through something known as p-hacking. Since a study’s conclusion rests primarily on comparing the p-value to the significance threshold, scientists can, through p-hacking, alter the way they compute the p-value from any given data. In the web article “Science Isn’t Broken,” author Christie Aschwanden simulates how easy it is to find something statistically significant for many different claims using the same data. In her simulation, readers choose which political party, Republican or Democratic, they want the hypothesis to support. Aschwanden then demonstrates that by choosing to keep or omit parts of the data (such as which officeholders count as politicians and whether recessions are included), different combinations of the same data can prove a hypothesis for either side. The fact that p-hacking can prove completely opposite ideologies from the same data pool shows the massive influence it can have.
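The mechanics behind Aschwanden's demonstration can be sketched in a few lines. This simulation (my own illustration, not her actual interactive) gives a "researcher" twenty independent ways to slice data in which no real effect exists; trying all of them and reporting whichever comparison crosses the 5 percent line makes a false finding more likely than not.

```python
import random

random.seed(3)

def one_comparison():
    # One null comparison, modeled as 100 fair coin flips; it comes
    # out "significant" with probability of roughly 0.035.
    heads = sum(random.random() < 0.5 for _ in range(100))
    return heads < 40 or heads > 60

hacked_hits = 0
studies = 1_000
for _ in range(studies):
    # The p-hacker keeps re-slicing the data until something sticks:
    # with 20 tries, at least one comparison succeeds over half the time.
    if any(one_comparison() for _ in range(20)):
        hacked_hits += 1

hit_rate = hacked_hits / studies
print(f"studies with a 'significant' finding: {hit_rate:.2f}")
```

A single honest comparison would be wrong about 5 percent of the time; twenty hidden comparisons push that past 50 percent, which is why undisclosed analytic flexibility can support either side of the same question.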

Works Cited:

Head, Megan L., et al. “The Extent and Consequences of P-Hacking in Science.” PLoS Biology, 2015. Web. 18 Nov. 2016.

Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight, 19 Aug. 2015. Web. 15 Nov. 2016.

Rosenthal, Robert. “The File Drawer Problem and Tolerance for Null Results.” Psychological Bulletin 86.3 (1979): 638–641. PsycARTICLES. Web. 15 Nov. 2016.

