Reflective – thathawkman

Core Value I. My work demonstrates that I used a variety of social and interactive practices that involve recursive stages of exploration, discovery, conceptualization, and development.

When I first made the Proposal+5, I based my article on the concept of millennials. I then found the five articles that I planned to use for my eventual research paper, analyzed the sources, and conceptualized how I would fit them into my essay. However, I eventually switched my thesis completely, so I had to repeat the same steps for my actual research paper. Repeating those steps showed me how an essay can be constructed and molded by the information and citations it draws on, and how each added modification lets the essay grow and adapt. I came to appreciate the recursive process of finding more and more information and adding it to an idea that can easily change.

Core Value II. My work demonstrates that I placed texts into conversation with one another to create meaning by synthesizing ideas from various discourse communities. 

In my Visual Rewrite, I analyzed a 30-second ad of a little boy attempting to eat. As I described the ad frame by frame, I started talking to other people to see what they believed certain details represented, and I found that the same details made different claims to different viewers. From there, I used the ideas each detail could imply to analyze the situation more completely. For example, the ad displayed numerous pictures on a fridge. Once I started to analyze each picture as a separate claim, I was able to deduce much more information than I originally thought I would. The two most notable papers were a family drawing, which lacked a large male figure and showed a woman dressed all in blue, and a diploma. From only these two pieces of paper, examined as claims, I deduced that the father had left the household, forcing the mother, a nurse, to work constantly to fill the financial gap.

Core Value III. My work demonstrates that I rhetorically analyzed the purpose, audience, and contexts of my own writing and other texts and visual arguments.

In the Stone Money Rewrite, I originally wrote my analysis of the podcast and the numerous articles on the different forms of money as a very formal paper, since that was what I was used to from my former classes. However, the lessons on how to entice the reader to keep reading, instead of simply relaying information, changed my view of how to write the paper. Starting with my attempt to add detail to the lecture on Cows and Chips, I began shaping my language so that the reader could easily understand it. I then changed my very abstract descriptions of money into easy-to-understand representations. This was one of the first times I considered the audience of my writing.

Core Value IV: My work demonstrates that I have met the expectations of academic writing by locating, evaluating, and incorporating illustrations and evidence to support my own ideas and interpretations.

In my Causal Rewrite, I examined many different articles that made many different claims about why the work environment for scientists causes problems. I worked to understand the situation and constraints behind each claim, then combined the separate claims into a causal chain: for example, connecting the claim that scientists are forced to publish whatever will make them money with the claim that businesses can exert influence through funding, and showing how the two ideas segue naturally into each other.

Core Value V. My work demonstrates that I respect my ethical responsibility to represent complex ideas fairly and to credit the sources of my information with appropriate citation.

In my Research Position Paper, I drew on many different studies and controversies to help prove my point. I supplied the information and background needed to understand the context of each situation and why its outcome mattered. Since the meaning of a scientific study depends on what it attempted to find, its results, and the actual effect it had, I tried to supply all of that context before explaining what resulted afterward. For example, one of my sources showed how retests of already-published studies can differ from the originals. Instead of only stating the result, I laid out the entire context of what the study did in order to represent the source as fairly as possible.

 

Annotated Bibliography – thathawkman

Annotated Bibliography

1. Head, M. L. “The Extent and Consequences of P-Hacking in Science.” The Extent and Consequences of P-Hacking in Science. PLoS Biol, n.d. Web. 18 Nov. 2016.

Background: This article analyzes the many different ways that scientific studies can be skewed, from selection bias and the file-drawer effect to the inflation of results (also known as p-hacking). Other “unethical” publishing practices it describes include conducting analyses midway through an experiment to decide whether to continue collecting data; recording many response variables and deciding after the analysis which ones to report; deciding after the analysis whether to include or drop outliers; excluding, combining, or splitting treatment groups after the analysis; stopping data exploration as soon as an analysis yields a significant p-value; and manipulating the given data to see what correlates, which produces false positives.

How I Used It: I drew from this article many of the implications and methods scientists can use to exert control over a study that should not be influenced by the scientist. I also defined the various methods and cited the article’s definition of publication bias.

2. Rosenthal, Robert. “The File Drawer Problem And Tolerance For Null Results.” Psychological Bulletin 86.3 (1979): 638-641. PsycARTICLES. Web. 15 Nov. 2016.

Background: This article delves deep into the bias known as the file drawer problem and discusses in depth the implications of null-result studies going unpublished while positive studies are published. Through mathematical computation, Robert Rosenthal found how many ‘stored away’ null-result studies it would take to render the published significant findings insignificant.
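For readers who want the mechanics, Rosenthal’s computation can be sketched as follows (this is my paraphrase of the 1979 paper’s “fail-safe N” idea, so the notation is mine rather than the article’s): if k published studies have standard normal deviates Z_1 through Z_k, then the number X of filed-away null-result studies needed to pull the combined result back above the p = .05 threshold is roughly

X = (Z_1 + Z_2 + ... + Z_k)^2 / 2.706 - k

where 2.706 is the square of the one-tailed critical value 1.645.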

How I Used It: I primarily used this article for its analysis of the file drawer bias. I cited Rosenthal’s definition and learned both about what causes the bias and about what the bias causes.

3. Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight. N.p., 19 Aug. 2016. Web. 15 Nov. 2016.

Background: This web article delves deep into the practice of p-hacking and its process. It describes how the system in which scientific studies are produced can be abused, and how the scientific community itself understands that there is a problem. The author also explains that p-hacking is not innately evil but is caused by bias. The article then describes the process of retracting statements and analyzes the situation as a whole and what it means for the scientific community.

How I Used It: I used this article to describe in depth the process of p-hacking and what it can do. I also learned about the hardships that scientists have to undergo and how easily biases can occur. This article also gave me ideas for the rebuttal, as the author argues that p-hacking should not be considered evil.

4. Als-Nielsen, Bodil. “Association of Funding and Conclusions in Randomized Drug Trials.” Association of Funding and Conclusions in Randomized Drug Trials. The JAMA Network, 20 Aug. 2003. Web. 01 Dec. 2016.

Background: A study that analyzed whether the type of funding a study received had a noticeable impact on the number of positive results published. The researchers took 370 randomly selected papers that each tested a pharmaceutical drug and recorded what type of funding each received and whether the study gave a positive result. They concluded that studies funded by for-profit organizations were more likely to publish in favor of the drug.

How I Used It: I used this study to show the relationship between corporate funding and studies. It showed how this relationship favors the businesses, as drugs that should not be verified as beneficial end up verified because of the corporation’s influence.

5. Turner, Erick H. “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy — NEJM.” New England Journal of Medicine. N.p., 17 Jan. 2008. Web. 28 Nov. 2016.

Background: A study from 2008 in which the FDA ran replication tests of 74 studies that had proved the effectiveness of numerous FDA-registered antidepressants. The antidepressant studies were retested and their results compared to the published results. Overall, many of the replications were found not to be positive.

How I Used It: I used this study to describe how often and how easily untrue claims can be published. It simultaneously showed the strength and potency of replication tests.

6. “Vioxx Recall – Merck and FDA.” DrugWatch. N.p., n.d. Web. 18 Nov. 2016.

Background: This article discusses the massive controversy over the drug Vioxx, a prescription painkiller. The drug was distributed as quickly as possible but was eventually found to more than double the risk of heart attacks and death. This massive controversy brought to light issues with the publishing system and corruption at the FDA.

How I Used It: I used this article to showcase the very real and prevalent harm that bias in scientific studies can cause. It also showcases how fallible the FDA is, even though it is supposed to regulate drugs precisely to prevent disasters like the one Vioxx created.

7. Hampton, Phil. “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests.” UCLA Newsroom. N.p., 08 Oct. 2015. Web. 18 Nov. 2016.

Background: This web article from UCLA discusses the lack of innovation in scientific studies and how a study quantified it. The study, led by Jacob Foster, analyzed a database of more than 6.4 million papers published from 1934 to 2008. Foster found that there has been a drastic decrease in innovation overall. The article then attempts to explain why the decrease has occurred, mentioning issues such as the need to publish consistently.

How I Used It: I used this article and study to demonstrate how narrow the options available to a scientist really are. I also used it to describe how the lack of innovation can yield even bigger issues in the future.

8. Hutt, Peter Barton. “Untangling the Vioxx-Celebrex Controversy: A Story about Responsibility.” Tran, Lan. N.p., 4 May 2005. Web. 18 Nov. 2016.

Background: A complete, in-depth review of the issues surrounding the drug Vioxx. This article lists every legal interaction Vioxx had and supplies a response for each event, from the FDA approval process for drugs to Merck’s eventual withdrawal of Vioxx.

How I Used It: I used this article to explicitly describe the process by which the FDA approves a drug and how it failed to deny Vioxx despite its very harmful side effects. This allowed me to refute the idea that the FDA can completely prevent the consequences of biased studies.

9. Novella, Steven. “The Lancet Retracts Andrew Wakefield’s Article « Science-Based Medicine.” The Lancet Retracts Andrew Wakefield’s Article « Science-Based Medicine. N.p., 03 Feb. 2010. Web. 25 Nov. 2016. https://www.sciencebasedmedicine.org/lancet-retracts-wakefield-article/

Background: This article talks about Andrew Wakefield’s controversial study claiming a relation between the measles vaccination and the development of autism. The study had many co-authors, and the article discusses how they officially renounced the study because of the many problems with the way the test was run. The post then celebrates and discusses how the journal that originally published the study, The Lancet, retracted it.

How I Used It: I used this study to prove the point that even though a completely biased study was taken down for almost every reason possible, the study still has a large effect to this day.

10. Altman, D. G. “The Scandal of Poor Medical Research.” The Scandal of Poor Medical Research | The BMJ. N.p., n.d. Web. 07 Dec. 2016.

Background: This article showcases and explains the reasons why scientists are publishing studies that have noticeably dropped in quality.

How I Used It: This article familiarized me with the atmosphere that scientists face and the hardships they must work through in their field.

11. “Hypothesis Testing (cont…).” Hypothesis Testing – Significance Levels and Rejecting or Accepting the Null Hypothesis. N.p., n.d. Web. 07 Dec. 2016.

Background: This web article explains the process by which a study’s findings are tested. It defines key terms such as the null hypothesis and the p-value.

How I Used It: This article served as a refresher on information that I had previously learned.

12. Peng, Roger. “A Simple Explanation for the Replication Crisis in Science.” A Simple Explanation for the Replication Crisis in Science · Simply Statistics. N.p., 24 Aug. 2016. Web. 07 Dec. 2016.

Background: This article discusses the idea of revisiting previous studies to re-prove what they accomplished, and why such replications are rare. It discusses what these tests can actually infer about previous studies, and it notes that replicating a study perfectly is much harder than it seems, since many different factors contribute to the overall data. It also mentions the lack of incentive to run these replication tests.

How I Used It: This article allowed me to further understand what a replication test entails for science as a whole, along with its limitations. Overall, this article emphasized the importance of replication studies and explained why there is a lack of them.

13. Fanelli, Daniele. “Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data.” Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. N.p., 21 Apr. 2010. Web. 07 Dec. 2016.

Background: This article analyzes the biases that studies face and compares studies to calculate whether or not the pressure to publish altered the data in any way.

How I Used It: This familiarized me with many different types of bias and with the atmosphere scientists work in. It also showed that the relationship exists, which supports my thesis.

14. “Who Pays for Science?” Who Pays for Science? N.p., n.d. Web. 07 Dec. 2016.

Background: An article that explains where funding for scientific studies comes from and the potential for studies to be altered because of money.

How I Used It: I used this article to understand how studies are paid for and where issues can arise.

15. https://www.washingtonpost.com/business/economy/as-drug-industrys-influence-over-research-grows-so-does-the-potential-for-bias/2012/11/24/bb64d596-1264-11e2-be82-c3411b7680a9_story.html

Background: This article lists the negative practices of pharmaceutical companies and how much influence those organizations have to get away with them. It lists numerous controversies and explains how this power shift came to be.

How I Used It: This article familiarized me with how prevalent the corruption of pharmaceutical companies is. It also gave me an example that I eventually used in my essay.

 

 

Research Position Paper – thathawkman

The Truth Can Be Skewed

Scientific studies allow science to expand its knowledge, from finding connections between two seemingly different entities to testing and explaining phenomena that the world doesn’t quite understand yet. With the correct use of these scientific studies, scientists can achieve feats that would have been deemed impossible without the newfound knowledge: more cures can be found, larger realizations and trends can be identified, and a field’s knowledge can grow even further as more studies elaborate on it. However, scientific studies’ massive influence is a double-edged sword. The “truthful” studies that we believe because they are backed by scientific research may be completely wrong; studies are fallible, and studies that push false claims can skew the truth and push an agenda. This trend is completely detrimental to the science community and to the public.

As one might expect, scientific studies follow a very rigid system that details what a study must accomplish to make a claim. For a scientific study to prove a claim (scientifically known as a hypothesis), it must show that the hypothesis has an undeniable relationship with the data that is collected. To prove the hypothesis, the scientists first form what is known as a null hypothesis, which assumes that there is no correlation between the two variables. For example, if the hypothesis is that a newly made drug increases dopamine levels, the null hypothesis would be that the drug does not exhibit any change in dopamine levels. The scientists then attempt to prove the actual hypothesis by rejecting the null hypothesis.

The data, which comes from the carefully thought-out tests and conditions set in place by the researchers, is then analyzed to see whether it is statistically significant enough to reject the null hypothesis. This test essentially asks whether the data arose from random chance or whether the claim is the reason behind the data. The researchers use any of several methods to calculate the probability that the given data could have shown up by chance, a probability known as the p-value. To call a result statistically significant, that probability must be lower than 5 percent. This magic number of 5 percent is key, as any study that produces a p-value lower than 5 percent is deemed valid: because the null hypothesis is then statistically improbable and can be rejected, the scientist concludes that the actual hypothesis is true. Any p-value of 5 percent or higher cannot reject the null hypothesis and cannot prove the claim the study was trying to make, which forces the scientist either to retry the study or to change the claim altogether.
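To make the decision rule concrete, here is a minimal sketch in Python (my own illustration, not material from the cited sources) of the dopamine-drug example above, using a standard two-sample t-test from SciPy; the measurement values are invented purely for demonstration.

# Illustration only: compute a p-value and compare it to the 0.05 threshold.
from scipy import stats

# Hypothetical dopamine measurements for a treated group and a control group.
drug_group = [112, 118, 121, 109, 125, 119, 116, 123]
placebo_group = [104, 108, 111, 102, 107, 110, 105, 109]

# Null hypothesis: the drug causes no change, so both groups share the same mean.
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group)

if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject the null hypothesis; the claim is 'supported'")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")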

Intentional errors have become a major issue as scientific studies, which people take at face value, become either misleading or entirely untrue and flood the scientific journals. The mix of claims that get published is shaped by effects such as publication bias and the file-drawer effect. The author Megan L. Head, in the article “The Extent and Consequences of P-Hacking in Science,” defines publication bias as “the phenomenon in which studies with positive results are more likely to be published than studies with negative results.” The file-drawer effect is the tendency for scientists to refrain from publishing negative studies because of the lack of money in doing so. These effects are very detrimental, as there is a noticeable underrepresentation of negative results among published studies. In “The File Drawer Problem and Tolerance for Null Results,” Robert Rosenthal describes this effect by saying, “the extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.” This is a direct result of scientists pushing the studies that innately get more attention, since positive, intriguing results will always be more popular than negative ones.

However, the bias can be even more direct through something known as p-hacking. Because the crucial step of a study rests on the p-value comparison, scientists who p-hack alter the way they compute the p-value from whatever data they have. In the web article “Science Isn’t Broken,” author Christie Aschwanden built a simulation showing how easy it is to find something statistically significant for many different claims with the same data. In the simulation, readers choose which political party, Republican or Democratic, they want the hypothesis to support. Aschwanden then demonstrates that by choosing to keep or omit parts of the data (such as which officeholders count as politicians and whether to include recessions), different combinations of the same data can prove a hypothesis for either side. The fact that p-hacking can prove completely opposite conclusions from the same data pool shows the massive influence it can have.
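The mechanic that Aschwanden’s interactive tool exploits can be sketched in a few lines of Python (again, my own illustration rather than code from the article): if an analyst quietly tries enough variable combinations on data that contains no real effect at all, some combination will dip below the 5 percent threshold by chance alone.

# Illustration only: "p-hacking" by trying many analyses of pure noise.
import itertools
import random
from scipy import stats

random.seed(1)

# Ten made-up "outcome" variables per group, with no real relationship anywhere.
group_a = [[random.gauss(0, 1) for _ in range(30)] for _ in range(10)]
group_b = [[random.gauss(0, 1) for _ in range(30)] for _ in range(10)]

significant = []
for i, j in itertools.product(range(10), range(10)):
    _, p = stats.ttest_ind(group_a[i], group_b[j])
    if p < 0.05:
        significant.append((i, j, p))

# With 100 comparisons of noise, roughly 5 are expected to look "significant".
print(f"{len(significant)} of 100 comparisons had p < 0.05")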

With such a rigid system standing between scientists and their livelihood, scientists place enormous value on publication. Since innovation comes directly from scientists, they are put under massive pressure to publish. This pressure has directly produced publication rates that seem to have no end, and a large portion of those studies are only partial truths because of the many different biases they are subjected to, intentionally or not. The reason there is so much potential for bias is the fractured system that scientific studies are built on.

Because of the emphasis on quantity over quality, for both pay and prestige, scientists are inclined not to publish the full potential of what their studies could have achieved. Thus, more and more faulty studies with intriguing, misleading theses accumulate. To combat this, replication tests are very valuable: they attempt to retest a study exactly in order to test its validity. These tests are essentially a fail-safe, in which another scientific group, independent of the original, does everything the study did to see if it produces similar results. Erick Turner, from the FDA (the Food and Drug Administration), spoke about the replication tests held in 2008. The FDA retested 74 studies that had proved the effectiveness of numerous FDA-registered antidepressants. From the replication tests, they found that 23 of them did not even have evidence of publication, which left 51 studies to examine. It was reported that 48 of those 51 remaining studies had originally shown positive results, yet when the FDA concluded the replication studies it found that only 38 studies out of the original 74 had positive results, completely disproving studies that were now found to be selling ineffective antidepressants.

If such a test is so valuable for catching incorrect studies, there should not be so many published studies that essentially make false claims. Sadly, these faulty studies are unlikely to be corrected, because there is no incentive within the scientific community to replicate them. Even though the FDA ran replication tests, the agency is not a good representation of the community as a whole: the FDA is a government-funded organization whose primary focus is to regulate issues such as biased studies. This shortage of replication is known as the replication crisis. To make sure that harmful products do not reach patients, and to reduce the need for replication tests, organizations such as the FDA place very rigid requirements on approval. However, regulatory associations such as the FDA are simply not enough to keep the influence of drug companies away from scientific studies.

Peter Hutt’s paper, “Untangling the Vioxx-Celebrex Controversy: A Story about Responsibility,” describes the exact process by which the FDA approves a drug. The FDA ultimately requires what is known as an NDA, or New Drug Application. Before that, the new drug undergoes an Investigational New Drug (IND) review and three phases of testing for safety. The IND review is used to check for “protection of the human research project, animal studies completed and analyzed, scientific merit, and qualifications of the investigator.” From the IND, the drug then moves through Phases I, II, and III. Phase I tests the drug on a small group of subjects to check for adverse side effects and, if successful, moves on to Phase II. Phase II administers the drug multiple times to a larger group, which leads to Phase III. In this phase, the drug is given to thousands of patients under many different methodologies to check for drug interactions and reactions. It is estimated that this entire process takes around 7 to 13 years before the application is finished. After the application is submitted, the FDA then convenes a committee to review the new drug and either authorizes it or stops the process there.

This very methodical authorization system should be able to screen out unsafe drugs after numerous checks. However, the unreliability of the FDA was completely exposed by the Vioxx controversy. DrugWatch, in the web article “Vioxx Recall – Merck and FDA,” discusses the painkiller Vioxx and how it was promoted to many different doctors with the primary goal of giving the drug to as many patients as possible. However, in only five years, this seemingly harmless drug was found to more than double the risk of heart attacks and death. Eventually, in 2004, Merck recalled Vioxx after being put in the spotlight for its drug. DrugWatch described the havoc Vioxx caused, with over 38,000 deaths, as potentially “the worst drug disaster in history.”

The drug went through the FDA’s entire rigid approval process and was approved in 1999. Not once did the FDA stop the drug until the heart problems started to appear and an analysis was made. By the time the FDA caught on, Vioxx had already damaged thousands of lives. The reason this disaster occurred at all was that Merck manipulated the study’s data. For the Merck scientists to show that the drug was safe enough for use, they omitted the detrimental data pertaining to patients with heart complications; otherwise, the drug could not have been released. In fact, Hutt stated that “the General Accounting Office found that of 198 drugs approved by the FDA between 1976-1985, about half had serious post-approval problems.”

Not only that, but this controversy also shed light on the corruption of the FDA. It was noted that Merck persuaded the FDA to remove warning labels for digestive issues with Vioxx before the drug was even approved. The FDA also ignored numerous doctors’ complaints about their patients’ heart problems until 2002, when a study showed the relationship between heart complications and Vioxx. When that integral piece of information came out, all the FDA did was add a label. The FDA had numerous chances to prevent a disaster from happening, and the organization was built to do just that. However, the bias that Merck was pushing forward to validate its product slipped through, which shows that even the FDA struggles to mitigate the effect of bias in scientific studies.

As noted before, scientists’ pay incentivizes them to push whatever claims will help their careers. If scientists could sustain themselves by running replication tests, researchers would run them. However, replication tests carry no monetary value, since they only restate what someone else has already stated, so scientists avoid the very test that helps counteract faulty claims. Because scientists are only human and will tend to prioritize their own living at the expense of integrity, they would rather push a swarm of new theses for money. This eliminates the fail-safe that is meant to weed out the faulty studies, which means that the number of studies that are fundamentally lying will steadily increase with little resistance.

This phenomenon is very detrimental to the future of science. In the article “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests,” author Phil Hampton discusses a study led by Jacob Foster that measures the risks studies take, the innovation they show, and the implications they make. Foster found that in the fields of biomedicine and chemistry, more than 60% of the studies analyzed showed no new connections. This essentially means that innovation is slowly grinding to a halt because of the flawed system. Since scientists are fixated on their publications to make a steady income, they will push whatever gives them the safest income. Even though going with the more innovative idea may result in a breakthrough that nets massive amounts of revenue from publication, there is an even greater chance that the study will not produce a positive result, which would not benefit the scientist. This risk-versus-reward scenario forces scientists to choose what they value more: being put in a textbook or eating the next day. The non-innovative route becomes the favored choice, since scientists do not have a safety net that can warrant the risk. Thus, innovation is slowly starting to decrease. This is one of the worst possible outcomes, as innovation is what lets science make new leaps and bounds. If innovation starts to slow down, science slows down as well.

These issues can be solved with money, so funding from organizations seems to be one of the best solutions: money given to researchers removes the constraint of income so that better tests can be made. However, this harmonious relationship becomes detrimental when both parties benefit too much. A claim from a scientific study is very valuable to a business. People’s faith in how rigid scientific studies are leads them to believe essentially anything a scientific study proves. Thus, companies are willing to invest a lot of money in scientific studies that support whatever the company is pushing, an investment that ultimately brings in more money down the road. This interest creates a cycle that makes the issue worse. A business wants to push its values to gain more money or popularity, so it becomes more willing to pay for studies in order to reap the benefits later. And as the business pays for the studies, scientists are more enticed to produce a study that proves the business’s value in exchange for a better living, giving them more and more incentive to produce or alter claims that prove that value.

This cycle results in countless biased articles that unjustifiably prove business claims that affect the public. Pharmaceutical and sports-drink companies are repeatedly caught in this obvious malpractice. For example, in the study “Association of Funding and Conclusions in Randomized Drug Trials,” Bodil Als-Nielsen examined 370 randomly selected drug trials to see whether the result of a trial was affected by whether it was funded by a non-profit or a for-profit organization. With only 16% of the studies recommending the drug when funded by a non-profit organization, versus 51% when funded by a for-profit organization, the effect that funding sources have is painfully obvious.

Biased studies can be detrimental even after they have been disproven. America has seen the rise of a movement of people who are against vaccination and refuse to have their children vaccinated. This movement grew in popularity when Andrew Wakefield released a study claiming a correlation between vaccines and autism. However, this study was completely biased to fit Wakefield’s claim. Not only did the study rely on very specific conditions to make the claim, but Wakefield was also accused of violating ethical rules. In the article “The Lancet Retracts Andrew Wakefield’s Article « Science-Based Medicine,” the UK General Medical Council’s Fitness to Practise Panel is quoted as officially stating, on January 28, 2010, that “it has become clear that several elements of the 1998 paper by Wakefield et al are incorrect.” Even though the original paper has been debunked repeatedly, the movement still stays strong and unwavering. Once the headline of the audacious claim is made, the impact the study has remains regardless of the truth. This trend gives even more power to the biased claims.

This corruption of scientific studies must be addressed. Many scientists are aware of the situation and the biases but are helpless to do anything about them, because the scientific system sets a precedent that dissuades scientists from reaching their highest potential. The issue can be resolved as long as money is not the primary factor. Giving scientists a steady income would incentivize them to work on what they deem important rather than merely safe, and much of the potential for corruption would disappear. As a result, scientific journals would fill with unbiased, sound information, allowing science to progress as it never has before.

 

Works Cited:

Head, M. L. “The Extent and Consequences of P-Hacking in Science.” The Extent and Consequences of P-Hacking in Science. PLoS Biol, n.d. Web. 18 Nov. 2016.

Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight. N.p., 19 Aug. 2016. Web. 15 Nov. 2016.

Rosenthal, Robert. “The File Drawer Problem And Tolerance For Null Results.” Psychological Bulletin 86.3 (1979): 638-641. PsycARTICLES. Web. 15 Nov. 2016.

 

Turner, Erick H. “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy — NEJM.” New England Journal of Medicine. N.p., 17 Jan. 2008. Web. 28 Nov. 2016.

Hampton, Phil. “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests.” UCLA Newsroom. N.p., 08 Oct. 2015. Web. 18 Nov. 2016.

Als-Nielsen, Bodil. “Association of Funding and Conclusions in Randomized Drug Trials.” Association of Funding and Conclusions in Randomized Drug Trials. The JAMA Network, 20 Aug. 2003. Web. 01 Dec. 2016.

 

Hutt, Peter Barton. “Untangling the Vioxx-Celebrex Controversy: A Story about Responsibility.” Tran, Lan. N.p., 4 May 2005. Web. 18 Nov. 2016.

“Vioxx Recall – Merck and FDA.” DrugWatch. N.p., n.d. Web. 18 Nov. 2016.

Novella, Steven. “The Lancet Retracts Andrew Wakefield’s Article « Science-Based Medicine.” The Lancet Retracts Andrew Wakefield’s Article « Science-Based Medicine. N.p., 03 Feb. 2010. Web. 25 Nov. 2016.

 

Definition Rewrite – thathawkman

The Truth Can Be Skewed

Scientific studies allow science to expand its knowledge, from finding connections between two seemingly different entities to testing and explaining phenomena that the world doesn’t quite understand yet. With the correct use of these scientific studies, scientists can achieve feats that would have been deemed impossible without the newfound knowledge: more cures can be found, larger realizations and trends can be identified, and a field’s knowledge can grow even further as more studies elaborate on it. However, scientific studies’ massive influence is a double-edged sword. These studies can determine what counts as the truth, yet studies are still fallible, and studies that push false claims can skew the truth and push an agenda. This trend is completely detrimental to the science community and the people.

As one might expect, scientific studies follow a very rigid system that details what a study must accomplish to make a claim. For a scientific study to prove a claim (scientifically known as a hypothesis), it must show that the hypothesis has an undeniable relationship with the data that is collected. To prove the hypothesis, the scientists first form what is known as a null hypothesis, which assumes that there is no correlation between the two variables. For example, if the hypothesis is that a newly made drug increases dopamine levels, the null hypothesis would be that the drug does not exhibit any change in dopamine levels. The scientists then attempt to prove the actual hypothesis by rejecting the null hypothesis.

The data, which comes from the carefully thought-out tests and conditions set in place by the researchers, is then analyzed to see whether it is statistically significant enough to reject the null hypothesis. This test essentially asks whether the data arose from random chance or whether the claim is the reason behind the data. The researchers use any of several methods to calculate the probability that the given data could have shown up by chance, a probability known as the p-value. To call a result statistically significant, that probability must be lower than 5 percent. This magic number of 5 percent is key, as any study that produces a p-value lower than 5 percent is deemed valid: because the null hypothesis is then statistically improbable and can be rejected, the scientist concludes that the actual hypothesis is true. Any p-value of 5 percent or higher cannot reject the null hypothesis and cannot prove the claim the study was trying to make, which forces the scientist either to retry the study or to change the claim altogether.

This is not a perfect system by any means; natural errors can still occur when validating the claim. Because the data still involve an element of chance, some errors can occur without any influence from the scientists. These are known as Type I and Type II errors. A Type I error occurs when the null hypothesis is rejected and the claim is declared true even though it is false. For example, a Type I error would be stating that someone has a disease even though the person does not. A Type II error is the exact opposite: the null hypothesis is not rejected, so the actual claim is declared false even though it is true. In the same scenario, a Type II error would state that someone did not have a disease even though the person did have it. Both errors are bad, but scientists account for them. The issue arises when scientists intentionally publish what is actually a Type I error.
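As a concrete illustration (a sketch of my own, not part of the original assignment), a short simulation can estimate both error rates for a generic hypothesis test: how often a true null hypothesis is wrongly rejected (Type I) and how often a real effect is missed (Type II), assuming the usual 5 percent significance threshold.

# Illustration only: estimating Type I and Type II error rates by simulation.
import random
from scipy import stats

random.seed(0)
ALPHA, TRIALS, N = 0.05, 2000, 30

def reject_null(effect):
    # Run one simulated experiment; return True if the null hypothesis is rejected.
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(effect, 1) for _ in range(N)]
    _, p = stats.ttest_ind(treated, control)
    return p < ALPHA

# Type I error: the null is true (no effect), but we reject it anyway.
type_1 = sum(reject_null(effect=0.0) for _ in range(TRIALS)) / TRIALS

# Type II error: a real effect exists, but we fail to reject the null.
type_2 = sum(not reject_null(effect=0.5) for _ in range(TRIALS)) / TRIALS

print(f"Estimated Type I error rate:  {type_1:.2%} (expected near 5%)")
print(f"Estimated Type II error rate: {type_2:.2%} (depends on effect and sample size)")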

Intentional errors have become a major issue as scientific studies, which people take at face value, become either misleading or entirely untrue and flood the scientific journals. The mix of claims that get published is shaped by effects such as publication bias and the file-drawer effect. The author Megan L. Head, in the article “The Extent and Consequences of P-Hacking in Science,” defines publication bias as “the phenomenon in which studies with positive results are more likely to be published than studies with negative results.” The file-drawer effect is the tendency for scientists to refrain from publishing negative studies because of the lack of money in doing so. These effects are very detrimental, as there is a noticeable underrepresentation of negative results among published studies. In “The File Drawer Problem and Tolerance for Null Results,” Robert Rosenthal describes this effect by saying, “the extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.” This is a direct result of scientists pushing the studies that innately get more attention, since positive, intriguing results will always be more popular than negative ones.

However, the bias can be even more direct through something known as p-hacking. Because the crucial step of a study rests on the p-value comparison, scientists who p-hack alter the way they compute the p-value from whatever data they have. In the web article “Science Isn’t Broken,” author Christie Aschwanden built a simulation showing how easy it is to find something statistically significant for many different claims with the same data. In the simulation, readers choose which political party, Republican or Democratic, they want the hypothesis to support. Aschwanden then demonstrates that by choosing to keep or omit parts of the data (such as which officeholders count as politicians and whether to include recessions), different combinations of the same data can prove a hypothesis for either side. The fact that p-hacking can prove completely opposite conclusions from the same data pool shows the massive influence it can have.

Works Cited:

Head, M. L. “The Extent and Consequences of P-Hacking in Science.” The Extent and Consequences of P-Hacking in Science. PLoS Biol, n.d. Web. 18 Nov. 2016.

Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight. N.p., 19 Aug. 2016. Web. 15 Nov. 2016.

Rosenthal, Robert. “The File Drawer Problem And Tolerance For Null Results.” Psychological Bulletin 86.3 (1979): 638-641. PsycARTICLES. Web. 15 Nov. 2016.

 

Causal Rewrite – thathawkman

Poor, Poor Scientists

With such a rigid system standing between scientists and their livelihood, scientists place enormous value on publication. Since innovation comes directly from scientists, they are put under massive pressure to publish. This pressure has directly produced publication rates that seem to have no end, and a large portion of those studies are only partial truths because of the many different biases they are subjected to, intentionally or not. The reason there is so much potential for bias is the fractured system that scientific studies are built on.

Because of the emphasis on quantity over quality, for both pay and prestige, scientists are inclined not to publish the full potential of what their studies could have achieved. Thus, more and more faulty studies with intriguing, misleading theses accumulate. To combat this, replication tests are very valuable: they attempt to retest a study exactly in order to test its validity. These tests are essentially a fail-safe, in which another scientific group, independent of the original, does everything the study did to see if it produces similar results. Erick Turner, from the FDA (the Food and Drug Administration), spoke about the replication tests held in 2008. The FDA retested 74 studies that had proved the effectiveness of numerous FDA-registered antidepressants. From the replication tests, they found that 23 of them did not even have evidence of publication, which left 51 studies to examine. It was reported that 48 of those 51 remaining studies had originally shown positive results, yet when the FDA concluded the replication studies it found that only 38 studies out of the original 74 had positive results, completely disproving studies that were now found to be selling ineffective antidepressants.

If such a test is so valuable for catching incorrect studies, there should not be so many published studies that essentially make false claims. Sadly, these faulty studies are unlikely to be corrected, because there is no incentive within the scientific community to replicate them. Even though the FDA ran replication tests, the agency is not a good representation of the community as a whole: the FDA is a government-funded organization whose primary focus is to regulate issues such as biased studies. This shortage of replication is known as the replication crisis. To make sure that harmful products do not reach patients, and to reduce the need for replication tests, organizations such as the FDA place very rigid requirements on approval. However, regulatory associations such as the FDA are simply not enough to keep the influence of drug companies away from scientific studies.

As noted before, scientists’ pay incentivizes them to push whatever claims will help their careers. If scientists could sustain themselves by running replication tests, researchers would run them. However, replication tests carry no monetary value, since they only restate what someone else has already stated, so scientists avoid the very test that helps counteract faulty claims. Because scientists are only human and will tend to prioritize their own living at the expense of integrity, they would rather push a swarm of new theses for money. This eliminates the fail-safe that is meant to weed out the faulty studies, which means that the number of studies that are fundamentally lying will steadily increase with little resistance.

This phenomenon is very detrimental to the future of science. In the article “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests,” author Phil Hampton discusses a study led by Jacob Foster that measures the risks studies take, the innovation they show, and the implications they make. Foster found that in the fields of biomedicine and chemistry, more than 60% of the studies analyzed showed no new connections. This essentially means that innovation is slowly grinding to a halt because of the flawed system. Since scientists are fixated on their publications to make a steady income, they will push whatever gives them the safest income. Even though going with the more innovative idea may result in a breakthrough that nets massive amounts of revenue from publication, there is an even greater chance that the study will not produce a positive result, which would not benefit the scientist. This risk-versus-reward scenario forces scientists to choose what they value more: being put in a textbook or eating the next day. The non-innovative route becomes the favored choice, since scientists do not have a safety net that can warrant the risk. Thus, innovation is slowly starting to decrease. This is one of the worst possible outcomes, as innovation is what lets science make new leaps and bounds. If innovation starts to slow down, science slows down as well.

These issues can be solved with money, so funding from organizations seems to be one of the best solutions: money given to researchers removes the constraint of income so that better tests can be made. However, this harmonious relationship becomes detrimental when both parties benefit too much. A claim from a scientific study is very valuable to a business. People’s faith in how rigid scientific studies are leads them to believe essentially anything a scientific study proves. Thus, companies are willing to invest a lot of money in scientific studies that support whatever the company is pushing, an investment that ultimately brings in more money down the road. This interest creates a cycle that makes the issue worse. A business wants to push its values to gain more money or popularity, so it becomes more willing to pay for studies in order to reap the benefits later. And as the business pays for the studies, scientists are more enticed to produce a study that proves the business’s value in exchange for a better living, giving them more and more incentive to produce or alter claims that prove that value.

This cycle results in countless biased articles that unjustifiably prove business claims that affect the public. Pharmaceutical and sports-drink companies are repeatedly caught in this obvious malpractice. For example, in the study “Association of Funding and Conclusions in Randomized Drug Trials,” Bodil Als-Nielsen examined 370 randomly selected drug trials to see whether the result of a trial was affected by whether it was funded by a non-profit or a for-profit organization. With only 16% of the studies recommending the drug when funded by a non-profit organization, versus 51% when funded by a for-profit organization, the effect that funding sources have is painfully obvious.

Works Cited:

Turner, Erick H. “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy — NEJM.” New England Journal of Medicine. N.p., 17 Jan. 2008. Web. 28 Nov. 2016.

Hampton, Phil. “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests.” UCLA Newsroom. N.p., 08 Oct. 2015. Web. 18 Nov. 2016.

Als-Nielsen, Bodil. “Association of Funding and Conclusions in Randomized Drug Trials.” Association of Funding and Conclusions in Randomized Drug Trials. The JAMA Network, 20 Aug. 2003. Web. 01 Dec. 2016.

Causal Argument – thathawkman

Poor, Poor Scientists

As innovation comes directly from the scientists, scientists are put under massive amounts of pressure to publish. This pressure to publish has directly resulted in ever-growing publication rates that seemingly have no end. With this massive influx of studies, a large portion of them are partial truths because of the many different biases that scientists are forced to work through, intentional or not. The reason there is so much potential for bias is the fractured system that scientific studies are built on.

Because of the emphasis on quantity over quality, for both pay and prestige, scientists are more lenient about not publishing the full potential of what their studies could have achieved. As scientists are essentially forced to focus on the number of intriguing theses they produce instead of on quality, accurate studies, more and more faulty studies accumulate. To combat this, replication tests are very valuable: they attempt to retest a study exactly in order to test its validity. These tests are essentially a fail-safe, in which another scientific group, independent of the original, does everything the study did to see if it produces similar results. Erick Turner, from the FDA, spoke about the replication tests held in 2008. The FDA retested 74 studies that had proved the effectiveness of numerous FDA-registered antidepressants. From the replication tests, they found that 23 of them did not even have evidence of publication, which left 51 studies to examine. It was reported that 48 of those 51 remaining studies had originally shown positive results, yet when the FDA concluded the replication studies it found that only 38 studies out of the original 74 had positive results, thus completely disproving studies that were now found to be selling ineffective antidepressants.

If such a test is so valuable for catching incorrect studies, there should not be so many published studies that essentially make false claims. Sadly, these faulty studies are unlikely to be corrected, because there is no incentive within the scientific community to replicate them. Even though the FDA ran replication tests, the agency is not a good representation of the community as a whole, since the FDA is a government-funded organization whose primary focus is to regulate issues such as biased studies. This is known as the replication crisis.

As noted before, scientists’ pay incentivizes them to push whatever claims will help their careers. If scientists were able to sustain themselves by running replication tests, researchers would run them. However, there is no monetary value in replication tests, so scientists avoid the very test that helps counteract faulty claims. Because scientists are only human and will tend to prioritize their own living at the expense of integrity, they are forced to push plentiful theses for money and do not focus on retesting, since there is no money in validating what someone has already stated. This phenomenon essentially eliminates the fail-safe that is meant to weed out the faulty studies, which means that the number of inaccurate studies will steadily increase with little resistance.

This phenomenon is very detrimental to the future of science. In the article “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests,” author Phil Hampton discusses a study led by Jacob Foster that measures the risks studies take, the innovation they show, and the implications they make. Foster found that in biomedicine and chemistry, more than sixty percent of the studies analyzed showed no new connections. This essentially means that innovation is slowly grinding to a halt because of the flawed system. Since scientists are fixated on their publications to make a steady income, they must push whatever allows the safest income. Even though going with the more innovative idea may result in a breakthrough that nets massive amounts of revenue from publication, there is an even greater chance that the study will not produce a positive result, which would not benefit the scientist. This risk-versus-reward scenario forces scientists to choose what they value more. The non-innovative route becomes the favored choice, since scientists do not have a safety net that can warrant the risk. Thus, innovation is slowly starting to decline. This is one of the worst possible outcomes, as innovation is what lets science make new leaps and bounds. If innovation starts to slow down, science as a whole slows down as well.

Since all of these issues can be solved with money, funding from organizations seems to be one of the best solutions: money given to researchers removes the constraint of income so that better tests can be made. However, this harmonious relationship becomes detrimental when both parties benefit too much. A claim from a scientific study is very valuable to a business. People’s faith in how rigid scientific studies are leads them to believe essentially anything a scientific study proves. As a result, companies are willing to invest a lot of money in scientific studies that support whatever the company is pushing, an investment that ultimately brings in more money down the road. This interest creates a cycle that makes the issue worse. A business wants to push its values to gain more money or popularity, so it becomes more willing to pay for studies in order to reap the benefits later. And as the business pays for the studies that prove its values, scientists are more enticed to produce a study that proves the business’s value in exchange for a better living, giving them more and more incentive to produce or alter claims that prove that value.

This cycle results in countless biased articles that unjustifiably prove business claims that affect the public. Pharmaceutical and sports-drink companies are repeatedly caught in this obvious malpractice. For example, in the study “Association of Funding and Conclusions in Randomized Drug Trials,” Bodil Als-Nielsen examined 370 randomly selected drug trials to see whether the result of a trial was affected by whether it was funded by a non-profit or a for-profit organization. With only 16% of the studies recommending the drug when funded by a non-profit organization, versus 51% when funded by a for-profit organization, the effect that funding sources have is painfully obvious.

Works Cited:

Turner, Erick H. “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy — NEJM.” New England Journal of Medicine. N.p., 17 Jan. 2008. Web. 28 Nov. 2016.

Hampton, Phil. “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests.” UCLA Newsroom. N.p., 08 Oct. 2015. Web. 18 Nov. 2016.

Als-Nielsen, Bodil. “Association of Funding and Conclusions in Randomized Drug Trials.” Association of Funding and Conclusions in Randomized Drug Trials. The JAMA Network, 20 Aug. 2003. Web. 01 Dec. 2016.

Definition Argument – thathawkman

The Truth Can Be Skewed

Scientific studies allow science to expand its knowledge, from finding relationships between two seemingly different entities to testing and explaining phenomena that the world doesn’t quite understand. With the correct use of scientific studies, scientists can achieve feats that would not have been deemed possible without the newfound knowledge: more cures can be found, larger realizations and trends can be identified, and a field’s knowledge can grow even further as more studies elaborate on it. However, the massive influence that scientific studies have is a double-edged sword. With that influence, these studies can determine what counts as the truth, yet studies are still fallible, and studies that push false claims can skew what the common people believe is the truth.

As expected, scientific studies follow a very rigid system that dictates what a study has to accomplish to make a claim. For a scientific study to prove a claim, scientifically known as a hypothesis, it must show that the hypothesis has an undeniable relationship with the collected data. To prove the hypothesis, the scientists first form what is known as a null hypothesis, which assumes that there is no correlation between the two variables. For example, if the hypothesis is that a newly made drug increases dopamine levels, the null hypothesis would be that the drug did not exhibit any change in dopamine levels. The scientists then attempt to prove the actual hypothesis by rejecting the null hypothesis.

The data, which comes from the carefully thought-out tests and conditions that researchers put in place, is then analyzed to see whether it is statistically significant, which determines whether the scientists can or cannot reject the null hypothesis. This test of statistical significance essentially asks whether the data arose from random chance or whether the claim is the reason behind the data. The researchers use any of several methods to calculate the probability that the given data could have shown up by chance, also known as the p-value. To say something is statistically significant, that probability must be lower than 5 percent. This magic number of 5 percent is either the key or the bane for scientists, as any study that produces a p-value lower than 5 percent is deemed valid: the null hypothesis becomes statistically improbable and is rejected, which therefore makes the actual hypothesis true. Any p-value that is 5 percent or higher cannot reject the null hypothesis and cannot prove the claim that the study was trying to make, which forces the scientist either to reattempt the study or to change the claim altogether.

This is not a perfect system by any means; natural errors can still occur when validating the claim. Because the data still involve an element of chance, some errors can occur without any influence from the scientists. These errors are known as Type I and Type II errors. A Type I error occurs when you reject the null hypothesis and declare the claim true even though it is false. For example, a Type I error would be stating that someone has a disease even though the person does not. A Type II error is the exact opposite: you fail to reject the null hypothesis and declare the actual claim false even though it is true. In the same scenario, a Type II error would state that someone did not have a disease even though the person did have it. Both errors are bad, but scientists account for them. The issue arises when scientists intentionally publish what is actually a Type I error.

Intentional errors become a major issue as the scientific studies that people take at face value flood the scientific journals while being either misleading or entirely untrue. The mix of claims that get published is shaped by effects such as publication bias and the file-drawer effect. The author Megan L. Head, in the article “The Extent and Consequences of P-Hacking in Science,” defines publication bias as “the phenomenon in which studies with positive results are more likely to be published than studies with negative results.” The file-drawer effect is the tendency for scientists to refrain from publishing negative studies because of the lack of money in doing so. These effects are very detrimental, as there is a noticeable underrepresentation of negative results among published studies. In “The File Drawer Problem and Tolerance for Null Results,” Robert Rosenthal describes this effect by saying, “the extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.” This is a direct result of scientists pushing the studies that innately get more attention, since positive, intriguing results will always be more popular than negative ones.

However, the bias can be even more direct through something known as p-hacking. Through p-hacking, scientists can alter the way they compute the p-value from any given data, since the essential part of a study is primarily based on the p-value comparison, in order to find something that is statistically significant. In the web article “Science Isn’t Broken,” author Christie Aschwanden simulated how easy it is to find something statistically significant for many different hypotheses with the same data. In the simulation, readers choose which political party, Republican or Democratic, they want the hypothesis to support. Aschwanden then demonstrates that, by choosing to keep or omit parts of the data, such as which officeholders count as politicians and whether to include recessions, different combinations of the same data can prove a hypothesis for both sides. Even with the same data, the fact that p-hacking can prove completely opposite ideologies shows the massive influence it can have.

Works Cited:

Head, M. L. “The Extent and Consequences of P-Hacking in Science.” The Extent and Consequences of P-Hacking in Science. PLoS Biol, n.d. Web. 18 Nov. 2016.

Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight. N.p., 19 Aug. 2016. Web. 15 Nov. 2016.

Rosenthal, Robert. “The File Drawer Problem And Tolerance For Null Results.” Psychological Bulletin 86.3 (1979): 638-641. PsycARTICLES. Web. 15 Nov. 2016.