In this second portion of the Final Exam, you will critically evaluate a quantitative research study on a social science topic. Your instructor will post an announcement with the reference for the article assigned for the exam. The study will be from a peer-reviewed journal and published within the last 10 years.
In the body of your critique, describe the statistical approaches used, the variables included, the hypothesis(es) proposed, and the interpretation of the results. In your conclusion, suggest other statistical approaches that could have been used and, if appropriate, suggest alternative interpretations of the results. This process will allow you to apply the concepts learned throughout the course in the interpretation of actual scientific research.
Your critique must include the following sections and information:
Introduction:
Methods:
Results:
Discussion:
Conclusion:
*******************************************************************************************************************************************
Article:
Method
This study used a 2 × 2 between-subjects experiment to test the effects of commenting systems and comment moderation on perceived message and messenger credibility. The two independent variables were the type of commenting system (native vs. non-native) and the method of moderation (pre-moderation vs. post-moderation), resulting in four experimental conditions. In addition, a control condition presented the news story with no user comments. Responses and other pertinent information were measured with a post-test questionnaire, and each participant was exposed to only one of the five story treatments.
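As a rough illustration of the design described above (not taken from the article), the five between-subjects conditions and random assignment could be represented as follows; the condition labels and the helper function are hypothetical.

```python
import random

# Hypothetical labels for the 2 x 2 factorial cells plus the no-comment control.
CONDITIONS = [
    ("native", "pre-moderation"),
    ("native", "post-moderation"),
    ("non-native", "pre-moderation"),
    ("non-native", "post-moderation"),
    ("control", None),  # news story shown with no comments
]

def assign_condition(rng: random.Random) -> tuple:
    """Randomly assign a participant to one of the five story treatments
    (between-subjects: each participant sees exactly one)."""
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    print(assign_condition(random.Random()))
```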
Within the experiment, participants were randomly assigned to one of the five conditions. Each condition featured an online news story of approximately 500 words that participants believed came from the website of a local community newspaper, The Tuscaloosa News. Each news story was followed by nine reader comments. The researchers had the cooperation of the newspaper, which allowed the exact look and feel of its website to be replicated in the experiment; as a result, the treatments were essentially indistinguishable from actual online news content. A known messenger was used because of the inherent difficulty of measuring messenger credibility for an unknown messenger and because consistent use of the same news organization held that factor constant in the analysis.
The story embedded within the news site was reported and written by the researchers in the paper's style to provide a topical, relevant story that would likely generate discussion in the comments section. The story concerned a food stamp program under consideration in the state and was written in a neutral voice, expressing no opinion on the topic. Although the topic was of interest to the region, it was not particularly controversial and therefore should not have prompted reactions strong enough to influence participants' responses. Each experimental condition featured the same story; the only difference between conditions was the type of commenting system and moderation that appeared below the news story itself.
Conditions that featured native commenting systems, ones unique to the news site itself, included a statement telling users to "log into The Tuscaloosa News to post a comment." Conditions that featured non-native commenting systems, which were hosted by external sites and required an external logon, instructed users to "log into Facebook [or Disqus] to post a comment." The native commenting conditions included photos for three of the nine commenters, and no commenter used a real name. The non-native commenting conditions featured commenters with real names and realistic photographs (not animations or drawings) associated with the comments. The difference between the two moderation conditions was a statement that either "all comments are reviewed by the [newspaper name] prior to being posted" (pre-moderation) or "all comments may be removed by the [newspaper name] at a later time" (post-moderation). The nine comments in the experimental conditions were either taken directly (or revised slightly) from actual comments on a similar story posted to reputable news websites or were written by the researchers to ensure a variety of opinions about the news story. Comments were selected for being most closely related to the story itself, and any comments containing offensive language or opinions were excluded. Most of the comments addressed the content of the story, with one directly criticizing the reporting of the story. Each experimental condition included the same comments; the control condition included the story but no comments.
After being asked to read the content on the page, participants completed a post-test questionnaire. Participants provided limited demographic information as well as information about their news consumption and commenting habits. The questions on messenger credibility [45] and message credibility [46] were adopted from scales shown to be correlated in a previous study [47] and were presented in randomized order. Both measures, messenger and message credibility, featured five questions, for a total of 10 questions about credibility.
Participants were recruited both from undergraduate classes at a large southeastern university and from outside the university setting in order to obtain a more diverse sample. The study took place entirely online using the Qualtrics survey software, and the link to the experiment was distributed through the university participant pool and through social media. To obtain more nuanced responses, the researchers did not inform participants in advance about the true nature of the study; participants were debriefed after completing the experiment.
Findings
A total of 388 people participated in the experiment. To exclude participants who did not fully engage with the study or answer all of the questions, responses were not analyzed if the participant spent less than one minute reading the experimental stimulus (n = 24) or did not respond to the post-test questionnaire (n = 20). This left 344 participants for analysis, self-identified as 232 females and 109 males. Two hundred ninety participants reported that they were white or Caucasian (84.3 percent), and 32 reported that they were black or African American (9.3 percent); the remaining 7.3 percent were of other races. The mean age of participants was 21 (SD = 8.50). Participants were relatively evenly distributed across the experimental conditions, with between 17 percent and 23 percent of participants in each group. A Cronbach's alpha reliability analysis showed that the scales for messenger credibility (α = .81) and message credibility (α = .76) were reliable. The two dependent variables were tested for normality, and both were found to be normally distributed.
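For readers critiquing the analysis, the sketch below shows how a scale reliability check (Cronbach's alpha) and a normality check could be reproduced. The data here are simulated item responses, not the study's data, and the article does not specify which normality test was used; Shapiro-Wilk is shown only as one common choice.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-item, 5-point responses standing in for one credibility scale.
rng = np.random.default_rng(0)
base = rng.normal(3.3, 0.7, size=(344, 1))
items = np.clip(np.round(base + rng.normal(0, 0.5, size=(344, 5))), 1, 5)

alpha = cronbach_alpha(items)
scale_scores = items.mean(axis=1)
w_stat, p_norm = stats.shapiro(scale_scores)  # Shapiro-Wilk normality test
print(f"alpha = {alpha:.2f}, Shapiro-Wilk p = {p_norm:.3f}")
```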
Table 1. Means for Messenger and Message Credibility

                                 Messenger Credibility     Message Credibility
                                 Mean      SD              Mean      SD
Native Commenting Systems        3.30      .76             3.41      .75
  Pre-Moderation                 3.19      .79             3.30      .83
  Post-Moderation                3.38      .73             3.50      .66
Non-Native Commenting Systems    3.20      .69             3.38      .64
  Pre-Moderation                 3.19      .68             3.41      .68
  Post-Moderation                3.21      .71             3.36      .59
Control                          3.46      .79             3.50      .67
Research Question 1 asked whether the presence of comments on a news story affected messenger and/or message credibility. After the experimental conditions were combined for comparison with the control group, an ANOVA revealed that people who were not exposed to any comments (control group) perceived significantly more messenger credibility (M = 3.46, SD = .79) than did people who were exposed to the experimental conditions (M = 3.25, SD = .79), F(1, 338) = 3.97, p < .05. However, there was no significant difference in message credibility, F(1, 339) = 1.04, p = .31. Therefore, the answer to RQ1 is that the presence of comments on a news story significantly lowered messenger credibility but did not significantly affect message credibility.
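A minimal sketch of the kind of one-way ANOVA reported for RQ1 appears below. The scores are simulated and the group sizes are placeholders chosen only to match the reported degrees of freedom; none of this is the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder messenger-credibility scores: control group vs. all comment conditions pooled.
control = rng.normal(3.46, 0.79, size=70)
experimental = rng.normal(3.25, 0.79, size=270)

f_stat, p_value = stats.f_oneway(control, experimental)
print(f"F(1, {len(control) + len(experimental) - 2}) = {f_stat:.2f}, p = {p_value:.3f}")
```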
Research Question 2 asked whether the type of commenting system (native or non-native) would affect messenger and/or message credibility. (See Table 1 for group means.) An ANOVA between commenting system type and messenger credibility indicated that the difference between means was not significant, F(2, 337) = 2.55, p = .08. An ANOVA between commenting system type and message credibility revealed that the difference between means was not significant, F(2, 338) = .57, p = .57. Therefore, commenting system type had no significant effect on messenger or message credibility.
Research Question 3 asked whether the type of moderation (pre-moderation or post-moderation) would affect messenger and/or message credibility. (See Table 1 for means and standard deviations.) An ANOVA between moderation type and messenger credibility showed that the difference between means was not significant, F(2, 337) = 2.81, p = .06. An ANOVA between moderation type and message credibility revealed that the difference between means was not significant, F(2, 338) = 1.04, p = .35. Therefore, moderation type had no significant effect on either messenger or message credibility.
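The article reports separate one-way ANOVAs for each factor. As an illustration of an alternative approach (not the authors' analysis), the factorial structure could also be tested jointly with a two-way ANOVA that estimates both main effects and their interaction. The sketch below uses simulated data and an arbitrary cell size; the variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated long-format data for the four comment conditions (control omitted here).
n_per_cell = 68
cells = [(s, m) for s in ("native", "non-native") for m in ("pre", "post")]
df = pd.DataFrame(
    [
        {"system": s, "moderation": m, "messenger_cred": rng.normal(3.3, 0.75)}
        for s, m in cells
        for _ in range(n_per_cell)
    ]
)

# Two-way ANOVA: main effects of commenting system and moderation, plus their interaction.
model = smf.ols("messenger_cred ~ C(system) * C(moderation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```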
Research Question 4 asked whether the amount a person reads or comments on online news affects his or her perceptions of messenger and message credibility. A Pearson's correlation analysis showed no significant correlation between messenger credibility and the amount of news read online, r(338) = .02, p = .70; between message credibility and the amount of news read online, r(339) = .05, p = .36; or between the amount a person comments online and message credibility, r(337) = -.10, p = .06. However, there was a significant correlation between the amount a person comments online and overall perception of messenger credibility, r(336) = -.13, p = .02, with heavy commenters reporting lower messenger credibility scores than people less likely to comment.
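The correlations above could be computed as in the following sketch, which uses placeholder values for commenting frequency and credibility scores; the sample size is chosen only to match the reported degrees of freedom and is not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Placeholder values: self-reported commenting frequency and messenger-credibility scores.
commenting_freq = rng.integers(0, 7, size=338)
messenger_cred = 3.4 - 0.05 * commenting_freq + rng.normal(0, 0.75, size=338)

r, p = stats.pearsonr(commenting_freq, messenger_cred)
print(f"r({len(commenting_freq) - 2}) = {r:.2f}, p = {p:.3f}")
```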