A new report from the Stanford History Education Group finds that fact checkers read less but learn more – far outpacing historians and top college students.
BY CARRIE SPECTOR
How do expert researchers go about assessing the credibility of information on the internet? Not as skillfully as you might guess – and those who are most effective use a tactic that others tend to overlook, according to scholars at Stanford Graduate School of Education.
Sam Wineburg (Image credit: L.A. Cicero)
A new report released recently by the Stanford History Education Group (SHEG) shows how three different groups of “expert” readers – fact checkers, historians and Stanford undergraduates – fared when tasked with evaluating information online.
The fact checkers proved to be fastest and most accurate, while historians and students were easily deceived by unreliable sources.
“Historians sleuth for a living,” said Professor Sam Wineburg, founder of SHEG, who co-authored the report with doctoral student Sarah McGrew. “Evaluating sources is absolutely essential to their professional practice. And Stanford students are our digital future. We expected them to be experts.”
The report’s authors identify an approach to online scrutiny that fact checkers used consistently but historians and college students did not: The fact checkers read laterally, meaning they would quickly scan a website in question but then open a series of additional browser tabs, seeking context and perspective from other sites.
This is such an interesting way to approach credibility in research. When we are taught how to find reliable sources, we are told to always fact check our information, but most of us don’t really take this to heart. We continue to fact check vertically, if at all. Cross-checking sources laterally is so important, not only for content accuracy but because biases really do exist. With midterm elections coming up, if we base all of our decisions on the first few sources we find online, our society is in for a rude awakening. Voting outcomes won’t be backed by good reasons, and decisions will be heavily swayed by the biases of online presenters.
In contrast, the authors write, historians and students read vertically, meaning they would stay within the original website in question to evaluate its reliability. These readers were often taken in by unreliable indicators such as a professional-looking name and logo, an array of scholarly references or a nonprofit URL.
When it comes to judging the credibility of information on the internet, Wineburg said, skepticism may be more useful than knowledge or old-fashioned research skills. “Very intelligent people were bamboozled by the ruses that are part of the toolkit of digital deception today,” he said.
The new report builds on research that SHEG released last year, which found that students from middle school through college were easily duped by information online. In that study, SHEG scholars administered age-appropriate tests to 7,804 students from diverse economic and geographic backgrounds.
For the new report, the authors set out to identify the tactics of “skilled” – rather than typical – users. They recruited participants they expected to be skilled at evaluating information: professional fact checkers at highly regarded news outlets, PhD historians with full-time faculty positions at universities in California and Washington state, and Stanford undergraduates.
“It’s the opposite of a random sample,” Wineburg said. “We purposely sought out people who are experts, and we assumed that all three categories would be proficient.”
I found it very interesting that for this study, researchers purposely selected people they assumed would pass the test with flying colors due to their intelligence and experience in the world of source evaluation. I’d have expected the same thing, so it’s concerning that even the so-called “experts” of the field have a hard time judging whether or not a source is reliable.
I completely agree. Further, the people we are deeming “experts” are the very individuals writing the sources we argue are the most credible. This just goes to show how crucial it is that we evaluate sources laterally. If we deem a source credible strictly because of the letters behind or in front of the author’s name, we may find ourselves in trouble.
The study sample consisted of 10 historians, 10 fact checkers and 25 students. Each participant engaged in a variety of online searches while SHEG researchers observed and recorded what they did on-screen.
I definitely agree. I think that if they wanted more generalizable results, they should not only have increased the sample size but also sampled a greater variety of participants. It would have been interesting to see how Stanford undergraduates compared to Central Michigan undergraduates, or how historians with different specialties fared.
In one test, participants were asked to assess the reliability of information about bullying from the websites of two different groups: the American Academy of Pediatrics (AAP), the largest professional organization of pediatricians in the world, and the American College of Pediatricians (ACPeds), a much smaller advocacy group that characterizes homosexuality as a harmful lifestyle choice.
“It was extremely easy to see what [ACPeds] stood for,” Wineburg said – noting, for example, a blog post on the group’s site that called for adding the letter P for pedophile to the acronym LGBT. Study participants were asked to evaluate an article on the ACPeds website indicating that programs designed to reduce bullying against LGBT youth “amount to special treatment” and may “validat[e] individuals displaying temporary behaviors or orientations.”
Fact checkers easily identified the group’s position. Historians, however, largely expressed the belief that both pediatricians’ sites were reliable sources of information. Students overwhelmingly judged ACPeds’ site the more reliable one.
I’m really confused about how people thought ACPeds was more reliable than (or as reliable as) the AAP. Wouldn’t it be obvious that the information was extremely biased if a group was disparaging a sexual orientation and calling its members pedophiles? I wonder if participants actually read the information or just judged by the first few paragraphs and the layout of the website (pictures, logo, etc.).
Going off of your point, I feel that too often we go through an article looking only for the information we need, and we completely gloss over the information that would show the source isn’t credible. This is a very unfortunate reality.
In another task, participants were asked to perform an open web search to determine who paid the legal fees on behalf of a group of students who sued the state of California over teacher tenure policies in Vergara v. California, a case that cost more than $1 million to prosecute. (A Silicon Valley entrepreneur financed the legal team, a fact not always mentioned in news reports about the lawsuit.) Again, the fact checkers came out well ahead of the historians and students, searching online sources more selectively and thoroughly than the others.
The tasks transcended partisan politics, Wineburg said, pointing out that advocates across the political spectrum promulgate questionable information online.
“These are tasks of modern citizenship,” he said. “If we’re interested in the future of democracy in our country, we have to be aware of who’s behind the information we’re consuming.”
The fact checkers’ tactic of reading laterally is similar to the idea of “taking bearings,” a concept associated with navigation. Applied to the world of internet research, it involves cautiously approaching the unfamiliar and looking around for a sense of direction. The fact checkers “understood the web as a maze filled with trap doors and blind alleys,” the authors wrote, “where things are not always what they seem.”
Wineburg and McGrew observed that even historians and students who did read laterally did not necessarily probe effectively: They failed to use quotation marks when searching for contiguous expressions, for instance, or clicked indiscriminately on links that ranked high in search results, not understanding how the order is influenced by search engine optimization. Fact checkers showed what the researchers called click restraint, reviewing search results more carefully before proceeding.
This tells us to navigate laterally rather than reading a full article vertically. But how do we know when fake news has effectively been separated from correct information? And how much of an expert do you have to be to tell correct information from incorrect information, even when you navigate laterally?
This is a great question and one that, in my opinion, can really only be addressed through practice. Like anything, it takes practice to recognize what credible vs. non-credible sources look like. I think that we, as a society, need to practice these good research habits in order to combat some of the challenges described in this article.
The authors of the report say their findings point to the importance of redeveloping guidelines for users of all ages to learn how to assess credibility on the internet. Many schools and libraries offer checklists and other educational materials with largely outdated criteria, Wineburg said. “Their approaches fit the web circa 2001.”
In January SHEG will begin piloting new lesson plans at the college level in California, incorporating internet research strategies drawn from the fact checkers’ tactics. Wineburg sees it as one step toward updating a general education curriculum to reflect a new media landscape and the demands of civic engagement.
In the state’s 2016 election alone, he noted, voters were confronted with 17 ballot initiatives to consider. “If people spent 10 minutes researching each one, that would be an act of incredible civic duty,” he said. “The question is, how do we make those 10 minutes count?”
From Spector, C. (2017, October 24). Stanford scholars observe “experts” to see how they evaluate the credibility of information online. Retrieved from https://news.stanford.edu/press-releases/2017/10/24/fact-checkers-ouline-information/
As we prepare for our “reading laterally” activity next Monday, please read, view, and discuss work from the Stanford History Education Group via NowComment.
Please offer one initial comment on something you notice in the video, and one initial comment on something you notice in the article. Then, reply at least once to each of the other members of your group. For instance:
What do you notice about the way that the researchers from the SHEG describe the ways most people read online? How does this compare to your own reading habits?
As you consider what the SHEG has discovered in their research, and the fact that we have midterm elections coming up in two weeks, what might you want to discuss with your friends and family?
Finally, as you consider the tools that we have been learning about in HON 206, are there ways that you might be able to change your own online reading habits to combat some of the challenges that the SHEG describes?