A 21st century educational system must educate all students in the effective and authentic use of the technologies that permeate society in order to prepare them for the future. In the past, our educational system emphasized the use of traditional tools such as textbooks, chalkboards, overhead projectors, ring binders, and composition books. Now, however, our culture has embraced vastly new and dynamically changing media in everyday life.
Part of this preparation for our futures includes understanding the computational concepts upon which countless digital applications run. This knowledge offers children the opportunity not simply to “read” digital and traditional media but to become more discerning end users. Perhaps they can even become innovative “writers” of new media themselves.
When we think about reading online, we traditionally look at this as making meaning while questioning, locating, evaluating, and synthesizing content. My dissertation focused on critical evaluation of online information as I studied how adolescents think critically about (and create) digital texts.
In looking back on these experiences, I’m struck by how naive my understanding of the online space was in terms of literacy practices. Critical evaluation of online information previously was an academic exercise in which we focused on whether or not students were fooled by a website about the Pacific Northwest Tree Octopus.
It now appears that not only are individuals ill-equipped to critically evaluate online texts, they are also being actively targeted with measures designed to fool them.
Distributed Denial-of-Service Attack
In computing, a denial-of-service attack (DoS attack) is a cyber-attack in which a perpetrator seeks to make a machine or network resource unavailable to users by temporarily or indefinitely disrupting services of a host connected to the Internet. A DoS attack is typically accomplished by flooding the targeted resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled.
Put simply: suppose you want to view a favorite website or blog. An attacker sends hundreds (or more) of requests to that website. The website treats these all as visitors and is overwhelmed. When you try to visit the site, it appears to be down, or loads slowly, because the site is busy dealing with all of the requests from the attacker.
A distributed denial-of-service attack (DDoS attack) steps this up by having the attack originate from many different sources. In a DoS attack, the source can (usually) be identified and dealt with. In a large-scale DDoS attack, the incoming traffic flooding the victim originates from many different sources. This effectively makes it impossible to stop the attack simply by blocking a single source.
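The difference between blocking a DoS and a DDoS can be shown with a toy model. This is a minimal sketch, not a real network: the server capacity, the IP naming scheme, and the traffic volumes are all hypothetical numbers chosen for illustration.

```python
import random

CAPACITY = 100  # hypothetical requests the server can handle per second

def serve(requests, blocked_ips=frozenset()):
    """Return the fraction of legitimate requests that get through."""
    allowed = [ip for ip in requests if ip not in blocked_ips]
    # The overloaded server processes at most CAPACITY requests,
    # chosen at random from everything that arrived.
    random.shuffle(allowed)
    processed = allowed[:CAPACITY]
    legit_served = sum(1 for ip in processed if ip.startswith("user"))
    legit_total = sum(1 for ip in requests if ip.startswith("user"))
    return legit_served / legit_total

legit_traffic = [f"user-{i}" for i in range(50)]

# DoS: one attacker address floods the server...
dos = legit_traffic + ["attacker"] * 5000
# ...but blocking that single source restores full service.
print(f"DoS, attacker blocked: {serve(dos, blocked_ips={'attacker'}):.0%}")

# DDoS: thousands of distinct sources, so there is no single
# address to block, and legitimate users are crowded out.
ddos = legit_traffic + [f"bot-{i}" for i in range(5000)]
print(f"DDoS, no block possible: {serve(ddos):.0%}")
```

In the DoS case a one-line blocklist entry brings service back to 100%; in the DDoS case the same defense has nothing to target, which is exactly the asymmetry the paragraph above describes.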
A DoS or DDoS attack is analogous to a group of people crowding the entry door of a store, making it hard for legitimate customers to enter, disrupting trade. If you view the Internet as an “information superhighway” you can view these attacks as many other people on the highway, flooding traffic, and not letting you on.
An online informational war
I believe that we’re currently in a large-scale, online informational war. I’m planning on writing more about this on this blog and elsewhere. If you’re a reader of my weekly newsletter, we’ve been following this story for some time.
This informational war in the digital age has many permutations, but one of these is very relevant to this current discussion about critical evaluation in online spaces. Individuals have only a limited amount of attention available on a daily basis. This attention is the most valuable resource available online.
Attention is gobbled up as streams of information and media compete for it. Social networks serve this content to us while computer algorithms double and triple down on how we interact with these sources. We get a steady stream of porridge that is not too hot…not too cold…it is just right. The end result is that we find ourselves in echo chambers or “filter bubbles” that ensure we get more of what we like.
The challenge of our current context is that sources of online content are actively pumping out information with varying shades of truth and sincerity. Along with the original source, which may be mostly true, there will be dozens (or more) of additional sources with information that is more or less true. This information is also presented with varying perspectives or ideological stances to further obfuscate and confuse readers.
Ultimately a mix of real, hoax, and more-reliable and less-reliable sites compete for your attention.
Flooding the informational space
In my earlier example about the Pacific Northwest Tree Octopus, students were asked to consider information from a hoax website that was almost absurdly false. The website was high quality, but readers needed to question whether they had ever seen aquatic sea life in a tree.
In an online informational war, you might have 50 to 100 websites reporting about the tree octopus. These websites would all discuss the topic from different perspectives, with varying levels of trust and sincerity of information presented. You might even have real-world impacts as people (for a variety of reasons) go to the forest to protest, hunt down, or report on the tree octopus. Thanks to mobile devices, these accounts can be instantly shared to a global audience and go viral.
You could have multiple hoax sites that present more details and photos about the tree octopus, as well as multiple “truthier” sites that feature experts debunking the idea of a tree octopus. Along with these sources, you would have another twenty to thirty websites from sources that look like credible informational spaces. These would include websites developed to look like newspapers, oceanographic institutes, arboretum groups, etc.
In an online informational war, most of these websites would be created by one source, or a collective, with the intent of confusing the situation and what is “truth” on this topic. These content mills would be paid to create content that seems believable, is click-worthy, and seeks to fill in the varying levels of honesty and sincerity around a topic. Supporting this content would be a cadre of hundreds (or thousands) of bots (fake accounts and automated social accounts) that would pick up and amplify (favorite and re-share) this content to others.
The goal of this networked collaborative would be to flood the informational space with as much information on varying sides to confuse the reader and obfuscate “truth.” The amount of “noise” on a topic leads users to think that it is actually “real.”
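The flooding dynamic above can also be sketched as a toy model of reader attention. Everything here is an assumption made for illustration: a reader with a fixed “attention budget” samples sources at random, and we estimate how often the single original, truthful source is ever encountered as the number of noise sources grows.

```python
import random

def chance_of_seeing_truth(noise_sources, attention_budget=10, trials=10_000):
    """Estimate how often a reader who can only read `attention_budget`
    items ever encounters the one original, truthful source."""
    sources = ["truth"] + [f"noise-{i}" for i in range(noise_sources)]
    hits = 0
    for _ in range(trials):
        # The reader picks attention_budget sources at random.
        sampled = random.sample(sources, min(attention_budget, len(sources)))
        if "truth" in sampled:
            hits += 1
    return hits / trials

# As the flood of competing sources grows, the odds of even
# encountering the original source collapse.
for n in (10, 100, 1000):
    print(f"{n:>4} noise sources: {chance_of_seeing_truth(n):.0%}")
```

With ten competing sources the truthful one is hard to miss; with a thousand, it is effectively invisible. No individual lie needs to be convincing, because the flood itself does the work.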
DDoS-ing online readers
In this interaction, the online reader becomes the victim as they are flooded by incoming traffic, or information, originating from many different sources. Trying to stop this attack, or to identify its source, is simply impossible. Trying to identify truth in a topic is a challenge as the reader is forced to negotiate subtle nuances between truth and fiction.
The reader would ultimately look at the vast amount of information coming at them on a topic from multiple sides, not know what is true, and give up. They ultimately decide that “nothing is true” and head back to their personal belief sets, since those are a known quantity and believable.
Ultimately the reader freezes up and their thinking about a topic is impaired or rendered inoperable. If you multiply this by thousands, and consider that the Internet is the primary text of this generation, you can quickly see how readers may be confused and unable to utilize their basic literacy practices.
This has the ultimate effect of limiting your worldview and the connections that you make online. This limits your ability (or desire) to read, write, and participate in a networked, global economy. There is an aspirational goal that a connected world affords unprecedented learning opportunities that make information plentiful and put experts at our fingertips. It turns out this idealized system of connectivity and networked publics may not be entirely true.
Also published on Medium.