Possible cultural & technological futures of digital scholarship

In a recent blog post, Greg McVerry examines the challenges and opportunities of injecting some IndieWeb philosophies into Google Scholar and other systems for tracking, citing, and identifying publications. His post closes with the following statement:

Citations stink. Like I said a canonical link as citation is my dream but we are way off from a digital utopia. Until then I want to help us make using microformats and webmentions as easy as possible for open scholars.

Greg’s main point in the post is about citation style, and the challenges of formatting citations properly for HTML and elsewhere. But I think Greg might be on to something far more important…IMHO. He has helped me pull together some recent challenges and thoughts I’ve had about publishing as of late.

I think there is a need to develop a system to track the draft of a manuscript from the beginning to the end of the process. This would open up new possibilities to scaffold new scholars while we onboard them into the process. It would also provide new opportunities for open scholarship and open science. Finally, it would allow researchers to replicate, remix, or reproduce the (research, reflection, writing, revision, publishing) process. The answer may lie in IndieWeb philosophies, but the main impediment may be the people and systems that make all of this possible. I think we have new technological opportunities in academic publishing, but I’m not sure if culturally we’re ready. Let me explain.

Feel free to check out the video abstract if you want some of the behind-the-scenes thinking on this post.

IndieWeb Philosophies

IndieWeb refers to a set of software utilities that allows people who maintain their own, independently hosted websites to keep their own social data on those sites. Put simply, the ethos is that you should own your own content. To make this a reality, the community has developed a distributed network of communication tools, plugins, and software that lets you own your content, connect more completely with others, and control your digital identity.

The principles behind the IndieWeb further elaborate on this mindset of owning your own data, building your own spaces, and archiving your work. This is a very simplistic overview of the IndieWeb. You can read this post from Greg McVerry, in which he writes up the principles for people who don’t regularly dive into this content.

For me, the IndieWeb means that I regularly post my ideas on my main website. I’ve also started sharing and archiving things I’m reading on my digital breadcrumbs site. In the past, I would read (or write) a post and share it off to my social networks. Twitter, Facebook, and other services would take it and do with it as they pleased. I might receive a notification, or comment on this content as it made its way through the Internet. But, for the most part, these interactions would quickly disappear. Through the integration of a handful of IndieWeb essentials, I can “bring back” these likes, retweets, comments, and reactions to my original post. If someone on Twitter sent me a response, that response would show up on my original blog post. Future readers would be able to review my original content along with the supplemental content.
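To make the mechanics a bit more concrete, here is a minimal sketch of how a webmention gets sent: the sender discovers the endpoint the target page advertises, then POSTs the source and target URLs to it. This is a rough illustration of the Webmention flow in Python using the requests library; the URLs and the naive regex-based endpoint discovery are placeholder assumptions for illustration, not production code.

```python
# Minimal sketch: sending a webmention so a reply shows up on the original post.
# Assumes the `requests` library is installed; the URLs below are placeholders.
import re
from urllib.parse import urljoin

import requests


def discover_webmention_endpoint(target: str) -> str | None:
    """Find the target page's webmention endpoint (Link header or rel="webmention" in HTML)."""
    resp = requests.get(target, timeout=10)
    # Check the HTTP Link header first.
    match = re.search(r'<([^>]+)>;\s*rel="?webmention"?', resp.headers.get("Link", ""))
    if match:
        return urljoin(target, match.group(1))
    # Fall back to a naive scan of the HTML for rel="webmention".
    match = re.search(r'<(?:link|a)[^>]+rel="webmention"[^>]+href="([^"]+)"', resp.text)
    if match:
        return urljoin(target, match.group(1))
    return None


def send_webmention(source: str, target: str) -> int:
    """Notify the target page that the source page links to (or replies to) it."""
    endpoint = discover_webmention_endpoint(target)
    if endpoint is None:
        raise RuntimeError("No webmention endpoint advertised by the target")
    resp = requests.post(endpoint, data={"source": source, "target": target})
    return resp.status_code


if __name__ == "__main__":
    # e.g. a reply on another site (or bridged from Twitter) pointing at my post
    send_webmention(
        source="https://example.org/notes/my-reply",
        target="https://example.com/some-original-post/",
    )
```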

As an academic, this is a dream come true for me. My dissertation focused on critical evaluation of online information. The key impetus of this study was that societally we’re moving from print to pixel as we learn, connect, and socialize. Yet, even with these incredible opportunities, we’re not able to effectively consume, critique, and evaluate the information we receive from these sources. There are a number of reasons for this (you can read the whole dissertation if you can’t sleep at night). In the paper, I indicate the following:

Because of the extent to which individuals give significance to online information in their academic and personal lives, the research literature reflects a growing concern about the reliability of these sources (Alexander & Tate, 1999; Flanagin & Metzger, 2000; Browne, Freeman & Williamson, 2000). This concern is partially due to the lack of filters to analyze, critically evaluate and verify accuracy and reliability of information published online (Flanagin & Metzger, 2000; Johnson & Kaye, 1998; Rieh & Belkin, 1998). Additionally, finding traditional quality indicators is either difficult or sometimes impossible (Fox, 2006). Examples of these quality indicators include facts regarding authorship, vetted content information, and revision audit trails (Fox, 2006). Thus, online reading requires a substantial ability to think critically, evaluate information, and judge the veracity of content (Alexander & Tate, 1999; Flanagin & Metzger, 2000; Johnson & Kaye, 1998; Rieh & Belkin, 1998), perhaps even more so than offline reading.

The ability to use IndieWeb tools to create a more informed authorship profile, crowd-sourced content validation, and a list of audits or revisions to my content is invaluable. To me, along with tools like Hypothesis, this is what the future of literacy practices in online spaces should look like.

Pre-print manuscripts

To add some context to my thinking, let me share some recent discussions that I’ve had in my field (education, literacy & technology).

I sit on the Publications Committee of a large organization, and we’ve recently been having discussions about what to do with “pre-print” versions of manuscripts. A “pre-print” is the Word doc or PDF as it exists when you upload the manuscript for review. Some scholars upload this pre-print to an online server, or to their own website. The pre-print has not been peer-reviewed. It may ultimately be rejected, accepted, and/or modified during the review process. Yet this version may still be out there, available online.

As an example, when I submit manuscripts, I upload the manuscript to the system, but I also share the Google Doc I used to write it. I share this openly online in a blog post and mark it with a Creative Commons license. I indicate at the top of the draft that readers are free to leave comments and suggestions, and that the manuscript has been submitted for publication elsewhere. Any comments or suggestions will be added to the final, revised publication (if accepted). If/when the manuscript is published, I link to the publication from the Google Doc and the blog post. The original manuscript is often vastly different from the published version, and I indicate that the original (in the Google Doc) is available as a “Director’s Cut” of my publication.

Because people like me are sharing their manuscripts online, and through “pre-print servers,” publishers and scholars across multiple fields are getting antsy. The discussion is more of a cultural question than a technological one. Publishers are pushing journals and professional organizations to identify how they would like to handle pre-print versions of manuscripts.

One of the key benefits of pre-prints is to speed up the transmission of ideas ahead of formal publication. Most of the delay in transmission comes from the submission/editing/review/publishing process. There are also questions about researchers sharing work that is less than valid or reliable, about ideas or content getting scooped or stolen by others, and about the validity and use of the peer review process.

To qualify as a pre-print, a manuscript needs to be uploaded to a pre-print server. But not all pre-print servers are equal. Once a manuscript is identified as a “pre-print,” there is also a lack of agreement on what should happen to it while it is in the review process…or what should happen if the manuscript is rejected and you submit elsewhere. The latest thinking (from many publishers) is that all manuscripts should be given a digital object identifier, or DOI. Assigning a DOI is common practice when a manuscript is published, but there is some thinking that the pre-print should be given a DOI, and that this DOI should follow the manuscript from pre-print status to (hopefully) a published article.

Publishers are currently pushing journals, and the professional organizations behind these publications, to indicate how they’ll deal with pre-print manuscripts and the possible use of static DOIs. If publishers can get all of the journals and organizations on the same page, they’ll have a system in which a manuscript is given a static DOI upon submission. If the manuscript is published, the DOI follows it. If the manuscript is rejected, then if/when the author(s) submit it elsewhere, they would be asked whether they have submitted the piece elsewhere and whether it qualifies as a pre-print. Does it appear elsewhere online? Does it have a DOI? The new journal you submit to may reject the manuscript immediately simply because the work has appeared elsewhere, or because it has been rejected elsewhere. Once again…this is a cultural decision, not a technological one.
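To illustrate what a “static DOI” that travels with a manuscript might look like in practice, here is a hypothetical sketch of a record whose DOI is assigned once, at pre-print deposit, and then simply accumulates status changes as the manuscript moves through review. The field names, statuses, and example DOI are my own assumptions for illustration, not any publisher’s or registrar’s actual schema.

```python
# Hypothetical sketch: one DOI assigned at pre-print deposit, carried through
# to publication. Not an actual publisher or Crossref schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ManuscriptRecord:
    doi: str                      # assigned once, at pre-print deposit
    title: str
    status: str = "preprint"      # preprint -> under_review -> published (or rejected)
    history: list = field(default_factory=list)

    def log(self, event: str) -> None:
        self.history.append((date.today().isoformat(), event))

    def submit_to_journal(self, journal: str) -> None:
        self.status = "under_review"
        self.log(f"submitted to {journal}")

    def publish(self, journal_url: str) -> None:
        # The same DOI now resolves to the version of record.
        self.status = "published"
        self.log(f"version of record at {journal_url}")


record = ManuscriptRecord(doi="10.xxxx/example.0001", title="Working draft title")
record.log("deposited on a pre-print server")
record.submit_to_journal("Journal A")
record.publish("https://publisher.example/articles/0001")
print(record.doi, record.status, record.history)
```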

One persistent draft

What I would like to see in this process is a way to connect the dots from the beginning to the end of the manuscript. Something open that allows the author to detail the path taken from the genesis of the piece to the end result. This would allow scholars to post grant funding statements, researcher notes, open data, revisions, and other materials, and connect these to the final result. Viewers of the final published version would be able to look back through the links and chain of documentation to see the work embedded in the resulting piece.
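As a rough sketch of what that chain of documentation could look like, here is a hypothetical “persistent draft” structure in which each stage (notes, open draft, pre-print, version of record) links back to the one before it, so a reader of the final piece can walk the whole history. The stage labels and URLs are placeholders I invented for illustration; this is not an existing standard.

```python
# Hypothetical "persistent draft" chain: each stage links back to the previous
# one, so the published version can expose its full history. All labels and
# URLs are placeholders for illustration.
from dataclasses import dataclass


@dataclass
class Stage:
    label: str                       # e.g. "researcher notes", "pre-print", "version of record"
    url: str
    previous: "Stage | None" = None  # link back to the prior stage, if any


def walk_chain(final: Stage) -> list[str]:
    """List every artifact behind a published piece, newest first."""
    chain, node = [], final
    while node is not None:
        chain.append(f"{node.label}: {node.url}")
        node = node.previous
    return chain


notes    = Stage("researcher notes", "https://example.org/2018/researcher-notes")
draft    = Stage("open Google Doc draft", "https://docs.google.com/document/d/PLACEHOLDER", notes)
preprint = Stage("pre-print (DOI 10.xxxx/example.0001)", "https://preprints.example/0001", draft)
article  = Stage("version of record", "https://publisher.example/articles/0001", preprint)

print("\n".join(walk_chain(article)))
```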

This is also a great opportunity to federate with the Transparency and Openness Promotion (TOP) Guidelines, developed by the Center for Open Science along with a group of university researchers, funders, and publishers. The TOP Guidelines are a broad set of standards for open science, articles, and data that may be included in publication. The standards expand article-citation practices so that authors get credit for making clear the data, methods, and materials needed to replicate their work. The guidelines also set expectations for preregistering research plans so that chance, unexpected findings can’t be claimed as meaningful outcomes. This introduction by the TOP Guidelines Committee provides the best overview of the work.

I believe that the same IndieWeb philosophies I’m using to own my own data and connect the dots in my thinking can be used to create a distributed network of data, comments, content, and feedback that leads to a “final” academic piece. IndieWeb microformats and webmentions can be used to connect this network back to the final publication that resides on the publisher’s website. (Hopefully it is available digitally and not locked behind paywalls.) That document would contain a series of links back to all of the underlying content, if the author decided to make it available. The journal could choose to open up the peer review process (much like the great model from Hybrid Pedagogy) and make this feedback available with the final article. Finally, if the author(s) want to rewrite the article, chapter, or book in the future, they can link back to these already-available materials and build upon them, indicating where and how changes were made and why those differences exist.
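Continuing the earlier webmention sketch, each piece of that chain could notify the final publication’s page, so that readers of the version of record see the history behind it. This assumes, hypothetically, that the publisher’s page advertises a webmention endpoint, which most currently do not; the URLs are placeholders.

```python
# Hypothetical: each artifact behind the article notifies the published version,
# reusing send_webmention() from the earlier sketch. Assumes the publisher's
# page accepts webmentions; all URLs are placeholders.
artifacts = [
    "https://example.org/2018/grant-funding-statement",
    "https://example.org/2018/researcher-notes",
    "https://docs.google.com/document/d/PLACEHOLDER",  # the open Google Doc draft
]
published = "https://publisher.example/articles/0001"

for source in artifacts:
    send_webmention(source=source, target=published)
```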

Cultural as opposed to technological

As I’ve noted throughout this piece (and the rest of this blog), there are regular changes occurring to the ways in which we read, write, and communicate. A vast number of these changes are the result of digital technologies connecting and networking us on a global scale. These technologies have the potential to allow us to educate, empower, and advocate. They also have the potential to disrupt and dislocate groups left behind.

There is also a lot of money behind the scenes making these levers move. In this post I spoke a bit (hypothesized) about the motivations of the publishers in these changes. I’m sure someone will jump in with a comment or Hypothesis annotation to correct me. 🙂

There are also many elements here to unpack, and moving targets as we plan for possible futures. This is ultimately a big game, and we’re adjusting the rules as we play. There are positives and negatives on both sides. I see a possibly better future given trends in the places where I’ve been playing, and this is causing me to re-examine and problematize our current conceptions of scholarship, authorship, and identity.

I hope posts like this will engender more discussion as we potentially move forward and examine open scholarship/science across our organization and publications. What do you think about what I posted above? Would this be of value to you as an author? Would you care as a reader? How would this impact your view of science, scholarship, and publishing? I’d love to hear your thoughts. Thanks to these indieweb tools…we can pull all of those loose threads together.

If you valued this post, please consider subscribing to my weekly newsletter. Don’t worry…it’s not all super deep like this post. 🙂
