Better visions of ourselves: Human futures, user data, & The Selfish Ledger

As technology and global connectivity become more ubiquitous in society, we become more accustomed to constant access to our data at little to no cost. As digital technologies connect us, we also realize there is not much difference between the online world and the offline, "real" spaces in which we exist.

In a post-Snowden world we understand that our data and digital footprints are public. We must contend with the possibility that we are under constant surveillance by businesses, governments, and other entities. Our online and offline interactions are woven together into a transmedia narrative that forms different parts of our identity. It follows us as we browse online and as we walk around our neighborhoods and towns.

As our data is collected and connected across online and offline spaces, we're beginning to learn more about what can be done with the information we leave behind. We're learning more about what these entities have done with the data we've willingly (or in some instances unwillingly or unknowingly) shared. Some of what we're learning inspires important reflection and critique, whereas some of what we see is general technopanic. A recent news story has inspired a bit of technopanic (IMHO), while also inspiring me to think a bit more about opportunities for technoagency.

The Selfish Ledger

An internal video, titled The Selfish Ledger, was obtained by The Verge and released to the public. The video was made in late 2016 by Nick Foster, the head of design at X (formerly Google X) and a co-founder of the Near Future Laboratory. The video imagines a future of total data collection, where Google helps nudge users into alignment with their goals, custom fabricates personalized devices to collect more data, and even guides the behavior of entire populations to solve global problems like poverty and disease.

The Verge reached out to Google for more info about the video, and received the following response:

“We understand if this is disturbing — it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”

Before we proceed, and before you go off searching for more info, please take the 9 minutes to watch the clip.

I also recommend reading the post from The Verge in which they announce and unpack the content of the video. This post from Slashgear and this post from Business Insider also help provide some perspective. A quick search will turn up descriptive words like terrifying, creepy, unsettling, and dystopian. In short, most on the Internet are not happy about this recent bit of insight, or possibly foresight, from Google's X Labs.

The video embedded below is also from The Verge, and it does a good job of unpacking everything in a short (7:53) overview.

While it's unclear how Google would go about creating the technology, the implications could be major. The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society's apparent benefit. Privacy concerns, and whether people would feel comfortable with a single company swaying public opinion, would obviously come up if the ledger were ever pitched as an actual product. The challenge is that, for those of us who study Google and use their various products, the DNA of this thinking is already present in many of their tools.

The overall structure and content of the video reminded me of a class I took as an undergrad called Games Thinkers Play. It was a mix of philosophy, creativity, logic, and (what I now recognize as) divergent thinking. The class regularly began with a visual prompt and rarely ended with a common, assured learning experience for the students. Our only "text" for the class was Ode on a Grecian Urn by John Keats. We continued to read, reread, and remix this text throughout the semester. The course was infuriating at times, yet it remains one of the few classes I remember. I now know that our instructor was trying to problematize our established epistemologies and ontologies. He wanted to break our thinking about thinking.

I see a lot of parallels between that course and this video. I think both were meant to be thought projects. Yes, there is a large probability that we could end up in (or are currently in) a dystopian future landscape and that this video was the Rosetta Stone to make sense of it all. I choose to believe that this video serves as an opportunity to think through possible futures.

Digital Residue

As we move across digital spaces, we leave a trail of data, or digital breadcrumbs, showing where we have been and what we've done. As an Internet researcher, I originally believed that this information was an archive of our work and interactions. While working on a publication about digital badges, Daniel Hickey pushed back on this and labeled it a "digital residue": the understanding that you were there, you did something, and this is whatever was left behind or swept up by tools as you passed through.

Questions remain about how this data could or should be collected and used to inform others…or perhaps let us know a bit more about ourselves. This may include social media networks scooping up data about our likes and interactions. This may also include Internet service providers collecting everything about us and then connecting it to more personal information that they keep behind the scenes. We should question where and how these services collect our data, and for what purpose they're using it. We should also question our own behaviors and wonder why we're clicking those links, or whether we ever read all of the policies and terms for a product or app. Social media companies (e.g., Facebook) regularly indicate that they don't steal or sell your data. They give you a nice digital place to interact, and for that you willingly share your information. What they do with that data is their business.

A great deal of transparency and discussion is needed between developers, these corporations/entities, and the "networked publics" that utilize these tools and spaces. danah boyd indicates that in these spaces, networked publics are not just individuals grouped together, but are "transformed by networked media, its properties, and its potential." The interactions, needs, and concerns of these collectives are shaped and modified by the spaces and tools they use to congregate.

The selfish journey

As we learn more about what companies, governments, and other entities are doing (or plan to do) with our data, much-needed questions and dialogue have continued. We also see a population of users who are scared, motivated by technopanic, and expressing desires to quit certain networks or tools. Whether those desires translate into actually quitting is another story. Even more to the point, I do not see a wide-scale desire to get to the root of what society could and should do with these tools. We're also seeing a continuation of the patterns, habits, and dispositions that brought us to this current situation. We need discussion, development, and a fair amount of introspection across all levels of our society to think deeply about the next steps. I don't see this event horizon coming anytime soon. Perhaps this is a journey that has to be conducted individually.

Personally, I have made the decision to recognize the challenges in this model, to switch up my signals, and perhaps to own a bit more of my work. This has me researching IndieWeb philosophies and identifying ways to act as a more informed, active user of these digital spaces and tools. As I undergo this personal journey and (perhaps) reclaim my digital identity, I'm cognizant of what these social networks and tools can (and cannot) do for me. I make decisions about what to share and not share, even as I tend toward openness in all activities I conduct digitally. I also understand that I'm regularly experimenting with my own work processes and products over time. I am experimenting in the open while trying to balance risk.

The learner and the interface

The video brought together a number of thoughts I've had from my work over the last couple of years, as well as things I've learned as I've experimented with my digital identity and signals. Perhaps the video got a lot of things correct in its assessment of social and individual computer interactions.

For one, perhaps we need more "thought experiments" and divergent thinking in our daily activities. Too often, we seem locked into specific actions because "that's the way it's always been done." Perhaps some cognitive or emotional constraints are needed to help us imagine possible, progressive futures.

Perhaps there is also a need for individuals, and for networked publics as a group, to think more about the "could" and "should" involved in our use of technology. Yes, there are certain things that we could do with technology, but that doesn't mean it is always a good idea to do them. Just because we could use technology in a given capacity doesn't mean that we should. Yes, I could buy a Bluetooth earbud to use for phone calls, but just because I could purchase one and wear it around doesn't mean that I should. I've tried it…it's not a good look. Into this bucket I'd add wearable tech (smartwatches), notifications, and screentime infatuation. Just because we could doesn't mean that we should.

Perhaps the video already made a correct assumption about how individuals interact with computers. One aspect of the video that unnerves viewers is its focus on how the data, and other devices, can be used to "control" or motivate users. As a researcher who studies human-computer interaction, I've seen a lot of instances in which devices and digital platforms have already impacted and modified human behavior.

One of the earliest examples I have comes from studying online reading comprehension more than a decade ago. We asked adolescents to use the Internet to search for facts about a topic. In one example, we asked for facts about "Britney Spears", a key figure in pop culture at that time. As researchers, we noted that many participants didn't know how to use the browser: they didn't know to enter the address of a search engine and then type the search terms for "Britney Spears" in the correct place in the tool. Instead, we found it interesting when participants would enter their keywords directly into the browser's address bar. We would comment in presentations about how "silly" the participants were and how they were not "good online readers." Very quickly, we noticed that browser developers began to fold into the tool the ability to detect search terms in the address bar and run a search. The developers noticed instances where the user could not adapt to the tool, so they modified the user interface to make it easier to achieve what the user (or the developer) wanted.

The user, ledger, and behaviors

The key thing that resonated with me from the Selfish Ledger video was the idea behind the ledger: that there is an opportunity to collect and aggregate a lot of data on you, and to learn more about you by comparing your actions and data to those of people just like you. A company like Google already has a lot of information about you. In the video they present new opportunities to suggest devices and tools that will help collect the data about you that they need but do not currently have.

This partnership between user, data, and devices would create "a constantly evolving representation of who we are." Foster terms this partnership a "ledger": a data profile of you that could be built up over time, used to modify behaviors, and possibly transferred from one user to another. This ledger would contain the data on our "actions, decisions, preferences, movement, and relationships." It contains important information that could be passed on to later generations in your family, and if the data is parceled out over time, it could help others learn about themselves as well.

“User-centered design principles have dominated the world of computing for many decades, but what if we looked at things a little differently? What if the ledger could be given a volition or purpose rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information? What if we thought of ourselves not as the owners of this information, but as custodians, transient carriers, or caretakers?”

We already see some of this happening as devices track our steps, spending activity, online reading behaviors, and other content. We're often more alike than we are different. Big data and the crunching of these datasets can identify commonalities in what you're doing, help you understand the connections behind your actions, and nudge you to make better decisions. We already have smartwatches that will nudge you to get up and take a walk during the day. We can see a future where you're sent home from the hospital after a procedure and your smartwatch monitors your decisions and actions to make sure you're abiding by your aftercare instructions.

The learner and the ledger

The idea of the ledger is not new to me at all. A couple of years ago I was working with a blockchain research group and we were discussing the opportunity to develop a learning ledger. Specifically, I was inspired by our thinking about the possibilities associated with distributed ledger technologies, and by Serge Ravet's thinking about a connection between ePortfolios and open badges.

Our problem was that learners were increasingly dumping their educational history and documentation of learning into silos, or not saving it at all. More interesting to me was the failure to focus on growth in learning, as opposed to presenting an identity in a resume. To make this point clearer: we learn how to write a resume by starting with a statement of purpose, or an indication of the job we'd like to have. Yet once we are grown and out in the workforce, we keep this hidden. We rarely indicate the things we'd like to learn and do. We don't share the MOOCs we're taking, the clubs we're involved in, or the time spent "moonlighting" to change careers. We ignore the really interesting parts of our learning, the parts that share who we perhaps really are.

In our idea of a personal learning ledger, we'd bring this back and allow individuals to identify what they're doing and where they're going. We'd allow them to remain in control of their data and information. We'd allow them to pull together the strings and digital breadcrumbs they've left behind, and let others see the learning pathways laid down over time. It was our hope to keep this infrastructure entirely in the control of the individual as they dictate the identity or identities they want to project.

This thinking about the learning ledger is present in the video from Google, but I think they got some things wrong. The video details the idea of the ledger and an opportunity not just to share information to guide the individual user, but to collect information from multiple sources to guide that user's actions and decisions.

I agree with the thinking behind this ledger, but I do not agree with how it is situated in the video. I see an opportunity for the individual to determine what information comes into the ledger and how it is displayed. As an example, each of the arrows pointing into the ledger could be a stream of information from your website, Twitter feed, Strava running app, or any other metrics you'd like to add. Each of these would come in with modified read/write access and sharing settings from the originating app/program/service. As the individual, you'd be in control of dictating what you present, and how you present this information, in your ledger.
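
To make this concrete, here is a minimal sketch of what a user-controlled ledger could look like in Python. It assumes nothing about Google's or anyone else's implementation; the class and field names (PersonalLedger, StreamConfig, and so on) are hypothetical and invented for illustration. The point is simply that the owner decides which streams may write entries and which entries are visible to others.

    # Hypothetical sketch of a user-controlled ledger; all names are illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class StreamConfig:
        """One incoming 'arrow': a source the owner has connected to the ledger."""
        source: str                   # e.g. "website", "Twitter", "Strava"
        can_write: bool = True        # may this source push new entries?
        visibility: str = "private"   # "private", "followers", or "public"

    @dataclass
    class LedgerEntry:
        source: str
        summary: str
        created: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class PersonalLedger:
        owner: str
        streams: List[StreamConfig] = field(default_factory=list)
        entries: List[LedgerEntry] = field(default_factory=list)

        def accept(self, entry: LedgerEntry) -> bool:
            """Record an entry only if its source has been granted write access."""
            allowed = {s.source for s in self.streams if s.can_write}
            if entry.source in allowed:
                self.entries.append(entry)
                return True
            return False

        def public_view(self) -> List[LedgerEntry]:
            """Return only the entries the owner has chosen to make public."""
            public = {s.source for s in self.streams if s.visibility == "public"}
            return [e for e in self.entries if e.source in public]

    # Usage: the owner lets Strava write but keeps it private, while the website is public.
    ledger = PersonalLedger(owner="learner")
    ledger.streams.append(StreamConfig(source="website", visibility="public"))
    ledger.streams.append(StreamConfig(source="Strava"))
    ledger.accept(LedgerEntry(source="website", summary="Published a post on IndieWeb"))
    ledger.accept(LedgerEntry(source="Strava", summary="Ran 5k"))
    print([e.summary for e in ledger.public_view()])  # only the website entry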

Perhaps you want to showcase the fact that you've earned digital badges and completed some MOOCs on your ledger. You could decide to put those above, or perhaps below, your credentials from more "established" learning institutions. You might also showcase the reviews you've left on GoodReads, or the fact that you're an award-winning online DJ. The user would have total control over how they present this information. This would ultimately be an open mix of LinkedIn and Reddit. The Reddit side of the equation comes from connecting to others through a distributed network of connections. That's another post for another day. 🙂

The video continues to suggest that this ledger you create could be shared with others and could inform the community about your decisions and actions. An example of this would be algorithms warning you and others that people who weigh X, eat Y, and do Z will generally have {enter malady here}. This sharing of your ledger, as the video's title suggests, is an homage to Richard Dawkins' 1976 book The Selfish Gene: a digital accounting of your traits, or "a constantly evolving representation of who we are." This ledger is a data profile that could be built up, used to modify behaviors, and transferred from one user to another.

I see this a bit differently. I see this as an opportunity to share an archive of your learning and thinking over time. When you meet someone for the first time, they're encountering you "in the middle of the story." Many times they don't know where you're from or the experiences you've had. Many of us search for people online before we meet them for the first time. This may result in a broken, disparate view of who they really are, depending on where you look and what you have access to. Development and control of your own personal learning ledger allows you to dictate what elements you want to foreground. You can connect the dots and make it easier for people to see how things fit together.

This sharing of your learning ledger also helps make granular the learning pathways you took to get to where you are. As other individuals consider how to get to a specific career, they want specific advice. This is a challenge, as students are currently studying for careers that don't yet exist. Your documentation of "how I got to here", and the pulling together of common threads across pathways and ledgers, would be of value for all. This is possible (I think) via distributed ledger technologies.
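
To show how distributed ledger technologies might support this, here is a minimal, hypothetical sketch of learning records chained together in an append-only list, where each record points back at the one before it so a pathway can be shared and verified. The field names, helper functions, and URLs are illustrative assumptions on my part, not anything specified in the video or by Google.

    # Hypothetical sketch of an append-only, hash-chained learning pathway.
    import hashlib
    import json
    from datetime import datetime, timezone

    def entry_hash(record: dict) -> str:
        """Deterministic hash of a record's contents."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append_record(chain: list, description: str, evidence_url: str) -> dict:
        """Append a learning record that points back at the previous one."""
        previous = chain[-1]["hash"] if chain else None
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "description": description,   # e.g. "Completed a MOOC on data ethics"
            "evidence": evidence_url,      # e.g. a link to a badge or portfolio page
            "previous_hash": previous,
        }
        record["hash"] = entry_hash(record)
        chain.append(record)
        return record

    # Usage: a learner builds up a verifiable pathway over time.
    pathway = []
    append_record(pathway, "Completed a MOOC on data ethics", "https://example.org/badge/123")
    append_record(pathway, "Presented at a local meetup", "https://example.org/talk")
    print(pathway[1]["previous_hash"] == pathway[0]["hash"])  # True: the records are linked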

Possible better human futures

If you've made it to the end of this long read, I appreciate it. When I first saw the Selfish Ledger video, I was intrigued, inspired, and cautious. I was also struck by the heavily negative reaction from many online. I do understand (and agree with) the cause for concern about the contents of the video, the author of the video (Google et al.), and the current discourse around privacy/security/data. I just think that we need to treat this as a thought project and be a bit more thoughtful about possible futures.

In the book Seveneves, Neal Stephenson coins the term Amistics: the study of the choices made by different cultures as to which technologies they would embrace or spurn. Perhaps there is a need to problematize our epistemologies and ontologies about tech and our processes. Perhaps we need to break our thinking about our use of tools and how we're supposed to use them. Perhaps there is a need to think more about the could and should in our use of these texts, tools, and spaces.

I think there is a reasoned response to technopanic. Perhaps a sense of technoagency is necessary. Now more than ever, faster than ever, technology is driving change. The future is an unknown, and that scares us. However, we can overcome these fears and utilize these new technologies to better equip ourselves and steer us in a positive direction.

 

If you enjoyed this, you should subscribe to my weekly newsletter as we try to make sense of the future.

Cover image credit

Comments

  1. hannahgerber

    Fascinating concept (I admit I am one who is frightened by the thought-experiment of the Selfish Ledger). The idea of user control of the data to me is the epicenter of what we need to argue more for (but therein this comes with a host of different issues, in other ways).

  2. hannahgerber

    BTW, I have been meaning to respond to your email…has been an insane month with office moves, classes ending, and teaching minimester, but a response is coming this week:-)

  3. Pingback: Aaron Davis
