TL;DR Version: This post shares my thoughts about the "Facebook Mood Experiment" and how this relates to online research and the IRB. Finally, I conclude with possible next steps for a free, safe, transparent Internet.
Over the weekend, news of the Facebook “mood” experiment was revealed. The results of the study had been published, and from there the news spread like wildfire across the Internet. You can read the full research report here.
If you haven’t been following the reporting and opinions, please review the following links:
I’ve shared a lot about this on social media, and have some thoughts about the broader elements of the work, not just the research itself.
It’s “good” research. I’m an educator, scholar, and researcher who studies the literate practices of individuals in online and hybrid spaces. This sort of work is fascinating, and it calls for more work to be done. I’ve heard many people dismiss this as “stupid research” or an “idiotic question.” I think nothing could be further from the truth. We need to learn more about these online spaces and the effect of multimodal information (images, video, audio, text, etc.) on individuals. This is challenging, considering that this understanding is gained in an environment that is constantly changing. I think the research focus is intriguing, even though the methods and instrumentation are problematic, and there are much BIGGER concerns beyond the research itself. Also…I’d love to have a sample size of almost 700,000 people.
Review and the Institutional Review Board (IRB). As I’ve stated above, I conduct research in online spaces. Much of this research involves vulnerable populations (students, minors) engaged in social scholarship practices online. In all of these situations, ALL of them, I have had to submit protocols to my IRB. They in turn review my research and must approve it before I am permitted to collect data. Having dealt with the IRB panel many times, I know what questions to anticipate and what answers to provide. I should also note that I’m a former middle and high school teacher, and as a result I always have the best interests of students at heart. Still, as I’ve noted on this blog many times, I have concerns about the future of literacy in online spaces, and the potential for harm to my students, and sometimes my research participants. That is why I work with my IRB before I conduct research: to ensure that I am protected, my university is protected, and my participants are protected.
The details about the “Facebook Mood Study” are still very vague, but it appears that the study never went through an IRB review process. Supposedly, it was reviewed by an internal panel at Facebook. The details are murky, but I would like to focus on what this means for research review and the IRB. Conducting research in online spaces is terribly fascinating and also very challenging. There is a lot of “noise” that envelops the online environment, and it can be tough not only to understand what is happening, but also to educate individuals so they can make informed choices.
As I’ve stated many times, the general user or student is technologically savvy (they may have the toys and tools), but not informationally savvy (they don’t know what to do with them). I think this naiveté extends to the review process as conducted by many IRB panels. In my opinion and experience, IRB panels trust that researchers will strive to “do no harm” to participants. I also believe that IRB panels (at least the ones I’ve dealt with) do not fully understand the complexity of the online space, and therefore do not know what questions to ask, or what to approve or prohibit. I would even suggest that the IRB review process may be obsolete, or at least an artifact of a bygone era that needs to be reexamined. An example comes from my dissertation research, in which I had adolescents creating hoax websites and posting them online. The fact that my students could create texts and post them online with whatever images, videos, or text they chose did not concern my IRB office. They were concerned with the fact that I would interview students and video-record those interviews for data analysis. My IRB office wanted to know what would happen to those videos, and where they would be stored.
This might be taken as an indictment or criticism of IRB panels and the process. It is not. I have the utmost respect for the IRB process, and for the individual members of the panels I’ve dealt with. Many times they conduct herculean work with little support. I also find the IRB to be more than helpful in thinking through research design and procedures. I merely hope that the dialogue created by the Facebook Mood Experiment will cause researchers, scholars, and IRB panels to review and tighten up the review process itself. Throughout history, research that raised serious questions about the ethical treatment of subjects is precisely what brought about the need for the IRB. I hope that this episode spurs more debate about codes of conduct for researchers and protections for participants.
Lab rats in online networks. This is not the first time, nor will it be the last, that we’ve seen this from Facebook. It seems like Facebook routinely tweaks a setting, changes privacy rules, or takes advantage of users, and then says they’re sorry when they get caught. We’ve seen this again and again. The blogosphere was quick to point out that this is covered in Facebook’s terms of service; however, it recently came to light that the “research” clause was added four months after the data for the “Mood Study” was collected. One thing should be clear: Facebook is a corporation, and it bears a responsibility to shareholders to make money. As much as I would hope that they would “protect” the information they have on me, my wife, and my family…they couldn’t care less. They have a product, and they need to improve that product. I’m also not naive enough to think that Facebook is the only group doing this. We should be less worried about this recently published study, and more concerned about the work Facebook and other businesses and governments are conducting that we don’t know about.
One thing we should learn from this is that we are beholden to the corporation or programmer whenever we use these online or cloud-based tools. Not only are businesses and corporations toying with us in their “sandbox”, but we are also being studied by businesses, governments, and unknown entities. Thanks to the work of Edward Snowden, we realize that nothing is lost, private, or forgotten online. The ultimate question becomes: What future do we want and expect in online spaces?
I view this as being a symptom of a much bigger problem. I detailed my thoughts on this in an earlier post and discussion thread. I think it calls for dialogue and a declaration of Internet freedoms and liberties for all online citizens. In truth, I don’t know what the answers ultimately are, but I do know that we need to start asking the questions and work together to fight for what is right.
- Identify the problem
- Name the problem
- Visualize the problem
- Find a focus point
Thank you, Facebook, for (hopefully) providing us with a common example to discuss the problem. Hopefully now we can organize online and create a network that is free, safe, and transparent for all.
Image CC by pookstar