I’m slightly late catching up on the content for Week 8: Research, as it’s such a wide and important topic in UX that I spent time reading down various rabbit holes. One of the reasons I was attracted to the field of user experience design is that, as I understand it, UX research is about understanding human behavior and challenging our own perceptions. As a designer I want to create products that meet real human needs and that people find easeful and pleasurable to use, so developing processes to involve users in my design process will be an essential part of my transition from graphic designer to UX designer.
Facebook, a Human Laboratory
First stop, design ethics. We were encouraged to read into Facebook’s controversial research on emotional states, which was interesting to revisit considering the growing mistrust of various tech giants and their recent public security breaches. The paper, “Experimental evidence of massive-scale emotional contagion through social networks,” was published in the Proceedings of the National Academy of Sciences in 2014, and sparked an onslaught of polarising opinion pieces online. It’s important to realize that these kinds of studies are very common (we just don’t hear about them), whether they’re A/B testing content types to gauge engagement, or trialing ads according to your demographic. The difference in 2014 was that the study was specifically designed to gain insight into users’ psychology, and that the research was published. It also provoked questions of potential harm to vulnerable users, due to the nature of emotional contagion.
“It’s a charming reminder that Facebook isn’t just the place you go to see pictures of your friends’ kids or your uncle’s latest rant against the government, it’s also an exciting research lab, with all of us as potential test subjects.” – (Hughes 2014)
Researchers can never wholly guarantee safety, and participants must therefore be aware of the risks and accept them before taking part in research (Shaw & Barratt 2006). Yet in the case of this Facebook study, participants were not aware that they were being used as test subjects, hence the ethical questioning. In academic research, informed consent is regarded as best practice – the notion that research participants should be provided with the information needed to make a meaningful decision as to whether or not they will participate. Yet in online spaces, where both researcher and participant retain a sense of anonymity, there is more room for exploitation. Facebook may have taken a relativist approach, deciding that the ends justified the means. They showed that emotional states can be transmitted across social networks, proving that online spaces are optimal sites for large-scale emotional contagion. That’s great news for Facebook data scientists hoping to prove a point about modern psychology, but not such good news for users who are unaware of the ways in which their emotions are being manipulated as they scroll.
Manipulated by Design
I agree with Tal Yarkoni when he says, “There aren’t any new risks introduced by this manipulation that aren’t already dwarfed by the risks associated with using Facebook itself” (Yarkoni 2014).
Facebook filters how we experience our social life, so it’s manipulated by design. In this case users were randomly selected by an algorithm, and some had their feeds tinkered with to show ‘less positive content’, resulting in them posting ‘less happy’ content themselves. The changes were subtle, using content drawn from the users’ own news feeds, which they may well have seen anyway, so some argue that Facebook committed no harm. If we step back from our gut reactions, it’s important to realize that these kinds of studies are exceptionally common; they’re not illegal, nor are they particularly unethical (Leeper 2014). Yarkoni also argues that the study was only classed as research because Facebook decided to publish it, and that thousands of experiments just like this are conducted every day by large companies, unseen and therefore not subject to scrutiny.
I understand why people may be upset to hear that their emotions were manipulated by Facebook as part of a study to find out whether ’emotions are contagious’. Facebook has time and time again proved itself to be untrustworthy; we should know by now that it doesn’t exist to facilitate our wellbeing. Since the study was published, Facebook has been caught up in various other controversies regarding apparent misuse of the platform, influencing elections and spreading disinformation, with widespread effects.
In 2021 Facebook released a 44-page disinformation report, which it called a study of “coordinated inauthentic behavior.” Facebook’s head of cybersecurity policy defined the term as “when groups of pages or people work together to mislead others about who they are or what they’re doing” (First Draft 2021). In the report Facebook discloses the ways in which it has developed its security teams, policies, automated detection tools and enforcement frameworks to tackle ‘deceptive actors’. This shows the global scale and recent growth of such deception, and is a reminder that Facebook has created a tool which is often misused to quietly impact our emotions and effectively shape our ideas, political opinions and desires.
Ethics in online spaces
I found that revisiting the controversy around Facebook’s 2014 study was an effective starting point from which to delve into theories around the ethics of social research. Reading the wide-ranging discussions of this study among academics and journalists working in both technology and science introduced me to the parallels between digital research and scientific research. In scientific research, regulations such as the Belmont Report exist to provide a framework for ethical practice; the 1979 report was designed to outline potential threats to human values in the realm of investigation. However, medical research and social/behavioral research often merit fundamentally different ethical discussions (Leeper 2014).
Conducting research in digital spaces affords a certain level of anonymity to both parties, yet the researcher often holds more power, particularly in cases where the participant is unaware. Yadlin-Segal et al. use the term ‘lurking’ to describe the lower visibility afforded to researchers in online spaces, allowing them to remain unseen (2020). They argue that ethnographers of digital contexts should introduce themselves and their research as an ideal and responsible methodological practice. However, they admit that lurking is not usually viewed by academics as unethical practice, which could be why some scholars agreed that Facebook’s actions in the 2014 study were valid. The ever-changing nature of digital networks provides a fragmented, multi-faceted arena for research ethics and questions of privacy. The digital context complicates notions of authorship, anonymity and ownership.
Yadlin-Segal et al. suggest that when considering the ownership and creators of online texts as an ethical matter, addressing solidarity is just as important as contemplating the legal aspects of the issue. In other words, when conducting ethnographic research in online spaces we should remember the humanity of the people involved, and consider the possible effects of the research methods and the subsequent exposure of data. At this point we need to turn to our reflective practice; it’s essential that we observe our biases and the social structures that influence our ways of thinking so that we can ask the right questions and avoid causing harm to participants.
Conclusion & learnings
The ethics of research is a huge, complex subject to explore, so the past few weeks have been merely an introduction. As I mentioned, using the specific case of the Facebook controversy was a useful starting point, as it touched on informed consent, ethical questions in scientific research, and privacy & ownership in online spaces. Since most of the research I will be conducting as part of my Masters will be online, I see that it’s of utmost importance that I get familiar with best practice as an academic researcher in online spaces.
Through this module I’m starting to realise that developing my own personal research methodology, based on accredited practices and on my own values and goals, is an integral part of developing as a UX professional. I have my own style when it comes to design, which has developed through years of arts & design studies and my own professional career. Part of my learning journey will therefore include reading and reflecting on research practices which interest me.
In light of this I have bookmarked a few items for further reading:
- The article ‘Research governance: regulating risk and reducing harm?’ as a further introduction to ethics in medical research.
- I’d like to explore the references from the paper ‘The ethics of studying digital contexts’, particularly those which relate to the field of UX.
- The book ‘Researching the Vulnerable’ to gain a perspective on researching sensitively with marginalised groups.
Another issue which occurs to me when starting to compile references and research sources is the question of diversity within the texts I refer to. It’s very common for academic texts to come from hegemonic sources; scientific journals and academic texts are overwhelmingly from American or European publishers. Women publish fewer scientific articles than men, and female authorship internationally had only reached 33% by 2016 (Bendels 2018). Simply put, I don’t accept the default of primarily white, middle-class, older men as the voice of academia, and so part of my practice will be going beyond traditional sources to discover diverse, international voices in the fields that I’m researching. The following sources are my first port of call:
CALLAHAN, Kristin. 2015. ‘Empowering Designers Through Critical Theory’. Chicago: The Third International Conference for Design Education Researchers. Learn X Design: Lewis University and Northern Illinois University.
Design & Culture. The Journal of the Design Studies Forum. Available at: https://www.tandfonline.com/doi/full/10.1080/17547075.2020.1794368
FREIRE, Paulo. 1974. ‘Education for Critical Consciousness’. London: Continuum.
First Draft. 2021. ‘Facebook’s sprawling report on disinformation’. First Draft [online]. Available at: https://firstdraftnews.org/articles/facebooks-sprawling-report-on-disinformation/ [accessed 13th August 2021]
GLEICHER, Nathaniel & FRANKLIN, Margarita. 2021. ‘Threat Report: The State of Influence Operations 2017-2020’. Facebook [online]. Available at: https://about.fb.com/wp-content/uploads/2021/05/IO-Threat-Report-May-20-2021.pdf?mc_cid=89da5b8593&mc_eid=63b8f7c7c3 [accessed 13th August 2021]
HUGHES, Thomas. 2014. ‘Facebook Tinkered With Users’ Feeds for a Massive Psychology Experiment’. AV Club [online]. Available at: https://www.avclub.com/facebook-tinkered-with-users-feeds-for-a-massive-psych-1798269841 [accessed 22nd July 2021]
LEEPER, Thomas. 2014. ‘Science, Social Media, and the Boundaries of Ethical Experimentation’. Thomas Leeper [online]. Available at: https://thomasleeper.com/2014/06/facebook-ethics/ [accessed 3rd August 2021]
SHAW, Sara & BARRATT, Geraldine. 2006. ‘Research governance: regulating risk and reducing harm?’. Journal of the Royal Society of Medicine, Volume 99. Available at: https://journals.sagepub.com/doi/pdf/10.1177/014107680609900109 [accessed 13th August 2021]
YARKONI, Tal. 2014. ‘In Defense of Facebook’. Personal Blog [online]. Available at: http://www.talyarkoni.org/blog/2014/07/01/in-defense-of-in-defense-of-facebook/ [accessed 3rd August 2021]
YADLIN-SEGAL, A., TSURIA, R. & BELLAR, W. 2020. ‘The ethics of studying digital contexts: Reflections from three empirical case studies’. Human Behavior and Emerging Technologies, 2, 168–178. Available at: https://doi.org/10.1002/hbe2.183 [accessed 13th August 2021]
Fig 1: GLEICHER, Nathaniel & FRANKLIN, Margarita. 2021. ‘Threat Report: The State of Influence Operations 2017-2020’. Facebook. Available at: https://about.fb.com/wp-content/uploads/2021/05/IO-Threat-Report-May-20-2021.pdf?mc_cid=89da5b8593&mc_eid=63b8f7c7c3 [accessed 13th August 2021]