Prioritising Mindful Consumption of Information

The Problem
Modern social media is built for speed. It prioritizes first impressions, reactionary reshares, and emotional comments over deliberate consideration, lateral reading, and mindful interactions with others.

While this emphasis on fast interactions increases user engagement (and thus exposure to advertising), it may also contribute to the propagation of mis- and disinformation across the platform and to society at large.
The Solution
As a team, we were interested in providing a social media experience that prioritizes slower introspective and analytic processing over fast intuitive reactions.
Final Report


As part of the class on “Addressing the Challenges of Online Mis- and Disinformation” in Fall 2021, I was fortunate enough to learn from one of the most amazing experts in disinformation, Dr. Kate Starbird.

Along with learning a lot of theory, we paired up in teams to lay the foundation for a potential solution to one aspect of online mis- and disinformation.

Making information systems trustworthy is a deeply complex problem, spanning fields like policy, platform design, and media literacy (to name a few), and there isn’t going to be a magical silver bullet that solves it all.

Keeping all this in mind, and within the scope of a single quarter, our team decided to focus on platform design as our approach.

Why do we believe lies?

There is no doubt that in the information spaces we access today (social media, discussion forums, etc.), the line between what is accurate and what is false is increasingly blurry.

Something that stood out to us while trying to understand what makes false information feel true was a quote from this paper: disinformation moves fast, because people share fast.
After a lot of discussion, I realised that this would be a worthwhile area of inquiry, and we set a few outcomes we wanted our solution to lead to.
Desired Outcomes
  • Thoughtful Consumption and Posting: Nudge users to consider the veracity of information and utilize System 2 thinking
  • Slow Down: Reduce the viral spread of misinformation and disinformation
  • Digital Wellbeing: Improve impact of social media on individuals and communities

Point of Intervention

Even within social media, there are multiple places where interventions can take place, so we wanted to analyse where a potential solution would have the most impact.
The social media engagement loop
We spent a lot of time coming up with an accurate representation, and even longer arguing about the best point to intervene.
I found myself leaning towards focusing our intervention during the scrolling phase.
Different people on the team had strong opinions about where we should focus; these are the reasons I felt scrolling would be the best point of intervention.
It’s Early in the ‘Journey’
Using social media isn’t nearly as linear as we’ve shown here; there are loops that feed from one activity into another. The longer people stay in loops of mindlessly sharing and reacting to things, the easier the current design of platforms makes it to slip into System 1 thinking.
Scrolling Lower is Bad
One way platforms deal with information that is problematic but doesn’t meet the bar for removal is by demoting it to a lower position on the feed, in the hope that most people won’t come across it. However, this means the lower you scroll, the more likely you are to encounter misinformation.
We Can’t Wait till the Last Minute
Intervening at the moment right before a user is about to share something might not be as effective. Often, people have already made up their minds about a piece of information by the time they first come across it. The key is to already be in a critical state of mind when encountering information for the first time.
After brainstorming a bit, the team settled on the top three interventions and then decided to put them to a vote. These interventions are platform-agnostic and can be applied to any platform.
Top 3 Interventions
  • Scrolling Interface (4 votes)
    Interventions which change the appearance and interaction design of the scrolling experience, such as introducing friction (slow scrolling, reduced text density), decreasing fidelity (desaturated colors), unexpected affordance changes (moving/resizing buttons)
  • Behavior-Based Limiting (3 votes)
    Interventions which restrict or deny a user the ability to perform possibly problematic actions based on behavior in the user's current session. Examples include daily or hourly thresholds of posts or shares; and/or frequency limits (time since last share, etc).
  • Mindful Re-Share (2 votes)
    Interventions which prompt users to enter a more mindful state before taking an action, composed of affordances that allow people to set their intentions before engaging, engage in sharing, and then reflect on the effect their reshares have caused.
The three interventions mapped to the user’s journey
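To make the second intervention concrete, here is a minimal sketch of behaviour-based limiting as it might work in a client: reshares are blocked once a user exceeds a per-session cap or shares again too quickly. The function name and the threshold values are illustrative assumptions, not part of our actual prototype.

```javascript
// Sketch of behaviour-based limiting: deny a share when the user has hit a
// per-session cap, or when too little time has passed since the last share.
// maxShares and minGapMs are hypothetical tuning parameters.
function makeShareLimiter({ maxShares = 10, minGapMs = 60_000 } = {}) {
  let count = 0;
  let lastShareAt = -Infinity;

  // Returns true if the share is allowed, false if it should be blocked.
  return function tryShare(nowMs) {
    if (count >= maxShares) return false;             // session cap reached
    if (nowMs - lastShareAt < minGapMs) return false; // sharing too fast
    count += 1;
    lastShareAt = nowMs;
    return true;
  };
}
```

A real platform would of course persist these counters server-side and tune the thresholds per user; the point here is only that the intervention is a small, stateful gate in front of the share action.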
From here we down-selected to the intervention with the most votes: scrolling. We then wanted to test it with an experiment and, for convenience, chose Twitter as the platform to execute it on.

Designing the Experiment

We wanted to look at how many people view information in an analytical/critical nature before they share it, so we based our scenario on the “Noah’s Ark”/“Moses’ Ark” riddle used in similar experiments.

The riddle looks at how many people notice when they come across an incorrect sentence like:
“How many animals of each kind did Moses take on the Ark?”
Most people respond “Two”, despite knowing that Noah, rather than Moses, was the biblical actor.

This is what the experiment would entail:
  • The experiment would first have a news feed similarly populated with “Noah” and “Moses” objects.
  • I would then build a prototype around a feature (fading the content to grayscale over time) and a scenario (an activity)
  • This activity is what a participant will perform with the prototype as part of the experiment, to simulate social media behaviour.
  • Each time they came across a certain artefact, the participant would be required to perform a particular action. At the end, we would measure how successful the treatment group was compared to the control group.
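The feature in that prototype, fading content to grayscale the further a user scrolls, boils down to mapping scroll depth onto a CSS saturation level. The sketch below shows one way this could work; the `fadeStart`/`fadeEnd` breakpoints are hypothetical tuning parameters, not values from our prototype.

```javascript
// Map how far the user has scrolled onto a saturation level in [0, 1],
// so feed content gradually fades to grayscale lower down the page.
// fadeStart/fadeEnd (in pixels) are illustrative assumptions.
function saturationForDepth(scrollY, fadeStart = 800, fadeEnd = 4000) {
  if (scrollY <= fadeStart) return 1; // full colour near the top of the feed
  if (scrollY >= fadeEnd) return 0;   // fully grayscale far down the feed
  return 1 - (scrollY - fadeStart) / (fadeEnd - fadeStart); // linear fade
}

// In a page, this would be wired to a scroll listener, e.g.:
// window.addEventListener("scroll", () => {
//   feed.style.filter = `saturate(${saturationForDepth(window.scrollY)})`;
// });
```

The linear fade is just the simplest choice; an eased curve, or a fade keyed to time-on-feed rather than pixels, would fit the same structure.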

Finding the right artefact

Baby Images detectable by System 1 thinking (left) and System 2 (right)
I would have to find similar analogies for text, as well as emoji versions of the same. This exercise was also challenging since, along with each artefact having a “Noah” and a “Moses” equivalent, it also had to satisfy the following criteria:
  • Apolitical (eg. avoid Trump tweets or partisan issues).
  • Something which did not evoke an emotional response (eg. anger, sorrow).
  • Something which works across text as well as images.
  • Something which would not feel out of place on people’s social media feeds.
  • Something with lots of easy/hard identifiable images online.
One of the things that fit all the criteria above was (drumroll please)... cats!

Cats are hidden in negative space or camouflaged
The word cat is hidden in the user name or tweet

Conducting the Experiment

Based on this, I made three different Twitter feeds and populated them with real-looking content. To help us measure, I made a legend marking the “Noah” pieces in green and the “Moses” pieces in red (with intensity varying by level of difficulty).
Users were first given a practice version so they got the hang of it; then half of the participants used the control version and the other half used the treatment.
The tweets in the control and treatment groups were identical, except for the colour desaturation as you scrolled lower.

What did the numbers say?

Our pilot study tested eight people on a find-the-cat exercise in a between-subjects experiment, in which the treatment group received the fade-to-gray intervention while the control group received the same social media feed in full-color.

Looking at the quantitative analysis from our experiment, we weren’t able to find conclusive results.
Quantitative Results of the Control Group
Quantitative Results of the Treatment Group
Statistically, the results are nearly the same. We did see some promising results when analysing the feed on a tweet-by-tweet basis, particularly in participants’ ability to accurately identify the hard “Moses” tweets.

However, this variation in accuracy and completion time could be attributed to variations in participants’ familiarity with social media in general and Twitter in particular. Testing more subjects and controlling for these variables would let us identify and remove outliers and arrive at more statistically meaningful results.
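With a larger sample, one simple way to check whether the accuracy gap between groups is real would be a permutation test on per-participant accuracy scores. This is a sketch of that analysis only; the arrays in the usage example below are hypothetical placeholders, not our pilot data.

```javascript
// Permutation test: how often does a random relabelling of participants
// produce a between-group accuracy gap at least as large as the observed one?
function permutationTest(control, treatment, iterations = 2000) {
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  const observed = Math.abs(mean(treatment) - mean(control));
  const pooled = [...control, ...treatment];

  // Fisher-Yates shuffle, so every relabelling is equally likely.
  const shuffle = xs => {
    const a = [...xs];
    for (let i = a.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [a[i], a[j]] = [a[j], a[i]];
    }
    return a;
  };

  let extreme = 0;
  for (let i = 0; i < iterations; i++) {
    const s = shuffle(pooled);
    const diff = Math.abs(
      mean(s.slice(control.length)) - mean(s.slice(0, control.length))
    );
    if (diff >= observed) extreme++;
  }
  return extreme / iterations; // p-value: chance the gap arose by luck
}
```

With only four participants per group, almost any p-value is uninformative, which is exactly the small-sample problem described above; the same code becomes meaningful once the sample grows.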

What did people say?

All participants with the experimental treatment noticed the colors desaturating. Some found this made it easier to perform the task. Participants mentioned the desaturation made them examine the content more closely in order to complete their task.
"Grayscale [may] have made it easier to find cats" -P5

What does this mean?

Although this pilot study suggests our prototype is usable and that our basic experiment design is sound, we cannot conclude that the treatment increases or decreases accuracy or time-to-complete in a statistically significant manner. We would need to test a larger population and control for prior social media experience and content-sharing behaviours in order to improve the experimental design used here.

While our study results were not conclusive, they suggest that the effects of modifying visual design elements of social media apps warrant further investigation.

Learnings and Takeaways

This quarter felt extremely fast and super slow at the same time. Going into it, I had no idea I would be able to learn such a vast amount in such a short time.

I still go back to some of the papers we covered in this class. I can say that, despite the outcome of this project, what I learnt in this class is going to stay with me for a lifetime. I'm also looking forward to having that knowledge bank grow with time, given how complex this space is.
Move Fast, Prototype Things
  • The more prototypes I built, the more confidence I gained. I knew we probably weren’t going to get everything right the first time (spoiler: we didn’t). But I also knew how much we could learn when we got things wrong.
Scope Well
  • Given that we only ended up with ~15-20 hours of total work time, looking back, the area of inquiry we chose was definitely biting off more than we could chew.
  • Given how much of a wicked problem misinformation is, my team and I would definitely have benefited a lot from choosing a much smaller point of intervention.
There are no low-hanging fruit
  • We have a tendency to think there’s probably a silver bullet which will neatly solve all the problems within a space (here it often takes the form of things like fact-checking).
  • But spending a little longer thinking about these ideas surfaces many potential avenues for misuse, and often even an existing study on why the intervention isn’t effective.
I'm extremely passionate about misinformation. If you are too, say hi!