In our time, we are losing our capacity to make sense of the world. Sense-making permits people to identify their values and strive to realize them in the world. In this paper, I argue that ubiquitous digital media technologies restrict our ability to make sense of the world and thus limit our power to protect what we value in the near and long term future. We must safeguard sense-making by redesigning digital media platforms and changing the way we use them in our daily lives.
Several of today's most pressing social and political problems stem from the fact that people do not have access to shared, accurate information about the world. For example, as political polarization has progressed, formerly centrist news media networks have gradually entrenched themselves in deeply partisan corners.[1],[2] Many partisans now feel they can trust only a limited set of news media outlets, because all other sources seem problematic, unfairly biased or too low-quality to warrant attention. And when politically relevant content starts to differ significantly from outlet to outlet, these trusting subscribers' political views diverge along with it, deepening polarization overall. Relatedly, fake news constitutes an enormous problem because it disrupts citizens' shared sense of truth and reality. Because people tend to entrench themselves in their beliefs when contradicted, widespread fake news magnifies the severity of epistemic disagreements, both in the public sphere and in our institutions of governance. Finally, in the years of the Trump presidency and COVID-19, many citizens have lost trust in our society's institutions of knowledge and government. Without agencies and institutions to rely on for truthful knowledge, citizens have tried to make sense of research literature, journalistic sources and popular opinion on their own; and because partisanship and fake news are so widespread, these self-directed citizens have struggled to pin down what is true and what is useful in a sea of conflicting information. These complex and interrelated problems— 'polarization,' 'fake news,' and 'institutional distrust'— are caused and exacerbated by the fact that most members of society increasingly lack access to shared, accurate information about the world.
Personalized, algorithm-driven digital media platforms limit their users' access to shared and accurate information about the world. First, platforms quickly narrow the range of media content in a user's feed to match their interests. Since the algorithms learn from users' stated preferences and demographics as well as unintentional behaviors (such as mouse movements and the exact time a user spends hovering over each piece of content), each user's experience of the platform becomes hyper-personalized at both the explicit and the implicit-psychological level, sometimes within hours.[3] Because users' feeds diverge so comprehensively, different users within the same communities, families and institutions do not necessarily share the same news, events or ideas. Second, the architecture of these platforms encourages users to think and act within ideological "echo chambers" by amplifying social pressure to conform to what appear to be dominant views and priorities.[4] Because users' reactions to content are highly visible, enduring and open for judgment, unusually coercive social pressures to engage in certain discourses and express opinions that earn public approval emerge without a single intentionally coercive actor.[5] While the basic social forces of conformity are not new, these platforms amplify them to unprecedented degrees, generating unusually powerful echo chambers. And finally, even when accurate or representative information is available to users, these platforms are filled with design features— auto-playing videos, infinitely scrolling feeds, nudges to see something new— that degrade the quality of attention users pay to what they are seeing. The quality of the information presented becomes irrelevant when one's capacity to pay attention to it is being intentionally limited. In these three ways, modern digital media platforms limit their users' access to shared and accurate information about the world.
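To make the first of these mechanisms concrete, consider a minimal sketch of implicit-signal personalization. This is purely illustrative: real platform rankers are proprietary and vastly more complex, and every name here (Item, observe_hover, hover_seconds, affinity) is a hypothetical stand-in for whatever a real system measures.

```python
# A toy model of hyper-personalization from implicit behavior alone.
# Illustrative only; real ranking systems are proprietary and far richer.
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    topic: str

@dataclass
class UserModel:
    # Learned per-topic affinity, updated without any explicit "like."
    affinity: dict = field(default_factory=dict)

    def observe_hover(self, item: Item, hover_seconds: float) -> None:
        # Dwell time alone nudges the model toward what held the user.
        current = self.affinity.get(item.topic, 0.0)
        self.affinity[item.topic] = 0.9 * current + 0.1 * hover_seconds

    def rank_feed(self, candidates: list[Item]) -> list[Item]:
        # Topics with more accumulated dwell time float to the top,
        # narrowing the feed toward what already captures attention.
        return sorted(candidates,
                      key=lambda i: self.affinity.get(i.topic, 0.0),
                      reverse=True)

user = UserModel()
feed = [Item("a1", "politics"), Item("b2", "sports"), Item("c3", "cooking")]
user.observe_hover(feed[0], hover_seconds=4.2)  # user lingered on politics
print([i.topic for i in user.rank_feed(feed)])  # politics now ranks first
```

Even this toy loop shows how quickly feeds diverge: a few seconds of unintentional hovering is enough to reorder what two otherwise identical users see.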
Let us more thoroughly explore which psychological systems are impaired by using highly interactive, algorithm-mediated digital media technologies. To begin, the cognitive load imposed by high-velocity, high-volume streams of digital content far exceeds the capacity of our working memory, meaning we cannot mentally retain much of the content we encounter, nor can we make "rich" connections between bits of information.[6] Additionally, since overloaded working memory degrades the quality of attention, users become more susceptible to distractions. Under these conditions, we can neither profit from the information we receive nor mentally orient ourselves within its streams to recognize the extent of our confusion— our minds tumble, batted by algorithm-prompted nudges and drowned in a river of media. Second, as van Nimwegen showed (2008), individuals using simple software tools can sometimes solve problems more effectively and efficiently than those using intelligent and 'helpful' software, because relegating cognitive work to machines prevents the user from building their own schemas of knowledge.[7] Since our technology is enormously more intelligent and interactive than the programs van Nimwegen studied, the average user's problem-solving capacity is likely dulled to at least the same degree. Finally, to the extent that "happiness" is a psychological construct— and it is, under the name "affective well-being"— this valuable mental quality is consistently and measurably undermined by using digital media technology. In a 2015 study, Verduyn et al. showed that "passive" Facebook use (i.e. scrolling through the central algorithm-curated feed) decreased affective well-being during platform use, immediately after, and for hours afterwards.[8] As working memory overload impairs their decision-making, awareness and self-orientation on the platform, users who keep scrolling become measurably less happy and bereft of the cognitive tools necessary to recognize their mental states and take action to change them. Psychologically, these effects create a trap that harms the user the longer they remain in its grip.
Unfortunately, the companies that make digital media platforms are incentivized to avoid changing their systems, because the design features that disrupt sense-making are the same ones that maximize advertising revenue. The first such features provoke rapid, superficial engagement with large volumes of content. Google services like Search are designed to connect us with information as rapidly as possible, and then to encourage us to move on quickly rather than have a "deep, prolonged engagement with a single argument, idea, or narrative."[9] Other platforms like Facebook and YouTube use "next" buttons, "keep watching?" prompts and autoplay features that nudge users to keep rifling through more items after spending only a few seconds on each one. By "getting users in and out really quickly" (Google's stated design goal for all its products), platforms receive more data about users with which to tailor future advertising more precisely.[10],[11] Second, other platform features maximize "engagement," i.e. how long users stay online. The now-ubiquitous "recommended" sections on all major platforms provide an endless stream of seemingly essential content directly related to what we have seen recently or what the platform predicts we will find interesting. These features encourage users to spend minutes or hours exploring "rabbit holes" of content. Similarly, infinite-scrolling feeds on Facebook and TikTok maximize engagement the way a hamster wheel maximizes running; with no fixed end-point, the hamster will use its wheel until it is tired or distracted— the wheel itself will never, ever prompt it to stop. Third, coercive online social environments are a meta-feature, likely unintended by designers, which nonetheless accrues cultural relevance and attracts rich engagement from users. Their gravitational force results from human social tendencies, highly personalizable networks, algorithm-driven content experiences and the rapid spread of inflammatory information.[12] By increasing content volume and engagement time, all of these features allow platforms to collect more data, generate better predictive user models, deploy more effective advertising and thereby increase company revenue.[13]
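The engagement-maximizing logic described above can be sketched as a simple bandit-style loop. Again, this is a hypothetical toy, not any platform's actual recommender: the epsilon-greedy strategy, the topic names and the watch-time numbers are all illustrative assumptions.

```python
# A toy epsilon-greedy "watch-time" recommender. The objective (maximize
# time on platform) is analogous to what the essay describes; the
# implementation is a deliberately simplified sketch.
import random

class EngagementRecommender:
    def __init__(self, topics: list[str], epsilon: float = 0.1) -> None:
        self.epsilon = epsilon
        # Running average of observed watch time per topic.
        self.avg_watch = {t: 0.0 for t in topics}
        self.counts = {t: 0 for t in topics}

    def next_topic(self) -> str:
        # Mostly exploit whatever has held attention longest so far;
        # occasionally explore, so no pocket of attention goes unmined.
        if random.random() < self.epsilon:
            return random.choice(list(self.avg_watch))
        return max(self.avg_watch, key=self.avg_watch.get)

    def record(self, topic: str, watch_seconds: float) -> None:
        self.counts[topic] += 1
        n = self.counts[topic]
        self.avg_watch[topic] += (watch_seconds - self.avg_watch[topic]) / n

rec = EngagementRecommender(["news", "outrage", "cats"])
for _ in range(100):  # simulate one long, open-ended session
    topic = rec.next_topic()
    watched = {"news": 20, "outrage": 90, "cats": 45}[topic]  # fake dwell times
    rec.record(topic, watched)
print(rec.next_topic())  # almost always converges on the stickiest topic
```

Note what the loop optimizes: not accuracy, relevance or well-being, but raw minutes of attention. That single design choice is the "hamster wheel" in algorithmic form.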
Here we might pause and ask: why does it matter that digital media technologies strengthen certain psychological capacities and weaken others, and why should people care that the companies behind these platforms have a profit motive to sustain or magnify these psychological effects? After all, using platforms like Facebook is voluntary; those who wish for a better learning experience can always visit the library instead. What is worrisome is that these particular online conditions make it harder for everyone exposed to them to make sense of the world normally— even at the library, in conversation, or in private reflection.
Neuroplasticity research suggests that using algorithm-driven digital media can impair valuable offline mental skills. Briefly, the brain's physiology changes with new experiences; whole brain regions can expand in response to habits of mind and frequent experiences.[14] To stay efficient, the brain will also swiftly repurpose any neural circuits not being used. Neuroplasticity lets the brain optimize its performance for whatever tasks it does most often. Given the aforementioned pace, complexity and volume of digital content served by platforms, users develop mental skills to "speedily locate, categorize, and assess disparate bits of information," while their capacities for reflection and deliberative thinking go unpracticed.[15] After years of daily "practice," our attention, memory and deliberative capabilities become shorter, faster, wider and more shallow— we become, to use Richard Foreman's language, "pancake people," trained out of our capacities to enjoy and creatively reproduce deep information.[16] The resulting changes in brain circuitry limit our general capacity to deploy slow, sustained attention— as in reading a long and challenging book, engaging in moral decision-making, noticing subtle human emotions or working through complex problems.[17] If we value the offline mental capacity to reflect deeply, slowly and purposefully, then the neurological imprint of "frenzied" online mental habits threatens to deeply and lastingly change what we value about our offline minds and mental capabilities.
Since billions of people use algorithm-driven digital media technologies, individual changes in offline mental capacities are likely influencing human culture, value-formation and imagination at a global scale; throughout history, widely proliferated novel "intellectual technologies" have caused similar large-scale changes.[18] The advent of the map permanently changed our perception of physical space, enabling better abstract spatial reasoning but worse perception of immediate environmental details. The clock reconstituted our experience of time and oriented human actions and efforts around punctuality and productivity. The printed book facilitated centuries of unprecedentedly self-aware and complexly self-referential writing, democratized knowledge economies and even changed how libraries and schools were built to suit silent reading.[19] These and other examples affirm what Thorstein Veblen called "technological determinism"— the notion that information technologies powerfully mold human history through ongoing reciprocal relationships between the new human potentials they enable and the subjectivities, cognitive skills, forms of expression and social schemas they upregulate over others. Digital media technology operates at a scale at least as great as that of the clock, the map and the book, and its psychological effects are of a greater magnitude than those of these predecessor inventions, so there is good historical precedent for thinking that it, too, will play at least an equally determinative role in guiding our world's future.
Some may say that the new directions our minds and social lives have taken through digital media technology represent the natural evolution of human existence. I agree that technological progress is inevitable and that our lives and societies will keep harmonizing with the shared intellectual tools we use. But there is nothing natural or inevitable about the current direction of our mental and social progress. For one thing, our evolution is not taking place in a value-neutral space. As we learned from Shoshana Zuboff and others, the digital media companies that designed these platforms have particular behavioral goals for us, goals that serve their incentive towards profit. And, as we learned from Nicholas Carr and several psychological researchers, these goals entail high valuations of speedy decision-making, high-volume information intake, shallow modes of deliberation and strong appetites for novel content. They also progressively devalue independent problem-solving, emotional depth, moral decision-making, patience and attentional executive function. These values are not intrinsic to the natural world, nor do they necessarily belong to any implicit human constructions of value. Instead, and far more disappointingly, they are the nondeliberate byproducts of particular corporations' methods for remaining profitable in a post-IT-revolution world. The values we are adopting are being set as means to companies' financial success, not as ends in themselves or as reflections of any intrinsic values in nature.[20]
Another reason to be skeptical of the tech-effects-as-natural-evolution view is that we have unprecedented foresight and agency over our future. Whereas the inventors of the book, the clock and the map could not foresee the impacts of their inventions, we know today roughly how digital media technology is changing us, and with what consequences, while we are still in the process of designing, implementing and using it. This gives each of us something unprecedented and crucial: a moment of agency to decide how we want to evolve into the future as beings and as societies, before we proceed too far on any irreversible course of action. We can choose to align extremely powerful digital media algorithms with the mental and social foundations of our ideal visions of the future, and those algorithms can propel us towards those futures. But if we fail to establish a clear, conscious understanding of our values now, then we will lose the aspects of our mental and social lives that do not serve the current attention-capture optimization goals of media platform algorithms. This would be a subtle, slow and unsung loss of the human capacity for agency and self-determination.
Importantly, Moore's Law— taken in conjunction with the digital media tech industry's current data collection practices— should warn us that our window of opportunity for applying our foresight to any meaningful effect is closing fast. Moore's Law observes that transistor density on computer chips, and with it computational power, doubles roughly every two years; and new computational technologies and methods like quantum computing, DNA computing and machine learning may sustain or even accelerate this pace in the near future.[21],[22] Since hardware and software efficiencies determine the power of computational systems, Moore's Law predicts continuous increases in the computational power of the systems that run most of our digital media experiences. When these systems are fed the huge sums of user data collected over years of online tracking, their algorithms will produce ever-sharper predictive analytics, ever more accurate models of user behavior and ever more powerful attention-capture and behavioral-control effects on users. If the fundamental values and goals of those algorithms do not evolve to protect the aspects of human experience we decide are intrinsically valuable while we still can, people who use these platforms regularly in the future will be more likely to be overwhelmed by the power of the technology, and they will lose the chance to make meaningful choices about how they think and understand the world.
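A back-of-the-envelope calculation shows why the window closes so fast. The sketch below assumes a clean two-year doubling period, which real hardware only approximates and may not sustain; the function name and numbers are illustrative.

```python
# Back-of-the-envelope Moore's-Law-style projection (a sketch assuming a
# steady two-year doubling period, which real hardware only approximates).
def capability_multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative computational power after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (2, 10, 20):
    print(f"After {years:2d} years: ~{capability_multiplier(years):,.0f}x today's power")
# Prints roughly: 2x after 2 years, 32x after 10, 1,024x after 20.
```

If attention-capture systems become even a fraction as much more effective as they become more powerful, the asymmetry between algorithm and user grows exponentially, not linearly.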
Right now, the values we should encode in technology are the ones most threatened by its advent. This is not because those values and capabilities are superior to the ones technology reinforces, but because protecting them will establish the stable cognitive, social and informational conditions we need to discover our values and structure our futures around them. Many of the human capabilities threatened by digital media technologies— including our capacity to control our attention deliberately, our capacity to sift through information critically and consciously, our capacity for emotional awareness, and our capacity to make sound independent judgments— are necessary tools for developing our values. Attentional control and patience help us focus on questions, problems and events and observe how they affect us, which can tell us something about what we value. If we are constantly shifting our attention, we may not notice when certain notions or states of being resonate deeply with us, and we may miss the opportunity to clarify our values through these new experiences. The capacity to make careful, deliberate judgments in complex situations helps us deepen our value-awareness and value-commitments in bidirectional relationships with others and with our environment. Without the capacity to navigate the world with discerning judgment, we cannot translate the values we may already hold into the practice of our lives. Finally, our capacity for emotional awareness connects us with individuals and communities that facilitate identity-formation and the exchange of knowledge. Emotionally sensitive relationships can be cradles for self-development and the establishment of deep, personal values. Since the capacities being eroded by current digital media technology play an important role in value-formation and self-understanding, we should protect these cognitive capabilities from further loss until we develop stronger understandings of our values and encode them into our technologies. The alternative is to let the values implicit in Silicon Valley companies' profit models continue to guide the course of our species' social, cultural and technological evolution, with unknown consequences for the general human experience of future decades and generations.
While our digital media technologies remain functionally problematic, every user can protect their sense-making capacities from erosion by changing how they use these technologies. Since platform algorithms adapt to target the psychologies of users over time, users can limit their coercive power by reducing how much time they spend online. This could involve setting time limits on apps, blocking out daily times to be away from screens, or setting rules of intention like "I refuse to open X platform if I am only bored and want entertainment." Parents can educate their kids about digital media literacy and keep algorithm-driven social media out of their lives during their development; many Silicon Valley tech executives already take "militant" parenting stances on digital media technology for their own children.[23] Several widgets, extensions and apps help users remain in control of their digital media experiences: a YouTube suggested-videos blocker (created, incidentally, by the same person who designed YouTube's recommendation algorithm), the Adblock Plus extension, Screen Time for Apple devices, opt-out features within platforms' settings and through YourAdChoices, and tracking-blocking plugins for general browsing.[24] These rules and tools may make a user's experience of digital media technology somewhat less harmful to their capacities for value discovery. You can also spend a period of weeks or months away from digital media technology or, as Jaron Lanier suggests, quit the platforms permanently until more humane alternatives become available.[25] Which of these strategies will work for you depends on your reasons for using digital media technology in the first place and on your willingness to prioritize sense-making and value-discovery over convenience and adherence to the status quo. Changing our daily habits of technology use, though, can lay the psychological and behavioral groundwork for deepening our values and steering our lives towards what is meaningful to us.
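For readers who like to operationalize such rules, here is a minimal sketch of a self-imposed "rule of intention" plus daily time budget. It is a hypothetical helper, not a real product; actual enforcement would hook into an OS tool like Screen Time or a browser extension, and the reason strings and budget numbers are placeholders.

```python
# A self-imposed usage contract: a daily time budget plus an intention rule.
# Hypothetical sketch; real enforcement would live in an OS or browser tool.
import datetime

class DailyBudget:
    def __init__(self, minutes_allowed: int) -> None:
        self.minutes_allowed = minutes_allowed
        self.spent: dict[datetime.date, float] = {}

    def log_session(self, minutes: float) -> None:
        today = datetime.date.today()
        self.spent[today] = self.spent.get(today, 0.0) + minutes

    def may_open(self, reason: str) -> bool:
        # Encode the intention rule: never open the app out of mere boredom,
        # and never past the daily budget.
        if reason == "bored":
            return False
        today = datetime.date.today()
        return self.spent.get(today, 0.0) < self.minutes_allowed

budget = DailyBudget(minutes_allowed=30)
budget.log_session(25)
print(budget.may_open("bored"))           # False: the intention rule blocks it
print(budget.may_open("message friend"))  # True: deliberate use, under budget
```

The point is less the code than the exercise: writing the rule down forces you to articulate, in advance, which reasons for opening a platform you actually endorse.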
Neuroplasticity implies that we can regain lost capacities, or perhaps even improve upon them, by establishing smarter relationships with better-designed, value-aligned digital media technologies. Nicholas Carr spent time away from these platforms while researching and writing The Shallows, and found "old, disused neural circuits [...] springing back to life" the longer he stayed disconnected from them and the more he practiced attentional control during deliberative tasks.[26] The internet itself is full of blogs written by people who rediscovered a sense of freedom and direction by spending time away from social media.[27],[28] In these examples, people leveraged the continuous plasticity of their brains to make positive, intentional changes in their mental lives, demonstrating that whatever has been neurophysiologically "made" by using certain technologies can be unmade by using others. Our brains' plasticity is an optimistic sign for anyone who wishes to restore valuable mental capacities after many years of heavy digital media use.
Further still, if we create well-designed technologies oriented towards strengthening human capacities rather than weakening them, our brains could develop novel or enhanced cognitive features beyond those of our "original," pre-IT brains. Platforms could work towards enhancing users' rational decision-making skills, emotional intelligence, wisdom, altruism, and affinities for honest social and civic participation by tailoring their algorithms and user interfaces to respect and nurture these qualities. Over time, small changes in each user's awareness and attentional control could compound across populations into meaningful improvements in the way we live. One could also imagine future platforms playing central roles in systems of education, governance and knowledge-production because they are well aligned with institutional goals and protect the wider social conditions for open, democratic societies.[29] However, the improvements to self and institution we want may only become possible by limiting the functionality of the platforms in certain ways. For example, if a platform detected that a user's attention was waning, it could learn to shut down until the user became ready to browse again, instead of seizing the moment to revive their interest artificially. The user, now able to listen to and respect their body's rhythms of attention, could then engage in offline experiences that support their value-development in other, equally or more important ways. To empower platforms to help users grow will probably mean encoding humility in them— a recognition that they are only one tool among many for the user to lead a meaningful life.
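The attention-waning example can be sketched directly. This is an entirely hypothetical design, a sketch of the "humble platform" idea rather than any existing system; signals like dwell_seconds and the thresholds are stand-ins for whatever a real system would measure and tune.

```python
# A sketch of a session loop that pauses itself when attention wanes,
# rather than escalating stimulation. Hypothetical design; all signals
# and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AttentionMonitor:
    baseline_dwell: float          # the user's typical engaged dwell time (seconds)
    waning_threshold: float = 0.5  # fraction of baseline that signals fatigue

    def attention_waning(self, recent_dwells: list[float]) -> bool:
        # If the user is skimming far faster than their engaged baseline,
        # read it as fatigue, not as an invitation to escalate stimulation.
        if not recent_dwells:
            return False
        avg = sum(recent_dwells) / len(recent_dwells)
        return avg < self.baseline_dwell * self.waning_threshold

def serve_session(monitor: AttentionMonitor, dwell_log: list[float]) -> str:
    if monitor.attention_waning(dwell_log[-5:]):
        # The humble choice: end the session instead of reviving interest.
        return "pause: suggest returning later"
    return "continue: serve next requested item"

monitor = AttentionMonitor(baseline_dwell=30.0)
print(serve_session(monitor, [12.0, 8.0, 6.0, 5.0, 4.0]))  # -> pause
```

Notice that this inverts the objective of the engagement loop sketched earlier: the same behavioral signals that today trigger a "keep watching?" prompt could just as easily trigger a graceful exit.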
[1] Pew Research Center. "The Shift in the American Public's Political Values" (October 20, 2017). https://www.pewresearch.org/politics/interactives/political-polarization-1994-2017/
[2] Pew Research Center. "Political Polarization in the American Public" (June 12, 2014). https://www.pewresearch.org/politics/2014/06/12/political-polarization-in-the-american-public/
[3] Jeff Orlowski, The Social Dilemma. Exposure Labs, 2020. 1 hr., 35 min. https://www.netflix.com/watch/81254224
[4] "Social Media Enables Undue Influence," The Consilience Project, December 5, 2021, https://consilienceproject.org/social-media-enables-undue-influence/
[5] Ibid.
[6] Nicholas Carr, The Shallows (New York, NY: W. W. Norton & Company, 2010), 125.
[7] Christof van Nimwegen, "The Paradox of the Guided User: Assistance Can Be Counter-effective," SIKS Dissertation Series No. 2008-09, Utrecht University, January 1, 2008.
[8] Philippe Verduyn, David S. Lee, Jiyoung Park, Holly Shablack, Ariana Orvell, Joseph Bayer, Oscar Ybarra, John Jonides and Ethan Kross, "Passive Facebook Usage Undermines Affective Well-Being: Experimental and Longitudinal Evidence," Journal of Experimental Psychology: General 144, no. 2 (2015): 480-488. http://dx.doi.org/10.1037/xge0000057
[9] Nicholas Carr, The Shallows, 157.
[10] Stated in 2010, during Irene Au's tenure as Google's Director of User Experience.
[11] Digital media companies extrapolate each user's usage data into a predictive model of that user's real behavior. The more refined and accurate the model becomes, the more closely the algorithms' predictions correlate with real user behavior. See The Social Dilemma for a brilliant visual illustration.
[12] Chris Bail, "Social Media and the Quest for Status," in Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing (Princeton, NJ: Princeton University Press, 2021), 48-49.
[13] An interesting side note is that, even internally, Facebook in particular incentivizes its own employees to maximize user engagement by doling out bonuses and promotions on the basis of which designers can increase engagement the most. The attention-based profit model of Facebook exerts forms of behavioral influence on its users and its own employees. See The Social Dilemma for more.
[14] Mark Brown, "How Driving a Taxi Changes London Cabbies' Brains," Wired, December 9, 2011, https://www.wired.com/2011/12/london-taxi-driver-memory/
[15] Carr, The Shallows, 142.
[16] Ibid., 196.
[17] Ibid., 216-222.
[18] Ibid., 44.
[19] Ibid., 45.
[20] Perhaps someone could say that nature is organized by power, competition and fitness, and that the financial successes of these companies (and any decisions, conditions or effects deriving from their pursuits of them) are natural. But given the flexibility of economic and social organization of human history, the particular practices and designs on which large digital media companies rely for their success are not necessarily inevitable or "natural" forms of information technologies in the abstract. See David Graeber and David Wengrow, The Dawn of Everything (New York, NY: Farrar, Straus and Giroux, 2021) for this account of a creative human history.
[21] Stephen McBride, "These 3 Computing Technologies Will Beat Moore's Law," Forbes, April 23, 2019. https://www.forbes.com/sites/stephenmcbride1/2019/04/23/these-3-computing-technologies-will-beat-moores-law/?sh=1cb8136e37b0
[22] Marco Chiappetta, "Chips Designed by AI Are the Future of Semiconductor Evolution Beyond Moore's Law," Forbes, May 25, 2021. https://www.forbes.com/sites/marcochiappetta/2021/05/25/chips-designed-by-ai-are-the-future-of-semiconductor-evolution-beyond-moores-law/?sh=57c70c15430f
[23] "Family Engagement Toolkit," Common Sense Media, https://www.commonsense.org/education/toolkit/family-engagement-resources (accessed March 31, 2022).
[24] Some resources include the YouTube suggested-videos blocker "Unhook" (https://unhook.app/), "Adblock Plus" (https://adblockplus.org/), "Screen Time" (native Apple device feature), cross-website advertising opt-out options via "AdChoices" (https://youradchoices.com/) and the "News Feed Eradicator" (https://west.io/news-feed-eradicator/).
[25] Jaron Lanier, "How We Need to Remake The Internet," TED2018: The Age of Amazement, Vancouver, Canada, Filmed April 10-14, 2018, https://www.ted.com/talks/jaron_lanier_how_we_need_to_remake_the_internet
[26] Carr, The Shallows, 199.
[27] Jason Zook, "What I Learned from a 30-Day Social Media Detox," Wandering Aimfully, https://wanderingaimfully.com/social-media-detox-recap/
[28] May, "Why I Detox From Social Media," PS I Love You, August 12, 2020, https://psiloveyou.xyz/why-i-detox-from-social-media-7c2692ed84d7
[29] "Social Media Enables Undue Influence," The Consilience Project, December 5, 2021.
Thanks to Michael Cheng, Dr. Russell P. Johnson, and the UChicago EA community.