Attention Integrity: The Digital Right We Don't Know We've Lost
- Feb 23
- 5 min read
Between Us and the Machine | Episode 2
There's a number that's hard to accept.
In 2004, the average person could focus on a screen for about 150 seconds. By 2021, that had dropped to somewhere between 44 and 50 seconds. That's not distraction. That's structural change, and it happened without most of us noticing, let alone consenting to it. This is what Episode 2 is about. Not just what's happening to our attention, but who's responsible, who benefits, and whether we have a right to do anything about it.
You Are Not the Customer
The phrase has become cliché, but it still lands: on most social media platforms, you are the product. Your attention is the asset being sold. Herbert Simon identified the underlying logic decades ago: in a world of information abundance, attention becomes the scarce resource. Platforms didn't create this insight. They industrialised it.
What we now call the attention economy is built on a simple and ruthless model: the longer you scroll, the more data you generate, the more precisely you can be targeted, the more valuable you become to advertisers. Infinite scroll, autoplay, notification pulses, algorithmically timed dopamine hits: none of this is accidental. It is design. Specifically, it is what researchers call Hyper-Engaging Dark Patterns, or HEDPs: features engineered not to serve you but to make it harder for you to leave.
The term surveillance capitalism, developed by Shoshana Zuboff, names the broader system: personal data extracted and analysed, often without meaningful consent, to generate behavioural predictions that can be sold. The manipulation is not incidental to the model. It is the model.
What This Is Doing to Us
The effects operate at every level — neurological, psychological, sociological.
Neurologically, constant overstimulation desensitises the brain. The result is a kind of dissatisfaction with depth — a creeping preference for short-form, high-intensity content that delivers stimulation without requiring sustained attention. Add disrupted sleep from blue light and hyperarousal, divided attention from multitasking, and you have a cognitive environment that is actively degrading the capacity it depends on.
Psychologically, there is what researchers call the boredom loop: overstimulation creates dependency on digital media to escape boredom, which deepens dissatisfaction, which drives more compulsive use. Anxiety rises. Empathy erodes. Social isolation increases among heavy users. And algorithms, optimised for engagement, have learned that anger travels furthest — so enraging content gets prioritised, further narrowing our capacity for the kind of slow, complex thinking that democracy actually requires.
Sociologically, the effects compound. Collective attention spans for cultural topics are shrinking measurably. The window in which a Twitter hashtag stayed culturally relevant dropped from 17.5 hours in 2013 to 11.9 hours in 2016. Overproduction of content fragments collective focus. And when citizens cannot sustain shared attention on shared problems — climate change, inequality, public health — the capacity for collective action weakens.
This is not a personal failing. It is a designed outcome.
The $30 Billion Experiment Nobody Consented To
If you need evidence that this is no longer theoretical, consider what Fortune reported just days ago: the United States spent $30 billion replacing textbooks with laptops and tablets in schools. The result, according to researchers, is the first generation measurably less cognitively capable than their parents.
Not less intelligent. Less capable of sustained, deep cognitive work — the very capacity that screens, optimised for engagement over learning, systematically erode. This is what makes the attention economy's harm so insidious. It does not announce itself. It arrived in classrooms dressed as innovation, backed by billions in public funding, and the cost is only now becoming visible in the cognitive profiles of an entire generation. By the time the evidence was conclusive, the damage was already done. This is precisely why scholars argue we cannot afford to wait for certainty before acting. The precautionary principle exists for moments exactly like this one.
So Who Is Responsible?
This is where the conversation gets legally interesting and politically urgent. Several international frameworks already place obligations on states to protect cognitive and psychological wellbeing. The ICCPR protects privacy and autonomy. The ICESCR mandates progressive realisation of the right to health, including mental health. Article 8 of the ECHR covers psychological integrity, and its interpretation has been broad enough to encompass digital risks like manipulative algorithms. The EU Charter of Fundamental Rights protects mental integrity under Article 3.
The question being asked by scholars like İlke Soysal, whose research informs this episode, is whether these frameworks are sufficient or whether the systematic erosion of attention represents a harm distinct enough to require its own recognition. Her answer, compellingly argued, is that we may need to name this as a right: attention integrity — the right to protection of your cognitive autonomy from technologies designed to override it.
The argument draws on the precautionary principle: states are not required to wait for conclusive scientific proof before acting when foreseeable harm is at stake. The European Court of Human Rights, in Tătar v. Romania, established that risk assessments are required even without proven harm. If we apply that logic to digital attention — and the neuroscientific and psychological evidence is already substantial — then states have an obligation to act now, not after the damage is irreversible.
What Action Looks Like
The proposals are concrete. Disable attention-seeking features by default rather than requiring users to opt out. Provide genuine user control over digital engagement. Invest in research on cognitive and psychological effects. Hold platforms accountable for design choices that prioritise engagement over wellbeing. Recognise the power asymmetry between individuals and platforms, and regulate accordingly.
None of this requires dismantling the digital economy. It requires treating human cognitive capacity as something worth protecting the same way we protect physical environments, food safety, and labour conditions. Design can be ethical. It just needs to be required to be.
This Is a Civic Issue, Not a Tech Issue
At Between Us and the Machine, we keep coming back to the same question: who is in the room when these decisions get made?
The attention economy did not emerge from a neutral market. It emerged from specific design choices, specific incentive structures, specific regulatory gaps. And it can be shaped differently — but only if enough people understand what is happening and insist that their governments, their institutions, and their platforms are accountable for it.
Attention is not just a personal resource. It is the substrate of democracy, of empathy, of collective action. When it is systematically harvested and degraded, something genuinely civic is at stake.
This episode is an invitation to think about that slowly, carefully, and without the next notification pulling you away.
Episode 2 - out this March 5th - features the research of İlke Soysal, LLM candidate at the Tilburg Institute for Law, Technology, and Society. Subscribe so you don't miss it.