At the dawn of the internet, we asked whether the world wide web would shrink the globe or expand it. Today, the question is no longer about scale but about substitution. We are beginning to wonder whether virtual reality is not supplementing the real world but quietly replacing it, and whether the systems we built to connect us are now altering what it means to be human.
Are we still using the internet, or has the internet begun to use us?
In K.G.M. v. Meta, a Los Angeles jury found Meta Platforms (Instagram) and Google (YouTube) liable on all counts for harm caused to a young woman who began using their platforms as a child. The jury determined that the platforms were negligently designed, that the companies knew their designs were dangerous, and that they failed to warn users of those risks. It further concluded that the design of these systems was a substantial cause of the plaintiff’s mental health harm. The case did not hinge on content; it focused squarely on the design of the product itself.
For decades, companies such as Meta Platforms and Google have argued that they do not create the content on their platforms, that they merely provide the infrastructure, and that responsibility ultimately rests with users. The recent case challenges their argument by asserting something fundamentally different: if a company designs a system that predictably produces harm, it can be held accountable. This shift has far-reaching implications, opening the door to thousands of similar lawsuits already underway, increasing regulatory pressure, and potentially redefining these companies in legal terms.
A few factors make this court decision a turning point. Whistleblower testimony suggested that companies such as Meta Platforms and Google understood the psychological effects of their systems, and that testimony converged with the coming of age of the first generation raised online, who can now clearly articulate the harm they experienced. Equally important was a change in legal strategy. Rather than focusing on harmful content, plaintiffs reframed the issue around defective design, arguing that the structure of the platforms themselves was the source of harm. That final shift, from content to design, is what makes this moment so significant.
The dominant narrative tells us that these platforms are addictive because of infinite scroll, short-form video, or the neurological hooks of intermittent reward. It is a story about dopamine and personal weakness dressed up as technological inevitability, one that lays the blame on content and weakened attention spans. This explanation feels intuitive, but it reduces a systemic condition to a behavioral defect. It allows the problem to be framed as one of personal discipline rather than structural design.
The problem is not the length of a video or the velocity of a feed. The problem is the intent of the corporations that control the algorithms. These platforms are not addictive by default. They are addictive because addiction is profitable.
The algorithm is not a passive tool that reflects human intention. It is an active mechanism that determines what can be seen, what can be said, and what can gain traction. It inserts itself between people and their perception of reality. When that layer is governed by incentives tied to profit, the system becomes coercive by design. The result is not a neutral marketplace but instead a rigged structure where a small number of companies mediate a vast portion of human communication.
To understand why this matters, we have to be clear about how these platforms actually make money, and we have to resist the comforting myth that digital advertising is fundamentally different from what came before.
From radio to television, the underlying model has always been the same. The medium does not sell products to audiences. It sells audiences to sponsors. Programs exist to gather and hold attention so that attention can be packaged and sold. Social media did not invent this model. It perfected it. Advertising in this system is not primarily about selling sneakers and phones. It is about measuring how you respond under conditions of uncertainty, insecurity, and stimulation. The most valuable data comes from moments when you are impulsive, reactive, or emotionally charged. Calm, reflective users are less profitable than agitated ones.
Shoshana Zuboff, in her book “The Age of Surveillance Capitalism,” explains why platforms extract behavioral data from users. Your searches, pauses, clicks, location, and emotional reactions are turned into data. That data is fed into systems designed to predict what you will do next. Those predictions are then sold to advertisers, political campaigns, and anyone seeking influence. Over time, prediction alone is not enough. The system begins to shape behavior to make those predictions more accurate and more profitable.
The goal is no longer to understand you. The goal is to modify you. The user is no longer the audience. The user is the raw material.
And all of this is sustained by a peculiar inversion. We pay for the infrastructure of the internet, for devices, for connectivity, for access, and then we enter platforms that present themselves as free while extracting value from every action we take within them. The inversion is so normalized that it no longer appears strange.
Jaron Lanier, who coined the term “virtual reality,” warned years ago that when services are free, users themselves become the product. What is now visible is the full implication of that warning. Our reactions, our uncertainties, our impulses, and our sense of self are being continuously measured, refined, and sold.
The most valuable data does not come from calm, grounded individuals. It comes from people who are destabilized, reactive, and distracted. The system therefore has a built-in incentive to produce those conditions. Polarization is profitable. Anxiety is profitable. Impulsivity is profitable.
When every interaction becomes data, life itself becomes a resource. Your relationships, your beliefs, your fears, and your identity are not simply expressed online. They are captured, analyzed, and fed back into a system that shapes future behavior. This is not limited to consumer choices. It extends to political perception, social trust, and the way individuals understand one another.
We are told that our attention spans are collapsing, that we have become incapable of sustained thought. This narrative shifts responsibility onto individuals while leaving the system untouched. When every pathway is engineered to redirect focus toward stimuli that generate data, attention becomes a resource to be harvested rather than a capacity to be cultivated.
In classrooms and in everyday life, many young people come to recognize that academic achievement, historical understanding, or intellectual curiosity are not consistently aligned with power, wealth, or social recognition. They absorb the unsettling lesson that truth and knowledge are not the primary paths to success in our capitalist society, even if they cannot fully explain why. This produces a quiet dissonance. A child can feel that something is off long before they have the language to name it.
This is why the conversation about addiction misses the mark. Addiction describes the symptom, an end point, but the entry point is belonging.
It is often said that social media monopolies achieved dominance because users chose them. There is truth in that, but it is incomplete. Once a platform becomes the place where everyone is, the bait of belonging locks users in place.
The social media monopolies did not achieve their status because they offered distraction. They became behemoths because they offered connection. They convinced billions of people that this was where life was happening, that this was where community existed, and that absence from these platforms meant absence from the world. The sense of belonging they offered was not artificial at first. It was rooted in real human desire.
Belonging is one of the deepest human needs. It is also one of the most exploitable.
What these systems have done is take that fundamental need and route it through a structure that converts every expression of it into data. You are encouraged to speak, to share, to react, to participate, but you are not given meaningful control over the environment in which those actions occur. The comment section becomes a site of conflict not because people inherently want to fight, but because conflict generates more engagement, and engagement generates more data. The system continues to invite you to speak while removing any real leverage over what you are speaking into.
It now seems that all information is a distraction. The Epstein files, which at first we were told were a distraction, are now framed alongside the Iran war as another distraction, a distraction from the distraction itself. It is not just that we are being distracted from the truth. It is that we are losing the ability to recognize what truth would even feel like.
The line between persuasion and manipulation becomes increasingly difficult to trace. When systems can continuously test and refine targeted propaganda in real time across vast audiences, they gain the power to shape narratives with unprecedented precision. The danger is not only misinformation. It is the erosion of a shared reality.
This moment offers an opportunity to question the very structures that once gave us a sense of shared reality. That earlier pre-internet reality may feel, in retrospect, more coherent or grounded, but it was never neutral or purely organic. It was constructed through institutions that reflected and reinforced the interests of those in power. While it provided a common frame of reference, it also carried deep exclusions, shaped by racism and systemic inequality, and was often used to marginalize and control minority populations in this country. What appears as a lost unity was, in many ways, a manufactured consensus that served a select few while obscuring the experiences of others.
This is a truth we must confront without retreating into despair or avoidance. It is easy to feel overwhelmed when so many stories about the technologies that now surround us and shape our daily lives are framed in negative terms, as if the system were beyond repair. Yet the very same technologies that have been used to concentrate power and distort reality also contain the potential to dismantle longstanding structures of oppression. Within them lies the capacity to create more equitable systems, to expand access to knowledge, and to foster forms of connection that are grounded in social justice and collective well-being.
The challenge is not to abandon these tools, but to reclaim them and reshape their purpose so that they serve a more just and humane society.
If internet access were treated as a public utility rather than a private commodity, the entire structure could be reorganized. Fast and free internet access could be guaranteed by the government as a right of citizenship. Platforms could operate on subscription models, allowing users to choose environments aligned with their values rather than being forced into a single, centralized system. Some platforms could include advertising by choice, while others could exclude it entirely. The key shift would be that users, not advertisers, would determine the terms of participation.
A diverse ecosystem of smaller, purpose-driven platforms could replace the current concentration of power, reducing the ability of any single entity to observe and influence the entire population instantaneously. It would restore a sense of scale and context that has been lost in the pursuit of total capture.
This way, perhaps our vacation photos would not appear in the same feed as sexual solicitations, or our dinner plates alongside images of genocide, and above all we would not subject our children to any of this. They could have their own spaces, as they should in the real world, free from advertising pressures that make them feel insecure or even suicidal, and free from environments that can be exploited by bullies and sexual predators.
The world is already structured around places where you have to be a certain age to enter, can afford or cannot afford to be, or are a member of a certain culture or community. At the same time, there remains a fundamental freedom to move through the physical world, even as economic, racial, and gender oppression imposes real constraints.
We should remember that an email address or a web address is, in many ways, like real estate in a virtual world, and the more we hold that space to the moral standards and values of the physical world, the healthier and more beneficial it will be for society.
The virtual world will never be a utopia, and it should not be expected to be, just as the real world is not. What matters is that we begin to shape it with intention, guided by principles of love, integrity, and equity. In doing so, the world we build online should serve only to support and improve the one we inhabit offline, allowing us to remain fully human within both.
The alternative is to continue along the current path, where the same content circulates across only a handful of platforms, where differentiation is superficial, and where both users and influencers become participants in a system that extracts value from them while offering diminishing returns in meaning or connection.
This is, finally, a confrontation with the ways in which our desire for belonging, recognition, and certainty can be organized against us. It is a question of whether we are willing to reshape the systems that mediate our lives or whether we will continue to adapt ourselves to systems that were never designed to serve us.
The internet remains one of the most extraordinary inventions in human history. Social media, at its core, contains the possibility of genuine connection, creativity, and shared experience in ways never before imagined.
The verdict against Meta and YouTube signals that the era of unquestioned design is over. What comes next will depend on whether we continue to treat the symptoms or finally confront the system itself.
