Big Tech has been in the news a lot over the last decade. Initially, the coverage focused on the new possibilities created around communication and information sharing and the benefits these would bring. New tech networks provided unprecedented tools, promising everything from reuniting families separated by emigration to helping overthrow autocratic regimes and restore power to the people.

Next, we heard about the tremendous value Big Tech was creating, bringing billions of dollars to founders and workers, as well as the pension funds that invested in them. We knew they were a force for good in the world, not least because they never missed an opportunity to tell us this fact.

The sentiment toward Big Tech changed near the end of 2016, fueled by an unexpected result in the United States presidential election. Big Tech platforms were no longer tools to promote individuality and self-expression; they had swiftly become enablers of hatred and lies. Seemingly overnight, these companies went from darlings to pariahs, from bastions of free speech to tools weaponized by malicious interests and rogue states to sway elections and plant false narratives. Individuals in control of the platforms went from defenders of freedom to being likened to dictators. Journalists wrote that Big Tech now had more capital than many governments and greater control of speech than any media outlet, with no democratic checks and balances or regulation to curb its worst impulses.

These events brought to the fore the amount of power that currently resides within Big Tech companies, along with the need to consider how we define speech in the modern world and how it should be amplified and regulated. That, in turn, touches on how the platforms that determine modern speech should be governed.

From decentralization to streaming

To address this, we should examine how the early internet unleashed so much creativity. Back then, the web was decentralized in its own way, with each website representing its own space, resulting in a vast network of nodes threaded together by hyperlinks. Some nodes were bigger than others, but none so big that they would distort the landscape or require specific regulation. The internet could be viewed as a vast garden, growing with each new website.

As both the network and the number of users grew, there was increasing demand for this network to be organized and made more efficient. Google capitalized on this by building an algorithm that searched the web and returned results and, in the process, kicked off a new internet defined by algorithms. Content was suddenly being recommended and ranked by algorithms across music (Spotify), news (Facebook and Twitter) and entertainment (Netflix). The garden became a stream, and suddenly, we were all being influenced and directed by black-box algorithms that we knew very little about.

It is this new stream model of the internet that has caused such vitriol to be directed toward Big Tech. Big Tech companies dictate what content is acceptable to share and what should be promoted, often based on what is most beneficial to their bottom lines. Content controls are described as moderation by those who approve of them and as censorship by those who disagree. The loudest voices dominate the conversation, often disproportionately favoring the Big Tech workforce and the traditional media, a small group with identifiable biases.

Back to the decentralized internet

What is the correct way to govern these massive platforms? Centralizing the power of founders is far too limiting, and outsourcing it to Californian employees and Western media is only slightly better. Instead, we should look back to the decentralized internet of the past and see how we could recreate the period many older heads recall with such nostalgia. Many claim that it is impossible to put this genie back in the box, given the enormous economic value that has derived specifically from centralizing digital content and making it more accessible.

Blockchain has enabled decentralized governance of companies, allowing a form of democratic decision-making that is weighted toward those with skin in the game. Individuals buy governance tokens in a network, such as the decentralized finance product suite Yearn.finance; those tokens grant votes on the governance of that ecosystem while also holding independent value and/or providing dividends. Companies can be natively decentralized like Yearn, or transition to this model over time, like DeFi lender Aave. This model provides returns, aligns strategy with ownership, and removes the principal-agent problem that is rife in public and private organizations. Companies can use it to distribute admin fees to owners as well as to make strategic decisions.
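
To make the mechanism concrete, below is a minimal Python sketch of token-weighted voting. It is illustrative only and does not reflect Yearn's or Aave's actual on-chain contracts; the holders, balances and proposal name are invented for the example.

from collections import defaultdict

class GovernanceSketch:
    def __init__(self, balances):
        # balances maps a holder's address to the governance tokens they hold
        self.balances = dict(balances)
        self.votes = defaultdict(lambda: {"yes": 0, "no": 0})

    def vote(self, proposal_id, voter, support):
        # Each vote is weighted by the voter's token balance: skin in the game
        weight = self.balances.get(voter, 0)
        self.votes[proposal_id]["yes" if support else "no"] += weight

    def result(self, proposal_id):
        tally = self.votes[proposal_id]
        return "passed" if tally["yes"] > tally["no"] else "rejected"

# Example: a proposal to distribute admin fees to token holders
gov = GovernanceSketch({"alice": 500, "bob": 200, "carol": 150})
gov.vote("distribute-admin-fees", "alice", support=True)
gov.vote("distribute-admin-fees", "bob", support=False)
gov.vote("distribute-admin-fees", "carol", support=False)
print(gov.result("distribute-admin-fees"))  # "passed": 500 yes outweighs 350 no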

Public discourse on content moderation often draws from legal and philosophical concepts, with a liberal sprinkle of America's First Amendment, to construct a top-down solution. This presumes that a small number of people know what is best for millions, even billions, of users. But decentralized governance, proven effective by the booming DeFi industry, may allow for a bottom-up solution that puts the power in the hands of users. Twitter CEO Jack Dorsey even announced his interest in such an approach at the end of 2019.

Decentralized governance could be achieved by providing tokens to users, as described above, which, in turn, would allow them to vote on principles of moderation. The weighting could even be calibrated to the issue at hand: members of minority groups might carry greater weight on issues related to discrimination, religious groups on freedom of religion, and power users' votes might count for more than casual users'. By entrusting moderation to the wider community, platforms engage users in a social contract that makes them far more likely to buy into the principles that are adopted. As well as making moderation more efficient, this would likely repair some of the reputational harm suffered by social media companies, creating a clear distinction between censorship and moderation.
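
As a rough illustration of such calibration, the hypothetical function below adjusts a voter's weight using invented per-issue group multipliers and an activity score. The categories and multiplier values are assumptions for the sketch, not a proposal from any existing platform.

def calibrated_weight(tokens, groups, activity_score, issue):
    # Per-issue multipliers: e.g., members of minority groups carry extra
    # weight on discrimination policy, religious groups on freedom of religion
    issue_multipliers = {
        "discrimination": {"minority": 2.0},
        "freedom_of_religion": {"religious": 2.0},
    }
    # Power users (higher activity_score) count for more than casual ones
    weight = tokens * (1 + activity_score)
    for group, multiplier in issue_multipliers.get(issue, {}).items():
        if group in groups:
            weight *= multiplier
    return weight

# A highly active member of a minority group voting on a discrimination rule
print(calibrated_weight(tokens=100, groups={"minority"}, activity_score=0.5,
                        issue="discrimination"))  # 300.0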

The biggest tech platforms have user populations larger than the world's largest countries, but none of them has the equivalent democratic checks and balances that we look for in governance. Identifying complex pain points, such as censorship and moderation, and finding ways to empower users to own these processes gives users skin in the game and creates a flexible policy mechanism that can help heal Big Tech's bruised reputation. It is in the companies' best interests, too, as the reputational hit of poor content policies has led to antitrust speculation and calls to break up Facebook, for example.

This article originally appeared on Cointelegraph.
