Who Owns the Internet?
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” – John Perry Barlow
In 1989, Sir Tim Berners-Lee, the inventor of the World Wide Web, envisioned a decentralized, universal, and open Internet for all. Early adopters and fellow cyberlibertarians shared that vision, including John Perry Barlow, a former cattle rancher, Grateful Dead lyricist, and co-founder of the Electronic Frontier Foundation (EFF), a non-profit dedicated to defending digital privacy and free speech. Barlow, who died in 2018, wrote the now-famous manifesto A Declaration of the Independence of Cyberspace, which carries a simple message: governments should not govern the Internet.
This vision was shared by many – and not just by technologists and cyberlibertarians. In 1997, Supreme Court Justice John Paul Stevens wrote for the Court that the Internet “constitute[s] a unique medium – known to its users as ‘cyberspace’ – located in no particular geographical location but available to anyone, anywhere in the world.” The Internet, Stevens wrote, has not “been subject to the type of government supervision and regulation that has attended the broadcast industry.”
The democratizing potential of the Internet echoed across the world: it was seen as a true equalizer, a force for good, and one that looked the same everywhere irrespective of one’s geographical location — it defied all territorial borders. This vision was hardly seen as radical or controversial. Quite the opposite, it was widely embraced among tech circles, even by the likes of Microsoft.
But that Golden Age of the Internet is long gone.
While there are still outliers who hold to this vision of an open and free Internet, the emerging belief among netizens is that government should intervene more in digital life. In 2018, a series of reports jointly conducted by the Knight Foundation and Gallup revealed a few emerging trends. Of the Americans surveyed, 79% believed that Internet companies should be regulated like the news media, and 85% did not feel that Internet companies were doing enough to stop the spread of misinformation. A 2020 Consumer Reports survey corroborated these sentiments: more than half of Americans surveyed supported more government regulation of online platforms.
Given the current political landscape, the results aren’t all that surprising.
In the United States, the Biden administration faces pressure from all sides to reform the Internet: it is being called on to take action on net neutrality, reform Section 230 to remove legal protections from online platforms, quash misinformation, subdue online radicalization, and address the growing digital divide. These are just some of the issues on the laundry list of concerns raised by Americans. The January 6 Capitol riot then brought content moderation to the forefront, with users expressing more vocal support for government to rein in Big Tech.
While many of these concerns are justified, and should rightly be addressed, the solutions may not necessarily be found in more regulation. First, regulation is a loaded and ambiguous term, and can be approached from many different angles. Every country already has its own regulatory levers in place, some stricter than others.
In Australia, existing regulations give the government power to issue fines and block online content it deems harmful or abusive. Proposed legislation now underway would grant the Australian government unprecedented power to unmask the identities behind anonymous or fake accounts. Questions abound. Will the government be transparent about which accounts it unmasks, and why? How can users ensure they won’t be unjustly targeted? What about smaller user-generated sites that could now be held liable for content their users post?
Similarly, the UK government is proposing legislation that would counter ‘fake news’ and harmful material by creating an independent regulator to issue fines and block access to sites. How will ‘fake news’ be defined? This phrase, among others, has been used so widely that it has all but lost its meaning. It’s also highly politicized, and can be used as a tool to censor content that the government does not want users to see, as we have seen with the Hunter Biden story. How can users be certain this legislation won’t be used for political ends? The answer is: they can’t – and that’s the crux of the issue. Phrases like ‘fake news’ and ‘hateful content’ can be construed in different ways by different people.
Additionally, such measures require the government to work more closely with Internet service providers, tech companies, and other online platforms, which is exactly what many users are seeking to push back against.
The above measures give us a glimpse of what regulation can look like in democratic societies, and it already looks far more draconian than once imagined. While many believe that users in democratic societies are immune to authoritarian-style restrictions, many of these measures resemble China’s own restrictive policies (China, for example, also removes what it deems ‘harmful content’ under the guise of protecting citizens). Once the floodgates open, it becomes easier to roll out further policies that place more barriers on the information highway.
Fortunately, the U.S., unlike other countries, has Section 230, a provision in the Communications Decency Act that has been described as “one of the most valuable tools for protecting freedom of expression and innovation on the Internet.” It is precisely because of Section 230 that content providers (e.g. YouTube and Reddit) are not treated as publishers, and are therefore not legally responsible for what others say or do on their platforms. These legal protections afforded to content platforms are unfortunately not mirrored in other countries, which is how Australia and the UK are able to advance such restrictive legislation under the guise of protecting users’ safety.
There are other promising initiatives that have taken shape free from regulation.
Over the past few years, many alternative social networks have emerged as a way for users to self-govern and take back power over their privacy and data. Mastodon, a free and open-source platform, allows users to independently host their own social networking servers (called “instances”), while also working to protect user privacy and curb online harassment. Mastodon is one of many platforms that make up the decentralized social media network called the ‘Fediverse.’ No single person or company controls the network; anyone can sign up for an account on any server they like; and users can even set up their own network with their own rules if they want.
Every Fediverse platform is self-governing: administrators set their own rules, and users who break them are sanctioned according to each platform’s policies. Mastodon, for example, strictly bans content that is illegal in Germany and France, such as Holocaust denial and Nazi symbolism. Other platforms may have an approval process for new users, requiring interested users to submit an online form that administrators then review manually.
If users don’t like the way a platform is run, they have countless others to choose from. Likewise, a user can hide posts from other users or platforms, creating a customized feed. And because the Fediverse comprises interconnected networks, administrators can communicate with each other and innovate free from government interference.
The Fediverse model’s biggest selling point, in contrast to Big Tech platforms, is its interoperability: digital walls are broken down, and users on one social network can communicate with users on another. Big Tech, on the other hand, purposely blocks interoperability to prevent users on different platforms from communicating, tethering users to a single platform. Cory Doctorow describes this phenomenon as “mutual hostage-taking,” in which users feel forced to stay on a certain platform because ‘everyone else is on there.’ The more people on a specific platform, the harder it is to leave. Leaving Twitter and other major networks can also be professionally damaging for users with a public presence, and for those who rely on these platforms to publicize their products, create connections, and communicate with their supporters.
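To make this interoperability concrete, here is a minimal sketch of the first step Mastodon-compatible servers take when federating: WebFinger discovery (RFC 7033), which turns a handle like @alice@mastodon.social into a URL where the home server describes that account. The handle and function name below are illustrative, not taken from any particular codebase.

```python
# Sketch of Fediverse account discovery via WebFinger (RFC 7033).
# A remote server resolves '@user@host' into a well-known URL; the
# JSON response at that URL points to the user's ActivityPub profile,
# which is what lets servers run by different people interoperate.

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a handle like '@alice@mastodon.social'."""
    user, _, host = handle.lstrip("@").partition("@")
    if not user or not host:
        raise ValueError(f"expected '@user@host', got {handle!r}")
    return f"https://{host}/.well-known/webfinger?resource=acct:{user}@{host}"

print(webfinger_url("@alice@mastodon.social"))
# → https://mastodon.social/.well-known/webfinger?resource=acct:alice@mastodon.social
```

Because this discovery step is an open standard rather than a proprietary API, any server that implements it can find and talk to users on any other – the opposite of the walled-garden approach described above.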
Efforts to decentralize are carried out by individuals scattered all over the world; government regulation is not required, because it is not needed in these self-governing models. But if more levers are implemented to regulate online content, will alternative networks even be allowed? Or will we be forced to stay on giant tech platforms where the only protections we have are ones set out by governments, which have their own political and economic interests top of mind?
While it is easy to call for government intervention when things go awry, it is not always in the best interest of the people to do so: governments all around the world have a checkered history of curbing innovation, trampling civil liberties, and silencing subversive voices. Unique problems demand unique solutions – and more regulation is not a unique solution. Regulation still needs to be explored more deeply, both as a moral concept and a legal framework, but at this moment, regulation may create more problems than it purports to solve.