How GAFA Are Undermining Our Democracy - October 28, 2020
In the U.S., about 65% of likely voters believe that Big Tech companies such as Amazon, Google and Facebook, to name a few, hold power so extensive they could harm the country's economy, a survey shows. And this fear has solid grounds: Facebook has 2.4 billion monthly active users, Amazon accounts for nearly 40% of all e-commerce spending in America, and Google handles more than 92% of global search queries. Since the start of the pandemic, Big Tech experienced only a relatively insignificant dip on the stock market, when Wall Street was worried about the crisis, and then bounced back by 43%, climbing even higher than before the emergency. As an example of this trend, Apple's market value reached $2 trillion in August. Their power has become so great that some experts have started to compare them to governments.
But why are they so strong? A combination of factors seems to be the answer.
First of all, in our daily routine it is impossible not to deal with them: from reading the news to searching the internet to sharing pictures and messages, we can hardly imagine our lives without them. And they played an essential role during the quarantine, helping us connect with our loved ones, buy goods, use services and watch movies. But this is not all: when we think about tech giants we picture Facebook, Apple, Amazon and Google, and we are often unaware that many other platforms and services we use belong to one of these mega tech companies, known collectively by the acronym "GAFA" (Google, Apple, Facebook, Amazon): Google owns YouTube and plans to acquire Fitbit, Facebook owns WhatsApp and Instagram, and so on.
This brings us to the first issue, which is also one of the reasons why GAFA are so powerful: market concentration. In the U.S., a lawsuit against Google's search dominance is about to be launched. In July, Big Tech's CEOs testified before Congress in a hearing called "Online Platforms and Market Power: Examining the Dominance of Amazon, Apple, Facebook, and Google", focusing on the anti-competitive behavior of their firms, as part of an investigation opened more than a year earlier. Antitrust authorities, both in the US and in the EU, are increasingly concerned about the role of these companies in our democracies.
Another thorny point is data: GAFA store and process an enormous amount of information on consumers' behavior and personal data, as well as on companies and producers, which gives them a strategic advantage not only over users but also over the producers for whom they act as intermediaries. Transactions paid in "data" rather than in money are harder to assess for the harm they cause, and hence to regulate, especially because the Big Tech companies holding this striking amount of information are the same ones that run the algorithms on which the transactions are based.
Regardless of any antitrust considerations, what also seems to be at stake is the digital environment and its users. Research showed that 41% of first-page Google search results point to Google's own services, with huge consequences for smaller websites, whose businesses saw a drastic drop in revenue. In the same way, Facebook (though it is not the only one; see Google, for instance), born as a small social network and recently surpassing 2 billion monthly active users, is underpinning the media industry: media outlets have turned to the platform to share their content mainly because of the wide audience it provides. But the algorithms work in such a way that only certain news items are shown, and only to pre-identified users, with the first significant consequence of diminishing media pluralism online, undermining one of the pillars of any democratic system. More and more media outlets are seeing huge revenue losses that depend directly on Facebook's news feed algorithm.
Finally, there is a disparity in the profits Facebook and publishers earn from the content shared, and it is of course the publishers who actually lose money, unable to demand better terms given their lack of market power. This is true not only for small media outlets but also for the biggest ones, like BuzzFeed, which can no longer afford to invest in the platform given the unequal treatment its news receives on the feed.
There is another, probably even more insidious, way in which platforms are considered to be undermining democracy.
After the Cambridge Analytica scandal, in which an app available on Facebook harvested the data of millions of users without consent and profiled them to push Trump's win in 2016, not much has been done to control platforms' interference in democratic processes such as elections. Facebook claims it has learned its lesson since 2016 and has since taken steps to prevent a similar outcome in the upcoming elections. Advertising transparency (including political advertising), verification and labelling of political ads in the US, disabling fake accounts and tackling disinformation are some of the policies Facebook has put in place to repair its reputation and regain the trust of stakeholders and citizens.
In 2019, the platform started limiting the visibility of some content on the news feed when it "undermines the authenticity of the platform". Ahead of the 2020 US presidential elections, Facebook announced new measures such as Facebook Protect, designed to safeguard the accounts of candidates and their teams, show confirmed page owners and enhance fact-checking. But is it working? During the 2019 UK elections, the platform refused to remove videos manipulated by the Conservative Party, claiming that a private company has no right to censor politicians and that such content is therefore not subject to fact-checking by Facebook's external partners.
In the same year, Facebook's former head of global elections integrity operations, Yaël Eisenstat, declared that
"it's clear that the company won't make the necessary fixes without being forced to, either by advertisers who refuse to spend money on their platforms until Facebook cleans up the spread of misinformation and other harmful content; employees who continue to demand accountability and responsibility from their leaders; and most immediately, government action".
Recently, a boycott campaign named "StopHateForProfit" was launched to hold social media platforms accountable for the content they spread, especially online hate speech. Facebook has been hit particularly hard by this initiative, which more than 1,200 large companies (among them Coca-Cola and Unilever) have joined, refusing to run advertising campaigns on the platform.
When talking about the platforms' role in influencing elections and democratic processes in general, we must distinguish between micro-targeted political advertising and disinformation campaigns. Micro-targeting uses the data a platform has collected about an individual to show that person specific ads based on their profile. Hence, in the case of political advertising, it can be used to influence users' opinions through tailored campaigns which often contain false or misleading information. On Facebook, micro-targeting draws on different types of data, e.g. income, geographical area and level of education. Political micro-targeting is not an issue only on Facebook: all major platforms are affected, and despite their attempts to self-regulate and control the phenomenon, the results are still feeble and precarious, so much so that governments have started to step in to protect the delicate mechanism that is democracy.
Closely connected to micro-targeting and political ads on platforms is the spread of disinformation: the deliberate dissemination of misleading or false information with the purpose of manipulating and harming a person, a group of individuals or even a country. During the COVID-19 pandemic, we have seen how dangerous it can be to share false information, especially when it comes to citizens' health: such information can reach billions of people in a short period of time. Targeted disinformation campaigns are made even more insidious by the fact that the actors behind them are, in most cases, other states: the best-known examples are the campaigns conducted by Russia, which has targeted both the US and the EU using trolls, closed Facebook groups, targeted advertising and fake accounts.
Finally, it is important to consider the vast resources that allow GAFA to run extremely effective lobbying campaigns, especially in the EU. Between 2013 and 2018, GAFA increased their lobbying expenditure in the EU by 444%, from 2.8 million euros in 2013 to 15.25 million euros in 2018. They regularly hire lobbyists from the European institutions, gaining an important advantage over the EU and its push for stricter regulation and policies. By using this influence, combined with their economic dominance, they have tipped the balance between governments and corporations in favor of the latter, buying themselves a position in which it is governments that have to bargain.
Policy makers are coming to understand that the problem is not whether GAFA act lawfully, but that they represent a new type of public utility which therefore needs to be regulated to safeguard the public interest. The EU is currently working on the Digital Services Act package and a Democracy Action Plan, both expected to be presented by the Commission by the end of this year, which will address issues such as platform regulation, illegal activities online, transparency and legal obligations for platforms, as well as a new competition tool and measures against disinformation. What is clear is that platforms need to be regulated and that self-regulation is not enough: what is at stake is our democracy, and given the continuous growth of these actors, it is logical to assume that this topic will remain at the top of the policy agenda in the EU and across the globe for many years to come.
- How should GAFA be regulated?
- Will governments be able to win over the power of platforms?
- Should we, as private citizens, act to defend our rights against GAFA?
Suggested readings
Big Tech’s aggressive EU lobbying has caused a power shift
Concentrated power in Big Tech harms the US
Commission eyes US GAFA hearing for future competition challenges
Europe failed to tame Google. Can the U.S. do any better?
How Big Tech got even bigger in the Covid-19 era
Poll: majority of Americans concerned about Big Tech’s economic, political power