Power and accountability in the digital economy

Big tech companies are accumulating ever more power over our data. As the digital economy develops, we need to put people and society at its centre.

A new economy is emerging, powered by a new type of fuel: data. As the data economy becomes increasingly prominent, there are troubling signs that it is worsening existing power imbalances, and creating new problems of domination and lack of accountability. But it would be wrong simply to draw dystopian visions from our current situation. Technology does not determine social change. We must decide for ourselves which future we want: emancipation, or inequality and discrimination.

Over the course of 12 months we have looked in depth at the digital economy, uncovering the major emerging issues in a series of four papers followed by three proposals for how we could reform it.


In The rise of the data oligarchs and Digital power players, we looked at the most visible aspect of the digital economy. Mega corporations like Google, Amazon and Facebook harvest data, including personal data, on an industrial scale. We saw how they use old tactics — like lobbying and acquisitions — and new ones — like novel share structures — to exert their power and maintain their economic and information dominance. Although many of these companies began as start-ups in garages, they are now the incumbent forces defending their position. Our research highlighted the inadequacy of existing regulations, like the EU’s General Data Protection Regulation (GDPR), to protect us as individuals, as well as the inability of competition law to curtail the monopolistic dominance of the tech giants.

Although GDPR is a world-leading improvement in data privacy, it is not a panacea. The burden of managing and enforcing data sharing increasingly falls on individuals rather than on the companies that want to use our data. Eventually privacy could become the preserve of the rich, the only ones able to spend the time or money needed to ensure it.

Competition law has also failed to keep up with the modern digital landscape. Much anti-monopoly legislation has focused on stopping consumer price gouging. Because the large tech companies have kept prices low (or free) and consumers happy, they have largely escaped such legislation (with some exceptions in the EU). A more accurate term for the tech giants is ‘monopsony’, a structure in which a single buyer controls the market. But this concept needs further development if it is to be useful.

In Who watches the workers? we delved into the implications of the digital economy's emergence and growth for the jobs market. The jobs market has always rested on a delicate balance between workers and employers. The introduction of technology that gathers, analyses and uses data has disrupted that balance and shifted power firmly back to employers. Digital technology has enabled some employers to extend surveillance over their workers, creating a ‘digital panopticon’ in which surveillance is always happening but workers never know whether anyone is actually looking at the data. The benefits of data gathering and analysis are, moreover, highly skewed towards management, and often facilitate the intensification of work and reductions in staff. Employers are also avoiding accountability by increasingly using ‘black box’ algorithms to obscure decision-making processes.

In Controlled by calculations we found that algorithms have become essential to managing our digital lives and navigating the digital world. Without them we could not make sense of the huge mass of digital information available. But now that algorithms wield such influence, we have a responsibility not to misuse them — especially since lazily programmed algorithms entrench existing social discrimination and bias. We found that we are entering a scored society, with access to essential services, both public and private, increasingly mediated through algorithms. Algorithms have morphed from curating online content to shaping and actively influencing our lives.


The data economy is still very young, but it is already transforming our economy and society, driven by the growing technical capabilities of software, services and products. The spread of digital devices like smartphones has accelerated the economic and social impact of digital technologies more generally, both positive and negative. Many of the techniques these companies use, from recording, analysing and trading every interaction with the digital world to serving personalised adverts and constantly scoring people through algorithmic systems, have become standard industry practice. This is despite users, and society more generally, never being consulted.

The tide, however, is starting to shift. We must ensure that the future data economy puts the protection of people above the interests of private companies or government surveillance. The stakes are too high.

In Protection before profit, we set out six principles to guide this transition:

  • Protect by default: hardware, software and platforms should protect users by default.
  • Build on a decentralised architecture: digital infrastructures should be based as far as possible on decentralised architecture to disperse power, while creating a more secure and less vulnerable network.
  • Enable the collective: the narrative around individual rights and actions needs to be supplemented by a narrative around collective rights and actions.
  • Treat data as a public good: the real value of data is not monetary but social, in its ability to help us transform the economy for the common good.
  • Ensure clear accountability: as the data economy enters more areas of our lives, we need to ensure there is clear accountability for those collecting and processing our data.
  • Increase transparency: we need to ensure that there is transparency within the system.
Principles for a new data economy


Going beyond GDPR

While GDPR addresses some privacy issues, it does not address the broader issue of power in the data economy. GDPR is limited because it focuses too heavily on individual actions, like giving consent or lodging a complaint with the Information Commissioner’s Office (ICO). Accountability for tech giants is undermined by allowing data-collecting companies to rely on justifications such as ‘legitimate interest’ or ‘necessity’. We recommend going further than GDPR in a number of ways:

  • Devices, software and online interactions should be subject to privacy by default and by design: automatically set not to collect, share or sell our personal data, with tools for users to adjust the settings.
  • When asking for consent to data collection, companies should make clear, in a standardised and consistent format, exactly what data is being collected and who it may be shared with or sold to. To facilitate this, reviews of terms and conditions could be crowdsourced, or consent could be given by proxy through trusted individuals or groups, perhaps for a small fee.
  • Protecting people should be prioritised over corporations’ business models by restricting the use of loopholes, like GDPR’s ‘legitimate interest’ justification, which allows companies to gather and process personal data without users’ consent.
  • Data sharing and selling between companies without the consent of users should be banned, whether within the same company family (like Google and YouTube) or not.

Banning personal data use in online advertising

On 20 June, the UK Information Commissioner’s Office issued a report warning the ad tech industry that it was contravening GDPR and needed to clean up its act. Although the ICO did not prescribe specific changes, the fact that it made the declaration at all was big news. In December 2018, in Blocking the data stalkers, NEF proposed a GDPR-compliant reform that would ban the sending of personally identifiable data to advertising networks.

When someone clicks a link to a webpage, between the ‘click’ and the page loading, information about them is compiled and sent out so that advertisers can assess the value of showing them an advert. These ‘bid requests’ invariably fail to protect personal data against unauthorised access. They can even include sensitive information such as a person’s sexuality or political beliefs. Bid requests on UK users are sent at a rate of 10 billion per day, or 164 per person per day. Each is seen by hundreds, if not thousands, of intermediaries.

The ad tech industry is exposing every internet user to the non-consensual sharing of their data, with thousands of companies potentially able to copy, share and resell the shared data. The now infamous Cambridge Analytica used this valuable stream of data to operate.

We recommend that instead of relying on the use of personal data, when users click on weblinks, bid requests should give advertisers demographic information about the audience of the website and generic information about the user. This would allow them to show demographically appropriate advertising, without compromising the privacy of users. Where websites do sell advertisement space that uses personal data, they should be required to gain explicit consent from individuals in order to do so.
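To make the distinction concrete, here is a minimal sketch of how a contextual-only bid request might be derived from one carrying personal data. The field names are illustrative assumptions loosely inspired by real-time bidding payloads, not a faithful rendering of any actual ad tech specification.

```python
# Illustrative only: field names are simplified assumptions, not a real
# ad tech schema (real-world payloads follow specs such as OpenRTB).

def strip_to_contextual(bid_request: dict) -> dict:
    """Return a bid request carrying only page-level, not personal, data."""
    contextual_keys = {"site", "audience_demographics", "ad_slot"}
    # Keep only fields describing the page and its audience in aggregate;
    # anything tied to the individual (IDs, precise location, inferred
    # interests) is simply never copied across.
    return {k: v for k, v in bid_request.items() if k in contextual_keys}

full_request = {
    "site": {"domain": "example-news.co.uk", "category": "sport"},
    "audience_demographics": {"age_bands": ["25-34", "35-44"]},
    "ad_slot": {"width": 300, "height": 250},
    "user_id": "abc-123",                 # personal: unique identifier
    "geo_precise": (51.5074, -0.1278),    # personal: exact location
    "interests": ["politics", "health"],  # personal: inferred profile
}

safe_request = strip_to_contextual(full_request)
```

Advertisers receiving `safe_request` could still pick demographically appropriate adverts for the page, while nothing in the payload identifies or profiles the individual reader.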

Algorithmic accountability

Algorithms are not new, but thanks to the digital revolution, they’re becoming part of almost every aspect of our lives.

They’re indispensable online, sorting the huge volumes of information that make the internet the valuable resource it is today. Algorithms are expanding into new areas, helping decide whether to offer an applicant a job interview, whether a prisoner is likely to reoffend, and what social care provision a service user needs. Despite the technological veneer of objectivity around their decisions, algorithms, and the data collection that powers them, are designed by people and shaped by human decisions and biases.

In Digital self-control we found that we need meaningful accountability for those who create and deploy algorithmic decision systems, especially where decisions have a significant impact on individuals. We should establish a legal standard for algorithms governing access to goods, services, and law enforcement.

This standard would require that:

  • Individuals know when they are interacting with an algorithm.
  • There is clear responsibility for algorithmic decision systems, with rules about who is formally and legally responsible for the system.
  • Details of the accuracy of the system are provided along with a list of data inputs used to train the system.
  • A statement is provided highlighting any biases and confirming that they are not discriminatory.
  • There is a secure and verifiable audit trail which would record the queries submitted and data used to process the query.
  • The right to an explanation is extended to any decision involving an algorithm.
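As an illustration of what such a standard could require in practice, the sketch below shows a hypothetical audit-trail record for a single algorithmic decision. The schema, field names, and hashing approach are assumptions made for illustration, not part of the proposal itself.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AlgorithmicDecisionRecord:
    """One entry in a verifiable audit trail (hypothetical schema)."""
    system_name: str
    responsible_party: str  # who is formally and legally accountable
    query: dict             # the inputs submitted for this decision
    decision: str
    explanation: str        # plain-language reason, per the right to one
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Hash the full record so later tampering is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

record = AlgorithmicDecisionRecord(
    system_name="loan-screening-v2",
    responsible_party="Example Bank plc",
    query={"income": 32000, "postcode_district": "M1"},
    decision="refer to human underwriter",
    explanation="Income below the automated-approval threshold.",
)
```

Storing the digest alongside each record (or chaining digests together) is one simple way to make the audit trail "secure and verifiable": any later edit to a logged query or decision changes its hash.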

Our digital selves

There are thousands of digital profiles of each of us, collated from the data trails we leave online. Acxiom, one of the largest data brokers on the planet, concedes that about 30% of the data held in each profile is incorrect. Given the uncertain quality of these profiles and their increasing use in the public and private sectors, incorrect outcomes could have serious ramifications for our lives. We must ensure that we, as individuals, are not held accountable for things we didn’t do, or for being someone we are not.

To ensure that digital profiles are accurate, and that individuals are empowered to own their digital identity, we recommend the development of a new alternative that ensures we have control of our digital profile while prioritising our privacy.

In Digital self-control we argued that the government should create a Digital Passport system: an independently governed piece of decentralised infrastructure that allows us to securely and privately prove our digital identity online. It should also create an independently run National Data Store, a decentralised data store for our profiles that individuals can access and control through an easy-to-use app or website. While the state would establish the system, its structure and architecture would ensure that the state does not have easy access to it; the state’s role would be restricted to establishing and enforcing the rules and rights needed for the system to work.

As a consequence, users would have direct control over the data, verified attributes and inferences in their profiles. They would be able to differentiate the data they want to share with different types of systems and algorithms. The independent organisation running the National Data Store would also stipulate conditions of access, so that companies, government agencies and municipalities can tap into this identity system in lieu of privately maintained digital profiles and reputation scores.
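A minimal sketch of the kind of per-requester control described above, assuming hypothetical requester categories and profile sections chosen purely for illustration:

```python
# Hypothetical sketch: the profile sections and requester categories are
# assumptions, not a design the proposal specifies.

profile = {
    "verified_attributes": {"age_over_18": True, "uk_resident": True},
    "personal_data": {"address": "10 Example Street", "employer": "Acme Ltd"},
}

# Per-requester sharing choices the user has made in their data store.
permissions = {
    "public_service": {"verified_attributes", "personal_data"},
    "employer": {"verified_attributes"},  # can verify, not profile
    "advertiser": set(),                  # nothing shared at all
}

def release(requester_type: str) -> dict:
    """Return only the profile sections this requester may see."""
    allowed = permissions.get(requester_type, set())
    return {k: v for k, v in profile.items() if k in allowed}

advertiser_view = release("advertiser")
employer_view = release("employer")
```

The point of the sketch is that disclosure decisions live with the user's profile, not with the requesting organisation: an employer can confirm a verified attribute without ever receiving the underlying personal data.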

For companies that choose to continue using their own systems and profiles, algorithmic decisions based on incorrect information or on unverifiable and unreasonable inferences should result in fines for the company deploying the algorithmic decision system and damages for the person affected.

Changing the rules of the data economy

Following these principles and reforms will help ensure that the needs of people and society are placed at the centre of the data economy as it develops. The principles can continue to guide and shape the data economy, even as the technical and legal landscape shifts and develops. The structures, practices, and regulations that develop should put the protection of people first, including their right to privacy. This should always be favoured over facilitating the needs of private companies and their constant quest for profit, or the government’s desire to monitor and track elements of our digital behaviour. The stakes are just too high for us to fail.

Read the full series:

  1. The rise of the data oligarchs: New technology isn’t disrupting power – it’s enforcing it.
  2. Who watches the workers?: New technology can shift power to workers, but also means employers can more easily track their employees.
  3. Controlled by calculations: Algorithms have huge influence over our lives. Could they be reinforcing inequality?
  4. Digital power players: The problem and the power of tech monopolies.
  5. Protection before profit: Principles for the new data economy.
  6. Blocking the data stalkers: Going beyond GDPR to tackle power in the data economy.
  7. Digital self-control: Algorithms, accountability, and our digital selves.
