Social impact of AI algorithms prompts academic and government interest

Algorithms are increasingly taking over decision-making processes from human managers in the modern workplace. This is prompting growing movements to regulate artificial intelligence (AI), amid accusations that it can lead to bias and unfairness.

Critics of the rapidly evolving technology warn that algorithms can exacerbate bias, and that human intervention is needed to avoid relying on automated decisions driven by consumer choice and the pursuit of profit, to the detriment of workers.

Academics and policy makers have expressed concern about fuelling a “race to the bottom” that takes no account of fairness and diversity. Criticism of algorithms has already been widely seen in the platform services sector, where it is contended that drivers can be disconnected, or have their access to desirable work restricted, without any human considering their case.

Dutch gig workers invoke GDPR

For example, gig workers recently invoked articles 15 and 22 of the EU’s General Data Protection Regulation (GDPR) in a Dutch court. Several ride-hailing drivers – who formerly worked for Uber and the Indian firm Ola – brought legal challenges against the platform companies after being dismissed as the result of automated decisions made by algorithms.

Article 22(3) of the GDPR states that the organisation must “implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” Article 15, meanwhile, grants individuals affected by automated decisions the right to be given information about the logic underlying them.

Although the court did not side with the drivers on every aspect of their cases, it did hold in the Ola case that algorithm-based penalties imposed by the company on drivers had a significant effect on them, and that therefore they were entitled to have the logic explained.

And now cases like these are catching the attention of governments.

US mulls a Bill of Artificial Intelligence Rights

In the US the Biden administration is gathering information for a Bill of Artificial Intelligence Rights, which is being developed by the White House’s Office of Science and Technology Policy (OSTP). The OSTP has started the process by making a public request for information about biometric technologies.

Eric Lander, science adviser to the president and director of the OSTP, together with Alondra Nelson, the OSTP’s deputy director for science and society, recently wrote an opinion column on the subject for Wired.

They wrote: “In the United States, some of the failings of AI may be unintentional, but they are serious and they disproportionately affect already marginalised individuals and communities.

“They often result from AI developers not using appropriate data sets and not auditing systems comprehensively, as well as not having diverse perspectives around the table to anticipate and fix problems before products are used (or to kill products that can’t be fixed).”

The White House advisers said that such powerful technologies need to respect American democratic values and abide by the central tenet that everyone should be treated fairly.

“These principles need to be built into the code of the technologies before they are employed,” they said. “Developing a bill of rights for an AI-powered world won’t be easy, but it’s critical.”

EU aims to extend data rights, but Brexit Britain begs to differ

The EU has already regulated the digital sector through the GDPR, adopted in 2016, plus the proposed Data Governance Act and the proposed Digital Services Act. It is also proposing to regulate AI directly through its draft Artificial Intelligence Act.

Meanwhile, in post-Brexit Britain, the UK government is proposing to remove or at least dilute Article 22 of the GDPR, which gives people the right to a human review of decisions made as a result of AI technology.

In September the UK government published its proposed reforms for “a bold new data regime”, Data: a new direction, in which it stated that “the current operation and efficacy of Article 22 is subject to uncertainty”. It added: “It is therefore important to examine whether Article 22 and its provisions are keeping pace with the likely evolution of a data-driven economy and society, and whether it provides the necessary protection.” The government is now seeking further evidence on the potential need for legislative reform.

Warning of ‘algorithmic despotism’

But what happens if the logic behind some of the algorithms’ decisions is based on bias – or even “despotism” in some cases – as certain academics claim?

A recent study, Algorithmic Control in Platform Food Delivery Work, by Kathleen Griesbach, Adam Reich, Luke Elliott-Negri and Ruth Milkman, was published in the American Sociological Association’s journal Socius. It analysed the processes by which food delivery platforms control workers and found significant cross-platform variation in the algorithmic management used to assign and evaluate work.

Certain levels of algorithmic control were so stringent that the researchers referred to them as “algorithmic despotism” – because they regulated the time and activities of workers so closely.

Incentivising certain kinds of behaviour, such as high acceptance rates and good customer reviews, with the promise of additional benefits such as bonus pay and regular customers, is standard practice on food delivery platforms. The concern is not so much the concept itself as the cases where it is taken to extremes.

For example, one platform studied set such stringent requirements for workers to sustain what it called “early access status” – which they needed to be allocated jobs – that workers were afraid of losing this status. They reported having less control over their work and being under intense pressure to accept any given order.

“Instacart thus exerts greater control over workers than other platforms in two key ways: by demanding a greater time commitment by incentivising maintaining early-access status and by making it more tedious and time-consuming to reject orders,” said the study.

“We call this control system, in which workers have little control over either their time or the activities that they perform while working, algorithmic despotism because of the way in which it reproduces the ‘petty tyranny of the bosses’, although now in algorithmic form.”
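
In code terms, the kind of rule the study describes might look something like the following Python sketch. The field names and thresholds here are illustrative assumptions, not Instacart’s actual criteria:

```python
from dataclasses import dataclass


@dataclass
class WorkerStats:
    acceptance_rate: float   # share of offered orders accepted (0-1)
    avg_rating: float        # mean customer rating (1-5)
    hours_this_week: float   # hours committed on the platform this week


def has_early_access(stats: WorkerStats) -> bool:
    """Return True while the worker keeps priority access to new orders.

    Each condition narrows the worker's room to refuse work: dropping
    below any single threshold costs them first pick of the best jobs.
    All thresholds are hypothetical illustrations.
    """
    return (
        stats.acceptance_rate >= 0.90    # hypothetical: rejecting orders is penalised
        and stats.avg_rating >= 4.7      # hypothetical: a few bad reviews suffice to fail
        and stats.hours_this_week >= 25  # hypothetical: a large standing time commitment
    )


# One missed order can flip the worker below the line.
worker = WorkerStats(acceptance_rate=0.88, avg_rating=4.8, hours_this_week=30)
print(has_early_access(worker))  # False
```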

Game-like incentives

Rating systems can also be a strong method of controlling workers, particularly when workers can see only their overall score, not the individual reviews behind it.

“Platforms operate on the model of capturing consumers, so they set up labour pools that are much larger than the number of jobs available. They then offer a variety of incentives to engender competitiveness between the workers,” said Bama Athreya, author of the paper “Bias in, bias out: gender and work in the platform economy”, published by the International Development Research Centre in Canada.

“If jobs are then allocated based on rating systems this is a form of control because individuals are internalising the need to please the client at all costs and self-policing to a point which could be detrimental to the worker,” she wrote.
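
A minimal sketch of the dynamic Athreya describes, assuming a hypothetical platform where the labour pool is larger than the job list and allocation is driven purely by overall score:

```python
def allocate_jobs(ratings: dict[str, float], jobs: list[str]) -> dict[str, str]:
    """Assign each job to the highest-rated unassigned worker.

    ratings maps worker id -> overall score. Workers below the cut-off
    simply receive nothing; the algorithm never rejects anyone explicitly,
    which is what makes the overall score such a powerful lever.
    """
    ranked = sorted(ratings, key=ratings.get, reverse=True)
    return dict(zip(jobs, ranked))  # surplus workers are left unmatched


# Four workers compete for two jobs; half the pool gets no work at all.
pool = {"ana": 4.9, "ben": 4.6, "chi": 4.8, "dev": 4.3}
print(allocate_jobs(pool, ["job-1", "job-2"]))
# {'job-1': 'ana', 'job-2': 'chi'}
```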

According to Athreya, such “gamification” (motivating and engaging people using games-style strategies) can override a worker’s self-control, incentivising them to take on a task that isn’t in their physical best interests.

For example, the code behind the app could nudge a driver to do one more ride at the end of a long shift, when they would normally be too tired and are therefore more at risk of having an accident.
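
As a hedged illustration, such a nudge could be as simple as the following Python sketch. Note what the function consults – proximity to a hypothetical streak bonus – and what it never consults: how long the driver has been on shift. All names and numbers are assumptions for illustration.

```python
from typing import Optional


def should_nudge(rides_today: int, streak_target: int = 10) -> Optional[str]:
    """Return a push-notification message when a driver nears a bonus.

    The decision looks only at engagement incentives; hours on shift and
    fatigue never enter the calculation, which is precisely the critique.
    (The streak mechanic and thresholds are illustrative assumptions.)
    """
    remaining = streak_target - rides_today
    if 0 < remaining <= 2:  # close enough that "one more ride" feels cheap
        return f"Only {remaining} more ride(s) to unlock your streak bonus!"
    return None


# End of a 12-hour shift, nine rides done: the app still nudges.
print(should_nudge(rides_today=9))  # Only 1 more ride(s) to unlock ...
```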

Lack of face-to-face contact reduces behaviour checks and balances

The removal of social intermediaries takes the human element out of recommending and reviewing jobs, Athreya said. Workers in low-wage and precarious roles such as cleaners often get jobs through word of mouth, particularly in low- and medium-income countries, and the face-to-face social aspect of this lends itself to more respectful behaviour.

“If I’m terrible to that cleaning lady she could tell someone who knows her,” wrote Athreya. “Behaviour checks occur in face-to-face interactions that don’t occur when there’s no human connection with that worker and all communication is via that app. It reduces the worker’s ability to negotiate and has a pernicious effect on employment conditions.”

Taking the business model of putting the customer first at all costs to its ultimate extreme can mean that instances where workers have been harassed or abused by customers are ignored.

Athreya pointed out that there are steps that platform companies can take to make working life fairer, such as having a complaints channel for workers as well as customers – and one that comes with a meaningful sanction. Making data more transparent is also key. For example, platforms could give drivers access to information such as their personal reviews and more choice of jobs, rather than continuing to push certain gigs to certain workers, which amplifies bias.

She also suggested that platforms could make prevailing wages more transparent so that workers could compare the financial worth of each gig. They could also make work accessible to a more diverse range of people, such as migrants and those with disabilities.

“The alternatives are all viable as long as there’s no crowding out by the race to the bottom,” she said, arguing that public investment could correct the pernicious impact of the business model.

Government interest responds to citizen actions

“Most governments are concerned with employment and under-employment and have a pro-forma to take action,” Athreya said. “Unhappy people put pressure on governance, but the level of action taken does depend on the extent to which the government is citizen-responsive, and that does often rule out low-income countries.”

That could be why current legislative action is taking place in countries such as Spain – where the recently approved national “Riders Law” requires platform delivery companies to share their algorithms with workers’ representatives – and the US, where bills or resolutions related to AI were introduced in at least 17 states in 2021, and where the issue has now caught the attention of the White House.

As algorithm-driven analysis is increasingly adopted across a whole swathe of industries, more and more people are being exposed to both the benefits and the downsides. In democracies many of those people tend to be voters, hence the increasing interest of governments.

– Lorraine Mullaney, PlatformsIntelligence staff

Image: Gerd Altmann (edited)
