New report urges action to avoid “entrenched bias” in algorithms

CDEI report recommends steps to ensure algorithms are fairer

30th November 2020

The Centre for Data Ethics and Innovation (CDEI) has published its delayed report into the risks of bias in algorithmic decision-making. 

The CDEI focused on the use of algorithms in significant decisions about individuals. The review looks at their use in:

  • policing
  • local government
  • financial services
  • recruitment.

In October the Guardian newspaper published findings from a Freedom of Information request on the use of algorithms.

It found that 100 out of 229 councils have used, or are using, automated decision-making programmes for decisions including benefit claims and the allocation of social housing.

In one council area, the results from an algorithm were only 26% accurate in some instances, which was attributed to inaccurate data entry.

Adrian Weller, Board Member for the CDEI, said: “It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases.

“Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.

“Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The government commissioned the CDEI to make recommendations on how it should address this issue.

The review was delayed by the onset of COVID-19. The final report includes a set of formal recommendations to the government.

Algorithms in decisions about individuals

The CDEI said its aim was to help build the right systems so that algorithms improve, rather than worsen, decision-making.

The sectors were selected as they all involve significant decisions about individuals, and because there is evidence of both the growing uptake of algorithms and historic bias in decision-making in these sectors.

Recommendations

The CDEI states it hopes its recommendations will result in a “step change” in the behaviour of all organisations making life-changing decisions on the basis of data. The ultimate goal is to improve both accountability and transparency. Key recommendations include:

  • Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals.
  • Organisations should be actively using data to identify and mitigate bias. They should make sure they understand the capabilities and limitations of algorithmic tools, and carefully consider how they will ensure the fair treatment of individuals.
  • Government should issue guidance that clarifies the application of the Equality Act to algorithmic decision-making. This should include guidance on the collection of data to measure bias, as well as on the lawfulness of bias mitigation techniques (some of which risk introducing positive discrimination, which is illegal under the Equality Act).
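To illustrate what "using data to identify bias" can mean in practice, here is a minimal sketch (not from the report) of one common approach: comparing selection rates across demographic groups and computing their disparity ratio. The function names, data, and the 0.8 threshold mentioned in the comments are illustrative assumptions, not CDEI recommendations.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate (share of positive outcomes) per group.

    decisions: iterable of (group, outcome) pairs, where outcome
    is True for a positive decision (e.g. shortlisted, approved).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values well below 1.0 flag a disparity worth investigating;
    the US "four-fifths rule" uses 0.8 as a rough threshold.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical recruitment decisions: (applicant group, shortlisted?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(disparity_ratio(rates))  # 0.333... — a large disparity
```

A metric like this only surfaces a disparity; deciding whether it reflects unlawful bias, and how to mitigate it lawfully under the Equality Act, is exactly the kind of question the report asks the government to clarify.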

What next?

The CDEI says it will support industry, regulators and government in taking forward the practical delivery work needed to address the issues identified, and any future challenges that may arise.