The Centre for Data Ethics and Innovation (CDEI) in the UK recently published a review of the risks of bias in Algorithmic Decision Making (ADM) systems.

The study was commissioned by the UK government during the October 2018 Budget. It analyses how bias in algorithms can pose a significant and imminent ethical threat and, based on this analysis, presents several policy recommendations to the government and regulators.

This article analyses some of these policy recommendations, the reasoning that led to them, and whether they can be adopted in the Indian context, given the country's existing legal bodies and infrastructure.

Ensuring diversity for protection against bias

The CDEI document recognises the significance of diversity across the range of roles involved in the development and deployment of ADM systems. To ensure this, the CDEI recommends that the government continue to support and invest in programmes that facilitate greater diversity in these roles.

This point is especially relevant in India, given its wide demographic range across caste, religion, language, sexuality, and state, among other dimensions.

India does not currently have a mechanism to ensure diversity in the development and deployment of ADM systems. However, India has used reservation quotas to ensure the representation of historically and currently disadvantaged groups in government offices and education. Affirmative action similar to the reservation system can be leveraged to promote greater representation in tech firms and, in turn, algorithmic fairness.

Setting up safe guidelines to monitor outcomes and analyse bias

Data is needed to monitor outcomes and identify bias, but access to protected characteristic data can be a tricky affair. The CDEI document calls for working with ‘relevant regulators’ to provide clear guidance on the collection and use of protected characteristic data for monitoring the outcomes of ADM systems. For bias evaluation, it recommends leveraging frameworks like the Secure Research Service of the Office for National Statistics, which allows access only to accredited researchers.
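To make outcome monitoring concrete, the sketch below shows one common way such a check could work: comparing selection rates across a protected characteristic and flagging large disparities. The data, column names, and the four-fifths threshold are illustrative assumptions, not details from the CDEI's recommendations.

```python
# A minimal sketch of outcome monitoring across a protected group;
# the data and the 0.8 ("four-fifths rule") threshold are assumptions.
import pandas as pd

# Hypothetical audit log of an ADM system's decisions
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate (share of positive outcomes) per protected group
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest selection rate over the highest
ratio = rates.min() / rates.max()
print(f"Disparate-impact ratio: {ratio:.2f}")

# Flag for human review when the ratio falls below the rule of thumb
if ratio < 0.8:
    print("Potential bias: outcomes differ substantially across groups")
```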

In India, on the other hand, the Data Security Council of India, a research body set up by NASSCOM, has been committed to creating a safe cyberspace by establishing best practices, standards, and initiatives. The body does extensive research on the data protection frameworks introduced in India in the form of bills and committee reports. This expertise can be used to define robust guidelines for keeping algorithms in check for bias.

Establishing laws to address the resulting discrimination

As of now, the CDEI does not think that the UK needs a new specialised regulator or primary legislation to address discrimination resulting from algorithmic bias. It does, however, recommend more guidance clarifying the ‘Equality Act responsibilities’ of organisations that use ADM systems, covering not only the mitigation of technical bias but also the collection of personal data.

This recommendation not to introduce new legislation is based on several instances in which the current legislation proved effective. For instance, a recent court judgement successfully struck down a facial recognition ADM system deployed in the public sector because sufficient steps had not been taken to establish the system's fairness.

The extant law in India, on the other hand, does not account for the fairness of ADM systems. While there is some legal framework addressing who can use or process data, these rules were not made with ADM systems in mind. One such framework is the Personal Data Protection Bill, 2019, which is still under review by a Joint Parliamentary Committee.

To address the issue of discrimination in general, Article 14 of the Constitution of India does provide for ‘equality before the law’. However, like the UK’s Equality Act, this Article lacks the language to address ‘equality’ in the context of ADM systems.

If the Data Protection Bill is passed with robust guidelines on data processing, along with greater accountability for the entities processing it, it could be combined with Article 14 to form a legal framework addressing discrimination that results from algorithms.

Establishing mechanisms for transparency and explainability of ADM systems

The CDEI document states that the UK government has shown leadership in setting out guidance on AI usage in the public sector. However, it still calls for a mandatory transparency obligation on all public sector organisations using algorithms that have a ‘significant influence on significant decisions affecting individuals’. 

To ensure more transparency in the public sector, the Government of India passed the Right To Information (RTI) Act in 2005, which has previously been used to seek transparency about algorithms as well. However, experts have noted a major lack of legal processes for actually holding an algorithm accountable. In terms of explainability, NITI Aayog, India’s policy think-tank, has highlighted the concept of Explainable AI (XAI): a suite of machine learning techniques that produce more explainable models.
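To illustrate what one such technique looks like in practice, the sketch below applies permutation importance, a common model-agnostic explainability method. The dataset and model are illustrative assumptions, not drawn from any NITI Aayog material.

```python
# A minimal sketch of permutation importance as an XAI technique;
# the dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An otherwise opaque ensemble model standing in for an ADM system
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy; large
# drops reveal which inputs the model's decisions depend on most
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```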

Achieving transparency and explainability for public sector algorithms will require more work beyond the existing frameworks. As in the UK, laws need to be introduced to make public sector algorithms open. This can help industry experts analyse algorithms for fairness and hold governments accountable.

Wrapping Up

In India, NITI Aayog has introduced an oversight body to play an enabling role in AI research and policy in the country. While this article analyses mechanisms in the Indian system that can be leveraged to address algorithmic bias, a thorough review by this body will be important to identify issues specific to the Indian context.
