
Bias in AI is a key topic of concern: Capgemini VP


UNESCO’s framework for ethical AI would have a far-reaching impact on the entire gamut of AI activities, including development, application, ethics, data privacy, and regulation, says Padmashree Shagrithaya, VP, Analytics & Artificial Intelligence – India, Capgemini, in an interview. Excerpts:

What are the challenges involved in building impartial, non-discriminative and sensitive AI platforms?

AI systems, as we know, are built to mimic human intelligence. They are trained on enormous volumes of data about past human actions. While this is useful for reducing human involvement in repetitive decision-making and enabling larger, more complex problem-solving, the approach is fraught with inherent challenges of bias and discrimination. Historical data is riddled with biases and discrimination, sometimes deliberate and often inadvertent. Identifying them, segregating or correcting them, and feeding that back so the algorithm ignores or systematically adjusts for them is one of the biggest challenges.
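The kind of bias detection described above can be made concrete with a simple fairness metric. The sketch below is illustrative only, not Capgemini's method: it computes a demographic-parity ratio over historical decision records, comparing favourable-outcome rates between the best- and worst-served groups. The record layout, column names, and example data are all assumptions for the sake of the example.

```python
def demographic_parity_ratio(records, group_key, outcome_key):
    """Ratio of favourable-outcome rates between the worst- and
    best-served groups; 1.0 means the groups are treated equally."""
    rates = {}
    for group in {r[group_key] for r in records}:
        members = [r for r in records if r[group_key] == group]
        favourable = sum(1 for r in members if r[outcome_key])
        rates[group] = favourable / len(members)
    return min(rates.values()) / max(rates.values())

# Hypothetical historical loan decisions.
loans = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]

ratio = demographic_parity_ratio(loans, "gender", "approved")
print(ratio)  # 0.5: women were approved at half the rate of men
```

A ratio well below 1.0 in the training data is exactly the kind of inherited bias the interview warns about; a model trained naively on such records would learn to reproduce it.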

Is forcing consumer/patient/citizen consent through pop-up windows good enough to move on?

The general view is that the more information we have about individuals, the sharper the algorithms we can build to target them with offers, recommendations, treatments and so on. Recommendation (data-filtering) algorithms have, of late, come under intense scrutiny on ethical grounds, for infringing on personal, sensitive information and for their negative impact. They are useful in many scenarios, such as recommending the right treatment plan for patients or surfacing recommendations consumers are genuinely looking for. But the question, again, is where to draw the line: how much information about an individual amounts to infringing on sensitive personal data?

What are the key things AI developers should keep in mind to avoid gender bias?

Bias in AI is a key topic of concern, and a lot of research is happening around it; gender bias, in particular, has been a focal point. It is important to address it systematically through the AI development lifecycle, from scoping all the way to proving that it has indeed been taken care of. Developers should ensure that the scope is neither myopic nor liable to introduce any intersectional disparity, and should keep in view the overall societal impact the outcomes may have. Data collection and curation, and the methodology involved, play a key role in ensuring a fair and ethical outcome. Accountability and ownership have to be well defined, so that fairness and inclusion are adopted right through the development, validation, and delivery of an AI system. More importantly, all these elements have to be part of a regulatory and compliance framework.
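One concrete data-curation check of the kind implied above is to flag groups that are under-represented in the training data before any model is trained. This is a minimal sketch, not a prescribed methodology; the column name and the 10% representation floor are illustrative assumptions.

```python
from collections import Counter

def underrepresented_groups(records, group_key, floor=0.10):
    """Return groups whose share of the dataset falls below `floor`."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return sorted(g for g, n in counts.items() if n / total < floor)

# Hypothetical training set: 19 male records, 1 female record.
rows = [{"gender": "M"}] * 19 + [{"gender": "F"}] * 1

print(underrepresented_groups(rows, "gender"))  # ['F']
```

A curation gate like this could be run as part of validation, with a failed check blocking training until the data is rebalanced or the gap is explicitly accounted for.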

Are technology developers and deployers sensitive enough to these critical issues?

There is still a great degree of ignorance about how AI can impact our lives. We have seen rigorous campaigns in recent years to create awareness about these sensitivities and to bring about changes in leadership mindset, but a lot of ground still needs to be covered. In this journey, awareness alone is not enough; recognising, accepting, and addressing the issue matters more. We should have legal frameworks that support the holistic inclusion of coders, developers, and decision-makers. Data collection approaches and data distribution have to be streamlined to avoid bias against under-represented groups. More importantly, a collective effort from the AI community is required to create a sustainable ecosystem that provides transparency and earns user confidence, which is of paramount importance for any tech-driven system to work effectively.

UNESCO is working towards a framework to ensure ethical AI...what does it mean for AI developers and users?

UNESCO’s framework for ethical AI will have a far-reaching impact on the AI space. The idea is to have a holistic and evolving framework of values, principles, and actions that can guide societies in dealing responsibly with the known and unknown impact of AI on human beings and society at large. Through this, UNESCO aims to secure a global commitment by individual countries to view AI through ethical lenses. The framework will have an impact on a wide range of areas: sensitivity to privacy and inclusion; transparency, fairness, and non-discrimination; accountability through participation; a mindset change that supports a sustainable AI environment; and a proper balance between business growth and the promotion of human values.

What is Capgemini’s contribution towards building an ethical and fair AI environment?

We, at Capgemini, believe that ethical AI is the cornerstone on which customer trust and loyalty are built. From conceptualisation to development, delivery, and monitoring of AI systems, we take the utmost care to ensure that we build trustworthy AI solutions. Our Code of Ethics for AI has seven key principles, which are the foundational building blocks for our AI systems. We have established an "Ethical AI review board" to review AI projects and independently report on the practices adopted and on adherence to the principles. We also have our own solution frameworks: AI Glass Box, which helps businesses visualise complex topics like transparency, bias, and fairness with simplicity, and the AI fairness tool fAIry, which helps customers identify and measure the fairness of an AI model.

What kind of stakeholder partnership is required to make search-engine technology more secular, neutral, and unbiased?

Search engines have the power to shape user behaviour. Though the entire cycle of collecting, indexing, and ranking content is automated, it is still prone to misuse and unintentional bias. For search engines to be secular, neutral, unbiased, and conflict-free, their results have to be objective, grounded in continuously learning algorithms, and intelligent enough to offer the right outputs. Many of these ambiguities can be resolved, in a way that upholds human values and societal benefit, through a stakeholder partnership involving AI providers, developers, end-users, regulators, compliance agencies, industry experts, and search-engine providers.


October 19, 2021
