Generative AI Needs To Be Explainable, Accurate, and Accountable: Here’s Why

The absence of explainability in AI models poses a risk in almost every industry.

The ability of generative AI applications to generate human-like responses, answer questions, and create art and music, among other things, has quickly amassed millions of users, with ChatGPT becoming the fastest internet service to reach 100 million users.

This has prompted policymakers to consider regulating generative AI and to assess its accompanying risks and opportunities.

Through this piece, we look to outline and address concerns for a range of stakeholders, namely AI developers, AI users (hospitals, companies, etc), and end users (Indians using AI services).

This article is a part of 'AI Told You So', a special series by The Quint that explores how Artificial Intelligence is changing our present and how it stands to reshape our future.


First of All, Why Do We Need To Regulate AI?

As AI technology advances and further integrates into our daily lives, there is a need to address existing regulatory lacunae and adopt a sustainable approach towards regulating it.

Recently, while assuming the chair of the Global Partnership on Artificial Intelligence (GPAI) on India's behalf, Minister of State for Electronics and IT Rajeev Chandrasekhar highlighted that India is working to modernise its cyber law framework to harness the power of AI for citizens and global consumers, while ensuring adequate safeguards to prevent misuse and user harm.

Currently, generative AI services touch upon established laws such as the Information Technology Act, 2000 and the Indian Copyright Act, 1957, as well as the draft Digital Personal Data Protection Bill, 2022.

Tech companies are increasingly exploring the integration of generative AI into their products and services. This has also prompted governments across the globe to explore its application in governance. The potential of generative AI to improve efficiency, provide personalised information, and automate processes is being channelled into novel use cases.

However, the line between the cognitive capabilities of humans and machines is being challenged every day.

Many fear that generative AI technology could change the future of work by automating laborious tasks and mimicking human efforts.

Others have raised concerns about the technology aggravating the existing privacy, security, misinformation, technology-facilitated abuse, and IPR risks arising from the use of AI; some have also filed class-action lawsuits to defend their rights.

Moreover, the data-maximalist nature of large language models, or LLMs (like ChatGPT), means they need a steady stream of data to continually improve their services. This clashes with core data protection principles such as data minimisation and purpose limitation, which require that data collection and retention be kept to what is necessary.

Problems may also arise when exercising data subject rights, including the right to be forgotten, against artificial intelligence tools, especially in the absence of a domestic data protection law.


Where Do We Stand on AI Regulation?

As a response to the expanding role of AI across sectors and in decision-making, governments worldwide are looking to rework their respective AI governance landscapes.

Through its proposed AI Act, the European Union is looking to regulate AI applications broadly by assigning them to four risk categories:

  • Unacceptable risk

  • High-risk

  • Limited risk

  • Minimal risk

Each of these risk categories attracts corresponding requirements, including an obligation for developers to adhere to EU technical standards.

On the other hand, China has taken a narrower approach by only regulating AI algorithms in online recommendation systems. The regulation requires companies to provide moral, ethical, accountable and transparent services and spread “positive energy”.

Other countries, such as Brazil, the UK, the USA, and Canada, are following suit by deliberating their own AI governance regulations. The Government of India has also initiated discussions and introduced strategies and policies in the recent past.

India is preparing to address the rise and mainstreaming of AI services through the upcoming Digital India Act, which is expected to contain AI and algorithmic accountability provisions.

The foundational principles developed by NITI Aayog for the responsible use of AI also focus on transparency and accountability in the design, deployment and management of AI systems.

The adoption, continued use, and benefits of AI systems hinge upon their decisions being reviewable, trustworthy, and contextualised to the problems they are being used to solve.

In effect, AI models need to be:

  • Explainable

  • Accurate

  • Accountable

It is also imperative to note that explainability is not the same as transparency in AI models.

Instead, explainability helps actualise transparency for users and boosts user trust in these increasingly entrenched systems.

What Is Explainable AI?

Explainability in AI is the capacity to express how an AI system reached a particular decision, recommendation, or prediction.

To enable explainability, developers need to:

  • Understand how the AI model works

  • Know the types of data it is trained on

  • Be able to trace a particular AI output back to the specific data points it used to arrive at its conclusions (a toy illustration follows this list)
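The third requirement is the hardest one for modern generative models to meet. As a very rough, hypothetical illustration of what such traceability looks like when it is easy, the Python sketch below uses a deliberately simple nearest-neighbour classifier from scikit-learn, whose predictions can be traced back to the exact training rows that produced them (the dataset and model choice are ours, purely for illustration):

    # Toy example (illustrative only): a nearest-neighbour classifier makes
    # traceability trivial, because every prediction points directly at the
    # training rows that produced it.
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier

    data = load_iris()
    model = KNeighborsClassifier(n_neighbors=3).fit(data.data, data.target)

    query = data.data[60:61]                     # a single input to explain
    prediction = model.predict(query)[0]
    _, neighbour_idx = model.kneighbors(query)   # indices of the 3 rows consulted

    print("Predicted class:", data.target_names[prediction])
    for i in neighbour_idx[0]:
        print(f"  used training row {i}: label={data.target_names[data.target[i]]}")

Large language models offer no comparably direct mapping from an output back to the training data that shaped it, which is precisely why tracing is so difficult at scale.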

Is this an easy task? Not quite. The increasing size of training datasets and the complexity of modern machine learning techniques, particularly deep learning, complicate the process of enabling explainability.

But operationalising explainability in AI has its merits too. Explainable AI can increase revenues, build consumer trust, and surface issues more promptly, allowing for timely interventions by AI companies.


Is Explainable AI the Solution or Detour?

The call for explainability in AI models is by no means new. What has changed, however, is the complexity that new-age AI-powered models bring to the fore.

It was the operationalisation of the European Union's General Data Protection Regulation (GDPR) that added legislative weight to the call for explainable AI (XAI). Article 22 and Recital 71 of the GDPR mandate ‘fair and transparent processing’ of personal data, while also giving EU citizens the right to access ‘meaningful information about the logic involved’ in certain algorithmic decision-making systems.

The National Institute of Standards and Technology (NIST) in the United States has also developed a set of four key principles of explainable AI. The proposed principles describe the fundamental properties of an explainable AI system:

  • Explanation: the system provides evidence or reasons for its outputs

  • Meaningful: the explanation is understandable to its intended users

  • Explanation accuracy: the explanation correctly reflects how the system produced the output

  • Knowledge limits: the system operates only under the conditions it was designed for, or flags when it lacks sufficient confidence in its output

While progress has been made on the legislative front, technical capabilities have yet to enable truly meaningful explainability in machine learning models without sacrificing their usefulness.

Research by the Defense Advanced Research Projects Agency (DARPA) in the US highlighted that the level of explainability of a machine learning model tends to be inversely proportional to its prediction accuracy: as a model's prediction accuracy increases, its explainability tends to decrease.
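This tension is easy to see even on toy data. The sketch below is a generic illustration under assumed settings, not DARPA's methodology: it compares a shallow decision tree, whose entire decision logic can be printed as a handful of rules, with a larger random forest that is typically more accurate but far harder to explain.

    # Illustrative only: an easily explainable model versus a usually more
    # accurate but more opaque one, on a small public dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
    forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

    print("Shallow tree accuracy:", round(tree.score(X_test, y_test), 3))
    print(export_text(tree))  # the tree's full decision logic, human-readable
    print("Random forest accuracy:", round(forest.score(X_test, y_test), 3))
    # The forest is usually the more accurate of the two, but its "explanation"
    # would be the combined logic of 300 separate trees.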

The absence of explainability in AI models poses a risk in almost every industry. In certain fields, such as healthcare, the risks can be especially significant if AI aids in diagnosis, as healthcare professionals must be able to explain to their patients the parameters used in arriving at that diagnosis.

In financial services, for instance, customers may require an explanation for lending decisions and be informed of the specific parameters as well as the weights of parameters that went into deciding their applications.
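A minimal sketch of what such an explanation could look like for a simple, linear credit-scoring model is shown below; the feature names, weights, and approval threshold are entirely hypothetical and are chosen only to illustrate the idea of reporting per-parameter contributions alongside a decision.

    # Hypothetical linear credit-scoring model: the decision is a weighted sum
    # of (normalised) applicant features, so each feature's contribution can be
    # reported back to the customer as part of the explanation.
    weights = {"income": 0.4, "credit_history_years": 0.3,
               "existing_debt": -0.5, "missed_payments": -0.8}

    applicant = {"income": 0.7, "credit_history_years": 0.5,
                 "existing_debt": 0.6, "missed_payments": 0.2}  # values scaled to 0-1

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approve" if score > 0 else "reject"

    print(f"Decision: {decision} (score {score:+.2f})")
    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: {value:+.2f}")

Real credit models are rarely this simple, which is exactly where the explainability burden becomes heavier.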

Furthermore, a lack of explainability also presents another risk for companies – a reluctance in users to adopt AI, which can result in wasted investments and the possibility of falling behind competitors.

However, caveats such as the impact on intellectual property and process disclosures in enabling explainability must also be kept in mind.

Take AI models that are used to give out loans. Requiring such AI-based credit-rating systems to disclose all of their decision-making parameters could allow individuals to game the system, for instance by concealing information relevant to those parameters or by fraudulently presenting themselves as compliant.

Additionally, generative AI models require more contextualised approaches to explainability because they also produce digital artefacts. For instance, the explainability required of a text-to-image generator such as DALL-E will be greater than, and different from, that of a text-based generative AI such as ChatGPT, as images carry more significant cultural and aesthetic contexts.


Going Forward...

The way forward should involve a broad range of stakeholders in the early stages of the development of generative AI models, so that they can shape the design of these systems.

Researchers have previously sought to understand from users of generative AI models what types of explanations they needed for the outputs they received. Such an approach allows explainability to be implemented in the manner from which users benefit most.

As AI seeps deeper into our daily lives, it is imperative for the government and industry to work in collaboration and set technical standards that make AI-enabled decisions more transparent and accountable for the developers as well as the end-users.

It is also important to highlight that explainable AI is a work in progress. The nuances of reaching an optimum level of explainability may thus differ between services and jurisdictions, based on the regulations in force.

Until such an understanding between state and industry is established, generative AI tools should be approached cautiously, owing to the myriad concerns for all stakeholders highlighted in this article.

(Bhavya Birla and Garima Saxena are research associates at The Dialogue, a research and public policy think tank based in New Delhi, India. This is an opinion piece and the views expressed above are the author’s own. The Quint neither endorses nor is responsible for the same.)

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)
