Reshape the governance structures of AI companies

The issue matters because corporations' social objectives are often subsumed by their broader profit-driven goals

Updated - August 16, 2024 02:06 pm IST

Published - August 16, 2024 12:08 am IST


Modern corporate governance regimes in capitalistic and neo-capitalistic economies have traditionally favoured the theory of shareholder primacy. In practice, this means that the objectives of profit generation and wealth creation for shareholders and investors take precedence over other business objectives, including the public good. In contrast, proponents of a stakeholder benefit approach to corporate governance argue that companies should seek to maximise the benefits of all stakeholders.

In recent years, corporations with ostensibly alternative governance models, leaning towards stakeholder capitalism, have become more common. Corporations are increasingly involved in products, technologies and services that cannot be driven solely by the objective of profit-making and that carry a larger social purpose. Generative Artificial Intelligence (AI) is one such instance, where corporations are seeking alternative governance structures to balance the objective of generating profit with that of greater social responsibility.

Data access issues

The development of AI technologies requires access to data, which may, in turn, make it easier to use personal information in ways that undermine privacy. For instance, Meta was asked to pause its plans to train its large language models on public content shared on Facebook and Instagram in the European region, following concerns raised by the Irish privacy regulator. It has also been noted that human prejudices may find their way into AI systems, leading to algorithmic biases with harmful results.

Amazon, for instance, discontinued the use of a recruiting algorithm after discovering that it was plagued by gender bias. Moreover, researchers at Princeton University used AI software to analyse word associations and found that European-American names were perceived as more pleasant than African-American names. These examples demonstrate how AI can perpetuate existing biases and create inequality in opportunities and access. It is therefore important for the creators of AI to act responsibly towards all stakeholders.

These considerations have prompted several companies to alter their corporate governance structures. To counter the risks posed by AI advancements, OpenAI and Anthropic have adopted structures that make public good and the development of responsible AI core objectives, including the creation of public benefit corporations. For instance, Anthropic is governed by a structure called the Long-Term Benefit Trust, composed of five financially disinterested members who have the authority to select and remove a portion of Anthropic's board. Similarly, OpenAI was incorporated as a non-profit but transitioned into a hybrid design by incorporating a capped-profit subsidiary to support its capital-intensive innovation.

Purpose versus profits

While these companies started out with alternative models, when their stated purpose clashed with their profit-generating machinery, monetary interests won. OpenAI, the creator of ChatGPT, found itself embroiled in a corporate governance debacle last year when the company's non-profit board fired its CEO, Sam Altman, over concerns that AI products were being commercialised rapidly at the cost of user safety. The dismissal was strongly criticised by Microsoft, OpenAI's largest investor, and by about 90% of the employees, many of whom hold employee stock options in OpenAI.

Consequently, Mr. Altman was reinstated and the existing board was replaced. The debacle has raised questions about the viability of public benefit corporate structures in the technology industry, which relies on capital infusions from shareholders and investors with deep pockets to fund research and innovation. Recently, there have been rumours that OpenAI may be considering a move to a for-profit governance structure.

In 1970, Milton Friedman famously asserted that the social responsibility of business is to generate profits for its shareholders. These recent events suggest that even in this new age of public benefit corporations, the purported public benefit may be nothing more than disguised profit-seeking. Pursuing social interests at the cost of financial considerations may not be feasible merely through creative governance structures. Rather, such structures further reinforce shareholder primacy, especially in tech companies where even the employees hold stock-based incentives.

Workable strategy

The present accountability structure is based on appointing an independent board and adopting a social benefit objective for the business. These measures are not strong enough to protect against amoral drift, in which a corporation's social objectives are subsumed by its broader profit-driven goals as the market enables unrestricted corporate control. Policymakers need to employ innovative methods of regulating corporations developing AI-based products, methods that balance these conflicting interests.

From a strictly economic perspective, this can be done by targeting three key areas: enhancing the long-term profit gains corporations derive from adopting a public benefit purpose; incentivising managerial compliance with such purposes; and reducing the compliance costs of adopting them. This would require framing ethical standards for the governance of AI product companies, along with adequate regulatory backing through reforms in corporate governance norms. With the increasing involvement of AI in multiple spheres of life, it is imperative that governance models promoting the ethical development of AI alongside the generation of profits be adopted.

Neha Lodha is a Senior Resident Fellow in the Corporate Law and Financial Regulation team at the Vidhi Centre for Legal Policy. Shuchi Agrawal is a Research Fellow in the Corporate Law and Financial Regulation team at the Vidhi Centre for Legal Policy
