This research is part of the technical report series by the Vidhitsa Law Institute of Global and Technology Affairs, also known as VLiGTA® – the research & innovation division of Indic Pacific Legal Research.
Responsible AI has been part of the technology regulation discourse among the AI industry, policymakers, and the legal industry alike. As ChatGPT and other generative AI tools have become mainstream, the call to implement responsible AI principles and ethics measures in some form has become a necessary one to consider.
The problem lies in the limited and narrow approach of these responsible AI guidelines, driven by fiduciary interests and the urge to react to every industry update. This is exactly where this report comes in. The problems with Responsible AI principles and approaches can be encapsulated in these points:
- AI technologies have fungible use cases
- Different AI-related disputes involve different stakeholders, who are not taken into consideration
- Various classes of mainstream AI technologies exist, and not all of them are addressed by every major country in Asia that develops and uses AI technologies
- The role of algorithms in shaping the economic and social value of digital public goods remains unclear and unevenly treated in law
This report is thus both a generalist and a specificity-oriented work, addressing and exploring the necessity of internalising AI explainability measures.