
Privacy regulations for artificial intelligence systems

New A.I. regulations aim to ensure ethical use and protect privacy. The E.U.'s General Data Protection Regulation (GDPR) safeguards personal data and restricts harmful automated decision-making: under Article 22, individuals have the right not to be subject to decisions with legal or similarly significant effects that are made solely by automated means, though some exceptions exist. While A.I. has clear benefits, oversight is needed to uphold rights and prevent unintended harm. GDPR also introduces mandatory Data Protection Impact Assessments, which require organizations to evaluate the risks of new projects, including those using A.I. These assessments help identify when high-risk automated processing calls for additional safeguards or prior consultation with regulators, and they require documenting the automated decision-making process, assessing its necessity and proportionality, and mitigating potential risks.[1]

The United States lacks a single comprehensive regulation like GDPR and instead relies on a sectoral approach. For example, the Fair Credit Reporting Act requires explainable credit scoring (consumers must be told the key factors affecting their scores) and gives consumers access to their credit data[2]. The Equal Credit Opportunity Act prohibits credit discrimination[3]. The Federal Trade Commission has also issued A.I. accountability guidance focused on transparency, explainability, fairness, and security[4].
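
To make "explainable credit scoring" concrete, the sketch below is a minimal, purely illustrative Python example: a toy linear score whose largest negative contributions are surfaced as the key factors behind a decision. The feature names and weights are invented, and real scoring models and adverse-action disclosures are far more involved.

```python
# Hypothetical sketch: deriving "key factor" explanations from a toy
# linear credit score. Feature names and weights are invented; real
# credit models and FCRA disclosures are far more complex.

WEIGHTS = {
    "payment_history": 0.35,      # better history raises the score
    "credit_utilization": -0.30,  # higher utilization lowers the score
    "account_age_years": 0.15,
    "recent_inquiries": -0.20,    # more inquiries lower the score
}

def score_and_explain(applicant: dict, top_n: int = 2):
    """Return a score plus the factors that pulled it down the most."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get)[:top_n]
    return score, reasons

applicant = {
    "payment_history": 0.9,    # all inputs normalized to 0-1 here
    "credit_utilization": 0.8,
    "account_age_years": 0.3,
    "recent_inquiries": 0.6,
}

score, reasons = score_and_explain(applicant)
print(f"score: {score:.2f}; key factors lowering it: {reasons}")
```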

Some examples of high-risk A.I. systems under the E.U.'s proposed A.I. Act include:

  •      A.I. for critical infrastructure like transportation or energy grids
  •      A.I. used in healthcare for diagnosing diseases
  •      A.I. recruitment tools for evaluating and selecting candidates
  •      A.I. used in law enforcement, such as individual risk assessments
  •      A.I. used in education or vocational training

In 2018, New York City enacted a groundbreaking law addressing bias in algorithms used by city agencies, an early municipal step toward holding automated decision systems accountable. It was prompted in part by concern over personality tests used in hiring teachers.[5] California later became the first U.S. state to advance similar legislation. In 2021, the European Commission proposed the first comprehensive framework for regulating high-risk A.I. systems. The E.U. A.I. Act defines risk categories and sets obligations for developers based on an A.I. system's intended use and potential for harm.[6] Critics argue that the Act overly burdens innovators, but regulators maintain that responsible oversight is needed for safety and to avoid unintended discrimination. They point to cases like the Apple Card's credit limit algorithm, which was accused of discriminating by gender. A.I. transparency is difficult to achieve because advanced A.I. systems often operate as black boxes: their internal decision logic cannot easily be inspected, which makes bias hard to detect.
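
To make the idea of an algorithmic bias audit concrete, the following minimal Python sketch computes one common screening metric: the selection-rate (disparate impact) ratio across demographic groups, on invented data. The four-fifths threshold used here is a screening heuristic drawn from U.S. employment discrimination practice, not a legal conclusion, and a real audit examines many more metrics.

```python
# Minimal sketch of one metric a bias audit might compute: the
# selection-rate (disparate impact) ratio across demographic groups.
# The data are invented; a real audit is far broader than this.

from collections import defaultdict

# Each record: (group, selected_by_model) from a hypothetical hiring tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected, total = defaultdict(int), defaultdict(int)
for group, was_selected in decisions:
    total[group] += 1
    selected[group] += was_selected

rates = {group: selected[group] / total[group] for group in total}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    # The "four-fifths rule" flags ratios below 0.8 as potential evidence
    # of disparate impact (a screening heuristic, not proof of bias).
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```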

The consequences for organizations that fail to comply with GDPR and other A.I. regulations can include:

  •       Substantial fines - under GDPR, penalties for the most serious violations can reach €20 million or 4% of an organization's worldwide annual turnover, whichever is higher (a short worked example follows this list)
  •       Reputational damage from data breaches or unethical practices
  •       Suspension of A.I. systems found to be unlawfully discriminatory or risky
  •       Lawsuits from individuals whose rights were violated
  •       Stricter regulatory scrutiny in the future
  •       Loss of consumer and public trust
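
Because the fine ceiling is often summarized loosely, a worked example may help. Under Article 83(5) of GDPR, the cap for the most serious infringements is the greater of €20 million or 4% of total worldwide annual turnover; the minimal Python sketch below, with invented turnover figures, illustrates the calculation.

```python
# Illustration of the GDPR maximum-fine rule for the most serious
# infringements (Art. 83(5)): the cap is the greater of EUR 20 million
# or 4% of total worldwide annual turnover. Turnover figures are invented.

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(gdpr_max_fine(10_000_000_000))  # large firm: cap is EUR 400 million
print(gdpr_max_fine(50_000_000))      # small firm: the EUR 20M floor applies
```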

Beyond regulations, the A.I. research community itself is establishing guidelines and frameworks for the accountable development of artificial intelligence. One influential set of guidelines is the Asilomar A.I. Principles developed at a 2017 conference of A.I. experts.

A.I. ethical principles differ from external regulations in that:

  •      Principles are adopted voluntarily rather than being legally mandatory
  •      They focus on broader issues of accountability, transparency, and avoiding harm
  •      They aim to guide internal governance rather than impose outside restrictions
  •      There are fewer repercussions for violating principles compared to regulations
  •      Principles can complement regulations but lack enforcement mechanisms

However, both strive to ensure that A.I. systems are developed responsibly and align with human values: principles provide high-level guidance, while regulations create binding standards. Many technology companies and organizations have also adopted A.I. ethical principles. For example, Google's A.I. principles commit to developing beneficial, accountable, and socially aware A.I.[7] Microsoft and Facebook have endorsed similar guidelines, and the Organisation for Economic Co-operation and Development (OECD) has published internationally agreed A.I. principles that serve as policy benchmarks. Some critics argue that principles alone will not be enough to constrain harm and that more robust oversight is required. Still, well-designed and well-implemented guidelines can be highly effective for internal governance within an organization.

~ By Mr. Bivek Chaudhary 

(B.A.LL.B, Nepal Law School)


[1] Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation), art. 35, 2016 O.J. (L 119) 1.

[2] 15 U.S.C. § 1681 (2018).

[3] 15 U.S.C. § 1691 (2018).

[4] Commissioner Rohit Chopra, Fed. Trade Comm’n, Prepared Remarks at the Federal Trade Commission's Open Meeting: Strengthening the F.T.C.'s Hand To Protect Consumers 2-3 (April 21, 2022).

[5] N.Y.C., N.Y., Local Law No. 49 (Jan. 11, 2018).

[6] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final (April 21, 2021).

[7] Sundar Pichai, AI at Google: Our Principles, Google Keyword (June 7, 2018), https://blog.google/technology/ai/ai-principles.
