
Privacy regulations for artificial intelligence systems

New A.I. regulations aim to ensure ethical use and protect privacy. The E.U.'s General Data Protection Regulation (GDPR) safeguards personal data and restricts harmful automated decision-making: individuals have the right not to be subject to decisions that produce legal or similarly significant effects when those decisions are based solely on automated processing, subject to limited exceptions. While A.I. offers clear benefits, oversight is needed to uphold rights and prevent unintended harm. GDPR also mandates Data Protection Impact Assessments, requiring organizations to evaluate the risks of new projects, including those using A.I. These assessments help determine when high-risk A.I. systems should be subject to further regulation; they require documenting the automated decision-making process, assessing its necessity and proportionality, and mitigating potential risks.[1]

The United States lacks a single, comprehensive regulation like GDPR and instead relies on a sectoral approach. For example, the Fair Credit Reporting Act requires explainable credit-scoring algorithms and gives consumers access to their credit data,[2] while the Equal Credit Opportunity Act prohibits credit discrimination.[3] The Federal Trade Commission has also issued A.I. accountability guidance focused on transparency, explainability, fairness, and security.[4]

Some examples of high-risk A.I. systems under the E.U.'s proposed A.I. Act include:

  •      A.I. for critical infrastructure like transportation or energy grids
  •      A.I. used in healthcare for diagnosing diseases
  •      A.I. recruitment tools for evaluating and selecting candidates
  •      A.I. for law enforcement activities such as predictive risk assessment
  •      A.I. used in education or vocational training

In 2017, New York City passed a groundbreaking law mandating audits of algorithms used by city agencies for bias. This municipal regulation was an early step toward holding automated decision systems accountable, prompted in part by concern over personality tests used in hiring teachers.[5] California later became the first U.S. state to advance similar legislation. In 2021, the European Commission proposed the first comprehensive framework for regulating high-risk A.I. systems. The E.U. A.I. Act defines risk categories and sets obligations for developers based on an A.I. system's intended use and potential for harm.[6] Critics argue that the Act overly burdens innovators, but regulators maintain that responsible oversight is needed for safety and to avoid unintended discrimination, pointing to cases such as Apple's credit-limit algorithm inadvertently discriminating by gender. Achieving A.I. transparency is difficult because advanced A.I. systems often operate as "black boxes": their internal workings cannot easily be inspected, which makes bias hard to detect.

The consequences for organizations that fail to comply with GDPR and other A.I. regulations can include:

  •       Substantial fines - under GDPR, penalties for non-compliance can reach €20 million or 4% of an organization's annual global turnover, whichever is higher
  •       Reputational damage from data breaches or unethical practices
  •       Suspension of A.I. systems found to be unlawfully discriminatory or risky
  •       Lawsuits from individuals whose rights were violated
  •       Stricter regulatory scrutiny in the future
  •       Loss of consumer and public trust

Beyond regulations, the A.I. research community itself is establishing guidelines and frameworks for the accountable development of artificial intelligence. One influential set of guidelines is the Asilomar A.I. Principles developed at a 2017 conference of A.I. experts.

A.I. ethical principles differ from external regulations in that:

  •      Principles are adopted voluntarily rather than being legally mandatory
  •      They focus on broader issues of accountability, transparency, and avoiding harm
  •      They aim to guide internal governance rather than impose outside restrictions
  •      There are fewer repercussions for violating principles compared to regulations
  •      Principles can complement regulations but lack enforcement mechanisms

However, both strive to ensure A.I. systems are developed responsibly and align with human values. Principles provide high-level guidance, while regulations create binding standards. Many technology companies and organizations have also adopted A.I. ethical principles. For example, Google's A.I. principles commit the company to developing beneficial, accountable, and socially aware A.I.[7] Microsoft and Facebook have endorsed similar guidelines, and the Organisation for Economic Co-operation and Development (OECD) has published internationally agreed-upon A.I. principles that serve as policy benchmarks. Some critics argue that principles alone will not be enough to constrain harm and that more robust oversight is required. Still, well-designed and well-implemented guidelines can be effective for internal governance within an organization.

~ By Mr. Bivek Chaudhary 

(B.A.LL.B, Nepal Law School)


[1] Supra Note 1

[2] 15 U.S.C. § 1681 (2018).

[3] 15 U.S.C. § 1691 (2018).

[4]  Commissioner Rohit Chopra, Fed. Trade Comm’n, Prepared Remarks at the Federal Trade Commission's Open Meeting: Strengthening the F.T.C.'s Hand To Protect Consumers 2-3 (April 21, 2022).

[5] N.Y.C., N.Y., Local Law No. 49 (Jan. 11, 2018).

[6] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final (Apr. 21, 2021).

[7] Sundar Pichai, AI at Google: Our Principles, Google Keyword (June 7, 2018), https://blog.google/technology/ai/ai-principles.
