
Privacy regulations for artificial intelligence systems

 New A.I. regulations aim to ensure ethical use and protect privacy. In the E.U., the GDPR safeguards personal data and restricts harmful automated decision-making: individuals have the right not to be subject to decisions with legal or similarly significant effects made solely by automated means, though limited exceptions apply. While A.I. offers clear benefits, oversight is needed to uphold rights and prevent unintended harm. The GDPR also introduces mandatory Data Protection Impact Assessments, requiring organizations to evaluate the risks of new projects, including those that use A.I. These assessments help determine when high-risk A.I. systems warrant further safeguards; they require documenting the automated decision-making process, assessing its necessity and proportionality, and mitigating potential risks.[1] The United States lacks a single comprehensive regulation like the GDPR and instead relies on a sectoral approach. For example, the Fair Credit Reporting Act requires explainable credit-scoring algorithms and gives consumers access to their credit data.[2] The Equal Credit Opportunity Act prohibits discrimination in credit decisions.[3] The Federal Trade Commission has also issued A.I. accountability guidelines focused on transparency, explainability, fairness, and security.[4]

Some examples of high-risk A.I. systems under the E.U.'s proposed A.I. Act include:

  •      A.I. for critical infrastructure like transportation or energy grids
  •      A.I. used in healthcare for diagnosing diseases
  •      A.I. recruitment tools for evaluating and selecting candidates
  •      A.I. for law enforcement activities such as individual risk assessment
  •      A.I. used in education or vocational training

In late 2017, New York City passed a groundbreaking law mandating bias audits of algorithms used by city agencies, the first municipal regulation to hold automated decision systems accountable. It was prompted in part by concern over personality tests used in hiring teachers.[5] California later became the first U.S. state to advance similar legislation. In 2021, the European Commission proposed the first comprehensive rules for high-risk A.I. systems. The E.U. A.I. Act defines risk categories and sets obligations for developers based on an A.I. system's intended use and potential for harm.[6] Critics argue that the Act places an undue burden on innovators, but regulators maintain that responsible oversight is needed to ensure safety and avoid unintended discrimination. They point to cases such as the Apple Card's credit-limit algorithm, which allegedly discriminated by gender. Transparency remains difficult because advanced A.I. systems often behave as black boxes: their internal workings cannot be readily inspected, which makes bias hard to detect.

The consequences for organizations that fail to comply with GDPR and other A.I. regulations can include:

  •       Substantial fines - under the GDPR, penalties can reach €20 million or 4% of an organization's annual global turnover, whichever is higher
  •       Reputational damage from data breaches or unethical practices
  •       Suspension of A.I. systems found to be unlawfully discriminatory or risky
  •       Lawsuits from individuals whose rights were violated
  •       Stricter regulatory scrutiny in the future
  •       Loss of consumer and public trust
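The fine ceiling mentioned in the first bullet can be made concrete with a short arithmetic sketch. Under GDPR Article 83, the higher tier of administrative fines is capped at the greater of €20 million or 4% of worldwide annual turnover; the turnover figures below are hypothetical examples, not drawn from any real case.

```python
# Illustrative sketch of the GDPR Article 83 higher-tier fine ceiling:
# the greater of EUR 20 million or 4% of worldwide annual turnover.
# The turnover figures used below are hypothetical.

def gdpr_max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the maximum possible higher-tier GDPR fine in euros."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

for turnover in (100_000_000, 5_000_000_000):
    print(f"Turnover EUR {turnover:,}: "
          f"max fine EUR {gdpr_max_fine_eur(turnover):,.0f}")
```

For a smaller firm the flat €20 million floor dominates, while for a large multinational the 4% turnover cap is what scales the exposure.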

Beyond regulations, the A.I. research community itself is establishing guidelines and frameworks for the accountable development of artificial intelligence. One influential set of guidelines is the Asilomar A.I. Principles developed at a 2017 conference of A.I. experts.

A.I. ethical principles differ from external regulations in that:

  •      Principles are adopted voluntarily rather than being legally mandatory
  •      They focus on broader issues of accountability, transparency, and avoiding harm
  •      They aim to guide internal governance rather than impose outside restrictions
  •      There are fewer repercussions for violating principles compared to regulations
  •      Principles can complement regulations but lack enforcement mechanisms

However, both strive to ensure A.I. systems are developed responsibly and align with human values. Principles provide high-level guidance, while regulations create obligatory standards. Many technology companies and organizations have also adopted A.I. ethical principles. For example, Google's A.I. principles commit to developing beneficial, accountable, and socially aware A.I.[7] Microsoft and Facebook have endorsed similar guidelines. The Organization for Economic Cooperation and Development (OECD) has published internationally agreed-upon A.I. principles that serve as policy benchmarks. Some critics argue that principles alone will not be enough to constrain harm and that more robust oversight is required. However, well-designed and well-implemented guidelines can be highly effective for internal governance within an organization.

~ By Mr. Bivek Chaudhary 

(B.A.LL.B, Nepal Law School)


[1] Supra Note 1

[2] 15 U.S.C. § 1681 (2018).

[3] 15 U.S.C. § 1691 (2018).

[4] Commissioner Rohit Chopra, Fed. Trade Comm’n, Prepared Remarks at the Federal Trade Commission's Open Meeting: Strengthening the F.T.C.'s Hand To Protect Consumers 2-3 (Apr. 21, 2022).

[5] N.Y.C., N.Y., Local Law No. 49 (Jan. 11, 2018).

[6] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021).

[7] Sundar Pichai, AI at Google: Our Principles, Google Keyword (June 7, 2018), https://blog.google/technology/ai/ai-principles.
