New A.I. regulations aim for ethical use and privacy protection. The E.U.'s GDPR safeguards personal data and restricts harmful automated decisions: it provides that individuals cannot be subjected to significant decisions made solely by A.I., though some exceptions exist. While A.I. has benefits, oversight is needed to uphold rights and prevent unintended harm. GDPR also introduces mandatory Data Protection Impact Assessments, which require organizations to evaluate the risks of new projects, including those using A.I. These assessments help determine when high-risk A.I. systems warrant additional safeguards, and they require documenting the automated decision-making process, assessing its necessity and proportionality, and mitigating potential risks.[1]
The United States lacks a single regulation like GDPR and instead relies on a sectoral approach. For example, the Fair Credit Reporting Act requires explainable credit scoring and gives consumers access to their credit data.[2] The Equal Credit Opportunity Act prohibits credit discrimination.[3] The Federal Trade Commission has also issued A.I. accountability guidance focused on transparency, explainability, fairness, and security.[4]
Some examples of high-risk A.I. systems under the E.U.'s proposed A.I. Act include:
- A.I. for critical infrastructure like transportation or energy grids
- A.I. used in healthcare for diagnosing diseases
- A.I. recruitment tools for evaluating and selecting candidates
- A.I. for law enforcement activities like risk assessments
- A.I. used in education or vocational training
In 2017, New York City passed a groundbreaking law mandating audits for bias in algorithms used by city agencies. This municipal regulation was an early step toward holding automated decision systems accountable, prompted by concern over personality tests used in hiring teachers.[5] California later became the first U.S. state to advance similar legislation.
In 2021, the European Commission proposed the first comprehensive rules regulating high-risk A.I. systems. The E.U. A.I. Act defines risk categories and sets obligations for developers based on an A.I. system's intended use and potential harm.[6] Critics argue that the E.U. A.I. Act overly burdens innovators, but regulators believe responsible oversight is needed for safety and to avoid unintended discrimination. They point to cases like Apple's credit limit algorithm allegedly discriminating by gender. A.I. transparency is difficult because advanced A.I. systems often operate as black boxes: their internal workings cannot be readily inspected, which makes bias hard to detect.
The consequences for organizations that fail to comply with GDPR and other A.I. regulations can include:
- Substantial fines: under GDPR, penalties for non-compliance can reach 4% of an organization's global annual revenue
- Reputational damage from data breaches or unethical practices
- Suspension of A.I. systems found to be unlawfully discriminatory or risky
- Lawsuits from individuals whose rights were violated
- Stricter regulatory scrutiny in the future
- Loss of consumer and public trust
Beyond regulations, the A.I. research community itself is establishing guidelines and frameworks for the accountable development of artificial intelligence. One influential set of guidelines is the Asilomar A.I. Principles, developed at a 2017 conference of A.I. experts.
A.I. ethical principles differ from external regulations in that:
- Principles are adopted voluntarily rather than being legally mandatory
- They focus on broader issues of accountability, transparency, and avoiding harm
- They aim to guide internal governance rather than impose outside restrictions
- There are fewer repercussions for violating principles compared to regulations
- Principles can complement regulations but lack enforcement mechanisms
However, both strive to ensure A.I. systems are developed responsibly and align with human values. Principles provide high-level guidance, while regulations create obligatory standards. Many technology companies and organizations have also adopted A.I. ethical principles. For example, Google's A.I. principles commit to developing beneficial, accountable, and socially aware A.I.[7] Microsoft and Facebook have endorsed similar A.I. guidelines. The Organization for Economic Cooperation and Development (OECD) has published internationally agreed-upon A.I. principles that serve as policy benchmarks. Some critics argue that principles alone will not be enough to constrain harm and that more robust oversight is required. However, well-designed and well-implemented guidelines can be highly effective for internal governance within an organization.
~ By Mr. Bivek Chaudhary
(B.A.LL.B, Nepal Law School)
[1] Supra Note 1
[2] 15 U.S.C. § 1681 (2018).
[3] 15 U.S.C. § 1691 (2018).
[4] Commissioner Rohit Chopra, Fed. Trade Comm'n, Prepared Remarks at the Federal Trade Commission's Open Meeting: Strengthening the F.T.C.'s Hand To Protect Consumers 2-3 (April 21, 2022).
[5] N.Y.C., N.Y., Local Law No. 49 (Jan. 11, 2018).
[6] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (April 21, 2021).
[7] Sundar Pichai, AI at Google: Our Principles, Google Keyword (June 7, 2018), https://blog.google/technology/ai/ai-principles.