Roundtable #20 | Artificial Intelligence: Adopting Cross-National Rights-Based Frameworks
The views in these articles are those of the individual authors and not of the Columbia Undergraduate Law Review
Section 1: AI: Adjudicator?
The adjudicative capacity of human judges, and hence the outcome of a process of legal dispute resolution, can be impacted by a great number of circumstances – the judge’s reliance on intuition, their personal beliefs, and even how long ago they last ate. [1] The proposition of (metaphorically) seating Artificial Intelligence (AI) on the bench is thus, prima facie, attractive, because, surely, computers can be more objective than human beings. A commonly held view among proponents of using AI in an adjudicatory role is that algorithms are more objective because they are thought to overlook ancillary characteristics like gender and race, which are generally not relevant to the legal question at hand and towards which humans hold implicit biases. [2] Moreover, using AI in an adjudicatory capacity could potentially lower the costs of administering justice and streamline dispute resolution; indeed, this was the motivation behind Estonia’s use of AI to resolve certain small-claims cases. [3] AI then seems like a panacea to the twin ills of inefficiency and non-objectivity in judicial decision making. However, there are compelling reasons, both pragmatic and principled, against involving AI in the judicial decision making process.
Firstly, the idea that AI, as it stands today, is objective, or free of bias in any way, does not stand up to sustained scrutiny. This is because of the way AI learns and operates. AI learns by processing vast amounts of data – in this context, hundreds of years of statutes and court decisions – and then makes decisions based on the patterns it identifies in that dataset. Undoubtedly, a number of those decisions will be prejudiced in some way, and these prejudices can be perpetuated by an AI judge. Such prejudices enter AI and allied technologies through the data originally fed in, and “programmers and others can replicate bias without intending to do so.” [4] Removing these biases from AI, or even from simpler allied technologies, has proven to be a Herculean task because biases are often not immediately obvious. [5] The role of the programmer here is therefore different from that of a programmer providing, for example, a search tool, because of the constitutional importance of an unbiased judge. Removing bias would require identifying it, after which the programmer would have to alter the algorithm to eliminate it. Making this change – in effect, tinkering with a ‘judge’s mind’ – gives the programmer influence over an adjudicator (or an advisor to one), which raises questions about the neutrality of AI and the objectivity of the decisions it reaches. The alternative is allowing bias to persist, which is surely contrary to even the most bare-bones conceptualization of justice.
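To make this mechanism concrete, the short Python sketch below trains a simple classifier on synthetic “historical” rulings that were partly driven by a protected attribute. All data, variable names, and coefficients are invented for illustration; the point is only that, even when the protected attribute is removed from the inputs, a correlated proxy lets the learned model reproduce the disparity.

```python
# Illustrative only: synthetic data standing in for a biased historical record.
# Every variable name and coefficient here is an assumption made for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

relevant = rng.normal(size=n)            # a legally relevant factor (hypothetical)
protected = rng.integers(0, 2, size=n)   # a protected attribute (hypothetical)

# Synthetic "historical" adverse rulings that were partly driven by the protected
# attribute, standing in for prejudiced decisions in the training record.
adverse_ruling = (0.8 * relevant + 0.8 * protected + rng.normal(size=n)) > 0.5

# The protected attribute is dropped from the model's inputs, but a correlated
# proxy (e.g., a neighborhood-level variable) carries the same information.
proxy = protected + rng.normal(scale=0.3, size=n)
features = np.column_stack([relevant, proxy])

model = LogisticRegression().fit(features, adverse_ruling)
predictions = model.predict(features)

for group in (0, 1):
    rate = predictions[protected == group].mean()
    print(f"predicted adverse-ruling rate for group {group}: {rate:.2f}")
```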
The most well-known example of AI exacerbating bias comes from its use in advising a human judge on sentencing in the American case of Wisconsin v. Loomis (2016). [6] The defendant, Loomis, pleaded guilty to two of the five charges leveled against him. The court at first instance referred to a risk assessment from the proprietary software COMPAS to decide the defendant’s sentence. A ProPublica analysis of COMPAS found evidence of significant bias: black defendants were classified as posing a higher recidivism risk than they truly did. [7] Importantly, COMPAS is proprietary software whose exact workings and algorithm are unknown, yet it was still used in an advisory fashion to reach the sentencing decision. [8] This undermines the legitimacy of the justice system, as COMPAS appears both biased and opaque.
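The kind of disparity ProPublica reported can be illustrated schematically: among defendants who did not go on to reoffend, compare how often each group was nonetheless labeled high risk. The sketch below uses invented placeholder records, not COMPAS data, and is intended only to show the shape of that comparison.

```python
# Schematic version of the comparison ProPublica ran: among defendants who did
# not reoffend, how often was each group labeled "high risk"?
# The records below are invented placeholders, not real COMPAS data.
from dataclasses import dataclass

@dataclass
class Record:
    group: str        # demographic group label (hypothetical)
    high_risk: bool   # whether the tool classified the defendant as high risk
    reoffended: bool  # whether recidivism was observed during follow-up

records = [
    Record("A", True, False), Record("A", True, False), Record("A", False, True),
    Record("B", False, False), Record("B", True, False), Record("B", False, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nonetheless labeled high risk."""
    non_reoffenders = [r for r in rows if not r.reoffended]
    flagged = [r for r in non_reoffenders if r.high_risk]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else float("nan")

for group in ("A", "B"):
    rate = false_positive_rate([r for r in records if r.group == group])
    print(f"group {group}: false positive rate among non-reoffenders = {rate:.2f}")
```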
The Supreme Court of Wisconsin held that the algorithm’s use did not violate due process, but that the risk assessment should be accompanied by an advisement warning judges about the concerns raised about COMPAS, including the fact that it was developed for post-sentencing applications, that it disproportionately classes minority offenders as having a higher risk of recidivism, and that it is proprietary in nature. [9] COMPAS was purported to be a check on judicial bias; the court is now asking judges to consider the algorithm’s bias. [10] However, considering the tendency to trust what seems to be empirical data, it is unlikely that the advisement will achieve its intended effect, and indeed the opposite may happen. [11]
On a principled basis, the judgment is even harder to justify – it is a clear dilution of due process, and perhaps even of the fundamental principles behind the rule of law. If due process is understood as signifying “the effective right to contest a specific interpretation of the law,” the problem arises because it is not possible to ascertain how COMPAS reached a conclusion. [12] By allowing programmers to bypass the expectation of explanation we place upon our judges, an element of arbitrariness is introduced into the system. The opacity of proprietary software means that AI developers have, in effect, an unexaminable influence over how judicial decisions are made and what those decisions are.
A related and inevitable consequence of any judicial AI operating on principles extrapolated from precedent is the development of law that is not organic – a law that “stifles development.” [13] This conservatism is an inevitable consequence of using artificial means because such means necessarily rely on precedent: computers do not possess an understanding of semantics (meaning), only of syntax (a formal structure of operation). [14] Consequently, analysis of previously obtained data plays an important role in decision making. [15] Such static, unchanging law is at odds with how democracies understand the law as “a coherent, organic body of rules and principles with its own internal logic.” [16] Algorithms may thus limit legal evolution. In common law systems, the precedential value of past cases combined with AI’s conservative proclivity could lead to future decisions that are incompatible with contemporary ideas of equity and justice, and to judgments that do not appreciate strong policy arguments.
AI, and any allied technologies used in an adjudicatory capacity, ought to be able to interpret statute and previous judgments to the high degree of accuracy that human judges can – alongside being able to produce an explanation for that interpretation that can be questioned in accordance with due process. This hermeneutic ability is of the highest importance given the centrality of interpretation to law as it exists today. After all, legal practice “seems engrossed in a hermeneutical approach to legal texts.” [17] The need for decision-making artificial means to demonstrate advanced hermeneutic capabilities arises from legal systems being “simultaneously confronted with the ambiguity that is inherent in natural language and with the need for legal certainty” (Dworkin 1982). [18] However, AI (or allied technologies) possessing such hermeneutic and explanatory capabilities has proven elusive because of the semantics-syntax problem discussed earlier – judges understand both the connotations and denotations of a statute or precedent; the former is beyond AI. Once this is understood, it becomes readily evident that disclosure statements like the one in Loomis are not fit for purpose if the AI has an adjudicatory role, because they do not make clear that AI is incapable of comprehending the full meaning of a rule in an earlier, binding case.
Therefore, while AI might carry lower monetary costs, it is likely less competent than human judges at the essentials of legal reasoning. Moreover, it may exacerbate bias across the legal system, and it therefore should not be employed in adjudicatory decision making. This is not to say that the legal system has no role for AI at all – it can be used quite effectively by legal professionals in carrying out legal research and logistical functions. [19] However, AI should stay entirely out of the decision making arena because its use is incompatible with fundamental conceptions of the rule of law and allied principles like due process. AI is a phenomenon with distinct social and cultural impacts on the fabric of law and of society more generally.
by Medhansh Kumar
Section 2: Sociocultural Implications of Artificial Intelligence
Despite its inception a mere fifty years ago, artificial intelligence today is prolific and often intrusive. It lives within our phones, drives our cars, and streamlines our bureaucracies, abetting quotidian societal operations on a magnitude that scholars predict will yield another epochal shift in the digital frontier, comparable to that of the smartphone era. Given that AI algorithms draw from existing reservoirs of data to determine future action, their proliferation will extend the prejudices that exist within current social landscapes in perpetuity. It is thus pertinent to consider how novel applications of AI bear on attempts to regulate unintentional discrimination, and how such cases should be litigated when they occur under algorithmic purview.
Economic, social, and cultural (ESC) rights encompass rights to health, education, labor conditions, and participation in cultural life and other creative activities. First defined and covered by the UN declaration on ESC rights, most ESC rights are positive rights, meaning that they typically require both active state intervention and an “extensive [delegation] of resources” to be upheld. [1] Within the policymaking process, AI systems analyze existing data to identify specific areas of need, track and expedite the implementation of programs to ameliorate such need, and altogether enable a more responsive and attentive form of government. In practice, these processes assume forms like Australia’s recent Covid-19 “syndromic surveillance” system, which gathers data from symptomatic patients to identify major Covid-19-induced public health concerns, or New York City’s 1990s CompStat system, which mapped crime by neighborhood and redirected resources to regions of need, ultimately reducing the city’s murder rate by 70%. [2] Quantifiable, statistical results like these generate a popular conception that AI systems are more objective and efficient than human decision-making processes.
However, a crucial component of data-driven policymaking relies on the quality of the historical data that AI algorithms use to recommend courses of action. According to Professor Barry Friedman, existing data sources draw from historical trends to curate future action, which reproduces conservative sociocultural phenomena at the risk of reifying social prejudices. [3] Indeed, in the U.S., risk-assessment algorithms used to determine whether those offered bail would reoffend were projected to reduce prison populations by 40%. In practice, however, such programs, notably New Jersey’s risk-assessment program, reduced prison populations but preserved the “racial and ethnic makeup” of those who remained. [4] The same inequities can be found in predictive policing practices that seek to forecast who is likely to commit crime and where it might be committed. In New York City, the CompStat system drew from centuries of police arrest records, arrests concentrated disproportionately in low-income areas, which then pushed officers back into the same over-policed and over-criminalized communities. [5] The lack of standardized data-collection procedures also introduces errors: for example, over a hundred retired New York Police Department officers reported being put under “intense pressure” to manipulate crime statistics and “produce annual crime reductions.” [6] Altogether, the bias-in, bias-out phenomenon promises to widen America’s chasmic inequalities, whether intentionally or otherwise. Policy measures that make use of AI algorithms must be accompanied by closer legal scrutiny to achieve equitable results.
A slew of legal cases have since targeted the effects of AI use. Most of these effects are felt sharply in the housing industry, which makes heavy use of tenant screening algorithms nourished by centuries of “de jure segregation, Jim Crow laws, redlining, restrictive covenants, white flight and other explicitly and implicitly racist laws, policies, and actions” that make it likely for data-driven algorithmic rules to replicate the industry’s history of racist results. [7] Codified, algorithmic discriminatory behavior is likely to be amplified through algorithmic deep learning, the process by which algorithms are fed input data and reproduce its patterns by emulating the human brain’s decision-making processes. The process is exemplified by Microsoft’s AI chatbot, Tay, which conversed with users on Twitter for 16 hours before quickly reproducing many of the antisemitic, misogynist, and racist sentiments on the platform. [8] In the housing industry, algorithms could tend to disadvantage prospective tenants who come from neighborhoods with high eviction rates or immigrants who seek housing through familial ties rather than traditional negotiations with a landlord. [9] Most mortgage lenders currently utilize automated underwriting, a process in which software analyzes applicant data to make determinations regarding their mortgage applications. [10] The disparate impact standard concerns neutral policies that produce unintentional forms of discrimination. [11] The prevalent overlap between eviction or blacklisting rates and membership in specific protected classes, such as minority and immigrant communities, makes the disparate impact standard particularly pertinent to the housing industry. Without regulation under a disparate impact standard, each extension of AI to new domains of the housing industry under the guise of objectivity will only continue to deny prospective tenants equal access to housing.
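For illustration, one rough screening heuristic for flagging possible disparate impact is to compare selection rates across groups, as in the sketch below. The 80% ratio threshold is borrowed from EEOC employment guidance and the applicant counts are hypothetical; this is not the Fair Housing Act’s legal test, only a way of making the statistical idea concrete.

```python
# A rough screening heuristic for possible disparate impact: compare approval
# rates across groups. The 80% ratio threshold is borrowed from EEOC employment
# guidance purely for illustration; it is not the Fair Housing Act's legal test,
# and the applicant counts below are hypothetical.
applications = {
    "group_a": {"applied": 400, "approved": 260},
    "group_b": {"applied": 300, "approved": 135},
}

rates = {g: c["approved"] / c["applied"] for g, c in applications.items()}
benchmark = max(rates.values())  # highest group approval rate

for group, rate in rates.items():
    ratio = rate / benchmark
    status = "flag for review" if ratio < 0.8 else "within the 80% threshold"
    print(f"{group}: approval rate {rate:.2f}, ratio to benchmark {ratio:.2f} -> {status}")
```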
A recent case in housing equity, Massachusetts Fair Housing Center and Housing Works Inc. v. United States Department of Housing and Urban Development and Ben Carson (2020), directly considers whether AI can be subject to disparate impact claims. Specifically, the case probes whether, under the Department of Housing and Urban Development’s (HUD) “final rule,” issued in August 2019 and amended in September 2020, the Fair Housing Act’s (FHA) disparate impact standard stands in cases that use algorithmic tools. [12] The FHA’s disparate impact standard holds housing providers accountable for the implications of their industry-standard models; without it, lenders are immunized from disparate impact liability, which altogether paralyzes the FHA’s ability to fight de facto housing segregation. [13]
One of the sensitive points in HUD’s final rule is how it shifts the burden of proof from housing providers onto the plaintiffs alleging discrimination. Plaintiffs (afflicted tenants) must show a “robust causality” tying “a defendant’s particular policy(s)” to an alleged disparate impact. [14] The court writes specifically that “[r]acial imbalance... does not... establish a prima facie case of disparate impact,” meaning evidence of racial disparities alone does not satisfy a disparate impact claim. [15] Plaintiffs must instead probe through algorithmic inputs to identify a source of discrimination, a task many ordinary citizens lack the expertise to accomplish. [16] The more complex an algorithm, the higher its performance efficacy tends to be, which further compounds the difficulty. Plaintiffs must then prove that another policy exists which would serve the same interests in a legitimate and nondiscriminatory manner. However, HUD’s broad guidelines shield from legal scrutiny over its disparate impact any “industry standard” algorithm “analyzed by a neutral third party who determined the model was empirically derived” or that is in other ways a “statistically sound algorithm.” [17] The vague and legally contentious notion of “industry standard” leaves plaintiffs with few other options to choose from. The Final Rule, as it stands, thus affords virtually no transparency for tenants, who require it to establish any causality, much less a “robust” one, between the algorithmic biases in their case and the disparate impact they face.
HUD updated its 2019 rule in September of 2020, after receiving over 45,000 public comments on its final rule. In its response, HUD argued that its revised burden-shifting approach “provides more detail and clarity” in amending the 2013 version of the Rule, which it deemed “inappropriate...” in requiring the defendant to prove the necessity of the policy in achieving a “substantial, legitimate, nondiscriminatory interest.” [18] HUD persists in asserting that “it is ultimately the plaintiff's burden to prove a case.” [19] Clause 100.500(d)(2)(i), which received the most public attention (stipulating that plaintiffs will fail to meet their burden of proof if defendants show that the policy predicts an outcome, serves their valid interest, and does not have a disparate impact), was revised to include language that expands defendants’ opportunity to prove any of these three defenses. [20] HUD claims that this defense “eliminat[es] the necessity for examining all the components of the algorithm.” [21] Yet ultimately, HUD’s revisions only uphold a results-based approach that overlooks the racist social operations embedded in the algorithms’ data: the very source of disparate impact.
While it is crucial to continue to treat cases of algorithm-induced disparate impact on a case-by-case basis,
Massachusetts Fair Housing Center and Housing Works Inc. v. United States Department of Housing and Urban Development and Ben Carson exhibits the need for robust updates to discrimination law in an algorithmic age. Only by addressing the blurred legal distinctions between implicit and explicit discrimination will there be clear guidelines for programmers and players in the housing industry to follow. Updates would help “spare housing providers from litigation and liability” and allow programmers to design algorithms that counteract human biases. [22] Updating nondiscrimination laws for algorithmic discrimination could also ameliorate similar issues in other industries, including hiring, credit, admissions, and criminal justice. [23] Algorithms provide immense opportunities for cheap and efficient decision-making. The law must ensure that the benefits of algorithmic efficiency are not outweighed by its potential for injustice.
by Iris Chen
Section 3: Intellectual Property Rights and AI: American and English Models
The development of increasingly complex AI technologies has surfaced concerns about intellectual property rights. Specifically, whether there exists an ability to protect AI-generated inventions as intellectual property has become a debated topic among legal scholars worldwide. Intellectual property (IP) describes the ownership of intangible intellectual creations, such as inventions and designs, by their creators. [1] Patents are poised to play a crucial role in fostering AI innovation, specifically AI-generated inventions. The role of IP law in AI innovation falls into a legal gray area and has taken on a variety of interpretations in accordance with different countries’ laws. Surveying both the US and UK courts’ decisions on cases in this sphere, it is clear that IP law in its present format is not suited to protect inventions of the ongoing Fourth Industrial Revolution. Recognizing that the existing standards of mental conception and personhood are not sufficient to encompass the complexity of emerging AI inventions, legislators must adapt the law to protect AI-generated inventions as intellectual property and incentivize future innovation.
In the US, patents are governed by statutory law and are issued by the United States Patent and Trademark Office (USPTO). The USPTO uses the standard of “conception” of an invention as an important requirement for claiming inventorship. [2] The ruling in Univ. of Utah v. Max-Planck-Gesellschaft Zur Foerderung der Wissenschaften (2013) affirmed that the conception standard strictly limits inventorship to humans. Max-Planck concerned whether Dr. Brenda Bass was entitled to sole or joint inventorship rights to RNAi technology. The plaintiff argued that the invention was influenced by Dr. Bass’s published hypothesis which
described the novel RNAi, thus establishing Dr. Bass’s mental conception of the invention prior to defendant and principal named inventor Dr. Tuschl. In Max-Planck, the US Court of Appeals for the Federal Circuit held that a state could not be an inventor because inventorship requires conceiving an invention, and conception is a mental act exclusive to humans. [3]
The USPTO further solidified this doctrine in April 2020 when it issued a decision limiting inventorship to natural persons or individuals, a decision later challenged in Thaler v. Iancu, et al. [4] The dispute was prompted by two patent applications for inventions created by an AI machine named DABUS, designed by Dr. Stephen Thaler of the Artificial Inventor Project. While Thaler acknowledged that DABUS was a machine legally incapable of holding a patent, he argued that DABUS should be listed as an inventor on the patents. [5] The USPTO, citing the usage of “whoever” in the U.S. patent statutes, reasoned that allowing a machine to claim inventorship would contradict the statute, which refers to persons or individuals. [6] The District Court reaffirmed the USPTO’s ruling in Thaler v. Iancu, et al, stating that AI has not yet reached a level of sophistication that warrants inventorship. [7] Existing U.S. patent statutes plainly refer to inventors as individuals, and personhood is the prevailing legal standard for inventorship in the U.S. The Thaler ruling represents a significant shift, implying that sophistication could emerge as a new legal standard for AI-generated inventorship in the future. This ruling opens avenues for AI technologies or their developers to be recognized with inventorship rights for AI-generated creations.
The Thaler case demonstrates that existing patent statutes need to be reevaluated to encompass recent AI technological advances. The Patent Act of 1952 was the most recent revision to U.S. patent statutes, including the qualification that the USPTO interpreted as limiting inventorship to individuals. [8] For context, AI was first proposed in 1956, and early artificial neural networks (predecessors to DABUS) were not developed until 1957. [9] While the USPTO decision utilized conception as a requirement to claim inventorship, it advanced an inflexible definition of the concept based on the Univ. of Utah ruling, which centered on whether a state or corporation could be an inventor. The USPTO maintains that the ability to conceive inventions is exclusive to the human mind despite overwhelming evidence to refute that notion. Contemporary AI technologies hold the capability to conceive complex ideas that equal or rival those of human minds. DABUS employs creative neural systems, nets of up to trillions of artificial neurons, to transform simple notions into complex ones, which are then filtered based on their assessed “novelty, utility, or value.” [10] Thus, the inflexible definition of conception used to justify the Univ. of Utah ruling cannot reasonably be applied in cases concerning IP rights for AI-generated inventions.
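The public description of DABUS quoted above amounts, at a very high level, to a generate-and-filter loop: combine simple notions, score the combinations, and retain those judged novel or useful. The deliberately toy sketch below illustrates only that abstract loop, with invented notions and placeholder scoring functions; it is not DABUS’s actual architecture.

```python
# A deliberately toy generate-and-filter loop, loosely echoing the public
# description of DABUS quoted above. The notions and scoring functions are
# invented placeholders; this is a conceptual sketch, not DABUS's architecture.
import itertools
import random

random.seed(0)

notions = ["fractal surface", "beverage container", "flickering light", "emergency beacon"]

def novelty_score(combination, already_seen):
    # Placeholder: a real system would estimate novelty against prior art.
    return 0.0 if combination in already_seen else random.random()

def utility_score(combination):
    # Placeholder: a real system would assess usefulness or value.
    return random.random()

already_seen = set()
candidates = list(itertools.combinations(notions, 2))
ranked = sorted(candidates,
                key=lambda c: novelty_score(c, already_seen) + utility_score(c),
                reverse=True)

for combination in ranked[:2]:           # retain the highest-scoring combinations
    print("retained candidate:", " + ".join(combination))
    already_seen.add(combination)
```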
Thaler also filed patent applications for DABUS’s inventions with the United Kingdom’s Intellectual Property Office, and on July 27, 2021, the UK Court of Appeal ruled that AI cannot be listed as an inventor under UK law. [11] Like the USPTO, the Court reasoned that because DABUS did not meet the standard of personhood, it could not claim inventorship pursuant to Section 7 of the Patents Act 1977, which uses “persons” to describe inventors. [12] This interpretation is similar to the ruling by the US court in Thaler v. Iancu, et al and is based on a legal standard of personhood for claiming inventorship. Addressing an issue that the US court failed to consider, the UK Court of Appeal went further to rule that Thaler was not entitled to hold the patent himself due to a lack of a “satisfactory derivation of right.” [13] Simply put, Thaler was not entitled to the patents for DABUS’s inventions under UK law merely because he owned DABUS. Considering that an AI machine has virtually no legal capabilities, this adds an additional layer of complexity in determining who or what can claim inventorship of AI-generated creations. Whether such patents are legal in the UK could potentially impact future investment in research and development of advanced AI technologies.
Existing IP legal frameworks in the US and UK are not sufficiently evolved to govern AI-generated inventions that aim to accelerate technological advancement. Recognizing that AI machines are capable of conceiving complex thoughts and expanding the requirements for inventorship in the US and UK are necessary first steps toward granting IP rights to AI-generated inventions. Personhood, the prevailing legal standard for inventorship in the US and UK, is not sufficient to encompass the extent of AI advancement today and could severely limit incentives for future technological innovation. The US court’s ruling in the Thaler case shows a possibility that the level of sophistication of AI technologies could evolve into a new legal standard for inventorship, allowing AI machines to gain IP rights for their creations. However, US courts are relying on Congressional authority to shoulder the weighty responsibility of determining the patent eligibility of AI-generated inventions. [14] In the UK, it is uncertain whether inventors of AI machines will even be able to hold patents for the inventions created by their AI projects. A similar standard of sophistication implemented in UK law, in addition to a special inventorship status for developers of AI machines, would work around the current obstacles to claiming inventorship rights to AI-generated inventions. AI and machine learning have the potential to significantly benefit society, but only if rights to these inventions are protected to increase investment by companies in this sector. IP law should be adapted to our technologically driven world to maximize the gains from AI innovation.
by Shreya Shivakumar
Section 4: AI Development: The Right to Privacy
AI technologies have evolved beyond performing the tasks they are programmed to perform and have instead begun innovating for themselves. This accelerated rate of development poses a challenge for legislators and judges as they struggle with new questions, such as what this means for intellectual property. Another question that has arisen is how to maintain comprehensive protection of privacy rights in the face of advancing AI. The UK and US have separate histories of the right to privacy and offer different routes to protection. Most notably, the US has a more rights-based culture than the UK, with rights explicitly defined in the Constitution and Bill of Rights; this level of codification is scarcely seen in the UK when enumerating rights. However, the specific right to privacy is said to have little to do with the Constitution. [1] In fact, a clearer codification is available in the UK’s Human Rights Act. [2] The UK still struggles, however, to produce specific legislation enumerating privacy rights, while the US relies on state rather than federal frameworks to create a patchwork of statutory protection. Both jurisdictions exhibit a struggle to provide a comprehensive defense of privacy rights in the face of rapidly advancing AI. The courts need to be given the tools to answer privacy questions clearly; currently, they are inundated with similar questions that they must answer by relying on whatever roughly applicable legislation they can find to fit each specific set of facts.
The right to privacy is a modern development in both the US and the UK. The UK derives many of its rights from the Human Rights Act 1998, where Article 8 issues a clear ‘right to respect for private and family life.’ [3] In comparison, the right of privacy in the US draws its strength from common law. In Meyer v. Nebraska (1923), the Supreme Court considered whether Nebraska’s ban on teaching foreign languages in public schools was a justifiable infringement on a parent’s right to privacy and educational choice. [4] In a 7-to-2 decision, the Court concluded that the state had failed to show a justifiable need to infringe on the right of parents to decide on matters concerning their children. [5] This judgment establishes a ‘family’ element in the right to privacy, a distinction mirrored in the UK’s HRA Article 8 right. The UK statute and US common law rights therefore bear similar meanings in both jurisdictions today. However, the question of how privacy rights extend to AI is less well established.
A breach of this statutory Article 8 right, alongside more specific rights legislation, by AI has been considered by the UK Court of Appeal in Bridges v. Chief Constable of South Wales (2020). [6] The case highlighted the difficulty the courts have had in giving a decisive answer to the question of privacy rights in the face of AI. It concerned the use of Automated Facial Recognition (AFR) software by the police in public places. The software could be used for purposes ranging from looking for suspects to detecting ‘vulnerable people.’ The grounds of the appeal argued that the use of this technology is compatible with neither Article 8 rights nor the Public Sector Equality Duty in section 149 of the Equality Act 2010. [7] The judgment distinguished this technology from fingerprints, which require the cooperation of those from whom they are obtained. [8] In contrast, facial recognition was described as a ‘virtual lineup’ of which the subject has no knowledge and which can therefore be conducted without cooperation. [9] The case demonstrated the uncharted nature of privacy rights’ interaction with AI.
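In the abstract, watchlist-style facial recognition of the kind at issue in Bridges typically works by comparing a numerical representation (an embedding) of a face captured on camera against stored representations of people on a watchlist, flagging any comparison that exceeds a similarity threshold. The sketch below illustrates only that generic mechanism, with invented vectors and an arbitrary threshold; it does not describe South Wales Police’s actual AFR system.

```python
# Generic watchlist matching with invented vectors and an arbitrary threshold.
# This illustrates the abstract mechanism only, not South Wales Police's AFR system.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)

# Hypothetical 128-dimensional face embeddings for people on a watchlist.
watchlist = {
    "wanted_suspect": rng.normal(size=128),
    "missing_person": rng.normal(size=128),
}

# Embedding extracted from a passer-by captured on a live camera feed (hypothetical).
live_face = rng.normal(size=128)

SIMILARITY_THRESHOLD = 0.6  # operator-chosen; it governs how often false matches occur

for name, template in watchlist.items():
    score = cosine_similarity(live_face, template)
    if score >= SIMILARITY_THRESHOLD:
        print(f"alert: possible match with {name} (similarity {score:.2f})")
    else:
        print(f"no match with {name} (similarity {score:.2f})")
```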
Much of the court’s decision hinged on section 35(5) of the UK Data Protection Act 2018, which concerns sensitive processing of information. [10] The final requirement in the statute is that, at the time of ‘processing’ faces through the software, an appropriate policy document be in place, outlining how the software would be used and what steps would be taken to mitigate any impacts. In Bridges this requirement was approached by writing a ‘Privacy Impact Assessment,’ a document outlining how privacy rights would be breached and how this would be mitigated. [11] The court found that there was little within the Privacy Impact Assessment produced by the police department for the use of AFR to allow the requirement in the Data Protection Act to be met. [12] This meant a breach of Article 8 rights was found. However, the decision leant heavily on legislation surrounding the use of data rather than on the software itself, and in doing so it skirted analysis of the technology’s potential to breach citizens’ privacy. Implicit in the judges’ reliance on the DPA 2018 is that a more comprehensive Privacy Impact Assessment could have allowed the breach to be deemed justified. The court avoided this discussion in order to hand down a judgment that was, in this case, intuitively right. The lack of legislation tailored to AI rather than to data processing has left the courts to explore more creative solutions in other areas in order to defend Article 8 rights. A similar analysis was repeated after the decisive Article 8 point was concluded, when the court turned to the Public Sector Equality Duty (PSED). [13] There followed a largely academic discussion of whether this duty had been met, considering the biases of the algorithm that was used; the court concluded that the duty was not fulfilled. It is therefore possible that, had the DPA not sufficed to resolve the case on the first point, the court could have turned to the PSED and found issue there in order to defend the rights at stake. Such analysis was useful for deciding this particular case, but it relies on roundabout reasoning grounded in data protection rather than confronting how AI should be managed in order to defend privacy rights, and it therefore does not yet amount to a coherent system. Forcing judges into this kind of creative, indirect reasoning will lead to an excess of future cases, because the question of how far privacy rights can be infringed has not been answered. The courts are yet to be given the tools to answer it decisively rather than turning to other legislation.
The US has faced a similar conundrum with the rapid rise of facial recognition software, which has led to a patchwork of state laws protecting privacy but no coherent approach. This risks an overload of cases: each case presents a new set of facts and serves as a reminder of the US’s lack of rules in this field. The issues with AI were echoed by the work of the ACLU on Amazon’s technology ‘Rekognition.’ [14] The technology incorrectly identified 28 members of Congress, disproportionately people of color, as people who had been arrested. The Congressional Black Caucus expressed concern about the ‘profound unintended consequences’ of facial recognition artificial intelligence but failed to translate this into substantial legislation. [15] However, the issue of AI and privacy rights is increasingly turned to, in different ways, by states that are producing legislation on the topic. Illinois passed the Biometric Information Privacy Act (BIPA) to safeguard the use of biometric information by private entities. The act was relevant in Rosenbach v. Six Flags Entm't Corp. (2019), where Six Flags theme park goers sued over the collection of their thumbprints. [16] Customers claimed the company violated the act when it collected these thumbprints without informed consent. The question at issue in Rosenbach was whether this constituted a breach of BIPA, among whose provisions were certain requirements for the use of biometric information, notably the presence of consent. A breach was found when the case came before the Illinois Supreme Court; the legislation therefore seems to be performing its role of protecting biometric information. The court made clear that a mere breach of BIPA was sufficient for a successful case; an actual loss was not required. [17] However, this statute only issues requirements for handling biometric information and therefore skirts the issue of an infringement of privacy by AI that does not collect or retain such information. This is similar to the approach taken by the UK courts in Bridges v. Chief Constable of South Wales (2020), which likewise focused on requirements for handling data rather than taking a clear stance on the use of the technology itself. [18] The problem is that each new breach will require a similar analysis of its compliance with BIPA, as no overall stance has been established.
Other legislation has emerged in the policing context, such as New York City’s Public Oversight of Surveillance Technology (POST) Act. [19] This aims to hold the NYPD accountable for its use of surveillance technologies. The accountability mechanisms include releasing information about how the NYPD uses its surveillance tools and what safeguards are in place to prevent their exploitation. POST, unlike BIPA, does focus on technologies rather than data. However, it remains limited to the police and therefore does little to confront infringements of privacy committed by private entities. These two statutes limit elements of how AI functions but provide no overall defense of privacy rights. [20] POST applies to the police in New York, and BIPA applies to private entities and biometrics in Illinois. There has yet to emerge a clear legislative approach that would ensure effective protection of privacy rights in the face of AI’s development, rather than pockets of protection in certain areas and matters. The US approach bears a fault similar to the UK’s: a lack of comprehensiveness. Both jurisdictions currently remain limited to specific areas rather than offering the courts specific tools to answer questions of privacy law and thereby set clear precedent.
Taking into account the approach in Bridges and the emerging patchwork of state laws, it is clear that neither jurisdiction has yet adequately protected privacy rights in the face of the rapid development of AI. The problem stems largely from a lack of legislation that confronts head-on the potential for a violation of privacy rights, and this reform is necessary. The implication of the Bridges reasoning is that a justifiable breach is possible if the DPA requirements are met, which leaves open the possibility of much further litigation. [21] If the DPA requirements are met, privacy can still be breached without the statute being able to step in to help; further questions will therefore be raised about what can be done for those whose rights are breached beyond the statute. Alternatively, it may fall to a creative judiciary to find another statute under which the current use of AI constitutes a breach of rights. In the US, the statutory regime has many gaps that will quickly be found by the burgeoning AI industry; this state of affairs will inevitably pose further questions for the judiciary. In both jurisdictions, a more comprehensive and specific set of legislation is needed to allow the judiciary to answer questions of privacy law decisively rather than turning to indirect interpretations case after case.
by Lucy Maxwell
This Roundtable was edited by Emily Bach, Lorenzo Thomas Garcia, Artem Ilyanok, Katy Brennan, and Shreya Shivakumar
Section 1 Citations:
[1] Sourdin, Tania. “Judge v. Robot? Artificial Intelligence and Judicial Decision-Making.” University of New South Wales Law Journal 41, no. 4 (January 12, 2018). https://doi.org/10.53637/zgux2213.
[2] Harris, Briony. “Could an AI Ever Replace a Judge in Court?” World Government Summit, November 7, 2017. https://www.worldgovernmentsummit.org/observer/articles/2017/detail/could-an-ai-ever-replace-a-judge-in-court.
[3] Niiler, Eric. “Can AI Be A Fair Judge In Court? Estonia Thinks So.” Wired, March 25, 2019. https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.
[4] Id.
[5] Hao, Karen. “This Is How AI Bias Really Happens-and Why It’s so Hard to Fix.” MIT Technology Review, February 4, 2019. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.
[6] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
[7] Larson, Jeff, Julia Angwin, Lauren Kirchner, and Surya Mattu. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica, May 23, 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
[8] Israni, Ellora Thadaney. “When an Algorithm Helps Send You to Prison.” The New York Times, October 26, 2017. https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html.
[9] State v. Loomis, para. 100.
[10] 130 Harv. L. Rev. 1530.
[11] Israni, Ellora. “Algorithmic Due Process: Mistaken Accountability and Attribution in State v. Loomis.” Harvard Journal of Law & Technology, August 31, 2017. https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1.
[12] Hildebrandt, Mireille. “The Meaning And The Mining Of Legal Texts.” Understanding Digital Humanities, 2012, 145–60. https://doi.org/10.1057/9780230371934_8, 156.
[13] Maxwell, Karen. “Summoning the Demon: Robot Arbitrators: Arbitration and Artificial Intelligence.” Practical Law Arbitration Blog, January 17, 2019. http://arbitrationblog.practicallaw.com/summoning-the-demon-robot-arbitrators-arbitration-and-artificial-intelligence/.
[14] Id. at 14; Chalmers, David John, and John R. Searle. “Can Computers Think?” Essay. In Philosophy of Mind: Classical and Contemporary Readings, 669–75. New York, NY: Oxford University Press, 2002.
[15] Id. at 14.
[16] Wacks, Raymond. Law: A Very Short Introduction. New York, NY: Oxford University Press, 2015.
[17] Id. at 12.
[18] Id. at 146.
[19] “Possible Use of AI to Support the Work of Courts and Legal Professionals.” European Commission for the Efficiency of Justice (CEPEJ). https://www.coe.int/en/web/cepej/tools-for-courts-and-judicial-professionals-for-the-practical-implementation-of-ai.
Section 2 Citations:
[1] Tan, Jun-E. “Can’t Live with It, Can’t Live without It? AI Impacts on Economic, Social, and Cultural Rights.” Coconet, February 12, 2020. https://coconet.social/2020/ai-impacts-economic-social-cultural-rights/.
[2] Esty, Daniel, and Reece Rushing. “The Promise of Data-Driven Policymaking.” Issues in Science and Technology 23, no. 4 (2007).
[3] Larsson, Stefan. “The Socio-Legal Relevance of Artificial Intelligence.” Droit et Société N°103, no. 3 (March 2019): 573–93. https://doi.org/10.3917/drs1.103.0573; Griffard, Molly. “A Bias-Free Predictive Policing Tool?: An Evaluation of the NYPD's Patternizr.” Fordham Urban Law Journal 47, no. 1 (2019): 43–83.
[4] Grant, Glenn A. Rep. Report to the Governor and the Legislature. New Jersey Courts, 2019.
[5] Griffard, Molly. “A Bias-Free Predictive Policing Tool?: An Evaluation of the NYPD's Patternizr.” Fordham Urban Law Journal 47, no. 1 (2019): 43–83.
[6] Rashbaum, William K. “Retired Officers Raise Questions on Crime Data.” The New York Times, February 6, 2010. https://www.nytimes.com/2010/02/07/nyregion/07crime.html.
[7] Schneider, Valerie. “Locked out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice.” Columbia Human Rights Law Review 52, no. 1 (2020): 251–305.
[8] Vincent, James. “Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less than a Day.” The Verge. The Verge, March 24, 2016. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.
[9] Schneider, “Locked out by Big Data.”
[10] Aronowitz, Michelle, and Edward Golding. “HUD’s Proposal to Revise the Disparate Impact Standard Will Impede Efforts to Close the Homeownership Gap.” Urban Institute, September 2019, 1–11; Foggo, Virginia, and John Villasenor. “Algorithms, Housing Discrimination, and the New Disparate Impact Rule.” The Columbia Science & Technology Law Review 22 (2020): 1–62.
[11] Barocas, Solon, and Andrew D. Selbst. “Big Data's Disparate Impact.” California Law Review 104, no. 3 (June 2016): 671–732. https://doi.org/10.15779/Z38BG31.
[12] Sarkesian, Lauren, and Spandana Singh. “HUD’s New Rule Paves the Way for Rampant Algorithmic Discrimination in Housing Decisions.” New America, October 1, 2020. https://www.newamerica.org/oti/blog/huds-new-rule-paves-the-way-for-rampant-algorithmic-discrimination-in-housing-decisions/.
[13] Aronowitz and Golding, “HUD’s Proposal.”
[14] Foggo and Villasenor, “Algorithms, Housing Discrimination.”
[15] Inclusive Communities, 576 U.S. at 542.
[16] Id.
[17] Complaint, Massachusetts FHC and Housing Works Inc., v. HUD and Ben Carson, 3:20-cv-11765-MGM, (U.S. D. Mass. 2020) (No. 0101-8437693).
[18] HUD's Implementation of the Fair Housing Act's Disparate Impact Standard, 85 Fed. Reg. 60288 (September 24, 2020).
[19] Id.
[20] Id.
[21] Id.
[22] Schneider, “Locked out by Big Data.”
[23] Kleinberg, et al., “Discrimination.”
Section 3 Citations:
[1] “Intellectual Property.” Legal Information Institute. Accessed October 30, 2021. https://www.law.cornell.edu/wex/intellectual_property.
[2] “Manual of Patent Examining Procedure.” MPEP. USPTO, 2020. https://mpep.uspto.gov/RDMS/MPEP/e8r9#/e8r9/d0e207607.html.
[3] Univ. of Utah v. Max-Planck-Gesellschaft Zur Foerderung der Wissenschaften, 881 F. Supp. 2d 151 (D. Mass. 2012), citing Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1227-28 (Fed. Cir. 1994).
[4] Stulberg, Barry. “USPTO Rules Artificial Intelligence Cannot Be Named as Inventor for Patent Application: Davis Wright Tremaine.” Artificial Intelligence Law Advisor | Davis Wright Tremaine, April 5, 2020. https://www.dwt.com/blogs/artificial-intelligence-law-advisor/2020/05/uspto-ai-inventorship-ruling.
[5] Chen, Angela. “Can an AI Be an Inventor? Not Yet.” MIT Technology Review, April 2, 2020. https://www.technologyreview.com/2020/01/08/102298/ai-inventor-patent-dabus-intellectual-property-uk-european-patent-office-law/.
[6] “United States Patent and Trademark Office,” 2019. https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf.
[7] “Thaler v Iancu, Et Al, No. 1:2020CV00903 - Document 33 (E.D. Va. 2021).” Justia Law, 2021. https://law.justia.com/cases/federal/district-courts/virginia/vaedce/1:2020cv00903/483404/33/.
[8] 35 USC § 100(f).
[9] Press, Gil. “A Very Short History of Artificial Intelligence (AI).” Forbes, June 21, 2021. https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/.
[10] Press, Gil. “The Artificial Inventor Project.” The Artificial Inventor Project, December 30, 2016. https://artificialinventor.com/dabus/.
[11] “AI Cannot Be the Inventor of a Patent, Appeals Court Rules.” BBC News. BBC, September 23, 2021. https://www.bbc.com/news/technology-58668534.
[12] “Patents Act 1977.” Legislation.gov.uk. Statute Law Database, April 30, 1979. https://www.legislation.gov.uk/ukpga/1977/37/contents.
[13] “Thaler v Comptroller General of Patents Trade Marks And Designs [2021] EWCA Civ 1374 .” England and Wales Court of Appeal (Civil Division) decisions. Royal Courts of Justice, September 21, 2021. https://www.bailii.org/ew/cases/EWCA/Civ/2021/1374.html.
[14] “Thaler v Iancu, Et Al.”
Section 4 Citations:
[1] Jed Rubenfeld, The Right of Privacy, 102 Harv. L. Rev. 737, 744 (1989).
[2] Human Rights Act (1998), https://www.legislation.gov.uk/ukpga/1998/42/contents.
[3] Id., Sch. 1, Art. 8.
[4] Meyer v. Nebraska, 262 U.S. 390, 398 (1923).
[5] Id.
[6] R(on the application of Ed Bridges) v Chief Constable of South Wales Police EWCA Civ 1058 (CA 2020).
[7] Equality Act, c.15, Part 11, Ch 1, Section 149 (2010).
[8] R(on the application of Ed Bridges) v Chief Constable of South Wales Police EWCA Civ 1058, para 23 (CA 2020).
[9] Lauren Feiner and Annie Palmer, “Rules around Facial Recognition and Policing remain Blurry.” CNBC, June 12, 2021, A-year-later-tech-companies-calls-to-regulate-facial-recognition-met-with-little-progress.html.
[10] Data Protection Act, Section 35 (5) (2018).
[11] R(on the application of Ed Bridges) v Chief Constable of South Wales Police EWCA Civ 1058, para 123-134, (CA 2020).
[12] Id.
[13] Id., para 191-193.
[14] Jacob Snow, “Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots.” ACLU, July 26, 2018, https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
[15] Congressional Black Caucus, “Letter to Amazon about Facial Recognition Technology.” CBC, May 24, 2018, https://cbc.house.gov/news/documentsingle.aspx?DocumentID=896.
[16] Biometric Information Privacy Act, 740 ILCS 14/20 (2008).
[17] Rosenbach v. Six Flags Entm't Corp, IL 123186, para 37 (2019).
[18] Lynch, Jennifer and Adam Schwartz. "Victory! Illinois Supreme Court Protects Biometric Privacy." Electronic Frontier Foundation, January 24, 2019, https://www.eff.org/deeplinks/2019/01/victory-illinois-supreme-court-protects-biometric-privacy.
[19] R(on the application of Ed Bridges) v Chief Constable of South Wales Police EWCA Civ 1058 (CA 2020).
[20] Public Oversight of Surveillance Technology, Int 0487-2018, June 18, 2020.
[21] Public Oversight of Surveillance Technology, Int 0487-2018 (2020); Biometric Information Privacy Act, 740 ILCS 14/20 (2008).
[22] Data Protection Act, Section 35 (5) (2018).