publications
publications by category in reverse chronological order.
2024
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice. Inyoung Cheong, King Xia, K.J. Kevin Feng, and 2 more authors. ACM FAccT 2024 Main track, 2024
The rapid proliferation of large language models (LLMs) as general purpose chatbots available to the public raises hopes around expanding access to professional guidance in law, medicine, and finance, while triggering concerns about public reliance on LLMs for high-stakes circumstances. Prior research has speculated on high-level ethical considerations but lacks concrete criteria determining when and why LLM chatbots should or should not provide professional assistance. Through examining the legal domain, we contribute a structured expert analysis to uncover nuanced policy considerations around using LLMs for professional advice, using methods inspired by case-based reasoning. We convened workshops with 20 legal experts and elicited dimensions on appropriate AI assistance for sample user queries (“cases”). We categorized our expert dimensions into: (1) user attributes, (2) query characteristics, (3) AI capabilities, and (4) impacts. Beyond known issues like hallucinations, experts revealed novel legal problems, including that users’ conversations with LLMs are not protected by attorney-client confidentiality or bound to professional ethics that guard against conflicted counsel or poor quality advice. This accountability deficit led participants to advocate for AI systems to help users polish their legal questions and relevant facts, rather than recommend specific actions. More generally, we highlight the potential of case-based expert deliberation as a method of responsibly translating professional integrity and domain knowledge into design requirements to inform appropriate AI behavior when generating advice in professional domains.
- Safeguarding Human Values: Rethinking US Law for Generative AI’s Societal Impacts. Inyoung Cheong, Aylin Caliskan, and Tadayoshi Kohno. Journal of AI and Ethics, 2024
Our interdisciplinary study examines the effectiveness of US law in addressing the complex challenges posed by generative AI systems to fundamental human values. Through the analysis of diverse hypothetical scenarios developed in collaboration with experts, we identified significant shortcomings and ambiguities within the existing legal framework regarding the protection of crucial values like physical and mental well-being, privacy, autonomy, diversity, and equity. Notably, constitutional and civil rights law currently struggles to hold AI companies responsible for AI-assisted discriminatory outputs. Even without considering the liability shield provided by Section 230, demonstrating causal links for liability claims such as defamation or product liability proves exceptionally difficult due to the intricate and opaque nature of these systems. To effectively address these unique and evolving risks posed by generative AI, we advocate for a legal framework that adapts to recognize new threats and utilizes a multi-pronged approach: enshrining fundamental values in legal frameworks; establishing comprehensive safety guidelines; and implementing liability models adapted to the complexities of human-AI interactions. This framework would complement existing individual rights, proactively mitigate unforeseen harms like mental health impacts and privacy breaches, and empower users with more trust and control over generative AI systems.
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits. Jimin Mun, Liwei Jiang, Jenny Liang, and 5 more authors. Preprint, 2024
General purpose AI, such as ChatGPT, seems to have lowered the barriers for the public to use AI and harness its power. However, the governance and development of AI still remain in the hands of a few, and the pace of development is accelerating without proper assessment of risks. As a first step towards democratic governance and risk assessment of AI, we introduce Particip-AI, a framework to gather current and future AI use cases and their harms and benefits from the non-expert public. Our framework allows us to study more nuanced and detailed public opinions on AI through collecting use cases, surfacing diverse harms through risk assessment under alternate scenarios (i.e., developing and not developing a use case), and illuminating tensions over AI development through making a concluding choice on its development. To showcase the promise of our framework towards guiding democratic AI, we gather responses from 295 demographically diverse participants. We find that participants’ responses emphasize applications for personal life and society, contrasting with most current AI development’s business focus. This shows the value of surfacing diverse harms that are complementary to expert assessments. Furthermore, we found that the perceived impact of not developing use cases predicted participants’ judgements of whether AI use cases should be developed, and highlighted lay users’ concerns about techno-solutionism. We conclude with a discussion on how frameworks like Particip-AI can further guide democratic AI governance and regulation.
2023
- Case Repositories: Towards Case-Based Reasoning for AI Alignment. K.J. Kevin Feng, Quan Ze Chen, Inyoung Cheong, and 2 more authors. NeurIPS 2023 MP2 Workshop, 2023
Case studies commonly form the pedagogical backbone in law, ethics, and many other domains that face complex and ambiguous societal questions informed by human values. Similar complexities and ambiguities arise when we consider how AI should be aligned in practice: when faced with vast quantities of diverse (and sometimes conflicting) values from different individuals and communities, with whose values is AI to align, and how should AI do so? We propose a complementary approach to constitutional AI alignment, grounded in ideas from case-based reasoning (CBR), that focuses on the construction of policies through judgments on a set of cases. We present a process to assemble such a case repository by: 1) gathering a set of “seed” cases – questions one may ask an AI system – in a particular domain, 2) eliciting domain-specific key dimensions for cases through workshops with domain experts, 3) using LLMs to generate variations of cases not seen in the wild, and 4) engaging with the public to judge and improve cases. We then discuss how such a case repository could assist in AI alignment, both through directly acting as precedents to ground acceptable behaviors, and as a medium for individuals and communities to engage in moral reasoning around AI.
- Envisioning Legal Mitigations for LLM-based Intentional and Unintentional Harms (Extended Abstract). Inyoung Cheong, Aylin Caliskan, and Tadayoshi Kohno. ICML 2023 Workshop on Generative AI and Law, 2023
Large language models (LLMs) have the potential for significant benefits, but they also pose risks such as privacy infringement, discrimination propagation, and virtual abuse. By developing and examining “worst-case” scenarios that illustrate LLM-based harms, this paper identifies that U.S. law may not be adequate in addressing threats to fundamental human rights posed by LLMs. The shortcomings arise from the primary focus of U.S. laws on governmental intrusion rather than market injustices, the complexities of LLM-related harms, and the intangible nature of these harms. As Section 230 protections for online intermediaries may not extend to AI-generated content, LLM developers must demonstrate due diligence (alignment efforts) to defend themselves against potential claims. Moving forward, we should consider ex-ante safety regulations adapted to LLMs to provide clearer guidelines for fast-paced AI development. Innovative interpretations or amendments to the Bill of Rights may be necessary to prevent the perpetuation of bias and uphold socio-economic rights.
- Legal Perspectives on AI Alignment, Evaluation, and Interpretability. Inyoung Cheong. Korea Association for Telecommunications Policies Conference 2023, 2023
The rise of generative AI has raised concerns around algorithmic bias and discrimination, privacy invasion, and the proliferation of harmful content. Europe is responding proactively with efforts to regulate AI systems. However, the US remains cautious about overarching legislation in this fast-moving domain. America’s stance stems from its distinct legal tradition emphasizing free speech and limiting government intervention. This libertarian ethos leaves emerging technological issues largely to private lawsuits between individuals and companies. The current US legal and technological landscape faces challenges in proactively governing the risks of generative AI systems. The core issue is that these technologies are complex and opaque with limited interpretability. Their development involves many parties, and their societal impacts emerge gradually through widespread diffusion. In this environment, an approach fixated on assigning legal blame in isolated incidents proves insufficient. It cannot adequately detect emerging harms nor provide systemic incentives guiding development towards safety. While understandable given American values, this reactive stance seems ill-suited for AI’s breakneck pace and societal consequences. More comprehensive oversight and guidance throughout the technology lifecycle may prove essential. This includes setting clear expectations for safety practices and having processes to continually re-evaluate policies as capabilities advance. Rather than just responding to harms, the law can proactively shape technology’s trajectory if coupled with scientific insight. This highlights the need for multidisciplinary collaboration and creative governance amidst AI’s dynamism.
- Freedom of Algorithmic Expression. Inyoung Cheong. University of Cincinnati Law Review, 2023
Can content moderation on social media be considered a form of speech? If so, would government regulation of content moderation violate the First Amendment? These are the main arguments of social media companies after Florida and Texas legislators attempted to restrict social media platforms’ authority to de-platform objectionable content. This article examines whether social media companies’ arguments have valid legal grounds. To this end, the article proposes three elements for determining whether algorithms qualify as “speech”: (1) the algorithms are designed to communicate messages; (2) the relevant messages reflect cognitive or emotive ideas beyond mere operational matters; and (3) they represent the company’s standpoints. The application of these elements makes it clear that social media algorithms can be considered speech when algorithms are designed to express companies’ values, ethics, and identity (as they often are). However, conceptualizing algorithms as speech does not automatically award a social media company a magic shield against state or federal regulation. It is true that social media platforms’ position is likely to be favored by the U.S. Supreme Court, which has increasingly taken an all-or-nothing approach whereby “all speech invokes strict scrutiny of government regulation.” Instead, this article argues for the restoration of the Court’s approach prior to the 1970s, when decisions emphasized considerations such as the democratic values of speech, the irreplaceability of forums, and the socioeconomic inequality of speakers and audiences. Under the latter principles, social media companies’ market dominance and their harmful effect on juveniles or political polarization would justify legislative efforts to increase algorithmic transparency even if they restrict social media’s free speech. Therefore, most big tech companies’ algorithms can and should be regulated for legitimate government purposes.
2022
- The U.S. Federal Administrative Law and Civil Penalties (Korean). Inyoung Cheong. Administrative Law Journal, Nov 2022
This article aims to introduce the landscape of American civil penalties to Korean legal academia by analyzing U.S. federal statutes and case law in areas such as antitrust, securities regulation, occupational safety, and environmental protection. The analysis highlights the growing authority of administrative agencies in imposing monetary penalties, replacing traditional court proceedings. American scholars have raised concerns about the constitutionality of administrative civil penalties. Issues include the blurred distinction between criminal and civil penalties, potentially violating the Double Jeopardy Clause and the Seventh Amendment’s right to a jury trial. Moreover, current practices may encourage coerced settlements, where individuals accept administrative agency deals under the threat of criminal penalties. As a result, U.S. federal civil penalties have theoretical and practical flaws. The article references the Jarkesy v. SEC case, where the Fifth Circuit ruled that the SEC’s civil penalties violated the Seventh Amendment due to the lack of a jury trial in administrative procedures. However, the article critiques the “public rights” theory in the Fifth Circuit’s opinion, arguing that it lacks solid grounding in U.S. case law. Instead, the article suggests developing more effective judicial review systems for administrative penalties, acknowledging that courts may not be well-equipped to handle all disputes efficiently. The evolution of U.S. federal civil penalties provides insights for South Korean penalty systems, reflecting the challenge of balancing conflicting values like adversarial legalism, due process, cost-efficient policy implementation, and administrative accountability.