AI & Human Rights: Risks & Opportunities | Harvard Law School

This is an online event; for Zoom registration, click here.

In recent years, rapid advances in Artificial Intelligence (AI) technology, significantly accelerated by the development and deployment of deep learning and Large Language Models, have taken center stage in policy discussions and public consciousness. Amidst a public both intrigued and apprehensive about AI’s transformative potential across workplaces, families, and even broader political, economic, and geopolitical structures, a crucial conversation is emerging around its ethical, legal, and policy dimensions.

This webinar will convene a panel of prominent experts from diverse fields to delve into the critical implications of AI for humans and their rights. The discussion will broadly address the anticipated human rights harms stemming from AI’s increasing integration into society and explore potential responses to these challenges. A key focus will be on the role of international law and human rights law in addressing these harms, considering whether this legal framework can offer the appropriate tools for effective intervention.

Panelists

Ifeoma Ajunwa is the Asa Griggs Candler Professor of Law at Emory Law School, where she is also the Associate Dean for Projects and Partnerships and the Founding Director of the A.I. and Future of Work Program. Dr. Ajunwa is the author of the book The Quantified Worker, published by Cambridge University Press, and the co-author of The Oxford Handbook on Algorithmic Governance. Her recent essay for the Yale Law Journal, “A.I. and Captured Capital,” examines the human rights implications of A.I. development and deployment. A prolific expert on A.I. governance, an international keynote speaker, and a member of several A.I. ethics advisory boards, Dr. Ajunwa is a graduate of Columbia University and Yale Law School.

Julie E. Cohen is the Mark Claster Mamolen Professor of Law and Technology and a faculty co-director of the Institute for Technology Law and Policy at the Georgetown University Law Center. She teaches and writes about surveillance, privacy and data protection, intellectual property, information platforms, and the ways that networked information and communication technologies are reshaping legal institutions. She is the author of Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford University Press, 2019); Configuring the Networked Self: Law, Code and the Play of Everyday Practice (Yale University Press, 2012), which won the 2013 Association of Internet Researchers Book Award and was shortlisted for the Surveillance & Society Journal’s 2013 Book Prize; and numerous journal articles and book chapters.

Laurence R. Helfer is the Harry R. Chadwick, Sr. Professor of Law and co-director of the Center for International and Comparative Law at Duke University, and a Permanent Visiting Professor at iCourts: Center of Excellence for International Courts at the University of Copenhagen. Helfer has authored more than 100 scholarly publications and lectured widely on his diverse research interests, which include international human rights law and institutions, treaty design, and international adjudication. He and Veronika Fikfak recently published Automating International Human Rights Adjudication, 46 Mich. J. Int’l Law 69 (2025). Helfer was nominated by the United States and elected as a member of the UN Human Rights Committee (2023-2026).

Gerald L. Neuman, PhD MIT 1977, is the J. Sinclair Armstrong Professor of International, Foreign, and Comparative Law at Harvard Law School, and Director of its Human Rights Program. He served on the (UN) Human Rights Committee from 2011 to 2014. He participated in the 2023 workshop of the Institute for Ethics in AI on the human right to a human decision, and has written for the resulting conference volume, forthcoming from Oxford University Press.

Yuval Shany is the Hersch Lauterpacht Chair in International Law and former Dean of the Law Faculty of the Hebrew University of Jerusalem. He also serves as a Senior Research Fellow at the Israel Democracy Institute, and as a Visiting Professor in the Center for Transnational Legal Studies (CTLS) at King’s College, London and the Graduate Institute of International Studies in Geneva. He was a member of the UN Human Rights Committee from 2013 to 2020, chairing the Committee from 2018 to 2019. His current research focuses on international human rights law and new technology, and he leads a European Research Council group of researchers investigating the three generations of digital human rights (3GDR). Shany is an honorary fellow of the Senior Common Room at Magdalen College (2023-2024) and an inaugural Accelerator Fellow at the Ethics in AI Institute in Oxford.


This event is co-sponsored by Harvard Law School’s Human Rights Program and the University of Oxford’s Institute for Ethics in AI.

AI & Human Rights: A Looming Clash? Expert Insights on Navigating the Ethical Minefield

Time.news: The rapid advancement of artificial intelligence (AI) is transforming our world, raising both excitement and apprehension. Today, we’re diving deep into the crucial intersection of AI and human rights with Dr. Anya Sharma, a leading expert in technology ethics and policy. Dr. Sharma, thanks for joining us.

Dr. Sharma: It’s a pleasure to be here.

Time.news: A recent webinar hosted by Harvard Law School and the University of Oxford underscored the growing concerns surrounding AI’s impact on human rights. What are some of the most pressing human rights harms anticipated from AI’s widespread integration into society?

Dr. Sharma: We’re seeing potential harms across numerous areas. In the workplace, issues of worker surveillance, algorithmic bias in hiring and promotion, and the erosion of worker autonomy are significant concerns. Beyond that, consider the potential for AI-driven discrimination in access to housing, healthcare, and financial services, further exacerbating existing inequalities. The rise of AI-powered misinformation and disinformation campaigns also threatens freedom of expression and democratic processes. We also have to consider the potential for AI to be used to automate state-sanctioned human rights abuses.

Time.news: The webinar focused on whether international law and human rights law can effectively address these challenges. Are current legal frameworks equipped to handle the unique complexities posed by AI?

Dr. Sharma: That’s the billion-dollar question. Existing human rights law offers a foundation: principles like non-discrimination, privacy, and due process offer guidance, but their application in the context of complex AI systems isn’t always straightforward. We need to adapt that foundation to the challenges presented by AI, and that includes clarifying responsibilities, strengthening enforcement mechanisms, and fostering international cooperation.

Time.news: The panelists in the webinar included prominent legal scholars like Dr. Ajunwa, Julie E. Cohen, Laurence R. Helfer, Gerald L. Neuman, and Yuval Shany, each bringing unique perspectives to the table. Which of their areas of expertise do you find especially relevant in this conversation?

Dr. Sharma: All of their expertise is invaluable. Dr. Ajunwa’s work on the future of work and AI is crucial for understanding the impact on labor rights. Julie Cohen’s focus on surveillance and privacy provides a vital lens for examining data protection and informational capitalism. Laurence Helfer’s and Gerald Neuman’s experience on the UN Human Rights Committee provides insights into how to translate those values into practical application. Yuval Shany’s work on digital human rights offers a comprehensive view of the challenges and potential solutions in the digital age. The collective knowledge of these individuals is critical for navigating the complex intersection of law, technology, and human rights.

Time.news: You mentioned the need to adapt existing frameworks. What specific changes or additions to international law might be necessary to effectively regulate AI and protect human rights?

Dr. Sharma: Firstly, we need greater clarity on accountability. When an AI system causes harm, who is responsible? The developer, the deployer, or the user? Secondly, we need stronger safeguards against algorithmic bias. That includes mandating transparency in algorithms, requiring impact assessments, and establishing independent auditing mechanisms. International standards for data governance, including robust data protection laws and limitations on the use of personal data for training AI systems, are essential. We also have to create international frameworks to monitor and regulate AI usage by states.

Time.news: For Time.news readers, who may not be legal scholars, what are some practical steps they can take to advocate for ethical AI development and deployment?

Dr. Sharma: Awareness is key. Educate yourself on the human rights implications of AI. Support organizations working to promote responsible AI development. Demand transparency from companies about how they are using AI. Hold elected officials accountable for enacting strong regulatory frameworks. Advocate for policies that prioritize human well-being and human rights over technological progress.

Time.news: Looking ahead, what gives you the most hope, and what concerns you the most regarding the future of AI and human rights?

Dr. Sharma: I’m encouraged by the growing awareness of these issues among policymakers, academics, and the public. The convening of events like this Harvard/Oxford webinar signals a commitment to addressing these challenges head-on. My biggest concern is the speed of AI development outpacing our ability to understand and regulate its impact. If we fail to act quickly and decisively, we risk creating a future where AI reinforces existing inequalities and undermines essential human rights. We need ongoing dialogue, strong legal frameworks, and a shared commitment to ensuring AI serves humanity, not the other way around.

Time.news: Dr. Sharma, this has been incredibly insightful. Thank you for sharing your expertise with us today.

Dr. Sharma: My pleasure. Thank you for having me.
