“What is responsible AI?”
Provost Rebecca L. Walkowitz posed the question on everyone’s mind during her welcoming remarks at Barnard’s “Symposium on Responsible AI and the Liberal Arts.”
Barnard hosted the event on campus, inviting artificial intelligence researchers and alumnae from a range of fields to expand on this timely conversation as the community adjusts to a new age of technology, both within and beyond the classroom.
At a time when artificial intelligence has been integrated into many aspects of people’s daily lives, including education, the ethics of how it is used have become a hotly debated topic.
“As a general definition, responsible AI is the practice of designing, developing, and deploying artificial intelligence systems in ways that are transparent, fair, accountable, safe, and aligned with human values,” said Walkowitz.
Despite the rapidly evolving nature of artificial intelligence, many still grapple with the risks and ethics associated with the technology, in and out of the classroom: When and how should students use AI? What precautions should be taken? Can AI usage widen inequalities?
Barnard organized the symposium to help faculty and students engage both critically and creatively with artificial intelligence in various settings. Expanding AI literacy requires a deeper understanding of what artificial intelligence is and of its social and environmental impacts, so that users can decide whether and how to use it for a given task.
Welcoming and introductory remarks were given by Melissa Wright, executive director of the Center for Engaged Pedagogy; Provost Walkowitz; and Melanie Hibbert, director of ATLIS (Academic Technologies & Learning Innovation Services).
“This symposium was an opportunity to bring the liberal arts squarely into the conversation on the present and future of AI,” said Wright. “When it might feel at times like generative AI is happening to us, the planning team wanted to remind our community that we have experts here and across the street who have been researching AI for decades, that we have alumnae who are thoughtful and conscientious about the benefits and risks of this technology, and that we are fortunate to have a wide array of students, faculty, and staff across fields engaging in vibrant work on the dangers and possibilities of AI.”
The symposium’s keynote speaker was Kathleen McKeown, the Henry and Gertrude Rothschild Professor of Computer Science and founding director of Columbia University’s Data Science Institute. McKeown guided attendees through the arc of artificial intelligence, starting with her own research, which began in the ’80s. She then addressed current issues with large language models (LLMs), AI systems such as ChatGPT that generate human language.
“Today’s landscape is dominated by large language models of all kinds,” said McKeown. “It ushered in a paradigm shift of how we do research.”
McKeown reflected on her doctoral research at the University of Pennsylvania, where she pursued language generation at a time when most scholars focused on interpretation.
As part of the symposium, five Barnard alumnae — Lauren Beltrone ’17, Grace Li ’24, Sonia Mohandes ’23, Julie Scelfo ’96, and Jessica Wall ’10 — returned to campus for a panel discussion titled “Responsible AI across Fields.” They discussed their work and their varying degrees of experience with AI.
During the panel discussion, the five alumnae reflected on what responsible AI means within their respective fields — from journalism and education to mental health and product design — repeatedly returning to themes of safety, transparency, and human connection.
Scelfo, a journalist and founder of Mothers Against Media Addiction (MAMA), spoke about the intersection of AI, social media, and youth mental health. For Scelfo, responsible AI begins with safety and regulation:
“Responsible AI is built with people in mind,” said Scelfo. “Safe AI is only rolled out once we know it’s safe. That there are regulations in place and transparency requirements, so that we can know what’s happening inside these companies.”
Li spoke from her work in research and education, emphasizing AI literacy.
“I think responsible AI is about the context,” said Li. “Helping [students] understand the conceptual knowledge around large language models, how they work, and also translating that knowledge into practice.”
Mohandes, a graduate student in dance/movement therapy, reflected on AI in clinical settings, where both privacy and human connection matter deeply. Beltrone, a conversational designer, described responsible AI as an everyday design practice.
Across industries, panelists returned to one shared theme: human connection.
The symposium culminated in a community showcase of AI-related projects, creative works, and research from across campus. Projects included “Intersectional Feminist Approaches to AI - A Year of Programming at BCRW 26-27” from the Barnard Center for Research on Women and “AI Readiness Transformation: Aligning People, Processes, and Technology for Sustainable AI Success” from the School of Professional Studies.
Barnard and AI
Barnard has already been at the forefront of the conversation surrounding artificial intelligence in numerous ways: launching student and faculty surveys on experiences with generative AI, offering semesterly workshops from ATLIS and the CEP focused on AI tools, literacy, and pedagogical practices, and more. Barnard was also featured in The New York Times article “A.I. Is Coming to Class. These Professors Want to Ease Your Worries,” about Professor Benjamin Breyer’s first-year writing seminar, in which he attempted to use A.I. to “supplement, not short-circuit” his students’ efforts in academic writing.
In 2024, four Barnard faculty and staff members published “A Framework for AI Literacy”: Melanie Hibbert, director of ATLIS; Elana Altman, senior associate director for UX & Academic Technologies; Tristan Shippen, senior academic technology specialist; and Melissa Wright, executive director of the Center for Engaged Pedagogy. The publication helped establish Barnard as a leader in AI literacy in higher education. The framework structures AI literacy as a pyramid with four levels:
1. Understand AI
2. Use and apply AI
3. Analyze and evaluate AI
4. Create AI
The AI Working Group — one of four task forces started by President Rosenbury under the strategy “Leading for Tomorrow & Infrastructures of Excellence: Artificial Intelligence” — has been adding dimensions to AI literacy, such as the responsible and ethical use of AI.
“We have also explored other graphical representations that are circular rather than a pyramid to have a less ‘hierarchical’ symbol,” said Hibbert. “Adding a dimension around responsible AI in AI literacy is an important outcome from this symposium.”
As artificial intelligence continues to reshape research, pedagogy, and professional practice, the question posed at the start of the symposium remains pressing: “What is responsible AI?”
At Barnard, the conversation extends beyond capability to responsibility.