[Image: Melanie Hibbert]

About three years ago, Melanie Hibbert — who oversees the academic technology team, soon to be known as ATLIS (Academic Technologies and Learning Innovation Services) — began to examine AI seriously, more seriously than other cutting-edge technologies that also intrigued her, such as virtual, augmented, and extended reality, and even cryptocurrency. For her, AI was in another class.

“We are living in real time through a major technological change that is on par with, if not greater than, the internet,” said Hibbert, who is also director of Barnard’s Sloate Media Center. “I’d say it’s also on par with the industrial revolution, in as far as how it’s going to transform so many [aspects of our lives].”

[Image: Elana Altman]
[Image: Tristan Shippen]

Last summer, Hibbert and her technology colleagues Elana Altman and Tristan Shippen, along with Melissa Wright, executive director of the Center for Engaged Pedagogy, published “A Framework for AI Literacy” in the higher-ed technology journal EDUCAUSE Review. The paper, which outlines four basic concepts of engagement, has garnered national attention for its accessible approach to using AI in the classroom.

“I think the fact it is relatively simple is part of why it has had so much resonance,” said Hibbert. 

Indeed, Inside Higher Ed featured the work, and it was also cited in Forbes and Fast Company. The team has been invited to speak about the paper at Emory University, the Access Conference, and the Future Trends Forum. Inquiries have also come in from the University of Texas, Stanford University, School of Visual Arts, University of Montana, Columbia Business School, Columbia Center for Teaching & Learning, and other organizations affiliated with higher education.

[Image: Alexander Cooley]
[Image: Melissa Wright]

Hibbert is not alone in her pedagogical work on AI at Barnard. She participates in Barnard’s AI Operations Group, which meets regularly to discuss the ever-changing AI landscape. The cohort includes Wright; Alexander Cooley, vice provost for research, libraries, and academic centers; and Victoria Swann, executive director of IT.

[Image: Two students working together on a laptop]

In collaboration with the team, Wright developed recommendations, titled “Generative AI and the College Classroom,” to support faculty with how best to engage with AI — making Barnard one of the first institutions of higher ed to do so. Elsewhere on campus, several professors have incorporated AI into their research and curriculum.

Hibbert said her team is building upon the work of Benjamin Bloom, whose 1956 Taxonomy of Educational Objectives classified learning objectives into a six-level pyramid: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. Likewise, their framework organizes AI literacy into four levels of engagement:

  • Understand AI: According to the paper, understanding involves basic terms and concepts around AI, such as machine learning, large language models (LLMs), and neural networks. 
  • Use & Apply AI: When using the technology, one should be able to employ the tools to elicit desired responses. “This has been a particular focus at Barnard, including hands-on labs, or real-time, collaborative prompt engineering to demonstrate how to use these tools,” the authors wrote in the paper’s abstract.
  • Analyze & Evaluate AI: When analyzing and evaluating the work, users should pay attention to “outcomes, biases, ethics, and other topics beyond the prompt window,” said the authors.
  • Create AI: The authors noted that while Barnard has provided workshops at the Computational Science Center for users to begin creating with AI, this is an area that’s still evolving.

“I think we are all grappling with how to approach this transformational technology with AI,” said Hibbert. “Hopefully, this pyramid contributes to how people are introducing and educating people about it.”

In other AI news, Michelle Greene, assistant professor of neuroscience & behavior and psychology, published new research, “Digital divides in scene recognition: uncovering socioeconomic biases in deep learning systems,” which reveals significant biases in artificial intelligence models used for scene classification.