Computer scientists have sought to achieve artificial intelligence since the earliest computers were developed, but only in recent years has AI technology advanced to the point that it can be widely used. With these advances, AI has become part of the new normal, bringing both excitement and concern about its use. Barnard computer science professors are harnessing AI to advance their research and benefit the world.

“Artificial intelligence has the potential to help us make advances across a wide range of disciplines,” says Rebecca Wright, the Druckenmiller Professor and chair of Barnard’s Computer Science Department. “In our department, we are focused in our research and in the classroom on helping to improve the foundations and applications of AI so that it can produce fair, transparent, and equitable outcomes.”

Here, four of the College’s computer science professors share examples from their research on how they use AI for good. (Their comments have been edited for brevity and clarity.) 

Mark Santolucito, Assistant Professor 

Mark Santolucito

Santolucito spoke from South Korea, where he is working as a Fulbright scholar with semiconductor manufacturers and academics on programming language design. In developing program synthesis and analysis techniques, Santolucito seeks to provide programmers with new ways to interface with code. While he encourages his students to use AI’s large language models (LLMs) in their coding assignments, he also stresses that even though AI can help them write code faster, a solid foundation of coding skills must come first. 

“I’ve always focused on looking at programming languages from an accessibility and usability point of view so that they’re safer and easier to use. This is generally in the field of programming language design and program verification. We want to make sure that people write the code that they want to write and not accidentally write code that does the wrong thing. Wrong code can be really bad. Stock markets can crash and rockets can explode when you have bugs in your code. With AI, I’ve been focused on code generation with large language models. This is exciting work because LLMs are changing the way we write programs. Programmers write less code by hand and, obviously, use LLMs to generate more code. But how can we be sure that the code that was generated does the right thing? That’s the focus of my work: trying to design tools for programming languages in such a way that when we use the LLMs we can be confident in the results and have better transparency into what code has been generated.”

Smaranda Muresan, Associate Professor 

Smaranda Muresan

Muresan’s research is focused on natural language processing (NLP), a branch of AI that enables computers to understand and generate language. She develops human-centered NLP technology for social good. For example, in collaboration with the Friends Research Institute, the New York City Office of Chief Medical Examiner (OCME), and Columbia’s School of Social Work, she is developing novel NLP technology to predict overdose mortality from narrative investigation reports. Noting the long turnaround times for issuing finalized death certificates in suspected drug-related deaths, she aims to build explainable predictive technology that can quickly analyze these reports so that public health professionals can work with near-real-time data. Muresan uses the same NLP techniques on social media to understand people’s struggles with opioid use disorder. 

“We are analyzing social media posts on a Reddit discussion forum where we’re trying to examine how people move from use to misuse to addiction — and then determining how we can get insights from this discussion using natural language processing that can later inform policy or intervention. We’re taking a Reddit post and predicting whether a person is using or whether the person is already addicted. We also want the system to provide evidence from the text as to why it made the specific prediction. This is very useful for the end user — say, a social worker or a public health professional — who is trying to use this system to understand the epidemic, and it also builds trust into the model.”

Corey Toler-Franklin, Assistant Professor

Corey Toler-Franklin

Toler-Franklin directs Barnard’s Graphics, Imaging & Light Measurement Laboratory. Her lab builds customized imaging devices for multispectral imaging, spectroscopy, and microscopy to capture the world “as we see it,” analyze patterns, and measure the distribution of reflectance off surfaces. She uses principles from physics to make representations of the real world that ultimately can be used for scientific applications. 

“Every project in my lab is motivated by real-world challenges. We develop AI algorithms to further neuroscience and cancer research, and methods that use quantum physics to analyze and simulate materials for applications in forensics and cultural understanding. One application of what we do is in forensics. I work with the lead forensic scientist at the site of the Tulsa Race Massacre. Scientists are exhuming the bodies and doing DNA testing. They are trying to identify them and bring closure to the families. When DNA testing is equivocal or they can’t get enough samples, I use my imaging and analysis methods to complement what they do. For example, I use my spectral imaging techniques on site to decipher faded information on grave markers and connect the information to public records. We’re looking for other ways to do identification so that we can reconstruct the social network that existed.”

Brian Plancher, Assistant Professor 

Brian Plancher

Plancher leads Barnard’s Accessible and Accelerated Robotics Lab (A²R Lab). His research focuses on developing and implementing open-source algorithms for dynamic motion planning and control of robots “to improve the positive societal impacts of all autonomous systems.” His intentional emphasis on open-source software and courseware enables global access to such cutting-edge technology. For example, Plancher co-chairs the Tiny Machine Learning Open Education Initiative (TinyMLedu), a global educational consortium that brings AI’s machine learning on microcontrollers to the masses at no cost.

“I try to make algorithms that are usable and deployable so that we can start to get robotic systems out of the lab and into the real world. My lab’s historical focus has been primarily on foundational, theoretical, and algorithmic computer science work, with the understanding that a robot will never function if it can’t think fast enough to hit real-time rates. For example, if a humanoid robot has to think for an hour while it’s in the middle of a single step, it’s going to fall on its face, right? So we use mathematical and computational insights, paired with advanced AI algorithms, to compress, scale out, and speed up how we do this. And we release all of our software open-source to help bring the global audience into being a part of our research vision. We actually need these sorts of systems to help everyone.”

Robot on the grass
A quadruped robotic dog that Professor Plancher and his students work with at Barnard’s robotics lab