
Artificial Intelligence and Animal Group Behavior

By linking cognitive strategy, neural mechanisms, movement statistics, and artificial intelligence (AI), a team of interdisciplinary researchers is trying to better understand animal group behavior.

Published December 23, 2024

By Nick Fetty
Digital Content Manager

A bay-breasted warbler in Central Park. Image courtesy of Rhododendrites, CC BY-SA 4.0, via Wikimedia Commons.

A new research paper in the journal Scientific Reports explores ways that artificial intelligence (AI) can analyze and perhaps even predict animal behavior.

The paper, titled “Linking cognitive strategy, neural mechanism, and movement statistics in group foraging behaviors,” was authored by Rafal Urbaniak and Emily Mackevicius, both from the Basis Research Institute, and Marjorie Xie, a member of the first cohort for The New York Academy of Sciences’ AI and Society Fellowship Program.

For this project, the team developed a novel framework to analyze group foraging behavior in animals. The framework, which bridged insights from cognitive neuroscience, cognitive science, and statistics, was tested with both simulated data and real-world datasets, including observations of birds foraging in mixed-species flocks.

“By translating between cognitive, neural, and statistical perspectives, the study aims to understand how animals make foraging decisions in social contexts, integrating internal preferences, social cues, and environmental factors,” says Mackevicius.

An Interdisciplinary Approach

Each of the paper’s three co-authors brought their own expertise to the project. Mackevicius, a co-founder and director of Basis Research Institute, holds a PhD in neuroscience from MIT where her dissertation examined how birds learn to sing. She advised this project, collected the data on the groups of birds, and assisted with analytical work. Her contributions built upon her postdoctoral work studying memory-expert birds in the Aronov lab at Columbia University’s Center for Theoretical Neuroscience.

Xie, who holds a PhD in neurobiology and behavior from Columbia University, brought her expertise in computational modeling, neuroscience, and animal behavior. Building on a neurobiological model of memory and planning in the avian brain, Xie worked alongside Mackevicius to design a cognitive model that would simulate communication strategies in birds.

“The cognitive model describes where a given bird chooses to move based on what features they value in their environment within a certain sight radius,” says Xie, who interned at Basis during her PhD studies. “To what extent does the bird value food versus being in close proximity to other birds versus information communicated by other birds?”
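
To make this concrete, here is a minimal, hypothetical sketch of the kind of decision rule Xie describes: a bird scores each reachable location within its sight radius by a weighted mix of food value, proximity to flockmates, and information signaled by other birds. The function, weights, and data structures are illustrative assumptions, not code from the paper.

```python
import numpy as np

def choose_move(bird_pos, candidate_cells, food, others, signals,
                w_food=1.0, w_social=0.5, w_info=0.3, sight_radius=5.0):
    """Score each candidate cell within the sight radius and pick the best.

    The (hypothetical) weights reflect how much the bird values food,
    proximity to other birds, and information communicated by other birds.
    """
    scores = []
    for cell in candidate_cells:
        pos = np.asarray(cell, dtype=float)
        if np.linalg.norm(pos - bird_pos) > sight_radius:
            scores.append(-np.inf)  # the bird cannot evaluate cells it cannot see
            continue
        key = (int(cell[0]), int(cell[1]))
        food_value = food.get(key, 0.0)
        social_value = -min(np.linalg.norm(pos - o) for o in others) if others else 0.0
        info_value = signals.get(key, 0.0)
        scores.append(w_food * food_value + w_social * social_value + w_info * info_value)
    return candidate_cells[int(np.argmax(scores))]

# Toy usage: a bird at the origin weighs a food patch against a flockmate's signal.
bird = np.array([0.0, 0.0])
cells = [(1, 0), (0, 1), (2, 2)]
food = {(1, 0): 1.0}                # food value at cell (1, 0)
others = [np.array([0.0, 3.0])]     # one flockmate nearby
signals = {(2, 2): 0.8}             # another bird signaling near (2, 2)
print(choose_move(bird, cells, food, others, signals))
```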

Bayesian Methods and Causal Probabilistic Programming

Urbaniak brought in his expertise in Bayesian methods and causal probabilistic programming. For the paper, he built all the statistical models and applied statistical inference tools to perform model identification.

“On the modeling side, the most exciting challenge for me was turning vague, qualitative theories about animal movement and motivations into precise, quantitative models. These models needed to capture a range of possible mechanisms, including inter-animal communication, in a way that would allow us to use relatively simple animal movement data with Bayesian inference to cast light on them,” says Urbaniak, who holds a PhD in logic and philosophy of mathematics from the University of Calgary, Canada and held previous positions at Trinity College Dublin, Ireland, and the University of Bristol, U.K.
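
As a rough illustration of that inferential step, the sketch below assumes a deliberately simplified setting: a forager picks among candidate steps via a softmax over a weighted mix of food and social value, and a grid-approximated Bayesian posterior recovers the food weight from the observed choices. The model structure and variable names are assumptions for illustration, not the paper's actual probabilistic program.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_logits(w_food, food_vals, social_vals):
    # Utility of each candidate step: weighted mix of food and social value.
    return w_food * food_vals + (1.0 - w_food) * social_vals

def simulate_choices(w_true, n_steps=200, n_options=4):
    """Simulate which of n_options candidate steps a forager picks at each step."""
    data = []
    for _ in range(n_steps):
        food = rng.uniform(0, 1, n_options)
        social = rng.uniform(0, 1, n_options)
        p = np.exp(step_logits(w_true, food, social))
        p /= p.sum()
        data.append((food, social, rng.choice(n_options, p=p)))
    return data

def grid_posterior(data, grid=np.linspace(0, 1, 101)):
    """Posterior over the food weight under a uniform prior and softmax likelihood."""
    log_post = np.zeros_like(grid)
    for food, social, choice in data:
        for i, w in enumerate(grid):
            logits = step_logits(w, food, social)
            log_post[i] += logits[choice] - np.log(np.exp(logits).sum())
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()

data = simulate_choices(w_true=0.7)
grid, post = grid_posterior(data)
print("posterior mean food weight:", float((grid * post).sum()))  # should land near 0.7
```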

For this project, the researchers set up video cameras in Central Park to record bird movements, which they then analyzed to study behavior. In the paper, the researchers pointed out that birds are an appealing subject for studying animal cognition in collaborative groups.

“Birds are highly intelligent and communicative, often operate in multi-agent or even multi-species groups, and occupy an impressively diverse range of ecosystems across the globe,” the researchers wrote in the paper’s introduction.

The paper built upon previous work within this realm, with the researchers writing that “[this work demonstrated] how abstract cognitive descriptions of multi-agent foraging behavior can be mapped to a biologically plausible neural network implementation and to a statistical model.”

Expanding Their Research

For both Mackevicius and Xie, this project enabled them to expand their research from studying individual birds to groups of birds. They saw this as an opportunity to “scale up” their previous work to better understand how cognition differs within a group context. Since the paper was published in September, Mackevicius has applied a similar methodology to study NYC’s infamous rats, and she sees potential for extending this work even further.

“This research has broad implications not just for neuroscience and animal cognition but also for fields like artificial intelligence, where multi-agent decision-making is a central challenge,” Mackevicius wrote for the Springer Nature blog. “The ability to infer cognitive strategies from observed behavior, particularly in group contexts, is a crucial step toward designing more sophisticated AI systems.”

Xie says she “learned many skills on the spot” throughout the project, including reinforcement learning (an AI framework) and statistical inference. For her, it was especially rewarding to observe how all these small pieces shaped the bigger picture.

“This work inspires me to think about how we apply these tools to reason about human behavior in group settings such as team sports, crowds in public spaces, and traffic in urban environments,” says Xie. “In crowds, humans may set aside their individual agency and operate on heuristics such as following the flow of the crowd or moving towards unoccupied space. The balance between pursuing individual needs and cooperating with others is a fascinating phenomenon we have yet to understand.”

The AI and Society Fellowship is a collaboration with Arizona State University’s School for the Future of Innovation in Society.

Basis AI is currently seeking Research Interns for 2025.

The Ethics of Developing Voice Biometrics


Various ethical considerations must be applied to the development of artificial intelligence technologies like voice biometrics to ensure disenfranchised populations are not negatively impacted.

Published August 29, 2024

By Nitin Verma, PhD
AI & Society Fellow

Nitin Verma, PhD, (left) conducts an interview with Juana Catalina Becerra Sandoval at The New York Academy of Sciences’ office in lower Manhattan.
Photo by Nick Fetty/The New York Academy of Sciences.

Juana Catalina Becerra Sandoval, a PhD candidate in the Department of the History of Science at Harvard University and a research scientist in the Responsible and Inclusive Technologies initiative at IBM Research, presented as part of The New York Academy of Sciences’ (the Academy) Artificial Intelligence (AI) & Society Seminar series. The lecture – titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies” – explored the ethical implications associated with the development and use of AI-based tools such as voice biometrics. After the presentation, Juana sat down with Nitin Verma, PhD, a member of the Academy’s 2023 cohort of the AI & Society Fellowship, to further discuss the promises and challenges society faces as AI continues to evolve.

*Some quotes have been edited for length and clarity*

Tell me about some of the big takeaways from your research so far on voice biometrics that you covered in your lecture?

I think some of the main takeaways from the history of the automation of speaker recognition are, first, really trying to understand what are the different motivations or incentives for investing in a particular technology and a particular technological future. In the case of voice biometrics, a lot of the interest is coming from different sectors like the financial sector, or the security and surveillance sector. It’s important to keep those interests in mind and observe how they inform the way in which voice biometrics get developed or not.

The other thing that’s important is that even though we have a notion of technological progress, some of the underlying ideas and assumptions are very old. This includes ideas about the body, about what the human body is, and how humans have the ability to change, or not, their body and the way they speak. In the case of voice biometrics, these ideas date back to 19th-century eugenic science, and they continue informing research, even as we have new technologies. We need to not just look at this technology as new, but ask what are the ideas that remain, or that sustain over time, and in which context did those ideas originate.

So, in your opinion, what role does, or would, AI play in your historical accounting of voiceprint technology?

I think, in some way, this is the story of AI. So, it’s not a separate story. AI doesn’t come together in the abstract. It always comes along in relation to a particular application. A lot of the different algorithmic techniques we have today were developed in relation to voice biometrics. Really what AI entails is a shift in the logic of the ontology of voice where you can have information surface from the data or emerge from statistical methods, without needing to have a theory of what the voice is and how it relates to the body or identity and illness. This is the kind of shift and transformation that artificial intelligence ushers in.

What would you think is the biggest concern regarding the use of AI in monitoring technologies such as voice biometrics?

Well, I think the concerns are several. I definitely think that there’s already inscribed within the history of voice biometrics an interest in the over-policing and over-surveillance of Black and Latinx communities. There’s always that inherent risk that the technology will be deployed to over-police certain communities, and voice biometrics then enter into a larger infrastructure where people are already being policed and surveilled through video with computer vision or through other means.

In the security sector, I think my main concern is that there’s a presumption that the relationship between voice and identity is fixed and immutable, which can create problems for people who want to change their voice or for people whose voice changes in ways outside of their control, like from an injury or illness. There are numerous reasons why people might be left out of these systems, which is why we want to make sure we are creating infrastructures that are equitable.

Speaking to the other side of this same question, in your view, what would be some of the beneficial or ethical uses of this technology going forward?

Rather than starting from the point of ‘what do corporations or institutions need to make their job easier or more profitable?’, we should instead focus on ‘what are the kinds of tools and techniques that people want for themselves and for their lives?’, and ‘in what ways can we leverage the current state of the art towards those ends?’. I think it’s much more about the approach and the incentive.

There’s nothing inherent to technology that makes it cause irreparable harm or be inherently unethical. It’s more about: what is the particular ontology of voice?; what’s the conception of voice that goes into the system?; and towards whose ends is it being leveraged? I’m hopeful and optimistic about anything that is driven by people and people’s desires for a better life and a better future.

Your work brings together various threads of research or inquiry, such as criminology, the history of technology, inequality, and the history of biometric technology as such. What are some of the challenges and benefits that you’ve encountered on account of this multidisciplinary approach to studying the topic?

I was trained as a historian, and originally my idea was to be a professor, but once I started working at IBM Research and the Responsible and Inclusive Tech team, I think I got much closer to the people who very materially and very concretely wanted to make technology better, or, more specifically, to improve the infrastructures and the cultures in which technology is built.

That really pushed me to take a multidisciplinary approach and to think about things not just from a historical lens, but be very rooted in the technical, as well as present day politics and economic structures. I think of my own immigrant background. I’m from Colombia and I naturally already had this desire to engage with humanities and social science scholarship that was critical of these aspects of society, but this may not be the same for everyone. I think the biggest challenge is effectively engaging different audiences.

In the lecture you described listening as a political process. Can you elaborate on that?

I’m really drawing on scholars in sound studies and voice studies. The Sonic Color Line, Race as Sound, and Black Linguistics are three of the main theoretical foundations that I am in conversation with. The point they try to make is that when we attend to listening, rather than voice itself as a sort of thing that stands on its own, we can see and almost contextualize how different voices are understood, described, interpreted, classified, and so on.

The political in listening is what makes people have reactions to certain voices or interpret them in particular ways. Accents are a great example. Perceptions of who has an accent and what an accent sounds like are highly contextual. The politics of listening really emphasizes that contextuality and how we’ve come to associate things like being eloquent through particular ways of speaking or with how particular voices sound, and not others.

Is there anything else you’d like to add?

Well, I think something that strikes me about the story of voice biometrics and voiceprints is how little the public knows about what’s happening. A lot of decisions about these technologies are made in contexts that are not publicly shared. So, there’s a different degree of awareness in the kind of different public discourses around the ethics of AI and voice. It’s very different from facial recognition, computer vision, or even toxic language.

Also read: The Ethics of Surveillance Technology

Have We Passed the Turing Test, and Should We Really Be Trying?


The 70th anniversary of Turing’s death invites us to ponder: can we imagine AI models that will do well on the Turing test?

Published August 22, 2024

By Nitin Verma, PhD
AI & Society Fellow

Alan Turing (1912-1954) in 1936 at Princeton University.
Image courtesy of Wikimedia Commons.

Alan Turing is perhaps best remembered by many as the cryptography genius who led the British effort to break the German Enigma codes during WWII. His efforts provided crucial information about German troop movements and helped bring the war to an end.

2024 has been a noteworthy year in the story of Turing’s life, as June 7th marked 70 years since his tragic death in 1954. But four years before that—in 1950—he kickstarted a revolution in digital computing by posing the question “can machines think?” and proposing an “imitation game” to answer it.

While this quest has been the holy grail for theoretical computer scientists since the publication of Turing’s 1950 paper, the public launch of ChatGPT in November 2022 has brought the question to the center stage of global conversation.

In his landmark 1950 paper, Turing predicted that: “[by about the year 2000] it will be possible to programme computers… [that] play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.” (p. 442). By “right identification”, Turing meant accurately distinguishing between human-generated and computer-generated text responses.

This “imitation game” eventually came to be known as the Turing test of machine intelligence. It is designed to determine whether a computer can successfully imitate a human to the point that a human interacting with it would be unable to tell the difference.

We’re well past the year 2000: Are we there yet?

In 2022, Google let go of Blake Lemoine, a software engineer who had publicly claimed that the company’s LaMDA (Language Model for Dialogue Applications) program had attained sentience. Since then, the closest we’ve come to seeing Turing’s prediction come true is, perhaps, GPT-4, deepfakes, and OpenAI’s “Sora” text-to-video model that can churn out highly realistic video clips from mere text prompts.

Some researchers argue that LLMs (Large Language Models) such as GPT-4 do not yet pass the Turing test. Yet some others have flipped the script and argued that LLMs offer a way to assess human intelligence by positing a reverse Turing Test—i.e., what do our conversational interactions with LLMs reveal about our own intelligence?

Turing himself made a noteworthy remark about the imitation game in the same 1950 paper: “… we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.” (Emphasis mine; p. 436).

Would Turing have imagined the current crop of generative AI models such as GPT-4 as ‘machines’ capable of “doing well” on the Turing test? I believe so, but we’re not quite there, yet. As an information scientist, I believe that in 2024 AI has come closer than ever to passing the Turing test.

If we’re not there yet, then should we strive to get there?

As with any technology ever invented, however much Turing may have been thinking only of the public good, there is always the potential for unforeseen consequences.

Technologies such as deepfake apps and conversational agents such as ChatGPT still need human creativity to be useful and usable. But still, the advanced AI that powers these technologies carries the potential of passing the Turing test. That potential portends a range of consequences for society that deserve our serious attention.

Leading scholars have already warned about the consequences of the ability of “fake” information to fuel distrust in public institutions including the judicial system and national security. The upheaval in the public imagination caused by ChatGPT even prompted US President Biden to issue an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI in the fall of 2023.

We’ll never know what Turing would have made of the spurt of AI advances in light of his own foundational work in theoretical computer science and artificial intelligence. His untimely death at the young age of 41 deprived the world of one of the greatest minds of the 20th century and the still more extraordinary achievements he could have made.

But it’s clear that the advances and use of AI technology have brought society to a turning point that he anticipated in his seminal works.

It remains difficult to say when—or whether—machines will truly surpass human-level intelligence. But more than 70 years after Turing’s death we are at a point where we can imagine AI agents that will do well on the Turing test. And if we can imagine it, we can someday build it too.

Passing a challenging test can be seen as a marker of progress. But would we truly rejoice in having our AI pass the Turing test, or some other benchmark of human–machine indistinguishability?

A More Scientific Approach to Artificial Intelligence and Machine Learning


Taking a more scientific perspective, while remaining ethical, can improve public trust in these emerging technologies.

Published August 13, 2024

By Nitin Verma, PhD
AI & Society Fellow

Savannah Thais, PhD, is an Associate Research Scientist in the Data Science Institute at Columbia University with a focus on machine learning. Dr. Thais is interested in complex system modeling and in understanding what types of information are measurable or modelable and what impacts designing and performing measurements have on systems and societies.

*This interview took place at The New York Academy of Sciences on January 18, 2024. This transcript was generated using Otter.ai and was proofread for corrections. Some quotes have been edited for length and clarity*

Tell me about the big takeaways from your talk?

The biggest highlight is that we should be treating machine learning and AI development more scientifically. I think that will help us build more robust, more trustworthy systems, and it will help us better understand the way that these systems impact society. It will contribute to safety, to building public trust, and all the things that we care about with ethical AI.

In what ways can the adoption of scientific methodology make models of complex systems more robust and trustworthy?

I think having a more principled design and evaluation process, such as the scientific method approach to model building, helps us realize more quickly when things are going wrong, and at what step of the process we’re going wrong. It helps us understand more about how the data, our data processing, and our data collection contributes to model outcomes. It helps us understand better how our model design choices contribute to eventual performance, and it also gives us a framework for thinking about model error and a model’s harm on society.

We can then look at those distributions and back-propagate those insights to inform model development and task formulation, and thereby understand where something might have gone wrong, and how we can correct it. So, the scientific approach really just gives us the principles, and a step-by-step understanding of the systems that we’re building. That’s in contrast to what I see a lot of the time: a hodgepodge approach where the only goal is model accuracy, and when something goes wrong, we don’t necessarily know why or where.

You have a very interesting background, and your work touches on various academic disciplines, including machine learning, particle physics, social science, and law. How does this multidisciplinary background inform your research on AI?

I think being trained as a physicist really impacts how I think about measurements and system design. We have a very specific idea of truth in physics. And that isn’t necessarily translatable to scenarios where we don’t have the same kind of data or the same kind of measurability. But I think there’s still a lot that can be taken from that, that has really informed how I think about my research in machine learning and its social applications.

This includes things like experimental design, data validation, and uncertainty propagation in models. Really thinking about how we understand the truth of our model, and how accurate it is compared to society. So that kind of idea of precision and truth that’s fundamental to physics has affected the research that I do. But my other interests and other backgrounds are influential as well. I’ve always been interested in policy in particular. Even in grad school, when I was doing a physics PhD, I did a lot of extracurricular work in advocacy in student government at Yale. That greatly impacted how I think about understanding how systems affect society, resource access, and more. It really all mixes together.

And then the other thing that I’ll say here is, I don’t think one person can be an expert in this many things. So, I don’t want it to seem like I’m an expert at law and physics and all this stuff. I really lean a lot on interdisciplinary collaborations, which is particularly encouraged at Columbia. For example, I’ve worked with people at Columbia’s School of International and Public Affairs as well as with people from the law school, from public health, and from the School of Social Work. My background allows me to leverage these interdisciplinary connections and build these truly collaborative teams.

Is there anything else you’d like to add to this conversation?

I would reemphasize that science can help us answer a lot of questions about the accuracy and impact of machine learning models of societal phenomena. But I want to make sure to emphasize at the same time that science is only ever going to get us so far. And I think there’s a lot that we can take from it in terms of experimental design, documentation, principles, model construction, observational science, uncertainty quantification, and more. But I think it’s equally important that as scientific researchers, which includes machine learning researchers, we really make an effort not only to engage with other academic disciplines, but also to engage with our communities.

I think it’s super important to talk to people in your communities about how they think about the role of technology in society, what they actually want technology to do, how they think about these things, and how they understand them. That’s the only way we’re going to build a more responsible, democratic, and participatory technological future, where technology is actually serving the needs of people and is not just seen as either a scientific exercise or as something that a certain group of people build and then subject the rest of society to, whether it’s what they actually wanted or not.

So I really encourage everyone to do a lot of community engagement, because I think that’s part of being a good citizen in general. And I also encourage everyone to recognize that domain knowledge matters a lot in answering a lot of these thorny questions, and that we can make ourselves better scientists by recognizing that we need to work with other people as well.

Also read: From New Delhi to New York

Using AI and Neuroscience to Transform Mental Health


With a deep appreciation for the liberal arts, neuroscientist Marjorie Xie is developing AI systems to facilitate the treatment of mental health conditions and improve access to care.  

Published May 8, 2024

By Nick Fetty
Digital Content Manager

As the daughter of a telecommunications professional and a software engineer, Marjorie Xie may have seemed destined to pursue a career in STEM. What was less predictable was the route her journey through the field of artificial intelligence would take, shaped by her liberal arts background.

From the City of Light to the Emerald City

Marjorie Xie, a member of the inaugural cohort of the AI and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, was born in Paris, France. Her parents, who grew up in Beijing, China, came to the City of Light to pursue their graduate studies, and they instilled in their daughter an appreciation for STEM as well as a strong work ethic.

The family moved to Seattle, Washington in 1995 when her father took a job with Microsoft. He was among the team of software engineers who developed the Windows operating system and the Internet Explorer web browser. Growing up, her father encouraged her to understand how computers work and even to learn some basic coding.

“Perhaps from his perspective, these skills were just as important as knowing how to read,” said Xie. “He emphasized to me: you want to be in control of the technology instead of letting technology control you.”

Xie’s parents gifted her a set of DK Encyclopedias as a child, her first serious exposure to science, which inspired her to take “field trips” into her backyard to collect and analyze samples. While her parents instilled in her an appreciation for science and technology, Xie admits her STEM classes were difficult and she had to work hard to understand the complexities. She said she was easily intimidated by math growing up, but certain teachers helped her reframe her purpose in the classroom.

“My linear algebra teacher in college was extremely skilled at communicating abstract concepts and created a supportive learning environment – being a math student was no longer about knowing all the answers and avoiding mistakes,” she said. “It was about learning a new language of thought and exploring meaningful ways to use it. With this new perspective, I felt empowered to raise my hand and ask basic questions.”

She also loved reading and excelled in courses like philosophy, literature, and history, which gave her a deep appreciation for the humanities and would lay the groundwork for her future course of studies. Xie designed her own major in computational neuroscience at Princeton University, with her studies bringing in elements of philosophy, literature, and history.

“Throughout college, the task of choosing a major created a lot of tension within me between STEM and the humanities,” said Xie. “Designing my own major was a way of resolving this tension within the constraints of the academic system in which I was operating.”

She then pursued her PhD in Neurobiology and Behavior at Columbia University, where she used AI tools to build interpretable models of neural systems in the brain.

A Deep Dive into the Science of Artificial and Biological Intelligence

Xie worked in Columbia’s Center for Theoretical Neuroscience where she studied alongside physicists and used AI to understand how nervous systems work. Much of her work is based on the research of the late neuroscientist David Marr, who explained information-processing systems at three levels: computation (what the system does), algorithm (how it does it), and implementation (what substrates are used).

“We were essentially using AI tools – specifically neural networks – as a language for describing the cerebellum at all of Marr’s levels,” said Xie. “A lot of the work understanding how the cerebellar architecture works came down to understanding the mathematics of neural networks. An equally important part was ensuring that the components of the model be mapped onto biologically meaningful phenomena that could be measured in animal behavior experiments.”

Her dissertation focused on the cerebellum, the region of the brain used during motor control, coordination, and the processing of language and emotions. She said the neural architecture of the cerebellum is “evolutionarily conserved” meaning it can be observed across many species, yet scientists don’t know exactly what it does.

“The mathematically beautiful work from Marr-Albus in the 1970s played a big role in starting a whole movement of modeling brain systems with neural networks. We wanted to extend these theories to explain how cerebellum-like architecture could support a wide range of behaviors,” Xie said.
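
For readers unfamiliar with the Marr-Albus picture, the toy sketch below captures its core idea under strong simplifying assumptions: mossy-fiber-like inputs are expanded by a large layer of sparsely active “granule cells” with fixed random weights, and a linear “Purkinje cell” readout is trained on that expanded code to store arbitrary associations. The dimensions, threshold, and ridge-regression readout are illustrative choices, not details from Xie’s models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Marr-Albus-style toy model: mossy-fiber inputs are expanded by a large layer
# of sparsely active "granule cells" with fixed random weights, and a linear
# "Purkinje cell" readout is trained on the expanded code.
n_inputs, n_granule, n_samples = 10, 500, 1000
J = rng.normal(size=(n_granule, n_inputs))      # fixed random expansion weights
theta = 1.0                                     # firing threshold -> sparse granule activity

X = rng.normal(size=(n_samples, n_inputs))      # toy mossy-fiber activity patterns
targets = np.sign(rng.normal(size=n_samples))   # arbitrary +/-1 associations to learn

G = np.maximum(X @ J.T - theta, 0.0)            # thresholded granule-layer representation

# Ridge-regression readout, standing in for plasticity at the
# granule-to-Purkinje synapses in the classical theory.
w = np.linalg.solve(G.T @ G + 1e-2 * np.eye(n_granule), G.T @ targets)
accuracy = np.mean(np.sign(G @ w) == targets)
print(f"training accuracy of the expanded linear readout: {accuracy:.2f}")
```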

As a computational neuroscientist, Xie learned how to map ideas between the math world and the natural world. She credits her PhD advisor, Ashok Litwin-Kumar, an assistant professor of neuroscience at Columbia University, with playing a critical role in her development of this skill.

“Even though my current research as a postdoc is less focused on the neural level, this skill is still my bread and butter. I am grateful for the countless hours Ashok spent with me at the whiteboard,” Xie said.

Joining a Community of Socially Responsible Researchers

After completing her PhD, Xie interned with Basis Research Institute, where she developed models of avian cognition and social behavior. It was here that her mentor, Emily Mackevicius, co-founder and director at Basis, encouraged her to apply to the AI and Society Fellowship.

The Fellowship has enabled Xie to continue growing professionally through opportunities such as collaborations with research labs, the winter academic sessions at Arizona State, the Academy’s weekly AI and Society seminars, and by working with a cohort of like-minded scholars across diverse backgrounds, including the other two AI and Society Fellows Akuadasuo Ezenyilimba and Nitin Verma.

During the Fellowship, her interest in combining neuroscience and AI with mental health led her to develop research collaborations at Mount Sinai’s Center for Computational Psychiatry. With the labs of Angela Radulescu and Xiaosi Gu, Xie is building computational models to understand causal relationships between attention and mood, with the goal of developing tools that will enable those with medical conditions like ADHD or bipolar disorder to better regulate their emotional states.

“The process of finding the right treatment can be a very trial-and-error based process,” said Xie. “When treatments work, we don’t necessarily know why they work. When they fail, we may not know why they fail. I’m interested in how AI, combined with a scientific understanding of the mind and brain, can facilitate the diagnosis and treatment process and respect its dynamic nature.”

Challenged to Look Beyond the Science

Xie says the Academy and Arizona State University communities have challenged her to venture beyond her role as a scientist and to think like a designer and a public steward. This means thinking about AI from the perspective of stakeholders and engaging them in the decision-making process.

“Even the question of who are the stakeholders and what they care about requires careful investigation,” Xie said. “For whom am I building AI tools? What do these populations value and need? How can they be empowered and participate in decision-making effectively?”

More broadly, she considers what systems of accountability need to be in place to ensure that AI technology effectively serves the public. As a case study, Xie points to mainstream social media platforms that were designed to maximize user engagement; the proxies they used for engagement, however, have led to harmful effects such as addiction and increased polarization of beliefs.

She is also mindful that problems in mental health span multiple levels – biological, psychological, social, economic, and political.

“A big question on my mind is, what are the biggest public health needs around mental health and how can computational psychiatry and AI best support those needs?” Xie asked.

Xie hopes to explore these questions through avenues such as journalism and entrepreneurship. She wants to integrate various perspectives gained from lived experience.

“I want to see the world through the eyes of people experiencing mental health challenges and from providers of care. I want to be on the front lines of our mental health crises,” said Xie.

More than a Scientist

Outside of work, Xie serves as a resident fellow at the International House in New York City, where she organizes events to build community amongst a diverse group of graduate students from across the globe. Her curiosity about cultures around the world led her to visit a mosque for the first time, with Muslim residents from I-House, and to participate in Ramadan celebrations.

“That experience was deeply satisfying,” Xie said. “It compels me to get to know my neighbors even better.”

Xie starts her day by hitting the pool at 6:00 each morning with the U.S. Masters Swimming team at Columbia University. She approaches swimming differently now than when she was younger and competed in an environment where she felt there was too much emphasis on living up to the expectations of others. Instead, she now looks at it as an opportunity to grow.

“Now, it’s about engaging in a continual process of learning,” she said. “Being around faster swimmers helps me learn through observation. It’s about being deliberate, exercising my autonomy to set my own goals instead of meeting other people’s expectations. It’s about giving my full attention to the present task, welcoming challenges, and approaching each challenge with openness and curiosity.”


From New Delhi to New York


Academy Fellow Nitin Verma is taking a closer look at deepfakes and the impact they can have on public opinion.

Published April 23, 2024

By Nick Fetty
Digital Content Manager

Nitin Verma’s interest in STEM can be traced back to his childhood growing up in New Delhi, India.

Verma, a member of the inaugural cohort for the Artificial Intelligence (AI) and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, remembers being fascinated by physics and biology as a child. When he and his brother would play with toys like kites and spinning tops, he would always think about the science behind why the kite stays in the sky or why the top continues to spin.

Later, he developed an interest in radio and was mesmerized by the ability to pick up radio stations from far away on the shortwave band of the household radio. In the early 1990s, he remembers television programs like Turning Point and Brahmānd (Hindi: ब्रह्मांड, literally translated to “the Universe”) further inspired him.

“These two programs shaped my interest in science, and then through a pretty rigorous school system in India, I got a good grasp of the core concepts of the major sciences—physics, chemistry, biology—and mathematics by the time I graduated high school,” said Verma. “Even though I am an information scientist today, I remain absolutely enraptured by the night sky, physics, telecommunication, biology, and astronomy.”

Forging His Path in STEM

Verma went on to pursue a bachelor’s in electronic science at the University of Delhi where he continued to pursue his interest in radio communications while developing technical knowledge of electronic circuits, semiconductors and amplifiers. After graduating, he spent nearly a decade working as an embedded software programmer, though he found himself somewhat unfulfilled by his work.

“In industry, I felt extremely disconnected with my inner desire to pursue research on important questions in STEM and social science,” he said.

This lack of fulfillment led him to the University of Texas at Austin where he pursued his MS and PhD in information studies. Much like his interest in radio communications, he was also deeply fascinated by photography and optics, which inspired his dissertation research.

This research examined the impact that deepfake technology can have on public trust of photographic and video content. He wanted to learn how people came to trust visual evidence in the first place and what is at stake with the arrival of deepfake technology. He found that perceived or actual familiarity with content creators and depicted environments, along with context, prior beliefs, and prior perceptual experience, guides public trust in visual material.

“My main thesis is that deepfake technology could be exploited to break our trust in visual media, and thus render the broader public vulnerable to misinformation and propaganda,” Verma said.

A New York State of Mind

Verma captured this image of the historic eclipse that occurred on April 8, 2024.

After completing his PhD, he applied for and was admitted into the AI and Society Fellowship. The fellowship has enabled him to further his understanding of AI through opportunities such as the weekly lecture series, collaborations with researchers at New York University, presentations he has given around the city, and by working on projects with Academy colleagues such as Marjorie Xie and Akuadasuo Ezenyilimba.

Additionally, he is part of the Academy’s Scientist-in-Residence program, in which he teaches STEM concepts to students at a Brooklyn middle school.

“I have loved the opportunity to interact regularly with the research community in the New York area,” he said, adding that living in the city feels like a “mini earth” because of the diverse people and culture.

In the city he has found inspiration for some of his non-work hobbies such as playing guitar and composing music. The city provides countless opportunities for him to hone his photography skills, and he’s often exploring New York with his Nikon DSLR and a couple of lenses in tow.

Deepfakes and Politics

In much of his recent work, he’s examined the societal dimensions (culture, politics, language) that he says are crucial when developing AI technologies that effectively serve the public, echoing the Academy’s mission of “science for the public good.” With a polarizing presidential election on the horizon, Verma has expressed concerns about bad actors utilizing deepfakes and other manipulated content to sway public opinion.

“It is going to be very challenging, given how photorealistic visual deepfakes can get, and how authentic-sounding audio deepfakes have gotten lately,” Verma cautioned.

He encourages people to refrain from reacting to and sharing information they encounter on social media, even if the posts bear the signature of a credible news outlet. Basic vetting, such as visiting the actual webpage to ensure it is indeed the correct webpage of the purported news organization, and checking the timestamp of a post, can serve as a good first line of defense against disinformation, according to Verma. Particularly when viewing material that may reinforce their beliefs, Verma challenges viewers to ask themselves: “What do I not know after watching this content?”

While Verma has concerns about “the potential for intentional abuse and unintentional catastrophes that might result from an overzealous deployment of AI in society,” he feels that AI can serve the public good if properly practiced and regulated.

“I think AI holds the promise of attaining what—in my opinion—has been the ultimate pursuit behind building machines and the raison d’être of computer science: to enable humans to automate daily tasks that come in the way of living a happy and meaningful life,” Verma said. “Present day AI promises to accelerate scientific discovery including drug development, and it is enabling access to natural language programming tools that will lead to an explosive democratization of programming skills.”


Applying Human-Computer Interaction to Brain Injuries


With an appreciation for the value of education and an athlete’s work ethic, Akuadasuo Ezenyilimba brings a unique perspective to her research.

Published April 19, 2024

By Nick Fetty
Digital Content Manager

Athletes, military personnel, and others who endure traumatic brain injuries (TBI) may experience improved outcomes during the rehabilitation process thanks to research by a Fellow with Arizona State University and The New York Academy of Sciences.

Akuadasuo Ezenyilimba, a member of the inaugural cohort of the Academy’s AI and Society Fellowship, conducts research that aims to improve both the quality and the accessibility of TBI care by using human-computer interaction. For Ezenyilimba, her interest in this research and STEM more broadly can be traced back to her upbringing in upstate New York.

Instilled with the Value of Education

Growing up in Rochester, New York, Ezenyilimba’s parents instilled in her, and her three younger siblings, the value of education and hard work. Her father, Matthew, migrated to the United States from Nigeria and spent his career in chemistry, while her mother, Kelley, grew up in Akron, Ohio and worked in accounting and insurance. Akuadasuo Ezenyilimba remembers competing as a 6-year-old with her younger sister in various activities pertaining to their after-school studies.

“Both my mother and father placed a strong emphasis on STEM-related education for all of us growing up and I believe that helped to shape us into the individuals we are today, and a big reason for the educational and career paths we all have taken,” said Ezenyilimba.

This competitive spirit extended beyond academics. Ezenyilimba competed as a hammer, weight, and discus thrower on the track and field team at La Cueva High School in New Mexico. An accomplished student athlete, Ezenyilimba was a discus state champion her senior year, and was back-to-back City Champion in discus as a junior and senior.

Her athletic prowess landed her a spot on the women’s track and field team as an undergraduate at New Mexico State University, where she competed in the discus and hammer throw. Off the field, she majored in psychology, which was her first step onto a professional path that would involve studying the human brain.

Studying the Brain

After completing her BS in psychology, Ezenyilimba went on to earn an MS in applied psychology from Sacred Heart University while throwing weight for the women’s track and field team, and then earned an MS in human systems engineering from Arizona State University. She then pursued her PhD in human systems engineering at Arizona State, where her dissertation research focused on mild TBI and human-computer interaction as they relate to executive function rehabilitation. As a doctoral student, she participated in the National Science Foundation’s Research Traineeship Program.

“My dissertation focused on a prototype of a wireframe I developed for a web-based application for mild traumatic brain injury rehabilitation when time, finance, insurance, or knowledge are potential constraints,” said Ezenyilimba. “The application is called Ụbụrụ.”

As part of her participation in the AI and Society Fellowship, she splits her time between Tempe, Arizona and New York. Arizona State University’s School for the Future of Innovation in Society partnered with the Academy for this Fellowship.

Understanding the Societal Impacts of AI

The Fellowship has provided Ezenyilimba the opportunity to consider the societal dimensions of AI and how that might be applied to her own research. In particular, she is mindful of the potential negative impact AI can have on marginalized communities if members of those communities are not included in the development of the technology.

“It is important to ensure everyone, regardless of background, is considered,” said Ezenyilimba. “We cannot overlook the history of distrust that has impacted marginalized communities when new innovations or changes do not properly consider them.”

Her participation in the Fellowship has enabled her to build and foster relationships with other professionals doing work related to TBI and AI. She also collaborates with the other postdocs in her cohort to brainstorm new ways to address the topic of AI in society.

“As a Fellow I have also been able to develop my skills through various professional workshops that I feel have helped make me more equipped and competitive as a researcher,” she said.

Looking Ahead

Ezenyilimba will continue advancing her research on TBI. Through serious gamification, she looks at how to lessen the negative context that can be associated with rehabilitation and how to better enhance the overall user experience.

“My research looks at how to increase accessibility to relevant care and ensure that everyone who needs it is equipped with the necessary knowledge to take control of their rehabilitation journey whether that be an athlete, military personnel, or a civilian,” she said.

Going forward she wants to continue contributing to TBI rehabilitation as well as telehealth with an emphasis on human factors and user experience. She also wants to be a part of an initiative that ensures accessibility to and trust in telehealth, so everyone is capable of being equipped with the necessary tools.

Outside of her professional work, Ezenyilimba enjoys listening to music and attending concerts with family and friends. Some of her favorite artists include Victoria Monet and Coco Jones. She is also getting back into the gym and focusing on weightlifting, harkening back to her days as a track and field student-athlete.

Like many, Ezenyilimba has concerns about the potential misuses of AI by bad actors, but she also sees potential in the positive applications if the proper inputs are considered during the development process.

“I think a promising aspect of AI is the limitless possibilities that we have with it. With AI, when properly used, we can utilize it to overcome potential biases that are innate to humans and utilize AI to address the needs of the vast majority in an inclusive manner,” she said.


The Artificial Intelligence and Society Fellowship Program

Overview

In response to the urgent need to incorporate ethical and humanistic principles into the development and application of artificial intelligence (AI), The New York Academy of Sciences offers a new AI and Society post-doctoral fellowship program, in partnership with Arizona State University’s School for the Future of Innovation in Society.

Merging technical AI research with perspectives from the social sciences and humanities, the goal of the program is the development of multidisciplinary scholars more holistically prepared to inform the future use of AI in society for the benefit of humankind.

Promising young researchers from disciplines spanning computer science, the social sciences, and the humanities will be recruited to participate in a curated research program. Fellows’ time will be divided between New York City, Arizona State University, and on-site internships, working alongside seasoned researchers who are well-versed in academia, industry, or policy work.

From the Academy Blog

Learn about the accomplishments of AI and Society Fellows.


Ethical Implications in the Development of AI


Published November 21, 2023

By Nick Fetty
Digital Content Manager

Betty Li Hou, a Ph.D. student in computer science at the New York University Courant Institute of Mathematical Sciences, presented her lecture “AI Alignment Through a Societal Lens” on November 9 at The New York Academy of Sciences.

Seminar attendees included the 2023 cohort of the Academy’s AI and Society post-doctoral fellowship program (a collaboration with Arizona State University’s School for the Future of Innovation in Society), who asked questions and engaged in a dialog throughout the talk. Hou’s hour-long presentation examined the ethical impacts that AI systems can have on societies, and how machine learning, philosophy, sociology, and law should all come together in the development of these systems.

“AI doesn’t exist independently from these other disciplines and so AI research in many ways needs to consider these dimensions, otherwise we’re only looking at one piece of the picture,” said Hou.

Hou’s research aims to capture the broader societal dynamics and issues surrounding the so-called ‘alignment problem,’ a term popularized by author and researcher Brian Christian in his 2020 book of the same name. Solving the alignment problem means ensuring that AI systems pursue goals that match human values and interests, while avoiding unintended or undesirable outcomes.

Developing Ethical AI Systems

Because values and interests vary across (and even within) countries and cultures, researchers struggle to develop ethical AI systems that transcend these differences and serve societies in a beneficial way. When there isn’t a clear guide for developing ethical AI systems, one of the key questions from Hou’s research becomes apparent: What values are implicitly or explicitly encoded in products?

“I think there are a lot of problems and risks that we need to sort through before extracting benefits from AI,” said Hou. “But I also see so many ways AI provides potential benefits, anything from helping with environmental issues to detecting harmful content online to helping businesses operate more efficiently. Even using AI for complex medical tasks like radiology.”

Social media content moderation is one area where AI algorithms have shown potential for serving society in a positive way. For example, on YouTube, 90% of videos that are reviewed are initially flagged by AI algorithms seeking to spot copyrighted material or other content that violates YouTube’s terms of service.

Hou, whose current work is also supported by a DeepMind Ph.D. Scholarship and an NSF Graduate Research Fellowship, previously served as a Hackworth Fellow at the Markkula Center for Applied Ethics as an undergraduate studying computer science and engineering at Santa Clara University. She closed her recent lecture by reemphasizing the importance of interdisciplinary research and collaboration in the development of AI systems that adequately serve society going forward.

“Computer scientists need to look beyond their field when answering certain ethical and societal issues around AI,” Hou said. “Interdisciplinary collaboration is absolutely necessary.”

The New York Academy of Sciences Announces First Cohort of Post-Doctoral Fellows in Inaugural Artificial Intelligence and Society Fellowship Program with Arizona State University

The AI & Society Fellowship was developed to address the unmet need for scholars who are trained across technical AI and social sciences and the humanities.

New York, NY | August 14, 2023 – Three post-doctoral scholars have been named as the first cohort of Fellows for the Artificial Intelligence and Society Fellowship program.

Launched by The New York Academy of Sciences and Arizona State University in April 2023, the fellowship was developed to address the unmet need for scholars who are trained across technical AI and social sciences and the humanities. This innovative training program will produce the next generation of scholars and public figures who are prepared to shape the future use of AI in ways that will advance the public good.

The Fellows are:

Nitin Verma, PhD, University of Texas at Austin, School of Information

Nitin studies the ethical, societal, and legal impacts of deepfakes and other generative AI technologies. His multidisciplinary research interests include misinformation, trust, human values, and human-computer interaction. He is a native of India, and attended the University of Delhi, graduating with a B.Sc. in electronic science.

Akuadasuo Ezenyilimba, PhD, Arizona State University (ASU), The Polytechnic School; Human Systems Engineering

As a National Science Foundation Research Trainee, Akuadasuo has worked on citizen-centered solutions for real-world problems. Currently, she is researching the relationship between human-computer interaction and traumatic brain injury, executive function, and traumatic brain injury rehabilitation.

Marjorie Xie, PhD, Columbia University Medical Center, Center for Theoretical Neuroscience

Marjorie’s work combines AI, mental health, and education. She interned at Basis Research Institute, building AI tools for reasoning about collaborative intelligence in animals. Marjorie completed her Ph.D. in Neurobiology & Behavior at Columbia University, where she used AI tools to build interpretable models of neural systems in the brain.

Developing the Next Generation of AI Researchers

“AI now permeates every facet of our society,” said Nicholas Dirks, Ph.D., President and CEO, The New York Academy of Sciences. “The technology holds extraordinary promise. It is crucial that researchers have the training and capacity to bring an ethical perspective to its application, to ensure it is used for the betterment of society. That’s why our partnership with Arizona State University, where much of the pioneering research in AI and society is being conducted, is so imperative.”

“ASU is very excited to join with The New York Academy of Sciences for this fellowship,” said David Guston, professor and founding director of ASU’s School for the Future of Innovation in Society, with which the post-docs will be affiliated. “Our goal is to create a powerhouse of trainees, mentors, ideas, and resources to develop the next generation of AI researchers poised to produce ethical, humanistic AI applications and promote these emerging technologies for the public interest,” he added.

Beginning in August 2023, the promising young researchers will participate in a curated research program and professional development training at the Academy’s headquarters in New York City, Arizona State University, and on-site internships, with seasoned researchers from academia, industry, or public policy organizations.

About Arizona State University

Arizona State University, ranked the No. 1 “Most Innovative School” in the nation by U.S. News & World Report for eight years in succession, has forged the model for a New American University by operating on the principles that learning is a personal and original journey for each student; that they thrive on experience and that the process of discovery cannot be bound by traditional academic disciplines. Through innovation and a commitment to accessibility, ASU has drawn pioneering researchers to its faculty even as it expands opportunities for qualified students.

As an extension of its commitment to assuming fundamental responsibility for the economic, social, cultural and overall health of the communities it serves, ASU established the Julie Ann Wrigley Global Futures Laboratory, the world’s first comprehensive laboratory dedicated to the empowerment of our planet and its inhabitants so that all may thrive. It is designed to address the complex social, economic and scientific challenges spawned by the current and future threats from the degradation of our world’s systems.

This platform lays the foundation to anticipate and respond to existing and emerging challenges and use innovation to purposefully shape and inform our future. It includes the College of Global Futures, home to four pioneering schools including the School for the Future of Innovation in Society that is dedicated to changing the world through responsible innovation. For more information, visit globalfutures.asu.edu.