
New Insight into the Evolutionary History of Urban Mosquitoes


A new ‘Science’ paper co-authored by a Leon Levy Scholar suggests that the London Underground Mosquito’s ability to adapt to urban environments dates back further than previously thought.

Published October 31, 2025

By Nick Fetty

Magnified image of the Cx. pipiens body. Image by David Barillet-Portal via Wikimedia Commons. Licensed via CC BY-SA 3.0. No changes made to the original work.

Culex pipiens form molestus, more commonly known as the London Underground Mosquito, has long been an example of the potential speed and complexity of urban adaptation.

Through years of underground habitation in the subways and cellars of northern Europe, the species is thought to have evolved from its bird-biting ancestors into an urban form, called molestus, that bites humans and other mammals. This shift interests scientists because it is thought to have contributed to the spread of West Nile virus in the United States and southern Europe over the past 20 years. While previous research suggested that the mosquito evolved human-biting and other human-adaptive characteristics over the past two centuries, new research published in the journal Science shows this evolutionary history could date back more than 1,000 years.

The paper was published in Science on October 23rd by a team of researchers, including first author Yuki Haba, PhD, a 2025 Leon Levy Scholar in Neuroscience. Named for the late philanthropist Leon Levy and administered by The New York Academy of Sciences, the Leon Levy Scholarships in Neuroscience aim to promote groundbreaking neuroscience research in New York City. The scholarship supports the most innovative young researchers during their postdoctoral research, which is a critical stage of their careers.

Analyzing Population Genomics

Dr. Haba built upon the research he did as a doctoral student at Princeton University. He applied his expertise in population genomics to the recent paper.

“As a behavioral and evolutionary scientist, I have been very much interested in the evolution of mosquitoes – whose human-biting behavior and the ability to vector deadly diseases are a threat to millions of people,” says Dr. Haba, who also serves as a postdoc at The Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University. “I, together with my advisor Lindy McBride and more than 200 collaborators across the world, generated and analyzed the first global population genomic dataset of Culex pipiens, an important human-biting species. My expertise in population genomics was particularly helpful in analyzing large-scale datasets as well as in deciphering ecological contexts in which the human-biting mosquito originated.”

The research team sequenced the whole genomes of approximately 350 contemporary and historical Cx. pipiens mosquitoes from 77 populations across Europe, North Africa, and western Asia. They then used population genomic analysis, focusing on population structure, derived allele-sharing, phylogeny, and cross-coalescence, to better understand molestus’ evolutionary history.
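
Analyses like these start from a samples-by-sites genotype matrix, with population structure typically summarized by principal component analysis. Below is a minimal Python sketch of genotype PCA under a common Patterson-style normalization; the function name, toy data, and dimensions are illustrative and not drawn from the paper’s actual pipeline.

```python
import numpy as np

def pca_population_structure(genotypes: np.ndarray, n_components: int = 2):
    """Project samples onto top principal components of a genotype matrix.

    genotypes: (n_samples, n_sites) array of alternate-allele counts (0, 1, 2).
    Returns (n_samples, n_components) coordinates summarizing population structure.
    """
    # Center each site and scale by its expected binomial standard deviation,
    # following the common Patterson-style normalization.
    p = genotypes.mean(axis=0) / 2.0                  # per-site allele frequency
    keep = (p > 0) & (p < 1)                          # drop monomorphic sites
    g = genotypes[:, keep]
    p = p[keep]
    g_std = (g - 2 * p) / np.sqrt(2 * p * (1 - p))
    # Top components of the sample covariance capture population structure.
    u, s, _ = np.linalg.svd(g_std, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

# Toy usage: six samples from two simulated populations separate on PC1.
rng = np.random.default_rng(0)
pop_a = rng.binomial(2, 0.1, size=(3, 500))
pop_b = rng.binomial(2, 0.8, size=(3, 500))
coords = pca_population_structure(np.vstack([pop_a, pop_b]))
print(coords[:, 0])  # first three vs. last three samples cluster apart
```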

“Our genomic data also provide a major revision to our understanding of gene flow between bird- and mammal-biting forms,” the researchers write. “We found that genetic signatures researchers previously ascribed to between-form hybridization instead reflect ancestral variation within bird-biting populations.”

Continuing to Evolve

The researchers now believe that molestus first started adapting to human environments more than 1,000 years ago in the Mediterranean basin, likely in Ancient Egypt or a similar early agricultural society.

“Rather than benchmarking the speed and complexity of urban evolution, this updated history highlights the role of early human society in priming taxa for colonization of modern urban environments,” the researchers conclude. “Our work also revises our fundamental understanding of gene flow in this important vector and opens the door to incisive investigation of the potential links between urbanization, hybridization, and arbovirus spillover to humans.”

Even though the researchers have shown that molestus has ancient origins, that doesn’t mean evolution has stopped. Once these mosquitoes moved underground, they faced a very different set of challenges — including the scarcity of hosts. In those settings, females that can lay eggs without a blood meal (a trait called autogeny) have a big advantage. This behavior and physiology are almost universal in northern underground populations but much less frequent in Egypt and surrounding regions.

“One exciting question for future research is whether that’s a bona fide recent, rapid adaptation to underground life, and whether it evolved just once or multiple times independently,” says Dr. Haba. “We think our study also has important and exciting public health implications, because molestus isn’t just a fascinating evolutionary story, it’s also a major vector for disease.”

New Avenues of Research

Aboveground molestus was once the primary carrier of a human-specific filarial parasite in Egypt, and it’s been implicated in the transmission of West Nile virus and other pathogens across Eurasia and North America. The researchers found that hybridization between bird-biting pipiens and human-biting molestus — which allows viruses to jump from birds to humans (referred to as ‘viral spillover’) — is much rarer than previously believed. What earlier studies interpreted as “mixing” often reflects shared ancient ancestry instead. But where hybridization does occur, it’s linked to human population density — meaning it happens more often in urban areas.

This finding gives researchers a new framework to explore how urbanization, gene flow, and disease transmission are all connected.

“By disentangling ancient variation from true hybridization events, we may be able to better predict where mosquitoes capable of bridging bird-to-human transmission might emerge,” says Dr. Haba. “We suggest future surveillance should incorporate as much genomic data and analyses as possible, so that we can better understand the links between urbanization, gene flow, ancestral variation, and viral spillover.”

Read the full paper.

2026 Annual Symposium for the Leon Levy Scholarships in Neuroscience


May 7, 2026 | 8:30 AM – 5:00 PM ET

The 2026 Annual Symposium for the Leon Levy Scholarships in Neuroscience is the flagship event for this highly competitive program. Presented by The New York Academy of Sciences in partnership with the Leon Levy Foundation, this Symposium is open to esteemed members of the local neuroscience community by invitation only. Current Leon Levy scholars from the 2025 cohort will be introducing their research proposals, while scholars from the 2023 and 2024 cohorts will be presenting updates on their research. Attendees will have ample opportunity to network with scholars, mentors, PIs, program alumni, and other prominent New York City-based neuroscientists.

The Leon Levy Scholarships in Neuroscience aim to promote groundbreaking neuroscience research in the five boroughs of New York City. The scholarships support the most innovative young researchers at a critical stage of their careers—their postdoctoral research—as they develop new ideas and directions to help establish them as independent neuroscientists. To learn more about the program or request an invitation, click here or contact us at leonlevy@nyas.org.

Download the Agenda


Artificial Intelligence and Animal Group Behavior

By linking cognitive strategy, neural mechanisms, movement statistics, and artificial intelligence (AI), an interdisciplinary team of researchers is trying to better understand animal group behavior.

Published December 23, 2024

By Nick Fetty

A bay-breasted warbler in Central Park. Image courtesy of Rhododendrites, CC BY-SA 4.0, via Wikimedia Commons.

A new research paper in the journal Scientific Reports explores ways that artificial intelligence (AI) can analyze and perhaps even predict animal behavior.

The paper, titled “Linking cognitive strategy, neural mechanism, and movement statistics in group foraging behaviors,” was authored by Rafal Urbaniak and Emily Mackevicius, both from the Basis Research Institute, and Marjorie Xie, a member of the first cohort for The New York Academy of Sciences’ AI and Society Fellowship Program.

For this project, the team developed a novel framework to analyze group foraging behavior in animals. The framework, which bridged insights from cognitive neuroscience, cognitive science, and statistics, was tested with both simulated data and real-world datasets, including observations of birds foraging in mixed-species flocks.

“By translating between cognitive, neural, and statistical perspectives, the study aims to understand how animals make foraging decisions in social contexts, integrating internal preferences, social cues, and environmental factors,” says Mackevicius.

An Interdisciplinary Approach

Each of the paper’s three co-authors brought their own expertise to the project. Mackevicius, a co-founder and director of Basis Research Institute, holds a PhD in neuroscience from MIT where her dissertation examined how birds learn to sing. She advised this project, collected the data on the groups of birds, and assisted with analytical work. Her contributions built upon her postdoctoral work studying memory-expert birds in the Aronov lab at Columbia University’s Center for Theoretical Neuroscience.

Xie, who holds a PhD in neurobiology and behavior from Columbia University, brought her expertise in computational modeling, neuroscience, and animal behavior. Building on a neurobiological model of memory and planning in the avian brain, Xie worked alongside Mackevicius to design a cognitive model that would simulate communication strategies in birds.

“The cognitive model describes where a given bird chooses to move based on what features they value in their environment within a certain sight radius,” says Xie, who interned at Basis during her PhD studies. “To what extent does the bird value food versus being in close proximity to other birds versus information communicated by other birds?”
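
As a rough illustration of the kind of movement rule Xie describes, here is a minimal Python sketch in which a simulated bird scores neighboring grid cells by a weighted sum of food value, flockmate proximity, and communicated information; the weights, sight radius, and choose_move helper are hypothetical stand-ins, not the paper’s actual model.

```python
import numpy as np

def choose_move(pos, food, others, signals, sight=3.0,
                w_food=1.0, w_social=0.5, w_info=0.8):
    """Pick the neighboring cell with the highest weighted value.

    pos: (x, y) current position; food: dict mapping cell -> food value;
    others: list of other birds' positions; signals: dict cell -> communicated value.
    Weights trade off food, proximity to flockmates, and shared information.
    """
    candidates = [(pos[0] + dx, pos[1] + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

    def value(cell):
        v = w_food * food.get(cell, 0.0)
        # Social term: count flockmates within the sight radius of the cell.
        near = sum(np.hypot(cell[0] - ox, cell[1] - oy) <= sight
                   for ox, oy in others)
        v += w_social * near
        v += w_info * signals.get(cell, 0.0)   # information from other birds
        return v

    return max(candidates, key=value)

# Toy usage: food to the east, a flockmate to the north.
move = choose_move((0, 0), food={(1, 0): 2.0}, others=[(0, 2)], signals={})
print(move)  # -> (1, 0): the food-rich cell wins under these weights
```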

Bayesian Methods and Causal Probabilistic Programming

Urbaniak brought in his expertise in Bayesian methods and causal probabilistic programming. For the paper, he built all the statistical models and applied statistical inference tools to perform model identification.

“On the modeling side, the most exciting challenge for me was turning vague, qualitative theories about animal movement and motivations into precise, quantitative models. These models needed to capture a range of possible mechanisms, including inter-animal communication, in a way that would allow us to use relatively simple animal movement data with Bayesian inference to cast light on them,” says Urbaniak, who holds a PhD in logic and philosophy of mathematics from the University of Calgary, Canada and held previous positions at Trinity College Dublin, Ireland, and the University of Bristol, U.K.
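
The flavor of this approach can be shown with a toy example: given choices simulated from a softmax preference for food, a grid-approximated Bayesian posterior recovers the preference weight. This numpy sketch is a deliberately simplified stand-in for the causal probabilistic programming used in the paper; the model, data, and variable names are all illustrative.

```python
import numpy as np

# Minimal Bayesian sketch: infer how strongly a bird weighs food (w) from
# observed choices, assuming a softmax choice rule over two candidate cells.
# This is a toy stand-in for the paper's probabilistic-programming models.

def log_likelihood(w, food_pairs, choices):
    """food_pairs: (n, 2) food values of the two options; choices: 0 or 1."""
    logits = w * food_pairs                       # utility = w * food value
    logp = logits - np.logaddexp(logits[:, 0], logits[:, 1])[:, None]
    return logp[np.arange(len(choices)), choices].sum()

rng = np.random.default_rng(1)
true_w = 2.0
food_pairs = rng.uniform(0, 1, size=(200, 2))
p_choose_1 = 1 / (1 + np.exp(-true_w * (food_pairs[:, 1] - food_pairs[:, 0])))
choices = (rng.uniform(size=200) < p_choose_1).astype(int)

# Grid-approximate posterior over w with a flat prior on [0, 5].
grid = np.linspace(0, 5, 501)
log_post = np.array([log_likelihood(w, food_pairs, choices) for w in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean w:", (grid * post).sum())   # close to the true 2.0
```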

For this project, the researchers set up video cameras in Central Park to record bird movements, which they then used to study behavior. In the paper, the researchers point out that birds are an appealing subject for studying animal cognition within collaborative groups.

“Birds are highly intelligent and communicative, often operate in multi-agent or even multi-species groups, and occupy an impressively diverse range of ecosystems across the globe,” the researchers wrote in the paper’s introduction.

The paper built upon previous work within this realm, with the researchers writing that “[this work demonstrated] how abstract cognitive descriptions of multi-agent foraging behavior can be mapped to a biologically plausible neural network implementation and to a statistical model.”

Expanding their Research

For both Mackevicius and Xie, this project enabled them to expand their research from studying individual birds to groups of birds. They saw this as an opportunity to “scale up” their previous work to better understand how cognition differs within a group context. Since the paper was published in September, Mackevicius has applied a similar methodology to study NYC’s infamous rats, and she sees potential for extending this work even further.

“This research has broad implications not just for neuroscience and animal cognition but also for fields like artificial intelligence, where multi-agent decision-making is a central challenge,” Mackevicius wrote for the Springer Nature blog. “The ability to infer cognitive strategies from observed behavior, particularly in group contexts, is a crucial step toward designing more sophisticated AI systems.”

Xie says she “learned many skills on the spot” throughout the project, including reinforcement learning (an AI framework) and statistical inference. For her, it was especially rewarding to observe how all these small pieces shaped the bigger picture.

“This work inspires me to think about how we apply these tools to reason about human behavior in group settings such as team sports, crowds in public spaces, and traffic in urban environments,” says Xie. “In crowds, humans may set aside their individual agency and operate on heuristics such as following the flow of the crowd or moving towards unoccupied space. The balance between pursuing individual needs and cooperating with others is a fascinating phenomenon we have yet to understand.”

The AI and Society Fellowship is a collaboration with Arizona State University’s School for the Future of Innovation in Society. For more info, click here.

Basis AI is currently seeking Research Interns for 2025. For more info, click here.

The Ethics of Developing Voice Biometrics


Various ethical considerations must be applied to the development of artificial intelligence technologies like voice biometrics to ensure disenfranchised populations are not negatively impacted.

Published August 29, 2024

By Nitin Verma, PhD

Nitin Verma, PhD, (left) conducts an interview with Juana Caralina Becerra Sandoval at The New York Academy of Sciences’ office in lower Manhattan.
Photo by Nick Fetty/The New York Academy of Sciences.

Juana Catalina Becerra Sandoval, a PhD candidate in the Department of the History of Science at Harvard University and a research scientist in the Responsible and Inclusive Technologies initiative at IBM Research, presented as part of The New York Academy of Sciences’ (the Academy) Artificial Intelligence (AI) & Society Seminar series. The lecture – titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies” – explored the ethical implications associated with the development and use of AI-based tools such as voice biometrics. After the presentation, Juana sat down with Nitin Verma, PhD, a member of the Academy’s 2023 cohort of the AI & Society Fellowship, to further discuss the promises and challenges society faces as AI continues to evolve.

*Some quotes have been edited for length and clarity*

Tell me about some of the big takeaways from your research so far on voice biometrics that you covered in your lecture?

I think some of the main takeaways from the history of the automation of speaker recognition are, first, really trying to understand what are the different motivations or incentives for investing in a particular technology and a particular technological future. In the case of voice biometrics, a lot of the interest is coming from different sectors, like the financial sector or the security and surveillance sector. It’s important to keep those interests in mind and observe how they inform the way in which voice biometrics get developed or not.

The other thing that’s important is that even though we have a notion of technological progress, some of the underlying ideas and assumptions are very old. This includes ideas about the body, about what the human body is, and how humans have the ability to change, or not, their body and the way they speak. In the case of voice biometrics, these ideas date back to 19th-century eugenic science, and they continue informing research, even as we have new technologies. We need to not just look at this technology as new, but ask what are the ideas that remain, or that sustain over time, and in which context did those ideas originate.

So, in your opinion, what role does, or would, AI play in your historical accounting of voiceprint technology?

I think, in some way, this is the story of AI. So, it’s not a separate story. AI doesn’t come together in the abstract. It always comes along in relation to a particular application. A lot of the different algorithmic techniques we have today were developed in relation to voice biometrics. Really what AI entails is a shift in the logic of the ontology of voice where you can have information surface from the data or emerge from statistical methods, without needing to have a theory of what the voice is and how it relates to the body or identity and illness. This is the kind of shift and transformation that artificial intelligence ushers.

What would you think is the biggest concern regarding the use of AI in monitoring technologies such as voice biometrics?

Well, I think there are several concerns. I definitely think that there’s already inscribed within the history of voice biometrics an interest in over-policing and over-surveilling of Black and Latinx communities. There’s always that inherent risk that technology will be deployed to over-police certain communities, and voice biometrics then enter into a larger infrastructure where people are already being policed and surveilled through video with computer vision or through other means.

In the security sector, I think my main concern is that there’s a presumption that the relationship between voice and identity is fixed and immutable, which can create problems for people who want to change their voice, or for people whose voice changes in ways outside of their control, like from an injury or illness. There are numerous reasons why people might be left out of these systems, which is why we want to make sure we are creating infrastructures that are equitable.

Speaking to the other side of this same question, in your view, what would be some of the beneficial or ethical uses of this technology going forward?

Rather than starting from the point of ‘what do corporations or institutions need to make their job easier or more profitable?’, we should instead focus on ‘what are the kinds of tools and techniques that people want for themselves and for their lives?’, and ‘in what ways can we leverage the current state of the art towards those ends?’. I think it’s much more about the approach and the incentive.

There’s nothing inherent to technology that makes it cause irreparable harm or be inherently unethical. It’s more about: what is the particular ontology of voice?; what’s the conception of voice that goes into the system?; and towards whose ends is it being leveraged? I’m hopeful and optimistic about anything that is driven by people and people’s desires for a better life and a better future.

Your work brings together various threads of research or inquiry, such as criminology, the history of technology, inequality, and the history of biometric technology as such. What are some of the challenges and benefits that you’ve encountered on account of this multidisciplinary approach to studying the topic?

I was trained as a historian, and originally my idea was to be a professor, but once I started working at IBM Research and the Responsible and Inclusive Tech team, I think I got much closer to the people who very materially and very concretely wanted to make technology better, or, more specifically, to improve the infrastructures and the cultures in which technology is built.

That really pushed me to take a multidisciplinary approach and to think about things not just from a historical lens, but be very rooted in the technical, as well as present day politics and economic structures. I think of my own immigrant background. I’m from Colombia and I naturally already had this desire to engage with humanities and social science scholarship that was critical of these aspects of society, but this may not be the same for everyone. I think the biggest challenge is effectively engaging different audiences.

In the lecture you described listening as a political process. Can you elaborate on that?

I’m really drawing on scholars in sound studies and voice studies. The Sonic Color Line, Race as Sound, and Black Linguistics, are three of the main theoretical foundations that I am in conversation with. The point they try to make is that when we attend to listening, rather than voice itself as a sort of thing that stands on its own, we can see and almost contextualize how different voices are understood, described, interpreted, classified, and so on.

The political in listening is what makes people have reactions to certain voices or interpret them in particular ways. Accents are a great example. Perceptions of who has an accent and what an accent sounds like are highly contextual. The politics of listening really emphasizes that contextuality and how we’ve come to associate things like being eloquent through particular ways of speaking or with how particular voices sound, and not others.

Is there anything else you’d like to add?

Well, I think something that strikes me about the story of voice biometrics and voiceprints is how little the public knows about what’s happening. A lot of decisions about these technologies are made in contexts that are not publicly shared. So, there’s a different degree of awareness in the kind of different public discourses around the ethics of AI and voice. It’s very different from facial recognition, computer vision, or even toxic language.

Also read: The Ethics of Surveillance Technology

Have We Passed the Turing Test, and Should We Really be Trying?


The 70th anniversary of Turing’s death invites us to ponder: can we imagine AI models that will do well on the Turing test?

Published August 22, 2024

By Nitin Verma, PhD

Alan Turing (1912-1954) in 1936 at Princeton University.
Image courtesy of Wikimedia Commons.

Alan Turing is perhaps best remembered by many as the cryptography genius who led the British effort to break the German Enigma codes during WWII. His efforts provided crucial information about German troop movements and helped bring the war to an end.

2024 has been a noteworthy year in the story of Turing’s life as June 7th marked 70 years since his tragic death in 1954. But four years before that—in 1950—he kickstarted a revolution in digital computing by posing the question “can machines think?” and proposing an “imitation game” to answer it.

While this quest has been the holy grail for theoretical computer scientists since the publication of Turing’s 1950 paper, the public launch of ChatGPT in November 2022 has brought the question to the center stage of global conversation.

In his landmark 1950 paper, Turing predicted that: “[by about the year 2000] it will be possible to programme computers… [that] play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.” (p. 442). By “right identification”, Turing meant accurately distinguishing between human-generated and computer-generated text responses.

This “imitation game” eventually came to be known as the Turing test of machine intelligence. It is designed to determine whether a computer can successfully imitate a human to the point that a human interacting with it would be unable to tell the difference.
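
Because Turing’s criterion is quantitative, it can be restated as a tiny scoring procedure: tally an interrogator’s correct human-versus-machine identifications and compare the rate to his 70 percent benchmark. The Python sketch below is purely illustrative, with a made-up identification rate rather than a measurement of any real system.

```python
import random

# Toy scoring of Turing's criterion: a machine "does well" if an average
# interrogator's rate of correct human-vs-machine identifications after
# five minutes of questioning does not exceed 70%. Rates are illustrative.

def run_trials(p_correct: float, n_trials: int = 1000, seed: int = 0) -> float:
    """Simulate n_trials identifications, each correct with probability p_correct."""
    rng = random.Random(seed)
    correct = sum(rng.random() < p_correct for _ in range(n_trials))
    return correct / n_trials

observed = run_trials(p_correct=0.68)        # a hypothetical interrogator
print(f"identification rate: {observed:.1%}")
print("machine meets Turing's bar" if observed <= 0.70 else "machine fails")
```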

We’re well past the year 2000: Are we there yet?

In 2022, Google let go of Blake Lemoine, a software engineer who had publicly claimed that the company’s LaMDA (Language Model for Dialogue Applications) program had attained sentience. Since then, the closest we’ve come to seeing Turing’s prediction come true is, perhaps, GPT-4, deepfakes, and OpenAI’s “Sora” text-to-video model that can churn out highly realistic video clips from mere text prompts.

Some researchers argue that LLMs (Large Language Models) such as GPT-4 do not yet pass the Turing test. Yet some others have flipped the script and argued that LLMs offer a way to assess human intelligence by positing a reverse Turing Test—i.e., what do our conversational interactions with LLMs reveal about our own intelligence?

Turing himself made a noteworthy remark about the imitation game in the same 1950 paper: “… we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.” (Emphasis mine; p. 436).

Would Turing have imagined the current crop of generative AI models such as GPT-4 as ‘machines’ capable of “doing well” on the Turing test? I believe so, but we’re not quite there yet. As an information scientist, I believe that in 2024 AI has come closer than ever to passing the Turing test.

If we’re not there yet, then should we strive to get there?

As with any other technology ever invented, however much Turing may have had only the public good in mind, there is always the potential for unforeseen consequences.

Technologies such as deepfake apps and conversational agents such as ChatGPT still need human creativity to be useful and usable. But still, the advanced AI that powers these technologies carries the potential of passing the Turing test. That potential portends a range of consequences for society that deserve our serious attention.

Leading scholars have already warned about the consequences of the ability of “fake” information to fuel distrust in public institutions including the judicial system and national security. The upheaval in the public imagination caused by ChatGPT even prompted US President Biden to issue an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI in the fall of 2023.

We’ll never know what Turing would have made of the spurt of AI advances in light of his own foundational work in theoretical computer science and artificial intelligence. His untimely death at the young age of 41 deprived the world of one of the greatest minds of the 20th century and the still more extraordinary achievements he could have made.

But it’s clear that the advances and use of AI technology have brought society to a turning point that he anticipated in his seminal works.

It remains difficult to say when—or whether—machines will truly surpass human-level intelligence. But more than 70 years after Turing’s death we are at a point where we can imagine AI agents that will do well on the Turing test. And if we can imagine it, we can someday build it too.

Passing a challenging test can be seen as a marker of progress. But would we truly rejoice in having our AI pass the Turing test, or some other benchmark of human–machine indistinguishability?

A More Scientific Approach to Artificial Intelligence and Machine Learning


Taking a more scientific perspective, while remaining ethical, can improve public trust in these emerging technologies.

Published August 13, 2024

By Nitin Verma, PhD

Savannah Thais, PhD, is an Associate Research Scientist in the Data Science Institute at Columbia University with a focus on machine learning. Dr. Thais is interested in complex system modeling and in understanding what types of information are measurable or modelable, and what impacts designing and performing measurements have on systems and societies.

*This interview took place at The New York Academy of Sciences on January 18, 2024. This transcript was generated using Otter.ai and was proofread for corrections. Some quotes have been edited for length and clarity*

Tell me about the big takeaways from your talk?

The biggest highlight is that we should be treating machine learning and AI development more scientifically. I think that will help us build more robust, more trustworthy systems, and it will help us better understand the way that these systems impact society. It will contribute to safety, to building public trust, and all the things that we care about with ethical AI.

In what ways can the adoption of scientific methodology make models of complex systems more robust and trustworthy?

I think having a more principled design and evaluation process, such as the scientific method approach to model building, helps us realize more quickly when things are going wrong, and at what step of the process we’re going wrong. It helps us understand more about how the data, our data processing, and our data collection contributes to model outcomes. It helps us understand better how our model design choices contribute to eventual performance, and it also gives us a framework for thinking about model error and a model’s harm on society.

We can then look at those distributions and back-propagate those insights to inform model development and task formulation, and thereby understand where something might have gone wrong, and how we can correct it. So, the scientific approach really just gives us the principles and a step-by-step understanding of the systems that we’re building, rather than what I often see: a hodgepodge approach where the only goal is model accuracy, and when something goes wrong, we don’t necessarily know why or where.
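
One concrete form of this principled evaluation is disaggregated error analysis: instead of reporting a single accuracy number, break errors out by subgroup to see where the model fails. The Python sketch below plants a failure mode in synthetic data to show the idea; the groups and error rates are illustrative, not taken from Dr. Thais’s work.

```python
import numpy as np

# Disaggregated error analysis: examine how errors distribute across
# subgroups rather than relying on one headline accuracy figure.
# The groups and data here are synthetic placeholders.

rng = np.random.default_rng(42)
n = 1000
groups = rng.choice(["A", "B", "C"], size=n)
y_true = rng.integers(0, 2, size=n)
# A hypothetical model that is systematically worse on group C.
flip = rng.uniform(size=n) < np.where(groups == "C", 0.30, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"overall accuracy: {(y_pred == y_true).mean():.2f}")
for g in ["A", "B", "C"]:
    mask = groups == g
    print(f"group {g}: accuracy {(y_pred[mask] == y_true[mask]).mean():.2f}")
# The per-group breakdown surfaces a failure mode the overall number hides.
```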

You have a very interesting background, and your work touches on various academic disciplines, including machine learning, particle physics, social science, and law. How does this multidisciplinary background inform your research on AI?

I think being trained as a physicist really impacts how I think about measurements and system design. We have a very specific idea of truth in physics. And that isn’t necessarily translatable to scenarios where we don’t have the same kind of data or the same kind of measurability. But I think there’s still a lot that can be taken from that, that has really informed how I think about my research in machine learning and its social applications.

This includes things like experimental design, data validation, and uncertainty propagation in models. Really thinking about how we understand the truth of our model, and how accurate it is compared to society. So that idea of precision and truth that’s fundamental to physics has affected the research that I do. But my other interests and other backgrounds are influential as well. I’ve always been interested in policy in particular. Even in grad school, when I was doing a physics PhD, I did a lot of extracurricular work in advocacy in student government at Yale. That greatly impacted how I think about understanding how systems affect society, resource access, and more. It really all mixes together.

And then the other thing that I’ll say here is, I don’t think one person can be an expert in this many things. So, I don’t want it to seem like I’m an expert at law and physics and all this stuff. I really lean a lot on interdisciplinary collaborations, which is particularly encouraged at Columbia. For example, I’ve worked with people at Columbia’s School of International and Public Affairs as well as with people from the law school, from public health, and from the School of Social Work. My background allows me to leverage these interdisciplinary connections and build these truly collaborative teams.

Is there anything else you’d like to add to this conversation?

I would reemphasize that science can help us answer a lot of questions about the accuracy and impact of machine learning models of societal phenomena. But I want to make sure to emphasize at the same time that science is only ever going to get us so far. And I think there’s a lot that we can take from it in terms of experimental design, documentation, principles of model construction, observational science, uncertainty quantification, and more. But I think it’s equally important that as scientific researchers, which includes machine learning researchers, we really make an effort to engage both with other academic disciplines and with our communities.

I think it’s super important to talk to people in your communities about how they think about the role of technology in society, what they actually want technology to do, how they think about these things, and how they understand them. That’s the only way we’re going to build a more responsible, democratic, and participatory technological future. Where technology is actually serving the needs of people and is not just seen as either a scientific exercise or as something that a certain group of people build and then subject the rest of society to, whether it’s what they actually wanted or not.

So I really encourage everyone to do a lot of community engagement, because I think that’s part of being a good citizen in general. And I also encourage everyone to recognize that domain knowledge matters a lot in answering a lot of these thorny questions, and that we can make ourselves better scientists by recognizing that we need to work with other people as well.

Also read: From New Delhi to New York

The Leon Levy Scholarships in Neuroscience (LLSN)

Scholarship Details

Terms of Appointment

Selected Scholars must dedicate 100% of their research time to scientific research projects unless they have a clinical obligation, in which case they may spend up to 20% of their time on clinical obligations.

Stipend & Benefits

The Leon Levy Scholarship is a three (3)-year award. Scholars will receive: 

  • Annual stipend equal to 125% of the National Institutes of Health (NIH) postdoctoral rate, according to the postdoctoral year
  • Fringe benefits at the host institution’s rate for postdoctoral Scholars
  • US$2,000 computer allowance as a one-time award
  • Annual supplement of up to US$10,000 to support care costs (e.g. dependent care)
  • Indirect support to the host institution will be allowed at the standard published rate, capped at 20%
  • 3-year Membership to The New York Academy of Sciences
  • Participation in a structured Mentorship Program for Leon Levy Scholars
  • Access to leadership and skills-building workshops through The New York Academy of Sciences
  • Access to the community of past and present Leon Levy Scholars and Fellows
  • Grant writing support

Duration

  • Each Scholar is expected to begin the 36-month Scholarship in September of the year in which the award is received (some remote orientation may begin before September/before arrival).
  • Should a Scholar depart the institution at which they were awarded the LLSN, the Scholarship may be transferred if the new institution is eligible; otherwise, it will conclude.

Scholar Responsibilities

  1. Attend the New Scholar Orientation
  2. Participate in the annual Leon Levy Symposium
  3. Attend quarterly virtual Group Seminars; Scholars are required to present a research update at a Group Seminar at least once during their tenure
  4. Participate in Mentorship and Career Development activities (detailed below)
  5. Engage in Scholarship-related media activities and inquiries (e.g., video interviews, magazine profile interviews, etc.) as requested
  6. Provide an Annual Report describing research and career progress for each year of their tenure; the Final Report must summarize the research project and state final conclusions. Report templates will be provided.

Mentorship Program

All Scholars will be required to participate in a structured Mentorship Program for the duration of their Scholarship. Scholars will receive their primary scientific mentorship from their Research Advisor. In addition, Scholars will benefit from advice and mentorship from a senior scientist, referred to as a Mentor, not directly involved in the Scholars’ research. Scholars will have access to both their scientific Research Advisor and a Mentor as part of the Leon Levy Scholarships in Neuroscience Program.

An essential feature of the Scholarship program will be this opportunity to learn from and be mentored by distinguished leaders across scientific fields. In this capacity, Mentors will provide guidance on the Scholars’ postdoctoral research and in their pursuit of an independent PI role or other scientific career paths. Each Mentor will have a successful track record of mentorship and will be paired with a scholar based on mutual scientific interests.

Once Scholars are chosen, they will work with the program team to find an appropriate senior Mentor from the Academy’s membership. Once matched, the mentoring pairs are expected to meet a minimum of once every other month (in person or virtually) and will have access to prompts and activities to help guide conversations if appropriate. Mentoring pairs will complete an expectations worksheet to help define how the pair will work together. All pairs will be expected to abide by the Academy’s Code of Conduct.

Quarterly Group Seminars

Each quarter, the Academy will host a meeting of all current Scholars to discuss ideas, share research updates and success stories, identify potential collaborations and help solve problems. These seminars will include Scholar presentations, interactive discussions, and informal networking. The Academy will work closely with the Scholars – through conversations, surveys, and other methods – to design programming that meets the short-term and long-term scientific and career needs of the Scholars.

Leadership and Skills Building Opportunities

All Scholars will receive a professional membership to the Academy, providing them with free and reduced-cost access to career development events, courses, and workshops. There is no requirement for Scholars to participate. Leadership and skills-building opportunities include topics such as science communication, grant writing, inclusive leadership, teaching and pedagogy, and ethics. Scholars will also receive a newsletter and regular updates about these opportunities.

Leon Levy Community

The Academy maintains a robust virtual community for scientists via LinkedIn. Scholars will have the opportunity to join our LinkedIn community and have a dedicated platform to network with other Scholars.

Membership to The New York Academy of Sciences

All Scholars will receive a three (3)-year membership to the Academy. Membership provides Scholars with access to our global Member Directory, a deep archive of digital content, and access to free or significantly discounted registrations for over 100 symposia, webinars, and conferences annually.

Grant Writing Support, as needed

Scholars will have access to a grant writing professional who can consult with them on a grant and provide them with guidance.


Contact Us

If you have any questions, contact leonlevy@nyas.org.

The New York Academy of Sciences and the Leon Levy Foundation Announce the 2024 Leon Levy Scholars in Neuroscience

Nine early career scientists are part of the 2024 cohort, including researchers from The Rockefeller University, Albert Einstein College of Medicine, Icahn School of Medicine at Mount Sinai, the Flatiron Institute, and NYU.

New York, NY | May 29, 2024 – The New York Academy of Sciences and the Leon Levy Foundation announced today the 2024 cohort of Leon Levy Scholars in Neuroscience, continuing a program initiated by the Foundation in 2009 that has supported 170 fellows in neuroscience.

This highly regarded postdoctoral program supports exceptional young researchers across the five boroughs of New York City as they pursue innovative neuroscience research and advance their careers toward becoming independent principal investigators. Nine scholars were competitively selected for a three-year term from a broad pool of applications from more than a dozen institutions across New York City that offer postdoctoral positions in neuroscience.

Shelby White, founding trustee of the Leon Levy Foundation, said, “For two decades, the Foundation has supported over 170 of the best young neuroscience researchers in their risk-taking research and clinical work. We are proud to partner with The New York Academy of Sciences to continue to encourage these gifted young scientists, helping them not only to advance their careers but also to advance the cause of breakthrough research in the field of neuroscience.”

Nicholas Dirks, the Academy’s President and CEO, said, “Our distinguished jury selected nine outstanding neuroscientists across the five boroughs of New York City involved with cutting-edge research ranging from the study of neural circuitry of memory and decision-making, to psychedelic-based treatment of alcohol and substance abuse disorders, to the chemical communication of insects, to the use of organoids to study Alzheimer’s, to vocal learning research in mammals. We are excited to be working with the Leon Levy Foundation to welcome this new group of young neuroscientists to the Academy and the Leon Levy Scholar community.”

The Scholars program includes professional development opportunities such as structured mentorship by distinguished senior scientists, and workshops on grant writing, leadership development, communications, and management skills. The program facilitates networking among cohorts and alumni, data sharing, cross-institutional collaboration, and the annual Leon Levy Scholars symposium, held in the spring of 2025.

The 2024 Leon Levy Scholars


Tiphaine Bailly, PhD, The Rockefeller University

Recognized for: Genetically engineering the pheromone glands of ants to study chemical communication in insect societies.


Ernesto Griego, PhD, Albert Einstein College of Medicine

Recognized for: Mechanisms by which experience and brain disease modify inhibitory circuits in the dentate gyrus, a region of the brain that contributes to memory and learning.


Deepak Kaji, MD, PhD, Icahn School of Medicine at Mount Sinai

Recognized for: Using 3D organoids and assembloids to model abnormal protein accumulations and aggregations in the brain, a characteristic of Alzheimer’s Disease.


Jack Major, PhD, Icahn School of Medicine at Mount Sinai

Recognized for: Understanding the long-term effects of inflammation on somatosensory neurons, cells that perceive and communicate information about external stimuli and internal states such as touch, temperature and pain.


Brigid Maloney, PhD, Icahn School of Medicine at Mount Sinai

Recognized for: Identifying the transcriptomic (RNA transcript) specializations unique to advanced vocal learning mammals.


Amin Nejatbakhsh, PhD, Flatiron Institute

Recognized for: Statistical modeling of neural data to causally understand biological and artificial neural networks and the mechanisms therein.


Broc Pagni, PhD, NYU Langone Health

Recognized for: Identifying the psychological and neurobiological mechanisms of psychedelic-based treatments for alcohol and substance use disorders.


Adithya Rajagopalan, PhD, New York University

Recognized for: Examining how neurons within the brain’s orbitofrontal cortex combine input from other brain regions to encode complex properties of the world that guide decision-making.


Genelle Rankin, PhD, The Rockefeller University

Recognized for: Identifying and characterizing how thalamic nuclei, specialized areas of the thalamus responsible for relaying sensory and motor signals and regulating consciousness, support working memory maintenance.

About the Leon Levy Foundation

The Leon Levy Foundation continues and builds upon the philanthropic legacy of Leon Levy, supporting preservation, understanding, and the expansion of knowledge, with a focus on the ancient world, arts and humanities, nature and gardens, neuroscience, human rights, and Jewish culture. The Foundation was created in 2004 from Leon Levy’s estate by his wife, founding trustee Shelby White. To learn more, visit: leonlevyfoundation.org.

For more information about the Scholarship program, contact: LeonLevy@nyas.org

Using AI and Neuroscience to Transform Mental Health


With a deep appreciation for the liberal arts, neuroscientist Marjorie Xie is developing AI systems to facilitate the treatment of mental health conditions and improve access to care.  

Published May 8, 2024

By Nick Fetty

As the daughter of a telecommunications professional and a software engineer, Marjorie Xie was perhaps destined to pursue a career in STEM. What was less predictable, given her liberal arts background, was her journey through the field of artificial intelligence.

From the City of Light to the Emerald City

Marjorie Xie, a member of the inaugural cohort of the AI and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, was born in Paris, France. Her parents, who grew up in Beijing, China, came to the City of Light to pursue their graduate studies, and they instilled in their daughter an appreciation for STEM as well as a strong work ethic.

The family moved to Seattle, Washington in 1995 when her father took a job with Microsoft. He was among the team of software engineers who developed the Windows operating system and the Internet Explorer web browser. Growing up, her father encouraged her to understand how computers work and even to learn some basic coding.

“Perhaps from his perspective, these skills were just as important as knowing how to read,” said Xie. “He emphasized to me: you want to be in control of the technology instead of letting technology control you.”

Xie’s parents gifted her a set of DK Encyclopedias as a child, her first serious exposure to science, which inspired her to take “field trips” into her backyard to collect and analyze samples. While her parents instilled in her an appreciation for science and technology, Xie admits her STEM classes were difficult and she had to work hard to understand the complexities. She said she was easily intimidated by math growing up, but certain teachers helped her reframe her purpose in the classroom.

“My linear algebra teacher in college was extremely skilled at communicating abstract concepts and created a supportive learning environment – being a math student was no longer about knowing all the answers and avoiding mistakes,” she said. “It was about learning a new language of thought and exploring meaningful ways to use it. With this new perspective, I felt empowered to raise my hand and ask basic questions.”

She also loved reading and excelled in courses like philosophy, literature, and history, which gave her a deep appreciation for the humanities and laid the groundwork for her future course of studies. Xie designed her own major in computational neuroscience at Princeton University, weaving in elements of those same humanistic disciplines.

“Throughout college, the task of choosing a major created a lot of tension within me between STEM and the humanities,” said Xie. “Designing my own major was a way of resolving this tension within the constraints of the academic system in which I was operating.”

She then pursued her PhD in Neurobiology and Behavior at Columbia University, where she used AI tools to build interpretable models of neural systems in the brain.

A Deep Dive into the Science of Artificial and Biological Intelligence

Xie worked in Columbia’s Center for Theoretical Neuroscience, where she studied alongside physicists and used AI to understand how nervous systems work. Much of her work builds on the research of the late neuroscientist David Marr, who described information-processing systems at three levels: computation (what the system does), algorithm (how it does it), and implementation (what substrates are used).

“We were essentially using AI tools – specifically neural networks – as a language for describing the cerebellum at all of Marr’s levels,” said Xie. “A lot of the work understanding how the cerebellar architecture works came down to understanding the mathematics of neural networks. An equally important part was ensuring that the components of the model be mapped onto biologically meaningful phenomena that could be measured in animal behavior experiments.”

Her dissertation focused on the cerebellum, a region of the brain involved in motor control, coordination, and the processing of language and emotions. She said the neural architecture of the cerebellum is “evolutionarily conserved,” meaning it can be observed across many species, yet scientists don’t know exactly what it does.

“The mathematically beautiful work from Marr-Albus in the 1970s played a big role in starting a whole movement of modeling brain systems with neural networks. We wanted to extend these theories to explain how cerebellum-like architecture could support a wide range of behaviors,” Xie said.
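
The core of the Marr-Albus picture can be sketched in a few lines: a small mossy-fiber input is randomly expanded into a large, sparsely active granule-cell layer, and a perceptron-like “Purkinje” readout learns from that expansion. The Python toy model below is a generic textbook-style illustration with assumed dimensions and learning rule, not a reconstruction of Xie’s models.

```python
import numpy as np

# Marr-Albus-style toy: mossy-fiber input -> random sparse granule-cell
# expansion -> perceptron-like Purkinje readout. Sizes are illustrative.

rng = np.random.default_rng(7)
n_mossy, n_granule, n_patterns = 50, 2000, 200

J = rng.normal(size=(n_granule, n_mossy)) / np.sqrt(n_mossy)  # random expansion
x = rng.normal(size=(n_patterns, n_mossy))                     # input patterns
labels = rng.choice([-1, 1], size=n_patterns)                  # target outputs

h = x @ J.T
theta = np.quantile(h, 0.9, axis=0)        # threshold so ~10% of cells fire
g = np.maximum(h - theta, 0.0)             # sparse granule-cell code

w = np.zeros(n_granule)                    # Purkinje readout weights
for _ in range(50):                        # perceptron learning epochs
    for i in range(n_patterns):
        if np.sign(g[i] @ w) != labels[i]:
            w += labels[i] * g[i]          # error-driven update

acc = (np.sign(g @ w) == labels).mean()
print(f"training accuracy after expansion: {acc:.2f}")
# The sparse expansion makes random patterns far easier to separate than
# they would be in the raw 50-dimensional mossy-fiber space.
```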

As a computational neuroscientist, Xie learned how to map ideas between the math world and the natural world. She credits her PhD advisor, Ashok Litwin-Kumar, an assistant professor of neuroscience at Columbia University, with playing a critical role in her development of this skill.

“Even though my current research as a postdoc is less focused on the neural level, this skill is still my bread and butter. I am grateful for the countless hours Ashok spent with me at the whiteboard,” Xie said.

Joining a Community of Socially Responsible Researchers

After completing her PhD, Xie interned with Basis Research Institute, where she developed models of avian cognition and social behavior. It was here that her mentor, Emily Mackevicius, co-founder and director at Basis, encouraged her to apply to the AI and Society Fellowship.

The Fellowship has enabled Xie to continue growing professionally through opportunities such as collaborations with research labs, the winter academic sessions at Arizona State, the Academy’s weekly AI and Society seminars, and work with a cohort of like-minded scholars from diverse backgrounds, including the other two AI and Society Fellows, Akuadasuo Ezenyilimba and Nitin Verma.

During the Fellowship, her interest in combining neuroscience and AI with mental health led her to develop research collaborations at the Mount Sinai Center for Computational Psychiatry. With the labs of Angela Radulescu and Xiaosi Gu, Xie is building computational models to understand causal relationships between attention and mood, with the goal of developing tools that will enable those with medical conditions like ADHD or bipolar disorder to better regulate their emotional states.

“The process of finding the right treatment can be a very trial-and-error based process,” said Xie. “When treatments work, we don’t necessarily know why they work. When they fail, we may not know why they fail. I’m interested in how AI, combined with a scientific understanding of the mind and brain, can facilitate the diagnosis and treatment process and respect its dynamic nature.”

Challenged to Look Beyond the Science

Xie says the Academy and Arizona State University communities have challenged her to venture beyond her role as a scientist and to think like a designer and a public steward. This means thinking about AI from the perspective of stakeholders and engaging them in the decision-making process.

“Even the question of who are the stakeholders and what they care about requires careful investigation,” Xie said. “For whom am I building AI tools? What do these populations value and need? How can they be empowered and participate in decision-making effectively?”

More broadly, she considers what systems of accountability need to be in place to ensure that AI technology effectively serves the public. As a case study, Xie points to mainstream social media platforms that were designed to maximize user engagement; however, the proxies they used for engagement have led to harmful effects such as addiction and increased polarization of beliefs.

She is also mindful that problems in mental health span multiple levels – biological, psychological, social, economic, and political.

“A big question on my mind is, what are the biggest public health needs around mental health and how can computational psychiatry and AI best support those needs?” Xie asked.

Xie hopes to explore these questions through avenues such as journalism and entrepreneurship. She wants to integrate various perspectives gained from lived experience.

“I want to see the world through the eyes of people experiencing mental health challenges and from providers of care. I want to be on the front lines of our mental health crises,” said Xie.

More than a Scientist

Outside of work, Xie serves as a resident fellow at the International House in New York City, where she organizes events to build community amongst a diverse group of graduate students from across the globe. Her curiosity about cultures around the world led her to visit a mosque for the first time, with Muslim residents from I-House, and to participate in Ramadan celebrations.

“That experience was deeply satisfying,” Xie said. “It compels me to get to know my neighbors even better.”

Xie starts her day by hitting the pool at 6:00 each morning with the U.S. Masters Swimming team at Columbia University. She approaches swimming differently now than when she was younger, when she swam competitively in an environment where she felt there was too much emphasis on living up to the expectations of others. Instead, she now looks at it as an opportunity to grow.

“Now, it’s about engaging in a continual process of learning,” she said. “Being around faster swimmers helps me learn through observation. It’s about being deliberate, exercising my autonomy to set my own goals instead of meeting other people’s expectations. It’s about giving my full attention to the present task, welcoming challenges, and approaching each challenge with openness and curiosity.”

Read about the other AI and Society Fellows:

From New Delhi to New York


Academy Fellow Nitin Verma is taking a closer look at deepfakes and the impact they can have on public opinion.

Published April 23, 2024

By Nick Fetty

Nitin Verma’s interest in STEM can be traced back to his childhood growing up in New Delhi, India.

Verma, a member of the inaugural cohort for the Artificial Intelligence (AI) and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, remembers being fascinated by physics and biology as a child. When he and his brother would play with toys like kites and spinning tops, he would always think about the science behind why the kite stays in the sky or why the top continues to spin.

Later, he developed an interest in radio and was mesmerized by the ability to pick up radio stations from far away on the shortwave band of the household radio. In the early 1990s, he remembers television programs like Turning Point and Brahmānd (Hindi: ब्रह्मांड, literally translated to “the Universe”) further inspired him.

“These two programs shaped my interest in science, and then through a pretty rigorous school system in India, I got a good grasp of the core concepts of the major sciences—physics, chemistry, biology—and mathematics by the time I graduated high school,” said Verma. “Even though I am an information scientist today, I remain absolutely enraptured by the night sky, physics, telecommunication, biology, and astronomy.”

Forging His Path in STEM

Verma went on to pursue a bachelor’s in electronic science at the University of Delhi, where he continued to pursue his interest in radio communications while developing technical knowledge of electronic circuits, semiconductors, and amplifiers. After graduating, he spent nearly a decade working as an embedded software programmer, though he found himself somewhat unfulfilled by his work.

“In industry, I felt extremely disconnected from my inner desire to pursue research on important questions in STEM and social science,” he said.

This lack of fulfillment led him to the University of Texas at Austin where he pursued his MS and PhD in information studies. Much like his interest in radio communications, he was also deeply fascinated by photography and optics, which inspired his dissertation research.

This research examined the impact that deepfake technology can have on public trust in photographic and video content. He wanted to learn how people came to trust visual evidence in the first place, and what is at stake with the arrival of deepfake technology. He found that perceived, or actual, familiarity with content creators and depicted environments, along with contexts, prior beliefs, and prior perceptual experiences, guides public trust in the material deemed trustworthy.

“My main thesis is that deepfake technology could be exploited to break our trust in visual media, and thus render the broader public vulnerable to misinformation and propaganda,” Verma said.

A New York State of Mind

Verma captured this image of the historic eclipse that occurred on April 8, 2024.

After completing his PhD, he applied for and was admitted into the AI and Society Fellowship. The fellowship has enabled him to further his understanding of AI through opportunities such as the weekly lecture series, collaborations with researchers at New York University, presentations he has given around the city, and by working on projects with Academy colleagues such as Marjorie Xie and Akuadasuo Ezenyilimba.

Additionally, he is part of the Academy’s Scientist-in-Residence program, in which he teaches STEM concepts to students at a Brooklyn middle school.

“I have loved the opportunity to interact regularly with the research community in the New York area,” he said, adding that living in the city feels like a “mini earth” because of the diverse people and culture.

In the city he has found inspiration for some of his non-work hobbies such as playing guitar and composing music. The city provides countless opportunities for him to hone his photography skills, and he’s often exploring New York with his Nikon DSLR and a couple of lenses in tow.

Deepfakes and Politics

In much of his recent work, he’s examined the societal dimensions (culture, politics, language) that he says are crucial when developing AI technologies that effectively serve the public, echoing the Academy’s mission of “science for the public good.” With a polarizing presidential election on the horizon, Verma has expressed concerns about bad actors utilizing deepfakes and other manipulated content to sway public opinion.

“It is going to be very challenging, given how photorealistic visual deepfakes can get, and how authentic-sounding audio deepfakes have gotten lately,” Verma cautioned.

He encourages people to refrain from reacting to and sharing information they encounter on social media, even if the posts bear the signature of a credible news outlet. Basic vetting, such as visiting the actual webpage to ensure it is indeed the correct webpage of the purported news organization, and checking the timestamp of a post, can serve as a good first line of defense against disinformation, according to Verma. Particularly when viewing material that may reinforce one’s beliefs, Verma challenges viewers to ask themselves: “What do I not know after watching this content?”

While Verma has concerns about “the potential for intentional abuse and unintentional catastrophes that might result from an overzealous deployment of AI in society,” he feels that AI can serve the public good if properly practiced and regulated.

“I think AI holds the promise of attaining what—in my opinion—has been the ultimate pursuit behind building machines and the raison d’être of computer science: to enable humans to automate daily tasks that come in the way of living a happy and meaningful life,” Verma said. “Present day AI promises to accelerate scientific discovery including drug development, and it is enabling access to natural language programming tools that will lead to an explosive democratization of programming skills.”
