
Ethics and Equity: Navigating Inclusive Excellence in Healthcare and Health Research

The event provided a collaborative platform for speakers and panelists from academia, industry, government, non-profits, and beyond to exchange knowledge on ethical responsibilities to improve equity in healthcare and biomedical research.

Published April 22, 2025

By Christina Szalinski
Academy Contributor

“We are living in a moment that desperately needs clarity of principle and deep moral courage.” And with that statement, Amy Ben Arieh, JD, MPH, executive director of the Fenway Institute and a nationally recognized authority on human research participant protection and inclusive research practices, opened the proceedings of a day-long conference that explored the pursuit of equity and ethical considerations in both healthcare delivery and research conduct.

The New York Academy of Sciences brought together researchers and healthcare professionals to identify systemic barriers and share best practices and strategies for advancing inclusivity, with the aim of ensuring that healthcare and research benefit all members of society.

Staying the Course: Centering Ethics and Equity in Health Care and Health Research

Opening keynote speaker, Lisa Cooper, MD, MPH, James F. Fries Professor of Medicine and the Bloomberg Distinguished Professor of Equity in Health and Health Care at the Johns Hopkins University Schools of Medicine, Nursing, and Public Health, said: “Health equity means that everyone should have a fair opportunity to obtain their full health potential, and that no one should be disadvantaged from achieving this potential because of socially determined circumstances.” Dr. Cooper is the founder and director of the Johns Hopkins Center for Health Equity.

She went on to say that health disparities originate from social norms, institutional and economic policies, and environmental living conditions. To address this, two approaches are required: relationship-centered care, which considers the personhood of everyone involved in healthcare, and structural competence, which involves acknowledging and breaking down barriers such as poverty and racism.

Dr. Cooper called for improvements in healthcare such as patient-centered communication; community engagement, including a shift from outreach to shared leadership; and workforce diversity, which involves establishing a culture of trust through equitable and inclusive treatment and attracting and retaining a diverse group of people.

In her workplace experience, Dr. Cooper has found that diversity and inclusiveness lead to innovation and creativity, as well as overall organizational excellence, and that integrating these efforts into an organization's goals leads to success. As far as she is aware, no diversity, equity, and inclusion effort has led to the marginalization of any group or a worsening of any group's health or well-being.

Finally, Dr. Cooper addressed the stumbling blocks to achieving health equity, including the social and political climate, lack of resources, and current uncertainties. She encouraged attendees to transform these challenges into opportunities for growth and innovation. Quoting Dr. Martin Luther King: “Injustice anywhere is a threat to justice everywhere,” Dr. Cooper closed by noting that we need empathy, self-care, and creativity in order to navigate these obstacles.

Building Trust through Representation: Community Engagement and Research Practices

A panel, moderated by Carol R. Horowitz, MD, professor of population health science and policy at Icahn School of Medicine at Mount Sinai, brought together Carl Streed, MD, associate professor of medicine and research lead for the GenderCare Center at Boston University; Consuelo Wilkins, MD, senior associate dean for equity at Vanderbilt University; Randi Woods, executive director of Sisters Together Reaching; and Anhtuh Huang, PhD, deputy director of We Act for Environmental Justice in Harlem.

The central theme of the panel was engaging the local community beyond transactional interactions. The panelists discussed how some institutions have historically perpetuated harm against marginalized communities, which explains why communities have a justified skepticism of institutions and research. However, as Dr. Wilkins pointed out, when we talk about trust and building trust, it can put the burden on the community—the people who have been disenfranchised and harmed. Instead, she recommends focusing on demonstrating trustworthiness.

To build trust, Randi Woods recommended collaborating with the community and including community perspectives in research priorities and design, as well as moving closer to shared leadership.

One way to establish relationships within the local community, Dr. Streed said, is through Institutional Review Boards (IRBs), which can require researchers to consider how the community informs the research or how the research benefits the community. 

Dr. Huang noted the importance of community engagement that considers other viewpoints, shares resources, and builds strategic partnerships, along with a communications plan for navigating conflicts and challenges.

Building a Health Research Workforce that Centers Equity and Community

Brian Smedley, PhD, senior fellow in the health policy division at the Urban Institute, said that current healthcare systems are designed to generate profit rather than health, which creates structural inequities. He recommended increasing transparency throughout the research process and training professionals in community-engaged practices. He stressed the importance of involving community members in every stage of research—from setting priorities and developing research questions to interpreting and disseminating results—to rebuild trust in medical and public health institutions.

Ethical and Equitable Strategies for Diversifying the Biomedical Research Workforce

Emma Benn, DrPH, associate professor in the Center for Biostatistics and Department of Population Health Science and Policy at the Icahn School of Medicine at Mount Sinai, moderated a panel that included Philip Alberti, PhD, founding director of the Center for Health Justice at the Association of American Medical Colleges; Hila Berger, MPH, assistant vice president of research regulatory affairs at Rutgers Research; and Linda Pololi, MBBS, distinguished research scientist at the Institute for Economic and Racial Equity at Brandeis University and director of the National Initiative on Gender, Culture and Leadership in Medicine at Brandeis. The overarching message was that diversifying the biomedical research workforce is critical for improving scientific innovation and healthcare outcomes.

Dr. Pololi noted that research shows that while many faculty believe in the importance of diversity, only a third think race and ethnicity should be considered in hiring and promotion. Yet, as Dr. Benn pointed out, diverse teams lead to higher productivity and accelerated innovation.

The panelists stressed that diversifying the workforce isn’t just about representation, but about fundamentally changing institutional cultures. They shared examples of progress, such as creating community advisory boards for research protocols and raising diversity and inclusion in the hiring process. Additionally, they recommended measuring the value of the outcomes that diverse research teams produce, encouraging accrediting bodies to influence institutional change, and creating systems to elevate diverse voices. Dr. Alberti and Hila Berger also suggested that K-12 education is an important place to create equal opportunity in the STEM pipeline by encouraging all young people to see themselves as having a place in STEM.

Is AI a Threat or a Solution for Equity, Engagement, and Inclusion?

Bernard Lo, MD, emeritus professor of medicine and director emeritus of the Program in Medical Ethics at the University of California, San Francisco, presented on the complex relationship between artificial intelligence and equity, highlighting AI’s potential to be both a threat to and a solution for diversity and inclusion.

He explained that AI systems can perpetuate existing biases when they are trained on historically skewed datasets. AI can discriminate in areas like hiring, the legal system, loan procurement, and healthcare by replicating biases embedded in training data.
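Dr. Lo's point about skewed training data can be illustrated with a deliberately simple, hypothetical sketch. The records and the frequency-based "model" below are invented for illustration only; a real hiring model would be far more complex, but the failure mode is the same: a system fit to biased history reproduces that bias.

```python
# Synthetic, hypothetical records with a built-in disparity:
# group "A" was historically hired far more often than group "B".
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """A toy 'model': the observed hire rate per group."""
    counts = {}
    for group, hired in records:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + hired)
    return {g: k / n for g, (n, k) in counts.items()}

model = train(historical)
# The trained model scores group A at 0.75 and group B at 0.25:
# equally qualified candidates simply inherit the historical disparity.
```

Nothing in the data says group B candidates are less qualified; the model has only learned who was hired in the past, which is exactly how biased training data propagates into biased decisions.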

However, he also outlined several ways generative AI could make positive change: detecting bias in text, analyzing large datasets from healthcare records, improving patient communication, simplifying the process by which people access services for housing or financial insecurity, and developing easier-to-understand consent protocols in research. Dr. Lo noted that AI could also make certain health screenings, like eye scans for diabetic retinopathy, cheaper and more accessible. Rather than allowing AI to perpetuate inequalities, he said, we need a collaborative, community-engaged approach for it to become a tool for empowerment.

Ensuring Equity and Ethical Practices in Clinical Trials

Giselle Corbie, MD, professor of social medicine at the University of North Carolina School of Medicine, moderated a panel exploring inclusive research practices, emphasizing the critical importance of trust and community engagement. Panelists included Ebony Boulware, MD, dean of Wake Forest School of Medicine and a health equity researcher, and Margarita Alegría, PhD, chief of the Disparities Research Unit at Massachusetts General Hospital.

A fundamental problem, Dr. Corbie noted, is that institutions' previous poor treatment of minorities and women may be why those groups are reluctant to participate in research. One solution is to engage marginalized communities and populations in research design.

Drs. Boulware and Corbie suggested using recruitment tools that ensure there is no discrimination in the selection of participants, such as AI screening of health records, which can increase diversity in clinical trials. Ensuring racial, ethnic, and linguistic concordance in research studies, Dr. Alegría said, can also make participants feel safe and heard.

The panelists stressed the importance of returning research results to communities, providing fair compensation, and making sure that interventions don’t end when a study is over. They emphasized the need for institutional accountability and for researchers' sensitivity to historical inequities. They also highlighted the critical need to meet with policymakers to sustain successful interventions, involving communities and community-based organizations, and to commit to research practices that are inclusive of diverse populations.

Looking to the Future: Ensuring a Healthier America for All

David Williams, PhD, professor of public health and professor of African and African-American studies at the Harvard T.H. Chan School of Public Health, gave the closing keynote, arguing that all Americans should be able to enjoy better health. The U.S. spends more on medical care than any other country, yet has a lower average life expectancy than more than 60 industrialized countries.

Dr. Williams noted that a recent study showed that because of racial disparities in health, 203 Black people die prematurely every day. This isn’t just a loss of life, he said; it also amounts to $15.8 trillion in losses every year. And because of racial inequities in health, Black children are three times more likely to lose a mother by age 10, and Black adults are ten times more likely to lose a child by age 30.

Programs that create equity help everyone, he said, citing the example of the State of Delaware, which implemented colorectal cancer screening and treatment regardless of health insurance while combining it with outreach. The program eliminated racial inequities in screening and nearly eliminated the mortality difference for African-Americans. The initiative provided care to all, and a net savings of $1.5 million per year due to reduced incidence and earlier diagnosis.

Dr. Williams said that we need to reduce implicit bias in care, while noting that the evidence shows short anti-bias interventions don’t always reduce bias. Patricia Devine, PhD, professor of psychology at the University of Wisconsin-Madison, developed a 12-week program that teaches providers multiple strategies to reduce bias, and initial research shows that it works.

Dr. Williams also emphasized the importance of diversifying the healthcare workforce. A study from Northern California gave African-American males a coupon to go to a nearby hospital for screening. Once at the hospital, they were randomly assigned to a doctor of their own race or another doctor. Men who saw a doctor of their own race were more likely to talk about other health problems, get screened for diabetes, receive the flu vaccine, and be screened for cholesterol. Additionally, studies show that counties with more Black primary care providers have higher life expectancy among Black residents.

“Most Americans are unaware that studies show racial inequities in health even exist. We need to pay attention to how we talk and frame the policy solutions,” Dr. Williams said. “We cannot be silent…we need to redouble efforts to work together to build a healthier America for all.”

The New York Academy of Sciences hosts a diverse array of events year-round. Check out our upcoming events.

Explore STEM Careers with the Academy

With our national and global economy increasingly powered by STEM, it’s crucial to offer opportunities to explore the careers available in these fields.

Published November 27, 2024

By Zamara Choudhary
Program Manager, Education

A recent study, titled STEM and the American Workforce, found that two-thirds of workers in the United States are employed in STEM-related occupations. The analysis took an inclusive view of STEM, accounting for all occupations that contribute to STEM-related work regardless of educational attainment. Altogether, this group accounts for a staggering 69% of U.S. GDP and contributes $2.3 trillion in annual federal tax revenue.

STEM powers our economy, and the number of these jobs is growing faster than the workforce can fill them. Global society relies on rapidly developing technologies, and there is consistent demand for innovation and collaboration across continents. As a result, the U.S. must “develop adequate talent in science, technology, engineering, and mathematics (STEM) fields to ensure economic strength, security, global competitiveness, and environmental health,” according to the U.S. Chamber of Commerce Foundation.

To support this goal, this fall, The New York Academy of Sciences (the Academy) launched a year-long virtual series called Chat with Experts: Career Explorer, which explores the variety of careers an individual can pursue with a STEM degree or background. Each month on a select Thursday, a STEM professional gives a presentation about their background, career path, and current work, followed by questions from the audience. Featured speakers work at organizations including Pfizer, the City College of New York, the New York Hall of Science, the Broad Institute at Harvard and MIT, Noven Pharmaceuticals, the Space Telescope Science Institute, and more.

There are so many paths to STEM. Join the Academy and explore some of the possibilities pursuing a career in STEM can offer. Learn more about and register for Chat with Experts: Career Explorer.

From Neural Networks to Reinforcement Learning to Game Theory

Academics and industry experts shared their latest research and the broader potential of AI during The New York Academy of Sciences’ 2024 Machine Learning Symposium.

Published November 14, 2024

By Nick Fetty
Digital Content Manager

Pin-Yu Chen, PhD, a principal research scientist at IBM Research, presents during the Machine Learning Symposium at the New York Academy of Medicine on Oct. 18, 2024. Photo by Nick Fetty/The New York Academy of Sciences.

The New York Academy of Sciences (the Academy) hosted the 15th Annual Machine Learning Symposium at the New York Academy of Medicine on October 18, 2024. This year’s event, sponsored by Google Research and Cubist Systematic Strategies, included keynote addresses from leading experts, spotlight talks from graduate students and tech entrepreneurs, and opportunities for networking.

Exploring and Mitigating Safety Risks in Large Language Models and Generative AI

Pin-Yu Chen, PhD, a principal research scientist at IBM Research, opened the symposium with a keynote lecture about his work examining adversarial machine learning of neural networks for robustness and safety.

Pin-Yu Chen, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

Dr. Chen presented the limitations and safety challenges facing researchers working on foundation models and generative AI. Foundation models “mark a new era of machine learning,” according to Dr. Chen: trained on data sources such as text, images, and speech, they are then adapted to perform tasks ranging from answering questions to object recognition. ChatGPT is an example of a foundation model.

“The good thing about foundation models is now you don’t have to worry about what task you want to solve,” said Dr. Chen. “You can spend more effort and resources to train a universal foundation model and fine-tune the variety of the downstream tasks that you want to solve.”

While a foundation model can be viewed as a “one for all” solution, according to Dr. Chen, generative AI sits at the other end of the spectrum and takes an “all for more” approach. Once a generative AI model is effectively trained on a diverse and representative dataset, it can be expected to generate reliable outputs. Text-to-image and text-to-video platforms are two examples.

Dr. Chen’s talk also brought in examples of government action taken in the United States and in European Union countries to regulate AI. He also discussed “hallucinations” and other bugs occurring with current AI systems, and how these issues can be further studied.

“Lots of people talk about AGI as artificial general intelligence. My view is hopefully one day AGI will mean artificial good intelligence,” Dr. Chen said in closing.

Morning Short Talks

The morning session also included a series of five-minute talks delivered by early career scientists:

  • CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training
    David Brandfonbrener, PhD, Harvard University
  • On the Benefits of Rank in Attention Layers
    Noah Amsel, BS, Courant Institute of Mathematical Sciences
  • A Distributed Computing Lens on Transformers and State-Space Models
    Clayton Sanford, PhD, Google Research
  • Efficient Stagewise Pretraining via Progressive Subnetworks
    Abhishek Panigrahi, Bachelor of Technology, Princeton University
  • MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences
    Souradip Chakraborty, PhD, University of Maryland

Revisiting the Exploration-Exploitation Tradeoff in the Era of Generative Sequence Modeling

Daniel Russo, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

Daniel Russo, PhD, the Philip H. Geier Jr. Associate Professor of Business at Columbia University, delivered a keynote about reinforcement learning. This field combines statistical machine learning with online decision-making. Prof. Russo covered the work that has taken place in his lab over the past year.

He pointed out that today, “humans have deep and recurring interactions with digital services that are powered through versions of AI.” This includes everything from platforms for dating and freelance work, to entertainment like Spotify and social media, to highly utilitarian applications such as healthcare and education.

“The thing I deeply believe is that decision making among humans involves information gathering,” said Prof. Russo. “It involves understanding what you don’t know about the world and figuring out how to resolve it.”

He said medical doctors follow a similar process as they assess what might be affecting a patient, then they decide what tests are needed to better diagnose the issue. MDs must weigh the costs versus the benefits. Prof. Russo pointed out that in their current state, it’s difficult to design machine learning agents to effectively make these assessments.

He then discussed major advancements in the field that have occurred over the past decade and did a deep dive into his work on generative modeling. Prof. Russo closed his talk by emphasizing the difficulty of quantifying uncertainty in neural networks, despite his desire to be able to program them for decision-making.

“I think what this [research] is, is the start of something. Definitely not the end,” he said. “I think there’s a lot of interesting ideas here, so I hope that in the years to come this all bears out.”

Award-Winning Research

Researchers, ranging from high schoolers to industry professionals, shared their projects and work with colleagues during the popular poster session. Graduate students, postdocs, and industry professionals delivered a series of spotlight talks. Conference organizers assessed the work and presented awards to the most outstanding researchers. Awardees include:

Posters:

  • Aleksandrs Slivkins, PhD, Microsoft Research NYC (his student, Kiarash Banihashem, presented on his behalf)
  • Aditya Somasundaram, Bachelor of Technology, Columbia University
  • R. Teal Witter, BA, New York University

Spotlight Talks:

  • Noah Amsel, BS, Courant Institute of Mathematical Sciences
  • Claudio Gentile, PhD, Google
  • Anqi Mao, PhD, Courant Institute of Mathematical Sciences
  • Tamalika Mukherjee, PhD, Columbia University
  • Clayton Sanford, PhD, Google Research
  • Yutao Zhong, PhD, Courant Institute of Mathematical Sciences
The Spotlight talk award winners. From left: Yutao Zhong, PhD; Anqi Mao, PhD; Tamalika Mukherjee, PhD; Corinna Cortes, PhD (Scientific Organizing Committee); Claudio Gentile, PhD; Noah Amsel, BS; and Clayton Sanford, PhD.

Playing Games with Learning Agents

Jon Schneider, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

To start the afternoon sessions, Jon Schneider, PhD, from Google Research New York, shared a keynote covering his research at the intersection of game theory and the theory of online learning.

“People increasingly now are offloading their decisions to whatever you want to call it; AI models, learning algorithms, automated agents,” said Dr. Schneider. “So, it’s increasingly important to design good learning algorithms that are capable of making good decisions for us.”

Dr. Schneider’s research centers on decision-making in strategic environments, in both zero-sum games (rock-paper-scissors, chess, Go, StarCraft) and general-sum games. He shared some examples of zero-sum games serving as success stories for the theories of online learning and game theory. In this realm, researchers have observed “tight connections” between economic theory and the theory of learning, finding practical applications for these theoretical concepts.
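The zero-sum success story can be sketched with a classic no-regret method, multiplicative weights, in self-play on rock-paper-scissors. This is a minimal illustration of the general online-learning result, not Dr. Schneider's own construction: two learners who each repeatedly upweight the actions that would have done well against the opponent's current mix find that their time-averaged strategies approach the game's unique equilibrium of playing each move one third of the time.

```python
# Row player's payoff matrix for rock-paper-scissors (a zero-sum game).
# Indices: 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def self_play(rounds=5000, eta=0.015):
    """Two multiplicative-weights learners playing each other.

    Each player keeps one weight per action, plays the normalized
    weights as a mixed strategy, and multiplies each weight by
    (1 + eta * expected payoff of that action against the opponent).
    Returns player 1's time-averaged strategy.
    """
    w1, w2 = [2.0, 1.0, 1.0], [1.0, 1.0, 1.0]  # asymmetric start
    avg = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        p1 = [w / sum(w1) for w in w1]
        p2 = [w / sum(w2) for w in w2]
        for a in range(3):
            avg[a] += p1[a] / rounds
        # Expected payoff of each pure action against the opponent's mix.
        u1 = [sum(PAYOFF[a][b] * p2[b] for b in range(3)) for a in range(3)]
        u2 = [sum(-PAYOFF[a][b] * p1[a] for a in range(3)) for b in range(3)]
        w1 = [w1[a] * (1.0 + eta * u1[a]) for a in range(3)]
        w2 = [w2[b] * (1.0 + eta * u2[b]) for b in range(3)]
    return avg  # each entry approaches 1/3
```

The per-round strategies keep cycling, but the averages settle near the minimax solution; this averaging guarantee for no-regret play in zero-sum games is one of the "tight connections" between learning theory and game theory mentioned above.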

“Thinking about these convex objects, these menus of learning algorithms, is a powerful technique for understanding questions in this space. And there’s a lot of open questions about swap regret and the manipulability of learning algorithms that I think are still waiting to be explored,” Dr. Schneider said in closing.

Afternoon Short Talks

Short talks in the afternoon by early career scientists covered a range of topics:

  • Improved Bounds for Learning with Label Proportions
    Claudio Gentile, PhD, Google
  • CANDOR: Counterfactual ANnotated DOubly Robust Off-Policy Evaluation
    Aishwarya Mandyam, MS, Stanford University
  • Cardinality-Aware Set Prediction and Top-k Classification
    Anqi Mao, PhD, Courant Institute of Mathematical Sciences
  • Cross-Entropy Loss Functions: Theoretical Analysis and Applications
    Yutao Zhong, PhD, Courant Institute of Mathematical Sciences
  • Differentially Private Clustering in Data Streams
    Tamalika Mukherjee, PhD, Columbia University

Towards Generative AI Security – An Interplay of Stress-Testing and Alignment

Furong Huang, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

The event concluded with a keynote talk from Furong Huang, PhD, an associate professor of computer science at the University of Maryland. She recalled attending the Academy’s Machine Learning symposium in 2017. She was a postdoctoral researcher for Microsoft Research at the time, and had the opportunity to give a spotlight talk and share a poster. But she said she dreamt of one day giving a keynote presentation at this impactful conference.

“It took me eight years, but now I can say I’m back on the stage as a keynote speaker. Just a little tip for my students,” said Prof. Huang, which was met by applause from those in attendance.

Her talk touched on large language models (LLMs) like ChatGPT. While other popular programs like Spotify and Instagram took 150 days and 75 days, respectively, to gain one million users, ChatGPT was able to achieve this benchmark in just five days. Furthermore, Prof. Huang pointed out the ubiquity of AI in society, citing data from the World Economic Forum, which suggests that 34% of business products are produced using AI, or augmented by AI algorithms.

AI and Public Trust

Despite the ubiquity of the technology (or perhaps because of it), she pointed out that public trust in AI is lacking. Polling shows a strong desire among Americans to make AI safe and secure. For public trust to be gained, she explained, LLMs and visual language models (VLMs) need to be better calibrated to avoid behavioral hallucinations, which occur when the AI misreads situations and infers behaviors that aren’t actually occurring. Prof. Huang concluded by emphasizing the utility of stress-testing when developing AI systems.

“We use stress-testing to figure out the vulnerabilities, then we want to patch them. So that’s where alignment comes into play. Using the data we got from stress-testing, we can do training time and test time alignment to make sure the model is safe,” Prof. Huang concluded, adding that it may be necessary to conduct another round of stress-testing after a system is realigned to further ensure safety.

Want to watch these talks in their entirety? The video is on-demand for Academy members. Sign up today if you aren’t already part of our impactful network.

Spring Soirée

April 22, 2025 | 6:00 PM ET

The University Club of New York | One West 54th Street, New York, NY 10019

Reception: 6:00 PM
Program & Dinner: 7:00 PM
Dress Code: Festive or Business Attire (jacket and collared shirt for men required)

Join us for the Academy’s premier fundraising event of the year, an unforgettable evening of innovation and discovery at our Spring Soirée, hosted by Academy President and CEO Nicholas Dirks.

Together, we will celebrate the exceptional achievements of accomplished figures who have expanded the frontiers of knowledge and are shaping the future of science.

We will also honor the accomplishments of STEM Teacher of the Year Brittany Beck, biology teacher at the High School of Telecommunication Arts and Technology; STEM Mentor of the Year Megan C. Henriquez of the CUNY Graduate Center; five Emerging Student Researchers in our Education Programs; and outstanding contributors from our Board of Governors.

The Soirée promises to be an inspiring evening, filled with engaging conversations and captivating stories of scientific triumph. This event will offer a wonderful opportunity for you to network with scientific leaders from companies, universities and research institutes, and philanthropic organizations.

Honorees

Dr. Albert Bourla
Chairman & CEO, Pfizer
Yann LeCun
VP & Chief AI Scientist, Meta
Janet Tobias
Emmy Award-Winning Director, Writer and Producer
Jared Lipworth
Head of Studio,
HHMI Tangled Bank Studios 

Dinner Chair

Chandrika K. Tandon
Academy Board Member,
Grammy Award-Winning Artist,
and Humanitarian

Sponsors

Underwriter


Mission Partner


Benefactors

Chandrika K. Tandon


Patrons

HHMI Tangled Bank Studios

Thomas C. Franco


Soirée Partner

Laurie M. Tisch Illumination Fund

Friends of the Soirée

AKA Strategy

Brooklyn Botanic Garden

City University of New York Advanced Science Research Center

Club Quarters

Eisner Amper

Liberty Science Center

Mercer Labs

Mushett Family Foundation

National Museum of Mathematics

New York Botanical Garden

US ORT Operations

Deepfakes and Democracy in the Digital Age


Combating misinformation in the 2024 U.S. Presidential Election is crucial to ensuring democracy. It falls to science to address this challenge.

Published October 8, 2024

By Nick Fetty
Digital Content Manager

From left: Nicholas Dirks; Joshua Tucker, PhD; Maya Kornberg, PhD; and Luciano Floridi, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

The complexities of artificial intelligence were discussed during the Deepfakes and Democracy in the Age of AI event, presented by The New York Academy of Sciences and Cure on September 17, 2024.

Seema Kumar, Chief Executive Officer of Cure, a healthcare innovation campus in New York City, set the stage for the discussion by emphasizing the impact of AI on healthcare. She cited a survey of nearly 2,000 physicians who expressed concern about changes in behavior they’ve observed in patients as we move into a more digital age.

Nicholas Dirks. Photo by Nick Fetty/The New York Academy of Sciences.

“Patients are coming to them with misinformation and they’re not trusting physicians when physicians correct them,” said Kumar, who also serves on the Academy’s Board of Governors. “In healthcare, too, this is becoming an issue we have to tackle and address.”

Nicholas Dirks, president and CEO of the Academy, introduced the panel of experts:

  • Luciano Floridi, PhD: Founding Director of the Digital Ethics Center and Professor in the Practice in the Cognitive Science Program at Yale University. His expertise covers the ethics and philosophy of AI.
  • Maya Kornberg, PhD: Senior Research Fellow and Manager, Elections & Government, at NYU Law’s Brennan Center for Justice. She leads work on information and misinformation in politics, Congress, and political violence.
  • Joshua Tucker, PhD: Professor of Politics, Director of the Jordan Center for the Advanced Study of Russia, and Co-Director of the NYU Center for Social Media and Politics. His recent work has focused on social media and politics.

The Role of Deepfakes

Professor Tucker suggested that research can be an effective way to better protect information integrity.

“The question is, and I don’t know the answer to this yet, but this is something we want to get at with research,” he said. “Is there a meaningful difference across modes of communication?” adding that modes include text, images, and video.

Professor Tucker argued that the most impactful video so far in this U.S. election cycle wasn’t a deepfake at all. Instead, it was the unedited footage of President Joe Biden’s performance in the debate on June 27, 2024.

Not A New Phenomenon

Luciano Floridi, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

Dr. Kornberg agreed that misinformation is not a new phenomenon. However, she recognizes that because of the often-realistic nature of deepfakes, it may be more difficult for people today to differentiate fact from fiction. The lack of regulation in the tech sector in this regard further complicates the issue. She offered the example of an AI-generated phone call impersonating an election official, sent to misinform potential voters.

“It can be difficult to determine if this is a real call or a fake call,” said Dr. Kornberg. “It’s extremely important, I think, as a society for us to be doubling down in civic listening and civic training programs.”

The ease of producing realistic AI-generated content is also contributing to the issue, according to Professor Floridi. He cautioned that media can become so oversaturated with this content that consumers begin questioning the legitimacy of everything.

Professor Floridi cited a research project that he and his team are currently working on with the Wikimedia Foundation. The team hopes to release their findings prior to the U.S. election, but at this point, they have not observed anything particularly worrisome in terms of deepfakes.

Maya Kornberg, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

“What we do see is call it ‘shallowfakes.’ The tiny little change [to otherwise authentic content],” Professor Floridi said. He added that these “shallowfakes” can be even more dangerous than deepfakes because the slight manipulations are generally less obvious.

The Issue of Credibility

Dirks then shifted the focus of the conversation to credibility. With first-order effects, a person sees something untrue, then forms an opinion based on that misinformation. Dirks invited Professor Tucker to talk about his research on second-order effects, in which the political consequences can be more salient and destabilizing.

Professor Tucker and his lab studied Russian misinformation on Twitter during the 2016 U.S. presidential election. Counter to popular belief, however, the researchers did not observe a significant correlation indicating that exposure to such misinformation influenced American voters’ opinions.

“Yet, we spent years talking about how the Russians were able to change the outcomes of the election. It was a convenient narrative,” said Professor Tucker. “But it worried me. And I wondered for a long time after this, did that sow the seeds of doubt in people’s minds?”

With the current hype surrounding generative AI as we enter the 2024 election, Professor Tucker expressed concern that it could become a new tool to further spread misinformation.

Combatting Voter Suppression

Dr. Kornberg and her colleagues at the Brennan Center study the impact of voter suppression efforts. The researchers are studying ways to debunk, or “pre-bunk,” certain misconceptions that may be on the minds of voters. She said that purveyors of misinformation deliberately focus on simple themes like malfunctioning voter machines, distrust of election officials, and dead people voting.

Joshua Tucker, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

“We saw that in 2020. We saw that in 2022. There’s a lot of reason to believe we’re going to see that in 2024,” said Dr. Kornberg. “So, we’re working to proactively get resources out to election administrators [so they can better counteract these threats].”

She cited the role of AI in further amplifying misinformation, which will make deciphering fact from fiction even more difficult for the average voter. Dr. Kornberg and her colleagues aim to get ahead of these issues by offering training and other resources for election administrators. She also advocates for experts with technical expertise in AI to advise local election official offices, municipalities, state legislatures, and even Congress.

“There’s a lot of demystifying for the workers themselves that we’re trying to do with our trainings about how to deal with AI,” said Dr. Kornberg. “This will help us to come up with some intelligent and timely solutions about how to combat this.”

Academy members can access an on-demand video recording of the full conversation.


Beyond “Lost” Cities: Archaeology’s Digital Revolution and the Promises and Challenges of AI

November 4, 2024 | 6:00 PM – 8:30 PM ET

115 Broadway, 8th Floor, New York, NY 10006
or join virtually by Zoom

6:00 PM – 7:00 PM: Dinner in the pantry room ($20 suggested, free for students)
7:00 PM – 8:30 PM: Presentation and Q&A in the auditorium

Speaker: Parker VanValkenburgh (Associate Professor of Anthropology, Archaeology, and the Ancient World, Brown University)
Discussant: Terence N. D’Altroy (Loubat Professor of American Archaeology, Columbia University)

Using remote sensing data, including LiDAR (light detection and ranging) and high-resolution satellite imagery, archaeologists are working across scales and mapping sites at levels of detail that once seemed impossible. In parallel, deep learning models are transforming our ability to analyze the resulting large-scale datasets. However, these new developments also present practical and ethical challenges. In this talk, Dr. VanValkenburgh will draw on his own multi-faceted research projects in Peru to discuss how digital approaches are expanding archaeology’s reach and scientific impact, while also changing the way that the field works with stakeholders and publics.

Speakers

Parker VanValkenburgh
Associate Professor of Anthropology, Archaeology, and the Ancient World
Brown University
Terence N. D’Altroy
Loubat Professor of American Archaeology
Columbia University

Pricing

All: Free

About the Series

Since 1877, the Anthropology Section of The New York Academy of Sciences has served as a meeting place for scholars in the Greater New York area. The section strives to be a progressive voice within the anthropological community and to contribute innovative perspectives on the human condition nationally and internationally. Learn more and view other events in the Anthropology Section series.

Ethics and Equity: Navigating Inclusive Excellence in Healthcare and Health Research

March 25, 2025 | 8:35 AM – 6:10 PM ET

Join us on March 25, 2025, for Ethics and Equity: Navigating Inclusive Excellence in Healthcare and Health Research. This one-day conference will explore critical issues in creating a more inclusive and equitable healthcare and research ecosystem.

Critical discussions will focus on building trust through representation by engaging communities and refining research practices, developing ethical and equitable strategies to diversify the biomedical research workforce, and ensuring fairness and ethical rigor in clinical trials.

This event provides a collaborative platform among speakers and panelists across academia, industry, government, non-profits, and more to exchange knowledge on ethical responsibilities within healthcare and biomedical research. Don’t miss this opportunity to connect with leaders in the field.

Therapeutic Approaches to Protein Misfolding in Neurodegenerative Disease

April 28, 2025 – April 29, 2025

Maintenance of proteostasis, an interconnected network of cellular processes that govern protein synthesis, folding, and degradation, is critical for cellular health. Imbalances in proteostasis are closely associated with aging and prevalent human neurodegenerative diseases, such as Alzheimer’s Disease and Parkinson’s Disease. This symposium will bring together leading experts in proteostasis biology, neurodegenerative disease research, clinical practice, pharmaceutical development, and investors to discuss the latest advancements in the field. The meeting will serve as a dynamic platform for stimulating discussions on translating current knowledge of proteostasis into innovative therapeutic strategies to address protein misfolding in neurodegenerative diseases.

Sponsors

Presented By

Regulated Degradation of RNA and Proteins: The Dr. Paul Janssen Award Symposium

January 30, 2025 | 2:00 PM – 5:00 PM ET

Lynne Maquat, PhD, of the University of Rochester and Alexander Varshavsky, PhD, of the California Institute of Technology have been awarded the prestigious 2024 Dr. Paul Janssen Award for their fundamental discoveries in the regulated degradation of RNAs and proteins.

Dr. Maquat’s research has unveiled how cells selectively destroy flawed messenger RNA molecules to prevent the production of abnormal proteins. Dr. Varshavsky’s work uncovered key aspects of the ubiquitin system, including the first degradation signals (degrons) in short-lived proteins. Together, their collective discoveries have profoundly advanced our understanding of cellular mechanisms, opening avenues for new treatments for many human diseases such as cystic fibrosis, cancer, neurodegeneration, and disorders of immunity.

This hybrid symposium will celebrate their pioneering work. Symposium registration is complimentary. However, pre-registration is required for both in-person and virtual participation.

Sponsors

Presented By

Cancer Metabolism and Signaling in the Tumor Microenvironment

April 8, 2025 | 9:00 AM – 5:00 PM ET

Join leading experts at the forefront of cancer metabolism research for a one-day event on April 8, 2025, in New York City. The New York Academy of Sciences invites you to “Cancer Metabolism and Signaling in the Tumor Microenvironment” where top basic, translational, and clinical scientists will explore the intersection between cell signaling and metabolism.

Modern research in cancer metabolism and signaling has uncovered complex metabolite-signaling networks in cancer. These networks support tumor progression by enabling cell growth, influencing stress responses, restructuring the tumor microenvironment, aiding immune evasion, and promoting metastasis. Many of these oncogenic metabolic changes are enriched in tumors relative to normal tissue, offering promising new therapeutic targets for combating cancer.

Join us for the latest in our series of annual symposia on new developments in cancer metabolism research. This event provides a collaborative platform to exchange knowledge on how tumor cells exploit cellular signaling and metabolic pathways to support malignant growth.

Don’t miss this opportunity to connect with leaders in the field and stay at the cutting edge of oncology innovation.

Sponsors

Presented By

The New York Academy of Sciences
Cancer & Signaling Discussion Group

Sponsored By

Lead Supporter: Cancer & Signaling Discussion Group