115 Broadway, 8th Floor, New York, NY 10006 or join virtually by Zoom
Speakers
Dr. Anand Pandian
Krieger-Eisenhower Professor of Anthropology at Johns Hopkins University
Pricing
All: Free
About the Series
Since 1877, the Anthropology Section of The New York Academy of Sciences has served as a meeting place for scholars in the Greater New York area. The section strives to be a progressive voice within the anthropological community and to contribute innovative perspectives on the human condition nationally and internationally. Learn more and view other events in the Anthropology Section series.
Winner of the Junior Academy Challenge – Fall 2024 “Ethical AI”
Published May 16, 2025
By Nicole Pope, Academy Education Contributor
Sponsored by The New York Academy of Sciences
Team members: Emma L. (Team Lead) (New Jersey, United States), Shubh J. (California, United States), Darren C. (New York, United States), Aradhana S. (Pennsylvania, United States), Shreshtha B. (Kuwait), Jemali D. (New York, United States)
Mentor: Abdul Rauf (Pakistan)
Artificial Intelligence (AI) is ever more present in our lives and affects decision-making in government agencies, corporations, and small businesses. While the technology brings numerous opportunities to enhance productivity and push the boundaries of research, predictive AI models are trained on data sets of historical data. As a result, they risk perpetuating and amplifying bias, putting groups that have traditionally been marginalized and underrepresented at a disadvantage.
Taking up the challenge of making AI more ethical and preventing the technology from harming vulnerable and underrepresented groups, this winning United States- and Kuwait-based team sought ways to identify and correct the inherent bias contained in large language models (LLMs). “[The Ethical AI Innovation Challenge] helped me realize the true impact of bias in our society today, especially as predictive AI devices continue to expand their usage and importance,” acknowledged team lead Emma, from New Jersey. “As we transition into a future of increased AI utilization, it becomes all the more important that the AI being used is ethical and doesn’t place anyone at an unjustified disadvantage.”
The team conducted a thorough literature review and interviewed AI experts before devising their solution. In the course of their research, they came across real-life examples of the adverse effects of AI bias, such as an AI healthcare tool that recommended further treatment for white patients, but not for patients of color with the same ailments; a hiring model that contained gender bias, limiting opportunities for women; and a tool used to predict recidivism that incorrectly classified Black defendants as “high-risk” at nearly twice the rate it did for white defendants.
AI Bias
Team member Shreshtha, from Kuwait, said she was aware of AI bias, but “through each article I read, each interview I conducted, and each conversation I had with my teammates, my eyes opened to the topic further. This made me even keener on trying to find a solution to the issue.” She added that, as the only team member based outside of the USA, “I ended up learning a lot from my teammates and their style of approaching a problem. We all may have had the same endpoint but we all had different routes in achieving our goal.”
The students came together regularly across time zones for intense working sessions to come up with a workable solution, with support from their mentor. “While working on this, I learned that my team shared one quality in common – that we are all committed to making a change,” explained teammate Shubh. “We had all unique skills, be it management, coding, design, etc., but we collaborated to form a sustainable solution that can be used by all.” In the end, the team decided to develop a customizable add-on tool that can be embedded in Google Sheets, a commonly used spreadsheet application.
The students wanted their tool, developed with Python programming, to provide cutting-edge bias detection while also being user friendly. “A key takeaway for me was realizing that addressing AI bias requires a balanced approach that combines technical fixes with ethical considerations—augmenting datasets while engaging directly with underrepresented groups,” stated New York-based teammate Darren, who initially researched and produced a survey while his teammates worked on an algorithm that could identify potential bias within a dataset.
More Ethical AI
The resulting add-on, which can be modified to fit any set of training data, uses statistical analysis to detect whether AI training data is likely to be biased. The challenge participants also paired the add-on with an iOS app, designed with UI/UX principles and built in Swift, that gives users suggestions on how to customize the add-on for their specific data sets. The students were able to test their tool on a job applicant dataset provided by a company that chose to remain anonymous.
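For readers curious about what such a statistical check can look like, here is a minimal sketch in Python (the language the team worked in), not the team’s actual code: it applies one common test, the “four-fifths” disparate-impact ratio, to a small table with hypothetical placeholder column names.

```python
# Minimal illustrative sketch (not the team's actual add-on) of a statistical
# bias check on tabular training data: the "four-fifths" disparate-impact rule.
# The column names ("gender", "hired") are hypothetical placeholders.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Return the positive-outcome rate per group and the ratio of the lowest
    rate to the highest. Values well below ~0.8 are a common red flag."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()
    return {"rates_by_group": rates.to_dict(), "impact_ratio": float(ratio)}

if __name__ == "__main__":
    # Toy stand-in for a job-applicant sheet exported from a spreadsheet.
    data = pd.DataFrame({
        "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
        "hired":  [0,   1,   0,   1,   1,   0,   1,   0],
    })
    print(disparate_impact_ratio(data, "gender", "hired"))
```

On a real export of hiring data, an impact ratio well below 0.8 would flag the dataset for closer review before it is used to train a model; this is only one of many possible tests.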
“By using an actual dataset from a company and analyzing it through our add-on, I was shocked to see that there could be gender bias if an AI model were trained on that dataset,” said team member Aradhana. “This experience highlighted how AI can continue societal discrimination against women.” The enterprising team members were able to refine and improve their solution further after conducting a survey and receiving feedback from 85 individuals from diverse backgrounds.
Members of the winning team believe addressing AI bias is critical to mitigate the risk of adverse impacts and build trust in the technology. They hope their solution will spearhead efforts to address bias on a larger scale and promote future, more ethical AI. Summing up, team member Jemali explained that the project “significantly deepened my insights into the implications of AI bias and the pivotal role that we, as innovators, play in ensuring technology benefits all individuals.”
In the final installment of this year’s distinguished lecture series hosted by The New York Academy of Sciences’ Anthropology Section, an expert panel discussed the intersection of anthropology, technology, and ethics.
Published May 2, 2025
By Brooke Elliott, Education Communications Intern
Webb Keane, PhD, presents during the From Tools to Metahumans: Talking to AI event at The New York Academy of Sciences on April 7, 2025.
Keynote speaker Webb Keane, PhD, the George Herbert Mead Distinguished Professor of Anthropology at the University of Michigan and a leading voice in semiotics, media, and ethics, centered his April 7th talk around his new book Animals, Robots, Gods: Adventures in the Moral Imagination. The book moves beyond human communities and explores the relational ethics that arise from human interaction with non-humans and near-humans, including artificial intelligence.
Prof. Keane opened his presentation by posing the provocative question: What defines a human?
Traditionally, the answer has been humankind’s capacity for language, tool-making, and moral reasoning. But with the rise of generative AI and large language models, all three are under pressure, according to Prof. Keane.
AI as a Metahuman
Generative AI now challenges humankind’s unique position as language users, introducing tools that seem to “escape the grasp” of their creators. These AI systems don’t merely reproduce human intelligence; they imitate its outputs.
Prof. Keane defines a “metahuman” as “someone or something with superior powers, but lacking a body or particular social location.” These are beings that humans have always interacted with, such as gods, spirits, and, now, robots and androids. These entities possess knowledge, power, and moral authority beyond the human.
Religious communities have taken to AI in surprisingly enthusiastic ways, Prof. Keane pointed out. Tools like Gita GPT, designed to simulate answers from Krishna, a major deity in Hinduism, are used for moral and spiritual guidance. AI’s “oracular affordances,” as Prof. Keane called them, allow it to function like ancient divinatory tools; they can elicit meaning, trust, and belief.
“AI reflects our fears because it is built from our language, our stories, our digital footprints,” said Prof. Keane.
The meanings we get from interactions with AI are the product of collaboration between the person and the device, just as divination, spiritual possession, and speaking in tongues once captivated our imaginations.
Omri Elisha’s Response
Responding to Prof. Keane, Omri Elisha, PhD, associate professor of anthropology at Queens College and the City University of New York Graduate Center, drew parallels with his own work on astrology. Prof. Elisha emphasized that technologies like AI and astrology translate abstract forces into moral guidance. Through symbolic systems, users interact with planetary or digital forces as if they have agency.
Prof. Elisha posed the critical question: “How is it that certain technologies and certain semiotic mediations come to be authorized to speak for transcendental sources infinitely far from the here and now?”
He also addressed society’s growing reliance on crowdsourced truth. Platforms like Google and Reddit are worshipped for their convenience, immediacy, and perceived trustworthiness, even by those who claim to be skeptical. Generations raised on the internet have come to accept the “wisdom of large numbers,” as Prof. Keane calls it.
To further support this point, Prof. Elisha cited the viral meme, “A world where AI paints and writes poems while humans perform menial, backbreaking work wasn’t the future I imagined.”
In an age of corporate personhood and surveillance capitalism, many allow branded algorithms to make decisions once left to human discretion, including immigration status, medical diagnoses, and even music recommendations. As Prof. Keane notes, “We should be scrupulous about the would-be gods who lurk behind our devices.”
Danilyn Rutherford’s Call for a Global Perspective
Danilyn Rutherford, PhD, President of The Wenner-Gren Foundation and activist with A Thousand Currents, praised Prof. Keane’s commitment to ethical nuance. Still, she challenged the limits of cultural relativism. While different societies may live by different moral codes, Dr. Rutherford argued that there’s a deeper universality in our capacity for meaning-making, even across radically different contexts.
“The point, [Keane] argues, is not simply that different ponds nurture different frogs, they nurture different relationships among critters swimming in the same puddle,” said Dr. Rutherford.
Fear, Faith, and the Future of Human Meaning
All three speakers converged on a core insight: our interactions with AI tell us more about ourselves than they do about the technology. Humans construct meaning collaboratively and endow non-humans with agency because of our innate ability to see intentions in others.
As Prof. Keane emphasized, the real question is not whether AI is sentient, but why we respond to it as if it were. He asked what that reveals about our values, our anxieties, and our longing for guidance as we continue toward an era of even greater interaction between humans and AI.
As the 2024–25 lecture series concludes, the Anthropology Section is already looking to the future. A graduate student gathering at the Margaret Mead Film Festival, which takes place May 2-4 at the American Museum of Natural History, will provide a final chance to connect this spring. This fall, the Anthropology Section will return with a new theme and speaker lineup, as well as a continued commitment to bridging anthropological insight and public dialogue.
Learn more about offerings from The New York Academy of Sciences’ Anthropology Section.
Yann LeCun, VP and Chief AI Scientist at Meta, was one of three Honorees recently recognized by The New York Academy of Sciences (the Academy) for outstanding contributions to science.
Published May 1, 2025
By Nick Fetty, Digital Content Manager
Yann LeCun (right) poses with his wife Isabelle during the Soirée.
Yann LeCun was recently recognized by The New York Academy of Sciences for his pioneering work in machine learning, computer vision, mobile robotics, and computational neuroscience. He was presented with the Academy’s inaugural Trailblazer Award during the 2025 Spring Soirée, hosted at the University Club of New York.
“His work has been instrumental in setting the terms of how we think about the uses, implications, and impact of AI in all its forms,” said Nick Dirks, President and CEO of the Academy, while introducing LeCun during the Soirée. “Yann, we’re grateful that your view has carried the day and are inspired by the boldness of your vision. A vision that has shaped the evolution of this amazing and transformative technology.”
LeCun spoke during the first installment of the Tata Series on AI & Society at the Academy in March 2024. His talk covered everything from his early work in revitalizing and advancing neural networks to the need for open sourcing AI to the limitations he sees with large language models (LLMs). He believes that sensory, as opposed to language, inputs are more effective for building better AI systems, due in part to the brain’s ability to process these inputs faster.
Yann LeCun (center) visits with Hon. Jerry Hultin, immediate past chair of The New York Academy of Sciences Board of Governors, during the Soirée.
“To build truly intelligent systems, they’d need to understand the physical world, be able to reason, plan, remember, and retrieve. The architecture of future systems that will be capable of doing this will be very different from current large language models,” he explained.
LeCun was presented with an Honorary Life Membership to the Academy during the 2024 event.
A Frenchman with a Clever Sense of Humor and Passion for Jazz
Though he is a serious computer scientist (he received the prestigious ACM Turing Award in 2018), LeCun’s wry sense of humor often comes through in his talks and on his personal website.
“French people are generally known for their utter contempt of every product of the American culture (“or lack thereof”, as my friend John Denker would say with a smile),” LeCun writes on the “Fun Stuff” section of his website. “But there are two notable exceptions to this attitude, two pure products of the American culture that the French have embraced wholeheartedly (and no, one of them is not Jerry Lewis): Jazz music, and Tex Avery cartoons.”
A fan of jazz music, LeCun considers John Coltrane’s Giant Steps and Miles Davis’s Kind of Blue among his favorite jazz albums of all time. LeCun is a musician himself and plays various woodwind instruments. He even builds his own, combining traditional wind instruments with electronic synthesizers. When he worked at Bell Labs in the 1990s, he played in an informal jazz band with some colleagues. The passion for jazz (and tech) runs in the blood of the LeCun family, as Yann’s brother Bertrand plays the bass (and works at Google in Paris).
From left: Peter Salovey, former president of Yale University and current chair of The New York Academy of Sciences Board of Governors; Yann LeCun, VP and Chief AI Scientist at Meta; and Nick Dirks, President and CEO of The New York Academy of Sciences.
“I have always been interested in jazz because I have always been intrigued by the intellectual challenge of improvising music in real time,” he writes on his website.
Humble in nature—on his website he lists himself as an ACM Turing Award Laureate, but in a parenthetical note next to it indicates “(sounds like I’m bragging, but a condition of accepting the award is to write this next to your name)” —he was nonetheless appreciative of this recent recognition and the broader power of science.
“I like jazz so I’m fond of improvising speeches,” LeCun said when he took to the stage to accept his award, adding that he didn’t use AI to write his speech. “I’ve become a public advocate of science and rationalism. It’s true that today there’s been a lot of attacks against universities, rationalism, science, and scientists. All are being vilified by our own government. We have to stand up for science.”
The past, present, and future of artificial intelligence (AI) were discussed as part of the latest installment in the Tata Knowledge Series on AI & Society.
Published April 18, 2025
By Nick Fetty, Digital Content Manager
Nick Dirks (left), President and CEO of The New York Academy of Sciences, and Alok Aggarwal, PhD, CEO and Chief Data Scientist of Scry AI. Photo by Nick Fetty/The New York Academy of Sciences.
The future implications of AI’s growth and its impact on our society were the topic of a fireside chat between renowned computer scientist Alok Aggarwal, PhD, and Nick Dirks, President and CEO of The New York Academy of Sciences (the Academy).
Dr. Aggarwal is CEO and Chief Data Scientist at Scry AI, which he founded in 2014. The company “focuses on research and advanced development (R&D) in Artificial Intelligence, Data Science, and related disciplines.” To demystify AI for lay audiences, he published the book The Fourth Industrial Revolution & 100 Years of AI (1950-2050).
In discussing the motivation for his book, Dr. Aggarwal explained how AI is part of “the Fourth Industrial Revolution” which started in 2011 and is projected to run through 2050.
He points out that the recently published book “doesn’t have a single piece of software code and almost no math.” Instead, it focuses on what AI is and what it will be: the “good, bad, and ugly.” He is also working on a follow-up book for students studying business analytics and other similar programs.
AI and the Business World
Dirks then shifted the conversation to focus on the business applications of AI. Dr. Aggarwal said he sees AI being most useful in pattern-recognition tasks.
“That pattern-recognition aspect is much faster because electrons are moving at the speed of light, unlike humans, where the ions are moving slowly,” he says. “Definitely in the long run, that pattern recognition aspect alone will make AI be extremely beneficial for humans in pretty much all areas.”
Dr. Aggarwal continued by saying “it’s not a matter of ‘if’, but ‘when’ AI is more fully embraced by society.” He compared it to public acceptance of the internet, and its associated hype, in the late 1990s.
“I think, in many ways, hype is very good…because it leads to monetary support and makes the passionate inventors even more passionate,” Dr. Aggarwal says, adding that “it will take time.”
The Challenge of Driverless Cars for AI
Dirks pointed out that Google recently reduced investments in its driverless car program. He also referenced Yann LeCun, Turing Award winner and Chief AI Scientist at Meta, who noted during another Academy fireside chat sponsored by Tata in March 2024 that driverless car technology still has much room for improvement.
Dr. Aggarwal shared that driverless car technology goes back to the late 1970s in Japan. The technology was further developed in Germany, and then at American institutions including Carnegie Mellon University and the University of California, Berkeley. Despite this effort, Dr. Aggarwal admits successfully integrating AI and driving has been a challenge. However, he pointed out several areas in which AI shows great potential.
For example, he said AI can be applied to laborious, mundane activities where humans are prone to making mistakes, such as sifting through invoices to reconcile financial records or submitting the proper documentation for a mortgage loan. AI has also shown effectiveness in preventative healthcare; Dr. Aggarwal said AI-based skin cancer detection has proven to be as accurate as a radiologist.
“A lot of the problem right now is [demonstrating] these benefits rather than just inflating the hype,” says Dr. Aggarwal. “We need to actually show that it works in disparate cases.”
Curating Accurate Training Sets
Dirks pointed out that some AI systems are informed by various sources on the internet, which have varying levels of accuracy. He asked what can be done to curate accurate training sets to develop these technologies.
Dr. Aggarwal said the issue here isn’t so much the AI as the “human mirror” effect: many of the inputs in the training sets merely reflect reality, which can be outdated, inaccurate, or biased. He used the example of countries whose data sets do not treat women and men as equals; inputs from these countries can train the AI to adopt misinformed biases about genders and their associated roles.
“It’s no different from how we train our children,” said Dr. Aggarwal.
He then referred to “the imitation game” developed by computer pioneer Alan Turing. In this exercise, a human judge poses questions and, without seeing the respondents, must determine whether each answer came from another human or from a computer. The idea was that eventually the computer would become capable enough that the judge could no longer tell the difference.
Dr. Aggarwal stressed the need for humans to be diligent and balanced in training these AI systems. Because of their strong processing power, AI systems can quickly amplify the biases, misinformation, and other negative inputs on which they were trained.
Closing Thoughts
Dirks and Dr. Aggarwal also discussed additional topics including the history of neural networks, the origin of the term “artificial intelligence,” the hype around advancements in computing in the mid-20th century, the definition of artificial general intelligence (AGI), companionship, job displacement, drug development, and more. After taking questions and comments from those in attendance, Dr. Aggarwal closed his talk by soliciting feedback from those who read his book and welcomed readers to contact him with their commentary.
This article provides a preview of the talk. Video of the full talk is available on-demand for Academy members. Sign up today if you aren’t already part of our impactful network.
This series is sponsored by Tata, a global enterprise, headquartered in India, comprising 30 companies across ten verticals. Read about other Academy events supported by Tata:
Concern about the impact of increasing screen time on mental health calls for creating “digital-free” spaces to mitigate rising levels of anxiety, depression, and social isolation.
The authors define attention sanctuaries broadly, encompassing a wide range of already existing spaces and places such as libraries, churches, museums, and school classrooms. Nationally, 77% of U.S. schools say they prohibit cellphones at school for non-academic use, according to the National Center for Education Statistics.
Drawing on the latest research and grassroots “Attention Activism”, the authors argue that the pervasive use of digital devices has led to unprecedented erosion of social and civil life, contributing to rising levels of anxiety, depression, and social isolation.
Key Findings and Recommendations:
The Attention Crisis: The article highlights the urgent need to address the harmful effects of the “attention economy,” where human attention is increasingly commodified by tech platforms through addictive design and data extraction, or “Human Fracking.”
Attention Activism: The authors introduce the concept of “attention activism,” a growing movement that seeks to resist the exploitative practices of the digital economy through education, organizing, and the creation of sanctuary spaces.
Attention Sanctuaries: The paper provides a detailed framework for establishing “attention sanctuaries”—spaces where communities can collectively cultivate and protect their attention. These sanctuaries, which can be implemented in schools, workplaces, and homes, are designed to foster meaningful human connection and reflection, free from the distractions of digital devices.
The authors emphasize that addressing the attention crisis requires a multi-pronged approach, combining grassroots activism, policy interventions, and community-driven initiatives. They argue that attention sanctuaries offer a practical and scalable solution to mitigate the negative effects of digital overload, promoting mental well-being and social cohesion.
“This is not just about limiting screen time,” says Burnett. “It’s about a participatory movement to create spaces where we can reconnect with ourselves and each other, free from the constant pull of digital distractions. Attention sanctuaries are a way to reclaim our humanity in an increasingly fragmented world.”
Eve Mitchell adds, “Attention activism is about more than individual self-control—it’s about collective action. By working together to create these sanctuaries, we can build a culture that values and protects our attention as an essential aspect of our individual and shared lives.”
The authors call for increased collaboration between researchers, policymakers, and community leaders to develop strategies that address the root causes of the attention crisis.
Abstract
While scientific consensus on the nature and extent of the harms attributable to increased use of networked screen media remains elusive, widespread expressions of acute concern among first-responders to the commodified-attention crisis (teachers, therapists, caregivers) should not be overlooked. This paper reviews a series of emergent strategies of collective attention activism, rooted in social practices of community action, deliberation, and consensus-building, and aimed at the creation of novel sanctuaries for the cultivation of new shared norms and habits regarding digital devices. Evidence suggests that such attention sanctuaries (and the formalization of the conventions for convening such spaces) will play an increasingly important role in addressing/mitigating the public health-and-welfare dimensions of societal-scale digital platforms. A copy of the full paper may be downloaded here.
About Annals of the New York Academy of Sciences
Annals of the New York Academy of Sciences is a 200+ year-old multidisciplinary journal publishing research in all areas of science. Each issue advances our understanding of the natural, social, and physical world by presenting novel and thought-provoking original research, reviews, and expert opinions. We encourage cross-disciplinary submissions, with particular interest in neuroscience, organismal biology, material sciences, cell and molecular biology, psychology, medicine, quantum science, renewable energy, and climate science. Please visit us online at www.nyas.org.
About the Authors
D. Graham Burnett is a professor at Princeton University and a leading voice in the study of attention and its role in contemporary society. Eve Mitchell is a psychotherapist and a facilitator at the Strother School of Radical Attention, an innovative institution dedicated to exploring the science, history, and practice of attention.
Where do you stand on AI—optimist or skeptic? A high-stakes conversation on AI’s promise, risks, and the global race for leadership in this game-changing technology.
LinkedIn co-founder and bestselling author Reid Hoffman (right) in conversation with former Secretary of State Hillary Rodham Clinton (left) at 92NY in New York City on January 28, 2025, discussing his new book Superagency: What Could Possibly Go Right with Our AI Future.
The emergence of artificial intelligence (AI) is reshaping nearly every aspect of human life. From medicine to transportation, education to industry, AI is not just a tool; it’s an evolving partner in human progress. But what does this transformation mean for individuals, society, and the growing geopolitical tensions between nations vying for dominance in AI technology? These questions were at the heart of a recent conversation between former US Secretary of State Hillary Rodham Clinton and Reid Hoffman, co-founder of LinkedIn and co-author—with tech writer Greg Beato—of the new book Superagency: What Could Possibly Go Right with Our AI Future.
At its core, Superagency presents an optimistic vision of AI as a general-purpose technology that amplifies human agency. This concept was brought to life for the evening’s audience through a video appearance by Reid AI—Hoffman’s digital twin—who, with the enthusiasm of a tireless press agent, championed Superagency and its vision. “Superagency describes not only how we as individuals get these superpowers from technology, but also how we benefit from a society in which millions of others have these superpowers.” This perspective challenges alarmist narratives around AI, instead framing it as a transformative force, much like past technological revolutions such as the printing press and the automobile.
AI as an Opportunity: Learning from the Past
History teaches us that every major technological advancement—from the steam engine to the internet—has been met with both excitement and trepidation. AI is no different. Hoffman pointed out that skepticism surrounding AI today mirrors historical anxieties: “When, for example, you go back to the printing press, the dialogue around the printing press was actually very similar to the dialogue around AI. It was things like ‘This will spread a lot of misinformation. This will destroy our institutions and our ability to discern truth.’” And yet, despite the upheavals they caused, these innovations propelled society forward. The challenge, Hoffman argues, is to navigate AI’s development thoughtfully, ensuring its benefits reach the many rather than the few.
The AI Spectrum: Doomers, Gloomers, Bloomers, and Zoomers
In Superagency, Hoffman describes a spectrum of attitudes toward AI. On one end are the Doomers, who believe AI is an existential threat that could bring about catastrophic consequences. Next are the Gloomers, who are skeptical and advocate for stringent regulatory controls but stop short of outright rejection. The Zoomers, by contrast, are those who champion rapid AI expansion without much concern for potential risks and are often anti-regulation. Finally, Hoffman identifies himself among the Bloomers, a group that believes AI, when properly guided by intelligent risk management, can be an overwhelmingly positive force for humanity. “One of the things that we argue for, as part of the case for optimism—being Bloomers—is to say you don’t expect perfection in the beginning,” Hoffman explained.
During their conversation, Hoffman asked Secretary Clinton where she saw herself on this spectrum. Her response was thoughtful: “Well, I think I’m a Boomer who is somewhere between a Bloomer and a Gloomer, because on the one hand, I really appreciate the optimism. I find that very attractive. We should learn as we do, learn as we go, make adjustments…Although I do worry about all the people who don’t see the curb and drive off over the cliff.” Her remark underscores the need for both enthusiasm and caution—embracing AI’s potential while ensuring that adequate safeguards are in place to prevent harm. Clinton continued, “We know a lot now. We don’t know anywhere near what we’re going to know, and maybe there are some kinds of guardrails that we would want without losing the optimism, because I want this country to dominate AI.”
Guardrails for Progress: The Role of Regulation
While AI’s potential is vast, so are the risks. Secretary Clinton raised a crucial concern: “If you look at the aggregate, is it going to be more difficult, given our political and social and economic environment, to say, ‘Hey, wait a minute, we’ve learned enough that maybe we should put on this guardrail. Maybe this should be a certain standard we try to meet.’”
Hoffman acknowledged the difficulty of balancing innovation with regulation but emphasized that responsible AI development requires ongoing assessment rather than outright restriction. “The attempt to hold any kind of large system to zero error is an attempt to stop the future,” he noted. Instead, he advocates for an iterative approach—adjusting regulations as AI evolves, rather than stalling progress in the name of perfection.
Hoffman compared this process to the development of the automobile. Early cars lacked essential safety features, but over time, society introduced refinements—first bumpers, then seatbelts, then airbags—to make vehicles safer without halting progress. We have to start driving before we realize what safeguards we need. AI, he argued, should follow the same evolutionary path, improving with real-world use and responsive adjustments.
The Global AI Race: Maintaining US Leadership
One of the most urgent topics in the conversation was the global competition in AI development, particularly between the United States and China. Secretary Clinton emphasized that the US cannot afford to fall behind: “I do worry that if we don’t have an optimistic, full speed ahead approach to it, that we will get outmaneuvered, that we will find ourselves in a subordinate position and that subordinate position could be one of great risk and potential danger. I still would rather have us struggling to try to make the right decisions than ceding ground to rogue states, to highly organized states, to criminal organizations, to rogue technologists.”
Hoffman echoed this sentiment, stressing that America’s strength lies in its innovative culture and entrepreneurial spirit. “We do it by the American entrepreneurial networks and the creativity, but we have to go at that, and we have to be saying that’s what we want.”
Recent developments highlight the stakes of this competition. Just days before this conversation, Chinese AI company DeepSeek made headlines with its advancements in large language models, demonstrating China’s accelerating capabilities in AI development. The rise of DeepSeek underscores the urgency for the US to not only invest in cutting-edge AI research but also establish ethical frameworks that ensure responsible deployment of the technology. This competition is not just about economic dominance; it’s about setting standards for ethical AI use worldwide. The key to maintaining leadership, Hoffman argued, is to ensure that AI development remains aligned with democratic values and responsible governance. If the US leads with innovation and responsibility, it can shape AI’s trajectory for the benefit of society at large.
AI as a Catalyst for Global Stability
Beyond economic and technological dominance, AI could play a significant role in shaping global stability. Hoffman suggested that AI-driven economic and educational advancements could reduce geopolitical tensions by fostering growth in underdeveloped regions. “When people think their future is likely to be better than their present, in terms of building things, they tend to go to war less,” he noted. If AI can be harnessed to improve healthcare, education, and job opportunities in struggling economies, it has the potential to serve as a stabilizing force rather than a disruptive one. This approach shifts the conversation from AI as a competition to AI as a tool for global peace and cooperation.
In contrast, during the discussion, an audience member raised concerns about AI’s potential use in warfare. Secretary Clinton acknowledged the risks, stating, “A lot of weapons of war are becoming more and more autonomous. And so we’re going to see all kinds of very dangerous weapons in the hands of all kinds of people that may or may not have the values that they should to be entrusted with that kind of destruction.” Hoffman reinforced this point, cautioning that AI’s offensive capabilities could be destabilizing: “One of the challenges with AI is that it’s inherently a little bit more of an offensive weapon and has the tendency to say ‘use it or lose your advantage’”, which is most worrisome in terms of a potential arms race dynamic. The exchange highlighted the delicate balance of leveraging AI for progress while preventing its potential misuse in global conflicts.
A Call to be AI “Curious”
As AI continues to evolve, engagement and understanding are critical. Rather than passively observing its impact, scientists, policymakers, and the public must take an active role in shaping AI’s future. As Hoffman puts it: “Move to being AI curious. It doesn’t matter if you are also at the same time AI uncertain, AI skeptical, AI fearful—but add AI curiosity into it.” The AI revolution is here. The question is not whether AI will change our world, but how we choose to participate and shape that change. By fostering curiosity, implementing smart regulations, and ensuring equitable opportunities, we can make AI a tool for empowerment rather than disruption.
For those eager to deepen their understanding of AI technologies in the healthcare sector, including leveraging AI for drug discovery, medical imaging, mental health, equity, and affordability, we invite you to join us at the HealthNext AI Summit 2025, March 3-4, 2025 in New York City. Register now with promo code HLTHNXTNYAS for 10% off!
Interested in hearing more from Reid Hoffman? Tune in to Hoffman’s March 2024 conversation with Academy President and CEO Nicholas Dirks about Hoffman’s prior book, Impromptu: Amplifying Our Humanity Through AI. Available On-Demand until March 27, 2025.
115 Broadway, 8th Floor, New York, NY 10006 or join virtually by Zoom
AI and AI-endowed robots are celebrated as useful tools. But the dramatic utopian and dystopian responses they can provoke suggest something far more, as many users probe them for signs of agency, sentience, and intelligence. At this point, AI is no longer just a tool; it can start to resemble something near human. But we have always lived with near humans and super humans, or what Marshall Sahlins called “metahumans.” We call them spirits, ancestors, gods. Ethnographic attention to these interactions brings out the common features of AI and other metahumans. One feature metahumans share is their ties to power. Much as a prophet embodies and legitimates the power of divinity, so AI can mystify and justify to users the power of its corporate masters, endowing mundane profit-seeking with supernatural aura.
Speakers
Speaker
Webb Keane, George Herbert Mead Distinguished University Professor, Department of Anthropology, University of Michigan
Discussant
Danilyn Rutherford President, The Wenner-Gren Foundation
Discussant
Omri Elisha Associate Professor of Anthropology, Queens College, CUNY
Pricing
All: Free
About the Series
Since 1877, the Anthropology Section of The New York Academy of Sciences has served as a meeting place for scholars in the Greater New York area. The section strives to be a progressive voice within the anthropological community and to contribute innovative perspectives on the human condition nationally and internationally. Learn more and view other events in the Anthropology Section series.
Academics and industry experts shared their latest research and the broader potential of AI during The New York Academy of Sciences’ 15th Annual Machine Learning Symposium.
Published November 14, 2024
By Nick Fetty, Digital Content Manager
Pin-Yu Chen, PhD, a principal research scientist at IBM Research, presents during the Machine Learning Symposium at the New York Academy of Medicine on Oct. 18, 2024. Photo by Nick Fetty/The New York Academy of Sciences.
The New York Academy of Sciences (the Academy) hosted the 15th Annual Machine Learning Symposium at the New York Academy of Medicine on October 18, 2024. This year’s event, sponsored by Google Research and Cubist Systematic Strategies, included keynote addresses from leading experts, spotlight talks from graduate students and tech entrepreneurs, and opportunities for networking.
Exploring and Mitigating Safety Risks in Large Language Models and Generative AI
Pin-Yu Chen, PhD, a principal research scientist at IBM Research, opened the symposium with a keynote lecture about his work examining adversarial machine learning of neural networks for robustness and safety.
Pin-Yu Chen, PhD. Photo by Nick Fetty/The New York Academy of Sciences.
Dr. Chen presented the limitations and safety challenges facing researchers in the realm of foundation models and generative AI. Foundation models “mark a new era of machine learning,” according to Dr. Chen. Data sources such as text, images, and speech are used to train these foundation models, which are then adapted to perform tasks ranging from answering questions to object recognition. ChatGPT is an example of a foundation model.
“The good thing about foundation models is now you don’t have to worry about what task you want to solve,” said Dr. Chen. “You can spend more effort and resources to train a universal foundation model and fine-tune the variety of the downstream tasks that you want to solve.”
While a foundation model can be viewed as a “one for all” solution, according to Dr. Chen, generative AI is on the other side of the spectrum and takes an “all for more” approach. Once a generative AI model is effectively trained with a diverse and representative dataset, it can be expected to generate reliable outputs. Text-to-image and text-to-video platforms are two examples of this.
Dr. Chen’s talk also brought in examples of government action taken in the United States and in European Union countries to regulate AI. He also discussed “hallucinations” and other bugs occurring with current AI systems, and how these issues can be further studied.
“Lots of people talk about AGI as artificial general intelligence. My view is hopefully one day AGI will mean artificial good intelligence,” Dr. Chen said in closing.
Morning Short Talks
The morning session also included a series of five-minute talks delivered by early career scientists:
CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training – David Brandfonbrener, PhD, Harvard University
On the Benefits of Rank in Attention Layers – Noah Amsel, BS, Courant Institute of Mathematical Sciences
A Distributed Computing Lens on Transformers and State-Space Models – Clayton Sanford, PhD, Google Research
Efficient Stagewise Pretraining via Progressive Subnetworks – Abhishek Panigrahi, Bachelor of Technology, Princeton University
MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences – Souradip Chakraborty, PhD, University of Maryland
Revisiting the Exploration-Exploitation Tradeoff in the Era of Generative Sequence Modeling
Daniel Russo, PhD. Photo by Nick Fetty/The New York Academy of Sciences.
Daniel Russo, PhD, the Philip H. Geier Jr. Associate Professor of Business at Columbia University, delivered a keynote about reinforcement learning. This field combines statistical machine learning with online decision-making. Prof. Russo covered the work that has taken place in his lab over the past year.
He pointed out that today, “humans have deep and recurring interactions with digital services that are powered through versions of AI.” This includes everything from platforms for dating and freelance work, to entertainment like Spotify and social media, to highly utilitarian applications such as for healthcare and education.
“The thing I deeply believe is that decision making among humans involves information gathering,” said Prof. Russo. “It involves understanding what you don’t know about the world and figuring out how to resolve it.”
He said medical doctors follow a similar process: they assess what might be affecting a patient, then decide what tests are needed to better diagnose the issue, weighing the costs against the benefits. Prof. Russo pointed out that in their current state, it’s difficult to design machine learning agents to effectively make these assessments.
He then discussed major advancements in the field that have occurred over the past decade and did a deep dive into his work on generative modeling. Prof. Russo closed his talk by emphasizing the difficulty of quantifying uncertainty in neural networks, despite his desire to be able to program them for decision-making.
“I think what this [research] is, is the start of something. Definitely not the end,” he said. “I think there’s a lot of interesting ideas here, so I hope that in the years to come this all bears out.”
Award-Winning Research
Researchers, ranging from high schoolers to industry professionals, shared their projects and work with colleagues during the popular poster session. Graduate students, postdocs, and industry professionals delivered a series of spotlight talks. Conference organizers assessed the work and presented awards to the most outstanding researchers. Awardees include:
Posters:
Aleksandrs Slivkins, PhD, Microsoft Research NYC (his student, Kiarash Banihashem, presented on his behalf)
Aditya Somasundaram, Bachelor of Technology, Columbia University
R. Teal Witter, BA, New York University
Spotlight Talks:
Noah Amsel, BS, Courant Institute of Mathematical Sciences
Claudio Gentile, PhD, Google
Anqi Mao, PhD, Courant Institute of Mathematical Sciences
Tamalika Mukherjee, PhD, Columbia University
Clayton Sanford, PhD, Google Research
Yutao Zhong, PhD, Courant Institute of Mathematical Sciences
The Spotlight talk award winners. From left: Yutao Zhong, PhD; Anqi Mao, PhD; Tamalika Mukherjee, PhD; Corinna Cortes, PhD (Scientific Organizing Committee); Claudio Gentile, PhD; Noah Amsel, BS; and Clayton Sanford, PhD.
Playing Games with Learning Agents
Jon Schneider, PhD. Photo by Nick Fetty/The New York Academy of Sciences.
To start the afternoon sessions, Jon Schneider, PhD, from Google Research New York, shared a keynote covering his research at the intersection of game theory and the theory of online learning.
“People increasingly now are offloading their decisions to whatever you want to call it; AI models, learning algorithms, automated agents,” said Dr. Schneider. “So, it’s increasingly important to design good learning algorithms that are capable of making good decisions for us.”
Dr. Schneider’s research centers on decision-making in strategic environments, covering both zero-sum (rock-paper-scissors) and general-sum games (chess, Go, StarCraft). He shared some examples of zero-sum games serving as success stories for the theories of online learning and game theory. In this realm, researchers have observed “tight connections” between economic theory and the theory of learning, finding practical applications for these theoretical concepts.
“Thinking about these convex objects, these menus of learning algorithms, is a powerful technique for understanding questions in this space. And there’s a lot of open questions about swap regret and the manipulative-ability of learning algorithms that I think are still waiting to be explored,” Dr. Schneider said in closing.
Afternoon Short Talks
Short talks in the afternoon by early career scientists covered a range of topics:
Improved Bounds for Learning with Label Proportions – Claudio Gentile, PhD, Google
Cardinality-Aware Set Prediction and Top-k Classification – Anqi Mao, PhD, Courant Institute of Mathematical Sciences
Cross-Entropy Loss Functions: Theoretical Analysis and Applications – Yutao Zhong, PhD, Courant Institute of Mathematical Sciences
Differentially Private Clustering in Data Streams – Tamalika Mukherjee, PhD, Columbia University
Towards Generative AI Security – An Interplay of Stress-Testing and Alignment
Furong Huang, PhD. Photo by Nick Fetty/The New York Academy of Sciences.
The event concluded with a keynote talk from Furong Huang, PhD, an associate professor of computer science at the University of Maryland. She recalled attending the Academy’s Machine Learning symposium in 2017. She was a postdoctoral researcher for Microsoft Research at the time, and had the opportunity to give a spotlight talk and share a poster. But she said she dreamt of one day giving a keynote presentation at this impactful conference.
“It took me eight years, but now I can say I’m back on the stage as a keynote speaker. Just a little tip for my students,” said Prof. Huang, which was met by applause from those in attendance.
Her talk touched on large language models (LLMs) like ChatGPT. While other popular programs like Spotify and Instagram took 150 days and 75 days, respectively, to gain one million users, ChatGPT was able to achieve this benchmark in just five days. Furthermore, Prof. Huang pointed out the ubiquity of AI in society, citing data from the World Economic Forum, which suggests that 34% of business products are produced using AI, or augmented by AI algorithms.
AI and Public Trust
Despite the ubiquity of the technology (or perhaps because of it), she pointed out that public trust in AI is lacking. Polling shows a strong desire among Americans to make AI safe and secure. She went on to explain that for public trust to be gained, LLMs and visual language models (VLMs) need to be better calibrated to avoid behavioral hallucinations. This happens when the AI misreads situations and infers behaviors that aren’t actually occurring. Prof. Huang concluded by emphasizing the utility of stress-testing when developing AI systems.
“We use stress-testing to figure out the vulnerabilities, then we want to patch them. So that’s where alignment comes into play. Using the data we got from stress-testing, we can do training time and test time alignment to make sure the model is safe,” Prof. Huang concluded, adding that it may be necessary to conduct another round of stress-testing after a system is realigned to further ensure safety.
Want to watch these talks in their entirety? The video is on-demand for Academy members. Sign up today if you aren’t already part of our impactful network.
The New York Academy of Sciences has been at the forefront of machine learning and artificial intelligence since hosting the first Machine Learning Symposium nearly two decades ago.
Published September 16, 2024
By Nick Fetty, Digital Content Manager
In today’s digital age, an abundance of reliable data is readily available at our fingertips. This is, in part, because of significant advances in the field of machine learning in recent years.
The New York Academy of Sciences (the Academy) has long played a role in advancing research in this subfield of artificial intelligence. In machine learning, researchers develop mathematical algorithms that extract knowledge from specific data sets. The machine then “learns” from the data in an iterative fashion that enables predictions to be made. The approach has a wide range of practical applications, from natural language processing and search engine function to stock market analysis and medical diagnosis.
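As a toy illustration of that iterative process (not drawn from the article), the following Python sketch fits a simple linear model by gradient descent: each pass over the data nudges the parameters toward lower prediction error, and the fitted model can then make predictions on new inputs.

```python
# Minimal illustration of iterative "learning" from data: recover the line
# y = 3x + 2 from noisy examples using gradient descent on mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=100)  # noisy observations of a line

w, b, lr = 0.0, 0.0, 0.01
for step in range(2000):                # each iteration slightly improves the fit
    error = (w * x + b) - y
    w -= lr * 2 * np.mean(error * x)    # gradient of the error with respect to w
    b -= lr * 2 * np.mean(error)        # gradient of the error with respect to b

print(f"learned w={w:.2f}, b={b:.2f} (true values: 3.0, 2.0)")
print("prediction for a new input x=7.5:", w * 7.5 + b)
```

Real systems use far larger models and data sets, but the loop of predict, measure error, and adjust is the same basic idea the paragraph above describes.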
The first Machine Learning Symposium was hosted by the Academy in 2006. Collaborators included experts from Google, Rutgers University, Columbia University, and NYU’s Courant Institute of Mathematical Sciences.
Continuing a Proud Tradition
This proud tradition will continue when the Academy hosts the 15th annual Machine Learning Symposium at the New York Academy of Medicine (1216 5th Avenue, New York, NY 10029) on October 18, 2024. This year’s keynote speakers include:
Pin-Yu Chen, PhD, IBM Research: Dr. Chen’s recent research focuses on adversarial machine learning of neural networks for robustness and safety. His long-term research vision is to build trustworthy machine learning systems.
Furong Huang, PhD, University of Maryland: Dr. Huang works on statistical and trustworthy machine learning, foundation models and reinforcement learning, with specialization in domain adaptation, algorithmic robustness, and fairness.
Daniel Russo, PhD, Columbia University: Dr. Russo’s research lies at the intersection of statistical machine learning and online decision making, mostly falling under the broad umbrella of reinforcement learning.
Jon Schneider, PhD, Google Research New York: Dr. Schneider’s primary research interests include problems in online learning, game theory, and convex optimization/geometry. His recent work focuses on designing strategically robust algorithms for learning in game-theoretic environments.
The symposium’s primary goal has always been to develop an active community of machine learning scientists. This includes experts from academic, government, and industrial institutions who can exchange ideas in a neutral setting.
Graduate students and representatives from tech startups will also deliver a series of “Spotlight Talks.” Others will share their research during an interactive poster session.
Promoting Impactful Machine Learning Applications
Over its history, the symposium has highlighted several mainstream machine learning applications. This includes simulation, learning and optimization techniques for IBM Watson‘s Jeopardy! game strategies, the role big data played in the 2012 U.S. presidential election, and a trainable vision system for off-road mobile robots.
Corinna Cortes, PhD, VP of Google Research, Mehryar Mohri, PhD, Professor at NYU and a Research Director at Google Research, and Tony Jebara, PhD, VP of Engineering and Head of Machine Learning at Spotify, have been involved since the event’s inception. They continue to guide the event’s programming through their roles on the Scientific Organizing Committee. This year’s sponsors include Google Research and Cubist Systematic Strategies.