
From New Delhi to New York


Academy Fellow Nitin Verma is taking a closer look at deepfakes and the impact they can have on public opinion.

Published April 23, 2024

By Nick Fetty
Digital Content Manager

Nitin Verma’s interest in STEM can be traced back to his childhood growing up in New Delhi, India.

Verma, a member of the inaugural cohort for the Artificial Intelligence (AI) and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, remembers being fascinated by physics and biology as a child. When he and his brother would play with toys like kites and spinning tops, he would always think about the science behind why the kite stays in the sky or why the top continues to spin.

Later, he developed an interest in radio and was mesmerized by the ability to pick up faraway stations on the shortwave band of the household radio. In the early 1990s, he remembers, television programs like Turning Point and Brahmānd (Hindi: ब्रह्मांड, literally translated as “the Universe”) further inspired him.

“These two programs shaped my interest in science, and then through a pretty rigorous school system in India, I got a good grasp of the core concepts of the major sciences—physics, chemistry, biology—and mathematics by the time I graduated high school,” said Verma. “Even though I am an information scientist today, I remain absolutely enraptured by the night sky, physics, telecommunication, biology, and astronomy.”

Forging His Path in STEM

Verma went on to earn a bachelor’s in electronic science at the University of Delhi, where he continued to pursue his interest in radio communications while developing technical knowledge of electronic circuits, semiconductors, and amplifiers. After graduating, he spent nearly a decade working as an embedded software programmer, though he found himself somewhat unfulfilled by the work.

“In industry, I felt extremely disconnected with my inner desire to pursue research on important questions in STEM and social science,” he said.

This lack of fulfillment led him to the University of Texas at Austin where he pursued his MS and PhD in information studies. Much like his interest in radio communications, he was also deeply fascinated by photography and optics, which inspired his dissertation research.

This research examined the impact that deepfake technology can have on public trust in photographic and video content. He wanted to learn how people came to trust visual evidence in the first place and what is at stake with the arrival of deepfake technology. He found that perceived or actual familiarity with content creators and depicted environments, along with context, prior beliefs, and prior perceptual experiences, guides public trust in visual material.

“My main thesis is that deepfake technology could be exploited to break our trust in visual media, and thus render the broader public vulnerable to misinformation and propaganda,” Verma said.

A New York State of Mind

Verma captured this image of the historic eclipse that occurred on April 8, 2024.

After completing his PhD, he applied for and was admitted into the AI and Society Fellowship. The fellowship has enabled him to further his understanding of AI through opportunities such as the weekly lecture series, collaborations with researchers at New York University, presentations he has given around the city, and by working on projects with Academy colleagues such as Marjorie Xie and Akuadasuo Ezenyilimba.

Additionally, he is part of the Academy’s Scientist-in-Residence program, in which he teaches STEM concepts to students at a Brooklyn middle school.

“I have loved the opportunity to interact regularly with the research community in the New York area,” he said, adding that living in the city feels like a “mini earth” because of the diverse people and culture.

In the city he has found inspiration for some of his non-work hobbies such as playing guitar and composing music. The city provides countless opportunities for him to hone his photography skills, and he’s often exploring New York with his Nikon DSLR and a couple of lenses in tow.

Deepfakes and Politics

In much of his recent work, he’s examined the societal dimensions (culture, politics, language) that he says are crucial when developing AI technologies that effectively serve the public, echoing the Academy’s mission of “science for the public good.” With a polarizing presidential election on the horizon, Verma has expressed concerns about bad actors utilizing deepfakes and other manipulated content to sway public opinion.

“It is going to be very challenging, given how photorealistic visual deepfakes can get, and how authentic-sounding audio deepfakes have gotten lately,” Verma cautioned.

He encourages people to refrain from reacting to and sharing information they encounter on social media, even if the posts bear the signature of a credible news outlet. Basic vetting, such as visiting the actual webpage to ensure it is indeed the correct webpage of the purported news organization, and checking the timestamp of a post, can serve as a good first line of defense against disinformation, according to Verma. Particularly when viewing material that may reinforce one’s beliefs, Verma challenges viewers to ask themselves: “What do I not know after watching this content?”

While Verma has concerns about “the potential for intentional abuse and unintentional catastrophes that might result from an overzealous deployment of AI in society,” he feels that AI can serve the public good if properly practiced and regulated.

“I think AI holds the promise of attaining what—in my opinion—has been the ultimate pursuit behind building machines and the raison d’être of computer science: to enable humans to automate daily tasks that come in the way of living a happy and meaningful life,” Verma said. “Present day AI promises to accelerate scientific discovery including drug development, and it is enabling access to natural language programming tools that will lead to an explosive democratization of programming skills.”

Read about the other AI and Society Fellows:

Applying Human Computer Interaction to Brain Injuries


With an appreciation for the value of education and an athlete’s work ethic, Akuadasuo Ezenyilimba brings a unique perspective to her research.

Published April 19, 2024

By Nick Fetty
Digital Content Manager

Athletes, military personnel, and others who endure traumatic brain injuries (TBI) may experience improved outcomes during the rehabilitation process thanks to research by a Fellow with Arizona State University and The New York Academy of Sciences.

Akuadasuo Ezenyilimba, a member of the inaugural cohort of the Academy’s AI and Society Fellowship, conducts research that aims to improve both the quality and the accessibility of TBI care through human-computer interaction. Her interest in this research, and in STEM more broadly, can be traced back to her upbringing in upstate New York.

Instilled with the Value of Education

Ezenyilimba grew up in Rochester, New York, where her parents instilled in her and her three younger siblings the value of education and hard work. Her father, Matthew, migrated to the United States from Nigeria and spent his career in chemistry, while her mother, Kelley, grew up in Akron, Ohio and worked in accounting and insurance. Akuadasuo Ezenyilimba remembers competing as a 6-year-old with her younger sister in various activities pertaining to their after-school studies.

“Both my mother and father placed a strong emphasis on STEM-related education for all of us growing up and I believe that helped to shape us into the individuals we are today, and a big reason for the educational and career paths we all have taken,” said Ezenyilimba.

This competitive spirit extended beyond academics. Ezenyilimba competed as a hammer, weight, and discus thrower on the track and field team at La Cueva High School in New Mexico. An accomplished student athlete, Ezenyilimba was a discus state champion her senior year, and was back-to-back City Champion in discus as a junior and senior.

Her athletic prowess landed her a spot on the women’s track and field team as an undergraduate at New Mexico State University, where she competed in the discus and hammer throw. Off the field, she majored in psychology, which was her first step onto a professional path that would involve studying the human brain.

Studying the Brain

After completing her BS in psychology, Ezenyilimba earned an MS in applied psychology from Sacred Heart University while throwing weight for the women’s track and field team, followed by an MS in human systems engineering from Arizona State University. She then pursued her PhD in human systems engineering at Arizona State, where her dissertation research focused on mild TBI and human-computer interaction in executive function rehabilitation. As a doctoral student, she participated in the National Science Foundation’s Research Traineeship Program.

“My dissertation focused on [a] prototype of a wireframe I developed for a web-based application for mild traumatic brain injury rehabilitation when time, finance, insurance, or knowledge are potential constraints,” said Ezenyilimba. “The application is called Ụbụrụ.”

As part of her participation in the AI and Society Fellowship, she splits her time between Tempe, Arizona and New York. Arizona State University’s School for the Future of Innovation in Society partnered with the Academy for this Fellowship.

Understanding the Societal Impacts of AI

The Fellowship has provided Ezenyilimba the opportunity to consider the societal dimensions of AI and how that might be applied to her own research. In particular, she is mindful of the potential negative impact AI can have on marginalized communities if members of those communities are not included in the development of the technology.

“It is important to ensure everyone, regardless of background, is considered,” said Ezenyilimba. “We cannot overlook the history of distrust that has impacted marginalized communities when new innovations or changes do not properly consider them.”

Her participation in the Fellowship has enabled her to build and foster relationships with other professionals doing work related to TBI and AI. She also collaborates with her fellow cohort postdocs in brainstorming new ways to address the topic of AI in society.

“As a Fellow I have also been able to develop my skills through various professional workshops that I feel have helped make me more equipped and competitive as a researcher,” she said.

Looking Ahead

Ezenyilimba will continue advancing her research on TBI. Through serious gamification, she looks at how to lessen the negative context that can be associated with rehabilitation and how to better enhance the overall user experience.

“My research looks at how to increase accessibility to relevant care and ensure that everyone who needs it is equipped with the necessary knowledge to take control of their rehabilitation journey whether that be an athlete, military personnel, or a civilian,” she said.

Going forward she wants to continue contributing to TBI rehabilitation as well as telehealth with an emphasis on human factors and user experience. She also wants to be a part of an initiative that ensures accessibility to and trust in telehealth, so everyone is capable of being equipped with the necessary tools.

Outside of her professional work, Ezenyilimba enjoys listening to music and attending concerts with family and friends. Some of her favorite artists include Victoria Monet and Coco Jones. She is also getting back into the gym and focusing on weightlifting, harkening back to her days as a track and field student-athlete.

Like many, Ezenyilimba has concerns about the potential misuses of AI by bad actors, but she also sees potential in the positive applications if the proper inputs are considered during the development process.

“I think a promising aspect of AI is the limitless possibilities that we have with it. With AI, when properly used, we can utilize it to overcome potential biases that are innate to humans and utilize AI to address the needs of the vast majority in an inclusive manner,” she said.

Read about the other AI and Society Fellows:

Women’s Health 2.0: The Artificial Intelligence Era


Charting the evolution of women’s healthcare in the AI era, illuminating the promise and challenges of predictive tech to close the health gender gap.

Published April 12, 2024

By Brooke Grindlinger, PhD
Chief Scientific Officer

Panelists Sara Reistad-Long (left), Healthcare Strategist at Empowered; Alicia Jackson, PhD, Founder and CEO of Evernow; Christina Jenkins, MD, General Partner at Convergent Ventures; and Robin Berzin, MD, Founder and CEO of Parsley Health speak at SXSW on March 9, 2024. The panelists discussed the promise and risks that AI and predictive tech carry as a path to closing the healthcare gender gap.

Less than 2% of global healthcare research and development is dedicated to female-specific conditions beyond cancer, as was starkly revealed in the January 2024 World Economic Forum and McKinsey Health Institute report, “Closing the Women’s Health Gap: A $1 Trillion Opportunity to Improve Lives and Economies.” Rectifying this disparity holds the potential to inject over $1 trillion annually into the global economy by 2040 through bolstered female workforce participation.

In February 2024, America’s First Lady Jill Biden unveiled a $100 million federal funding initiative for women’s health research, marking a significant milestone for the White House Initiative on Women’s Health Research intended to fundamentally change how the US approaches and funds research in this area. On March 9, 2024, the South by Southwest Conference hosted a pivotal panel discussion titled “Can AI Close the Health Gender Gap?” moderated by Sara Reistad-Long, a Healthcare Strategist at Empowered. This gathering of clinicians, digital health tech executives, and investors delved into the transformative potential of artificial intelligence (AI) and predictive technology in mitigating gender disparities in healthcare.

Women’s Health Beyond Reproduction

The panelists began by establishing a shared definition of ‘women’s health.’ Historically, women’s health has been narrowly defined as reproductive health, primarily concerning the female reproductive organs such as the uterus, ovaries, fallopian tubes, and to some extent, breasts. Yet, as panelist Christina Jenkins, MD, General Partner at Convergent Ventures, aptly pointed out, the scope of women’s health transcends this narrow definition.

“There’s so much more to women’s health than that,” she emphasized, advocating for a broader understanding. “We consider ‘women’s health’ as a specific practice… focused on things that are unique to women, which are those reproductive organs and [associated conditions], but also conditions that disproportionately… or differently affect women.” She elaborated with examples ranging from autoimmune diseases to conditions like migraine, colon cancer, and variances in women’s reactions to asthma medications.

Overlooked and Underserved: Women’s Health Blind Spots

The historical exclusion of women from health research and clinical trials has perpetuated the flawed assumption that women’s bodies and health outcomes mirror those of men, neglecting their unique biological and medical complexities. “Women were not included in medical research until 1993. Women are diagnosed later in over 700 conditions. Some of our most pressing chronic conditions that are on the rise take 5-7 years to be diagnosed—like autoimmune conditions—and 80% of them occur in women,” observed panelist Robin Berzin, MD, Founder and CEO of digital health company Parsley Health.

AI’s Promise in Closing the Research to Practice Gap

Alicia Jackson, PhD, Founder and CEO of digital health company Evernow, which is focused on women’s health at ages 40+, has spearheaded groundbreaking research that has yielded one of the most extensive and diverse datasets on menopause and perimenopause. This dataset encompasses a multifaceted understanding, ranging from the manifestation of bodily symptoms during these life stages to the impact of variables such as race, ethnicity, income levels, hysterectomy status, and concurrent medications on patient outcomes.

Furthermore, Jackson and her team have identified treatment protocols associated with both short-term relief and long-term health benefits. Despite possessing this wealth of information, Jackson posed a critical question: “I now have this massive dataset, but how do I actually get it into clinical practice to impact the woman that I am seeing tomorrow?” “There’s a huge opportunity for us to leverage clinical data in new ways to give us insights to personalize care,” added Berzin.

From Data Deluge to Personalized Care

Despite the increasing availability of rich research data on women’s health, significant challenges persist in promptly translating this data into effective patient care. With over a million new peer-reviewed publications in biomedicine added annually to the PubMed database, the sheer volume overwhelms individual healthcare providers. “That’s an impossible sum of research for any individual doctor…to digest and use,” observed Berzin. “New information takes 17 years to make its way from publication into medical education, and then even longer into clinical practice,” she lamented. “What I’m excited about when it comes to AI and closing the gender gap is the opportunity for us to close the research gap.

“What AI will let all of us do is take in a lot of the data sets that have been unwieldy in the past and leverage them to personalize care. The rapidity and pace at which we can begin to gain insights from the data, which is otherwise like drinking from a fire hose, represents an opportunity for us to catch up [on] that gender gap.” Jackson added, “AI gives me a time machine…to immediately take those results and apply them and impact women today.”

AI Nurse Anytime, Anywhere

The conversation shifted to AI’s potential to address the critical shortage of healthcare providers in the United States. Berzin highlighted the systemic issues, stating, “We don’t have enough doctors. We are not training enough doctors. Nor are we importing enough doctors. We have really big disparities in terms of where the doctors are.” Jackson expanded on the role of AI beyond tackling the provider shortfall and fast-tracking diagnostic processes, emphasizing its potential to facilitate culturally sensitive care.

She emphasized that AI could go beyond delivering data and outcomes; it’s about understanding the nuances of cultural preferences in healthcare delivery. Jackson noted that women want more than just symptom discussion; they want to delve into the emotional and relational impacts of navigating the healthcare system. “Right now, no traditional healthcare system has time beyond that 15-minute appointment to listen and to understand.” However, AI offers the possibility of unlimited time for patients to share their experiences.

With the assistance of AI, patients can access personalized care on their terms, allowing for a more enriching and fulfilling healthcare experience. Jackson continued, “If you have a $9 per hour AI nurse that can take that entire [patient] history, that [the patient can] call up in the middle of the night, on your commute to work, and just continue to add to that [history]…now you’ve created this very, very rich experience. Suddenly, it’s healthcare on your terms.”

Women’s Patient Empowerment Through AI

In addition to its potential to enhance healthcare accessibility and availability, AI emerged as a catalyst for empowering women to take charge of their healthcare journey. Jackson underscored a prevalent issue in women’s healthcare: the need for multiple doctor visits before receiving a correct diagnosis. She highlighted AI’s transformative potential in bridging this gap by empowering women to input their symptoms into AI platforms like ChatGPT, potentially integrating data from wearable devices, and receiving informed guidance—such as urgent care recommendations—immediately. This represents a significant stride in patient empowerment.

AI’s Achilles’ Heel

However, Jenkins cautioned against the pitfalls of AI, citing the case of Babylon Health, a UK-based digital health service provider. She recounted a troubling incident where the Babylon Health AI platform, during a system test, misdiagnosed a woman experiencing symptoms of a heart attack as having an anxiety attack, while advising a man with the same symptoms and medical history to seek immediate medical attention for a heart attack.

“This is what happens when you build something well-meaning on top of bad data,” cautioned Jenkins. She went on to emphasize the critical need to use real-world evidence to mitigate gender biases entrenched in clinical research data. “There is an imperative, not just for the algorithms to eliminate bias, but to make sure that the data sources are there. That’s why we have to use real-world evidence instead of clinical research.”

Learn more about the opportunities and challenges surrounding the integration of AI-driven technologies into the healthcare system at the upcoming Academy conference: The New Wave of AI in Healthcare 2024, May 1-2, 2024 in New York.

Innovations in AI and Higher Education


From the future of higher education to regulating artificial intelligence (AI), Reid Hoffman and Nicholas Dirks had a wide-ranging discussion during the first installment of the Authors at the Academy series.

Published April 12, 2024

By Nick Fetty
Digital Content Manager

Photo by Nick Fetty/The New York Academy of Sciences

It was nearly a full house when authors Nicholas Dirks and Reid Hoffman discussed their respective books during an event at The New York Academy of Sciences on March 27, 2024.

Hoffman, who co-founded LinkedIn as well as Inflection AI and currently serves as a partner at Greylock, discussed his book Impromptu: Amplifying Our Humanity Through AI. Dirks, who spent a career in academia before becoming President and CEO of the Academy, focused on his recently published book City of Intellect: The Uses and Abuses of the University. Their discussion, the first installment in the Authors at the Academy series, was largely centered on artificial intelligence (AI) and how it will impact education, business and creativity moving forward.

The Role of Philosophy


The talk kicked off with the duo joking about the century-old rivalry between the University of California-Berkeley, where Dirks serves on the faculty and formerly served as chancellor, and Stanford University, where Hoffman earned his undergraduate degree in symbolic systems and currently serves on the board for the university’s Institute for Human-Centered AI. From Stanford, Hoffman went to Oxford University as a Marshall Scholar to study philosophy. He began by discussing the role that his background in philosophy has played throughout his career.

“One of my conclusions about artificial intelligence back in the day, which is by the way still true, is that we don’t really understand what thinking is,” said Hoffman, who also serves on the Board of Governors for the Academy. “I thought maybe philosophers understand what thinking is, they’ve been at it a little longer, so that’s part of the reason I went to Oxford to study philosophy. It was extremely helpful in sharpening my mind toolset.”

Public Intellectual Discourse

He encouraged entrepreneurs to think about the theory of human nature in the work they’re doing. He said it’s important to think about what they want for the future, how to get there, and then to articulate that with precision. Another advantage of a philosophical focus is that it can strengthen public intellectual discourse, both nationally and globally, according to Hoffman.

“It’s [focused on] who are we and who do we want to be as individuals and as a society,” said Hoffman.


Early in his career, Hoffman concluded that working as a software entrepreneur would be the most effective way he could contribute to the public intellectual conversation. He dedicated a chapter in his book to “Public Intellectuals” and said that the best way to elevate humanity is through enlightened discourse and education, which was the focus of a separate chapter in his book.

Rethinking Networks in Academia

The topic of education was an opportunity for Hoffman to turn the tables and ask Dirks about his book. Hoffman asked Dirks how institutions of higher education need to think about themselves as nodes of networks and how they might reinvent themselves to be less siloed.

Dirks mentioned how throughout his life he’s experienced various campus structures and cultures, from private liberal arts institutions like Wesleyan University, where Dirks earned his undergraduate degree, and STEM-focused research universities like Caltech, to private universities in urban centers (University of Chicago, Columbia University) and public state universities (University of Michigan, University of California-Berkeley).

While on the faculty at Caltech, Dirks recalled, he was encouraged to attend roundtables where faculty from different disciplines would come together to discuss their research. He remembered hearing from prominent academics such as Max Delbrück, Richard Feynman, and Murray Gell-Mann. Dirks, with a smile, pointed out that the meeting location for these roundtables was featured in the 1984 film Beverly Hills Cop.


An Emphasis on Collaboration in Higher Education

Dirks said he thinks the collaborative culture at Caltech enabled these academics to achieve a distinctive kind of greatness.

“I began to see this is kind of interesting. It’s very different from the way I’ve been trained, and indeed anyone who has been trained in a PhD program,” said Dirks, adding that he often thinks about a quote from a colleague at Columbia who said, “you’re trained to learn more and more about less and less.”

Dirks said that the problem with this model is that the incentive structures and networks of one’s life at the university are largely organized around disciplines and individual departments. As Dirks rose through the ranks from faculty to administration (both as a dean at Columbia and as chancellor at Berkeley), he began gaining a bigger picture view of the entire university and how all the individual units can fit together. Additionally, Dirks challenged academic institutions to work more collaboratively with the off-campus world.

“A Combination of Competition and Cooperation”  

Dirks then asked Hoffman how networks operate within the context of artificial intelligence and Silicon Valley. Hoffman described the network within the Valley as “an intense learning machine.”

“It’s a combination of competition and cooperation that is kind of a fierce generator of not just companies and products, but ideas about how to do startups, ideas about how to scale them, ideas of which technology is going to make a difference, ideas about which things allow you to build a large-scale company, ideas about business models,” said Hoffman.


During a recent talk with business students at Columbia University, Hoffman said he was asked about the kinds of jobs the students should pursue upon graduation. His advice was that instead of pinpointing specific companies, jobseekers should choose “networks of vibrant industries.” Instead of striving for a specific job title, they should instead focus on finding a network that inspires ingenuity.

“Being a disciplinarian within a scholarly, or in some case scholastic, discipline is less important than [thinking about] which networks of tools and ideas are best for solving this particular problem and this particular thing in the world,” said Hoffman. “That’s the thing you should really be focused on.”

The Role of Language in Artificial Intelligence

Much of Hoffman’s book includes exchanges between him and GPT-4, an example of a large language model (LLM). Dirks pointed out that Hoffman uses GPT-4 not just as an example, but as an interlocutor throughout the book. By the end of the book, Dirks observed, the system had grown because of Hoffman’s inputs.

In the future, Hoffman said, he sees LLMs being applied to a diverse array of industries. He used the example of the steel industry, where LLMs could support areas like sales, marketing, communications, financial analysis, and management.

“LLMs are going to have a transformative impact on steel manufacturing, and not necessarily because they’re going to invent new steel manufacturing processes, but [even then] that’s not beyond the pale. It’s still possible,” Hoffman said.

AI Understanding What Is Human


Hoffman said part of the reason he articulates the positives of AI is because he views the general discourse as so negative. One example of a positive application of AI would be having a medical assistant on smartphones and other devices, which can improve medical access in areas where it may be limited. He pointed out that AI can also be programmed as a tutor to teach “any subject to any age.”

“[AI] is the most creative thing we’ve done that also seems to have potential autonomy and agency and so forth, and that causes a bunch of very good philosophical questions, very good risk questions,” said Hoffman. “But part of the reason I articulate this so positively is because…[of] the possibility of making things enormously better for humanity.” 

Hoffman compared the societal acceptance of AI to automobiles more than a century ago. At the outset, automobiles didn’t have many regulations, but as they grew in scale, laws around seatbelts, speed limits, and driver’s licenses were established. Similarly, he pointed to weavers who were initially wary of the loom before understanding its utility to their work and the resulting benefit to broader society.

“AI can be part of the solution,” said Hoffman. “What are the specific worries in navigation toward the good things, and what are the ways that we can navigate that in good ways? That’s the right place for a critical dialogue to happen.”

Regulation of AI


Hoffman said the rapid pace at which new AI technologies develop can make effective regulation difficult. He said it can be helpful to pinpoint the two or three most important risks to focus on during the navigation process and, where feasible, to fix those issues down the road.

Hoffman cited carbon emissions from automobiles as an example, pointing out that emissions weren’t necessarily on the minds of engineers and scientists when the automobile was being developed. But once research started pointing to the detrimental environmental impacts of carbon in the atmosphere, governments and companies took action to regulate and reduce emissions.

“[While] technology can help to create a problem, technologies can also help solve those problems,” Hoffman said. “We won’t know they’re problems until we’re into them and obviously we adjust as we know them.”

Hoffman is currently working on another book about AI and was invited to return to the Academy to discuss it once published.

For on-demand video access to the full event, click here.

Check out the other events from our 2024 Authors at the Academy Series

Full video of these events is available; please visit nyas.org/ondemand

Yann LeCun Emphasizes the Promise of AI

A man presents to a full house during an Academy event.

Yann LeCun, Meta’s renowned Chief AI Scientist, discussed everything from his foundational research in neural networks to his optimistic outlook on the future of AI technology at a sold-out Tata Knowledge Series on AI & Society event with the Academy’s President & CEO Nick Dirks, highlighting the importance of the open-source model.

Published April 8, 2024

By Nick Fetty
Digital Content Manager

Photo by Nick Fetty/The New York Academy of Sciences

Yann LeCun, a Turing Award winning computer scientist, had a wide-ranging discussion about artificial intelligence (AI) with Nicholas Dirks, President and CEO of The New York Academy of Sciences, as part of the first installment of the Tata Series on AI & Society on March 14, 2024.

LeCun is the Vice President and Chief AI Scientist at Meta, as well as the Silver Professor for the Courant Institute of Mathematical Sciences at New York University. A leading researcher in machine learning, computer vision, mobile robotics, and computational neuroscience, LeCun has long been associated with the Academy, serving as a featured speaker during past machine learning conferences and also as a juror for the Blavatnik Awards for Young Scientists.

Advancing Neural Network Research

Photo by Nick Fetty/The New York Academy of Sciences

As a postdoc at the University of Toronto, LeCun worked alongside Geoffrey Hinton, who’s been dubbed the “godfather of AI,” conducting early research in neural networks. Some of this early work would later be applied to the field of generative AI. At this time, many of the field’s foremost experts cautioned against pursuing such endeavors. He shared with the audience what drove him to pursue this work, despite the reservations some had.

“Everything that lives can adapt but everything that has a brain can learn,” said LeCun. “The idea was that learning was going to be critical to make machines more intelligent, which I think was completely obvious, but I noticed that nobody was really working on this at the time.”

LeCun joked that because of the field’s relative infancy, he struggled at first to find a doctoral advisor, but he eventually pursued a PhD in computer science at the Université Pierre et Marie Curie where he studied under Maurice Milgram. He recalled some of the limitations, such as the lack of large-scale training data and limited processing power in computers, during those early years in the late 1980s and 1990s. By the early 2000s, he and his colleagues began developing a research community to revive and advance work in neural networks and machine learning.

Work in the field really started taking off in the late 2000s, LeCun said. Advances in speech and image recognition software were just a couple of the instances LeCun cited that used neural networks in deep learning applications. LeCun said he had no doubt about the potential of neural networks once the data sets and computing power were sufficient.

Limitations of Large Language Models

Large language models (LLMs), such as ChatGPT or autocomplete, use machine learning to “predict and generate plausible language.” While some have expressed concerns about machines surpassing human intelligence, LeCun admits to holding the unpopular opinion that LLMs are not as intelligent as they may seem.

LLMs are developed using a finite number of words or, more specifically, tokens, which average roughly three-quarters of a word, according to LeCun. He said that many LLMs are developed using as many as 10 trillion tokens.
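The three-quarters-of-a-word figure LeCun cites lends itself to a quick back-of-the-envelope estimate. The sketch below is purely illustrative; the helper name and heuristic are our own, not any production tokenizer:

```python
def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate using the ~3/4-word-per-token heuristic."""
    word_count = len(text.split())
    return round(word_count / words_per_token)

# By the same heuristic, a 10-trillion-token training corpus
# corresponds to roughly 7.5 trillion words.
words_in_corpus = 10_000_000_000_000 * 0.75
```

Real tokenizers split on subword units, so actual counts vary by model and language; this is only a rule of thumb.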

While much consideration goes into deciding what tunable parameters will be used to develop these systems, LeCun points out that “they’re not trained for any particular task, they’re basically trained to fill in the blanks.” He said that more than just language needs to be considered to develop an intelligent system.

Photo by Nick Fetty/The New York Academy of Sciences

“That’s pretty much why those LLMs are subject to hallucinations, which really you should call confabulations. They can’t really reason. They can’t really plan. They basically just produce one word after the other, without really thinking in advance about what they’re going to say,” LeCun said, adding that “we have a lot of work to do to get machines to the level of human intelligence, we’re nowhere near that.”

A More Efficient AI

LeCun argued that to have a smarter AI, these technologies should be informed by sensory input (observations and interactions) instead of language inputs. He pointed to orangutans, which are highly intelligent creatures that survive without using language.

Part of LeCun’s argument for why sensory inputs would lead to better AI systems is because the brain processes these inputs much faster. While reading text or digesting language, the human brain processes information at about 12 bytes per second, compared to sensory inputs from observations and interactions, which the brain processes at about 20 megabytes per second.
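Taking the figures as quoted in the talk, the gap is easy to quantify. The snippet below is simple arithmetic on LeCun’s numbers (assuming decimal megabytes):

```python
# Approximate information rates quoted by LeCun.
language_bps = 12                # bytes per second while reading text
sensory_bps = 20 * 1_000_000     # 20 megabytes per second of sensory input

# Sensory input carries information over a million times faster.
ratio = sensory_bps / language_bps
```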

“To build truly intelligent systems, they’d need to understand the physical world, be able to reason, plan, remember and retrieve. The architecture of future systems that will be capable of doing this will be very different from current large language models,” he said.

AI and Social Media

As part of his work with Meta, LeCun uses and develops AI tools to detect content that violates the terms of service on social media platforms like Facebook and Instagram, though he is not directly involved with the moderation of content itself. Roughly 88 percent of removed content is initially flagged by AI, which helps his team take down roughly 10 million items every three months. Despite these efforts, misinformation, disinformation, deep fakes, and other manipulated content continue to be problematic, though the means for detecting this content automatically have vastly improved.

Photo by Nick Fetty/The New York Academy of Sciences

LeCun referenced statistics stating that in late 2017, roughly 20 to 25 percent of hate speech content was flagged by AI tools. This number climbed to 96 percent just five years later. LeCun attributed this difference to two things: first, the emergence of self-supervised, language-based AI systems (which predated ChatGPT); and second, the “transformer architecture” present in LLMs and other systems. He added that these systems can detect not only hate speech, but also violent speech, terrorist propaganda, bullying, fake news, and deep fakes.

“The best countermeasure against these [concerns] is AI. AI is not really the problem here, it’s actually the solution,” said LeCun.

He said this will require a combination of better technological systems (“The AI of the good guys have to stay ahead of the AI of the bad guys”) and non-technological, societal input to easily detect content produced or adapted by AI. He added that an ideal standard would involve a watermark-like tool that verifies legitimate content, as opposed to a technology tasked with flagging inauthentic material.

Open Sourcing AI

LeCun pointed to a study by researchers at New York University which found that audiences over the age of 65 are most likely to be tricked by false or manipulated content. Younger audiences, particularly those who grew up with the internet, are less likely to be fooled, according to the research.

One element that separates Meta from its contemporaries is the company’s control over the AI algorithms that oversee much of its platforms’ content. Part of this is attributed to LeCun’s insistence on open sourcing its AI code, a sentiment shared by the company and part of the reason he ended up at Meta.

“I told [Meta executives] that if we create a research lab we’ll have to publish everything we do, and open source our code, because we don’t have a monopoly on good ideas,” said LeCun. “The best way I know, which I learned from working at Bell Labs and in academia, of making progress as quickly as possible is to get as many people as possible contributing to a particular problem.”

LeCun added that part of the reason AI has made the advances it has in recent years is because many in the industry have embraced the importance of open publication, open sourcing and collaboration.

“It’s an ecosystem and we build on each other’s ideas,” LeCun said.

Avoiding AI Monopolies

Photo by Nick Fetty/The New York Academy of Sciences

Another advantage is that open sourcing lessens the likelihood of a single company developing a monopoly over a particular technology. LeCun said a single company simply does not have the ability to fine-tune an AI system that will adequately serve the entire population of the world.

Many of the early systems have been developed using English, where data is abundant, but, for example, different inputs will need to be considered in a country such as India, where 22 different official languages are spoken. These inputs can be utilized in a way that a contributor doesn’t need to be literate – simply having the ability to speak a language would be enough to create a baseline for AI systems that serve diverse audiences. He said that freedom and diversity in AI is important in the same way that freedom and diversity is vital to having an independent press.

“The risk of slowing AI is much greater than the risk of disseminating it,” LeCun said.

Following a brief question and answer session, LeCun was presented with an Honorary Life Membership by the Academy’s President and CEO, Nick Dirks.

“This means that you’ll be coming back often to speak with us and we can all get our questions answered,” Dirks said with a smile to wrap up the event. “Thank you so much.”

Also from the Tata Knowledge Series on AI & Society: The Complex Ecosystem of Artificial Intelligence with Madhumita Murgia.

Rule Makers and Breakers in the Space Race for Off-Earth Resources

A panel discussion from the South by Southwest event.

From space junk to mining critical minerals on the Moon, this South by Southwest panel explored ambiguities in the governance of space ventures.

Published March 28, 2024

By Brooke Grindlinger, PhD
Chief Scientific Officer

Panelists Monique M. Chism, PhD (left), Under Secretary for Education at the Smithsonian Institution; Aida Araissi, Founder and CEO of the Bilateral Chamber of Commerce; Kirsten Bartok Touw, aerospace, space, and defense tech investor and Co-Founder and Managing Partner of New Vista Capital; and A.J. Crabill, National Director of Governance for the Council of the Great City Schools; speak at SXSW on March 11, 2024. The panelists discussed the need for a cohesive and forward-looking governance approach to the business of space, to ensure equitable access and opportunity for all in this growing industry.

Space exploration not only signifies a pioneering frontier for deepening our comprehension of the universe but also serves as a pivotal gateway to unprecedented resources, technologies, and job opportunities, poised to emerge both on and beyond Earth’s bounds. What was once exclusively the domain of national governments has now evolved into a thriving commercial industry, fueled by the burgeoning participation of the private sector in space exploration. To guarantee the safety, accessibility, and positive impact of space exploration, it’s imperative to develop evolving governance mechanisms that effectively oversee resource allocation, foster international collaboration, prioritize safety and security, address ethical dilemmas, and tackle the escalating challenges of space debris and traffic management. On March 11, 2024, a diverse assembly of space investors, public and private stakeholders, ethicists, and enthusiasts congregated at Austin’s South by Southwest Conference to glean insights from the panel session titled ‘Governance Beyond Gravity: Unity & Exploration,’ helmed by Dr. Monique M. Chism, Under Secretary for Education at the Smithsonian Institution.

Satellite Superhighway: Redefining Space Access

Amid our captivation by the human presence aboard the International Space Station, the allure of Mars exploration, and the awe-inspiring vistas from the James Webb Space Telescope, it’s easy to overlook the bustling thoroughfare of satellites silently navigating Earth’s orbit. Remarkably, data from the tracking site Orbiting Now reveals a staggering count of over 9,600 satellites currently overhead, with SpaceX‘s Starlink network alone accounting for more than 6,000 of them.

The burgeoning satellite network not only amplifies global connectivity and intelligence capabilities but also signifies a democratization of space access, with over 70 nations, in conjunction with numerous private sector entities, having effectively launched satellites into low Earth orbit, endowing their operators with advanced communication and intelligence resources. The acquisition of precise Earth observation data, down to the millimeter level, fuels unmatched insights, opportunities, and competition.

Fellow panelist Kirsten Bartok Touw, an aerospace, space, and defense tech investor and Co-Founder and Managing Partner of New Vista Capital, underscored, “The concept of national security and protecting your country’s and your allies’ access to space, and all that is up there, is incredibly important.” However, Bartok Touw proposed that this unique and specialized business sector should not solely reside within the purview of governments. “We need to work with commercial companies—they iterate, they move faster, they design.” Beyond intelligence applications, Bartok Touw highlighted the numerous commercial opportunities in space, ranging from asteroid and lunar mining for rare-earth minerals to satellite monitoring for methane leaks, and even drug discovery, which can occur at an accelerated pace due to the absence of gravity in space. “This is a race for unexplored capabilities and areas. The first companies up there to lay claim are going to be the furthest and most advanced.”

Space Governance in Flux: Challenges and Opportunities

In the absence of established human settlements in space or local space governments, the space community navigates a complex web of governance policies crafted over decades. These include the foundational Outer Space Treaty of 1967, ratified by 112 nations, the 1979 Agreement Governing the Activities of States on the Moon and Other Celestial Bodies, colloquially known as the ‘Moon Agreement,’ ratified by a mere 18 nations, and the recent non-binding 2020 proposal, the Artemis Accords, which serves as an international legal framework aimed at orchestrating the peaceful exploration and utilization of space resources.

Panelist A.J. Crabill, National Director of Governance for the Council of the Great City Schools in Washington DC offered insights into the pivotal role of space, and space governance, in shaping the future of our society. “That future requires stepping out of our birthing cradle and being able to access resources beyond those that are terrestrial. However, the moment you have a lot of people doing that you’re immediately going to run into conflict. That’s when the need for systems of governance come into place. How do we protect both people and resources [and] how do we collaborate effectively around services that are needed? All of those become way more complicated outside of Earth’s atmosphere.”

Bartok Touw flagged that prior space governance policies relied on country-to-country agreements. But today, independent commercial operators like SpaceX, which launch and lease satellites for a variety of government and private entities, can limit the access that a country or corporation has in their region to space-based communications. “The state-to-state agreements that we had earlier [are] being disrupted today because now it’s not just country-to-country… it’s commercial entity-to-commercial entity. I would love to live in a world where all these nations and commercial entities could agree, but that is not the case we’re in.”

The Lack of an Enforcement Mechanism

Panelist Aida Araissi, Founder and CEO of the Bilateral Chamber of Commerce, injected some optimism into the discussion, remarking, “The countries that are at the forefront of accessing the Moon are [the United States], the Soviet Union, China, and India. It’s an exciting time.” However, as humanity’s quest to return to the Moon and journey to Mars intensifies, Araissi raised concerns about the governance of commercial activities, such as lunar mining. “Exactly whose jurisdiction is that, and how are we going to regulate that? That is the key question.” Bartok Touw echoed, “That is the problem, there isn’t an enforcing mechanism.”

Crabill adopted a pragmatic stance, invoking the satirical adage, “He who has the gold makes the rules.”  He elaborated, “Unfortunately, we see this time and time [again] when we’re looking at governance systems—school systems, states, cities, nations—whoever has the keys of authority…the access to resources, does wind up making the rules. If we want space governance to follow our values, we have to be there first in a powerful way, establish industry, and establish resources. And then our values of bringing other people in coalition can be what carries the day.”  

Reflecting on the evolution of space governance, Crabill noted that reaching consensus was easier during the theoretical discussions of the 1950s and 1960s. However, as society approaches the technological reality of widespread space access, the complexities of governance intensify. “Governing the imminent is much more complicated than governing the hypothetical.”

Learn more about the ethical, legal, and social issues relevant to space exploration and long-term human settlement in space at the upcoming event featuring a conversation with space ethicist, astrophysicist, and author Erika Nesvold: Off-Earth: Ethical Questions and Quandaries for Living in Outer Space.

Considering Context and Culture in AI

A man presents during a conference.

Nicholas Dirks focused on large language models, such as ChatGPT, and the importance of context and culture when developing these technologies.

Published March 13, 2024

By Nick Fetty
Digital Content Manager

Nicholas Dirks, President and CEO of The New York Academy of Sciences, recently presented at what has been dubbed “the world’s most attended tech event.”

Dirks gave a lecture titled “The Social Life of AI” to LEAP conference attendees at the Riyadh Exhibition and Convention Center in Saudi Arabia’s capital city. His presentation was part of DeepFest, a section of the LEAP event focused specifically on artificial intelligence (AI), which featured more than 150 speakers and nearly 50,000 attendees.

Dirks’ presentation covered a range of AI topics from the origins of human-machine interaction during the industrial revolution to the social and cultural impacts AI will have as the technology continues to develop.

“I seek here not just to talk about benefits and dangers, nor the larger debate between techno optimists and those who worry about threats about extinction, but about how AI will impact our everyday lives in ways that will be about far more than the more obvious importance of technology in our economic and political worlds,” said Dirks.

Dirks focused on large language models, such as ChatGPT or autocomplete, which use machine learning to “predict and generate plausible language.” When developing these models, Dirks emphasized the importance of context and culture, both of which can greatly impact how AI is programmed to generate language.

“Humans live in webs that are particular to cultural and historical experience, some shared across the entire human race, others deeply specific to smaller groups and contexts,” said Dirks. “Language is thus critical, but so is the cultural context within which any language is developed, learned, and deployed.”

Creating Social Meaning

Without proper regulations and best practice guidelines, AI might change cultural contexts in negative ways, rendering it a problematic player in the symbolic game of creating social meaning. This can lead to greater autonomy for machines, and less for humans, causing people to have more faith in machines than in humans. Within just the past couple of decades, issues caused by changes in technology are being brought to the forefront, such as the loss of jobs due to automation, mental health concerns spurred on by social media algorithms, and the dissemination of misinformation and disinformation through deepfakes and other maliciously manipulated content.

To avoid these pitfalls, Dirks says experts from various disciplines, including humanists, social scientists, psychologists, and philosophers, must be brought to the table in the research and development of AI technologies. These technologies should be frontloaded with socially positive applications, such as relevant and inexpensive educational opportunities, frictionless access to financial and other transactions, entitlements, and more.

While many serious considerations must be factored into the development of AI to avoid negative outcomes, Dirks ended his presentation with an optimistic look at the positive potential of AI moving forward.

“The social life of AI need not destroy or replace our own social lives as human beings. With deliberate, cross-sectoral, and multidisciplinary engagement – we can ensure that AI augments our lives, neither replacing nor subordinating us, and that the benefits of AI are shared by all,” he said.

Learn more about The New York Academy of Sciences’ efforts into AI such as the Artificial Intelligence and Society Fellowship as well as the Tata Series on AI & Society.

Combating COVID-19

Overview

From March 25th to May 6th, 2020, over 2,000 young innovators from 74 different countries came together to join the fight against COVID-19. In response to the coronavirus outbreak and global shutdown, The New York Academy of Sciences invited creative problem-solvers from around the world to participate in the challenge for a chance to receive a $500 travel scholarship to attend the Global STEM Alliance Summit. The winning solution, GOvid-19, is a virtual assistant and chatbot that provides users with accurate pandemic-related information. Learn more about the winning solution and the solvers who designed it.

The World Health Organization (WHO) declared the outbreak of the coronavirus disease 2019 (COVID-19) a pandemic in March 2020. As scientists and public health experts rush to find solutions to contain the spread, existing and emerging technologies are proving to be valuable. In fact, governments and health care facilities have increasingly turned to technology to help manage the outbreak. The rapid spread of COVID-19 has sparked alarm worldwide. Many countries are grappling with the rise in confirmed cases. It is urgent and crucial for us to discover ways to use technology to contain the outbreak and manage future public health emergencies.

Challenge

Consider the obstacles faced by governments, healthcare providers, and/or patients, and design a technology-based solution that can be deployed to combat COVID-19. The solution can be an improvement of an already existing technology or a new application. Solutions should consider the following:

  • Modes and rates of disease transmission 
  • Known preventative and protective measures against COVID-19
  • Lack of vaccine, medication, and treatment for COVID-19
  • The public health system, local healthcare infrastructure, access to technology and other relevant contexts

Winners

The winning solution, GOvid-19, is a virtual assistant and chatbot that provides users with accurate pandemic-related information about government responses, emergency resources, and COVID-19 statistics. It also incorporates grassroots feedback, streamlines medical supply chains with blockchain and AI techniques, and addresses potential accessibility issues among the most vulnerable groups.

Tracking Coronavirus

Overview

From May 8th to June 19th, 2020, over 250 innovators from 21 different countries worked together to develop syndromic surveillance systems that help us better understand the current pandemic and prevent future outbreaks. The New York Academy of Sciences invited solvers from around the world to participate in the challenge for a chance to win a $5,000 USD grand prize. The winning solution, SYNSYS: Tracking COVID-19 created by Esha Datanwala, is a syndromic surveillance system that uses online data to predict outbreaks. Learn more about the winning solution and the solver who designed it.

In the last two decades, three new coronaviruses have jumped from animals to humans (a phenomenon known as the spillover effect), causing serious illness and fatalities. Scientists and researchers in various sectors are racing to develop treatments and a vaccine while also investigating fundamental questions about the virus, such as its seasonality, full range of symptoms, true fatality rate, viral latency, dose-response curve of the viral load, long-term immunity, and mutation rate.

The lack of syndromic surveillance for coronaviruses has exposed gaps in global and local pandemic preparedness, leaving us vulnerable and putting extreme stress on our governments, healthcare facilities, medical supply chains, and economies.

Challenge

Using available data from the COVID-19 pandemic and/or past outbreaks of SARS and MERS (see below for some suggestions), design an innovative syndromic surveillance system that addresses the need for improved surveillance networks to better understand the threat of future waves of COVID-19 and/or future Coronavirus outbreaks.

Winners

SYNSYS is a syndromic surveillance system designed for the public and private healthcare sectors. The system uses data mined from the public domain, including Google Trends, various social media sites, census data, and satellite data, to predict outbreaks, both before they happen and while they are happening.
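The article doesn’t describe SYNSYS’s actual algorithms. As a rough sketch of the general idea behind trend-based syndromic surveillance, a signal such as weekly search volume for symptom terms can be screened against a trailing baseline; the function, data, and thresholds below are all invented for illustration:

```python
from statistics import mean, stdev

def flag_outbreak_weeks(volumes, window=4, z_threshold=3.0):
    """Flag weeks whose search volume is anomalously high relative to a
    trailing baseline (simple z-score rule; illustrative only)."""
    flagged = []
    for i in range(window, len(volumes)):
        baseline = volumes[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (volumes[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Synthetic weekly symptom-search volumes with a spike in week 7.
weekly = [50, 52, 48, 51, 49, 50, 53, 95, 110]
```

A real system would combine many such signals (social media, census, satellite data) and account for seasonality and reporting lag; this toy rule only shows the baseline-deviation principle.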

Team Member: Esha Datanwala

IoT Smart Homes Challenge

Overview

In a two-year partnership with the Ericsson-created Center of Excellence (CoE), the Academy invited Omani youth to join the Junior Academy and participate in a series of Internet of Things (IoT) challenges and activities. Students and mentors from Omani industry and academia participate in challenges around the topic of the Internet of Things, which offer opportunities to innovate and learn with peers and mentors around the globe.

Challenge

Design a smart home that integrates technology that collects, processes, and stores environmental and health information. The smart home you design should be sustainable and provide suitable feedback mechanisms for such information to promote not only sustainable energy use but also the physical and mental health of those living in the home. The design can include new innovations and/or alterations of existing technology.

In essence, the central challenge question you need to answer is:

How can a smart home create a healthier and more sustainable home environment?

Winners

The winning team, Smart Shelter, focused on using data, in particular the interconnected web of computing devices and digital machines known as the Internet of Things (IoT), to monitor energy and water usage and air quality, and to automatically improve the efficiency of service provision in the shelters. They also highlighted the use of data to enhance security, register new residents, and keep track of unsheltered people at risk in order to direct them to shelters with available space.

Team members: Al-Zahraa A. (Team Lead) (Oman), Tahra A. (Oman), Miaad A. (Oman), Taher A. (Oman)

Mentor: Venkatesan Subramaniyan (India)

Sponsor

ericsson logo vertical

This program is made possible by a two-year partnership between the Academy and the Ericsson-created Center of Excellence for Advanced Telecommunications and IoT. Throughout the program, Omani youth will build critically important 21st-century skills, hone their entrepreneurial and innovation mindsets, and build their digital knowledge and leadership potential.