
Have We Passed the Turing Test, and Should We Really be Trying?

A black and white headshot of computer scientist Alan Turing.

The 70th anniversary of Turing’s death invites us to ponder: can we imagine AI models that will do well on the Turing test?

Published August 22, 2024

By Nitin Verma, PhD
AI & Society Fellow

Alan Turing (1912-1954) in 1936 at Princeton University.
Image courtesy of Wikimedia Commons.

Alan Turing is perhaps best remembered as the cryptography genius who led the British effort to break the German Enigma codes during WWII, work that provided crucial intelligence about German troop movements and helped bring the war to an end.

2024 has been a noteworthy year in the story of Turing’s life: June 7 marked 70 years since his tragic death in 1954. Four years before his death, in 1950, he kickstarted a revolution in digital computing by posing the question “can machines think?” and proposing an “imitation game” to answer it.

While answering this question has been a holy grail for theoretical computer scientists since the publication of Turing’s 1950 paper, the public launch of ChatGPT in November 2022 brought it to center stage in the global conversation.

In his landmark 1950 paper, Turing predicted: “[by about the year 2000] it will be possible to programme computers… [that] play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.” (p. 442). By “right identification”, Turing meant accurately distinguishing between human-generated and computer-generated text responses.

This “imitation game” eventually came to be known as the Turing test of machine intelligence. It is designed to determine whether a computer can successfully imitate a human to the point that a human interacting with it would be unable to tell the difference.

We’re well past the year 2000: Are we there yet?

In 2022, Google let go of Blake Lemoine, a software engineer who had publicly claimed that the company’s LaMDA (Language Model for Dialogue Applications) program had attained sentience. Since then, the closest we’ve come to seeing Turing’s prediction come true is, perhaps, GPT-4, deepfakes, and OpenAI’s “Sora” text-to-video model that can churn out highly realistic video clips from mere text prompts.

Some researchers argue that large language models (LLMs) such as GPT-4 do not yet pass the Turing test. Others have flipped the script and argued that LLMs offer a way to assess human intelligence by positing a reverse Turing test: what do our conversational interactions with LLMs reveal about our own intelligence?

Turing himself made a noteworthy remark about the imitation game in the same 1950 paper: “… we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.” (Emphasis mine; p. 436).

Would Turing have imagined the current crop of generative AI models, such as GPT-4, as ‘machines’ capable of “doing well” on the Turing test? I believe so, though we’re not quite there yet. As an information scientist, I believe that in 2024 AI has come closer than ever to passing the test.

If we’re not there yet, then should we strive to get there?

As with any technology ever invented, however much Turing may have been thinking only of the public good, there is always the potential for unforeseen consequences.

Technologies like deepfake apps and conversational agents such as ChatGPT still need human creativity to be useful and usable. Still, the advanced AI that powers them carries the potential to pass the Turing test, and that potential portends a range of consequences for society that deserve our serious attention.

Leading scholars have already warned about the ability of “fake” information to fuel distrust in public institutions, including the judicial system and national security. The upheaval in the public imagination caused by ChatGPT even prompted US President Biden to issue an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI in the fall of 2023.

We’ll never know what Turing would have made of the recent surge of AI advances in light of his own foundational work in theoretical computer science and artificial intelligence. His untimely death at the age of 41 deprived the world of one of the greatest minds of the 20th century, and of the still more extraordinary achievements he might have made.

But it’s clear that the advances and use of AI technology have brought society to a turning point that he anticipated in his seminal works.

It remains difficult to say when, or whether, machines will truly surpass human-level intelligence. But 70 years after Turing’s death, we are at a point where we can imagine AI agents that will do well on the Turing test. And if we can imagine it, we can someday build it too.

Passing a challenging test can be seen as a marker of progress. But would we truly rejoice in having our AI pass the Turing test, or some other benchmark of human–machine indistinguishability?

Improving Classroom Accessibility with AI

A photo of a city skyline with a superimposed graphic denoting different AI applications.

Winners of the Junior Academy Innovation Challenge Fall 2023: “Cognitive Classrooms”

Published August 14, 2024

By Nicole Pope
Academy Education Contributor

Sponsored by NEOM

Team members: Dawik D. (Team Lead) (Qatar), Atharv K. (India), Anoushka T. (India), Abhay B. (India), Asmit B. (India), Jefferson L. (United States)

Mentor: Aryan Chowdhary (India)

Some 250 million children worldwide lack access to a decent education due to extreme poverty, child labor, or discrimination, according to data from the United Nations. A shortage of teachers, a lack of resources, and logistical constraints further undermine countless children’s educational outcomes.

This talented international team, comprising students from India, Qatar, and the United States, tackled this massive disparity with their project AI4Access. Tasked with devising innovative ways of harnessing the power of immersive technologies like artificial intelligence/machine learning (AI/ML) and virtual reality/augmented reality (VR/AR) to create a more inclusive, fair, and efficient environment in classrooms and improve students’ learning experience, the team more than met the challenge.

The team members learned that students respond to different learning styles (visual, auditory, and kinesthetic), but traditional teaching favors read/write learners. According to the UN, one in 59 students is affected by a learning disability such as dyslexia, ADHD, dyscalculia, or dyspraxia, which undermines academic success in a rigid, one-size-fits-all education system. This is the aspect the AI4Access team chose to focus on.

Advancing Education Through Digital Technology

The team developed an AI-led application designed to diversify the education experience, give students access to new visualized learning styles, and enable teachers to monitor individual students’ performance and provide support when needed.

The tool analyzes each student’s learner profile and enables teachers to provide a personalized teaching plan that considers the student’s strengths and weaknesses. By providing visual learning features, such as 3D models and live simulations using VR/AR, the app enhances the learning experience and supports students with learning difficulties. Teachers can more easily track individual students’ progress and responses, and identify when individuals need additional attention.

The team drew on individual members’ skills to build their app. “I’ve enjoyed working with the team, capitalizing on our respective strengths for the best possible outcome,” explains Anoushka. “This journey helped me truly appreciate the power of collaboration and teamwork!” Their end product—an elegant app that uses OpenAI API, Python and Eleven Labs API to improve the classroom experience for both students and teachers—won praise from the judges.
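
For readers curious about the plumbing, here is a minimal sketch of how an app along these lines might chain the two services the article names: a chat model generates an explanation tailored to a learner profile, and a text-to-speech service voices it for auditory learners. This is not the team’s code; the model name, voice ID, prompt, and endpoint details are illustrative assumptions.

```python
# A minimal sketch (NOT the team's actual code) of how an app like AI4Access
# might chain a text model with text-to-speech: generate an explanation
# tailored to a learner profile, then voice it for auditory learners.
# Assumes OPENAI_API_KEY and ELEVEN_API_KEY are set in the environment;
# the model name and voice ID below are placeholders.
import os

import requests
from openai import OpenAI


def explain_for_learner(topic: str, learning_style: str) -> str:
    """Ask a chat model for an explanation adapted to a learning style."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a patient classroom tutor."},
            {"role": "user",
             "content": f"Explain {topic} for a {learning_style} learner."},
        ],
    )
    return response.choices[0].message.content


def narrate(text: str, voice_id: str = "VOICE_ID") -> bytes:
    """Convert text to audio via the ElevenLabs REST text-to-speech endpoint."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": os.environ["ELEVEN_API_KEY"]},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content  # audio bytes


if __name__ == "__main__":
    lesson = explain_for_learner("photosynthesis", "auditory")
    with open("lesson.mp3", "wb") as f:
        f.write(narrate(lesson))
```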

Their already impressive achievement is made even more outstanding by the difficulties they overcame to reach their solution. For six intense weeks, the team worked across time zones and at odd hours of the night to create their prototype app. “Even though we all had various commitments, whatever time I had spare, it would be dedicated to this even if it was midnight at my time!” explains Jefferson.

Sharpening Practical Skills

“Working countless hours at awkward times in the morning, just to meet up with your friends from halfway across the globe and work on something that truly motivates you is a feeling I cannot describe,” says Team Lead Dawik. “This project has taught me how to lead better, how to work with my peers and manage my time as well as the importance of meeting deadlines and staying committed to your work.”

Through the challenge, the team members were able to sharpen skills that will be essential in future endeavors, like teamwork and critical thinking. “My journey with this team has proven to be incredibly enriching. The team’s diverse skills and backgrounds, coupled with our unwavering unity, created an environment of continuous learning and personal growth,” believes Abhay. “We tackled challenges head-on, demonstrating resilience and innovative problem-solving.”

The Cognitive Classroom challenge was a wonderful learning opportunity for the members of the team and it left them hungry for more creative discoveries. “From late-night discussions to constructing prototypes and presentations, this environment taught me many things and opened new paths I never dreamed could exist,” explains Asmit.

His teammate Atharv concurs: “The diversity, unwavering support, and commitment to excellence of team members have pushed me to grow professionally and personally. I’m grateful to be part of this remarkable team, and I eagerly look forward to our next adventures.”

Read about other winners from the Fall 2023 Junior Academy Innovation Challenge.

A More Scientific Approach to Artificial Intelligence and Machine Learning

A researcher poses next to a vertical banner with the text "The New York Academy of Sciences."

Taking a more scientific perspective, while remaining ethical, can improve public trust in these emerging technologies.

Published August 13, 2024

By Nitin Verma, PhD
AI & Society Fellow

Savannah Thais, PhD, is an Associate Research Scientist in the Data Science Institute at Columbia University with a focus on machine learning. Dr. Thais is interested in complex system modeling and in understanding what types of information are measurable or modelable, and what impacts designing and performing measurements have on systems and societies.

*This interview took place at The New York Academy of Sciences on January 18, 2024. This transcript was generated using Otter.ai and was proofread for corrections. Some quotes have been edited for length and clarity.*

What are the big takeaways from your talk?

The biggest highlight is that we should be treating machine learning and AI development more scientifically. I think that will help us build more robust, more trustworthy systems, and it will help us better understand the way that these systems impact society. It will contribute to safety, to building public trust, and all the things that we care about with ethical AI.

In what ways can the adoption of scientific methodology make models of complex systems more robust and trustworthy?

I think having a more principled design and evaluation process, such as the scientific method approach to model building, helps us realize more quickly when things are going wrong, and at what step of the process we’re going wrong. It helps us understand more about how the data, our data processing, and our data collection contributes to model outcomes. It helps us understand better how our model design choices contribute to eventual performance, and it also gives us a framework for thinking about model error and a model’s harm on society.

We can then look at those distributions and back-propagate those insights to inform model development and task formulation, and thereby understand where something might have gone wrong and how we can correct it. So, the scientific approach really just gives us principles and a step-by-step understanding of the systems that we’re building, rather than what I often see: a hodgepodge approach where the only goal is model accuracy, and when something goes wrong, we don’t necessarily know why or where.
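
To make the contrast concrete, here is a minimal, hypothetical sketch (not from Dr. Thais’s talk) of one habit of principled evaluation: reporting error per data slice instead of a single accuracy number, so a failure can be localized to a subgroup rather than discovered, unexplained, in aggregate.

```python
# A hypothetical sketch (not from the talk) of per-slice evaluation:
# instead of one aggregate accuracy number, break error out by subgroup
# so a failure can be traced to the data or design step that caused it.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data: two features, a binary label, and a "slice" column standing in
# for any subgroup of interest (region, device type, demographic, ...).
df = pd.DataFrame({
    "x1": [0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.6, 0.7, 0.05, 0.55] * 10,
    "x2": [1.0, 0.2, 0.8, 0.1, 0.3, 0.9, 0.4, 0.2, 0.7, 0.5] * 10,
    "label": [0, 0, 0, 1, 1, 0, 1, 1, 0, 1] * 10,
    "slice": (["A"] * 5 + ["B"] * 5) * 10,
})

train, test = train_test_split(df, test_size=0.3, random_state=0)
model = LogisticRegression().fit(train[["x1", "x2"]], train["label"])

test = test.copy()
test["correct"] = (model.predict(test[["x1", "x2"]]) == test["label"]).astype(int)

# A gap between slices is a concrete, traceable signal that data collection
# or model design is failing one group, rather than an unexplained average.
print(test.groupby("slice")["correct"].mean())
```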

You have a very interesting background, and your work touches on various academic disciplines, including machine learning, particle physics, social science, and law. How does this multidisciplinary background inform your research on AI?

I think being trained as a physicist really impacts how I think about measurements and system design. We have a very specific idea of truth in physics. And that isn’t necessarily translatable to scenarios where we don’t have the same kind of data or the same kind of measurability. But I think there’s still a lot that can be taken from that, that has really informed how I think about my research in machine learning and its social applications.

This includes things like experimental design, data validation, and uncertainty propagation in models: really thinking about how we understand the truth of our model, and how accurate it is compared to society. That idea of precision and truth that’s fundamental to physics has affected the research that I do. But my other interests and backgrounds are influential as well. I’ve always been interested in policy in particular. Even in grad school, while doing a physics PhD at Yale, I did a lot of extracurricular work in advocacy and student government. That shaped how I think about how systems affect society, resource access, and more. It really all mixes together.

And then the other thing that I’ll say here is, I don’t think one person can be an expert in this many things. So, I don’t want it to seem like I’m an expert at law and physics and all this stuff. I really lean a lot on interdisciplinary collaborations, which are particularly encouraged at Columbia. For example, I’ve worked with people at Columbia’s School of International and Public Affairs as well as with people from the law school, from public health, and from the School of Social Work. My background allows me to leverage these interdisciplinary connections and build truly collaborative teams.

Is there anything else you’d like to add to this conversation?

I would reemphasize that science can help us answer a lot of questions about the accuracy and impact of machine learning models of societal phenomena. But I want to emphasize at the same time that science is only ever going to get us so far. I think there’s a lot that we can take from it in terms of experimental design, documentation, principles of model construction, observational science, uncertainty quantification, and more. But I think it’s equally important that as scientific researchers, which includes machine learning researchers, we make an effort both to engage with other academic disciplines and to engage with our communities.

I think it’s super important to talk to people in your communities about how they think about the role of technology in society, what they actually want technology to do, and how they understand it. That’s the only way we’re going to build a more responsible, democratic, and participatory technological future, where technology actually serves the needs of people and is not just seen as either a scientific exercise or as something that a certain group of people build and then subject the rest of society to, whether it’s what they actually wanted or not.

So I really encourage everyone to do a lot of community engagement, because I think that’s part of being a good citizen in general. And I also encourage everyone to recognize that domain knowledge matters a lot in answering a lot of these thorny questions, and that we can make ourselves better scientists by recognizing that we need to work with other people as well.

Also read: From New Delhi to New York

Tata Knowledge Series on AI & Society: 100 Years of AI with Dr. Alok Aggarwal

December 5, 2024 | 6:00 PM – 8:00 PM ET

Join Dr. Alok Aggarwal as he discusses the science behind the mystical and magical world of Artificial Intelligence and his new book The Fourth Industrial Revolution & 100 Years of AI (1950-2050): The Truth About AI & Why It’s Only a Tool.

Artificial Intelligence is ushering in a wave of change that will touch every aspect of our daily lives. Dr. Alok Aggarwal, one of the early innovators and developers in this field, sets out to demystify Artificial Intelligence by explaining its history, capabilities, and limitations. Aggarwal will explain the science and engineering behind AI in non-technical terms, catering to a diverse audience including product managers, program leaders, business leaders, consultants, students, aspiring entrepreneurs, and decision-makers. He will also cover numerous applications of AI already in use in vital inventions of the current Fourth Industrial Revolution, including the Internet of Things (IoT), blockchains, the metaverse, robotics, autonomous vehicles, three-dimensional printing, inventions related to predicting, mitigating, and adapting to rapid climate change, and innovations related to gene editing, protein folding, and personalized healthcare. Explore the transformative capabilities of AI to drive innovation in this accessible discussion.


Lead Sponsor

The blue and white logo for the Tata Transformation Prize.

A Life in Defiance of Gravity

An author presents during an event at the Academy.

New book explores black holes, massive gravity, how Einstein was ahead of his time, and learning from failure.

Published July 31, 2024

By Nick Fetty
Digital Content Manager

Photo by Nick Fetty/The New York Academy of Sciences

Theoretical physicist Claudia de Rham discussed her recently published book, The Beauty of Falling: A Life in Pursuit of Gravity, during the recent Authors at the Academy Series, moderated by Chief Scientific Officer Brooke Grindlinger, PhD, at The New York Academy of Sciences.


Professor de Rham opened the conversation by joking that she’s had to “defy gravity for most of her life in an effort to understand it.” She observed her body’s buoyancy during diving expeditions in the Indian Ocean. She gazed at Canadian waterfalls from overhead while piloting aircraft. She even endured the rigors of astronaut training. All of this, coupled with her study of theoretical physics, helped to inform her book.

“We have this playful relationship with gravity, I think from an early age you can see that. Everybody likes to defy gravity, I don’t think I’m the only one,” de Rham said with a smile.

She recalled an impactful moment from her childhood in Peru, when she went on an expedition into the Amazon jungle. Lying in her hammock, she peered up at a clear, star-filled night sky and was enveloped by feelings of serenity and bliss. She thought philosophically about how humankind is just one part of the greater universe, and theorized that gravity was the throughline connecting humankind to nature, to other humans, to everything in the universe.

“From that point on I realized I really want to explore the fundamental laws of nature much more,” she said.

De Rham’s life and career have taken her across the globe. In addition to Peru, her childhood included stints in Switzerland and Madagascar. She earned degrees in France, Switzerland, and England before taking a postdoc in Canada. She has also served as faculty at institutions in Switzerland and the United States.

The Dream of Becoming an Astronaut

Photo by Nick Fetty/The New York Academy of Sciences

De Rham currently serves as a professor of theoretical physics at Imperial College London, where her work falls at the intersection of gravity, cosmology, and particle physics. While she is now an accomplished physicist, her initial goal was to become an astronaut.

“I [knew] well the chances were very limited,” she said. “I was very realistic but still, if you have a dream, you should just go for it and see what happens.”

She spent more than two decades in her pursuit, despite there being no formal school or training regimen. She said that since the selection process occurs every 15-20 years, most people only get one shot in their lifetime. “There were 10,000 people who had the same thought as me, so I wasn’t the only one.”

The process involved completing the necessary medical, flight, and other training. Those who made it to the next stage then underwent psychological, psychometric, intelligence, and a “battery” of other evaluations over a one-year period.

She was among roughly 200 applicants who made it to the second round of evaluations, which focused more on team bonding and responding to stress. She was then one of 42, and one of the few women, to make it to the next stage, which involved “all possible medical tests that you can imagine” on “every single part of your body.”

One Step Backward, Two Steps Forward

Ultimately, it was a positive result on a newly developed tuberculosis (TB) screening that led to her being declared ineligible. The doctor explained that, because of a past infection, the test showed she had TB antibodies.

“So that was that. That was the end of the dream,” she said. “The dream is still there to some extent but also it changed shape.”

Even though she was disqualified for something beyond her control, she expressed no regrets about the time and effort she spent training.

“It’s not so much about the outcome at the end of the day, it’s about the journey and the experiences you have along the way,” she said.

She emphasized that the element of “potential failure” was important in the process because that’s how people learn and make progress. She quickly found that this approach to dealing with failure was applicable to her work as a scientist.

“As a theoretical physicist, when I fail, it’s just an equation that’s wrong, [and] I start over again,” she said. “To me it’s also part of this discovery with gravity where we know [the theory] does fail, and that’s actually something very positive because it tells us there is something to explore there.”

Einstein Was Right (Sort Of)

In 1915, Einstein proposed his theory of general relativity, and within a year he used it to predict the existence of gravitational waves, ripples in space and time. His contemporaries rejected the new theory, and even Einstein second-guessed himself, wondering whether gravitational waves could ever be detected. Roughly 20 years later, he almost published a paper with the definitive and provocative title “Do Gravitational Waves Exist? Answer: No!”

Photo by Nick Fetty/The New York Academy of Sciences

“He wasn’t satisfied not only by the fact that you couldn’t observe them but simply he wanted to claim that they were not part of reality, an illusion, a mathematical artifact,” said de Rham.

This paper was one of the first of his to undergo the peer review process, in which fellow scientists from related fields scrutinize research papers for scientific accuracy and soundness.

Einstein did not take kindly to the referee who questioned his definitive declaration about the nonexistence of gravitational waves; however, the critique did prod him to keep exploring. He eventually reworked his paper under the more accurate, less provocative title “On Gravitational Waves.”

“There’s a lesson in there for all of the scientists who complain about the peer review process,” Dr. Grindlinger, the moderator, chimed in. “Even Einstein benefitted from peer review.”

In 2016, scientists from the National Science Foundation’s Laser Interferometer Gravitational-Wave Observatory announced a significant breakthrough: the direct detection of gravitational waves, confirming Einstein’s prediction from a century prior.

The 2016 discovery relied on Earth-based instruments that detected the gravitational waves of two merging black holes. The ripples caused by this collision traveled through space and time for over a billion years before reaching the detectors on Earth.

The Beginning of a New Era

Today’s consensus in theoretical physics suggests that Einstein’s theory of general relativity will eventually fail. One example is inside Sagittarius A*, the supermassive black hole at the center of the Milky Way galaxy.

“For the failure of Einstein’s theory of general relativity, we don’t need to have any observations to know directly where it would fail,” said de Rham. “And yet we know that we need to have a new theory that goes beyond Einstein’s theory of relativity to overcome it.”

To fill the gaps in the research, de Rham has developed her own theory of “massive gravity.” Though, much like Einstein, she at times second-guesses her own idea.

Photo by Nick Fetty/The New York Academy of Sciences

“I’m not convinced that it’s a reality, but I am convinced that we should explore it,” said de Rham. “Because that’s how we learn.”

In 2011, de Rham, Gregory Gabadadze and Andrew Tolley developed a new, groundbreaking mathematical framework for the theory of massive gravity. Her work has profound implications for the area of research now dubbed “beyond Einstein gravity”, which includes exploring new types of particles in the universe and connecting the theories of gravity with current and next-generation astrophysics experiments.

“If gravity had a very small mass, then the messenger for gravity wouldn’t have as big of a reach anymore. That’s the idea behind the theory of massive gravity. You wouldn’t need to account for all the vacuum energy present in the whole of the universe to explain the accelerated expansion. You only account for a fraction of it and it leads to a smaller rate of acceleration of the universe,” said de Rham, succinctly summarizing her complex theory.
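
For readers who want the intuition in symbols: a massive mediator generically produces a Yukawa-suppressed static potential, which is one standard way (a gloss on the talk, not an equation quoted from it) of expressing that shorter reach:

```latex
V(r) \;\propto\; \frac{e^{-mr}}{r}
\qquad (m = 0 \text{ recovers the Newtonian } 1/r \text{ potential})
```

With m > 0, the potential is exponentially cut off beyond distances of order 1/m, so gravity’s messenger no longer reaches across the whole universe, matching the picture de Rham describes.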

Award-Winning Research

In recognition of her breakthrough research, de Rham was named the 2020 Blavatnik Awards for Young Scientists in the United Kingdom Laureate in the Physical Sciences & Engineering category. The support from the award enabled her to continue conducting impactful research in this field, particularly new and innovative ideas that may not be supported by other funding agencies. The award is free of restrictions and is the largest of its kind for early career researchers.

“Science is always much more fun and creative than science fiction,” de Rham said in closing.

Check out the other events from our 2024 Authors at the Academy Series

Full video of these events is available; please visit nyas.org/ondemand.

The Academy’s Century-Long History with Solar Energy

Solar panels with the shining sun in the background.

What started as novel research 100 years ago is a major source of energy today, in part because of a research prize established by an Academy member.

Published July 30, 2024

By Nick Fetty
Digital Content Manager

Abraham Cressy Morrison/Public Domain

While electric vehicles and solar panels are commonplace around New York these days, the city’s history with solar energy goes back at least a century.

The New York Academy of Sciences has been an incubator for solar energy research and promotion since the early part of the 20th century. This is when Academy member Abraham Cressy Morrison established “a prize of $100 (the equivalent of about $1,800 today) for the best paper on the question of whether released intra-atomic energy constitutes an important source of solar and stellar energy,” according to reporting from The New York Times.

Morrison, who served as the Academy’s President from 1938 to 1939, funded various awards and prizes promoting scientific research in the first half of the 20th century.

The Early Days of Solar Energy

While solar energy research was novel at the time the award was established, within five years researchers were making advancements that helped to prove the potential of this new energy source. “This is merely an indication of the speed with which scientific research makes progress today,” The Brooklyn Daily Eagle reported in 1929.

According to that same article, Morrison pushed back at the idea that his motives were commercial, instead emphasizing his desire to advance science for science’s sake.

“It is of much more interest to me to know how the sun creates and continues its energy,” Morrison was quoted as saying. “There is a gap in our knowledge of the sun and throughout the heavens there is a question mark that challenges us.”

Morrison was not the only scientist from this era to see solar as a potential energy source. The sentiment was shared by Thomas Edison, who happened to be a Fellow of the Academy. Around this time, the title of “Fellow” was bestowed upon active resident members credited with significant scientific achievements.

In a 1929 interview with Forbes magazine, Edison was asked, “Do you believe that the age of electrical invention and discovery is over?” The 82-year-old Edison responded simply, “No; just started.”

Later in the interview he was asked, “Do you believe the time will come when the world petroleum supply will be exhausted and man will turn to electric vehicles?” Oddly enough, he didn’t quite see the potential of EVs, answering, “If petroleum was exhausted, we can get power for automobiles from powdered coal, benzol, alcohol.”

Research Published in Annals and Transactions

The research that resulted from Morrison’s prizes would go on to be published in Annals of the New York Academy of Sciences, the Academy’s academic journal that dates back to 1823.

Volume XLII, Article 2 of Annals, published in 1941, focused on “The Fundamental Properties of the Galactic System.”  Academic papers published in this issue examined topics like “The Luminosity Function” and “The Stellar Distribution of High and Intermediate Latitudes.”

The issue also acknowledged Morrison directly, stating “This publication is due to the generosity of Mr. A. Cressy Morrison, who, through the establishment of the A. Cressy Morrison series of prizes in astronomy, has stimulated many noteworthy investigations on the sources of stellar energy.”

The Academy also devoted entire conferences to this line of research during this era. An astronomical conference in 1939, entitled “The Internal Constitution of the Stars,” brought in presenters from as far away as Finland and Czechoslovakia. The conference was so well-received that “[i]t was unanimously decided to follow up this meeting with a second conference to be held next fall,” according to Transactions of the New York Academy of Sciences.

Academy Awards Support Solar Energy (2018-2021)

Solar energy continues to be part of the Academy’s programming today from Awards to Education. Several recent recipients of the Blavatnik Awards for Young Scientists, sponsored by the Blavatnik Family Foundation and administered by the Academy, have made significant scientific research contributions to the field.

Henry Snaith, the 2018 Blavatnik Awards in the United Kingdom Laureate in Physical Sciences & Engineering, who serves as the Binks Professor of Renewable Energy at the University of Oxford, found that metal halide perovskite materials can be employed in highly efficient solar cells. Snaith’s research aims to significantly reduce costs for “photovoltaic solar power [which] could help propel society to a sustainable future.”

Xiaoming Zhao, the 2021 Blavatnik Regional Awards Finalist in Chemistry and now on the faculty at Nanjing University of Aeronautics and Astronautics, has conducted extensive research on “perovskites,” which are less expensive and easier to produce than silicon-based solar cells. His research found “record-breaking efficiency and high stability after long-term use.”

Daniel Straus, the 2021 Blavatnik Regional Awards Winner in Chemistry and an assistant professor of chemistry at Tulane University, has advanced solar cells in two ways as a materials chemist. First, he “identified a structural instability in a promising new solar cell material, known as cesium lead iodide,” then he “also demonstrated a new technique to make chiral, or asymmetric, materials from very simple non-chiral molecules.”

Academy Awards Support Solar Energy (2022-2024)

Menny Shalom, the 2022 Blavatnik Awards in Israel Laureate in Chemistry and a professor of chemistry at Ben-Gurion University of the Negev, is developing stable, low-cost materials that “can be utilized for applications in photocatalytic and photo-electrochemical reactions and the development of solar cells, batteries, and fuel cells.”

Svitlana Mayboroda, the 2023  Blavatnik National Awards Physical Sciences & Engineering Laureate and McKnight Presidential Professor of Mathematics at the University of Minnesota, conducts research that provides “physicists with a new fundamental understanding of matter yielding improvements in crucial 21st century technologies, including LED lighting, semiconductors, and solar cells.”

Jooho Lee, the 2023 Blavatnik Regional Awards Laureate in Chemistry and an assistant professor of chemistry and chemical biology at Harvard University, studies “emergent functional materials, including solar cells, electrocatalysts for the hydrogen economy, and optoelectronics” at the microscale.

Samuel D. Stranks, the 2024 Blavatnik Awards in the United Kingdom Chemical Sciences Finalist and a professor of optoelectronics at the University of Cambridge, conducts research to make perovskite solar cells more commercially viable. His “work particularly sheds light on where efficiency losses are in perovskite materials and how they degrade over time, providing critical guidance to engineer long-lasting and high-performing commercial solar cells.”

Academy Educational Initiatives Advance Solar Energy

Renewable energy, specifically solar, was a component in the Junior Academy’s spring 2022 innovation challenge, sponsored by Ericsson. The winning team suggested utilizing solar panels as an energy source for their smart home concept.

Junior Academy member Sthuthi S. wanted to develop a solar panel that wouldn’t negatively impact wild birds. She and her team suggested using “infrared sensors and speakers [that produce] beeping noises at 3 kHz [to] deter birds from landing on solar panels.”

Fellow Junior Academy member Sharon L. expressed her optimism about future advancements in solar energy. “Finally, the development of new renewable energy sources — from paint-on solar cells to microgrids — are soon going to provide a democratization of energy to all corners of the world,” she said in 2017 for an article examining the next 100 years of scientific achievement. “It’s incredibly exciting to be living in a generation where we’ll have the opportunity to contribute to such innovative research!”

According to data from the Solar Energy Industries Association, cumulative U.S. solar installations grew from less than 20,000 megawatts (MWdc) of installed capacity in 2010 to nearly 200,000 MWdc in 2024. Similarly, data from the International Energy Agency shows that the number of battery electric and plug-in hybrid vehicles in the U.S. rose from roughly 200,000 in 2013 to 4.8 million in 2023.
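
For a sense of pace, here is the back-of-the-envelope growth arithmetic implied by those rounded figures (my calculation, not the agencies’):

```latex
\left(\frac{200{,}000}{20{,}000}\right)^{1/14} - 1 \approx 18\%\ \text{per year (solar capacity, 2010 to 2024)},
\qquad
\left(\frac{4{,}800{,}000}{200{,}000}\right)^{1/10} - 1 \approx 37\%\ \text{per year (EVs, 2013 to 2023)}
```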

The Academy is at the forefront of budding solar energy technologies that will help power the future. So, next time you see an EV driving down Broadway, or an array of solar panels on a rooftop, remember that the technology has been a work in progress for at least a century. And the Academy has played its role in leading the “charge.”

The Complex Ecosystem of Artificial Intelligence

An author presents during an event at the Academy.

Journalist Madhumita Murgia discusses the potential impact of AI, particularly on disenfranchised populations, in her new book Code Dependent: Living in the Shadow of AI.

Published July 16, 2024

By Nick Fetty
Digital Content Manager

Photo by Nick Fetty/The New York Academy of Sciences

Nicholas Dirks, President and CEO of The New York Academy of Sciences, recently sat down with journalist and author Madhumita Murgia to talk about her new book, Code Dependent: Living in the Shadow of AI, as the latest installment of the Tata Knowledge Series on AI & Society, sponsored by Tata Sons.

Photo by Nick Fetty/The New York Academy of Sciences

From Scientist to Journalist

The discussion kicked off with Murgia talking about her own journey, which began in Mumbai, India. When considering her major at the University of Oxford, she had to decide whether she’d pursue studies in a scientific field or English. She chose the former.

“I think I made the right choice,” said Murgia. “I learned about the scientific method, more than the facts and the research. [I developed] a deep respect for how science is done and how to analyze data.”

After graduating with her undergraduate degree in biological sciences, she remained at Oxford to complete a master’s in clinical immunology. She was part of a team that worked on an AIDS vaccine before earning an M.A. in science journalism from NYU and transitioning to media. Murgia joined the staff of the Financial Times in 2016, serving as the European technology correspondent, and in 2023 was named the newspaper’s first Artificial Intelligence Editor.

“[Journalism is about] understanding complex subjects by talking to the experts, but then distilling that and communicating it to the rest of the world,” said Murgia. “[I want to] bring these complex ideas to people to show them why it matters.”

This basis in science and journalism helped to inform Murgia’s book, which was released in June by Macmillan Publishers.

AI’s Potential in Healthcare

Photo by Nick Fetty/The New York Academy of Sciences

While much of Murgia’s book focuses on societal concerns associated with AI, she highlights healthcare as an area where AI shows positive potential. Murgia discusses an app called Qure.ai, which analyzes chest x-rays to predict the likelihood of tuberculosis (TB), a growing health issue in India. TB infection affected more than 30 percent of Indians over the age of 15 between 2019 and 2021, according to the National Prevalence Survey of India.

But Murgia knows that stories about people and their experiences are the most compelling way to make a point. She used the example of patients and doctors, both of whom are dependent on these emerging technologies but in different ways.

“For me, the most optimistic I ever feel about AI is when I think about it in relation to science and health,” said Murgia.

Murgia writes about Ashita Singh, MD, a physician who practices in rural western India, often serving tribal populations. According to Murgia, Dr. Singh described medicine as “an art rather than a science.”

The doctor focuses on making human connections when treating patients, knowing that resources in her area are extremely limited. AI has shown potential to fill these resource shortfalls, in part because of Dr. Singh’s willingness to train, test, and implement AI technologies within her medical practice.

 “TB is a curable disease. People shouldn’t be dying from it,” said Murgia. “In places where there aren’t many [medical professionals], this is the next best option.”

The Global Infrastructure Training the AI

Photo by Nick Fetty/The New York Academy of Sciences

A consistent theme throughout the book is AI’s at-times exploitative nature on laborers, particularly those at the lower rungs of the socioeconomic ladder. Murgia tells the disturbing story of workers in Africa who are tasked with moderating content for Meta, which owns the popular social media platforms Facebook and Instagram.

While this started out as a way to empower workers, enabling them to develop tech skills while earning a paycheck, it eventually turned exploitative. Workers became traumatized by the often sexual and violent nature of the content they were forced to view and then manually judge for violations of the platform’s terms of service.

“The more I dug into it, it became apparent that there were huge limitations in how this industry operates,” said Murgia. “The biggest one being the amount of agency these workers are allowed to exercise.”

Murgia cautioned against the technologically deterministic take, which can overemphasize the societal benefits of AI. She compared it to colonialism in that disenfranchised populations are given a small amount of power, but not enough to fight back in a meaningful way.

Empowering Agency Through AI

Photo by Nick Fetty/The New York Academy of Sciences

Murgia said the public may feel a lack of control when using AI because of its complex and fast-moving nature. Typically, the individuals building the systems have the most say.

She added that this is further complicated by the fact that the majority of research and development is done by part-time scientists within corporate environments. These scientists, some of whom continue to hold on to academic appointments, are often bound by financial obligations alongside their ethical responsibilities.

Murgia argues that independent scientists, not bound by corporate obligations, are crucial in areas like evaluation and alignment. Experts in fields like science, medicine, and education provide valuable input when developing these systems, particularly in pinpointing weak points and limitations.

One example of effective, non-corporate scientific work on AI is the AI Safety Institutes in the United States and the United Kingdom. Murgia feels these agencies are effective because they are run by computer scientists and machine learning experts rather than regulators and policymakers.

Photo by Nick Fetty/The New York Academy of Sciences

 “That gives you a sense of accountability,” said Murgia. “And I think that’s how we can all contribute as it gets implemented into the education system, into hospitals, into workplaces.” 

Murgia raised numerous other ethical concerns about AI such as apps underestimating (and therefore underpaying) distances for couriers and the legal gray area of facial recognition software. She also points out threats posed by AI-manipulated video, which often target and sexualize women. AI is also serving as a replacement for romantic human companionship, as illustrated by a Chinese company that has generated half a million AI girlfriends for lonely men.

In his closing remarks, Nicholas Dirks thanked Murgia and set the stage for future collaboration.

“I heard a lot of encouragement for the projects and initiatives we’re doing here from you, so hopefully we can continue to get advice on how we can be a player in this incredibly complex ecosystem that we’re all now part of, whether we know it or not,” he said.

Also from the Tata Knowledge Series on AI & Society: The Promise of AI with Yann LeCun

Deepfakes and Democracy in the Age of AI


September 17, 2024 | 6:30 PM – 9:00 PM

A recent Associated Press poll reveals that 58% of US adults across both political parties believe that AI will amplify the spread of misinformation in the 2024 presidential election.  Despite this widespread distrust, some political candidates have already leveraged deepfake ads in elections, utilizing AI-generated images and text-to-voice converters to craft highly realistic visuals that blur the line between truth and deception.

Beyond influencing public opinion with such deepfakes, AI can also skew election outcomes by deploying chatbots on a massive scale to target millions of voters with tailored political messages. While AI-enabled technologies present significant risks to the integrity of elections and to societal cohesion, they could also enhance our democratic institutions. This technology can boost civic engagement and strengthen the electoral system by increasing accessibility and mitigating existing biases.

Join us on September 17th for a conversation with a panel of experts from political consulting, social neuroscience, and deepfake technology to explore AI’s dual potential to bolster and undermine the political system. This program is available in person and virtually, with member tickets as low as $10.

The Academy strongly recommends in-person participation to network with fellow participants and be prioritized throughout the Q+A session.

15th Annual Machine Learning Symposium

October 18, 2024 | 9:00 AM – 6:00 PM

Machine Learning, a subfield of computer science, involves the development of mathematical algorithms that discover knowledge from specific data sets and then “learn” from the data in an iterative fashion that allows predictions to be made. Today, Machine Learning has a wide range of applications, including natural language processing, search engine optimization, medical diagnosis and treatment, financial fraud detection, and stock market analysis.
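
As a minimal illustration of that fit-then-predict loop, here is a generic scikit-learn example (illustrative only, not material from the symposium):

```python
# A generic "learn from data, then predict" loop using scikit-learn
# (illustrative only, not material from the symposium).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # a classic labeled data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)  # "discover knowledge" from data
print("held-out accuracy:", clf.score(X_test, y_test))  # then predict on unseen data
```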

This Symposium, the fifteenth in an ongoing series presented by the Machine Learning Discussion Group at The New York Academy of Sciences, will feature:

  • Keynote Presentations from leading researchers in both applied and theoretical Machine Learning
  • Spotlight Talks: A series of short, early-career investigator podium presentations across a variety of topics at the frontier of Machine Learning; and
  • Poster Presentations

From New Delhi to New York


Academy Fellow Nitin Verma is taking a closer look at deepfakes and the impact they can have on public opinion.

Published April 23, 2024

By Nick Fetty
Digital Content Manager

Nitin Verma’s interest in STEM can be traced back to his childhood growing up in New Delhi, India.

Verma, a member of the inaugural cohort for the Artificial Intelligence (AI) and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, remembers being fascinated by physics and biology as a child. When he and his brother would play with toys like kites and spinning tops, he would always think about the science behind why the kite stays in the sky or why the top continues to spin.

Later, he developed an interest in radio and was mesmerized by the ability to pick up radio stations from far away on the shortwave band of the household radio. In the early 1990s, he remembers television programs like Turning Point and Brahmānd (Hindi: ब्रह्मांड, literally translated to “the Universe”) further inspired him.

“These two programs shaped my interest in science, and then through a pretty rigorous school system in India, I got a good grasp of the core concepts of the major sciences—physics, chemistry, biology—and mathematics by the time I graduated high school,” said Verma. “Even though I am an information scientist today, I remain absolutely enraptured by the night sky, physics, telecommunication, biology, and astronomy.”

Forging His Path in STEM

Verma went on to pursue a bachelor’s in electronic science at the University of Delhi where he continued to pursue his interest in radio communications while developing technical knowledge of electronic circuits, semiconductors and amplifiers. After graduating, he spent nearly a decade working as an embedded software programmer, though he found himself somewhat unfulfilled by his work.

“In industry, I felt extremely disconnected with my inner desire to pursue research on important questions in STEM and social science,” he said.

This lack of fulfillment led him to the University of Texas at Austin where he pursued his MS and PhD in information studies. Much like his interest in radio communications, he was also deeply fascinated by photography and optics, which inspired his dissertation research.

This research examined the impact that deepfake technology can have on public trust in photographic and video content. He wanted to learn how people come to trust visual evidence in the first place, and what is at stake with the arrival of deepfake technology. He found that perceived or actual familiarity with content creators and depicted environments, together with context, prior beliefs, and prior perceptual experiences, guides public trust in visual material.

“My main thesis is that deepfake technology could be exploited to break our trust in visual media, and thus render the broader public vulnerable to misinformation and propaganda,” Verma said.

A New York State of Mind

Verma captured this image of the historic eclipse that occurred on April 8, 2024.

After completing his PhD, he applied for and was admitted into the AI and Society Fellowship. The fellowship has enabled him to further his understanding of AI through the weekly lecture series, collaborations with researchers at New York University, presentations around the city, and projects with Academy colleagues such as Marjorie Xie and Akuadasuo Ezenyilimba.

Additionally, he is part of the Academy’s Scientist-in-Residence program, in which he teaches STEM concepts to students at a Brooklyn middle school.

“I have loved the opportunity to interact regularly with the research community in the New York area,” he said, adding that living in the city feels like a “mini earth” because of the diverse people and culture.

In the city he has found inspiration for some of his non-work hobbies such as playing guitar and composing music. The city provides countless opportunities for him to hone his photography skills, and he’s often exploring New York with his Nikon DSLR and a couple of lenses in tow.

Deepfakes and Politics

In much of his recent work, he’s examined the societal dimensions (culture, politics, language) that he says are crucial when developing AI technologies that effectively serve the public, echoing the Academy’s mission of “science for the public good.” With a polarizing presidential election on the horizon, Verma has expressed concerns about bad actors utilizing deepfakes and other manipulated content to sway public opinion.

“It is going to be very challenging, given how photorealistic visual deepfakes can get, and how authentic-sounding audio deepfakes have gotten lately,” Verma cautioned.

He encourages people to refrain from reacting to and sharing information they encounter on social media, even if a post bears the signature of a credible news outlet. Basic vetting, such as visiting the purported news organization’s actual webpage and checking a post’s timestamp, can serve as a good first line of defense against disinformation, according to Verma. Particularly when viewing material that reinforces one’s beliefs, Verma challenges viewers to ask themselves: “What do I not know after watching this content?”

While Verma has concerns about “the potential for intentional abuse and unintentional catastrophes that might result from an overzealous deployment of AI in society,” he feels that AI can serve the public good if properly practiced and regulated.

“I think AI holds the promise of attaining what—in my opinion—has been the ultimate pursuit behind building machines and the raison d’être of computer science: to enable humans to automate daily tasks that come in the way of living a happy and meaningful life,” Verma said. “Present day AI promises to accelerate scientific discovery including drug development, and it is enabling access to natural language programming tools that will lead to an explosive democratization of programming skills.”

Read about the other AI and Society Fellows.