Exploring 100 Years of Artificial Intelligence
The past, present, and future of artificial intelligence (AI) were discussed as part of the latest installment in the Tata Knowledge Series on AI & Society.
Published April 18, 2025
By Nick Fetty
Digital Content Manager

The future implications of AI's growth and its impact on society were the topic of a fireside chat between renowned computer scientist Alok Aggarwal, PhD, and Nick Dirks, President and CEO of The New York Academy of Sciences (the Academy).
Dr. Aggarwal is CEO and Chief Data Scientist at Scry AI, which he founded in 2014. The company “focuses on research and advanced development (R&D) in Artificial Intelligence, Data Science, and related disciplines.” To demystify AI for the public, he published the book The Fourth Industrial Revolution & 100 Years of AI (1950-2050), written for lay audiences.
In discussing the motivation for his book, Dr. Aggarwal explained how AI is part of “the Fourth Industrial Revolution” which started in 2011 and is projected to run through 2050.

He pointed out that the recently published book “doesn’t have a single piece of software code and almost no math.” Instead, it focuses on what AI is and what it will be: the “good, bad, and ugly.” Separately, he is working on a follow-up book for students studying business analytics and similar programs.
AI and the Business World
Dirks then shifted the conversation to focus on the business applications of AI. Dr. Aggarwal said he sees AI being most useful in pattern-recognition tasks.
“That pattern-recognition aspect is much faster because electrons are moving at the speed of light, unlike humans, where the ions are moving slowly,” he said. “Definitely in the long run, that pattern recognition aspect alone will make AI be extremely beneficial for humans in pretty much all areas.”
Dr. Aggarwal continued by saying it’s “not a matter of ‘if,’ but ‘when’” AI will be more fully embraced by society. He compared it to public acceptance of the internet, and its associated hype, in the late 1990s.
“I think, in many ways, hype is very good…because it leads to monetary support and makes the passionate inventors even more passionate,” Dr. Aggarwal said, adding that “it will take time.”
The Challenge of Driverless Cars for AI

Dirks pointed out that Google recently reduced investments in its driverless car program. He also referenced Yann LeCun, Turing Award winner and Chief AI Scientist at Meta, who, during another Academy fireside chat sponsored by Tata in March 2024, said that driverless car technology still has much room for improvement.
Dr. Aggarwal shared that driverless car technology goes back to the late 1970s in Japan. The technology was further developed in Germany, and then at American institutions including Carnegie Mellon University and the University of California, Berkeley. Despite these efforts, Dr. Aggarwal acknowledged that successfully integrating AI and driving has been a challenge. However, he pointed out several areas in which AI shows great potential.
For example, he said AI can be applied to laborious, mundane activities where humans are prone to mistakes, such as sifting through invoices to reconcile financial records or submitting the proper documentation for a mortgage loan. Furthermore, he said AI has shown promise in preventative healthcare; in detecting skin cancer, for example, it has proven to be as accurate as a radiologist.
“A lot of the problem right now is [demonstrating] these benefits rather than just inflating the hype,” said Dr. Aggarwal. “We need to actually show that it works in disparate cases.”
Curating Accurate Training Sets

Dirks pointed out that some AI systems are informed by various sources on the internet, which have varying levels of accuracy. He asked what can be done to curate accurate training sets to develop these technologies.
Dr. Aggarwal said the issue here isn’t so much the AI as the “human mirror” effect: many of the inputs in the training sets merely reflect reality, which can be outdated, inaccurate, or biased. He used the example of countries whose data sets do not treat women and men as equals; inputs from these countries can train the AI to adopt biased assumptions about gender roles.
“It’s no different from how we train our children,” said Dr. Aggarwal.
He then referred to “the imitation game” developed by computer pioneer Alan Turing. In this exercise, a human judge poses questions and must determine, without seeing the respondents, whether each answer came from another human or from a computer. The idea was that eventually computers would become smart enough that the judge could no longer tell the difference.
Dr. Aggarwal stressed the need for humans to be diligent and balanced in training these AI systems. Because of their strong processing power, AI systems can quickly amplify biases, misinformation, and other negative inputs in the data on which they were trained.
Closing Thoughts

Dirks and Dr. Aggarwal also discussed additional topics including the history of neural networks, the origin of the term “artificial intelligence,” the hype around advancements in computing in the mid-20th century, the definition of artificial general intelligence (AGI), companionship, job displacement, drug development, and more. After taking questions and comments from those in attendance, Dr. Aggarwal closed his talk by soliciting feedback from those who read his book and welcomed readers to contact him with their commentary.
This article provides a preview of the talk. Video of the full talk is available on-demand for Academy members. Sign up today if you aren’t already part of our impactful network.
This series is sponsored by Tata, a global enterprise, headquartered in India, comprising 30 companies across ten verticals. Read about other Academy events supported by Tata.