
Blog Article

At the Forefront of Artificial Intelligence

While artificial intelligence (AI) is currently experiencing a revolution, The New York Academy of Sciences has been at the forefront of pre-AI technologies since at least the 1960s.

Published March 4, 2026

By Nick Fetty

Computers and smartphones are a ubiquitous part of our lives, but the early ancestors of this technology would hardly be recognizable today.

Some of the earliest electronic computers in the middle of the 20th century were mammoth machines that took up entire rooms and crunched numbers for national defense purposes. Today, handheld smartphones can deliver us everything from live sports to dinner…literally. Similarly, the earliest forms of AI would be considered primitive based on our understanding today. But these technological precursors nonetheless played a significant role in the development and adaptation of today’s popular AI tools like ChatGPT and Google Gemini.

Here are five times when the Academy was ahead of its time with efforts to promote and advance what can be considered pre-AI technologies.

Computers Making Decisions (1960)

During a 1960 conference at the Academy, MIT researcher Warren S. McCulloch, MD, discussed a new “thinking machine” known as Leo. Leo was being used by managers at a British restaurant chain to operate more efficiently. As Dr. McCulloch pointed out during the event, this showed that machines were “moving in on the last realm of [humankind’s] sovereignty—making decisions.”

A psychologist by training with expertise in neurophysiology, psychiatry, and cybernetics, Dr. McCulloch was blunt in his assessment of the then-new technology’s potential.

“I don’t think brains are such marvelous things at all,” he said, according to reporting by Associated Press science writer John Barbour. “Man is a slow computer. He is prone to error.”

Dr. McCulloch further explained that human brains can process about “25 bits of information” per second, while “at least a million times as much can flow along the wires of a machine.” Unbeknownst to anyone at the time, this was a bit of foreshadowing.

Yann LeCun (right) during an event at the Academy on March 14, 2024. Photo by Nick Fetty/The New York Academy of Sciences.

Modern AI expert Yann LeCun echoed a similar sentiment during a 2024 Academy event. LeCun made the case that sensory inputs, as opposed to language, are superior for developing more efficient AI. He pointed out that while reading text or digesting language, the human brain processes information at about 12 bytes per second, compared with sensory inputs, such as observations and interactions, which the brain processes at about 20 megabytes per second.

“To build truly intelligent systems, they’d need to understand the physical world, be able to reason, plan, remember and retrieve. The architecture of future systems that will be capable of doing this will be very different from current large language models,” LeCun says.
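The gap between those two figures is easy to check with back-of-the-envelope arithmetic. A minimal sketch, using LeCun's cited rates (the variable names are mine):

```python
# Compare the two bandwidth figures LeCun cited:
# ~12 bytes/s for reading language vs. ~20 MB/s for sensory input.
language_rate_bps = 12              # bytes per second while reading text
sensory_rate_bps = 20 * 1_000_000   # ~20 megabytes per second of sensory input

ratio = sensory_rate_bps / language_rate_bps
print(f"Sensory input carries ~{ratio:,.0f}x more data per second")
# -> Sensory input carries ~1,666,667x more data per second
```

The ratio works out to roughly 1.7 million, strikingly close to McCulloch's 1960 claim that "at least a million times as much can flow along the wires of a machine."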

During his 1960 talk, Dr. McCulloch predicted that properly built machines would eventually replace humans in the workplace.

Multiplying Human Capacity (1967)

The concept of machines replacing humans has been a theme of science fiction and other parts of popular culture for decades. However, the concept seemed less like fiction when it was raised during the Academy’s 150th anniversary meeting in 1967.

Simon Ramo, PhD, an American physicist, engineer, and business leader, asserted that a robot society was inevitable, though he was optimistic about a more technologically based future. He pointed out that machines can do instantaneously what it takes the human brain months or even years to do. Instead of competing with humans, he saw machines as being a tool to multiply human capacity.

Alok Aggarwal (right) during an event at the Academy on December 5, 2024. Photo by Nick Fetty/The New York Academy of Sciences.

Alok Aggarwal, PhD, CEO and Chief Data Scientist at Scry AI, visited the Academy in 2024 to give a talk on his recently published book The Fourth Industrial Revolution & 100 Years of AI (1950-2050). Dr. Aggarwal would likely agree with Dr. Ramo about the utility of technology that improves human productivity. During his talk, Dr. Aggarwal pointed out that “AI can be applied to laborious, mundane activities, where humans are prone to making mistakes like sifting through invoices to reconcile financial records or submitting the proper documentation for a mortgage loan.”

However, not all predictions come to fruition. In 1967, Dr. Ramo suggested that machines could contribute to “instant democracy,” envisioning a future “where every home had an electronic voting machine, enabling all to participate in day-to-day decisions.” While political campaigns can use modern tools like social media, algorithms, targeted marketing, mobile games (such as the one developed by Shaping Science podcast guest Ian Bogost, PhD) and more in their attempt to win elections, it may be a while before voters are casting a ballot over their smartphones.

Voice Activated Commands (1971)

While movies like 2001: A Space Odyssey and shows like Star Trek depicted early concepts of human-computer interfaces in the late 1960s, the technology itself wasn’t as farfetched as it might have seemed at the time.

During this era, New Jersey-based Bell Labs was developing “a device that dials telephone numbers when it ‘hears’ spoken commands,” as reported by The Sciences magazine. The Sciences was an award-winning magazine focused on scientific news and research published by The New York Academy of Sciences between 1961 and 2001.

Ariel Ekblaw discusses HAL 9000 from 2001: A Space Odyssey and Computer from Star Trek during an episode of the Shaping Science podcast.

This then-new technology was being developed for individuals with physical impairments who would otherwise struggle with dialing a phone number. Focusing on the technical aspects of how the device functioned, The Sciences reported: “Integrated circuitry converts sound waves into electrical pulses that open and close the electromechanical switches necessary for obtaining dial tone, dialing and terminating a call.”
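The core idea, recognizing spoken words and translating them into dialing actions, can be sketched in a few lines of modern code. This is an illustration only: the real Bell Labs device worked in analog circuitry and electromechanical switches, not software, and the word list here is hypothetical.

```python
# Hypothetical sketch of voice dialing: map recognized spoken words to
# digits. The actual Bell Labs device used integrated circuitry and
# electromechanical switches rather than a lookup table in software.
WORD_TO_DIGIT = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def dial(spoken_words):
    """Convert a sequence of recognized words into a dialed number,
    ignoring anything that isn't a known digit word."""
    return "".join(WORD_TO_DIGIT[w] for w in spoken_words if w in WORD_TO_DIGIT)

print(dial(["five", "five", "five", "one", "two", "one", "two"]))  # -> 5551212
```

The hard part then, as now, was the "hearing" step: turning sound waves into reliable word identifications. Once that is done, the dialing logic itself is trivial.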

While the technology started off as a tool to help those with physical impairments, its applications today are much broader. From Amazon’s Alexa to Apple’s Siri and everything in between, voice-assisted devices are now used to make relatively mundane daily tasks more efficient. Voice-to-text capabilities enable us to do everything from safely sending text messages while driving to dictating notes that once needed to be manually transcribed.

The jury is still out on whether the future of voice-assisted technology will follow the helpful and supportive path of the beloved Computer from Star Trek or exhibit the troubling agentic and rogue properties of something like HAL 9000 from 2001: A Space Odyssey.

Computers and Consciousness (1985)

As computers continued to develop into the late 20th century, the idea that the technology could gain its own consciousness became an ethical and philosophical concern. This was addressed directly in a 1985 article published in The Sciences magazine.

In the article, senior editor Robert Wright pondered whether a computer or robot could be programmed in a way “that it will be aware of its calculations, and perhaps even be capable of experiencing anger, fear, or sympathy.” A growing number of what Wright called “scientific materialists” or “mechanics” felt that humans and other forms of life are “entirely explicable in terms of engineering.”

“Every aspect of behavior, sensation, and thought, they maintain, is a product of the way information is processed and transmitted; so presumably, it is possible, by controlling the flow of information, to replicate human experience with precision,” Wright wrote.

These “mechanics” felt that with the correct hardware and software, coupled with sufficient time, they could “create computers flushed with pride, riddled with doubt, or alienated by the rapid pace of technological and social change.” If or when computers developed emotions, feelings, and thoughts, the optimistic “mechanics” felt it would be “additional proof that science can conquer all.”

Wright did point out that the optimism of the “mechanics” could be misguided. He wrote: “If computers do someday evince a subjective life, the mechanics’ view of consciousness will have been undermined; if computers don’t show signs of consciousness, this silence will be an annoying reminder that the mechanics can never know for sure whether they are right about what consciousness is.”

While the technology has developed immensely in the 40 years since the article was published, philosophical and ethical concerns around computers gaining consciousness continue to be a hot topic today.

Nobody Knows You’re a Dog (1995)

While AI technologies like ChatGPT have helped to usher in the modern era of “chatbots,” the precursors to these virtual beings have existed since the commercial Internet’s early days in the 1990s.

Sherry Turkle, PhD, an MIT sociologist and psychologist, touched on this in a 1995 article in The Sciences, which was later adapted for her book Life on the Screen: Identity in the Age of the Internet. Prof. Turkle wrote about her experiences in “multi-user domains/multi-user dungeons,” known as MUDs, which were like early forms of chatrooms or social media. She described a bot known as Julia, developed at Carnegie Mellon University, which interacted with users so realistically that many struggled to tell whether “she” was real or fake.
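Chatbots of that era generally relied on keyword matching and canned replies rather than any real understanding. A toy sketch of the approach, with rules invented for illustration (this is not Julia's actual code):

```python
import re

# Toy keyword-matching responder in the spirit of 1990s MUD bots.
# Each rule pairs a keyword pattern with a canned reply; the rules
# and replies here are invented for illustration.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.IGNORECASE),
     "Hi there! What brings you here?"),
    (re.compile(r"\bwho are you\b", re.IGNORECASE),
     "Just another player wandering the MUD."),
    (re.compile(r"\bweather\b", re.IGNORECASE),
     "I hear it's lovely outside today."),
]

def respond(message):
    """Return the first canned reply whose pattern matches the message."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return "Hmm, tell me more."  # fallback keeps the conversation going

print(respond("Hi, who are you?"))  # -> Hi there! What brings you here?
```

With enough rules and a fallback that always keeps the conversation moving, even this shallow trick can feel surprisingly lifelike in short exchanges, which is exactly what made Julia so convincing to many MUD users.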

“As the boundaries erode between the real and the virtual, the animate and the inanimate, the unitary and the multiple self, the question becomes: Are we living life on the screen or in the screen?” Prof. Turkle writes.

In yet another instance of scientific prescience, English mathematician Alan Turing foresaw in the 1950s that people would eventually struggle to differentiate machines from fellow humans. To probe this question, he proposed the Turing test. Nitin Verma, PhD, a former AI and Society Fellow currently at the University of Illinois Urbana-Champaign, analyzed the relevance of the Turing test in the modern AI era.

“Passing a challenging test can be seen as a marker of progress. But would we truly rejoice in having our AI pass the Turing test, or some other benchmark of human–machine indistinguishability?” Dr. Verma ponders in a 2024 blog post.

Whether it’s life imitating art or art imitating life, The New Yorker really was on to something with its 1993 cartoon, which has since become a meme, in which Peter Steiner declared: “On the Internet, nobody knows you’re a dog.”

Read more about the Academy’s efforts with AI.


Author

Nick Fetty
Digital Content Manager
Nick is the digital content manager for The New York Academy of Sciences. He has a BA and MA in journalism from the University of Iowa as well as more than a decade of experience in STEM communications. Nick is also an adjunct instructor in mass media at Kirkwood Community College.