
Autoimmunity Meets Innovation: Cell Therapies

Join leading experts at the forefront of cell biology, immunology, and autoimmunity research for a one-day event on April 30th, 2026, in New York City. The New York Academy of Sciences invites you to Autoimmunity Meets Innovation: Cell Therapies, where top researchers from academic institutions, industry professionals, and regulatory experts will explore the scientific and implementation opportunities for advancing cell therapy approaches in autoimmunity. Autoimmune diseases affect a significant portion of the population, imposing substantial economic and societal burdens due to their chronic nature and profoundly impacting the quality of life of those affected.

Cutting-edge cell therapies, including CAR-T cells, CAR-Treg cells, CAR-NK cells, and CAR-MSC cells, have the potential to transform the landscape of autoimmune disease treatment. Early-stage clinical trials are already underway for a range of autoimmune conditions, such as type 1 diabetes, multiple sclerosis, myasthenia gravis, systemic lupus erythematosus (SLE), and others.

This symposium will highlight progress in the development of disruptive scientific approaches and game-changing technologies for the treatment of autoimmune diseases in both pre-clinical and clinical contexts. Don’t miss the chance to engage in dialogue and collaboration with renowned experts who will pave the way for an ecosystem that supports future scientific breakthroughs in autoimmunity research and treatment.

Sponsors

Presented By

The Biochemical Pharmacology
Discussion Group

Lead Supporters

Let’s Talk Genetics: A Workshop for Educators and Science Communicators

A graphic with a DNA helix.

December 2, 2025 | 5:00 PM – 7:00 PM ET

How does your health relate to your genes? What can (and what can’t) commercial ancestry testing tell you? How does law enforcement use DNA in criminal investigations? Join Personal Genetics Education and Dialogue (PGED) for an interactive workshop about genetics advances and their applications, including how to foster conversations about these topics in classrooms and community spaces.

Participants will learn about how genetics can intersect with personal and societal interests, including the use of genetic information in health, ancestry testing, and law enforcement. They will participate throughout the workshop, including in an online game, and will be encouraged to ask questions and share their perspectives with other attendees. Although this interactive workshop is geared towards educators and scientists interested in public engagement, anyone interested in genetics is welcome to attend.

Please note that the workshop is limited to live attendance and will not be recorded.

Sponsor

Personal Genetics Education and Dialogue (PGED) is a public engagement with science program based in the Department of Genetics at Harvard Medical School. For over eighteen years, they have raised awareness and inspired curiosity, reflection, and dialogue about genetics. They create resources and offer programs that explore the relevance and impact of genetics in people’s lives. By highlighting the “personal” in genetics, they strive to help people build knowledge and confidence to speak up, ask questions, and make informed decisions based on their needs and values.

Speaker

Rob O’Malley, PhD

Strategic Engagement Lead, Personal Genetics Education & Dialogue (PGED), Harvard Medical School

Rob is a biological anthropologist who shifted from a career studying wild chimpanzees to one focused on public engagement with science. Rob has expertise in evidence-based public engagement approaches, with a particular interest in how history, culture, and worldview (including faith and spirituality) inform peoples’ perspectives on genetics and related sciences. He helps to develop and facilitate workshops, co-creates and edits formal and informal education resources, and identifies and pursues grants and other funding opportunities to support PGED’s work. Rob is also the education committee co-chair for the American Association of Biological Anthropologists (AABA).

Pricing

All: Free

The New Wave of AI in Healthcare 2026

The New York Academy of Sciences is proud to present The New Wave of AI in Healthcare 2026.

Artificial intelligence and digital technologies are transforming healthcare at an unprecedented pace—reshaping how we diagnose, treat, and deliver care. From advanced machine learning applications to real-world evidence and patient-facing digital tools, innovation is accelerating rapidly, bringing both extraordinary promise and complex challenges for clinicians, researchers, and regulators.

To spotlight these breakthroughs, the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai and The New York Academy of Sciences will host a two-day, in-person symposium in New York City: The New Wave of AI in Healthcare.

This premier event will convene leading scientists, clinicians, industry innovators, and policy experts at the intersection of computer science and medicine to share cutting-edge research, explore pressing ethical and regulatory considerations, and build collaborations that shape the future of healthcare. The symposium will not only showcase the latest scientific advances but also foster interdisciplinary dialogue and networking to ensure that AI-driven healthcare innovations are equitable, ethical, and impactful.

Sponsors

Presented By

The New York Academy of Sciences logo

207th Annual Meeting of The New York Academy of Sciences

November 6, 2025 | 5:30 PM – 7:30 PM ET

115 Broadway, 8th Floor, New York, NY 10006

Please note this is a members-only event. Any non-members who attempt to register will not receive any confirmation details as they are not eligible to attend.

The movie screening will be available exclusively to in-person attendees. Virtual participants will not have access to the screening.

Join us for the 207th Annual Meeting of The New York Academy of Sciences. Academy Chair of the Board Peter Salovey and Senior Vice President Chenelle Bonavito Martinez will kick off the event with a welcome, followed by President and CEO Nicholas Dirks, who will share updates on the Academy’s latest initiatives.

The program will also feature a special screening of Wild Hope: Mission Impossible. This compelling film highlights the late-career epiphany of renowned scientist Pat Brown, who abandoned his academic career to fight global warming and biodiversity collapse. Against all odds, he developed the revolutionary and delicious plant-based Impossible Burger, which has had a profound impact on the global food industry.

Members won’t want to miss this exclusive in-person-only screening, offering a unique look at one of the most impactful scientific breakthroughs of our time.

Sponsored by

HHMI and Tangled Bank Studios Logo

Official Business

  • During this meeting, you will vote for the 2025 Board of Governors.
  • You will be asked to approve the minutes of The New York Academy of Sciences 206th Annual Meeting of Membership held November 14, 2024.

Agenda

Please note the agenda is subject to change.

5:30 – 5:35 PM

Welcome Address to Academy Members – C. Martinez, P. Salovey


5:35 – 5:40 PM

New Business – P. Salovey


5:40 – 5:45 PM

Finance Report – C. Peeters, A. Miller


5:45 – 5:53 PM

President’s Report – N. Dirks

  • Update on Academy Programs

5:53 – 5:58 PM

Q&A


5:58 – 6:00 PM

Closing Remarks and Adjournment – P. Salovey


6:00 – 7:30 PM

Movie Screening of Wild Hope: Mission Impossible & Q&A

Available exclusively to in-person attendees.

  • Introduction of film, shared by Jared Lipworth, Executive Producer at HHMI Tangled Bank Studios
  • 40-minute movie screening
  • Q&A with Pat Brown and Jared Lipworth, moderated by Nick Dirks

Pricing

Member: Free

This event is exclusive to Academy members.

Distinguished Lecture Series: Kwame Edwin Otu

April 13, 2026 | 4:30 PM – 7:00 PM ET

115 Broadway, 8th Floor, New York, NY 10006
or join virtually by Zoom

Join us for our Distinguished Lecture Series featuring Kwame Edwin Otu.

Speaker

Kwame Edwin Otu

Associate Professor of African Anthropology,
African Studies Program,
Georgetown University

Pricing

All: Free

About the Series

Since 1877, the Anthropology Section of The New York Academy of Sciences has served as a meeting place for scholars in the Greater New York area. The section strives to be a progressive voice within the anthropological community and to contribute innovative perspectives on the human condition nationally and internationally. Learn more and view other events in the Anthropology Section series.

Distinguished Lecture Series: Pamela Geller

March 2, 2026 | 4:30 PM – 7:00 PM ET

115 Broadway, 8th Floor, New York, NY 10006
or join virtually by Zoom

Join us for our Distinguished Lecture Series featuring Pamela Geller.

Speaker

Pamela Geller

Professor of Anthropology,
University of Miami

Pricing

All: Free


Anthropology Graduate Student Speed Networking Night

December 10, 2025 | 5:30 PM – 7:30 PM ET

CUNY Graduate Center, 365 5th Ave, Room 6402 (Brockway Room),
New York, NY 10016

Meet fellow New York area anthropology graduate students in this fast-paced networking event! Rotate through 3–5 minute chats and connect across research interests. Refreshments will be served. All Master’s and PhD students in anthropology and anthropology-adjacent fields in the New York area are welcome.

  • Build spaces of collaboration
  • Practice your elevator pitch
  • Find people with similar interests

RSVP at NYASGradStudentMixerDec2025.eventbrite.com.

Note: CUNY students are required to show a valid CUNY ID at the front desk in the lobby and non-CUNY visitors are required to show a government-issued photo ID upon entering the building.

Pricing

All: Free


Academy’s Past – Moving on Up(town)

An old shot of the exterior of an NYC mansion.

The Academy would spend more than half a century in its next home, which was located on the city’s Upper East Side.

Published August 28, 2025

By Nick Fetty

Ziegler-Woolworth Mansion | 2 E. 63rd Street | 1949-2005

When the Academy moved into the Ziegler-Woolworth Mansion on East 63rd Street it once again had its own standalone facility. The mansion served as the Academy’s home into the 21st century.

The Academy’s procurement of this space can be directly attributed to Eunice Miner, the Academy’s leader as Executive Director for three decades, from 1939 to 1967. Among her many accomplishments, Miner played a significant role in growing the Academy’s membership through the middle of the 20th century. After hearing her speak about the need for a permanent home, Norman Woolworth was inspired to gift his mansion to the Academy.

The 32-room mansion, designed by Frederick Sterner and constructed in 1921, first served as a home for William Ziegler, Jr., the Iowa-born, adopted nephew of William Ziegler, Sr., who made his fortune as a co-founder of the Royal Baking Powder Company. Junior and his first wife lived in the mansion for barely a year before their marriage ended in divorce. Though plans were initially discussed to convert the building into an actors’ hospital, Norman Woolworth purchased the mansion in 1929 and resided there with his family before giving it to the Academy.

Essentially in its Native State Today

The massive neo-Italian Renaissance style house is roughly 75 feet wide and extends to the back of the property line. The building features a courtyard reminiscent of “a Roman villa.” The New York Times reported that several of the architectural highlights were imported from Europe, including a mantel from Florence and marble flooring from Tuscany, as well as wood paneling from London. At the time the Academy moved in, the house was full of spike-studded oak and bronze doors, while the first-floor ceilings were adorned with intricate plaster reliefs.

Because of its elegant architecture, the building was often rented out for weddings and other events, providing the Academy a source of revenue. To move its headquarters to 7 World Trade Center, the Academy sold the mansion in 2005. While it has since undergone renovation and a name change to the Academy Mansion, it retains many of its original features.

The Academy was now ready to return to its roots with its move back to lower Manhattan.

This is the ninth piece in an eleven-part series exploring the Academy’s past homes.

Street-Level AI

A street-level shot in NYC.

How one NYC cohort tested generative AI in real classrooms—with lessons for national implementation.

Published August 19, 2025

By Devin Chowske

Was it really just two years ago that the City declared ChatGPT had no place in classrooms? And it only took 8 months for that decision to be turned around. Eighteen months later, I’m working with The New York Academy of Sciences to help teachers bring AI into their classrooms. And now, three months after that, I’m writing an article – not just about tools, but about teachers, kids, and what AI means for schools trying to stay human.

But it’s summer, so I’m not up in my apartment writing like I should be. Instead, I’m auto-dictating as I sit in a slice shop on Jerome Avenue, under the smell of garlic and the 4 train rattling the window. My first thought is the importance of place.

Pre-Pandemic, the New York City Schools district was the largest in the USA, standing at 1.1 million students. Of course, we still are, but we’ve also bled something like 100,000 students Post-Pandemic. Here are some of the remainder – 30 local kids have just walked into “$1 Slice” asking for a slice, now $1.50. They’re just out of the summer school around the corner. The kid next to me with peacock Nikes is speaking Portuguese with his mom.

I’m a Bronx transplant, but it reminds me of where I grew up. As a Bellerose boy, it remains a point of pride that per square mile, Queens has the most unique spoken languages on Earth. Those numbers are up since 2020, suggesting a growing intensity of need. And, I think, about 16.3% of the school population is still learning English. None of that seems important to this room of teenagers, who have now splattered sauce across the ceiling, which drips down in puce ribbons over an old social-distancing poster.

You’ll find that many educators now speak in terms of before and after – the Pre-Pandemic and Post. Before, New York already had a problem with teacher attrition. We’ve reported trends of around 19%-25% lost per year. By year three, some estimates put it at 40% gone altogether. As I start my year ten, I wonder if the Bronx Zoo has a space for me on their wall of endangered species….

So why do I bring any of this up when it comes to AI?

Well, to put it bluntly, AI is being billed as the panacea for everything that’s broken – a quick, cheap fix for organizations on the ropes. In the case of education, there are high hopes that recent trauma and systemic issues will be answered by technological innovation. Even with my most cynical face screwed on, I will say the educational products born out of GenAI are pretty fantastic. Still, and this is key, I put all of the credit at the feet of the educators using the tools.

I’m getting wistful – before I dragged you up to Williamsbridge, we were talking about the program I built for the Academy. It was an amazing opportunity, being allowed to lead a group of expert educators in the implementation of AI with students. The Academy hoped I could help participants articulate their classroom approaches so the results could be replicated in your classroom.

The whole program went like this:

  1. Articulate a measurable need currently in your classroom, using multiple data points to define it.
  2. Form a question, in the style of inquiry learning, to address this need using AI tools.
  3. Select tools that are currently on the market and available in your school (this last caveat I will return to).
  4. Have students interact with the AI-produced materials or with the AI itself.
  5. Record results and extrapolate use cases.

The results were a series of tools and techniques that have pragmatic use tomorrow. After coaching over 200 educators and giving national presentations on AI in education, the biggest hurdle I keep seeing is the same: people are scared to even start without knowing the exact finish line. So while several of the studies were viable, I am going to focus primarily on the results, implications, and most frequent use cases I have seen.

The Academy, the participants, and I are hoping this gives you the confidence to begin, that somewhere in these stories you see a little piece of you and your kids. Let’s start with a writing teacher who found opportunity in limitation.


Pinck’s AI Literate Classroom

Pinck is over at New Design High School – a smaller school on the Lower East Side looking to expand student empowerment. With an enrollment of 449 students and a student-to-teacher ratio of 9:1, the school bills itself as “a coffee shop, a design shop, a youth development shop, and most importantly a community.” Talking to Pinck, I get the sense that they’re pulling that last bit off, no problem.

She had observed her students struggling to apply the rubrics they were given and to act consistently on the feedback they received. Pinck aimed to improve students’ confidence in revising their own writing.

The class ended up using Perplexity for the most part, which falls into a class of AIs known as “AI answer engines.” These are Large Language Models (LLMs) specializing in research – they’re not geared towards the same sort of large-scale generation or analysis most models are associated with. To put it simply: Perplexity would be an easy choice for research, but is a unique choice for feedback. So why use it in this application?

Pinck’s choice was a simple one: it was either Perplexity or Copilot, because everything else was blocked by the school’s firewall. This, in and of itself, is a pretty common occurrence in NYC schools – uneven and seemingly arbitrary bans on specific AI tools left behind in the wake of the initial panic. You’re going to have to talk to your own tech department about that hidden list. The upshot: Pinck’s students were struggling with proper research and citation strategies anyway.

Her class’s initial experiences with AI had her going back to teach them how to prompt more effectively – a key aspect of the AI literacy that will be a staple of our curriculum in the future – and she managed some excellent results. Student confidence increased somewhat, but the quality of citations and the use of lateral search skyrocketed.

The best part? The struggles. Students reported that they found it difficult to rephrase and reframe work, saying “It’s impossible. …[Y]ou can’t not plagiarize.” Others found prompt engineering “tedious”.

Personally, I love these sorts of insights. Pinck did a great job of building an initial understanding of how AI worked before she moved to student application of the tools. Yes, her students were using AI to produce work, but not uncritically. They were made to reckon not only with the credibility of their sources – a 21st Century skill – but also with gaps in their own learning. Gaps that they can come back to target with clearer agency.

Ultimately, policy development, norms, and scaffolding built from years of experience and deep knowledge of her own students made Pinck’s application effective. I’ll give her the last word on implementation in her style: “Teach your kids how AI generates, [because] they want and need to know. Go slowly…[what] seem[s] obvious to teachers can be extremely challenging for students.”


AI as Feedback Partner in Yelyzaveta’s ICT Class

As far as persistent problems of education go, providing quality, timely feedback to learners is about as universal as it gets. The internal arithmetic is brutal. Guiding students through quality work takes time, but condensed deadlines leave no space to breathe. So many of us get caught choosing: something specific and actionable late, or half-baked right on time.

Yelyzaveta Kalinichenko over at the High School of Environmental Sciences in Manhattan – a 9th-through-12th-grade school with roughly 1,000 students – decided to tackle this head-on. Working in an ICT classroom, she wanted to maintain high standards for all students while breaking through the feedback bottleneck. Her solution? Use AI as a feedback partner, informed by teacher-made rubrics.

The setup was straightforward: students got a pre-written prompt scaffold, fed the AI their draft plus the assignment rubric, and received scores, feedback, and suggestions. Yelyzaveta collected data through grades and pre/post questionnaires about student perceptions.
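For readers who want to try the same setup, the scaffold described above can be sketched in a few lines. To be clear, the rubric criteria, the prompt wording, and the `build_feedback_prompt` helper below are my own illustrative assumptions, not Yelyzaveta’s actual classroom materials:

```python
# Hypothetical sketch of a rubric-driven feedback scaffold.
# The criteria and wording here are illustrative, not the
# teacher's real rubric.

def build_feedback_prompt(draft: str, rubric: dict) -> str:
    """Assemble a prompt asking an AI to score a draft against a rubric."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "You are a writing tutor. Score the draft below on each rubric "
        "criterion (1-5), give one concrete suggestion per criterion, "
        "and do not rewrite the draft for the student.\n\n"
        f"Rubric:\n{criteria}\n\n"
        f"Draft:\n{draft}"
    )

rubric = {
    "Claim": "States a clear, arguable thesis.",
    "Evidence": "Supports the claim with cited sources.",
}
print(build_feedback_prompt("Plastic bag bans reduce waste because...", rubric))
```

Pasting the result into any chat model, with the teacher’s real rubric swapped in, reproduces the workflow – and, as Yelyzaveta’s score fluctuations suggest, how consistent the AI’s grades come back depends heavily on how specific that rubric language is.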

Before the experiment, students were moderately comfortable with AI – rating their proficiency at 3.27 out of 5, with generally neutral-to-slightly-positive feelings. After working with AI feedback? Fascinatingly, most opinions stayed exactly the same. Even more telling, trust in AI actually dropped slightly.

Students rated the overall experience as positive (3.50), but the challenges were real. Many struggled to write their own prompts when interacting with AI. Students resubmitted work and grades fluctuated – anywhere from 2 to 5 points difference. When Yelyzaveta probed the AI about this inconsistency, it told her the rubrics weren’t specific enough.

Even AI has learned to pass the buck – how refreshingly human.

The bigger worry, of course, is dependency. Will students stop thinking for themselves? There’s some research suggesting this concern isn’t baseless – a recent MIT study found that a group of participants (ages 18-39) using AI performed worse than “brain-only” groups at multiple levels. 83% of AI users couldn’t even quote their own writing accurately.

But here’s what Yelyzaveta actually saw in her classroom: students gradually figured out that AI was just another voice in the room. Less expert than their teacher, useful but limited. Instead of becoming dependent, they saw it as what it was – a tool.

The takeaway? Understanding how AI actually works is fundamental to student AI literacy. We need more experiments like Yelyzaveta’s to figure out realistic boundaries so students learn to leverage AI without becoming overly reliant on it. Sometimes the most valuable lesson is learning what not to trust. But feedback timing wasn’t the only accessibility challenge teachers faced.


Ted & His Helperbots

During the 2023-24 school year, chronic absenteeism in NYC Public Schools spiked to 34.8%, up from roughly 25% Pre-Pandemic. This unquestionably impacts academic competency – missing 10% of the school year puts you behind. Teachers find themselves with fewer hours to reach their highest-need students; but students, in turn, often have family, work, or other human obligations that don’t sync with school hours.

So, how to reach them while maintaining reasonable hours and boundaries? And how to provide guidance and feedback when students aren’t available when you are? Students have found (and meme’d) their own solution: YouTube. If you’ve been in education for any length of time, you know that YouTube tutorial content can be full of pitfalls. Sometimes it advocates shortcuts that don’t scale well, other times it robs students of the productive struggle of finding the right tool for the right job.

Ted Scoville was looking at a similar problem – not from the angle of chronic absenteeism, but rather from the perspective of a course with heavy technical lift. He works over at the Loyola School on the Upper East Side – a private school with roughly fifty students per grade band. His complex coding classes demand complex technical skills; Ted needed a way to give students quality-controlled feedback without handing them solutions.

He settled on building a “helperbot” through playlab.ai. Playlab, an AI app already audited by NYC Public Schools, falls into the broader category of “AI Assistants” that let users build tools using natural language. Each helperbot you make is powered by a larger LLM, like Claude, Gemini, or ChatGPT. It’s worth mentioning that magicschool.ai is also a popular choice, with spotty approval across several NYC districts, but other AI Assistants are on the market.
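To make the “quality-controlled feedback without handing them solutions” idea concrete, here is the sort of natural-language instruction block such a helperbot might be configured with. This is my own sketch of the genre, not Ted’s actual bot:

```text
You are a coding helper for a high-school programming class.
- Explain error messages and point to the relevant concept or docs.
- Ask one guiding question before offering any hint.
- Never paste a complete working solution; show at most a two-line
  pattern with placeholder names the student must fill in.
- If a student asks for the answer outright, restate the assignment
  goal and suggest a smaller first step.
```

The quality control lives in those constraints: the bot stays available at odd hours, but the productive struggle stays with the student.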

Ted’s students were largely open to leveraging his bot and found it easy to use. The biggest data point was the drop in late work – his class went from over 25% of work turned in late, down to under 5%. He also reported less work completed at odd hours of the night and an increase in student independence.

Even with these benefits, several questions arose. As was the case with Pinck’s class, Scoville found that students often found the specifics of prompting frustrating; he worries that they might turn to other tools that give more direct answers. Likewise, there were questions about students becoming more interested in interacting with the bot than with their teachers. After all, with bots being infinitely more portable and accessible, what if we miss out on the teacher-student rapport that’s key to education?

These are good worries I think, partially because it shows that teachers actually want to have connections with their students, despite what cartoons might otherwise have you think. I can say that I’ve seen some informal studies that marked similar surges in confidence, but also paradoxically saw greater demand for teacher input. As students interacted with AI, they became aware of its limitations; what they knew they needed was their teacher’s help.

While Ted focused on supporting individual student needs, our next teacher took on a broader challenge: preparing students for a rapidly changing creative economy.


Cheriece’s AI & Art Class

High on the list of criticisms of AI is its impact on the art world. Some critics decry it as the death of creativity, while others see it as the birth of a new strain of kitsch. Meanwhile, talk of rolling back copyright protections against AI has become part and parcel of the current US administration’s action plan.

Personally, my tea leaves very seldom fall in patterns recognizable beyond the five boroughs, and I think there are better people to speak on those conversations. The World Economic Forum forecasts fewer opportunities for traditional design roles, but growing demand for skills like creativity, resilience, and life-long learning. The landscape our artistic students will be navigating is a difficult one. I can’t help but think of the tolling common wisdom uttered at every AI conference I attend: “AI will not take your job, but the person who knows how to use it will.”

Up in the Bronx, Cheriece White-Fair can hear the same bells I can. She’s an Art Teacher at Metropolitan Soundview High School who wanted to not only push her 11th and 12th grade students’ creative expression, but also to future-proof their skills, knowing that AI is part of the future graphic artists will be living.

Perhaps the most novel aspect of her approach was the fact that she covered AI tools as a genre rather than diving into a singular tool for exploration – Adobe Express generative AI for image creation, Bing Create for realistic image generation, Sora for AI video generation, Suno AI for AI song generation, Gamma for presentation creation, and Canva AI tools for presentations.

Cheriece even went as far as to have students develop their own AI chatbot “with a unique brand and backstory”. She used playlab.ai (the same platform used by Ted) as a tool for students to learn the fundamentals of AI “workflows, prompting, ethics, user experience, and digital identity.”

As a result of this sandbox-meets-PBL style, students became so engaged with their work that Cheriece had students who didn’t want to leave at the end of class. 91% of students reported increased confidence using AI tools, and 87% agreed AI helped them discover new ways to express creativity. 89% said they enjoyed experimenting with AI platforms, and 94% believe AI will play a role in their future careers.

I think what made Cheriece’s work so successful was her ability to ground her students’ understandings in AI-agnostic skills – prompt engineering, metacognitive analysis, environmental and social stewardship – before broadening their work to specific tools.

Each student formed organic preferences among the apps afforded them. This teaching choice? It’s equitable scaffolding in action. The study reminded me of Seymour Papert: “The role of the teacher is to create the conditions for invention rather than provide ready-made knowledge.” We are at a point where AI products are forming and breaking in waves; we, like Cheriece’s students, need to be able to make informed, ethical choices about the technology with which our work is becoming increasingly entangled.

In her final thoughts, Cheriece speaks on the need for educators to have continuing education around AI. I tend to agree. AI literacy is not just for the students in the classroom; it’s for all teachers and all professionals moving forward in a world that is quickly integrating AI. As Cheriece herself puts it: “Art is evolving through AI and we need to catch up. Education needs this… We need this…”


So What Now? Six Principles for Starting Tomorrow

So maybe you’re not in a dollar slice dodging red sauce, but you’re thinking about bringing AI into your classroom. Maybe you’re skeptical. Maybe you’re burned out. Or maybe, like most of us, you just don’t want to mess this up for your kids. Fair enough. Here’s what’s worked for us so far.

1. Do No Harm

Before you plug anything in, ask: “What could go wrong?” Not in the paranoid way – just in the professional, responsible way. For those slow to start, you’re not wrong. Data privacy matters. So does classroom trust. Start small, stay curious, and yes – track what’s happening. You can’t fix what you’re not measuring.

Read the experts: NYC’s K-12 AI Policy Lab and NYS’s AI Tech Guidelines (March 2025) are great starting points.

2. Talk About It. Loudly.

AI’s already in your building – even if no one’s said the word. Kids are using it. Teachers are whispering about it. So name it. Normalize it. Talk with your staff, your students, your parents. Frame it like you would any other new literacy: When is it helpful? When is it cheating? When is it a place where conversation starts?

Join the conversation: The MIT Day of AI is a low-stakes way to get your team thinking and talking. Also check out STEM Teachers NYC’s Harnessing AI Working Group for a more New York focused experience.

3. Teach Everyone, Not Just the Kids

AI literacy isn’t just for 11th grade comp sci. It’s for every student and every adult in the building – deans, paras, office staff, everyone. Understanding how it works changes what you do with it.

Where to learn: Online prompt engineering courses are everywhere. Or use UNESCO’s student and teacher frameworks to get started.

4. Pick One Tool and Go Deep

You don’t need to master every AI app on Earth. Choose one. Preferably something that solves a real annoyance – marking multiple choice, formatting a newsletter, building a lesson outline. Learn it well. You’ll be surprised how fast the rest comes.

Where to begin: ChatGPT, Gemini, or Claude. All have free account options, though consider that free accounts often use your data for model training. Bear in mind many tools will be blocked by your school’s firewall – ask your IT administrator about what to unblock and why. You can also check out the ERMA Database (though the list is not comprehensive).

5. Don’t Outsource the Thinking

If a student can’t tell when AI is bluffing, that’s not literacy – that’s a liability. We’re not just teaching them to use a tool; we’re teaching them to interrogate it. It’s no longer enough to ask where information comes from. We also need to ask: why trust one source over another? What narrative does it serve? Is this a peer-reviewed fact, or opinion generated to sound convincing?

AI can help draft. It can help organize. But it can’t replace the messy, human thinking that makes learning stick. If students don’t learn to pause and push back, they’ll start outsourcing the very muscle they need most: their judgment.

Scaffold both worlds: Use the AI4K12 guidelines to help align real-world skills with AI expectations.

6. It’s a Tool. Not a Teacher.

AI is fast. It’s powerful. But it doesn’t love your kids. You do. That’s the difference. So sure – let it draft the rubric. Let it brainstorm the group project. But don’t let it replace your judgment, your feedback, or your connection.

Try this tool: The Kapor Foundation’s AI Norms Framework helps clarify how much help is too much.

You don’t need to be a tech wizard to do this right. You just need to be honest, reflective, and willing to listen to your students – same as it ever was. AI isn’t here to replace that. If anything, it’s asking us to double down on it.


A Really, Really Good Question

With the slice place shuttered for the night, I’m out walking with a mason jar of limonada de coco, looking for a good thought to leave you with on Gun Hill Road.

The bodega is full of surgeons from Montefiore looking for chopped cheese and kale smoothies. The kids are out in front, composing a break-up text by committee. I recognize the Peacock Nikes. One of them suggests using ChatGPT to write it – this draws debate.

“Why should I write it myself? It’s over, so it’s not like it’s gonna matter anyway.”

In a few weeks, we’ll all be in classrooms, and some version of that question will land on your desk: Why should I do it myself? Your students will be asking it about essays, projects, lab reports – moments they’re tempted to hand off to a machine. Our job isn’t to judge, but to understand why.

Sometimes it’s because critical thinking is hard. Sometimes it’s because they don’t trust their own voice and want “the right words.” AI can strip away the challenge of original articulation, but it can also surface language and ideas students wouldn’t have found on their own. That’s the tension – between Productive Struggle and the Zone of Proximal Development.

You should be asking these questions about learners’ skills, because it’s what teachers do. And just know, even as students are plastering Juicy Fruit underneath their chairs, they’re asking the same questions about you.

“When does my teacher use AI?”

“How can I trust adults not to offload my future to a few lines of code?”

For those still wondering why we should have AI in our classrooms: it’s already here. But in the same breath, I have a new question for you: what does AI give and what does AI take? I don’t have your answer, and neither does AI.

No person or program can counterfeit the humanity you bring to your community. You worry about your kids; you think about who they’ll be and where they’ll go in a way that machines cannot. Granted, none of us can say with certainty what the AI-integrated future will look like, but our students will be living it. The teachers leading these studies have been brave enough to address that fact, and careful enough to do so safely.

For my part, I hope that neighborhood kid’s text never sends – because AI has never held hands in line at The Lemon Ice King of Corona. It can’t replace that intimacy, and it won’t excise heartbreak by numbers.

I hope you trust your gut. AI has read countless articles, papers, and stories by teachers, but it isn’t one. Who you are to your students is a non-transferable asset.

I hope we all take the time to sit with the messy, personal wonderings – because in my experience, the only way to get a meaningful answer is to ask a really, really good question first.

You can have this one for free: Where do students already want to skip the thinking? Start there, and as you make your first AI lesson, be sure to leave space for the “Whys” that follow.

Entrepreneurship in Artificial Intelligence: Mission Impossible?

September 25, 2025 | 12:00 PM – 1:15 PM ET

Does artificial intelligence represent a fundamentally different kind of technological revolution—one that could reshape not only industries but also the structure of global markets? In past waves of innovation, from social media to e-commerce, technological booms spurred widespread entrepreneurship. Startups flourished, and many evolved into dominant firms, but they emerged from a competitive landscape where new entrants had room to grow. Artificial intelligence may chart a different path. Some analysts argue that AI’s steep economies of scale, vast computational requirements, and the adaptability of its systems could concentrate power in the hands of a few organizations—more akin to the era of mainframe computing in the 1960s, when one firm largely defined the field.

This roundtable discussion will explore:

  • Concentration vs. Competition: Are the capital demands, data needs, and infrastructure requirements of AI inherently driving the market toward centralization?
  • Investment Implications: How should private equity investors assess opportunities in an environment where scale advantages may limit smaller entrants?
  • Policy and Ethical Dimensions: What responsibilities do investors and innovators hold in shaping an AI ecosystem that fosters innovation without amplifying systemic risks of monopoly power?
  • Lessons from History: What parallels can be drawn between AI today and previous technology cycles, and what can we learn to anticipate future market dynamics?

Series Moderator

Josh Lerner

The Jacob H. Schiff Professor, Harvard Business School; Director, Private Capital Research Institute

Panelists

Dr. Jianying Hu

Director of Healthcare and Life Sciences Research, IBM

Ravi Kumar

CEO, Cognizant

Daniel Feder, CFA

Senior Managing Director of Investments, University of Michigan

Maya Frutiger

Minnow Venture Partners

Sponsors

Series Sponsor

Presented By

The New York Academy of Sciences

Pricing

All: Free

About the Series

The “Private Capital and Discovery: Strategic Investing in Scientific Innovation” series is brought to you by The New York Academy of Sciences and The Private Capital Research Institute. Through expert panels and thought-provoking discussions, the series examines how private equity is uniquely positioned to drive transformative advancements—while also exploring the ethical and strategic dilemmas that can arise when financial incentives influence the trajectory of science. Learn more about the series.