Blog Article

Street-Level AI

How one NYC cohort tested generative AI in real classrooms—with lessons for national implementation.

Published August 19, 2025

By Devin Chowske

Was it really just two years ago that the City declared ChatGPT had no place in classrooms? It took only eight months for that decision to be reversed. Eighteen months later, I’m working with The New York Academy of Sciences to help teachers bring AI into their classrooms. And now, three months after that, I’m writing an article – not just about tools, but about teachers, kids, and what AI means for schools trying to stay human.

But it’s summer, so I’m not up in my apartment writing like I should be. Instead, I’m auto-dictating from a slice shop on Jerome Avenue, garlic in the air and the 4 train rattling the window. My first thought is the importance of place.

Pre-Pandemic, New York City’s public school district was the largest in the USA, standing at 1.1 million students. It still is, but we’ve also bled something like 100,000 students Post-Pandemic. Here are some of the remainder – 30 local kids have just walked into “$1 Slice” asking for a slice, now $1.50. They’re just out of the summer school around the corner. The kid next to me in peacock Nikes is speaking Portuguese with his mom.

I’m a Bronx transplant, but it reminds me of where I grew up. As a Bellerose boy, I still take pride in the fact that, per square mile, Queens has more spoken languages than anywhere else on Earth. Those numbers are up since 2020, suggesting a growing intensity of need. And, I think, about 16.3% of the school population is still learning English. None of that seems important to this room of teenagers, who have now splattered sauce across the ceiling, where it drips down in puce ribbons over an old social-distancing poster.

You’ll find that many educators now speak in terms of before and after – the Pre-Pandemic and the Post. Before, New York already had a problem with teacher attrition, with reported trends of around 19%-25% of new teachers lost per year. Some estimates put it at 40% gone altogether by year three. As I start year ten, I wonder if the Bronx Zoo has a space for me on their wall of endangered species….

So why do I bring any of this up when it comes to AI?

Well, to put it bluntly, AI is being billed as the panacea for everything that’s broken – a quick, cheap fix for organizations on the ropes. In the case of education, there are high hopes that recent trauma and systemic issues will be answered by technological innovation. Even with my most cynical face screwed on, I will say the educational products born of GenAI are pretty fantastic. Still, and this is key, I put all of the credit at the feet of the educators using the tools.

I’m getting wistful – before I dragged you up to Williamsbridge, we were talking about the program I built for the Academy. It was an amazing opportunity, being allowed to lead a group of expert educators in implementing AI with students. The Academy hoped I could help participants articulate their classroom approaches so the results could be replicated in yours.

The whole program went like this:

  1. Articulate a measurable need currently in your classroom, using multiple data points to define it.
  2. Form a question, in the style of inquiry learning, to address this need using AI tools.
  3. Select tools that are currently on the market and available in your school (this last caveat I will return to).
  4. Have students interact with the AI-produced materials or with the AI itself.
  5. Record results and extrapolate use cases.

The results were a series of tools and techniques with pragmatic use tomorrow. After coaching over 200 educators and giving national presentations on AI in education, I keep seeing the same hurdle: people are scared to even start without knowing the exact finish line. So while several of the studies were viable, I am going to focus primarily on the results, implications, and most frequent use cases I have seen.

The Academy, the participants, and I are hoping this gives you the confidence to begin, that somewhere in these stories you see a little piece of you and your kids. Let’s start with a writing teacher who found opportunity in limitation.


Pinck’s AI Literate Classroom

Pinck is over at New Design High School – a smaller school on the Lower East Side looking to expand student empowerment. With an enrollment of 449 students and a student-to-teacher ratio of 9:1, the school bills itself as “a coffee shop, a design shop, a youth development shop, and most importantly a community.” Talking to Pinck, I get the sense that they’re pulling that last bit off, no problem.

She had observed her students struggling both with the rubrics they were given and with consistently applying the feedback they received. Pinck aimed to build students’ confidence in revising their own writing.

The class ended up using Perplexity for the most part, which falls into a class of AIs known as “AI answer engines.” These are Large Language Models (LLMs) specialized for research – they’re not geared towards the large-scale generation or analysis most models are associated with. To put it simply: Perplexity would be an easy choice for research, but it is an unusual choice for feedback. So why use it in this application?

Pinck’s choice was a simple one: it was either Perplexity or Copilot, because everything else was blocked by the school’s firewall. This, in and of itself, is a pretty common occurrence in NYC schools – uneven and seemingly arbitrary bans on specific AI tools left behind in the wake of the initial panic. You’re going to have to talk to your own tech department about that hidden list. The silver lining: Pinck’s students were struggling with proper research and citation strategies anyway.

Her classes’ initial experiences with AI sent her back to teach them how to prompt more effectively – a key aspect of the AI literacy that will be a staple of our curricula in the future – and she managed some excellent results. Student confidence increased somewhat, but the quality of citations and the use of lateral searching skyrocketed.
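
To make “prompting more effectively” concrete, here is the kind of before-and-after revision such a lesson targets – my own hypothetical illustration, not a prompt from Pinck’s class:

    A first attempt: sources about gentrification

    A revision: Find three recent sources on gentrification on the
    Lower East Side, at least one peer-reviewed. For each, give the
    full citation and one sentence on the author’s perspective, so I
    can cross-check the claims against other coverage.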

The best part? The struggles. Students reported that they found it difficult to rephrase and reframe work, saying, “It’s impossible. …[Y]ou can’t not plagiarize.” Others found prompt engineering “tedious.”

Personally, I love these sorts of insights. Pinck did a great job building an initial understanding of how AI worked before she moved to student application of the tools. Yes, her students were using AI to produce work, but not uncritically. They were made to reckon not only with the credibility of their sources – a 21st Century skill – but also with gaps in their own learning. Gaps they can come back and target with clearer agency.

Ultimately, policy development, norms, and scaffolding built from years of experience and deep knowledge of her own students made Pinck’s application effective. I’ll give her the last word on implementation in her style: “Teach your kids how AI generates, [because] they want and need to know. Go slowly…[what] seem[s] obvious to teachers can be extremely challenging for students.”


AI as Feedback Partner in Yelyzaveta’s ICT Class

As far as persistent problems of education go, providing quality, timely feedback to learners is about as universal as it gets. The internal arithmetic is brutal. Guiding students through quality work takes time, but condensed deadlines leave no space to breathe. So many of us get caught choosing: something specific and actionable that arrives late, or something half-baked right on time.

Yelyzaveta Kalinichenko over at the High School of Environmental Sciences in Manhattan – a 9th-through-12th-grade school with roughly 1,000 students – decided to tackle this head-on. Working in an ICT (Integrated Co-Teaching) classroom, she wanted to maintain high standards for all students while breaking through the feedback bottleneck. Her solution? Use AI as a feedback partner, informed by teacher-made rubrics.

The setup was straightforward: students got a pre-written prompt scaffold, fed the AI their draft plus the assignment rubric, and received scores, feedback, and suggestions. Yelyzaveta collected data through grades and pre/post questionnaires about student perceptions.
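
To give a flavor of the genre – this is my own sketch, not Yelyzaveta’s actual scaffold – a rubric-feedback prompt usually runs some variation on:

    You are giving feedback on a student draft. Score the draft against
    each criterion in the rubric below, using the rubric’s own scale.
    For each criterion, give the score, one sentence explaining it, and
    one concrete suggestion for revision. Do not rewrite the draft for
    the student.

    RUBRIC: [paste the assignment rubric]
    DRAFT: [paste the student draft]

That last instruction is doing real work: it keeps the AI in feedback territory instead of ghostwriting.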

Before the experiment, students were moderately comfortable with AI – rating their proficiency at 3.27 out of 5, with generally neutral-to-slightly-positive feelings. After working with AI feedback? Fascinatingly, most opinions stayed exactly the same. Even more telling, trust in AI actually dropped slightly.

Students rated the overall experience as positive (3.50 out of 5), but the challenges were real. Many struggled to write their own prompts when interacting with AI. When students resubmitted work, the AI’s grades fluctuated – anywhere from a 2- to 5-point difference. When Yelyzaveta probed the AI about this inconsistency, it told her the rubrics weren’t specific enough.

Even AI has learned to pass the buck – how refreshingly human.

The bigger worry, of course, is dependency. Will students stop thinking for themselves? There’s some research suggesting this concern isn’t baseless – a recent MIT study found that participants (ages 18-39) who wrote with AI performed worse than “brain-only” groups across multiple measures, and 83% of the AI users couldn’t even quote their own writing accurately.

But here’s what Yelyzaveta actually saw in her classroom: students gradually figured out that AI was just another voice in the room. Less expert than their teacher, useful but limited. Instead of becoming dependent, they saw it as what it was – a tool.

The takeaway? Understanding how AI actually works is fundamental to student AI literacy. We need more experiments like Yelyzaveta’s to figure out realistic boundaries so students learn to leverage AI without becoming overly reliant on it. Sometimes the most valuable lesson is learning what not to trust. But feedback timing wasn’t the only accessibility challenge teachers faced.


Ted & His Helperbots

During the 2023-24 school year, chronic absenteeism across NYC public schools spiked to 34.8%, up from roughly 25% Pre-Pandemic. This unquestionably impacts academic competency – missing 10% of the school year puts you behind. Teachers find themselves with fewer hours to reach their highest-need students; students, in turn, often have family, work, or other human obligations that don’t sync with school hours.

So, how to reach them while maintaining reasonable hours and boundaries? And how to provide guidance and feedback when students aren’t available when you are? Students have found (and meme’d) their own solution: YouTube. If you’ve been in education for any length of time, you know that YouTube tutorial content can be full of pitfalls. Sometimes it advocates shortcuts that don’t scale well; other times it robs students of the productive struggle of finding the right tool for the right job.

Ted Scoville was looking at a similar problem – not from the angle of chronic absenteeism, but rather from the perspective of a course with heavy technical lift. He works over at the Loyola School on the Upper East Side – a private school with roughly fifty students per grade band. His complex coding classes demand complex technical skills; Ted needed a way to give students quality-controlled feedback without handing them solutions.

He settled on building a “helperbot” through playlab.ai. Playlab, an AI app already audited by NYC Public Schools, falls into the broader category of “AI Assistants” that let users build tools using natural language. Each helperbot you make is powered by a larger LLM, like Claude, Gemini, or ChatGPT. It’s worth mentioning that magicschool.ai is another popular choice, with spotty approval across several NYC districts, and other AI Assistants are on the market.
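
To make “building tools using natural language” concrete, the instructions behind a helperbot like Ted’s might read something like this – a hypothetical sketch of the genre, not his actual bot:

    You are a coding tutor for a high school computer science class.
    When a student shares code or an error message, help them reason
    toward a fix: ask what they expected the code to do, point them to
    the line or concept involved, and suggest what to check next. Never
    paste a corrected version of their code, and never write new code
    for them.

That final constraint is the quality control – guidance without handing over solutions.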

Ted’s students were largely open to leveraging his bot and found it easy to use. The biggest data point was the drop in late work – his class went from over 25% of work turned in late down to under 5%. He also reported less work completed at odd hours of the night and an increase in student independence.

Even with these benefits, several questions arose. As was the case in Pinck’s class, Scoville found that the specifics of prompting often frustrated students; he worries they might turn to other tools that give more direct answers. Likewise, there were questions about students becoming more interested in interacting with the bot than with teachers. After all, with bots being infinitely more portable and accessible, what if we miss out on the teacher-student rapport that’s key to education?

These are good worries, I think, partly because they show that teachers actually want to have connections with their students, despite what cartoons might otherwise have you think. I can say that I’ve seen some informal studies that noted similar surges in confidence but, paradoxically, also greater demand for teacher input. As students interacted with AI, they became aware of its limitations; what they knew they needed was their teacher’s help.

While Ted focused on supporting individual student needs, our next teacher took on a broader challenge: preparing students for a rapidly changing creative economy.


Cheriece’s AI & Art Class

High on the list of criticisms of AI is its impact on the art world. Some critics decry it as the death of creativity, others as the birth of a new strain of kitsch. Meanwhile, talk of rolling back copyright protections against AI has become part and parcel of the current US administration’s action plan.

Personally, my tea leaves very seldom fall in patterns recognizable beyond the five boroughs, and I think there are better people to speak on those conversations. The World Economic Forum forecasts that opportunities for traditional design roles will shrink, while demand for skills like creativity, resilience, and lifelong learning will grow. The landscape our artistic students will be navigating is a difficult one. I can’t help but think of the tolling common wisdom uttered at every AI conference I attend: “AI will not take your job, but the person who knows how to use it will.”

Up in the Bronx, Cheriece White-Fair can hear the same bells I can. She’s an art teacher at Metropolitan Soundview High School who wanted not only to push her 11th and 12th grade students’ creative expression, but also to future-proof their skills, knowing that AI is part of the future graphic artists will be living.

Perhaps the most novel aspect of her approach was that she covered AI tools as a genre rather than diving into a single tool – Adobe Express generative AI for image creation, Bing Create for realistic image generation, Sora for AI video generation, Suno AI for song generation, Gamma for presentation creation, and Canva AI tools for presentations.

Cheriece even went as far as to have students develop their own AI chatbot “with a unique brand and backstory”. She used playlab.ai (the same platform used by Ted) as a tool for students to learn the fundamentals of AI “workflows, prompting, ethics, user experience, and digital identity.”

As a result of this sandbox-meets-PBL style, students became so engaged with their work that some didn’t want to leave at the end of class. 91% of students reported increased confidence using AI tools, and 87% agreed AI helped them discover new ways to express creativity. 89% said they enjoyed experimenting with AI platforms, and 94% believe AI will play a role in their future careers.

I think what made Cheriece’s work so successful was her ability to ground her students’ understanding in AI-agnostic skills – prompt engineering, metacognitive analysis, environmental and social stewardship – before broadening their work to specific tools.

Each student formed organic preferences among the apps afforded them. This teaching choice? It’s equitable scaffolding in action. The study reminded me of Seymour Papert: “The role of the teacher is to create the conditions for invention rather than provide ready-made knowledge.” We are at a point where AI products are forming and breaking in waves; we, like Cheriece’s students, need to be able to make informed, ethical choices about the technology with which our work is becoming increasingly entangled.

In her final thoughts, Cheriece speaks on the need for educators to have continuing education around AI. I tend to agree. AI literacy is not just for the students in the classroom; it’s for all teachers and all professionals moving forward in a world that is quickly integrating AI. As Cheriece herself puts it: “Art is evolving through AI and we need to catch up. Education needs this… We need this…”


So What Now? Six Principles for Starting Tomorrow

So maybe you’re not in a dollar slice dodging red sauce, but you’re thinking about bringing AI into your classroom. Maybe you’re skeptical. Maybe you’re burned out. Or maybe, like most of us, you just don’t want to mess this up for your kids. Fair enough. Here’s what’s worked for us so far.

1. Do No Harm

Before you plug anything in, ask: “What could go wrong?” Not in the paranoid way – just in the professional, responsible way. For those slow to start, you’re not wrong. Data privacy matters. So does classroom trust. Start small, stay curious, and yes – track what’s happening. You can’t fix what you’re not measuring.

Read the experts: NYC’s K-12 AI Policy Lab and NYS’s AI Tech Guidelines (March 2025) are great starting points.

2. Talk About It. Loudly.

AI’s already in your building – even if no one’s said the word. Kids are using it. Teachers are whispering about it. So name it. Normalize it. Talk with your staff, your students, your parents. Frame it like you would any other new literacy: When is it helpful? When is it cheating? When is it a place where conversation starts?

Join the conversation: The MIT Day of AI is a low-stakes way to get your team thinking and talking. Also check out STEM Teachers NYC’s Harnessing AI Working Group for a more New York focused experience.

3. Teach Everyone, Not Just the Kids

AI literacy isn’t just for 11th grade comp sci. It’s for every student and every adult in the building – deans, paras, office staff, everyone. Understanding how it works changes what you do with it.

Where to learn: Online prompt engineering courses are everywhere. Or use UNESCO’s student and teacher frameworks to get started.

4. Pick One Tool and Go Deep

You don’t need to master every AI app on Earth. Choose one. Preferably something that solves a real annoyance – marking multiple choice, formatting a newsletter, building a lesson outline. Learn it well. You’ll be surprised how fast the rest comes.
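
If you want a first rep, a prompt as plain as this is enough to start – my example, adapt it freely:

    Draft a 45-minute lesson outline for a 10th-grade class on [your
    topic]: a 5-minute do-now, 15 minutes of direct instruction, 20
    minutes of group practice, and a 5-minute exit ticket. List the
    materials I’d need and one way to differentiate for English
    language learners.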

Where to begin: ChatGPT, Gemini, or Claude. All have free account options, though consider that free accounts often use your data for model training. Bear in mind many tools will be blocked by your school’s firewall – ask your IT administrator about what to unblock and why. You can also check out the ERMA Database (though the list is not comprehensive).

5. Don’t Outsource the Thinking

If a student can’t tell when AI is bluffing, that’s not literacy – that’s a liability. We’re not just teaching them to use a tool; we’re teaching them to interrogate it. It’s no longer enough to ask where information comes from. We also need to ask: why trust one source over another? What narrative does it serve? Is this a peer-reviewed fact, or opinion generated to sound convincing?

AI can help draft. It can help organize. But it can’t replace the messy, human thinking that makes learning stick. If students don’t learn to pause and push back, they’ll start outsourcing the very muscle they need most: their judgment.

Scaffold both worlds: Use the AI4K12 guidelines to help align real-world skills with AI expectations.

6. It’s a Tool. Not a Teacher.

AI is fast. It’s powerful. But it doesn’t love your kids. You do. That’s the difference. So sure – let it draft the rubric. Let it brainstorm the group project. But don’t let it replace your judgment, your feedback, or your connection.

Try this tool: The Kapor Foundation’s AI Norms Framework helps clarify how much help is too much.

You don’t need to be a tech wizard to do this right. You just need to be honest, reflective, and willing to listen to your students – same as it ever was. AI isn’t here to replace that. If anything, it’s asking us to double down on it.


A Really, Really Good Question

With the slice place shuttered for the night, I’m out walking with a mason jar of limonada de coco, looking for a good thought to leave you with on Gun Hill Road.

The bodega is full of surgeons from Montefiore looking for chopped cheese and kale smoothies. The kids are out in front, composing a break-up text by committee. I recognize Peacock Nikes. One of them suggests using ChatGPT to write it – this draws debate.

“Why should I write it myself? It’s over, so it’s not like it’s gonna matter anyway.”

In a few weeks, we’ll all be in classrooms, and some version of that question will land on your desk: Why should I do it myself? Your students will be asking it about essays, projects, lab reports – moments they’re tempted to hand off to a machine. Our job isn’t to judge, but to understand why.

Sometimes it’s because critical thinking is hard. Sometimes it’s because they don’t trust their own voice and want “the right words.” AI can strip away the challenge of original articulation, but it can also surface language and ideas students wouldn’t have found on their own. That’s the tension – between Productive Struggle and the Zone of Proximal Development.

You should be asking these questions about learners’ skills, because it’s what teachers do. And just know, even as students are plastering Juicy Fruit underneath their chairs, they’re asking the same questions about you.

“When does my teacher use AI?”

“How can I trust adults not to offload my future to a few lines of code?”

For those still wondering why we should have AI in our classrooms: it’s already here. But in the same breath, I have a new question for you: what does AI give and what does AI take? I don’t have your answer, and neither does AI.

No person or program can counterfeit the humanity you bring to your community. You worry about your kids; you think about who they’ll be and where they’ll go in a way that machines cannot. Granted, none of us can say with certainty what the AI-integrated future will look like, but our students will be living it. The teachers leading these studies have had enough bravery to address that fact. They’ve had enough care to do so safely.

For my part, I hope that neighborhood kid’s text never sends – because AI has never held hands in line at The Lemon Ice King of Corona. It can’t replace that intimacy, and it won’t excise heartbreak by numbers.

I hope you trust your gut. AI has read countless articles, papers, and stories by teachers, but it isn’t one. Who you are to your students is a non-transferable asset.

I hope we all take the time to sit with the messy, personal wonderings – because in my experience, the only way to get a meaningful answer is to ask a really, really good question first.

You can have this one for free: Where do students already want to skip the thinking? Start there, and as you make your first AI lesson, be sure to leave space for the “Whys” that follow.

