fAIrify – Reducing Bias in AI Models
Winner of the Junior Academy Challenge – Fall 2024 “Ethical AI”
Published May 16, 2025
By Nicole Pope
Academy Education Contributor
Sponsored by The New York Academy of Sciences
Team members: Emma L. (Team Lead) (New Jersey, United States), Shubh J. (California, United States), Darren C. (New York, United States), Aradhana S. (Pennsylvania, United States), Shreshtha B. (Kuwait), Jemali D. (New York, United States)
Mentor: Abdul Rauf (Pakistan)

Artificial Intelligence (AI) is ever more present in our lives and affects decision-making in government agencies, corporations, and small businesses. While the technology brings numerous opportunities to enhance productivity and push the boundaries of research, predictive AI models are trained on historical data that can reflect past discrimination. As a result, they risk perpetuating and amplifying bias, putting groups that have traditionally been marginalized and underrepresented at a disadvantage.
Taking up the challenge of making AI more ethical and preventing the technology from harming vulnerable and underrepresented groups, this winning United States- and Kuwait-based team sought ways to identify and correct the inherent bias contained in large language models (LLMs). “[The Ethical AI Innovation Challenge] helped me realize the true impact of bias in our society today, especially as predictive AI devices continue to expand their usage and importance,” acknowledged team lead Emma, from New Jersey. “As we transition into a future of increased AI utilization, it becomes all the more important that the AI being used is ethical and doesn’t place anyone at an unjustified disadvantage.”
The team conducted a thorough literature review and interviewed AI experts before devising their solution. In the course of their research, they came across real-life examples of the adverse effects of AI bias, such as an AI healthcare tool that recommended further treatment for white patients, but not for patients of color with the same ailments; a hiring model that contained gender bias, limiting opportunities for women; and a tool used to predict recidivism that incorrectly classified Black defendants as “high-risk” at nearly twice the rate it did for white defendants.
AI Bias
Team member Shreshtha, from Kuwait, said she was aware of AI bias but “through each article I read, each interview I conducted, and each conversation I had with my teammates, my eyes opened to the topic further. This made me even keener on trying to find a solution to the issue.” She added that, as the only team member based outside of the United States, “I ended up learning a lot from my teammates and their style of approaching a problem. We all may have had the same endpoint but we all had different routes in achieving our goal.”

The students came together regularly across time zones for intense working sessions to develop a workable solution, with support from their mentor. “While working on this, I learned that my team shared one quality in common – that we are all committed to making a change,” explained teammate Shubh. “We all had unique skills, be it management, coding, design, etc., but we collaborated to form a sustainable solution that can be used by all.” In the end, the team decided to develop a customizable add-on tool that can be embedded in Google Sheets, a commonly used spreadsheet application.
The students wanted their tool, developed in Python, to provide cutting-edge bias detection while also being user-friendly. “A key takeaway for me was realizing that addressing AI bias requires a balanced approach that combines technical fixes with ethical considerations—augmenting datasets while engaging directly with underrepresented groups,” stated New York-based teammate Darren, who initially researched and produced a survey while his teammates worked on an algorithm that could identify potential bias within a dataset.
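The article does not publish the team's algorithm, but a common first check for this kind of dataset bias is the “four-fifths rule” drawn from US employment law: compare each group's rate of favorable outcomes and flag the data if the lowest rate falls below 80% of the highest. The minimal Python sketch below illustrates the idea; the column names, sample records, and disparate_impact helper are hypothetical, not the team's code.

import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    # Ratio of the lowest group's favorable-outcome rate to the highest's.
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical applicant records, for illustration only.
data = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hired":  [0,   1,   0,   0,   1,   1,   0,   1],
})

ratio = disparate_impact(data, "gender", "hired")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Potential bias: the dataset fails the four-fifths rule.")

In this toy data the female hiring rate (25%) is only a third of the male rate (75%), so a model trained on it would likely learn the same skew.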
More Ethical AI
The resulting add-on, which can be modified to fit any set of training data, uses statistical analysis to detect whether AI training data is likely to be biased. The challenge participants also paired the add-on with an iOS app, built in Swift with a focus on UI/UX design, which gives users suggestions on how to customize the add-on for their specific datasets. The students were able to test their tool on a job applicant dataset provided by a company that chose to remain anonymous.
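The specific statistics inside the add-on are not described, but one standard building block for such a check is a chi-squared test of independence between a protected attribute and the training label: a small p-value means the label is statistically associated with the attribute and the dataset deserves scrutiny. The sketch below assumes pandas and SciPy; the column names, sample data, and significance threshold are illustrative, not the team's implementation.

import pandas as pd
from scipy.stats import chi2_contingency

def label_independence(df: pd.DataFrame, attr: str, label: str, alpha: float = 0.05):
    # Cross-tabulate the protected attribute against the label and test
    # whether the two are statistically independent.
    contingency = pd.crosstab(df[attr], df[label])
    _, p_value, _, _ = chi2_contingency(contingency)
    return p_value, p_value < alpha

# Hypothetical data, not the anonymous company's dataset:
# 20% of female applicants hired vs. 60% of male applicants.
df = pd.DataFrame({
    "gender": ["F"] * 50 + ["M"] * 50,
    "hired":  [1] * 10 + [0] * 40 + [1] * 30 + [0] * 20,
})

p_value, flagged = label_independence(df, "gender", "hired")
print(f"p = {p_value:.4f}; potential bias flagged: {flagged}")

A customizable check of this shape would let a user point it at whichever spreadsheet columns encode protected attributes and outcomes, which fits the add-on's described workflow.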
“By using an actual dataset from a company and analyzing it through our add-on, I was shocked to see that there could be gender bias if an AI model were trained on that dataset,” said team member Aradhana. “This experience highlighted how AI can continue societal discrimination against women.” The enterprising team members refined their solution further after surveying 85 individuals from diverse backgrounds and incorporating their feedback.
Members of the winning team believe addressing AI bias is critical to mitigating the risk of adverse impacts and building trust in the technology. They hope their solution will spearhead efforts to address bias on a larger scale and promote more ethical AI in the future. Summing up, team member Jemali explained that the project “significantly deepened my insights into the implications of AI bias and the pivotal role that we, as innovators, play in ensuring technology benefits all individuals.”
Learn more about the Junior Academy.