
Algorithmic Bias Continues to Negatively Impact Minoritized Students


As institutions of higher education turn to AI, machine learning, and data-driven algorithms to make their work more efficient, a new study published in AERA Open, the peer-reviewed journal of the American Educational Research Association (AERA), reminds administrators that algorithms can be racially biased.

Dr. Denisa Gándara, assistant professor of educational leadership and policy at the University of Texas at Austin and co-author of the study.

In their study, “Inside the Black Box,” researchers discovered that algorithms used to predict student success produced false negatives for 19% of Black students and 21% of Latinx students, incorrectly predicting that those students would fail out of college. Using data collected over the last decade by the National Center for Education Statistics on more than 15,200 students, the study examined bachelor’s degree attainment at four-year institutions eight years after high school graduation.
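The figures above are group-wise false-negative rates: among students who actually earned a degree, the share the model wrongly predicted would fail. A minimal sketch of that calculation, using invented toy data (the study's actual dataset and model are not public here):

```python
# Hypothetical sketch: computing the false-negative rate (FNR) per group,
# the metric behind the study's 19% / 21% figures. All data below are
# invented for illustration, not taken from the study.

def false_negative_rate(y_true, y_pred):
    """Share of actual successes (1s) the model wrongly predicted as failures (0s)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return fn / positives if positives else 0.0

# Toy records: (group, actual_success, predicted_success)
records = [
    ("Black", 1, 0), ("Black", 1, 1), ("Black", 1, 1), ("Black", 0, 0),
    ("Latinx", 1, 0), ("Latinx", 1, 1), ("Latinx", 0, 0), ("Latinx", 1, 1),
    ("white", 1, 1), ("white", 1, 1), ("white", 0, 0), ("white", 1, 1),
]

for group in sorted({g for g, _, _ in records}):
    y_true = [t for g, t, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    print(group, round(false_negative_rate(y_true, y_pred), 2))
```

A gap in this rate across groups means the model's errors fall disproportionately on some students, even if overall accuracy looks acceptable.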

“It is essential for institutional actors to understand how models perform for specific groups. Our study indicates that models perform better for students categorized as white and Asian,” said Dr. Denisa Gándara, an assistant professor of educational leadership and policy at the University of Texas at Austin and co-author of the study.

Dr. Hadis Anahideh, an assistant professor of industrial engineering at the University of Illinois Chicago and another co-author of the study, said she and her team expected to encounter bias in algorithms. But she was surprised, she said, to discover that attempts to mitigate that bias did not produce the strong, fair results they were hoping for.

“[Institutional leaders] should know that machine learning models on their own cannot be reliable. They need to be aware that algorithms can be biased and unfair because of bias in the historical data, which is all the algorithms can see and learn from,” said Anahideh.

Institutions use algorithms to predict college success and to inform admissions, financial aid allocation, inclusion in student success programs, recruitment, and many other tasks.

“Even if you use bias-mitigation technology, which you should, you may not be able to reduce unfairness from all aspects and to the full extent. Mitigation technology won’t do magic,” said Anahideh. “You really need to be aware of what notion [of fairness] you are using to mitigate unfairness, and how much you can reduce it.”
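Anahideh's caveat about the "notion" of fairness reflects a known property of these methods: fairness has several competing definitions, and equalizing one metric does not automatically equalize the others. A hedged sketch with invented predictions, where two groups receive positive predictions at identical rates (satisfying demographic parity) yet still have very different false-negative rates:

```python
# Hypothetical illustration: two fairness notions can disagree on the same
# predictions. The groups and numbers below are invented for illustration.

def positive_rate(pred):
    """Fraction of students the model predicts will succeed."""
    return sum(pred) / len(pred)

def fnr(true, pred):
    """False-negative rate: actual successes predicted as failures."""
    fn = sum(1 for t, p in zip(true, pred) if t == 1 and p == 0)
    pos = sum(true)
    return fn / pos if pos else 0.0

# Group A and Group B: actual outcomes and model predictions
true_a, pred_a = [1, 1, 1, 0], [1, 1, 0, 0]
true_b, pred_b = [1, 0, 0, 0], [1, 0, 1, 0]

parity_gap = abs(positive_rate(pred_a) - positive_rate(pred_b))
fnr_gap = abs(fnr(true_a, pred_a) - fnr(true_b, pred_b))

print(f"demographic parity gap: {parity_gap:.2f}")   # 0.00: "fair" by this notion
print(f"false-negative rate gap: {fnr_gap:.2f}")     # nonzero: unfair by another
```

A mitigation step that closes the parity gap here leaves the error-rate gap untouched, which is why practitioners must choose, and disclose, which fairness criterion they are optimizing.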
