As institutions of higher education turn to AI, machine learning, and data-driven algorithms to make their work more efficient, a new study published in AERA Open, the peer-reviewed journal of the American Educational Research Association (AERA), reminds administrators that algorithms can be racially biased.
“It is essential for institutional actors to understand how models perform for specific groups. Our study indicates that models perform better for students categorized as white and Asian,” said Dr. Denisa Gándara, an assistant professor of educational leadership and policy at the University of Texas at Austin and co-author of the study.
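Gándara's point about understanding how models perform for specific groups can be illustrated with a short, disaggregated evaluation. The sketch below is not from the study; it assumes a pandas DataFrame with hypothetical column names ("race_ethnicity", "graduated", "predicted") and simply computes accuracy separately for each group rather than one overall number.

```python
# Minimal sketch (not from the study): checking how a trained model performs
# for specific student groups, using hypothetical column names.
import pandas as pd
from sklearn.metrics import accuracy_score

def per_group_accuracy(df: pd.DataFrame) -> pd.Series:
    """Compute prediction accuracy separately for each racial/ethnic group.

    Assumed columns (hypothetical names):
      - 'race_ethnicity': the group label recorded for each student
      - 'graduated':      the observed outcome (1 = completed, 0 = did not)
      - 'predicted':      the model's predicted outcome
    """
    return df.groupby("race_ethnicity").apply(
        lambda g: accuracy_score(g["graduated"], g["predicted"])
    )

# Toy usage: a disaggregated view can expose gaps between groups that a
# single overall accuracy figure would hide.
toy = pd.DataFrame({
    "race_ethnicity": ["White", "White", "Black", "Black", "Asian", "Latino"],
    "graduated":      [1, 0, 1, 0, 1, 1],
    "predicted":      [1, 0, 0, 1, 1, 0],
})
print(per_group_accuracy(toy))
```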
Dr. Hadis Anahideh, an assistant professor of industrial engineering at the University of Illinois Chicago and another co-author of the study, said she and her team expected to encounter bias in algorithms. But she was surprised, she said, to discover that attempts to mitigate that bias did not produce the strong, fair results they were hoping for.
“[Institutional leaders] should know that machine learning models on their own cannot be reliable. They need to be aware that algorithms can be biased and unfair because of bias in the historical data, which is all the algorithms can see and learn from,” said Anahideh.
Institutions use algorithms to predict college success and to inform decisions about admissions, financial aid allocation, inclusion in student success programs, recruitment, and many other tasks.
“Even if you use bias-mitigation technology, which you should, you may not be able to reduce unfairness in every respect and to the full extent; mitigation technology won’t do magic,” said Anahideh. “You really need to be aware of which notion of fairness you are using to mitigate unfairness, and how much you can reduce it.”
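Anahideh's caution can be made concrete with a small sketch, not the study's method, that assumes the open-source fairlearn library and invented, hypothetical data: it trains a model under a demographic-parity constraint and then measures how much disparity remains under that one fairness notion. Mitigation typically narrows the gap without eliminating it, and other notions (such as equalized odds) may still be violated.

```python
# Minimal sketch (not the study's method): one bias-mitigation technique from
# the fairlearn library, with the residual unfairness measured under a single
# fairness notion (demographic parity). Data below is synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)

# Hypothetical data: features, a binary "success" label, and a group attribute.
X = rng.normal(size=(500, 4))
group = rng.choice(["A", "B"], size=500)
y = (X[:, 0] + (group == "A") * 0.5 + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Unmitigated baseline model and its selection-rate gap between groups.
baseline = LogisticRegression().fit(X, y)
base_gap = demographic_parity_difference(
    y, baseline.predict(X), sensitive_features=group
)

# Mitigated model: constrain training toward demographic parity.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)
mitigated_gap = demographic_parity_difference(
    y, mitigator.predict(X), sensitive_features=group
)

# The gap usually shrinks but rarely reaches zero, echoing the point that
# mitigation reduces, rather than removes, unfairness under a chosen notion.
print(f"selection-rate gap before mitigation: {base_gap:.3f}")
print(f"selection-rate gap after mitigation:  {mitigated_gap:.3f}")
```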