Mitigating Dataset Biases for Inclusive, Deployable, and Accessible Artificial Intelligence Technologies
Event Type
Diversity Equity & Inclusion Summit
This session WILL be recorded.
Registration Levels
Ultimate Supporter
Ultimate Attendee
Basic Attendee
Exhibitor Ultimate
Exhibitor Basic
Enhanced Attendee
Time
Wednesday, 11 August 2021, 10am - 11am PDT
Description
When machine learning algorithms are deployed in the real world, they often come packaged as interactive graphical interfaces that make them easy for end users to operate. However, a recurring issue is that the datasets these models are trained on are inherently biased. Because machine learning requires large quantities of data, and the predictions a model eventually makes on unseen data are contingent on the data it has already seen, this is a significant cause for concern. To facilitate fairer machine learning models that yield equitable outcomes, we discuss techniques to detect, analyze, and mitigate biases in large-scale datasets. We examine recent tools such as REVISE, which surfaces gender, racial, and geographical biases in large datasets. We also explore the data gathering and curation process and how to make it more inclusive, leading to fairer and more accessible results.
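As a rough illustration of the kind of dataset audit the session discusses, the sketch below tallies the distribution of a single annotated attribute and flags it when one value dominates beyond a threshold. The function name, labels, and threshold are illustrative assumptions; this is a minimal imbalance heuristic, not the REVISE tool's actual implementation.

```python
from collections import Counter

def attribute_skew(labels, threshold=0.75):
    """Flag an attribute as skewed when any single value accounts for
    at least `threshold` of the samples (a simple imbalance heuristic)."""
    counts = Counter(labels)
    total = sum(counts.values())
    proportions = {value: count / total for value, count in counts.items()}
    dominant_value, dominant_share = max(
        proportions.items(), key=lambda item: item[1]
    )
    return {
        "proportions": proportions,
        "dominant_value": dominant_value,
        "dominant_share": dominant_share,
        "skewed": dominant_share >= threshold,
    }

# Hypothetical annotation column: gender labels from an image dataset
annotations = ["male"] * 8 + ["female"] * 2
report = attribute_skew(annotations)
print(report["skewed"], round(report["dominant_share"], 2))  # → True 0.8
```

Checks like this, run per attribute (and per attribute combination) across a large dataset, are the starting point for the deeper bias analyses covered in the talk.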