University of Washington researchers found significant racial, gender and intersectional bias in how three state-of-the-art large language models ranked resumes. The models favored white-associated...
It’s just a method of shifting the blame And responsibility away from real people to try to avoid getting punished.
Yep. Biased AI works just fine when the goal is creation of an as-desperate-as-possible underclass.
Only, it still doesn’t, because eventually even the ultra rich sponsor’s niece is equally dead - once the AI takes over doctoring and mis-recommemds a needless and dangerous surgery.
It’s just a method of shifting the blame And responsibility away from real people to try to avoid getting punished.
Yep. Biased AI works just fine when the goal is creation of an as-desperate-as-possible underclass.
Only, it still doesn’t, because eventually even the ultra rich sponsor’s niece is equally dead - once the AI takes over doctoring and mis-recommemds a needless and dangerous surgery.
Machine learning models in medicine has biases against darker skins for identifying diseases. So I’m guessing lot of that will be there too