Study: AI hiring bots prefer men

1,661 Views | 17 Replies | Last: 7 mo ago by Jeeper79
techno-ag
https://www.theregister.com/2025/05/02/open_source_ai_models_gender_bias/

Quote:

Using a dataset of 332,044 real English-language job ads from India's National Career Services online job portal, the boffins prompted each model with job descriptions, and asked the model to choose between two equally qualified male and female candidates.

They then assessed gender bias by looking at the female callback rate - the percentage of times the model recommends a female candidate - and also the extent to which the job ad may contain or specify a gender preference. (Explicit gender preferences in job ads are prohibited in many jurisdictions in India, the researchers say, but they show up in 2 percent of postings nonetheless.)

"We find that most models reproduce stereotypical gender associations and systematically recommend equally qualified women for lower-wage roles," the researchers conclude. "These biases stem from entrenched gender patterns in the training data as well as from an agreeableness bias induced during the reinforcement learning from human feedback stage."
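For reference, the "female callback rate" the researchers describe reduces to a simple proportion. A minimal sketch (the function name and data are illustrative, not from the paper):

```python
# Hypothetical illustration of the metric in the quoted study: the share
# of prompts where the model recommends the female candidate.
def female_callback_rate(recommendations):
    """recommendations: list of 'F' or 'M', one entry per job-ad prompt."""
    if not recommendations:
        raise ValueError("no recommendations to score")
    return recommendations.count("F") / len(recommendations)

# With equally qualified candidates and no bias, this should hover near 0.5.
print(female_callback_rate(["F", "M", "F", "F"]))  # 0.75
```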


Sounds like a giant hypothetical and I'm not really sure it shows true bias. The models may have been simply more likely to pick the first candidate, for instance. As they say, more studies are needed and I'd like to see it reproduced.

But the problem with reproducing the study is, by the time new researchers get around to it, Llama and Gemini et al will have come out with new and improved versions.

Still, maybe this is a way to fight DEI. Let AI choose who to interview.
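The "maybe it just picks the first candidate" worry is actually testable without a full replication: present each pair in both orders and see whether the choice flips. A hypothetical sketch (the model stub and names are assumptions, not from the study):

```python
def counterbalanced_pick(model, job_ad, male, female):
    """Query a model twice with the candidate order swapped.

    If the model's choice flips with the ordering, it is showing position
    bias rather than gender bias."""
    pick_a = model(job_ad, male, female)    # male listed first
    pick_b = model(job_ad, female, male)    # female listed first
    return pick_a, pick_b, pick_a != pick_b  # True means order-sensitive

# A toy "model" that always picks whichever candidate is listed first:
first_picker = lambda ad, first, second: first
print(counterbalanced_pick(first_picker, "welder", "M", "F"))  # ('M', 'F', True)
```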
Trump will fix it.
Im Gipper
Quote:

AI hiring bots prefer men


Gay!

I'm Gipper
BigRobSA
So AI is smart, and wants to get **** done.

YouBet
Check the drama setting on the model. If it's not set to Low, then women are going to get filtered out more often.
BadMoonRisin
But doesn't AI know that you only have to pay a woman 77% of what you pay a man?
agent-maroon
BigRobSA said:

So AI is smart, and wants to get **** done.



What does the "I" in "AI" stand for again?
TAMU1990
What about trans women who are really men? Is AI ready for the craziness of the democrats? They'll reprogram AI to meet their world view.
Sid Farkas
There aren't enough listings for sandwich makers maybe?
agent-maroon
You misspelled "sammich"
ABATTBQ11
techno-ag said:


Sounds like a giant hypothetical and I'm not really sure it shows true bias. The models may have been simply more likely to pick the first candidate, for instance. As they say, more studies are needed and I'd like to see it reproduced.

But the problem with reproducing the study is, by the time new researchers get around to it, Llama and Gemini et al will have come out with new and improved versions.

Still, maybe this is a way to fight DEI. Let AI choose who to interview.


It's very likely there IS bias in the training data, and that'll get picked up on. In my master's program, we had several case studies where ML/AI models showed biases for race or gender without explicitly having the information, but when you dug into the training sets, there was a lot of correlated data influencing and biasing the outcome. The influences from the training data can be very minute, yet have a large impact on biasing outcomes.

As a conceptual example, go ask an image generator for an analog clock that reads 4:40. They pretty much all struggle to show anything other than 10:10 or 2:50. Despite being able to transform the look of the hands and change their color and style, they can't actually move them to tell time. The model obviously knows what the hands are, but it still can't get away from that orientation. Why? Because the image generators are heavily trained on shopping pictures, and almost all of the clocks in those pictures read those times because marketers have determined it's the most aesthetically pleasing due to its symmetry. That's basically burned the position of the hands into the model. You can ask it for different times, describe the rotation of the hands, and do all sorts of things to try to get a different time, but that pattern is going to keep showing up. You can even describe a clock at different times without using the word clock, and guess what? You're going to get those times.

Try the same thing with wine glasses. Ask for one that is full to the brim with wine. You will pretty much always get one that's half full. Why? That's basically what every commercial picture of a wine glass is. Take wine out of the picture and describe a stemmed glass with a translucent red liquid filling it to the brim. Still half full. Clear liquid? You can get pretty damn close. Once the description even approximates wine, though, you are sucked into that gravity well.

And it's the same when you build recommenders or anything else. You can have things in your data set that pull the model a certain way and that simply can't be overcome because the weighting becomes so strong. It's not giving you what you asked for; it's giving you what you've unwittingly programmed it to give.
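That "unwitting programming" can be shown with a toy recommender (all data below is made up): it is trained only on historical title-hire counts and never sees any rule about gender, yet it reproduces the skew baked into its records.

```python
from collections import Counter

# Made-up historical hiring records: (job-title keyword, gender hired).
# Gender is never used as a rule anywhere; it is merely correlated with titles.
history = (
    [("nurse", "F")] * 90 + [("nurse", "M")] * 10
    + [("engineer", "M")] * 85 + [("engineer", "F")] * 15
)

counts = Counter(history)

def recommend(title):
    """Pick between two equally qualified candidates by past co-occurrence.

    Nothing here encodes a preference; the skew comes entirely from the
    correlations in the training records."""
    female = counts[(title, "F")]
    male = counts[(title, "M")]
    return "F" if female > male else "M"

print(recommend("nurse"))     # F
print(recommend("engineer"))  # M
```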
MelvinUdall
Now ask AI who they would prefer making a sandwich…
techno-ag
ABATTBQ11 said:


In my master's program, we had several case studies where ML/AI models showed biases for race or gender without explicitly having the information, but when you dug into the training sets, there was a lot of correlated data that was influencing and biasing the outcome. The influences from the training data can be very minute, yet have a large impact on biasing outcomes.


Very cool. TexAgs knows stuff.
Trump will fix it.
BigRobSA
MelvinUdall said:

Now ask AI who they would prefer making a sandwich…



"Tanya93"
ts5641
Even the bots know it's easier working with men.
MouthBQ98
Yep, they are essentially averaging engines. They can find similar elements, calculate some evident things about the data, and come up with all sorts of probabilities about what they should generate based on queries, BUT they don't understand context or abstraction, or the commonly understood elements of human experience and interpretation that are not in the data itself. A model doesn't understand the data the way a human interprets it without a vast amount of detailed feedback, and even then its learning is slow.
Logos Stick
In other words, we have to assume correlation - not causation - and bake what we know to be "true" into the training. So AI will be just as politically correct and woke as humans.
Jeeper79
India is much more male centric than the US. Depending on the training set for the AI, this doesn't surprise me.