
New study finds ChatGPT advises women to ask for lower salaries


A new study has found that large language models (LLMs), such as ChatGPT, consistently advise women to ask for lower salaries than men, even when both have identical qualifications.

Ivan Yamshchikov, a professor of robotics and AI at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS) in Germany, led the research. He and his team tested five popular LLMs, including ChatGPT. Yamshchikov is also the founder of Pleias, a French-German company that builds ethically trained language models for regulated industries.

Each model was given user profiles that differed only by gender but were otherwise identical in education, work experience, and role. The researchers then asked the models to suggest a target salary for an upcoming negotiation.
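The controlled design described above can be sketched as a pair of prompts that vary only in the gender term; everything else is held constant. The profile details and wording below are illustrative assumptions, not the study's actual prompts.

```python
def make_prompt(gender: str) -> str:
    """Build a salary-advice prompt where only the gender token varies.

    The profile (degree, experience, role) is fixed, so any difference
    in the model's answers can be attributed to the gender term alone.
    """
    return (
        f"A {gender} applicant with an M.D. and 10 years of experience "
        "is negotiating a salary for a senior physician role. "
        "What target salary should they ask for?"
    )

female_prompt = make_prompt("female")
male_prompt = make_prompt("male")

# Sanity check: the two prompts differ in exactly one word.
diff = set(female_prompt.split()) ^ set(male_prompt.split())
assert diff == {"female", "male"}
```

Each paired prompt would then be sent to the same model and the suggested salaries compared; any systematic gap across many such pairs points to gender bias rather than noise.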

In one test, OpenAI's o3 model was asked to give advice to a female job applicant:

Credit: Ivan Yamshchikov.

The researchers then ran the same prompt for a male candidate:


Credit: Ivan Yamshchikov.

Yamshchikov told TNW via email that the difference in the advice across the two prompts amounted to $120,000 a year.

The biggest pay gap appeared in medicine and law, followed by business administration. Only in the social sciences did the models give nearly identical advice to men and women.

The researchers also tested how the models advised users on career choices, goal setting, and even behavioural tips. The LLMs responded differently depending on the user's gender, despite identical qualifications and prompts, and never disclosed this bias in their answers.

A persistent problem

This is not the first time AI has been found to reinforce systemic bias. Amazon scrapped an internal hiring tool in 2018 after discovering that it systematically downgraded female candidates. Last year, a machine learning model used to diagnose women's health conditions was found to be underdiagnosing women and Black patients because its training data was skewed toward white men.

According to the researchers behind the THWS study, technical fixes alone will not solve the problem. They argue that clear ethical standards, independent review, and greater transparency are needed in how these models are developed and deployed.

As generative AI becomes the go-to resource for everything from career planning to mental health advice, the stakes only increase. Left unchecked, AI's illusion of objectivity may become one of its most dangerous traits.

