Close Menu
The Daily PostingThe Daily Posting
  • Home
  • Android
  • Business
  • IPhone
    • Lifestyle
  • Politics
  • Europe
  • Science
    • Top Post
  • USA
  • World
Facebook X (Twitter) Instagram
Trending
  • Jennifer Lopez and Ben Affleck reveal summer plans after Europe trip
  • T20 World Cup: Quiet contributions from Akshar Patel, Kuldeep Yadav and Ravindra Jadeja justify Rohit Sharma’s spin vision | Cricket News
  • The impact of a sedentary lifestyle on health
  • Bartok: The World of Lilette
  • Economists say the sharp rise in the U.S. budget deficit will put a strain on Americans’ incomes
  • Our Times: Williams memorial unveiled on July 4th | Lifestyle
  • Heatwaves in Europe are becoming more dangerous: what it means for travelers
  • Christian Science speaker to visit Chatauqua Institute Sunday | News, Sports, Jobs
Facebook X (Twitter) Instagram
The Daily PostingThe Daily Posting
  • Home
  • Android
  • Business
  • IPhone
    • Lifestyle
  • Politics
  • Europe
  • Science
    • Top Post
  • USA
  • World
The Daily PostingThe Daily Posting
USA

Research shows AI chatbots like ChatGPT treat black names differently

thedailyposting.comBy thedailyposting.comApril 6, 2024No Comments

[ad_1]

Why not ask a chatbot for advice? A new study warns that the answer may depend on how black your name sounds.

A recent paper by researchers at Stanford Law School found “significant differences between race- and gender-related names” in chatbots such as OpenAI’s ChatGPT 4 and Google AI’s PaLM-2. For example, a chatbot might say to a job applicant with a name like Tamika that as a lawyer he should offer a salary of $79,375, but if you change the name to something like Todd, it might say His salary will be increased to $82,485.

The authors highlight the risks behind these biases, especially as companies incorporate artificial intelligence into daily operations through internal and customer-facing chatbots.

“Companies went to great lengths to come up with guardrails for the model,” Julian Nyarko, a professor at Stanford Law School and one of the study’s co-authors, told USA TODAY. “But it’s very easy to find situations where the guardrails don’t work and the model can behave biased.”

This illustrated photo shows Google's AI (artificial intelligence) app BardAI (or ChatBot) (CL), OpenAI's app ChatGPT (CR), and other AI apps displayed on a smartphone screen in Oslo on July 12, 2023. icon.

Bias seen in different scenarios

The paper, published last month, asks AI chatbots for advice on five different scenarios to identify potential stereotypes.

◾ Purchasing: Questions about how much it costs to buy a house, bike, or car.

◾ Chess: Questions about the probability of a player winning a match.

◾ Government: Ask for predictions about a candidate’s chances of winning an election.

◾ Sports: Ask for opinions on where you would rank an athlete on a list of 100 athletes.

◾ Recruitment: Ask for advice on how much salary to offer to job seekers.

The study found that most scenarios showed bias against black people and women. The only consistent exception was when asking for an opinion on an athlete’s status as a basketball player. In this scenario, the bias was in favor of black athletes.

This finding suggests that AI models encode common stereotypes based on the data they are trained on, which influences their responses.

Tim Vandeborne, a teacher at Buckeye Career Center, points to a prompt typed into ChatGPT.

AI chatbot “systemic problems”

The paper points out that, unlike previous studies, this study was conducted through an audit analysis aimed at measuring the level of prejudice in various areas of society, such as housing and employment.

Nyarko said the study is similar to the famous 2003 study in which researchers investigated hiring bias from submitting the same resume with both black and white-sounding names and found “significant discrimination” against black-sounding names. It said it was inspired by similar analyzes such as the 2016 study.

In AI research, researchers repeatedly asked questions to chatbots such as OpenAI’s GPT-4, GPT-3.5, and Google AI’s PaLM-2, changing only the names referenced in the queries. Researchers used white male-like names like Dustin and Scott. White female-like names like Claire and Abigail. Black male names like DaQuan and Jamal. There are also black feminine names like Janae and Keyana.

According to the findings, the AI ​​chatbot’s advice “systematically disadvantages names commonly associated with racial minorities and women,” with names associated with black women receiving the “most unfavorable” outcomes. That’s what it means.

The researchers found that the bias was consistent across 42 prompt templates and several AI models, “indicating a systemic problem.”

OpenAI said in an emailed statement that bias is a “significant industry-wide issue” and that its safety team is working to address it.

“[We]continually iterate our models to improve performance, reduce bias, and mitigate harmful outputs,” the statement reads.

The OpenAI logo appears near a response by the AI ​​chatbot ChatGPT on the website in this illustrated photo taken on February 9, 2023.

Google did not respond to a request for comment.

First step: “Just be aware that these biases exist.”

Nyarko said the first step AI companies should take to address these risks is to “know that these biases exist” and continue to test for them.

However, the researchers also found that certain advice should Varies by socio-economic group. For example, Nyarko said it might make sense for chatbots to tailor financial advice based on a user’s name because of the correlation between affluence, race, and gender in the United States. Stated.

Are you scared of AI?Here’s how to get started and use it to make your life easier

“It may not necessarily be a bad thing for a model to give more conservative investment advice to someone with a black-sounding name, assuming they are not very wealthy,” Nyarko says. . “So it doesn’t have to be a terrible outcome, but it’s something we should be able to know about and be able to mitigate in undesirable situations.”

[ad_2]

Source link

thedailyposting.com
  • Website

Related Posts

Economists say the sharp rise in the U.S. budget deficit will put a strain on Americans’ incomes

June 28, 2024

USA men beat Spain 10-8 in Berkeley

June 28, 2024

Simone Biles at US gymnastics Olympic trials results, highlights

June 28, 2024
Leave A Reply Cancel Reply

ads
© 2025 thedailyposting. Designed by thedailyposting.
  • Home
  • About us
  • Contact us
  • DMCA
  • Privacy Policy
  • Terms of Service
  • Advertise with Us
  • 1711155001.38
  • xtw183871351
  • 1711198661.96
  • xtw18387e4df
  • 1711246166.83
  • xtw1838741a9
  • 1711297158.04
  • xtw183870dc6
  • 1711365188.39
  • xtw183879911
  • 1711458621.62
  • xtw183874e29
  • 1711522190.64
  • xtw18387be76
  • 1711635077.58
  • xtw183874e27
  • 1711714028.74
  • xtw1838754ad
  • 1711793634.63
  • xtw183873b1e
  • 1711873287.71
  • xtw18387a946
  • 1711952126.28
  • xtw183873d99
  • 1712132776.67
  • xtw183875fe9
  • 1712201530.51
  • xtw1838743c5
  • 1712261945.28
  • xtw1838783be
  • 1712334324.07
  • xtw183873bb0
  • 1712401644.34
  • xtw183875eec
  • 1712468158.74
  • xtw18387760f
  • 1712534919.1
  • xtw183876b5c
  • 1712590059.33
  • xtw18387aa85
  • 1712647858.45
  • xtw18387da62
  • 1712898798.94
  • xtw1838737c0
  • 1712953686.67
  • xtw1838795b7
  • 1713008581.31
  • xtw18387ae6a
  • 1713063246.27
  • xtw183879b3c
  • 1713116334.31
  • xtw183872b3a
  • 1713169981.74
  • xtw18387bf0d
  • 1713224008.61
  • xtw183873807
  • 1713277771.7
  • xtw183872845
  • 1713329335.4
  • xtw183874890
  • 1716105960.56
  • xtw183870dd9
  • 1716140543.34
  • xtw18387691b

Type above and press Enter to search. Press Esc to cancel.