
Maryland researchers are working to curb biases, inequity in Artificial Intelligence


BALTIMORE -- Artificial intelligence touches every facet of our lives, and critical research is being done in Maryland to make sure it's equitable.

Dr. Kofi Nyarko, a professor of electrical and computer engineering at Morgan State University, said that if AI is trained on biased data, the results will be biased.

"Everybody wants to have the same opportunities that everybody else has, right?" he said. "But depending on if the AI that is being used is unfair, some individuals may experience prejudice and have fewer opportunities than other people have."

This comes into play with something as simple as washing your hands.

"You want to put your hand underneath the faucet or water. But if you are lighter skinned, the sensor can pick up your hand a lot better. But if you're darker skinned, it's more of an engineering challenge," he said.

Here's an example of bias affecting college applications:

"There was a scandal a little while ago where it was found the system that was screening candidates would automatically kick out names that didn't sound European," Nyarko said.

Gabriella Waters is with the Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS. Through its research, the center aims to detect bias, improve transparency, and develop standards. AI systems can affect whether you get a home, a job, or healthcare coverage.

"This technology affects our lives every single day," Waters said. "It can be the difference between someone getting a cancer treatment and not. So when it gets down to that level, it's about someone living or not based on a decision made by a technology implement."

Waters said that as AI becomes more embedded in our lives, the responsibility to make sure it's safe for everyone increases exponentially.

"If your autonomous vehicle decides that the person in the crosswalk isn't actually a person because it's been trained on data that says this doesn't look like a person, because it doesn't have this feature, then what?" Waters said.

Nyarko said that we need protocols, best practices, and eventually laws.

"We want to make algorithmic equity a situation where people pay attention to it," he said. "And companies understand, it doesn't have to be about your bottom line. It's the right thing to do."

The White House has introduced a blueprint for an "AI Bill of Rights," identifying principles that should guide the design, use, and deployment of automated systems to protect the American public.
