When sexist, racist robots discriminate, are their owners at fault?

Artificial intelligence has the potential to wreak havoc on diversity initiatives

February 20, 2018

Artificial intelligence (AI), it seems, has become the newest target for proponents of diversity in the workplace.

Some experts claim that AI is increasingly biased against women and non-white people. Even robots, they claim, are being sexist and racist. The bias may not be deliberate but, in some ways, that makes things worse: many believe, quite reasonably, that unconscious bias is the invisible enemy of workplace diversity. If so, artificial intelligence has the potential to wreak havoc on diversity initiatives.

But what if the agent of bias, the AI software, has no consciousness, let alone a conscience? Can employers who use the software be held legally accountable for its biases? As artificial intelligence worms its way into the business world’s infrastructure, the problem is taking on growing proportions.

“The difficulty is that today’s software is solving problems that have traditionally been left to humans, like human resource tools for hiring, promotion and firing, programs for credit scoring, and public safety inquiries into the likelihood of a particular person or group committing various crimes,” says Maya Medeiros, a patent and trademark lawyer in Norton Rose Fulbright Canada LLP’s Toronto office, who has extensive experience in artificial intelligence and a degree in mathematics and computer science.

“Some companies are even developing algorithms for sentencing in criminal cases.”

Even though employers may not be aware of the intricacies of biases inherent in particular software, they may have a duty to exercise reasonable care not to use discriminatory programs. “Employers won’t be able to get away with saying ‘the tool did it,’ because there is often a way for them to evaluate the tool, at least in a limited fashion,” Medeiros says.

Sara Jodka, a lawyer in Dickinson Wright PLLC’s office in Columbus, Ohio, who offers preventative counselling services to employers, says employers should “look under the hood” of the technology and determine that the software uses an appropriate range of “data sets,” the criteria fed into the software that power its determinations.

Absent an appropriate range of data sets, AI is capable of discriminating across broad categories.

For example, hiring software that screens for such factors as “periods of long unemployment” could discriminate against single mothers and parents in general, or perhaps against military veterans. Similarly, AI with cognitive emotional components that analyze video interviews, messages or answers to questions may discriminate against individuals with physical and mental challenges.
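
To make the concern concrete, here is a minimal, hypothetical sketch of the kind of "look under the hood" audit Jodka describes. The candidate data and the employment-gap screening rule are invented for illustration; the four-fifths (80 per cent) comparison is a rough guideline drawn from U.S. employment-discrimination practice, not a feature of any particular vendor's software.

```python
# Hypothetical illustration: auditing a single screening rule for disparate impact.
# The candidate data and the "long employment gap" rule are invented for this sketch.

candidates = [
    # (group, months_of_longest_employment_gap)
    ("group_a", 2), ("group_a", 4), ("group_a", 1), ("group_a", 3),
    ("group_b", 14), ("group_b", 3), ("group_b", 20), ("group_b", 2),
]

def passes_screen(gap_months: int) -> bool:
    """Hypothetical rule: reject anyone with an employment gap over 12 months."""
    return gap_months <= 12

def selection_rate(group: str) -> float:
    """Share of a group's candidates who clear the screen."""
    pool = [gap for g, gap in candidates if g == group]
    return sum(passes_screen(gap) for gap in pool) / len(pool)

rate_a = selection_rate("group_a")
rate_b = selection_rate("group_b")

# Four-fifths rule of thumb: a selection rate below 80% of the highest group's
# rate is often treated as a signal of adverse impact worth investigating.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.0%}, B={rate_b:.0%}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: this screening factor deserves a closer look.")
```

Even this toy check shows how a single, facially neutral criterion can produce sharply different pass rates between groups, which is exactly the pattern an employer would want to catch before the tool goes live.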

“Employers need to ensure that AI embeds proper values, that its values are transparent and that there is accountability, in the sense of identifying those responsible for harm caused by the system,” Medeiros says.

Training the software properly is key as well. “Good AI learns and evolves over time through machine learning,” Medeiros adds. “But unless the training data reflects diverse values, the employer may be creating or reinforcing a tool that doesn’t embed the right values.”
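
A simple, hypothetical sketch of what checking that training data might look like follows. The records and field names are invented; the point is only that skew in the historical data, which a model will learn from, can be surfaced before any training happens.

```python
from collections import Counter

# Hypothetical training records for a hiring model; fields are invented
# for illustration only.
training_records = [
    {"group": "group_a", "hired": 1},
    {"group": "group_a", "hired": 0},
    {"group": "group_a", "hired": 1},
    {"group": "group_b", "hired": 0},
]

def representation_report(records):
    """Show how each group is represented in the data and how often the
    historical outcome was positive, so imbalance is visible before training."""
    by_group = Counter(r["group"] for r in records)
    for group, count in by_group.items():
        positives = sum(r["hired"] for r in records if r["group"] == group)
        print(f"{group}: {count} records ({count / len(records):.0%} of data), "
              f"{positives / count:.0%} positive outcomes")

representation_report(training_records)
```

If one group is barely represented, or its historical outcomes are uniformly negative, a model trained on that data will tend to reproduce the imbalance, which is the risk Medeiros is describing.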

Following through on this type of investigation and training can be a problem, however, especially for smaller businesses that may have no in-house technological expertise or lack the resources to hire outside providers.

“Ultimately, companies providing or supporting AI solutions will have to adopt a more transparent framework,” Medeiros says. “It doesn’t have to be at the code level, which can cause trade secret problems. But developers could provide at least the basic social assumptions in the software, as well as the training data.”

Transparency requirements are already working their way into regulators’ requirements. The U.S. Food and Drug Administration, for example, has indicated that it will allow the use of AI in medical devices only where the developers enable independent review of the software’s limitations, models and machine learning processes.

In any event, Jodka suggests that employers take advantage of their leverage in contractual negotiations to seek indemnity from AI developers.

“Because it may be hard to determine precisely the extent to which the developers or the data sets are prone to blind biases, employers should contract around liability by demanding tight clauses fully indemnifying them against damages occasioned by discriminatory technology,” she says.

From a developer’s perspective, Medeiros suggests that having a diverse set of employees can go a long way.

“Bias comes in at the human stage, so utilizing a diverse set of developers helps balance a group’s blind spots,” Medeiros says. “Developers working in the human resources space should seek expert input on the social as well as the technical side.”
