Artificial Intelligence is playing a growing role in human resources departments across the country, changing the way businesses recruit, hire, monitor and discipline employees.
But while AI can help make HR departments more efficient, it also carries a significant risk: the same tools that help companies hire and manage employees can also discriminate against groups of workers.
Policymakers are paying attention. Federal agencies such as the Department of Labor and the Equal Employment Opportunity Commission have put AI discrimination on their enforcement agendas. Meanwhile, states including California, Connecticut and New York are considering measures that would require independent bias audits of automated hiring tools, similar to New York City’s law that went into effect in 2023.
It’s important for companies to scrutinize their AI vendors closely so they understand what data the tools collect and how that data is used to make decisions, lawyers said.
“These tools give you more information than we’ve ever had,” EEOC Commissioner Keith Sonderling said at a conference in April. “If you have all this information that you’re not allowed to make an employment decision with, that’s a lot of evidence that you made an unlawful hiring or firing decision based on those protected characteristics.”
Combating AI bias
AI-driven screening can result in unintentional bias against applicants based on race, gender, or other characteristics protected by anti-discrimination law.
Biases can creep into models through flawed data, opacity about how models work, and misuse of AI tools, the EEOC and eight other agencies said in an April statement in which they pledged to enforce civil rights laws in connection with AI usage.
The EEOC has made policing the technology an enforcement priority, issued employer guidance on the matter, and settled its first AI discrimination case against an employer that allegedly configured its software to reject older applicants.
One example of potential AI bias: a recruiting tool might favor candidates from ZIP codes near an employer’s location because retention rates are higher for employees who live closer to their jobs. But if the population in those areas is overwhelmingly white, the tool may inadvertently screen out candidates of color.
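To make that ZIP-code example concrete, here is a minimal sketch in Python of the kind of adverse-impact check an audit might run: it computes selection rates per group and flags any group selected at less than four-fifths the rate of the top group, a common EEOC screening heuristic rather than a legal test. The group labels and numbers are invented for illustration.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check.
# Group names and outcomes are hypothetical; a real audit needs
# actual applicant data and review by counsel.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, e.g. ("A", True)."""
    totals, picks = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        picks[group] += bool(selected)
    return {g: picks[g] / totals[g] for g in totals}

def adverse_impact_ratios(records):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are a common red flag under the EEOC's
    four-fifths guideline."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical outcomes from a resume-screening tool that favors
# candidates in nearby ZIP codes:
sample = ([("near_zip", True)] * 60 + [("near_zip", False)] * 40
          + [("far_zip", True)] * 30 + [("far_zip", False)] * 70)

for group, ratio in adverse_impact_ratios(sample).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

In this made-up sample, candidates from distant ZIP codes are selected at half the rate of those nearby, exactly the kind of proxy effect the agencies warn about.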
To reduce the risks, companies should vet AI tools to ensure they don’t introduce unintended bias and should establish human review processes for algorithm-driven decisions, lawyers and industry sources said.
In-house counsel are also preparing to comply with potential state laws that would require employers to independently audit their systems for bias, as well as measures that would require notifying employees and job applicants when businesses use AI and providing an opt-out for those who don’t want to be evaluated by automated systems.
California’s AI policy choices are expected to have a major impact on their businesses, according to half of the nearly 400 in-house counsel, HR managers, and business leaders who responded to a survey last summer by the law firm Littler Mendelson PC. One-third of respondents said New York City would be their biggest source of concern.
California privacy regulators are also drafting regulations regarding AI-related data protection.
Other states have enacted narrower restrictions: Illinois limits employers’ use of AI to analyze job applicants’ video interviews, and Maryland restricts the use of facial recognition technology in interviews.
“The clients that I’m working with are trying to monitor this area, what the progress looks like, where the proposals stand,” said Jennifer Betts, a labor and employment attorney at Ogletree, Deakins, Nash, Smoak & Stewart. “Most of the proposed regulations across the US are fairly targeted, identifying specific steps that employers must take” if they want to use certain AI tools.
Vetting AI tools
Best practices beginning to emerge among employers include using only AI decision tools that have been screened for potential bias before deployment, then periodically re-reviewing them, Betts said.
For companies that choose to conduct a voluntary audit, she suggested involving in-house or outside lawyers so that the results are protected by attorney-client privilege.
Some businesses are creating dedicated AI or innovation teams to vet tools before they’re deployed, Betts said. These teams may include in-house counsel, human resources leaders, and information technology specialists.
Businesses should also consider maintaining human review of any AI-driven decisions, carefully reviewing contractual agreements with AI vendors, and providing easy-to-understand notices to employees and job applicants when AI decision tools are used, including information about how the tools are being applied, where appropriate, Betts said.
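As one illustration of what human review of algorithm-driven decisions can look like in practice, the sketch below routes every adverse or borderline automated score to a person rather than letting the model reject anyone on its own. The function names, fields, and threshold are hypothetical, not drawn from any vendor’s product.

```python
# Illustrative human-in-the-loop gate for an automated screening score.
# Thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    score: float   # model output in [0, 1]
    decision: str  # "advance" or "needs_human_review"

def gate(candidate_id: str, score: float,
         advance_at: float = 0.75) -> ScreeningResult:
    """Only clearly positive scores advance automatically; every
    rejection or borderline case is queued for a human reviewer,
    so no adverse decision is made by the model alone."""
    if score >= advance_at:
        return ScreeningResult(candidate_id, score, "advance")
    return ScreeningResult(candidate_id, score, "needs_human_review")

print(gate("c-101", 0.91))  # advances automatically
print(gate("c-102", 0.40))  # routed to a person, never auto-rejected
```

The design choice here is that the automated tool can only say yes on its own; saying no always requires a human, which is one way to keep a person accountable for adverse decisions.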
The tech companies that make and sell these AI products are already feeling pressure from the businesses that buy and use them to conduct bias audits of AI decision-making tools, as employers seek to avoid workplace discrimination claims and prepare for government disclosure requirements and European Union regulations, said Shea Brown, founder and CEO of the algorithm-auditing firm BABL AI.
“They know more regulations are coming,” Brown said.
It’s important to understand what data a tool collects, and doing so involves “getting into the weeds with vendors before you agree to work with them,” he said.
Cisco uses AI tools in recruiting and hiring to help with a problem of scale: the company receives hundreds of resumes or more every few months, it has said. But for smaller companies, it may not make sense to take on the added expense, including paying for a bias audit, Brown said.
Companies should also remember that “everyone along the chain can be liable” for discriminatory results from their AI tools, not just the vendors, he warned. “So you have a responsibility to do your due diligence.”
The risk of discrimination has made The Planet Group, a recruiting and staffing company, cautious when choosing vendor partners for AI tools, said Marni Helfand, the company’s general counsel and chief human resources officer.
When reviewing an AI vendor, Helfand said, she asks them to validate their approach. Any vendor that promises the world with its AI tool also raises a red flag.
“Like anything that consumers buy, if it sounds too good to be true, it probably is,” Helfand said.
Controlling ‘Bossware’
There is also growing policymaker interest in digital workforce monitoring tools, which can include using AI to help with decisions about promotions or disciplinary actions.
New York and several other states, including California and Washington, have enacted laws restricting productivity quotas for warehouse workers.
Outside the warehouse context, two bills pending in New York State would restrict employers’ use of surveillance technology, sometimes called “bossware.”
“The electronic monitoring piece has gotten very big since Covid, with so many remote workers,” said Karla Grossenbacher, a labor and employment attorney who focuses on workplace privacy at Seyfarth Shaw LLP in Washington, DC. “Are they actually doing their jobs, and are they protecting your data?”
Employers generally don’t run into privacy problems by monitoring how much time employees are logged into company computer systems or how often they visit non-work-related websites, Grossenbacher said. Those basic monitoring methods are sufficient for most businesses, she added, but companies can consider more advanced techniques, such as tracking access to sensitive data, for employees who handle information like customer financial records.
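For a sense of how lightweight that baseline approach can be, here is a minimal sketch that totals logged-in hours per employee from sign-in records, with no keystroke or website capture involved. The log format, names, and timestamps are invented for illustration.

```python
# Sketch: total daily logged-in hours per employee from sign-in logs.
# The (user, login, logout) records here are hypothetical.
from collections import defaultdict
from datetime import datetime

sessions = [
    ("alice", "2024-05-06 09:02", "2024-05-06 12:30"),
    ("alice", "2024-05-06 13:15", "2024-05-06 17:40"),
    ("bob",   "2024-05-06 10:00", "2024-05-06 15:05"),
]

hours = defaultdict(float)
for user, start, end in sessions:
    t0 = datetime.strptime(start, "%Y-%m-%d %H:%M")
    t1 = datetime.strptime(end, "%Y-%m-%d %H:%M")
    hours[user] += (t1 - t0).total_seconds() / 3600

for user, h in sorted(hours.items()):
    print(f"{user}: {h:.1f} logged-in hours")
```

Aggregates like these come from systems most employers already run, which is part of why this tier of monitoring rarely raises the privacy concerns that more invasive tools do.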
She said the few existing regulations on electronic employee monitoring generally limit location tracking of personal vehicles that employees also use for work.
“You have a right to know what employees are doing when they’re at work,” Grossenbacher said, adding that many companies don’t need, or don’t want to spend the money on, multiple layers of monitoring.