The White House released the AI Bill of Rights, which instructs banks and other companies on the consumer protections they need to build into their AI-based programs.

The plan, released Tuesday, lays out six rights consumers should have when companies deploy artificial intelligence: protection from unsafe or ineffective systems; freedom from discrimination by algorithms; data privacy; notice when algorithmic systems are in use; the ability to opt out; and access to a human customer service alternative.

The bill of rights is not law and is not enforceable, but it shows how the Biden administration wants consumer rights to be protected when companies like banks use AI.

“You can think of this as a preamble to future regulatory action,” said Jacob Metcalf, program director for AI on the Ground at the nonprofit research group Data & Society. The White House Office of Science and Technology Policy, which drafted the document, doesn’t write laws, but it sets strategic priorities for other government agencies to follow, he explained.

“You can really look at it as a tone-setting document,” he said.

Banks’ and fintechs’ use of artificial intelligence has repeatedly drawn scrutiny from regulators and consumer advocates, especially in lending. Consumer Financial Protection Bureau Director Rohit Chopra recently warned that relying on artificial intelligence in lending decisions could lead to illegal discrimination. Banks’ use of AI in facial recognition has also been highlighted, and their use of AI in hiring has been questioned. And this is the tip of the iceberg: banks and fintechs use artificial intelligence in many other areas, including fraud detection, cybersecurity and virtual assistants.

The bill of rights specifically mentions financial services several times. For example, an appendix listing the types of systems the rights should cover includes “financial system algorithms, such as loan allocation algorithms, financial system access determination algorithms, credit scoring systems, insurance algorithms, including risk assessment, automated interest rate determination, and financial algorithms that apply penalties (for example, that can garnish wages or withhold tax returns).”

Some in the financial industry are skeptical about how effective this bill of rights will be. Others worry that some rights will be too difficult to enforce.

“At the very least, it sends a signal to the industry that hey, we’re going to be watching,” said Theodora Lau, co-founder of Unconventional Ventures. “However, we are a bit late to the party, especially when even the Vatican has spoken out on the subject, let alone the EU. What is more concerning is that it is nonbinding, with no enforcement measures, like a toothless tiger. It will be up to legislators to propose new bills. And even if something is passed, having laws is one thing, but enforcing them is quite another.”

Lau noted that the EU has proposed legislation to regulate the use of artificial intelligence in specific high-risk areas, including loan applications.

“Will we be able to follow suit? And if so, when? Or will we be subject to the whims of the political winds?” she said.

The intent of the plan, to put guardrails around artificial intelligence systems so that credit decisions aren’t final and can be challenged, is a smart one, said Mark Stein, founder and CEO of Underwrite.ai.

“But I have serious reservations about how it might be implemented in the financial services industry,” he said.

Lending decisions

One of the most controversial areas where banks use artificial intelligence is lending decisions. Regulators and consumer advocates have warned lenders that they still have to comply with fair-lending laws when they use artificial intelligence.

The federal government is beginning to require companies to prove that the artificial intelligence software they use is not discriminatory, Metcalf said.

“We’ve existed in a regulatory environment where you can rely on claims of magic without having to put your money on the line and evaluate how your system actually works,” he said. “You can get away with just providing hypotheticals. I see the federal government moving toward a ‘put up or shut up’ environment. If you’re going to provide a product that works in these regulated areas, including finance and banking, you need to affirmatively provide an assessment that shows you’re operating within the law.”
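The article doesn’t say what such an assessment would contain, but one common first step in fair-lending reviews is an adverse impact ratio, which compares approval rates across demographic groups. Below is a minimal, hypothetical sketch in Python; the group names, decisions and the 0.8 threshold (the “four-fifths rule” borrowed from employment law) are illustrative assumptions, not anything the blueprint prescribes.

```python
# Minimal sketch of an adverse impact ratio check, a common starting
# point in fair-lending analysis. Group names and data are hypothetical.

def approval_rate(decisions):
    """Share of applications in a group that were approved."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Values below 0.8 are conventionally treated as a red flag
    warranting review, not as proof of illegal discrimination."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model decisions: 1 = approved, 0 = denied.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # reference group
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]  # protected group

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.57 here -> flag for review
```

A low ratio would not itself prove a violation, but running and documenting checks like this is the kind of affirmative, on-the-record evaluation Metcalf is describing.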

But Stein argued that there are practical difficulties in applying the plan’s directives to lending, such as the clause that consumers must be able to opt out and have access to someone who can quickly review and fix problems.

“If automated interest rate determination is based on FICO scores, how do you apply that?” Stein said. “What function would a human perform? The decision isn’t made by a black-box algorithm; it was set up by human underwriters to work automatically. What exactly would the customer be appealing? Is using FICO scores unfair? That may be a valid argument, but it has nothing to do with artificial intelligence and can’t be addressed in that context.”
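Stein’s example is worth making concrete: the automated pricing he describes is often just a rate table that human underwriters wrote down in advance. A hypothetical sketch, with made-up tiers and rates:

```python
# Illustrative sketch of the rule-based pricing Stein describes: rates
# set in advance by human underwriters as fixed FICO-score tiers, then
# applied automatically. The tiers, rates and cutoff are made up.

RATE_TIERS = [
    (760, 0.059),  # FICO >= 760 -> 5.9% APR
    (700, 0.072),  # FICO >= 700 -> 7.2% APR
    (640, 0.099),  # FICO >= 640 -> 9.9% APR
]

def quote_rate(fico_score):
    """Return the APR for a score, or None if below the lowest tier.
    Every outcome traces back to a rule a human wrote down."""
    for floor, rate in RATE_TIERS:
        if fico_score >= floor:
            return rate
    return None  # below the minimum-score cutoff

print(quote_rate(715))  # 0.072
```

In a setup like this there is no opaque model for a human reviewer to second-guess; an appeal would really be a challenge to the FICO score itself or to the tier policy, which is Stein’s point.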

Stein noted that lending has long-standing rules prohibiting discrimination and establishing liability for bad lender behavior.

“If a lender discriminates or misleads, it should be punished,” he said. “If an automated system is used for that violation, then the lender who deployed the automated system is liable. It’s certainly not a reasonable defense to claim that you didn’t realize your system violated the law.”

AI in hiring

The use of artificial intelligence in hiring is also controversial because of concerns that the software could pick up cues in resumes or videos that discriminate against already disadvantaged groups.

“There are all kinds of public, well-known examples of machine learning making really discriminatory and frankly inappropriate decisions, and the rest of us are expected to just accept that it works,” Metcalfe said.

He pointed to Amazon’s attempt to use a homegrown algorithmic hiring tool to process applications for scientist and executive jobs.

“They found that it gave very high scores to anyone named Chad and anyone who played lacrosse, and very low scores to anyone with the word ‘female’ on their resume, including the head of the Harvard Women’s Science Club,” Metcalf said. “That’s why Amazon abandoned the tool. They worked on it for three years and couldn’t get it to work.”

Fraud detection

The plan’s warning that consumers should be protected from unsafe or ineffective systems could apply to artificial intelligence-based fraud detection software that flags suspicious activity too aggressively, Metcalf said.

“You can lose access to your money,” he said.

Challenger bank Chime ran into this problem last year, when it improperly closed customer accounts due to an overzealous fraud detection system.

“If it happens on a Saturday at 10 p.m., you might not get your bank account back until Monday morning,” Metcalf said. “There are security issues. The question for me, as someone very interested in algorithmic accountability and corporate governance, is what testing is this bank required to do around the accuracy of that prediction? Did they test it against realistic accounts with demographic differences? We live in a segregated society, and African Americans may have different banking behaviors than whites. Are we in a situation where false fraud alerts are being sent to people who simply have innocuous banking patterns common to African Americans? What is the bank’s obligation to test for these scenarios?”

A bank may not know how to conduct such tests and may not have the resources to do so, he added.
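In its simplest form, the testing Metcalf describes compares how often the model wrongly flags legitimate activity in different customer groups. A minimal sketch, with hypothetical group labels and audit data:

```python
# Minimal sketch of the per-group check Metcalf describes: comparing how
# often a fraud model flags legitimate activity in different customer
# groups. Groups, labels and data are hypothetical.

from collections import defaultdict

def false_positive_rates(records):
    """records: (group, flagged_by_model, actually_fraud) tuples.
    Returns each group's false-positive rate: the share of legitimate
    transactions that the model wrongly flagged as fraud."""
    flagged = defaultdict(int)
    legit = defaultdict(int)
    for group, was_flagged, was_fraud in records:
        if not was_fraud:          # only legitimate activity counts here
            legit[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / legit[g] for g in legit}

# Hypothetical audit sample: (group, model flagged?, real fraud?)
sample = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", True, True),   ("group_a", True, False),
    ("group_b", True, False),  ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]

for group, fpr in false_positive_rates(sample).items():
    print(f"{group}: false-positive rate {fpr:.0%}")
# group_a: 33%, group_b: 67% -> a gap like this is what testing surfaces
```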

“I think we should move to a situation where this kind of testing is mandatory, where there is transparency and documentation, and where federal regulators are asking these questions of the banks and telling them that they are expected to respond and that there is recourse,” Metcalf said.

One of the most important aspects of the bill of rights is its insistence on recourse when algorithms get things wrong, he said.

“If an algorithm flags your bank account as fraudulent and it’s wrong, and it happens on a Saturday night, who’s going to fix it for you?” Metcalf said. “Is there a customer service agent authorized to fix the computer’s problem? Usually there isn’t. Bankers should think carefully about the relationship between error, human intervention and recourse. If you’re going to make automated decisions that can affect people’s lives, then you’d better have a way for them to correct it when you’re wrong.”
