
Experts say artificial intelligence contributes to discrimination in lending

While the technology holds promise, artificial intelligence also has flaws which often take on the biases of its creators.


Artificial intelligence (AI) may be playing into stereotypes that adversely affect women and minorities in financial decisions, according to two experts. 

The House Financial Services Committee held a hearing entitled "Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services."

Among the witnesses who testified were Dr. Nicol Turner-Lee, a fellow in Governance Studies at the Center for Technology Innovation at the Brookings Institution, a think tank; and Dr. Douglas Merrill, the founder and Chief Executive Officer of Zest Finance, a company that uses machine learning (ML) in its loan approval process.

“Lenders use our software to increase loan approval rates, lower defaults, and make their lending fairer,” Dr. Merrill said. 

“Increasingly, the private and public sectors are turning to artificial intelligence (AI) systems and machine learning algorithms to automate simple and complex decision-making processes,” Dr. Turner-Lee said during her opening remarks. “The mass scale digitization of data and the emerging technologies that use them are disrupting most economic sectors, including transportation, retail, advertising, financial services and energy, and other areas. AI is also having an impact on democracy and governance as computerized systems are being deployed to improve accuracy and drive objectivity in government functions.”

While both recognized the incredible potential of AI, they warned that it is currently contributing to adverse outcomes for women and minorities. 

Dr. Turner-Lee
Dr. Turner-Lee stated that AI often drives credit decisions that are unfairly biased against women and minorities.

“In the case of credit, we are seeing people denied credit due to the factoring of digital cognitive profiles which include their web browsing histories, social media profiles and other inferential characteristics to the factoring of credit models and these biases are systematically finding themselves with less favor to individuals in particular groups where there is no relevant difference between those groups, which justifies that harm.”

She said one reason these biases exist in AI is the lack of diversity in the industry. 

She cited a 2016 Deloitte study on the lack of diversity in technology. 

“Recent diversity statistics report these companies employ less than two percent of African Americans in senior executive positions, and three percent of Hispanics when compared to 83 percent of whites,” she stated in her written testimony. “Asian-Americans comprise just 11 percent of executives in high tech companies. In the occupations of computer programmers, software developers, database administrators, and even data scientists, African-Americans and Hispanics collectively are under six percent of the total workforce, while whites make up 68 percent.”

In her written testimony, Dr. Turner-Lee listed some examples of AI creating bias. 

“Latanya Sweeney, Harvard researcher and former chief technology officer at the Federal Trade Commission (FTC), found the micro-targeting of higher-interest credit cards and other financial products when the computer inferred that the subjects were African-Americans, despite having similar backgrounds to whites,” she stated. “During a public presentation at an FTC hearing on big data, Sweeney demonstrated how a web site, which marketed the centennial celebration of an all-black fraternity, received continuous ad suggestions for purchasing ‘arrest records’ or accepting high-interest credit card offerings.”

She noted that faulty AI can even affect decisions made by judges. “For example, automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, like longer prison sentences or higher bails imposed on people of color.”

In housing, she referred to a phenomenon called “weblining.” 

“Despite a strengthening economy, record low unemployment and higher wages for whites, African-American homeownership has decreased every year since 2004 while all other groups have made gains. In 2017, 19.3 percent of African American applicants were denied home loans, while only 7.9 percent of white applicants were rejected. Brookings fellow Andre Perry found that ‘owner-occupied homes in black neighborhoods are undervalued by $48,000 per home on average, amounting to $156 billion in cumulative losses.’ In other words, for every $100 in white family wealth, black families hold just $5.04. This type of physical redlining is now manifesting in the form of applications discrimination, or what Frank Pasquale has coined as ‘weblining,’ where whole communities are classified by their credit characteristics and associated risks.”

Dr. Turner-Lee’s Solutions
As for solutions, Dr. Turner-Lee proposed three.
  1. Updating the main civil rights laws to address AI. She specified three such laws. One was the Civil Rights Act of 1964, which “forbade discrimination on the basis of sex as well as race in hiring, promoting, and firing.” She also cited the 1968 Fair Housing Act and the 1974 Equal Credit Opportunity Act. “To quell algorithmic bias, Congress should start by clarifying how these nondiscrimination laws apply to the types of grievances recently found in the digital space, since most of these laws were written before the advent of the internet,” she noted in her written testimony. 
  2. Industry self-regulation, which includes a proactive approach to diversity. She also stated companies must create a bias impact statement. “As a self-regulatory practice, a bias impact statement can help probe and avert any potential biases that are baked into or are resultant from the algorithmic decision.”
  3. Congress should support the use of regulatory sandboxes and safe harbors to curb online biases. A regulatory sandbox is “a regulatory approach, typically summarized in writing and published, that allows live, time-bound testing of innovations under a regulator’s oversight. Novel financial products, technologies, and business models can be tested under a set of rules, supervision requirements, and appropriate safeguards,” according to a white paper by the United Nations Secretary-General’s Special Advocate for Inclusive Finance for Development. Dr. Turner-Lee noted, “Regulatory sandboxes could be another policy strategy for the creation of temporary reprieves from regulation to allow the technology and rules surrounding its use to evolve together. These policies could apply to algorithmic bias and other areas where the technology in question has no analog covered by existing regulations. Rather than broaden the scope of existing regulations or create rules in anticipation of potential harms, a sandbox allows for innovation both in technology and its regulation.”
Dr. Douglas Merrill
Dr. Merrill focused his remarks on a specific type of AI, machine learning, which he said, “discovers relationships between many variables in a dataset to make better predictions.”

In his opening remarks, he talked about both its potential and pitfalls.

“Machine learning increases access to credit especially for low-income and minority borrowers. Regulators understand these benefits and, in our experience, want to facilitate, not hinder, the use of ML. 

“At the same time, ML can raise serious risks for institutions and consumers. ML models are opaque and inherently biased. Thus, lenders put themselves, consumers, and the safety and soundness of our financial system at risk if they do not appropriately validate and monitor ML models.”

He then described one example of how these ML models can go haywire. 

“Without understanding why a model made a decision, bad outcomes will occur. For example, a used-car lender we work with had two seemingly benign signals in their model. One signal was that higher mileage cars tend to yield higher risk loans. Another was that borrowers from a particular state were slightly less risky than those from other states. Neither of these signals raises redlining or other compliance concerns. 

“However, our ML tools noted that, taken together, these signals predicted a borrower to be African-American and more likely to be denied. Without visibility into how seemingly fair signals interact in a model to hide bias, lenders will make decisions which tend to adversely affect minority borrowers.”
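
The dynamic Dr. Merrill described can be reproduced in miniature. Below is a stylized Python sketch using purely synthetic data and a generic scikit-learn model, not ZestFinance's software or any lender's records: two signals that reveal almost nothing about a protected attribute on their own can, taken together, predict it quite well.

```python
# Stylized, synthetic illustration (hypothetical data, not any lender's model):
# two individually benign signals can jointly act as a proxy for a protected attribute.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 20_000

high_mileage = rng.integers(0, 2, n)       # benign signal 1: high-mileage car
particular_state = rng.integers(0, 2, n)   # benign signal 2: borrower from a given state

# A protected attribute (never given to the lender) that happens to track the
# *combination* of the two signals 90% of the time, and is random otherwise.
protected = np.where(rng.random(n) < 0.9,
                     (high_mileage == particular_state).astype(int),
                     rng.integers(0, 2, n))

def accuracy(*signals):
    """How well a small model recovers the protected attribute from these signals."""
    X = np.column_stack(signals)
    return DecisionTreeClassifier(max_depth=3).fit(X, protected).score(X, protected)

print("mileage alone :", round(accuracy(high_mileage), 2))                    # ~0.50
print("state alone   :", round(accuracy(particular_state), 2))                # ~0.50
print("both together :", round(accuracy(high_mileage, particular_state), 2))  # ~0.95
```

Neither signal raises a flag by itself, yet the pair effectively encodes the protected attribute, which is why Dr. Merrill argues lenders need visibility into how signals interact inside a model.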

Dr. Merrill said the industry has created numerous methods for understanding why a decision is made, but most don’t work. 

“There are purported to be a variety of methods for understanding how ML models make decisions. Most don’t actually work. As explained in our White Paper and recent essay on a technique called SHAP (SHapley Additive exPlanations), both of which I’ve submitted for the record, many explainability techniques are inconsistent, inaccurate, computationally expensive, or fail to spot discriminatory outcomes.”
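
For readers unfamiliar with SHAP, the technique rests on Shapley values from game theory: a model's prediction for one applicant is split among the input features according to each feature's average marginal contribution across every order in which the features could be revealed. The toy sketch below computes exact Shapley values by brute force for a hand-written scoring function and a hypothetical applicant; it does not use the shap library or any real underwriting model.

```python
# Toy Shapley-value attribution, the idea underlying SHAP. The scoring function,
# applicant, and baseline are all hypothetical.
from itertools import permutations

def score(income, debt, mileage):
    # Hypothetical credit score (higher is better), with a debt-mileage interaction.
    return 0.5 * income - 0.3 * debt - 0.1 * mileage + 0.02 * debt * mileage

applicant = {"income": 60, "debt": 20, "mileage": 80}
baseline  = {"income": 50, "debt": 30, "mileage": 50}   # an "average" applicant
features = list(applicant)

def value(coalition):
    """Score when coalition features come from the applicant and the rest from the baseline."""
    return score(**{f: (applicant[f] if f in coalition else baseline[f]) for f in features})

# Average each feature's marginal contribution over every ordering of the features.
shapley = {f: 0.0 for f in features}
orders = list(permutations(features))
for order in orders:
    revealed = set()
    for f in order:
        shapley[f] += (value(revealed | {f}) - value(revealed)) / len(orders)
        revealed.add(f)

print("baseline score :", round(value(set()), 2))
print("applicant score:", round(value(set(features)), 2))
for f, phi in shapley.items():
    print(f"  {f:8s} contributes {phi:+.2f}")
# By construction, the contributions sum to (applicant score - baseline score).
```

Practical SHAP implementations approximate this calculation for models with many features, since enumerating every feature ordering quickly becomes intractable.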

Dr. Merrill disagreed slightly with Dr. Turner-Lee on the need for updating legislation. “Regulators have the authority necessary to balance the risks and benefits of ML underwriting. In 2011, the Federal Reserve, OCC, and FDIC published guidance on effective model risk management. ML was not commonly in use in 2011, so the guidance does not directly address best practices in ML model development, validation and monitoring.”

Congresswoman Sylvia Garcia 
Dr. Merrill noted that AI can be opaque, and Congresswoman Sylvia Garcia, a Democrat from Texas, illustrated that point during her five-minute question-and-answer period. 

Referring to AI, she asked Dr. Bonnie Buchanan, Head of the School of Finance and Accounting and Professor of Finance at Surrey Business School, University of Surrey, “Can you in just plain English in 25 words or less tell the average viewer what the heck we’re talking about?”

“It’s a group of technologies and processes that can look at determining general pattern recognition, universal approximation of relationships and trying to detect patterns from noisy data or sensory perception,” Dr. Buchanan responded.

“I think that probably confused them more,” Garcia stated, noting the complexity is part of the problem.
