Friday, May 24, 2024

Zillow’s New Free AI Tool Aims to Promote Equality in Housing

 

Zillow’s free, open-source tool, the Fair Housing Classifier, addresses bias in large language models. It is part of the company’s effort to “promote responsible and unbiased behavior in real estate conversations powered by large language model (LLM) technology.” Zillow explained that artificial intelligence (AI) tools often fail to account for the myriad requirements of fair housing laws and, when deployed, “can perpetuate bias and undermine the progress achieved in advocating for fair housing.”

The Fair Housing Classifier (FHC) is designed to act as a protective measure against steering, the practice of influencing a person’s choice of home based on protected characteristics. The Fair Housing Act of 1968, as amended, prohibits discrimination in housing based on race, color, religion, sex, disability, familial status or national origin. The FHC detects questions “that could lead to discriminatory responses about legally protected groups in real estate experiences, such as search or chatbots.” Given either a question or an answer, the classifier can flag potential noncompliance with fair housing laws, and system developers can then intervene in the flagged cases.
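To make that flag-and-intervene pattern concrete, here is a minimal sketch in Python of how a classifier like the FHC might sit inside a chatbot flow. Everything here is hypothetical: the keyword-based is_noncompliant() stub stands in for Zillow’s trained model, and llm_generate() stands in for whatever LLM powers the chatbot; only the overall structure (screen the question, screen the answer, intervene when flagged) is drawn from Zillow’s description.

    # Hypothetical sketch of a fair-housing intervention hook.
    # The crude keyword stub below is only a placeholder for a trained
    # classifier such as Zillow's Fair Housing Classifier.

    # Stand-in list of characteristics protected under the Fair Housing Act.
    FLAGGED_TOPICS = ("race", "color", "religion", "sex", "disability",
                      "familial status", "national origin")

    def is_noncompliant(text: str) -> bool:
        """Placeholder classifier: flag text that risks a steering response."""
        lowered = text.lower()
        return any(topic in lowered for topic in FLAGGED_TOPICS)

    def llm_generate(prompt: str) -> str:
        """Placeholder for the real LLM call behind the chatbot."""
        return f"Here are some listings matching: {prompt}"

    def answer(question: str) -> str:
        # Intervention hook 1: screen the question before the LLM sees it.
        if is_noncompliant(question):
            return ("I can't help with preferences based on protected "
                    "characteristics, but I can filter by price, size or location.")
        response = llm_generate(question)
        # Intervention hook 2: the source says the classifier evaluates
        # answers as well as questions, so screen the model's output too.
        if is_noncompliant(response):
            return "Let me answer that in a way consistent with fair housing laws."
        return response

    print(answer("Show me neighborhoods without a particular race"))

The design point is simply that the classifier wraps the language model on both sides, so a developer-defined fallback runs whenever either the question or the generated answer is flagged.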

In a recent survey of over 12,000 Americans, Zillow found that 57% reported experiencing some type of housing discrimination in their lifetime. The figure was 79% among LGBTQ+ respondents, 69% among Black respondents, and 64% among Hispanic and Latino respondents.

“In today’s rapidly evolving AI landscape, promoting safe, secure and trustworthy AI practices in housing and lending is becoming increasingly important to protect consumers against algorithmic harms,” Michael Akinwumi, chief responsible AI officer for the National Fair Housing Alliance, said in a statement. “Zillow’s open-source approach sets an admirable precedent for responsible innovation. We encourage other organizations and coalition groups to actively participate, test, and enhance the model and share their findings with the public.”

Companies and individuals that want to use the Fair Housing Classifier can access its code and comprehensive framework on the project’s GitHub page. Anyone wanting to provide feedback or improve the tool can connect with the email alias listed there.

Read the May 21, 2024 HousingWire article.

Friday, September 29, 2023

Fed Regulator Warns AI & Machine Learning Can Worsen Lending Bias

AI & Automated Decisions Determine Credit Ratings, Loan Terms, Hiring, Housing, etc. 

“While these technologies have enormous potential, they also carry risks of violating fair lending laws and perpetuating the very disparities that they have the potential to address,” the Fed’s vice chair for supervision, Michael Barr, said at the recent National Fair Housing Alliance (NFHA) 2023 national conference. While new artificial intelligence tools could cheaply expand credit to more people, machine learning and AI may also amplify bias or inaccuracies inherent in the data used to train the systems, or produce inaccurate predictions.

As concerns grow over increasingly powerful artificial intelligence systems like ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI. Automated systems and algorithms help determine credit ratings, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing and working conditions.

In the past year, the Consumer Financial Protection Bureau (CFPB) said, it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions, and lost benefit payments after the institutions relied on new technology and faulty algorithms.

One problem is transparency. Under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for example, financial providers legally must explain any adverse credit decision, and those regulations likewise apply to decisions about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms shouldn’t be used. “I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” CFPB Director Rohit Chopra said. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”
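As a rough illustration of why explainability matters here, the sketch below shows one simple way a lender could produce the adverse-action “principal reasons” those laws require from a transparent linear scoring model: compute each feature’s contribution to the score and report the most negative ones. This is a generic, hypothetical example; the feature names, weights and threshold are invented, and it is not any particular lender’s or regulator’s method.

    # Illustrative only: deriving adverse-action "principal reasons" from a
    # transparent linear credit score, the kind of explanation the FCRA and
    # ECOA require. All weights, features and the threshold are invented.

    WEIGHTS = {
        "payment_history": 0.35,    # higher is better
        "utilization": -0.30,       # higher utilization lowers the score
        "account_age_years": 0.10,
        "recent_inquiries": -0.15,  # more inquiries lower the score
    }
    THRESHOLD = 0.5

    def score(applicant: dict) -> float:
        """Linear score: each feature contributes weight * value."""
        return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

    def principal_reasons(applicant: dict, top_n: int = 2) -> list:
        """Report the features that pulled the score down the most."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        worst_first = sorted(contributions.items(), key=lambda kv: kv[1])
        return [name for name, value in worst_first[:top_n] if value < 0]

    applicant = {"payment_history": 0.9, "utilization": 0.8,
                 "account_age_years": 2.0, "recent_inquiries": 3.0}

    if score(applicant) < THRESHOLD:
        print("Declined. Principal reasons:", principal_reasons(applicant))

The per-feature breakdown is what makes the required explanation possible; a sufficiently opaque model offers no equivalent decomposition, which is the basis of the regulators’ position quoted above.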

EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called “bossware” that illegally surveils workers. The Fed recently announced two policy initiatives to address appraisal discrimination in mortgage transactions. Under the proposed rule, institutions that engage in certain credit decisions would be required to adopt policies, practices, and control systems that guarantee a “high level of confidence” in automated estimates and protect against data manipulation.

In April 2023, four federal agencies, including the Federal Trade Commission and the Department of Justice, announced their commitment to cracking down on automated systems that cause harmful business practices.

Sam Altman, the head of OpenAI, which makes ChatGPT, said government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems, suggesting the formation of a U.S. or global agency to license and regulate the technology. The Electronic Privacy Information Center said the agencies should do more to study and publish information on the relevant AI markets, how the industry is working, who the biggest players are, and how the information collected is being used, as regulators have done with previous new consumer finance products and technologies.

*****

Read the July 19, 2023 The Hill article.

Read the June 15, 2023 Federal Times article.