Hong Qu

Hong Qu was a Non-Resident Fellow at the Digital Civil Society Lab in Partnership with the Center for Comparative Studies in Race and Ethnicity (2019-2020, 2020-2021).

Hong Qu serves as a research director and adjunct lecturer at the Harvard Kennedy School. He was a member of YouTube’s startup team, where he designed key features such as video sharing, channels, and skippable ads. Prior to HKS, he was VP of product at Upworthy and CTO at Fusion Media. He has been a visiting fellow at the Nieman Foundation and a member of the Berkman Klein Center and MIT Media Lab’s Assembly program on Ethics and Governance of AI. Hong is a graduate of Wesleyan University and the UC Berkeley School of Information.

Research Project

Impact of AI on Fair Lending Laws and Practices

Project Lead: Hong Qu

OVERVIEW

A decade has passed since the subprime mortgage crisis, and the fintech industry is beginning to embrace alternative data and machine learning to improve predictive models that can assess the creditworthiness of consumers with low credit scores. Compared to traditional credit-risk and lending practices, these new financial products and services, buttressed by fledgling data sets and enigmatic algorithms, pose a new set of challenges for regulatory oversight and risks to consumer welfare.

We will apply the AI Blindspot discovery process, created at the Assembly program at the Berkman Klein Center and MIT Media Lab, to investigate the potential benefits and shortcomings of new financial service offerings from startups such as Upstart, with emphasis on their compliance with three legal mandates: (1) the Equal Credit Opportunity Act’s call for guarding against algorithmic bias; (2) the Consumer Credit Protection Act’s stipulations for transparency and explainability; (3) the Dodd-Frank Act’s prohibitions against unfair, deceptive, or abusive acts or practices.

APPROACH

This project seeks to engage with the relevant social groups in the consumer loan marketplace to identify blind spots that can arise from unconscious bias and historical inequalities over the course of developing, deploying, and operating AI systems that automate consumer lending. We will delve into issues such as discrimination by proxy, disparate impact, transparency and explainability, and control over personal data.
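To make the proxy concern concrete, below is a minimal Python sketch of one possible screen: checking whether candidate alternative-data features correlate with a protected attribute. The feature names, records, and 0.5 cutoff are hypothetical illustrations, not part of the project; a real audit would also test for nonlinear and combined-feature proxies.

```python
# Hypothetical proxy screen: flag alternative-data features that correlate
# strongly with a protected attribute. Feature names, records, and the 0.5
# cutoff are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Toy applicant records: 1 = member of the protected class, 0 = not.
protected = [1, 1, 0, 0, 1, 0, 0, 1]
features = {
    "zip_code_median_rent": [900, 950, 2100, 2400, 880, 2000, 2250, 910],
    "months_at_current_job": [14, 30, 22, 18, 25, 31, 12, 27],
}

THRESHOLD = 0.5  # arbitrary screening cutoff; tune per audit policy
for name, values in features.items():
    r = pearson(values, protected)
    verdict = "POTENTIAL PROXY" if abs(r) >= THRESHOLD else "ok"
    print(f"{name}: r = {r:+.2f} -> {verdict}")
```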

Sample questions we plan to address are:

  • How should regulators develop monitoring and enforcement tools to promote fair lending by AI systems?
  • What types of alternative data will likely lead to discrimination by proxy and reinforce historical inequalities?
  • How might we ensure that data collection processes respect the right to privacy and informed self-determination of all borrowers, especially the 19 million Americans who are credit invisible?
  • For consumers who have been denied credit, what are their rights to an explanation, an appeal, and a collective challenge to the fairness of the credit-risk model? (One illustrative sketch follows this list.)
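As one illustration of what a “right to explanation” can look like in practice, the sketch below derives adverse-action reason codes from a hypothetical logistic-regression credit model by ranking each feature’s negative contribution to an applicant’s score. All names and numbers are invented; production scorecards use more rigorous attribution methods.

```python
# Hypothetical adverse-action "reason codes" for a declined applicant,
# assuming a simple logistic-regression credit model. All coefficients,
# feature names, and values are invented for illustration.

coefficients = {                   # learned weights (hypothetical)
    "credit_utilization": -2.0,    # higher utilization lowers the score
    "months_of_history": 0.04,     # longer history raises the score
    "recent_delinquencies": -0.9,  # delinquencies lower the score
}
population_average = {"credit_utilization": 0.30,
                      "months_of_history": 60,
                      "recent_delinquencies": 0.2}
applicant = {"credit_utilization": 0.85,
             "months_of_history": 18,
             "recent_delinquencies": 2}

# Each feature's score contribution relative to the average applicant;
# the most negative contributions become the stated reasons for denial.
contributions = {
    name: weight * (applicant[name] - population_average[name])
    for name, weight in coefficients.items()
}
reasons = sorted(contributions, key=contributions.get)[:2]
print("Top reasons for adverse action:", reasons)
# -> ['months_of_history', 'recent_delinquencies'] in this toy example
```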

PROCESS

This proposed project will:

  1. Review academic literature, industry trends, and public policy to analyze the relevant social groups’ power dynamics and incentives.
  2. Conduct lightweight ethnographic observation and interviews with major stakeholder groups such as tech company product teams, regulators, academic experts, consumer advocates, and impacted populations in marginalized communities.
  3. Propose a mechanism to monitor disparate impact by testing, documenting, and reporting on unfair, deceptive, or abusive practices (a minimal sketch follows this list).
  4. Write and publish a primer on the interplay of consumer access to credit, legal and regulatory policies, and the technical basis for the alternative data and machine learning models that enable predictive modeling of credit risk.
  5. Share project findings with journalists in trade, local, and ethnic media.
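As a minimal sketch of the monitoring mechanism proposed in step 3, the following Python snippet computes the adverse impact ratio and applies the four-fifths (80%) rule of thumb used in U.S. fair-lending analysis. The audit counts are hypothetical; the project does not prescribe this exact test.

```python
# Hypothetical disparate-impact monitor using the adverse impact ratio
# and the four-fifths (80%) rule of thumb. The audit counts are invented.

def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Approval rate of group A divided by that of reference group B."""
    return (approved_a / total_a) / (approved_b / total_b)

# Counts from a hypothetical matched-profile test of a lender's model.
air = adverse_impact_ratio(approved_a=45, total_a=100,   # protected group
                           approved_b=70, total_b=100)   # reference group

if air < 0.8:  # below the four-fifths screening threshold
    print(f"Possible disparate impact: AIR = {air:.2f} (< 0.80)")
else:
    print(f"AIR = {air:.2f}; above the 0.80 screening threshold")
```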

Fellowship Impact

When Hong Qu first came to the DCSL, he saw himself as a technologist trying to build tools for other technologists to understand and highlight biases in the systems they create. After meeting with and learning from the other Fellows, however, Qu shifted the focus of his project to the civil society sector, an area that was new to him. He realized that developing an engineer’s tool to fix these problems was insufficient; it was too procedural and lacked the commitment, conviction, and courage displayed by the other Fellows working in the space. “They said things,” Qu recalls, “that provoked me to reconsider my own place in these movements,” and helped him focus on the governance and policies that shape how technology is created and deployed.

Qu’s project, AI Blindspot: Tools for Advancing Equity in Artificial Intelligence, evolved along with his insights. Qu saw the opportunity to stress-test AI Blindspot with technologists and civil society leaders. This more indirect approach, Qu hopes, will help civil society demystify AI and make sense of it, so that those working in the sector can challenge its authority with a united voice and campaign against its harms. While Qu considers how to scale up the project or find it a permanent home, he will continue to work toward his PhD, an ordeal he says is testing his capacity and tenacity, but a dream he will see through to the end.

Products

AI Blindspot – Tools for Advancing Equity in Artificial Intelligence

Leading Civil Rights, Consumer, and Technology Advocates Urge the Federal Financial Regulators to Promote Equitable Artificial Intelligence in Financial Services

Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning

Preprint Sifter – Twitter bot for amplifying COVID-19 health experts and preprints

Human Development for Economic Progress dashboard

COVID States survey research

Speeches & Presentations

Tools for Combatting Bias in Datasets & Models | RACE, TECH + CIVIL SOCIETY SERIES

“Even if you can do it, should you?” Researchers talk combating bias in artificial intelligence

Hong Qu: Shining a Headlight on AI Blindspots

Tech & Racial Equity Workshop: WHAT KIND OF AI ARE WE RAISING?

Belfer Policy Chat | An Ethical Approach to AI & Governance

Hong Qu Interview: Artificial Intelligence and Biases

What Artificial Intelligence Can’t See

Assembly Project Fellowship Showcase

Resume-Writing Tips to Help You Get Past the A.I. Gatekeepers
