Practitioner Fellow, Digital Civil Society Lab in Partnership with the Center for Comparative Studies in Race and Ethnicity, Stanford PACS
(2019-20, 2020-21)
Renata Avila was a Non-Resident Fellow at the Digital Civil Society Lab in Partnership with the Center for Comparative Studies in Race and Ethnicity (2019-2020, 2020-2021).
Renata Avila (Guatemalan), Executive Director of Fundación Ciudadanía Inteligente, is an international human rights lawyer specialising in the next wave of technological challenges to preserving and advancing our rights, and in the politics of data and its implications for trade, democracy and society. She is currently writing a book on digital colonialism, designing international policies, and prototyping technology for a democratic future. She is a board member of Creative Commons, a board member of the Common Action Forum, and a global trustee of the think tank Digital Future Society. She also serves on the advisory boards of Article 19 Mexico & Central America and Democracia Abierta. She volunteers with DiEM25 (https://diem25.org/), a movement to save democracy in Europe, as an elected member of its Coordinating Collective, and is a co-convener of the Progressive International.
Research Project
<A+> Alliance: Affirmative Action Algorithms to Correct Gender and Race Bias
Project Lead: Renata Avila
OVERVIEW
Predictive analytics, algorithms and other forms of AI are highly likely to reproduce the biases reflected in existing, incomplete data and policies, and then to exacerbate them as machine learning embeds the bias.
In-built forms of discrimination can fatally undermine the right to equality and to social protection for women, especially women belonging to excluded racial and social groups.
The future is being built on technologies and decision-making systems created under the “standardized male” default, which assumes the average user is a white, heterosexual, educated man from a wealthy country. As the adoption of Automated Decision-Making (ADM) systems and AI rapidly expands around the world, this default has proved extremely damaging for women and girls, especially those belonging to non-white groups. Biased algorithms are limiting their opportunities and threatening collective liberties. Stories of harm remain anecdotal, and no concrete, binding proposal to fix the problem is on decision-makers' tables.
The problem needs an urgent, positive response at the diplomatic, standards, public policy and technical levels simultaneously. As countries accelerate the drafting of National AI Strategies and rush to adopt ADM systems to address social problems, we need to equip policymakers, technologists and governments with the basic skills to use human rights as guiding design principles, so that the potential of technology to identify and counteract biases is fully realized when designing the digital future.
APPROACH
This project will increase the influence of the <A+> Alliance, a multidisciplinary, diverse, and feminist global coalition of expert practitioners, academics and activists working to create and apply Affirmative Action Algorithms <A+> that upturn the current path of ADM at a critical turning point in history. The focus of my work will be at the intersection of gender and race.
The final goal will be achieving systems change at the institutional multilateral level, through innovative public policy proposals, high-level diplomatic engagement, partnerships and capacity building of strategic actors that place an inclusive digital future at the core of their initiatives.
PROCESS
Drafting innovative public policy recommendations addressing the technological, social and administrative challenges of implementing Affirmative Action Algorithms.
Engaging in diplomatic multilateral spaces both within the UN system (WTO, ITU, WIPO, CEDAW, UNHRC) and in alternative ones (WEF, Mobile World Congress Barcelona, consumer associations), contributing actionable public policy recommendations at the highest level of decision-making, breaking down silos that treat gender and race as separate from technology and diplomacy, and elevating the urgency of laying the foundation of an inclusive digital future.
Engaging with and training the next generation of innovators, in both the public and private sectors, to guide and train them in the design of an inclusive digital future.
As a final output, a viable plan will be delivered proposing a review, across UN agencies, of how existing international human rights laws and standards apply to ADM, machine learning, race and gender.
Fellowship Impact
Persevering through this pandemic period, Renata Avila adapted her strengths in networking and international diplomacy to bring the <A+> Alliance for Inclusive Algorithms into being. Avila and her colleagues made the most of remote, small group meetings – as well as “roaming the chat box” instead of working the hallways of UN meetings – to assemble an advisory board, set a research agenda, and establish this global network focused on feminist technology from the global majority.
While it was a difficult and laborious process to curate the right group of people for the Alliance network, a process not unlike fishing in a pool of applicants for the cream of the crop, the fruit of Avila’s labor will be sweet: over the next three years, the Alliance is dedicated to producing 90 papers, 10 prototypes, and 3 pilots in the Middle East, Latin America, and South Asia. Never one to waste a moment, Avila will spend the rest of her time advocating for stopping, thinking, and co-designing as we come out of this pandemic and back into the world; technology alone, she cautions, will never save us.
Products
Created a Network of Feminist AI experts and a directory through a series of webinars during the COVID-19 lockdown