


Human Rights Advocacy in the Dawning Age of AI

Interview with Dr. Alexa Koenig, Executive Director, UC Berkeley Human Rights Center

On a day when blooming fruit trees, singing robins, and bright yellow flowers of mimosa trees teamed up to bring an untimely spring to Berkeley, I sat down with Dr. Alexa Koenig to have a very timely conversation on the intersection of emerging technologies and human rights.

Dr. Alexa Koenig is the Executive Director of the UC Berkeley Human Rights Center. Her journey as a human rights scholar began in the aftermath of the 9/11 attacks. As a PhD student, Alexa was offered a position at the Human Rights Center at Berkeley to work on investigations related to the War on Terror. At the same time, she began working with Peter Jan Honigsberg at the University of San Francisco to develop an oral history project, Witness to Guantanamo, to interview detainees held at Guantanamo Bay and record their experiences.

Dr. Alexa Koenig (Credit: Human Rights Center, UC Berkeley)

Her interest in harnessing new technologies for human rights blossomed through her work with the Office of the Prosecutor at the International Criminal Court as a member of its Technology Advisory Board. She and her team held a number of workshops and invited technologists to explore how techniques such as remote sensing, satellite imagery, video footage collection, and data analytics could be used to document human rights abuses and war crimes.

Below is my conversation with Alexa about her valuable insights and experience in harnessing the power of technology for human rights and humanitarian efforts.

Roya Pakzad: Alexa, could you tell me what you and your students do at your human rights investigations lab?

Alexa Koenig: [With the advent of social media] there was suddenly a wealth of information coming out in terms of video footage and photographs that were invaluable for investigating human rights abuses and war crimes in conflict zones. In September 2016, Andrea Lampros and I launched the Human Rights Investigations Lab as a pilot effort to work with other human rights non-profits. We wanted to see if students could be trained to provide critical capacity for human rights investigation and accountability efforts. We started with Amnesty International, which began sending us footage of different crises that students could verify through different techniques, such as checking the source of a video or using tools like SunCalc to verify the time of day when the footage was taken.

At our lab, we bring together students from computer science and engineering, anthropology, law, and journalism to combine their know-how about how to do this work and take their research and investigations to the next level.
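As an editorial aside, here is a minimal sketch of the kind of check a tool like SunCalc supports: shadow-based chronolocation. Given an approximate location and a shadow-to-height ratio measured in a frame, it scans a day for times when the sun's elevation matches. The solar formulas are rough approximations, and the coordinates and measurements in the example are hypothetical, so this illustrates the idea rather than any tool's actual implementation.

```python
import math
from datetime import datetime, timedelta, timezone

def solar_elevation(lat_deg, lon_deg, when_utc):
    """Approximate solar elevation (degrees) for a UTC datetime.

    Uses simplified declination/hour-angle formulas, accurate to roughly a
    degree: enough for a rough consistency check, not evidence-grade work.
    """
    day_of_year = when_utc.timetuple().tm_yday
    # Approximate solar declination (degrees)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Approximate solar time: shift UTC by longitude (15 degrees per hour)
    solar_hour = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, decl_r, ha = map(math.radians, (lat_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(decl_r)
                + math.cos(lat) * math.cos(decl_r) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

def candidate_times(lat_deg, lon_deg, date_utc, shadow_len_m, object_h_m, tol_deg=2.0):
    """Return UTC times on a given date when the sun's elevation matches the
    elevation implied by a measured shadow length and object height."""
    implied_elev = math.degrees(math.atan2(object_h_m, shadow_len_m))
    start = datetime(date_utc.year, date_utc.month, date_utc.day, tzinfo=timezone.utc)
    matches = []
    for minute in range(0, 24 * 60, 5):  # scan the day in 5-minute steps
        when = start + timedelta(minutes=minute)
        if abs(solar_elevation(lat_deg, lon_deg, when) - implied_elev) <= tol_deg:
            matches.append(when)
    return matches

if __name__ == "__main__":
    # Hypothetical example: a 1.8 m pole casting a 2.5 m shadow in Berkeley, CA
    for t in candidate_times(37.87, -122.27, datetime(2018, 3, 15), 2.5, 1.8):
        print(t.isoformat())
```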

Roya: This is one of the successful examples of using technology for human rights. But I have seen many tech-oriented human rights projects that in practice were not very effective or sustainable. How can we address that concern?

Alexa: There have been so many lessons learned over the past ten years about how to ethically and responsibly use these new technologies. Back in 2012 through 2014, people were trying to design new apps to document what’s happening around the world but were not always sensitive to social context. One example is a company creating an app to document sexual violence but rolling it out in a country where only men had access to cellphones. Or, in the middle of a crisis, it’s very unlikely that someone will download an app, learn how to use it, and trust it. So, one thing I’ve been personally interested in is how we get individuals who did not come up through human rights practice to be more attuned to the ethical and social issues they should ideally think about when designing new technologies.

Roya: And the role of human rights funders here?

Alexa: Yes, there has been an enormous diversion of resources from traditional methods of human rights practice toward investing in these very sexy, shiny new tools. But the risks of innovation in the human rights space are arguably more acute than when you are talking about innovating day-to-day practice in [places] not being hit by crisis. The biggest message to funders is to find people who know human rights and do it well. Sometimes the answer might be no technology at all.

Roya: Berkeley is positioned close to Silicon Valley. Do you try to have these types of conversations and partnerships with technologists to inform them about these lessons learned?

Alexa: I think there are still silos in practice across the Bay Area. One effort that we’ve been really experimenting with is a new nonprofit spun out of the UC Berkeley campus called Archer [website]. This is a team of about 30 computer science and electrical engineering students and a couple of students from other disciplines. Three students had actually been on the ground in the south of France when a terrorist attack happened, and one of the students was killed during that attack. So, the two students who came back were thinking about how to tap into the extraordinary talent here and use their degrees to fight against terrorism. Over the past year, we’ve had them embedded with our Human Rights Investigations Lab. They are helping our team understand how to use technology to make our work more efficient and effective.

One of the issues with working in a law school is that law is notoriously behind the curve in adopting technology into practice. So we’ve been thinking that this partnership will help lawyers and political science and anthropology students understand the social context and what needs to be accomplished, while computer scientists can [point out] when something could be automated to be more effective. Many of these students may go on to pursue high-profile, high-powered tech careers. We are hoping that the insights they gain from working on these sensitive human rights investigations will inform everything they do in their future tech careers.

The other area we have been working in is thinking through the large quantities of information being shared on YouTube, Facebook, and Twitter. [These services] are extraordinarily valuable for getting justice for human rights abuses. So, one of the big conversations happening in the human rights space is how we can work directly with corporations to make sure that if information has to be taken down because it violates a company’s terms of service, it can still be preserved and used for our investigative purposes. Or, how can human rights communities capture this content at the moment of upload, such that even if the videos do have to be taken down, they are still preserved and the stories behind them can be told?
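To make the preservation idea concrete, here is a minimal sketch, in Python, of one common building block: computing a cryptographic hash of a downloaded video and recording it with basic provenance information in a local manifest, so the content and its origin can still be attested to if the original post is later removed. The file paths, URL, and metadata fields are hypothetical; this is an illustration of the concept, not any platform's or lab's actual workflow.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def archive_record(video_path: Path, source_url: str, manifest: Path) -> dict:
    """Append a provenance record (hash, source, archival time) to a JSON-lines manifest."""
    record = {
        "file": video_path.name,
        "sha256": sha256_of(video_path),
        "source_url": source_url,  # where the content was originally posted
        "archived_at_utc": datetime.now(timezone.utc).isoformat(),
    }
    manifest.parent.mkdir(parents=True, exist_ok=True)
    with manifest.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    # Hypothetical paths and URL, for illustration only
    rec = archive_record(Path("downloads/incident_clip.mp4"),
                         "https://example.com/original-post",
                         Path("archive/manifest.jsonl"))
    print(rec["sha256"])
```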

Human Rights Investigations Lab (Credit: California Magazine / Just In)

Roya: As you mentioned, law is behind in this regard. It makes me think about international human rights law and the role that the Human Rights Council in particular, and the UN in general, play here. What do you expect to see from them to cope with these technological changes?

Alexa: I think there’s been some conversation within the UN and other large bureaucratic organizations about whether new technologies need to be developed to improve the process of gathering information for human rights investigations. Many of us in the human rights community feel that the technologies are largely there — they just need to be strengthened and constantly refined. But I think it’s become very difficult to map the entire network of people who are working on these technologies for human rights. What I would hate to see more than anything is new technologies being built when they already exist.

I think there is a role for groups like the United Nations and other bodies to play here. Because they have a sort of 30,000-foot overview and everybody knows who they are, they could be these umbrella groups to gather all the information on who’s doing what.

Roya: The field of machine learning has unleashed numerous opportunities for data analysis. What are some of your hopes for further use of AI/ML in your work?

Alexa: As we move forward, the more we see artificial intelligence and machine learning as an interactive practice between humans and machines, rather than as machines replacing humans, the better it is going to be. Our interpretive processes are partly quantitative and can be automated, but they are also highly qualitative and subjective. Machines and humans can serve as checks and balances on each other.

One thing I would love to see is for machines to be able to detect and flag videos that are extraordinarily graphic, so there’s a warning on them before human beings have to work with that footage. One of the things that resiliency experts have observed is that when people are working with highly graphic footage and are surprised by the content, it seems to have a much larger negative impact than when they’re told what they’re going to see, such as a beheading or something else. You need to be prepared for that.
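As a rough sketch of how such a pre-screening step could sit in a review workflow, the Python example below samples frames from a video with OpenCV and attaches a content warning if any sampled frame scores above a threshold. The `score_frame` classifier is an explicit placeholder standing in for a trained graphic-content model, which is an assumption here, not an existing tool.

```python
import cv2  # OpenCV, used here only for frame extraction

def score_frame(frame) -> float:
    """Placeholder for a graphic-content classifier.

    A real system would call a trained model here; this stub simply
    returns 0.0 so the pipeline structure can be shown end to end.
    """
    return 0.0

def needs_warning(video_path: str, threshold: float = 0.8, sample_every_s: float = 2.0) -> bool:
    """Sample frames at a fixed interval and flag the video if any frame
    scores above the threshold, so reviewers are warned before viewing."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(fps * sample_every_s), 1)
    flagged = False
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and score_frame(frame) >= threshold:
            flagged = True
            break
        index += 1
    cap.release()
    return flagged

if __name__ == "__main__":
    # Hypothetical path, for illustration only
    if needs_warning("downloads/incident_clip.mp4"):
        print("WARNING: graphic content detected; prepare before viewing.")
```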

Another thing I would love to see is a “Google Translate for algorithms.” A lot of the biases we are all reading about, which currently go undetected, might be caught by human rights practitioners if they could understand how the systems they are using actually work.

Roya: What are some of your concerns?

Alexa: I think my biggest concern right now with the proliferation of AI and machine learning is how these technologies are potentially shifting our focus. For example, are we going to begin paying attention only to the crises where smartphones proliferate — the ones that are, in some ways, low-hanging fruit? Does that mean the communities where digital technologies are not in common use will be less visible or set aside because “they are not easy to deal with”? That’s not only about location but about the kinds of crimes and harms as well. For example, sexual violence can be very difficult to document with digital technologies, while crimes like the dropping of chemical weapons might be less difficult to document with video footage. So does this mean that there’s going to be a growing divide as we shift our focus toward the technologies that make harms visible? Does that mean those [crimes like sexual violence] will fall even further into the background?

Roya: Do you have any final message to people in this ecosystem of technology and human rights?

Alexa: I think my biggest message to people in this field of work is to get out of whatever bubbles they are in — and we’re all in at least one.

If engineers and computer scientists start working on the ground with human rights practitioners, if lawyers start partnering up with people who understand emerging technologies and start thinking a little bit more for the future, if all of these individuals can better understand the experiences of survivors of human rights abuse, I think we’re going to be able to do incredible things safely and efficiently.

We wrapped up here. This interview has been lightly edited and condensed.

This conversation was part of an interview series for my newsletter Humane AI. I will continue talking with both policy and technical experts in the field of AI ethics in future installments. Tune in to hear their views on many issues, including cybersecurity and AI, the history of technology and its connection to AI, digital humanities, and much more. To subscribe to the newsletter, click here.