The algorithm had been designed to predict famine.
If famines were spotted before they started, more aid could be routed to the affected country, more people could be saved. Or so the thinking went.
Yet there was a snag. Modern famine rarely occurs naturally; it is human-made, a tool of war, says Branka Panic M.I.D.P. ’19. It was the summer of 2018, between her two years in Sanford’s Master of International Development Policy program, and as part of her studies, Panic was consulting for the World Bank’s Fragility, Conflict, and Violence unit in Washington, D.C. One issue was that the famine-prediction algorithm, which was part of her work, could inadvertently notify malicious actors that certain populations were vulnerable. Another, says Catherine Admay, senior lecturer at Sanford and Duke faculty director of the Duke-UNC Rotary Peace Center (and Panic’s mentor), is that there is little logic in notifying a government about impending famine if that government is using famine to starve its own population.
“If someone is pretending to be asleep, you cannot wake them up,” Admay says.
But then Panic and Admay approached the problem from a different angle. If the famine-prediction tool foresaw famine in a region, the World Bank could instead approach the government there, say it had detected upcoming famine, and offer assistance ahead of the fact. The government would be given a choice—prevent famine or cause it through further inaction—and would lose its deniability if it declined assistance and then famine occurred.
“It’s very hard to prove genocide,” says Admay. “But that kind of thing would count as proof.”
Panic returned to Duke with a renewed appreciation for quantitative data—and the seeds of an idea. She realized few spaces existed in which to discuss issues at the junction of artificial intelligence (a technical science) and peace-building (a social science). To be fair, this revelation had been building for years; it was a synthesis of threads that wound through Panic’s life, first in activism and then in a career in peace and reconciliation.
In 2019, these threads would twine together as AI for Peace, an international think tank merging the computer science and humanitarian worlds, and driven by Panic’s moral compass and sheer force of will. “She is just on the move and getting things done with a laptop that I couldn’t do with my whole office,” Admay says.
Panic—pronounced pon-itch—grew up in Serbia and has known conflict from a young age. The Bosnian War broke out in the Balkans when she was nine, and several other conflicts ensued, lasting through her teenage years. Panic describes her parents as superheroes who shielded her and her brother from the horrors of growing up in wartime. “The only thing we were noticing were actually sanctions and the economic impact,” Panic recalls. “You notice that stores are empty. There’s nothing to buy there. You have to wait for milk in queues for hours to get it, and so on.”
In 1999, high-school-age Panic was active in Otpor! (literally, “Resistance!”), a protest movement opposing Serbia’s (and then Yugoslavia’s) authoritarian president Slobodan Milošević. Panic and other teenagers active in Otpor! would leave class for daily marches against the government and its human-rights abuses.
In one story from this era that Panic shared with her Duke cohort, Panic and other teenaged members of Otpor! stood on a bridge that was about to be bombed by Milošević’s forces and sang in Serbian: Bomb this bridge, and you’ll be killing children. “That singing was to signal something to herself and to her friends, but also to signal something to her enemies about her humanity,” Admay says. “Let’s not break these bridges. We are connected!”
Panic’s activism with Otpor! led directly to her career.
“I wanted to back up my activism with knowledge from the college and university,” Panic says. So she studied political science and international relations (not at Duke—that comes later) and, before graduating, landed a job with the Stability Pact for Southeastern Europe, which was formed with the goal of transitioning southeastern European states to more democratic regimes.
Then Panic worked for the European Fund for the Balkans, which concentrated on Albania and the countries that previously constituted Yugoslavia. Much of its peace-building model, Panic says, was based on post-World War II reconciliation between Germany and France—particularly involving the youth of both countries.
“A lot of these strategies were actually around having dialogue, how to exchange opinions—how even not to agree about certain topics,” Panic says.
Yet near the end of her time with the European Fund for the Balkans, Panic had a valuable realization. In hindsight, she says, this was one of the critical moments leading to the foundation of AI for Peace. Panic was in a refugee camp in Belgrade, working with migrants fleeing conflict in the Middle East and North Africa. The refugees she met, from one perspective, had very little. At the same time and from another perspective, technology had given people in or fleeing oppressive nations agency and a voice. When Panic was active with Otpor!, the only way to speak out was to physically take to the streets, distribute fliers—that sort of thing. Not many years later, twenty-first century tools like social media amplified the voices of refugees and activists. This was still a vulnerable population, but it was a little less invisible—and a little more empowered—thanks to something as increasingly ubiquitous as a smartphone.
“In many cases they barely had a backpack with them...but they always had a mobile phone,” Panic recalls. “This was a sort of an essential technology to have to survive on this path, to know the routes, to get informed.”
Panic also realized it was time to expand beyond her native region.
Admay doesn’t tend to respond well when a student cuts out a few days before spring break, but it was hard to hold a firm line with Panic. In the spring of 2018, this was because Panic and friend Linda Lowe were in Puerto Rico, doing long-term recovery work after Hurricane Maria. It could still be frustrating, Admay admits, as when Panic skipped town for 2019’s extended spring break—even as her master’s project deadline loomed.
“I’m sorry,” Admay remembers Panic telling her. “I’m in Nicaragua looking after kids who have nowhere to stay.” For Panic, Admay continues, the schoolwork never felt like enough. She also had to be actively involved in direct humanitarian action.
Panic attended Duke through the Rotary Peace Fellowship, which is a cooperative Duke/UNC program. (“If you manage to connect UNC and Duke to work together, then you can make peace anywhere in the world,” Panic says, sharing a Peace Fellowship in-joke.) Her cohort in this mid-career fellowship included Chris Lara M.I.D.P. ’19, whose prior decade with the U.N. included coordinating emergency famine response at the South Sudanese border. (After graduating from Duke, he resumed humanitarian work with the U.N.) Panic’s master’s project was on famine response, and Lara felt she had a keen sense of food security and famine as a weapon of war.
In their conversations, Lara talked with Panic about how the U.N. as a whole operates and what the moving pieces of its humanitarian system are. He and Admay saw firsthand how Panic’s experience in peace and reconciliation combined with lessons in tech and AI she learned in a Belgrade refugee camp and at the World Bank.
“Quantitative analysis is powerful, and algorithms can work, for good or for bad,” Admay says. “You really need to be harnessing the good as a way to push back on the bad.”
After graduating, Panic headed to Silicon Valley, where she started AI for Peace. And though from one perspective this was her first full immersion in the tech world, the experience was also nothing new. She had built bridges between citizens of previously warring Balkan states in her first career. Now she was building a coalition of computer scientists and humanitarians. Each field spoke its own lingo, had its own strengths. Policy experts thought in social-science terms and qualitative data. AI experts thought in formulas and were fascinated to learn about the root causes of violence and the nature of policy itself.
AI for Peace appealed to computer scientists’ sense of philanthropy, too. “Some of them feel saturated with the work in their offices. They don’t have this direct contact with social issues and problems that they can tackle,” Panic recalls. “They felt that this can be their contribution.”
To ensure that AI for Peace was a truly global initiative, Panic assembled an advisory board with members in Ethiopia, Kyrgyzstan, Colombia, India, and the U.S. (Lara is a member.) Panic wrote AI guidelines for policymakers based on principles of “do no harm,” a document Lara says deserves widespread attention. The organization concentrates on three main areas: humanitarian action, democracy and human rights, and human security.
“The third one, I think, is crucial for actually explaining to people what we mean when we say ‘peace,’ ” says Panic. As a concept, she continues, human security is not the same as (and is a sort of opposite of) national security. Rather than concerning itself with the security of borders or protecting the notion of states, human security focuses on people themselves.
“We want actually to cover this concept of positive peace,” says Panic. “We are not interested only in stopping wars or conflict or violence—that’s very important—but we also want to make sure that we sustain peace.”
AFOG typically stands for “another [effing] opportunity for growth,” Admay chuckles. It’s usually an exasperated response to an unforeseen uphill battle. Yet for Panic, Admay says, this acronym breaks down to “another fantastic opportunity for growth!”
That ability to adapt and grow during dire situations showed itself in AI for Peace’s pivot when COVID-19 overwhelmed and shut down much of the world in early 2020. Panic left San Francisco for Croatia, and her fledgling organization became a purely digital collaborative—and therefore truly global.
In the spring of 2020, AI for Peace joined with data-scientist collaborative network Omdena for its AI Policy Pandemic Challenge, a ten-week project that used data or AI-related technologies to analyze the impact of pandemic policies and lockdowns on the world’s most vulnerable populations.
The seventy-plus participants, representing twenty-one nations on six continents and drawing from both the AI and policy fields, met in Slack channels. Lara’s roles included ensuring participants kept the seventeen goals of the U.N.’s 2030 Agenda for Sustainable Development in mind. He also did some gentle moderation, bringing participants back to Earth if their heads strayed to the clouds. A focus on vulnerable populations requires a “leave no one behind” mindset, and the COVID era, Lara posits, seems to have left members of the computer-science community ready to embrace it.
If that is the case, and if collaborations between computer scientists and humanitarians can lead to positive change, then Panic is positioning herself as a nexus between their worlds. It’s a role she’s inhabited since a young age.
Working in the Balkans and facilitating peaceful dialogue between sides in conflict remains useful, “even when you work with AI experts and field experts,” she says. “Sometimes it feels like we speak different languages, so definitely facilitating this dialogue comes as a very important and necessary thing.”