Interview: Data Privacy, Distributed Denial of Service Attacks, and Human Rights: A Conversation with Nathaniel Raymond


By Jason Lapadula

“We’ve gone into an experimental context on the most vulnerable people in the world, and put them in a commodification posture where we’re monetizing and trading their data.”

 —Nathaniel Raymond [1]

Spotify can now recommend songs better than your closest friends and family, Facebook can market products through targeted ads better than any Super Bowl advertisement (and much more cheaply), and Gmail can finish your sentences. We are continuing to aggregate data and develop machine-learning capabilities that clearly have social benefit, but not without cost. In the past few years, issues have manifested in the cyber domain that expose vulnerabilities in human rights as we currently define them. Additionally, international rule of law, in its current iteration, is ill-equipped to handle the digital exploitation of peoples and nations.

Examples of big data’s dark side are omnipresent. The risks became apparent to Americans after the 2016 presidential election, when bots controlled by Russian intelligence targeted U.S. voters to influence the election outcome, and Cambridge Analytica’s use of social media behavior to predict voter preferences raised concerns about privacy and electoral freedom. [2] Facebook came under increased scrutiny in 2018 when it admitted that Myanmar military personnel used its services to incite genocide against the Rohingya population. The company acknowledged that it “can and should do more” to limit the spread of hate and incitement of violence through its platforms. [3] China, meanwhile, in a move straight out of Black Mirror, is developing a system of mass surveillance known as the Social Credit System, which will track the daily activities of every person and company in China to assign each a level of “trustworthiness.” Participation in the system will be mandatory by 2020. [4]

The 1949 Geneva Conventions, drafted in the aftermath of World War II, established the international standards for humanitarian law and jus in bello. When it comes to cyberspace, though, the standards developed after the most physically destructive war in recent history transfer only vaguely. What constitutes “doing more” in Facebook’s case, and who determines the definition of ethical and legal use?

Nathaniel Raymond, a lecturer at the Yale Jackson Institute for Global Affairs, has been fighting for the development of data regulations for humanitarian aid workers since 2016, when he co-authored The Signal Code—a rights-based approach to humanitarian information. In a conversation with him, we discussed his recent work in applying international legal standards to the current information age.

The Data Commodification Battle Lab

In the humanitarian space, where big data has shown immense value for administering aid to vulnerable populations, Raymond believes ungoverned cyberspace may allow exploitation of both aid recipient populations and aid workers. In his words, we have created “a ‘battle lab’ for governments and corporations involving high-vulnerability, high-risk data use cases, and market expansion, often for for-profit, private-sector products.”

One example of an ethical “grey zone” relating to humanitarian aid data is the recent partnership between the World Food Programme (WFP), the UN’s food-assistance program, and Palantir Technologies. Palantir was co-founded by Peter Thiel, received early funding from In-Q-Tel (the CIA’s venture capital arm), and links together information across U.S. intelligence services. [5] Some practitioners, such as Raymond, see Palantir’s business practices, lack of transparency, and allegiances to the intelligence community as risks to individual privacy for the 90 million people that the WFP serves. [6] For example, the use of iris-recognition software to distribute food allotments in refugee camps, rather than the physical identification cards of the past, can be exploited to identify target populations for future ethnic war or genocide.

Private-sector involvement does not automatically lead to malpractice, but Raymond believes that the current lack of data governance may lend itself to an era of “data colonialism, where data is being brokered between civil society, governments, [and] private sector organizations. Sometimes it’s being brokered for cash, sometimes it’s being brokered for access—access to other technologies or the means of processing. Sometimes it’s being brokered even for additional data.” Raymond sees outdated data protection regulations as the most concerning issue. He believes that current regulations that protect personally identifiable information are not sufficient in an arena that relies on aggregated demographically identifiable information to create predictive data about groups. Under current regulation, it is possible to target a specific demographic for information operations campaigns based on its likelihood to take a specific action. Looking forward, he posited, “It’s that absence of group data protections that will, whether we fill that or don’t fill that gap, determine the future of human freedom in the twenty-first century.” [7]
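To make the distinction concrete, here is a minimal sketch, using invented data and field names, of why protections built around personally identifiable information miss the risk Raymond describes. None of the records below contains a name, ID, or contact detail, yet aggregating them by demographic attributes yields exactly the kind of group-level prediction an information operations campaign could target.

```python
# Minimal sketch with invented data: individually "anonymized" records can
# still be aggregated into demographically identifiable information (DII).
from collections import Counter

# Each record is free of PII in the conventional sense: no name, ID, or
# contact information. Only demographic attributes and one behavior remain.
records = [
    {"district": "North", "age_band": "18-25", "language": "A", "shared_rumor": True},
    {"district": "North", "age_band": "18-25", "language": "A", "shared_rumor": True},
    {"district": "North", "age_band": "18-25", "language": "A", "shared_rumor": False},
    {"district": "South", "age_band": "40-60", "language": "B", "shared_rumor": False},
    {"district": "South", "age_band": "40-60", "language": "B", "shared_rumor": False},
]

# Aggregate by demographic group rather than by individual.
group_sizes = Counter()
group_positives = Counter()
for r in records:
    key = (r["district"], r["age_band"], r["language"])
    group_sizes[key] += 1
    group_positives[key] += r["shared_rumor"]

# The output is a group-level likelihood of taking a specific action.
for key, size in group_sizes.items():
    print(key, "likelihood:", round(group_positives[key] / size, 2))
```

The sensitive artifact here is the aggregate profile itself, which survives any per-record anonymization; this is the gap between PII protections and the group data protections Raymond calls for.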

Distributed Denial-of-Service (DDoS)

Aid recipients are not the only ones vulnerable to potential cyber exploitation. Raymond also sees gaps in the protections offered to international aid workers. The Fourth Geneva Convention provides protection for international aid workers, and by extension, provides protected status to their civilian objects such as hospitals. Civilians and civilian objects are mutually exclusive of combatants and military objectives; once a civilian gains combatant status, they forfeit their protections as a civilian. For example, U.S. targeting protocols state that a dual-use structure can be targeted if it poses a threat to military personnel: if the enemy uses a hospital as a defensive fighting position, that hospital loses its status as a civilian object under Article 52 of Additional Protocol I. [8] Military personnel can then engage the hospital in accordance with the rules of proportionality and the requirement to take precautions in attack. But what constitutes an attack on a civilian object, and how do these rules apply to cyber attacks? This ambiguity puts civilians in war zones who rely on cyber infrastructure at risk.

In order to address this ambiguity, Raymond explains, we need to change our conception of humanitarian space. “When we say ‘humanitarian space’ in terms of protection of aid workers now, [we mean] you can’t bomb a hospital or you should not bomb a hospital under international humanitarian law,” he says. “Where I think we’re going to go is that we will start treating servers, infrastructure, mobile networks…as equal to facilities and vehicles.” [9]

In April and May 2007, cyber activists and hackers launched a series of distributed denial-of-service (DDoS) attacks against Estonia. In this type of attack, the perpetrator uses many devices to flood the target network with superfluous requests, overloading the system and rendering services inaccessible. [10] The attacks exploited Estonia’s reliance on the internet, paralyzing banking services and government operations. After-action reports concluded that the damage could have been worse: hackers could have shut down the power grid, water services, and other critical infrastructure. [11] The attacks on Estonia raised multiple questions about the use of force, attribution, and civilian protection under the current Geneva Conventions. For example, does a DDoS attack on a power grid constitute a human rights violation if that grid powers a hospital? This is precisely the type of target that Russian hackers struck in Ukraine in 2015 and again in 2016, when they attacked electrical distribution facilities in Kiev and the western Ivano-Frankivsk region. [12]
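For readers unfamiliar with the mechanics, the sketch below shows the defender’s side of such a flood: a simple sliding-window rate limiter of the kind a service might place in front of its network. The names and thresholds are hypothetical, and the closing comment notes why a distributed flood defeats this kind of per-source defense.

```python
# Minimal defensive sketch with hypothetical thresholds: counting requests
# per source over a sliding window and rejecting superfluous traffic.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # sliding window over which requests are counted
MAX_REQUESTS = 100    # per-source ceiling within one window

# Maps a source address to the timestamps of its recent requests.
recent = defaultdict(deque)

def allow_request(source, now=None):
    """Return False once a source exceeds the per-source rate ceiling."""
    now = time.monotonic() if now is None else now
    window = recent[source]
    # Discard requests that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # superfluous traffic: drop the request
    window.append(now)
    return True

# The "distributed" in DDoS is what breaks this defense: thousands of
# compromised devices each staying just under the per-source ceiling all
# pass the check, yet together they can still exhaust the target.
```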

In the wake of the attacks on Estonia and Ukraine, the NATO Cooperative Cyber Defence Centre of Excellence convened an international group of experts to develop rules and applications of international law for the cyber domain. The Tallinn Manual, named after the Estonian capital where the Centre is based, helps pave a way forward. Regarding an attack on a power grid, the international group of experts determined the following:

Whenever an attack on data foreseeably results in the injury or death of individuals or damage or destruction of physical objects, those individuals or objects constitute the ‘object of attack’ and the operation, therefore, qualifies as an attack. Further, as discussed below, an operation against data upon which the functionality of physical objects relies can sometimes constitute an attack.

Within the International Group of Experts, there was extensive discussion about whether interference by cyber means with the functionality of an object constitutes damage or destruction for the purposes of this Rule…. A majority of them was of the view that interference with functionality qualifies as damage if restoration of functionality requires replacement of physical components. Consider a cyber operation that is directed against the computer-based control system of an electrical distribution grid. The operation causes the grid to cease operating. In order to restore distribution, either the control system or vital components thereof must be replaced. The cyber operation is an attack for the majority. [13]

Thus, a cyber attack that causes injury or death of individuals or physical destruction of a civilian object (such as a hospital) would constitute a human rights violation if the targeted network is not dual-use. For dual-use, the Tallinn Manual states the following:

The object and purpose of this Rule is to clarify the issue of ‘dual use’ objects, since it is often the case that civilian and military users share computers, computer networks, and other cyber infrastructure. By this Rule, any use or intended future use effectively contributing to military action renders an object a military objective so long as its destruction, capture, or neutralisation offers a definite military advantage in the circumstances ruling at the time (Rule 100). As a matter of law, status as a civilian object and military objective cannot coexist; an object is either one or the other. This principle confirms that all dual-use objects and facilities are military objectives, without qualification. [14]

Although the Tallinn Manual clarifies international law in the context of cyber operations, it is not a legally binding document. Some rules outlined by the Manual, such as the one above, are also more permissive than humanitarian aid workers would like. The ways in which an aggressor defines “intended future use contributing to military action” may still expose civilians to unnecessary military retaliation. Raymond argues that international regulations that can be used to prosecute state and non-state actors must be developed to prevent the exploitation of legal ambiguity.

International Data Use Regulations

Differing opinions regarding data regulation policy for the humanitarian space have made it difficult to update legal standards. According to Raymond, data protection regulation is a “hot coal” that no one wants to hold. Each international and domestic agency wants a different degree of regulation, and no one agrees on what that regulation should look like.

Generally speaking, there are three broad camps when it comes to data protection: 1) those who say additional data protection regulation should not be pursued, 2) those who believe we should create new laws, and 3) those who are attempting to extend current international law to the cyber domain. Raymond says he falls into the last camp, and is actively working with international organizations to translate existing law into current practice. This would go a step further than the Tallinn Manual by creating a binding legal instrument.

Status of Data Agreements

One way in which Raymond sees potential to shape future data security is to use the accepted precedent of Status of Forces Agreements (SOFA) and apply it to data, which he calls Status of Data Agreements (SODA). SOFAs are bilateral or multilateral agreements in which nations outline the framework for military personnel and operations in a country. For example, the SOFA between the United States and Afghanistan outlines where and when the United States may conduct operations for humanitarian assistance, military training/exercises, and responses to terrorism. [15]

Raymond believes a similar framework can encourage responsible data protections during humanitarian crises. In a SODA between two or more countries, the parties would agree upon the uses of data and the data protection regulations that apply, in accordance with humanitarian law. The agreement would outline what a country can and cannot do in another country regarding the use of data. It would also streamline data agreements for humanitarian organizations: rather than creating multiple, redundant bilateral agreements with government organizations, they could use the SODA template for cyber operations in each country. This framework would also allow countries to abide by their respective biometric registration requirements (such as the U.S. requirements, administered by the Office of Foreign Assets Control, to identify sanctioned persons) in a way “that doesn’t violate international humanitarian law, the refugee convention, and our [humanitarian aid community] principles. Right now, it’s not that we are [violating any of these], it’s that we don’t even know if we are.” [16]

The War on Trust

In 21 Lessons for the 21st Century, Yuval Noah Harari outlines fears that data is becoming the world’s most powerful commodity. The few organizations that control data will be able to influence politics and markets to the point where the integrity of democracy comes into question. [17] For Raymond, this has already begun, and trust in democratic institutions has suffered because of it. Recent election meddling and cyber attacks by Russia, the use of a social credit score in China, and Facebook’s exploitation by military officials in Myanmar all show a shift in the relationship between people, technology, and the state. To restore this trust, Raymond argues, nations need to outline clear data protection laws and resolve the current ambiguous status of demographically identifiable information:

“I believe [it] is going to be those who control regulation or control the relevance which regulation and norms have [who will control the world]. We’re in a moment now where we’re in the middle of the Third World War on trust. And it’s a trust of institutions, of infrastructure, of communities, of each other. And either we restore trust, or those who can fracture the means by which we create trust will.” [18]


 About the Author

Jason Lapadula is an MA/MBA joint-degree candidate at the Yale Jackson Institute for Global Affairs and the Yale School of Management. He was an infantry officer in the Marine Corps prior to graduate school. He is interested in pursuing development in conflict and post-conflict areas after graduation.


Endnotes:

  1. Nathaniel Raymond (lecturer, Yale Jackson Institute for Global Affairs), in discussion with the author, March 2019.

  2. Issie Lapowsky, “How Cambridge Analytica Sparked The Great Privacy Awakening,” Wired, March 17, 2019, https://www.wired.com/story/cambridge-analytica-facebook-privacy-awakening/.

  3. Alexandra Stevenson, “Facebook Admits It Was Used to Incite Violence in Myanmar,” The New York Times, November 6, 2018, https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html.

  4. Bernard Marr, “Chinese Social Credit Score: Utopian Big Data Bliss Or Black Mirror On Steroids?,” Forbes, January 21, 2019, https://www.forbes.com/sites/bernardmarr/2019/01/21/chinese-social-credit-score-utopian-big-data-bliss-or-black-mirror-on-steroids/.

  5. Matt Burns, “Leaked Palantir Doc Reveals Uses, Specific Functions And Key Clients,” TechCrunch, last modified January 11, 2015, https://techcrunch.com/2015/01/11/leaked-palantir-doc-reveals-uses-specific-functions-and-key-clients/.

  6. Jennifer Easterday, “Open Letter to WFP re: Palantir Agreement,” Responsible Data, last modified February 8, 2019, https://responsibledata.io/2019/02/08/open-letter-to-wfp-re-palantir-agreement/.

  7. Raymond, March 2019.

  8. International Committee of the Red Cross, Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), Art. 52, available from https://ihl-databases.icrc.org/ihl/WebART/470-750067.

  9. Raymond, March 2019.

  10. “Security Tip (ST04-015): Understanding Denial-of-Service Attacks,” United States Computer Emergency Readiness Team, last modified June 28, 2018, https://www.us-cert.gov/ncas/tips/ST04-015.

  11. Stephen Herzog, “Revisiting the Estonian Cyber Attacks: Digital Threats and Multinational Responses,” Journal of Strategic Security 4, no. 2 (Summer 2011): 49-60, http://dx.doi.org/10.5038/1944-0472.4.2.3.

  12. Donghui Park, Julia Summers, and Michael Walstrom, “Cyberattack on Critical Infrastructure: Russia and the Ukrainian Power Grid Attacks,” The University of Washington, last modified October 11, 2017, https://jsis.washington.edu/news/cyberattack-critical-infrastructure-russia-ukrainian-power-grid-attacks/#_ftn32.

  13. Michael Schmitt et al., Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (New York: Cambridge University Press, 2017), 416-417.

  14. Ibid., 445.

  15. R. Chuck Mason, “Status of Forces Agreement (SOFA): What Is It, and How Has It Been Utilized?,” Congressional Research Service, March 15, 2012, https://fas.org/sgp/crs/natsec/RL34531.pdf.

  16. Raymond, March 2019.

  17. “Big data is reshaping humanity, says Yuval Noah Harari,” The Economist, August 30, 2018, https://www.economist.com/books-and-arts/2018/08/30/big-data-is-reshaping-humanity-says-yuval-noah-harari.

  18. Raymond, March 2019.