In the first part of this piece, we examined the unique challenges that cyberspace presents for the application of the conventional rules of international law. We demonstrated how the attribution problem, stemming from the technical complexity and anonymity of cyber operations, severely limits the effectiveness of traditional state responsibility regimes under the ARSIWA framework. While principles such as the prohibition of intervention or the use of force remain applicable, their high attribution thresholds often leave harmful cyber activities without consequence. Against this backdrop, we proposed the due diligence obligation as a viable alternative approach, one that does not alter established legal thresholds yet offers a pathway to address harmful cyber activities by shifting the focus from attribution to a state’s proactive duty to prevent harm emanating from its territory.
Building on this theoretical foundation, Part II seeks to operationalize the due diligence obligation against cyberattacks, specifically availability attacks against critical infrastructure. To this end, it applies the three classical due diligence tests (foreseeability, prevention capacity, and harm), adapting each to the technical and transboundary realities of cyberspace. In doing so, it departs from certain aspects of the Tallinn Manual (hereinafter, “the Manual”) framework, proposing a fixed, objective threshold for states in order to close loopholes based on lack of capacity and to enhance global cybersecurity. This part therefore aims not only to refine the legal contours of due diligence in cyberspace but also to present a practical model capable of breaking the prevailing cycle of impunity.
In this section, we will first examine the three tests of due diligence in detail. We will then outline the points at which our analysis diverges from the Tallinn Manual and subsequently discuss how the due diligence obligation can break the prevailing cycle of impunity, as well as how its effective implementation in cyberspace can enhance cooperation and contribute to ensuring global security.
A. Tests for Due Diligence
The due diligence obligation does not constitute a strict liability regime; rather, it imposes a positive duty of conduct (See para. 68). Accordingly, whether a State is bound by this obligation in a given case is determined through three context-specific tests: foreseeability, capacity to prevent, and harm.
While the application of these three tests is well-established in the classical domains of international law, their direct transposition to cyberspace may not always be appropriate. Therefore, it is necessary to examine each of these tests individually and adapt their interpretation to the specific features of the cyber context.
In this piece, we will apply all three tests in the context of availability attacks targeting critical infrastructure. Subsequently, we will present a set of recommendations that depart from the Tallinn Manual and the classical framework and that we believe are necessary to adapt the due diligence obligation to the specific conditions of cyberspace.
- Foreseeability Test
For a state to be held responsible for a cyberattack under the duty of due diligence, the attack must be foreseeable (See Jensen on Tallinn Manual 2.0, p. 744). Attacks detected at an early stage can be prevented through timely intervention; the criteria for foreseeability must therefore be clearly defined. Foreseeability depends on parameters such as the functioning of a state’s technological systems and their capacity to detect the attack.
The primary structures for detecting and preventing cyberattacks are national CERTs (Computer Emergency Response Teams) and FIRST (the Forum of Incident Response and Security Teams), which coordinates among them internationally. A national CERT is responsible for detecting, analyzing, and responding to cybersecurity incidents nationwide, while FIRST facilitates early warning and preventive action through information sharing among CERTs. The procedure is simple: suspicious IP addresses are reported, and members block these addresses to cut off botnet access.
Technological infrastructure determines the detection capacity of CERTs:
- Level-1 CERT: Equipped with advanced technology, continuous monitoring, and high international integration, it can detect attacks in the preparation phase and both send and receive early warnings.
- Level-2 CERT: Lacks some up-to-date technologies, has limited monitoring, and although a member of FIRST, integration is weak; some attacks are detected only after they have started.
- Level-3 CERT: Has low capacity, relies mostly on manual processes, and attacks are usually detected after they have started, or even after they have ended.
To understand attack detection capacity, the functioning of availability attacks in particular should be examined. This analysis can be conducted using Lockheed Martin’s widely accepted seven-phase “Cyber Kill Chain” model.
Phases of a Cyberattack:
In the Reconnaissance phase, the attacker identifies vulnerabilities in the target system through various scans. At this stage, detection by CERTs is generally unlikely.
In the Weaponization phase, the attacker creates a botnet or adds devices to an existing one. At Level-1, newly connected devices linking to known command-and-control (C2) addresses can be observed, whereas Level-2 detection generally depends on intelligence from security companies or other CERTs. The main challenge here is detecting the activity before it escalates into an actual attack. Under the due diligence obligation, such activity should be monitored as “suspicious,” thereby shortening detection time in the subsequent phase.
The Delivery phase is critical for foreseeability; commands are sent to bot devices hours or minutes before the attack begins. At Level-1, abnormal connections to C2 addresses can be detected, while at Level-2, detection is more challenging but still possible.
In the Exploitation phase, the attack is actually launched; once system capacity is exceeded, availability is denied. At this point, detection may be high, but preventing damage becomes challenging.
During the remaining three phases, the attack is already underway, and foreseeability is no longer relevant.
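The phase-by-phase assessment above can be condensed into a simple lookup. The sketch below is only an illustrative encoding of this article’s qualitative analysis; the labels and the helper function are ours, not measured data or any CERT’s actual classification scheme.

```python
# Qualitative encoding of the detection prospects discussed above.
# Labels reflect this article's assessments, not empirical measurements.
DETECTION = {
    # phase: {CERT level: detection prospect}
    "reconnaissance": {1: "unlikely", 2: "unlikely",            3: "unlikely"},
    "weaponization":  {1: "possible", 2: "needs external intel", 3: "unlikely"},
    "delivery":       {1: "likely",   2: "possible",             3: "unlikely"},
    "exploitation":   {1: "likely",   2: "likely",               3: "possible"},
}

# Phases in which detection can still translate into prevention;
# from Exploitation onward the attack is already under way.
PREVENTABLE_PHASES = ("reconnaissance", "weaponization", "delivery")

def early_warning_plausible(phase: str, cert_level: int) -> bool:
    """True if a CERT of the given level could plausibly detect the
    activity while prevention is still possible."""
    return (phase in PREVENTABLE_PHASES
            and DETECTION[phase][cert_level] != "unlikely")
```

As the table makes explicit, the Delivery phase is the decisive window for Level-1 and Level-2 CERTs, while Level-3 infrastructure offers no realistic early-warning prospect at any pre-attack stage.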
In scenarios where the attacker uses global cloud infrastructure instead of domestic botnet devices, the process differs. Traffic originates from global data centers rather than domestic sources, and cloud traffic is highly likely to blend with normal service traffic. States may, in such cases, engage with the cloud provider to enable early detection. However, given that global cloud systems process account data and perform significant verification to ensure they belong to real individuals, the mass creation of hundreds of thousands of accounts through cloud infrastructure is highly unlikely.
As a result, it is evident that States possessing Level-1 and Level-2 CERT systems have a significant capacity to foresee availability attacks such as DDoS.
As explained in Part I, botnets are often tested with small-scale commands from the C2 server before launching their main attacks, allowing the attacker to verify that the network is functioning properly. This anomaly is one that States could plausibly detect, although if the number of compromised devices within their own cyber infrastructure is low, it is equally possible that such activity might not be perceived as anomalous.
Similarly, the Delivery phase immediately preceding the attack can serve as a strong indicator of anomalous activity, and it is reasonable to expect States to detect it. Such detection can be carried out without inspecting the content of the transmitted messages – using only metadata – and is therefore free of legal barriers.
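To illustrate how such metadata-only detection might work, the sketch below flags external endpoints that an unusually large number of distinct domestic devices contact within a short window – a pattern consistent with the Delivery-phase fan-in described above. All names and thresholds are hypothetical placeholders, not a reference to any CERT’s actual tooling, and a real deployment would tune the threshold to baseline traffic.

```python
from collections import defaultdict

def flag_suspected_c2(flows, window_s=60, device_threshold=500):
    """Flag external endpoints contacted by an anomalously large number
    of distinct internal devices within a short time window.

    `flows` is an iterable of (timestamp, internal_device_ip, external_ip)
    tuples -- pure connection metadata, with no message contents, which is
    the legal point made above.  The threshold is an illustrative
    placeholder; as noted earlier, a small botnet may stay below it.
    """
    buckets = defaultdict(set)  # (window index, external ip) -> devices seen
    for ts, device, external in flows:
        buckets[(int(ts) // window_s, external)].add(device)
    return {ext for (_, ext), devices in buckets.items()
            if len(devices) >= device_threshold}
```

The design choice matters legally as well as technically: because only (timestamp, source, destination) tuples are inspected, the detection step never touches communication content or other sensitive personal data.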
When establishing an objective criterion for the foreseeability of attacks under the due diligence obligation, these capacities should be considered. It should also be recognized that this is not a matter of subjective foreseeability. While Rule 6 of the Tallinn Manual proposes an objective foreseeability test of what is “reasonable in light of the State’s capacities,” we contend that it would be more appropriate to adopt a single, common objective standard applicable to all States. This proposal will be addressed in the following section.
- Prevention Capacity Test
The prevention capacity test is applied once a state has foreseen – or ought to have foreseen – an act within its jurisdiction that could harm another state. It should therefore be applied after the foreseeability test: in some cases, even if the act is foreseen, it may be impossible to prevent, or preventing it may require actions that are themselves unlawful. For a state to be held responsible, the act must be “preventable” both factually and legally. (See ILC’s Report on Due Diligence in International Law)
In brief, when a state detects an anomaly within its own cyber infrastructure, its possible courses of action are quite limited. This is because dismantling a botnet would require the unlawful processing of personal data and access to individuals’ devices, which is legally unacceptable.
However, as explained in the previous section, none of this is necessary. The state can fulfill its due diligence obligation simply by identifying suspicious IP addresses through its CERT and transferring them to FIRST, without infringing on individuals’ private sphere. Other FIRST member states can then take measures against these IP addresses and thus avert the attack.
Similarly, when devices in the botnet communicate with the C2, only basic “metadata” – which can be obtained without intruding into the private sphere – needs to be used. Therefore, there is no sensitive personal data issue in listing botnets or detecting the C2.
From the perspective of “prevention capacity,” the critical factor is the speed with which the state acts once it has detected – or should have detected – an anomaly. Since anomalies often become detectable at the Delivery stage, the state is racing against time to protect other states.
Although it may be expected that the state will produce a full list of IP addresses, this process takes time. Consequently, it can be said that the state will have fulfilled its responsibility if it transmits the data it collects to FIRST in batches as soon as they are detected. This approach can also reduce the scale of the attack.
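The batching approach described above can be sketched as follows. The `send` callback stands in for whatever transport a CERT actually uses to reach FIRST peers; the function, its name, and its parameters are our illustrative assumptions, not a real FIRST interface.

```python
import time

def report_in_batches(suspicious_ips, send, batch_size=1000, max_wait_s=5.0):
    """Forward suspicious IPs to a sharing channel (e.g. a FIRST peer)
    in batches as they are identified, rather than waiting for the
    complete list.

    `suspicious_ips` is any iterator yielding addresses as detection
    proceeds; `send` is the CERT's transport (hypothetical here).
    A batch goes out when it is full or when `max_wait_s` has elapsed
    since the last transmission, so early findings are never held back
    by the slow tail of the scan.
    """
    batch, last_sent = [], time.monotonic()
    for ip in suspicious_ips:
        batch.append(ip)
        if len(batch) >= batch_size or time.monotonic() - last_sent >= max_wait_s:
            send(batch)
            batch, last_sent = [], time.monotonic()
    if batch:  # flush whatever remains once detection ends
        send(batch)
```

The point of the sketch is the timing guarantee argued for above: a state that streams partial lists as they accumulate acts with the required promptness even if compiling the full list takes longer than the attack window.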
Divergence from the Manual: Determining the Due Diligence Threshold
The Tallinn Manual provides that both the foreseeability test and the assessment of prevention capacity should be measured against an objective criterion of what is “within the means of the state.” While this approach accounts for differences in technical capability, it effectively allows states to escape responsibility by invoking inadequate resources.
Our recommendation in this piece is to draw the criterion from a fixed threshold applicable to all states. This would prevent states from escaping responsibility by invoking inadequate technical capabilities and would compel them to take protective measures for one another. Otherwise, only states with high technical capacity would bear responsibility – a situation that would be insufficient to ensure the security of the international community. Moreover, given the ubiquity of cyber devices today, limiting oversight to devices within the infrastructure of technically advanced states would be neither fair nor effective.
Applying the same fixed threshold to all states in a given case is not contrary to fairness. Unlike private actors, states – being subjects of international law – possess far greater power, and this power is taken into account in determining their responsibility. For example, in the case of an internationally wrongful act, unlike in domestic legal systems, fault on the part of the state is not a prerequisite for responsibility.
Similarly, a state’s capacity should not serve as a criterion in fulfilling its due diligence obligations toward other states. By virtue of its power, a state is an entity expected to possess such capacity. While it is unrealistic to expect all states to have the highest level of technical capacity, it is reasonable to expect them to have the capacity to detect major anomalies within their jurisdiction that could harm others. This is a natural consequence of sovereignty in cyberspace.
Furthermore, in classical domains such as land, air, and sea, the “lack of capacity” defense is interpreted quite narrowly, as exemplified in the Corfu Channel Case (See p. 21-22). According to this case, a state is obliged not to allow its territory to be used in a manner that harms another state, even if its capacity to prevent such use is insufficient. Similarly, if we claim that a state has sovereignty in cyberspace, it must also bear the obligation not to allow this sphere of sovereignty to be used in ways that cause harm to other states. This is, in fact, not a new proposal, but merely an adaptation of the classical public international law perspective to the cyber domain.
The “incapability” defense is often perceived as evidence of a state’s inability or unwillingness and is sometimes treated as a justification for actions, such as cross-border operations, that would otherwise constitute a breach of sovereignty. Accordingly, the same narrow interpretation should be applied in cyberspace, and states must fulfill this responsibility arising from their sovereignty.
Our proposal in this piece, particularly with respect to availability attacks, is to presume that a State must possess both the detection systems necessary for foreseeability and the infrastructure required to promptly compile and transmit lists of suspicious IP addresses. Moreover, our specific recommendation is that the “minimum standard” to be met by every State should be set somewhere between Level-1 and Level-2 capacity. States with greater capabilities would, by virtue of their higher prevention capacity, be subject to a correspondingly higher due diligence obligation.
It should be noted that the scale, sophistication, and preparation of an availability attack will affect the degree of anomaly observed in the infrastructure in each case. Accordingly, the foreseeability and prevention obligations must be assessed in light of the specific circumstances of each incident. Our proposal does not alter this case-by-case assessment; rather, it imposes on States a constant obligation to maintain detection and listing infrastructure appropriate to the circumstances of the incident.
- Harm Test
The final test required for due diligence is that the act must have caused measurable harm to another State. (See ILC’s Report on Due Diligence in International Law)
As explained in this paper, among cyber activities, the type of attack most capable of causing such harm is the availability attack. Considering that availability attacks also leave detectable digital footprints from the preparation stage – even before reaching the attempt stage – the model we propose focuses primarily on availability attacks.
Although, in rare cases, integrity or other types of attacks may also cause harm, (See: Part I of this piece) it is clear that the due diligence method proposed here is not as well-suited for them. Nonetheless, because the vast majority of harmful cyber operations are availability attacks, the proposed model would remain highly effective in promoting both cybersecurity and State responsibility.
B. Advancing Due Diligence for Global Cybersecurity
The effective implementation of the due diligence obligation in cyberspace has implications that extend well beyond the settlement of individual disputes. By placing a proactive duty on all States to detect and address harmful cyber activities originating from their territory, due diligence reorients the focus from reactive attribution, often hindered by technical and political obstacles, to a preventative, cooperative model.
This approach not only offers a legal pathway to challenge the entrenched cycle of impunity but also fosters conditions for sustained inter-State collaboration, thereby strengthening the collective resilience of the international community against cyber threats. The potential benefits of this framework are examined below under two headings.
C. Challenging the Impunity
One of the most problematic features of the current international order is that the difficulty of attributing cyberattacks effectively shields states behind a de facto wall of impunity. Yet the model proposed here dismantles this shield by shifting the focus away from the identity of the attacker and toward whether the harmful activity passed through a state’s infrastructure in a detectable manner. By doing so, it renders defenses such as “we lacked the technical capacity” or “our systems were insufficient” far less credible – both legally and practically. The central question becomes not who launched the attack, but whether the state noticed anomalous activity within its jurisdiction and transmitted timely warnings.
This approach also has the potential to reduce interstate tensions. Rather than treating a state as if it were automatically responsible for the attack itself, the model allows for its conduct to be assessed solely on the basis of a failure to fulfil its “obligation to prevent” harm emanating from its territory. As a result, the frequent misattributions, political maneuvering, and strategic accusations that characterize the cyber domain gain less traction. States are encouraged to focus on their own responsibilities, evaluated through a common objective threshold, rather than engaging in politically charged debates over authorship.
Ultimately, the proposed due diligence standard offers a clearer, more workable, and more cooperation-oriented mechanism – legally and politically. It clarifies the scope of state obligations, increases transparency in responsibility assessments, and helps to disrupt the entrenched cycle of impunity surrounding cyber operations. In doing so, it creates the conditions for a more predictable and collective approach to cybersecurity in an increasingly complex digital environment.
D. Strengthening International Cooperation
Beyond the question of impunity, this model naturally strengthens international cooperation. Because states are required to share suspicious IP data and early-warning signals through channels like FIRST, information exchange becomes a built-in element of fulfilling their legal obligations rather than an optional gesture of goodwill. This turns due diligence into a structured, routine form of collaboration.
As states begin to depend on one another’s alerts and detection capacities, a more cohesive and mutually reinforcing security network emerges. Even states with limited technical resources gain access to timely warnings, reducing global blind spots and enhancing collective resilience. In this way, preventing harm to others becomes a direct pathway to improving each state’s own cybersecurity.
E. Conclusion
In conclusion, operationalizing the due diligence obligation in cyberspace offers a realistic and normatively coherent pathway to address harmful cyber operations without altering existing thresholds of international law. By reframing state responsibility around the detection of anomalies, the timely sharing of early-warning data, and the maintenance of minimum technical capacity, this model provides both a practical means to overcome the attribution stalemate and a foundation for deeper international cooperation. While it does not eliminate all challenges inherent to the cyber domain, it establishes a clearer, fairer, and more proactive framework – one capable of reducing impunity, strengthening collective resilience, and contributing meaningfully to global cybersecurity governance.