3. Co-ordinated Vulnerability Disclosure

CVD has become a key best practice to optimise the outcome of vulnerability treatment. This section explores the key conditions for CVD, the tools to facilitate CVD, as well as the legal risks faced by researchers, which are a major obstacle to CVD.

3.1. Key conditions for CVD

CVD is a process through which vulnerability owners and researchers work co-operatively to reduce the risks associated with the disclosure of a vulnerability. The premise of CVD is that vulnerability owners and researchers need to co-operate to reduce users’ window of exposure (see Glossary). The objective of CVD is to develop and distribute mitigations before vulnerability information becomes public. CVD is widely recognised as a good practice for researchers and vulnerability owners to address vulnerability disclosure responsibly. It is a co-ordination framework involving one or more security researcher(s) and vulnerability owner(s). CVD can, but does not necessarily, involve a third-party co-ordinator. International standards and guides provide guidance for CVD at the operational level. CVD is effective both for code and system vulnerabilities, and therefore an extremely large number of stakeholders can potentially use it to improve digital security.

The complementary, competing or conflicting incentives influencing the behaviour of organisations and individuals largely determine the success or failure of CVD. Externalities can also play a role when the costs incurred by the exploitation of vulnerabilities are not borne by the vulnerability owner (Cf. Figure 3).

Several conditions, introduced below, are required for CVD.

Awareness and knowledge of CVD

To implement a CVD process, stakeholders first need to be aware that it is a recognised best practice to reduce digital security risk. They also need to understand how it works and what their role is, which may be difficult given the complexity of CVD. Many businesses lacking a strong digital and digital security culture, such as IoT device producers and non-traditional ICT firms, are often unaware of CVD. Within organisations, internal stakeholders lacking a sufficient understanding of CVD may discourage entering into a CVD process. For example, they may refuse to recognise that their products or systems can have vulnerabilities, view CVD as a risk to their brand, interpret CVD as a broad encouragement for “hacking” the organisation, or fear that embracing CVD would attract criminals. When sufficiently trusted, the security and IT teams can educate the departments that have a role in decision-making or are exposed to the negative consequences of a vulnerability disclosure failure (leadership, marketing, public relations, legal, etc.).

Figure 3. Economic incentives, motivations and barriers in a co-ordinated vulnerability disclosure process

Note: in ENISA’s terminology, a security researcher is a finder and a vulnerability owner is a vendor.

Source: (ENISA, 2018[8])

Managing the sensitivity of vulnerability information

Newly discovered vulnerability information can be highly sensitive. It needs to be reported only to the parties best placed to develop, test and deploy mitigations, and only to the extent necessary to enable them to do so. The possibility of leaks is a serious challenge, and it increases when entities receive sensitive information they do not need, or when the number of entities informed is too high. As noted above, it may be necessary to involve governments early in the co-ordination process, for example when the vulnerability can affect critical activities, which increases the difficulty of keeping the vulnerability secret. Exchanges of vulnerability information should use tools and techniques that reduce the risk of confidentiality or integrity breaches, such as end-to-end encryption.
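As a purely illustrative sketch (not drawn from this paper), the snippet below shows one way a researcher could encrypt a report so that only the vulnerability owner holding the matching private key can read it. It assumes the open-source Python cryptography library and a placeholder public-key file name; in practice, parties often rely instead on established channels such as OpenPGP-encrypted e-mail or the encrypted submission forms of CVD platforms.

    # Illustrative only: hybrid encryption of a vulnerability report using the
    # vendor's RSA public key (placeholder file name "vendor_public_key.pem").
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    report = b"Hypothetical report: authentication bypass in /admin, steps to reproduce..."

    # Load the vulnerability owner's public key, e.g. as published in its VDP.
    with open("vendor_public_key.pem", "rb") as f:
        vendor_key = serialization.load_pem_public_key(f.read())

    # Encrypt the report with a fresh symmetric key, then encrypt that key with
    # the vendor's public key so that only the vendor can recover it.
    session_key = Fernet.generate_key()
    encrypted_report = Fernet(session_key).encrypt(report)
    encrypted_session_key = vendor_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # encrypted_session_key and encrypted_report can now be sent over any
    # channel; neither reveals the vulnerability without the private key.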

Balancing patch availability and quality

As explained above, swift development and distribution of patches is key to minimising users’ window of exposure to code vulnerabilities. However, developing reliable patches and assessing the related risks can take time, in particular in multi-party CVD. Code owners need to ensure that proposed patches are complete and effective, based on sufficient testing in different usage scenarios and technical configurations, before distributing them, and system owners have to test them prior to deployment. For example, some initial patches for the 2018 Spectre and Meltdown microprocessor vulnerabilities had negative effects on performance and on compatibility with certain anti-virus software. While addressing software vulnerabilities can take a few weeks, hardware vulnerabilities may require six months or more. Rigid mandatory deadlines for patch availability and application are likely to impede code owners’ ability to assess mitigation-related risks, and can create undesired side effects.

Establishing trust through good communications and clear expectations

Trust between vulnerability owners and researchers can be difficult to establish in a CVD process. The power dynamic does not favour security researchers: vulnerability owners, including governments, can easily use legal threats and coercion against them. Many organisations can feel threatened by a vulnerability report, in particular when their communications and legal departments, as well as high-level decision makers, have a low level of digital and digital security maturity. This is particularly often the case with IoT manufacturers that are new to the digital world. Researchers have less power, but they can communicate anonymously with vulnerability owners or disclose the vulnerability unilaterally to the public; this gives them some leverage, which needs to be used carefully. Co-ordination is primarily a matter of communication and information exchange between the parties, as well as management of expectations.

Both parties can reach out to a co-ordinator to help mend relationships and resolve tensions. Well-recognised standards and good practices can be used as a basis for all stakeholders to develop a CVD culture.

Overcoming co-ordination complexity

CVD can become particularly complex when many code owners are involved (“multi-party CVD”). Most modern software includes pre-existing third-party components, modules and libraries from the open source and commercial software worlds. Complex product value chains increase CVD complexity. For example, a vulnerability can affect a product included as a component in one or many (e.g. hundreds of) other products, making it difficult to know which parties may be affected. A researcher may report a vulnerability to the entity that owns the product’s brand or sells the product, which may not be responsible for addressing the vulnerable layer of code or component developed by someone else. In some cases, such as the Spectre microprocessor vulnerability, CVD may entail a broad collaboration within the product ecosystem to validate the vulnerability, and to develop, test and finally deliver mitigations to end users. The code owner may be located several steps away from the consumer down the value chain, and there may be no communication channel across these layers. This can make the co-ordination of vulnerability handling, including the distribution of a mitigation, uncertain and complex. International standards provide guidance on how to manage multi-party CVD, for example suggesting that the affected or best-positioned code owner should lead the co-ordination effort, and that a third-party co-ordinator can assist in setting up a broad collaboration within the concerned ecosystem (see below).

3.2. Tools to facilitate CVD

Vulnerability Disclosure Policies (VDP)

A VDP is an essential tool to invite researchers to send reports, increase their confidence that reports will be handled seriously, and reduce the risk of legal action. The VDP helps clarify researchers’ expectations by setting clear rules of the game. If VDPs are sufficiently readable and visible, they can reduce the likelihood that vulnerability reports exploring the possibility of a reward are interpreted as extortion attempts. A basic VDP can be as short as a single paragraph indicating how to send a vulnerability report securely to the vulnerability owner. A more typical and more effective VDP explains the contact method for secure communication; preconditions for reporting parties; clear expectations for how a report will be handled; and methods for rewarding a report. It also details the scope of the policy, i.e. the list of assets explicitly allowed and not allowed for testing, and the conditions under which a researcher can disclose the details of a vulnerability to third parties. A VDP also needs to provide a high level of certainty to researchers that they will not face legal proceedings if they respect its terms. This is particularly useful to encourage reporting, including from researchers located in other jurisdictions. While VDPs are a key tool, they do not always match researchers’ expectations or intentions. In practice, researchers can always research and disclose vulnerabilities in the manner they want (full disclosure, anonymous disclosure, etc.), and face the positive and negative consequences of doing so.
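Purely as an illustration (not part of this paper), many organisations make their VDP easy to find by publishing a machine-readable “security.txt” file, standardised in IETF RFC 9116 and served at /.well-known/security.txt on their website. A minimal, hypothetical file, with placeholder addresses and URLs, could look like this:

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Encryption: https://example.com/pgp-key.txt
    Policy: https://example.com/vulnerability-disclosure-policy
    Preferred-Languages: en
    Acknowledgments: https://example.com/security/hall-of-fame

Such a file only signposts the policy; the VDP itself still needs to spell out scope, preconditions, handling expectations and any rewards.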

If relevant, the policy can cover rules related to rewards, including their conditions and nature. Non-monetary rewards, credits, acknowledgements and marks of prestige can be particularly appreciated and act as an important incentive. For example, NCSC-NL’s VDP has been particularly successful in the Dutch security researchers’ community by rewarding researchers with a humorous T-shirt marked “I hacked the Dutch government and all I got was this lousy T-shirt”, or a cup with a plate saying “I hacked the Dutch tax administration and never got a refund”. Similarly, the Korean government credits security researchers in the public description of the vulnerability they report and includes them in a “hall of fame”.

Co-ordinator

Stakeholders can turn to a co-ordinator to facilitate co-ordination, namely a trusted third party such as a Computer Emergency Response Team (CERT), a Computer Security Incident Response Team (CSIRT) or any other entity in a position to facilitate the process. A co-ordinator can assist in a variety of cases, from easing the researcher-vulnerability owner relationship to orchestrating complex multi-party co-ordination, including vulnerability handling in the context of complex product value chains. Co-ordinators can also facilitate relationships between stakeholders across borders. Over time, CERTs and CSIRTs have established trusted relationships, including through FIRST, the international Forum of Incident Response and Security Teams.

Trust in the co-ordinator has multiple facets: technical competence, neutral judgement, strict respect of confidentiality, the ability and capacity to make a balanced assessment of the reasonableness of the various parties’ claims and demands, and the possibility to interact with trusted stakeholders across borders. The co-ordinator may receive confidential information from all parties and facilitate mutual understanding without sharing such information between the parties. Parties need to have a high degree of confidence that the co-ordinator will preserve the confidentiality of vulnerability and other sensitive information. When the co-ordinator is a government agency, parties need to trust that confidential information will not reach other parts of the government that could weaponise it for offensive use. Trust can become very challenging when vulnerability information needs to cross borders, in particular when governments are involved (cf. Box 3). Experts have highlighted cases of international co-ordination where governments have leaked sensitive information to the press, or to third parties with poor security practices. Some experts suggest that an international, not-for-profit and well-resourced vulnerability co-ordinator should be established to address such issues.

Standards and good practice

Standards are a useful means to facilitate co-ordination by providing parties with a shared understanding of processes and procedures. International standards are particularly relevant with respect to CVD since the co-ordination often takes place across borders and local standards, social norms and laws may create confusion and uncertainty among stakeholders. ISO/IEC 29147 on Vulnerability disclosure provides requirements and recommendations to code owners (called vendors) on the disclosure of vulnerabilities in products. ISO/IEC 30111:2019 on Vulnerability handling processes provides requirements and recommendations for how to process and remediate reported potential vulnerabilities in a product or service. There is currently no ISO/IEC standard on vulnerability management, but there are guides and domestic standards.

Annex 1 provides an overview of good practice based on a selection of guidance documents.
