“Responsible” disclosure? AMD given 24 hours to fix

Reminiscent of (but unrelated to) the Spectre and Meltdown vulnerabilities that plagued Intel earlier this year, AMD was recently hit with a fairly significant set of vulnerabilities.  Although these vulnerabilities are less severe than the ones that hit Intel (they require administrative access to exploit, rather than just the ability to run unprivileged code), the method by which they were made public is alarming.

According to AMD’s initial assessment, there are three groups of vulnerabilities, all requiring administrative access to exploit.  MASTERKEY lets an attacker who has already compromised the system hide that compromise from the AMD Secure Processor’s integrity checks across reboots.  RYZENFALL and FALLOUT allow an attacker to install malware on the system that is difficult to detect.  While those issues will be patched with a firmware update, CHIMERA is a hardware issue that allows malware to be hidden, and it cannot be fully patched through an update.
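At its core, the boot-time check that MASTERKEY subverts is an integrity comparison: measure the firmware and compare it against a known-good value. The sketch below is purely illustrative (the real Secure Processor verifies cryptographic signatures in hardware; every name and value here is made up):

```python
import hashlib
import hmac

# Hypothetical "golden" measurement of the firmware image,
# recorded when the system was provisioned.
TRUSTED_DIGEST = hashlib.sha256(b"known-good firmware image").hexdigest()

def verify_firmware(image: bytes) -> bool:
    """Recompute the image digest and compare it to the trusted value.

    hmac.compare_digest avoids timing side channels in the comparison.
    """
    digest = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(digest, TRUSTED_DIGEST)

print(verify_firmware(b"known-good firmware image"))  # intact image
print(verify_firmware(b"tampered firmware image"))    # corrupted image
```

An exploit like MASTERKEY works by defeating this kind of check, so the corrupted image passes as the trusted one.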

According to AMD as well as various media outlets, the company had less than 24 hours after being notified of the issues before the discoverer (CTS Labs) went public with the vulnerabilities.  While the technical details of the vulnerabilities were not (and still have not been) made public, this creates very bad PR for AMD and encourages anyone who may have access to the technical details to exploit the issues before a fix can be made (rather than learning the vulnerabilities had been discovered only as a fix was released).

In response to the controversy, Ilia Luk-Zilberman (the Chief Technology Officer of CTS Labs) posted an open letter explaining the decision as well as his issues with responsible disclosure.  Responsible disclosure is a generally agreed-upon policy: upon finding a serious security issue in a piece of software or a system, one reports it privately to the manufacturer, maintainer, or security team so that a patch can be ready before the public learns of the issue.  This spares the company bad press, prevents it from being caught off guard, and assures consumers that they are as protected as they can be.  This policy was followed for many recent exploits such as Dirty COW and Spectre/Meltdown (although the latter were accidentally leaked a few days early).

Another aspect of responsible disclosure is a time limit (typically cited as 90 days) after which the discoverer publishes the vulnerability.  This ensures that the company prioritizes fixing the issue instead of putting it off (after all… even though it’s broken, no one knows about it, so it’s probably fine).  This famously happened with the Moonpig vulnerability disclosed in 2015, in which the greeting-card company Moonpig had an incredibly broken API (requests did not need to be authenticated; citing a user’s ID was enough to act as that user), and refused to fix it for over a year until the details went public.
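The Moonpig flaw was a classic insecure direct object reference: the server trusted a caller-supplied ID instead of the authenticated session. A minimal sketch of both the broken and the corrected check, with hypothetical names and data:

```python
# Toy account store; real systems would use a database.
ACCOUNTS = {1001: {"name": "Alice"}, 1002: {"name": "Bob"}}

def get_account_broken(requested_id: int) -> dict:
    # BROKEN: any caller can read any account just by citing its ID.
    return ACCOUNTS[requested_id]

def get_account_fixed(session_user_id: int, requested_id: int) -> dict:
    # FIXED: the requested ID must match the authenticated session.
    if session_user_id != requested_id:
        raise PermissionError("cannot access another user's account")
    return ACCOUNTS[requested_id]
```

In the broken version, `get_account_broken(1002)` hands Bob’s record to anyone; the fixed version raises `PermissionError` unless the session actually belongs to Bob.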

CTS Labs, however, decided that the risks of withholding disclosure outweighed its benefits, so they published the possible impacts of the vulnerabilities without disclosing the technical details or how to exploit them.

– Ryan V.

Extended Validation is Broken

Extended Validation is a tool that site owners can use to prove the identity of their site beyond a standard HTTPS certificate.  While an HTTPS certificate proves that the server you are communicating with is the site identified by the domain name, domain names themselves are easy to spoof for some sites (like facebok.com imitating facebook.com).  If a spoofed site displays a verification indicator, a person may trust it without scrutinizing the domain name.
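Look-alike domains such as facebok.com are often just one or two character edits away from the real one, which is why a simple edit-distance heuristic can flag them. A minimal sketch (illustrative only; real browsers and registrars use far more sophisticated detection):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_spoof(domain: str, trusted: str) -> bool:
    # Within two edits of a trusted domain, but not the domain itself.
    return domain != trusted and edit_distance(domain, trusted) <= 2

print(looks_like_spoof("facebok.com", "facebook.com"))  # a single deleted 'o'
```

The danger described above is that an EV indicator short-circuits exactly this kind of scrutiny: the user trusts the green bar instead of the domain.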

In order to receive an Extended Validation certificate, one must prove to a Certificate Authority that they “are” that name, rather than merely owning the domain name.  Most commonly, this is done by proving that you own a company by that name, which is a fairly secure system.  However, in this report, Ian Carroll exploits a vulnerability not in the technical system, but in the United States’ business-registration rules.

In America, the same company name can be registered in different states (since, for all practical purposes, we are 50 separate countries that are just really friendly).  Carroll takes advantage of this fact by registering the company name “Stripe, Inc.” in Kentucky (Stripe, the popular payment platform, is registered in Delaware).  He uses the site served with this certificate not for malicious purposes but to spread awareness of the vulnerability, hosting his whitepaper there.
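The ambiguity is visible in the certificates themselves: EV certificates carry jurisdiction fields that distinguish the two registrations, but browsers historically surfaced only the organization name. A sketch with hypothetical subject values (the field names follow X.509 EV conventions; the values are made up, not the actual certificates):

```python
# Two hypothetical EV certificate subjects for companies named "Stripe, Inc."
real_stripe = {
    "organizationName": "Stripe, Inc.",
    "jurisdictionStateOrProvinceName": "Delaware",   # hypothetical value
}
lookalike = {
    "organizationName": "Stripe, Inc.",
    "jurisdictionStateOrProvinceName": "Kentucky",   # hypothetical value
}

# A browser UI that shows only the organization name cannot tell them apart:
print(real_stripe["organizationName"] == lookalike["organizationName"])  # True

# Only the jurisdiction field, which browsers hid, disambiguates them:
print(real_stripe["jurisdictionStateOrProvinceName"],
      lookalike["jurisdictionStateOrProvinceName"])
```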

This issue raises many questions about how we should verify identity, as well as how browsers should present verification information to the user.  The vulnerability is completely sound technically, in that the entire process does what it should (the company named “Stripe, Inc.” has been verified to serve this content).  There is, unfortunately, no simple way to solve this problem.  Should certificate authorities issue these certificates only to companies that are “big” or, even more ambiguously, “well-known”, and deny verification to startups?  Should the browser display the state of registration alongside the certificate (assuming that the common citizen knows the state of registration of every website he or she visits)?  These are not easy questions, but their answers are fundamental to the future of identification in an increasingly automated world.


– Ryan Volz