Google as a security bellwether

Security is like a great white shark: It has to keep moving forward or it dies. Our own affinity for best practices should keep our support for security standards moving forward, but most people need a push, and Google is the pushiest player on the Internet.

There have been many examples over the past 10 years of Google moving the industry forward, at first by example and then by using its industry muscle. In January 2010, in one of the earliest examples, Google switched all Gmail servers to run HTTPS all the time. Not long after, all its services ran HTTPS by default. This, combined with a series of embarrassing attacks on users of other services, pushed most other large sites to run HTTPS exclusively. And it was revolutionary; at the time, even most banks didn’t run all their web pages on HTTPS.

The Chromium hammer

But it’s on the client end, with Chrome and Android, that Google has effected the most change. Chrome has been the dominant web browser for many years now, and in that time, Google has used error and warning messages like “Not secure,” along with other visual cues and program behavior, to favor sites that meet its security standards. The point of these visual cues is to make users suspicious, and justifiably so, of the content. It trains them to expect secure content and, thereby, trains site producers to provide only secure content.

The latest development is Google’s decision to block, by default, mixed content in Chrome. “Mixed content” is an HTTPS page with HTTP elements, usually images. Browsers like Chrome already block many particularly hazardous types of mixed content, including iframes and scripts loaded over HTTP. But images, audio, and video are still allowed through. Google says the change “will improve user privacy and security on the web and present a clearer browser security UX to users.”
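The logic behind a mixed-content check is simple to sketch: an HTTPS page is "mixed" when any of its subresources is fetched over plain HTTP. The function and URLs below are illustrative, not Chrome's actual implementation:

```python
from urllib.parse import urlparse

def find_mixed_content(page_url, resource_urls):
    """Return the subresource URLs that would count as mixed content:
    plain-HTTP resources embedded in an HTTPS page."""
    if urlparse(page_url).scheme != "https":
        return []  # mixed content only applies to HTTPS pages
    return [u for u in resource_urls if urlparse(u).scheme == "http"]

page = "https://example.com/article"
resources = [
    "https://example.com/style.css",    # secure: allowed
    "http://cdn.example.com/photo.jpg", # insecure image: now blocked by default
]
print(find_mixed_content(page, resources))  # ['http://cdn.example.com/photo.jpg']
```

In practice, Chrome first attempts to auto-upgrade such resources to HTTPS and blocks them only if the upgrade fails.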

Many content producers grumble about these burdens on them, but it’s fair to say that we, as users, benefit from them. Because of Google’s leadership, attentive users almost always have the tools they need to determine whether a site is legitimate.


Through search and services

And it’s not just in Chrome. Google’s real market power comes through search, and in 2014, Google started using HTTPS as a search “ranking signal.” This means that, all other things being equal, an HTTPS site will get a better search rank than an HTTP site.

Sometimes Google works not through standards or actual browser features but by providing security facilities for anyone to use. The best example is the Google Safe Browsing API, a web service that lets client applications “check URLs against Google’s constantly updated lists of unsafe web resources,” such as “social engineering sites (phishing and deceptive sites) and sites that host malware or unwanted software.” It’s not hard to see that any such list collected by Google would be a big and useful one. From early on, Google provided the API to Mozilla, and it is still used in Firefox.

Through certificates

Much of Google’s work with security standards has to do with SSL/TLS certificates, the documents that positively identify a site within the public-key infrastructure (PKI). SSL (Secure Sockets Layer), originally developed in the mid-1990s by Netscape for version 2 of the Navigator web browser, was taken over by the IETF and standardized as TLS (Transport Layer Security), although it’s still commonly called SSL.

Trust on the Internet is largely based on these digital certificates, but their operation in the real world is problematic. By trusting the certificate, you (or more specifically, your web browser) are trusting the certificate authority (CA) that issued it. Why does your browser trust the CA? Because a few key operating systems and applications, including Windows, macOS, iOS, Android, and Mozilla applications, come preconfigured to trust specific “root CAs.” You are also trusting the CA to revoke a certificate in some circumstances, such as when the private key has been compromised. It turns out that providing clients with the ability to quickly check whether a certificate has been revoked is a very hard problem.
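You can see this preconfigured trust for yourself. In Python, a default SSL context loads the platform's bundled root CAs — the anchors every certificate chain must ultimately lead back to (the exact list and count vary by operating system):

```python
import ssl

# A default context loads the platform's preconfigured root CAs --
# the anchors your client is implicitly told to trust.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted root CAs loaded")

for cert in roots[:3]:
    # 'subject' is a tuple of relative distinguished names
    subject = dict(rdn[0] for rdn in cert["subject"])
    print(subject.get("organizationName"))
```

Every one of those organizations can, in principle, issue a certificate your browser will accept for any site, which is why misbehavior by a single CA is such a serious event.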

Certificates build on a large number of open standards, but the industry body where most of the action is these days is the CA/Browser Forum, a consortium consisting mostly of CAs and browser developers. Google is mostly in the latter group, but as the developer of Chrome, it carries a lot of weight.

Even so, its last attempt to push a security standard change through the CA/Browser Forum failed. Certificate authorities once sold certificates with life spans as long as eight years. Various parties agreed many years back to bring this number down, first to five years, then three, and then to the current 825 days (about two years and three months). In June 2019, Google’s Ryan Sleevi proposed at a CA/Browser Forum meeting that the limit be reduced to 397 days.

There are good arguments for shortening certificate life spans—basically that if a private key is compromised, the period of the compromise is limited. If shorter life spans are better, then the shortest is the best. Why not daily certificates? The revocation problem almost solves itself then.

But having to renew more frequently puts a burden on customers, at least those who haven’t automated the process sufficiently. Sleevi’s proposal was discussed and endorsed by some members (including Apple and Let’s Encrypt), but in the end, the proposal failed. Several CAs surveyed their customers about the change and, according to Patrick Nohe of The SSL Store™, customers were overwhelmingly opposed. Nothing stops issuers from using shorter life spans. The current certificate for google.com has a life span of 84 days (Sept. 17, 2019, to Dec. 10, 2019), but Google is its own root CA, a luxury not enjoyed by many.
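The life spans involved are easy to check with simple date arithmetic, using the google.com certificate dates cited above:

```python
from datetime import date

# The article's example: google.com's certificate at the time of writing
issued = date(2019, 9, 17)
expires = date(2019, 12, 10)

lifespan = (expires - issued).days
print(lifespan)  # 84 -- far below even the proposed 397-day cap
```

At 84 days, Google is renewing its own certificates more than four times as often as Sleevi's rejected 397-day proposal would have required of anyone.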

The only way to function when certificates must be updated so frequently is to automate the process. That requires a full inventory of certificates for your own assets, which many organizations lack. It requires a programmatic interface to a certificate authority (e.g., DigiCert’s REST APIs or Boulder, the free Let’s Encrypt CA software), which is still far from common practice. And unless you’re a company of Google’s size and scope, able to run its own root CA, it requires a certificate management system, such as Venafi TrustAuthority. Do you have all these things? You’ll probably need them eventually, so start planning, or at least investigating, now.
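The core of that automation is a recurring check against the inventory: flag every certificate expiring within a renewal window and feed the list to the CA's API. The hostnames, dates, and 30-day window below are illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical inventory: hostname -> certificate expiry, as a
# certificate management system would track it.
inventory = {
    "www.example.com": datetime(2019, 12, 10),
    "api.example.com": datetime(2020, 6, 1),
}

def due_for_renewal(inventory, now, window_days=30):
    """Flag certificates expiring within the renewal window -- the check
    automation must run constantly once life spans shrink."""
    cutoff = now + timedelta(days=window_days)
    return sorted(host for host, expiry in inventory.items() if expiry <= cutoff)

print(due_for_renewal(inventory, datetime(2019, 11, 20)))  # ['www.example.com']
```

With the ACME protocol (which Boulder implements), the renewal step itself can be fully unattended, which is what makes 90-day certificates workable at all.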

CAs and many other parties argue that Google is trying to get users to trust it rather than the more independent CAs. Whatever the merits of CAs, very few people are going to go to the trouble of comparing the alternatives. It’s Chrome that’s in front of them, and if Chrome says to trust it, they will.

Pinning, secrecy, POODLE

Some of the other advances in security pushed, if not pioneered, by Google:

  • Certificate pinning: Chrome shipped with hard-coded public-key pins for Google’s own sites, a technique that exposed the fraudulent certificates issued after the 2011 DigiNotar breach.
  • Forward secrecy: In 2011, Google enabled forward-secret cipher suites by default on its HTTPS services, so that a key compromised later cannot be used to decrypt recorded past traffic.
  • POODLE: Google researchers disclosed the POODLE attack in 2014, which led browsers, including Chrome, to drop support for SSL 3.0 altogether.

Google spurns EV SSL

And then, sometimes Google acts unilaterally. As of version 77, Chrome no longer provides any visual cue for a site with an Extended Validation (EV) SSL certificate. This is not exactly intended as a security enhancement. Rather, Google has decided EV is not sufficiently worthwhile to merit a UI endorsement of greater security.

The idea behind EV SSL is that the CA performs much more exacting verification of the identity of the applicant, such as checking the articles of incorporation and all contact information. Of course, this raises the cost of the certificate significantly. In exchange, the browser is supposed to display the lock icon in green along with the name of the organization where the user can see it. Current versions of Mozilla Firefox and Microsoft Edge still have this behavior, although future versions of Edge will remove it. Google stopped displaying the organization name long ago; the change just stops the lock from turning green. If you click on the lock icon, you can tell whether a site has an EV certificate, but you have to dig for the information. As Google’s Devon O’Brien explains, “… the Chrome Security UX team has determined that the EV UI does not protect users as intended.”

EV is certainly presented as a security feature, but if Chrome no longer supports it, it’s hard to see why it’s worth the extra cost. In fact, there is no longer any visual difference in Chrome between a site with a certificate bought at full retail from a CA and one with a free certificate from Let’s Encrypt.

Cryptography is core

Google doesn’t make money directly off any of the measures described above. Why does it do it? I’m satisfied that there are many top security researchers working at Google who sincerely believe it is their job to make the Internet as secure as possible for users. It appears that Google has decided that it’s the company’s job to use its market power to push everyone toward more secure products and configurations.

For the most part, and particularly with its work on the Chrome browser, it’s hard to argue with that. Its de-recognition of EV SSL, which could well kill off the market for those expensive certificates, represents a new level of assertiveness. Google may be right that EV conveys a misleading sense of increased security, but there is a standard for EV, from a body in which Google participates, and the move is unilateral. If it succeeds in killing off the EV SSL market, it will be a clear demonstration of who’s in charge.

Google as a security bellwether: Lessons for leaders

  • Keep all software updated to the latest version.
  • Don’t let testing drag on while your production systems are vulnerable.
  • Keep up with changes in security and other standards, and how they may affect your internal and customer-facing services.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.
