Thursday, October 6, 2016

Study Group: Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Today's study group was on the now slightly dated 2009 paper 'Crying Wolf: An Empirical Study of SSL Warning Effectiveness' [1], published at the USENIX Security Symposium. In cryptography research it is easy to overlook implementation and usability and instead focus on theory. As Randall Munroe's well-known comic succinctly explains, the weaknesses in our cryptographic solutions are seldom in the constructions themselves, but in their real-world application.

This paper explores the use and design of the warnings which modern (!) browsers present to a user when SSL certificates cannot be verified, and in particular the user's reaction to them. There is little point in a cryptographically secure system of authentication if the end user ignores the warnings and proceeds past them. The authors suggest that when browsers 'cry wolf' upon encountering SSL errors, users become desensitised over time, learn to ignore the warnings, and thus become susceptible to having their data stolen.

What is SSL?

(The initiated can skip this.)
SSL stands for Secure Sockets Layer, and is a method by which a client can access a web server securely. The SSL Handshake protocol uses a so-called SSL certificate to verify a server's authenticity to a client. An SSL certificate specifies whom the certificate was issued to, whom it was issued by, the period of validity and the server's public key. (Old SSL protocols have been superseded by TLS, but the principles involved are essentially the same.) At a very high level, the protocol proceeds as follows:
  1.  The client sends a 'hello' message to the server, requesting content.
  2.  The server sends the client its certificate, which contains its public key.
  3.  The client checks that the certificate is valid.
  4.  If the check passes, the client generates a session key, encrypts it using the server's public key, and sends this to the server. If the check fails, the client aborts.
  5.  The server decrypts the session key using its secret key.
The client and the server can now encrypt all data sent between them using the (symmetric) session key.
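To make step 3 concrete, here is a minimal sketch in Python of a client performing the certificate check as part of the handshake. The hostname is just an illustrative choice; the standard library's ssl module verifies the certificate chain, validity period and hostname by default, and aborts the handshake with an exception if any check fails.

```python
import socket
import ssl

hostname = "example.com"  # illustrative; any HTTPS server works

context = ssl.create_default_context()  # chain, expiry and hostname checks enabled by default

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake, including certificate verification;
    # on failure it raises ssl.SSLCertVerificationError and the connection is dropped
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issued to: ", cert["subject"])
        print("Issued by: ", cert["issuer"])
        print("Valid from", cert["notBefore"], "until", cert["notAfter"])
```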

What can go wrong?

If the certificate is invalid, the client aborts. The problems this study considers are:
  •  Expired certificate: the certificate is no longer valid.
  •  Unknown certificate authority: the issuing authority is not known.
  •  Domain mismatch: the domain of the web server and the certificate's listed domain do not match.
If one of the above occurs, the web browser will alert the user. The purpose of the study was to assess how effectively the browser conveys the severity of the problem to the user: if strong warnings are shown when the risks are small, people learn to assume that high-risk situations presented with the same warning are just as innocuous.
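As an aside, 'clicking through' such a warning is, at the protocol level, essentially the same as switching certificate verification off. A hedged sketch, reusing the Python ssl setup from above (the hostname is hypothetical), of what the browser effectively does when the user overrides the warning:

```python
import socket
import ssl

hostname = "expired.example.com"  # hypothetical host whose certificate fails verification

context = ssl.create_default_context()
context.check_hostname = False        # ignore domain mismatch
context.verify_mode = ssl.CERT_NONE   # ignore expiry / unknown certificate authority

# The handshake now succeeds regardless of the certificate's validity,
# so an attacker presenting any certificate at all could read the traffic.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())
```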

The Studies


Using a survey, the authors gathered data from 409 web users on their reactions to SSL warnings and their overall comprehension of the risks involved in ignoring them.

They found that context (i.e. the type of website visited) made little difference to whether or not a user would heed the warnings.

According to the data, respondents who understood 'Domain mismatch' and 'Unknown certificate authority' warnings were less likely to proceed than those who did not, whereas those who understood certificate expiry errors were more likely to proceed. In fact, the experimenters found that users consistently rated the risk of an expired certificate lower than that of the other two errors.

The authors additionally report some wonderful responses from users, including:
  •  'I use a Mac, so nothing bad would happen'
  •  'Since I use FreeBSD, rather than Windows, not much [risk]'
  •  'On my Linux box, nothing significantly bad would happen'

Laboratory Experiment

A set of 100 participants was asked to use four websites to complete different tasks. One was a banking website with an invalid certificate, one a library website with an invalid certificate, and the other two were dummy sites.

The participants were shown the SSL warning from either Internet Explorer 7 (IE7), Firefox 2 (FF2) or Firefox 3 (FF3), or one of two newly designed SSL warnings. The IE7 warning is a whole-page warning but requires just one click to ignore. The FF2 warning is a pop-up window and also requires only one click to ignore. The first version of the FF3 warning needed 11 steps: 'They made the original version of the warning so difficult for users to override, that only an expert could be likely to figure out how to do it.' The first new design was multi-page and asked users to specify the nature of the website they were visiting, presenting severe warnings for websites requiring a high level of security and milder warnings otherwise. The second new design was a single page, similar to the FF3 warning but 'looked more severe'. Images can be found in the paper.

For the library website, the IE7, FF2 and multi-page warnings did little to prevent people from proceeding compared with the FF3 warning, and the single-page warning behaved much like those earlier warnings.

For the banking website, the two new warnings did prevent people from accessing the website, but no more than the FF3 warning. The new warnings and the FF3 warning outperformed the IE7 and FF2 warnings in preventing people from accessing the website.


In conclusion, the authors say that the average user does not understand the risks that SSL warnings indicate, and as such the decision of whether or not to proceed should, in most cases, essentially be made for them by the browser.

More recently, Chrome redesigned its SSL warnings due to the large proportion of users who simply ignored all SSL warnings [2].

To see different SSL warnings in your current browser, visit


[1] Crying Wolf: An Empirical Study of SSL Warning Effectiveness by Joshua Sunshine, Serge Egelman, Hazim Almuhimedi, Neha Atri and Lorrie Faith Cranor. In Proceedings of the 18th USENIX Security Symposium, 2009; link.
[2] Improving SSL Warnings: Comprehension and Adherence by Adrienne Porter Felt, Alex Ainslie, Robert W. Reeder, Sunny Consolvo, Somas Thyagaraja, Alan Bettes, Helen Harris and Jeff Grimes. In CHI 2015; link.

Thursday, September 29, 2016

Study Group: On the Impossibility of Tight Cryptographic Reductions

Today I kicked off the study groups for 2016/17. I spoke about On the Impossibility of Tight Cryptographic Reductions, from this year's Eurocrypt. Keen readers of the blog might recall that this paper is a particular favourite of mine.

Since I've written about it before, I won't go into much detail about the paper. Instead I'll say why I (still) like it, and a bit about how it's shaped my own work this year.

So, why choose this paper again? First and foremost, it's just really good. It's well written and the result - that certain reductions are necessarily lossy - has big implications for the way we do things in provable security. There is an increasing drive for theoreticians to produce work that has a meaningful impact on the real world. Choosing security parameters is an important part of that picture, but this paper shows that the traditional tools of provable security can sometimes be inadequate in this regard - especially in a setting like the internet, with billions of users of cryptography.

Is it that our methods need changing? Or should practitioners ignore the theory and 'go with their gut' when choosing parameters? Do we need to actively avoid those cryptographic primitives for which reductions are always lossy, like rerandomisable signatures and encryption schemes in which each public key has a unique secret key? These are profound questions for the community.

Another reason I chose to talk about this paper is that it's nicely self-contained. This is not an incremental result about something obscure. Almost everyone here at Bristol has encountered reductions, and after recalling the standard EUF-CMA definition for signatures it was easy to build up to the main theorem of the paper (or at least the particular case of signatures in the main theorem). If any new PhD students are looking for some theory to get their teeth into, this paper would be a great starting point.
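For anyone wanting the starting point spelled out, here is a sketch of the standard EUF-CMA notion for a signature scheme $\Sigma = (\mathsf{KeyGen}, \mathsf{Sign}, \mathsf{Vrfy})$, in notation of my own choosing:

$$ \mathrm{Adv}^{\text{EUF-CMA}}_{\Sigma}(\mathcal{A}) \;=\; \Pr\Big[\, \mathsf{Vrfy}(pk, m^*, \sigma^*) = 1 \;\wedge\; m^* \notin Q \;:\; (pk, sk) \leftarrow \mathsf{KeyGen}(1^\lambda),\; (m^*, \sigma^*) \leftarrow \mathcal{A}^{\mathsf{Sign}(sk,\cdot)}(pk) \,\Big], $$

where $Q$ is the set of messages $\mathcal{A}$ queried to its signing oracle. A reduction turns any forger $\mathcal{A}$ into a solver for some underlying hard problem, and the paper's main theorem lower-bounds the security loss such a reduction must incur for the kinds of primitives mentioned above.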

Finally, I cheated a bit by giving my presentation about a paper that I've become very familiar with in the last few months, as I'm trying to extend it. At the moment, the result only applies to certain public-key primitives; I'd like to say something about multi-key to single-key reductions for symmetric encryption (which is of particular relevance to my PhD project, on Key Management APIs). I hope to have more to say on this in the not-too-distant future.

Sunday, August 28, 2016

Crypto 2016: Breaking the Circuit Size Barrier for Secure Computation Under DDH

The CRYPTO 2016 Best Paper Award went to Boyle et al. [1]. The paper provides several new protocols based on the DDH assumption with applications to 2PC (two-party computation), private information retrieval and function secret sharing.

Even more interestingly, the authors present a protocol in which 2PC for branching programs is realised in such a way that the communication complexity depends only on the input size, while the computation is linear in the circuit size.

The central idea develops around the efficient evaluation of RMS (restricted multiplication straight-line) programs. The special feature of RMS programs is that they allow multiplications only between memory and input values; additions between memory values come for free. Although this class seems quite restrictive, it covers the class of branching programs (logarithmic-depth boolean circuits with polynomial size and bounded input).

In the 2PC evaluation of RMS programs, suppose there is a linearly shared memory value $[y]$ between the parties $P_1$ and $P_2$. When $P_1$ wants to share an input value $x$ with $P_2$, it sends an ElGamal encryption of $x$, $g^{xc}$, where $c$ is a symmetric ElGamal key. Clearly, the encryption is homomorphic with respect to multiplication, but how can we perform operations between a linearly secret-shared (SS) value and an ElGamal encryption?

This is solved by introducing a distributed DLog procedure which converts ElGamal ciphertexts into linear SS values. The method uses a truncated PRF and counts the number of steps until the PRF, evaluated on the sequence of group elements starting from the ElGamal ciphertext, equals $0$. Unfortunately this algorithm has some probability of outputting an incorrect result, but this can be fixed by evaluating multiple instances of the same protocol in parallel and then using an MPC protocol to select the majority result.
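To make the conversion step concrete, here is a toy Python sketch of the distributed DLog idea as I understand it: both parties hold multiplicative shares of $g^x$ and independently walk forward, multiplying by $g$, until a pseudorandom predicate marks a 'distinguished' point; the difference of their step counts is then a sharing of $x$, except when a distinguished point falls inside the length-$x$ gap. All parameters (the group, the hash-based predicate, the density) are illustrative choices of mine, not the paper's.

```python
import hashlib

# Toy parameters for illustration only; a real instantiation uses a secure
# group and a keyed PRF with a carefully chosen truncation/density.
p = 2**127 - 1      # prime modulus standing in for the group
g = 3
DIST_BITS = 8       # a point is "distinguished" with probability 2**-DIST_BITS

def distinguished(h):
    # distinguished iff the last byte of the hash falls below the threshold
    return hashlib.sha256(str(h).encode()).digest()[-1] < (1 << (8 - DIST_BITS))

def dist_dlog(share, max_steps=1 << 16):
    """Walk share, share*g, share*g^2, ... and return the number of steps
    taken until the first distinguished point (the party's output share)."""
    h, steps = share, 0
    while not distinguished(h) and steps < max_steps:
        h = (h * g) % p
        steps += 1
    return steps

# Multiplicative shares of g^x: one party holds h0, the other holds h0 * g^x.
x = 5
h0 = pow(g, 123456, p)
h1 = (h0 * pow(g, x, p)) % p

z0, z1 = dist_dlog(h0), dist_dlog(h1)
print(z0 - z1)   # equals x unless a distinguished point lies in the length-x gap
```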

Of course, there are some caveats at the start of the scheme, such as converting the key generation procedure to a public-key one and removing circular-security assumptions on the key. These are presented gradually by the authors, which eases the reader's understanding of the ideas.

What I find neat is that at the end of the paper we can easily see how to reduce the communication for general 'dense' arithmetic circuits, by splitting them into multiple reduced-depth chunks and then applying the RMS programs to each chunk (because an addition or multiplication gate can be represented as a branching program).

Of course we can spot some open problems left as future work such as:
  1. Extend the protocols for larger classes other than branching programs.
  2. Protocol only works for $2$ parties. Can we find something with constant communication for multiple parties without using FHE?
  3. Can we adapt the protocol for malicious parties in some way other than a generic compiler as in [2]?

[1]: Boyle, Elette, Niv Gilboa, and Yuval Ishai. "Breaking the Circuit Size Barrier for Secure Computation Under DDH." CRYPTO 2016.
[2]: Ishai, Yuval, et al. "Cryptography with constant computational overhead." Proceedings of the fortieth annual ACM symposium on Theory of computing. ACM, 2008.

Friday, August 26, 2016

CHES 2016: On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking

During the morning session on the final day of CHES 2016, Dahmun Goudarzi presented his paper, co-authored with Matthieu Rivain, on bit-sliced higher-order masking.

Bit-sliced higher-order masking of S-boxes is an alternative to higher-order masking schemes in which an S-box is represented by a polynomial over a binary finite field. The basic idea is to bit-slice the Boolean circuits of all the S-boxes used in a cipher round. Securing a Boolean AND operation, as needed in the bit-sliced approach, is significantly faster than securing a multiplication over a binary finite field, as needed in polynomial-based masking schemes; on the other hand, the number of AND operations required is significantly higher than the number of field multiplications in the polynomial-based approach. Nevertheless, the use of bit-slicing with relatively large registers (for instance, 32-bit registers) previously led the same authors to demonstrate significant improvements over polynomial-based masking schemes for specific block ciphers such as AES and PRESENT [GR16]. However, no generic method for applying bit-sliced higher-order masking to arbitrary S-boxes was previously known, and proposing such a method is one of the main contributions of the current work.

The running time and the randomness requirement of the bit-sliced masking technique depend mainly on the multiplicative complexity, i.e., the number of AND gates in the masked circuit (a more precise measure is the parallel multiplicative complexity). While it is known from previous work how to obtain circuits that are optimal w.r.t. multiplicative complexity for small S-boxes by using SAT solvers, solving the same problem for 6-bit or larger S-boxes had remained an interesting open problem. In the current work, the authors propose a new heuristic method to obtain Boolean circuits of low multiplicative complexity for arbitrary S-boxes. The proposed method follows the same approach as a previous work [CRV14] that computes efficient polynomial representations of S-boxes over binary finite fields. The authors provide a heuristic analysis of the multiplicative complexity of their method that is quite close to the experimental results for S-box sizes of practical relevance. Finally, they implement the bit-sliced masking technique on a 32-bit ARM architecture, evaluating sixteen 4-bit S-boxes in parallel in one implementation and sixteen 8-bit S-boxes in parallel in another. The timing results indicate that the bit-sliced masking method performs considerably better than polynomial-based masking methods once the number of shares exceeds a certain bound.
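To illustrate why the masked AND is the cost centre, here is a minimal Python sketch of an ISW-style secure AND on bitsliced Boolean sharings (the standard gadget, not necessarily the exact gadget used in the paper); with 32-bit words, each call processes 32 S-box instances in parallel.

```python
import secrets

WORD_BITS = 32  # matches the 32-bit bitsliced registers discussed above

def share(x, n):
    """Split a 32-bit word x into n Boolean (XOR) shares."""
    shares = [secrets.randbits(WORD_BITS) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

def masked_and(a, b):
    """ISW secure AND: returns a sharing of (unshare(a) & unshare(b)).
    Needs n*(n-1)/2 fresh random words per call, which dominates the cost."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(WORD_BITS)
            c[i] ^= r
            c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return c

# Quick check with 3 shares (second-order masking)
x, y = 0x12345678, 0x0F0F0F0F
assert unshare(masked_and(share(x, 3), share(y, 3))) == (x & y)
```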

[CRV14] Jean-Sébastien Coron, Arnab Roy, Srinivas Vivek: Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. CHES 2014 & JCEN 2015.
[GR16] Dahmun Goudarzi and Matthieu Rivain. How Fast Can Higher-Order Masking Be in Software? Cryptology ePrint Archive, 2016.

Tuesday, August 23, 2016

CRYPTO 2016 – Backdoors, big keys and reverse firewalls on compromised systems

The morning of the second day at CRYPTO 2016 started with a track on “Compromised Systems”, consisting of three talks covering scenarios and attacks usually disregarded in the vast majority of the cryptographic literature. They all shared a concern about mass surveillance.

Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have worked hard on this scenario, developing a whole range of tools with different notions of security and setup assumptions, one of the most common axioms being that Alice has access to a trusted computer with a proper implementation of the cryptographic protocol she wants to run. The harsh truth is that this is a naïve assumption. The Snowden revelations show that powerful adversaries can and will corrupt users' machines via extraordinary means, such as subverting cryptographic standards, intercepting and tampering with hardware on its way to users, or using Tailored Access Operations units.

Nevertheless, the relevance of these talks was not just a matter of a “trending topic”, or of distrust in the authoritarian and unaccountable practices of intelligence agencies. More frequently than we would like, presumably accidental vulnerabilities (such as Poodle, Heartbleed, etc.) are found in popular cryptographic software, leaving the end user unprotected even when using honest implementations. In the meantime, as Paul Kocher recalled in his invited talk the day after, most of our community fails to notice that, when we design our primitives and protocols, we blindly rely on a mathematical model of reality that sometimes has little to do with reality itself.

In the same way that people from the CHES community have become more aware (mainly the hard way) that relying on the wrong assumptions leads to a false sense of security in deployed systems and devices, I think those of us not that close to hardware should also try to step back and look at how realistic our assumptions are. This includes, as these talks addressed in different ways, starting to assume that some standards might be compromised at some point (and that most systems will be), and understanding what can still be done in those cases.

What would a cryptography that worries not only about prevention, but about the whole security cycle, look like? How can the cryptography and information security communities come closer together?

Message Transmission with Reverse Firewalls— Secure Communication on Corrupted Machines

The reverse firewalls framework was recently introduced by Mironov and Stephens-Davidowitz, with a paper that has already been discussed in our group’s seminars and this same blog. A secure reverse firewall is a third party that “sits between Alice and the outside world” and modifies her sent and received messages so that even if her machine has been corrupted, Alice’s security is still guaranteed.

Their elegant construction does not require the users to place any additional trust in the firewall, and relies on the underlying cryptographic schemes being rerandomisable. With this threat model and these rerandomisation capabilities, they describe impossibility results as well as concrete constructions.

For example, in the context of semantically secure public-key encryption, in order to provide a reverse firewall for Bob, the scheme must allow a third party to rerandomise a public key and to map ciphertexts under the rerandomised public key to ciphertexts under the original public key. In the same context, Alice's reverse firewall must be able to rerandomise the ciphertext she sends to Bob, in such a way that Dec(Rerand(Enc(m))) = m.
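Ciphertext rerandomisation is easy to see for ElGamal. Below is a toy Python sketch (insecure parameters, purely illustrative) of the property Dec(Rerand(Enc(m))) = m that Alice's firewall relies on: the firewall refreshes the randomness of an outgoing ciphertext without knowing the secret key or the message.

```python
import secrets

# Toy ElGamal over Z_p^* with a Mersenne prime modulus; not a secure choice of group.
p = 2**127 - 1
g = 3

def keygen():
    sk = secrets.randbelow(p - 2) + 1
    return pow(g, sk, p), sk                     # (pk, sk)

def enc(pk, m):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p  # (g^r, m * pk^r)

def rerand(pk, ct):
    """What the reverse firewall does: refresh the randomness, knowing only pk."""
    c1, c2 = ct
    s = secrets.randbelow(p - 2) + 1
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), p - 2, p)) % p  # divide by c1^sk (Fermat inverse)

pk, sk = keygen()
m = 42
assert dec(sk, rerand(pk, enc(pk, m))) == m   # Dec(Rerand(Enc(m))) = m
```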

Big-Key Symmetric Encryption: Resisting Key Exfiltration

The threat addressed in Bellare's talk is that of malware that aims to exfiltrate a user's key, most likely using her system's network connection. In their work, they design schemes that aim to protect against this kind of Advanced Persistent Threat by making secret keys so big that their undetected exfiltration by the adversary is difficult, while keeping the user's overhead almost exclusively in terms of storage instead of speed.

Their main result is a subkey prediction lemma, which gives a nice bound on an adversary's ability to guess a modest-length subkey derived by randomly selecting bits of a big key from which partial information has already been leaked. This approach, known as the Bounded Retrieval Model, has been (in the words of the authors) a largely theoretical area of research, whereas they give a fully concrete security analysis with good numerical bounds, constants included.
Other highlighted aspects of their paper were the concrete improvements over [ADW09] and a key encapsulation technique carefully based on different security assumptions (random oracle, standard model).
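As a rough illustration of the big-key flavour (a toy sketch of the general idea, not the paper's actual scheme or parameters), a session key can be derived by hashing a public selector together with the bits of the big key at positions determined by that selector; predicting future subkeys without holding the big key would require exfiltrating a large fraction of it.

```python
import hashlib
import secrets

def probe_positions(selector, key_bits, probes):
    """Expand a public selector into pseudorandom probe positions (hash counter mode)."""
    positions, counter = [], 0
    while len(positions) < probes:
        digest = hashlib.sha256(selector + counter.to_bytes(4, "big")).digest()
        for k in range(0, len(digest), 4):
            positions.append(int.from_bytes(digest[k:k + 4], "big") % key_bits)
            if len(positions) == probes:
                break
        counter += 1
    return positions

def get_bit(key, i):
    return (key[i // 8] >> (i % 8)) & 1

def encapsulate(big_key, probes=128):
    """Derive a short session key from `probes` randomly chosen bits of the big key."""
    selector = secrets.token_bytes(16)  # sent in the clear; the receiver redoes the same probes
    positions = probe_positions(selector, len(big_key) * 8, probes)
    bits = bytes(get_bit(big_key, pos) for pos in positions)
    return selector, hashlib.sha256(selector + bits).digest()

big_key = secrets.token_bytes(1 << 20)   # 1 MiB toy big key; real big keys are far larger
selector, k1 = encapsulate(big_key)
# The legitimate holder of the big key recomputes the same session key from the selector:
positions = probe_positions(selector, len(big_key) * 8, 128)
bits = bytes(get_bit(big_key, pos) for pos in positions)
assert hashlib.sha256(selector + bits).digest() == k1
```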

Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results

The last talk of the session focused on the concrete problem of backdoored Pseudorandom Number Generators (PRGs) and PRNGs with input, which are fundamental building blocks in cryptographic protocols and have already been successfully compromised, as we learnt when the Dual_EC_DRBG scandal came to light.
In their paper, the authors revisit a previous abstraction of backdoored PRGs [DGA+15], which modelled the adversary (Big Brother) with weaker powers than it could actually have. By giving concrete backdoored PRG constructions, they show how that model fails. Moreover, they also study robust PRNGs with input, for which they show that Big Brother is still able to predict the values of the PRNG state backwards, and they give bounds on the number of previous phases it can compromise, depending on the state size of the generator.

[ADW09] J. Alwen, Y. Dodis, and D. Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In S. Halevi, editor, CRYPTO 2009, volume 5677 of LNCS, pages 36–54. Springer, Heidelberg, Aug. 2009.

[DGA+15] Yevgeniy Dodis, Chaya Ganesh, Alexander Golovnev, Ari Juels, and Thomas Ristenpart. A formal treatment of backdoored pseudorandom generators. In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101–126, Sofia, Bulgaria, April 26–30, 2015. Springer, Heidelberg, Germany.

CHES 2016: Flush, Gauss, and Reload – A Cache Attack on the BLISS Lattice-Based Signature Scheme

Leon Groot Bruinderink presented at CHES a cache-attack against the signature scheme BLISS, a joint work with Andreas Hulsing, Tanja Lange and Yuval Yarom.
The speaker first gave a brief introduction to BLISS (Bimodal Lattice Signature Scheme), a signature scheme whose security is based on lattice problems over NTRU lattices. Since such problems are believed to be hard even in the presence of quantum computers, BLISS is a candidate cryptographic primitive for the post-quantum world. In addition, its original authors proposed implementations, making BLISS a notable example of a post-quantum algorithm deployable in real use-cases.
Informally speaking, a message $\mu$ is encoded into a challenge polynomial $\mathbf{c}$, which is then multiplied by the secret key $\mathbf{s}$ according to the following formula: $$ \mathbf{z} = \mathbf{y} + (-1)^b ( \mathbf{s} \cdot \mathbf{c} ) $$ where the bit $b$ and the noise polynomial $\mathbf{y}$ are unknown to the attacker. It is easy to see that if the attacker gains information about the noise polynomial, some linear algebra would lead her to the secret key. The coordinates of $\mathbf{y}$ are independently sampled from a discrete Gaussian distribution, which can be implemented in several ways. The ones targeted in the paper are CDT and rejection sampling. Only the first was covered in detail during the talk, so I focus on that in this blog post.
The idea behind CDT sampling is to precompute a table according to the cumulative distribution function of the discrete Gaussian, draw a uniformly random value, and look up where that value falls in the table: the index of the matching cell is the sample returned. Elements returned by this procedure are distributed statistically close to a discrete Gaussian. Although fast, the method has the drawback of needing to store a large table, which is known to make it vulnerable to cache attacks.
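For concreteness, here is a toy Python sketch of a CDT sampler (one-sided, with parameters chosen purely for illustration rather than taken from BLISS). The table lookups at the end are exactly the memory accesses a Flush+Reload attacker can observe at cache-line granularity.

```python
import math
import secrets

def build_cdt(sigma, tail=12):
    """Precompute the cumulative distribution table of a one-sided discrete Gaussian."""
    bound = int(math.ceil(tail * sigma))
    weights = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(bound + 1)]
    total = sum(weights)
    cdt, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdt.append(acc)
    return cdt

def sample(cdt):
    """Draw a uniform value in [0,1) and return the index of the first entry above it."""
    u = secrets.randbits(53) / (1 << 53)
    for x, threshold in enumerate(cdt):   # linear scan for clarity; real code searches the table
        if u < threshold:
            return x
    return len(cdt) - 1

cdt = build_cdt(sigma=215)   # sigma of the rough magnitude used in BLISS; a few thousand entries
print([sample(cdt) for _ in range(8)])
```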
The peculiarity of the attack carried out by Bruinderink et al. is that, since the cache attack reveals only the cache lines in which the sampling table is accessed rather than the exact entries, the equations learned are correct only up to a small error, say $\pm 1$. The authors managed to translate this issue into a shortest-vector problem over lattices. They then ran the LLL algorithm to solve the problem and retrieve correct equations.

Thursday, August 18, 2016

Crypto & CHES 2016: 20 years of leakage, leakage, leakage

Paul Kocher gave an invited presentation at Crypto and CHES, which was certainly deserved on the 20th anniversary of his timing-attack paper, which now has more than 3500 citations on Google Scholar. The content of his talk ranged from tales of his work, through philosophical considerations on security, to an outlook on the future.

It was interesting to see how Kocher, a biologist by training, got into cryptography and managed to break implementations via side channels with rather cheap equipment, including parts from a toy electronics kit. He claimed that they could break every smart card at the time, which was of course disputed by the vendors.

In the philosophy part of the talk, the speaker brought up various analogies that I found interesting even though they did not provide direct advice. For example, he compared security to fractals that provide ever more detail the closer you look. More practically, Kocher mentioned building codes and the aviation industry. Both minimise risks by best practices that include safety margins even though these incur extra costs. However, I could not help thinking that aviation killed a fair number of people before the standards improved.

On the outlook, Kocher did not seem particularly optimistic. The world is already complex and full of devices that regularly exhibit security problems, but it will get worse with the Internet of Things, where there will be more devices with longer life spans produced by vendors with less security knowledge while the impact of vulnerabilities will be even higher. He predicted that the security breaches will get worse for 3-5 years at least.

In terms of constructive ideas, he suggested moving security into chips, because there it cannot be undermined by lower layers. There has already been a significant move in that direction with Intel's SGX, but there are of course other approaches.