Friday, October 22, 2010

Cryptoforma Workshop

Nigel, Essam, Ming-Feng and I are currently attending the Cryptoforma Workshop at the University of Surrey.

The first talk of the day was given by Sriram Srinivasan, of the University of Surrey, on the topic of key sizes (in the context of electronic voting). Typically, when cryptographers give a cryptographic proof they discuss security in terms of the security parameter. But what actually is the security parameter? A key size of 128 bits alone is meaningless: for an asymmetric scheme, a 128-bit modulus would provide no security at all. Looking at primitives, to match the security of 128-bit AES in the asymmetric setting (say, for RSA) we need keys that are 3248 bits long, and for schemes based on elliptic curves or discrete logarithms the appropriate key length is different again. There are various resources available to help choose key sizes; the ECRYPT II Report on Algorithms and Key Sizes is one such document.
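
To give a feel for the numbers, here is a small Python lookup table along the lines of the ECRYPT II recommendations. The figures are quoted from memory and rounded, so treat them as indicative only; the report itself is the authoritative source.

    # Approximate symmetric-equivalent key sizes in bits, roughly
    # following the ECRYPT II recommendations (indicative values only).
    EQUIVALENT_KEY_SIZES = {
        # symmetric security : (RSA modulus, elliptic-curve key)
        80:  (1248,  160),
        112: (2432,  224),
        128: (3248,  256),
        256: (15424, 512),
    }

    def asymmetric_sizes(symmetric_bits):
        """Return the (RSA, EC) key sizes matching a symmetric level."""
        return EQUIVALENT_KEY_SIZES[symmetric_bits]

    print(asymmetric_sizes(128))  # (3248, 256) -- a 128-bit modulus is useless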

If we now extend our thinking to a protocol built from multiple primitives, there are several questions to answer before we can make choices about security. How long do we need the protocol to provide security for? If our protocol is a voting scheme, some people may want it to remain secure for at least their lifetime, others may want it to remain secure forever. Moreover, security proofs often introduce a security loss, and this loss needs to be taken into account: sometimes it means one must increase the key size of the underlying primitives to retain the intended security level, while at other times the loss is not practically relevant. So what does all this mean, and how on earth do we choose key sizes? That, unfortunately, is a very open question indeed, which I will not try to answer here, and it was the source of much discussion at the workshop.
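
As a toy illustration of how such a loss feeds into key sizes (all numbers invented for the example): suppose the proof turns an attacker making q queries with advantage ε against the protocol into an attacker with advantage ε/q against the primitive. Then the primitive needs roughly log2(q) extra bits of security:

    import math

    # Toy calculation (invented numbers): a reduction losing a factor q
    # means the primitive must be about log2(q) bits stronger than the
    # security level we want the protocol to achieve.
    def required_primitive_bits(target_bits, queries):
        return target_bits + math.ceil(math.log2(queries))

    # Target 128-bit protocol security against adversaries making 2**30 queries:
    print(required_primitive_bits(128, 2**30))  # 158 -> larger keys needed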

A second talk today, by Graham Steel on "Attacking and fixing PKCS#11 security tokens", discussed various ways of attacking cryptographic tokens in order to recover the cryptographic keys stored on the device. To do this the authors built "Tookan", an automated tool that reverse engineers a device by querying it through its API, building up a description of how the device functions (which commands are available, and so on). This data is then run through a model checker which searches for possible vulnerabilities in the token. For example, say there are two keys on the device, k1 and k2, where k1 is a sensitive key that should never leave the token in the clear. You ask the device to encrypt (wrap) k1 under k2, receiving back a ciphertext; you then ask the device to decrypt that same ciphertext under k2 and receive k1 as the plaintext. These techniques were applied to several real devices: of the 17 commercially available tokens tested, 9 were vulnerable to attacks and 8 had severely restricted functionality. Interesting, and slightly worrying!
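
To make the wrap/decrypt attack concrete, here is a toy model of such a flawed token in Python. The API is an invented simplification of PKCS#11's C_WrapKey/C_Decrypt, and XOR stands in for a real cipher:

    import os

    class ToyToken:
        """A deliberately flawed token: keys never leave the device
        directly, yet wrap and decrypt may be used with the same key."""
        def __init__(self):
            self.keys = {"k1": os.urandom(16), "k2": os.urandom(16)}

        @staticmethod
        def _xor(a, b):                    # stand-in for a real cipher
            return bytes(x ^ y for x, y in zip(a, b))

        def wrap(self, target, wrapping_key):
            # export key `target` encrypted under `wrapping_key`
            return self._xor(self.keys[target], self.keys[wrapping_key])

        def decrypt(self, ciphertext, key):
            # ordinary data decryption under `key`
            return self._xor(ciphertext, self.keys[key])

    token = ToyToken()
    blob = token.wrap("k1", "k2")          # ciphertext of k1 under k2
    stolen = token.decrypt(blob, "k2")     # ...and decrypted right back
    assert stolen == token.keys["k1"]      # the "protected" key, in the clear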

Friday, October 8, 2010

Still some more CCS'10

At CCS'10, Wilko Henecka, Stefan Kögl, Ahmad-Reza Sadeghi, Thomas Schneider and Immo Wehrenberg presented "TASTY: Tool for Automating Secure Two-partY computations", a new tool for implementing a variety of (relatively efficient) secure two-party computation protocols and for comparing the results of these protocols. It builds on previous work from the Fairplay project, which uses Yao circuits, and on subsequent improvements thereof. In TASTY, the authors implemented further optimization techniques, mainly focused on shifting as much computation as possible into the setup phase. Additionally, they implemented Paillier's additively homomorphic scheme and also allow for a hybrid mix of both approaches.

The interesting part is that the efficiency of both approaches can now be compared directly. And, surprisingly, even though additively homomorphic encryption might be expected to be more efficient for multiplications, this does not always hold: the authors show that in some scenarios Yao circuits perform multiplications more efficiently than the Paillier scheme.
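
For readers unfamiliar with Paillier's scheme, here is a minimal sketch (insecure toy parameters, no padding) showing the additive homomorphism. Note that multiplying two encrypted values together is exactly what it cannot do without interaction, which is why the comparison with Yao circuits is interesting:

    import math, random

    # Toy Paillier (tiny insecure parameters, illustration only).
    p, q = 1789, 1931
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

    def enc(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = enc(17), enc(25)
    assert dec((a * b) % n2) == 42        # adding plaintexts is non-interactive
    assert dec(pow(a, 3, n2)) == 51       # so is multiplying by a constant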

Another interesting scheme proposed at CCS'10 is "Worry-Free Encryption: Functional Encryption with Public Keys" by Hakan Seyalioglu and Amit Sahai. It, too, is based on Yao circuits, and it provides a solution to the following problem:
  • A has data d which may only be read by people who have security clearance x1, but does not want to reveal that d is only accessible to people with security level x1.
  • B wants to get d from A without knowing which security level is required for d and without having to reveal his/her own security level xb.
Basically, this can be achieved with a function f(·) which produces
d = f(x1)
and different (random-looking) output for all other security levels. The function has to look random, so that it reveals neither d nor x1, and it may only be evaluated once. Of course, this is just what can be achieved with Yao circuits. (This is really just a very rough explanation; please read the paper for an accurate description. For example, a central authority is required as well.)
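
To illustrate just the shape of such a function — this is NOT the paper's construction, which relies on garbled circuits and a central authority, and unlike the real scheme this toy can be evaluated arbitrarily often — here is a hash-based point function in Python:

    import hashlib, os

    # Toy point function (not the paper's construction): returns d on
    # the single input x1 and pseudorandom-looking bytes on anything else.
    def make_f(x1: bytes, d: bytes):
        salt = os.urandom(16)
        mask = hashlib.sha256(salt + x1).digest()[:len(d)]
        pad = bytes(a ^ b for a, b in zip(mask, d))
        def f(x: bytes) -> bytes:
            m = hashlib.sha256(salt + x).digest()[:len(d)]
            return bytes(a ^ b for a, b in zip(m, pad))
        return f

    f = make_f(b"clearance-x1", b"the secret file!")
    print(f(b"clearance-x1"))   # b'the secret file!'
    print(f(b"clearance-x7"))   # 16 pseudorandom-looking bytes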

Thursday, October 7, 2010

Turing at CCS'10

Today we had two papers at CCS'10 introducing new, Turing complete languages. The second was "Return-Oriented Programming Without Returns" by Stephen Checkoway, Lucas Davi, Alexandra Dmitrienko, Ahmad-Reza Sadeghi, Hovav Shacham and Marcel Winandy, which extends the concept of return-oriented programming to "jump-oriented programming": gadgets are built from jump instructions instead of return instructions, which has severe security implications, as the authors demonstrated on x86 processors and on Android devices running ARM chips. But the first paper, "Platform-Independent Programs" by Sang Kil Cha, Brian Pak, David Brumley and Richard J. Lipton, was even more impressive.

However, before I continue to write about the paper, I should give a short explanation of Turing complete languages and why they are important. In 1936, a few years before the first programmable computer was built, the mathematician Alan Turing invented the concept of a Turing machine to prove that a universal (or programmable) machine can be built that can solve any computable problem. Although no true universal Turing machine will ever be built, since it requires infinite memory, this is arguably the most important result in computer science. A set of instructions sufficient to simulate such a Turing machine (with the exception of the infinite memory) is called "Turing complete". By itself this is not a big deal, since all modern processors have Turing complete instruction sets; indeed, in both papers, the Turing completeness of the languages is only used to prove that they do not lack fundamental concepts.
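
For the curious, a Turing machine is simple enough that a simulator fits in a few lines of Python (a toy, of course, with the tape bounded only by available memory, which is exactly the caveat above):

    # Minimal Turing machine simulator: the transition table maps
    # (state, symbol) -> (next state, symbol to write, head movement).
    def run(rules, tape, state="start"):
        tape, head = dict(enumerate(tape)), 0
        while state != "halt":
            state, tape[head], move = rules[(state, tape.get(head, "_"))]
            head += move
        return "".join(tape[i] for i in sorted(tape))

    # Example: a machine that flips every bit, then halts on the blank "_".
    flip = {("start", "0"): ("start", "1", +1),
            ("start", "1"): ("start", "0", +1),
            ("start", "_"): ("halt",  "_",  0)}
    print(run(flip, "10110"))   # -> 01001_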

So let me now explain what's so special about the language introduced in "Platform-Independent Programs". Commonly, the instruction sets of two different processors overlap to a certain extent but are not equal; a program for x86 processors will not run on an ARM processor and vice versa. So the authors started looking at the overlap of the instruction sets to find jump instructions with the following effect:
  • If executed on platform a, jump to address x.
  • If executed on platform b, jump to address y.
Now they can place instructions for platform a at position x and instructions for platform b at position y. Out of such short code sequences the authors build gadgets, and all the gadgets together form a Turing complete language. (The instructions at x and y do not have to have the same effect; on platform a the program might be a harmless desktop gimmick, while on platform b it might be malware.) The really amazing thing is that, to my knowledge, this is the first language that is at least semi-platform independent without requiring a virtual machine (as Java does) or an interpreter to achieve platform independence. (It does still need sufficient overlap between the instruction sets.)
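
A toy model of the trick in Python, with two invented "instruction sets" (real x86/ARM encodings are messier, of course): the same header bytes decode as a NOP followed by a jump on one CPU, and as a single two-byte jump on the other, so each platform lands on its own payload.

    # Toy model (invented encodings, not real x86/ARM). Both "CPUs"
    # start decoding the same bytes at address 0 but land elsewhere.
    HEADER = bytes([
        0x90,  # CPU-A: one-byte NOP        | CPU-B: opcode of "JMP imm"
        0x08,  # CPU-A: opcode of "JMP imm" | CPU-B: jump target = 8
        0x04,  # CPU-A: jump target = 4     | CPU-B: never decoded
    ])

    def entry_point(cpu, code):
        pc = 0
        if cpu == "A":
            pc += 1                  # skip the NOP
            assert code[pc] == 0x08  # CPU-A's jump opcode
            return code[pc + 1]      # -> offset 4: payload for A
        else:
            assert code[pc] == 0x90  # CPU-B's jump opcode
            return code[pc + 1]      # -> offset 8: payload for B

    print(entry_point("A", HEADER))  # 4
    print(entry_point("B", HEADER))  # 8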

Wednesday, October 6, 2010

Notes from CCS'10 - II

Today there were two papers addressing almost the same issues. One was "Dismantling SecureMemory, CryptoMemory and CryptoRF" by Garcia, van Rossum, Verdult and Wichers Schreur, in which the authors analyzed the security of three Atmel chip families whose claimed security rests on a proprietary stream cipher that was kept secret. The other was "Attacking and Fixing PKCS#11 Security Tokens" by Bortolozzo, Centenaro, Focardi and Steel, which examines the security of 17 tamper resistant tokens that claim to implement PKCS#11. In both papers the authors found severe weaknesses, such that many of these devices now have to be considered broken.

In the case of the Atmel chips the biggest issue was that the manufacturer chose a security-by-obscurity approach (possibly to reduce production costs). However, the authors didn't even have to use expensive semiconductor tools to extract the cipher description from the chips; all they needed to do was disassemble a software library and analyze the code for the cipher specification. It took them just three days, which suggests that even less knowledgeable attackers would have been able to do it within a reasonable amount of time. Once the algorithm was known, it was quite easy for the authors to break the devices with a combination of side-channel attacks and some cryptanalysis.

In the case of the PKCS#11 tokens the authors built an automated tool to analyze the tokens and, where possible, to exploit a range of vulnerabilities. The result was quite devastating: either the tokens did not offer full PKCS#11 functionality, or they had at least one easily exploitable vulnerability. Worst of all, some of the vulnerabilities would not exist had the standard been implemented properly.

So both papers address two major engineering issues for secure devices, both resulting from a lack of security awareness:
  • Security by obscurity does not work! If you have a secure algorithm, you can publish it. If it's not secure, it will leak.
  • A security standard is almost worthless if it does not come with automated standard compliance tests, so that customers can verify that the products they want to buy actually are as secure as the standard promises. (Of course, there is no way to guarantee security against unknown vulnerabilities.)
The latter point comes with a couple of benefits:
  • The reputation of the standard will not suffer from bad implementations. Bad implementations just ruin the implementer's reputation.
  • The implementation cost of a standard is reduced, since implementation errors are easier to detect. (If you have to implement something, deadlines usually do not allow you to develop your own testing tool for a standard of several hundred pages full of technical details.)
  • Standard compliant devices will be more trustworthy.
Two additional, somewhat speculative, advantages: automated standard compliance testing will aid independent security testers, and I believe it will help to discover ambiguities in the standard before the standard is adopted, since the automated compliance test has to be implemented by that time.
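
To make the point concrete, here is a toy sketch of what one automated compliance check might look like (invented API; a real suite would contain hundreds of such tests): a compliant token must refuse to decrypt a blob that wraps one of its own keys.

    import os

    class CompliantToken:
        """Minimal stub of a well-behaved token (invented API)."""
        def __init__(self):
            self._keys = {}
        def generate_key(self, sensitive):   # `sensitive` ignored in this stub
            handle = len(self._keys)
            self._keys[handle] = os.urandom(16)
            return handle
        def wrap(self, target, wrapping_key):
            k = self._keys[wrapping_key]
            return bytes(a ^ b for a, b in zip(self._keys[target], k))
        def decrypt(self, blob, key):
            plain = bytes(a ^ b for a, b in zip(blob, self._keys[key]))
            if plain in self._keys.values():  # toy check: would expose a key
                raise PermissionError("refusing to reveal a stored key")
            return plain

    def check_no_wrap_then_decrypt(token):
        k1 = token.generate_key(sensitive=True)
        k2 = token.generate_key(sensitive=False)
        blob = token.wrap(k1, wrapping_key=k2)
        try:
            token.decrypt(blob, key=k2)
            return False                      # key leaked: non-compliant
        except PermissionError:
            return True                       # refused: compliant

    print(check_no_wrap_then_decrypt(CompliantToken()))  # True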

Tuesday, October 5, 2010

Notes from CCS'10

I would like to point out a paper that was presented at CCS'10 today: I liked "Survivable Key Compromise in Software Update Systems" by Samuel, Mathewson, Cappos and Dingledine because it is an excellent example of how careful engineering can ease (if not solve) the pains of a worst case scenario. Key compromise, especially of root keys, is the worst case scenario for any Public Key Infrastructure (PKI), and the PKIs used to authenticate software updates are among those with the highest impact on our entire IT infrastructure. Every piece of software contains vulnerabilities, and without authenticated software updates it is only a matter of time until attackers exploit them; even worse, a malicious software update could be used to insert new vulnerabilities into secure systems. If you cannot trust the PKI and the keys used to authenticate a software update, how can you trust the update?

The root keys used to establish a PKI are well protected and rarely used, but unfortunately that doesn't mean they are always secure, just that they are less likely to be compromised. It still happens that you cannot trust them anymore, as e.g. https://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012.html shows. Replacing the compromised keys with new, trustworthy ones is a delicate task, and regaining the lost trust is difficult. Unfortunately, the PKIs currently used for software updates do not prepare much for this case; therefore the Tor project decided to develop, out of existing concepts, a new PKI system that is better prepared to cope with this worst case, and the result was presented in the paper. I do hope that bigger software projects, such as major Linux distributions or Mozilla, pick up on this and continue improving the update infrastructure.
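
One of the core ideas in the paper, as I understand it, is that no single key should be able to sign an update: a role only accepts content once a threshold of its keys has signed it, so a single compromised key is survivable. Very roughly, in Python (HMACs standing in for real public-key signatures):

    import hashlib, hmac

    # Toy threshold verification (HMACs as stand-ins for signatures).
    def sign(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def role_accepts(keys, threshold, data, signatures):
        valid = sum(1 for k in keys
                    if any(hmac.compare_digest(sign(k, data), s)
                           for s in signatures))
        return valid >= threshold

    role_keys = [b"key-one", b"key-two", b"key-three"]
    update = b"package-2.0.tar.gz"
    sigs = [sign(role_keys[0], update), sign(role_keys[2], update)]
    print(role_accepts(role_keys, 2, update, sigs))  # True: 2-of-3 met
    print(role_accepts(role_keys, 3, update, sigs))  # False: one key missing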

Monday, October 4, 2010

Workshop on Privacy in the Electronic Society 2010

Today there were two interesting talks at the Workshop on Privacy in the Electronic Society (which is co-located with CCS 2010) that relate to the work we're doing at Bristol. The first was on "Deniable Cloud Storage: Sharing Files via Public-key Deniability", a paper by Paolo Gasti, Giuseppe Ateniese and Marina Blanton. They look at a scenario where multiple people collaborate on files stored in a computing cloud, and one of these persons is forced to hand over all of his/her keys to an attacker. If such a scenario has to be expected (e.g. because you have to travel to a country where the authorities cannot be trusted), they show that you can prepare for it: based on Paillier's homomorphic scheme and RSA-OAEP, they construct a deniable encryption scheme in which the attacker cannot tell whether you are revealing the true information or a manufactured false document. (Unless he can exploit a side channel, which in this case might even be a lie detector.)
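
The flavour of deniability is easiest to see with a one-time pad — this is NOT the paper's construction, which supports public keys and is built from Paillier and RSA-OAEP — because for a fixed ciphertext you can always exhibit a fake key that "decrypts" it to an innocuous document of your choosing:

    import os

    # Toy deniability with a one-time pad (not the paper's scheme).
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    real  = b"the real shared document"
    decoy = b"my harmless decoy letter"   # same length as the real file

    k_real = os.urandom(len(real))
    c = xor(real, k_real)                 # ciphertext stored in the cloud
    k_fake = xor(c, decoy)                # the key to hand over if coerced

    assert xor(c, k_real) == real
    assert xor(c, k_fake) == decoy        # looks like a perfectly valid key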

The other interesting talk was on "Investigating Privacy-Aware Distributed Query Evaluation", a paper by Nicholas Farnan, Adam Lee and Ting Yu, in which they describe their work on assuring privacy for SQL queries. The problem they face is that a query which combines data from multiple databases should not reveal more than necessary to any of the databases: each database should only see the information directly related to the data it is supposed to deliver. In particular, the databases should not learn the entire query, only the part that has to be answered by them. If you have been reading previous entries of this blog, that might remind you of the i-Hop homomorphic scheme presented by Gentry et al. at Crypto 2010, and indeed I believe the i-Hop scheme could be used to solve some of the open issues that Farnan, Lee and Yu listed in their talk today.
However, that is not the route they took. Instead, they started from current implementations of SQL: SQL describes what you want to learn with a query, but not how the answer is to be computed. One technique for the latter is mutant query trees, and these are what Farnan et al. looked at. In their research they ask how to split such a tree into queries answerable by each database without revealing more than necessary, and how to »homomorphise« them (this is not the term they used, but I guess it is the best generic description of what they are doing). So instead of designing a secure system for answering database queries (with a potentially large overhead), they took a very efficient, highly engineered database system and are trying to retrofit security into it.
It would be interesting to see whether the two approaches can meet in the middle, solving the security issues that Farnan et al. still have without suffering too great an efficiency penalty from using the i-Hop scheme (or similar schemes).
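
To close with a crude illustration of the splitting idea (nothing like the paper's mutant-query-tree machinery, and with the recombination done naively at the client): each database answers only its own fragment of a cross-database join, so neither sees the other's data or the full query.

    import sqlite3

    # Two independent "databases"; the client splits the logical query
    #   SELECT name, salary FROM staff JOIN payroll ON staff.id = payroll.id
    #   WHERE salary > 50000
    # so that each side sees only its own fragment.
    db_a = sqlite3.connect(":memory:")    # holds `staff` only
    db_a.execute("CREATE TABLE staff (id INT, name TEXT)")
    db_a.executemany("INSERT INTO staff VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])

    db_b = sqlite3.connect(":memory:")    # holds `payroll` only
    db_b.execute("CREATE TABLE payroll (id INT, salary INT)")
    db_b.executemany("INSERT INTO payroll VALUES (?, ?)",
                     [(1, 70000), (2, 40000)])

    # B never learns that names are wanted; A never sees the salary filter.
    salaries = dict(db_b.execute(
        "SELECT id, salary FROM payroll WHERE salary > 50000"))
    names = dict(db_a.execute("SELECT id, name FROM staff"))

    print([(names[i], s) for i, s in salaries.items() if i in names])
    # [('alice', 70000)]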