Thursday, November 28, 2013

Sleuth: Automated Verification of Software Power Analysis Countermeasures

Yesterday's study group was led by Elisabeth and was about the work contributed by Bayrak et al. to CHES 2013:
Ali Galip Bayrak, Francesco Regazzoni, David Novo, Paolo Ienne, "Sleuth: Automated Verification of Software Power Analysis Countermeasures", Cryptographic Hardware and Embedded Systems - CHES 2013, Lecture Notes in Computer Science 8086, 2013, pp 293-310 (http://dx.doi.org/10.1007/978-3-642-40349-1_17)
In a nutshell, if one implements a block cipher in a straightforward manner, the implementation may be leaky and susceptible to side-channel attacks. To increase the system's security against attacks of this nature, hardware and software countermeasures can be employed. In the aforementioned work, the authors present a method for automatically verifying that a protected software implementation is secure against power analysis attacks.

The authors discuss the notions of sensitivity and insensitivity of security-critical applications against power analysis. An operation or group of operations is defined as sensitive if it depends on secret data but not on any random data. As an example, the authors mention this operation:
st = pt xor key
This operation is vulnerable to power analysis when executed on embedded devices, because power consumption characteristics depend on the secret key, which can therefore be recovered through Differential Power Analysis. This vulnerability can be avoided by applying a random mask to the secret variable. Intermediate calculations are randomized and the mask is removed at the end of the calculation before outputting the ciphertext. However, a secret intermediate value may be leaked if a mask is removed too early and problems of this nature are difficult to detect manually in long and complex implementations.
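
To make the idea concrete, here is a minimal Python sketch of first-order Boolean masking applied to the key-addition step above; the variable names and 8-bit values are purely illustrative and not taken from the paper.

    import os

    def leaky_key_addition(pt: int, key: int) -> int:
        # Unmasked: the intermediate st depends only on public and secret data,
        # so its power footprint (e.g. Hamming weight) is exploitable by DPA.
        st = pt ^ key
        return st

    def masked_key_addition(pt: int, key: int):
        # First-order Boolean masking: every intermediate value also depends on
        # the fresh random mask m, so no single intermediate is "sensitive".
        m = os.urandom(1)[0]            # fresh random mask for each execution
        masked_key = key ^ m            # depends on the key AND on randomness
        st_masked = pt ^ masked_key     # = (pt ^ key) ^ m
        return st_masked, m             # the mask is removed only at the very end

    if __name__ == "__main__":
        pt, key = 0x3C, 0xA7
        st_masked, m = masked_key_addition(pt, key)
        assert st_masked ^ m == leaky_key_addition(pt, key)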

For their "Power Analysis Sensitivity" method, the authors define four elements:
  • A Straight Line Program, which is a sequence of branch-free operations.
  • Additions to the Type System, whereby the developer tags each variable as public, secret, or random.
  • A Leakage Model.
  • Sensitivity of a vector of operations and the leakage related to it.
In this context, a set of operations is sensitive if it has both of the following characteristics:
  • It statistically depends on at least one secret input variable
  • It does not depend statistically on any random input variables
Verification is a two-step process. Firstly, the authors convert a program to a Data Flow Graph (DFG). Subsequently, the DFG is analysed for the detection of sensitive operations.

Each DFG node represents either an arithmetic/logic operation (x = y op z) or an array handling operation (x = y[z] or y[z] = x). The former node type has two incoming edges (for y and z) and one outgoing edge (for x).

Nodes handling array operations are somewhat more complex, because the accessed address may or may not depend on an input variable. The address is statically determinable if it depends on a constant (e.g. y[0] = x), since the program will always access element 0 regardless of inputs. Conversely, x = y[z] is not statically determinable if z is a node input. If all accesses to an array are statically determinable, the array is called directly accessed and all its accesses are treated as if they were ordinary assignment operations (as above). Otherwise, it is called indirectly accessed and its behaviour is represented in the DFG through a Look-Up Table (LUT). To achieve perfect accuracy, the LUT is formed from all inputs influencing either the data in the array or the array element being accessed. Therefore, the LUT is a 2^m : 1 multiplexer, where m is the total width in bits of the aforementioned inputs.

Given the DFG, the univariate leakage of an operation can be modelled as, for example, the Hamming Weight (HW) of its result. Similarly, the bivariate leakage caused by the sequential execution of two operations can be modelled as the Hamming Distance (HD) between their results. HW and HD are only examples of leakage models.
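
As a concrete illustration, these two leakage models are just bit counts; the snippet below is my own small Python version, not code from the paper.

    def hamming_weight(x: int) -> int:
        # Univariate leakage: number of set bits in a single intermediate value.
        return bin(x).count("1")

    def hamming_distance(x: int, y: int) -> int:
        # Bivariate leakage: bits flipped when y overwrites x (e.g. in a register),
        # modelling the sequential execution of two operations.
        return hamming_weight(x ^ y)

    # Example: leakage of a table-lookup result and of the register update.
    sbox_out_prev, sbox_out_curr = 0x63, 0x7C
    print(hamming_weight(sbox_out_curr))                    # HW model
    print(hamming_distance(sbox_out_prev, sbox_out_curr))   # HD model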

As discussed above, sensitivity is defined in terms of statistical dependence between variables. To test these conditions, the authors formulate the problem as a satisfiability (SAT) query and argue that the statistical dependence condition is analogous to don't-care analysis in logic synthesis. Their solution first checks the second sensitivity condition (dependence on random variables) before testing dependence on a secret variable (the first condition).
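
For intuition only, here is a toy brute-force sensitivity check (my own illustration, not the paper's SAT-based procedure). It treats public inputs as known to the adversary and folds the two conditions into a single distributional test: does the leakage distribution, taken over the uniform random inputs, change with the secret?

    from itertools import product
    from collections import Counter

    def dist_over_random(f, s, p, n_r):
        # Output distribution of f for fixed secret s and public p,
        # with the random inputs uniformly distributed.
        c = Counter()
        for r in product([0, 1], repeat=n_r):
            c[f(s, r, p)] += 1
        return c

    def sensitive(f, n_s, n_r, n_p):
        # Toy check: for some known public input, the leakage distribution
        # (over the random inputs) changes with the secret.
        for p in product([0, 1], repeat=n_p):
            dists = [dist_over_random(f, s, p, n_r) for s in product([0, 1], repeat=n_s)]
            if any(d != dists[0] for d in dists[1:]):
                return True
        return False

    # st = pt xor key: its value reveals the key once pt is known -> sensitive.
    print(sensitive(lambda s, r, p: s[0] ^ p[0], 1, 1, 1))          # True
    # (pt xor key) xor m is uniform for every key -> not sensitive.
    print(sensitive(lambda s, r, p: s[0] ^ p[0] ^ r[0], 1, 1, 1))   # False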

Sleuth has been implemented in the back-end of the LLVM open-source compiler and uses LLVM assembly as its input. The authors note that machine-specific assembly can be converted to LLVM assembly in a straightforward fashion, mentioning as an example a script they developed to perform such a conversion from AVR assembly. High-level languages (e.g. C) can be analysed by first compiling them into LLVM assembly using off-the-shelf compilers. As discussed above, the verification algorithm works on straight line programs. To this end, Sleuth converts LLVM assembly to a straight line program by unrolling loops and inlining function calls, without applying any optimizations. The output of this conversion is used to generate the DFG, and operations are then queried for sensitivity based on the security types and leakage models provided by the user.

Sunday, November 24, 2013

Active two-party computation

A large number of protocols have been proposed in the last few years for actively secure two-party computation. One significant approach is based on Yao's garbled circuits. We have already discussed the garbled circuit technique previously on this blog. Informally speaking, in the standard protocol the two parties, Alice and Bob, play asymmetric roles: one party (say Alice), also called the generator, constructs a garbled circuit computing the original function f and sends it to Bob, the evaluator. He evaluates the garbled circuit, producing the output f(x,y), where x and y in {0,1}^n are the private inputs of Alice and Bob, respectively. A garbled circuit (GC) is nothing but an "encrypted" circuit: along with the GC, the evaluator receives the "random" keys corresponding to Alice's input bits, obtains the keys corresponding to his own input bits y_i obliviously via 1-out-of-2 OTs, and then evaluates the circuit.
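
As a concrete illustration, here is a bare-bones Python sketch (mine, not from any of the papers discussed) of garbling and evaluating a single AND gate. A hash plays the role of the encryption function and standard optimisations such as point-and-permute and free-XOR are omitted.

    import os, hashlib, random

    KEYLEN = 16

    def H(k1: bytes, k2: bytes) -> bytes:
        return hashlib.sha256(k1 + k2).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def garble_and_gate():
        # One random label per wire and per bit value.
        wa = [os.urandom(KEYLEN) for _ in range(2)]  # Alice's input wire labels
        wb = [os.urandom(KEYLEN) for _ in range(2)]  # Bob's input wire labels
        wc = [os.urandom(KEYLEN) for _ in range(2)]  # output wire labels
        # Each row encrypts the output label for a AND b under the two input labels.
        table = [xor(H(wa[a], wb[b]), wc[a & b] + b"\x00" * KEYLEN)
                 for a in (0, 1) for b in (0, 1)]
        random.shuffle(table)                        # hide which row is which
        return wa, wb, wc, table

    def evaluate(table, ka: bytes, kb: bytes) -> bytes:
        # Bob holds exactly one label per input wire; he trial-decrypts rows and
        # keeps the one that yields a well-formed plaintext (trailing zero bytes).
        for row in table:
            plain = xor(H(ka, kb), row)
            if plain.endswith(b"\x00" * KEYLEN):
                return plain[:KEYLEN]
        raise ValueError("no row decrypted")

    wa, wb, wc, table = garble_and_gate()
    # Alice sends the label for her bit a=1; Bob obtains the label for b=1 via OT.
    out_label = evaluate(table, wa[1], wb[1])
    assert out_label == wc[1]   # 1 AND 1 = 1, but Bob only sees an opaque label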

To achieve active security the "cut-and-choose" paradigm is often used. Alice produces many garbled circuits, and Bob then asks Alice to open a fixed fraction of them (typically half). He aborts if any of these check-circuits are inconsistent; otherwise he evaluates the remaining ones (the evaluation-circuits) and takes the majority of their results as the final output. While cut-and-choose solves the problem of a corrupted Alice constructing the garbled circuits incorrectly, it produces some well-known byproducts; mainly, we need to ensure that the parties use the same inputs in all the evaluations. Many different approaches have been proposed to address the problems arising from the cut-and-choose paradigm and malicious adversaries.

In our last study group, led by Enrique, we considered three different papers: LP07, LP11 and L13. In LP07 the authors first provide an implementation and a full proof of security for an actively secure version of Yao's protocol based on cut-and-choose. To ensure Alice's input consistency they propose an approach requiring O(s^2 n) commitments, where s is the statistical parameter of the cut-and-choose and n is the input size. Another problem is the so-called selective failure attack: a corrupted Alice could provide inconsistent random keys for Bob's input wires and the associated OTs, gaining information about Bob's input. The solution suggested in LP07 requires a larger circuit and an encoding of the input bits.
We recall that Bob cannot abort even if he knows that Alice is cheating, i.e. in the case that the evaluation-circuits do not all give the same result, and that this is why he takes the majority output. Now suppose that the total number of garbled circuits is s and that half of them are opened. An adversary succeeds if s/4 circuits are incorrect and none of them is checked, so Alice can cheat with probability at most about 2^(-s/4). In LP07 the authors prove a bound on the error probability of 2^(-s/17). In practice this means that to achieve an error of 2^(-40) we need to construct and exchange 680 circuits.
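
As a sanity check on that intuition, here is a small calculation of my own (not taken from the papers): it computes the exact probability that none of the bad circuits falls into the opened half when the checked subset is chosen uniformly at random.

    from math import comb

    def escape_probability(s: int, bad: int) -> float:
        # Probability that none of `bad` incorrect circuits is among the s/2
        # circuits opened for checking (checked half chosen uniformly at random).
        return comb(s - bad, s // 2) / comb(s, s // 2)

    # With s/4 bad circuits (enough to sway the majority of the evaluated half),
    # the adversary escapes detection with probability at most about 2^(-s/4).
    for s in (40, 68, 128):
        print(s, escape_probability(s, s // 4), 2 ** (-s / 4))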

In LP11 this bound is improved to 2^(-0.311s). In this work the authors not only decrease the number of circuits (132 to reach an error of 2^(-40)), but also remove the commitments and the need to encode the input bits, by incorporating the circuit checks into the OT. To obtain this result they define a new functionality, the cut-and-choose OT functionality. Their implementation is based on an instantiation of the OT protocol from PVW08 under the DDH assumption; however, while the protocol in PVW08 is in the common reference string model, the protocol in LP11 is in the plain model.

The major drawback of this approach remains the large number of circuits (and corresponding OTs) needed. In L13 the author specifically addresses this issue. The main idea is that Alice only gets away with cheating if all of the check-circuits are correct and all of the evaluation-circuits are incorrect (rather than just a majority). Since each circuit is independently chosen to be a check or an evaluation circuit with probability 1/2, the probability that Alice cheats successfully is exactly 2^(-s). This implies that to obtain an error probability of 2^(-40) we need only 40 circuits! The result is obtained using a "proof of cheating": if Bob obtains two different values in the evaluation phase, he is able to produce a proof that he received inconsistent outputs. If this proof is valid, he receives Alice's input and can evaluate the circuit correctly himself.

Tuesday, November 19, 2013

How to stop the NSA from tampering with your vote

I'm just back from a two-day PhD Voting Workshop in Switzerland. A great event in many ways, from the location (all security events should be held in castles from now on!) to the food. Oh, and we talked about voting too.

Few people understand what cryptographic voting is really about but a lot of organisations and even nations are waking up to the idea of voting online. The two are of course not the same thing: by cryptographic voting we mean a lot more than SSL/TLS.

Some examples: Switzerland at the moment has neither online nor cryptographic voting at a national level, Estonia has online voting but it's not cryptographic in the way we use the word, Israel uses cryptographic but not online voting for primaries (wombat-voting.com) and Norway has an online, cryptographic (in our sense of the word) system.

The moment you decide to implement online voting, you run into the whole area of computer and internet security issues: malware, password problems, man-in-the-middle attacks, the NSA and so on. Cryptography can't solve these problems entirely but it can add an extra safeguard: verifiability.

A verifiable election typically uses a "bulletin board" on which everyone posts their ballot, usually an encrypted form of their vote. Any voter can save a copy of their ballot and check later on that it's still on the board and hasn't been tampered with. Better still, any member of the public can take a look at the board and check that each ballot there is valid and that the announced election results match the ballots on the board. This is like putting a camera in the vote-counting centre and letting the whole world watch the counting process, with the difference that everyone's vote is kept secret even though the count is public.

At the workshop this year we had many interesting talks. Some were on existing systems like the Estonian scheme, the UniVote system used at Swiss universities or the Primesvote system used by shareholders of the Swiss bank UBS. Others talked about new systems under development, with two separate projects looking at bringing cryptographic voting to smartphones.

What impressed me most, as opposed to some other conferences I've been at, was the very practical aspect: every talk was about either a system that already exists for real or one the speakers are trying to build in the coming years. Cryptographic voting as an idea has been around for over 30 years now, but there are still many challenges in bringing it to market. Meanwhile, the market is definitely there if we can build systems that are usable in practice.

So what about the NSA? Well, anyone deploying online voting secured by nothing more than a padlock icon is just opening the door to them - we know that they can get around that kind of cryptography if they really want to. But Snowden also said that cryptography works "if done properly" - and I consider cryptographically verifiable voting to be the only proper way to hold elections online.

Friday, November 15, 2013

How to Search on Encrypted Data, in Practice


With the rise of cloud computing and storage in the last decade or so, many people have raised concerns about the security issues of outsourced data. And to match this, cryptographers have come up with many proposals aiming to solve all of these problems. The most well-known of these methods, Fully Homomorphic Encryption, can in theory offer a perfect solution to this, but is extremely inefficient and not currently practical for most real-world situations. Other solutions include Oblivious RAM and special forms of identity-based encryption, but these are both still pretty expensive.

Symmetric Searchable Encryption (SSE), which we looked at in this week's study group, stands at the other end of the scale. It is extremely practical, capable of handling very large (hundreds of GB) databases, and even has performance comparable to MySQL. However, it does compromise somewhat on leakage. It is similar in some ways to deterministic encryption, but ensures that leakage only occurs when a query is made (rather than as soon as the database is uploaded), and has some clever techniques to minimise the quantity of this leakage.

Our study group focussed on the Crypto 2013 paper by Cash et al., which also describes their state-of-the-art implementation. In this post I'll cover the basic concepts and mechanisms, and explain exactly what leaks and when.

For a basic overview of the many ways in which encrypted search can be done, I recommend checking out Seny Kamara's blog, which currently has a series of articles running on this topic.

Scenario


A client holds a database (or collection of documents) and wishes to store this, encrypted, on a third party server. The client later wants to search the database for certain keywords and be sent an encrypted list of documents that contain these keywords. Note that we're not searching the entire contents of the database, only a set of keywords that have been indexed in advance (just as Google doesn't look in every HTML file on the web for your search terms, instead using precomputed data structures based on keywords). Also note that the server only returns a set of encrypted identifiers, rather than the documents themselves. In the real world the client would then request the documents from the server.

The remarkable aspect of the Crypto paper is that it aims to do the above in time proportional to the number of documents matching the least frequent keyword in the search query -- independent of the database size! Efficiency-wise, this seems optimal -- you can't do much better than this, even on plaintext data. The compromise, of course, comes with leakage, which we'll look at shortly.

Warm-up: Single Keyword Search


Let's start with the simple scenario of searching for a single keyword.
To enable search time proportional to the number of documents matching the keyword, the database first needs to be structured in a special way. It is arranged into a table T, so that for every keyword w, T[w] contains a list of indices of all the documents containing w, encrypted under the key K_w, obtained by applying a PRF to w. That is:

T[w] = { Enc_{K_w}(ind) : ind ∈ DB(w) }

where DB(w) is the set of indices of documents containing w.

Now this is not yet secure. Firstly, the keywords will be leaked to the server as they are not encrypted. Secondly, even if the keywords were somehow hidden, the structure of T will reveal the frequencies of all keywords in the database, which the server could use to perform frequency analysis. The way around these problems is to store T in a special data structure that hides the association between keywords and documents.

The data structure, called a TSet, consists of a number of fixed-size buckets. Each entry of T[w] is assigned to a pseudo-random bucket by applying a hash function H to PRF outputs derived from w and the entry's position within T[w]. A bit β is concatenated with the entry to signal when the last element of T[w] has been reached. This is all then XORed with a key (also derived from H) and placed at a random empty position in the bucket, alongside a label L. L is obtained from the same hashing step and is needed to identify the correct entry within a bucket. Modelling H as a random oracle ensures that the bucket and key are truly random, and using the key as a one-time pad hides all information about T[w] from the server.



The bucket sizes need to be chosen to ensure that the probability of overflow isn't too high - as long as this probability is reasonably small, the setup can just be repeated with different PRF keys until everything works. This needs a storage overhead of around 2-3x.
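
To make the construction more concrete, here is a much-simplified Python sketch (my own illustration, not the authors' code): HMAC-SHA256 stands in for the PRFs, SHA-256 for H, a PRF-derived pad for the encryption of the indices, and the buckets are allowed to grow rather than being of fixed size.

    import hmac, hashlib, os
    from collections import defaultdict

    NBUCKETS = 64

    def F(key: bytes, msg: bytes) -> bytes:
        # PRF, instantiated with HMAC-SHA256 for this sketch.
        return hmac.new(key, msg, hashlib.sha256).digest()

    def H(x: bytes):
        # Hash to (bucket, label, one-time pad), as in the description above.
        d = hashlib.sha256(x).digest()
        return d[0] % NBUCKETS, d[1:9], hashlib.sha256(b"pad" + x).digest()[:9]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def tset_setup(db: dict, key: bytes):
        # db maps keyword -> list of document indices.  Real TSets use fixed-size
        # buckets and retry with fresh keys on overflow; here buckets simply grow.
        buckets = defaultdict(list)
        for w, inds in db.items():
            kw = F(key, b"enc" + w.encode())            # per-keyword key K_w
            stag = F(key, b"stag" + w.encode())         # search tag for w
            for i, ind in enumerate(inds):
                e = xor(ind.to_bytes(8, "big"), F(kw, bytes([i]))[:8])   # Enc_{K_w}(ind)
                beta = b"\x01" if i + 1 < len(inds) else b"\x00"         # 0 marks the last entry
                bucket, label, pad = H(F(stag, bytes([i])))
                buckets[bucket].append((label, xor(beta + e, pad)))
        return buckets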

Searching


After this setup is done by the client, they upload the TSet data structure to the server. Searching for a keyword w is done by:

  1.  Client sends the search tag stag = F(K, w), a PRF of w under the client's key K, to the Server

  2.  Server sets β = 1, i = 0. While β = 1, do

  • Derive a bucket, label and key by hashing F_stag(i)
  • Search the bucket for an entry containing the label
  • XOR with the key to obtain β, s_i
  • Send s_i to the client, increment i

  3. Client decrypts the indices s_i
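
Continuing the sketch above (reusing F, H, xor and tset_setup from it), the search loop just described looks roughly like this:

    def server_search(buckets, stag: bytes):
        # Server side: follow the chain of entries for stag until beta == 0.
        results, i, beta = [], 0, 1
        while beta == 1:
            bucket, label, pad = H(F(stag, bytes([i])))
            entry = next(v for (l, v) in buckets[bucket] if l == label)
            plain = xor(entry, pad)
            beta, e = plain[0], plain[1:]
            results.append(e)                      # still encrypted under K_w
            i += 1
        return results

    def client_decrypt(results, key: bytes, w: str):
        kw = F(key, b"enc" + w.encode())
        return [int.from_bytes(xor(e, F(kw, bytes([i]))[:8]), "big")
                for i, e in enumerate(results)]

    K = os.urandom(32)
    T = tset_setup({"crypto": [3, 7, 42], "cloud": [7]}, K)
    stag = F(K, b"stag" + b"crypto")
    assert client_decrypt(server_search(T, stag), K, "crypto") == [3, 7, 42]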

Leakage


Let's stop a minute and think about how much leakage is going on here. The server can learn:

  •  The indices of all documents containing w (the access pattern)
  • When repeated queries have been made (the search pattern)

Compare this with a simple solution using deterministic encryption, where the table T is stored with each w encrypted deterministically. In this case the server can straight away learn the keyword frequencies, before any queries have been made, whereas using the TSet as described above, this information is only gradually revealed with each query.

Conjunctive queries - naively


For the case of conjunctive queries, i.e. searches of the form w1 AND w2 AND w3, things are more complex. Naively, the client could perform each search separately, then compute the intersection themselves. However, this has two main drawbacks. For queries where one of the keywords is very common, e.g. "gender=male", this would require a large proportion of the database indices to be sent to the client. Moreover, the server still learns the encrypted indices of every document matching one of the keywords, which is more leakage than just the access pattern of the result.

Alternatively, you might try and have the server perform the intersection. The client could send the server the secret key Kw (derived from a PRF) for each keyword, but this would still leak all indices matching at least one keyword.


Conjunctive queries - less leakage


To get around these issues, they propose a modified protocol called Oblivious Cross-Tags (OXT), which allows the server to compute the intersection without learning (much) more information than the indices of documents matching the entire query. Impressively, they manage to do this in time proportional to the least frequent keyword in the query.

For this to work, another data structure, called an XSet, is needed. The XSet is essentially a set of all 'encrypted' pairs (w, ind), where w is a keyword appearing in document ind. The TSet from the single-keyword search is also augmented with an 'encrypted' index in each entry. During search, the TSet is used for the first keyword, which gives the server the encrypted indices matching this keyword. The client also sends some tokens allowing the server to compute encryptions of (w, ind) for the remaining keywords, combined with the encrypted indices already recovered. These pairs are then searched for in the XSet (implemented as a Bloom filter) and the resulting matches are sent to the client.
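
To give a flavour of how the blinding can work, here is my own toy reconstruction of the exponent trick with deliberately tiny, insecure parameters (it is not the paper's implementation): the XSet stores xtags g^(F(KX,w)·F(KI,ind)), the TSet stores blinded indices y_i = F(KI,ind)·z_i^(-1), and the client's xtokens g^(F(KX,w)·z_i) let the server recompute each xtag as xtoken^y without ever seeing the index itself.

    import hmac, hashlib, os

    # Toy DDH parameters (far too small to be secure): p = 2q + 1, g of order q.
    q, p, g = 1019, 2039, 4

    def Fp(key: bytes, msg: bytes) -> int:
        # PRF into Z_q* (HMAC-SHA256 reduced mod q-1, shifted to avoid zero).
        return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % (q - 1) + 1

    KX, KI, KZ = (os.urandom(16) for _ in range(3))

    def build_xset(db: dict) -> set:
        # xtag = g^(F(KX,w) * F(KI,ind)); one element per (keyword, document) pair.
        return {pow(g, Fp(KX, w.encode()) * Fp(KI, ind.to_bytes(4, "big")), p)
                for w, inds in db.items() for ind in inds}

    def tset_y_values(w1: str, inds: list) -> list:
        # Blinded indices stored alongside the TSet entries for w1.
        return [Fp(KI, ind.to_bytes(4, "big")) * pow(Fp(KZ, w1.encode() + bytes([i])), -1, q) % q
                for i, ind in enumerate(inds)]

    def xtokens(w1: str, w2: str, n: int) -> list:
        # Client-side tokens for "w1 AND w2", one per TSet position of w1.
        return [pow(g, Fp(KX, w2.encode()) * Fp(KZ, w1.encode() + bytes([i])), p) for i in range(n)]

    db = {"alice": [1, 2, 5], "bob": [2, 5, 9]}
    xset = build_xset(db)
    w1, w2 = "alice", "bob"                       # w1 is the least frequent term
    ys = tset_y_values(w1, db[w1])
    toks = xtokens(w1, w2, len(ys))
    # Server: position i of w1's TSet matches the conjunction iff xtoken^y is in the XSet.
    hits = [i for i, (t, y) in enumerate(zip(toks, ys)) if pow(t, y, p) in xset]
    print([db[w1][i] for i in hits])   # [2, 5] -- db consulted here only to check the sketch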

The cleverness lies in the special way in which the XSet pairs and the extra indices in the TSet are encrypted. Their initial idea was to use an interactive protocol based on DDH to give the search tags for the correct terms and indices to the server, but this turns out to be insecure, and also needs an extra round of computation. By precomputing the first protocol message and making some other neat changes, they get around both of these issues. I'll leave you to figure out the details from the paper (pages 12-15).

To make the running time only dependent on the least frequent keyword, you have to ensure that this keyword is always searched with the TSet, and the XSet is used for all other keywords. By storing a small state with some basic keyword statistics, it's straightforward for the client to do this every time.

Leakage


Leakage-wise, this improves a lot on the naive methods. The only situation where the server might be able to learn more than desired is when the client sends two queries with different least-frequent keywords, but the same most frequent keywords. In this case, say when (w1, w2) and then (w1', w2) are queried, the server can compute the indices matching (w1, w1'). Also, note that in a conjunctive query of several terms, e.g. (w1, w2, w3), the server can learn the number of documents matching (w1, w2) and (w1, w3) in addition to the number of matches for the entire query (this leakage doesn't seem to be mentioned in the paper, but came up in our discussion).

Conclusion


I think this technology is a really cool way of increasing the security of deterministic encryption, whilst maintaining plaintext-like search speeds, and it seems like a good alternative to things like CryptDB, which leaves much to be desired regarding security. Even so, I would advise caution to anyone considering this kind of technology. We've recently been reminded of the perils of deterministic encryption (and several other bad security practices) with the leakage of Adobe's password database, so a careful assessment of the scenario is essential.

It's worth noting that in many cases the adversary is not the storage provider, but a 3rd party who may have obtained access to their servers. Using SSE ensures that nothing can be learned when seeing a snapshot of the database, and it would typically be much more difficult for an attacker to also gain access to the individual queries. For these scenarios it could really make a lot of sense to deploy SSE.

Clearly this is an area that needs more research to assess the suitability of specific applications. Also it would be really nice to see if other technologies like FHE and ORAM can be used to reduce the leakage, whilst still maintaining a reasonable level of efficiency.

Monday, November 11, 2013

These are not the packets you are looking for.

Deep Packet Inspection (DPI) is now a common tool in the management of computer networks. On the positive side it can be used to help improve network performance but more negatively, it can aid businesses or nation states in data mining, eavesdropping and Internet Censorship. So how does it work? Most modern DPI systems use regular expressions as a means to define a fingerprint for a particular protocol. Packets can then be examined to see whether they match with this expression/fingerprint.

On the Tuesday morning of CCS Kevin Dyer explained how to fool a regular-expression based DPI system to incorrectly classify a connection as that of the attacker's choice, for example make Tor traffic look like regular HTTP traffic. The paper he presented is entitled "Protocol Misidentification Made Easy with Format-Transforming Encryption" and is joint work with Scott Coull, Tom Ristenpart and Tom Shrimpton [pdf]. To perform such misidentification attacks they introduce a new cryptographic primitive called Format-Transforming Encryption (FTE). 

An FTE scheme, just like any other encryption scheme, is defined by three algorithms: key generation, encryption and decryption. The difference arises in the fact that in FTE the encryption scheme takes as input the key k, message m and in addition a message format F. The ciphertext that is output by the encryption algorithm shall be contained in the language L(F) defined by F. Similarly, for decryption the message format must be given as input. It is easy to see that this is in fact a generalisation of Format-Preserving Encryption introduced by Bellare et al. [pdf], where the message format of both the plaintext and the ciphertext are the same.

An FTE scheme can be created by using an "Encrypt-then-Unrank" construction. More specifically, first a message is encrypted using a secure authenticated encryption scheme (the implementation given in the paper uses an Encrypt-then-MAC construction with Counter mode and HMAC). Next the ciphertext is treated as an integer in {0, 1, ..., |L(F)|-1} and is encoded using a function unrank which maps the inputs to an element in the language L(F). (In order to decrypt, unrank must be a bijection with an efficiently computable inverse rank). The paper builds on this scheme by describing how to construct a full FTE record layer which can be used to transform the format of arbitrary streams of TCP traffic.
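
As a toy illustration (mine, not the paper's DFA-based ranking), take the tiny "language" of strings of the form GET /[a-z]{12} HTTP/1.1. Ranking and unranking this language is just base-26 conversion, and Encrypt-then-Unrank maps a ciphertext integer to an innocuous-looking request.

    import os

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    K = 12   # path length; the toy language is "GET /" + [a-z]{12} + " HTTP/1.1"

    def unrank(x: int) -> str:
        # Map an integer in {0, ..., 26^K - 1} to the x-th word of the language.
        path = []
        for _ in range(K):
            x, r = divmod(x, 26)
            path.append(ALPHABET[r])
        return "GET /" + "".join(reversed(path)) + " HTTP/1.1"

    def rank(s: str) -> int:
        # Inverse of unrank: recover the integer from the formatted string.
        path = s[len("GET /"):-len(" HTTP/1.1")]
        x = 0
        for c in path:
            x = x * 26 + ALPHABET.index(c)
        return x

    # "Encrypt-then-Unrank": an authenticated-encryption ciphertext, viewed as an
    # integer below 26^K, is encoded into the language; the DPI box only ever sees
    # something HTTP-shaped.  Here random bytes stand in for a real AE ciphertext.
    ct = int.from_bytes(os.urandom(7), "big") % 26 ** K
    wire = unrank(ct)
    assert rank(wire) == ct
    print(wire)   # e.g. GET /xkqapmzlbwtr HTTP/1.1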

The paper gives full details of the experiments they carried out, showing their (near perfect) success in fooling almost all of the studied DPI systems into continually misclassifying packets (the only blip was the SSH classifier of the nProbe DPI system). Further to this they also studied the efficiency of FTE, where the bandwidth overhead can be as small as 16%. Most interesting of all, in addition to the experimental results, Kevin described how they were able to use the system to circumvent Internet censorship. The Great Firewall of China is known to block a number of websites including Facebook and YouTube. By setting up an FTE tunnel between the USA and China, Kevin and his co-authors were able to successfully view censored websites within China. Building on this, they were also able to use the FTE tunnel to route Tor traffic between China and the US (Winter and Lindskog recently showed Tor was also being actively blocked [pdf]).

For anyone interested in using their FTE implementation, it is freely available here.

Friday, November 8, 2013

The Bitcoin Improvement That Really Protects Your Privacy

At PETshop on Monday before CCS, Cédric Fournet gave a talk on how to extend Bitcoin to be provably anonymous.

Recent papers show that Bitcoin only offers limited anonymity because transactions are linked by the so-called addresses in the public ledger. This makes it possible to run graph algorithms to find clusters and thus link addresses together, and possibly to people if an address was ever used to buy or sell anything. There are numerous anonymizer services, some of which simply steal the coins. However, even services with better intentions cannot completely guarantee anonymity.

Zerocoin addresses the issue by having transactions linked only through secret information. More concretely, every transaction features a commitment to a random serial number, the serial number of a previous transaction, and a zero-knowledge proof that this serial number is actually hidden in one of the previous transactions. The authenticity of a transaction is given by the proof, the uniqueness of the serial number prevents double spending, and the zero-knowledge property breaks the public link between transactions. However, every proof depends on the whole history, which implies that the verification complexity grows with the ledger, whereas in Bitcoin it only depends on the last usage of the coin.

It remains to choose a commitment scheme. Zerocoin uses Pedersen commitments and double-discrete-logarithm proofs to prove knowledge of an opening of an undisclosed commitment in a given list. It takes about one minute both to generate and to verify a proof. To speed this up, Cédric proposed using Pinocchio, a pairing-based non-interactive zero-knowledge proof system. This allows succinct proofs of eight group elements in an extension field of a 254-bit prime order field. Another possibility would be to use SHA-1-based commitments, but this would make verification inefficient.
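
For reference, a Pedersen commitment to a coin's serial number looks roughly like the sketch below (my own illustration with toy-sized, insecure parameters; in a real deployment nobody may know log_g(h)).

    import os

    # Toy group parameters (insecure sizes, for illustration only): p = 2q + 1.
    q, p = 1019, 2039
    g = 4                      # generator of the order-q subgroup
    h = pow(g, 313, p)         # second generator; known exponent here purely for the
                               # sketch -- in a real system log_g(h) must be unknown

    def commit(serial: int, r: int) -> int:
        # Pedersen commitment: perfectly hiding, computationally binding.
        return pow(g, serial % q, p) * pow(h, r % q, p) % p

    def mint_coin():
        serial = int.from_bytes(os.urandom(4), "big") % q   # coin serial number S
        r = int.from_bytes(os.urandom(4), "big") % q        # commitment randomness
        return serial, r, commit(serial, r)

    # Spending later reveals S plus a zero-knowledge proof that *some* commitment on
    # the ledger opens to S, without saying which one; reusing S is a double spend.
    ledger = [mint_coin()[2] for _ in range(5)]
    S, r, c = mint_coin()
    ledger.append(c)
    assert commit(S, r) == c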

Cédric also highlighted that verifiable computation could become the norm with such a practical system that implements succinct non-interactive zero-knowledge proofs for an NP-complete language.

Thursday, November 7, 2013

ACM CCS 2013 - Day Two

The session on Secure Multiparty Computation started with a very good presentation given by abhi shelat on “Fast Two-Party Secure Computation with Minimal Assumptions” (joint work with Chih-Hao Shen). He started the talk by pointing out how multiparty computation has recently become more and more practical: since Yao's seminal work in 1982, many different protocols have been proposed, providing, in the last few years, drastic improvements and efficient implementations relevant for real-world applications.

The approach presented in the talk for secure two-party computation is essentially based on Yao's construction and on the cut-and-choose technique to achieve security in the active model.
Informally speaking the protocol works as follows: we have two parties, Alice and Bob, who want to securely and privately evaluate an arbitrary function represented by a Boolean circuit C. One party, say Alice, prepares a “garbled” (or “encrypted”) version of the circuit C and gives it to Bob together with her appropriate input keys, in such a way that he can evaluate the circuit without learning anything about Alice's inputs or the intermediate values. To achieve security against an active adversary, Alice creates many copies of the garbled circuit (with different randomness) and sends them to Bob. He then asks Alice to open a fixed fraction of these circuits, checking that they are correctly constructed.
Unfortunately this technique leaves three main issues to resolve: 1) input inconsistency: we need Alice to use the same inputs in each evaluated circuit; 2) output authentication and privacy: a malicious Bob could learn Alice's output or output an incorrect value; 3) selective failure attack: a malicious Alice can use inconsistent inputs in the OTs and in the creation of the garbled circuits to learn Bob's inputs. There are many different approaches to address these issues.

Abhi first focused on the output authentication and privacy issue. The problem has been studied in many prior works: in [LP07] a very elegant solution was proposed, but it requires O(k^2 n) symmetric operations (where k is the security parameter and n is the input size); the subsequent protocols in [LP11, Kir08, SS11] require only O(kn) symmetric operations, but they also need some extra expensive operations. The new approach proposed here consists essentially of a new witness-indistinguishable proof and does not need any extra computation or any number-theoretic assumptions. To ensure input consistency they use an auxiliary 2-universal hash circuit, and for the selective failure attack they propose the same solution as in [LP07], but with a different and more efficient implementation. All in all they obtain a two-party computation protocol in the malicious setting that is faster than prior protocols using the same techniques, and with fewer assumptions.

The talk concluded with a brief comparison with other works on secure computation in the malicious setting, in particular with [NNOB12]. The protocol in [NNOB12] turns out to be more efficient in terms of computation complexity (it requires O((k/log n)·C) symmetric operations in the random oracle model, while the one presented in the talk requires O(kC) operations in the standard model), but it is worse in terms of communication complexity.
Finally Abhi discussed the advantages of using garbled circuits for secure two-party computation, in particular that, using this technique, there is no asymptotic overhead in passing from passive to active security.

In the second talk of the session Michael Zohner presented a joint work with Gilad Asharov, Yehuda Lindell and Thomas Schneider on "More Efficient Oblivious Transfer and Extensions for Faster Secure Computation". In this work the authors describe a more efficient protocol for OT extension in the passive model. In addition they provide specific OT-extension functionalities especially designed for secure computation protocols, as well as an open-source optimised OT implementation.

The last talk of the session was given by Marcel, who gave a very good presentation of the Bristolian work "An Architecture for Practical Actively Secure MPC with Dishonest Majority" (joint work with Peter and Nigel).

Tuesday, November 5, 2013

Hiding Where You Put That Data

A key issue in securing data is that encryption is often not enough. Consider a situation where you encrypt data in a database: it can turn out that your access patterns to the database reveal a lot of information about your queries. For example, suppose you look up the IBM stock price and then buy IBM stock. An attacker who cannot see what data you access can still correlate the access location with you buying IBM stock. This leads to the question of whether one can hide the access pattern.

The solution to this problem is to use ORAM, and that was the topic of two papers on the Tuesday afternoon of CCS. In the first, the authors presented the Path ORAM algorithm. This is a method to hide access patterns. Data is held in a tree structure, with multiple blocks of data held at each node. The tree is assumed to be held on a remote server. To access a specific block the user requests one path in the tree, which contains their desired block. The server only learns the path, and not which specific block is accessed.

This deals with reading data, but we also need to write the data back. Here we re-randomize the locations of all the blocks on the retrieved path and then we re-write the path back. By clever design of this re-randomization step the method can be made incredibly efficient and practical.
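
A stripped-down sketch of that access procedure (my own simplification: no encryption, an unbounded stash, and no handling of bucket overflow) might look like this:

    import random

    class PathORAM:
        # Minimal Path ORAM sketch: a binary tree of buckets on the server, plus a
        # client-side position map and stash.  Simplified for clarity, not security.
        Z = 4                                   # blocks per bucket

        def __init__(self, depth: int):
            self.n_leaves = 2 ** depth
            self.tree = {}                      # node id -> list of (addr, data)
            self.pos = {}                       # addr -> leaf (client state)
            self.stash = {}                     # addr -> data (client state)

        def _path(self, leaf: int):
            # Node ids from the leaf up to the root (node 1), deepest first.
            node, path = self.n_leaves + leaf, []
            while node >= 1:
                path.append(node)
                node //= 2
            return path

        def access(self, addr, data=None):
            leaf = self.pos.get(addr, random.randrange(self.n_leaves))
            self.pos[addr] = random.randrange(self.n_leaves)   # remap before re-writing
            # 1. Read the whole path for the old leaf into the stash.
            for node in self._path(leaf):
                for a, d in self.tree.pop(node, []):
                    self.stash[a] = d
            # 2. Read or update the requested block in the stash.
            if data is not None:
                self.stash[addr] = data
            result = self.stash.get(addr)
            # 3. Write the path back, pushing each stash block as deep as its
            #    (new) leaf allows, at most Z blocks per bucket.
            for node in self._path(leaf):                       # deepest level first
                bucket = []
                for a in list(self.stash):
                    if len(bucket) == self.Z:
                        break
                    if node in self._path(self.pos[a]):         # paths intersect here
                        bucket.append((a, self.stash.pop(a)))
                self.tree[node] = bucket
            return result

    oram = PathORAM(depth=4)
    oram.access("block7", data=b"secret")
    assert oram.access("block7") == b"secret"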

In the second talk the path ORAM algorithm was implemented to create a secure processor, namely a processor which operates on secure data. The resulting processor, called PHANTOM, is novel in that it also hides the memory access patterns. This is done by using a path-ORAM implementation in hardware. That this can even be considered is an indication that path-ORAM is an efficient methodology for obtaining ORAM in practice.

The talk on PHANTOM explained some of the engineering challenges in implementing ORAM in hardware, and some of the changes needed to the basic algorithm so as to realise it efficiently given the underlying hardware constraints. A lot of use is made of pipelining, so as to ensure that operations are performed in parallel. Care is also taken to avoid timing side-channels, by creating a DRAM buffer to mitigate delays between the RAM and the secure processor. Concrete timings obtained were about 25-35 microseconds per ORAM access; this is a slowdown of around 32 to 44 times over standard (non-secured) RAM access.

Honey I Blew Up The Password File


Ari Juels today at CCS gave a presentation of his Honeywords idea (joint work with Ron Rivest), which aims to protect against server breaches of password files.

Assume an adversary obtains a password file, which we can assume (for most users) allows the adversary to recover the cleartext password (since most users choose easily guessed passwords). This is a real attack which has been mounted against various companies in the past couple of years, often resulting in high-profile media coverage which the companies would rather have avoided.

Ari suggested adding to each user's entry in the password file a set of "fake" passwords, called Honeywords. Suppose there are 19 such Honeywords and one real password. The adversary, on performing a server breach, obtains twenty possible passwords, and so if he submits one of these at random he will be detected with probability 19/20, i.e. 95 percent. Such a detection also reveals to the server that it has been compromised.

This simple idea raises two key issues: firstly, how to detect the correct password out of a set of passwords, and secondly, how to choose the Honeywords. For the first problem the authors suggest a simple form of distributed cryptography. The first server takes the input password and checks it against the set of twenty words. The index of the input password within this set is then passed to a second server, which simply returns whether the index is that of a Honeyword or of the correct password. As described, the system is clearly secure against passive corruption of one of the two servers.
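
Roughly, the split between the two servers can be pictured like the sketch below (my own illustration, with far fewer Honeywords than suggested and a plain hash standing in for proper password hashing).

    import random, hashlib

    def H(pw: str) -> str:
        # Stand-in for a proper salted password hash (bcrypt/scrypt in practice).
        return hashlib.sha256(pw.encode()).hexdigest()

    # Server 1 stores, per user, the hashes of the shuffled "sweetwords".
    def enroll(password: str, honeywords: list):
        sweetwords = honeywords + [password]
        random.shuffle(sweetwords)
        correct_index = sweetwords.index(password)
        file_entry = [H(w) for w in sweetwords]
        return file_entry, correct_index        # the index goes to server 2, not server 1

    # Server 2 (the "honeychecker") knows only the index of the real password.
    def honeychecker(correct_index: int, submitted_index: int) -> str:
        if submitted_index == correct_index:
            return "ok"
        return "ALARM: honeyword used, password file probably breached"

    # Login: server 1 hashes the submitted password, finds its index in the entry,
    # and asks server 2 whether that index belongs to the real password.
    entry, idx = enroll("correct horse", ["pa55w0rd", "letmein", "qwerty123"])  # 19 in practice
    submitted = "letmein"                       # what a breach attacker might try
    pos = entry.index(H(submitted))             # a word not in the list is just a failed login
    print(honeychecker(idx, pos))               # ALARM ...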

As for the second problem of how to choose Honeywords, the authors present a number of techniques. The main issue is that simply choosing random words as the Honeywords would, for many users, make it easy to pick out the correct password.

Combined with standard hashing techniques for password storage the proposed Honeyword system seems to work well. However, an issue not discussed by the talk is what happens if an adversary actively compromises one of the two servers.

Friday, November 1, 2013

Does novel hardware require novel cryptography?

Novel hardware technology is usually introduced to overcome shortcomings of established technology, but nothing in life is free, so the novel technology usually has shortcomings of its own which need to be accommodated in system design. For example, we're looking at a wave of Non-Volatile RAM (NVRAM) technologies - Flash memory being the best known, but also others such as Phase-Change Memory (PCM), Ferroelectric RAM (F-RAM) or Magnetoresistive RAM (MRAM) - sweeping in to replace established DRAM in embedded devices. This is happening due to the relatively high power consumption of DRAM, which needs to be refreshed regularly if no write operation has happened within the last time frame. In contrast, NVRAM - as the name suggests - retains data even after power loss, removing the need for power-consuming refresh operations.

However, these novel NVRAM technologies all have one major drawback: after a certain number of write operations, they will invariably fail. Because of this, wear-levelling techniques are used that map logical addresses to pseudo-random physical addresses in order to spread write operations over the entire physical address space, ensuring that all memory areas have approximately the same lifetime. Of course, if cryptographic memory protection interfered with wear-levelling or incurred additional write operations, that would be a bad thing. But do we need cryptographic protection for RAM at all? After all, we have lived happily for decades with unencrypted and unauthenticated DRAM. (Except maybe a few special cases where encryption and authentication were necessary.)

The authors of "An Efficient Run-time Encryption Scheme for Non-volatile Main Memory", Xian Zhang, Chao Zhang, Guangyu Sun, Tao Zhang and Jia Di, raise an interesting issue: cold boot attacks on unencrypted RAM become a lot easier if the RAM is non-volatile; NVRAM does not lose its information and hence the "cold" part of cold boot attacks is not required any more. Basically NVRAM behaves like a hard disk, and several information leaks due to lost or stolen laptops have highlighted that hard disks in mobile devices should be encrypted.