Wednesday, May 1, 2013

Study Group: Multilinear Maps (I)

This week's study group (30 Apr) was given by Joop and Enrique. They discussed the paper "Candidate Multilinear Maps from Ideal Lattices" (PDF). The paper describes plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups. The security of the constructions relies on a seemingly hard assumption (a multilinear analogue of DDH) in ideal lattices.

Definition. Cryptographic Multilinear Maps [BS03]:
For κ+1 cyclic groups G_1, ..., G_κ, G_T (written additively) of the same order p, a κ-multilinear map e : G_1 × ... × G_κ → G_T has the following properties:

1. For elements {g_i ∈ G_i}_{i=1,...,κ}, index i ∈ [κ] and integer α ∈ Z_p, it holds that e(g_1, ..., α·g_i, ..., g_κ) = α·e(g_1, ..., g_κ).

2. The map e is non-degenerate in the following sense: if the elements {g_i ∈ G_i}_{i=1,...,κ} are all generators of their respective groups, then e(g_1, ..., g_κ) is a generator of G_T.

A cryptographic multilinear map scheme consists of efficient procedures for instance generation, element-encoding validation, group operation and negation, and evaluation of the multilinear map:
MMP = (InstGen, EncTest, add, neg, map). These procedures are as follows.
  • Instance Generation. A randomized algorithm InstGen that takes the security parameter λ and the multi-linearity parameter κ, and outputs (G_1, ..., G_T, p, e, g_1, ..., g_κ). The G_i's and G_T describe the groups, p ∈ Z is their order, e : G_1 × ... × G_κ → G_T describes a κ-multilinear map as above, and g_i ∈ {0,1}^* for i = 1, ..., κ encode generators in these groups. Denote params = (G_1, ..., G_T, p, e).
  • Element Encoding. Given the instance params, an index i ∈ [κ], and a string x ∈ {0,1}^*, EncTest(params, i, x) decides if x encodes an element in G_i. Similarly, EncTest(params, κ+1, x) efficiently recognizes descriptions of elements in G_T.
  • Group Operation. Given x, y ∈ G_i, add(params, i, x, y) computes x + y ∈ G_i and neg(params, i, x) computes -x ∈ G_i. This also implies that for any α ∈ Z_p and x ∈ G_i we can efficiently compute α·x.
  • Multilinear Map. For {x_i ∈ G_i}_{i=1,...,κ}, map(params, x_1, ..., x_κ) computes e(x_1, ..., x_κ) ∈ G_T.
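As a sanity check on the definition, the multilinearity property can be demonstrated with a toy (and entirely insecure) instantiation in which every group is the additive group Z_p. The prime, generators and map below are illustrative assumptions, not part of the paper; a real candidate must hide the discrete logs, which this sketch obviously does not:

```python
# Toy, INSECURE illustration of multilinearity over Z_p. Each "group" is the
# additive group Z_p with generator g_i, and the map recovers the discrete
# logs (trivial here) and multiplies them: e(a_1*g_1, ..., a_k*g_k) = (prod a_i)*gT.
p = 101                      # small prime group order (toy parameter)
gs = [2, 3, 5]               # generators of G_1, G_2, G_3 (any nonzero residues)
gT = 7                       # generator of the target group G_T

def dlog(x, g):
    # discrete log in additive Z_p: x = a*g mod p  =>  a = x * g^{-1} mod p
    return (x * pow(g, -1, p)) % p

def e(*xs):
    a = 1
    for x, g in zip(xs, gs):
        a = (a * dlog(x, g)) % p
    return (a * gT) % p

# Property 1: scaling one input by alpha scales the output by alpha.
alpha = 42
lhs = e(gs[0], (alpha * gs[1]) % p, gs[2])
rhs = (alpha * e(*gs)) % p
```

Here `lhs == rhs` holds for any choice of alpha, and e(g_1, g_2, g_3) = gT is itself a generator, matching the non-degeneracy condition.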

Hardness Assumptions

For the multilinear map to be cryptographically useful, at least the discrete logarithm must be hard in the respective groups, and we usually also need the multilinear-DDH to be hard.


1. Multilinear Discrete-log (MDL). The Multilinear Discrete-Log problem is hard for a scheme MMP if, for all κ > 1, all i ∈ [κ] and all probabilistic polynomial-time algorithms A, the discrete-logarithm advantage of A,

AdvDlog_{MMP,A,κ}(λ) = Pr[A(params, i, g_i, α·g_i) = α : (params, g_1, ..., g_κ) ← InstGen(1^λ, 1^κ), α ← Z_p],

is negligible in λ.


2. Multilinear DDH (MDDH). For a symmetric scheme MMP (with G_1 = G_2 = ... = G_κ = G and a common generator g), the Multilinear Decision-Diffie-Hellman problem is hard for MMP if for every probabilistic polynomial-time algorithm A, the advantage of A in distinguishing between the following two distributions is negligible in λ: (params, g, α_0·g, α_1·g, ..., α_κ·g, (∏_{i=0,...,κ} α_i)·e(g, ..., g)) and (params, g, α_0·g, α_1·g, ..., α_κ·g, α·e(g, ..., g)), where the α_i's and α are uniformly random in Z_p.
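The two MDDH distributions can be made concrete in the same kind of toy symmetric setting (all groups equal to the additive group Z_p with a common generator g). Everything below is an illustrative assumption and trivially insecure, since additive discrete logs are easy; it only shows the shape of the challenge tuples:

```python
import random

# Sketch of the two MDDH distributions in a toy symmetric setting. The map
# e just multiplies out the (trivially recoverable) discrete logs.
p, kappa, g, gT = 101, 3, 2, 7      # toy parameters (assumptions)

def e_sym(*xs):
    # symmetric map on kappa inputs: e(a_0*g, ..., a_{k-1}*g) = (prod a_i)*gT
    a, ginv = 1, pow(g, -1, p)
    for x in xs:
        a = (a * x * ginv) % p
    return (a * gT) % p

def mddh_sample(real):
    # "real" tuple ends with (prod alpha_i)*e(g,...,g); "random" ends with alpha*e(g,...,g)
    alphas = [random.randrange(p) for _ in range(kappa + 1)]
    tuple_part = [g] + [(a * g) % p for a in alphas]
    if real:
        prod = 1
        for a in alphas:
            prod = (prod * a) % p
        last = (prod * e_sym(*([g] * kappa))) % p
    else:
        last = (random.randrange(p) * e_sym(*([g] * kappa))) % p
    return tuple_part + [last]

sample = mddh_sample(real=True)
```

In this toy setting a distinguisher simply recovers the α_i's and checks the last component; the whole point of the candidate constructions is to make that check infeasible.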


Graded Encoding Scheme: Efficient Procedure, the Dream Version
The authors first describe a "dream version" of the efficient procedures, which they do not know how to realize, and then modify it to deal with technicalities that arise from their use of lattices in the realization.

Instance Generation. The randomized InstGen(1^λ, 1^κ) takes as input the parameters λ, κ and outputs (params, p_{zt}), where params is a description of a κ-graded encoding system, and p_{zt} is a zero-test parameter for level κ.

Ring Sampler. The randomized samp(params) outputs a level-zero encoding u ∈ S_0^(α) for a nearly uniform element α ∈ R.

Encoding. The possibly randomized enc(params, i, u') takes a level-zero encoding u' ∈ S_0^(α) for some α ∈ R and an index i ≤ κ, and outputs a level-i encoding u ∈ S_i^(α) for the same α.

Addition and Negation. Given params and two encodings relative to the same index, u_1 ∈ S_i^(α_1) and u_2 ∈ S_i^(α_2), we have add(params, i, u_1, u_2) = u_1 + u_2 ∈ S_i^(α_1+α_2) and neg(params, i, u_1) = -u_1 ∈ S_i^(-α_1).

Multiplication. For u_1 ∈ S_{i_1}^(α_1) and u_2 ∈ S_{i_2}^(α_2) such that i_1 + i_2 ≤ κ, we have mul(params, i_1, u_1, i_2, u_2) = u_1 × u_2 ∈ S_{i_1+i_2}^(α_1·α_2).

Zero-test. The procedure isZero(params, u) outputs 1 if u ∈ S_κ^(0) and 0 otherwise.

Extraction. This procedure extracts a canonical and random representation of ring elements from their level-κ encodings. Namely, ext(params, p_{zt}, u) outputs s ∈ {0,1}^λ, such that:
(a) For any α ∈ R and any two u_1, u_2 ∈ S_κ^(α),
ext(params, p_{zt}, u_1) = ext(params, p_{zt}, u_2);
(b) The distribution {ext(params, p_{zt}, u) : α ∈ R uniform, u ∈ S_κ^(α)} is nearly uniform over {0,1}^λ.


New Encoding Scheme
An instance of the scheme relative to the parameters encodes elements of a quotient ring QR = R/I, where I is a principal ideal I = <g> ⊂ R, generated by a short vector g. Namely, the ring elements that are encoded in the scheme are cosets of the form e + I for some vector e. The short generator g itself is kept secret. In addition, the system depends on another secret element z, which is chosen at random in R_q. A level-i encoding of a coset is a vector of the form c/z^i ∈ R_q with c ∈ e + I short. Specifically, for i ∈ {0, 1, ..., κ} the set of all level-i encodings is S_i = {c/z^i ∈ R_q : ||c|| < q^{1/8}}, and the set of level-i encodings of the plaintext element e + I is
S_i^(e+I) = {c/z^i ∈ R_q : c ∈ e + I, ||c|| < q^{1/8}}.

Instance Generation. The instance-generation procedure chooses at random the ideal generator g and the denominator z. The denominator z is chosen uniformly at random in R_q. The generator g ∈ R should be chosen so that both g and g^{-1} ∈ K = Q[X]/(X^n + 1) are short. Once we have g and z, we choose and publish some other elements in R_q. Specifically, we have m + 1 elements x_1, ..., x_m, y that are used for encoding, and an element p_{zt} that is used as a zero-testing parameter. We also choose a random seed s for a strong randomness extractor. The instance-generation procedure outputs
params = (n, q, y, {x_i}_i, s) and p_{zt}.

Sampling level-zero encodings. To sample a level-zero encoding of a random coset, we just draw a random short element in R, d ← D_{Z^n, σ'}, where σ' = σn (for the σ that was used to sample g).
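As a rough sketch (not the paper's sampler), drawing such a short element can be approximated by rounding a continuous Gaussian coordinate-wise. Real implementations require exact discrete-Gaussian samplers, and the parameters n and σ below are purely illustrative:

```python
import random

# Crude approximation of sampling d <- D_{Z^n, sigma'}: round a continuous
# Gaussian in each coordinate. This is a stand-in for a proper discrete-
# Gaussian sampler, used only to illustrate the shape of a level-zero encoding.
n, sigma = 256, 4.0
sigma_prime = sigma * n          # sigma' = sigma * n, as in the post

def samp():
    # coefficient vector of a short ring element d in R = Z[X]/(X^n + 1)
    return [round(random.gauss(0.0, sigma_prime)) for _ in range(n)]

d = samp()
```

The coset d + I that d represents is the "plaintext"; with overwhelming probability all coefficients stay within a few standard deviations of zero, i.e. d is short.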

Encodings at higher levels. To allow encoding of cosets at higher levels, we publish as part of our instance generation a level-one encoding of 1 + I, namely an element y = [a/z]_q where a ∈ 1 + I is short. A simplistic method of doing this is drawing a ← D_{1+I, σ'} and then computing y from a.

Adding and multiplying encodings. It is easy to see that the encoding above is additively homomorphic, in the sense that adding encodings yields an encoding of the sum. This follows since if we have many short c_j's then their sum is still short, ||Σ_j c_j|| ≪ q, and therefore c = [Σ_j c_j]_q belongs to the coset Σ_j (c_j + I). Hence, if we denote u_j = c_j/z ∈ R_q, then each u_j is an encoding of the coset c_j + I, and the sum [Σ_j u_j]_q is of the form c/z where c is still a short element in the sum of the cosets.
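These homomorphic identities can be checked on a one-dimensional toy analogue with R = Z and I = gZ. The modulus, generator, and noise values below are ad hoc assumptions, and a one-dimensional instance is of course insecure; the point is only that the coset arithmetic works out exactly:

```python
import random

# Toy check of the homomorphic identities, with R = Z and I = gZ (a
# 1-dimensional, INSECURE stand-in for R = Z[X]/(X^n + 1)).
q = (1 << 64) - 59               # large prime modulus (toy size)
g = 17                           # short secret generator of I = <g>
random.seed(0)
z = random.randrange(2, q)       # secret denominator, uniform in Z_q

def enc(alpha, i, r):
    # level-i encoding of the coset alpha + I: c / z^i mod q with c = alpha + r*g
    return ((alpha + r * g) * pow(z, -i, q)) % q

# Adding two level-1 encodings encodes the sum of the cosets (noise terms add).
u = (enc(2, 1, 5) + enc(3, 1, 8)) % q
assert u == enc(2 + 3, 1, 5 + 8)

# Multiplying two level-1 encodings gives a level-2 encoding of the product
# coset: (a1 + r1*g)(a2 + r2*g) = a1*a2 + (a1*r2 + a2*r1 + r1*r2*g)*g.
v = (enc(2, 1, 5) * enc(3, 1, 8)) % q
assert v == enc(2 * 3, 2, 2 * 8 + 3 * 5 + 5 * 8 * g)
```

Note how the noise term grows with each operation; this is why the norm bound on c limits the number of multiplications to κ.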

Zero-testing. isZero(params, p_{zt}, u) = 1 if ||[p_{zt}·u]_q||_∞ < q^{3/4}, and 0 otherwise.

Extraction. To extract a canonical and random representation of a coset from an encoding u = [c/z^κ]_q, we just multiply by the zero-testing parameter p_{zt}, collect the (log q)/4 most significant bits of each of the n coefficients of the result, and apply a strong randomness extractor to the collected bits (using the seed from the public parameters). Namely, ext(params, p_{zt}, u) = EXTRACT_s(msbs([u·p_{zt}]_q)), where msbs takes the most significant bits of the coefficient representation. This works because for any two encodings u, u' of the same coset we have ||p_{zt}·u - p_{zt}·u'|| = ||p_{zt}·(u - u')|| < q^{3/4}.
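The zero-test and extraction steps can likewise be sketched in the one-dimensional toy analogue R = Z, I = gZ (all sizes ad hoc and insecure): publishing p_{zt} = [h·z^κ/g]_q for a smallish h makes [p_{zt}·u]_q small exactly when u encodes the zero coset, and the top bits of [p_{zt}·u]_q depend only on the coset:

```python
import random

# Toy zero-test and extraction with R = Z, I = gZ. For u = c/z^kappa mod q,
# p_zt * u mod q equals h*c/g mod q, which is small iff g divides c,
# i.e. iff u encodes the zero coset. Parameter sizes are ad hoc.
q = (1 << 64) - 59
g, kappa, h = 17, 3, 999
random.seed(1)
z = random.randrange(2, q)
p_zt = (h * pow(z, kappa, q) * pow(g, -1, q)) % q    # p_zt = [h * z^kappa / g]_q

def enc(alpha, r):
    # level-kappa encoding of the coset alpha + gZ
    return ((alpha + r * g) * pow(z, -kappa, q)) % q

def is_zero(u):
    # accept when [p_zt * u]_q is "much smaller than q" (crude threshold)
    w = (p_zt * u) % q
    return min(w, q - w) < (q >> 20)

def ext(u, bits=16):
    # canonical representative: top bits of [p_zt * u]_q (extractor omitted)
    w = (p_zt * u) % q
    return w >> (q.bit_length() - bits)
```

Two encodings of the same coset differ by an encoding of zero, so their [p_{zt}·u]_q values differ by something small and (with overwhelming probability) share the same top bits, which is exactly why extraction is well-defined.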

The security of the graded encoding system relies on new hardness assumptions, and at present it seems unlikely that they can be reduced to more established assumptions, such as learning-with-errors (LWE), or even the NTRU hardness assumption.



