
Lattices are a very important construct for modern-day cryptography: since quantum computers will eventually break both ECC and RSA via Shor’s algorithm, cryptographic primitives that are not vulnerable to it will be the future of cryptography.

Prelude: Vector Spaces

Recall that a vector space \(V\) over a field (say \(\mathbb{R}\) or \(\mathbb{C}\)) is a set of “vectors”, with addition of vectors and multiplication of a vector by a scalar, satisfying certain properties. Since we know from linear algebra that every \(n\)-dimensional vector space over \(\mathbb{R}\) is isomorphic to \(\mathbb{R}^n\), let’s focus on the case of \(\mathbb{R}^n\), whose elements can be realized as \(n\)-tuples \((x_1,x_2,\cdots,x_n)\) of real numbers.

Definition (Vector Space) A vector space \(V\) over a field \(K\) is an abelian group \((V,+)\) along with a scalar multiplication operator \(\cdot: K\times V\to V\) such that

  1. \(1\cdot v=v\qquad \forall v\in V\) and \(1\) the multiplicative identity in \(K\)
  2. \(c\cdot(dv)=(cd)\cdot v\qquad \forall c,d\in K, v\in V\) (Compatibility)
  3. \((c+d)\cdot v=c\cdot v + d\cdot v\qquad \forall c,d\in K, v\in V\) (Distributive Law 1)
  4. \(c\cdot (v+w)=c\cdot v + c\cdot w\qquad\forall c\in K, v,w\in V\) (Distributive Law 2)

We will omit the dot for simplicity. Also recall that an abelian group \((V,+)\) is a set \(V\) with a binary operation \(+: V\times V\to V\) such that for every \(a,b,c\in V\),

  1. \((a+b)+c = a+(b+c)\) (Associativity)
  2. \(\exists e\in V\) such that \(e+a=a+e=a\). Denote the element by \(0\). (Identity)
  3. \(\forall a\in V\ \exists d\in V\) such that \(a+d=d+a=0\). Denote this element by \(-a\). (Inverse)
  4. \(a+b=b+a\) (Commutativity)

Theorem A vector space \(V\) of dimension \(n\) over \(K\) is isomorphic to \(K^n\) (as vector spaces).

For \(n=2\), we can just imagine an infinite plane, where a vector is simply an arrow pointing from the origin to some point in the plane. This will help us imagine what a lattice looks like in a moment.

For vector spaces, we can talk about the idea of a basis, which is a set of vectors that, if we multiply each vector by some suitable scalar and add them up, can generate every vector in the vector space (spanning set), and such that the zero vector cannot be generated unless all the scalars we choose are 0 (linearly independent set). For example, the set \(\{(1,0),(0,1)\}\) can generate every vector \((x,y)\) in the \(\mathbb{R}^2\) plane, where \(x,y\) are real numbers, just by taking \(x(1,0)+y(0,1)\).

A standard theorem of linear algebra is that a finite dimensional vector space always has a basis, and that any two bases have the same number of vectors, which is the dimension of the vector space. Also, any basis can be transformed into another basis of the same vector space by taking appropriate linear combinations, such that the coefficients form an invertible matrix.

However, what if we restrict \(x\) and \(y\) to be integers? The set of vectors with integer entries certainly doesn’t form a vector space over \(\mathbb{R}\), because, for example, \(0.5(1,0)\) is a vector not in that set. In this case, we get a lattice.



There are many equivalent definitions of lattices, but let’s use the simplest one.

Definition (Lattice) Given a basis of \(n\) linearly independent vectors in \(\mathbb{R}^n\), the lattice generated by that basis is the set of integer linear combinations of those vectors. In other words, let \(\mathcal{B}=\{\vec{v_i}=\,^t(x_{i1},x_{i2},\cdots,x_{in}): 1\leq i\leq n, x_{ij}\in\mathbb{R}\ \forall i,j\}\), then the lattice \(\mathcal{L}\) is given by \(\mathcal{L}=\mathcal{L}(\mathcal B)=\{\sum_i a_i\vec{v_i}: a_i\in\mathbb{Z}\}\). We also write \(\mathcal{B}\) for the matrix whose columns are the basis vectors, so \(\mathcal{B}=\left[\begin{matrix}\vec{v_1}&\vec{v_2}&\cdots&\vec{v_n}\end{matrix}\right]=\left[\begin{matrix}x_{11}&x_{21}&\cdots&x_{n1}\\ x_{12}&x_{22}&\cdots&x_{n2}\\\vdots&\vdots&\ddots&\vdots\\x_{1n}&x_{2n}&\cdots&x_{nn}\end{matrix}\right]\).
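As a quick illustration, here is a small Python sketch (the basis \(v_1=(1,0)\), \(v_2=(1,2)\) and the helper name are mine, not from the text) that enumerates the lattice points whose coefficients \(a_i\) lie in a small range:

```python
from itertools import product

def lattice_points(basis, coeff_range):
    """All integer linear combinations sum_i a_i * v_i with a_i in coeff_range."""
    dim = len(basis[0])
    pts = set()
    for a in product(coeff_range, repeat=len(basis)):
        pts.add(tuple(sum(a[i] * v[d] for i, v in enumerate(basis))
                      for d in range(dim)))
    return pts

# Hypothetical example basis: v1 = (1, 0), v2 = (1, 2)
points = lattice_points([(1, 0), (1, 2)], range(-1, 2))
```

Plotting these points for different bases is a good way to see that very different-looking bases can generate the same lattice.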

Of course, you can also take the rows as the basis vectors; everything described later still works after transposing all the matrices involved.

For simplicity, we shall work with integer lattices, i.e. the basis vectors have integer entries, or \(\mathcal{L}\subset \mathbb{Z}^n\).

You can visualize lattices with this: Lattice Demonstration

Unimodular Matrix

If two bases generate the same lattice \(\mathcal{L}\), then their corresponding matrices are related by multiplication by an integer matrix with determinant \(\pm 1\). Such a matrix is called a unimodular matrix. We shall state this as a theorem (without proof).

Theorem Let \(\mathcal{B}\) and \(\mathcal{C}\) be two lattice bases. Then \(\mathcal{L(B)}=\mathcal{L(C)}\) iff there exists an integer matrix \(U\) such that \(\det U=\pm 1\) and \(\mathcal{B}=\mathcal{C}U\).

In fact, a basis can be transformed into another by elementary column operations, i.e. (1) swapping two columns, (2) multiplying a column by \(-1\), and (3) adding an integer multiple of one column to another. It is left as an exercise to the reader to verify that such operations correspond to multiplying the basis matrix by a suitable unimodular matrix.
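To get a feel for the exercise, here is a short sketch (hypothetical helper names) applying the three column operations to a \(2\times 2\) basis matrix and checking that each preserves the determinant up to sign:

```python
def det2(M):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def swap_cols(M):
    return [[M[0][1], M[0][0]], [M[1][1], M[1][0]]]

def negate_col(M, j):
    return [[-x if k == j else x for k, x in enumerate(row)] for row in M]

def add_multiple(M, src, dst, k):
    """Add k times column src to column dst."""
    return [[x + k * row[src] if idx == dst else x
             for idx, x in enumerate(row)] for row in M]

B = [[2, 1], [0, 3]]   # columns are the basis vectors; det = 6
for C in (swap_cols(B), negate_col(B, 0), add_multiple(B, 0, 1, 5)):
    # each operation multiplies the determinant by +1 or -1
    assert abs(det2(C)) == abs(det2(B))
```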

Fundamental Domain

Definition The fundamental domain (or the fundamental parallelepiped) of a lattice \(\mathcal{L}(B)\), \(B=\{v_1,\cdots,v_n\}\) is the set \(\mathcal{F}=\{\sum_{i=1}^n c_iv_i: 0\leq c_i < 1\}\).

The \(n\)-dimensional volume (i.e. the Lebesgue measure) of this fundamental parallelepiped is given by \(\text{Vol}(\mathcal{F})=\left\lvert\det B\right\rvert\). The volume is invariant under change of basis, so we can also denote it by \(\det \mathcal{L}\).
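A sketch of why \(\mathcal{F}\) deserves the name fundamental domain: every point of \(\mathbb{R}^n\) can be written uniquely as a lattice point plus a point of \(\mathcal{F}\). The snippet below (hypothetical names; basis \(v_1=(2,0)\), \(v_2=(1,3)\) chosen for illustration) performs this decomposition in dimension 2 by flooring the real coordinates:

```python
import math

def reduce_mod_lattice(v1, v2, w):
    """Write w = c1*v1 + c2*v2 + f with c1, c2 integers and f in F."""
    det = v1[0] * v2[1] - v2[0] * v1[1]
    t1 = (w[0] * v2[1] - v2[0] * w[1]) / det   # Cramer's rule: w = t1*v1 + t2*v2
    t2 = (v1[0] * w[1] - w[0] * v1[1]) / det
    c1, c2 = math.floor(t1), math.floor(t2)    # flooring gives 0 <= t_i - c_i < 1
    f = (w[0] - c1 * v1[0] - c2 * v2[0],
         w[1] - c1 * v1[1] - c2 * v2[1])
    return (c1, c2), f

coeffs, f = reduce_mod_lattice((2, 0), (1, 3), (5.5, 4.0))
```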

Hard Problems in Lattices

For any lattice \(\mathcal{L}=\mathcal{L(B)}\), we define the minimum distance of \(\mathcal{L}\) to be the minimum distance between any two distinct lattice points. Equivalently, it is the length of the shortest non-zero vector in the lattice (why?). We denote this quantity by \(\lambda(\mathcal{L})=\inf\limits_{\vec{v}\in\mathcal{L}-\{\vec{0}\}}\vert\vert\vec{v}\vert\vert\), where \(\vert\vert\vec{v}\vert\vert\) is taken to be the usual Euclidean (\(\ell_2\)) norm. With this quantity defined, we can state the two (hard) lattice problems.

Closest Vector Problem (CVP) Given a vector \(\vec{w}\in \mathbb{R}^n\) that is not in the lattice \(\mathcal{L}\), find a vector \(\vec{v}\in\mathcal{L}\) closest to \(\vec{w}\), i.e. such that \(\vert\vert\vec{v}-\vec{w}\vert\vert\) is minimal.

Shortest Vector Problem (SVP) Given the lattice \(\mathcal{L}\), find \(\vec{v}\in\mathcal{L}\) such that \(\vert\vert\vec{v}\vert\vert=\lambda(\mathcal{L})\).

In a similar vein, we define versions of these problems where the solution is not required to be exact but may be off by a certain factor.

Approximate Shortest Vector Problem (apprSVP) Let \(\psi(n)\) be a function depending on \(n\) only, and let \(\mathcal{L}\) be a lattice of dimension \(n\). The \(\psi(n)\)-apprSVP is to find a vector \(\vec{v}\in\mathcal{L}\) whose length is at most \(\psi(n)\) times that of the shortest vector, i.e. \(\vert\vert\vec{v}\vert\vert\leq\psi(n)\lambda(\mathcal{L})\).

It is left as an exercise to the reader to define the approximate version of CVP (apprCVP).

CVP is known to be NP-hard, while SVP is NP-hard under a certain randomized reduction hypothesis. However, in certain cases (for specific lattices, for example), the problems may be easier to crack. For example, for \(n=2\), Gauss’s lattice reduction algorithm completely solves the shortest vector problem.
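To get a feel for the \(n=2\) case, here is a minimal sketch of Gauss’s lattice reduction (function names are mine): repeatedly subtract the best integer multiple of the shorter vector from the longer one until no further improvement is possible; the first returned vector then has length \(\lambda(\mathcal{L})\).

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def gauss_reduce(v1, v2):
    """Gauss's reduction in dimension 2: returns a basis of the same
    lattice whose first vector is a shortest non-zero lattice vector."""
    while True:
        if dot(v2, v2) < dot(v1, v1):
            v1, v2 = v2, v1                    # keep v1 the shorter vector
        m = round(dot(v1, v2) / dot(v1, v1))   # best integer multiple to remove
        if m == 0:
            return v1, v2
        v2 = (v2[0] - m * v1[0], v2[1] - m * v1[1])

short, other = gauss_reduce((1, 1), (1, 2))
```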

Hermite’s Theorem and Gaussian Heuristics

One may want to estimate the length of the shortest vector in a given lattice. Indeed, there are results that give some estimates.

Theorem(Hermite’s Theorem) For any lattice \(\mathcal{L}\), there exists a non-zero vector \(\vec{v}\in\mathcal{L}\) such that \(\vert\vert\vec{v}\vert\vert\leq \sqrt{n}\det(\mathcal{L})^{\frac 1 n}\).

Definition(Hermite’s Constant) For a given dimension \(n\), define the Hermite constant \(\gamma_n\) to be the smallest value such that every lattice \(\mathcal{L}\) of dimension \(n\) has a non-zero vector \(\vec{v}\in\mathcal{L}\) such that \(\vert\vert\vec{v}\vert\vert^2\leq \gamma_n \det(\mathcal{L})^{\frac 2 n}\). So Hermite’s theorem states that \(\gamma_n\leq n\).

For large \(n\), we know that \(\frac{n}{2\pi e}\leq \gamma_n\leq \frac{n}{\pi e}\). In fact, by a heuristic argument going back to Gauss (the Gaussian heuristic), we expect that for large \(n\), the shortest vector has length approximately \(\sqrt{\frac{n}{2\pi e}}\det(\mathcal{L})^{\frac 1 n}\).
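A one-line helper (hypothetical names) for the Gaussian heuristic estimate, which can be compared against Hermite’s bound \(\sqrt{n}\det(\mathcal{L})^{1/n}\):

```python
import math

def gaussian_expected_shortest(n, det_L):
    """Gaussian heuristic: sqrt(n / (2*pi*e)) * det(L)^(1/n)."""
    return math.sqrt(n / (2 * math.pi * math.e)) * det_L ** (1.0 / n)

def hermite_bound(n, det_L):
    """Hermite's theorem guarantees a non-zero vector no longer than this."""
    return math.sqrt(n) * det_L ** (1.0 / n)
```

Note how much smaller the heuristic prediction is than the worst-case guarantee: the two differ by a factor of \(\sqrt{2\pi e}\approx 4.13\).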

Babai’s Algorithm

We first introduce the relation between \(\det \mathcal{L}\) and the basis vectors.

Theorem Let \(B=\{v_1,\cdots,v_n\}\) be a basis of a lattice \(\mathcal{L}(B)\). Then \(\det \mathcal{L}\leq \prod_{i=1}^n \|v_i\|\).

Equality holds iff the basis \(B\) is orthogonal, in other words every pair \(i\neq j\) has \(\langle v_i,v_j\rangle =0\).

Proposition-Definition (Orthogonality Defect) Define the orthogonality defect of a basis \(B\) of lattice \(\mathcal{L}(B)\) to be \(\delta(B)=\frac{\prod_{i=1}^n \|v_i\|}{\det \mathcal{L}}\).

We always have \(\delta(B)\geq 1\) (by Hadamard’s inequality). We say a basis is highly non-orthogonal when \(\delta(B)\) is very large.
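A small sketch (hypothetical name) computing the orthogonality defect of a 2-dimensional basis:

```python
import math

def orthogonality_defect(v1, v2):
    """delta(B) = (||v1|| * ||v2||) / |det B| for a 2-dimensional basis."""
    det = v1[0] * v2[1] - v2[0] * v1[1]
    norm = lambda v: math.hypot(v[0], v[1])
    return norm(v1) * norm(v2) / abs(det)
```

An orthogonal basis like \((2,0),(0,3)\) gives defect exactly 1, while a skewed basis of the same lattice gives a larger value.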

Theorem Every lattice \(\mathcal{L}\) of dimension \(n\) has a basis \(B\) of \(\mathcal{L}\) such that \(\delta(B)\leq n^{\frac n 2}\)

If the basis is orthogonal, then for \(c_i\in\mathbb{Z}\) we have \(\|c_1v_1+\cdots+c_nv_n\|^2= c_1^2 \|v_1\|^2+\cdots+c_n^2 \|v_n\|^2\), so the shortest vector of \(\mathcal{L}\) is just the shortest vector in \(B\). On the other hand, for \(w\in \mathbb{R}^n-\mathcal{L}\), write \(w=t_1v_1+\cdots +t_nv_n\) (\(t_i\in \mathbb{R}\)); then for \(v=\sum_{i=1}^n c_iv_i\in\mathcal{L}\), \(\|v-w\|^2=\sum_{i=1}^n (t_i-c_i)^2 \|v_i\|^2\), so CVP can be trivially solved by taking each \(c_i\) to be the integer closest to \(t_i\).

Doing the same trick for nearly orthogonal bases (i.e. \(\delta(B)\) close to 1) is known as Babai’s algorithm.

For highly non-orthogonal bases, this trick will not work. While Babai’s algorithm is simple, it is actually quite hard to analyze when it will succeed or fail. We shall see better methods for solving apprSVP and apprCVP soon.
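A minimal 2-dimensional sketch of Babai’s rounding procedure (the helper name is mine): express \(w\) in real coordinates with respect to the basis via Cramer’s rule, then round each coordinate to the nearest integer.

```python
def babai_round(v1, v2, w):
    """Approximate CVP: round the real coordinates of w in basis (v1, v2)."""
    det = v1[0] * v2[1] - v2[0] * v1[1]
    t1 = (w[0] * v2[1] - v2[0] * w[1]) / det   # Cramer's rule: w = t1*v1 + t2*v2
    t2 = (v1[0] * w[1] - w[0] * v1[1]) / det
    c1, c2 = round(t1), round(t2)
    return (c1 * v1[0] + c2 * v2[0], c1 * v1[1] + c2 * v2[1])
```

For an orthogonal basis this returns the exact closest vector; for a highly skewed basis of the same lattice the rounded answer can land far from \(w\), which is exactly the failure mode described above.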

Lattice Basis Reduction

One idea to remedy the above problem is to transform the basis into a relatively orthogonal basis.

Definition (LLL-Reduced) Let \(\delta=\frac{3}{4}\) and \(\|\cdot\|\) be the standard Euclidean norm.

Let \(B=\{v_1,\cdots,v_n\}\) be a basis of lattice \(\mathcal{L}\) and \(B^*=\{v_1^*,\cdots,v_n^*\}\) be the orthogonal basis of \(\mathbb{R}^n\) (not of \(\mathcal{L}\) in general) obtained by applying the Gram-Schmidt process, with coefficients \(\mu_{i,j}=\frac{\langle v_i,v_j^*\rangle}{\|v_j^*\|^2}\) for \(j\lt i\). Then we say \(B\) is LLL-reduced if

  1. \(\lvert\mu_{i,j}\rvert\leq\frac{1}{2}\) for all \(1\leq j\lt i\leq n\) (Size-reduced)
  2. \(\delta\|v_{k-1}^*\|^2\leq \|v_k^*\|^2 + \mu_{k,k-1}^2\|v_{k-1}^*\|^2\) for all \(1\lt k\leq n\) (Lovász condition)

The Lovász condition can also be expressed as \(\|v_k^*\|^2\geq (\delta - \mu_{k,k-1}^2)\|v_{k-1}^*\|^2\).
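The two conditions can be checked mechanically. Below is a sketch (hypothetical helper names) that runs the Gram-Schmidt process and tests both the size-reduction and Lovász conditions for \(\delta=\frac{3}{4}\):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def is_lll_reduced(basis, delta=0.75):
    """Check the size-reduction and Lovász conditions for an integer basis."""
    n = len(basis)
    bstar = []                              # Gram-Schmidt vectors v_i^*
    mu = [[0.0] * n for _ in range(n)]      # Gram-Schmidt coefficients mu_{i,j}
    for i, v in enumerate(basis):
        w = [float(x) for x in v]
        for j in range(i):
            mu[i][j] = dot(v, bstar[j]) / dot(bstar[j], bstar[j])
            w = [wx - mu[i][j] * bx for wx, bx in zip(w, bstar[j])]
        bstar.append(w)
    size_reduced = all(abs(mu[i][j]) <= 0.5
                       for i in range(n) for j in range(i))
    lovasz = all(delta * dot(bstar[k-1], bstar[k-1])
                 <= dot(bstar[k], bstar[k])
                 + mu[k][k-1] ** 2 * dot(bstar[k-1], bstar[k-1])
                 for k in range(1, n))
    return size_reduced and lovasz
```

For example, the standard basis of \(\mathbb{Z}^2\) passes both conditions, while a skewed basis like \((1,0),(100,1)\) fails size reduction.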

Theorem If a basis \(B=\{v_1,\cdots,v_n\}\) of lattice \(\mathcal{L}\) is LLL-reduced, then

  1. \[\prod_{i=1}^n\|v_i\|\leq 2^{\frac{n(n-1)}{4}}\det\mathcal{L}\]
  2. \[\|v_j\|\leq 2^{\frac{i-1}{2}}\|v_i^*\|\text{ for all }1\leq j\leq i\leq n\]
  3. \[\|v_1\|\leq 2^{\frac{n-1}{4}}(\det\mathcal{L})^{\frac{1}{n}}\]
  4. \[\|v_1\|\leq 2^{\frac{n-1}{2}}\lambda(\mathcal{L})\]

The fourth assertion implies that an LLL-reduced basis gives a solution to the \(2^{\frac{n-1}{2}}\)-apprSVP.

On the other hand, given an LLL-reduced basis \(B\) of a lattice \(\mathcal{L}\), we can solve the \(C^n\)-apprCVP for some constant \(C\) by directly applying Babai’s algorithm to the LLL-reduced basis.

LLL Algorithm

While the exact algorithm for finding an LLL-reduced basis will not be given here, we state the following theorem:

Theorem Given a basis \(B\) of a lattice \(\mathcal{L}\subset \mathbb{Z}^n\), an LLL-reduced basis can be found in time polynomial in \(n\) and \(\log\max\limits_{v_i\in B}\|v_i\|\).

One important note: empirically, the LLL algorithm gives much better results than it guarantees in theory.


Hoffstein, J., Pipher, J., & Silverman, J. H. (2008). An Introduction to Mathematical Cryptography. New York: Springer.