with Alice’s qubit. Therefore (skipping over some arguments), Alice and Bob must share two EPR
states. Chaining together pairwise statements like this establishes that they share n EPR states.
Note that this test requires very little processing to certify a high-dimensional quantum state: the number of possible operations to be performed scales only quadratically with the number of qubits tested. This contrasts with the exponential number of possibilities required for parallel self-tests.
A further potential advantage is that the test is largely nondestructive: only two of the n EPR states are measured in the test, leaving n − 2 verified EPR states for other uses.
The practical usefulness of the test, however, depends on its tolerance to imperfections. We
characterize a test as “robust” if the guaranteed distance to the target state scales polynomially
with the sub-optimality incurred by, say, instrumental imperfection. Precisely, we say that a test
has “robustness $f(n, \delta)$” if a success probability that is δ-close to optimal guarantees a distance $f(n, \delta)$ to the target state. For example, the test from [RUV13] is robust, and our test is robust as well: the dependence on errors grows only polynomially in n, not exponentially. This is not to
say that the test can readily be implemented experimentally. Might a stronger analysis show, for example, that if Alice and Bob pass the test with 99% probability, then they must share a state with 99% fidelity to $\frac{1}{\sqrt{2}^{\,n}}(|00\rangle + |11\rangle)^{\otimes n}$? No. We give an explicit construction by which Alice and Bob can use logarithmically many EPR states, $O\big(\frac{1}{\delta^2}\log n\big)$, to pass the test with probability within δ of the optimal value. This implies that the analysis breaks down for $\delta = \Omega\big(\sqrt{(\log n)/n}\big)$.
The test does not tolerate constant error rates. (A similar limitation will likely hold for other tests
that consider only two qubits at a time [CRSV17].)
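For concreteness, “δ-close to optimal” here is measured against the optimal quantum success probability of the CHSH game, $\omega^* = (2+\sqrt{2})/4 \approx 0.854$. The following is a minimal numerical check of this standard fact (textbook background, not notation from this paper): it evaluates the usual optimal strategy, one EPR pair with Alice measuring Z, X and Bob measuring the rotated observables $(Z \pm X)/\sqrt{2}$.

```python
import numpy as np

# EPR pair (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Standard optimal CHSH observables
A = [Z, X]                                   # Alice's two measurements
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's two measurements

def corr(a, b):
    """Correlator <psi| a ⊗ b |psi>."""
    return psi @ np.kron(a, b) @ psi

# CHSH value S and the corresponding winning probability 1/2 + S/8
S = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
p_win = 0.5 + S / 8
print(S, p_win)  # 2*sqrt(2) ≈ 2.828, (2 + sqrt(2))/4 ≈ 0.8536
```

Any strategy whose success probability is δ below this value is “δ-close to optimal” in the sense used above.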
Our strategy for exhibiting n qubits is based on a technique first used in [KKM+11]. We introduce two main ideas. The first consists in combining the $2n(n-1)$ pairwise approximate commutation relations $[P_i, Q_j]|\psi\rangle \approx 0$, for $P, Q \in \{X, Z\}$ and $i \neq j$, with the exact commutation that always exists between an operator on Alice’s Hilbert space $\mathcal{H}_A$ and an operator on Bob’s space $\mathcal{H}_B$. We crucially use the fact that, as a consequence of the CHSH test, an operator $P_i$ or $Q_j$ can be “pulled” from $\mathcal{H}_A$ to $\mathcal{H}_B$, at which point it commutes with any operator on $\mathcal{H}_A$. This operation allows us to control the error blow-up that would become unmanageable if we only had access to a single space $\mathcal{H}$.
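The exact commutation invoked here is elementary tensor-product algebra: for any operators A on $\mathcal{H}_A$ and B on $\mathcal{H}_B$, one has $(A \otimes I)(I \otimes B) = A \otimes B = (I \otimes B)(A \otimes I)$. A minimal numerical illustration, with arbitrary random matrices standing in for the pulled operators:

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary stand-in operators on Alice's and Bob's (4-dimensional) spaces
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
I = np.eye(4)

# Lift each operator to the joint space H_A ⊗ H_B
op_A = np.kron(A, I)  # acts on H_A only
op_B = np.kron(I, B)  # acts on H_B only

# Operators acting on different tensor factors commute exactly
commutator = op_A @ op_B - op_B @ op_A
print(np.max(np.abs(commutator)))  # 0.0
```

This holds exactly, with no approximation error, which is precisely what makes pulling an operator to the other side useful for error control.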
The second idea is to enforce the desired approximate commutation $[P_i, Q_j]|\psi\rangle \approx 0$ through the use of an intermediate set of operators, $P_i^{\{i,j\}}$ and $Q_j^{\{i,j\}}$, that are obtained by adding an additional test in which the CHSH test is executed on a pair $\{i, j\}$ of (purported) qubits. The use of a “dummy question” was first introduced in [IKM09] for the purposes of inducing approximate commutation, and we show that it can be effectively leveraged in our context as well.
Although our main result is based on the CHSH test and the use of EPR states, we show in
Appendix A that the result can be generalized to separate qubits via any two-qubit state entangled
across $\mathcal{H}_A \otimes \mathcal{H}_B$. It is an open question whether similar results can be attained based on higher-dimensional partially entangled states.
Recently, several other entanglement tests have been given. Coudron and Natarajan [CN16] and Coladangelo [Col17] study parallel repetition of the Magic Square game [Mer90, Per90], achieving robustness of $O(n^2\sqrt{\delta})$ and $O(n^{3/2}\sqrt{\delta})$, respectively. The optimal success probability for this game is 1, which allows for a more efficient analysis in comparison to the CHSH game. They also
require fewer experiments to achieve certain statistical confidence. Natarajan and Vidick [NV17]
leverage a quantum version of the linearity test by Blum et al. [BLR93], and obtain robustness
that is independent of n. Their test is rather complex and requires the verifier to choose among
exponentially many possible questions. This was recently improved [NV18] to a test that achieves
simultaneously $\mathrm{poly}(n)$ questions and constant robustness ($O(\sqrt{\delta})$). Subsequently to our work,