Data re-uploading for a universal quantum classifier

Adrián Pérez-Salinas 1,2, Alba Cervera-Lierta 1,2, Elies Gil-Fuster 3, and José I. Latorre 1,2,4,5

1 Barcelona Supercomputing Center
2 Institut de Ciències del Cosmos, Universitat de Barcelona, Barcelona, Spain
3 Dept. Física Quàntica i Astrofísica, Universitat de Barcelona, Barcelona, Spain
4 Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands
5 Center for Quantum Technologies, National University of Singapore, Singapore
A single qubit provides suﬃcient compu-
tational capabilities to construct a universal
quantum classiﬁer when assisted with a clas-
sical subroutine. This fact may be surpris-
ing since a single qubit only oﬀers a simple
superposition of two states and single-qubit
gates only make a rotation in the Bloch sphere.
The key ingredient to circumvent these limita-
tions is to allow for multiple data re-uploading.
A quantum circuit can then be organized as
a series of data re-uploading and single-qubit
processing units. Furthermore, both data re-uploading and measurements can accommodate multiple dimensions in the input and several categories in the output, to conform to a universal quantum classifier. The extension
of this idea to several qubits enhances the eﬃ-
ciency of the strategy as entanglement expands
the superpositions carried along with the clas-
siﬁcation. Extensive benchmarking on diﬀer-
ent examples of the single- and multi-qubit
quantum classiﬁer validates its ability to de-
scribe and classify complex data.
1 Introduction
Quantum circuits that make use of a small number of quantum resources are of utmost importance to the field of quantum computation. Indeed, algorithms that need few qubits may prove relevant even if they do not attempt any quantum advantage, as they may be useful parts of larger circuits.
A reasonable question to ask is what is the lower
limit of quantum resources needed to achieve a given
computation. A naive estimation for the quantum
cost of a new proposed quantum algorithm is often
made based on analogies with classical algorithms. But this may be misleading, as classical computers can use memory in a rather different way than quantum computers do. The question then turns to
the more reﬁned problem of establishing the absolute
minimum of quantum resources for a problem to be
solved.
We shall here explore the power and minimal needs
of quantum circuits assisted with a classical subrou-
tine to carry out a general supervised classiﬁcation
task, that is, the minimum number of qubits, quan-
tum operations and free parameters to be optimized
classically. Three elements in the computation need
renewed attention. The obvious ﬁrst concern is to
ﬁnd a way to upload data in a quantum computer.
Then, it is necessary to ﬁnd the optimal processing
of information, followed by an optimal measurement
strategy. We shall revisit these three issues in turn.
The non-trivial step we take here is to combine the
ﬁrst two, which is data uploading and processing.
There exist several strategies to design a quantum classifier. In general, they are inspired by well-known classical techniques such as artificial neural networks [1–3] or kernel methods used in classical machine learning [4–10]. Some of these proposals [4–6] encode the data values into a quantum state amplitude, which is manipulated afterward. These approaches need an efficient way to prepare and access these amplitudes. State-preparation algorithms are in general costly in terms of quantum gates and circuit depth, although some of these proposals use a specific state-preparation circuit that only requires a few single-qubit gates. The access to the states that encode the data
can be done eﬃciently by using a quantum random
access memory (QRAM) [11]. However, this is experi-
mentally challenging and the construction of a QRAM
is still under development. Other proposals exploit hybrid quantum-classical strategies [7–10]. The classical parts can be used to construct the correct encoding circuit or as a minimization method to extract the optimal parameters of the quantum circuit, such as the angles of the rotational gates. In the first case, the quantum circuit computes the hardest instances of the classical classification algorithm as, for example, the inner products needed to obtain a kernel matrix. In the second case, the data is classified directly by using a parametrized quantum circuit, whose variables are used to construct a cost function that should be minimized classically. This last strategy is better suited to Noisy Intermediate-Scale Quantum (NISQ) computation since, in general, it requires short-depth circuits, and its variational core makes it more resistant to experimental errors. Our proposal belongs to this last category, the parametrized quantum
Accepted in Quantum 2020-01-27, click title to verify. Published under CC-BY 4.0. 1
arXiv:1907.02085v2 [quant-ph] 30 Jan 2020
classiﬁers.
A crucial part of a quantum classiﬁcation algorithm
is how data is encoded into the circuit. Proposals
based on kernel methods design an encoding circuit
which implements a feature map from the data space to the qubits' Hilbert space. The construction of this
quantum feature map may vary depending on the al-
gorithm, but common strategies make use of the quan-
tum Fourier transform or introduce data in multiple
qubits using one- and two-qubit gates [9, 10]. Both
the properties of the tensor product and the entangle-
ment generated in those encoding circuits capture the
non-linearities of the data. In contrast, we argue that
there is no need to use highly sophisticated encoding
circuits nor a signiﬁcant number of qubits to intro-
duce these non-linearities. Single-qubit rotations ap-
plied multiple times along the circuit generate highly
non-trivial functions of the data values. The main dif-
ference between our approach and the ones described
above is that the circuit is not divided between the
encoding and processing parts, but implements both
multiple times along the algorithm.
Data re-uploading can be seen as a way to circumvent the limitations imposed by the no-cloning theorem. Quantum computers cannot copy data, but classical devices can. For instance, a neural network takes the same input many times when processing the data in the hidden-layer neurons. An analogous quantum neural network can only use quantum data once. Therefore, it makes sense to re-upload classical data along a quantum computation to bypass this limitation on the quantum circuit. By following this line of thought, we present an equivalence between data re-uploading and the Universal Approximation Theorem applied to artificial neural networks [12]. Just as a network composed of a single hidden layer with enough neurons can reproduce any continuous function, a single-qubit classifier can, in principle, achieve the same by re-uploading the data enough times.
The single-qubit classiﬁer illustrates the computa-
tional power that a single qubit can handle. This pro-
posal is to be added to other few-qubit benchmarks
in machine learning [13]. The input redundancy has
also been proposed to construct complex encoding in
parametrized quantum circuits and in the construc-
tion of quantum feature maps [10, 14]. These and
other proposals mentioned in the previous paragraphs
are focused on representing classically intractable or
very complex kernel functions with few qubits. On the contrary, the focus of this work is to distill the minimal amount of quantum resources, quantified in terms of the number of qubits and unitary operations, needed for a given classification task. The main result of this work
is, indeed, to show that there is a trade-oﬀ between
the number of qubits needed to perform classiﬁcation
and multiple data re-uploading. That is, we may use
fewer qubits at the price of re-entering data several
times along the quantum computation.
We shall illustrate the power of a single- and multi-
qubit classiﬁers with data re-uploading with a series
of examples. First, we classify points in a plane that
is divided into two areas. Then, we extend the num-
ber of regions on a plane to be classiﬁed. Next, we
consider the classiﬁcation of multi-dimensional pat-
terns and, ﬁnally, we benchmark this quantum clas-
siﬁer with non-convex ﬁgures. For every example,
we train a parametrized quantum circuit that car-
ries out the task and we analyze its performance in
terms of the circuit architecture, i.e., for single- and
multi-qubit classiﬁers with and without entanglement
between qubits.
This paper is structured as follows. First, in Sec-
tion 2, we present the basic structure of a single-qubit
quantum classiﬁer. Data and processing parameters
are uploaded and re-uploaded using one-qubit gen-
eral rotations. For each data point, the ﬁnal state
of the circuit is compared with the target state as-
signed to its class, and the free parameters of the cir-
cuit are updated accordingly using a classical mini-
mization algorithm. Next, in Section 3, we motivate
the data re-uploading approach by using the Univer-
sal Approximation Theorem of artiﬁcial neural net-
works. In Section 4, we introduce the extension of
this classiﬁer to multiple qubits. Then, in Section 5,
we detail the minimization methods used to train the
quantum classiﬁers. Finally, in Section 6, we bench-
mark single- and multi-qubit quantum classiﬁers de-
ﬁned previously with problems of diﬀerent dimensions
and complexity and compare their performance with respect to classical classification techniques. The conclusions of this proposal for a quantum classifier are presented in Section 7.
2 Structure of a single-qubit quantum
classiﬁer
The global structure of any quantum circuit can be
divided into three elements: uploading of informa-
tion onto a quantum state, processing of the quan-
tum state, and measurement of the ﬁnal state. It is
far from obvious how to implement each of these ele-
ments optimally to perform a speciﬁc operation. We
shall now address them one at a time for the task of
classiﬁcation.
2.1 Re-uploading classical information
To load classical information onto a quantum circuit is
a highly non-trivial task [4]. A critical example is the
processing of big data. While there is no in-principle
obstruction to upload large amounts of data onto a
state, it is not obvious how to do it.
The problem we address here is not related to a
large amount of data. It is thus possible to consider a
quantum circuit where all data are loaded in the coefficients of the initial wave function [8, 9, 13–15]. In the simplest of cases, data are uploaded as rotations of
qubits in the computational basis. A quantum circuit
would then follow that should perform some classiﬁ-
cation.
This strategy would be insufficient to create a universal quantum classifier with a single qubit. A first limitation is that a single qubit has only two degrees of freedom, thus allowing the representation of data in a two-dimensional space only. No quantum classifier in higher dimensions can be created if this architecture is to be used. A second limitation is that, once data
is uploaded, the only quantum circuit available is a
rotation in the Bloch sphere. It is easy to prove that
a single rotation cannot capture any non-trivial sepa-
ration of patterns in the original data.
We need to turn to a diﬀerent strategy, which turns
out to be inspired by neural networks. In the case of
feed-forward neural networks, data are entered in a
network in such a way that they are processed by sub-
sequent layers of neurons. The key idea is to observe
that the original data are processed several times, once for each neuron in the first hidden layer. Strictly
speaking, data are re-uploaded onto the neural net-
work. If neural networks were aﬀected by some sort
of no-cloning theorem, they could not work as they
do. Coming back to the quantum circuit, we need to
design a new architecture where data can be intro-
duced several times into the circuit.
The central idea to build a universal quantum clas-
siﬁer with a single qubit is thus to re-upload classical
data along with the computation. Following the com-
parison with an artiﬁcial neural network with a single
hidden layer, we can represent this re-upload diagram-
matically, as it is shown in Figure 1. Data points in a
neural network are introduced in each processing unit,
represented with squares, which are the neurons of the
hidden layer. After the neurons process these data, a
ﬁnal neuron is necessary to construct the output to be
analyzed. Similarly, in the single-qubit quantum clas-
siﬁer, data points are introduced in each processing
unit, which this time corresponds to a unitary rota-
tion. However, each processing unit is aﬀected by the
previous ones and re-introduces the input data. The
ﬁnal output is a quantum state to be analyzed as it
will be explained in the next subsections.
The explicit form of this single-qubit classiﬁer is
shown in Figure 2. Classical data are re-introduced several times in a sequence interspersed with processing units. We shall consider the introduction of data
as a rotation of the qubit. This means that data from
three-dimensional space, ~x, can be re-uploaded using
unitaries that rotate the qubit U (~x). Later processing
units will also be rotations as discussed later on. The
whole structure needs to be trained in the classiﬁca-
tion of patterns.
As we shall see, the performance of the single-qubit
(a) Neural network (b) Quantum classiﬁer
Figure 1: Simpliﬁed working schemes of a neural network
and a single-qubit quantum classiﬁer with data re-uploading.
In the neural network, every neuron receives input from all
neurons of the previous layer. In contrast with that, the
single-qubit classiﬁer receives information from the previous
processing unit and the input (introduced classically). It pro-
cesses everything all together and the ﬁnal output of the
computation is a quantum state encoding several repetitions
of input uploads and processing parameters.
quantum classiﬁer will depend on the number of re-
uploads of classical data. This fact will be explored
in the results section.
2.2 Processing along re-uploading
The single-qubit classifier belongs to the category of parametrized quantum circuits. The performance of the circuit is quantified by a figure of merit, some specific $\chi^2$ to be minimized and defined later. We need, though, to specify the processing gates present in the circuit in terms of a classical set of parameters. Given the simple structure of a single-qubit circuit presented in Figure 2, the data is introduced in a simple rotation of the qubit, which is easy to characterize. We just need to use arbitrary single-qubit rotations $U(\phi_1, \phi_2, \phi_3) \in SU(2)$. We will write $U(\vec{\phi})$ with $\vec{\phi} = (\phi_1, \phi_2, \phi_3)$. Then, the structure of the universal quantum classifier made with a single qubit is
$$\mathcal{U}(\vec{\phi}, \vec{x}) \equiv U(\vec{\phi}_N)\, U(\vec{x}) \cdots U(\vec{\phi}_1)\, U(\vec{x}), \qquad (1)$$

which acts as

$$|\psi\rangle = \mathcal{U}(\vec{\phi}, \vec{x})\, |0\rangle. \qquad (2)$$
The final classification of patterns will come from the results of measurements on $|\psi\rangle$. We may introduce the concept of a processing layer as the combination

$$L(i) \equiv U(\vec{\phi}_i)\, U(\vec{x}), \qquad (3)$$

so that the classifier corresponds to

$$\mathcal{U}(\vec{\phi}, \vec{x}) = L(N) \cdots L(1), \qquad (4)$$
where the depth of the circuit is $2N$. The more layers, the more representation capabilities the circuit will have, and the more powerful the classifier will become. Again, this follows from the analogy to neural
[Figure 2 circuit diagrams: (a) Original scheme, with alternating $U(\vec{x})$ and $U(\vec{\phi}_i)$ gates grouped into layers $L(1), \ldots, L(N)$ acting on $|0\rangle$; (b) Compressed scheme, with each layer a single gate $U(\vec{\phi}_i, \vec{x})$.]
Figure 2: Single-qubit classifier with data re-uploading. The quantum circuit is divided into layer gates $L(i)$, which constitute the classifier building blocks. In the upper circuit, each of these layers is composed of a $U(\vec{x})$ gate, which uploads the data, and a parametrized unitary gate $U(\vec{\phi})$. We apply this building block $N$ times and finally compute a cost function that is related to the fidelity of the final state of the circuit with the corresponding target state of its class. This cost function may be minimized by tuning the $\vec{\phi}_i$ parameters. Eventually, data and tunable parameters can be introduced with a single unitary gate, as illustrated in the bottom circuit.
networks, where the size of the intermediate hidden
layer of neurons is critical to represent complex func-
tions.
There is a way to compactify the quantum circuit into a shorter one. This can be done if we incorporate data and processing angles in a single step. Then, a layer would only need a single rotation to introduce data and tunable parameters, i.e. $L(i) = U(\vec{\phi}, \vec{x})$. In addition, each data point can be uploaded with some weight $\vec{w}_i$. These weights will play a similar role to weights in artificial neural networks, as we will see in the next section. Altogether, each layer gate can be taken as

$$L(i) = U\!\left(\vec{\theta}_i + \vec{w}_i \circ \vec{x}\right), \qquad (5)$$

where $\vec{w}_i \circ \vec{x} = \left(w_i^1 x^1, w_i^2 x^2, w_i^3 x^3\right)$ denotes the Hadamard product of two vectors. In case the data points have dimension less than three, the remaining components of $\vec{x}$ are set to zero. Such an approach reduces the depth of the circuit by half. Further combinations of layers into fewer rotations are also possible, but the nonlinearity inherent to subsequent rotations would be lost, and the circuit would not perform as well.
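As a concrete sketch, the compressed layer of Eq. (5) and the classifier circuit of Eqs. (2) and (4) can be written in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the rotation is built from the Euler-angle decomposition used later in Eq. (12), and all function names are ours.

```python
import numpy as np

def u3(phi1, phi2, phi3):
    """General SU(2) rotation, using the Euler-angle decomposition of Eq. (12)."""
    rz = lambda a: np.diag([np.exp(1j * a / 2), np.exp(-1j * a / 2)])
    c, s = np.cos(phi1 / 2), np.sin(phi1 / 2)
    ry = np.array([[c, s], [-s, c]], dtype=complex)
    return rz(phi2) @ ry @ rz(phi3)

def layer(theta, w, x):
    """Compressed layer gate of Eq. (5): L(i) = U(theta_i + w_i ∘ x)."""
    phi = np.asarray(theta) + np.asarray(w) * np.asarray(x)  # Hadamard product
    return u3(*phi)

def classifier_state(thetas, ws, x):
    """Final state |psi> = L(N) ... L(1) |0>, Eqs. (2) and (4)."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for theta, w in zip(thetas, ws):
        psi = layer(theta, w, x) @ psi
    return psi
```

Stacking more `layer` calls corresponds to increasing $N$, which is the knob the benchmarks of Section 6 turn.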
Notice that data points are introduced linearly into the rotational gate. Non-linearities will come from the structure of these gates. We chose this encoding function as we believe it is one of the least biased ways to encode data with unknown properties. Due to the structure of single-qubit unitary gates, we will see that this encoding is particularly suited for data with rotational symmetry. Still, it can also classify other kinds of data structures. We could also apply other encoding techniques, e.g. the ones proposed in Ref. [10], but for the scope of this work, we have just tested the linear encoding strategy as a proof of concept of the performance of this quantum classifier.
It is also possible to enlarge the dimensionality of the input space in the following way. Let us extend the definition of the $i$-th layer to

$$L(i) = U\!\left(\vec{\theta}_i^{(k)} + \vec{w}_i^{(k)} \circ \vec{x}^{(k)}\right) \cdots U\!\left(\vec{\theta}_i^{(1)} + \vec{w}_i^{(1)} \circ \vec{x}^{(1)}\right), \qquad (6)$$

where each data point is divided into $k$ vectors of dimension three. In general, each unitary $U$ could absorb as many variables as there are degrees of freedom in an $SU(2)$ unitary. Each set of variables acts at a time, and all of them have been shown to the circuit after $k$ iterations. Then, the layer structure follows. The complexity of the circuit only increases linearly with the size of the input space.
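Mechanically, Eq. (6) amounts to zero-padding the input to a multiple of three and applying one sub-rotation per three-dimensional chunk. A minimal sketch (function names are ours; the rotation uses the Euler decomposition of Eq. (12)):

```python
import numpy as np

def u3(phi):
    """General SU(2) rotation (Euler decomposition of Eq. (12))."""
    p1, p2, p3 = phi
    rz = lambda a: np.diag([np.exp(1j * a / 2), np.exp(-1j * a / 2)])
    c, s = np.cos(p1 / 2), np.sin(p1 / 2)
    return rz(p2) @ np.array([[c, s], [-s, c]], dtype=complex) @ rz(p3)

def extended_layer(thetas, ws, x):
    """Layer of Eq. (6): split x into k three-dimensional chunks (zero-padded)
    and apply one sub-rotation U(theta^(j) + w^(j) ∘ x^(j)) per chunk."""
    x = np.asarray(x, dtype=float)
    k = -(-len(x) // 3)                    # ceil(len(x) / 3)
    padded = np.zeros(3 * k)
    padded[:len(x)] = x
    L = np.eye(2, dtype=complex)
    for theta, w, xj in zip(thetas, ws, padded.reshape(k, 3)):
        L = u3(np.asarray(theta) + np.asarray(w) * xj) @ L
    return L
```

A $d$-dimensional input needs $\lceil d/3 \rceil$ sub-rotations per layer, which is the linear scaling claimed above.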
2.3 Measurement
The quantum circuit characterized by a series of processing angles $\{\theta_i\}$ and weights $\{w_i\}$ delivers a final state $|\psi\rangle$, which needs to be measured. The results of each measurement are used to compute a $\chi^2$ that quantifies the error made in the classification. The minimization of this quantity in terms of the classical parameters of the circuit can be organized using any preferred supervised machine learning technique.
The critical point in the quantum measurement is to find an optimal way to associate outputs from the observations to target classes. The fundamental guiding principle to be used is given by the idea of maximal orthogonality of outputs [16]. This is easily established for a dichotomic classification, where one of two classes, A or B, has to be assigned to the final measurement of the single qubit. In such a case, it is possible to measure the output probabilities $P(0)$ for $|0\rangle$ and $P(1)$ for $|1\rangle$. A given pattern would be classified into class A if $P(0) > P(1)$ and into B otherwise. We may refine this criterion by introducing a bias: the pattern is classified as A if $P(0) > \lambda$, and as B otherwise. The $\lambda$ is chosen to optimize the success of classification on a training set. Results are then checked on an independent validation set.
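The dichotomic decision rule reads directly off the output probabilities. A minimal sketch (the grid-search helper for $\lambda$ is a hypothetical illustration of ours, not a procedure specified in the paper):

```python
import numpy as np

def p0(psi):
    """Probability of measuring |0> on the final single-qubit state."""
    return abs(psi[0]) ** 2

def classify(psi, lam=0.5):
    """Class A if P(0) > lambda, class B otherwise."""
    return 'A' if p0(psi) > lam else 'B'

def best_threshold(states, labels, grid=None):
    """Scan a grid of biases lambda and keep the one with the highest
    training accuracy (hypothetical helper, for illustration only)."""
    grid = np.linspace(0.0, 1.0, 101) if grid is None else grid
    accs = [np.mean([classify(s, lam=t) == y for s, y in zip(states, labels)])
            for t in grid]
    return grid[int(np.argmax(accs))]
```

The tuned $\lambda$ is then frozen and the rule is evaluated on the independent validation set.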
The assignment of classes to the output reading of a single qubit becomes an involved issue when many classes are present. For the sake of simplicity, let us mention two examples for the case of classification into four distinct classes. One possible strategy consists of dividing the probability $P(0)$ into four sectors with three thresholds: $0 \leq \lambda_1 \leq \lambda_2 \leq \lambda_3 \leq 1$. Then, the value of $P(0)$ will fall into one of them, and classification is issued. A second, more robust assignment is obtained by computing the overlap of the final state with one of the states of a label-states set. This set is to be chosen with maximal orthogonality among
Figure 3: Representation in the Bloch sphere of four and six
maximally orthogonal points, corresponding to the vertices
of a tetrahedron and an octahedron respectively. The single-
qubit classiﬁer will be trained to distribute the data points in
one of these vertices, each one representing a class.
all of them. This second method requires the maximally orthogonal points in the Bloch sphere. Figure 3 shows the particular cases that can be applied to a classification task of four and six classes. In general, a good measurement strategy may need some prior computational effort and refined tomography of the final state. Since we are proposing a single-qubit classifier, the tomography protocol will only require three measurements.
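For the four-class case, one concrete choice of maximally orthogonal label states is the vertices of a regular tetrahedron inscribed in the Bloch sphere; every pair of such states has overlap $1/3$. A sketch (this particular vertex placement is our choice, any rotated tetrahedron works equally well):

```python
import numpy as np

def bloch_state(n):
    """Qubit state cos(theta/2)|0> + e^{i phi} sin(theta/2)|1> with Bloch vector n."""
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))
    phi = np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Vertices of a regular tetrahedron inscribed in the Bloch sphere.
r2, r23 = np.sqrt(2.0), np.sqrt(2.0 / 3.0)
tetra = [np.array([0.0, 0.0, 1.0]),
         np.array([2 * r2 / 3, 0.0, -1.0 / 3]),
         np.array([-r2 / 3,  r23, -1.0 / 3]),
         np.array([-r2 / 3, -r23, -1.0 / 3])]
label_states = [bloch_state(n) for n in tetra]
```

Since $|\langle a|b\rangle|^2 = (1 + \hat{n}_a \cdot \hat{n}_b)/2$ and tetrahedron vertices satisfy $\hat{n}_a \cdot \hat{n}_b = -1/3$, all pairwise overlaps equal $1/3$.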
It is possible to interpret the single-qubit classi-
ﬁer in terms of geometry. The classiﬁer opens a 2-
dimensional Hilbert space, i.e., the Bloch sphere. As
we encode data and classiﬁcation within the param-
eters deﬁning rotations, this Hilbert space is enough
to achieve classiﬁcation. Any operation L(i) is a ro-
tation on the Bloch sphere surface. With this point
of view in mind, we can easily see that we can classify any single point using only one unitary operation: we can transport any point to any other point on the Bloch sphere simply by choosing the angles of rotation properly. However, this does not work for several data points at once, as the optimal rotation for some data points could be very inconvenient for others. If more layers are applied, though, each one performs a different rotation, and many different rotations together have the capability of enabling a feature map. Data embedded in this feature space can then be separated into classes by means of regions on the Bloch sphere.
2.3.1 A ﬁdelity cost function
We propose a very simple cost function motivated by the geometrical interpretation introduced above. We want to force the quantum states $|\psi(\vec{\theta}, \vec{w}, \vec{x})\rangle$ to be as near as possible to one particular state on the Bloch sphere. The angular distance between the label state and the data state can be measured with the relative fidelity between the two states [17]. Thus, our aim is to maximize the average fidelity between the states at the end of the quantum circuit and the label states corresponding to their class. We define the following cost function that carries out this task,

$$\chi_f^2(\vec{\theta}, \vec{w}) = \sum_{\mu=1}^{M} \left(1 - |\langle \tilde{\psi}_s | \psi(\vec{\theta}, \vec{w}, \vec{x}_\mu)\rangle|^2\right), \qquad (7)$$

where $|\tilde{\psi}_s\rangle$ is the correct label state of the $\mu$-th data point, which will correspond to one of the classes, and $M$ is the total number of training points.
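Eq. (7) translates directly into code. A minimal sketch (function names ours), assuming the final circuit states have already been computed for each training point:

```python
import numpy as np

def fidelity_cost(final_states, label_states, classes):
    """Fidelity cost of Eq. (7): sum over training points of
    1 - |<psi~_s | psi(x_mu)>|^2, with s the correct class of point mu."""
    return sum(1.0 - abs(np.vdot(label_states[s], psi)) ** 2
               for psi, s in zip(final_states, classes))
```

The cost vanishes when every final state coincides with its label state, and each misclassified-by-orthogonality point contributes at most 1.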
2.3.2 A weighted ﬁdelity cost function
We shall next deﬁne a reﬁned version of the previous
ﬁdelity cost function to be minimized. The set of
maximally orthogonal states in the Bloch sphere, i.e.,
the label states, are written as |ψ
c
i, where c is the
class. Each of these label states represents one class
for the classiﬁer. Now, we will follow the lead usually
taken in neural network classiﬁcation.
Let us define the quantity

$$F_c(\vec{\theta}, \vec{w}, \vec{x}) = |\langle \tilde{\psi}_c | \psi(\vec{\theta}, \vec{w}, \vec{x})\rangle|^2, \qquad (8)$$

where $|\tilde{\psi}_c\rangle$ is the label state of the class $c$ and $|\psi(\vec{\theta}, \vec{w}, \vec{x})\rangle$ is the final state of the qubit at the end of the circuit. This fidelity is to be compared with the expected fidelity of a successful classification, $Y_c(\vec{x})$. For example, given a four-class classification and using the vertices of a tetrahedron as label states (as shown in Figure 3), one expects $Y_s(\vec{x}) = 1$, where $s$ is the correct class, and $Y_r(\vec{x}) = 1/3$ for the other $r$ classes. In general, $Y_c(\vec{x})$ can be written as a vector with one entry equal to 1, the one corresponding to the correct class, and the others containing the overlap between the correct class label state and the other label states.
With these deﬁnitions, we can construct a cost
function which turns out to be inspired by conven-
tional cost functions in artiﬁcial neural networks. By
weighting the ﬁdelities of the ﬁnal state of the circuit
with all label states, we deﬁne the weighted ﬁdelity
cost function as
$$\chi_{wf}^2(\vec{\alpha}, \vec{\theta}, \vec{w}) = \frac{1}{2} \sum_{\mu=1}^{M} \sum_{c=1}^{C} \left(\alpha_c\, F_c(\vec{\theta}, \vec{w}, \vec{x}_\mu) - Y_c(\vec{x}_\mu)\right)^2, \qquad (9)$$
where $M$ is the total number of training points, $C$ is the total number of classes, $\vec{x}_\mu$ are the training points and $\vec{\alpha} = (\alpha_1, \ldots, \alpha_C)$ are class weights to be optimized together with the $\vec{\theta}$ and $\vec{w}$ parameters. This weighted fidelity has more parameters than the fidelity cost function, namely the weights for the fidelities.
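Eq. (9) can be sketched the same way, with the expected fidelities $Y_c$ built from the overlaps between label states as described above (function names ours):

```python
import numpy as np

def weighted_fidelity_cost(alpha, final_states, label_states, classes):
    """Weighted fidelity cost of Eq. (9):
    (1/2) sum_mu sum_c (alpha_c F_c(x_mu) - Y_c(x_mu))^2."""
    C = len(label_states)
    # Y[s, c] = overlap of the correct label state s with label state c
    # (equals 1 for c = s).
    Y = np.array([[abs(np.vdot(label_states[c], label_states[s])) ** 2
                   for c in range(C)] for s in range(C)])
    cost = 0.0
    for psi, s in zip(final_states, classes):
        F = np.array([abs(np.vdot(label_states[c], psi)) ** 2 for c in range(C)])
        cost += 0.5 * np.sum((np.asarray(alpha) * F - Y[s]) ** 2)
    return cost
```

Note that each training point now requires the fidelity with all $C$ label states, not just the correct one, which is the extra measurement cost discussed next.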
The main difference between the weighted fidelity cost function of Eq. (9) and the fidelity cost function of Eq. (7) is how many overlaps need to be computed. The $\chi_{wf}^2$ requires as many fidelities as classes every time we run the optimization subroutine, while the $\chi_f^2$ needs just one. This is not such a big difference for a few classes and only one qubit. It is possible to measure any state with a full tomography process which, for one qubit, is achievable. However, for many different classes, we expect that one measurement will be more efficient than many.
Besides the weighted fidelity cost function being costlier than the fidelity cost function, there is another qualitative difference between the two. The fidelity cost function forces the parameters to reach the maximum in fidelities. Loosely speaking, it moves the qubit state to where it should be. The weighted fidelity cost function forces the parameters to be close to a specified configuration of fidelities: it moves the qubit state to where it should be and away from where it should not. Therefore, we expect the weighted fidelity cost function to work better than the fidelity cost function. Moreover, the extra cost in terms of the number of parameters of the weighted fidelity cost function only affects the classical minimization part of the algorithm. In a sense, we are increasing the classical processing to reduce the quantum resources required by the algorithm, i.e. the number of quantum operations (layers). This fact gains importance in the NISQ era.
3 Universality of the single-qubit classifier
After analyzing several classiﬁcation problems, we ob-
tain evidence that the single-qubit classiﬁer intro-
duced above can approximate any classiﬁcation func-
tion up to arbitrary precision. In this section, we pro-
vide the motivation for this statement based on the
Universal Approximation Theorem (UAT) of artiﬁcial
neural networks [12].
3.1 Universal Approximation Theorem
Theorem— Let $I_m = [0,1]^m$ be the $m$-dimensional unit cube and $C(I_m)$ the space of continuous functions on $I_m$. Let $\varphi: \mathbb{R} \to \mathbb{R}$ be a nonconstant, bounded and continuous function and $f: I_m \to \mathbb{R}$ a function. Then, for every $\varepsilon > 0$, there exist an integer $N$ and a function $h: I_m \to \mathbb{R}$, defined as

$$h(\vec{x}) = \sum_{i=1}^{N} \alpha_i\, \varphi\!\left(\vec{w}_i \cdot \vec{x} + b_i\right), \qquad (10)$$

with $\alpha_i, b_i \in \mathbb{R}$ and $\vec{w}_i \in \mathbb{R}^m$, such that $h$ is an approximate realization of $f$ with precision $\varepsilon$, i.e.,

$$|h(\vec{x}) - f(\vec{x})| < \varepsilon \qquad (11)$$

for all $\vec{x} \in I_m$.
In artiﬁcial neural networks, ϕ is the activation
function, ~w
i
are the weights for each neuron, b
i
are the
biases and α
i
are the neuron weights that construct
the output function. Thus, this theorem establishes
that it is possible to reconstruct any continuous func-
tion with a single layer neural network of N neurons.
The proof of this theorem for the sigmoidal activation
function can be found in Ref. [18]. This theorem was
generalized for any nonconstant, bounded and contin-
uous activation function in Ref. [12]. Moreover, Ref.
[12] presents the following corollary of this theorem:
ϕ could be a nonconstant ﬁnite linear combination of
periodic functions, in particular, ϕ could be a non-
constant trigonometric polynomial.
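The trigonometric corollary can be illustrated numerically: with a cosine activation and frozen random weights and biases, a least-squares fit of the output coefficients $\alpha_i$ already approximates a smooth target closely. This is only an illustration of the corollary, not its constructive proof; the target function, parameter ranges and random seed are choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def trig_layer_fit(f, n_neurons, x):
    """Fit h(x) = sum_i alpha_i cos(w_i x + b_i), i.e. Eq. (10) with phi = cos,
    by solving for the alpha_i in least squares with random frozen w_i, b_i."""
    w = rng.uniform(-5.0, 5.0, n_neurons)
    b = rng.uniform(-np.pi, np.pi, n_neurons)
    design = np.cos(np.outer(x, w) + b)          # one column per 'neuron'
    alpha, *_ = np.linalg.lstsq(design, f(x), rcond=None)
    return design @ alpha

x = np.linspace(0.0, 1.0, 200)
target = lambda t: t ** 2
max_err = np.max(np.abs(trig_layer_fit(target, 50, x) - target(x)))
```

Increasing the number of "neurons" (the analogue of circuit layers, as argued below) systematically tightens the approximation.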
3.2 Universal Quantum Circuit Approximation
The single-qubit classifier is divided into several layers which are general $SU(2)$ rotational matrices. There exist many possible decompositions of an $SU(2)$ rotational matrix. In particular, we use

$$U(\vec{\phi}) = U(\phi_1, \phi_2, \phi_3) = e^{i \phi_2 \sigma_z / 2}\, e^{i \phi_1 \sigma_y / 2}\, e^{i \phi_3 \sigma_z / 2}, \qquad (12)$$

where $\sigma_i$ are the conventional Pauli matrices. Using the $SU(2)$ group composition law, we can rewrite the above parametrization in a single exponential,

$$U(\vec{\phi}) = e^{i\, \vec{\omega}(\vec{\phi}) \cdot \vec{\sigma}}, \qquad (13)$$

with $\vec{\omega}(\vec{\phi}) = \left(\omega_1(\vec{\phi}), \omega_2(\vec{\phi}), \omega_3(\vec{\phi})\right)$ and

$$\omega_1(\vec{\phi}) = d\, \mathcal{N} \sin\left((\phi_2 - \phi_3)/2\right) \sin\left(\phi_1/2\right), \qquad (14)$$

$$\omega_2(\vec{\phi}) = d\, \mathcal{N} \cos\left((\phi_2 - \phi_3)/2\right) \sin\left(\phi_1/2\right), \qquad (15)$$

$$\omega_3(\vec{\phi}) = d\, \mathcal{N} \sin\left((\phi_2 + \phi_3)/2\right) \cos\left(\phi_1/2\right), \qquad (16)$$

where $\mathcal{N} = \left(\sqrt{1 - \cos^2 d}\right)^{-1}$ and $\cos d = \cos\left((\phi_2 + \phi_3)/2\right) \cos\left(\phi_1/2\right)$.
The single-qubit classifier codifies the data points into the $\vec{\phi}$ parameters of the unitary gate $U$. In particular, we can re-upload data together with the tunable parameters as defined in Eq. (5), i.e.

$$\vec{\phi}(\vec{x}) = \left(\phi_1(\vec{x}), \phi_2(\vec{x}), \phi_3(\vec{x})\right) = \vec{\theta} + \vec{w} \circ \vec{x}. \qquad (17)$$

Thus,

$$U(\vec{x}) = U_N(\vec{x})\, U_{N-1}(\vec{x}) \cdots U_1(\vec{x}) = \prod_{i=1}^{N} e^{i\, \vec{\omega}(\vec{\phi}_i(\vec{x})) \cdot \vec{\sigma}}. \qquad (18)$$

Next, we apply the Baker-Campbell-Hausdorff (BCH) formula [19] to the above equation,

$$U(\vec{x}) = \exp\!\left[i \sum_{i=1}^{N} \vec{\omega}(\vec{\phi}_i(\vec{x})) \cdot \vec{\sigma} + \mathcal{O}_{\text{corr}}\right]. \qquad (19)$$
Notice that the remaining BCH terms $\mathcal{O}_{\text{corr}}$ are also proportional to Pauli matrices due to $[\sigma_i, \sigma_j] = 2i\, \epsilon_{ijk}\, \sigma_k$.
Each $\vec{\omega}$ term is a trigonometric function: nonconstant, bounded and continuous. Then

$$\sum_{i=1}^{N} \vec{\omega}(\vec{\phi}_i(\vec{x})) = \sum_{i=1}^{N} \left(\omega_1(\vec{\phi}_i(\vec{x})), \omega_2(\vec{\phi}_i(\vec{x})), \omega_3(\vec{\phi}_i(\vec{x}))\right) = \sum_{i=1}^{N} \left(\omega_1(\vec{\theta}_i + \vec{w}_i \circ \vec{x}), \omega_2(\vec{\theta}_i + \vec{w}_i \circ \vec{x}), \omega_3(\vec{\theta}_i + \vec{w}_i \circ \vec{x})\right) = \left(f_1(\vec{x}), f_2(\vec{x}), f_3(\vec{x})\right). \qquad (20)$$
We still have to deal with the remaining terms $\mathcal{O}_{\text{corr}}$ of the BCH expansion. Instead of applying such an expansion, we can use again the $SU(2)$ group composition law to obtain the analytical formula $U(\vec{x}) = e^{i\, \vec{\xi}(\vec{x}) \cdot \vec{\sigma}}$, where $\vec{\xi}(\vec{x})$ will be an inextricably trigonometric function of $\vec{x}$. The $\mathcal{O}_{\text{corr}}$ terms are proportional to $\vec{\sigma}$ matrices, so $\mathcal{O}_{\text{corr}} = \vec{\varrho}(\vec{x}) \cdot \vec{\sigma}$ for some function $\vec{\varrho}(\vec{x})$. Then,

$$U(\vec{x}) = e^{i\, \vec{\xi}(\vec{x}) \cdot \vec{\sigma}} = e^{i\, \vec{f}(\vec{x}) \cdot \vec{\sigma} + i\, \vec{\varrho}(\vec{x}) \cdot \vec{\sigma}}. \qquad (21)$$

Thus, the $\mathcal{O}_{\text{corr}}$ terms can be absorbed in $\vec{f}(\vec{x})$.
For each data point $\vec{x}$, we obtain a final state that will contain these $\vec{\xi}(\vec{x})$ functions. With all training points, we construct a cost function that can include new parameters $\alpha_c$ for each class if we use the weighted fidelity cost function of Eq. (9). The function obtained from the combination of $\vec{\xi}(\vec{x})$ and $\alpha_c$ is expected to be complex enough to represent almost any continuous function. However, more parameters are necessary to match this argument with the UAT expression.
If we compare the parameters of the UAT with the single-qubit circuit parameters, the $\vec{w}_i$ correspond to the weights, the $\vec{\theta}_i$ to the biases $b_i$, the number of layers $N$ of the quantum classifier to the number of neurons in the hidden layer, and the $\vec{\omega}$ functions to the activation functions $\varphi$.
We have explained why it is necessary to re-upload the data at each layer and why a single qubit could be a universal classifier. As has been stated before, an artificial neural network introduces the data points into each hidden neuron, weights them and adds some bias. Here we cannot just copy each data point, because of the no-cloning theorem, so we have to re-upload it at each layer.
4 From single- to multi-qubit quantum
classiﬁer
The single-qubit classifier cannot provide any quantum advantage with respect to classical classification techniques such as artificial neural networks. In the previous sections, we have defined a quantum mechanical version of a neural network with a single hidden layer. In general, a huge number of hidden neurons is necessary to approximate a target function with a single layer. To circumvent this inconvenience, more hidden layers are introduced, leading eventually to the concept of deep neural networks.
Building on the single-qubit classifier formalism introduced in the previous sections, we propose its generalization to more qubits. The introduction of multiple qubits may improve the performance of this quantum classifier, much as additional hidden layers improve the classification performance of an artificial neural network. By introducing entanglement between these qubits, we reduce the number of layers of our classifier and propose a quantum classification method that could achieve a quantum advantage.
Figure 1 shows the analogy between a neural net-
work with a single hidden layer and a single-qubit
classiﬁer. The generalization of this analogy is not
so obvious. A multi-qubit classiﬁer without entan-
glement could have some similarities with a convolu-
tional neural network, where each qubit could repre-
sent a neural network by itself. However, it is not clear
if the introduction of entanglement between qubits
can be understood as a deep neural network archi-
tecture. The discussion around this analogy as well
as an extended study of the performance of a multi-
qubit classiﬁer is beyond the scope of this work. In
the next subsections, we present a general proposal
for a multi-qubit classiﬁer which we compare with the
single-qubit one in Section 6.
4.1 Measurement strategy and cost function
for a multi-qubit classiﬁer
With a single-qubit classifier, the measurement strategy consisting of comparing the final state of the circuit with a pre-defined target state is feasible: experimentally, one needs a quantum state tomography protocol of only three measurements. However, if more qubits are considered, tomography protocols become exponentially expensive in terms of the number of measurements.
We propose two measurement strategies for a multi-qubit classifier. The first is the natural generalization of the single-qubit strategy, although it becomes unrealizable for a large number of qubits: we compare the final state of the circuit with one of the states of the computational basis, one for each class. The second strategy consists of focusing on one qubit and associating a class with its state. This is similar to previous proposals of binary multi-qubit classifiers [7], although we add the possibility of multiclass classification by introducing several thresholds (see Section 2).
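The thresholding in the second strategy can be made concrete. The sketch below is our own illustration (the equal-width choice of thresholds is an assumption; in practice they could be tuned as discussed in Section 2): the probability of measuring $|0\rangle$ on the readout qubit is split into one interval per class.

```python
def assign_class(p0, num_classes):
    """Assign a class from the probability p0 of measuring |0> on the
    readout qubit, splitting [0, 1] into num_classes equal intervals."""
    idx = int(p0 * num_classes)
    return min(idx, num_classes - 1)  # p0 == 1.0 belongs to the last class
```

With `num_classes = 2` this reduces to the usual binary readout with a single threshold at $p_0 = 1/2$.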
Another element that should be adapted is the definition of the cost function; in particular, we use a different cost function for each of the strategies described above.
For the ﬁrst strategy, we use the ﬁdelity cost func-
tion of Eq. (7). Its generalization to more qubits is
straightforward. However, the orthogonal states used
Accepted in Quantum 2020-01-27, click title to verify. Published under CC-BY 4.0. 7
[Figure 4 circuit diagrams: (a) ansatz with no entanglement; (b) ansatz with entanglement.]
Figure 4: Two-qubit quantum classifier circuit without entanglement (top circuit) and with entanglement (bottom circuit). Here, each layer includes a rotation with data re-uploading in both qubits, plus a CZ gate if there is entanglement. The exception is the last layer, which has no CZ gate associated with it. For a fixed number of layers, the number of parameters to be optimized is twice that needed for a single-qubit classifier.
for a multi-qubit classiﬁer are taken as the computa-
tional basis states. A more sophisticated set of states
could be considered to improve the performance of
this method.
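For the first strategy, the target states can be written down directly. A small helper (ours, for illustration only) produces the computational-basis target state for class $c$:

```python
import numpy as np

def basis_target(c, num_qubits):
    """Target state for class c: the c-th computational-basis state
    of a num_qubits register."""
    psi = np.zeros(2 ** num_qubits, dtype=complex)
    psi[c] = 1.0
    return psi

# e.g. class 2 of a two-qubit classifier is the state |10>
target = basis_target(2, 2)
```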
For the second strategy, we use the weighted ﬁdelity
cost function. As stated above, we just focus on one
qubit, thus
\[
F_{c,q}(\vec{\theta}, \vec{w}, \vec{x}) = \langle \tilde{\psi}_c |\, \rho_q(\vec{\theta}, \vec{w}, \vec{x})\, | \tilde{\psi}_c \rangle, \tag{22}
\]
where $\rho_q$ is the reduced density matrix of the qubit to be measured. Then, the weighted fidelity cost function can be adapted as
\[
\chi^2_{wf}(\vec{\alpha}, \vec{\theta}, \vec{w}) = \frac{1}{2} \sum_{\mu=1}^{M} \sum_{c=1}^{C} \sum_{q=1}^{Q} \Big( \alpha_{c,q}\, F_{c,q}(\vec{\theta}, \vec{w}, \vec{x}_\mu) - Y_c(\vec{x}_\mu) \Big)^2, \tag{23}
\]
where we average over all Q qubits that form the classifier. Alternatively, we can measure just one of these qubits, reducing the number of parameters to be optimized.
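Once the fidelities are tabulated, Eq. (23) is straightforward to evaluate. A vectorized sketch (our own; the array layout is an assumption, not the paper's code):

```python
import numpy as np

def weighted_fidelity_cost(alpha, F, Y):
    """chi^2_wf of Eq. (23).
    alpha: (C, Q) class/qubit weights alpha_{c,q}
    F:     (M, C, Q) fidelities F_{c,q}(theta, w, x_mu)
    Y:     (M, C) target fidelities Y_c(x_mu)
    """
    residual = alpha[None, :, :] * F - Y[:, :, None]   # broadcast over mu and q
    return 0.5 * np.sum(residual ** 2)
```

During training, `alpha` is optimized together with the circuit parameters that determine `F`.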
4.2 Quantum circuits examples
The definition of a multi-qubit quantum classifier circuit is as unconstrained as the definition of a multi-layer neural network. In artificial neural networks, it is far from obvious how many hidden layers and neurons per layer are needed to perform a given task; in general, this is problem-dependent. For a multi-qubit quantum classifier, there is an extra degree of freedom in the circuit design: how to introduce the entanglement. This is precisely an open problem in parametrized quantum circuits: to find a
[Figure 5 circuit diagrams: (a) ansatz with no entanglement; (b) ansatz with entanglement.]
Figure 5: Four-qubit quantum classifier circuits. Without entanglement (top circuit), each layer is composed of four parallel rotations. With entanglement (bottom circuit), each layer includes a parallel rotation and two parallel CZ gates. The CZ gates alternate in each layer between the (1)-(2) and (3)-(4) qubit pairs and the (2)-(3) and (1)-(4) qubit pairs. The exception is the last layer, which contains no CZ gates. For a fixed number of layers, the number of parameters to be optimized is four times that needed for a single-qubit classifier.
correct ansatz for the entangling structure of the cir-
cuit.
Figures 4 and 5 show the explicit circuits used in this work. For a two-qubit classifier without entanglement, and similarly for a four-qubit classifier, we identify each layer with parallel rotations on all qubits. We introduce the entanglement using CZ gates between rotations, which are absorbed into the definition of a layer. For a two-qubit classifier with entanglement, we apply a CZ gate after each rotation, with the exception of the last layer. For a four-qubit classifier, two CZ gates are applied after each rotation, alternating between the (1)-(2) and (3)-(4) qubit pairs and the (2)-(3) and (1)-(4) qubit pairs.
The number of parameters needed for the optimization is twice that of a single-qubit classifier for the two-qubit classifier and four times for the four-qubit classifier. For N layers, the circuit depth is N for the non-entangling classifiers and 2N for the entangling classifiers.
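The gate layout just described can be written as a simple schedule. The sketch below is our own bookkeeping helper (with 0-indexed qubits, a convention we introduce), listing the gates of the entangling ansatze of Figures 4 and 5:

```python
def entangling_schedule(num_qubits, num_layers):
    """Gate schedule for the entangling ansatz: parallel rotations in
    every layer, CZ gates after every layer except the last; for four
    qubits the CZ pairs alternate between (0,1),(2,3) and (1,2),(0,3)."""
    schedule = []
    for layer in range(1, num_layers + 1):
        schedule.append(("rotations", layer))      # one parametrized rotation per qubit
        if layer < num_layers:                     # last layer carries no CZ gates
            if num_qubits == 2:
                schedule.append(("CZ", [(0, 1)]))
            elif num_qubits == 4:
                pairs = [(0, 1), (2, 3)] if layer % 2 == 1 else [(1, 2), (0, 3)]
                schedule.append(("CZ", pairs))
    return schedule
```

The depth pattern matches the text: N rotation slots plus N-1 entangling slots, i.e. roughly 2N for the entangling classifiers.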
5 Minimization methods
The practical training of a parametrized single-qubit
or multi-qubit quantum classiﬁer needs minimization
in the parameter space describing the circuit. This
is often referred to as a hybrid algorithm, where classical and quantum logic coexist and benefit from one another. To be precise, the set of $\{\theta_i\}$ angles and $\{w_i\}$ weights, together with the $\alpha_{q,l}$ parameters if applicable, forms a space to be explored in search of a minimum of $\chi^2$. In parameter landscapes as large as the ones treated here, or in regular neural network classification, the appearance of local minima is ultimately unavoidable. The composition of rotation gates renders a large product of independent trigonometric functions, so our problem is overly populated with minima, and the classical minimizer can easily get trapped in a suboptimal one.
Our problem reduces to minimizing a function of many parameters. For a single-qubit classifier, the number of parameters is $(3 + d)N$, where $d$ is the dimension of the problem, i.e. the dimension of $\vec{x}$, and $N$ is the number of layers. In each layer, three of these parameters are rotation angles and the other $d$ correspond to the $\vec{w}_i$ weights. If using the weighted fidelity cost function, we add $C$ extra parameters, one for each class.
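This count fits in one line. A small helper (ours; the factor Q for multiple qubits follows the doubling/quadrupling noted in Section 4, and the C extra weights apply to the single-readout weighted fidelity cost):

```python
def num_parameters(d, N, Q=1, C=0):
    """(3 + d) parameters per layer per qubit, plus C class weights
    when the weighted fidelity cost function is used."""
    return (3 + d) * N * Q + C
```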
In principle, one does not know what the parameter landscape of the cost function to be minimized looks like. If the cost function were, for example, convex, a downhill strategy would be likely to work properly. The pure downhill strategy is known as gradient descent. In machine learning, the method commonly used is Stochastic Gradient Descent (SGD) [20]. Another well-established minimization method is L-BFGS-B [21], which has been used in classical machine learning with very good results [22].
The results we present from now on were obtained with the L-BFGS-B algorithm, as we found it accurate and relatively fast. We used open source software [23] as the core of the minimization, together with our own functions to be minimized. The minimizer is treated as a black box with its parameters set to their defaults. As this is a first attempt at constructing a single- or multi-qubit classifier, further improvements can be made to the minimization hyperparameters.
Nevertheless, we have also tested an SGD algorithm for the fidelity cost function. This algorithm was developed by us following the steps of Ref. [17]; the details can be read in Appendix A. In general, we found that the L-BFGS-B algorithm performs better than SGD, something already observed in classical neural networks: when the training set is small, it is often more convenient to use an L-BFGS-B strategy than SGD. We were forced to use small training sets due to the computational cost of our simulations. Numerical evidence for this arises when solving the problems we face for these single- and multi-qubit classifiers with standard classical machine learning libraries [22]. This can be understood with a simple argument. Both neural networks and our quantum classifier are expected to have plenty of local minima: neural networks involve huge products of nonlinear functions, while on the quantum circuit side there is nothing but trigonometric functions, so in both cases the odds of encountering local minima are large. If there are many training points, it is more likely that some of them are capable of getting the optimizer out of a local minimum; in that case, SGD is preferable because it is faster. On the contrary, when the training set is small, we have to pick an algorithm less sensitive to local minima, such as L-BFGS-B.
6 Benchmark of a single- and multi-
qubit classiﬁer
We can now tackle some classiﬁcation problems. We
will prove that a single-qubit classiﬁer can perform
a multi-class classiﬁcation for multi-dimensional data
and that a multi-qubit classiﬁer, in general, improves
these results.
We construct several classifiers with different numbers of layers. We then train the circuits with a training set of random data points to obtain the values of the free parameters $\{\theta_i\}$ and $\{w_i\}$ for each layer, and $\{\alpha_i\}$ when applicable, using the cost functions defined in Eq. (9) and Eq. (7). Then, we test the performance of each classifier with a test set independently generated and one order of magnitude larger than the training set. For the sake of reproducibility, we have fixed the same seed to generate all data points; for this reason, the test and training set points are the same for all problems. For more details, we provide the explicit code used in this work [24].
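The data-generation protocol can be sketched as follows. This is our own illustration, not the code of Ref. [24]; the training-set size and seed are hypothetical placeholders, and `label_fn` stands for whichever classification rule defines the problem (e.g. the circle rule of Section 6.1 with $r = \sqrt{2/\pi}$).

```python
import numpy as np

def make_datasets(label_fn, dim, train_size=200, seed=42):
    """Random points in [-1, 1]^dim with a fixed seed; the test set is
    one order of magnitude larger than the training set."""
    rng = np.random.default_rng(seed)
    x_train = rng.uniform(-1, 1, size=(train_size, dim))
    x_test = rng.uniform(-1, 1, size=(10 * train_size, dim))
    return (x_train, label_fn(x_train)), (x_test, label_fn(x_test))

# example labeling rule: inside/outside a circle of radius sqrt(2/pi)
circle = lambda X: (X[:, 0] ** 2 + X[:, 1] ** 2 < 2 / np.pi).astype(int)
(train_x, train_y), (test_x, test_y) = make_datasets(circle, dim=2)
```

Fixing the seed makes every classifier see exactly the same points, as required for the comparison across problems.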
We run single-, two-, and four-qubit classifiers, with and without entanglement, using the two cost functions described above. We benchmark several classifiers composed of L = 1, 2, 3, 4, 5, 6, 8 and 10 layers.
In the following subsections, we describe the partic-
ular problems addressed with these single- and multi-
qubit classiﬁers with data re-uploading. We choose
four problem types: a simple binary classiﬁcation,
a classiﬁcation of a ﬁgure with multiple patterns, a
multi-dimensional classiﬁcation and a non-convex ﬁg-
ure.
The code used to deﬁne and benchmark the single-
and multi-qubit quantum classiﬁer is open and can be
found in Ref. [24].
6.1 Simple example: classiﬁcation of a circle
Let us start with a simple example. We create a random set of data on a plane with coordinates $\vec{x} = (x_1, x_2)$, with $x_i \in [-1, 1]$. Our goal is to classify these points according to $x_1^2 + x_2^2 < r^2$, i.e. whether they are inside or outside a circle of radius $r$. The value of the radius is chosen such that the areas inside and outside it are equal, that is, $r = \sqrt{2/\pi}$, so the probability of success if we label each data point randomly