classifiers.
A crucial part of a quantum classification algorithm
is how data is encoded into the circuit. Proposals
based on kernel methods design an encoding circuit
which implements a feature map from the data space
to the qubits' Hilbert space. The construction of this
quantum feature map may vary depending on the al-
gorithm, but common strategies make use of the quan-
tum Fourier transform or introduce data in multiple
qubits using one- and two-qubit gates [9, 10]. Both
the properties of the tensor product and the entangle-
ment generated in those encoding circuits capture the
non-linearities of the data. In contrast, we argue that
there is no need to use highly sophisticated encoding
circuits nor a significant number of qubits to intro-
duce these non-linearities. Single-qubit rotations ap-
plied multiple times along the circuit generate highly
non-trivial functions of the data values. The main dif-
ference between our approach and the ones described
above is that the circuit is not divided between the
encoding and processing parts, but implements both
multiple times along the algorithm.
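As an illustration of this point, the following minimal numpy sketch (with hypothetical, untrained parameters) shows how re-uploading a data value x through several single-qubit y-rotations produces a state whose amplitudes are non-trivial trigonometric functions of x:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classifier_state(x, weights, biases):
    """Apply one re-uploading layer Ry(w_l * x + b_l) per (w_l, b_l) to |0>.

    Each layer re-introduces the same data point x, so the final
    amplitudes are trigonometric polynomials in x of growing degree.
    """
    state = np.array([1.0, 0.0])
    for w, b in zip(weights, biases):
        state = ry(w * x + b) @ state
    return state

# Hypothetical parameters for a three-layer circuit (not trained).
w = [1.3, -2.1, 0.7]
b = [0.2, 0.5, -0.4]
p0 = abs(classifier_state(0.8, w, b)[0]) ** 2  # probability of measuring |0>
```

A single layer reduces to an ordinary encoding rotation; adding layers enriches the family of functions of x the circuit can represent, which is the effect exploited below.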
Data re-uploading can be viewed as a way of cir-
cumventing the limitations imposed by the no-cloning
theorem. Quantum computers cannot copy data, but
classical devices can. For instance, a neural network
takes the same input many times when processing the
data in the hidden layer neurons. An analogous quan-
tum neural network can only use quantum data once.
Therefore, it makes sense to re-upload classical data
along a quantum computation to bypass this limita-
tion on the quantum circuit. By following this line
of thought, we present an equivalence between data
re-uploading and the Universal Approximation The-
orem applied to artificial neural networks [12]. Just
as a network composed of a single hidden layer with
enough neurons can reproduce any continuous func-
tion, a single-qubit classifier can, in principle, achieve
the same by re-uploading the data enough times.
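For reference, the Universal Approximation Theorem states that a single hidden layer of $N$ neurons with a suitable non-constant, bounded activation function $\sigma$ can approximate any continuous function $f$ on a compact set arbitrarily well as $N$ grows:

```latex
f(\vec{x}) \simeq \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(\vec{w}_i \cdot \vec{x} + b_i\right)
```

Heuristically, the repeated re-uploading layers of the single-qubit classifier play the role of the hidden neurons in this expression.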
The single-qubit classifier illustrates the computa-
tional power that a single qubit can deliver. This pro-
posal joins other few-qubit benchmarks in machine
learning [13]. Input redundancy has also been pro-
posed as a means to construct complex encodings in
parametrized quantum circuits and quantum feature
maps [10, 14]. These and
other proposals mentioned in the previous paragraphs
are focused on representing classically intractable or
very complex kernel functions with few qubits. By
contrast, the focus of this work is to distill the mini-
mal amount of quantum resources, i.e., the number of
qubits and unitary operations, needed for a given clas-
sification task. The main result of this work
is, indeed, to show that there is a trade-off between
the number of qubits needed to perform classification
and multiple data re-uploading. That is, we may use
fewer qubits at the price of re-entering data several
times along the quantum computation.
We shall illustrate the power of single- and multi-
qubit classifiers with data re-uploading through a se-
ries of examples. First, we classify points in a plane that
is divided into two areas. Then, we extend the num-
ber of regions on a plane to be classified. Next, we
consider the classification of multi-dimensional pat-
terns and, finally, we benchmark this quantum clas-
sifier with non-convex figures. For every example,
we train a parametrized quantum circuit that car-
ries out the task and we analyze its performance in
terms of the circuit architecture, i.e., for single- and
multi-qubit classifiers with and without entanglement
between qubits.
This paper is structured as follows. First, in Sec-
tion 2, we present the basic structure of a single-qubit
quantum classifier. Data and processing parameters
are uploaded and re-uploaded using one-qubit gen-
eral rotations. For each data point, the final state
of the circuit is compared with the target state as-
signed to its class, and the free parameters of the cir-
cuit are updated accordingly using a classical mini-
mization algorithm. Next, in Section 3, we motivate
the data re-uploading approach by using the Univer-
sal Approximation Theorem of artificial neural net-
works. In Section 4, we introduce the extension of
this classifier to multiple qubits. Then, in Section 5,
we detail the minimization methods used to train the
quantum classifiers. Finally, in Section 6, we bench-
mark the single- and multi-qubit quantum classifiers
defined previously on problems of different dimensions
and complexity, and compare their performance with
classical classification techniques. The conclusions of
this proposal for a quantum classifier are presented in
Section 7.
2 Structure of a single-qubit quantum
classifier
The global structure of any quantum circuit can be
divided into three elements: uploading of informa-
tion onto a quantum state, processing of the quan-
tum state, and measurement of the final state. It is
far from obvious how to implement each of these ele-
ments optimally to perform a specific operation. We
shall now address them one at a time for the task of
classification.
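The three elements above can be sketched for a single qubit as follows. This is a minimal numpy illustration, not the circuit defined later: the rotation parametrization and the mapping of data and trainable parameters to angles are illustrative assumptions.

```python
import numpy as np

def rotation(theta, phi, lam):
    """General single-qubit rotation (one common parametrization)."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)],
    ])

# 1. Uploading: encode a data point as rotation angles (illustrative choice).
x = np.array([0.3, -1.2, 0.5])
state = rotation(*x) @ np.array([1.0, 0.0])

# 2. Processing: apply a trainable rotation (hypothetical parameters).
params = np.array([0.1, 0.4, -0.2])
state = rotation(*params) @ state

# 3. Measurement: fidelity of the final state with a target class state, here |0>.
target = np.array([1.0, 0.0])
fidelity = abs(np.vdot(target, state)) ** 2
```

In a training loop, the measured fidelity would be fed to a classical minimizer that updates the processing parameters, as described below.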
2.1 Re-uploading classical information
Loading classical information onto a quantum circuit
is a highly non-trivial task [4]. A critical example is
the processing of big data. While there is no in-
principle obstruction to uploading large amounts of
data onto a quantum state, it is not obvious how to do so.
The problem we address here does not involve large
amounts of data. It is thus possible to consider a
Accepted in Quantum 2020-01-27. Published under CC-BY 4.0.