that any real arithmetic with arbitrarily high precision can be done in unit time, then an
unavoidable conclusion is that one can not only factor integers in time that is linear in the
number of digits, but also solve NP-hard problems in polynomial time.
In a seemingly
mundane problem involving real numbers, the number of required bits during the computation
can be a priori very large. For example, it is still an open problem whether one can decide the
sign of the difference of two numbers, which are sums of k square roots of positive integers
that are smaller than n, in time poly(k log n) on a Turing machine; see e.g. [12, 13] for
related discussion.
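To make the precision issue concrete, one can compare two sums of square roots at increasing precision with Python's decimal module. The instance below is hypothetical and not adversarial; it merely shows that the two sums agree in several leading digits, so a fixed low precision cannot separate them reliably.

```python
from decimal import Decimal, getcontext

def sum_sqrt(ints, digits):
    # Evaluate a sum of square roots with `digits` significant decimal digits.
    getcontext().prec = digits
    return sum(Decimal(k).sqrt() for k in ints)

# Hypothetical instance: sqrt(10)+sqrt(11) vs sqrt(5)+sqrt(18).
a, b = [10, 11], [5, 18]
diff = sum_sqrt(a, 30) - sum_sqrt(b, 30)
print(diff)  # the sums differ only beyond the third decimal place
```

Hard instances are believed to require far closer sums, which is exactly why the problem's bit complexity is open.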
The numerical instability of previous methods may be attributed to expansions of large-degree
polynomials that are constructed from the roots of another polynomial. Crudely speaking, there
are two problems in this approach. First, the polynomial expansions can be regarded as
computations of convolutions, which, when naively iterated, may suffer from numerical
instability. Second, although root finding is a well-studied problem, to use the roots to
construct another polynomial one has to understand the distribution of the roots to keep
track of the loss of precision. These problems were not addressed previously.
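The first problem can be illustrated with a toy sketch (not the algorithm of this paper): expanding a product of factors by iterated convolution is exact over the integers, but the same iteration in fixed-precision floating point silently rounds once the coefficients outgrow the mantissa.

```python
import math

def conv(a, b):
    # Convolution of coefficient sequences = multiplication of polynomials.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def binomials(n, exact=True):
    # Coefficients of (1 + x)^n via n-fold iterated convolution with (1 + x).
    one = 1 if exact else 1.0
    p = [one]
    for _ in range(n):
        p = conv(p, [one, one])
    return p
```

With exact integers the iteration recovers the binomial coefficients for any n; in float64 the results deviate once coefficients pass 2^53 (e.g. C(70, 35) is not exactly representable as a double). Real instances are worse than this toy, since sign cancellations can destroy relative accuracy rather than just the last bits.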
Here, we refine a classical algorithm to find the interspersing single-qubit rotations and bound
the number of bits required in the computation for a desired accuracy of the final answer. We
conclude that the classical preprocessing can indeed be done in polynomial time on a Tur-
ing machine. We generally adopt the methods of Ref. , but make the manipulations easier by
avoiding trigonometric polynomials. Some generalizations are obtained with simpler calcula-
tions. For numerical stability, our algorithm avoids overly small numbers by
sacrificing approximation quality in the initial stage, and replaces polynomial expansions by a
Fourier transform. These modifications enable us to handle the problems that are mentioned
above. However, it should be made clear that our refinement also requires high-precision
arithmetic. Specifically, we show that O(N log(N/ε)) bits of precision during the execution
of our algorithm suffice to produce a reliable final decomposition, where N is the degree
of the polynomial that approximates a given transformation function f of eigenvalues up to
additive error ε. Previously no such bound was known.
On a sequential (rather than parallel)
random access machine, our algorithm runs in time O(N³ polylog(N/ε)). In our rudimen-
tary numerical experiment we were able to produce a sequence of length over 2000 for the
Jacobi-Anger expansion on a laptop in a few hours of computation.
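For concreteness, the Jacobi-Anger expansion referred to here is exp(it sin θ) = Σₙ Jₙ(t) e^{inθ}, with Jₙ the Bessel functions of the first kind. A small numerical sketch (using a standard integral representation of Jₙ, not this paper's algorithm) verifies a truncated version of the identity:

```python
import numpy as np

def bessel_j(n, t, m=512):
    # Integral representation J_n(t) = (1/2pi) * int_0^{2pi} cos(n*tau - t*sin(tau)) d(tau),
    # evaluated on a uniform periodic grid (trapezoid rule, spectrally accurate here).
    tau = 2.0 * np.pi * np.arange(m) / m
    return np.mean(np.cos(n * tau - t * np.sin(tau)))

# Truncated Jacobi-Anger sum: exp(i*t*sin(theta)) ~ sum_{|n|<=30} J_n(t) e^{i*n*theta}.
t, theta = 2.0, 0.7
series = sum(bessel_j(n, t) * np.exp(1j * n * theta) for n in range(-30, 31))
exact = np.exp(1j * t * np.sin(theta))
```

The coefficients Jₙ(t) decay superexponentially for n ≫ t, which is why a modest truncation already matches the exponential to near machine precision.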
We will start by reviewing quantum signal processing in the next section, and then develop
an algorithm and analyze it. Two later short sections provide a self-contained treatment
of polynomial approximations for the Hamiltonian simulation and matrix inversion problems.
The section on Hamiltonian simulation contains some running-time data for our algorithm.
Throughout the paper, we use U(1) to denote the group of all complex numbers of unit
modulus. Sometimes we will refer to U(1) as the unit circle. As usual, i = √−1, and
X = |1⟩⟨0| + |0⟩⟨1|, Y = i|1⟩⟨0| − i|0⟩⟨1|, Z = |0⟩⟨0| − |1⟩⟨1| are the Pauli matrices.
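These outer-product definitions can be written out directly; a minimal sketch in NumPy builds the Pauli matrices exactly as above and checks their standard algebraic relations:

```python
import numpy as np

# Computational basis kets |0> and |1> as column vectors.
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# Pauli matrices from the outer products in the text.
X = ket1 @ ket0.conj().T + ket0 @ ket1.conj().T
Y = 1j * ket1 @ ket0.conj().T - 1j * ket0 @ ket1.conj().T
Z = ket0 @ ket0.conj().T - ket1 @ ket1.conj().T
```

Each Pauli matrix squares to the identity, and they satisfy the commutation relation [X, Y] = 2iZ.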
The basic idea in Ref.  is to relate the expansion of a Boolean formula in a conjunctive form into a
disjunctive form with the expansion of a polynomial from its factors. Then, the satisfiability translates to the
positivity of certain coefficients in the expansion. The variable in the polynomial is set to a sufficiently large
number, effectively encoding the entire polynomial in a single big number. An important elementary fact that
underlies the power of the arithmetic model is that we only need O(n) compositions of the squaring operation x ↦ x²
to reach a number with 2^n bits.
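This doubly exponential growth is easy to check directly (a toy sketch, not part of the paper's argument): in the arithmetic model each squaring counts as one unit-time operation, yet the number of bits doubles at every step.

```python
# n squarings of 2 produce 2**(2**n), a number with 2**n + 1 bits,
# after only O(n) unit-time operations in the arithmetic model.
x, n = 2, 5
for _ in range(n):
    x = x * x
print(x.bit_length())
```

On a Turing machine, of course, merely writing down such a number takes time exponential in n, which is the gap the arithmetic model hides.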
A polynomial ∑_j a_j x^j can be identified with a “coefficient function” j ↦ a_j. The coefficient function of
the product of two polynomials is the convolution of the two coefficient functions.
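This identity is a one-liner to check numerically with a hypothetical pair of polynomials: convolving the two coefficient functions yields the coefficient function of the product, as confirmed by evaluating both sides at a point.

```python
import numpy as np

# Coefficient functions: a[j] is the coefficient of x**j.
a = np.array([1.0, 0.0, -2.0])   # 1 - 2x^2
b = np.array([3.0, 1.0])         # 3 + x
c = np.convolve(a, b)            # coefficient function of the product

# The product polynomial agrees with the pointwise product of values.
x = 0.7
lhs = np.polyval(c[::-1], x)                          # polyval wants highest degree first
rhs = np.polyval(a[::-1], x) * np.polyval(b[::-1], x)
```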
Accepted in Quantum 2019-09-27.