Tomographically reconstructed master equations for any
open quantum dynamics
Felix A. Pollock and Kavan Modi
School of Physics & Astronomy, Monash University, Clayton, Victoria 3800, Australia
July 9, 2018
Memory effects in open quantum dynamics are often incorporated in the equation of motion through a superoperator known as the memory kernel, which encodes how past states affect future dynamics. However, the usual prescription for determining the memory kernel requires information about the underlying system-environment dynamics. Here, by deriving the transfer tensor method from first principles, we show how a memory kernel master equation, for any quantum process, can be entirely expressed in terms of a family of completely positive dynamical maps. These can be reconstructed through quantum process tomography on the system alone, either experimentally or numerically, and the resulting equation of motion is equivalent to a generalised Nakajima-Zwanzig equation. For experimental settings, we give a full prescription for the reconstruction procedure, rendering the memory kernel operational. When simulation of an open system is the goal, we show how our procedure yields a considerable advantage for numerically calculating dynamics, even when the system is arbitrarily periodically (or transiently) driven or initially correlated with its environment. Namely, we show that the long-time dynamics can be efficiently obtained from a set of reconstructed maps over a much shorter time.
1 Introduction
A large fraction of active research in physics and chemistry, both theoretical and experimental, involves characterising or modelling the dynamics of open quantum systems, usually in terms of the time evolution of the system's density operator. Such dynamics is, in general, significantly more complicated than that dictated by Schrödinger's equation. Nevertheless, many techniques have been developed to predict how open systems will evolve in time. These include schemes for tomographically determining an unknown open process [1, 2] (even going beyond density operator evolution to the full multi-time statistics of observables [3–5]), perturbative approaches to theoretically modelling dynamics [6–8], and computationally expensive numerical techniques which can exactly simulate an open system under certain circumstances [9–13]. However, the resources required for the latter often scale exponentially with the evolution time, and numerical shortcuts usually rely on assumptions about the dynamics, such as no time-dependent driving [14] or limited memory effects [15–17], meaning long-time dynamical simulations are intractable for the most general open processes. Here, among other results, we develop a method for extracting the long-time dynamics for general driven open quantum systems.

Felix A. Pollock: felix.pollock@monash.edu
Kavan Modi: kavan.modi@monash.edu
Conceptually, approaches to describing open quantum dynamics tend to take one of two perspectives, depicted in Fig. 1: First is the "dilated" or "underlying" picture, where the system in question (S) is represented as evolving with its environment (E) from an initial state ρ^{SE}_{t₀}, with a time evolution superoperator U^{SE}_{t:t₀}¹ that propagates solutions to an equation of the form

    ρ̇^{SE}_t = L^{SE}_t ρ^{SE}_t.    (1)
All the physics of the system is encoded in the superoperator L^{SE}_t, which is usually taken to evolve SE according to the von Neumann equation², but
¹ Superoperator U^{SE}_{t:t₀} need not be unitary in general; however, it must be divisible.
² That is, L^{SE}_t ρ^{SE}_t = −(i/ħ)[H^{SE}_t, ρ^{SE}_t], where H^{SE}_t is the SE Hamiltonian.
Accepted in Quantum 2018-07-05, click title to verify 1
arXiv:1704.06204v3 [quant-ph] 6 Jul 2018
could also incorporate dissipative and decohering dy-
namics through (possibly time-dependent) Lindblad
terms [18, 19].
The second picture is the operational one, where the entire dynamics is described in terms of quantities that would be, in principle, directly measurable by an experimenter with access to the system alone. Such a description contains only that information necessary to predict the evolution of the system, nothing more, and is usually more compact than a corresponding description in the dilated picture. In this picture, the dynamics may often be described in terms of a dynamical map Λ_{t:t₀}, which is completely positive and trace preserving, and evolves the initial system state ρ_{t₀} to its counterpart at time t: ρ_t = Λ_{t:t₀} ρ_{t₀}³. This description is valid when the system and environment are initially uncorrelated in the other picture.
More generally, the dynamics can be described in terms of a superchannel [2, 20] M_{t:t₀}, which maps an initial preparation superoperator A—any transformation that could be applied to the system at time t₀—to the state at later times: ρ_t = ρ_{t,A} = M_{t:t₀}[A]. The superchannel guarantees the complete positivity of the dynamics even in the presence of initial correlations with an environment. The time-evolved system state that one would obtain without performing any preparation procedure (the 'freely-evolved' state) is simply given by M_{t:t₀}[I], where I is the identity superoperator. The dynamics, Λ_{t:t₀} or M_{t:t₀}, can be reconstructed operationally by means of quantum process tomography [1, 3, 21].
While these two pictures are equivalent, it is not always clear how to convert from the operational to the dilated picture (the reverse is straightforwardly achieved by tracing out the environment, ρ_t = tr_E ρ^{SE}_t). That is, recovering the underlying SE quantities, ρ^{SE}_{t₀} and L^{SE}_t, from operationally reconstructible time-dependent maps, such as M_{t:t₀}, is neither uniquely constrained nor easy to achieve in practice; though, in the case of smooth dynamical maps, the underlying objects have been shown to be continuous and smooth themselves [22, 23]. Where such a recovery is possible, even partially, the system can be used as a probe of its environment, extracting information about its Hamiltonian directly [24, 25], or otherwise determining the parameters of a phenomenological master equation [26–29].
³ Throughout this Article we denote operators and superoperators acting on the system-environment space with superscript SE. We do not put a superscript on operators and superoperators acting on the system alone.
Partial knowledge of the global dynamics, inferred
from the evolution of the system, can also be used
to more efficiently predict future dynamics. This is
embodied in the transfer tensor method [14], where
short time system dynamics can be used to con-
struct a discretized memory kernel, in the form of
the eponymous transfer tensors. These can then be
used to propagate the system to later times, simulat-
ing the long-time dynamics with exponentially fewer
resources than other methods when the exact propa-
gation from an underlying model is computationally
complex [30, 31]. Memory kernels are often used
to formulate exact, continuous time open dynamics,
most famously in the form of the Nakajima-Zwanzig
master equation [6]. However, unless the underlying
dynamics is homogeneous in time, and the system is
initially uncorrelated with its environment, there is no
clear way to construct the memory kernel or transfer
tensors operationally, either in experiment or in nu-
merical simulations.
In this Article, we start by solving this problem
definitively: First, in Sec. 2 we derive transfer tensors
from first principles for any open quantum dynamics,
in particular those with initial correlations and gen-
erators that have arbitrary periodic or transient time
dependence. We then go on, in Sec. 3, to demonstrate
the power of the transfer tensor technique for simu-
lating long-time dynamics, explicitly recovering the
dynamical steady state of a driven, dissipative exam-
ple system. In Sec. 4, we construct a master equation
from the transfer tensors, demonstrating a direct cor-
respondence between tomographically reconstructed
dynamics and a generalised Nakajima-Zwanzig mas-
ter equation. Finally, in Sec. 5, we show how this cor-
respondence could be used in an experimental setting
to relate directly measurable quantities to properties
of the underlying dynamics.
Our results render a large class of difficult numeri-
cal problems efficiently solvable and build a new link
between underlying physics and experimentally ac-
cessible quantities. In addition, they open up the pos-
sibility of deriving approximate master equations for
the system, through operationally meaningful coarse-
graining of the reduced dynamics. Before presenting
our main results, we first introduce memory kernels and their relation to the system's equation of motion.
Memory kernel master equations
The most general form for the generator of a memo-
ryless, or Markov, quantum evolution was famously
derived by Lindblad [18], as well as Gorini, Kos-
Figure 1: Two equivalent pictures of open quantum dynamics. (a) The system, initially in joint state ρ^{SE}_{t₀} with its environment, is prepared at time t₀ with operation A. The two then evolve together according to ρ̇^{SE}_t = L^{SE}_t ρ^{SE}_t, leading to a final state ρ_t for the system that depends on the initial preparation. (b) The system state at time t can also be thought of as a function of the preparation operation at time t₀. All the uncontrollable parts of the process (enclosed within the dashed lines in the first picture), including the initial system-environment state, are encapsulated in the linear map M_{t:t₀}. When ρ^{SE}_{t₀} is uncorrelated, the dynamics can be equivalently thought of as a completely-positive trace-preserving map from the initial state of the system.
sakowski and Sudarshan [19]. In this case, the deriva-
tive of the system state at time t is a homogeneous
linear function of the state at that same time, as in
Eq. (1). For more general, non-Markovian processes,
one natural way to extend the formalism is to consider master equations in which the time derivative depends on the history of the system's evolution:

    ρ̇_t = L_t ρ_t + ∫_{t₀}^{t} ds K_{t,s} ρ_s + J_{t,t₀},    (2)

where L_t is a time-local generator, K_{t,s} is a superoperator known as the memory kernel, and J_{t,t₀} is an inhomogeneous term that can depend on the initial condition. Put simply, the memory kernel K_{t,s} quantifies how the system state at time s influences, through interaction with the environment, the evolution at later time t. Knowledge of the functions on the right hand side of Eq. (2) allows for the self-consistent solution of the system's dynamics.
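The structure of Eq. (2) can be made concrete with a toy discretisation. The following sketch is illustrative only: the generator `L`, kernel `K` and inhomogeneity `J` are hypothetical placeholders supplied by the caller, not objects taken from the paper, and the memory integral is approximated by a simple Riemann sum.

```python
import numpy as np

def propagate_memory_kernel(rho0, L, K, J, dt, n_steps):
    """Euler integration of Eq. (2) for a vectorised state:
    the derivative at time t is L(t) rho_t + dt * sum_s K(t, s) rho_s + J(t),
    with the memory integral approximated by a sum over the stored history."""
    history = [np.array(rho0, dtype=complex)]
    for n in range(n_steps):
        t = n * dt
        memory = dt * sum(K(t, m * dt) @ history[m] for m in range(n + 1))
        drho = L(t) @ history[-1] + memory + J(t)
        history.append(history[-1] + dt * drho)
    return history

# Sanity check: with no kernel or inhomogeneity, a constant generator
# reproduces simple exponential decay of the (one-dimensional) state.
traj = propagate_memory_kernel(np.array([1.0]),
                               lambda t: np.array([[-1.0]]),
                               lambda t, s: np.zeros((1, 1)),
                               lambda t: np.zeros(1),
                               dt=1e-3, n_steps=1000)
```

With a nonzero `K`, the same loop self-consistently feeds the stored history back into the derivative, which is precisely the role of the memory kernel described above.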
In recent years, several phenomenological master equations with memory kernels have been proposed [32–36]. It is always possible, given a microscopic description, to cast any open dynamics in the form of Eq. (2) using the Nakajima-Zwanzig projection superoperator technique [6] (see Sec. 4 and Appendix D), though the solution of the resulting equation is far from trivial and calculating the memory kernel for different models remains an ongoing research problem [37–39]. However, there is currently no prescription for determining the elements of Eq. (2) when a microscopic model is unavailable. In Sec. 4 we show how this can be done, in a simulation or an experiment, in terms of completely positive dynamical maps, but first we show how a discrete time memory kernel structure arises, for general open dynamics, in the form of transfer tensors.
2 Operational derivation of transfer tensors
The transfer tensor method was first proposed as an ansatz in Ref. [14] under certain restrictive conditions: (i) the underlying dynamics is time homogeneous, i.e., L^{SE}_t = L^{SE}; (ii) the initial SE state is uncorrelated: ρ^{SE}_{t₀} = ρ_{t₀} ⊗ ρ^E_{t₀}; (iii) ρ^E_{t₀} is a stationary state. Our key result is a first principles derivation of the transfer tensors for a general open evolution, without making any of the above assumptions.

We begin by introducing a set of objects {P_s, Q_s} which act as

    P_s ρ_t = Λ_{t:s} ρ_s  and  Q_s ρ_t = ρ_t − Λ_{t:s} ρ_s,    (3)
such that (P_s + Q_s)ρ_t = ρ_t. Subsequent action of these objects further breaks up the evolution, i.e., P_{s′} Λ_{t:s} = Λ_{t:s′} Λ_{s′:s}. Here, Λ_{t:s} is a completely positive dynamical map from time s to time t, Λ_{t:s} ρ = tr_E{U^{SE}_{t:s}(ρ ⊗ τ^E_s)}, where, as in the introduction, U^{SE}_{t:s} is the time evolution operator of the underlying dynamics and τ^E_s is a unit trace time-dependent density operator not necessarily related to the actual environment dynamics; we will elaborate on the interpretation of Λ_{t:s} and τ^E_s in numerical and experimental contexts in Secs. 3 and 5 respectively; for now, Λ_{t:s} can be thought of simply as a two parameter family of dynamical maps.
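The representation Λ_{t:s} ρ = tr_E{U^{SE}_{t:s}(ρ ⊗ τ^E_s)} translates directly into a numerical routine. The sketch below is illustrative: the qubit dimensions, the randomly generated joint unitary and the reference state `tau` are assumptions for the example, not quantities from the paper.

```python
import numpy as np

d_s, d_e = 2, 2  # illustrative system and environment dimensions

def dynamical_map_matrix(U, tau_E):
    """Matrix of Λρ = tr_E{U (ρ ⊗ τ^E) U†} acting on row-major vectorised ρ."""
    Lam = np.zeros((d_s * d_s, d_s * d_s), dtype=complex)
    for i in range(d_s):
        for j in range(d_s):
            basis = np.zeros((d_s, d_s), dtype=complex)
            basis[i, j] = 1.0                    # input basis matrix |i><j|
            joint = np.kron(basis, tau_E)        # ρ ⊗ τ^E
            evolved = U @ joint @ U.conj().T     # U (ρ ⊗ τ^E) U†
            # partial trace over the environment indices
            red = np.einsum('iaja->ij', evolved.reshape(d_s, d_e, d_s, d_e))
            Lam[:, i * d_s + j] = red.reshape(-1)
    return Lam

# Example: a joint unitary generated from a random Hermitian matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
vals, vecs = np.linalg.eigh((A + A.conj().T) / 2)
U = vecs @ np.diag(np.exp(-1j * vals)) @ vecs.conj().T
tau = np.diag([0.7, 0.3]).astype(complex)        # reference environment state

Lam = dynamical_map_matrix(U, tau)
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
out = (Lam @ rho.reshape(-1)).reshape(2, 2)      # evolved system state
```

Because U is unitary and τ^E is a valid state, the resulting map is completely positive and trace preserving, as the text asserts.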
Notably, P_s and Q_s are not superoperators; rather, they can be seen as similar kinds of objects to the time ordering operator; that is, as consistently defined prescriptions for rewriting the subsequent expression based on its time indices. P_s replaces an object at any t > s (either a dynamical map from s′ < s to t or the system state at time t) with that at time s acted on by Λ_{t:s}; Q_s gives the complement. These can be seen simply as a shorthand for the decomposition ρ_t = Λ_{t:s} ρ_s + (ρ_t − Λ_{t:s} ρ_s) (and a corresponding decomposition for maps), which we will now iterate to further break up the dynamics. Specifically, by repeatedly using the identity (P_s + Q_s) = I, we can
write the evolution up to time t = t_N in terms of the evolution to N − 1 earlier times {t_j}:

    ρ_t = [P_{t_{N−1}} + Q_{t_{N−1}}(P_{t_{N−2}} + Q_{t_{N−2}}(··· (P_{t₁} + Q_{t₁}(P_{t₀} + Q_{t₀})) ···))] ρ_t    (4)

        = Σ_{j=0}^{N−1} T^{(N−j)}_{t:t_j} ρ_{t_j} + Ξ_{t:t₀},  where  T^{(N−j)}_{t:t_j} ρ_{t_j} = Q_{t_{N−1}} ··· Q_{t_{j+1}} P_{t_j} ρ_t  and  Ξ_{t:t₀} = Q_{t_{N−1}} ··· Q_{t₀} ρ_t.    (5)
Here, the T^{(n)}_{t,s} are exactly the transfer tensors defined in Ref. [14], except that we are able to explicitly derive them in terms of, in principle, operationally accessible maps, without appealing to time-homogeneity of the memory kernel. They are given by

    T^{(1)}_{t:t_{N−1}} = Λ_{t:t_{N−1}}  and  T^{(N−j)}_{t:t_j} = Λ_{t:t_j} − Σ_{k=j+1}^{N−1} T^{(N−k)}_{t:t_k} Λ_{t_k:t_j}.    (6)
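Eq. (6) is a simple recursion once the dynamical maps are in hand. The sketch below implements it in the time-homogeneous special case, where Λ_{t_k:t_j} depends only on k − j; this is a simplifying assumption for illustration (the general case of the text carries two independent time indices).

```python
import numpy as np

def transfer_tensors(maps, N):
    """Transfer tensors via the recursion of Eq. (6); maps[n] is the matrix
    of the dynamical map over n time steps (time homogeneous for simplicity)."""
    T = {1: maps[1]}
    for n in range(2, N + 1):
        # T^(n) = Λ_n − Σ_{m=1}^{n−1} T^(n−m) Λ_m
        T[n] = maps[n] - sum(T[n - m] @ maps[m] for m in range(1, n))
    return T

# Sanity check: for divisible (memoryless) dynamics, Λ_n = E^n, every
# tensor beyond the first vanishes -- such a process carries no memory.
E = np.array([[1.0, 0.0], [0.0, 0.8]])
maps = {n: np.linalg.matrix_power(E, n) for n in range(1, 5)}
T = transfer_tensors(maps, 4)
```

The vanishing of all higher tensors for divisible maps is the discrete analogue of a memory kernel that is identically zero.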
Chasing this definition leads to the explicit form

    T^{(N−j)}_{t:t_j} = Λ_{t:t_j} − Σ_{k=j+1}^{N−1} Λ_{t:t_k} Λ_{t_k:t_j} + Σ_{k=j+2}^{N−1} Σ_{l=j+1}^{k−1} Λ_{t:t_k} Λ_{t_k:t_l} Λ_{t_l:t_j} − ··· .    (7)
Finally, the remaining inhomogeneous term in Eq. (5) encodes all memory effects arising from the initially correlated SE state, and is defined as the difference Ξ_{t:t₀} = ρ_t − Σ_{j=0}^{N−1} T^{(N−j)}_{t:t_j} ρ_{t_j}; this can be operationally determined for any initial preparation by reconstructing the superchannel M_{t:t₀}. For long enough times, this term can be neglected [40]; however, in Appendix A we show how this term can be included without direct calculation at the expense of reconstructing a larger set of dynamical maps. There, we account for the initial correlations operationally by replacing the map Λ_{t:t₀} with the completely positive superchannel M_{t:t₀} [2]. We will now show explicitly how the transfer tensors derived in this section can be used to simulate driven open systems.
3 Efficiently simulating long-time dynamics
The principal advantage of the transfer tensor approach over other simulation methods is that it allows for the determination of long-time dynamics, with an error that does not grow with the evolution time⁴ but instead depends on the time beyond which memory effects are neglected. This is in contrast to many numerically exact techniques, where the computational effort to achieve a given numerical precision grows exponentially with the evolution time [13] (of course, if memory effects persist beyond the time taken to converge on a steady state, then the numerically exact techniques will do just as well). In what follows, we will outline how a transfer tensor simulation could be performed and how the error can be bounded.
Fixing δt = t_{j+1} − t_j determines the time resolution of the resulting dynamics; this is an external choice that does not affect the accuracy of the approach, but rather limits the degree to which fine-grained dynamical features can be resolved. To guarantee all such features are captured, one could choose it to be smaller than the time scale set by the largest energy in the system to be simulated. Once the temporal resolution is set, we will assume that the dynamical maps Λ_{t:s} are periodic (or transient), such that Λ_{t+T:s+T} = Λ_{t:s} for large enough s, and where the period T = c δt with integer c. This is equivalent to requiring that the underlying generator L^{SE}_t and reference state τ^E_t are transient or periodic with periods a δt and b δt, where c is the greatest common denominator of 1/a and 1/b (δt and τ^E_t can always be chosen in a simulation to be compatible with the generator in this way). In this case, more general than that considered in Ref. [14], we have periodicity of the transfer tensors, such that

    T^{(l)}_{t_k:t_{k−l}} = T^{(l)}_{t_{k+c}:t_{k−l+c}},  ∀ l < k and t_{k−l}    (8)

sufficiently large that transient effects are negligible. Therefore, only transfer tensors T^{(l)}_{t_k:t_{k−l}} with k − l < c need to be calculated explicitly (in the
⁴ Assuming that the short time exact dynamics is reconstructed accurately.
transient case, all transfer tensors starting within the transient period must be computed).

Furthermore, in most real open systems, the memory kernel K_{t,s} decreases in magnitude (often exponentially) with the time difference t − s [6]. That is, the system's state at one time is continuously forgotten until it no longer affects the future evolution. Consequently, the transfer tensors T^{(l)}_{t_k:t_{k−l}} also become negligibly small for sufficiently large l. Similarly, the influence of the initial state encoded in J_{t,t₀} or Ξ_{t,t₀} commonly decays with t − t₀. What this means is that, beyond a certain time t_m = m δt, memory effects can be neglected. That is, T^{(l)}_{t_k:t_{k−l}} with l > m can be effectively set to zero without substantially affecting the dynamics. Combined with the time translation symmetry described above, this means the dynamics at any time, given the finite resolution set by δt, can be reconstructed using only a finite set of c·m transfer tensors (m transfer tensors for each of the c starting points in a driving period). As such, decreasing δt while leaving other time scales fixed increases the computational cost quadratically.
In the first step of the simulation, numerical process tomography is performed by exactly solving the dynamics to a time t_m beyond each time t_x, x < c, in the first driving period (or the transient period) for an informationally complete set of initial states. These dynamics can be reconstructed for any choice of environment state τ^E_t permitted by the exact technique used to solve for the short time dynamics (though different choices can lead to longer or shorter memory times, as discussed in Sec. 5). The resulting set of dynamical maps is then used to reconstruct the transfer tensors on this interval (starting with those between adjacent time steps and iterating with Eq. (6)). Importantly, despite the fact that the transfer tensors are constructed starting from different points of the driving cycle, the exact dynamics only needs to be calculated for an evolution time t_m from each point, meaning the efficiency of the technique does not depend on the driving period. Using Eq. (5), the state at any arbitrary time t_k can be found by identifying T^{(l)}_{t_k:t_{k−l}} with T^{(l)}_{(t_{k−l} mod T) + lδt : t_{k−l} mod T}, for l ≤ m, and with 0, for l > m (similarly, Ξ_{t_k,t₀} is set to 0 for k > m), before propagating the initial state. We reiterate that the complexity of this latter stage is independent of the underlying SE model, meaning it could be equally well applied to any open system whose short time dynamics can be determined.
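The propagation stage of Eq. (5) then reduces to a convolution over at most m stored states. A sketch follows, again in the time-homogeneous case and with the inhomogeneous term Ξ neglected; both are simplifying assumptions made for brevity, not features of the general scheme.

```python
import numpy as np

def propagate(rho0_vec, T, m, n_steps):
    """Propagate via Eq. (5): ρ_k = Σ_{l=1}^{min(k,m)} T^(l) ρ_{k−l},
    with transfer tensors beyond the memory cutoff m set to zero."""
    states = [np.asarray(rho0_vec, dtype=complex)]
    for k in range(1, n_steps + 1):
        rho_k = sum(T[l] @ states[k - l] for l in range(1, min(k, m) + 1))
        states.append(rho_k)
    return states

# Sanity check: with only T^(1) = E nonzero (a memoryless process), the
# propagation reproduces repeated application of the one-step map.
E = np.array([[0.9, 0.0], [0.0, 0.5]])
T = {1: E, 2: np.zeros((2, 2))}
states = propagate(np.array([1.0, 1.0]), T, m=2, n_steps=10)
```

Note that the cost per step is fixed by m, not by the total evolution time, which is the source of the efficiency claimed in the text.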
In Appendix B, we show how the maximum distance between the numerically propagated state ρ̃_{t_k} and the exact state ρ_{t_k} can be bounded as

    ‖ρ_{t_k} − ρ̃^{(m)}_{t_k}‖₁ ≲ Σ_{l=1}^{m} ‖T^{(2m−l)}_{(t_{k−2m} mod T) + 2t_m : (t_{k−2m} mod T) + lδt}‖,    (9)
where ‖X‖ := max_{ρ,σ} tr{σXρ} is the operator norm of the matrix representation of the superoperator X that acts on vectorised density operators. In other words, the error grows with the cumulative size of the transfer tensors beyond the first memory time. What this means is that one can guarantee a desired accuracy in the propagated state by varying t_m and constructing transfer tensors to time T + 2t_m until the right hand side of the above equation is small enough. The upper bound is approximate, insofar as it only considers contributions to the error from the second memory time period (as long as the memory decays faster than arithmetically, contributions from subsequent memory times will not diverge). However, we find that this bound tends to grossly overestimate the propagation error, which is more closely bounded by the largest norm of the 'longest' transfer tensors: max_{x≤c} ‖T^{(m)}_{t_x+t_m:t_x}‖. On the other hand, if the memory cutoff approximation is too severe, then the propagated dynamics may become unphysical, leading to divergent error in some cases (as in other approximate methods, where throwing away significant terms results in unphysical behaviour [6, 7]).
To illustrate the effectiveness of the transfer tensor method, we now consider a simple example of a driven, dissipative open system. The dynamics of the total system-environment state is governed by a memoryless master equation:

    L^{SE}_t ρ^{SE}_t = −i[H^{SE}_t, ρ^{SE}_t] + Γ(L ρ^{SE}_t L† − ½{L†L, ρ^{SE}_t}).    (10)
While the SE dynamics is Markovian, the dynamics
of S alone will be non-Markovian and will have the
form of Eq. (2).
For concreteness, we take the system-environment to be a pair of qubits. The two evolve according to the Hamiltonian H^{SE}_t = (ω/2)σ^S_z + (ω₀/2)σ^E_z + g[σ^S_x σ^E_x + cos(Ωt) σ^S_y σ^E_y], and the Lindblad operators L = 𝟙^S ⊗ |0⟩⟨1|^E act on the environment alone. This generates dissipative dynamics for the environment, incoherently pumping it to the excited state with rate Γ. This seemingly simple model has many of the features of more complex open systems; in fact, it is approximately equivalent to a class of spin-boson models [41].
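This generator can be written down directly as a matrix on the vectorised two-qubit space. The sketch below uses the Hamiltonian and jump operator stated above; the absolute parameter values are an illustrative assumption chosen to respect the ratios quoted later in the text (ω = Γ = ω₀/16 = g/2 = Ω/2), and the vectorisation convention is our own choice.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

omega, Gamma = 1.0, 1.0          # illustrative units with ω = Γ = 1
omega0, g, Om = 16.0, 2.0, 2.0   # ω₀ = 16ω, g = 2ω, Ω = 2ω

# jump operator 1^S ⊗ |0><1|^E acting on the environment alone
Ljump = np.kron(I2, np.array([[0, 1], [0, 0]], dtype=complex))

def hamiltonian(t):
    return (0.5 * omega * np.kron(sz, I2) + 0.5 * omega0 * np.kron(I2, sz)
            + g * (np.kron(sx, sx) + np.cos(Om * t) * np.kron(sy, sy)))

def liouvillian(t):
    """Matrix of Eq. (10) on the row-major vectorised joint state,
    using vec(AρB) = (A ⊗ B^T) vec(ρ)."""
    H = hamiltonian(t)
    Id = np.eye(4, dtype=complex)
    LdL = Ljump.conj().T @ Ljump
    unitary_part = -1j * (np.kron(H, Id) - np.kron(Id, H.T))
    dissipator = Gamma * (np.kron(Ljump, Ljump.conj())
                          - 0.5 * (np.kron(LdL, Id) + np.kron(Id, LdL.T)))
    return unitary_part + dissipator
```

A quick consistency check is trace preservation: applying the trace functional (the vectorised identity) to the Liouvillian matrix must give zero at every time.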
Figure 2: Example propagated dynamics. (a) Elements of the system density operator at short and long times for the model described in the text. Transfer tensors are reconstructed, using a time independent reference state, between the first nine time steps (red and blue crosses), before being used to propagate the state to later times. The propagated population of the system's excited state (blue dots) is shown alongside exactly calculated dynamics (continuous lines). Also shown (yellow squares) is the trace distance between the exact density operator and its transfer tensor propagated counterpart. (b) Real and imaginary parts of the system's coherence (the coherence is zero at long times). (c) Long time propagation error ‖ρ_t − ρ̃_t‖₁ as a function of memory time t_m and time step size δt; also plotted, for reference, is the norm of the memory kernel at t_m. Points with arrows correspond to cases where the distance is greater than two (sometimes by many orders of magnitude), indicating non-physical dynamics. In all cases, the bound in Eq. (9) is larger than the error.
The simulated dynamics, with correlated initial state ρ^{SE}_0 = (3/4)|0⟩⟨0| ⊗ |+⟩⟨+| + (1/4)|1⟩⟨1| ⊗ |−⟩⟨−|, where |±⟩ = (|0⟩ ± |1⟩)/√2, is shown in Fig. 2 along with the accompanying error, quantified by the trace distance ‖ρ_{t_k} − ρ̃_{t_k}‖₁ = tr{|ρ_{t_k} − ρ̃_{t_k}|} (in all figures the parameters are set as ω = Γ = ω₀/16 = g/2 = Ω/2).
In the figure, short time exact dynamics, calculated between the first nine time steps (using a time-independent reference state τ^E = tr_S ρ^{SE}_0), is used to propagate the state ρ̃_{t_k} to much longer times with no noticeable increase in error, as indicated by the yellow dots, which remain close to zero for all times. This can also be observed by comparing the exact dynamics (solid curves) with the propagated points. The dynamics are accurate despite choosing a relatively short memory cutoff time (ωt_m = 5). As shown in panel (b), the memory kernel still has a significant norm at this time, which would usually lead one to conclude that we are discarding important memory contributions when making the cutoff.
One of the reasons for the remarkable effectiveness of the transfer tensor approach, in this case, is the coarse sampling (long δt) of the initial dynamics. In general, as seen in panel (b) of the figure, where we calculate the long-time propagation error for a variety of cutoff times and time step lengths, we find that finer grained dynamics require longer memory times for the same accuracy⁵. In every case, we find that the bound in Eq. (9) is extremely loose, meaning that the transfer tensor propagation is more robust than indicated by the conservative analysis presented in Appendix B. However, beyond a certain point, even this loose bound on the error seems to decrease exponentially with t_m. In the limit of continuous time dynamics, our approach leads us to a generalised version of the Nakajima-Zwanzig equation, of the form of Eq. (2), as we will now show.
4 Equivalence with the Nakajima-Zwanzig equation in the continuum limit
While Eq. (2) governs the exact dynamics of the sys-
tem alone, conventional derivations usually rely im-
plicitly on an underlying model for the SE dynam-
ics. In this sense, the evolution it predicts is only
⁵ We could have also reduced the error by choosing a time-dependent reference state (since the memory kernel norm decays much faster). However, in this case, a longer memory time would need to be chosen to allow transients in the time-dependent state to decay.
as good as the initial model, which may only approximate the system's true behaviour. We will show how operationally determined quantities, encoded in transfer tensors, can be used to construct discrete-time memory kernel equations for general open processes, where the SE model may not be well known, and show that these tend to a generalised Nakajima-Zwanzig equation in the continuum limit.

To do this, we will construct a difference equation for the state at time t, before taking a limit and recovering a derivative at that time. We first divide the interval [t₀, t] into N parts, as above, and choose the N + 1 times {t_k} to each be δt apart: t_{j+1} − t_j = δt := (t − t₀)/N⁶. By substituting Eq. (5) into the difference between ρ_t = ρ_{t_N} and ρ_{t_{N−1}} we get:
    (ρ_t − ρ_{t_{N−1}})/δt = [(Λ_{t:t_{N−1}} − I)/δt] ρ_{t_{N−1}} + Σ_{j=0}^{N−2} δt [T^{(N−j)}_{t:t_j}/δt²] ρ_{t_j} + Ξ_{t:t₀}/δt,    (11)

which resembles, term by term, a discrete version of the memory kernel master equation given in Eq. (2). In the limit N → ∞ we can equate the terms as:
    K_{t,t_j} = lim_{N→∞} T^{(N−j)}_{t:t_j}/δt²,   J_{t,t₀} = lim_{N→∞} Ξ_{t:t₀}/δt,   L_t = lim_{N→∞} (Λ_{t:t_{N−1}} − I)/δt.    (12)
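At finite δt, these limits suggest simple discrete estimators; the sketch below is illustrative only, and its point is the scaling of each object with δt (the kernel picks up a factor 1/δt², the time-local generator a factor 1/δt).

```python
import numpy as np

def kernel_estimate(T_n, dt):
    """Discrete memory kernel from a transfer tensor, Eq. (12): K ≈ T^(n)/δt²."""
    return T_n / dt**2

def generator_estimate(one_step_map, dt):
    """Discrete time-local generator from the one-step map: L ≈ (Λ − I)/δt."""
    return (one_step_map - np.eye(one_step_map.shape[0])) / dt

# If the one-step map happens to be exactly Λ = I + δt L₀ for a fixed
# matrix L₀, the estimator recovers L₀ (up to floating-point error).
L0 = np.array([[-0.5, 0.2], [0.3, -0.4]])
dt = 1e-2
L_est = generator_estimate(np.eye(2) + dt * L0, dt)
```

In practice the convergence of such estimates with decreasing δt is exactly the (possibly slow) convergence of the limits in Eqs. (12) discussed below.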
In order to relate these quantities directly to the underlying dynamics, in terms of L^{SE}_t and ρ^{SE}_0, we can substitute Λ_{t:s} ρ = tr_E{U^{SE}_{t:s}(ρ ⊗ τ^E_s)} into the left hand sides of these expressions (writing T^{(N−j)}_{t:t_j} and Ξ_{t:t₀} in terms of dynamical maps). In this way, we can recast the objects in the decomposition in Eq. (4) in terms of SE projection superoperators as

    P_s ρ_t = tr_E{U^{SE}_{t:s} P^{SE}_s U^{SE}_{s:t₀} A ρ^{SE}_{t₀}}  and  Q_s ρ_t = tr_E{U^{SE}_{t:s} Q^{SE}_s U^{SE}_{s:t₀} A ρ^{SE}_{t₀}},    (13)
where P^{SE}_s ρ^{SE}_s = tr_E{ρ^{SE}_s} ⊗ τ^E_s and Q^{SE}_s = I^{SE} − P^{SE}_s, with τ^E_s the environment state appearing in the underlying representation of the dynamical map Λ_{t:s}; A is, as before, the operation used to prepare the system at the initial time. We can then take the limit by expanding U^{SE} for small δt: U^{SE}_{t+δt:t} ≃ I^{SE} + δt L^{SE}_t + (δt²/2)(L^{SE}_t)² + ··· . We do this in detail in Appendix C, finding
    L_t ρ_t = tr_E{P^{SE}_t L^{SE}_t P^{SE}_t (ρ_t ⊗ x^E)},
    K_{t,s} ρ_s = tr_E{P^{SE}_t L^{SE}_t G_{t,s} (Q^{SE}_s L^{SE}_s P^{SE}_s − Ṗ^{SE}_s)(ρ_s ⊗ x^E)},    (14)
    J_{t,t₀} = tr_E{P^{SE}_t L^{SE}_t G_{t,t₀} Q^{SE}_{t₀} A ρ^{SE}_{t₀}},
⁶ Equidistance between time steps is not necessary for our derivation and is only chosen for convenience. It may be that other choices lead to faster convergence in practice.
where the propagator G_{t,s} = T exp[∫_s^t ds′ Q^{SE}_{s′} L^{SE}_{s′}] is a time ordered exponential (as indicated by T) and the value of the unit trace operator x^E is unimportant, as it is acted on directly by a projector.
Equation (14) represents exactly the terms appearing in a generalised version of the Nakajima-Zwanzig master equation. In the standard derivation of the Nakajima-Zwanzig equation, a time-independent superoperator P^{SE} and its complement Q^{SE} = I^{SE} − P^{SE} are defined on the joint SE space [6]. In Appendix D, we show that one arrives at Eq. (14) following the standard derivation, but starting with the time-dependent projection superoperators defined above. Unlike in earlier approaches utilising time-dependent projectors for open systems (see e.g. Refs. [42, 43]) we do not place any restrictions on the operator τ^E_t beyond that it should be a positive, unit trace density operator, and that it and its first derivative τ̇^E_t should be continuous functions of time. It is worth noting the similarity to recent projection operator approaches to describing non-linear many-body dynamics [44, 45].
The significance of our result is that, even for time-inhomogeneous SE dynamics and initial correlations, the memory kernel can be operationally determined by reconstructing the short time-difference dynamics starting from a series of intermediate time points, alleviating the need to explicitly calculate or approximate the time-ordered exponential integrals implicit in Eq. (14). However, the convergence of the limit in Eqs. (12) may be slow, and will depend on the time difference t − t_j in general.
Importantly, there is a freedom in choosing the environment operator τ^E_t that goes into the definition of the dynamical map Λ_{t:s} and, hence, the projector P^{SE}_t. Different choices of projector P^{SE}_t lead to different forms for the superoperators in Eqs. (14) but the same solution for the dynamics ρ_t. This is exemplified, using the model presented in Sec. 3, in Fig. 3, where different (time-dependent) projectors are seen to lead to memory kernels with different norms; in this case, a particular time-dependent choice leads to a memory kernel whose norm decays several times faster than the most obvious time-independent choice (where τ^E_t = tr_S ρ^{SE}_0). For simulations, this means that a judicious choice of projector can lead to a much shorter memory cutoff time for the same degree of approximation in the dynamics. While τ^E_t can be freely chosen in a simulation (exact numerical method permitting), the dynamical maps reconstructed in an experimental setting have an environment operator dictated by the SE dynamics. However, as we will now discuss, there is still an analogous freedom in how the tomography is performed.
Figure 3: Norm of different memory kernels for the same process. Derived from exact simulation of the model presented in Sec. 3. Three different choices of P^{SE}_t: (i) the usual case of projection onto the initial environment state, P^{SE} ρ^{SE}_t = ρ_t ⊗ tr_S{ρ^{SE}_{t₀}} (blue, solid, largest at long times); (ii) projection arising from keeping the system in fixed state |0⟩⟨0|, i.e., E_t = B_{|0⟩⟨0|} (red, dot-dashed, smallest at long times); (iii) projection onto the true environment state, P^{SE}_t ρ^{SE}_t = ρ_t ⊗ tr_S{ρ^{SE}_t}, corresponding to E_t = I (green, dashed). Note the log scale.
5 Experimental interpretation of projector choice
One of the applications of the operational derivation of the Nakajima-Zwanzig equation, presented in the previous subsection, is to relate experimentally accessible quantities to properties of the underlying SE Hamiltonian and initial state (through Eqs. (12) & (14)). In this case, it is important to identify exactly which choices of environment state $\tau^E_t$, and hence dynamical map $\Lambda_{t,s}$ (or equivalently projector $\mathcal{P}^{SE}_t$), correspond to which experimental reconstruction procedure, if any.

In the simplest scenario, the joint SE evolution is left unperturbed before tomographically reconstructing $\Lambda_{t:s}$ (by discarding the state at time $s$ and preparing a fresh one, before measuring the system at time $t$). The state of the environment at time $s$, in this case, would be the freely evolved reduced state: $\rho^E_s = \mathrm{tr}_S\{\mathcal{U}^{SE}_{s:t_0}\rho^{SE}_{t_0}\}$. While this is perfectly reasonable from an operational standpoint, it is somewhat limiting from the perspective of extracting information about $\mathcal{L}^{SE}_t$ from the memory kernel. $\tau^E_t$ must be known to separate out properties of the generator from the memory kernel, and the time-dependent state of the environment is (usually) at least as difficult to determine as $\rho_t$.
In order to get around this problem, one could choose to act on the system with a series of superoperators $\vec{\mathcal{E}} = \{\mathcal{E}_{t_k}\}$ (any physically allowed transformations of the system) between times $t_0$ and $s$, leading to different environment states $\rho^E_{s,\vec{\mathcal{E}}}$ and, correspondingly, different families of intermediate dynamical maps $\Lambda_{t:s}$ (where we have left the $\vec{\mathcal{E}}$ dependence implicit for notational convenience). This procedure is depicted in Fig. 4 and represents the operational analogue of the freedom to choose $\mathcal{P}^{SE}_t$ in Eqs. (14).
A particularly convenient choice (from the perspective of calculating $\rho^E_{t_j,\vec{\mathcal{E}}}$) is the one where entanglement-breaking operations $\mathcal{E}_{t_j}\rho = \mathcal{B}_{\sigma_{t_j}}\rho = \sigma_{t_j}$ are applied at times $t_j < s$, where $\sigma_{t_j}$ is some fixed state for each time step. In this case, the evolution of $\rho^E_{t_j,\vec{\mathcal{E}}}$ effectively decouples from the system in the limit that these operations are applied infinitely quickly. That is, the reduced state of the environment evolves according to an average generator: $\dot{\rho}^E_t = \mathrm{tr}_S\{\mathcal{L}^{SE}_t\,\sigma_t \otimes \rho^E_t\} = \mathcal{L}^E_t\rho^E_t$. In an experiment, this could be achieved by, for instance, dynamically decoupling the system [46] or using the quantum Zeno effect to freeze the dynamics through frequent strong measurement [47], though these may be difficult in practice and represent only two possible choices of control that could lead to a known $\rho^E_{t_j,\vec{\mathcal{E}}}$. In many cases, there will be choices of $\sigma_t$ such that $\rho^E_{t_0}$ is a stationary state with respect to $\mathcal{L}^E_t$, leading to time-independent projectors $\mathcal{P}^{SE}$ and $\mathcal{Q}^{SE}$.
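To make the averaged environment generator concrete: for a purely Hamiltonian SE generator, $\mathcal{L}^{SE}\rho = -i[H,\rho]$, the quantity $\mathrm{tr}_S\{\mathcal{L}^{SE}(\sigma\otimes\rho^E)\}$ can be evaluated directly. The sketch below is our own illustration; the two-qubit Hamiltonian, the frozen system state, and the helper names are assumptions, not choices made in the paper.

```python
import numpy as np

def ptrace_S(rho, dS, dE):
    """Partial trace over the system (first tensor factor)."""
    return np.trace(rho.reshape(dS, dE, dS, dE), axis1=0, axis2=2)

def averaged_env_generator(H, sigma, rho_E):
    """Evaluate tr_S{ L^{SE} (sigma (x) rho_E) } for L^{SE} rho = -i [H, rho],
    i.e. the effective environment generator when the system is held in sigma."""
    joint = np.kron(sigma, rho_E)
    L_joint = -1j * (H @ joint - joint @ H)
    return ptrace_S(L_joint, sigma.shape[0], rho_E.shape[0])

# Illustrative example: H = Z (x) X, system frozen in |0><0|
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma = np.diag([1.0, 0.0])   # system state |0><0|
rho_E = np.diag([1.0, 0.0])   # environment state |0><0|
rhodot_E = averaged_env_generator(np.kron(Z, X), sigma, rho_E)
```

For this choice the analytic result is $\mathrm{tr}(Z\sigma)\,(-i)[X,\rho^E] = -i[X,\rho^E]$, so the environment evolves under an effective local generator set by the frozen system state.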
Figure 4: Experimentally reconstructing intermediate dynamics. (a) After decorrelating the system from its environment (e.g., by projectively measuring and forgetting the outcome) at some intermediate time $t_j$, the subsequent dynamics is described by a CPTP map $\Lambda_{t:t_j}$. This can be operationally determined through normal quantum process tomography, i.e., by repreparing the system in various states $\rho$ and determining the corresponding $\rho'$ at later times. In general, operations $\vec{\mathcal{E}} = \{\mathcal{E}_{t_k}\}$ could be performed on the system at a series of earlier times $\{t_k < t_j\}$ (in addition to the initial preparation operation $\mathcal{A}$), leading to different $\Lambda_{t:t_j}$. (b) One choice of $\vec{\mathcal{E}}$ which leads to a self-consistently solvable reconstructed master equation is $\mathcal{E}_{t_k} = \mathcal{B}_{\sigma_k}$, where $\mathcal{B}_{\sigma_k}\rho = \sigma_k$. (c) The map $\Lambda_{t:t_j}$ corresponds to joint evolution with an environment initially in the history-dependent state $\rho^E_{t_j,\vec{\mathcal{E}}}$.
6 Discussion
In this Article, we have presented a derivation of
the transfer tensor method, leading to a scheme for
relating an operationally meaningful description of
any open quantum process, in terms of completely
positive dynamical maps, to a Nakajima-Zwanzig
equation that depends on the underlying system-
environment dynamics. We have also generalised the
latter from the usual case to include time-dependent
projectors. In addition to providing a fundamental
connection between two different pictures of open
dynamics, our result opens up the possibility for effi-
cient simulation of the long-time evolution of driven
systems or those with initial correlations.
Instead of directly solving the dynamics of a sys-
tem to some long time t, dynamical maps can be re-
constructed to a point where the memory kernel has
decayed (from the end of the first driving period).
These can then be used to calculate transfer tensors
which will propagate the system to long times with
an error that depends only on the accuracy of the ini-
tial reconstruction and the memory cutoff approxima-
tion used. This could be used to, for instance, deter-
mine dynamical steady states of driven systems [48]
or probe signatures of many-body localisation [49].
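The propagation scheme described above can be sketched numerically. The snippet below is a minimal, time-homogeneous illustration of the discrete-time transfer tensor relation used by Cerrillo and Cao [14], $T_n = \Lambda_n - \sum_{m=1}^{n-1} T_{n-m}\Lambda_m$, followed by long-time propagation under a memory cutoff; the function names and matrix (vectorised-operator) representation are our own assumptions, not the paper's implementation.

```python
import numpy as np

def transfer_tensors(maps):
    """Transfer tensors T_n from dynamical maps Lambda_n = Lambda_{t_n:t_0},
    given as matrices acting on vectorised density operators:
        T_n = Lambda_n - sum_{m=1}^{n-1} T_{n-m} Lambda_m.
    """
    T = []
    for n, Lam in enumerate(maps, start=1):
        Tn = Lam.copy()
        for m in range(1, n):
            Tn = Tn - T[n - m - 1] @ maps[m - 1]
        T.append(Tn)
    return T

def propagate(T, rho0_vec, n_steps):
    """Propagate a vectorised state to long times using only the transfer
    tensors up to the memory cutoff K = len(T):
        rho_k = sum_{m=1}^{min(k,K)} T_m rho_{k-m}.
    """
    history = [rho0_vec]
    for k in range(1, n_steps + 1):
        rho_k = sum(T[m - 1] @ history[k - m]
                    for m in range(1, min(k, len(T)) + 1))
        history.append(rho_k)
    return history
```

As a sanity check, for a memoryless (semigroup) process with $\Lambda_n = A^n$, all transfer tensors beyond $T_1 = A$ vanish, and the propagation reduces to repeated application of $A$.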
Using Eq. (11), an experimentally or numeri-
cally reconstructed set of transfer tensors could also
be used to create an approximate memory kernel
through interpolation or form fitting. The resulting
master equation would be guaranteed to give physi-
cally sensible solutions. Since its accuracy would de-
pend on the smoothness of the memory kernel rather
than, e.g., the strength of the SE coupling, it would
provide an alternative to the usual perturbative ap-
proaches to approximate dynamics.
Finally, since the memory kernel contains infor-
mation about the SE generator, its operational re-
construction will allow for the extraction of infor-
mation about the underlying dynamics. That is, the
scheme presented here could be used to probe an un-
known environment by observing the dynamics of
the system alone (cf. Refs. [24–26]). In fact, the
full set of intermediate completely positive dynami-
cal maps constitutes the maximum possible amount
of dynamical information imprinted on the system
without considering higher-order multitime correlations (see Refs. [3, 4]); the latter we will consider in a similar context in subsequent work.
Acknowledgments
The authors would like to
thank Jared Cole, Guy Cohen, and César Rodríguez-
Rosario for stimulating discussions regarding this
work. KM is supported through ARC FT160100073.
References
[1] I. L. Chuang and M. A. Nielsen, J. Mod. Opt.
44, 2455 (1997).
[2] K. Modi, Sci. Rep. 2, 581 (2012).
[3] F. A. Pollock, C. Rodríguez-Rosario, T. Frauen-
heim, M. Paternostro, and K. Modi, Phys. Rev.
A 97, 012127 (2018).
[4] F. A. Pollock, C. Rodríguez-Rosario, T. Frauen-
heim, M. Paternostro, and K. Modi, Phys. Rev.
Lett. 120, 040405 (2018).
[5] S. Milz, F. A. Pollock, and K. Modi, accepted
in Phys. Rev. A (2018).
[6] H.-P. Breuer and F. Petruccione, The Theory
of Open Quantum Systems (Oxford University
Press, 2002).
[7] A. Fruchtman, N. Lambert, and E. M. Gauger,
Sci. Rep. 6, 28204 (2016).
[8] J. Iles-Smith, A. G. Dijkstra, N. Lambert, and
A. Nazir, J. Chem. Phys. 144, 044110 (2016).
[9] Y. Tanimura, J. Phys. Soc. Jpn. 75, 082001
(2006).
[10] J. Strümpfer and K. Schulten, J. Chem. Theory
Comput. 8, 2808 (2012).
[11] A. W. Chin, S. F. Huelga, and M. B. Plenio, in
Semiconductors and Semimetals, Vol. 85, edited
by U. Würfel, M. Thorwart, and E. R. Weber
(Elsevier, 2011) pp. 115 – 143.
[12] N. Makri and D. E. Makarov, J. Chem. Phys.
102, 4611 (1995).
[13] I. de Vega and D. Alonso, Rev. Mod. Phys. 89,
015001 (2017).
[14] J. Cerrillo and J. Cao, Phys. Rev. Lett. 112,
110401 (2014).
[15] P. Nalbach, A. Ishizaki, G. R. Fleming, and
M. Thorwart, New J. Phys. 13, 063040 (2011).
[16] A. Strathearn, B. W. Lovett, and P. Kirton, New
J. Phys. 19, 093009 (2017).
[17] A. Strathearn, P. Kirton, D. Kilda, J. Keeling,
and B. W. Lovett, arXiv:1711.09641 (2017).
[18] G. Lindblad, Commun. Math. Phys. 48, 119
(1976).
[19] V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys. 17, 821 (1976).
[20] K. Modi, Open Systems & Information Dynam-
ics 18, 253 (2011).
[21] M. Ringbauer, C. J. Wood, K. Modi,
A. Gilchrist, A. G. White, and A. Fedrizzi,
Phys. Rev. Lett. 114, 090402 (2015).
[22] D. Kretschmann, D. Schlingemann, and R. F.
Werner, J. Funct. Anal. 255, 1889 (2008).
[23] B. Dive, F. Mintert, and D. Burgarth, Phys. Rev.
A 92, 032111 (2015).
[24] L. M. Norris, G. A. Paz-Silva, and L. Viola,
Phys. Rev. Lett. 116, 150503 (2016).
[25] F. A. Pollock, A. Chęcińska, S. Pascazio, and K. Modi, Phys. Rev. A 94, 032112 (2016).
[26] J. Jeske, J. H. Cole, C. Müller, M. Marthaler,
and G. Schön, New J. Phys. 14, 023013 (2012).
[27] B. Bellomo, A. De Pasquale, G. Gualdi, and
U. Marzolino, Phys. Rev. A 80, 052108 (2009).
[28] B. Bellomo, A. De Pasquale, G. Gualdi, and
U. Marzolino, Phys. Rev. A 82, 062104 (2010).
[29] B. Bellomo, A. De Pasquale, G. Gualdi, and
U. Marzolino, J. Phys. A 43, 395303 (2010).
[30] R. Rosenbach, J. Cerrillo, S. F. Huelga, J. Cao,
and M. B. Plenio, New J. Phys. 18, 023035
(2016).
[31] A. A. Kananenka, C.-Y. Hsieh, J. Cao, and
E. Geva, J. Phys. Chem. Lett. 7, 4809 (2016).
[32] S. M. Barnett and S. Stenholm, Phys. Rev. A 64,
033808 (2001).
[33] S. Daffer, K. Wódkiewicz, J. D. Cresser, and
J. K. McIver, Phys. Rev. A 70, 010304 (2004).
[34] H.-P. Breuer and B. Vacchini, Phys. Rev. Lett.
101, 140402 (2008).
[35] D. Chruściński and A. Kossakowski, Phys. Rev. A 94, 020103 (2016).
[36] B. Vacchini, Phys. Rev. Lett. 117, 230401
(2016).
[37] Q. Shi and E. Geva, J. Chem. Phys. 119, 12063
(2003).
[38] G. Cohen and E. Rabani, Phys. Rev. B 84,
075150 (2011).
[39] G. Cohen, E. Gull, D. R. Reichman, A. J. Millis,
and E. Rabani, Phys. Rev. B 87, 195108 (2013).
[40] M. Buser, J. Cerrillo, G. Schaller, and J. Cao,
Phys. Rev. A 96, 062122 (2017).
[41] D. Tamascelli, A. Smirne, S. F. Huelga, and
M. B. Plenio, Phys. Rev. Lett. 120, 030402
(2018).
[42] C. R. Willis and R. H. Picard, Phys. Rev. A 9,
1343 (1974).
[43] R. H. Picard and C. R. Willis, Phys. Rev. A 16,
1625 (1977).
[44] P. Degenfeld-Schonburg and M. J. Hartmann,
Phys. Rev. B 89, 245108 (2014).
[45] P. Degenfeld-Schonburg, C. Navarrete-
Benlloch, and M. J. Hartmann, Phys. Rev. A
91, 053850 (2015).
[46] L. Viola, E. Knill, and S. Lloyd, Phys. Rev.
Lett. 82, 2417 (1999).
[47] P. Facchi and S. Pascazio, J. Phys. A 41, 493001
(2008).
[48] T. M. Stace, A. C. Doherty, and D. J. Reilly,
Phys. Rev. Lett. 111, 180602 (2013).
[49] R. Nandkishore and D. A. Huse, Annu. Rev.
Condens. Matter Phys. 6, 15 (2015).
[50] P.-Y. Yang and W.-M. Zhang, arXiv:1605.08521
(2016).
[51] S. Kitajima, M. Ban, and F. Shibata, J. Phys. A
50, 125303 (2017).
[52] V. Prepeliţă, M. Doroftei, and T. Vasilache, Balkan J. Geom. Appl. 3, 111 (1998).
APPENDICES
A Avoiding the inhomogeneous term
Many techniques for simulating open quantum systems, exactly or approximately, rely on a product assumption for the initial state: $\rho^{SE}_{t_0} = \rho_{t_0} \otimes \rho^E_{t_0}$. When this is not the case, there is always an inhomogeneous term in Eq. (2), and, while techniques have been developed to absorb it into the homogeneous part [50, 51], these rely on knowledge of the underlying dynamics. This makes the explicit calculation of $\rho_t = \mathcal{M}_{t,t_0}[\mathcal{A}]$, and hence $\Xi_{t:t_0}$, problematic for an initially correlated SE, at least when the preparation $\mathcal{A}$ does not break those correlations.
However, when a preparation does break the correlations between system and environment, the subsequent dynamics can be simulated in the usual way (with the environment initially in state $\mathrm{tr}_S\{\mathcal{A}\rho^{SE}_{t_0}\}$). We now use the property that the superchannel is linear, and that any preparation operation can be written as a linear combination $\mathcal{A} = \sum_\alpha c_\alpha \mathcal{A}^{(\alpha)}$, where $\{\mathcal{A}^{(\alpha)}\}$ is a set of entanglement-breaking completely positive (but not necessarily trace-preserving) maps, and $c_\alpha \in \mathbb{C}$ [3]. This means that the time-evolved state for any preparation can be written as a linear combination of time-evolved states from some finite set of preparations where there are no initial correlations: $\rho_{t,\mathcal{A}} = \sum_\alpha c_\alpha \rho_{t,\mathcal{A}^{(\alpha)}}$.
In the dilated picture, this is equivalent to writing the initial SE state as $\rho^{SE}_{t_0} = \sum_\alpha c_\alpha X^{(\alpha)} \otimes \tau^{(\alpha)}$, where $\{X^{(\alpha)}\}$ is a set of $d_S^2$ linearly independent system operators. By solving for the dynamics with each of the $d_S^2$ uncorrelated initial environment states $\tau^{(\alpha)}$, the need to calculate the inhomogeneous term can be entirely circumvented.
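This decomposition can be checked numerically by linearity: expand a correlated SE state in a system-operator basis, evolve each product term separately, and sum. The sketch below is a hypothetical two-qubit illustration (the Bell-diagonal state, the random Hamiltonian, and all helper names are our own choices, not the paper's).

```python
import numpy as np

# Pauli basis for a qubit system: d_S^2 = 4 linearly independent operators
paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def ptrace(rho, dS, dE, over):
    """Partial trace of a (dS*dE x dS*dE) matrix over 'S' or 'E'."""
    r = rho.reshape(dS, dE, dS, dE)
    return np.trace(r, axis1=0, axis2=2) if over == 'S' else np.trace(r, axis1=1, axis2=3)

# A correlated SE state: partially mixed Bell-diagonal state (illustrative)
bell = np.zeros((4, 4), complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
rho_SE = 0.7 * bell + 0.3 * np.eye(4) / 4

# Expand rho_SE = sum_a X_a (x) tau_a with X_a = sigma_a and
# tau_a = tr_S[(sigma_a (x) 1) rho_SE] / 2; the tau_a need not be states
terms = [(s, ptrace(np.kron(s, np.eye(2)) @ rho_SE, 2, 2, 'S') / 2)
         for s in paulis]

# Some joint SE unitary evolution U = exp(-i H t)
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = H + H.conj().T
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.5)) @ V.conj().T

# Exact reduced state vs. linear combination of separately evolved terms
exact = ptrace(U @ rho_SE @ U.conj().T, 2, 2, 'E')
combo = sum(ptrace(U @ np.kron(X, tau) @ U.conj().T, 2, 2, 'E')
            for X, tau in terms)
```

The two reduced states agree exactly, since the evolution is linear in the initial operator; each $\tau^{(\alpha)}$ can then be treated as an (uncorrelated) initial environment operator in a standard simulation.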
B Simulation error
Since the expression for the time-evolved density operator in Eq. (5) is exact, errors in a simulation using transfer tensors can arise only if terms in the expansion are neglected, or otherwise from errors in the reconstruction of the maps $\Lambda_{t:s}$. Here, we assume the reconstruction is accurate, and determine the error associated with neglecting memory effects beyond some cutoff time. In what follows, we will quantify the 'size' of an operator with the trace norm $\|X\|_1 = \mathrm{tr}\{|X|\}$, where $|X|$ is the operator formed by taking the magnitudes of the singular values of $X$ and the same singular vectors. For superoperators, we will use $\|\mathcal{X}\| := \max_{A,B}\{\mathrm{tr}[A(\mathcal{X}B)]\}$, which is the largest singular value (or operator norm) of the matrix representation of $\mathcal{X}$ which acts on vectorised operators.
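Both norms are straightforward to evaluate numerically via a singular value decomposition; a minimal sketch (the helper names are ours, not the paper's):

```python
import numpy as np

def trace_norm(X):
    """Trace norm ||X||_1: the sum of the singular values of X."""
    return np.linalg.svd(X, compute_uv=False).sum()

def superop_norm(K):
    """Operator norm of a superoperator's matrix representation K
    (acting on vectorised operators): its largest singular value."""
    return np.linalg.svd(K, compute_uv=False).max()
```

For example, the trace norm of any density operator is 1, while `superop_norm` applied to the identity superoperator returns 1.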
The error in the final state $\tilde{\rho}^{(m)}_{t_k}$ due to neglecting memory effects beyond a time $t_m = t$ is given by
\[
\Big\|\rho_{t_k} - \tilde{\rho}^{(m)}_{t_k}\Big\|_1 = \Bigg\|\sum_{j=0}^{k-1} T^{(k-j)}_{t_k:t_j}\,\rho_{t_j} + \Xi_{t_k:t_0} - \sum_{j=m}^{k-1} T^{(k-j)}_{t_k:}\cdots
\]