Telling optimal control how to maximize the entangling power of twoqubit gates
Christiane P. Koch
joint work with Daniel Reich, Michael Goerz, Mamadou Ndong, Matthias Müller, Jiri Vala, Haidong Yuan, Birgitta Whaley & Tommaso Calarco
Institut für Theoretische Physik, Freie Universität Berlin, Germany
introduction: optimal control & QIP

desired gate operation: Ô
actual evolution: Û(T,0;ε)
desired fidelity: F = (1/N) Re Tr[Ô⁺ P̂_N Û(T,0;ε) P̂_N], with error 1 − F = η, where η < 10⁻⁴
choice of Ô?
outline

1. introduction: optimal control
   terminology & basic concepts
   functionals: how to convey the physics to the algorithm
2. a functional to optimize the entangling power of two-qubit gates
   local equivalence classes
   the new functional
   2nd-order Krotov algorithm
3. application to a Rydberg gate in cold atoms
4. where can we go from here?
some terminology & basics of optimal control
coherent control

quantum mechanics: a probabilistic, but deterministic theory (Schrödinger equation)
given an initial state, ψ(t=0) or ρ(t=0): which dynamics (i.e. which Ĥ) guarantees a particular outcome, ψ(t>0) or ρ(t>0)?
principle of coherent control (rev'd)

wave properties of matter (superposition principle); interacting quantum (sub)systems
variation of phase between different, but indistinguishable quantum pathways
constructive interference in the desired channel, destructive interference in all other channels
final goal: understanding the intricate workings of quantum interferences & entanglement
coherent control & optimal control

coherent control
  goal: improve the outcome of a process
  vary some parameters
  simple, intuitive schemes (bichromatic control, pump-dump, STIRAP)
  in the time or in the frequency domain

optimal control
  goal: obtain maximum control over the process
  tune 'all' available parameters
  complex outcome: discovery of new schemes, not necessarily accessible by intuition
  in the time/frequency "phase space"
  global or local in time
classical vs quantum control

optimal control theory (based on variational calculus): applies to classical and quantum systems alike
coherent control: intrinsically quantum
let's put philosophy aside and work out the tools: they will be needed anyhow
optimal control theory

time/frequency 'phase-space' picture: ψ_i at t = 0 → ψ_f at t = T
inverse problem: given the target and the equations of motion, calculate the field

local in time: impose two conditions (for phase and amplitude) and derive equations for the field; also: Lyapunov control / tracking
global in time: use information from the dynamics throughout the time interval to reach the desired target at the final time T → OCT
optimal control theory

define the objective: F = |⟨ψ_i|Û⁺(T,0;ε)|ψ_f⟩|² as a functional of the field
optimal control theory: variants

variational approach
  'guess' the right functional, including eqs. of motion & phase factors
  do the variations to obtain the eqs. of motion and the eq. for the field
  'guess' the correct time discretization s.t. the method converges
  W. Zhu, J. Botina, H. Rabitz, JCP 108, 1953 (1998)

Krotov method
  ingredients: objective + constraint(s) + eqs. of motion
  construct an auxiliary functional L with an auxiliary potential Φ to guarantee monotonic convergence
  derive the eq. for ε(t) from the minimization of L
  Sklarz & Tannor, PRA 66, 053619 (2002); Palao & Kosloff, PRA 68, 062308 (2003)
optimal control theory: schemes

improve the field by

Δε_{j+1}(t) = (S(t)/λ_0) Im[ ⟨ψ_i|Û⁺(T,0;ε_j)|ψ_f⟩ ⟨ψ_f|Û(t,T;ε_j) μ̂ Û(t,0;ε_{j+1})|ψ_i⟩ ]

(1) forward propagation with the old field, giving the overlap ⟨ψ_i|Û⁺(T,0;ε_j)|ψ_f⟩
(2) backward propagation with the old field: Û(t,T;ε_j)
(3) forward propagation with the new field: Û(t,0;ε_{j+1})

→ interference between past and future events

variational approach & Krotov method lead to similar schemes
Maday & Turinici, JCP 118, 8191 (2003) & work by the N. C. Nielsen group
to date: reduced Krotov method only (1st-order variant)
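The three propagations can be sketched for the simplest possible case: a two-level toy model (an illustrative assumption, not the gate problem of this talk) with Ĥ = ε(t)σ̂_x, target |0⟩ → |1⟩, and the first-order (reduced) Krotov update with immediate feedback. The overlap factor is absorbed into the co-state χ(T).

```python
import numpy as np

# Toy model (assumption, not from the talk): H = eps(t)*sigma_x on t in [0,1],
# target |0> -> |1>, functional J_T = 1 - |<1|psi(T)>|^2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)

T, nt = 1.0, 200
dt = T / nt
eps = 0.5 * np.ones(nt)   # guess field: pulse area 0.5, far from the optimum pi/2
lam = 2.0                 # lambda_0 (inverse step size); shape function S(t) = 1

def step(psi, e, sign=+1.0):
    """Exact one-step propagator exp(-i*sign*e*dt*sigma_x)."""
    th = sign * e * dt
    return np.cos(th) * psi - 1j * np.sin(th) * (sx @ psi)

def fidelity(field):
    psi = psi0.copy()
    for e in field:
        psi = step(psi, e)
    return abs(np.vdot(target, psi)) ** 2

F0 = fidelity(eps)
for _ in range(100):
    # (1) forward propagation with the old field
    psi = psi0.copy()
    for e in eps:
        psi = step(psi, e)
    # (2) backward propagation of the co-state, chi(T) = <target|psi(T)> |target>
    chis = np.empty((nt, 2), dtype=complex)
    chi = np.vdot(target, psi) * target
    for k in range(nt - 1, -1, -1):
        chi = step(chi, eps[k], sign=-1.0)   # one step backward in time
        chis[k] = chi
    # (3) forward propagation with the new field, updating sequentially
    psi = psi0.copy()
    for k in range(nt):
        eps[k] += (1.0 / lam) * np.imag(np.vdot(chis[k], sx @ psi))
        psi = step(psi, eps[k])

print(F0, fidelity(eps))   # fidelity grows from ~0.23 toward 1
```

The immediate feedback in step (3), i.e. using the already-updated field to propagate ψ, is what distinguishes the Krotov scheme from plain gradient ascent.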
functionals, or: how to convey the desired physics to the OCT algorithm
objective functionals / costs

J[{ψ_k(t)}, ε(t)] = J_T[{ψ_k(T)}] + J_t[{ψ_k(t)}, ε(t)]

J_T: final-time target; J_t: intermediate-time targets (time-dependent and state-dependent costs)
functionals of the field ε(t): explicitly, or implicitly through ψ_k(t), ψ_k(T)
final-time objectives J_T

(i) state-to-state transfer
(ii) unitary transformation
goal: perform a two-qubit gate on the logical basis {|00⟩, |01⟩, |10⟩, |11⟩}
example: phase gate for two Ca atoms

qubit encoding & two-atom Hamiltonian (see level-scheme figure)
target: a true two-qubit phase φ = φ₀₀ − φ₀₁ − φ₁₀ + φ₁₁
d = 5 nm; d = 200 nm: scaling up the interaction (C₃ coefficient)
limiting factor for a fast gate: sufficient time to resolve the ground-state motional dynamics
(will play a role in any scheme based on resonant excitation)
what about flexibility in the single-qubit phases?
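The flexibility question can be made concrete: for a diagonal gate, the nonlocal phase φ = φ₀₀ − φ₀₁ − φ₁₀ + φ₁₁ is unchanged by single-qubit phase rotations. A quick numerical check (the random phase values are purely illustrative):

```python
import numpy as np

def nonlocal_phase(p):
    """phi = phi00 - phi01 - phi10 + phi11 for a diagonal two-qubit gate."""
    p00, p01, p10, p11 = p
    return p00 - p01 - p10 + p11

rng = np.random.default_rng(1)
phis = rng.uniform(0, 2 * np.pi, 4)          # arbitrary diagonal-gate phases
alpha, beta = rng.uniform(0, 2 * np.pi, 2)   # single-qubit phase rotations

# a phase alpha on qubit 1's |1> and beta on qubit 2's |1> shifts the
# diagonal phases to (p00, p01 + beta, p10 + alpha, p11 + alpha + beta)
shifted = phis + np.array([0.0, beta, alpha, alpha + beta])

# the single-qubit contributions cancel in the combination phi
print(nonlocal_phase(phis) - nonlocal_phase(shifted))   # ~0 (up to rounding)
```

So any optimization that pins down all four phases individually over-constrains the problem; only the combination φ is a genuine two-qubit requirement.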
final-time objectives J_T

J_T = −(λ₀/N) Re Tr[Ô⁺ P̂_N Û(T,0;ε) P̂_N]

a real-valued, phase-sensitive functional
Ô: target operator; λ₀: weight; N = dim{Ô}
P̂_N: projector onto the subspace of Ô; Û(T,0;ε): actual time evolution
state-to-state transfer: Ô = |ψ_target⟩⟨ψ_target|, N = 1; single-qubit gate: N = 2; two-qubit gate: N = 4
cf. Palao & Kosloff, PRA 68, 062308 (2003)
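A minimal numerical sketch of this trace overlap (for brevity P̂_N is taken as the identity on a 4-dimensional space, i.e. the logical subspace is the full space, and λ₀ is omitted), illustrating the phase sensitivity:

```python
import numpy as np

def trace_overlap(O, U, P):
    """F = Re Tr[O^+ P U P] / N, with N = Tr P the logical-subspace dimension."""
    N = int(round(np.trace(P).real))
    return (np.trace(O.conj().T @ P @ U @ P) / N).real

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
P = np.eye(4, dtype=complex)   # logical subspace = full space in this toy case

print(trace_overlap(CNOT, CNOT, P))   # 1.0: perfect gate
# phase sensitivity: a global phase exp(i*phi) on U reduces F to cos(phi)
print(trace_overlap(CNOT, np.exp(1j * np.pi / 4) * CNOT, P))
```

The second call shows why this functional fixes the global phase of the gate, which is often more than one actually needs.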
intermediate-time objectives J_t

assumption: additive costs

J_t = ∫₀ᵀ { g_a[ε(t)] + g_b[ψ(t)] } dt

examples:
g_a[ε(t)] = λ_a S(t) [ε(t) − ε_ref(t)]²
  minimization of the field intensity (ε_ref(t) = 0) or of the change in field intensity (ε_ref(t) = ε_old)
  the choice of ε_ref(t) determines update vs replacement rule!
g_b[ψ(t)] = λ_b ⟨ψ(t)|D̂(t)|ψ(t)⟩
  D̂(t): target operator; λ_a, λ_b: weights; S(t): switch/shape function
time-dependent targets

g_b[ψ(t)] = λ_b ⟨ψ(t)|D̂(t)|ψ(t)⟩

prescribing a desired evolution, e.g. a timed sequence of level populations:
D̂(t) = |6⟩⟨6| Θ(T₁−t) + |1⟩⟨1| Θ(t−T₁)Θ(T₂−t) + |7⟩⟨7| Θ(t−T₂)Θ(T₃−t) + |2⟩⟨2| Θ(t−T₃)Θ(T−t)
Ndong, Tal-Ezer, Kosloff, CPK, JCP 130, 124108 (2009)

keeping the dynamics in a subspace: D̂(t) = P̂_allow
Palao, Kosloff, CPK, PRA 77, 063412 (2008)
where are we? outline!

1. introduction: optimal control (terminology & basic concepts; functionals: how to convey the physics to the algorithm)
2. a functional to optimize the entangling power of two-qubit gates (local equivalence classes; the new functional; 2nd-order Krotov algorithm)
3. application to a Rydberg gate in cold atoms
4. where can we go from here?
local equivalence classes
classification of two-qubit gates

G = SU(4): group of all two-qubit gates
K = SU(2) ⊗ SU(2): local gates
G/K = SU(4)/SU(2)⊗SU(2): nonlocal gates

Cartan decomposition of the Lie algebra: su(4) = k ⊕ p

Û = k̂₁ exp[ −(i/2) Σ_{j=x,y,z} c_j σ̂_j¹ σ̂_j² ] k̂₂,  with k̂₁, k̂₂ ∈ K

Zhang, Vala, Sastry, Whaley, PRA 67, 042313 (2003)
Yuan & Khaneja, PRA 72, 040301 (2005)
local invariants
m = U_B^T U_B,  U_B = Q̂⁺ Û Q̂  (i.e. Û in the Bell basis)

g₁ = Re{ Tr[m]² / (16 det Û) }
g₂ = Im{ Tr[m]² / (16 det Û) }
g₃ = ( Tr[m]² − Tr[m²] ) / (4 det Û)

g₁, g₂, g₃ define the local equivalence class [Û], i.e. a class of two-qubit gates that are equivalent up to local (single-qubit) operations
Zhang, Vala, Sastry, Whaley, PRA 67, 042313 (2003)
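These invariants are easy to evaluate numerically. A minimal sketch (the Bell-basis transformation Q̂ follows the 'magic basis' phase convention of Zhang et al.; the printed class values for the identity and CNOT are the standard ones):

```python
import numpy as np

# 'magic'/Bell basis transformation (phase convention of
# Zhang et al., PRA 67, 042313 (2003))
Q = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]], dtype=complex) / np.sqrt(2)

def local_invariants(U):
    """g1, g2, g3 of a two-qubit gate U (4x4 unitary)."""
    UB = Q.conj().T @ U @ Q          # U in the Bell basis
    m = UB.T @ UB
    det = np.linalg.det(U)
    G1 = np.trace(m) ** 2 / (16 * det)
    g3 = (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * det)
    return G1.real, G1.imag, g3.real

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(local_invariants(np.eye(4)))   # identity class: (1, 0, 3)
print(local_invariants(CNOT))        # CNOT class:     (0, 0, 1)
```

Note that the det Û factors make the invariants insensitive to a global phase of Û, in contrast to the trace-overlap functional above.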
Weyl chamber
[figure: the Weyl chamber, the tetrahedral region of Cartan coordinates (c₁, c₂, c₃) representing the local equivalence classes]
the new functional

optimization target: [Ô] instead of Ô

(old) functional to obtain Ô:
J_T = −(λ₀/N) Re Tr[Ô⁺ P̂_N Û(T,0;ε) P̂_N]

(new) functional to obtain [Ô]:
J_T = Δg₁² + Δg₂² + Δg₃²,  with Δg_i = g_i(Ô) − g_i(Û)
and g_i(Ô) the local invariants of Ô

remember: J = J[{ψ_k(t)}, ε(t)]
to carry out the variations, we need to express the g_i in terms of the propagated basis states ψ_k(T)
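Before turning to the variation, note that the new functional itself is cheap to evaluate: it is a squared distance in invariant space. A sketch, with the same magic-basis assumption as above; CPHASE(π) = diag(1,1,1,−1) is locally equivalent to CNOT, so the distance vanishes, and dressing with random single-qubit unitaries does not change it:

```python
import numpy as np

# 'magic' basis (Zhang et al., PRA 67, 042313 (2003))
Q = np.array([[1, 0, 0, 1j], [0, 1j, 1, 0],
              [0, 1j, -1, 0], [1, 0, 0, -1j]], dtype=complex) / np.sqrt(2)

def g(U):
    UB = Q.conj().T @ U @ Q
    m = UB.T @ UB
    d = np.linalg.det(U)
    G1 = np.trace(m) ** 2 / (16 * d)
    g3 = (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * d)
    return np.array([G1.real, G1.imag, g3.real])

def J_T(U, O):
    """J_T = sum_i (g_i(O) - g_i(U))^2: zero iff U and O share a class."""
    return float(np.sum((g(O) - g(U)) ** 2))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
CPHASE = np.diag([1, 1, 1, -1]).astype(complex)   # CPHASE(pi)

# random local unitaries via QR decomposition of complex Gaussian matrices
rng = np.random.default_rng(0)
u1, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
u2, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

print(J_T(CPHASE, CNOT))                  # ~0: same local equivalence class
print(J_T(np.eye(4), CNOT))               # 5.0: identity is in a different class
print(J_T(np.kron(u1, u2) @ CNOT, CNOT))  # ~0: invariant under local operations
```

This is exactly the freedom the functional is meant to exploit: the optimizer is free to land anywhere in [Ô].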
functional based on local invariants

using the definition of the invariants and of the Bell basis, and after quite some algebra:

J_T = f₁² + f₂² + f₃² + f₄² + f₅

where f₁, …, f₄ are lengthy polynomials in the real and imaginary parts of the propagated logical basis states,
(φ_k)_m = Re[⟨m|ψ_k(T)⟩],  (φ̄_k)_m = Im[⟨m|ψ_k(T)⟩],  m = 1, …, dim(H),
with coefficients a₀ = Tr²(m_O)/(16 det Ô) and b₀ = [Tr²(m_O) − Tr(m_O²)]/(4 det Ô)
[explicit expressions omitted: sums over basis-index pairs k, l of up-to-quartic products of the overlaps, weighted by the real and imaginary parts of a₀ det(Û) and b₀ det(Û)]

problem: J_T is an 8th-degree polynomial in {φ_k, φ̄_k} → nonconvex
optimization of nonconvex functionals

(old) functional to obtain Ô: J_T = −(λ₀/N) Re Tr[Ô⁺ P̂_N Û(T,0;ε) P̂_N] → quadratic
(new) functional to obtain [Ô]: J_T = Δg₁² + Δg₂² + Δg₃² → nonconvex

for nonconvex functionals, local optima may exist: how to ensure monotonic convergence?
2nd order Krotov algorithm
basic concept

ingredients:
  final-time target J_T[{ψ_k(T)}]
  time-dep. targets / costs g_a[ε] + g_b[ψ(t)]
  equations of motion: i ∂_t ψ(t) = Ĥ(t) ψ(t), ψ(t₀) = ψ₀

construction of an auxiliary functional L:
L[ψ, ε, Φ] = J[ψ, ε] for any ψ fulfilling the equations of motion
choose the arbitrary scalar potential Φ[ψ, t] such that
L[ψ⁽ⁱ⁾, ε⁽ⁱ⁾, Φ] ≥ L[ψ⁽ⁱ⁺¹⁾, ε⁽ⁱ⁺¹⁾, Φ]
→ building in monotonic convergence

Sklarz & Tannor, PRA 66, 053619 (2002)
auxiliary functional L

L[ψ, ε, Φ] = G[ψ(T)] − Φ[ψ(0), 0] − ∫₀ᵀ R[ψ(t), ε(t), t] dt

final-time contribution:
G[ψ(T)] = J_T[ψ(T)] + Φ[ψ(T), T]

intermediate-time contribution:
R[ψ(t), ε(t), t] = −( g_a[ε(t)] + g_b[ψ(t)] ) + ∂Φ/∂t + Σ_{k=1}^N (∂Φ/∂ψ_k)·f_k[ψ, ε, t] + (∂Φ/∂ψ̄_k)·f̄_k[ψ, ε, t]

(f_k: right-hand side of the equation of motion for ψ_k)

the choice of Φ[ψ(t), t] completely determines G, R, L
central idea of Krotov's method

goal: minimization of L, resp. J_T → two-step solution

1. we need an extremum of Φ in ψ⁽ⁱ⁾: ∂G/∂ψ|₍ᵢ₎ = 0 and ∂R/∂ψ|₍ᵢ₎ = 0
→ equation for backward propagation: the co-states χ evolve backward in time under the adjoint dynamics, with an inhomogeneity from g_b and the 'initial' condition
χ(T) = −∂J_T/∂ψ|₍ᵢ₎(T)
central idea of Krotov's method

2. we want a minimum of L, i.e. a minimum of G & a maximum of R
but L is changed by both changes in ε and changes in ψ

Krotov's solution:
(i) choose Φ such that the extremum at ψ⁽ⁱ⁾ is the worst possible choice with respect to any change in the states, i.e. L is maximal in ψ at ψ⁽ⁱ⁾ for fixed ε⁽ⁱ⁾
(ii) then any change in the field from ε⁽ⁱ⁾ to ε⁽ⁱ⁺¹⁾ obtained from the minimization of L,
ε⁽ⁱ⁺¹⁾(t) = arg max_ε R[ψ⁽ⁱ⁺¹⁾(t), ε(t), t],
i.e. ∂R/∂ε|₍ᵢ₊₁₎ = 0 and ∂²R/∂ε²|₍ᵢ₊₁₎ < 0,
will lead to a monotonic improvement

Konnov & Krotov, Automation and Remote Control 60, 10 (1999)
central idea of Krotov's method

Krotov's solution:
(i) the optimization with respect to changes in the states is translated into the construction of Φ at the extremum ψ⁽ⁱ⁾
(ii) convergence for the step in the field, ε⁽ⁱ⁾ → ε⁽ⁱ⁺¹⁾, is assured globally for
ΔR = R[ψ⁽ⁱ⁺¹⁾, ε⁽ⁱ⁺¹⁾, t] − R[ψ⁽ⁱ⁺¹⁾, ε⁽ⁱ⁾, t] ≥ 0
a global optimum would be found if we could actually implement
ε⁽ⁱ⁺¹⁾(t) = arg max_ε R[ψ⁽ⁱ⁺¹⁾(t), ε(t), t]
Krotov's step (i): second-order construction of Φ

Krotov's ansatz: construct Φ to second order in the states,

Φ(t, ψ) = Σ_{k=1}^N ⟨χ_k⁽ⁱ⁾(t)|ψ_k⟩ + c.c. + (1/2) Σ_{k,l=1}^N ⟨ψ_k − ψ_k⁽ⁱ⁾| σ̂_{kl}(t) |ψ_l − ψ_l⁽ⁱ⁾⟩

choose σ̂_{kl}(t) such that the maximum condition for G and the minimum condition for R are fulfilled

construction of σ̂_{kl}(t): can be done locally or globally
  Sklarz/Tannor's discussion is local (but the results coincide with the global derivation due to the choice of J_T)
  Krotov: constructive proof for the global conditions
  the derivation for the global conditions leads to a much simpler solution, a scalar exponential σ(t) ∝ e^{B̄(T−t)} times the identity, instead of a fourth-degree tensor

Konnov & Krotov, Automation and Remote Control 60, 10 (1999)
Krotov's proof

main idea: assure that nothing can go wrong for very large & very small Δψ and very large Δε

If:
1. the right-hand side of the equation of motion, f(t, ψ, ε), is bounded; specifically, for large values of the state vector ψ it does not grow faster than quadratically in ψ, for all t and all admissible fields ε
2. the Jacobian of the right-hand side of the equations of motion, J, is bounded for any time t, field ε and state vector ψ
3. the functionals J_T(ψ) and g(ψ, ε, t) are twice differentiable and bounded; in particular, for large ψ they do not grow faster than quadratically in ψ

then an explicit choice σ(t) ∝ e^{B̄(T−t)} exists that guarantees monotonic convergence
quantum control

state vectors ψ⁽ⁱ⁾, ψ⁽ⁱ⁺¹⁾ are inherently bounded
→ the boundedness conditions are already guaranteed if f(t, ψ, ε), J, J_T(ψ) and g(ψ, ε, t) are regular
→ the change in the states, Δψ, lives in a compact subset of R^{2NM}
fulfilling ΔG(Δψ) ≤ 0

ΔG = (1/2) σ(T) (Δψ·Δψ) + J_T[ψ⁽ⁱ⁾(T) + Δψ] − J_T[ψ⁽ⁱ⁾(T)] − ∇J_T|₍ᵢ₎·Δψ

for Δψ = 0: ΔG = 0
for Δψ ≠ 0: bound the remaining Taylor remainder of J_T by
A = sup_Δψ { [J_T(ψ⁽ⁱ⁾ + Δψ) − J_T(ψ⁽ⁱ⁾) − ∇J_T|₍ᵢ₎·Δψ] / (Δψ·Δψ) }
→ the condition σ(T) ≤ −2A ensures ΔG ≤ 0
fulfilling ΔR(Δψ) ≥ 0

expanding R around ψ⁽ⁱ⁾ yields, schematically,
ΔR = (1/2) σ̇(t) (Δψ·Δψ) + σ(t) Δψ·Δf + higher-order contributions of f and g

for Δψ = 0: ΔR = 0 for all t
for Δψ ≠ 0: with the bounds
B = sup_{Δψ(t)∈R^{2NM}, t∈[0,T]} [ Δψ·Δf / (Δψ·Δψ) ]  (growth of the equation-of-motion right-hand side)
C = inf_{Δψ(t)∈R^{2NM}, t∈[0,T]} [ second-order part of g / (Δψ·Δψ) ]
the condition (1/2) σ̇(t) − σ(t)·B + C > 0 ensures ΔR ≥ 0
maximizing L with respect to Φ

collecting the two conditions,
σ(T) ≤ −2A and (1/2) σ̇(t) − σ(t)·B + C > 0,
one solution is
σ(t) = −e^{B̄(T−t)} (Ā − C̄/B̄) − C̄/B̄
with B̄ = 2B + ε′, C̄ = min(ε′, 2C − ε′) and Ā = max(ε′, 2A + ε′) for some ε′ > 0
(more general solutions of the exponential form σ(t) ∝ e^{B̄(T−t)} exist)
how to get A, B, C?

A, B and C bound Taylor expansions of certain quantities, starting at first or second order → estimate the remainder (Lagrange form):

W(ψ⁽ⁱ⁾ + Δψ) = Σ_{ν<n} (1/ν!) (∇^ν W)|₍ᵢ₎·Δψ^ν + R⁽ⁱ⁾_{Δψ,n}
|R⁽ⁱ⁾_{Δψ,n}| ≤ (1/n!) M_n^W(ψ⁽ⁱ⁾) ‖Δψ‖ⁿ,  M_n^W(ψ⁽ⁱ⁾) = sup ‖∇^n W‖ along the segment from ψ⁽ⁱ⁾ to ψ⁽ⁱ⁾ + Δψ

estimate that is independent of ψ⁽ⁱ⁾:
M_n^W = sup_{ψ⁽ⁱ⁾∈X} M_n^W(ψ⁽ⁱ⁾)
estimate of A

A = sup [ J_T,₂ / (Δψ·Δψ) ], the beyond-first-order part of J_T
estimate J_T,₂ by its Lagrange remainder:
A ≤ (1/2) M₂^{J_T} = (1/2) sup ‖∇²J_T‖
for functionals J_T that are linear or quadratic in ψ: A = 0
estimate of B

B ≤ sup over the states ψ(t), the fields ε, and t ∈ [0,T] of the change of f relative to the change of state (a matrix norm of the Jacobian)
for Hamiltonians that do not depend on the state:
B = sup_{ψ, t∈[0,T]} [ Δψ·(∂f/∂ψ)Δψ / (Δψ·Δψ) ] ≤ sup_{t∈[0,T]} ‖∂f/∂ψ‖_mat
for unitary evolution: B = 0
for non-unitary evolution: the maximum eigenvalue
estimate of C

the estimate of C combines a first-order bound M₁ on the co-state contribution with a second-order bound on g:
C ≳ −sup_{t∈[0,T]} (M₁·‖Δψ(t)‖) − (1/2) sup ‖∇²g‖

for a state-independent Ĥ and g depending on ψ at most linearly: σ₁ = 0 and g₂ = 0 → C = 0

for certain (!) functionals and equations of motion: A = 0 & B = 0 & C = 0
→ σ(t) = 0, and the second-order contribution to Φ vanishes:
Palao-Kosloff version of the Krotov method (Krotov-PK), still ensuring monotonic convergence globally
Krotov's step (ii): equation for the field

remember:
∂R/∂ε|₍ᵢ₊₁₎ = 0,  ΔR = R[ψ⁽ⁱ⁺¹⁾, ε⁽ⁱ⁺¹⁾, t] − R[ψ⁽ⁱ⁺¹⁾, ε⁽ⁱ⁾, t] ≥ 0

equations of motion in a basis-set expansion (real and imaginary parts): 2M × 2M matrices with blocks H^{k,R}, H^{k,I}

the first-order condition yields

ε⁽ⁱ⁺¹⁾(t) = ε_ref(t) + (1/(2λ_a S(t))) [ Σ_{k=1}^N Σ_{m,n=1}^{2M} χ⁽ⁱ⁾_{km} (∂H/∂ε)_{mn} ψ⁽ⁱ⁺¹⁾_{kn} + σ(t) Σ_{k=1}^N Σ_{m,n=1}^{2M} Δψ_{km} (∂H/∂ε)_{mn} ψ⁽ⁱ⁺¹⁾_{kn} ]

→ we now have an algorithm that is monotonically convergent for arbitrary targets/constraints
remark: combine Krotov with BFGS

1. Quasi-Newton algorithms approximately solve an extremization problem using information from the second-order Taylor expansion of the function.
2. BFGS is a quasi-Newton algorithm using a rank-two update formula, involving only gradient information, to approximate the (inverse) Hessian; for convex functions it is globally and monotonically convergent if supplemented by a line search fulfilling the Wolfe conditions.
3. L-BFGS uses only information from the gradients and state vectors of previous steps, solving the memory problem of storing the approximate inverse Hessian; under certain additional assumptions, convergence remains monotonic.
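As a stand-alone illustration of a quasi-Newton optimizer on a control problem (SciPy's L-BFGS-B with finite-difference gradients on a two-level toy model; this is plain L-BFGS on the pulse, not the combined Krotov/L-BFGS scheme discussed here):

```python
import numpy as np
from scipy.optimize import minimize

# toy control problem (assumption): piecewise-constant field eps_k driving
# H = eps(t)*sigma_x on [0,1], target |0> -> |1>
sx = np.array([[0, 1], [1, 0]], dtype=complex)
nt = 10
dt = 1.0 / nt

def infidelity(eps):
    """1 - |<1|psi(T)>|^2 for the piecewise-constant field eps."""
    psi = np.array([1, 0], dtype=complex)
    for e in eps:
        th = e * dt
        psi = np.cos(th) * psi - 1j * np.sin(th) * (sx @ psi)
    return 1.0 - abs(psi[1]) ** 2

# jac=None: L-BFGS-B builds its inverse-Hessian approximation from
# finite-difference gradients
res = minimize(infidelity, x0=0.5 * np.ones(nt), method="L-BFGS-B")
print(res.fun)   # ~0: any field with pulse area sum(eps)*dt = pi/2 is optimal
```

Because the optimum is a whole manifold of pulses (only the total pulse area matters in this toy model), which solution L-BFGS-B lands on depends on the initial guess.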
remark: combine Krotov with BFGS

remember:
Δε(t)_Krotov = (1/(2λ_a S(t))) [ Σ_{k,m,n} χ⁽ⁱ⁾_{km} (∂H/∂ε)_{mn} ψ⁽ⁱ⁺¹⁾_{kn} + σ(t) Σ_{k,m,n} Δψ_{km} (∂H/∂ε)_{mn} ψ⁽ⁱ⁺¹⁾_{kn} ]

compare gradient ascent with BFGS (for linear dependence of Ĥ on ε!):
Δε(t)_BFGS = B̂⁻¹(t) Σ_{k,m,n} χ⁽ⁱ⁾_{km} (∂H/∂ε)_{mn} ψ⁽ⁱ⁾_{kn},  with B̂(t) the approximated Hessian

→ choose λ_a S(t) according to L-BFGS
this does not affect the monotonicity that is ensured by Krotov's construction
where are we? outline!

1. introduction: optimal control (terminology & basic concepts; functionals: how to convey the physics to the algorithm)
2. a functional to optimize the entangling power of two-qubit gates (local equivalence classes; the new functional; 2nd-order Krotov algorithm)
3. application to a Rydberg gate in cold atoms
4. where can we go from here?
Rydberg qubits

one-atom level scheme (qubit states |0⟩, |1⟩; intermediate state |i⟩; Rydberg state |r⟩); atoms held in optical tweezers

one-atom Hamiltonian:
Ĥ⁽¹⁾ = Σ_{a∈{0,1,i,r}} |a⟩⟨a| [T̂ + V_trap^a(r̂)] + ε_B(t) μ(r̂) (|0⟩⟨i| + |i⟩⟨0|) + ε_R(t) μ(r̂) (|i⟩⟨r| + |r⟩⟨i|)

Gaëtan et al., Nat. Phys. 5, 115 (2009)
Rydberg qubits

two-atom level scheme: interaction u = u(r₁ − r₂) between the doubly excited Rydberg states

two-atom Hamiltonian:
Ĥ = Ĥ₁⁽¹⁾ ⊗ 1₄,₂ ⊗ 1_{r₂} + 1₄,₁ ⊗ 1_{r₁} ⊗ Ĥ₂⁽¹⁾ + Ĥ_int⁽¹,²⁾
Ĥ_int⁽¹,²⁾ = |rr⟩⟨rr| u₀ / |r̂₁ − r̂₂|³

Jaksch et al., PRL 85, 2208 (2000)
controlled Rydberg phase gate

gate time T = 20 ns
[figure: time evolution of the logical states |00⟩, |01⟩, |10⟩, |11⟩ for the functional to obtain Ô (left) and the functional to obtain [Ô] (right)]
F = 0.993 (functional for Ô) vs F = 0.996 (functional for [Ô])
controlled Rydberg phase gate

gate time T = 20 ns
[figure: optimized pulse envelopes over 0-20 ns for the functional to obtain Ô and the functional to obtain [Ô]]
F = 0.993 (functional for Ô) vs F = 0.996 (functional for [Ô])
controlled Rydberg phase gate

approaching the quantum speed limit
[figure: gate error vs number of iterations (up to 5000), on a log scale from 10⁰ down to 10⁻⁴, for gate times T = 15 ns, 20 ns and 30 ns]
controlled Rydberg phase gate

to be continued:
  analyse error sources
  check the role of the pulse parametrization
  test with a Hamiltonian allowing for a non-diagonal Ô
summary

optimal control is an extremely versatile tool, but you need to know how to ask questions!
we derived a new class of optimization functionals suitable for quantum information purposes
  based on the geometric classification of entangling operations (Cartan decomposition & representation in the Weyl chamber)
  requires an optimization algorithm ensuring monotonic convergence → 2nd-order Krotov method
first results for a Rydberg gate are encouraging; the full power of the approach still needs to be explored
where can we go from here?

1. optimize for the complete Weyl chamber, i.e. for an arbitrary perfect entangler
   problem: no simple inversion of g₁, g₂, g₃ → c₁, c₂, c₃
   solution: define an ellipsoid in g-space containing almost all of the Weyl chamber
2. optimize for a specified trajectory in the Weyl chamber ...
3. …

thank you!