GAUSSIAN CLASSICAL CAPACITY OF GAUSSIAN QUANTUM CHANNELS

E. Karpov1, J. Schäfer1, O.V. Pilyavets1, R. García-Patrón2,1, N.J. Cerf1

1 QuIC, Ecole Polytechnique de Bruxelles, CP 165, Université Libre de Bruxelles, 1050 Brussels, Belgium

2 Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Straße 1, 85748 Garching, Germany

ekarpov@ulb.ac.be

PACS 03.67.Hk, 89.70.Kn

The classical capacity of quantum channels is the tight upper bound for the transmission rate of classical information. This is a quantum counterpart of the foundational notion of the channel capacity introduced by Shannon. Bosonic Gaussian quantum channels provide a good model for optical communications. In order to properly define the classical capacity for these quantum systems, an energy constraint at the channel input is necessary, as in the classical case. A further restriction to Gaussian input ensembles defines the Gaussian (classical) capacity, which can be studied analytically. It also provides a lower bound on the classical capacity and moreover, it is conjectured to coincide with the classical capacity. Therefore, the Gaussian capacity is a useful and important notion in quantum information theory. Recently, we have shown that the study of both the classical and Gaussian capacity of an arbitrary single-mode Gaussian quantum channel can be reduced to the study of a particular fiducial channel. In this work we consider the Gaussian capacity of the fiducial channel, discuss its additivity and analyze its dependence on the channel parameters. In addition, we extend previously obtained results on the optimal channel environment to the single-mode fiducial channel. In particular, we show that the optimal channel environment for the lossy, amplification, and phase-conjugating channels is given by a pure quantum state if its energy is constrained.

Keywords: Quantum channels, Information transmission, Quantum Information, Channel Capacity.

1. Introduction

Information transmission and processing are ubiquitous in modern human society. By the end of the 20th century, information technologies experienced tremendous growth accompanied by an "exponential" downscaling of the hardware elements. Simple extrapolation shows that the element size will soon reach the level where quantum effects cannot be neglected. This is one of the reasons why the interdisciplinary field known as quantum information theory appeared. Another reason comes from the possibility of applying particular properties of quantum systems in order to solve problems which are intractable using only classical means. Information theory provides a quantitative measure of information and the tools for studying information transmission through communication channels. A fundamental quantity characterizing their performance is the maximal achievable rate at which information can be reliably transmitted. This tight upper bound is called the capacity of the communication channel. If the quantum nature of the information carriers is taken into account, one has to describe communication channels as transformations of quantum states. One of the most general transformations allowed by quantum mechanics is a completely positive trace-preserving map, which is identified with a quantum channel.

In addition to information in the usual sense, which can be measured in bits, in quantum information theory one also introduces other types of information related to the non-classical properties of quantum states, e.g., entanglement. In this paper we discuss the classical capacity of quantum channels, focusing on bosonic Gaussian quantum channels. They provide a realistic model for information transmission via optical communication lines.

The paper is organized as follows. In Section 2 we define the classical capacity of quantum channels. In Section 3 we describe Gaussian quantum channels. In Section 4 we introduce the Gaussian capacity and the type of Gaussian ensembles that achieve this capacity. In Section 5 we present our recently proposed decomposition of Gaussian channels in terms of a particular fiducial channel, find the Gaussian capacity of the fiducial channel and discuss its additivity. In Section 6 we apply this decomposition to maximize the Gaussian capacity of the fiducial channel over the set of states of the environment mode which respect an energy constraint. In Section 7 we present our conclusion.

2. Classical capacity of quantum channels

Quantum channels are completely positive trace-preserving (CPTP) maps Φ that act on density operators ρ defined on a Hilbert space H [1]. The transmission of classical information by quantum channels involves an encoding of classical symbols (an alphabet) into a set of quantum input states ρ_i. An input state transmitted via a quantum channel Φ is transformed to the output state ρ_out,i = Φ[ρ_in,i]. Depending on the coding scheme, each individual symbol state ρ_in,i is used for the information transmission with some probability p_i; therefore, the average input state sent through the channel is ρ_in = Σ_i p_i ρ_in,i. Since the CPTP map is linear, the average output state is ρ_out = Φ[ρ_in] = Σ_i p_i Φ[ρ_in,i] = Σ_i p_i ρ_out,i. In other words, the channel outputs the state ρ_out,i with probability p_i. The so-called Holevo χ-quantity given by the following equation [1]

χ[Φ, {p_i, ρ_i}] = S(ρ_out) − Σ_i p_i S(ρ_out,i)    (1)

provides a tight upper bound for the maximal amount of information that one can extract from the output ensemble {Φ[ρ_i], p_i} by using all possible measurements. Then, the supremum of the Holevo χ-quantity over the whole set of input ensembles

C_χ(Φ) = sup_{p_i, ρ_i} χ[Φ, {p_i, ρ_i}]    (2)

gives the tight upper bound on the amount of information that can be transmitted on average by one invocation of the quantum channel Φ, provided that the input symbol states are not entangled over different channel uses [2], [3]. This quantity is called the one-shot capacity. However, one may increase the amount of information transmitted per channel use by entangling the input states over a sequence of channel uses. Therefore, the classical capacity is defined by the limit [2], [3]

C(Φ) = lim_{m→∞} (1/m) C_χ(Φ^{⊗m}) ≥ C_χ(Φ).    (3)

If the equality C(Φ) = C_χ(Φ) holds, then the classical capacity is additive. The additivity of the classical capacity of quantum channels had long been an open problem until Hastings presented an example of a channel whose capacity is non-additive [4]. Hence, additivity has to be studied for each particular channel individually. We focus our study on bosonic Gaussian channels, which constitute an important part of "Gaussian quantum information" [5].

3. Gaussian channels

Bosonic systems are so-called continuous-variable systems described by observables with continuous spectra acting on states defined in an infinite-dimensional Hilbert space. The typical example of a bosonic system is the quantized electromagnetic field, seen as a collection of quantum harmonic oscillators (bosonic modes). The infinite-dimensional Hilbert space of each mode is spanned by a countable basis of Fock states (number states), which are the eigenstates of the number operator, N̂|n⟩ = n|n⟩, where n is a non-negative integer and the number operator N̂ = â†â is defined via the bosonic creation and annihilation operators that act as follows:

â|0⟩ = 0,   â|n⟩ = √n |n − 1⟩ for n ≥ 1,   â†|n⟩ = √(n + 1) |n + 1⟩.    (4)

These operators satisfy the bosonic commutation relation [â, â†] = 1 (throughout the paper we use natural units ħ = ω = 1).

A convenient representation of these infinite-dimensional systems is the phase-space representation based on the use of quadrature operators

q̂ = (â + â†)/√2,   p̂ = (â − â†)/(i√2).    (5)

These operators have a continuous spectrum and satisfy the same canonical commutation relations as position and momentum operators. For m bosonic modes one defines a vector of quadrature operators

x̂ = (x̂_1, x̂_2, ..., x̂_2m)^T = (q̂_1, p̂_1, ..., q̂_m, p̂_m)^T.    (6)

Then the canonical commutation relation is expressed as [x̂_i, x̂_j] = iΩ_ij, where Ω_ij is the matrix element of the symplectic matrix

Ω = ⊕_{n=1}^{m} (  0   1 )
                ( −1   0 ).    (7)

In this representation a quantum state ρ of m modes is described by its Wigner function:

W(x) = ∫_{R^m} [d^m ξ / (2π)^m] ⟨q + ξ/2| ρ |q − ξ/2⟩ e^{−i p·ξ},    (8)

where |q⟩ is an eigenstate of the operator q̂ = (q̂_1, q̂_2, ..., q̂_m)^T. The Wigner function is commonly called a quasiprobability distribution because, on one hand, its marginals provide valid probability distributions for both quadratures q and p. On the other hand, it may take negative values and, in any case, it cannot be a joint probability distribution of the values of the observables q and p, because if such a distribution existed it would violate the Heisenberg uncertainty relation. In order to define the Wigner function one has to know, in general, its values at all points of the 2m-dimensional phase space. However, the number of parameters that determine the Wigner function of a Gaussian state can be substantially reduced.

Given a density operator ρ, one defines the displacement vector of the first moments

d = Tr[x̂ ρ]    (9)

and the covariance matrix (CM) V of the second "centered" moments of the quadratures

V_ij = (1/2) Tr[{x̂_i − d_i, x̂_j − d_j} ρ],    (10)

where {, } is the anticommutator. Then the Wigner function of a Gaussian state is completely determined by the displacement vector d and the covariance matrix V:

W(x) = exp[−(1/2)(x − d)^T V^{−1} (x − d)] / [(2π)^m √(det V)].    (11)
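As a quick numerical illustration of Eq. (11), the following minimal Python sketch (the function and variable names are ours, not from the paper) evaluates the Gaussian Wigner function for a given displacement vector and covariance matrix; for the single-mode vacuum state it reproduces the value W(0) = 1/π.

```python
import numpy as np

def gaussian_wigner(x, d, V):
    # Wigner function (11) of a Gaussian state with displacement d and CM V,
    # evaluated at the phase-space point x (all vectors of length 2m).
    m = len(x) // 2
    dx = x - d
    norm = (2 * np.pi) ** m * np.sqrt(np.linalg.det(V))
    return np.exp(-0.5 * dx @ np.linalg.solve(V, dx)) / norm

# Single-mode vacuum state: d = 0, V = I/2; at the origin W = 1/pi ~ 0.3183
print(gaussian_wigner(np.zeros(2), np.zeros(2), 0.5 * np.eye(2)))
```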

Quantum channels which preserve the "Gaussian" property of quantum states are called Gaussian channels. They are CPTP maps that map the set of Gaussian states into itself. Any such transformation of m-mode Gaussian input states to m-mode Gaussian output states is given by its action on the parameters determining the state

d_out = X d_in + d_ch,    (12)

V_out = X V_in X^T + Y,    (13)

where d_ch is the displacement introduced by the channel, X is a real 2m × 2m matrix, and Y is a real, symmetric and positive-semidefinite 2m × 2m matrix fulfilling the following condition:

Y + (i/2)(Ω − X Ω X^T) ≥ 0.    (14)

The matrices X, Y and the vector d_ch completely define a Gaussian channel Φ(X, Y, d_ch).
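The action (12)-(13) and the complete-positivity condition (14) are easy to check numerically. The following sketch is our own illustration (the helper names are not from the paper): it applies a single-mode Gaussian channel to a covariance matrix and verifies condition (14), using the lossy channel with a vacuum environment as an example.

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # single-mode symplectic form, Eq. (7)

def apply_gaussian_channel(d_in, V_in, X, Y, d_ch=np.zeros(2)):
    # Action of a single-mode Gaussian channel on (d, V), Eqs. (12)-(13)
    return X @ d_in + d_ch, X @ V_in @ X.T + Y

def is_valid_channel(X, Y, tol=1e-9):
    # Complete-positivity condition (14): Y + (i/2)(Omega - X Omega X^T) >= 0
    M = Y + 0.5j * (Omega - X @ Omega @ X.T)
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Lossy channel with transmissivity eta and vacuum environment
eta = 0.6
X = np.sqrt(eta) * np.eye(2)
Y = 0.5 * (1 - eta) * np.eye(2)
print(is_valid_channel(X, Y))                                   # True
d_out, V_out = apply_gaussian_channel(np.zeros(2), 0.5 * np.eye(2), X, Y)
print(V_out)                                                    # vacuum maps to vacuum: 0.5*I
```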

4. Gaussian capacity

The classical capacity as defined by Eqs. (1)-(3) may be infinite for bosonic channels. We can demonstrate this for the example of a Gaussian channel with det X > 0. Let us consider a sequence of input ensembles {ρ_i, p_i^(m)} built from the same set of symbol states ρ_i but taken with different probability distributions, chosen in such a way that the energy of the average state grows to infinity as m → ∞. In this case the entropy of the average output state in the first term of Eq. (1) can grow to infinity while the second term remains the same. A similar problem appears for Gaussian channels in standard ("classical") information theory, where a meaningful definition of the capacity is obtained by imposing an "input power" constraint. With this constraint, the capacity is a function of the input power. A similar constraint exists in the quantum case. Namely, the mean number of photons of the average input state is upper bounded. Therefore, for one bosonic mode we have

Tr[ρ_in â†â] ≤ N̄,   ρ_in = ∫ μ(dω) ρ_ω,    (15)

where N̄ is the mean number of photons per quadrature and μ(dω) is a probability measure on the whole set of quantum states parametrized by ω (the probability measure plays the role of the probabilities p_i in the continuous-variable case). For simplicity we will call this bound the "input energy constraint". Then the classical one-shot capacity of a bosonic channel Φ is defined as:

C_χ(Φ, N̄) = max_{μ: ρ_in ∈ E_N̄} χ(Φ, μ),    (16)

χ(Φ, μ) = S(Φ[ρ_in]) − ∫ μ(dω) S(Φ[ρ_ω]),    (17)

where the average input state ρ_in is given by Eq. (15) and E_N̄ denotes the set of states satisfying the input energy constraint (15). The classical capacity constrained to a set E of average input states was considered in [6,7]. The definition of the classical capacity given by Eq. (3) requires the generalization of the constraint (15) to an arbitrary number of modes, Tr[ρ^(m) N̂_m] ≤ 2m(N̄ + 1/2). This constraint leads to another possible type of non-additivity, which is not related to the entanglement of the input states. Indeed, this constraint specifies the amount of input photons per channel use only on average. Even if the one-shot capacity C_χ(Φ) constrained on a given number of input photons is additive, by distributing the available amount of input photons between the channel uses in a proper way one may expect to achieve a higher C_χ(Φ^{⊗m}) than m C_χ(Φ, N̄), where N̄ corresponds to the uniform distribution of the amount of input photons between the uses of the channel. Nevertheless, as proven in [7], this scenario does not take place due to the concavity of χ(Φ, μ) as a function of μ. In particular, it is proven that the uniform distribution of the amount of input photons between the channel uses achieves the classical capacity of Gaussian channels if the one-shot capacity is additive for a fixed (though possibly different) amount of input photons at each channel use [7]. This is indeed the case for entanglement-breaking channels, whose classical capacity was proven to be additive [8,9].

Thus, the additivity problem for the classical capacity of Gaussian channels is resolved for the class of the entanglement breaking channels. Hence, in order to evaluate their capacity it is sufficient to find only the one-shot capacity. However, this simplified problem is still a highly non-trivial task. At the moment the classical capacity is known only for the lossy channel provided that its environment is pure (i.e., in a squeezed vacuum state) and its energy is above some threshold [10,11] (the lossy channel can be realized by a beamsplitter mixing the input signal mode with the environment mode). Therefore, the evaluation of different bounds on the capacity is a valuable alternative.

A natural lower bound is the Gaussian capacity, defined as the classical capacity with an additional restriction on the set of admissible input states. In [12] we defined this quantity by requiring that all individual symbol states and the modulated average state are Gaussian. We have shown there that the optimal ensemble achieving the Gaussian capacity, as we define it, is the same input ensemble as the one imposed by a previous, more restrictive definition (see, for example, [13]). This optimal input ensemble is generated by phase-space translates of a single Gaussian pure state, modulated according to a Gaussian distribution with CM V_m. For such an ensemble, the covariance matrices of the individual symbol states are all the same. Thus, the CM of the average input state, V̄_in, is equal to V_in + V_m, and the input energy constraint therefore reads

Tr[V_in + V_m] ≤ 2N̄ + 1.    (18)

Recall that the von Neumann entropy of a Gaussian state depends only on its covariance matrix. Moreover, the action of a Gaussian channel on the covariance matrix does not depend on the displacements d_in and d_ch. Hence, all output entropies in the second term of Eq. (1) are equal. Therefore, the Holevo χ-quantity is the difference of the von Neumann entropies of the average output state and of an output symbol state:

χ[Φ, V̄_in, V_in] = S(Φ[V̄_in]) − S(Φ[V_in]) = g(ν̄ − 1/2) − g(ν − 1/2),    (19)

g(x) = (x + 1) log_2(x + 1) − x log_2(x),    (20)

where ν̄ and ν are the symplectic eigenvalues of the CMs of the average output state Φ[V̄_in] and of an individual output symbol state Φ[V_in], respectively. The new form of the Holevo χ-quantity given by Eq. (19) reduces the problem of calculating the one-shot Gaussian capacity to the maximization of the difference of two entropies under the constraint (18). This maximization can be done using the method of Lagrange multipliers. The evaluation of the Gaussian capacity is relatively simple due to the restriction to Gaussian states. Below we will show that it can be expressed in a closed form in a certain domain of parameters. The importance of this bound is also highlighted by the fact that in all cases where the classical capacity is known, the Gaussian capacity coincides with it. In addition, Gaussian states maximize the von Neumann entropy on the set of all states with the same energy. This leads to a natural conjecture that the Gaussian capacity always coincides with the classical capacity.
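For a single mode, Eqs. (19)-(20) are straightforward to evaluate, since the symplectic eigenvalue of a 2×2 CM is simply √(det V). The sketch below uses our own helper functions (not part of the paper) to compute χ for Gaussian ensembles; for the identity channel with vacuum symbol states and an isotropic modulation of N̄ photons it returns g(N̄), as expected.

```python
import numpy as np

def g(x):
    # Entropy function of Eq. (20); g(0) = 0 covers pure states
    return (x + 1) * np.log2(x + 1) - x * np.log2(x) if x > 0 else 0.0

def symplectic_eigenvalue(V):
    # For a single-mode CM the symplectic eigenvalue is sqrt(det V)
    return np.sqrt(np.linalg.det(V))

def holevo_chi_gaussian(V_out_avg, V_out_symbol):
    # Holevo chi-quantity of Eq. (19) for a Gaussian ensemble
    nu_bar = symplectic_eigenvalue(V_out_avg)
    nu = symplectic_eigenvalue(V_out_symbol)
    return g(nu_bar - 0.5) - g(nu - 0.5)

# Identity channel, vacuum symbol states, thermal modulation of N photons: chi = g(N)
N = 1.0
V_symbol = 0.5 * np.eye(2)
V_avg = V_symbol + N * np.eye(2)             # Tr[V_avg] = 2N + 1, saturating (18)
print(holevo_chi_gaussian(V_avg, V_symbol))  # g(1) = 2 bits
```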

5. Single-mode Gaussian channels

One can try to further simplify the calculation of the Gaussian capacity using the equivalence of any single-mode Gaussian channel Φ to one of seven canonical channels Φ_C [9,14,15], preceded and followed by unitary operations:

Φ = U_2 ∘ Φ_C ∘ U_1.    (21)

Since unitary transformations do not change the entropy, the Holevo χ-quantity of any Gaussian channel Φ is equal to that of the corresponding canonical channel Φ_C. However, if the unitary transformation U_1 which precedes the canonical channel in the decomposition (21) involves a squeezing operation, then the energies of the states at the inputs of Φ and Φ_C, respectively, are different.

In order to find the Gaussian capacity of Φ one has to consider both the canonical channel and the preceding squeezing operation. Actually, in this case, the expressions for the Gaussian capacity can be obtained in a closed form for five of the seven canonical channels preceded by squeezing operations. However, this is possible only if the input energy N̄ exceeds a certain energy threshold. The latter depends on the parameters of the corresponding canonical channel and the squeezing parameter [16]. These five canonical channels have the same matrix Y, which is proportional to the identity: Y = y I. Moreover, all of them transform thermal input states to thermal output states. Therefore, we call them thermal channels Φ_TH. They include the lossy, amplification, classical additive-noise, phase-conjugating and zero-transmission channels [14].

In order to go beyond the aforementioned results, we proposed another decomposition, in terms of a fiducial channel Φ_F [12]:

Φ = U_2 ∘ Φ_F ∘ Θ_1,    (22)

where Θ_1 is a passive unitary operation which corresponds to a rotation in phase space. If a Gaussian channel Φ(X, Y, d) is canonically equivalent to one of the thermal channels Φ_TH, then the fiducial channel Φ_F(X_F, Y_F, 0) in Eq. (22) is given by the two matrices [see Eqs. (12)-(14)]

X_F = X_TH = √|τ| diag(1, sgn(τ)),   Y_F = y diag(e^{2s}, e^{−2s}),    (23)

τ = det X,   y = √(det Y).

The dependence of the squeezing parameter s on the matrices X, Y that define Φ is presented in [12]. The fiducial channel is defined by three scalar parameters (τ, y, s). The decomposition (22) is important for the following reason. Rotations change neither the entropy nor the energy of quantum states; therefore, the state having passed through Θ_1 and entering Φ_F has the same energy as the input state entering Φ. This allows us to conclude that both the classical capacity and the Gaussian capacity of Φ(X, Y, d) are equal to those of the corresponding Φ_F(τ, y, s) [12]. This statement can be extended to Gaussian channels canonically equivalent to "non-thermal" (or so-called "pathological") channels. These canonical channels may be considered as limiting cases of the fiducial channel with properly chosen preceding squeezing operations. Thus, we have reduced the classical capacity (and Gaussian capacity) of any Gaussian channel to that of the corresponding fiducial channel. For this reason it is sufficient to study the Gaussian capacity of the fiducial channel in the full range of its parameters in order to obtain the Gaussian capacity of any single-mode Gaussian channel. This can be done using the method of Lagrange multipliers. It leads to a general formula for the Gaussian capacity of all Gaussian channels canonically equivalent to thermal channels [12]:

C_G(Φ_F(τ, y, s), N̄) = g(|τ|N̄ + y cosh(2s) + (|τ| − 1)/2) − g(y + (|τ| − 1)/2),    (24)

N̄ ≥ N̄_thr = (1/2)[e^{2|s|} + (2y/|τ|) sinh(2|s|) − 1].    (25)

It holds for input energies higher than the threshold N̄_thr. This corresponds to the so-called quantum water-filling solution [17-20]. It implies that the overall modulated output state is a thermal state. The optimal ensemble is composed of individual symbol states which are displaced squeezed vacuum states determined by the same squeezing parameter s that enters the matrix Y_F [see (23)], where the latter represents the effect of the environment in the fiducial channel:

Φ_F[V̄_in] ∝ I,   V_in = (1/2) diag(e^{2s}, e^{−2s}).    (26)

Notice that the squeezing of the individual input symbol state requires energy. Nevertheless, the condition (25) guarantees that the amount of input energy is sufficient to allow such optimal input states. It is known that these optimal symbol states minimize the entropy at the output of the channel on the set of all Gaussian states. Furthermore, the Gaussian capacity is additive above the input energy threshold [12,21].
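A minimal numerical sketch of Eqs. (24)-(25) is given below (our own code; the names are illustrative). For a lossy-type fiducial channel with a vacuum environment (y = (1 − τ)/2, s = 0) the threshold vanishes and the expression reduces to g(τN̄), the known classical capacity of the pure lossy channel [10].

```python
import numpy as np

def g(x):
    return (x + 1) * np.log2(x + 1) - x * np.log2(x) if x > 0 else 0.0

def N_threshold(tau, y, s):
    # Input-energy threshold of Eq. (25)
    return 0.5 * (np.exp(2 * abs(s)) + (2 * y / abs(tau)) * np.sinh(2 * abs(s)) - 1)

def gaussian_capacity_above_threshold(tau, y, s, N):
    # Gaussian capacity of the fiducial channel, Eq. (24); valid only for N >= N_thr
    if N < N_threshold(tau, y, s):
        raise ValueError("below the water-filling threshold; Eq. (24) does not apply")
    return g(abs(tau) * N + y * np.cosh(2 * s) + (abs(tau) - 1) / 2) \
         - g(y + (abs(tau) - 1) / 2)

# Lossy channel (tau = 0.8) with vacuum environment: C_G = g(tau * N)
tau, N = 0.8, 5.0
print(gaussian_capacity_above_threshold(tau, (1 - tau) / 2, 0.0, N))
print(g(tau * N))   # same value
```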

For both types of non-thermal canonical channels the formula (24) is never applicable. However, we go a step further and find the Gaussian capacity of the fiducial channel below the threshold, obtaining a solution which is also valid for non-thermal channels. In this regime the solution of the optimization problem was already found for the lossy [17] and classical additive-noise channels [18-20] with squeezed environment. An optimal input ensemble is given by CMs V_in and V̄_in which commute with Y_F. The optimal value of the squeezing of the individual symbol state is determined by the solution of a transcendental equation. This solution allows us to study the properties of the Gaussian capacity as a function of the parameters of the additive-noise or lossy channels. Here, we generalize this result by extending the solution to the fiducial channel. In our notation the corresponding transcendental equation can be written in two equivalent forms:

g'(ν̄ − 1/2) sinh(2s̄) − g'(ν − 1/2) sinh[2(s − r)] (y/ν) e^{−2r} = 0,
g'(ν̄ − 1/2) sinh(2s̄) − g'(ν − 1/2) sinh[2(s_ν − r)] e^{−2r} = 0,    (27)

where

Φ_F[V̄_in] = ν̄ diag(e^{2s̄}, e^{−2s̄}),   Φ_F[V_in] = ν diag(e^{2s_ν}, e^{−2s_ν}),    (28)

the function g'(x) is the derivative of g(x), the variables y and s are the parameters of the fiducial channel (23), and r is the squeezing parameter which enters the CM of the individual input symbol state, V_in = (1/2) diag(e^{2r}, e^{−2r}). The squeezing parameter r plays the role of the unknown variable in the equation. By analyzing this equation we have found that the signs of the solution r and of s coincide.
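Instead of solving the transcendental equation (27), one can obtain the same optimum numerically by brute force. The sketch below is our own illustration (not the procedure of the paper): it maximizes χ of Eq. (19) for the fiducial channel over the input squeezing r and over the split of the remaining modulation energy between the two quadratures, with all CMs diagonal, i.e. commuting with Y_F, and the constraint (18) saturated.

```python
import numpy as np

def g(x):
    return (x + 1) * np.log2(x + 1) - x * np.log2(x) if x > 0 else 0.0

def chi_fiducial(tau, y, s, r, mq, mp):
    # chi of Eq. (19) for a pure squeezed symbol state (squeezing r) and
    # diagonal modulation diag(mq, mp); all CMs commute with Y_F
    V_in = 0.5 * np.diag([np.exp(2 * r), np.exp(-2 * r)])
    V_avg = V_in + np.diag([mq, mp])
    X = np.sqrt(abs(tau)) * np.diag([1.0, np.sign(tau) if tau != 0 else 1.0])
    Y = y * np.diag([np.exp(2 * s), np.exp(-2 * s)])
    nu = np.sqrt(np.linalg.det(X @ V_in @ X.T + Y))
    nu_bar = np.sqrt(np.linalg.det(X @ V_avg @ X.T + Y))
    return g(nu_bar - 0.5) - g(nu - 0.5)

def gaussian_capacity_numeric(tau, y, s, N, grid=121):
    # Brute-force search over r and the modulation split under constraint (18)
    best = 0.0
    for r in np.linspace(-2.0, 2.0, grid):
        m_tot = 2 * N + 1 - np.cosh(2 * r)      # modulation energy left over
        if m_tot < 0:
            continue
        for mq in np.linspace(0.0, m_tot, grid):
            best = max(best, chi_fiducial(tau, y, s, r, mq, m_tot - mq))
    return best

# Example: a lossy-type fiducial channel with squeezed environment, below threshold
print(gaussian_capacity_numeric(tau=0.7, y=0.5, s=0.6, N=1.0))
```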


6. Optimal environment

By studying how the Gaussian capacity depends on the channel parameters [19,22] we arrived at a new problem, which was first formulated for the lossy channel [17]. Here we consider this problem for the fiducial channel Φ_F. In order to formulate it we use the Stinespring dilation, which allows us to realize the channel by a joint unitary transformation of a two-mode (product) state. The latter consists of the input and environment modes. If τ ≠ 1, then the CM of the environment mode V_e is proportional to Y_F, i.e. |1 − τ| V_e = Y_F. If τ = 1, then there is no Stinespring dilation with a single environment mode; however, in this case, Y_F represents a classical Gaussian noise "added" to the input state by the channel, and the CM of the classical noise is V_e = Y_F. In both cases the trace of V_e has the same meaning: it determines the energy contained in the environment mode or the energy of the noise.

Recall that the definition of the Gaussian capacity includes a maximization of χ_G [see (19)] over V_in and V̄_in under the energy constraint (18). In this work we impose a similar energy constraint also on the environment mode (or added noise) and look for the optimal CM V_e which maximizes the Gaussian capacity. Since in all cases the CM V_e is proportional to Y_F, the constraint on its trace is equivalent to the corresponding constraint on the trace of Y_F:

M_e = Tr[Y_F] = 2y cosh(2s).    (29)

At first, we consider the simplest case, which corresponds to the input energy being above the threshold N̄_thr. In this case, we can use our expression (24) for the Gaussian capacity. Due to the constraint (29), the parameter y is a function of s. According to the water-filling solution, if both N̄ and M_e are constant, the argument of the first term in (24) also remains constant, even if s is varied. Then the first derivative of C_G(Φ_F, N̄) with respect to s is obtained from the second term in (24) in the form

(d/ds) C_G(Φ_F, N̄) = [sinh(2s)/cosh^2(2s)] M_e g'(y + (|τ| − 1)/2).    (30)

The sign of this derivative is the same as the sign of s, because g'(x) is a positive function and M_e is a positive constant. This means that C_G(Φ_F, N̄) is a monotonically increasing function of the absolute value of s. Therefore, its maximum lies at the boundaries of the allowed interval for s. There are two reasons for the existence of such boundaries in this problem.

One is the condition N̄ ≥ N̄_thr, which ensures that Eq. (30) is valid. This condition together with the constraint (29) bounds from above, by some threshold value s_thr > 0, the interval of values of |s| where Eq. (30) is applicable. As a consequence, for |s| > s_thr the condition (25) is violated. The particular case τ = 0 corresponds to the so-called zero-transmission channel, where Eq. (30) is not valid. However, this case is trivial because here the classical capacity is always equal to zero; therefore, the bound s_thr does not exist.

The second reason follows from the condition that the symbol state at the output of the channel must be a valid quantum state. This is provided by the condition (14), which is equivalent to Y_F + (i/2)(1 − τ)Ω ≥ 0 for the fiducial channel (actually, it is also equivalent to a simple inequality for the channel parameters, y ≥ |1 − τ|/2 [12]). If τ ≠ 1, then this condition can be rewritten in the form V_e + (i/2)Ω ≥ 0 (or simply y ≥ |1 − τ|/2). This is equivalent to the requirement that the environment mode must be in a valid quantum state. Due to the constraint (29) it bounds the absolute value of |s| from above by the value s*, which corresponds to the environment mode being in a pure state with det(V_e) = 1/4. If τ = 1, then the condition (14) for the fiducial channel is equivalent to Y_F ≥ 0. Since this is satisfied for all real values of s, no finite upper bound s* exists.

If s* < s_thr, then Eq. (30) is valid for all |s| < s*. Using Eq. (30) we conclude that the maximum of the Gaussian capacity is achieved by the environment mode being in a pure quantum state defined by |s| = s*.
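A small numeric check of this statement is given below (our own sketch; the closed form for s* follows from combining y = |1 − τ|/2 with the constraint (29), and N̄ is chosen large enough, by assumption, that the whole scan stays above the threshold (25)).

```python
import numpy as np

def g(x):
    return (x + 1) * np.log2(x + 1) - x * np.log2(x) if x > 0 else 0.0

def capacity_24(tau, y, s, N):
    # Above-threshold Gaussian capacity, Eq. (24)
    return g(abs(tau) * N + y * np.cosh(2 * s) + (abs(tau) - 1) / 2) \
         - g(y + (abs(tau) - 1) / 2)

tau, Me, N = 0.6, 3.0, 20.0                    # fixed environment energy Me = Tr[Y_F]
s_star = 0.5 * np.arccosh(Me / abs(1 - tau))   # pure environment: y = |1 - tau|/2
for s in np.linspace(0.0, s_star, 6):
    y = Me / (2 * np.cosh(2 * s))              # constraint (29) ties y to s
    print(f"s = {s:.3f}   y = {y:.3f}   C_G = {capacity_24(tau, y, s, N):.4f}")
# C_G grows monotonically with |s| and is largest at s = s_star (pure environment)
```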

If s* > s_thr (or s* does not exist), Eq. (24) is not applicable in the interval s_thr < |s| < s* and, therefore, we cannot apply our conclusions based on Eq. (30) to this interval of |s|. Nevertheless, in this case, we can still study the derivative of the Gaussian capacity with respect to s using Eq. (27). Notice that Eq. (27) is equivalent to (d/dr) χ_G[Φ, V̄_in, V_in] = 0. Let us take the input states with CMs V_in and V̄_in that satisfy Eq. (27). Using the constraint (29) we deduce

(d/ds)(y e^{±2s}) = ± M_e/cosh^2(2s).    (31)

Then we have

(d/ds) C_G(Φ_F, N̄) = (d/ds) χ_G[Φ, V̄_in, V_in]
= [M_e/cosh^2(2s)] [g'(ν − 1/2) sinh(2s_ν) − g'(ν̄ − 1/2) sinh(2s̄)].    (32)

Using (27) again we can rewrite it in the form

(d/ds) C_G(Φ_F, N̄) = [M_e/(2 cosh^2(2s))] g'(ν − 1/2) e^{2s_ν} (1 − e^{−4r}),    (33)

where r satisfies (27). Since the sign of r is the same as the sign of s, the sign of the first derivative of C_G(Φ_F, N̄) coincides with the sign of s. This means that C_G(Φ_F, N̄) is a monotonically increasing function of the absolute value of s, regardless of whether |s| is higher or lower than s_thr. As a result, the only bound on |s| is s* (if it exists for the considered parameters of the channel).

Let us summarize our results for the different values of τ determining the type of the fiducial channel:

• If τ ≠ 0 and τ ≠ 1, then the allowed interval of s is finite. Its boundaries, where the maximum of the Gaussian capacity is achieved, correspond to the environment mode being in a pure state. For the lossy channel, this result was formulated as the "environment purity theorem" and proved in [17].

• If τ = 0, then the classical capacity is equal to zero for all s in the allowed interval, which is finite, i.e. |s| ≤ s*. The environment mode should be in a proper quantum state, but not necessarily a pure one.

• If τ = 1, then the constraint (14) reduces to Y_F ≥ 0, which corresponds to the classical additive-noise channel. Since the allowed interval of s in this case is the whole real axis, the optimal V_e is obtained in the limit |s| → ∞ under the condition 2y cosh(2s) = M_e = const. This gives V_e = diag(M_e, 0) (for positive s), which corresponds to the single-quadrature classical noise channel [9]. This inspires a further study of the optimal environment for Gaussian quantum channels. For instance, the generalization of our results to the case of multimode environments (broadband channels), which was discussed in [17], would be an interesting task.

7. Conclusion

We studied the classical information transmission through Gaussian quantum channels by analyzing the Gaussian capacity which, as we argue, is of great importance for the field of quantum information theory. We have used a recently found decomposition of an arbitrary single-mode Gaussian channel which allows us to reduce the problem of calculating its Gaussian capacity to that of a particular fiducial channel. For the latter, we have developed a method of evaluating its Gaussian capacity and discussed its additivity. Finally, we have applied our results to a new problem of maximizing the Gaussian capacity under an environment energy constraint. We have shown that for a single mode the optimal environment is in a pure state in almost all cases. In a particular case, the environment is classical (noise) and all the noise energy is concentrated in one quadrature of the optimal noise CM. We expect that the decomposition in terms of the fiducial channel will be useful in further research on the Gaussian capacity, in particular for finding the optimal state of the environment of multimode Gaussian channels.

Acknowledgements

The authors acknowledge financial support from the F.R.S.-FNRS under the Eranet project HIPERCOM, from the Interuniversity Attraction Poles program of the Belgian Science Policy Office under grant IAP P7-35 "photonics@be", from the Belgian FRIA foundation, from the Brussels Capital Region under the project CRYPTASC, from the ULB under the program "Ouvertures internationales", and from the Alexander von Humboldt foundation.

References

[1] Holevo A. S. Some estimations of the amount of information transmitted by quantum communication channel. Probl. Peredachi Inf. 9 (3), P. 3-11 (1973).

[2] Schumacher B., Westmoreland M. D. Sending classical information via noisy quantum channel. Phys. Rev. A 56, P. 131-138 (1997).

[3] Holevo A. S. The capacity of quantum communication channel with general signal states. IEEE Trans. Inf. Theory, 44, P. 269-272 (1998).

[4] Hastings M. B. Superadditivity of communication capacity using entangled inputs. Nature Phys. 5, P. 255-257 (2009).

[5] Weedbrook C., Pirandola S., Garcia-Patron R., Cerf N.J., Ralph T.C., Shapiro J.H., Lloyd S. Gaussian Quantum Information. Rev. Mod. Phys. 84, P. 621-669 (2012).

[6] Holevo A.S. Quantum systems, channels, information: a mathematical introduction. De Gruyter studies in mathematical physics. Walter de Gruyter GmbH & Co. KG, Berlin. 16, P. 11-349 (2012).

[7] Holevo A. S., Shirokov M. E. On Shor's channel extension and constrained channels. Comm. Math. Phys. 249, P. 417-430 (2004).

[8] Shirokov M. E. The Holevo capacity of infinite dimensional channels and the additivity problem. Comm. Math. Phys. 262, P. 131-159 (2006).

[9] Holevo A.S. Entanglement breaking channels in infinite dimensions. Probl. Inf. Transmiss. 44 (3), P. 3-18 (2008).

[10] Giovannetti V., Guha S., Lloyd S., Maccone L., Shapiro J. H., Yuen H. P. Classical capacity of the lossy bosonic channel: the exact solution. Phys. Rev. Lett. 92 (2), P. 027902-1-027902-4 (2004).

[11] Lupo C., Pilyavets O.V., Mancini S. Capacities of lossy bosonic channel with correlated noise. New J. Phys. 11, P. 063023-1-063023-18 (2009).

[12] Schafer J., Karpov E., Garcia-Patron R., Pilyavets O. V., Cerf N. J. Equivalence relations for the classical capacity of single-mode Gaussian quantum channels. Phys. Rev. Lett. 111, P. 030503-1-030503-5 (2013).

[13] Eisert J., Wolf M.M. Gaussian quantum channels. In: Quantum information with continuous variables of atoms and light, edited by Cerf N. J., Leuchs G., Polzik E.S. Imperial College Press, London. 1, P. 23-42 (2007).

[14] Caruso F., Giovannetti V., Holevo A.S. One-mode bosonic Gaussian channels: a full weak-degradability classification. New J. Phys. 8 P. 310-1-310-18 (2006).

[15] Holevo A.S. One-mode quantum Gaussian channels: structure and quantum capacity. Probl. Inf. Transmiss. 43 (1), P. 1-11 (2007).

[16] Lupo C., Pirandola S., Aniello P., Mancini S. On the classical capacity of quantum Gaussian channels. Phys. Scr. T143, P. 014016-1-014016-6 (2011).

[17] Pilyavets O. V., Lupo C., Mancini S. Methods for Estimating Capacities and Rates of Gaussian Quantum Channels. IEEE Trans. Inf. Theory. 58, P. 6126-6164 (2012).

[18] Schafer J., Karpov E., Cerf N.J. Gaussian capacity of the quantum bosonic memory channel with additive correlated Gaussian noise. Phys. Rev. A 84, P. 032318-1-032318-16 (2011).

[19] Schafer J., Karpov E., Cerf N.J. Quantum water-filling solution for the capacity of Gaussian information channels. Proceedings of SPIE. 7727, P. 77270J-1-77270J-12 (2010).

[20] Schafer J., Karpov E., Cerf N.J. Capacity of a bosonic memory channel with Gauss-Markov noise. Phys. Rev. A 80, P. 062313-1-062313-11 (2009).

[21] Hiroshima T. Additivity and multiplicativity properties of some Gaussian channels for Gaussian Inputs. Phys. Rev. A 73, P. 012330-1-012330-9 (2006).

[22] Pilyavets O.V., Zborovskii V. G., Mancini S. Lossy bosonic quantum channel with non-Markovian memory. Phys. Rev. A 77, P. 052324-1-052324-8 (2008).
