Why do error correction protocols only work when the error rates are already significantly low to begin with?



Quantum error correction is a fundamental aspect of quantum computing, without which large-scale quantum computation is practically infeasible.

One aspect of fault-tolerant quantum computing that is often mentioned is that each error correction protocol has an associated error rate threshold. Basically, for a given computation to be protectable against errors via a given protocol, the error rate of the gates must be below a certain threshold.

In other words, if the error rates of the individual gates are not low enough, it is not possible to apply error correction protocols to make the computation more reliable.

Why is this? Why is it not possible to reduce error rates that are not already very low to begin with?


Well, at some point there is simply only noise. Is it that strange that there is a point where error correction is more likely to 'correct' the right parts into noise?
Discrete lizard

@Discretelizard it's not so much that such a point exists, but that the thresholds are usually very low (or very high, in terms of fidelity). Why is that the case?
glS

Answers:



We want to compare an output state with some ideal state, so normally, fidelity, $F(|\psi\rangle,\rho)$, is used, as this is a good way of telling how the possible measurement outcomes of $\rho$ compare with the possible measurement outcomes of $|\psi\rangle$, where $|\psi\rangle$ is the ideal output state and $\rho$ is the achieved (potentially mixed) state after some noise process. As we are comparing states, this is
$$F(|\psi\rangle,\rho) = \sqrt{\langle\psi|\rho|\psi\rangle}.$$

Describing both the noise and error correction processes using Kraus operators, where $\mathcal{N}$ is the noise channel with Kraus operators $N_i$ and $\mathcal{E}$ is the error correction channel with Kraus operators $E_j$, the state after noise is
$$\rho' = \mathcal{N}\left(|\psi\rangle\langle\psi|\right) = \sum_i N_i|\psi\rangle\langle\psi|N_i^\dagger$$
and the state after both noise and error correction is
$$\rho'' = \mathcal{E}\circ\mathcal{N}\left(|\psi\rangle\langle\psi|\right) = \sum_{i,j} E_j N_i|\psi\rangle\langle\psi|N_i^\dagger E_j^\dagger.$$

The fidelity of this is given by

$$F(|\psi\rangle,\rho'')^2 = \langle\psi|\rho''|\psi\rangle = \sum_{i,j}\langle\psi|E_jN_i|\psi\rangle\langle\psi|N_i^\dagger E_j^\dagger|\psi\rangle = \sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2.$$

For the error correction protocol to be of any use, we want the fidelity after error correction to be larger than the fidelity after the noise but before error correction, so that the error-corrected state is less distinguishable from the ideal state. That is, we want
$$F(|\psi\rangle,\rho'') > F(|\psi\rangle,\rho').$$
As fidelity is positive, this can be rewritten as
$$\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2 > \sum_i\left|\langle\psi|N_i|\psi\rangle\right|^2.$$

Splitting the noise into the correctable part, $\mathcal{N}_c$, for which $\mathcal{E}\circ\mathcal{N}_c\left(|\psi\rangle\langle\psi|\right) = |\psi\rangle\langle\psi|$, and the non-correctable part, $\mathcal{N}_{nc}$, for which $\mathcal{E}\circ\mathcal{N}_{nc}\left(|\psi\rangle\langle\psi|\right) = \sigma$, and denoting the probability that the error is correctable as $P_c$ and non-correctable (i.e. too many errors have occurred to reconstruct the ideal state) as $P_{nc}$, gives
$$\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2 = P_c + P_{nc}\langle\psi|\sigma|\psi\rangle \geq P_c,$$
where equality will be achieved by assuming $\langle\psi|\sigma|\psi\rangle = 0$. That is, a false 'correction' will project onto a result orthogonal to the correct one.

For $n$ qubits, with an (equal) probability of error $p$ on each qubit (note: this is not the same as the noise parameter, which would have to be used to calculate the probability of an error occurring), the probability of having a correctable error (assuming that the $n$ qubits have been used to encode $k$ qubits, allowing errors on up to $t$ qubits, determined by the Singleton bound $n-k \geq 4t$) is
$$P_c = \sum_{j\leq t}\binom{n}{j}p^j(1-p)^{n-j} = (1-p)^n + np(1-p)^{n-1} + \tfrac{1}{2}n(n-1)p^2(1-p)^{n-2} + \ldots = 1 - \binom{n}{t+1}p^{t+1} + \mathcal{O}\left(p^{t+2}\right).$$

Noise channels can also be written in terms of the Pauli matrices $P_j$ as $N_i = \sum_j \alpha_{i,j}P_j$, with the process matrix $\chi_{j,k} = \sum_i \alpha_{i,j}\alpha_{i,k}^*$. This gives
$$\sum_i\left|\langle\psi|N_i|\psi\rangle\right|^2 = \sum_{j,k}\chi_{j,k}\langle\psi|P_j|\psi\rangle\langle\psi|P_k|\psi\rangle^* \geq \chi_{0,0},$$
where $\chi_{0,0} = (1-p)^n$ is the probability of no error occurring.

This gives that the error correction has been successful in mitigating (at least some of) the noise when
$$1 - \binom{n}{t+1}p^{t+1} \gtrsim (1-p)^n.$$
While this is only valid for $p \ll 1$, and as a weaker bound has been used, potentially giving inaccurate results about when the error correction has been successful, this displays that error correction is good for small error probabilities, as $p$ grows faster than $p^{t+1}$ when $p$ is small.

However, as $p$ gets slightly larger, $p^{t+1}$ grows faster than $p$ and, depending on prefactors, which depend on the size of the code and the number of qubits to correct, the error correction will incorrectly 'correct' the errors that have occurred and will start failing as an error correction code. In the case of $n=5$, giving $t=1$, this happens at $p\approx 0.29$, although this is very much just an estimate.
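As a sanity check on that estimate, here is a minimal Python sketch (an illustration of the inequality above, not part of the original answer) that compares the truncated correctable-error probability $1-\binom{n}{t+1}p^{t+1}$ against the no-error probability $(1-p)^n$ for the $n=5$, $t=1$ case and scans for the break-even point:

```python
from math import comb

def lhs(p, n=5, t=1):
    # Truncated estimate of the correctable-error probability,
    # P_c ~ 1 - C(n, t+1) * p^(t+1), as derived above.
    return 1 - comb(n, t + 1) * p ** (t + 1)

def rhs(p, n=5):
    # chi_{0,0} = (1 - p)^n, the probability of no error occurring.
    return (1 - p) ** n

# Scan p upwards until the correction stops helping.
p = 1e-4
while lhs(p) > rhs(p):
    p += 1e-4
print(f"break-even near p = {p:.3f}")  # ~0.285, matching the rough p ~ 0.29 above
```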

Edit from comments:

As $P_c + P_{nc} = 1$, this gives

$$\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2 = \langle\psi|\sigma|\psi\rangle + P_c\left(1 - \langle\psi|\sigma|\psi\rangle\right).$$

Plugging this in as above further gives

$$1 - \left(1 - \langle\psi|\sigma|\psi\rangle\right)\binom{n}{t+1}p^{t+1} \gtrsim (1-p)^n,$$
which is the same behaviour as before, only with a different constant.

This also shows that, although error correction can increase the fidelity, it can't increase the fidelity to 1, especially as there will be errors (e.g. gate errors from not being able to perfectly implement any gate in reality) arising from implementing the error correction. As any reasonably deep circuit requires, by definition, a reasonable number of gates, the fidelity after each gate is going to be less than the fidelity of the previous gate (on average) and the error correction protocol is going to be less effective. There will then be a cut-off number of gates at which point the error correction protocol will decrease the fidelity and the errors will continually compound.

This shows, to a rough approximation, that error correction, or merely reducing the error rates, is not enough for fault tolerant computation, unless errors are extremely low, depending on the circuit depth.


I think you're trying to explain up to which physical error rate the probability of uncorrectable errors is low? Note that fault-tolerance thresholds are smaller (by orders of magnitude for many codes)
M. Stern

@M.Stern So this is a (very rough) estimate for when an error correction 'decreases the error' (i.e. increases the fidelity by some amount after noise is applied), so it's definitely not a fault tolerant threshold, no. Performing error correction may have increased the fidelity after the noise by some amount, but it hasn't reset it or anything, so fidelity will just decrease (and the error(s)) propagate even if error correction is constantly applied, showing error correction by itself isn't enough for fault tolerance
Mithrandir24601

Hm, glS will have to judge if that answers the question. In any case it's interesting and well written. So you assume that the state is orthogonal if the errors were uncorrectable, right? (That's certainly reasonable in many scenarios.) The other extreme would be when there is a 50/50 chance of a logical error in case of uncorrectable errors.
M. Stern

@M.Stern Thanks! Yes, either that the states are orthogonal, or taking the lower bound. As comparing one lower bound with another isn't a great idea, I went with the assumption that they're orthogonal. If there are any edits you feel would be useful to add to the end of this, work away! Hmm... I think taking a 50/50 chance of logical error would lead to the same result, only with different prefactors at the end
Mithrandir24601


There is a good mathematical answer already, so I'll try and provide an easy-to-understand one.

Quantum error correction (QEC) is a (group of) rather complex algorithm(s) that requires a lot of actions (gates) on and between qubits. In QEC, you pretty much connect two qubits to a third helper-qubit (ancilla) and transfer the information about whether the other two are equal (in some specific regard) into that third qubit. Then you read that information out of the ancilla. If it tells you that they are not equal, you act on that information (apply a correction). So how can that go wrong if our qubits and gates are not perfect?

QEC itself can make the information stored in your qubits decay. Each of the gates it uses can degrade the information if it is not executed perfectly. So if just executing the QEC destroys more information than it recovers on average, it's useless.

You think you found an error, but you didn't. If the comparison (execution of gates) or the readout of the information (ancilla) is imperfect, you might obtain wrong information and thus apply "wrong corrections" (read: introduce errors). And if the information in the ancillas decays (or is changed by noise) before you can read it out, the readout will be wrong as well.

The goal of every QEC scheme is obviously to introduce fewer errors than it corrects for, so you need to minimize the aforementioned effects. If you do all the math, you find pretty strict requirements on your qubits, gates and readouts (depending on the exact QEC algorithm you choose).



Classical Version

Think about a simple strategy of classical error correction. You've got a single bit that you want to encode,

$$0 \rightarrow 00000, \qquad 1 \rightarrow 11111.$$
I've chosen to encode it into 5 bits, but any odd number would do (the more the better). Now, let's assume some bit-flip errors have occurred, so what we have is
01010.
Was this originally the encoded 0 or 1? If we assume that the probability of error per bit, p, is less than a half, then we expect that fewer than half the bits have errors. So, we look at the number of 0s and the number of 1s. Whichever there's more of is the one that we assume is the one we started with. This is called a majority vote. There's some probability that we're wrong, but the more bits we encoded into, the smaller this probability.
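A minimal Python sketch of this repetition-plus-majority-vote strategy (the function names are just for illustration):

```python
import random

def encode(bit, n=5):
    """Repetition encoding: copy the bit n times."""
    return [bit] * n

def add_noise(bits, p):
    """Flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote: assume fewer than half the bits were flipped."""
    return int(sum(bits) > len(bits) / 2)

# Estimate the logical error rate for p = 0.1 over many trials.
p, trials = 0.1, 100_000
errors = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
print(f"logical error rate ~ {errors / trials:.4f}")  # ~0.0086, well below p = 0.1
```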

On the other hand, if we know that $p>\frac{1}{2}$, we can still do the correction. You'd just be implementing a minority vote! The point, however, is that you have to do completely the opposite operation. There's a sharp threshold here that shows, at the very least, that you need to know which regime you're working in.

For fault-tolerance, things get messier: the 01010 string that you got might not be what the state actually is. It might be something different, still with some errors that you have to correct, but the measurements you've made in reading the bits are also slightly faulty. Crudely, you might imagine this turns the sharp transition into an ambiguous region where you don't really know what to do. Still, if error probabilities are low enough, or high enough, you can correct, you just need to know which is the case.

Quantum Version

In general, things get worse in the quantum regime because you have to deal with two types of errors: bit flip errors (X) and phase flip errors (Z), and that tends to make the ambiguous region bigger. I won't go further into details here. However, there's a cute argument in the quantum regime that may be illuminating.

Imagine you have the state of a single logical qubit stored in a quantum error correcting code $|\psi\rangle$ across $N$ physical qubits. It doesn't matter what that code is; this is a completely general argument. Now imagine there's so much noise that it destroys the quantum state on $N/2$ qubits ("so much noise" actually means that errors happen with 50:50 probability, not close to 100%, which, as we've already said, can be corrected). It is impossible to correct for that error. How do I know that? Imagine I had a completely noiseless version, and I keep $N/2$ qubits and give the remaining qubits to you. We each introduce enough blank qubits so that we've got $N$ qubits in total, and we run error correction on them. [Figure: cloning demonstration.] If it were possible to perform that error correction, the outcome would be that both of us would have the original state $|\psi\rangle$. We would have cloned the logical qubit! But cloning is impossible, so the error correction must have been impossible.



To me there seem to be two parts of this question (one more related to the title, one more related to the question itself):

1) To which amount of noise are error correction codes effective?
2) With which amount of imperfection in gates can we implement fault-tolerant quantum computations?

Let me first stress the difference: quantum error correction codes can be used in many different scenarios, for example to correct for losses in transmissions. Here the amount of noise mostly depends on the length of the optical fibre and not on the imperfection of the gates. However, if we want to implement fault-tolerant quantum computation, the gates are the main source of noise.

On 1)

Error correction works for large error rates (smaller than 1/2). Take for example the simple 3 qubit repetition code. The logical error rate is just the probability for the majority vote to be wrong (the orange line is f(p)=p for comparison):

[Figure: plot of physical vs. logical error rate.]

So whenever the physical error rate $p$ is below $1/2$, the logical error rate is smaller than $p$. Note, however, that the code is particularly effective for small $p$, because it changes the rate from $O(p)$ to $O(p^2)$ behaviour.
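For the 3-qubit repetition code, the majority vote fails exactly when two or three bits flip, so the logical error rate is $p_L = 3p^2(1-p) + p^3 = 3p^2 - 2p^3$. A quick Python check (a sketch reproducing the plot's message, not library code):

```python
def logical_error_rate(p):
    """Majority vote on 3 bits fails iff at least 2 of them flip."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.3, 0.5, 0.6):
    pl = logical_error_rate(p)
    print(f"p = {p:.2f} -> p_L = {pl:.4f} ({'helps' if pl < p else 'no gain'})")
# p_L < p for every p below 1/2, and p_L ~ 3p^2 when p is small.
```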

On 2)

We want to perform arbitrarily long quantum computations with a quantum computer. However, the quantum gates are not perfect. In order to cope with the errors introduced by the gates, we use quantum error correction codes. This means that one logical qubit is encoded into many physical qubits. This redundancy allows one to correct for a certain amount of errors on the physical qubits, such that the information stored in the logical qubit remains intact. Bigger codes allow longer calculations to still be accurate. However, larger codes involve more gates (for example more syndrome measurements) and these gates introduce noise. You see there is some trade-off here, and which code is optimal is not obvious.
If the noise introduced by each gate is below some threshold (the fault-tolerance or accuracy threshold), then it is possible to increase the code size to allow for arbitrarily long calculations. This threshold depends on the code we started with (usually it is iteratively concatenated with itself). There are several ways to estimate this value. Often it is done by numerical simulation: Introduce random errors and check whether the calculation still worked. This method typically gives threshold values which are too high. There are also some analytical proofs in the literature, for example this one by Aliferis and Cross.
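The usual back-of-the-envelope version of this threshold argument: if one round of encoding turns a physical error rate $p$ into a logical error rate of roughly $cp^2$ (the code fails when two faults coincide, with $c$ counting the possible fault pairs), then concatenating the code with itself iterates this map, and below $p_{th}=1/c$ the error rate shrinks doubly exponentially with the number of levels. A minimal sketch, with $c$ picked arbitrarily for illustration:

```python
def concatenated_rates(p, c=100, levels=5):
    """Iterate the level-to-level error map p -> c * p^2."""
    rates = [p]
    for _ in range(levels):
        rates.append(c * rates[-1] ** 2)
    return rates

# Threshold is p_th = 1/c = 0.01 in this toy model.
print(concatenated_rates(0.005))  # below threshold: 5e-3, 2.5e-3, 6.3e-4, ... -> 0
print(concatenated_rates(0.02))   # above threshold: 2e-2, 4e-2, 1.6e-1, ... -> blows up
```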


The second paragraph is touching the right points, but it is still very qualitative. You are saying that you need the gates introduced by the error correction protocol to reduce the error rate more than they increase it. However, how does one go from this intuitive idea to an actual quantitative estimate of the threshold? Also, does this imply a universal lower threshold that no error correcting protocol can beat?
glS

@glS I suspect that there is such a "universal lower threshold", i.e. an error value above which there exist no fault tolerant correction protocols. However, the value should depend on both your gate set and your error model. People tend to be more interested in positive results here (showing the existence of a good fault tolerant protocol). It may be interesting to find upper bounds in order to see "how much room we have left" in making our fault tolerant schemes better. I'd guess there isn't much room left.
Jalex Stark

@glS You're right, some actual quantitative calculation would improve this answer. I think these calculations are typically done numerically? But I also want to know about this
M. Stern

@JalexStark What makes you think there is not much room left? For example the surface code doesn't seem to be optimized w.r.t. this threshold. It uses only nearest neighbor interactions on a lattice and you could do a lot more in general.
M. Stern

@M.Stern I don't have any theorem-based evidence, and I'm not an expert in the area. I was just guessing based on the amount of work done and on how large the best thresholds are.
Jalex Stark


You need a surprisingly large number of quantum gates to implement a quantum error correcting code in a fault-tolerant manner. One part of the reason is that there are many errors to detect since a code that can correct all single qubit errors already requires 5 qubits and each error can be of three kinds (corresponding to unintentional X, Y, Z gates). Hence to even just correct any single qubit error, you already need logic to distinguish between these 15 errors plus the no-error situation: XIIII, YIIII, ZIIII, IXIII, IYIII, IZIII, IIXII, IIYII, IIZII, IIIXI, IIIYI, IIIZI, IIIIX, IIIIY, IIIIZ, IIIII where X, Y, Z are the possible single qubit errors and I (identity) denotes the no-error-for-this-qubit situation.
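A quick way to see where that count of $3n+1$ syndromes comes from, just by enumerating the patterns (a throwaway sketch, not tied to any particular code implementation):

```python
n = 5
# One of X, Y, Z on exactly one qubit, identity everywhere else...
patterns = ["I" * i + e + "I" * (n - i - 1) for i in range(n) for e in "XYZ"]
patterns.append("I" * n)  # ...plus the no-error case.
print(len(patterns))  # 16 = 3 * 5 + 1 cases to distinguish
print(patterns)
```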

The main part of the reason is, however, that you cannot use straightforward error detection circuitry: every CNOT (or every other nontrivial gate on 2 or more qubits) forwards errors from one qubit to another, which would be disastrous for the most trivial case of a single qubit error correcting code and still very bad for more sophisticated codes. Hence a fault-tolerant (useful) implementation needs even more effort than one might naively think.

With many gates per error correcting step, you can only permit a very low error rate per step. Here yet another problem arises: since you may have coherent errors, you must be ready for the worst case that an error $\epsilon$ propagates not as $N\epsilon$ after $N$ single qubit gates but as $N^2\epsilon$. This value must remain sufficiently low such that you overall gain after correcting some (but not all) errors, for example single qubit errors only.

An example of a coherent error is an implementation of a gate $G$ that does, to first order, not simply $G$ but $G+\sqrt{\epsilon}\,X$, which you might call an error of $\epsilon$ because that is the probability corresponding to the probability amplitude $\sqrt{\epsilon}$, and hence the probability that a measurement directly after the gate reveals that it acted as the error $X$. After $N$ applications of this gate, again to first order, you have actually applied $G^N + N\sqrt{\epsilon}\,G^{N-1}X$ (if $G$ and $X$ commute, otherwise a more complicated construct that has $N$ distinct terms proportional to $\sqrt{\epsilon}$). Hence you would, if measuring then, find an error probability of $N^2\epsilon$.
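A small numerical illustration of this quadratic build-up (a sketch assuming $G$ is the identity and the coherent error is a tiny $X$ rotation; nothing here models real hardware):

```python
import numpy as np

eps = 1e-4            # error *probability* per gate
theta = np.sqrt(eps)  # corresponding probability *amplitude*
X = np.array([[0, 1], [1, 0]], dtype=complex)
# Faulty identity gate: cos(theta) I + i sin(theta) X,
# i.e. G + sqrt(eps) X to first order.
G = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * X

psi = np.array([1, 0], dtype=complex)
for N in (1, 10, 50):
    out = np.linalg.matrix_power(G, N) @ psi
    p_err = abs(out[1]) ** 2  # probability of measuring the X-flipped state
    print(f"N = {N:3d}: error prob {p_err:.4f}  vs  N^2 * eps = {N**2 * eps:.4f}")
# The error probability grows like N^2 * eps, not N * eps.
```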

Incoherent errors are more benign. Yet if one must give a single value as an error threshold, then one cannot choose to only assume benign errors!


thanks for the answer, but I would appreciate if you could expand the answer to say more about some of the points here. In particular, 1) what do you mean exactly by saying that you need many gates in the error correcting code because there are "many errors to detect"? 2) What do you mean with "straight-forward logical construct"? 3) Why do "coherent errors" imply an error propagation scaling like $N^2\epsilon$ instead of $N\epsilon$?
glS

@glS I have substantially expanded the answer to address all your questions. Did I manage to do that?
pyramids
Licensed under cc by-sa 3.0 with attribution required.