Basics of Quantum Information¶
Introduction¶
Quantum information science is an interdisciplinary field that combines quantum mechanics and information theory. This notebook provides an introduction to the fundamental concepts of quantum information, starting with classical information.
Classical Information¶
Classical Bits¶
A classical bit is the basic unit of classical information, which can be in one of two states: 0 or 1.
Classical States and Probability Vectors¶
In classical information theory, the state of a system can be described using probability vectors.
Classical State Set¶
The set of all possible states of a system is called its classical state set, denoted by an uppercase sigma ($\Sigma$). For example, the binary state set (the classical state set containing only 0 and 1) is shown below.
math
\Sigma = \{0, 1\}
Example: Probability Vector¶
A probability vector for a classical bit might look like this:
math
\mathbf{p} = \begin{pmatrix} p_0 \\ p_1 \end{pmatrix}
where $p_0$ is the probability of the bit being in state 0, and $p_1$ is the probability of the bit being in state 1. Note that all entries are nonnegative real numbers and that the probabilities $p_0$ and $p_1$ must sum to 1.
math
p_0 + p_1 = 1
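As a quick illustration, here is a minimal NumPy sketch (the particular numbers are just an example) that stores a probability vector as an array and checks both properties: nonnegative entries and a total of 1.
python
import numpy as np

# Example probability vector for one classical bit: Pr(0) = 0.75, Pr(1) = 0.25
p = np.array([0.75, 0.25])

# A valid probability vector has nonnegative entries that sum to 1
assert np.all(p >= 0)
assert np.isclose(p.sum(), 1.0)
print(p)  # [0.75 0.25]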
Dirac Notation¶
Dirac notation, also known as bra-ket notation, is a standard notation for describing quantum states in the formalism of quantum mechanics.
Kets¶
A ket represents a column vector and is denoted as:
math
|\psi>
For example, the standard basis vectors or kets for a qubit are:
math
|0> = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
math
|1> = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
Bras¶
A bra represents a row vector and is the Hermitian conjugate (complex conjugate transpose) of a ket. It is denoted as:
math
<\psi|
For example, the bras corresponding to the basis kets are:
math
<0| = \begin{pmatrix} 1 & 0 \end{pmatrix}
math
<1| = \begin{pmatrix} 0 & 1 \end{pmatrix}
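As a small illustrative NumPy sketch (the variable names ket0, bra0, and so on are our own), kets can be stored as column vectors and bras obtained as their conjugate transposes:
python
import numpy as np

# Standard basis kets |0> and |1> as column vectors
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

# Bras are the conjugate transposes (Hermitian conjugates) of the kets
bra0 = ket0.conj().T
bra1 = ket1.conj().T

print(bra0)  # [[1 0]]
print(bra1)  # [[0 1]]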
Inner and Outer Products¶
The inner product of two states $|\psi>$ and $|\phi>$ is denoted as:
math
<\psi|\phi>
and results in a scalar (a complex number). The outer product $|\psi><\phi|$, in contrast, results in a matrix.
Example: Inner and Outer Products¶
For kets $|0>$ and $|1>$:
math
<0|1> = 0
math
|0><1| = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
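These two products can be checked numerically with a minimal NumPy sketch (reusing the column-vector representations of |0> and |1>):
python
import numpy as np

ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

inner = (ket0.conj().T @ ket1).item()  # <0|1>, a scalar
outer = ket0 @ ket1.conj().T           # |0><1|, a 2x2 matrix

print(inner)  # 0
print(outer)  # [[0 1]
              #  [0 0]]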
Measuring Probability States¶
Measurement in classical information theory involves determining the state of a system based on its probability distribution.
Example: Measuring a Classical Bit¶
Suppose we observe a classical state (let's call it $\alpha$) that belongs to the classical state set $\Sigma$, and suppose $\alpha$ is the first state listed in $\Sigma$, so that:
math
\Sigma = \{\alpha, \beta, \gamma, \ldots\}
This observation changes the probabilistic state of our system (we will call the system X) from our viewpoint. We can now say that:
math
Pr(X = \alpha) = 1
or
math
|\alpha> = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
As another example, if a classical bit has the probability vector
math
\mathbf{p} = \begin{pmatrix} 0.75 \\ 0.25 \end{pmatrix}
or
math
Pr(X = 0) = 0.75
math
Pr(X = 1) = 0.25
then there is a 75% chance of measuring the bit in state 0 (i.e. $|0>$) and a 25% chance of measuring it in state 1 (i.e. $|1>$). Written as a linear combination of the standard basis vectors, this state is:
math
0.75|0> + 0.25 |1>
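To make these frequencies concrete, here is a small simulation sketch using NumPy (the sample size of 10,000 is an arbitrary choice):
python
import numpy as np

rng = np.random.default_rng()

# Probability vector: Pr(X = 0) = 0.75, Pr(X = 1) = 0.25
p = np.array([0.75, 0.25])

# Simulate many independent measurements of the classical bit
samples = rng.choice([0, 1], size=10_000, p=p)
print((samples == 0).mean())  # approximately 0.75
print((samples == 1).mean())  # approximately 0.25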
Classical Operations¶
Classical operations manipulate the states of classical bits. Common operations include AND, OR, and NOT gates.
Example: Classical NOT Gate¶
The NOT gate flips the state of a bit:
math
\text{NOT}(0) = 1
math
\text{NOT}(1) = 0
Deterministic Operations¶
Deterministic operations are those that produce a specific output for a given input with certainty. In classical computing, these operations are predictable and repeatable.
Example: Deterministic AND Gate¶
The AND gate outputs 1 only if both input bits are 1:
math
\text{AND}(0,0) = 0
math
\text{AND}(0,1) = 0
math
\text{AND}(1,0) = 0
math
\text{AND}(1,1) = 1
Example: Deterministic OR Gate¶
The OR gate outputs 1 if at least one of the input bits is 1:
math
\text{OR}(0,0) = 0
math
\text{OR}(0,1) = 1
math
\text{OR}(1,0) = 1
math
\text{OR}(1,1) = 1
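As an illustrative sketch, the truth tables above can be reproduced with ordinary Python functions (the function names are our own):
python
# Deterministic classical gates as plain Python functions
def NOT(a: int) -> int:
    return 1 - a

def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    return a | b

# Reproduce the truth tables above
print(f"NOT(0) = {NOT(0)}, NOT(1) = {NOT(1)}")
for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b}) = {AND(a, b)}   OR({a},{b}) = {OR(a, b)}")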
Probabilistic Operations¶
Probabilistic operations are those where the output is not deterministic but instead follows a probability distribution. These operations can be represented using stochastic matrices, where each entry represents the probability of transitioning from one state to another.
Stochastic Matrices¶
A stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Following the convention used throughout this notebook (probability vectors as column vectors, with matrices applied on the left), each entry $p_{ij}$ represents the probability of transitioning from state $j$ to state $i$. All entries are nonnegative, and each column of the matrix sums to 1, so each column is itself a probability vector.
Example: Stochastic Matrix¶
Consider a system with two states, 0 and 1. A stochastic matrix for this system might look like:
math
P = \begin{pmatrix} 0.8 & 0.5 \\ 0.2 & 0.5 \end{pmatrix}
Reading down the columns: if the system is in state 0, there is an 80% chance of staying in state 0 and a 20% chance of transitioning to state 1; if it is in state 1, there is a 50% chance of moving to state 0 and a 50% chance of staying in state 1.
Applying Probabilistic Operations¶
To apply a probabilistic operation to a probability vector, we multiply the stochastic matrix by the probability vector.
Example: Applying a Stochastic Matrix¶
If the initial probability vector is:
math
\mathbf{p} = \begin{pmatrix} 0.6 \\ 0.4 \end{pmatrix}
then the new probability vector after applying the stochastic matrix (P) is:
math
\mathbf{p}' = P \mathbf{p} = \begin{pmatrix} 0.8 & 0.5 \\ 0.2 & 0.5 \end{pmatrix} \begin{pmatrix} 0.6 \\ 0.4 \end{pmatrix} = \begin{pmatrix} 0.68 \\ 0.32 \end{pmatrix}
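The same calculation can be verified with a minimal NumPy sketch (using the stochastic matrix and probability vector above):
python
import numpy as np

# Column-stochastic matrix (each column sums to 1) and initial probability vector
P = np.array([[0.8, 0.5],
              [0.2, 0.5]])
p = np.array([0.6, 0.4])

p_new = P @ p
print(p_new)  # [0.68 0.32]
assert np.isclose(p_new.sum(), 1.0)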
Probabilistic operations can also be applied to classical bits. For example, a probabilistic NOT gate might flip the bit with a certain probability.
Example: Probabilistic NOT Gate¶
A probabilistic NOT gate might be represented by the following stochastic matrix:
math
P_{\text{NOT}} = \begin{pmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{pmatrix}
This matrix indicates a 10% chance of flipping the bit and a 90% chance of leaving it unchanged.
Compositions of Probabilistic Operations¶
Consider a system X with a classical state set $\Sigma$, and let $M_1, \ldots, M_n$ be stochastic matrices representing probabilistic operations on X.
When the first operation $M_1$ is applied to a probabilistic state represented by a probability vector $u$, the resulting state is given by the vector $M_1 u$. If we then apply a second probabilistic operation $M_2$ to this new state, the resulting probability vector is:
math
M_2 (M_1 u) = (M_2 M_1) u
This equality holds because matrix multiplication (including matrix-vector multiplication) is associative. Therefore, the combined probabilistic operation of first applying $M_1$ and then $M_2$ is represented by the matrix $M_2 M_1$.
It's important to note that the order of operations matters: while matrix multiplication is associative, it is not commutative.
For example, if we have:
math
M_1 = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad M_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
then:
math
M_2 M_1 = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix} \quad \text{and} \quad M_1 M_2 = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}
This demonstrates that changing the order of probabilistic operations can lead to different results.
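A quick NumPy check of this non-commutativity, using the two matrices above:
python
import numpy as np

M1 = np.array([[1, 1],
               [0, 0]])
M2 = np.array([[0, 1],
               [1, 0]])

print(M2 @ M1)  # [[0 0]
                #  [1 1]]
print(M1 @ M2)  # [[1 1]
                #  [0 0]]
# The two products differ, so the order of the operations matters.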
Quantum Information¶
Quantum State Vectors¶
A quantum state of a system is represented by a column vector, similar to probabilistic states. These vectors have two key properties:
- The entries are complex numbers.
- The sum of the absolute values squared of the entries equals 1.
Euclidean Norm¶
The Euclidean norm of a column vector
math
v = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}
is defined as:
math
\| v \| = \sqrt{\sum_{k=1}^n |\alpha_k|^2}
For quantum state vectors, this norm must equal 1, making them unit vectors.
Examples of Qubit States¶
A qubit is a quantum system with classical state set {0, 1}. Here are some examples of quantum states of a qubit:
- Basis States:
math
|0> = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |1> = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
- Superposition States:
math
\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}} |0> + \frac{1}{\sqrt{2}} |1>
math
\frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix} = \frac{1}{\sqrt{3}} |0> + \frac{\sqrt{2}}{\sqrt{3}} |1>
- Complex Number Entries:
math
\begin{pmatrix} \frac{1 + 2i}{3} \\ -\frac{2}{3} \end{pmatrix} = \frac{1 + 2i}{3} |0> - \frac{2}{3} |1>
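A minimal NumPy sketch confirming that each of the example states above has Euclidean norm 1:
python
import numpy as np

# The example qubit state vectors listed above
states = [
    np.array([1, 0]),                        # |0>
    np.array([1, 1]) / np.sqrt(2),           # equal superposition
    np.array([1, np.sqrt(2)]) / np.sqrt(3),
    np.array([(1 + 2j) / 3, -2 / 3]),        # complex entries
]

# Every valid quantum state vector is a unit vector
for v in states:
    print(np.linalg.norm(v))  # 1.0 in each case (up to floating-point rounding)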
Superposition and Linear Combination¶
Quantum states can be superpositions of basis states. For example:
math
|+> = \frac{1}{\sqrt{2}} |0> + \frac{1}{\sqrt{2}} |1>
math
|-> = \frac{1}{\sqrt{2}} |0> - \frac{1}{\sqrt{2}} |1>
Measuring Quantum States¶
When a quantum state is measured, the probability of each classical state is given by the absolute value squared of the corresponding entry in the quantum state vector. This is known as the Born rule.
Example: Measuring the Plus State¶
For the state $|+> = \frac{1}{\sqrt{2}} |0> + \frac{1}{\sqrt{2}} |1>$:
math
\text{Pr(outcome is 0)} = \left| \frac{1}{\sqrt{2}} \right|^2 = \frac{1}{2}
math
\text{Pr(outcome is 1)} = \left| \frac{1}{\sqrt{2}} \right|^2 = \frac{1}{2}
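A small NumPy sketch of the Born rule for this state:
python
import numpy as np

# The plus state |+> = (|0> + |1>) / sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared absolute values of the amplitudes
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]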
Unitary Operations¶
Operations on quantum states are represented by unitary matrices, which preserve the Euclidean norm of vectors.
Important Unitary Operations on Qubits¶
- Pauli Matrices:
math
I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
math
Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
- Hadamard Matrix:
math
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
- Phase Matrix:
math
P_\theta = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}
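As a quick numerical sanity check (a minimal NumPy sketch; the angle $\theta = \pi/4$ is an arbitrary example value), each of the matrices listed above satisfies $U^\dagger U = I$:
python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
theta = np.pi / 4  # arbitrary example angle
P_theta = np.array([[1, 0], [0, np.exp(1j * theta)]])

# A matrix U is unitary when U-dagger times U equals the identity
for U in (I2, X, Y, Z, H, P_theta):
    assert np.allclose(U.conj().T @ U, np.eye(2))
print("all listed matrices are unitary")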
Example: Hadamard Operation¶
Applying the Hadamard operation to the basis states:
math
H|0> = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = |+>
math
H|1> = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = |->
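A minimal NumPy sketch reproducing these two computations:
python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

print(H @ ket0)  # [0.7071  0.7071]  -> the |+> state
print(H @ ket1)  # [0.7071 -0.7071]  -> the |-> state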
Compositions of Unitary Operations¶
Compositions of unitary operations are represented by matrix multiplication, with the rightmost matrix applied first. For example, if we apply a Hadamard operation, then a phase operation $P_\theta$, and then a second Hadamard operation, the resulting operation is:
math
R = HP_\theta H
Example: Composition of Operations¶
If $H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ and $P_\theta = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}$, then:
math
R = H P_\theta H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix} \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
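A small NumPy sketch of this composition (the angle $\theta = \pi/2$ is an arbitrary example value; the rightmost matrix acts first):
python
import numpy as np

theta = np.pi / 2  # arbitrary example angle
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P_theta = np.array([[1, 0], [0, np.exp(1j * theta)]])

# Apply H, then P_theta, then H again
R = H @ P_theta @ H
print(np.round(R, 3))

# Sanity check: a composition of unitary operations is itself unitary
assert np.allclose(R.conj().T @ R, np.eye(2))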