
When I was in high school, bras were of great interest to me — mostly in regard to trying to remove them from my girlfriends. That was my errant youth, but it slightly tickles my sense of the absurd that they’ve once again become a topic of interest, though in this case a very different kind of bra.
These days it’s about the useful Bra-Ket notation invented by Paul Dirac. This notation is used throughout quantum mechanics, so it’s definitely something to know about if you’re interested in QM.
In brief, the notation uses the vertical bar (‘|’) and the angle brackets (the ‘⟨’ and ‘⟩’ symbols — not the less-than and greater-than symbols) to construct |kets⟩ and ⟨bras| that represent quantum states.
Quantum States
A ket denotes a quantum state. For example, in two-level quantum systems (like qubits) we have the canonical states:
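Namely, the two basis states:

$$|0\rangle \qquad\text{and}\qquad |1\rangle$$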
Which we would refer to verbally as ket zero and ket one. They are the quantum equivalents to computer bits, which can be either 0 or 1.
Part of the utility of the notation comes from the ability to put anything we want into a ket. For example, recalling the infamous cat, we might write its possible states as:
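For instance (the labels are mine, but any would do):

$$|\text{dead}\rangle \qquad\text{and}\qquad |\text{alive}\rangle$$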
Some authors even use the cat icons: |😿⟩ and |😺⟩.1 The point is that kets can be evocative as well as mathematical. They provide a convenient way to talk about quantum states.
When they are mathematical, a ket is typically a column vector representing a quantum state. In a two-level system, the canonical |0⟩ and |1⟩ states can be defined:
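In the standard convention:

$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad\qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$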
Where the upper number is an x-coordinate, and the lower number is a y-coordinate.2
Why vertical column vectors rather than the more familiar horizontal row vectors you may have encountered in school? Because when we apply an operator to a vector to get another vector, we’re multiplying the vector by a square matrix, and that only works with column vectors:
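Schematically, with a generic 2×2 operator:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix}$$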
It’s not a legitimate operation to multiply a square matrix by a row vector:
That’s because matrix multiplication requires the column count of the left matrix to match the row count of the right matrix. In a two-level system, the operator matrix has two rows and two columns, but a row vector has only one row. A column vector has two rows, though, so the multiplication works.
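A quick way to see the shape rule in action is with NumPy (my addition — the post itself contains no code):

```python
import numpy as np

# A 2x2 operator (this one happens to be the Pauli-Z matrix).
Z = np.array([[1, 0],
              [0, -1]])

ket = np.array([[1],
                [0]])       # |0> as a 2x1 column vector
bra = ket.conj().T          # <0| as a 1x2 row vector

column_result = Z @ ket     # (2x2) @ (2x1) -> a 2x1 column vector: legal
print(column_result.shape)  # (2, 1)

try:
    Z @ bra                 # (2x2) @ (1x2): 2 columns vs. 1 row -> illegal
except ValueError:
    print("operator times row vector: shape mismatch")
```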
Kets and Bras
So, a ket is a column vector representing a quantum state. The complementary bra is a row vector, but it’s more than just that.
Given some ket |a⟩, its bra ⟨a| is its complex conjugate transpose. The transpose of a matrix is a flip along its main diagonal. The conjugate of a complex number reverses the sign of the imaginary part.
For a two-level quantum system:
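Writing a generic ket |a⟩ with components x and y:

$$|a\rangle = \begin{pmatrix} x \\ y \end{pmatrix} \qquad\Longrightarrow\qquad \langle a| = \begin{pmatrix} x^* & y^* \end{pmatrix}$$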
Where x* and y* are the complex conjugates of x and y.3
Note that the definition of both kets and bras is a little more involved (see the Wiki page), but the above will get you through many basic situations. (For instance, it’s enough for everything I write in this Quantum Curious newsletter.)
What can we do with kets and bras?
Two of the most common operations are reflected in the canonical representation of a two-level quantum superposition:
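The standard form:

$$|\Psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle$$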
We can add kets, each a quantum state, to create a new state that is a superposition of those two states. In this case, we’re adding the |0⟩ and |1⟩ states to create a superposition we’ve labeled |Ψ⟩ (Psi). The two coefficients, α (alpha) and β (beta) determine the relative amounts of the two contributing states.
New states aren’t limited to only two such states:
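With coefficients and state labels of our choosing:

$$|\Psi\rangle = \alpha\,|a\rangle + \beta\,|b\rangle + \gamma\,|c\rangle + \cdots$$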
For as many as we need. We just add the column vectors. (Which requires that each has the same number of rows, but that is normally the case with different states of a given system.)
We can also multiply a ket by a numeric value (which can be real or complex):
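Using η as the numeric value:

$$\eta\,|a\rangle = \eta \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \eta x \\ \eta y \end{pmatrix}$$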
We multiply each component of the column vector by the numeric value.4
Inner Product
Another common operation is to take the inner product of two vectors. We do this by converting one of them to a bra:
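Writing the components of |a⟩ as x_a, y_a and of |b⟩ as x_b, y_b:

$$\langle a|b\rangle = \begin{pmatrix} x_a^* & y_a^* \end{pmatrix} \begin{pmatrix} x_b \\ y_b \end{pmatrix} = x_a^*\,x_b + y_a^*\,y_b$$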
Note that the result of this operation is a single numeric value, not a vector. (That value can be complex if the vector components are complex.)
One can think of an inner product as the product (i.e. the multiplication) of two multi-dimensional numbers. When such numbers have just one component (making them essentially ordinary numbers), then the inner product reduces to simple multiplication:
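For one-component vectors with real values a and b:

$$\langle a|b\rangle = a \cdot b$$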
Note an alternate way of sometimes writing the inner product operation: ⟨a,b⟩. The general form ⟨·,·⟩ is sometimes used to denote the entire inner product space.
A less common notation uses the dot operator: a·b (using the “middle dot” symbol, not the period), because when dealing with vectors, the inner product is sometimes called the dot product. It’s also sometimes called the scalar product of two vectors — referencing the notion of a product operation and a single numerical result.
More formally, the inner product measures the projection of one vector onto the other. This is especially helpful in determining whether two vectors are orthogonal to each other — if they are, their inner product is zero.
For instance, the canonical |0⟩ and |1⟩ states are orthogonal:
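Working it out:

$$\langle 0|1\rangle = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = (1)(0) + (0)(1) = 0$$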
The inner product of a vector with itself gives the length (or magnitude) squared of the vector. Given some vector, say v=[2, 3]:
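Taking the inner product of v with itself:

$$\langle v|v\rangle = \begin{pmatrix} 2 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ 3 \end{pmatrix} = 4 + 9 = 13$$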
Which is the same as the square of the Pythagorean length:
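That is:

$$\left(\sqrt{2^2 + 3^2}\right)^2 = \left(\sqrt{13}\right)^2 = 13$$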
The square root of the inner product of a vector with itself is its length. If the vector is normalized, the length, and thus its square, are both equal to 1.
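The same arithmetic in NumPy (again my sketch, not from the original post):

```python
import numpy as np

v = np.array([2, 3])

inner = np.dot(v.conj(), v)   # <v|v> = 2*2 + 3*3
print(inner)                  # 13 -- the squared length

length = np.sqrt(inner)       # the Pythagorean length, sqrt(13)

normalized = v / length       # rescaled so the length is 1
print(np.dot(normalized, normalized))  # ~1.0
```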
Outer Product
The term inner product raises an obvious question: Is there an outer product? There is, and it looks like this:
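In general, writing the components of |a⟩ as x_a, y_a and of |b⟩ as x_b, y_b:

$$|a\rangle\langle b| = \begin{pmatrix} x_a \\ y_a \end{pmatrix} \begin{pmatrix} x_b^* & y_b^* \end{pmatrix} = \begin{pmatrix} x_a x_b^* & x_a y_b^* \\ y_a x_b^* & y_a y_b^* \end{pmatrix}$$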
Unlike the inner product, which returns a single numeric value (a scalar), the outer product gives us a square matrix. That means we can use the notation as a convenient way to define operators.
For example, the outer product |0⟩⟨0| is:
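Multiplying it out:

$$|0\rangle\langle 0| = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$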
And the outer product |1⟩⟨1| is:
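Similarly:

$$|1\rangle\langle 1| = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$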
We don’t have to use the same vectors. We are free to mix and match. For example:
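Mixing |0⟩ with ⟨1|:

$$|0\rangle\langle 1| = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$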
Or:
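The reverse pairing:

$$|1\rangle\langle 0| = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$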
We can also use other values. We’re not restricted to the |0⟩ and |1⟩ kets.
We can combine outer products to create more interesting matrices. For example, if we add |0⟩⟨0| + |1⟩⟨1| we get the identity matrix:
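Adding them:

$$|0\rangle\langle 0| + |1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$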
If we add the outer products of a complete set of orthonormal basis vectors, we get the identity matrix.
We can multiply these by a number to create something more interesting. For example:
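Multiplying the second outer product by −1 before adding:

$$|0\rangle\langle 0| + (-1)\,|1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$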
Which gives us the Z-axis spin operator.
I wrote it with an explicit -1 factor to illustrate multiplying one of the outer products by a constant, but a cleaner (and more usual) way is to subtract the second outer product from the first. This gives us the necessary minus value:
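That is:

$$|0\rangle\langle 0| - |1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$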
While we’re at it, here is how we can construct the X-axis spin operator:
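The X operator mixes the kets and bras:

$$|0\rangle\langle 1| + |1\rangle\langle 0| = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$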
And the Y-axis spin operator:
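Which adds imaginary coefficients:

$$-i\,|0\rangle\langle 1| + i\,|1\rangle\langle 0| = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$$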
Pay attention to the various combinations and minus signs! I’ll come back to these three operators when I write about quantum spin.
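As a sanity check, a few lines of NumPy (my addition) build all three spin operators from outer products:

```python
import numpy as np

ket0 = np.array([[1], [0]])    # |0>
ket1 = np.array([[0], [1]])    # |1>
bra0 = ket0.conj().T           # <0|
bra1 = ket1.conj().T           # <1|

# Build the three spin operators from outer products.
Z = ket0 @ bra0 - ket1 @ bra1                  # |0><0| - |1><1|
X = ket0 @ bra1 + ket1 @ bra0                  # |0><1| + |1><0|
Y = -1j * (ket0 @ bra1) + 1j * (ket1 @ bra0)   # -i|0><1| + i|1><0|

print(Z)
print(X)
print(Y)
```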
For now, I just want to show how a large part of the value of bra-ket notation is due to the ability to so easily represent operations like inner and outer product as well as adding and multiplying by constants. These operations are used throughout quantum mechanics.
Multiplying Two Kets
One thing we cannot do with two kets is multiply them. For example:
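Writing two kets side by side:

$$|a\rangle\,|b\rangle$$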
This is not |a⟩ times |b⟩ in the ordinary matrix sense, because we cannot multiply two column vectors: for both, the column count is one and the row count is two, so neither ordering satisfies the multiplication rule. (And lacking a plus or minus sign, it isn’t their sum or difference, either.)
Placed side by side like this, the notation denotes the joint state of two particles in a given system. It is sometimes used for a pair of unrelated (non-entangled) particles, but it appears most often in discussions of entangled particles.
In either case, what’s meant is the tensor product, which gives us a new column vector with twice as many rows:
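For two-level states, writing the components of |a⟩ as x_a, y_a and of |b⟩ as x_b, y_b:

$$|a\rangle \otimes |b\rangle = \begin{pmatrix} x_a \\ y_a \end{pmatrix} \otimes \begin{pmatrix} x_b \\ y_b \end{pmatrix} = \begin{pmatrix} x_a x_b \\ x_a y_b \\ y_a x_b \\ y_a y_b \end{pmatrix}$$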
I’ll return to this when I post about quantum entanglement. It’s fundamental to understanding what happens when an entangled particle is measured.
A Note About Notation
As a final note, the angle brackets used in kets and bras (‘⟨’ and ‘⟩’) are not the less-than and greater-than symbols (‘<’ and ‘>’).
In the LaTeX system (which is how all the math is implemented in these posts) they are \langle and \rangle (‘left angle bracket’ and ‘right angle bracket’).
In HTML, the angle brackets are ⟨ and ⟩ — Unicode code points 10216 and 10217, respectively (27E8 and 27E9 in hex). HTML provides the named character references &lang; and &rang; for them, or you can use numerical character references: &#10216; and &#10217; (for the decimal values) and &#x27E8; and &#x27E9; (for the hex values).
For purists, the vertical bar (‘|’) is an ordinary ASCII character, though Unicode does offer a dedicated LIGHT VERTICAL BAR at code point 10072 (2758 hex).
Just don’t use the greater-than and less-than symbols. It looks wrong and it is wrong. If you do your writing in some other tool and then copy-paste to your blog, that sometimes converts ‘⟨’ and ‘⟩’ to ‘<‘ and ‘>’, so watch out for that.
Until next time…
1. But these create some risk of not being rendered correctly on all systems. For systems that don’t handle the Unicode, they are the sad and happy cat emojis, respectively.
2. Note that these can be — and often are — complex number coordinates in the Hilbert space used by wavefunctions.
3. If some complex number z is expressed as (a+bi), then its conjugate is (a-bi). [See Complex Number Forms for more about complex numbers.]
4. Using eta (η) as the numeric value is fairly common. It looks a bit like an n, which can stand for normalization constant.