28 Spherical tensors and Wigner-Eckart theorem
28.1 Motivation: electric quadrupole operator
Previously, we studied the effect of electric fields on the orbital electron of hydrogen using a coupling Hamiltonian of the form \hat{W} = q \vec{E} \cdot \hat{\vec{r}}. However, this is only accurate in the limit of a weak, uniform background electric field. More generally, our external field can have more structure, represented by a general electric potential V(\vec{r}). The first correction term in a multipole expansion would take the form of a quadrupole interaction, \hat{W}_{\rm quad} = \frac{q}{2} \sum_{i,j} \hat{r}_i \hat{r}_j \partial_i E_j written in terms of vector components. This is now an object with more complicated transformation properties under rotation, compared to the vector operators we’ve studied so far: each of the two copies of \hat{\vec{r}} will pick up a rotation matrix. Stripping off the other terms just to focus on the rotational structure gives the object \hat{T}_{ij} \equiv \hat{r}_i \hat{r}_j, known as a dyadic tensor. Under rotations, we know that \hat{T}_{ij} \rightarrow R_{ik} R_{jl} \hat{T}_{kl} with Einstein summation over k,l implied. In general, this seems like a complicated rotation that will scramble the different components of \hat{T}_{ij}. But if we look at the trace of the dyadic tensor, we see something else happen: \hat{T}_{ii} \rightarrow R_{ik} R_{il} \hat{T}_{kl} = (R^T R)_{kl} \hat{T}_{kl} (where the implicit sum over i is what lets us write the matrix product in the last step.) Since the rotation matrices are orthogonal, we know that R^T = R^{-1}, so R^T R is just the identity matrix. Plugging that in, we have the result \hat{T}_{ii} \rightarrow \hat{T}_{ll} or in other words, the trace of \hat{r}_i \hat{r}_j doesn’t transform under rotations at all. Of course, if we take a step back this is an obvious result, because \hat{T}_{ii} = \sum_i \hat{r}_i \hat{r}_i = \hat{\vec{r}}{}^2 which is just the squared length of the position vector - obviously invariant under any amount of rotation! 
If we go back to the quadrupole operator, this particular piece of the dyadic tensor gives us the term \hat{W}_{\rm quad} \supset \frac{q}{6} \hat{\vec{r}}{}^2 (\nabla \cdot \vec{E}) (substituting the trace part \hat{r}_i \hat{r}_j \rightarrow \frac{1}{3} \hat{\vec{r}}{}^2 \delta_{ij}, a form which follows from rotational invariance). But since we’re assuming the electric field is generated externally, so that there are no source charges in the region of interest, Gauss’s law gives \nabla \cdot \vec{E} = 0, so this piece of the dyadic tensor doesn’t actually give any physical effects in this context. Because of this, the electric quadrupole tensor is typically written as the traceless version of what we wrote above, Q_{ij} \propto (r_i r_j - \frac{1}{3} r^2 \delta_{ij}). (We can also derive this same form by noticing that the tensor \partial_i E_j we’re contracting against is symmetric, since \partial_i E_j = -\partial_i \partial_j U and partial derivatives commute, and traceless from \nabla \cdot \vec{E} = 0.) This remaining piece transforms non-trivially under rotation, but is clearly more complicated than a vector.
28.1.1 Rotation of a generic rank-2 tensor
The dyadic tensor above is somewhat of a special case because it is symmetric by definition, being built from two copies of the same vector. Let’s take a moment to look at the fully general case where we have a tensor of the form T_{ij} which may or may not be composed of two vectors (dyadic). This is going to be a purely classical study of how things transform under rotation, so I’m not bothering to write hats for now. Regardless of whether it’s built from vectors, a two-index tensor transforms under rotation by picking up one rotation matrix for each index, T_{ij} \rightarrow R_{ik} R_{jl} T_{kl}. We also already know from above that different components within the tensor will transform differently - in particular, we know that the trace doesn’t rotate at all. In fact, we can isolate three separate pieces that rotate separately by way of the decomposition T_{ij} = E \delta_{ij} + A_{ij} + S_{ij} where A_{ij} is a totally antisymmetric tensor, and S_{ij} is totally symmetric but has zero trace, i.e. \sum_i S_{ii} = 0. These facts about the different parts follow from an explicit construction: E = \frac{1}{3} \sum_i T_{ii} \\ A_{ij} = \frac{1}{2} (T_{ij} - T_{ji}) \\ S_{ij} = \frac{1}{2}(T_{ij} + T_{ji}) - \frac{1}{3} \delta_{ij} \sum_k T_{kk} This is just rewriting the original object T_{ij} by adding and subtracting things. Note that in three dimensions, we have 3 distinct non-zero components of A_{ij} and 5 of S_{ij}, so in all we recover the full set of 1+3+5=9 entries in an arbitrary 3x3 Cartesian tensor. The factor of 1/3 on the trace term ensures that the object S_{ij} is traceless, i.e. \sum_i S_{ii} = 0.
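This decomposition is easy to verify numerically. Here is a minimal numpy sketch (the names E, A, S follow the text; the rest of the setup is my own):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))  # a generic Cartesian rank-2 tensor

# the three pieces of the decomposition T_ij = E delta_ij + A_ij + S_ij
E = np.trace(T) / 3
A = (T - T.T) / 2
S = (T + T.T) / 2 - E * np.eye(3)

assert np.allclose(E * np.eye(3) + A + S, T)  # pieces reassemble T exactly
assert np.allclose(A, -A.T)                   # A is antisymmetric (3 components)
assert np.allclose(S, S.T)                    # S is symmetric...
assert np.isclose(np.trace(S), 0)             # ...and traceless (5 components)
```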
Let’s see how things transform under rotations, starting with the scalar piece. Following what we did for the trace above, we have E \delta_{ij} \rightarrow R_{im} R_{jn} E \delta_{mn} = R_{im} (R^T)_{ni} E = (R^T R)_{ij} E = E \delta_{ij} invoking the orthogonality of R again. So the E \delta_{ij} piece transforms into itself; it is, despite having two indices, behaving like a rank-0 tensor (i.e. a scalar) by not rotating at all.
On to the antisymmetric piece: A_{ij} \rightarrow R_{im} R_{jn} A_{mn}. Notice that this is still an antisymmetric tensor in terms of i and j: relabeling the dummy indices m \leftrightarrow n and using A_{nm} = -A_{mn}, we have A_{ji} \rightarrow R_{jm} R_{in} A_{mn} = R_{jn} R_{im} A_{nm} = -R_{jn} R_{im} A_{mn}. It is left as an exercise to prove that this object transforms as a rank-1 tensor. (If you’re trying to prove this, start by writing A_{ij} = \epsilon_{ijk} B_k; you will need to invoke the fact that \det(R) = 1.)
Finally, the symmetric traceless part transforms as S_{ij} \rightarrow R_{im} R_{jn} S_{mn}, which is still symmetric; furthermore, we see that S_{ii} \rightarrow R_{im} R_{in} S_{mn} = \delta_{mn} S_{mn} = S_{mm}, so the trace remains zero. This is the point of the decomposition, of course; in terms of the Cartesian components a generic rotation will mix all 9 components together, whereas with this separation the effects of rotation are block-diagonal.
So in summary, a naive approach indicates that arbitrary rotations mix all 9 components of T_{ij} with each other. However, we can break the components of T_{ij} up in a clever way to isolate three objects that rotate into themselves: a scalar E, a vector A, and the symmetric traceless tensor S. There is some deeper math behind the cleverness, which we’ve talked about before in our discussions back in Chapter 14. The full two-index T_{ij} gives a reducible representation of the rotation group, since it has two separate vector indices that each rotate. We can use tensor decomposition to rework this into a sum over smaller irreducible representations of the rotation group. In particular, if we remember the tensor decomposition rules for SO(3) from before, the decomposition of two l=1 objects is given by \mathbf{1} \otimes \mathbf{1} = \mathbf{2} \oplus \mathbf{1} \oplus \mathbf{0} where since the dimension of \mathbf{j} is (2j+1), the right-hand side is a sum of objects of dimension 5, 3, and 1 - exactly the three objects S, A, E we built above.
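We can also check the block-diagonal claim directly: rotating T and then decomposing gives exactly the same pieces as decomposing first and rotating each piece on its own. A small numpy sketch (helper names are mine):

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def decompose(T):
    """Split T into scalar, antisymmetric, and symmetric-traceless pieces."""
    E = np.trace(T) / 3
    A = (T - T.T) / 2
    S = (T + T.T) / 2 - E * np.eye(3)
    return E, A, S

R = rot_z(0.7) @ rot_x(1.2)              # a generic rotation matrix
T = np.random.default_rng(1).normal(size=(3, 3))

E, A, S = decompose(T)
Ep, Ap, Sp = decompose(R @ T @ R.T)      # T_ij -> R_ik R_jl T_kl

assert np.isclose(Ep, E)                 # the scalar piece doesn't rotate at all
assert np.allclose(Ap, R @ A @ R.T)      # A rotates only into itself
assert np.allclose(Sp, R @ S @ R.T)      # S rotates only into itself
assert np.isclose(np.trace(Sp), 0)       # and stays traceless
```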
28.2 Spherical tensors
The above is a good general story, but we need to be more explicit about how a tensor operator transforms under rotations if we want to get to e.g. selection rules for an electric quadrupole operator. We already know how scalar and vector operators transform, but how do we deal with the symmetric tensor S_{ij}?
We found a hint in our discussion of vector operators in the previous chapter: one fact we noticed is that a vector in spherical basis rotates in exactly the same way as an l=1 spherical harmonic, both transforming under the three-dimensional \mathbf{1} representation of the rotation group. Now we’ve argued that S_{ij} should transform under the \mathbf{2} representation as a five-component object; this should be the same transformation as an l=2 spherical harmonic.
So how do the higher spherical harmonics transform under rotations? Let’s write down the general formula: we start with Y_l^m(\vec{n}) = \left\langle \vec{n} | lm \right\rangle. Under a general rotation, the direction ket rotates as \ket{\vec{n}} \rightarrow \hat{\mathcal{D}}(R) \ket{\vec{n}} = \ket{\vec{n}'}. We can then write out the new spherical harmonic in terms of the ones oriented in the old direction: Y_l^m(\vec{n}') = \left\langle \vec{n}' | lm \right\rangle = \bra{\vec{n}} \hat{\mathcal{D}}(R)^{\dagger} \ket{lm} \\ = \sum_{l',m'} \left\langle \vec{n} | l'm' \right\rangle \bra{l'm'} \hat{\mathcal{D}}(R)^{\dagger} \ket{lm} \\ = \sum_{m'} Y_l^{m'}(\vec{n}) (\mathcal{D}_{mm'}^{(l)}(R))^\star. So the components of Y_l^m rotate according to the appropriate Wigner D-matrix: repeating the formula which we wrote before, {\mathcal{D}}^{(j)}_{m'm}(R) \equiv \bra{jm'} \exp \left( \frac{-i(\hat{\vec{J}} \cdot \vec{n}) \phi}{\hbar} \right) \ket{jm}. This is indeed a block matrix of size (2j+1) \times (2j+1), so it will give us a 5x5 matrix for our l=2 spherical harmonic. We thus take this relation to give us the definition of a spherical tensor operator: a spherical tensor operator of rank k, written \hat{T}_q^{(k)}, transforms under rotations as \hat{\mathcal{D}}^\dagger(R) \hat{T}_q^{(k)} \hat{\mathcal{D}}(R) = \sum_{q'=-k}^{k} \mathcal{D}_{qq'}^{(k)}{}^\star(R) \hat{T}_{q'}^{(k)}. In other words, spherical tensor operators are constructed such that under rotation, their components transform according to the corresponding Wigner D-matrix. (I’m saying “we take this as a definition”, but the more precise statement is “this is how any object which transforms in the \mathbf{k} representation of SO(3) will transform under rotation.”) Note that this is written out using spherical basis for the indices of our tensor and of the D-matrix, which is the natural thing to do given how the D-matrix is defined.
As we found before with vectors, taking this general relation and looking at an infinitesimal rotation leads to a set of commutation relations. Skipping the details, the key results are as follows:
A spherical tensor T_q^{(k)} of rank k is a (2k+1)-dimensional object whose components (in spherical basis) satisfy the commutation relations
[\hat{J}_z, \hat{T}_q^{(k)}] = \hbar q \hat{T}_q^{(k)}, and [\hat{J}_{\pm}, \hat{T}_q^{(k)}] = \hbar \sqrt{(k \mp q)(k \pm q + 1)} \hat{T}_{q \pm 1}^{(k)}.
These relations imply that the tensor transforms under the \mathbf{k} irrep of the rotation group SO(3).
Show that applying the definition above for a rotation by infinitesimal angle \epsilon leads to the commutation relations above.
Answer:
If we substitute the infinitesimal rotation \mathcal{D}(R) = \left(1 - \frac{i (\hat{\vec{J}} \cdot \vec{n}) \epsilon}{\hbar} \right) then the spherical tensor \hat{T}_q^{(k)} must satisfy \left(1 + \frac{i (\hat{\vec{J}} \cdot \vec{n}) \epsilon}{\hbar} \right) \hat{T}_q^{(k)} \left(1 - \frac{i (\hat{\vec{J}} \cdot \vec{n}) \epsilon}{\hbar} \right) \\ = \sum_{q'=-k}^{k} \hat{T}_{q'}^{(k)} \bra{kq'} \left(1 + \frac{i (\hat{\vec{J}} \cdot \vec{n}) \epsilon}{\hbar} \right)\ket{kq}. The first term on the right just gives \hat{T}_q^{(k)}, canceling the same term on the left. From the remaining order-\epsilon terms, we end up with a commutator: [\hat{\vec{J}} \cdot \vec{n}, \hat{T}_q^{(k)}] = \sum_{q'} \hat{T}_{q'}^{(k)} \bra{kq'} \hat{\vec{J}} \cdot \vec{n} \ket{kq}. Having the commutator for an arbitrary direction \vec{n} isn’t very convenient, but we can just plug in the cardinal directions; having all of the commutation relations for x,y,z is equivalent to this equation. Doing so and following up with some tedious algebra will let you reconstruct the relations given above.
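These commutation relations can also be checked numerically in a simple case. Here is a numpy sketch (my own setup, with ħ = 1; we take the spherical components of \hat{\vec{J}} itself on a spin-1 system as our rank-1 spherical tensor):

```python
import numpy as np

# spin-1 matrices in the |1,m> basis ordered m = +1, 0, -1 (hbar = 1)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.array([[0, np.sqrt(2), 0],
               [0, 0, np.sqrt(2)],
               [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)

# spherical components of J, which form a rank-1 spherical tensor
T = {1: -(Jx + 1j * Jy) / np.sqrt(2), 0: Jz, -1: (Jx - 1j * Jy) / np.sqrt(2)}

comm = lambda A, B: A @ B - B @ A
k = 1
for q in (-1, 0, 1):
    # [J_z, T_q] = q T_q
    assert np.allclose(comm(Jz, T[q]), q * T[q])
    # [J_+, T_q] = sqrt((k-q)(k+q+1)) T_{q+1}
    if q < k:
        assert np.allclose(comm(Jp, T[q]),
                           np.sqrt((k - q) * (k + q + 1)) * T[q + 1])
```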
Although I won’t go into the fine details of deriving them, in the same way that we did for vector operators, we can find two selection rules implied by these commutation relations: matrix elements of the form \bra{\alpha_f, j_f, m_f} \hat{T}_q^{(k)} \ket{\alpha_i, j_i, m_i} satisfy the m-selection rule m_f = m_i + q and the triangular selection rule |j_i - k| \leq j_f \leq j_i + k. A vector is just a spherical tensor of rank 1; plugging in k=1 gives us back the selection rules for vectors we found before as a special case. (If you want to prove these things, the simplest way is through the Wigner-Eckart theorem below, from which these just follow from the properties of Clebsch-Gordan coefficients.)
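Both selection rules are just statements about when a Clebsch-Gordan coefficient vanishes, which we can check with sympy (assuming sympy’s argument convention clebsch_gordan(j1, j2, j3, m1, m2, m3) = ⟨j1 j2; m1 m2 | j3 m3⟩):

```python
from sympy.physics.wigner import clebsch_gordan

# the matrix element <j_f m_f| T_q^(k) |j_i m_i> carries <j_i k; m_i q | j_f m_f>
ji, k = 1, 1   # e.g. a vector operator (k=1) acting on a j=1 state

# allowed: m_f = m_i + q and |j_i - k| <= j_f <= j_i + k
assert clebsch_gordan(ji, k, 2, 1, 0, 1) != 0

# m-selection rule: vanishes whenever m_f != m_i + q
assert clebsch_gordan(ji, k, 2, 1, 0, 0) == 0

# triangular rule: j_f = 3 lies outside |1-1| ... 1+1
assert clebsch_gordan(ji, k, 3, 0, 0, 0) == 0
```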
Using the defining commutation relation above for a spherical tensor of rank k=1, show that you recover exactly the spherical basis form of a vector operator \hat{V}_q that we found in the last chapter, including the normalization:
\begin{aligned} \hat{V}_1^{(1)} &= -\frac{\hat{V}_x + i\hat{V}_y}{\sqrt{2}}, \\ \hat{V}_0^{(1)} &= \hat{V}_z, \\ \hat{V}_{-1}^{(1)} &= \frac{\hat{V}_x - i\hat{V}_y}{\sqrt{2}}. \end{aligned}
Answer:
Starting from the vector-operator commutation relation [\hat{J}_i, \hat{V}_j] = i\hbar \epsilon_{ijk} \hat{V}_k and setting i=z, we find \begin{aligned} [\hat{J}_z, \hat{V}_x] &= i\hbar \hat{V}_y \\ [\hat{J}_z, \hat{V}_y] &= -i\hbar \hat{V}_x \\ [\hat{J}_z, \hat{V}_z] &= 0. \end{aligned} Combining the first two commutators, we see that [\hat{J}_z, \hat{V}_x \pm i \hat{V}_y] = i\hbar \hat{V}_y \pm \hbar \hat{V}_x = \pm \hbar (\hat{V}_x \pm i \hat{V}_y) So these combinations of \hat{V}_x and \hat{V}_y, along with \hat{V}_z on its own, will satisfy the defining relation for a rank-1 spherical tensor. The remaining commutator is needed to fix the relative normalization: [\hat{J}_{\pm}, \hat{V}_z] = -[\hat{V}_z, \hat{J}_x] \mp i [\hat{V}_z, \hat{J}_y] \\ = -i \hbar \epsilon_{zxy} \hat{V}_y \pm \hbar \epsilon_{zyx} \hat{V}_x \\ = \mp \hbar (\hat{V}_x \pm i \hat{V}_y). Comparing to the formula for [\hat{J}_{\pm}, \hat{T}_q^{(k)}] above, which for k=1, q=0 reads [\hat{J}_{\pm}, \hat{V}_0^{(1)}] = \hbar \sqrt{2}\, \hat{V}_{\pm 1}^{(1)}, we can read off \hat{V}_{\pm 1}^{(1)} = \mp (\hat{V}_x \pm i \hat{V}_y)/\sqrt{2}, which is exactly the spherical-basis expansion we wrote down above.
28.2.1 Application: electric multipole expansion
Let’s come back to the idea of multipole expansion for an external electric field, which motivated us initially to go down this path. Now that we know about spherical tensors, there is a nice direct connection we can make to multipole expansion. This is likely something you’ve seen already in an electromagnetism class, but the only result we will need is that the potential energy V(\vec{r}) = qU(\vec{r}), with U(\vec{r}) the external electrostatic potential, can be written as a series expansion in the form V(\vec{r}) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} f_{l,m}(r) Y_l^m(\theta, \phi). Given that the \vec{E} field is external, the potential V must also satisfy the Laplace equation \nabla^2 V(\vec{r}) = 0. It is straightforward to show that taken together, these equations lead to an expansion of the quantum potential V(\hat{\vec{r}}) in terms of a set of electric multipole operators V(\hat{\vec{r}}) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} c_{lm} \hat{Q}_m^{(l)}, where the \hat{Q}_m^{(l)} are defined in terms of their position-space matrix elements by \bra{\vec{r}'} \hat{Q}_m^{(l)} \ket{\vec{r}} = q \sqrt{\frac{4\pi}{2l+1}} r^l Y_l^m(\theta, \phi) \delta(\vec{r} - \vec{r}'), with the extra normalization factors added to match on to standard definitions. For example, since Y_0^0 is just the constant 1/\sqrt{4\pi}, the l=0 multipole operator reduces to just the charge q of our quantum particle, as it should. Moving on to l=1, you can easily convince yourself that the three \hat{Q}_m^{(1)} operators are just the spherical components of a vector, specifically the vector \vec{d} = q \vec{r}, i.e. the electric dipole.
Here are the steps I skipped above, going from the ordinary multipole expansion to the operator one. Starting from the Laplace equation: as we’ve seen before, the angular part of the Laplacian is just the orbital angular momentum operator, \nabla^2 V(\vec{r}) = \frac{1}{r} \frac{\partial^2}{\partial r^2} (rV(\vec{r})) - \frac{\hat{\vec{L}}{}^2}{\hbar^2 r^2} V(\vec{r}). Applying this to the series expansion for V(\vec{r}), the angular momentum operator just acts on the spherical harmonics to give back \hbar^2 l(l+1), so that we end up with a very simple equation for the radial functions: \frac{1}{r} \frac{\partial^2}{\partial r^2} (r f_{l,m}(r)) - \frac{l(l+1)}{r^2} f_{l,m}(r) = 0. We’ve solved this equation before; the solutions are r^l and r^{-(l+1)}. As usual, we’re interested in working near the origin (since our electric field is being generated from “outside” the region of interest), and thus we discard the divergent solutions and simply have f_{l,m}(r) = \sqrt{\frac{4\pi}{2l+1}} c_{l,m} r^l. Here the c_{l,m} are unknown coefficients, and we’ve added some extra normalization factors for later convenience.
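As a quick check of the claimed solutions, substituting f_{l,m}(r) \propto r^l into the radial equation gives

```latex
\frac{1}{r} \frac{\partial^2}{\partial r^2}\left(r \cdot r^l\right) - \frac{l(l+1)}{r^2}\, r^l
  = \frac{l(l+1)\, r^{l-1}}{r} - l(l+1)\, r^{l-2} = 0,
```

and an identical cancellation works for r^{-(l+1)}.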
In general, the multipole operator \hat{Q}^{(l)} is a rank l spherical tensor by construction. Matrix elements of multipole operators between different \ket{\alpha, l, m} states correspond to transition matrix elements, as for the dipole operator that we’ve already looked at. But for the moment, it’s more interesting to focus on diagonal matrix elements, which correspond to energy corrections in perturbation theory - or can be thought of as properties of the states being studied: \bra{\alpha, l, m} \hat{Q}_q^{(k)} \ket{\alpha, l, m}. Inserting the dipole operator (k=1) or the quadrupole operator (k=2) gives (up to constants of proportionality, including ones that depend on m) the “dipole moment” or “quadrupole moment” of a particle in this state. Without computing anything explicitly, we can use what we know to derive interesting selection rules. The triangular selection rule tells us that |l - k| \leq l \leq l+k which is equivalent to the statement that 0 \leq k \leq 2l. In other words, a state with angular momentum quantum number l cannot have any electric multipole moments with k > 2l.
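The triangle rule is easy to test numerically: a diagonal moment \bra{l, m} \hat{Q}_q^{(k)} \ket{l, m} is proportional to \left\langle l k; m q | l m \right\rangle, which we can evaluate with sympy (assuming its clebsch_gordan argument convention (j1, j2, j3, m1, m2, m3)):

```python
from sympy.physics.wigner import clebsch_gordan

# diagonal moments of an l=1 state: the triangle rule |l-k| <= l cuts off
# every multipole with k > 2l = 2
l, m = 1, 1
assert clebsch_gordan(l, 2, l, m, 0, m) != 0   # quadrupole (k=2) allowed
assert clebsch_gordan(l, 3, l, m, 0, m) == 0   # k=3 forbidden
assert clebsch_gordan(l, 4, l, m, 0, m) == 0   # k=4 forbidden
```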
We can get an extra constraint from parity: the states \ket{\alpha, l, m} have parity (-1)^l, and the operator \hat{Q}_q^{(k)} is built from Y_k^q(\theta, \phi), which has parity (-1)^k. The diagonal matrix element therefore picks up a factor of (-1)^l (-1)^k (-1)^l = (-1)^k under parity, so the reduced matrix element is zero unless (-1)^k = 1, which means k has to be even. Writing this out in equation form, the selection rules are: \bra{l} \hat{Q}^{(k)} \ket{l} = 0\ \textrm{unless}\ k = 0, 2, ..., 2l.
Here are a few simple examples of how this can be applied to atomic and nuclear systems:
- For the ground state of the hydrogen atom (l=0), all multipole moments (except the charge) must be zero.
- Any orbital angular momentum eigenstate has no electric dipole moment (k=1 multipole), up to effects that break parity invariance.
- The deuteron (a proton and neutron bound together) is observed to have an electric quadrupole moment. This tells us that its ground state is not an l=0 eigenstate.
28.3 Combination of spherical tensors
Combining two spherical tensors to form another spherical tensor is often a very useful technique; in fact, we started this section with such an example, the quadrupole moment operator which is built from two position vectors.
In fact, we already know how to do this: the rules for combination of spherical tensors are exactly the same as those for addition of angular momentum, and the coefficients are just the Clebsch-Gordan coefficients! If X_{q_1}^{(k_1)} and Z_{q_2}^{(k_2)} are irreducible spherical tensors of rank k_1 and k_2, then T_q^{(k)} = \sum_{q_1, q_2} \left\langle k_1 k_2; q_1 q_2 | k_1 k_2; k q \right\rangle X_{q_1}^{(k_1)} Z_{q_2}^{(k_2)} is an irreducible spherical tensor of rank k. The proof of this statement is straightforward; we just look at how both sides transform under rotation, and end up with products of Clebsch-Gordan coefficients to simplify on the right-hand side. I won’t go through the proof here since it’s messy, but it’s on page 251 of Sakurai if you’re interested. There’s also an “inverse formula” giving the components of the product X_{q_1}^{(k_1)} Z_{q_2}^{(k_2)}, which is occasionally useful, but it is basically just taking the equations written above and solving for the (X Z) product you want.
This formula gives us explicit ways to construct higher-rank spherical tensors from lower-rank ones, or to decompose a Cartesian tensor into spherical components.
Here, you should complete Tutorial 13 on “Combination of spherical tensors”. (Tutorials are not included with these lecture notes; if you’re in the class, you will find them on Canvas.)
As an example, to write out the rank-2 spherical tensor components of the dyadic tensor \hat{U}_i \hat{V}_j, all we have to do is look up a table of Clebsch-Gordan coefficients for the addition of two j=1 states to give j=2. This gives, for example, the result \hat{T}_{\pm 2}^{(2)} = \hat{U}_{\pm 1} \hat{V}_{\pm 1} \\ \hat{T}_{\pm 1}^{(2)} = \frac{1}{\sqrt{2}} (\hat{U}_{\pm 1} \hat{V}_0 + \hat{U}_0 \hat{V}_{\pm 1}) \\ \hat{T}_{0}^{(2)} = \frac{1}{\sqrt{6}} (\hat{U}_{1} \hat{V}_{-1} + 2 \hat{U}_0 \hat{V}_0 + \hat{U}_{-1} \hat{V}_1). Taking both U and V to be the position operator, we can use this to obtain the form of the quadrupole moment operator in Cartesian coordinates: from the definition Q_{ij} = q (3r_i r_j - r^2 \delta_{ij}) we can readily find the results (note that keeping the normalization consistent adds a factor of \sqrt{3/2}, readily seen from the q=0 component) \hat{Q}_{\pm 2}^{(2)} = \frac{\sqrt{6}}{4} q (x \pm iy)^2 \\ \hat{Q}_{\pm 1}^{(2)} = \mp \frac{\sqrt{6}}{2} q z(x \pm iy) \\ \hat{Q}_{0}^{(2)} = \frac{1}{2} q (3z^2 - r^2).
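We can verify this entire construction symbolically. The sketch below (sympy, with its clebsch_gordan convention ⟨j1 j2; m1 m2 | j3 m3⟩; the helper name T2 is mine) builds the rank-2 tensor from two copies of the position vector and checks the quadrupole components quoted above:

```python
from sympy import symbols, sqrt, I, expand, Rational
from sympy.physics.wigner import clebsch_gordan

x, y, z, q = symbols('x y z q')
r2 = x**2 + y**2 + z**2

# spherical components of the position vector (a rank-1 spherical tensor)
r_sph = {1: -(x + I*y)/sqrt(2), 0: z, -1: (x - I*y)/sqrt(2)}

def T2(m):
    """Rank-2 spherical tensor built from two copies of r via Clebsch-Gordans."""
    return sum(clebsch_gordan(1, 1, 2, q1, q2, m) * r_sph[q1] * r_sph[q2]
               for q1 in (-1, 0, 1) for q2 in (-1, 0, 1) if q1 + q2 == m)

# the overall factor q*sqrt(3/2) matches Q_ij = q(3 r_i r_j - r^2 delta_ij)
Q = {m: expand(q * sqrt(Rational(3, 2)) * T2(m)) for m in range(-2, 3)}

assert Q[0] == expand(q * (3*z**2 - r2) / 2)
assert Q[2] == expand(q * sqrt(6)/4 * (x + I*y)**2)
assert Q[1] == expand(-q * sqrt(6)/2 * z * (x + I*y))
assert Q[-1] == expand(q * sqrt(6)/2 * z * (x - I*y))
```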
Due to the appearance of Clebsch-Gordan coefficients, the combination of spherical tensors to get other tensors follows the usual set of selection rules, which we can understand in terms of representations of the rotation group. As we already argued, the decomposition of the dyadic tensor can be written using tensor decomposition in the form \mathbf{1} \otimes \mathbf{1} = \mathbf{2} \oplus \mathbf{1} \oplus \mathbf{0} and indeed, the selection rules of the Clebsch-Gordans require only spherical tensors of rank k=0,1,2 to appear in the sum for a product of two vectors. On some level we already knew this, but the nice thing about our more concrete formula is that we have the coefficients of the decomposition as well, so we can do all of this explicitly.
Tensors of rank higher than two are rarely encountered, but if you do run into one, now you have the formalism to decompose it into spherical tensor components. The trick is to use the tensor decomposition rules from Chapter 14, together with the fact that the direct product, like ordinary multiplication, is distributive over (direct) sums. For example, suppose we were to construct a three-index tensor from vectors, \hat{T}_{ijk} = \hat{U}_i \hat{V}_j \hat{W}_k. How will this decompose into spherical tensors? Well, \mathbf{1} \otimes \mathbf{1} \otimes \mathbf{1} = \mathbf{1} \otimes (\mathbf{2} \oplus \mathbf{1} \oplus \mathbf{0}) \\ = (\mathbf{3} \oplus \mathbf{2} \oplus \mathbf{1}) \oplus (\mathbf{2} \oplus \mathbf{1} \oplus \mathbf{0}) \oplus \mathbf{1} \\ = \mathbf{3} \oplus \mathbf{2}_2 \oplus \mathbf{1}_3 \oplus \mathbf{0} where the subscript tells us how many distinct spherical tensors there will be of a given rank. As always, we should check the number of states: for a three-index tensor formed in this way we expect to find 27 unique components, and on the right we have 7+2(5)+3(3)+1 = 27, so everything checks out. If we actually want to calculate matrix elements of \hat{T}_{ijk}, we still need the coefficients of this decomposition; those we can obtain using the above formulas and Clebsch-Gordan tables.
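The bookkeeping in this dimension count is easy to automate. A tiny sketch (the helper name `times` is mine), using only the SO(3) rule that j_a \otimes j_b decomposes into j = |j_a - j_b|, ..., j_a + j_b:

```python
def times(a, b):
    """SO(3) decomposition: irrep a times irrep b gives |a-b|, ..., a+b."""
    return list(range(abs(a - b), a + b + 1))

# decompose 1 x 1 x 1 by distributing over the direct sums
result = [j2 for j1 in times(1, 1) for j2 in times(j1, 1)]

assert sorted(result) == [0, 1, 1, 1, 2, 2, 3]   # 3 (+) two 2's (+) three 1's (+) 0
assert sum(2*j + 1 for j in result) == 27        # = 3^3 components, as expected
```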
28.4 The Wigner-Eckart theorem
I’m going to begin by stating the Wigner-Eckart theorem; then I’ll explain what it means. I include a proof in my notes, but it is technical enough that I have relegated it to an aside that you can find below if you’re interested.
Given a spherical tensor operator \hat{T}_q^{(k)}, its matrix elements with respect to eigenstates of angular momentum satisfy the relation \bra{\alpha', j', m'} \hat{T}_q^{(k)} \ket{\alpha, j, m} = \left\langle jk; mq | jk; j' m' \right\rangle \frac{\bra{\alpha' j'} | \hat{T}^{(k)} | \ket{\alpha j}}{\sqrt{2j'+1}}, where \bra{\alpha' j'} | \hat{T}^{(k)} | \ket{\alpha j} is known as the reduced matrix element, and is independent of the magnetic quantum numbers m,m',q.
Once again, the Clebsch-Gordan coefficients have shown up on the right-hand side here. Exactly what the reduced matrix element is varies depending on what operator we’re evaluating, which is why we don’t have a precise formula for it; we’ll see some examples shortly.
What does the Wigner-Eckart theorem mean? It’s simply a statement of rotational symmetry: angular-momentum eigenstates \ket{j,m} with the same j but different m are related to each other by rotations. Thus, if we calculate one matrix element involving \ket{j,m} and \ket{j',m'}, we get all of the other ones by applying rotations (which simply takes the form of Clebsch-Gordan coefficients in the formula.) In fact, this is generally how we compute the reduced matrix element in practice: we calculate the left-hand side for one choice of magnetic quantum numbers, and then use the formula to get all the other choices for free.
For our example of the radiative dipole transition 3d \rightarrow 2p in hydrogen, selection rules already reduced the number of matrix elements we needed to calculate from 45 down to 9. The Wigner-Eckart theorem allows us to calculate just 1 matrix element, and rotational symmetry does the rest of the work for us.
Speaking of selection rules, the Wigner-Eckart theorem immediately implies our two selection rules m' = m+q and |j-k| \leq j' \leq j+k for spherical tensor operators without having to worry about integrals over spherical harmonics, simply due to the Clebsch-Gordan coefficients. (Of course, a matrix element which passes the selection rules is not guaranteed to be non-zero; other properties of \hat{T}^{(k)} can give further restrictions, like parity in our example from last time.)
Whenever I talk about Wigner-Eckart, the reduced matrix element is the source of a lot of confusion; it isn’t really a valid matrix element on its own. To get matrix elements, we add a Clebsch-Gordan, which we can think of as containing all of the rotational dependence. So in some sense, the reduced matrix element is the “rotationally invariant part” of a given matrix element involving \ket{jm} states.
There are a few useful ways to rewrite the Wigner-Eckart theorem to get corollaries, some of which have names. This way doesn’t have a name as far as I know, but I find it conceptually very useful. The notation gets a little dense, but bear with me:
\bra{\alpha', j', m'_2} \hat{T}_{q_2}^{(k)} \ket{\alpha, j, m_2} = \frac{\left\langle jk; m_2 q_2 | jk; j' m'_2 \right\rangle}{\left\langle jk; m_1 q_1 | jk; j' m'_1 \right\rangle} \bra{\alpha', j', m'_1} \hat{T}_{q_1}^{(k)} \ket{\alpha, j, m_1}.
In words: all matrix elements that only differ by changing the magnetic quantum numbers m, m', q are related to each other by Clebsch-Gordan coefficients. This is the real heart of what Wigner-Eckart means. We never have to calculate anything called a “reduced matrix element” if we don’t want to: we just calculate one regular matrix element, and then rotation (via C-G coefficients) gives us all the rest.
Let’s go through the proof of the theorem. We focus on the evaluation of the commutator of \hat{T}_q^{(k)} with the angular momentum ladder operators: \bra{\alpha', j', m'} [\hat{J}_{\pm}, \hat{T}_q^{(k)}] \ket{\alpha, j, m} = \hbar \sqrt{(k \mp q)(k \pm q + 1)} \bra{\alpha', j', m'} \hat{T}_{q \pm 1}^{(k)} \ket{\alpha, j, m} However, we can also just evaluate the ladder operators against the angular-momentum eigenstates on either side. This gives us the equation \sqrt{(j' \pm m')(j' \mp m' + 1)} \bra{\alpha', j', m' \mp 1} \hat{T}_q^{(k)} \ket{\alpha, j, m} = \\ \sqrt{(j \mp m)(j \pm m + 1)} \bra{\alpha', j', m'} \hat{T}_q^{(k)} \ket{\alpha, j, m \pm 1} + \\ \sqrt{(k \mp q)(k \pm q + 1)} \bra{\alpha', j', m'} \hat{T}_{q \pm 1}^{(k)} \ket{\alpha, j, m}. Defining the shorthand X_{m',m}^{q,k} \equiv \bra{\alpha', j', m'} \hat{T}_q^{(k)} \ket{\alpha, j, m}, we can more compactly write the above equation as \sqrt{(j' \pm m')(j' \mp m' + 1)} X_{m' \mp 1, m}^{q,k} = \\ \sqrt{(j \mp m)(j \pm m + 1)} X_{m',m \pm 1}^{q, k} + \sqrt{(k \mp q)(k \pm q + 1)} X_{m',m}^{q \pm 1, k} Although I didn’t focus on it in these notes, it so happens that the Clebsch-Gordan coefficients C_{m_1,m_2}^{j,m} \equiv \left\langle j_1 j_2; m_1 m_2 | j_1 j_2; j m \right\rangle obey a very similar-looking recursion relation: \sqrt{(j \pm m)(j \mp m + 1)} C_{m_1, m_2}^{j, m \mp 1} = \\ \sqrt{(j_1 \mp m_1)(j_1 \pm m_1 + 1)} C_{m_1 \pm 1, m_2}^{j,m} + \sqrt{(j_2 \mp m_2)(j_2 \pm m_2 + 1)} C_{m_1, m_2 \pm 1}^{j,m} These equations are basically identical, if we make the identifications (j,m) \rightarrow (j_1, m_1) \\ (k,q) \rightarrow (j_2, m_2) \\ (j', m') \rightarrow (j,m). This does not imply that our spherical-tensor matrix elements X_{m',m}^{q,k} are equal to the Clebsch-Gordan coefficients; we can multiply the X’s by an overall constant factor, and they will still satisfy this linear equation. The normalization of the Clebsch-Gordan coefficients was fixed by orthonormality of the angular-momentum eigenstates that they connect together, but since now we have matrix elements of an unknown operator \hat{T}_q^{(k)}, we have no such normalization condition.
So our matrix elements X_{m',m}^{q,k} must be proportional to the Clebsch-Gordan coefficients we would obtain for addition of angular momenta j and k to obtain j'. The recursion relation we have above relates matrix elements with different magnetic quantum numbers m,m',q, but not with different j,j',k, so we see that the unknown normalization constant can depend on the latter numbers. We write the normalization as the reduced matrix element times an extra constant: \bra{\alpha', j', m'} \hat{T}_q^{(k)} \ket{\alpha, j, m} = \left\langle jk;mq | jk;j'm' \right\rangle \frac{\bra{\alpha' j'}|\hat{T}^{(k)}|\ket{\alpha j}}{\sqrt{2j'+1}}. The constant \sqrt{2j'+1} is a normalization constant, the use of which isn’t immediately obvious. To see why we include it, let’s take the squared amplitude and then sum over magnetic quantum numbers on both sides: \sum_{m,m',q} |\bra{\alpha', j', m'} \hat{T}_q^{(k)} \ket{\alpha, j, m}|^2 = \frac{1}{2j'+1} |\bra{\alpha', j'}|\hat{T}^{(k)}|\ket{\alpha,j}|^2 \sum_{m,m',q} |\left\langle jk;mq | jk;j'm' \right\rangle|^2 How do we evaluate the sum? Summing over m and q here is just insertion of a complete set of states: \sum_{m,m',q} |\left\langle jk;mq | jk;j'm' \right\rangle|^2 = \sum_{m,m',q} \left\langle jk;j'm' | jk;mq \right\rangle \left\langle jk;mq | jk;j'm' \right\rangle \\ = \sum_{m'} \left\langle j'm' | j'm' \right\rangle \\ = (2j'+1), i.e. the number of distinct m' states. So our normalization ensures that the square of the reduced matrix element is just the sum of the squares of the full matrix elements at each magnetic quantum number m',m,q.
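The normalization sum rule at the end is easy to check for a specific case with sympy, say j=1, k=1, j'=2 (using sympy’s clebsch_gordan argument convention ⟨j k; m q | j' m'⟩):

```python
from sympy.physics.wigner import clebsch_gordan

j, k, jp = 1, 1, 2
# sum over all magnetic quantum numbers of |<j k; m q | j' m'>|^2
total = sum(clebsch_gordan(j, k, jp, m, q, mp)**2
            for m in range(-j, j + 1)
            for q in range(-k, k + 1)
            for mp in range(-jp, jp + 1))
assert total == 2*jp + 1   # the (2j'+1) of the normalization convention
```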
28.4.1 Reduced matrix element for angular momentum
Let’s try to do a quick example calculation for what a reduced matrix element looks like, before we move on. A particularly simple operator to consider is the angular momentum operator \hat{\vec{J}} (we’ll also need this reduced matrix element for something else below!) We already know that we can write the spherical tensor components in the following way: \hat{J}_{\pm 1}^{(1)} = \mp \frac{1}{\sqrt{2}} \hat{J}_{\pm}, \\ \hat{J}_{0}^{(1)} = \hat{J}_z. Since the reduced matrix element is q-independent, we can obtain it just by studying the q=0 component here. We can also ignore the \alpha quantum numbers, since j and m are the only quantum numbers that \hat{\vec{J}} will act on. Finally, we must set j'=j, since we already know that matrix elements between different j values will vanish. The Wigner-Eckart theorem thus gives us \bra{j,m'} \hat{J}_0^{(1)} \ket{j,m} = \left\langle j1;m0 | j1;jm' \right\rangle \frac{\bra{j}|\hat{J}^{(1)}|\ket{j}}{\sqrt{2j+1}}.
This is not useful yet, because there are unknown quantities on both sides of the equation! Fortunately, the left-hand side is easy to evaluate, remembering that \hat{J}_0^{(1)} = \hat{J}_z: \bra{j,m'} \hat{J}_0^{(1)} \ket{j,m} = \hbar m\delta_{mm'}. Now we just need a Clebsch-Gordan coefficient, but not one we’ll easily find in a table since we’re adding j=1 to a second arbitrary j. This is a great place to see some general formulas that will let us quickly find the answer. To do that, I’m going to introduce an alternate notation that you should be exposed to.
My convention here is slightly different from Sakurai’s: he divides by (2j+1) and not (2j'+1). Unfortunately he never explains this choice, so I’m not sure exactly what the motivation is for using that normalization. Many books will not include an extra normalization factor at all; this is all a matter of conventions, so whatever you’re doing just make sure you’re consistent!
28.5 Wigner 3j symbols
We define a new object called the Wigner 3j-symbol, which looks like a small 2 \times 3 matrix: \left( \begin{array}{ccc} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{array} \right) The Clebsch-Gordan coefficients are related to this object in the following way: \left\langle j_1 j_2; m_1 m_2 | j_1 j_2; j m \right\rangle = (-1)^{j_1-j_2+m} \sqrt{2j+1} \left( \begin{array}{ccc} j_1 & j_2 & j \\ m_1 & m_2 & -m \end{array} \right). There are two good reasons to define the 3j-symbols. First, they are basically the Clebsch-Gordan coefficients with nicer symmetry properties. Any cyclic permutation of the columns leaves a 3j-symbol unchanged, while any exchange of two columns picks up a factor of (-1)^{j_1+j_2+j}, for example \left( \begin{array}{ccc} j_1 & j_2 & j \\ m_1 & m_2 & m \end{array} \right) = \left( \begin{array}{ccc} j & j_1 & j_2 \\ m & m_1 & m_2 \end{array} \right) = (-1)^{j_1+j_2+j} \left( \begin{array}{ccc} j_2 & j_1 & j \\ m_2 & m_1 & m \end{array} \right). Flipping the signs of all of the m values multiplies the symbol by the same factor: \left( \begin{array}{ccc} j_1 & j_2 & j \\ -m_1 & -m_2 & -m \end{array} \right) = (-1)^{j_1+j_2+j} \left( \begin{array}{ccc} j_1 & j_2 & j \\ m_1 & m_2 & m \end{array} \right). Some of the other formulas we’ve derived look much nicer in terms of the 3j-symbols. In particular, our formula for integration over three spherical harmonics takes a much more symmetric-looking form: \int d\Omega Y_{l_1}^{m_1}(\theta, \phi) Y_{l_2}^{m_2}(\theta, \phi) Y_{l_3}^{m_3}(\theta, \phi) \\ = \sqrt{\frac{(2l_1+1)(2l_2+1)(2l_3+1)}{4\pi}} \left( \begin{array}{ccc} l_1 & l_2 & l_3 \\ 0 & 0 & 0 \end{array} \right) \left( \begin{array}{ccc} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{array} \right). The 3j-symbols satisfy selection rules, just like the Clebsch-Gordan coefficients; however, because of the notational rearrangement the rules look slightly different.
For example, in the above integral the right-hand side vanishes unless m_1 + m_2 = -m_3, and unless the three l values satisfy the triangle inequality, |l_1 - l_2| \leq l_3 \leq l_1 + l_2. (The first 3j-symbol, with all m values zero, additionally vanishes unless l_1 + l_2 + l_3 is even.)
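The symmetry properties quoted above are easy to spot-check with SymPy's exact wigner_3j(j1, j2, j3, m1, m2, m3); the particular quantum numbers below are arbitrary choices for illustration:

```python
# Spot-check the 3j-symbol symmetries: cyclic column permutations leave it
# unchanged; a column swap or flipping all m signs gives (-1)^(j1+j2+j3).
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_3j

j1, j2, j3 = 1, Rational(3, 2), Rational(1, 2)
m1, m2, m3 = 0, Rational(1, 2), Rational(-1, 2)

s = wigner_3j(j1, j2, j3, m1, m2, m3)
sign = (-1)**int(j1 + j2 + j3)   # the sum of j's is always an integer

# cyclic permutation of columns: unchanged
assert simplify(wigner_3j(j3, j1, j2, m3, m1, m2) - s) == 0
# exchange of two columns: factor (-1)^(j1+j2+j3)
assert simplify(wigner_3j(j2, j1, j3, m2, m1, m3) - sign*s) == 0
# flipping the signs of all m values: the same factor
assert simplify(wigner_3j(j1, j2, j3, -m1, -m2, -m3) - sign*s) == 0
```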
The second reason is that there is a standard formula, the Racah formula, for the value of an arbitrary 3j-symbol. I won’t reproduce the Racah formula here, since it’s rather messy (although it’s the kind of thing which is quite well-suited to implementation in a computer program), but it allows us to derive relatively compact formulas for certain special cases, like the one we’re dealing with here. In particular, we can derive the special-case formula we need, \left( \begin{array}{ccc} j & 1 & j \\ m & 0 & -m \end{array} \right) = (-1)^{1-j-m} \frac{m}{\sqrt{j(j+1)(2j+1)}}. Before we go back to our derivation, I’ll just note in passing that there exist generalizations of the 3j symbols, such as the 6j-symbols for addition of three angular momenta, and the 9j-symbols for addition of four angular momenta. For things at the level of this class, if we encounter a situation where we have to add three angular momenta it’s easier to just add two of them first, and then add the third to the combination.
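SymPy implements the Racah formula internally, so we can check the special-case formula above against it for a few (j, m) pairs, including half-integer j (the helper name special_case is just for illustration):

```python
# Verify the closed form (j 1 j; m 0 -m) = (-1)^(1-j-m) m / sqrt(j(j+1)(2j+1))
# against SymPy's exact wigner_3j, which uses the general Racah formula.
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_3j

def special_case(j, m):
    """The closed form quoted in the text."""
    return (-1)**(1 - j - m) * m / sqrt(j*(j + 1)*(2*j + 1))

for j, m in [(1, 1), (2, -1), (Rational(3, 2), Rational(1, 2)),
             (Rational(5, 2), Rational(-3, 2))]:
    assert simplify(wigner_3j(j, 1, j, m, 0, -m) - special_case(j, m)) == 0
```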
Finally, back to our reduced matrix element derivation. Replacing our Clebsch-Gordan coefficient with a 3j symbol, and using the fact that m'=m, we have \bra{j,m'} \hat{J}_0^{(1)} \ket{j,m} = (-1)^{j-1+m} \left( \begin{array}{ccc} j & 1 & j \\ m & 0 & -m \end{array} \right) \frac{\bra{j}|\hat{J}^{(1)}|\ket{j}}{\sqrt{2j+1}} \\ = \frac{1}{2j+1} \frac{m}{\sqrt{j(j+1)}} \bra{j}|\hat{J}^{(1)}|\ket{j}. Since the left-hand side is just equal to \hbar m, we thus find for the reduced matrix element \bra{j}|\hat{J}^{(1)} |\ket{j} = \hbar (2j+1) \sqrt{j(j+1)}. I’ll stop here, but it would be good practice to use this to evaluate some matrix elements with q = \pm 1, and compare them to what you get using the ladder operators \hat{J}_{\pm} - they had better agree!
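A convention-independent version of that practice check: by Wigner-Eckart, the ratio of two matrix elements of \hat{J}_q^{(1)} at fixed j is a ratio of Clebsch-Gordan coefficients, with the reduced matrix element dropping out entirely. We can compare the q=+1 to q=0 ratio against the ladder-operator result \bra{j,m+1}\hat{J}_{+1}\ket{j,m}/\bra{j,m}\hat{J}_0\ket{j,m} = -\sqrt{(j-m)(j+m+1)}/(\sqrt{2}\, m), a sketch assuming SymPy's clebsch_gordan:

```python
# Wigner-Eckart says ratios of J_q matrix elements at fixed j are ratios of
# Clebsch-Gordan coefficients; compare against the explicit ladder-operator
# result for J_{+1} = -J_+/sqrt(2) and J_0 = J_z.
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import clebsch_gordan

for j, m in [(1, -1), (2, 1), (Rational(3, 2), Rational(1, 2))]:
    we_ratio = (clebsch_gordan(j, 1, j, m, 1, m + 1)
                / clebsch_gordan(j, 1, j, m, 0, m))
    ladder_ratio = -sqrt((j - m)*(j + m + 1)) / (sqrt(2)*m)
    assert simplify(we_ratio - ladder_ratio) == 0
```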
28.6 The Zeeman effect
Let’s start seeing some practical applications of all this machinery, beginning with an example we studied before in the context of 2p hydrogen (see Section 14.4.1), namely the splitting of energy levels due to an applied external magnetic field. The Hamiltonian picks up a contribution of the form \hat{W} = \frac{eB}{2m_e c} (\hat{L}_z + 2\hat{S}_z). We calculated the effect of this contribution for the example of the 2p orbital of a hydrogen atom explicitly; now we’re prepared for a completely general treatment. This term breaks the rotational symmetry of the Hamiltonian, so that the states \ket{j m} are no longer exact energy eigenstates. However, as long as B is small we can do perturbation theory, working with the B=0 eigenstates.
Let’s start with \hat{L}_z; we’ll just study the first-order perturbation \left\langle \hat{W} \right\rangle, meaning matrix elements with the same state \ket{jm} on both sides. If we attack it directly with the Wigner-Eckart theorem, we have the following, suppressing the \alpha index: \bra{jm} \hat{L}_z \ket{jm} = \bra{jm} \hat{L}_0^{(1)} \ket{jm} = \left\langle j1;m0 | j1;jm \right\rangle \frac{\bra{j} | \hat{L}^{(1)} |\ket{j}}{\sqrt{2j+1}}. Normally, we would proceed by picking some values for m and q that make the left-hand side simplest to evaluate, and then use Wigner-Eckart to get the general formula. Unfortunately, \ket{jm} aren’t eigenstates of \hat{\vec{L}}, only of \hat{\vec{J}}, so there isn’t really any case where we can just easily compute the left-hand side.
Let’s zoom out slightly and think about exactly what Wigner-Eckart is telling us, which will lead us to another approach. Normally we use Wigner-Eckart to relate matrix elements of a single operator to each other. But it also tells us something interesting about matrix elements of different operators of the same rank. To be concrete, let’s write out the same matrix element but for \hat{J}_z instead: \bra{jm} \hat{J}_z \ket{jm} = \left\langle j1;m0 | j1;jm \right\rangle \frac{\bra{j} | \hat{J}^{(1)} | \ket{j}}{\sqrt{2j+1}}. If the value of j is held fixed, then all of the m dependence is contained in the Clebsch-Gordan coefficients, which are the same in both cases. This means that the two expectation values for arbitrary m are proportional to each other, \bra{jm} \hat{L}_0^{(1)} \ket{jm} = \frac{\bra{j} | \hat{L}^{(1)} |\ket{j}}{\bra{j} | \hat{J}^{(1)} |\ket{j}} \bra{jm} \hat{J}_0^{(1)} \ket{jm}.
In fact, for the moment we’re assuming the initial and final states are equal for simplicity, but this statement extends to arbitrary matrix elements between \ket{jm} and \bra{j'm'}, and to other spherical tensors. What we are seeing is a special case of a simple corollary of the Wigner-Eckart theorem known as the replacement theorem:
Given any two spherical tensors \hat{X}_q^{(k)} and \hat{Z}_q^{(k)} of the same rank k, their matrix elements satisfy the relation
\bra{\alpha', j', m'} \hat{X}_q^{(k)} \ket{\alpha, j, m} = \frac{\bra{\alpha' j'}|\hat{X}^{(k)}|\ket{\alpha j}}{\bra{\beta' j'}|\hat{Z}^{(k)}|\ket{\beta j}} \bra{\beta', j', m'} \hat{Z}_q^{(k)} \ket{\beta, j, m}
i.e., they are equal up to a constant which does not depend on the magnetic quantum numbers m, m', q.
If we think of the structure of the entire matrix in this (2j+1) \times (2j'+1) subspace, then for any two spherical tensors of the same rank, the replacement theorem tells us that their matrices are identical up to a constant rescaling. In other words, the matrix structure in this space is entirely dictated by rotational symmetry.
Back to our concrete example for the Zeeman effect: the formula we left off with had two reduced matrix elements in it, but we don’t really care what those are, we just care that for fixed j they are both constants. This means that a more useful way to rewrite the replacement theorem between \hat{\vec{L}} and \hat{\vec{J}} is, for arbitrary q, \bra{jm'} \hat{L}_q^{(1)} \ket{jm} = c_j^{(L)} \bra{jm'} \hat{J}_q^{(1)} \ket{jm} where the constant doesn’t depend on m, m', or q. (We’re holding j fixed since that’s all we need, and anyway matrix elements of \hat{\vec{J}} vanish between different j values.)
Now we just need to determine the constant in front; it secretly depends on some reduced matrix elements, but we can try to get it more directly. Consider the operator \hat{\vec{L}} \cdot \hat{\vec{J}}. We can expand this out in terms of squared angular momentum operators: \hat{\vec{L}} \cdot \hat{\vec{J}} = \hat{\vec{L}} \cdot (\hat{\vec{L}} + \hat{\vec{S}}) \\ = \hat{\vec{L}}{}^2 + \frac{1}{2} (\hat{\vec{J}}{}^2 - \hat{\vec{L}}{}^2 - \hat{\vec{S}}{}^2) \\ = \frac{1}{2} (\hat{\vec{J}}{}^2 + \hat{\vec{L}}{}^2 - \hat{\vec{S}}{}^2) so that we have \bra{jm}\hat{\vec{L}} \cdot \hat{\vec{J}}\ket{jm} = \frac{\hbar^2}{2} \left[ j(j+1) + l(l+1) - s(s+1) \right]. Now we make use of the replacement theorem: within a subspace of fixed j, due to the relation between matrix elements given by the replacement theorem, we can replace the operator \hat{\vec{L}} with c_j^{(L)} \hat{\vec{J}}. So the same expectation value is also equal to \bra{jm} \hat{\vec{L}} \cdot \hat{\vec{J}} \ket{jm} = c_j^{(L)} \bra{jm} \hat{\vec{J}}{}^2 \ket{jm} = c_j^{(L)} \hbar^2 j(j+1). This lets us read off the constant as c_j^{(L)} = \frac{j(j+1) + l(l+1) - s(s+1)}{2j(j+1)}. Now we can easily get the matrix element we wanted: \bra{jm} \hat{L}_z \ket{jm} = c_j^{(L)} \bra{jm} \hat{J}_z \ket{jm} = \hbar m c_j^{(L)}. To get the full Zeeman effect, we have to find the matrix elements of \hat{S}_z as well: this involves the same steps of using the replacement theorem and looking at the operator \hat{\vec{S}} \cdot \hat{\vec{J}}. Doing so leads to the similar-looking result \bra{jm} \hat{S}_z \ket{jm} = \hbar m c_j^{(S)} = \hbar m \frac{j(j+1) + s(s+1) - l(l+1)}{2j(j+1)}. 
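We can cross-check c_j^{(L)} directly: expanding \ket{j m} in the product basis \ket{m_l m_s} with Clebsch-Gordan coefficients gives \bra{jm}\hat{L}_z\ket{jm} without any replacement-theorem machinery. A sketch for l=1, s=1/2 using SymPy (the helper names are mine):

```python
# Cross-check c_j^(L): compute <j m|L_z|j m> by expanding |j m> in the
# product basis |m_l, m_s> with Clebsch-Gordan coefficients, and compare
# against m * c_j^(L) from the replacement-theorem calculation.
from sympy import Rational, simplify
from sympy.physics.wigner import clebsch_gordan

def Lz_expectation(l, s, j, m):
    """<j m|L_z|j m> in units of hbar, via the product-basis expansion."""
    total = 0
    for n in range(int(2*l) + 1):
        ml = l - n
        ms = m - ml          # only m_s = m - m_l contributes
        if abs(ms) <= s:
            total += clebsch_gordan(l, s, j, ml, ms, m)**2 * ml
    return total

def c_L(j, l, s):
    return (j*(j + 1) + l*(l + 1) - s*(s + 1)) / (2*j*(j + 1))

l, s = 1, Rational(1, 2)
for j in [Rational(3, 2), Rational(1, 2)]:
    for n in range(int(2*j) + 1):
        m = j - n
        assert simplify(Lz_expectation(l, s, j, m) - m*c_L(j, l, s)) == 0
```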
Putting the pieces back together, for the perturbation \hat{W} given above, the first-order energy shift is given by E^{(1)} = \bra{jm} \hat{W} \ket{jm} = g_j \frac{eB}{2m_e c} \hbar m, where g_j, also known as the Landé g-factor, contains all of the information on the various angular momentum quantum numbers; from the other pieces of our calculation, we find g_j = \frac{3}{2} + \frac{s(s+1) - l(l+1)}{2j(j+1)}. If we have simply s=1/2, for example in a hydrogenic atom, then the expression for g will simplify substantially: we must have j = l \pm 1/2 and so g_j = \begin{cases} 1 + \frac{1}{2l+1}, & j = l + 1/2; \\ 1 - \frac{1}{2l+1}, & j = l - 1/2. \end{cases} As we saw before for just the 2p orbital, the magnetic field completely splits apart the energies of the \ket{nljm} eigenstates. This small splitting of the \ket{nljm} energy eigenvalues is known as the Zeeman effect, completing our collection of hydrogen responses to \vec{E} and \vec{B} fields.
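The g-factor formula and its s=1/2 special cases are easy to encode and check exactly; here's a minimal sketch (the function name lande_g is just for illustration):

```python
# Landé g-factor g_j = 3/2 + [s(s+1) - l(l+1)] / [2 j (j+1)], with an exact
# check of the s = 1/2 special cases j = l +/- 1/2 quoted in the text.
from sympy import Rational, simplify

def lande_g(j, l, s):
    return Rational(3, 2) + (s*(s + 1) - l*(l + 1)) / (2*j*(j + 1))

s = Rational(1, 2)
for l in range(0, 5):
    # j = l + 1/2: expect 1 + 1/(2l+1)
    assert simplify(lande_g(l + s, l, s) - (1 + Rational(1, 2*l + 1))) == 0
    # j = l - 1/2 (only exists for l >= 1): expect 1 - 1/(2l+1)
    if l >= 1:
        assert simplify(lande_g(l - s, l, s) - (1 - Rational(1, 2*l + 1))) == 0

# the 2p_{3/2} and 2p_{1/2} values
print(lande_g(Rational(3, 2), 1, s), lande_g(Rational(1, 2), 1, s))
```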
The results we’ve found above can also be obtained using another corollary of the Wigner-Eckart theorem, known as the projection theorem. The projection theorem states: for a vector operator in spherical basis \hat{V}_q^{(1)}, its matrix elements are given by the formula
\bra{\alpha', j, m'} \hat{V}_q^{(1)} \ket{\alpha, j, m} = \frac{\bra{\alpha', j, m} \hat{\vec{J}} \cdot \hat{\vec{V}} \ket{\alpha, j, m}}{\hbar^2 j(j+1)} \bra{jm'} \hat{J}_q^{(1)} \ket{jm}.
There is further discussion and a proof of this theorem in Sakurai. I decided not to include it in the main part of these notes - the physics content of the projection theorem is just the relation of different vector operators to \hat{\vec{J}}, in precisely the way we did things for the g-factor above. Also, the theorem is only really useful if we can calculate \hat{\vec{J}} \cdot \hat{\vec{V}}, for which the main examples are the \hat{\vec{L}} and \hat{\vec{S}} operators we already studied.
For completeness, I include my own proof of the projection theorem in this aside.
We can relate the matrix elements of any vector operator to the elements of the angular momentum operator: \bra{\alpha', j, m'} \hat{V}_q^{(1)} \ket{\alpha, j, m} = \frac{\bra{\alpha' j}|\hat{V}^{(1)}|\ket{\alpha j}}{\bra{\alpha j}|\hat{J}^{(1)}|\ket{\alpha j}} \bra{\alpha, j, m'} \hat{J}_q^{(1)} \ket{\alpha, j, m} (Note that this only works for matrix elements of \hat{\vec{V}} between eigenstates with the same j.) To determine the reduced matrix elements, we start by evaluating the matrix elements of the dot product \hat{\vec{J}} \cdot \hat{\vec{V}}. First, notice that the dot product in terms of spherical components looks noticeably different: \hat{\vec{U}} \cdot \hat{\vec{V}} = \sum_q (-1)^q \hat{U}_q^{(1)} \hat{V}_{-q}^{(1)} = \hat{U}_0^{(1)} \hat{V}_0^{(1)} - \hat{U}_{1}^{(1)} \hat{V}_{-1}^{(1)} - \hat{U}_{-1}^{(1)} \hat{V}_1^{(1)}. You can verify that this reduces to the normal dot product if we go back to Cartesian components. Now, let’s choose m'=m and evaluate: \bra{\alpha', j,m} \hat{\vec{J}} \cdot \hat{\vec{V}} \ket{\alpha, j, m} = \bra{\alpha', j, m} (\hat{J}_0 \hat{V}_0 - \hat{J}_1 \hat{V}_{-1} - \hat{J}_{-1} \hat{V}_1) \ket{\alpha, j,m } \\ = m \hbar \bra{\alpha', j, m} \hat{V}_0^{(1)} \ket{\alpha, j, m} + \frac{\hbar}{\sqrt{2}} \sqrt{(j+m)(j-m+1)} \bra{\alpha', j, m-1} \hat{V}_{-1}^{(1)} \ket{\alpha, j, m} - \\ \frac{\hbar}{\sqrt{2}} \sqrt{(j-m)(j+m+1)} \bra{\alpha', j, m+1} \hat{V}_{1}^{(1)} \ket{\alpha, j, m} where the conventions for spherical basis map into our standard definitions of the angular momentum ladder operators as \hat{J}_{\pm 1}^{(1)} = \mp \frac{1}{\sqrt{2}} \hat{J}_{\pm}. The actual coefficients of these terms aren’t so important; what really matters is that all three terms are equal to some function of j and m times some matrix element of \hat{V}_q^{(1)}. But by the Wigner-Eckart theorem, all three of these matrix elements are themselves equal to a function of j and m times the reduced matrix element for \hat{V}.
Thus, we can collect everything together and rewrite \bra{\alpha', j,m} \hat{\vec{J}} \cdot \hat{\vec{V}} \ket{\alpha, j, m} = c_{jm} \bra{\alpha' j}|\hat{\vec{V}}{}^{(1)}|\ket{\alpha j} Furthermore, we know that the c_{jm} can’t actually be a function of m at all, since \hat{\vec{J}} \cdot \hat{\vec{V}} is a scalar operator; so we can rewrite the constants as simply c_j. To finish the derivation, we note that the c_{j} don’t depend on our choice of \alpha' or \hat{\vec{V}} either, so if we choose \hat{\vec{V}} = \hat{\vec{J}} and let \alpha' = \alpha, we find that \bra{\alpha, j,m} \hat{\vec{J}}{}^2 \ket{\alpha,j,m} = c_j \bra{\alpha j}|\hat{\vec{J}}{}^{(1)}|\ket{\alpha j}. We can calculate c_j from this, but all we really wanted was the ratio between the reduced matrix elements of \hat{\vec{J}} and \hat{\vec{V}}: \frac{\bra{\alpha' j}| \hat{\vec{V}}{}^{(1)} |\ket{\alpha j}}{\bra{\alpha j}| \hat{\vec{J}}{}^{(1)} |\ket{\alpha j}} = \frac{\bra{\alpha', j, m} \hat{\vec{J}} \cdot \hat{\vec{V}} \ket{\alpha, j, m}}{\hbar^2 j(j+1)}. Plugging this back in to the relation above gives us the projection theorem.
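The spherical-basis dot product used in the proof can be checked symbolically. Here's a quick sketch with commuting SymPy symbols standing in for the components (for actual operators the ordering would matter, but the identity is the same):

```python
# Symbolic check that U0*V0 - U(+1)*V(-1) - U(-1)*V(+1) reproduces the
# Cartesian dot product, using A_{+/-1} = -/+ (A_x +/- i A_y)/sqrt(2), A_0 = A_z.
from sympy import symbols, I, sqrt, simplify, expand

Ux, Uy, Uz, Vx, Vy, Vz = symbols('Ux Uy Uz Vx Vy Vz')

def spherical(x, y, z, q):
    """Spherical components of a vector with Cartesian components (x, y, z)."""
    if q == 1:
        return -(x + I*y)/sqrt(2)
    if q == -1:
        return (x - I*y)/sqrt(2)
    return z

lhs = (spherical(Ux, Uy, Uz, 0)*spherical(Vx, Vy, Vz, 0)
       - spherical(Ux, Uy, Uz, 1)*spherical(Vx, Vy, Vz, -1)
       - spherical(Ux, Uy, Uz, -1)*spherical(Vx, Vy, Vz, 1))
assert simplify(expand(lhs) - (Ux*Vx + Uy*Vy + Uz*Vz)) == 0
```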