$$
\newcommand{\D}{\mathrm{d}}
\newcommand{\E}{\mathcal{E}}
\newcommand{\order}[2][1]{#2^{(#1)}}
\newcommand{\reals}{\mathbb{R}}
$$
With a view to the stability analysis, the eigenvalues \(\alpha\) and eigenvectors \(x\) of the Hessian of the energy are expanded in this chapter to second order in \(\eta\).
from sympy import *
from lsk.display import *
from lsk.symbols import *
from lsk.energy import *
x0_V, x0_W = symbols(r"{\order[0]{x}_V} {\order[0]{x}_W}")
x1_V, x1_W = symbols(r"{\order[1]{x}_V} {\order[1]{x}_W}")
x2_V, x2_W = symbols(r"{\order[2]{x}_V} {\order[2]{x}_W}")
α0, α1, α2 = symbols(r"{\order[0]{\alpha}} {\order[1]{\alpha}} {\order[2]{\alpha}}")
χ0 = IndexedBase(r"\order[0]{\chi}")
χ1 = IndexedBase(r"\order[1]{\chi}")
x = (x0_V + x0_W) + η * (x1_V + x1_W) + η**2 / 2 * (x2_V + x2_W) + O(η**3)
α = α0 + η * α1 + η**2 / 2 * α2 + O(η**3)
display_latex_dict({"x": x, r"\alpha": α}, num_cols=1)
\(\displaystyle \begin{aligned}x&={\order[0]{x}_W} + {\order[0]{x}_V} + \eta \left({\order[1]{x}_V} + {\order[1]{x}_W}\right) + \frac{\eta^{2} \left({\order[2]{x}_V} + {\order[2]{x}_W}\right)}{2} + O\left(\eta^{3}\right)\\\alpha&={\order[0]{\alpha}} + \eta {\order[1]{\alpha}} + \frac{\eta^{2} {\order[2]{\alpha}}}{2} + O\left(\eta^{3}\right)\\\end{aligned}\)
Note that the eigenvector \(x\) is projected onto \(V\) and \(W\): \(\order[k]{x}_V \in V\) and \(\order[k]{x}_W \in W\). It will be convenient to expand the \(V\)-component in the \((v_1, \ldots, v_m)\) basis
\[
\order[k]{x}_V = \order[k]{\chi}_i \, v_i.
\]
Since \(V\) is the null space of \(\E_2\), we immediately have the following simplification rules
rules = {
    E2 * v[i]: 0,
    E2 * x0_V: 0,
    E2 * x1_V: 0,
    E2 * x2_V: 0
}
We focus in this chapter on potentially unstable eigenmodes, for which the eigenvalue \(\alpha\) might be negative in the vicinity of \(\eta = 0\). It will be shown that these eigenmodes are necessarily such that \(\order[0]{\alpha} = 0\). In that case, \(\order[0]{x}_W = 0\) and
\[
\order[1]{x}_W = \order[1]{\lambda} \, \order[0]{\chi}_j \, w_{j} + \order[0]{\chi}_j \, \order[1]{\xi}_k \, w_{jk}.
\tag{7.1}\]
The coefficients \(\order[1]{\alpha}\), \(\order[2]{\alpha}\), \(\order[0]{\chi}_i\) and \(\order[1]{\chi}_i\) solve the following equations
\[
\bigl( E_{ijk} \, \order[1]{\xi}_k + \order[1]{\lambda} \, \dot{E}_{ij} \bigr) \, \order[0]{\chi}_j = \order[1]{\alpha} \, \order[0]{\chi}_i,
\tag{7.2}\]
and
\[
\begin{aligned}
\bigl[ E_{ijkl} \, \order[1]{\xi}_k \, \order[1]{\xi}_l + \order[1]{\lambda} \, \bigl( 2\dot{E}_{ijk} \, \order[1]{\xi}_k + \order[1]{\lambda} \, \ddot{E}_{ij} \bigr)+ E_{ijk} \, \order[2]{\xi}_k &\\
+ \order[2]{\lambda} \, \dot{E}_{ij} \bigr] \order[0]{\chi}_j
+ 2\bigl( E_{ijk} \, \order[1]{\xi}_k + \order[1]{\lambda} \, \dot{E}_{ij} \bigr) \, \order[1]{\chi}_j &= 2 \order[1]{\alpha} \, \order[1]{\chi}_i + \order[2]{\alpha} \, \order[0]{\chi}_i.
\end{aligned}
\tag{7.3}\]
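In a finite-dimensional setting, Eq. (7.2) is simply a symmetric \(m \times m\) eigenproblem for the pair \((\order[1]{\alpha}, \order[0]{\chi})\). A minimal sketch, with made-up numerical values for the reduced matrix \(A_{ij} = E_{ijk} \, \order[1]{\xi}_k + \order[1]{\lambda} \, \dot{E}_{ij}\):

```python
from sympy import Matrix

# Hypothetical reduced matrix A_ij = E_ijk ξ1_k + λ1 Ė_ij for m = 2;
# the numerical values are made up for illustration only.
A = Matrix([[2, 1],
            [1, 2]])

# Eq. (7.2) then reads A χ0 = α1 χ0, a standard symmetric eigenproblem.
α1_values = sorted(A.eigenvals())                        # candidate slopes α1
χ0_modes = [triple[2][0] for triple in A.eigenvects()]   # eigenvectors χ0
```

Eq. (7.3) then determines \(\order[2]{\alpha}\) and the component of \(\order[1]{\chi}\) along the remaining eigenvectors, once \((\order[1]{\alpha}, \order[0]{\chi})\) has been selected.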
The eigenvalue problem
The eigenvalues \(\alpha \in \reals\) and eigenvectors \(x \in U\) of the Hessian are such that
\[
\E_{, u u} [u(\eta), \lambda(\eta); x, \hat{u}] = \alpha \, \langle x, \hat{u} \rangle \quad \text{for all} \quad \hat{u} \in U,
\tag{7.4}\]
where \(\eta \mapsto \lambda(\eta)\) and \(\eta \mapsto u(\eta)\) define the bifurcated branch.
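As a sanity check of this eigenvalue expansion, consider a toy finite-dimensional analogue: a symmetric matrix family \(H(\eta) = H_0 + \eta \, H_1\) with a vanishing eigenvalue at \(\eta = 0\). The sketch below (all matrices made up for illustration) verifies that the eigenvalue branch through \(\order[0]{\alpha} = 0\) starts with the slope \(\order[1]{\alpha} = x_0^T H_1 x_0\) predicted by first-order perturbation theory:

```python
from sympy import Matrix, symbols

η = symbols("eta", positive=True)

# Made-up 2x2 analogue of the Hessian family: H(η) = H0 + η H1,
# symmetric, with α0 = 0 and eigenvector x0 = (1, 0) at η = 0.
H0 = Matrix([[0, 0], [0, 2]])
H1 = Matrix([[3, 1], [1, 0]])
H = H0 + η * H1

# Pick the exact eigenvalue branch passing through 0 at η = 0.
α = min(H.eigenvals(), key=lambda e: e.subs(η, 0))

# First-order perturbation theory predicts α1 = x0.T * H1 * x0 = 3.
x0 = Matrix([1, 0])
assert α.series(η, 0, 2).removeO().coeff(η, 1) == (x0.T * H1 * x0)[0, 0]
```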
In Chapters 5 and 6, the following asymptotic expansions of \(\lambda(\eta)\), \(u(\eta)\) and \(\E_{,uu}[u(\eta), \lambda(\eta); \bullet, \bullet]\) were derived
H0 = E2
H1 = E3 * u1 + λ1 * E2_dot
H2 = (E4 * u1 * u1 + E3 * (u2_V + u2_W)
      + 2 * λ1 * E3_dot * u1 + λ1**2 * E2_ddot + λ2 * E2_dot)
H = expand(H0 + η * H1 + η**2 / 2 * H2 + O(η**3))
display_latex_long_equation(r"\E_{uu}[u(\eta), λ(\eta)]", H, terms_per_line=6)
\(\displaystyle \begin{aligned}\E_{uu}[u(\eta), λ(\eta)]={} &\E_{2} + \frac{\E_{3} \eta^{2} {\order[2]{u}_V}}{2} + \frac{\E_{3} \eta^{2} {\order[2]{u}_W}}{2} + \E_{3} \eta {\order[1]{u}} + \frac{\E_{4} \eta^{2} {\order[1]{u}}^{2}}{2} + \dot{\E}_2 \eta {\order[1]{\lambda}}\\ &+\dot{\E}_3 \eta^{2} {\order[1]{\lambda}} {\order[1]{u}} + \frac{\dot{\E}_2 \eta^{2} {\order[2]{\lambda}}}{2} + \frac{\ddot{\E}_2 \eta^{2} {\order[1]{\lambda}}^{2}}{2} + O\left(\eta^{3}\right)\end{aligned}\)
We define the left-hand side and right-hand side of the variational problem that defines the eigenpairs \((\alpha, x)\) at all orders.
lhs = (H * x * u_hat).subs(rules).expand()
rhs = (α * x * u_hat).subs(rules).expand()
The terms of order 0, 1 and 2 in \(\eta\) are identified below.
The variational problem of order 0
lhs0 = lhs.coeff(η, 0).subs(rules)
rhs0 = rhs.coeff(η, 0)
The lowest-order problem reads: find \(\order[0]{x}_V \in V\), \(\order[0]{x}_W \in W\) and \(\order[0]{\alpha} \in \reals\) such that, for all \(\hat{u} \in U\)
display_latex_equation(lhs0, rhs0)
\(\displaystyle \E_{2} \hat{u} {\order[0]{x}_W}=\hat{u} {\order[0]{\alpha}} {\order[0]{x}_V} + \hat{u} {\order[0]{\alpha}} {\order[0]{x}_W}\)
to be understood as
\[
\E_2(\order[0]x_W, \hat{u})
= \order[0]\alpha \langle \order[0]x_V, \hat{u} \rangle
+ \order[0]\alpha \langle \order[0]x_W, \hat{u} \rangle.
\]
This variational equation is tested with an element \(\hat{w} \in W\). Then \(\langle \order[0]{x}_V, \hat{w} \rangle\) vanishes and we are left with finding \(\order[0]{x}_W \in W\) such that, for all \(\hat{w} \in W\),
\[
\E_2(\order[0]x_W, \hat{w})
= \order[0]\alpha \langle \order[0]x_W, \hat{w} \rangle.
\]
If \(\order[0]{x}_W \neq 0\), then \(\order[0]{\alpha}\) is an eigenvalue of \(\E_2\) over \(W\). Since \(\E_2\) is positive definite over \(W\), it follows that \(\order[0]{\alpha} > 0\) and \(\alpha > 0\) in the vicinity of the critical point \(\lambda = \lambda_0\): the resulting eigenmode is stable. So, in order to find potentially unstable modes, we must consider \(\order[0]{x}_W = 0\) and \(\order[0]{\alpha} = 0\).
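This positivity argument can be illustrated on a toy example: take a singular symmetric matrix playing the role of \(\E_2\), with null space \(V\), and check that its restriction to \(W = V^\perp\) has only positive eigenvalues. All values below are made up for illustration:

```python
from sympy import Matrix, sqrt

# Made-up 3x3 stand-in for the critical Hessian E2: symmetric,
# singular, with null space V spanned by v.
E2 = Matrix([[1, -1, 0],
             [-1, 1, 0],
             [0, 0, 4]])
v = Matrix([1, 1, 0])
assert E2 * v == Matrix.zeros(3, 1)   # v spans V

# Orthonormal basis of W (the orthogonal complement of V),
# and the restriction of E2 to W.
w1 = Matrix([1, -1, 0]) / sqrt(2)
w2 = Matrix([0, 0, 1])
W = Matrix.hstack(w1, w2)
E2_W = W.T * E2 * W

# The restricted operator is positive definite: any eigenpair with
# a nonzero W-component at order 0 therefore has α0 > 0 (stable mode).
assert all(a > 0 for a in E2_W.eigenvals())
```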
rules[α0] = 0
rules[x0_W] = 0
The variational problem of order 1
lhs1 = lhs.coeff(η, 1).subs(rules)
rhs1 = rhs.coeff(η, 1).subs(rules)
The problem reads: find \(\order[0]{x}_V \in V\), \(\order[1]{x}_W \in W\) and \(\order[1]{\alpha} \in \reals\) such that, for all \(\hat{u} \in U\)
display_latex_equation(lhs1, rhs1)
\(\displaystyle \E_{2} \hat{u} {\order[1]{x}_W} + \E_{3} \hat{u} {\order[0]{x}_V} {\order[1]{u}} + \dot{\E}_2 \hat{u} {\order[0]{x}_V} {\order[1]{\lambda}}=\hat{u} {\order[0]{x}_V} {\order[1]{\alpha}}\)
Testing first with \(\hat{u} = v_i\) delivers the following variational problem: find \(\order[0]{x}_V \in V\) such that, for all \(i=1, \ldots, m\)
d = {
    u_hat: v[i],
    x0_V: χ0[j] * v[j],
    u1: ξ1[k] * v[k]
}
lhs1a = lhs1.subs(d).subs(rules)
rhs1a = rhs1.subs(d).subs(rules).subs(χ0[j] * v[j] * v[i], χ0[i])
display_latex_equation(lhs1a, rhs1a)
\(\displaystyle \E_{3} {\order[0]{\chi}}_{j} {v}_{i} {v}_{j} {v}_{k} {{\order[1]{\xi}}}_{k} + \dot{\E}_2 {\order[1]{\lambda}} {\order[0]{\chi}}_{j} {v}_{i} {v}_{j}={\order[1]{\alpha}} {\order[0]{\chi}}_{i}\)
and Eq. (7.2) is retrieved.
The test function is now picked in \(W = V^\perp\) and we get the following variational problem: find \(\order[1]{x}_W \in W\) such that, for all \(\hat{w} \in W\) ,
d = {
    u_hat: w_hat,
    x0_V: χ0[i] * v[i],
    u1: ξ1[j] * v[j]
}
lhs1b = lhs1.subs(d).subs(rules)
rhs1b = rhs1.subs(d).subs(rules).subs(v[i] * w_hat, 0)
display_latex_equation(lhs1b, rhs1b)
\(\displaystyle \E_{2} \hat{w} {\order[1]{x}_W} + \E_{3} \hat{w} {\order[0]{\chi}}_{i} {v}_{i} {v}_{j} {{\order[1]{\xi}}}_{j} + \dot{\E}_2 \hat{w} {\order[1]{\lambda}} {\order[0]{\chi}}_{i} {v}_{i}=0\)
(observe that, in the RHS, \(\langle v_i, \hat{w} \rangle = 0\) since \(V\) and \(W\) are orthogonal subspaces). The solution to the above problem is expressed as a linear combination of the \(w_{ij}\) and \(w_i\), defined by the variational problems (2.5) and (2.4), respectively, delivering Eq. (7.1).
rules[x1_W] = χ0[k] * ξ1[l] * w[k, l] + λ1 * χ0[k] * w[k]
The variational problem of order 2
The terms of second order are tested against elements of \(V\) only.
lhs2 = expand(2 * lhs.coeff(η, 2).subs(u_hat, v[i]).subs(rules))
rhs2 = expand(2 * rhs.coeff(η, 2).subs(u_hat, v[i]).subs(rules))
Some orthogonality conditions are used to simplify the right-hand side.
rhs2 = rhs2.subs({
    x0_V * v[i]: χ0[i],
    x1_V * v[i]: χ1[i],
    v[i] * w[k]: 0,
    v[i] * w[k, l]: 0
})
We now plug in the expansions
\[
\order[1]{u} = \order[1]{\xi}_i \, v_i,
\quad
\order[2]{u}_V = \order[2]{\xi}_i \, v_i,
\quad
\order[0]{x}_V = \order[0]{\chi}_i \, v_i,
\quad \text{and} \quad
\order[1]{x}_V = \order[1]{\chi}_i \, v_i.
\]
d = dict()
d[E2_dot * v[i] * x0_V] = χ0[j] * E2_dot * v[i] * v[j]
d[E2_dot * v[i] * x1_V] = χ1[j] * E2_dot * v[i] * v[j]
d[E2_ddot * x0_V * v[i]] = χ0[j] * E2_ddot * v[i] * v[j]
d[E3 * v[i] * x1_V * u1] = χ1[j] * ξ1[k] * E3 * v[i] * v[j] * v[k]
d[E3 * v[i] * x0_V * u2_V] = χ0[j] * ξ2[k] * E3 * v[i] * v[j] * v[k]
d[E3_dot * v[i] * x0_V * u1] = χ0[j] * ξ1[k] * E3_dot * v[i] * v[j] * v[k]
d[E4 * v[i] * x0_V * u1 * u1] = (χ0[j] * ξ1[k] * ξ1[l]
                                 * E4 * v[i] * v[j] * v[k] * v[l])
lhs2 = lhs2.subs(d)
Rename some indices.
lhs2 = lhs2.subs({
    χ0[k] * E2_dot * v[i] * w[k]: χ0[j] * E2_dot * v[i] * w[j],
    χ0[k] * ξ1[l] * w[k, l]: χ0[j] * ξ1[k] * w[j, k]
})
\[
\begin{aligned}
\E_3(\order[0]{x}_V, \order[2]{u}_W, v_i)
={} & \order[0]{\chi}_j \, \order[1]{\xi}_k \, \bigl[
\order[1]{\xi}_l \, \E_3(v_i, v_j, w_{kl})
+ 2\order[1]{\lambda} \, \E_3(v_i, v_j, w_k) \bigr]\\
={} & \order[0]{\chi}_j \, \order[1]{\xi}_k \, \order[1]{\xi}_l \, \E_3(v_i, v_j, w_{kl})
+ 2\order[1]{\lambda} \, \order[0]{\chi}_j \, \order[1]{\xi}_k \, \dot{\E}_2(v_k, w_{ij})
\end{aligned}
\]
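The second equality above relies on the identity \(\E_3(v_i, v_j, w_k) = \dot{\E}_2(v_k, w_{ij})\). Assuming the defining variational problems (2.4) and (2.5) take the form \(\E_2(w_k, \hat{w}) + \dot{\E}_2(v_k, \hat{w}) = 0\) and \(\E_2(w_{ij}, \hat{w}) + \E_3(v_i, v_j, \hat{w}) = 0\) for all \(\hat{w} \in W\), testing the second with \(\hat{w} = w_k\) and the first with \(\hat{w} = w_{ij}\) gives
\[
\E_3(v_i, v_j, w_k) = -\E_2(w_{ij}, w_k) = \dot{\E}_2(v_k, w_{ij}),
\]
where the symmetry of \(\E_2\) in its two arguments was used.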
lhs2 = lhs2.subs(E3 * v[i] * x0_V * u2_W,
                 (χ0[j] * ξ1[k] * ξ1[l]
                  * E3 * v[i] * v[j] * w[k, l]
                  + 2 * λ1 * χ0[j] * ξ1[k]
                  * E2_dot * v[k] * w[i, j])).expand()
\[
\order[0]{\chi}_k \, \E_3(v_i, \order[1]{u}, w_k)
= \order[0]{\chi}_j \, \order[1]{\xi}_k \, \E_3(v_i, v_k, w_j)
= \order[0]{\chi}_j \, \order[1]{\xi}_k \, \dot{\E}_2(v_j, w_{ik})
\]
lhs2 = lhs2.subs(χ0[k] * E3 * v[i] * u1 * w[k],
                 χ0[j] * ξ1[k] * E2_dot * v[j] * w[i, k]).expand()
\[
\begin{aligned}
\order[0]{\chi}_j \, \order[1]{\xi}_k \, \E_3(v_i, w_{jk}, \order[1]{u})
={} & \order[0]{\chi}_j \, \order[1]{\xi}_k \, \order[1]{\xi}_l \, \E_3(v_i, v_l, w_{jk})\\
={} & \tfrac{1}{2} \order[0]{\chi}_j \, \order[1]{\xi}_k \, \order[1]{\xi}_l \, \bigl[ \E_3(v_i, v_k, w_{jl}) + \E_3(v_i, v_l, w_{jk}) \bigr]
\end{aligned}
\]
lhs2 = lhs2.subs(χ0[j] * ξ1[k] * E3 * v[i] * w[j, k] * u1,
                 χ0[j] * ξ1[k] * ξ1[l] / 2
                 * (E3 * v[i] * v[k] * w[j, l]
                    + E3 * v[i] * v[l] * w[j, k])).expand()
lhs2 = lhs2.subs({
E2_dot * v[i] * v[j]: E_dot[i, j],
E2_ddot * v[i] * v[j]: (E_ddot[i, j] - 2 * E2_dot * v[i] * w[j]),
E3 * v[i] * v[j] * v[k]: E[i, j, k],
E3_dot * v[i] * v[j] * v[k]: (E_dot[i, j, k]
- E2_dot * v[i] * w[j, k]
- E2_dot * v[j] * w[i, k]
- E2_dot * v[k] * w[i, j]),
E4 * v[i] * v[j] * v[k] * v[l]: (E[i, j, k, l]
- E3 * v[i] * (v[j] * w[k, l]
+ v[k] * w[j, l]
+ v[l] * w[j, k]))
}).expand()
And Eq. (7.3 ) is finally retrieved.
expected = ((E[i, j, k, l] * ξ1[k] * ξ1[l]
             + λ1 * (2 * E_dot[i, j, k] * ξ1[k] + λ1 * E_ddot[i, j])
             + E[i, j, k] * ξ2[k]
             + λ2 * E_dot[i, j]) * χ0[j]
            + 2 * (E[i, j, k] * ξ1[k] + λ1 * E_dot[i, j]) * χ1[j])
assert expand(lhs2 - expected) == 0
expected = 2 * α1 * χ1[i] + α2 * χ0[i]
assert expand(rhs2 - expected) == 0