Make sure your submission is:

1. uniquely named

2. m-files

3. function and script

MAE 316 Main Project
Spring 2021 Full Assignment
April 19, 2021
This is a problem about forced-convective heat transfer, as we might have in an airplane wing.
Airplane wings are generally not rectangular in shape, but this one will be rectangular to keep
things relatively simple. One end of the wing is near the engine, which exposes it to a constant
temperature, Tref . Outside, the ambient temperature is T0 , which is less than Tref . Heat is removed
from the wing via forced convection.
The flux is Fourier’s law, with a correction in the y-direction due to the changing thickness of
the wing:
q = −κ(1 + y)∇T    (1)
The source term is Newton’s law of cooling:
f = −η(T − T_0)    (2)
where η is the convection heat transfer coefficient and T0 is the temperature of the ambient air.
Forced convection is described by a dimensionless quantity called the Nusselt number, or Nu.
The definition of the Nusselt number is a ratio of convective to conductive heat transfer in the fluid:
Nu = ηy / κ_f    (3)
where η is the convective heat transfer coefficient, κ_f is the thermal conductivity of the fluid,
and y is the length along the sheet in the direction of the fluid flow. For the particular situation of
forced convection over a planar surface, the Nusselt number can be approximated as a function of
the Reynolds number Re, which characterizes the flow, and the Prandtl number Pr, which depends
on the molecular properties of the fluid:
Nu = 0.332 Re^{1/2} Pr^{1/3} = 0.332 (ρvy/µ)^{1/2} Pr^{1/3}    (4)
where ρ is the fluid density, v is the velocity of the flow and µ is the dynamic viscosity. Given
these two expressions for Nu along with the other flow and fluid properties, one can arrive at an
expression for the convection heat loss coefficient η in terms of y.
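For reference, this step (which the assignment leaves to you) amounts to equating (3) and (4) and solving for η, which gives the y-dependence of the convection coefficient:

η(y) = 0.332 (κ_f / y) (ρvy/µ)^{1/2} Pr^{1/3} = 0.332 κ_f Pr^{1/3} (ρv/µ)^{1/2} y^{−1/2}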
Derive the governing equation for the problem, including boundary conditions. Use finite differences to discretize, and write stencils for an arbitrary node in the center of the domain along with
stencils for all boundaries. Assume that the boundaries are insulated except for the left boundary,
which is fixed at Tref .
Code. Implement the solution in MATLAB and produce runs for the following conditions:

Pr = 0.71    (5)
µ = 1.81 × 10^{−5}    (6)
ρ = 1.2041    (7)
κ_f = 0.024    (8)
L_x = 5    (9)
L_y = 1    (10)
T_0 = 0    (11)
T_ref = 300    (12)
as well as for all combinations of v = {200, 400} and κ = {100, 200}. For the number of intervals
in the y direction use both 8 and 32 (with a regular mesh) for the run with v = 200 and κ = 100,
and comment on the difference you find.
Linear least squares is the most famous and probably the most-used curve-fitting technique. Essentially, the method enables us to find, in some sense, the ’best fit’ of a function
to some data. That is, we find the best estimate that we can for the parameters of the
function so that the function corresponds to the data in a manner that is “optimum” in the
least-squares sense (the meaning of this will be explained). The only requirement is that the
function is linear in the parameters.
A typical example is the fitting of a polynomial to data. Polynomials are good candidates
for linear least squares because the parameters of the polynomial – that is, the coefficients –
are linear:

f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ⋯    (1)
If we have some set of data – which we might call {x, f } (the bold type indicates that
these are vectors), then fitting the above polynomial to that data entails finding some set
of coefficients (parameters) a which produces the “best” fit. The function f is linear with
respect to the parameters – that is, there are no nonlinear combinations of parameters such
as a2 a3 .
Figure 1 shows a set of data that might be well-described – on average – by a straight
line. But how do we find the line that produces the “best” fit? Clearly the red line in the
figure is a better fit than the green one, but we need a metric – a single number – to rate
the fitness of every candidate line. We can then optimize this metric to find the line that
produces the best fit according to our criterion.
Figure 1: a dataset and two candidate fit lines
There are many ways to produce such a metric, but the most common method is that
of least squares. The optimality metric in least squares is the sum of the squares of the
differences between the y-values of the dataset and the y-values calculated by the candidate
fit line at the same x-values:
φ = ∑_{i=1}^{N} (y_i − ỹ_i)^2    (2)
where φ is our metric, y is our dataset and
ỹ = mx + b    (3)
is our candidate line.
Finding the optimum, or minimum, over all candidate parameter sets

θ = [ m; b ]    (4)
is now our task. To accomplish this it’s useful to introduce the problem as one of regularizing
an overdetermined system. Let’s say that we were to naively try and set up a system of
equations to find an m and b for a line that runs through every single datapoint. This of
course is impossible, as we’ll find out. But this system of equations would be:
y_1 = m x_1 + b
y_2 = m x_2 + b
y_3 = m x_3 + b
⋮
y_N = m x_N + b
or, in matrix-vector form,

[ y_1 ]   [ x_1  1 ]
[ y_2 ]   [ x_2  1 ] [ m ]
[  ⋮  ] = [  ⋮   ⋮ ] [ b ]   ⇒   y = Aθ    (5)
[ y_N ]   [ x_N  1 ]
The system (5) is “overdetermined” because there are more equations than unknowns.
Clearly this system has no solution as written, as should be clear from the observation
that no single straight line will connect all of the datapoints in fig. 1. We require a regularization – that is, some transformation of (5) that will reduce the size of the system
to two equations. This is what the minimization of the metric φ provides: “least-squares
regularization” of (5).
We also recognize that we can write the metric φ as a vector norm:
φ = ‖y − ỹ‖^2    (6)
where the notation k·k indicates the Euclidean norm: the vector length. The Euclidean
norm is formed by the dot product of the vector with itself – that is, squaring every term in
the vector and summing – then taking the square root. The square of this norm is therefore
the summed squares of all the elements in the vector, as given in (2).
Applying that logic to the rightmost equation in (5):
φ = (y − ỹ)^T (y − ỹ) = (y − Aθ)^T (y − Aθ) = y^T y − y^T Aθ − θ^T A^T y + θ^T A^T Aθ    (7)
The two terms y^T Aθ and θ^T A^T y are transposes of each other – and thus equal, since they
are both scalars. Combining, the above equation becomes:

φ = y^T y − 2θ^T A^T y + θ^T A^T Aθ    (8)
Minimizing φ with respect to θ amounts to setting the partial derivatives of φ with
respect to all of the elements of θ equal to zero:
∂φ/∂θ_1 = 0
∂φ/∂θ_2 = 0
⋮
∂φ/∂θ_P = 0
where P is the total number of parameters (the size of the vector θ). In matrix-vector terms
this means:
−2A^T y + 2A^T Aθ = 0   ⇒   A^T Aθ = A^T y    (9)
Equation (9) is the formula we’re looking for. It leads to a practical procedure for fitting
functions to data using linear least squares:
1. form the overdetermined system Aθ = y
2. regularize by left-multiplying both sides by AT
3. solve the regularized system to obtain θ = (A^T A)^{−1} A^T y
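A minimal MATLAB sketch of the three-step procedure above, shown for a straight-line fit with illustrative (made-up) data; the variable names are assumptions, not anything prescribed:

% fit y = m*x + b to noisy data using the normal equations (9)
x = (0:0.5:5)';                    % sample x-values (illustrative)
y = 2*x + 1 + 0.3*randn(size(x));  % noisy data roughly following a line
A = [x, ones(size(x))];            % 1. form the overdetermined system A*theta = y
theta = (A'*A) \ (A'*y);           % 2.-3. regularize and solve A'*A*theta = A'*y
m = theta(1);  b = theta(2);
plot(x, y, 'o', x, m*x + b, '-')   % data and fitted line

In practice MATLAB's backslash operator applied directly to the overdetermined system (theta = A\y) performs an equivalent least-squares solve.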
Nonlinear problems are the most common types of problems found in engineering.
Consider the following flux q and source term g for a one-dimensional problem:
q = −κT (dT/dx)    (1)
g = −1    (2)
which is identical to the example problem used for the intro to finite differences, except that
the heat transfer coefficient now depends on the temperature. As you might expect, this
temperature-dependent behavior of the heat transfer coefficient is the rule, not the exception!
The continuity equation yields:
κ (dT/dx)^2 + κT (d^2 T/dx^2) = 1    (3)
with the corresponding second-order stencil
κ ( (T_{i+1} − T_{i−1}) / (2∆x) )^2 + κT_i (T_{i+1} − 2T_i + T_{i−1}) / ∆x^2 = 1    (4)
Equation (4) represents a system of nonlinear equations. If it were linear, it could be represented by a linear system, and we could solve the problem by implementing a matrix and
vector like we have before. But such an approach is not feasible for this system – what
should we do?
Newton’s method with one unknown variable is the best way to introduce this
approach before we get to the case of solving for multiple unknowns, as we must in order to
find the solution to the problem represented by (4).
The method is demonstrated graphically in figure 1. The function plotted in blue is the
(nonlinear) function to be solved, f (x) = x2 , with a solution at x = 0. We start with an
initial guess, x0 = 0.8. We then draw a tangent line at the location of the initial guess.
The point at which the tangent line crosses the x-axis is the updated guess, x1 = 0.4. From
here the process starts again, with another tangent line drawn at x1 . It’s clear from this
illustration that eventually the process will converge at the solution.
Now that the process is intuitively clear let’s put some math on it. We get the equation
of the tangent line by using the slope (the derivative of the function at x0 ) and a point on
the line (x_0, f(x_0)). Finding the intercept:

f(x_0) = f′(x_0) x_0 + b   ⇒   b = f(x_0) − f′(x_0) x_0    (5)
yields the equation for the line:
y = f′(x_0) x + f(x_0) − f′(x_0) x_0    (6)
Setting y = 0 yields the final formula:
0 = f′(x_0)(x_1 − x_0) + f(x_0)   ⇒   x_1 = x_0 − f(x_0)/f′(x_0)    (7)
Figure 1: Newton’s method with one unknown
Before we move on to the multi-dimensional formula we should talk about what is really
happening here and think about a more formal way to derive the formula (7). The red line
is a linearization of the function f at x0 . The linearized version of f is easy to find the
intercept for – we just have to solve a linear equation as reflected by (7).
How do we formally linearize a function? With a Taylor expansion:
f(x) = f(x_0) + f′(x_0)(x − x_0) + (1/2) f″(x_0)(x − x_0)^2 + ⋯    (8)
Truncating the expansion after the linear term yields:
f(x) ≈ f(x_0) + f′(x_0)(x − x_0)    (9)
Setting the right-hand side of (9) equal to zero and solving for x yields the one-dimensional
formula for Newton’s method (7).
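As an illustration (not part of the original notes), here is a minimal MATLAB sketch of the one-variable iteration (7), applied to the example f(x) = x^2 with the starting guess x_0 = 0.8 from Figure 1:

% one-variable Newton's method for f(x) = x^2, which has its root at x = 0
f  = @(x) x.^2;        % function whose root we seek
fp = @(x) 2*x;         % its derivative
x  = 0.8;              % initial guess, as in Figure 1
for k = 1:10
    x = x - f(x)/fp(x);                 % Newton update, Eq. (7)
    fprintf('iteration %d: x = %g\n', k, x);
end

The first update reproduces x_1 = 0.4 from the figure.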
Multi-dimensional Newton’s method applies to the sets of nonlinear equations represented by (4). The first step is to arrange the equations such that they all consist of an
expression involving some (or all) of the unknowns, and setting them all equal to zero. This
means we can think of the set of equations as a vector-valued function:
f(T) = 0    (10)
where the function f is vector-valued in the sense that it takes a vector T as input and
returns a vector (the result of every equation) as output. The solution to the system is the
vector of temperatures T that yields the zero vector when plugged into f .
To establish an iterative method we take the same approach as in the single-variable
case: we linearize the function about an initial guess and solve the resulting linear problem,
leading to the updated guess:
f(T) ≈ f(T_0) + f′(T_0)(T − T_0)    (11)
Before setting the right-hand side equal to zero and solving for T_1, we should consider the
nature of f′(T_0). Clearly it is a matrix, since every other term in the expression is a vector, and
the second term on the right is f′(T_0) left-multiplying the vector (T − T_0), meaning that
f′(T_0) must be a matrix. It is a matrix called the Jacobian:
f′(T) =
[ ∂f_1/∂T_1   ∂f_1/∂T_2   ⋯   ∂f_1/∂T_N ]
[ ∂f_2/∂T_1   ∂f_2/∂T_2   ⋯   ∂f_2/∂T_N ]
[     ⋮            ⋮                ⋮    ]
[ ∂f_N/∂T_1   ∂f_N/∂T_2   ⋯   ∂f_N/∂T_N ]    (12)

where

f(T) = [ f_1(T); f_2(T); …; f_N(T) ]   and   T = [ T_1; T_2; …; T_N ]    (13)
The form of the Jacobian comes from the multi-dimensional nature of the function f and the
nature of the Taylor expansion.
Setting the right side of (11) equal to the zero vector and re-arranging yields the multi-dimensional Newton's method algorithm:

T_1 = T_0 − [f′(T_0)]^{−1} f(T_0)    (14)
The way this goes in implementation is the following:
1. initialize with T_0 and some tolerance τ
2. calculate an initial residual vector r0 = f (T0 )
3. set a residual container vector r = r0
4. WHILE ‖r‖/‖r_0‖ > τ
5. compute f 0 (T0 )
6. solve f 0 (T0 )s = r
7. set T1 = T0 − s
8. calculate r = f (T1 )
9. shift T0 = T1
10. end WHILE
The notation ‖·‖ denotes the Euclidean norm – i.e., the length of the vector. A typical
relative tolerance τ is on the order of 10^{−6}, although that can vary.
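A minimal MATLAB sketch of steps 1-10, written for a generic residual function resfun and Jacobian function jacfun; these names, and the use of function handles, are illustrative assumptions rather than anything prescribed by the notes:

function T = newton_solve(resfun, jacfun, T0, tol)
% multi-dimensional Newton's method following steps 1-10 above;
% resfun(T) returns the residual vector f(T), jacfun(T) returns the Jacobian f'(T)
T  = T0;
r0 = resfun(T);                 % initial residual
r  = r0;
while norm(r)/norm(r0) > tol
    J = jacfun(T);              % compute f'(T)
    s = J \ r;                  % solve f'(T) s = r
    T = T - s;                  % full Newton step
    r = resfun(T);              % new residual
end
end

For the stencil (4), resfun would evaluate the left-hand side of (4) minus 1 at every interior node, and jacfun would fill only the three nonzero entries per row discussed below.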
Newton’s method thus effectively turns the solution of a nonlinear problem into a series of
linear problems. This means that we can use all of the iterative methods for linear problems
that we’ve learned about, so long as the Jacobian is sparse. Will it be sparse? Comparing
(4) and (12) reveals that it will be: only the partial derivatives with respect to T_{i−1}, T_i and T_{i+1} appear
in (4), meaning that all of the other entries in the row of the Jacobian pertaining to the
particular equation represented by the stencil will be zero.
The other question is even more critical: is this going to work? The answer is ‘maybe’.
With Newton’s method there are no guarantees, except that if you start close enough to the
solution you will get there. Much of the strategy in dealing with nonlinear problems lies in
the art of getting close to the solution: in time-dependent problems, for instance, reducing
the time step gets you closer to the solution for the next time step. Time-independent
problems (those in equilibrium or steady-state) can be converted into iterative processes
employing a kind of pseudo-time, where the simulation moves stepwise from some easily
obtained solution to the more difficult target.
Algorithmically speaking, one thing that helps greatly in implementing Newton’s method
is Armijo’s rule, in which the “full Newton step” s appearing in the algorithm above is
tested to ensure that it results in a reduction of the residual over the previous step. This is
accomplished by inserting another while loop inside of the existing one above, after step 6.
The new while loop tests to make sure that the new residual is lower than the old one. If it
isn’t, T1 is recalculated after cutting s in half. The while loop ends when the new residual
is lower or when the step has been cut to a very small length, allowing the algorithm to go
‘uphill’ a slight amount. This often can enable the algorithm to work its way out of a local
minimum.
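As an illustration of that modification (a sketch assuming the variable names of the Newton loop above), the inner while loop inserted after step 6 might look like:

% Armijo-style backtracking: accept the step only if it reduces the residual,
% otherwise cut it in half, down to some small minimum step length
lambda = 1;                                  % fraction of the full Newton step
Ttrial = T - lambda*s;
while norm(resfun(Ttrial)) >= norm(r) && lambda > 1e-4
    lambda = lambda/2;                       % halve the step
    Ttrial = T - lambda*s;
end
T = Ttrial;                                  % accept the (possibly shortened) step
r = resfun(T);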
Finite differences in two spatial dimensions share a lot in common with the one-dimensional version. Let's fix ideas right from the beginning by considering the following
flux and source expressions:
q = −κ∇T    (1)
f = 0    (2)

The continuity equation then yields:

−∇ · q + f = 0    (3)
⇒ ∇ · [κ∇T] = 0    (4)
⇒ ∇^2 T = 0    (5)
Now imagine that this governing equation (which takes a famous form called Laplace's equation) is valid on the following domain:

Figure 1: the domain for Laplace's equation – the origin of coordinates is at the bottom left;
the rightmost edge is a fixed temperature T_1 and half the leftmost edge is a fixed temperature
T_2, whereas every other boundary is insulated

If we want to use finite differences to solve this system, it might be easier to write it like this:
∂^2 T/∂x^2 + ∂^2 T/∂y^2 = 0    (6)
We have, of course, finite difference approximations for second derivatives, but we’ll need to
introduce another set of indices for the second dimension. In the x-direction we can still use
the index i, whereas in the y-direction we could go for j, such that a generic node in the
center of the discretized domain and all of its neighbors looks like Figure 2. So when we discretize
(6), we get
(T_{i+1,j} − 2T_{i,j} + T_{i−1,j}) / ∆x^2 + (T_{i,j+1} − 2T_{i,j} + T_{i,j−1}) / ∆y^2 = 0    (7)
If we consider that we have a regular mesh, which means that ∆x = ∆y, we get the stencil
T_{i+1,j} + T_{i−1,j} − 4T_{i,j} + T_{i,j+1} + T_{i,j−1} = 0    (8)
Figure 2: a single node on the domain, with its neighboring nodes
Implementation is a little bit different for the case of a multi-dimensional domain. The
main reason for this is the way the vector of temperatures is constructed. In the one-dimensional case, we could simply create a vector out of the unknown temperatures, with
the positions of temperatures in the vector connected with the positions of nodes in the
domain:

T = [ T_1; …; T_{i−1}; T_i; T_{i+1}; …; T_n ]    (9)
Now however we’re faced with the problem of forming a vector containing temperatures at
nodes that are arrayed in a two-dimensional mesh. How do we accomplish that?
The most reasonable method seems to be: we stack them into the vector one row at a
time:

T = [ T_{1,1}; …; T_{i,1}; …; T_{n,1}; T_{1,2}; …; T_{i,2}; …; T_{n,2}; … ]    (10)
and so on, until we have put all rows in the domain, from the row corresponding to j = 1 to
the row corresponding to j = m, into the vector.
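One concrete way to express this ordering (an illustrative convention, not something fixed by the notes) is to map node (i, j) on a mesh with n nodes per row to position k = i + (j − 1)·n in the vector; in MATLAB:

% map 2-D node indices (i,j) to the entry k of the stacked vector,
% assuming the rows j = 1..m are stacked one after another
k = @(i, j, n) i + (j-1)*n;
% e.g. with n = 5 nodes per row, node (3,2) sits at entry k(3,2,5) = 8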
What does that mean for the matrix? Consider how the terms that appear in the stencil
(8) are arranged in the temperature vector:

[ ⋮; T_{i,j−1}; ⋮; T_{i−1,j}; T_{i,j}; T_{i+1,j}; ⋮; T_{i,j+1}; ⋮ ]    (11)
How are the coefficients that appear in the stencil arranged in the matrix as a result?

[ ⋱                                      ]
[ ⋯ 1 ⋯ 1 −4 1 ⋯ 1 ⋯                     ]
[   ⋯ 1 ⋯ 1 −4 1 ⋯ 1 ⋯                   ]    (12)
[     ⋯ 1 ⋯ 1 −4 1 ⋯ 1 ⋯                 ]
[   ⋱     ⋱     ⋱     ⋱     ⋱            ]
In other words, instead of the tri-diagonal matrix we obtained in the 1-D problem (where
every entry is zero except for the main diagonal and the two diagonals directly above &
below it), we now have two extra diagonals located n entries away – on either side – from
the main diagonal on every row. This is an issue for Gaussian elimination, which
costs O(n) operations in the tri-diagonal case but not here: when the nonzero
diagonal in the lower triangle is eliminated, entries that were previously zero
become nonzero (fill-in). This is the motivation for using iterative solvers that take advantage of
the matrix sparsity.
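As an illustration of this structure (a sketch with an assumed small mesh and assumed variable names, and with boundary conditions omitted), the five-diagonal matrix in (12) can be assembled sparsely and inspected like this:

% assemble the interior-node stencil (8) on an n-by-m mesh of unknowns,
% with nodes stacked row by row as in (10)
n = 5;  m = 4;                      % assumed mesh dimensions (illustrative)
N = n*m;
e = ones(N,1);
A = spdiags([e, e, -4*e, e, e], [-n, -1, 0, 1, n], N, N);
% note: this simple version also couples the last node of one mesh row to the
% first node of the next; a full implementation would zero those entries and
% then impose the boundary conditions
spy(A)                              % shows the five-diagonal sparsity pattern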
Finite differences in time share a lot of similarities with finite differences in space. There
are a few special considerations, however. Consider for starters the mass-spring system of
Figure 1.

Figure 1: two free masses and springs – imagine them moving

If we go back to the equilibrium analysis for this system, it's a force balance:

A^T C A u = f    (1)
where the left-hand side of the equation is the sum of spring forces holding the masses up,
while f are the gravitational forces that pull them down. In equilibrium the forces balance,
but when they are out of balance the resultant force will create an acceleration, which can
be described by Newton’s second law:
M ü = f − A^T C A u    (2)
(recall that the downward direction here is positive) where the diagonal mass matrix M is:

M = [ m_1   0
       0   m_2 ]    (3)
Our goal is to solve the system of equations (2) using finite differences. Since this system
is an initial-value problem, we’ll need to first look at the nature of initial-value problems to
determine what changes we need to make in our approach.
A look at Figure 2 gives a sense of the issue we face: given u(0) ≡ u0 , we have to come
up with a formula for u1 , using finite differences. If the problem is second-order like (2),
this implies we’d require a second-difference approximation. For time-dependent problems,
those look like this:
ü(t_n) ≈ (u_{n+1} − 2u_n + u_{n−1}) / ∆t^2    (4)
and immediately it becomes clear: if n = 0 such that tn = t0 , then we require u−1 , which
is a point outside of the domain. The only approximation that seems available to us is the
forward-difference approximation for a first derivative:
u̇(t_n) ≈ (u_{n+1} − u_n) / ∆t    (5)
which suggests that the first thing that must happen is to reduce the system to first order.
Figure 2: a grid for finite differences for initial value problems
Order reduction is possible by introducing a new time-dependent state to the system. To
illustrate, let’s simplify the system (2) to a second-order problem with just one state. That
is, the new system will be a single mass and a single spring. Ignoring gravity, the equations
then reduce to
m ü + c u = 0    (6)

Let's simplify further by setting

c = m = 1    (7)

such that the equation becomes

ü + u = 0    (8)
The solution to (8) is likely to be a trig function – both sine and cosine qualify, and the
initial conditions will determine which one we choose. The initial conditions
u(0) = 0    (9)
u̇(0) = 1    (10)

specify that the solution is

u(t) = sin t    (11)
This analytical solution will help us assess different schemes for solving time-dependent
problems with finite differences.
To reduce the order of the problem, we introduce a new time-dependent state:
v = u̇    (12)
and substitute that into (8). The new system becomes
u̇ = v    (13)
v̇ = −u    (14)

with the initial conditions

u(0) = 0    (15)
v(0) = 1    (16)
In the ‘state space’ in which we plot u(t) = sin t against v(t) = cos t, we get a perfect circle.
This stability of the exact solution – absent any damping or injection of energy into the
system, the mass will oscillate in a perfect sine wave forever – is what we seek in a numerical
scheme.
Different schemes for the solution of the system of equations can be derived by choosing
different forms for the right hand side of the system (13)-(14). The derivatives, which are
on the left, are discretized with the forward difference approximation. On the right we must
therefore choose what to put in for the states: the time step n (the current time step) or
the time step n + 1 (the next time step, which we’d like to solve for). Let’s look at three
different options.
Forward Euler uses the current states:
(u_{n+1} − u_n) / ∆t = v_n    (17)
(v_{n+1} − v_n) / ∆t = −u_n    (18)
Re-arranging, we get:

u_{n+1} = u_n + ∆t v_n    (19)
v_{n+1} = −∆t u_n + v_n    (20)

or

[ u_{n+1} ]   [  1    ∆t ] [ u_n ]
[ v_{n+1} ] = [ −∆t    1 ] [ v_n ]    (21)
Forward Euler is also called an explicit scheme because the formula above creates an
explicit route from the current state to the next one, without having to invert any matrices.
The matrix on the right-hand side we’ll call GF . The way the scheme works is that we
multiply the current state by GF to get the next one, and then again to get the state after
that:

[ u_{n+1} ]         [ u_n ]           [ u_{n−1} ]               [ u_0 ]
[ v_{n+1} ] = G_F [ v_n ] = G_F^2 [ v_{n−1} ] = ⋯ = G_F^{n+1} [ v_0 ]    (22)
It’s clear then that this scheme – or any numerical scheme for a time-dependent problem –
involves raising this matrix to an arbitrarily high power. This can tell us something about
the scheme’s stability. Given that
G_F^n = S Λ^n S^{−1}    (23)
it’s the eigenvalues of G_F that determine its stability: if any of them have magnitude greater than 1, the
scheme will add (spurious) energy into the system and it will tend to blow up; if all of the
eigenvalues have magnitude less than 1 the scheme will (spuriously) dampen the system and it will stop
oscillating – only if all eigenvalue magnitudes are exactly equal to 1 will the scheme be stable.
The eigenvalues of G_F are easy to calculate: λ_F = 1 ± i∆t; these are complex numbers,
the magnitudes of which are √(1 + ∆t^2), which is greater than 1. We therefore expect the
Forward Euler scheme to be unstable: it will add numerical energy into the system and will
tend to blow up. Explicit schemes can still be used, but they must be used with caution,
with a very small timestep (the rate of the explosion can be reduced by choosing a small ∆t)
and for short overall times.
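A quick numerical check of this (an illustrative snippet; the value of ∆t is arbitrary) is to look at the eigenvalue magnitudes of the transfer matrix:

dt = 0.1;
GF = [1, dt; -dt, 1];    % Forward Euler transfer matrix from Eq. (21)
abs(eig(GF))             % both magnitudes equal sqrt(1 + dt^2) > 1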
Backward Euler is what results if we make the other choice for the right-hand side, the
state at the n + 1 timestep:
(u_{n+1} − u_n) / ∆t = v_{n+1}    (24)
(v_{n+1} − v_n) / ∆t = −u_{n+1}    (25)
Re-arranging then gives:

u_{n+1} − ∆t v_{n+1} = u_n    (26)
v_{n+1} + ∆t u_{n+1} = v_n    (27)

or

[  1   −∆t ] [ u_{n+1} ]   [ u_n ]
[ ∆t    1  ] [ v_{n+1} ] = [ v_n ]    (28)
This is also called an implicit scheme because we have to invert a matrix to find the
next state. If we call the matrix on the left above G_B^{−1}, then it’s clear that the matrix that
serves the same function as G_F does for the Forward Euler is its inverse, G_B. Since G_B^{−1} has
eigenvalues that are the same as those of G_F, this means that all eigenvalues of G_B will have
magnitude less than one, meaning that Backward Euler will be unstable with numerical damping –
the spring will slow down and ultimately stop as time goes to infinity.
Crank-Nicolson is – literally – an even combination of Backward and Forward Euler.
That is, half of the right-hand side of the system is states at the current time step, and half
is states at the next time step:
(u_{n+1} − u_n) / ∆t = (1/2)(v_n + v_{n+1})    (29)
(v_{n+1} − v_n) / ∆t = −(1/2)(u_n + u_{n+1})    (30)
Re-arranging:
u_{n+1} − (∆t/2) v_{n+1} = u_n + (∆t/2) v_n    (31)
(∆t/2) u_{n+1} + v_{n+1} = −(∆t/2) u_n + v_n    (32)
leading to:

[   1     −∆t/2 ] [ u_{n+1} ]   [    1      ∆t/2 ] [ u_n ]
[ ∆t/2      1   ] [ v_{n+1} ] = [ −∆t/2      1   ] [ v_n ]    (33)
If we call the matrix on the left G_{CN−I} and the matrix on the right G_{CN−II}, then the
transfer matrix (the matrix we multiply the time-n states by to get the n + 1 states) is
G_{CN−I}^{−1} G_{CN−II}. Since G_{CN−I} and G_{CN−II} have the same eigenvalues, it’s clear that the
eigenvalues of G_{CN−I}^{−1} G_{CN−II} will have magnitude exactly one. Crank-Nicolson is thus the stable scheme
we’ve been searching for.
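To make the three schemes concrete, here is a minimal MATLAB sketch (the step size and duration are arbitrary illustrative choices) that advances the test problem (13)-(16) with each scheme, for comparison against the exact solution u(t) = sin t:

dt = 0.05;  nsteps = round(20/dt);        % illustrative step size and duration
GF   = [1,  dt; -dt,   1];                % Forward Euler, Eq. (21)
GBi  = [1, -dt;  dt,   1];                % Backward Euler left-hand matrix, Eq. (28)
GCI  = [1, -dt/2;  dt/2, 1];              % Crank-Nicolson left-hand matrix, Eq. (33)
GCII = [1,  dt/2; -dt/2, 1];              % Crank-Nicolson right-hand matrix
xF = [0; 1];  xB = [0; 1];  xC = [0; 1];  % initial state [u; v] from (15)-(16)
uF = zeros(nsteps,1);  uB = uF;  uC = uF;
for n = 1:nsteps
    xF = GF*xF;                           % explicit update
    xB = GBi\xB;                          % implicit update (solve rather than invert)
    xC = GCI\(GCII*xC);                   % Crank-Nicolson update
    uF(n) = xF(1);  uB(n) = xB(1);  uC(n) = xC(1);
end
t = dt*(1:nsteps)';
plot(t, uF, t, uB, t, uC, t, sin(t), 'k--')
legend('Forward Euler', 'Backward Euler', 'Crank-Nicolson', 'exact')

Consistent with the eigenvalue analysis, the Forward Euler amplitude grows, the Backward Euler amplitude decays, and Crank-Nicolson holds it steady.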
Block matrix notation is how we write a stencil for the multi-mass problem we started
with. The first thing we’ll do is to invoke the principle of superposition which enables us
to separate the equilibrium problem from the dynamic one: we solve the equilibrium first,
then reset the zero point for our displacements to that equilibrium point. This accounts for
the gravitational forces, so we can remove them from the dynamics given in (2). The new
dynamics then become
M ü + A^T C A u = M ü + K u = 0    (34)
As above we start with a reduction of the second order problem to first order by introducing
new states:
u̇ = v    (35)
v̇ = −M^{−1} K u    (36)
It’s important to realize that if there are two masses in our system, each of the vectors u and v
has two elements, meaning that each of the expressions in the system (35)-(36) as written
represents two equations (making four equations in total). Crank-Nicolson discretization
yields:
(u_{n+1} − u_n) / ∆t = (1/2)(v_n + v_{n+1})    (37)
(v_{n+1} − v_n) / ∆t = −(1/2) M^{−1} K (u_n + u_{n+1})    (38)
Re-arranging:
u_{n+1} − (∆t/2) v_{n+1} = u_n + (∆t/2) v_n    (39)
(∆t/2) M^{−1} K u_{n+1} + v_{n+1} = −(∆t/2) M^{−1} K u_n + v_n    (40)

which finally yields

[       I        −(∆t/2) I ] [ u_{n+1} ]   [        I         (∆t/2) I ] [ u_n ]
[ (∆t/2) M^{−1}K      I    ] [ v_{n+1} ] = [ −(∆t/2) M^{−1}K      I    ] [ v_n ]    (41)
Equation (41) features stacked vectors and block matrices: each written entry in the
vectors is itself a 2 × 1 vector, and each entry appearing in the matrices is a 2 × 2 matrix.
Thus the entire system is 4 × 4, although it’s written as if it were a 2 × 2. The shorthand is
perhaps not necessary in this case, but it would be if there were many more than two springs
in the system. This is a stencil for the masses-and-springs dynamic system, which can be
used for coding an implementation of the Crank-Nicolson scheme.
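A minimal MATLAB sketch of stepping (41); the masses, spring constants, connectivity matrix A, time step, and initial state below are all illustrative assumptions for a two-mass, two-spring chain, not values prescribed by the notes:

m1 = 1;  m2 = 1;  c1 = 1;  c2 = 1;      % assumed masses and spring constants
A = [1 0; -1 1];  C = diag([c1 c2]);    % assumed connectivity for a 2-spring chain
M = diag([m1 m2]);  K = A'*C*A;
dt = 0.05;  nsteps = 400;               % assumed step size and duration
I2 = eye(2);  Z = (dt/2)*(M\K);         % the (dt/2)*M^{-1}*K block
GL = [I2, -(dt/2)*I2;  Z, I2];          % left block matrix in (41)
GR = [I2,  (dt/2)*I2; -Z, I2];          % right block matrix in (41)
x = [0.1; 0; 0; 0];                     % assumed initial state [u1; u2; v1; v2]
U = zeros(nsteps, 2);
for n = 1:nsteps
    x = GL \ (GR*x);                    % one Crank-Nicolson step of (41)
    U(n,:) = x(1:2)';
end
plot(dt*(1:nsteps), U)                  % displacement histories of both masses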