The principle of human motor control can be summarized with the scheme in the Figure below. The individual elements of this control loop can then be simulated individually, and often linear time invariant systems form a good first approximation.

*Human Motor Control: The dotted lines indicate the feedforward control pathway, and the gray lines the
feedback control pathway. Elements that form part of both are shown in solid black lines.*

*Linear Time Invariant Systems (LTIs)* are systems that can be described
by linear differential equations with constant coefficients.

- Linearity
  - The definitive test for a linear system is that if input
    \(x_1(t)\) produces output \(y_1(t)\), and \(x_2(t)\)
    produces \(y_2(t)\), then the input \(a x_1(t)+b x_2(t)\)
    must produce the output \(a y_1(t) + b y_2(t)\). This is
    *superposition*, and is a property of linear systems.
- Time invariance
  - This means that the system coefficients do not change over the period of our investigation.

These systems have the advantage that a sine input always leads to a sine output of the same frequency; only the amplitude and phase can change.

*Amplitude and Phase of a sine-wave.*

Oscillations can be described elegantly with complex exponentials, because Euler’s identity tells us:

\[\label{eq:Euler}
{e^{j\omega t}} = \cos (\omega t) + j\sin (\omega t)\]
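Euler’s identity is easy to verify numerically; a quick sketch in Python (the frequency chosen below is arbitrary):

```python
import numpy as np

# Spot-check Euler's identity e^{j w t} = cos(wt) + j sin(wt) at many sample points
omega = 2.0                     # arbitrary angular frequency [rad/s]
t = np.linspace(0, 5, 100)

lhs = np.exp(1j * omega * t)
rhs = np.cos(omega * t) + 1j * np.sin(omega * t)

print(np.allclose(lhs, rhs))
```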

Note that the only purpose of using complex numbers is to keep the arithmetic as simple as possible. If the input to our system is \({e^{j\omega t}}\), the output must then have the form

*Representation of a complex number in polar coordinates.*

\[out(t) = r \cdot {e^{j\delta }} \cdot {e^{j\omega t}} = G(j\omega ) \cdot {e^{j\omega t}}\]

\(r\) quantifies the change in amplitude (the *gain*), and \(\delta\) the phase shift introduced by our system. So with one complex number, \(G(j\omega)\), we can completely characterize the effect of our system on a sine input of that frequency. \(G(j\omega)\) is therefore called the *transfer function* of our system.
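This amplitude/phase interpretation can be illustrated with a short sketch, using made-up values for the gain \(r\) and the phase shift \(\delta\):

```python
import numpy as np

# Hypothetical transfer-function value at one frequency: gain r, phase shift delta
r, delta = 0.5, -np.pi / 4
omega = 1.0
t = np.linspace(0, 10, 1000)

inp = np.exp(1j * omega * t)            # complex sine input
out = r * np.exp(1j * delta) * inp      # output of the LTI system at this frequency

# The real part is a sinusoid of the same frequency, scaled by r and shifted by delta
print(np.allclose(out.real, r * np.cos(omega * t + delta)))
```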

*Here “s” indicates the complex frequency \(j\omega\).*

All linear systems obey superposition. A linear system is one described by linear equations: \(y = kx\) is linear (\(k\) is a constant), whereas \(y = \sin(x)\), \(y = x^2\), and \(y = \log(x)\) are obviously not. Even \(y = x+k\) is not linear: one consequence of superposition is that if you double the input (\(x\)), the output (\(y\)) must double, and here it doesn’t. The differential equation

\[a\frac{{{d^2}x}}{{d{t^2}}} + b\frac{{dx}}{{dt}} + cx = y\]

is linear.

\[a\frac{{{d^2}x}}{{d{t^2}}} + b\frac{{dx}}{{dt}}x + c{x^2} = y\]

is not, for two reasons which I hope are obvious: both the product \(\frac{dx}{dt}x\) and the term \(x^2\) violate superposition.
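The superposition test can be spelled out in a few lines of Python, comparing a hypothetical linear map \(y = 3x\) against the affine \(y = x + 2\):

```python
# Superposition test: a linear map passes, an affine one fails
def linear_sys(x):
    return 3.0 * x          # y = kx, with k = 3

def affine_sys(x):
    return x + 2.0          # y = x + k -- not linear!

x1, x2, a, b = 1.0, 4.0, 2.0, 5.0
lin_ok = linear_sys(a * x1 + b * x2) == a * linear_sys(x1) + b * linear_sys(x2)
aff_ok = affine_sys(a * x1 + b * x2) == a * affine_sys(x1) + b * affine_sys(x2)
print(lin_ok, aff_ok)
```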

*The basic idea underlying Linear Systems Analysis.*

So, if we can break down f(t) into a bunch of sine waves, the *Transfer
Function G(s)* for a linear system can quickly give us the gain and
phase for each sine wave or *harmonic*. This will give us all the output
sine waves, and all we have to do is add them all up and voila! the
desired output!

This is illustrated in the Figure above. The input f(t) (the square wave is only for illustration) is decomposed on the left into the sum of many harmonics, using the Fourier Transformation to find their amplitudes. Each harmonic is passed through \(G(j \omega)\): at the k-th harmonic, \(G(jk \omega_0)\) applies a gain and a phase shift. The resulting sinusoids \(G(jk \omega_0) \sin (k\omega_0 t)\) can then all be added up, as on the right, to produce the final desired output shown at lower right.

The Figure above illustrates the basic method of all transforms including Laplace transforms so it is important to understand the concept (if not the details). In different words, f(t) is taken from the time domain by the transform into the frequency domain. There, the system’s transfer function operates on the frequency components to produce output components still in the frequency domain. The inverse transform assembles those components and converts the result back into the time domain, which is where you want your answer. Obviously you couldn’t do this without linearity and superposition.
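This pipeline can be sketched numerically, assuming for illustration a first-order lag \(G(j\omega) = 1/(j\omega T + 1)\) and using the FFT as a discrete stand-in for the Fourier Transformation:

```python
import numpy as np

# Sketch of the transform pipeline, assuming a first-order lag G(jw) = 1/(jwT + 1)
T = 0.5                                   # assumed time constant [s]
dt = 0.001
t = np.arange(0, 4, dt)
f = np.sign(np.sin(2 * np.pi * t))        # square-wave input, period 1 s

F = np.fft.fft(f)                         # time domain -> frequency domain
w = 2 * np.pi * np.fft.fftfreq(t.size, dt)
G = 1 / (1j * w * T + 1)                  # transfer function at each frequency
y = np.fft.ifft(F * G).real               # weight each component, go back to time

# The lag attenuates the high harmonics, so the output is a smoothed square wave
print(np.max(np.abs(y)))
```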

In the simulation of mechanical systems (or differential equations in
general), this conversion from the time to the frequency domain is
typically performed with the *Laplace Transformation*. Given any time
course f(t), its Laplace transform F(s) is

\[\label{eq:Laplace}
F(s) = \int\limits_0^\infty {f(t) \cdot {e^{ - st}}dt}\]

while its inverse is

\[\label{eq:invLaplace}
f(t) = \frac{1}{2\pi j}\int\limits_{\sigma - j\infty }^{\sigma + j\infty } {F(s) \cdot {e^{st}}ds}\]

where the integration runs along a vertical line in the s-plane, S. s is
sometimes referred to as *complex frequency*.
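The forward transform can be checked numerically for a simple example, \(f(t) = e^{-at}\), whose transform is known to be \(F(s) = 1/(s+a)\); the values of \(a\) and \(s\) below are arbitrary:

```python
import numpy as np

# Numerically approximate the Laplace integral for f(t) = e^{-a t},
# whose transform is known analytically: F(s) = 1/(s + a)
a, s = 2.0, 3.0
dt = 1e-4
t = np.arange(0, 20, dt)                  # upper limit 20 s approximates infinity here

F_num = np.sum(np.exp(-a * t) * np.exp(-s * t)) * dt
F_exact = 1 / (s + a)
print(F_num, F_exact)
```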

Laplace Transformations are a generalization of Fourier Transformations. With Fourier Transformations we have dealt only with sine waves, \(e^{j \omega t}\). Put another way, we have restricted \(s\) to \(j \omega\), so that \(e^{st}\) was restricted to \(e^{j \omega t}\). But this is unnecessary: we can let \(s\) be fully complex, \(s = \sigma + j \omega\). This greatly expands the kinds of functions that \(e^{st}\) can represent.

*The Laplace Transform uses exponentially changing sinusoids.*

The Figure above is a view of the s-plane with its real axis (\(\sigma\)) and imaginary axis (\(j \omega\)). At point 1, \(\omega=0\) and \(\sigma<0\), so \(e^{st}=e^{\sigma t}\) is a simple decaying exponential, as shown. At points 2 and 3 (we must always consider pairs of complex points; recall from Eq. [eq:complexSinCos] that it took an \(e^{j \omega t}\) and an \(e^{-j \omega t}\) to get a real \(\sin \omega t\) or \(\cos \omega t\)) we have \(\sigma<0\) and \(\omega \neq 0\), so \(e^{\sigma t} e^{j \omega t}\) is a damped sine wave, as shown. At points 4 and 5, \(\sigma=0\), so we are back to simple sine waves. At points 6 and 7, \(\sigma > 0\), so the exponential is a rising oscillation.

At point 8, \(\sigma > 0\) and \(\omega=0\), so we have a plain rising exponential. This Figure thus shows the variety of waveforms represented by \(e^{st}\).

So the equation for the Laplace Transformation says that f(t) is made up by summing an infinite number of infinitesimal wavelets of the forms shown in the Figure above. F(s) tells you how much of each wavelet \(e^{st}\) is needed at each point on the s-plane; that weighting factor is given by the Laplace Transform. f(t) is decomposed into an infinite number of wavelets, each weighted by the complex number F(s). They are then passed through the transfer function G, which now is no longer \(G(j \omega)\) (defined only for sine waves) but G(s), defined over the whole s-plane. The product F(s)G(s) tells you the amount of \(e^{st}\) at each point on the s-plane contained in the output. Applying the inverse Laplace Transformation to F(s)G(s) takes you back to the time domain and gives you the output.

For us, the most important aspect of the Laplace transformation is

\[\label{eq:LaplaceDG}
\frac{{dx(t)}}{{dt}}\xrightarrow{{LaplaceTransform}}s \cdot X(s) - x(0)\]

As a reminder, in your Mathematics lecture, the following notation was used to express the same equation:

\[\mathcal{L}[y'(x)] = s\mathcal{L}[y(x)] - y(0)\]

This equation states that a Laplace transformation converts a differential equation (as a function of time) into an algebraic equation, thereby allowing us to easily solve the differential equation in the frequency domain.
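A quick numerical sanity check of this rule, using \(x(t) = e^{-2t}\) as an example (the chosen values of \(s\) and the decay rate are arbitrary):

```python
import numpy as np

# Numerical check of the derivative rule  L{dx/dt} = s X(s) - x(0),
# using x(t) = e^{-2 t} as an example
s = 3.0
dt = 1e-4
t = np.arange(0, 20, dt)
x = np.exp(-2 * t)
dx = -2 * np.exp(-2 * t)                  # analytic derivative of x(t)

lhs = np.sum(dx * np.exp(-s * t)) * dt    # L{dx/dt}, integrated numerically
rhs = s * np.sum(x * np.exp(-s * t)) * dt - x[0]

print(lhs, rhs)
```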

In these systems one is concerned with force, displacement and its rate of change, velocity. Consider a simple mechanical element – a spring.

*F is the force, x is the length, and k the spring constant.*

*Hooke’s Law* states

\[\label{eq:spring}
F = k x\]

where k is the spring constant.

Another basic mechanical element is a viscosity typical of the shock absorbers in a car’s suspension system, a hypodermic syringe, or of a muscle.

*Plunger in a cylinder, with a viscosity r.*

The relationship is

\[\label{eq:damper}
F = r\frac{{dx}}{{dt}} \to rs\tilde x\]

where \(\tilde{x}\) indicates the Laplace-transformed variable. That is, a constant force causes the element to change its length at a constant velocity. r is the viscosity, and the element is called a dashpot.

Let us put the two elements together: this
combination is sometimes referred to as
*Voigt-Element*, and is a good first approximation
for a muscle model. The force F acting on the element is divided between the
spring and the damper, and – since the two elements are arranged in parallel – the
overall force is \(F = F_k + F_r\), or

\[\tilde F(s) = (k + rs)\,\tilde x(s)\]

*The parallel combination of a spring and a damper is called “Voigt-Element”*

If we take F to be the input and x the output

\[\label{eq:firstOrderTF}
\frac{{\tilde x(s)}}{{\tilde F(s)}} = G(s) = \frac{1}{{sr + k}} = \frac{{1/k}}{{s\frac{r}{k} + 1}} = \frac{{1/k}}{{sT + 1}}\]

where T = r/k is the system time constant. This is called a first order lag. Fig. [fig:dampedOscillator] is a simplified model of a muscle.
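As a sketch (with made-up values for \(k\) and \(r\)), the known step response of a first-order lag, \(x(t) = \frac{F_0}{k}(1 - e^{-t/T})\), can be compared against a direct numerical integration of the differential equation:

```python
import numpy as np

# Step response of the Voigt element, with assumed values k = 1 N/m, r = 0.5 N*s/m
k, r = 1.0, 0.5
T = r / k                                      # time constant T = r/k
F0 = 1.0                                       # constant force applied at t = 0

dt = 0.001
t = np.arange(0, 5 * T, dt)
x_analytic = (F0 / k) * (1 - np.exp(-t / T))   # known first-order-lag step response

# Euler integration of  r dx/dt + k x = F0  gives the same curve
x_sim = np.zeros(t.size)
for i in range(1, t.size):
    dxdt = (F0 - k * x_sim[i - 1]) / r
    x_sim[i] = x_sim[i - 1] + dt * dxdt

print(np.max(np.abs(x_sim - x_analytic)))
```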

The next level of realism (or complexity) is the addition of a *mass m*.
Compared to the Voigt element, we now also
have to include the inertial force, \(F = m\frac{d^2x}{dt^2}\).

In that case the differential equation that describes the movement of the mass is now given by

\[m\frac{d^2x}{dt^2} + r\frac{dx}{dt} + kx = F(t)\]

Applying the Laplace transformation gives us the algebraic equation

\[m s^2 \tilde{x}(s) + r s \tilde{x}(s) + k \tilde{x}(s) = \tilde{F}(s)\]

Writing output over input, we get the transfer function for a damped oscillator

\[\frac{\tilde{x}}{\tilde{F}} = \frac{1}{m s^2 + r s + k}\]
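A minimal simulation of this transfer function’s step response, using SciPy and assumed values for m, r, and k:

```python
import numpy as np
import scipy.signal as ss

# Step response of the damped oscillator, with assumed parameters
m, r, k = 1.0, 1.0, 4.0                 # mass, viscosity, spring constant (underdamped)
sys = ss.lti([1], [m, r, k])            # G(s) = 1 / (m s^2 + r s + k)

t = np.linspace(0, 20, 2000)
t_out, x = ss.step(sys, T=t)

# For a unit force step, x settles at the static value 1/k (spring law F = kx)
print(x[-1], 1 / k)
```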

If you have two blocks or transfer functions in cascade, \(G(j \omega)\) and \(H(j \omega)\), the combined transfer function is simply their product

\[out = H*G*in\;.\]

Since we are dealing only with linear systems, the sequence can be inverted: \(G*H = H*G\).
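In coefficient form, cascading two transfer functions means convolving their numerator and denominator polynomials; a sketch with two made-up first-order lags:

```python
import numpy as np

# Two first-order lags in cascade: the combined transfer function is the product,
# so the numerator and denominator polynomial coefficients are convolved
numG, denG = [1], [2, 1]                  # G(s) = 1/(2s + 1)
numH, denH = [1], [3, 1]                  # H(s) = 1/(3s + 1)

num = np.convolve(numG, numH)             # combined numerator
den = np.convolve(denG, denH)             # combined denominator: 6s^2 + 5s + 1

# Evaluate at one frequency, s = j: the product commutes, G*H = H*G
s = 1j
GH = np.polyval(num, s) / np.polyval(den, s)
G = np.polyval(numG, s) / np.polyval(denG, s)
H = np.polyval(numH, s) / np.polyval(denH, s)
print(GH, G * H)
```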

For graphical representations (such as the Bode Plot below) it can be convenient to plot the logarithm of the transfer gain, since the log of the combined function is just the sum of the logs:

*Serial combination of Transfer Functions G and H.*

\[\log (G(j\omega ) \times H(j\omega )) = \log (G(j\omega )) + \log (H(j\omega ))\]

When we plot the transfer function of a first order lag on a linear-linear scale, we get

*You can see that as the frequency \(\omega = 2\pi f\) goes up, the gain goes down, approaching zero, and the
phase lag increases to 90 deg.*

Bode decided to plot the gain on a log-log plot. By doing this you stretch out the low frequency part of the \(\omega\) axis and get a clearer view of the system’s frequency behavior. The result will appear:

*Bode Plot.*

An interesting frequency is \(\omega = 1/T\), since then \(|G| = 1/\sqrt 2 \approx 0.707\) and \(\angle G = - {\tan ^{ - 1}}(1) = - {45^ \circ }\). Below this frequency, \(\log(|G|)\) can be closely approximated by a horizontal straight line at zero (\(\log(1)=0\)). At high frequencies, above \(\omega = 1/T\), \(|G|\) falls off along another straight line with a slope of \(-20\) dB/decade, which means it falls by a factor of 10 (20 dB) when \(\omega\) increases by a factor of 10. The phase is plotted on a linear-log scale. The main point of Fig. [fig:bodePlot] is that it is a very simple way to portray what this lag element does to sine wave inputs (\(e^{j \omega t}\)) of any frequency.
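These corner-frequency values are easy to confirm by evaluating \(G(j\omega) = 1/(j\omega T + 1)\) directly (the time constant below is arbitrary):

```python
import numpy as np

# Gain and phase of the first-order lag G(jw) = 1/(jwT + 1) at the corner frequency
T = 7.0
omega = 1 / T
G = 1 / (1j * omega * T + 1)

gain = abs(G)                      # expected: 1/sqrt(2) ~ 0.707, i.e. -3 dB
phase = np.degrees(np.angle(G))    # expected: -45 degrees
print(gain, phase)
```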

For Linear, Time-Invariant systems (LTI systems), the input and output have a simple relationship in the frequency domain :

\[Out(s) = G(s)*In(s)\]

where the transfer function G(s) can be expressed by the algebraic function

\[G(s) = \frac{{num(s)}}{{den(s)}} = \frac{{n(0)*{s^0} + n(1)*{s^1} + n(2)*{s^2} + ...}}{{d(0)*{s^0} + d(1)*{s^1} + d(2)*{s^2} + ...}}\]

In other words, specifying \(\vec{n}\) and \(\vec{d}\) , the coefficients of the numerator and denominator, uniquely characterizes the transfer function. This notation is used by some computational tools to simulate the response of such a system to a given input.

Different tools can be used to simulate such a system. As an example, consider a low-pass filter with a time constant of 7 sec, described by the transfer function

\[G(s) = \frac{1}{{7s + 1}}\]

Its response to an input step at 1 sec can be simulated as follows:

If you work on the command line, you can use the *Control System Toolbox* of MATLAB

```
% Define the transfer function
num = [1];
tau = 7;
den = [tau, 1];
mySystem = tf(num,den)
% Generate an input step
t = 0:0.1:30;
inSignal = zeros(size(t));
inSignal(t>=1) = 1;
% Simulate and show the output
[outSignal, tSim] = lsim(mySystem, inSignal, t);
plot(t, inSignal, tSim, outSignal);
```

In Python you can do the same thing with the module *signal* of the
Python package *SciPy*.

```
# Import required packages
import numpy as np
import scipy.signal as ss
import matplotlib.pyplot as plt

# Define the transfer function
num = [1]
tau = 7
den = [tau, 1]
mySystem = ss.lti(num, den)

# Generate the input step
t = np.arange(0, 30, 0.1)
inSignal = np.zeros(t.size)
inSignal[t >= 1] = 1

# Simulate and plot the output
tout, outSignal, xout = ss.lsim(mySystem, inSignal, t)
plt.plot(t, inSignal, tout, outSignal)
plt.show()
```