In the first section of this chapter we will investigate how movement parameters can be determined when we use marker-based systems to track a limb movement. In the second section, we will analyze movement recordings with inertial tracking systems.

Position and Orientation from Optical Sensors

Optical tracking systems like the Optotrak (Figure below) can provide high-resolution position information for markers attached to objects.


Figure 66: Optotrak system.


Figure 67: \(P_i\) indicate the positions of the markers, “o” the middle of the markers, and “*” the location of the Center-Of-Mass (COM).


Figure 68: Markers for 3D position measurements.

In our example, we have three markers attached to the right lower arm. The positions of these markers are measured and stored as \({\vec P_i}(t),i = 1,2,3\). The two questions are:

  1. How do we find the position \(\vec{x}(t)\) , linear velocity \(\vec v(t) = \frac{{d\vec x(t)}}{{dt}}\) , and linear acceleration?
  2. And how do we get orientation \(\mathbf{R}(t)\) , angular velocity \(\vec{\omega}(t)\) , and angular acceleration \(\frac{d\vec \omega }{dt}\)?

If we want to define position and orientation of an object in 3 dimensions, we need the positions of 3 points \({\vec P_i}(t)\) which are firmly connected to the object. The only requirements on these points are that they a) must not lie on a line, and b) are visible.

The position of an object is generally taken to be its center of mass (COM). To simplify things, let us collapse the position of the markers into their middle (marked with \("\circ"\) in the arm-cartoon above).

\[\vec M(t) = \frac{{\sum\limits_{i = 1,2,3} {{{\vec P}_i}(t)} }}{3}\]

With \(\vec C(t)\) the location of the COM, the vector from the markers to the COM is given by

(1)\[\vec r(t) = \vec C(t) - \vec M(t)\]
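As a minimal NumPy sketch, the middle of the markers and the vector of Eq. (1) can be computed directly; the marker and COM coordinates below are hypothetical values:

```python
import numpy as np

# Three (hypothetical) marker positions at one time point, in [m]
P = np.array([[0.10, 0.00, 0.00],
              [0.15, 0.05, 0.00],
              [0.12, 0.02, 0.04]])

M = P.mean(axis=0)        # middle of the markers, M(t)

# With a (hypothetical) COM location C, the vector from the markers to the COM, Eq. (1)
C = np.array([0.20, 0.03, -0.02])
r = C - M
```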

Orientation in Space

Let us start with finding the orientation of our object. To do so, we need to find the rotation matrix \(\mathbf{R}\) which describes the orientation of our object. Since a rotation matrix is no more than three columns of orthogonal unit vectors, we need to find three orthogonal unit vectors which are uniquely defined through our marker points \(\vec{P_i}(t)\).


Figure 69: Gram-Schmidt-Orthogonalization.

In general, the line \(\overrightarrow {{P_1}{P_2}}\) is not perpendicular to \(\overrightarrow {{P_1}{P_3}}\) . In order to uniquely define a normalized, right handed coordinate system characterizing position and orientation of an object, we can use a procedure called Gram-Schmidt-Orthogonalization:

(2)\[\begin{split}\begin{array}{l} {\vec a_1} = \frac{\vec P_2 - \vec P_1}{\left| \vec P_2 - \vec P_1 \right|}\\ {\vec a_2} = \frac{(\vec P_{13})_{\bot a_1}}{\left| (\vec P_{13})_{\bot a_1} \right|},\;\text{with}\;\;(\vec P_{13})_{\bot a_1} = \left( \vec P_3 - \vec P_1 \right) - \left( \vec a_1 \cdot \left( \vec P_3 - \vec P_1 \right) \right)\vec a_1\\ {\vec a_3} = \vec a_1 \times \vec a_2 \end{array}\end{split}\]
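The Gram-Schmidt procedure of Eq. (2) translates almost literally into NumPy; a minimal sketch, taking three marker positions and returning the orthonormal triad:

```python
import numpy as np

def gram_schmidt(P1, P2, P3):
    '''Right-handed orthonormal triad (Eq. 2) from three marker positions.'''
    a1 = (P2 - P1) / np.linalg.norm(P2 - P1)
    # Component of (P3 - P1) perpendicular to a1
    perp = (P3 - P1) - np.dot(a1, P3 - P1) * a1
    a2 = perp / np.linalg.norm(perp)
    a3 = np.cross(a1, a2)       # completes the right-handed system
    return a1, a2, a3
```

Note that the procedure fails, as it must, when the three points lie on a line: `perp` is then the zero vector.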

The second formula in Eq. (2) is obtained by looking at the decomposition of a vector \(\vec{b}\) into one component parallel to \(\vec{a}\), and one component perpendicular: \(\vec b = {\vec b_\parallel } + {\vec b_ \bot }\) (see Figure below), and utilizing the fact that \(\left| \vec{a} \right| = 1\).

(3)\[\begin{split}\begin{array}{l} {{\vec b}_\parallel } = \vec{a} \left| {\vec b} \right|\cos \alpha = \vec{a} \left( \vec b \cdot \frac{\vec a}{\left| {\vec a} \right|} \right)\\ {{\vec b}_ \bot } = \vec b - {{\vec b}_\parallel } \end{array}\end{split}\]

Figure 70: Decomposition of a vector b into two components: one parallel to a, and one perpendicular to a.

The three orthogonal unit vectors \({\vec a_i}(t)\) define the rotation matrix \(\mathbf{R}\) which describes the orientation of our object

\[\begin{split}{\bf{R}}(t) = \left[ {\begin{array}{*{20}{c}} {{\vec a}_1(t)}&{{\vec a}_2(t)}&{{\vec a}_3(t)} \end{array}} \right]\end{split}\]

If we call the rotation of the object relative to the starting (or reference) orientation at \(t=t_0\) \({{\bf{R}}_{mov}}\) , we get

\[\begin{split}\begin{aligned} {\bf{R}}(t) & = {{\bf{R}}_{mov}}(t) \cdot {\bf{R}}({t_0})\\ {{\bf{R}}_{mov}}(t) & = {\bf{R}}(t) \cdot {\bf{R}}{({t_0})^{ - 1}}\end{aligned}\end{split}\]
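Since rotation matrices are orthogonal, the inverse \({\bf{R}}(t_0)^{-1}\) is simply the transpose, so \({\bf{R}}_{mov}\) can be sketched in one line of NumPy:

```python
import numpy as np

def relative_rotation(R_t, R_t0):
    '''R_mov(t) = R(t) . R(t0)^{-1}; for rotation matrices the inverse is the transpose.'''
    return R_t @ R_t0.T
```

For example, if \({\bf{R}}(t_0)\) and \({\bf{R}}(t)\) are rotations about the same axis, `relative_rotation` returns the rotation by the difference of the two angles.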

Note that rotation matrices are not the only way to describe the orientation of an object. The same orientation can also be described with an equivalent rotation vector or quaternion.

Position in Space

Once we know the location of the markers and their orientation, the position of the COM is uniquely defined, even if the markers are mounted eccentrically from the COM. For every point, the movement is given by the sum of the COM-movement, and the rotation about the COM. With

\[\vec r(t) = {{\bf{R}}_{mov}}(t) \cdot \vec r({t_0})\]

and using Eq. (1) we get

\[\vec C(t) = \vec M(t) + \vec r(t) = \vec M(t) + {{\bf{R}}_{mov}}(t) \cdot \vec r({t_0})\]
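Putting the pieces together, a sketch of the COM reconstruction (all input values here are hypothetical):

```python
import numpy as np

def com_position(M_t, R_mov, r_t0):
    '''C(t) = M(t) + R_mov(t) . r(t0), with r(t0) the marker-to-COM vector at t0.'''
    return M_t + R_mov @ r_t0
```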

This gives us the position of our object in space.

Velocity and Acceleration

The formulas for finding the linear velocity and linear acceleration of an object are trivial. If the position of an object is denoted with \(\vec{C}(t)\), linear velocity and acceleration are given by

\[\begin{split}\begin{aligned} \vec v(t) & = \frac{{d\vec C(t)}}{{dt}}\\ \vec a(t) & = \frac{{{d^2}\vec C(t)}}{{d{t^2}}}\end{aligned}\end{split}\]
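In practice \(\vec{C}(t)\) is sampled at discrete times, and the derivatives are approximated numerically, e.g. with `np.gradient`. A sketch with a toy sinusoidal trajectory and an assumed sampling rate of 100 Hz:

```python
import numpy as np

rate = 100.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 1, 1/rate)
# Toy COM trajectory: sinusoidal movement along x
C = np.column_stack((np.sin(t), np.zeros_like(t), np.zeros_like(t)))

vel = np.gradient(C, 1/rate, axis=0)           # dC/dt
acc = np.gradient(vel, 1/rate, axis=0)         # d^2 C/dt^2
```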

From the orientation of our object, we can easily calculate the corresponding angular velocity \(\vec{\omega}\), as has been described in the previous chapter. The angular acceleration can be obtained from the angular velocity through simple differentiation

\[\overrightarrow {AngAcc} = \frac{{d\vec \omega }}{{dt}}\]

Position and Orientation from Inertial Sensors

Details of IMUs

A very good overview of inertial measurement units (IMUs) is provided in , and a number of the following details have been taken from that article. The most common type of IMUs in biomedical applications are strapdown systems, i.e. sensors that can be attached to the body. They are typically produced with MEMS (“micro-electro-mechanical systems”) technology, which enables the creation of very small electromechanical devices using semiconductor fabrication techniques. Such IMUs can contain three types of sensors:

  • Gyroscopes … to measure angular velocity
  • Accelerometers … to measure linear accelerations
  • Magnetometers … to measure the 3D orientation of the local magnetic field.

Figure 71: The MTX-sensor from Xsens measures linear acceleration, angular velocity, and the local magnetic field, and calculates orientation and position.


Gyroscopes indicate the angular velocity with respect to space, expressed in a body-fixed coordinate system. MEMS gyroscopes make use of the Coriolis effect, which states that in a frame of reference rotating at angular velocity \(\vec{\omega}\), a mass m moving with velocity \(\vec{v}\) experiences a force \(\vec{F_c}\) :

(4)\[{\vec F_c} = 2m\,\left( {\vec v \times \vec \omega } \right)\]
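A quick numerical illustration of Eq. (4); the values for the proof mass, drive velocity, and rotation are hypothetical:

```python
import numpy as np

m = 1e-9                           # vibrating proof mass [kg] (hypothetical value)
v = np.array([0.1, 0.0, 0.0])      # drive velocity [m/s]
omega = np.array([0.0, 0.0, 2.0])  # angular velocity about z [rad/s]

F_c = 2*m*np.cross(v, omega)       # Coriolis force, Eq. (4)
```

The resulting force lies along the sense axis, perpendicular to both the drive axis and the rotation axis.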

MEMS gyroscopes contain vibrating elements to measure the Coriolis effect. The simplest geometry consists of a single mass which is driven to vibrate along a drive axis, as shown in Figure below. When the gyroscope is rotated, a secondary vibration is induced along the perpendicular sense axis due to the Coriolis force. The angular velocity can be calculated by measuring this secondary vibration.


Figure 72: A vibrating mass gyroscope.

Linear Accelerometers

Linear acceleration can be measured either mechanically or with a solid-state device. Mechanical measurements can be performed e.g. with mechanical accelerometers (Figure below), or with Surface Acoustic Wave (SAW) accelerometers.


Figure 73: A mechanical accelerometer.

A SAW accelerometer consists of a cantilever beam which is resonated at a particular frequency, as shown in the Figure below. A mass is attached to one end of the beam which is free to move. The other end is rigidly attached to the case. When an acceleration is applied along the input axis the beam bends. This causes the frequency of the surface acoustic wave to change proportionally to the applied strain. By measuring this change in frequency the acceleration can be determined.


Figure 74: Sketch of a surface acoustic wave accelerometer.

Accelerometers measure the gravito-inertial force (GIF), i.e. the sum of gravity and inertial forces caused by linear accelerations

(5)\[{\vec F_{GIF}} = {\vec F_{gravity}} + {\vec F_{LinAcc}} = \vec g + \frac{{{d^2}\vec x}}{{d{t^2}}}\]


Typically, the orientation of the local magnetic field is determined by the earth-magnetic field. However, in the presence of devices like old CRT-screens the local magnetic field may be strongly distorted. If you use the magnetic field data in your analysis, check the output for your experimental setup!

IMU Artefacts

The analysis of IMU signals sketched out above makes it clear that position measurements with IMUs require high quality sensors, for two reasons:

  • To obtain orientation or position, the measurement signals have to be integrated once or twice, respectively. Thus any bias on the measurement signal affects the orientation linearly with time, and the position even quadratically with time.
  • Small errors in the orientation signal induce erroneous position measurements. For example, a tilt error of 0.05\(^\circ\) will cause a component of the acceleration due to gravity with magnitude 0.0086 m/s\(^2\) to be projected onto the horizontal axes. This residual bias causes an error in the horizontal position which grows quadratically to 7.7 m after only 30 seconds.
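The acceleration error in the second point can be checked with a two-line calculation (the 0.05\(^\circ\) tilt is the value from the example above):

```python
import numpy as np

g = 9.81                        # gravity [m/s^2]
tilt = np.radians(0.05)         # assumed tilt error [rad]

acc_err = g * np.sin(tilt)      # gravity component projected onto the horizontal axes
```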

To reduce these problems, two approaches are possible: additional measurement signals can be used to provide information about the absolute orientation with respect to space; or external restrictions can be used to improve the accuracy.

The first option is used by MEMS devices which also include a magnetometer to measure the direction of the local magnetic field. This is approximately the direction of the earth magnetic field, although it can also be affected by devices like computer monitors etc. This additional information provides a constant orientation reference, and allows us to partially compensate for erroneous measurement signals.

External restrictions of the measurement can also be used to compensate for measurement errors. For example, we can repeatedly ask the subject to bring the sensor into a well-known position/orientation. Or we can use knowledge about the experiment, e.g. that during walking the movement of the sole of the shoe is typically zero while the shoe is on the ground.

Orientation in Space

Inertial sensors typically give you the linear acceleration \(\overrightarrow {acc}\) and the angular velocity \(\vec{\omega}\) of the sensor. But you have to watch out: these values are measured by the sensor (obviously), which means that you get linear acceleration and angular velocity in sensor coordinates. Let us write down the corresponding formulas, using the following notation:

\({\vec x^{space}}\) is a vector expressed in space coordinates, and \({\vec x^{object}}\) the corresponding vector expressed with respect to the object. If at time \(t_i\) we

  • know the initial conditions \(\vec x({t_i}),\,{\mathbf{R}}({t_i}),\overrightarrow {vel} ({t_i})\),
  • know the measured values \({\overrightarrow {acc} ^{object}}\) and \({\vec \omega ^{object}}\) , and
  • assume that during the period \(\Delta t\) the linear acceleration and angular velocity remain constant,

then we can calculate the new position and orientation. Let us first deal with the orientation, as it does not depend on the linear acceleration.

The orientation can be obtained from the angular velocity recordings measured by the inertial sensors, \({\vec \omega ^{object}}\). How can we use these angular velocities to reconstruct the current orientation of the object? To answer this question, let us start with an object whose orientation with respect to space is given by the rotation matrix \({\bf{R}}_{object,start}^{space}\). The subsequent rotation of this object for a short duration \(\Delta t\) about a constant, space-fixed axis \(\vec{n}\) with a constant velocity \(\omega\) is described by \(\Delta {{\bf{R}}^{space}} = \Delta {{\bf{R}}^{space}}(\vec n,\omega \cdot \Delta t)\). Then the new orientation of the object is given by

(6)\[{\bf{R}}_{object,new}^{space} = \Delta {{\bf{R}}^{space}} \cdot {\bf{R}}_{object,start}^{space}\]

Note that the sequence of the matrices in the matrix multiplication is essential. So far everything should be clear. Now watch out, the next step is somewhat tricky. If the angular velocity is recorded on the object (e.g. from an inertial tracking device mounted on the object), we first have to transform it from the object-fixed reference frame to a space-fixed reference frame. Let \(\Delta {{\bf{R}}^{object}}\) describe the movement as seen from the object, and \({\bf{R}}_{object}^{space}\) the orientation of the object with respect to space. Then the movement with respect to space is given by

(7)\[\Delta {{\bf{R}}^{space}} = {\bf{R}}_{object}^{space} \cdot \Delta {{\bf{R}}^{object}} \cdot {\left( {{\bf{R}}_{object}^{space}} \right)^{ - 1}}\]

Inserting Eqn. (7) into (6), and noting that for short durations \({\bf{R}}_{object}^{space} \approx {\bf{R}}_{object,start}^{space}\) we get

(8)\[{\bf{R}}_{object,new}^{space} = {\bf{R}}_{object,start}^{space} \cdot \Delta {{\bf{R}}^{object}} \cdot {\left( {{\bf{R}}_{object,start}^{space}} \right)^{ - 1}} \cdot {\bf{R}}_{object,start}^{space} = {\bf{R}}_{object,start}^{space} \cdot \Delta {{\bf{R}}^{object}}\]

Comparing this to the equation describing the relation between orientation and angular velocity (see previous chapter), we see that the only thing that changes as we switch from an angular movement recorded with respect to space to an angular movement recorded with respect to the object is the sequence of the matrix multiplication! For practical calculations it is easiest to determine the orientation from the angular velocity using quaternions, as we have already done in the previous chapter. There we used angular velocities that described the angular movement with respect to space, \({\vec \omega ^{space}}\). If instead we use angular velocities measured with respect to the object, \({\vec \omega ^{object}}\), Eqs. (6) and (8) tell us that the only thing that we have to change is the sequence of the rotations. In other words, using quaternions the final orientation of the object is given by

(9)\[q(t) = q(t_0) \circ \Delta q^{object}(t_1) \circ \Delta q^{object}(t_2) \circ ... \circ \Delta q^{object}(t_{n - 1}) \circ \Delta q^{object}(t_n)\]


(10)\[\Delta \vec{q}^{object}(t_i) = \vec{n}(t_i)\sin \left( \frac{\Delta \phi (t_i)}{2} \right) = \frac{\vec \omega ^{object}(t_i)}{\left| \vec \omega ^{object}(t_i) \right|}\sin \left( \frac{\left| \vec \omega ^{object}(t_i) \right|\,\Delta t}{2} \right)\]

with \(q(t_0)\) the starting quaternion, and \(\Delta q^{object}(t_i)\) the quaternion whose vector part \(\Delta \vec q^{object}(t_i)\) is given in Eq. (10).
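Eqs. (9) and (10) can be sketched in plain NumPy. The quaternion convention \([w, x, y, z]\) and the helper functions are assumptions of this sketch; the essential point is that each increment \(\Delta q^{object}\) is multiplied from the right, as required for body-fixed angular velocities:

```python
import numpy as np

def quat_mult(p, q):
    '''Hamilton product of two quaternions, stored as [w, x, y, z].'''
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_orientation(omega_body, dt, q0=None):
    '''Accumulate q(t) = q(t0) o dq(t1) o ... o dq(tn)  (Eq. 9), body-fixed omega.'''
    q = np.array([1., 0., 0., 0.]) if q0 is None else np.asarray(q0, float)
    for w in omega_body:
        norm = np.linalg.norm(w)
        if norm < 1e-12:        # no rotation in this interval
            continue
        dphi = norm * dt        # rotation angle during dt
        dq = np.r_[np.cos(dphi/2), w/norm * np.sin(dphi/2)]   # Eq. (10) + scalar part
        q = quat_mult(q, dq)    # increments multiply from the RIGHT
    return q
```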

Position in Space

The linear acceleration forces acting on a sensor are the sum of gravity, and the linear accelerations elicited by our movements in space:

(11)\[{\overrightarrow {acc} _{measured}} = {\overrightarrow {acc} _{movement}} + {\overrightarrow {acc} _{gravity}}\]

Note that the acceleration corresponding to gravity points up (!): gravity pulls our hair down, which corresponds to an acceleration up.

So if we want to know the linear acceleration, we first have to subtract gravity from the measured acceleration signal. For that, the orientation with respect to space first has to be determined from the gyroscope signals (see Fig. below).


Figure 75: Strapdown inertial navigation algorithm

If we know the initial position \(x(t=0)\) and the initial velocity \({\left. {\frac{{dx}}{{dt}}} \right|_{t = 0}}\), we can integrate the acceleration signals to obtain velocity and position in space.


\[\begin{split}{\vec g^{space}} = \left( {\begin{array}{*{20}{c}} 0\\ 0\\ { - 9.81} \end{array}} \right)\frac{m}{s^2}\,\, \Rightarrow \,\,\overrightarrow {acc} _{gravity}^{space} = \left( {\begin{array}{*{20}{c}} 0\\ 0\\ {9.81} \end{array}} \right)\frac{m}{s^2}\end{split}\]

If our head is tilted, gravity no longer points along the body-fixed vertical, but in a direction that depends on the orientation of our head (or, in general, of the measuring object) with respect to space. If the orientation of the object with respect to space is denoted by \({\bf{R}}_{object}^{space}\), then the measured direction of gravity is

\[\begin{split}\begin{aligned} \vec g_{gravity}^{object} & = {\left( {{\bf{R}}_{object}^{space}} \right)^{ - 1}} \cdot \vec g_{gravity}^{space}\\ {\bf{R}}_{object}^{space} \cdot \vec g_{gravity}^{object} & = \vec g_{gravity}^{space}\end{aligned}\end{split}\]

As we are interested in the movement of the object in space, let us look at Eq. (11) from a space-fixed coordinate system. Since

\[\overrightarrow {acc} _{measured}^{space} = {\bf{R}}_{object}^{space} \cdot \overrightarrow {acc} _{measured}^{object}\]

the linear acceleration caused by movement in space is

(12)\[\overrightarrow {acc} _{movement}^{space} = {\bf{R}}_{object}^{space} \cdot \overrightarrow {acc} _{measured}^{object} - \overrightarrow {acc} _{gravity}^{space}\]

We are always interested in the position with respect to space: the position of the object with respect to itself obviously remains a constant zero! When the acceleration through movement with respect to space is known, the position change can be found through integration:

(13)\[\begin{split}\begin{array}{l} \overrightarrow {vel} (t) = \overrightarrow {vel} ({t_0}) + \int\limits_{{t_0}}^t {\overrightarrow {acc} (t')\,dt'} \\ \vec x(t) = \vec x({t_0}) + \int\limits_{{t_0}}^t {\overrightarrow {vel} (t'')\,dt''} = \vec x({t_0}) + \overrightarrow {vel} ({t_0})*(t - {t_0}) + \int\limits_{{t_0}}^t {\int\limits_{{t_0}}^{t''} {\overrightarrow {acc} (t')\,dt'} } \,dt'' \end{array}\end{split}\]

When starting with a stationary object, i.e. \(\overrightarrow {vel} ({t_0}) = 0\), the change in position is given by

\[\Delta \vec P(t) = \vec P(t) - \vec P({t_0}) = \int\limits_{{t_0}}^t {\int\limits_{{t_0}}^{t''} {\overrightarrow {acc} (t')\,dt'} } \,dt''\]

Measuring the acceleration at discrete times \(t_i\;(i=0,\ldots,n)\), we have to replace Eqn. (13) with discrete equations:

(14)\[\begin{split}\begin{array}{l} \overrightarrow {vel} ({t_{i + 1}}) = \overrightarrow {vel} ({t_i}) + \overrightarrow {acc} ({t_i})*\Delta t\\ \vec x({t_{i + 1}}) = \vec x({t_i}) + \overrightarrow {vel} ({t_i})*\Delta t + \frac{{\overrightarrow {acc} ({t_i})}}{2}*\Delta {t^2} \end{array}\end{split}\]

with \(\Delta t\) the sampling period. Note that in Eqn. (14), \(\overrightarrow {acc}\) is the acceleration component caused by the linear movement with respect to space (Eqn. (12)).
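Eqn. (14) can be sketched as a short NumPy loop; the input is assumed to be the gravity-corrected movement acceleration from Eqn. (12), expressed in space coordinates:

```python
import numpy as np

def integrate_position(acc_movement, dt, vel0=None, x0=None):
    '''Discrete integration of Eqn. (14): movement acceleration -> velocity -> position.'''
    n, dim = acc_movement.shape
    vel = np.zeros((n, dim))
    pos = np.zeros((n, dim))
    v = np.zeros(dim) if vel0 is None else np.asarray(vel0, float)
    x = np.zeros(dim) if x0 is None else np.asarray(x0, float)
    for i, a in enumerate(acc_movement):
        x = x + v*dt + a/2*dt**2    # second line of Eqn. (14)
        v = v + a*dt                # first line of Eqn. (14)
        vel[i], pos[i] = v, x
    return vel, pos
```

For a constant acceleration and a stationary start, this reproduces the analytical result \(x = \frac{1}{2}a t^2\).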

Example: Pendulum

As an example, let us consider the signals measured by an IMU that is attached to a pendulum.


Figure 76: IMU attached to a pendulum.

from __future__ import division
'''Simulation of a "simple" pendulum.
This is not as simple as it looks. The signs are a bugger (force vs acceleration, up vs down,
orientation of coordinate system). Also, I was surprised how quickly one gets artefacts in the
reconstruction: already with a sample rate of 1 kHz, artefacts sneak in!

Author: Thomas Haslwanter
Date:   Dec-2014
Ver:    1.1
'''

import numpy as np
import matplotlib.pyplot as plt
from numpy import pi, cos, sin, zeros, nan, arange, vstack, array, ones, cross, r_
from thLib import savgol
from scipy.integrate import cumtrapz
import pandas as pd

g = 9.81    # gravity, [m/s^2]
lengthPendulum = 0.20    # [m]

def generateData(outFile = r'C:\temp\test.dat'):
    '''Simulate a pendulum in 2D: the equation of motion is \frac{d^2 \phi}{dt^2} = -\frac{g}{l} \sin(phi)
    Also, write the data to an out-file, so that they can be used for an exercise.'''
    tMax = 2   # [sec] duration of simulation
    rate = 10000  # sampling rate, [Hz]
    dt = 1./rate
    time = arange(0,tMax,dt)
    # differential equation of motion
    accFunc = lambda phi: -g/lengthPendulum * sin(phi)
    # Memory allocation
    phi = nan*ones(len(time))
    omega = nan*ones(len(time))
    # Initial conditions
    phi[0] = 0.1    # rad
    omega[0] = 0    # rad/s
    tStatic = 0.1   # [sec], initial static setup
    # Numerical integration
    for ii in range(len(time)-1):
        if time[ii] < tStatic:     # static initial condition
            omega[ii+1] = 0
        else:
            omega[ii+1] = omega[ii] + accFunc(phi[ii])*dt
        phi[ii+1] = phi[ii] + omega[ii+1]*dt    # Euler-Cromer method, is more stable
    # Find the position, velocity, and acceleration
    # The origin is at the center of the rotation
    pos = lengthPendulum * vstack([sin(phi), -cos(phi)]).T
    vel = savgol(pos, 5,3,1,rate)
    acc = savgol(pos, 5,3,2,rate)

    # Add gravity    
    accGravity = r_[0,g]
    gifReSpace = accGravity + acc
    # Transfer into a body-fixed system
    gifReBody = np.array([rotate(gif, -angle) for (gif, angle) in zip (gifReSpace, phi)])
    # Quickest way to write the "measured" data to a file, with headers
    df = pd.DataFrame(vstack((time, phi, omega, gifReBody.T)).T, columns=['time', 'phi', 'omega', 'gifBHor', 'gifBVer'])
    df.to_csv(outFile, sep='\t')
    print('Data written to {0}'.format(outFile))
    return df

def showData(data, phi, pos, length):
    '''Plots of the simulation, and comparison with original data'''
    fig, axs = plt.subplots(1,3)
    # Show Phi
    axs[0].plot(data['time'], phi)
    axs[0].set_xlabel('Time [sec]')
    axs[0].set_ylabel('Phi [rad]')
    # Show Omega
    axs[1].plot(data['time'], data['omega'])
    axs[1].set_xlabel('Time [sec]')
    axs[1].set_ylabel('Omega [rad/s]')
    # Show measured force
    axs[2].plot(data['time'], data[['gifBHor', 'gifBVer']])
    axs[2].set_xlabel('Time [sec]')
    axs[2].set_ylabel('GIF re Body [m/s^2]')
    axs[2].legend(('Hor', 'Ver'))
    # x/y plot of the position
    fig2, axs2 = plt.subplots(1,2)
    axs2[0].plot(pos[:,0], pos[:,1])
    axs2[0].set_title('Position plot')
    axs2[1].plot(length*np.sin(phi), 'r')

def rotate(data, phi):
    '''Rotate 2d data in column form'''
    Rmat = array([[cos(phi), -sin(phi)],
                  [sin(phi), cos(phi)]])
    rotated = Rmat.dot(data.T).T
    return rotated

def reconstruct_movement(omega, gifMeasured, rate, length):
    ''' From the measured data, reconstruct the movement '''
    # Initial orientation
    phi0 = np.arctan2(gifMeasured[0,0], gifMeasured[0,1])
    # Calculate phi, by integrating omega
    dt = 1./rate
    phi = cumtrapz(omega, dx=dt, initial=0) + phi0
    # From phi and the measured acceleration, get the movement acceleration
    accReSpace = np.array([rotate(gif, angle) - array([0, g]) for (gif, angle) in zip (gifMeasured, phi)])
    # Position and Velocity through integration
    vel = nan*ones(accReSpace.shape)
    pos = nan*ones(accReSpace.shape)
    init = length*array([sin(phi[0]), -cos(phi[0])])
    for ii in range(accReSpace.shape[1]):
        vel[:,ii] = cumtrapz(accReSpace[:,ii], dx=dt, initial=0)
        pos[:,ii] = cumtrapz(vel[:,ii],      dx=dt, initial=0)
        pos[:,ii] += init[ii]    # initial condition
    return (phi, pos)

if __name__=='__main__':
    '''Main part'''
    outFile = r'C:\temp\test.dat'
    data = generateData(outFile = outFile)
    # Get the data: this is just to show how such data can be read in again
    #data = pd.io.parsers.read_table(outFile, skipinitialspace=True)
    # From the measured data, reconstruct the movement:
    # First, find the sampling rate from the time
    rate = 1./np.diff(data['time'][:2].values)[0]
    phi, pos = reconstruct_movement(omega = data['omega'].values,
                                    gifMeasured = data[['gifBHor','gifBVer']].values,
                                    rate = rate,
                                    length = lengthPendulum)
    # Show the results - this should be a smooth oscillation
    showData(data, phi, pos, lengthPendulum)