
How to Estimate Encoder Velocity Without Making Stupid Mistakes: Part II (Tracking Loops and PLLs)

Yeeehah! Finally we're ready to tackle some more clever ways to figure out the velocity of a position encoder. In Part I, we looked at the basics of velocity estimation. Then in my last article, I talked a little about what's necessary to evaluate different kinds of algorithms. Now it's time to start describing them. We'll cover tracking loops and phase-locked loops in this article, and Luenberger observers in Part III.

But first we need a fairly simple, but interesting, example system to keep in mind. And here it is:

Imagine we have a rigid pendulum: a steel weight mounted on a thin steel rod attached to a bearing, subject to several forces:

  • gravity
  • friction (from the bearing and from viscous drag through the air)
  • two permanent magnets, each mounted in a case at the top and bottom of the pendulum swing
  • two electromagnets, each mounted slightly off center of the pendulum swing

The electromagnets are triggered electronically, and impart a very short and very strong impulse to the pendulum; there are sensors in the base that accumulate the amount of time the pendulum stays at the top or bottom of the swing, and when the accumulators reach a threshold, they reset and trigger the electromagnet pulses.

Gravity and friction are easy to model. We talked about that last time, for a rigid pendulum without any of this magnet funny business:

$$\begin{eqnarray} \frac{d\theta}{dt} &=& \omega \cr \frac{d\omega}{dt} &=& -\frac{g}{L} \sin \theta - B \omega \end{eqnarray}$$

where θ is the angle of the pendulum relative to the bottom of its swing, ω is its angular velocity, g = the gravitational acceleration (9.8 m/s² at sea level on Earth), L is the pendulum length, and B is a damping coefficient.
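If you want to watch these dynamics in action, the equations are easy to integrate numerically. Here's a quick sketch (the parameter values are arbitrary choices of mine, and the magnets aren't modeled):

```python
import numpy as np
from scipy.integrate import odeint

g, L, B = 9.8, 1.0, 0.1   # assumed values: 1 m pendulum, light damping

def deriv(state, t):
    # the two pendulum equations: dtheta/dt = omega,
    # domega/dt = -(g/L)*sin(theta) - B*omega
    theta, omega = state
    return [omega, -(g/L)*np.sin(theta) - B*omega]

t = np.linspace(0, 10, 1001)
traj = odeint(deriv, [1.0, 0.0], t)   # released from 1 radian, at rest
theta = traj[:, 0]                    # damped oscillation toward theta = 0
```

The result is the familiar decaying oscillation; add the magnet terms and it stops being so well-behaved.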

The permanent magnets cause a lot of attractive force when the pendulum is close by, but hardly any when it gets further away, so the oscillation frequency gets larger at the bottom of the swing. The electromagnets give the pendulum a kick to keep it from settling at the top or bottom; the reason they're mounted slightly off center is to make them more efficient at exerting torque on the pendulum. (If they were mounted exactly at top or bottom, and they triggered when the pendulum was exactly aligned, then there would be no torque exerted, just a pull downwards countered by the tension in the rod. Being off center changes the direction of force to have a sideways component.)

We won't be using this pendulum example quantitatively today, but keep it in the back of your mind: it's an example where most of the time the position changes very predictably, but sometimes it changes rapidly and may be hard to track.

We'll measure position using an incremental encoder. Let's think about this a bit carefully. Given position readings, what kind of information do we know, and what kind of errors can we make?

Information Theory and Estimation
(well, some handwaving at least)

If you get into hard-core estimation theory, you'll deal with all sorts of matrix equations to handle multiple variables and the cross-correlation of their probability distributions. There are terms like covariance matrices and Gram-Schmidt orthogonalization and the Fisher information matrix and the Cramér-Rao bound. I once understood these really well for a few weeks while taking a class in college. Alas, that light bulb has dimmed for me….

Anyway, don't worry about trying to understand this stuff. One major take-away from it is that the concepts of standard deviation (how much variation there is in a measurement due to randomness) and information are related. The Cramér-Rao bound basically says that information and variance (the square of standard deviation) are inversely related: the more variance your measurements have, the less information you have.

For Gaussian distributions, the relationships are exact, and if I combine independent measurements of the same underlying parameter M in an optimal way, the amount of information accumulates linearly. So if I have a measurement M1 of that parameter with a variance of 3.0 (information = 1/3), and an independent, uncorrelated measurement M2 of the same parameter with a variance of 6.0 (information = 1/6), then the optimal combination of the two will yield a combined measurement \( \hat{M} \) with a variance of 2.0 (information = 1/2 = 1/3 + 1/6). It turns out that this optimal combination happens to be \( \hat{M} = \frac{2}{3}M_1 + \frac{1}{3}M_2 \). It's a weighted linear sum of the individual measurements: the weights always sum to 1.0, and the weights have the same proportion as each measurement's amount of information.

So one central concept of estimation is that you try to make optimal use of the information you have. If I combine measurements in another way, I'll have less information. (With the M example before, if I make the naive estimate \( \hat{M} = \frac{1}{2}M_1 + \frac{1}{2}M_2 \), then the resulting variance happens to be \( (\frac{1}{2})^2 \times 3.0 + (\frac{1}{2})^2 \times 6.0 = 2.25 \), which is larger, so my estimate is slightly less likely to be as accurate.)
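You can check this arithmetic with a quick Monte Carlo experiment (an illustration I've added, using the same variances as the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200000
M = 5.0                                   # true value of the parameter
M1 = M + rng.normal(0, np.sqrt(3.0), N)   # variance 3.0 -> information 1/3
M2 = M + rng.normal(0, np.sqrt(6.0), N)   # variance 6.0 -> information 1/6

Mopt = (2.0/3)*M1 + (1.0/3)*M2    # information-weighted combination
Mnaive = 0.5*M1 + 0.5*M2          # naive equal-weight average

print(np.var(Mopt))    # close to 2.0
print(np.var(Mnaive))  # close to 2.25
```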

Back to our pendulum + encoder example: what do we know, and what kind of errors do we have?

We know the basic equations of the pendulum's motion; most importantly, knowing the pendulum position at one instant in time means that we're fairly likely to know it at a time shortly thereafter. The position measurements are highly correlated, so we should try to make the best use of all the measurements we can take. Same thing with the velocity: as long as the torque on the pendulum isn't too high, knowing the pendulum velocity at one instant means we're fairly likely to know it at a time shortly thereafter.

Here are the errors that prevent us from knowing the pendulum position with perfect precision:

  • The position measurements have quantization errors, some due to the fact that we're using an encoder with finite resolution, and some due to the fact that the encoder edges may contain small position errors caused by manufacturing tolerances. (I discussed this in a previous article.)
  • The position measurements have noise: since this is a digital system that's minimal, but we can occasionally get encoder glitch pulses.
  • If we stop paying attention to the pendulum for a few seconds, and someone spins it around a few times while we're not looking, we can't tell the difference: we have no absolute reference for the number of physical rotations made by the pendulum.
  • We may not know all of the pendulum parameters exactly (gravity, pendulum length, mass, viscous drag)
  • We may not be able to model some of the dynamics exactly (permanent magnets)
  • We may not be able to predict the disturbances exactly (the electromagnetic pulses, or someone coming by and giving it a whack)

The best estimators are the ones that take all these effects into account, reduce the net estimation error, and are robust to unexpected errors. The idea of robustness is subtle: if you design an estimator that's great when you know the pendulum parameters exactly and only takes one encoder reading per second, it may still give you a very low error, but then someone squirts WD-40 into the pendulum pivot to change the viscous drag and the estimator starts being way off.

What's wrong with the kinds of estimators we talked about in Part I? Well, nothing really; they're simple, but they're just not optimal, or even close. If you're taking position measurements 10,000 times a second, and you compute velocity by taking each position measurement and subtracting off the position measured 1 second earlier, then you ignore all the potential information available in those other 9,998 readings between the two.

Common Structures of More Sophisticated Estimators

With that, let's go quickly over a small menagerie of estimator structures.

  • Phase-locked loops (PLLs): given oscillating inputs, a variable-frequency oscillator attempts to follow those inputs by using a phase detector and a feedback loop to try to drive the phase error to zero.
  • Tracking loops: an estimator attempts to follow the input by using an error and a feedback loop to try to drive the error to zero. (Similar to a PLL, but without the oscillating part.)
  • Luenberger observers: a kind of tracking loop that tries to simulate the dynamics of the real system, and adds a corrective term based on a fixed gain applied to the difference between the real and simulated systems.
  • Adaptive filters (e.g. Least Mean Squares or Recursive Least Squares filters): a kind of tracking loop that tries to simulate the dynamics of the real system, then uses the error between the real and simulated systems to adjust the parameters of the simulation
  • Kalman filters: essentially a Luenberger observer with a variable gain, where the variable gain is derived from estimates of measurement noise to determine the optimal gain.

Adaptive and Kalman filters are best used in cases where the sources of noise or error are "well-behaved", that is, they have a distribution that is somewhat Gaussian in character and uncorrelated with the measurements. Kalman filters were developed for guidance systems in the aerospace industry: things like radar and GPS and trajectory tracking are really good applications. Kalman filters also do very well when the signal-to-noise ratio varies with time, as they can adapt to such a situation. I often read articles that try to apply Kalman filters in the sensorless position estimators used in motor control, and it's saddening to see this, since in those applications the errors are more often due to imperfect cancellation of coupling between system states, rather than random noise, and the errors are anything but Gaussian or uncorrelated. Likewise in this encoder application: quantization noise from a position encoder isn't really a good match for an adaptive filter or Kalman filter, so I won't discuss them further.

The remaining three structures mentioned here are related. I'll cover PLLs and tracking loops next, leaving Luenberger observers for Part III. Since tracking loops are fairly general, and PLLs and Luenberger observers are special kinds of tracking loops, it makes sense to cover tracking loops first.

Tracking loops

Here's a really simple example. Let's say we have a continuous position signal, and when we measure it, we get the position signal along with additive Gaussian white noise:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0,4,5000)
pos_exact = (np.abs(t-0.5) - np.abs(t-1.5) - np.abs(t-2.5) + np.abs(t-3.5))/2
pos_measured = pos_exact + 0.04*np.random.randn(t.size)

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(1,1,1)
ax.plot(t, pos_measured, '.', t, pos_exact)

Now, your first reaction might be, "Hey, let's just filter out the noise with a low-pass filter!" That's not such a bad idea, so let's do it:

import scipy.signal

def lpf1(x,alpha):
    '''1-pole low-pass filter with coefficient alpha = 1/tau'''
    return scipy.signal.lfilter([alpha], [1, alpha-1], x)
def rms(e):
    '''root-mean-square'''
    return np.sqrt(np.mean(e*e))
def maxabs(e):
    '''max absolute value'''
    return max(np.abs(e))

alphas = [0.2,0.1,0.05,0.02]
estimates = [lpf1(pos_measured, alpha) for alpha in alphas]

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(2,1,1)
ax.plot(t, pos_exact)
for y in estimates:
    ax.plot(t, y)
ax.legend(['exact'] + [r'$\alpha = %.2f$' % alpha for alpha in alphas])

ax = fig.add_subplot(2,1,2)
for alpha,y in zip(alphas,estimates):
    err = y-pos_exact
    ax.plot(t, err)
    ax.set_ylabel('position error')
    print('alpha=%.2f -> rms error = %.5f, peak error = %.4f' % (alpha, rms(err), maxabs(err)))
alpha=0.20 -> rms error = 0.01369, peak error = 0.0470
alpha=0.10 -> rms error = 0.01058, peak error = 0.0366
alpha=0.05 -> rms error = 0.01237, peak error = 0.0348
alpha=0.02 -> rms error = 0.02733, peak error = 0.0498

And here we have the same problem we ran into when we were looking at evaluating algorithms in Part 1.5: there's a tradeoff between noise level and the effects of phase lag and time delay. A simple low-pass filter doesn't do very well tracking ramp waveforms: the time delay causes a DC offset.
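To put a number on that DC offset: for a discrete-time 1-pole filter with coefficient α, a ramp of slope v lags by v·dt·(1−α)/α in steady state. Here's a quick check I've added, reusing one of the filter coefficients from above:

```python
import numpy as np
import scipy.signal

alpha = 0.05
dt = 4.0/5000                 # sample period matching the example above
t = np.arange(0, 4, dt)
ramp = t                      # position moving at 1 unit/sec

# same 1-pole filter: y[n] = (1-alpha)*y[n-1] + alpha*x[n]
filtered = scipy.signal.lfilter([alpha], [1, alpha-1], ramp)

lag = ramp[-1] - filtered[-1]
print(lag)                    # ~ dt*(1-alpha)/alpha = 0.0152
```

Shrinking α to reduce noise makes this offset proportionally worse, which is exactly the tradeoff in the plots above.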

With a tracking loop, we try to model the system and drive the steady-state error to zero. Let's model our system as a velocity that varies, and integrate the estimated velocity to get position. The velocity will be the output of a proportional-integral control loop driven by the position error.

def trkloop(x,dt,kp,ki):
    def helper():
        velest = 0
        posest = 0
        velintegrator = 0
        for xmeas in x:
            posest += velest*dt
            poserr = xmeas - posest
            velintegrator += poserr * ki * dt
            velest = poserr * kp + velintegrator
            yield (posest, velest, velintegrator)
    y = np.array([yi for yi in helper()])
    return y[:,0],y[:,1],y[:,2]

[posest,velest,velestfilt] = trkloop(pos_measured,t[1]-t[0],kp=40.0,ki=900.0)

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(2,1,1)
ax.plot(t, pos_measured, '.', t, posest)

ax = fig.add_subplot(2,1,2)
err = posest-pos_exact
ax.plot(t, err)
ax.set_ylabel('position error')
print('rms error = %.5f, peak error = %.4f' % (rms(err), maxabs(err)))
rms error = 0.00724, peak error = 0.0308

The RMS and peak error here are lower than in the 1-pole low-pass filter. Not only that, but in the process, we get an estimate of velocity! We actually get two estimates of velocity. One is the integrator of the PI loop used in the tracking loop; the other is the output of the PI loop. Let's plot these (integrator in blue, PI output in yellow):

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(1,1,1)
vel_exact = (t > 0.5) * (t < 1.5) + (-1.0*(t > 2.5) * (t < 3.5))
ax.plot(t, velestfilt, t, velest, t, vel_exact)


The PI output looks terrible; the PI integrator looks okay. (There's quite a bit of noise here, so it's really difficult to get a good output signal.)

Which one is better to use? Well, for display purposes, I'd use the integrator value; it doesn't contain high-frequency noise. For input into a feedback loop (like a velocity controller), I might use the PI output directly, since the high-frequency stuff will most likely get filtered out anyway.

So the tracking loop is better than a plain low-pass filter, right?

Well, in reality there's a trick here. This tracking loop is a linear filter, so it can be written as a regular IIR low-pass filter. The thing is, it's a 2nd-order filter, whereas we compared it against a 1st-order low-pass filter, so that's not really fair.

But by writing it as a tracking loop, we get a more physical meaning for the filter state variables, and more importantly, if we want to, we can handle nonlinear system behavior using a tracking loop that includes nonlinear elements.

For those of you interested in the Laplace-domain algebra (for the rest of you, skip to the next section), the estimated position \( \hat{x} \) and estimated velocity \( \hat{v} \) behave like this (quick refresher: \( 1/s \) is the Laplace-domain equivalent of an integrator):

$$\begin{eqnarray} \hat{x}&=&\frac{1}{s}\hat{v}\cr \hat{v}&=&\left(\frac{k_i}{s} + k_p\right)(x-\hat{x}) \end{eqnarray}$$

which we can then solve to get

$$\hat{x} = \frac{k_p s + k_i}{s^2}(x-\hat{x}) $$

and then (after a little more algebraic manipulation)

$$\hat{x} = \frac{\frac{k_p}{k_i}s + 1}{\frac{1}{k_i}s^2 + \frac{k_p}{k_i}s + 1}x $$

which is just a low-pass filter with two poles and one zero, whereas the 1-pole low-pass filter is

$$\hat{x} = \frac{1}{\tau s+1}x$$

The error in these systems is

$$\tilde{x} = x-\hat{x} = \frac{\frac{1}{k_i}s^2}{\frac{1}{k_i}s^2 + \frac{k_p}{k_i}s + 1}x $$

and

$$\tilde{x} = \frac{\tau s}{\tau s+1}x$$

If we use the Final Value Theorem, the steady-state error of both of these for a position step input is zero, but the steady-state error for a position ramp input (velocity step input) is nonzero for the 1-pole low-pass filter, whereas it is still zero for the tracking loop. That's because of the zero in the tracking loop's transfer function.
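If you'd rather not grind through the limit by hand, a computer algebra system can apply the Final Value Theorem to the two error transfer functions above (this is just a check I've added, using sympy):

```python
import sympy as sp

s, tau, kp, ki = sp.symbols('s tau k_p k_i', positive=True)

# error transfer functions from above
E_track = (s**2/ki) / (s**2/ki + kp*s/ki + 1)   # tracking loop
E_lpf = tau*s / (tau*s + 1)                     # 1-pole low-pass filter

# Final Value Theorem: steady-state error = lim s->0 of s * E(s) * X(s),
# with X(s) = 1/s^2 for a unit ramp input
ramp = 1/s**2
print(sp.limit(s*E_track*ramp, s, 0))   # 0 for the tracking loop
print(sp.limit(s*E_lpf*ramp, s, 0))     # tau for the low-pass filter
```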

Want to track position in the case of constant acceleration? Then go ahead and add another integrator… just make sure you analyze the transfer function and add a proportional term to that integrator so the resulting filter is stable.

Phase-locked loops (PLL)

Tracking loops are great! There's a special class of tracking loops for handling problems where you need to lock onto the phase or frequency of a periodic signal. These are called phase-locked loops (go figure!), and they usually consist of the following structure:

The idea is that you have a voltage-controlled oscillator (VCO) creating some output, which goes through a feedback filter, gets compared in phase against the input by a phase detector, and the phase error signal goes through a loop filter before it's used as a control signal for the VCO. The loop filter's input and output are essentially DC signals proportional to output frequency; the other signals in the diagram are periodic signals. The feedback filter is usually just a passthrough (no filter) or a frequency divider. Most microcontrollers these days with PLL-based clocks have a divide-by-N block in the feedback filter, which has the net effect that the output of the PLL multiplies the input frequency by N. This way you can take, for example, an 8 MHz crystal oscillator and turn it into a 128 MHz clock signal on the chip: as a result, you don't need to distribute high-frequency clock signals on your printed circuit board, just a low-frequency clock, and it gets multiplied up internal to the microcontroller. At steady state, the signals in the PLL are sine waves or square waves, except for the VCO input, which is a DC voltage; the inputs to the phase detector line up in phase. (Digital PLLs are possible as well, in which case the VCO is replaced by a digitally-controlled oscillator with a digital input representing the control frequency.)

One simple example of a PLL is where the phase detector is a multiplier that acts on sine waves, the loop filter is an integrator, and there's no feedback filter, just a passthrough. In this case you have

$$\begin{eqnarray} V_{in} &=& A \sin \phi_i(t) \cr \phi_i(t) &\approx& \omega t + \phi_{i0}\cr V_{out} &=& B \sin \phi_o(t) \cr V_{pd} &=& V_{in}V_{out} = AB \sin \phi_i(t) \sin \phi_o(t) \cr &=& \frac{AB}{2} \left(\cos (\phi_i(t) - \phi_o(t)) - \cos(\phi_i(t) + \phi_o(t))\right) \end{eqnarray}$$

The phase detector outputs a sum and a difference frequency: if the output frequency is about the same, then the sum term \( \cos(\phi_i(t) + \phi_o(t)) \) is at about double the input frequency, and the difference term \( \cos(\phi_i(t) - \phi_o(t)) \) is at low frequency. The loop filter is designed to filter out the double-frequency term and integrate the low-frequency term:

$$\begin{eqnarray} V_{VCO\,in} &\approx& K\sin(\phi_i(t) - \phi_o(t)) + f_0 \end{eqnarray}$$

This will reach equilibrium with a constant phase difference between \( \phi_i(t) \) and \( \phi_o(t) \) → the loop locks onto the input phase!

In general, phase-locked loops with sine-wave signals tend to have dynamics that look like this:

$$\begin{eqnarray} \frac{dx}{dt} &=& A\sin \tilde{\phi} = A\sin(\phi_i - \phi_o) \cr \tilde{\omega} &=& -x - B\sin \tilde{\phi} - C\sin 2\omega t\cr \frac{d\tilde{\phi}}{dt} &=& \tilde{\omega} \end{eqnarray}$$

If you're not familiar with these types of equations, your eyes may glaze over. It turns out that they have a very similar structure to the rigid pendulum equations above! (The plain pendulum equations, with only gravity, inertia, and damping; no magnets.) With a good loop filter, the high-frequency amplitude C is very small, and we can neglect that term. At low values of phase error \( \tilde{\phi} \), the phase error oscillates with a characteristic frequency and a decaying amplitude. At high values of \( \tilde{\phi} \) there are some weird behaviors, which can be similar to those of a pendulum spinning around before it settles down.

t = np.linspace(0,5,10000)
def simpll(tlist,A,B,omega0,phi0):
    def helper():
        phi = phi0
        x = -omega0
        omega = -x - B*np.sin(phi)
        it = iter(tlist)
        tprev = next(it)
        yield(tprev, omega, phi, x)
        for t in it:
            dt = t - tprev
            # Verlet solver:
            phi_mid = phi + omega*dt/2
            x += A*np.sin(phi_mid)*dt
            omega = -x - B*np.sin(phi_mid)
            phi = phi_mid + omega*dt/2
            tprev = t
            yield(tprev, omega, phi, x)
    return np.array([v for v in helper()])

v = simpll(t,A=1800,B=10,omega0=140,phi0=0)
omega = v[:,1]
phi = v[:,2]

fig = plt.figure(figsize=(8,6), dpi=80)
ax = fig.add_subplot(2,1,1)
ax.plot(t, omega)
ax.set_ylabel(r'$\tilde{\omega}$ ', fontsize=20)
ax = fig.add_subplot(2,1,2)
ax.plot(t, phi/(2*np.pi))
ax.set_ylabel(r'$\tilde{\phi}/2\pi$ ', fontsize=20)


This is typical of the behavior of phase-locked loops: because there is no absolute phase reference, with a large initial frequency error, you can get cycle slips before the loop locks onto the input signal. It's often useful to plot the behavior of phase and frequency error in phase space, rather than as a pair of time-series plots:

fig = plt.figure(figsize=(8,6), dpi=80)
ax = fig.add_subplot(1,1,1)
ax.plot(phi/(2*np.pi), omega)
ax.set_xlabel(r'phase error (cycles) = $\tilde{\phi}/2\pi$', fontsize=16)
ax.set_ylabel(r'velocity error (rad/sec) = $\tilde{\omega}$', fontsize=16)

We can also try graphing a bunch of trials with different initial conditions:

import math

fig = plt.figure(figsize=(8,6), dpi=80)
ax = fig.add_subplot(1,1,1)

t = np.linspace(0,5,2000)
for i in range(-2,2):
    for s in [-2,-1,1,2]:
        omega0 = s*100
        v = simpll(t,A=1800,B=10,omega0=omega0,phi0=(i/2.0)*np.pi)
        omega = v[:,1]
        phi = v[:,2]
        k = math.floor(phi[-1]/(2*np.pi) + 0.5)
        phi -= k*2*np.pi
        for cycrepeat in np.array([-2,-1,0,1,2])+np.sign(s):
            ax.plot((phi + cycrepeat*2*np.pi)/(2*np.pi), omega)
ax.set_xlabel(r'$\tilde{\phi}/2\pi$ ',fontsize=20)
ax.set_ylabel(r'$\tilde{\omega}$ ',fontsize=20)


Crazy-looking, huh? The trajectory in phase space oscillates until it gets close enough to one of the stable points, and then swirls around with decreasing amplitude.

PLLs should always be tuned properly: there are tradeoffs in choosing the two gains A and B that affect loop bandwidth and damping, and also noise rejection and lock acquisition time. I'll cover that in another article, but for now we'll try to keep things at a fairly high level.

Is a PLL relevant to our encoder example? Well, yes and no.

The "no" answer (PLLs aren't relevant) is true if we use a dedicated encoder counter; aside from initial location via an index pulse, the encoder counter will always give us an exact position. We don't need to guess whether the encoder is at position X or position X+1 or position X+2. If we want to smooth out the position and estimate velocity, we can use a regular tracking loop and know with certainty that we will always end up at the correct position.

The "yes" answer (PLLs are relevant) is true if we use an encoder counter in a very noisy system. We may get spurious encoder counts that cause us to slip in the 4-count encoder cycle (00 -> 01 -> 11 -> 10 -> 00). In this case a PLL can be very useful, because it will reject high-frequency glitches. Alternatively, if we're using position sensors that are more analog in nature (resolvers or analog Hall sensors, or sensorless estimators), PLLs are very appropriate, especially if they're a set of analog sensors. Here's why:

Scalar vs. vector PLLs

Let's look at that good old sine wave again:

t = np.linspace(0,1,1000)
tpts = np.linspace(0,1,5)
f = lambda t: 0.9*np.cos(2*np.pi*t)
''' f(t)  = A*cos(omega*t)'''
fderiv = lambda t: -0.9*2*np.pi*np.sin(2*np.pi*t)
''' f'(t) = -A*omega*sin(omega*t)'''
fig = plt.figure(figsize=(8,6),dpi=80); ax=fig.add_subplot(1,1,1)
phasediff = 6.0/360
ax.plot(t, f(t), t, f(t-phasediff))
for tp in tpts:
    # draw short tangent segments showing the slope at each point
    slope = fderiv(tp)
    a = 0.1
    ax.plot([tp-a, tp+a], [f(tp)-slope*a, f(tp)+slope*a], 'k')
ax.set_xticklabels(['%d' % x for x in np.linspace(0,360,13)]);

Here's two sine waves, actually; the two are 6° apart in phase. (Six degrees of separation! Ha! Sorry, couldn't resist.) Look at the difference between the resulting signals at different points in the cycle. Near 90° and 270°, when the signal is near zero, the slope is large, and we can easily distinguish these two signals by their values at the same time. When the signal is near its extremes, however, the slope is near zero, and the signals are very close to each other. Higher slope gives us more phase information. We also can't tell exactly where the signal is in phase just by looking at it at one point in time: if the signal value is 0, is the phase at 90° or 270°? They have the same value. Or if these signals represent the cosine of position, we can't tell whether the position is moving backwards or forwards, since \( \cos(x) = \cos(-x) \).

Now suppose we have two sine waves 90° apart:

t = np.linspace(0,1,1000)
f = lambda A,t: np.vstack([A*np.cos(t*2*np.pi),
                           A*np.sin(t*2*np.pi)]).transpose()
fig = plt.figure(figsize=(8,6),dpi=80); ax = fig.add_subplot(1,1,1)
ax.plot(t, f(0.9,t))

Here we can estimate phase by using both signals! When one signal is at its extreme, and its slope is zero, we get very little information from it, but we can get useful information from the other signal, which is passing through zero and is at maximum slope. It turns out that the optimal way to estimate phase angle from measurements of these two signals at a single instant is to use the arctangent: φ = atan2(y,x). We can identify the phase angle of these signals at any point in the cycle, and can distinguish whether the phase is going forwards or backwards. We can even estimate the error of the phase estimate: if the signals have amplitude A, and there is additive Gaussian white noise on both signals with rms value n, where n is small compared to A, it turns out that the resulting error in the phase estimate has an rms value of n/A in radians, independent of phase:

def phase_estimate_2d(A,n,N=20000):
    t = np.linspace(0,1,N)
    xy_nonoise = f(A,t)
    xy = xy_nonoise + n * np.random.randn(N,2)
    x = xy[:,0]; y = xy[:,1]
    def principal_angle(x,c=1.0):
        ''' find the principal angle: between -c/2 and +c/2 '''
        return (x+c/2)%c - c/2
    phase_error_radians = principal_angle(np.arctan2(y,x) - t*2*np.pi, 2*np.pi)
    plt.plot(t, phase_error_radians)
    plt.ylabel('phase error, radians')
    print('n/A = %.4f' % (n/A))
    print('rms phase error = ', rms(phase_error_radians))

phase_estimate_2d(0.9, 0.02)

n/A = 0.0222
rms phase error =  0.0223209290452

Now suppose we have a high-noise situation:

phase_estimate_2d(0.9, 0.25)

n/A = 0.2778
rms phase error =  0.293673249387

Oh, dear. When the signal plus noise results in readings near (0,0), it gets kind of nasty, and the phase can suddenly flip around. Let's say we're making measurements of x and y every once in a while, then calculating the phase using the arctangent, and we derive successive angle estimates of 3°, 68°, -152°, -20°, 17°, 63°. Did the angle wander near zero, with noise spikes at 68° and -152°, which we should filter out? Or did it increase fairly fast, wrapping around one full cycle from 3°, 68°, 208°, 340°, 377°, 423°? We can't tell; the principal angles are the same.

One big problem with the atan2() method is that it only tells us the principal angle, with no regard to past history. If we want to construct a coherent history, we have to use an unwrapping function:

angles = np.array([90,117,136,160,-171,-166,-141,-118,-83,-42,-27,3,68,-152,-20,17,63])
ierror = 13                      # index of the suspect sample (-152)
angles2 = angles+0.0; angles2[ierror] = 44
unwrap_deg = lambda deg: np.unwrap(deg/180.0*np.pi)*180/np.pi
fig = plt.figure(figsize=(8,6),dpi=80); ax = fig.add_subplot(1,1,1)
ax.plot(angles, '+')
ax.plot(unwrap_deg(angles))
ax.plot(unwrap_deg(angles2))
ax.legend(('principal angle','unwrapped angle'), loc='best')

And the problem is that in signals with high-frequency content, one single sample with a noise spike can lead to a cycle slip, because we can't distinguish a noise spike from a legitimate change in value. In the graph above, two different angle values measured at index 13 cause us to pick different numbers of revolutions in the unwrapped angle. Lowpass filtering after the arctan won't help us; lowpass filtering before the arctan will cause a phase error. There are two better solutions:

  • sampling faster (this lets us see large spikes more easily) and slew-rate limiting the unwrapped output
  • a phase-locked loop

A phase-locked loop will filter out noise more easily. Since we have two signals x and y, we need a vector PLL rather than a scalar PLL. One of the best approaches for a vector PLL with two sine waves 90 degrees out of phase is a quadrature mixer: if we use a phase detector (PD) that computes the cross product between estimated and measured vectors, we get a very nice result. If the incoming angle φ is unknown, then

$$\begin{eqnarray} x &=& A \cos \phi\cr y &=& A \sin \phi\cr \hat{x} &=& \cos \hat{\phi}\cr \hat{y} &=& \sin \hat{\phi}\cr \mathrm{PD\ output} &=& \hat{x}y - \hat{y}x \cr &=& A \cos \hat{\phi} \sin \phi - A \sin \hat{\phi} \cos \phi \cr &=& A \sin (\phi - \hat{\phi})\cr &=& A \sin \tilde{\phi} \end{eqnarray}$$

Just as a reminder: the ^ terms are estimates, the ~ terms are errors, and the "plain" x and y are measurements.

There's no high-frequency term to filter out here! That's one of the big advantages of a vector PLL over a scalar PLL; if we can measure (or derive from a series of measurements) quadrature components that are proportional to \( \cos \phi \) and \( \sin \phi \), we don't need as much of a filter. In reality, imperfect phase and amplitude relationships mean that some double-frequency term will make it into the output of the phase detector, but its amplitude should be fairly small.

A vector PLL is a tracking loop on the x and y measurements, but based on state variables in terms of phase angle and its derivatives. (Or to say it a different way: measurements are in rectangular coordinates, but state variables are in polar coordinates.) This is kind of the best of both worlds, because we can use information about reasonable changes in angle and amplitude, but not have to worry about angle unwrapping errors if we get single noise spikes, since we don't ever have to convert from principal angle (-180° to +180°) to unwrapped angle. If noise at one particular instant causes our (x,y) measurements to come close to zero, the phase detector output will be small and the effect on the PLL output will be minimal.
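Here's a rough sketch of such a quadrature-mixer PLL in code (my own illustration; the gains and test signal are assumptions, not from any published design):

```python
import numpy as np

def vector_pll(x, y, dt, kp, ki):
    # Quadrature-mixer PLL sketch: the phase detector is the cross product
    # xhat*y - yhat*x = A*sin(phi - phiest); a PI loop turns that error
    # into an estimated frequency, which is integrated to get the phase.
    phiest = 0.0
    omega_integrator = 0.0
    phiests = []
    for xm, ym in zip(x, y):
        pd = np.cos(phiest)*ym - np.sin(phiest)*xm
        omega_integrator += ki * pd * dt
        omega_est = kp * pd + omega_integrator
        phiest += omega_est * dt
        phiests.append(phiest)
    return np.array(phiests)

np.random.seed(0)
dt = 1e-4
t = np.arange(0, 2, dt)
phi = 2*np.pi*5.0*t                            # 5 Hz input
x = np.cos(phi) + 0.05*np.random.randn(t.size)
y = np.sin(phi) + 0.05*np.random.randn(t.size)
phiest = vector_pll(x, y, dt, kp=200.0, ki=10000.0)
# once locked, phiest tracks phi without ever computing an arctangent
```

Note that no atan2() or unwrapping appears anywhere: the loop never deals in principal angles, so a single noise spike can't cause a spurious cycle slip.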

Okay, well, what does this have to do with encoders?

You caught me: we've veered off onto a tangent, and it doesn't have much to do with digital encoders. (Resolvers and analog sensors, yes. Digital encoders, no.) But I wanted you to see the big picture before delving into the world of observers.


Tracking loops

  • Useful for combining information from many samples to minimize the effect of noise
  • Very general, with many specialized variants (PLLs, Kalman filters, observers, etc.)
  • Usually have zero steady-state error even for ramp inputs
  • Have filter state with physical meaning
  • Linear tracking loops are a kind of low-pass filter, but regular low-pass filters don't have zero steady-state error for ramp inputs, and don't necessarily have filter state with physical meaning

Phase-locked loops

  • A tracking loop for periodic signals, where an absolute phase reference can't be distinguished (e.g. 0° vs. 360° vs. 720°)
  • Cycle slips are possible
  • A divider in the feedback path can make the PLL act like a frequency multiplier
  • Scalar PLLs need to filter out high-frequency components of the phase detector output
  • Phase estimation in vector systems is superior to phase estimation in scalar systems, where ambiguity in direction and phase can't be avoided (analogous to the difference between the two kinds of arctangent: the two-input atan2() and the single-input atan())
  • Phase detection yields more information when the slope of signal change is higher, and less information when the incoming signal is at a minimum or maximum
  • Phase detectors in vector PLLs can have a much smaller high-frequency component, and therefore require less filtering
  • Vector PLLs that use quadrature mixers are superior to systems using arctan() + unwrap + tracking loop, because they're more immune to spurious cycle slips caused by noise spikes

Hope you found something useful, and happy tracking!

Next up: Observers!

© 2013 Jason M. Sachs, all rights reserved.
