Tolerance Analysis – Jason Sachs

Today we're going to talk about tolerance analysis. This is a topic that I've danced around in several earlier articles, but never really addressed in its own right. The closest I've come is Margin Call, where I discussed several different ways of determining design margin, and ran through some calculations to justify that it was safe to allow a certain amount of current through an IRFP260N MOSFET.

Tolerance analysis is used in electronics to determine the amount of variation of some quantity in a circuit design. It could be voltage, or current, or resistance, or amplifier gain, or power, or temperature. When you are designing a circuit, the components that you use will have some nominal value, like a 4.99kΩ resistor, or a 33pF capacitor, or a 1.25V voltage reference, or a 3.3V regulator. That 4.99kΩ resistance is just the nominal value; in reality, the manufacturer claims — and I discussed what this means in an article on datasheets — that the resistance is 4.99kΩ ± 1%, that is, between 4940 and 5040 ohms. Actually, most components have more than one nominal value to characterize their behavior; even a simple resistor has several:

  • resistance R
  • temperature coefficient of resistance
  • thermal resistance in °C / W
  • maximum voltage across the resistor
  • operating and storage temperature ranges

More complex components like microcontrollers may have hundreds of parameters described in their datasheet.

At any rate, what you care about isn't the variation in the component, but the variation of something important in your circuit: some quantity X. Tolerance analysis allows you to combine the variations of component values and determine the variation of X. Here is an example:

I work with digitally-controlled motor drives. It's usually a good idea to put a hardware overvoltage detector in a motor drive, so that if the voltage across the DC link gets too large, the drive shuts off before components attached to the DC link, like capacitors and transistors, can get damaged. This is important if the drive is operating in regeneration, where the motor is used to provide braking torque to the mechanical load and to convert the resulting energy into electrical form on the DC link, where it has to go somewhere. In an electric bike or a hybrid car, this energy will flow back into a battery. In some large AC mains-connected systems, it can flow back into the power grid. Otherwise, it will cause the DC link voltage to rise, storing some energy in the DC link capacitor, until one of the components fails or the motor drive stops regenerating. (Hint: you don't want a component to fail; you want the motor drive to stop regenerating.) The control of the motor drive is done through firmware, but you never know when something can go wrong, and that's why it's important to have an independent analog sensor that can shut down the gate drives for the power transistors if the DC link voltage gets too high.

So here's the goal of a hypothetical overvoltage detector for a 24V nominal system:

  • Provide a 5V logic output signal that is HIGH if the DC link voltage \( V_{DC} < V_{OV} \) (normal operation) and LOW if \( V_{DC} \ge V_{OV} \) (overvoltage fault), for some overvoltage threshold \( V_{OV} \).
  • \( V_{OV} > 30V \) to allow some design margin and avoid false trips
  • \( V_{OV} < 35V \) to ensure that capacitors and transistors don't experience voltage overstress.
  • The preceding thresholds are for slowly-changing values of \( V_{DC} \). (Instantaneous voltage detection is neither possible nor desirable: all real systems have some kind of noise and parasitic filtering.)
  • Under all conditions, \( V_{DC} \ge 0 \)
  • The logic output signal shall detect an overvoltage and produce a LOW value for any voltage step reaching at least 40V, in no more than 3μs after the beginning of the step.
  • Overvoltage spikes between \( V_{OV} - 1.0V \) and 40V, which exceed \( V_{OV} - 1.0V \) for no more than a 100ns pulse out of every 100μs, shall not cause a LOW value. (This sets a lower bound on noise filtering.)
  • The overvoltage detector will be located in an ambient temperature between -20°C and +70°C.
  • A 5V ± 5% analog supply is available for signal conditioning circuitry.

For the moment, forget about the dynamic aspects of this circuit (the 40V step detected in no more than 3μs, and the 100ns spikes up to 40V not causing a fault) and focus on the DC accuracy, namely that the circuit should trip somewhere between 30V and 35V.

Let's also assume, for the moment, that we have a comparator circuit that trips exactly at 2.5V. Then all we need to do is create a voltage divider so that this 2.5V comparator input corresponds to a DC link voltage between 30V and 35V.

We probably want to choose resistors so that their nominal values correspond to the midpoint of our allowable voltage range: that is, 32.5V in produces 2.5V out. That's a ratio of 13:1, so we could use R1 = 120K and R2 = 10K. Except if you look at the standard 1% resistor values (the so-called E96 preferred numbers), you'll see the closest is R1 = 121K and R2 = 10K, for a ratio of 13.1:1. That gets us 32.75V in → 2.5V out.
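As a quick sanity check on that choice, here's a small sketch (my own, not part of the original circuit walkthrough) that searches the E96 table for the value closest to the desired 13:1 ratio, assuming a fixed 10K bottom resistor:

```python
# Sketch (my own assumption, not from the article): find the E96 value closest
# to the desired 13:1 divider ratio, with a fixed 10k bottom resistor R2.
# The E96 preferred numbers follow 10**(i/96), rounded to 3 significant figures.
e96 = [round(10**(i/96), 2) for i in range(96)]

R2 = 10e3
target_ratio = 32.5 / 2.5                 # 13:1 overall, so R1 = 12 * R2 = 120k
candidates = [r * 100e3 for r in e96]     # E96 values in the 100k decade
R1 = min(candidates, key=lambda R: abs((R + R2)/R2 - target_ratio))

print(round(R1))                  # 121000 -- the E96 value closest to 120k
print(round((R1 + R2)/R2, 3))     # 13.1, i.e. 32.75V in -> 2.5V out
```

This just confirms the hand-picked values: 120K isn't an E96 value, and 121K is the nearest one that is.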

But if these are 1% resistors, then their room-temperature values can vary by as much as ±1%:

  • R1 = 121K nominal, 119.79K minimum, 122.21K maximum
  • R2 = 10K nominal, 9.9K minimum, 10.1K maximum

If we want to go through a worst-case analysis, the extremes in resistor divider ratio are at minimum R1 and maximum R2, and at maximum R1 and minimum R2:

  • R1 = 119.79K, R2 = 10.1K → 12.86 : 1 → 32.15V in : 2.5V out
  • R1 = 122.21K, R2 = 9.9K → 13.34 : 1 → 33.36V in : 2.5V out

This gets to be tedious to do by hand, and it can help to use spreadsheets, or analysis in MATLAB / MathCAD / Scilab / Python / Julia / R or whatever your favorite scientific computing environment is:

import numpy as np

def resistor_divider(R1, R2):
    return R2/(R1+R2)

def worst_case_resistor_divider(R1, R2, tol=0.01):
    """Calculate the worst-case resistor divider ratio
    for R1, R2 +/- tol.
    Returns a 3-tuple of (nominal, minimum, maximum)."""
    return (resistor_divider(R1, R2),
            resistor_divider(R1*(1+tol), R2*(1-tol)),
            resistor_divider(R1*(1-tol), R2*(1+tol)))

ratios = np.array([1.0/K for K in worst_case_resistor_divider(121.0, 10.0, tol=0.01)])

def print_range(r, title, format="%.3f"):
    rs = sorted(list(r))
    print(("%-10s nom="+format+", min="+format+", max="+format) % (title, rs[1], rs[0], rs[2]))

print_range(ratios, 'ratios:')
print_range(ratios*2.5, 'DC link:')

ratios:    nom=13.100, min=12.860, max=13.344
DC link:   nom=32.750, min=32.151, max=33.361

I've just assumed we were going to use 1% resistors here, because I have experience choosing resistors. Even so, it helps to double-check assumptions, so as of this writing, here are the lowest-cost chip resistors for 120K (5%) or 121K (1% or better) in 1000 quantity from Digi-Key. Price is per thousand, so $2.33 per thousand = $0.00233 each. (I'm looking up 120K/121K rather than 10K because 10K is a common, dirt-cheap value.)

The upshot of this is that 1% and 5% chip resistors are now around the same price. You can save a teensy bit with a 5% resistor: $0.00233 vs $0.00262 is a difference of $0.00029 each, or 29 cents more per thousand for 1% resistors. That is much smaller than the cost of using a pick-and-place machine to assemble the component onto a PCB; it's harder to get a good estimate of how much that might cost, but if you look at online assembly cost calculators you can get an idea. Here's one that quotes $0.73 assembly cost per board at 1000 boards, 10 unique parts, 100 SMT components on one side of the board only; if you increase to 200 SMT components it's $1.28 per board, and at 300 SMT components it's $1.76 per board — at that rate, you're looking at roughly a half-cent to place each part, so don't quibble over the difference between resistors that cost 0.233 cents vs. 0.262 cents each. 5% and 1% chip resistors are effectively the same price; you're going to pay more to have them assembled than to buy them.

So 1% 0603 or 0402 resistors should be your default choice. I'd probably choose 0603 for most boards, since there's not much cost difference, and on prototypes I can solder these by hand if I really need to; 0402 and smaller require more skill than I can manage. (DON'T SNEEZE!)

0.5% resistors are a bit more expensive at about 1.5 – 1.7 cents each.

0.1% resistors are more expensive still at 4.6 – 4.7 cents each, but that's not too bad if you need the accuracy.

(The 1% and 5% resistors are thick-film chip resistors; the 0.5% and 0.1% resistors are thin-film. For the most part this is just an internal construction detail, but more on this subject in a bit.)

At any rate, we figured out that we could use 1% resistors and have our 2.5V threshold at the comparator correspond to something between 32.15V and 33.36V on the DC link.

Are we done yet?

No, because there are a bunch of other things that determine resistor values. Let's look at the datasheets for the KOA Speer RK73H ($4.08 per thousand for the 0603 1% RK73H1JTTD1213F) and the Panasonic ERJ ($8.72 per thousand for the 0603 1% ERJ-3EKF1213V):

Both have a temperature coefficient of ±100ppm/°C. Our ambient spec of -20°C to +70°C deviates from room temperature by as much as 45°C. (Let's assume the board doesn't heat up significantly.) That can add another 4500ppm = 0.45%.

Then there are all those gotchas where the resistance can change by 0.5% – 3% from overload, soldering heat, rapid change in temperature, moisture, endurance at 70°C, or high-temperature exposure. The more serious one is soldering heat.
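The arithmetic behind that combined figure is worth writing out once (a sketch of the numbers just quoted, not code from the original):

```python
# Combining initial tolerance with temperature drift, using the numbers above.
tcr = 100e-6              # ±100 ppm/°C temperature coefficient
dT = 45.0                 # worst-case deviation from 25°C, for -20°C to +70°C ambient
drift = tcr * dT          # 4500 ppm = 0.45%
total_tol = 0.01 + drift  # 1% initial tolerance plus tempco drift

print(round(total_tol, 4))   # 0.0145, i.e. ±1.45%
```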


Yes, that's right: your resistors meet their advertised 1% specs when they aren't connected to anything; if you actually want to solder them to a board, their resistance value will change. Some of you are reading this and thinking, "Sure, of course the resistance will change, it's some function of temperature R(T)." That implies a reversible resistance change, back to its original value, after the part cools down. But the resistance can also undergo irreversible change. Some of that is probably due to slight physical or chemical changes in the resistor caused by heating and cooling, and some of it is due to the strain placed on the part by the solder solidifying. I found a few documents on this subject. From a Vishay application note titled "Reading Between the Lines in Resistor Datasheets":

The end customer must also evaluate whether a tolerance offered by a manufacturer is really
practical. For example, some surface mount thin film chip resistors are offered in very tight
tolerances for very low resistance values. That is impressive on the datasheet but not
compatible with assembly processes. As these resistors are mounted on the board there is a
resistance change due to solder heat. The solder terminations melt, flow, and re-solidify with
changed resistance values. For low-value resistors the amount of resistance change is far
greater than the specified tolerance. Having paid a premium price for an impractically tight
tolerance, the customer ends up with looser-tolerance resistors once they are assembled on the
board.

One study, Capacitors and Resistors Mounting Guide Survey Based on Industry Manufacturers' Public Documents, mentions sulfur contamination:

Sulphur contamination is mainly related to the use and reliability of thick-film chip resistors
with an Ag-system inner termination. The silver in the inner termination is very susceptible to
contamination via sulphur, which produces silver sulphide in chip resistors. Silver is so prone to
combination with sulphur that the sulphur diffuses through the outer termination layers to the inner
termination, forming silver sulphide. Silver sulphide unfortunately makes the termination material nonconductive and effectively raises the resistance value until it is essentially an open circuit. The reaction
speed in this case is greatly influenced by sulphur gas density, temperature and humidity. This
process can be initiated or inhibited already by heat-stress while mounting.

Inert gas atmosphere mounting (as mentioned in this chapter above) can be recommended
as a prevention measure to suppress the sulphide contamination issues.

As for the effects of strain, one concern is piezoresistivity: you can read another Vishay application note titled "Mechanical Stress and Deformation of SMT Components During Temperature Cycling and PCB Bending". No good sound bites in this one, aside from the conclusion:

  • The piezoresistive effect can cause significant resistance
    changes in thick film chip resistors, especially when the
    PCB bends, temperature changes occur, or the
    components experience stress when they are embedded
    or molded. The component's TCR will also be affected.

  • These effects are not seen in thin metal film chip resistors.

So that's another reason why the SMT resistors that are better than 1% tolerance are thin-film. This affects SMT more than through-hole components, because there's not much strain relief for chip components; through-hole components at least have leads that allow the part to avoid mechanical stress when the board flexes a little bit. Neither of the resistor datasheets I showed above, however, mentions the effect of strain on resistance.

La la la la la, let's just pretend we didn't hear all that, and say we have ±1.45% resistance tolerance due to part variation and temperature coefficient.

ratios = np.array([1.0/K for K in worst_case_resistor_divider(121.0, 10.0, tol=0.0145)])
print_range(ratios, 'ratios:')
print_range(ratios*2.5, 'DC link:')

ratios:    nom=13.100, min=12.754, max=13.456
DC link:   nom=32.750, min=31.885, max=33.640

Now we're at 31.89V to 33.64V. Still within our spec of 30-35V for \( V_{OV} \). Are we done yet?

No — we need the rest of the circuit; it's not just a voltage divider.

But before we go there, let's look at how the resistor tolerance affects the voltage divider ratio.

import matplotlib.pyplot as plt
%matplotlib inline

alpha = np.arange(0.001, 1 + 1e-12, 0.001)
Rtotal = 10e-3   # this doesn't matter
R1 = alpha*Rtotal
R2 = (1-alpha)*Rtotal
for whichfig, ytext, ysym in [(1, 'Ratio', r'\rho'),
                              (2, 'Sensitivity', 'S')]:
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    for tol in [0.1, 0.05, 0.01]:
        r_nominal, r_min, r_max = worst_case_resistor_divider(R1, R2, tol)
        S = np.maximum(r_nominal-r_min, r_max-r_nominal) / tol
        y = S if whichfig == 1 else S/alpha
        ax.plot(alpha, y, label=r'$\delta = $%.1f%%' % (tol*100))
    ax.legend(loc='best', fontsize=11, labelspacing=0)
    ax.set_ylabel('$%s$' % ysym, fontsize=14)
    ax.set_xticks(np.arange(11) * 0.1)
    ax.set_title((r'%s $%s(\alpha, \delta) = (V - \bar{V})/(%s\delta\cdot V_{\rm in})$' '\n'
                 + r'$R_1=\alpha R,\ R_2=(1-\alpha) R,\ V = \alpha V_{\rm in}$')
                 % (ytext, ysym, '' if whichfig == 1 else r'\alpha\cdot'))

OK, what are we looking at? The top graph, \( \rho(\alpha, \delta) \), is the ratio of the voltage divider error to the resistor tolerance, where \( \alpha \) is the nominal voltage divider ratio, and \( \delta \) is the resistor tolerance. The bottom graph, \( S(\alpha, \delta) \), is the sensitivity of the voltage divider output; we just divide by the nominal voltage divider ratio, so \( S = \frac{\rho}{\alpha} \).

Here are three concrete examples:

  • \( R_1 = R_2 = R \) and \( \delta = \) 1%. Then \( \alpha = R_1/(R_1+R_2) = 0.5 \) and the output can vary from 0.99/(0.99+1.01) = 0.495 to 1.01/(1.01+0.99) = 0.505. This is a ±0.005 output error, and if we divide by \( \delta = 0.01 \) we get \( \rho = 0.5 \) and then \( S = \rho / \alpha = 1. \)

  • \( R_1 = R, R_2 = 4R \) and \( \delta = \) 1%. Then \( \alpha = R/5R = 0.2 \) and the output can vary from 0.99/(0.99+4.04) = 0.1968 to 1.01/(1.01+3.96) = 0.2032. This is a ±0.0032 output error, and if we divide by \( \delta = 0.01 \) we get \( \rho = 0.32 \) and then \( S = \rho / \alpha = 1.6. \)

  • \( R_1 = 4R, R_2 = R \) and \( \delta = \) 1%. Then \( \alpha = 4R/5R = 0.8 \) and the output can vary from 3.96/(3.96+1.01) = 0.7968 to 4.04/(4.04+0.99) = 0.8032. This is a ±0.0032 output error, and if we divide by \( \delta = 0.01 \) we get \( \rho = 0.32 \) and then \( S = \rho / \alpha = 0.4. \)

Some important takeaways are:

  • The ratio \( \rho \approx 2\alpha(1-\alpha) \) and the sensitivity \( S \approx 2(1-\alpha) \).
  • Absolute error in voltage divider output is symmetrical in \( \alpha \); it reaches a maximum at \( \alpha=0.5 \) and is very low for \( \alpha \) near 0 or 1.
  • Sensitivity of voltage divider output for \( \alpha \ll 1 \) is approximately \( S=2 \). If I'm dividing down a much higher voltage to a lower voltage, this means that if I use 1% resistors I can expect about 2% gain error, or if I use 0.1% resistors I can expect about 0.2% gain error.
  • Sensitivity of voltage divider output for \( \beta \ll 1 \), where \( \beta = 1-\alpha \), is approximately \( S=2\beta \). This means that if I want a voltage divider ratio very close to 1, and I use 1% resistors, I can expect a much lower gain error. In one of my earlier articles on Thevenin equivalents I used the example of R1 = 2.10kΩ, R2 = 49.9Ω where \( \alpha = 0.9768, \beta = 0.0232 \), which means that for 1% resistors I can expect a gain error of only about 0.0464%.
ratios = np.array([1.0/K for K in worst_case_resistor_divider(49.9, 2100.0, tol=0.01)])
print_range(ratios, "ratios", '%.5f')
print("sensitivity S", ratios[1:3] - ratios[0])

ratios     nom=1.02376, min=1.02329, max=1.02424
sensitivity S [ 0.00048004 -0.00047053]
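If you want to convince yourself of the \( \rho \approx 2\alpha(1-\alpha) \) approximation, a brute-force check is easy. This is my own sketch, using the same \( \alpha = R_1/(R_1+R_2) \) convention as the three worked examples above:

```python
# Sketch: check rho ~= 2*alpha*(1-alpha) for the worst-case divider error,
# using alpha = R1/(R1+R2) as in the worked examples above.
def divider_rho(alpha, delta):
    R1, R2 = alpha, 1 - alpha
    k_nom = alpha
    k_hi = R1*(1+delta) / (R1*(1+delta) + R2*(1-delta))
    k_lo = R1*(1-delta) / (R1*(1-delta) + R2*(1+delta))
    return max(k_hi - k_nom, k_nom - k_lo) / delta

# exact worst-case rho vs. the 2*alpha*(1-alpha) approximation
for alpha in [0.2, 0.5, 0.8]:
    print(alpha, round(divider_rho(alpha, 0.01), 4), 2*alpha*(1-alpha))
```

The exact and approximate values agree to a few parts in a thousand for 1% resistors.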

Overvoltage Detector, Part 2: The Other Stuff at DC

Here's the complete circuit we're going to be looking at:

Selecting 2.5V Voltage References

First we need a 2.5V source, so we can compare the output of our voltage divider against it.


In theory, I like the TL431 type of shunt voltage reference. It's a three-terminal device that's kind of like a precision transistor: if the reference terminal is below its 2.5V threshold, it doesn't conduct from cathode to anode; if it's above its 2.5V threshold, it does conduct.

TL431s are cheap and ubiquitous. You want a 0.5%-tolerance 2.5V reference for less than 10 cents in quantity 1000? You got it. The Diodes Inc. AN431 is available in 0.5% grade from Digi-Key for about 7 cents in quantity 1000. It is pin- and function-compatible with the TL431. (Mess up the pinout? There's the AS431, same price, which swaps the ref and cathode pins, compatible with the TL432.)

The only downside is that its voltage accuracy is specified at 10mA, so it's kind of a power hog. You can run it down as low as 1mA, but then you have to use the specification for dynamic impedance, \( Z_{KA} \), to figure out how much the voltage changes at 1mA. For the AN431, it's a maximum of 0.5Ω, so for a change from 10mA down to 1mA (ΔI of -9mA), the voltage could drop by as much as 4.5mV, which adds another 0.18% to the effective accuracy.
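That 0.18% figure falls straight out of the dynamic impedance spec; here's the arithmetic as a sketch (same numbers as above):

```python
# Sketch of the AN431-at-1mA estimate above: running a shunt reference below
# its 10 mA specification point costs you Z_KA * delta_I of voltage error.
Z_KA = 0.5                 # ohms, max dynamic impedance from the datasheet
dI = 1e-3 - 10e-3          # running at 1 mA instead of the specified 10 mA
dV = Z_KA * dI             # worst-case voltage shift

print(round(dV * 1e3, 3))        # -4.5 (mV)
print(round(100 * dV / 2.5, 3))  # -0.18 (% additional error on a 2.5V reference)
```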

Low-current TL431

The next step up from these is the ON Semiconductor NCP431B, which you can buy from Digi-Key at 9.9 cents each in quantity 1000. These work down to at least 60μA, and their voltage accuracy is specified at 1mA. The dynamic impedance \( Z_{KA} \) is specified between 1mA and 100mA (same 0.5Ω maximum), but there is no spec from 60μA to 1mA — they do provide a Figure 36 ("Knee of Reference") claiming a typical 4.5A/V = 0.22Ω, and you could decide to use the 0.5Ω maximum value and double it for good measure: 1 ohm times (1mA − 100μA) = 0.9mV, which is less than 0.04% of 2.5V. But there's no spec, so how can you possibly know whether you can trust the voltage accuracy at these low currents? You could have a part that regulated to 2.45V at 100μA, and it would meet the specification but represent a 50mV error from nominal.

Diodes Inc. has the AP431 for 8.6 cents (quantity 1000) from Digi-Key with similar specs: ±0.5% at 1mA, works down to 100μA cathode current, dynamic impedance \( Z_{KA} \) < 0.3Ω from 1mA to 100mA. But nothing useful for determining voltage accuracy below 1mA.

Diodes Inc. also has the ZR431, which it inherited from Zetex, specified at 10mA, with no specs below 10mA.

TI has the similar ATL431LI for 17 cents (qty 1000) from Digi-Key: ±0.5% at 1mA, works down to 100μA cathode current, dynamic impedance \( Z_{KA} \) < 0.65Ω from 1mA to 15mA, and nothing about voltage accuracy below 1mA.

These guys are either copying each other's collective blunders, or there's a conspiracy, a kind of mini-Phoebus cartel when it comes to specifying voltage accuracy below 1mA. Sigh. My guess is that it was Zetex's fault for poor specsmanship on the ZR431, and then everyone just copied the general form of the datasheet, without bothering to make any claims about low-current voltage accuracy.

LM4040 / LM4041

The next step up are the LM4040 and LM4041 voltage references; these have specified voltage accuracy at 100μA operation, and are available from numerous manufacturers. The LM4040 is a fixed voltage reference, and the LM4041 is an adjustable reference based on a 1.23V bandgap voltage, kind of an upside-down TL431. For precision circuits, unless you need the adjustability, the LM4040 is a better choice; otherwise, you'll need to add your own resistor divider, which will raise the effective tolerance. For the LM4040, if you get the A grade version, it's 0.1% accuracy, but you'll pay extra for that. Here are some options for the C grade (0.5% accuracy), prices from Digi-Key at 1000-piece quantity:

TI also has the TL4050, which has some nice specs, but it's more expensive.

Series references

Finally, if you're working with micropower designs and you really need to guarantee low current, or you need to minimize component count, there are series references which will give you a buffered voltage reference, like the ones listed below, but you'll pay more for them, typically in the 50-60 cent range in 1000 quantity.

Designing with 2.5V shunt references

I'm going to stick with the ON Semi NCP431B, and just use it at 1mA — although I still think it's a tragedy that you can't rely on the voltage spec below 1mA.

For the NCP431BI, the voltage specification at 1mA current over its temperature range is 2.4775V to 2.5125V.

Our 5V ± 5% supply can go as low as 4.75V. We'll use a 2.00kΩ shunt resistor with it to guarantee a minimum cathode current of (4.75V − 2.5125V) / (2.00kΩ × 1.0145) = 1.103mA. (Remember: the factor of 1.0145 comes from the 1% resistor range on top of the 4500ppm swing due to the 100ppm/°C tempco and the ±45°C temperature swing. This is slightly above the 1mA voltage specification, and leaves 103μA above spec, which is much more than the max gate current of 190nA.)

On the other side of the tolerance ranges, we could have as much as 5.25V, with a cathode current of up to (5.25 − 2.4775) / (2.00kΩ × 0.9855) = 1.41mA. The specification of dynamic impedance \( Z_{KA} \) < 0.5Ω tells us we could see as much as (1.41mA − 1mA) × 0.5Ω = 0.205mV increase due to worst-case cathode current tolerance, making our overall voltage reference range:

  • 2.500V nominal
  • 2.5127V maximum (2.5125V + 0.205mV)
  • 2.4775V minimum
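Those numbers fall out of a few lines of arithmetic; here's the calculation just described as a sketch (same assumptions: 2.00kΩ shunt resistor, ±1.45% total resistor tolerance, \( Z_{KA} \) = 0.5Ω):

```python
# Sketch of the cathode-current worst-case arithmetic described above.
Rshunt = 2.00e3            # shunt resistor from the 5V supply to the cathode
Rtol = 0.0145              # 1% resistors plus 100 ppm/°C over a ±45°C swing
Z_KA = 0.5                 # ohms, max dynamic impedance above 1 mA

# minimum cathode current: lowest supply, highest reference, highest resistance
I_min = (4.75 - 2.5125) / (Rshunt * (1 + Rtol))
# maximum cathode current: highest supply, lowest reference, lowest resistance
I_max = (5.25 - 2.4775) / (Rshunt * (1 - Rtol))
# extra reference-voltage rise above the 1 mA specification point
dV = (I_max - 1e-3) * Z_KA

print(round(I_min * 1e3, 3))   # 1.103 (mA) -- still above the 1 mA spec point
print(round(I_max * 1e3, 2))   # 1.41 (mA)
print(round(dV * 1e3, 2))      # 0.2 (mV) of extra reference error
```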

Selecting comparators

We also need a comparator. The basic system requirements are that we want one that can be powered from 5V in ambient temperatures of -20 to +70°C, has a short enough response time, and doesn't introduce much voltage error. Our system requirement of at most 3μs response time to a 40V step means, at first glance, that we'll probably need a fast response, say around 1μs or less, but there are some factors that work both for and against us in meeting the system requirement.

Aside from that, it's a matter of good judgment and frugality. The cheapest comparators, by far, are of the LM393 variety. Digi-Key price in 1000 quantity is about the same across vendors; the lowest is the ON Semi LM393DR2GH in a SOIC-8 package, at about 8.4 cents. Others from TI, ST, and Diodes Inc. are in the 8.5 – 10 cent range.

They're so cheap that you can't buy a single comparator for less money; the LM393 is a dual comparator, with open-collector outputs, and if you're not going to use the second comparator, you should read the fine print in the datasheet, which says that unused pins should be grounded.

There are a couple of important specs in the LM393 datasheet to note:

  • Offset voltage. This is ±5mV max at 25°C and ±9mV max over the full temperature range; it effectively adds to the 2.5V reference tolerance; ±9mV is 0.36% of 2.5V.

  • Response time. This is typically 1.3μs for a 100mV step change with 5mV overdrive, which means that to switch the comparator from output high to output low, we start with Vin− 95mV below Vin+, and then increase Vin− to 5mV above Vin+. Think of this device as a balance scale: if the two inputs are nearly equal, the output may change slowly, whereas if they are different enough, the balance will tip quickly to indicate which is larger. Comparator datasheets will usually have graphs showing typical response time vs. overdrive level. The ON Semi LM393 doesn't, and this is one reason it may be better to pick another part. Here are the response time graphs from the TI LM393 datasheet — I prefer the original from National Semiconductor before they were acquired by TI, but unfortunately TI hasn't maintained the earlier variants, so we're stuck with the more complicated TIified version:

    You'll note that the output transition from high-to-low is faster than the low-to-high transition. The reason for this may be clearer if we look at the simplified equivalent circuit — which is part of why these parts are so cheap. They're simple!

    All we really have here is a Darlington bipolar differential pair (Q1-Q4), loaded by a current mirror (Q5 and Q6), with an open-collector output stage (Q7 and Q8).

    • When the positive input is greater than the negative input, more of the 100μA current source flows through Q3 than through Q2; Q2, Q5, and Q6 have the same current flowing through them, so more current flows through Q3 than Q6, and that turns Q7 on, which turns Q8 off, and the output is open-collector.

    • When the negative input is greater than the positive input, the reverse is true: more of the 100μA current source flows through Q2 than through Q3, which means more current flows through Q6 than Q3, and that turns Q7 off, which turns Q8 on, and the output is pulled low.

    The reason the high → low transition is faster than the low → high transition is that the output transistor Q8 has storage time to come out of saturation. It's a bit puzzling why National didn't make a version of the LM393 comparator with a Baker clamp on the output transistor to speed up this time. It's also too bad the LM393 doesn't have separate specs for turn-on and turn-off transition times — although since they're typical rather than maximum specs, you might as well just use the graphs for information instead. (Or use a part like the ON Semi TL331, which lists typical values in the spec tables.)

    Anyway, this is important because we have a system requirement to detect an overvoltage and transition from high-to-low within a bounded time, but no time requirement to transition in the other direction. So in our particular application, we care about the high-to-low response time.

Other specs of importance to make sure it will work for our application are:

  • Common-mode voltage range: down to zero (thanks to the PNP input stage), up to Vcc − 2.0V over the full temperature range (Vcc − 1.5V at 25°C) — we need this to work at 2.5V input, and Vcc can be as low as 4.75V, so we can support an input voltage range up to 4.75 − 2.0 = 2.75V. That represents a voltage margin of a quarter-volt (2.75V − 2.5V).

  • Input bias current (400nA max) and input offset current (150nA max): The LM393 is a bipolar device, not CMOS, so the inputs are not perfectly high-impedance. Input bias current is the current flowing through each input. Input offset current is the difference between the two input bias currents. If your input sources have low enough impedance, you can ignore input offset current and just analyze the input bias currents; otherwise, you can try to match source impedances so that the voltage drops across your source impedances cancel to some extent. An upper bound for the voltage error in either case is \( \Delta R\, I_{\rm bias} + R I_{\rm ofs} \).

    In this application, we're using 121K and 10K, so the source impedance is 121K || 10K = 9.24K, and the voltage error at the comparator input is 9.24K × a maximum of 400nA = 3.7mV. That is small (0.15%) but not zero. It's not hard to match the input impedances to 121K || 10K. In that case, the worst-case voltage error at the comparator inputs is \( \Delta R\, I_{\rm bias} + R I_{\rm ofs} \) = 9.24K × 0.02 × 400nA + 9.24K × 150nA = 1.46mV.

  • Operating temperature range: Here we're in trouble. The LM393 is rated for an operating range of 0 – 70°C, but we need a circuit that works down to -20°C.
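Before moving on, it's worth piling up the error terms we've met so far. This roll-up is my own back-of-the-envelope sketch, not a calculation from the text: it combines the ±1.45% divider (31.885V – 33.640V referred to a nominal 2.5V), the NCP431BI reference limits (2.4775V – 2.5127V), the LM393's ±9mV full-temperature offset, and the unmatched 3.7mV bias-current error:

```python
# Back-of-envelope roll-up (my own sketch, not from the text) of the DC error terms.
ratio_min, ratio_max = 12.754, 13.456   # divider ratio with ±1.45% resistors
vref_min, vref_max = 2.4775, 2.5127     # NCP431BI reference limits (V)
v_offset = 9e-3                         # LM393 offset voltage, full temp range
v_bias = 3.7e-3                         # unmatched input-bias-current error

# the trip point is the divider ratio times the effective comparator threshold
V_OV_min = ratio_min * (vref_min - v_offset - v_bias)
V_OV_max = ratio_max * (vref_max + v_offset + v_bias)

print(round(V_OV_min, 2), round(V_OV_max, 2))   # 31.44 33.98 -- inside 30-35V
```

Treating all the errors as simultaneously worst-case, the trip point still lands inside the 30-35V window, though with noticeably less margin than the divider alone suggested.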

C, I, M (temperature ratings)

For those of us engineers of a certain age, the letters CIM mean something:

  • C = commercial (0 – 70°C)
  • I = industrial ("cold" – 85°C), where "cold" varied by manufacturer: for example, -40°C for TI and Motorola, -25°C for National Semiconductor
  • M = military (-55°C – 125°C), usually in ceramic rather than plastic packages

TI used the CIM lettering system, sometimes CIME or CIMQ; see for example the TLC272 and TLC393 — the TLC393 datasheet states:

The TLC393C is characterized for operation over the commercial temperature range of TA = 0°C to 70°C. The
TLC393I is characterized for operation over the extended industrial temperature range of TA = −40°C to 85°C.
The TLC393Q is characterized for operation over the full automotive temperature range of TA = −40°C to 125°C.
The TLC193M and TLC393M are characterized for operation over the full military temperature range of
TA = −55°C to 125°C.

“E” (extended) was sometimes -40°C to +125°C. Presumably coverage of the -55°C to -40°C range was difficult to design and test for, and aside from military and aerospace usage, it isn't a frequent need in circuit design.

ON Semiconductor appears to use something similar, at least for the NCP431:

  • C = 0 to +70°C
  • I = -40 to +85°C
  • V = -40 to +125°C

Nationwide Semiconductor used the half quantity to point temperature vary: for instance, the LM393 datasheet contains the LM193 (army temp vary), LM293 (industrial), and LM393 (industrial), so the LM3xx collection was industrial, LM2xx was industrial, and LM1xx was army.

Other manufacturers like Burr-Brown and Linear Technology just tended to design everything for −40°C to +85°C by default, sometimes with military-grade variants to cover the −55°C to +125°C range. This is now the more typical behavior for recent devices from most manufacturers. Instead of seeing 3 or 4 temperature grades, new devices may have only 1 or 2, with different specifications covering the 0 to 70°C or −40 to +85°C ranges.


At any rate, to cover our −20°C to +70°C range, we need the LM293, not the LM393. (And for the reference, we'll need the NCP431BI.) This isn't a big deal these days (it was more significant 10-20 years ago, when the industrial and military range devices were more expensive and less common): Digi-Key sells the LM293ADR for just under 10 cents at 1000 quantity.

Better comparators

We could also use the TI LM393B or LM2903B comparators, which are basically the LM293 with better specs in almost every area (they're part of the same datasheet):

  • temperature range (LM393B = −40°C to +85°C; LM2903B = −40°C to +125°C)
  • offset voltage: 2.5mV at 25°C, 4mV over temperature range
  • input bias and offset current: 50nA max input bias current, 25nA max input offset current (vs. 400nA, 150nA for LM193/293/393)
  • supply voltage: 3 – 36V operating, as compared to 2 – 30V for the LM193/293/393 (note that we have to give up ultra-low supply voltage, but that's okay in our application)
  • response time: 1μs typ. (vs. 1.3μs for LM393)
  • quiescent current: 800μA worst-case (vs. 2.5mA for the LM393)
  • output low voltage: 550mV max at 4mA sink (vs. 700mV max at 4mA sink for the LM393)

Common-mode input voltage range is the same.

Price from Digi-Key in 1000 quantity is about 9.4 cents for the LM393B and 9.0 cents for the LM2903B. Since the LM2903B has a wider temperature range for the same specs, and is slightly cheaper — an example of price inversion! — we'll use the LM2903B.

The other kinds of specs that are available in more expensive comparators include:

  • lower offset voltage (rare)
  • push-pull output instead of open-collector
  • rail-to-rail input
  • CMOS input for supporting high-impedance applications
  • faster response
  • micropower
  • built-in voltage reference

We don't need them for our application — although a built-in voltage reference would be cost-effective if we could find a part with about the same total cost as the 2.5V reference and the comparator — but you should know about them in case you need these sorts of things. Just for a few examples, you can look at the ON Semi NCS2250 or TI LMV762 or TI TLV3011 or Maxim MAX40002. The least-expensive comparator with built-in voltage reference that I could find is the Microchip MCP65R41T-2402E at 33 cents at Digi-Key, and that costs more than the voltage reference and comparator we picked; for applications that are size-constrained, this kind of device might be appropriate.


To help the comparator switch quickly and avoid noise sensitivity when its input is near the voltage threshold, we need to add some positive feedback. We don't need much; a few millivolts is sufficient. The easiest way to do this is to put a little resistance between our 2.5V source and the comparator's + input. Perhaps 1kΩ. Then add 1MΩ from the comparator's output to the + input. This forms a 1001:1 voltage divider, adding roughly 2.5mV if the output is at 5V, and subtracting roughly 2.5mV if the output is at 0V.

Now, in reality we don't reach either 5V or 0V output: at the high end, it depends on the pullup resistance of our open-collector circuit in series with the 1MΩ resistor — the LM2903B's specs are listed with a 5.1kΩ pullup resistor, so instead of a 1001:1 voltage divider, we'll effectively have a 1006:1 voltage divider, adding at least roughly 2.49mV hysteresis to the threshold to turn the comparator output low.

If you really want to incorporate the effects of resistor tolerance over the temperature range, then run the numbers for (1MΩ+5.1kΩ)×(1±0.0145) and 1kΩ×(1∓0.0145):

hyst = np.array([K*2500 for K in worst_case_resistor_divider(1005.1, 1, tol=0.0145)])
print_range(hyst, 'hysteresis (mV):')
hysteresis (mV): nom=2.485, min=2.414, max=2.558

That's only about ±72μV of variation, which is really small, and represents less than 0.003% error compared to the 2.5V threshold — insignificant compared to the dominant sources of error, particularly the 0.5% accuracy of the reference itself.

At the turn-off point, where the output is transitioning from low to high, the LM2903B has a max spec of output voltage low at 0.55V with a current of 4mA or less; the 1001:1 voltage divider will give us roughly (2.5 − 0.55)/1001, subtracting at least roughly 1.95mV hysteresis from the threshold to turn the comparator output high.
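That turn-off number is easy to reproduce directly from the specs just cited:

```python
# Hysteresis subtracted at turn-off: the comparator output-low voltage
# (0.55V max at <=4mA sink) pulls the + input down through the
# 1M / 1k feedback divider (ratio about 1001:1).
V_ref = 2.5
V_ol_max = 0.55
divider = (1e6 + 1e3) / 1e3   # 1001:1
hyst_off = (V_ref - V_ol_max) / divider
print("turn-off hysteresis: %.2f mV" % (hyst_off * 1e3))
```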

Since the effect of resistance tolerance on turn-off hysteresis is small and not very important to our application, we'll ignore it.

Designing with the LM2903B

Here are the error sources for the LM2903B:

  • Offset voltage: 4mV max over temperature
  • Input bias current: 50nA max over temperature — with our 121K / 10K input voltage divider on the negative input, this leads to an additional effective offset voltage of at most 50nA × (121K || 10K) = 0.46mV, which is low enough that we don't need to care about matching the input resistance on the positive input, as long as its source resistance is smaller.

That's a total input voltage offset error of 4.46mV.
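Or, in a couple of lines:

```python
# Total worst-case input-referred error for the LM2903B in this circuit.
V_os = 4e-3                                  # max offset voltage over temperature
I_bias = 50e-9                               # max input bias current over temperature
R_source = 1.0/(1.0/10e3 + 1.0/121e3)        # 121K || 10K on the "-" input
total_err = V_os + I_bias * R_source
print("total input error: %.2f mV" % (total_err * 1e3))
```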

Putting It All Together

Okay, so here's our full circuit design:

  • R1 = 121kΩ
  • R2 = 10.0kΩ
  • R3 = 2.00kΩ
  • R4 = 1.00kΩ
  • R5 = 1.00MΩ
  • R6 = 5.1kΩ
  • U1 = 1/2 LM2903B
  • U2 = NCP431B
  • C1 = 120pF
  • C2 = 100pF

We'll discuss the reasons for choosing these capacitor values in the next section.

  • R1 and R2 set the voltage divider ratio for comparison against the voltage reference producing Vthresh = 2.5V.
  • R3 sets the shunt current into the NCP431B to at least 1.1mA worst-case, so it's definitely more than the 1mA level at which the voltage reference is specified
  • R4 and R5 set the approximate hysteresis level ≈ R4/R5 × (Vout − Vthresh)
  • R6 sets the pulldown current when the output is low; this value just matches the 5.1kΩ value cited in the datasheet. (If the value is too low, it increases current consumption and may violate the comparator specs for output voltage level, which are for 4mA or less; if the value is too high, the switching speed will suffer and, in the extreme, the output may not reach a valid logic high.)

We can now determine the worst-case DC thresholds for comparator output switching, by combining the tolerance analyses we completed earlier:

  • Resistor divider ratio R1/R2: nominal=13.100, minimum=12.754, maximum=13.456
  • Voltage reference: nominal=2.500V, minimum=2.4775V, maximum=2.5127V
  • Comparator:

    • Total input voltage error (including input offset voltage + input bias current) is at most 4.46mV
    • Hysteresis to turn the output low: add ≈ 2.49mV to the + input (this has a roughly ±3% variation due to resistor tolerances, but that error is down around 72μV)
    • Hysteresis to turn the output high: subtract between 1.95mV and 2.5mV

The input voltage thresholds are therefore:

Turn-on: (no overvoltage → overvoltage)

  • 32.783V nominal = 13.1 × (2.500V + 2.49mV)
  • 31.572V minimum = 12.754 × (2.4775V − 4.46mV + 2.49mV − 72μV)
  • 33.905V maximum = 13.456 × (2.5127V + 4.46mV + 2.49mV + 72μV)

Overall tolerance is about 3.4 – 3.7%, and consists roughly of:

  • ±2.7% tolerance from the resistor divider
  • −0.9%, +0.5% tolerance from the voltage reference
  • ±0.18% tolerance from the input voltage error of the comparator

Turn-off: (overvoltage → no overvoltage)

  • 32.717V nominal = 13.1 × (2.500V − 2.5mV)
  • 31.508V minimum = 12.754 × (2.4775V − 4.46mV − 2.5mV − 72μV)
  • 33.846V maximum = 13.456 × (2.5127V + 4.46mV − 1.95mV + 72μV)

Hysteresis: (difference between turn-on and turn-off thresholds)

  • 65mV nominal = 13.1 × (2.49mV + 2.5mV)
  • 57mV minimum = 12.754 × (2.49mV − 72μV + 1.95mV)
  • 67mV maximum = 13.456 × (2.49mV + 72μV + 2.5mV)

These ranges are well within our 30V – 35V requirement for the DC voltage trip threshold.
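As a sanity check, the turn-on arithmetic above can be scripted directly; all of the numbers below are the ones tabulated in this section:

```python
# Worst-case turn-on thresholds, combining divider ratio, reference limits,
# comparator input error, and hysteresis (values from the analysis above).
ratio_min, ratio_nom, ratio_max = 12.754, 13.100, 13.456
vref_min, vref_nom, vref_max = 2.4775, 2.500, 2.5127
v_err = 4.46e-3          # comparator input error
hyst_on = 2.49e-3        # hysteresis added at turn-on
hyst_tol = 72e-6         # variation of hysteresis due to resistor tolerance

turn_on_nom = ratio_nom * (vref_nom + hyst_on)
turn_on_min = ratio_min * (vref_min - v_err + hyst_on - hyst_tol)
turn_on_max = ratio_max * (vref_max + v_err + hyst_on + hyst_tol)
print("turn-on: nom=%.3fV min=%.3fV max=%.3fV"
      % (turn_on_nom, turn_on_min, turn_on_max))
# All three fall inside the 30V - 35V trip-threshold requirement.
```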

Overvoltage Detector, Part 3: Dynamics

Electronics don't respond instantly to changes, so we have to think about the dynamics of our input and our circuit. This involves the choice of capacitor values and possibly of the comparator.

NCP431B Bypassing

Capacitor C2 is just a bypass capacitor for the NCP431B, used to damp high-frequency noise. Most of the TL431-style shunt references have a kind of anti-Goldilocks behavior, where the reference is stable when the parallel capacitance is small or large, but it may oscillate when the capacitance is just right. Figure 19 from the NCP431B datasheet shows this:

Since we're not using it with a voltage divider to bump the cathode-to-anode voltage \( V_{KA} \) up beyond the 2.5V value, we're stuck with curve A, which says that the parallel capacitance should either be below about 1nF or above 10μF for cathode currents above 400μA. (Figure 18 shows cathode currents in the 0-140mA range, but it's essentially impossible to read the boundaries for 1mA cathode current — which is rather unfortunate, since the voltage spec for this part is at 1mA; neither Figure 18 nor Figure 19 is very helpful for currents in the 1-10mA range.)

At any rate, we'll choose C2 = 100pF, which is low enough to stay below the lower capacitance limit, but high enough to keep the output low-impedance at high frequencies. Just as a double-check: at f=10MHz, the capacitor impedance is \( Z=1/j2\pi fC \rightarrow \left|Z\right| = 159\Omega \). Figure 13 in the datasheet shows typical dynamic output impedance vs. frequency, with about 0.5Ω at 1MHz and about 4Ω at 10MHz, so a 100pF isn't going to change that much, and even a 1000pF at the edge of stability would still have higher impedance than the curve in Figure 13. But passive components are cheap insurance; it's hard to be 100% certain that the silicon will damp noise without some kind of capacitance hanging on the output.
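The impedance double-check in one line:

```python
import math

# |Z| = 1/(2*pi*f*C) for the 100pF bypass capacitor at 10MHz
f = 10e6
C = 100e-12
Z = 1.0 / (2 * math.pi * f * C)
print("|Z| at %.0f MHz: %.0f ohms" % (f / 1e6, Z))
```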

Other devices, such as the LM4040, are designed to be stable with any capacitive load, but you'll typically pay more for them.

Comparator response time

OK, as far as the comparator response time goes, we have to look at the LM2903B datasheet. Figures 30, 31, 36, and 37 help characterize typical comparator response time as a function of overdrive.

Now, we have a response-time requirement for a 40V input transient. That's way above the DC threshold for our comparator circuit. When tolerances are at their worst, the input voltage divider is 13.456 : 1, and the maximum threshold for the circuit is 33.905V, or 2.520V at the comparator "−" input. If we have an input voltage of just over 33.905V, it will trip the comparator eventually, but it might take a long time. To ensure a faster response, we need to exceed this worst-case comparator threshold by some nominal amount: this is the overdrive level. The datasheet specifies typical response time at 5mV overdrive or greater. At 5mV, the typical propagation delay is 1000ns.

(Interestingly, while the high-to-low output delay is generally lower than the low-to-high delay, it looks like for very low overdrive levels the low-to-high output delay is lower.)

I'm going to read these figures off the +85°C graph of Figure 30:

  • 1000ns for 5mV
  • 620ns for 10mV
  • 410ns for 20mV
  • 260ns for 50mV
  • 200ns for 100mV
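The plotting code in the next listing indexes an array called ovtresp_comparator of (overdrive, response-time) pairs, which isn't shown in this excerpt; a minimal sketch of how it might be defined, using the readings above plus the two higher-overdrive points left commented out in the listing:

```python
import numpy as np

# (overdrive in volts, typical response time in seconds), read off
# Figure 30 of the LM2903B datasheet at +85C
ovtresp_comparator = np.array([(5e-3, 1000e-9),
                               (10e-3, 620e-9),
                               (20e-3, 410e-9),
                               (50e-3, 260e-9),
                               (100e-3, 200e-9),
                               (500e-3, 145e-9),
                               (1000e-3, 135e-9)])
```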

And here's how we'll make use of the overdrive curve: I'll pick a few capacitor values for C1, and we'll look at the RC relaxation curves for a step input from 0V → 40V (which yields 2.973V at the output of the voltage divider when it's at its worst-case value of 13.456 : 1).

With this worst-case voltage divider, the Thevenin-equivalent resistance is (121K + 1.45%) || (10K − 1.45%) = 9.12kΩ. Let's see what happens if we use C1 = 47pF.

def scale_formatter(K):
    def f(value, tick_number):
        return value * K
    return plt.FuncFormatter(f)

def show_comparator_response(R1nom, R2nom, C, Rtol, Ctol):
    tmax = 4e-6
    t = np.arange(-0.1,1,0.001) * tmax
    # ovtresp_comparator holds (overdrive, response time) pairs, e.g.
    #                  (500e-3,145e-9),
    #                  (1000e-3,135e-9)
    ov_comp = ovtresp_comparator[:,0]
    t_comp = ovtresp_comparator[:,1]
    tresp_requirement = 3e-6
    Vthresh_max = 2.520
    R1 = R1nom*(1+Rtol)
    R2 = R2nom*(1-Rtol)
    Rth = 1.0/(1.0/R1 + 1.0/R2)
    RC = Rth*C*(1+Ctol)
    K = R2 / (R1+R2)
    # Driving signal
    Vin_end = 40
    y_end = Vin_end*K
    u = (t >= 0) * y_end
    y = (t >= 0) * y_end * (1-np.exp(-t/RC))
    # time for RC filter to reach a particular overdrive level above Vthresh_max
    y_ov = Vthresh_max+ov_comp
    t_ov = -RC*np.log((y_ov-y_end)/(0-y_end))

    fig = plt.figure(figsize=(7,4))
    ax = fig.add_subplot(1,1,1)

    ax.plot(t,u)    # divided-down input step (blue)
    ax.plot(t,y)    # RC-filtered comparator input (green)
    xlim = [-0.1*tmax, tmax]
    ylim = [0,3]
    ax.plot(xlim, [Vthresh_max, Vthresh_max],color='red',dashes=[3,2],linewidth=0.8)
    ax.plot([tresp_requirement, tresp_requirement], ylim, color='red',dashes=[3,2],linewidth=0.8)
    ax.plot(t_ov+t_comp, y_ov, '-', color='red')
    tresp_min = (t_ov+t_comp).min()
    ax.fill_betweenx(y_ov, t_ov, t_ov+t_comp, color='red', alpha=0.25)
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)
    ax.set_xlabel(u'time (\u00b5s)')
    ax.set_ylabel(u'Voltage (V)')
    ax.annotate(u"$t_{min} = $%.2f\u00b5s" % (tresp_min*1e6),
                xy=(tresp_min, Vthresh_max), xycoords='data',
                xytext=(0,-50), textcoords='offset points',
                size=14, va='center', ha='center',
                bbox=dict(boxstyle='round', fc='w'))
    ax.set_title(('Comparator response to RC filter; steady-state voltage = %.2fV (%.3fV @comp)\n'
                  +'thresh voltage = %.2fV (%.3fV @comp), R1=%.1fK+%.0f%%, R2=%.1fK-%.0f%%, C=%.0fpF+%.0f%%')
                 % (Vin_end, y_end, Vthresh_max/K, Vthresh_max,
                    R1nom/1e3, Rtol*100, R2nom/1e3, Rtol*100, C*1e12, Ctol*100))
show_comparator_response(121e3, 10e3, 47e-12, 0.0145, 0.05)

Here a little explanation is needed.

  • The blue step is the divided-down voltage from an input step from 0V to 40V on the DC link, with no capacitive load.
  • The green curve is the voltage at the comparator "−" input, after RC filtering.
  • The horizontal dashed line at 2.52V represents the worst-case highest voltage at the "−" input that can trip the comparator. (Nominal is at 2.5V + 2.49mV = 2.502V, remember?)
  • The vertical dashed line at 3μs represents our time limit for responding to this step.
  • The red curve is the typical comparator response time added to the green curve.

For that red curve, consider a few cases:

  • Suppose the green curve came up to the comparator threshold of 2.52V and stayed there. No overdrive. This takes about 0.85μs, but the comparator could take forever to switch, because there is no overdrive. (No overdrive = takes forever.)

  • Suppose the green curve came up to 2.525V, and then stopped rising. That represents a 5mV overdrive, also at around t=0.85μs after the input step, and it would typically take another microsecond for the comparator output to switch with 5mV overdrive, for a total of about 1.85μs.

  • Suppose the green curve came up to 2.92V and stayed there, with 400mV overdrive. This time, with 400mV overdrive it takes only 150ns for the comparator to switch, but the green curve took 1.82μs to get to that point, for a total of 1.97μs. (High overdrive = the comparator switches quickly, but the capacitor takes forever to get there.)

  • Finally, there's a sweet spot at around 100mV overdrive, which the green curve reaches at around 0.96μs; with 100mV overdrive the typical response time is 200ns, for a total of 1.16μs.

So we can expect the comparator to switch its output low roughly 1.16μs after the input voltage step occurs, perhaps a bit earlier since the input doesn't just stay there but instead keeps rising.
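We can double-check that sweet-spot arithmetic directly; the 200ns figure is the typical response time at 100mV overdrive read off Figure 30 above:

```python
import math

# Worst-case RC: (121K+1.45%) || (10K-1.45%) times 47pF+5%
R1 = 121e3 * 1.0145
R2 = 10e3 * 0.9855
Rth = 1.0 / (1.0/R1 + 1.0/R2)
RC = Rth * 47e-12 * 1.05

y_end = 40.0 / 13.456        # steady-state divider output, worst case (2.973V)
target = 2.520 + 0.100       # worst-case threshold plus 100mV overdrive
t_rc = -RC * math.log(1 - target / y_end)   # time for RC curve to reach target
t_total = t_rc + 200e-9                     # plus 200ns comparator delay
print("t_rc = %.2f us, total = %.2f us" % (t_rc * 1e6, t_total * 1e6))
```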

This total response time of 1.16μs is pretty quick, and we have plenty of margin between that and our 3μs requirement. What about raising the capacitance a little bit, to 100pF:

show_comparator_response(121e3, 10e3, 100e-12, 0.0145, 0.05)

Not bad; that takes about 2.2μs total response. What about 150pF?

show_comparator_response(121e3, 10e3, 150e-12, 0.0145, 0.05)

Um… just past the edge of our 3μs deadline.

I'd probably pick 120pF, which produces a total response time of roughly 2.56μs at the high end of its tolerance, and still leaves some room to accommodate stray capacitance:

show_comparator_response(121e3, 10e3, 120e-12, 0.0145, 0.05)

The cheapest 120pF ±5% NP0 50V 0603 capacitor at 1000-piece quantity at Digi-Key is the Walsin 0603N121J500CT at about 1.1 cents each. If you're willing to use 0402 capacitors, pick the Walsin 0402N121J500CT, at just under 0.77 cents each. (0201 parts are even cheaper, at about 0.66 cents each for the Murata GRM0335C1H121JA01D. If we can live with 100pF, since it's a more standard value, we can find 0402 Yageo CC0402JRNPO9BN101 capacitors at 0.55 cents each.)

C0G/NP0 capacitors are more stable over temperature than X5R/X7R/Y5V capacitors; they cost more at higher capacitances, but if you're under 1000pF, generally there's no significant cost premium for C0G/NP0 capacitors. This is the kind of capacitor you should use for tight-tolerance filtering; pick the ±5% variety if you can. And at 120pF there's no cost premium for 5% tolerance. Finally, the voltage rating of 50 or 100V is "free" when you're at these low capacitance values, so don't bother trying to optimize by buying a 10V or 25V part to lower cost.

The flip side of filtering: ignoring momentary spikes

We also have a requirement to prevent 100ns pulses from \( V_{OV} - 1.0V \) to 40V from reaching \( V_{OV} \) and causing an overvoltage trip. Let's check to make sure our 120pF filter capacitor does the trick — actually, to be safe, we'll use the low side of the capacitor tolerance, 120pF − 5% = 114pF:

t = np.arange(-0.25,3,0.001)*1e-6
dt = t[1]-t[0]
u1 = (t >= 0)*1.0
tpulse = 100e-9
u2 = (t >= tpulse) * 1.0

R1 = 121e3
R2 = 10e3
Rth = 1.0/(1.0/R1 + 1.0/R2)
RC = 120e-12 * 0.95 * Rth

fig = plt.figure(figsize=(7,7))
for row in [1,2]:
    ax = fig.add_subplot(2,1,row)

    for V_OV, label in [(31.572,'minimum $V_{OV}$'),
                        (32.783,'nominal $V_{OV}$'),
                        (33.905,'maximum $V_{OV}$')]:
        v_pre_spike = (V_OV - 1.0)
        v_in = v_pre_spike + (40-v_pre_spike)*(u1-u2)
        dV1 = (40-v_pre_spike)*u1*(1-np.exp(-tpulse/RC))
        # filtered response: rise during the 100ns pulse, decay afterward
        y = (v_pre_spike
           + (40-v_pre_spike)*(u1-u2)*(1-np.exp(-t/RC))
           + dV1*u2*(np.exp(-(t-tpulse)/RC)))
        # response if the input stayed at 40V (dashed)
        y2 = (v_pre_spike
           + (40-v_pre_spike)*u1*(1-np.exp(-t/RC)))
        hl = ax.plot(t,y-V_OV,label=label)
        c = hl[0].get_color()
        ax.plot(t,y2-V_OV, dashes=[4,2],color=c)
        ax.plot(t,v_in - V_OV, linewidth=0.5, color=c)

    if row == 1:
        ax.set_xlim(t.min(), t.max())
        ax.legend(loc='lower right', fontsize=11, labelspacing=0)
    ax.set_ylabel(u'$V_{in} - V_{OV}$ (V)', fontsize=13)
    if row == 2:
        ax.set_xlabel(u'time (microseconds)')
fig.suptitle(u'Short pulse rejection: RC=%.2f$\mu$s' % (RC/1e-6),y=0.93);

It does, with some but not a huge amount of margin. (Originally I had dreamed up a pulse requirement of 500ns from \( V_{OV}-0.5V \) to 40V, but that did NOT WORK.)
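As a numerical cross-check on the graph, here's the peak the filtered 100ns pulse reaches relative to \( V_{OV} \), for the worst case of minimum \( V_{OV} \) (largest pulse amplitude) and the capacitor at −5%:

```python
import math

# Peak of the filtered 100ns pulse, relative to V_OV.
R1, R2 = 121e3, 10e3
Rth = 1.0 / (1.0/R1 + 1.0/R2)
RC = 120e-12 * 0.95 * Rth
tpulse = 100e-9
V_OV = 31.572               # minimum trip threshold
v_pre = V_OV - 1.0          # input sits 1V below the threshold before the pulse
dV = (40 - v_pre) * (1 - math.exp(-tpulse / RC))
peak_rel = -1.0 + dV        # input-referred peak relative to V_OV
print("pulse rise: %.3f V; peak is %.3f V below V_OV" % (dV, -peak_rel))
```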

There's a fine line here: we need a filter that's slow enough to block these spikes, but fast enough to let real overvoltages trip the comparator in less than 3μs.

Other thoughts

Worst-case vs. root-sum-squares

I do most of my work assuming worst-case everywhere. This is like Murphy's Law on steroids: R1 is at its tolerance limit on the low side, and R2 is at its tolerance limit on the high side, and U2's reference is at the edge of its tolerance limit, with exactly the right directions for all these components to conspire against me and give me a worst-case output.

On the whole, this is really unlikely, so much more unlikely than any of the individual components being at the edge of its limit… that it may be overly pessimistic.

Another approach is to use the root-sum-squares of the individual component tolerances. This is a little naive, because the tolerances don't all weight equally in determining the limits of system variability. Or you can use Monte Carlo analysis, where you simulate a large number of random values. For example, let's just take the voltage divider, and assume those 1% resistors have a Gaussian distribution with a standard deviation of, say, 0.2%. Then we can try a million samples:


Rstd = 0.002
N = 1000000
R1 = 121e3 * (1 + Rstd*np.random.randn(N))
R2 = 10e3 * (1 + Rstd*np.random.randn(N))
fig = plt.figure(figsize=(7,11))
ax = fig.add_subplot(3,1,1)
ax.hist(R1/1000, bins=100)
ax = fig.add_subplot(3,1,2)
ax.hist(R2/1000, bins=100)
a = R2/(R1+R2)
ax = fig.add_subplot(3,1,3)
ax.hist(a, bins=100)
import pandas as pd

def get_stats(x):
    x0 = np.mean(x)
    s = np.std(x)
    dev = max(np.max(x-x0), np.max(x0-x))
    return dict(mean=x0, max=np.max(x), min=np.min(x), std=s,
                normstd=s/x0, normdev=dev/x0)

df = pd.DataFrame([get_stats(x) for x in [R1, R2, a]], index=['R1','R2','a'],
                  columns=['mean','min','max','std','normstd','normdev'])
def tagfunc(x):
    # tag each cell as 'K' (kilohms, for R1/R2 in the absolute columns)
    # or 'ratio', so formatfunc can format it appropriately
    return pd.Series([(('K' if k.startswith('R') and not x.name.startswith('norm')
                        else 'ratio'), x[k])
                      for k in x.index], x.index)
def formatfunc(x):
    tag, v = x
    if tag == 'K':
        return '%.3f K' % (v*1e-3)
    return '%.5f' % v
df.apply(tagfunc).style.applymap(lambda cell: 'text-align: right').format(formatfunc)



              mean        min         max        std      normstd   normdev
    R1     121.000 K   119.885 K   122.120 K   0.242 K      …         …
    R2      10.000 K     9.908 K    10.097 K   0.020 K      …         …
    a          …            …           …          …        …         …
The table above is somewhat terse:

  • a is the voltage divider ratio
  • mean is the mean value \( \mu_x \) of all samples
  • min is the minimum value \( x_{min} \) of all samples
  • max is the maximum value \( x_{max} \) of all samples
  • std is the standard deviation \( \sigma_x \) of all samples
  • normstd is the normalized standard deviation (\( \sigma_x/\mu_x \))
  • normdev is the normalized worst-case deviation = \( \max(x_{max}-\mu_x, \mu_x-x_{min})/\mu_x \)

For this set of samples, the worst-case deviation of R1 is 0.925%, the worst-case deviation of R2 is 0.973%, and the worst-case deviation of a is 1.245%.

Compare these results with a worst-case analysis approach, if someone told us R1 had 0.925% tolerance and R2 had 0.973% tolerance:

R1_nom = 121e3
R2_nom = 10e3
a_nom = R2_nom/(R1_nom+R2_nom)
def showsign(x):
    return '-' if x < 0 else '+'
for s in [-1,+1]:
    R1 = R1_nom*(1+s*0.00925)
    R2 = R2_nom*(1-s*0.00973)
    a = R2/(R1+R2)
    print "R1=121K%s0.925%%, R2=10K%s0.973%% => a=%.5f = a_nom*%.5f (%+.2f%%)" % (
        showsign(s), showsign(-s), a, a/a_nom, (a/a_nom-1)*100)
R1=121K-0.925%, R2=10K+0.973% => a=0.07768 = a_nom*1.01767 (+1.77%)
R1=121K+0.925%, R2=10K-0.973% => a=0.07501 = a_nom*0.98260 (-1.74%)

In other words, Monte Carlo analysis gives us a bound of ±1.245% for the voltage divider ratio, but worst-case analysis gives us a bound of ±1.77%.
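For what it's worth, a root-sum-squares estimate lands close to the Monte Carlo result. This uses standard first-order error propagation (the divider ratio's relative sensitivity to each resistor's relative error is 1−a), which isn't derived in the text above, so treat it as a sketch:

```python
import math

# First-order RSS estimate of divider-ratio tolerance.
# d(a)/a = (1-a)*(dR2/R2 - dR1/R1), so independent tolerances combine as
# (1-a)*sqrt(tol1^2 + tol2^2).
R1, R2 = 121e3, 10e3
a = R2 / (R1 + R2)
tol_R1, tol_R2 = 0.00925, 0.00973
rss = (1 - a) * math.sqrt(tol_R1**2 + tol_R2**2)
print("RSS bound: +/-%.2f%%" % (rss * 100))
```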

Worst-case analysis is always pessimistic (assuming you've taken into account all possible factors that produce error — which isn't easy, or even practical… but the major ones are going to be component tolerance and temperature coefficients, and that's about the best you can do) and Monte Carlo analysis is… optimistic? realistic? The problem is that you can't tell unless you know the error distributions.

If I buy a reel of ±1% surface-mount chip resistors, I have absolutely no idea what the distribution of their resistances is going to be, except that they'll all be very likely to be within 1% of their nominal values at 25°C, because that's what the manufacturer claims. Suppose I've bought a reel of 5000 10kΩ resistors. Then 2500 of them might measure 10.1kΩ and 2500 might measure 9.9kΩ. Or they might all be 10.1kΩ. Or they might be uniformly distributed between 9.9kΩ and 10.1kΩ. Or they might have a tight normal distribution around 9.93kΩ (say, a mean of 9.93kΩ and standard deviation of 2.4Ω) for this reel, but if I buy another reel manufactured from a different batch of raw materials, then they might have a similar tight normal distribution around 10.02kΩ. Maybe the resistors manufactured on Thursday nights tend to run 20 ohms higher than the others, because the factory foreman is a stupid jerk and likes the temperature in his factory a few degrees warmer than the 20°C ± 1°C specified by the company's engineering staff, and that throws off some of the manufacturing processes slightly…. quite possibly they would still pass the 1% tolerance test, although the foreman should be fired for adding unnecessary sources of error.

The distributions are likely to be roughly Gaussian. It's just that you can't trust that to be the case. There's an apocryphal story I read somewhere, but can't find, and which may be false, that long ago, the 1% resistors and 5% resistors were two different grades from the same manufacturing process; in other words:

  • each resistor was measured
  • if the resistor was within 1% of nominal, it went into the 1% pile
  • if the resistor was not within 1% of nominal, but was within 5% of nominal, it went into the 5% pile
  • if the resistor was not within 5% of nominal, it went into the trash

This kind of process could produce some strange distributions:


Rnom = 10e3
R = Rnom * (1 + 0.015*np.random.randn(N))

bin_size = 0.002
bins = np.arange(0.92,1.08001,bin_size) * Rnom
counts, _ = np.histogram(R,bins=bins)
bin_center = (bins[:-1] + bins[1:])/2.0

selections = [('1%', 'green', lambda x: abs(x-1) <= 0.01),
              ('5%', 'yellow', lambda x: ((0.01 < abs(x-1)) & (abs(x-1) <= 0.05))),
              ('reject', 'red', lambda x: 0.05 < abs(x-1))]

fig = plt.figure(figsize=(7,4))
ax = fig.add_subplot(1,1,1)
for name, color, select_func in selections:
    ii = select_func(bin_center/Rnom)
    ax.bar(bin_center[ii], counts[ii], width=bin_size*Rnom,
           color=color, label='%s (N=%d)' % (name, sum(counts[ii])))
ax.set_xlim(0.92*Rnom, 1.08*Rnom)
ax.legend(fontsize=12, labelspacing=0)
ax.set_title('Distribution of resistors selected from $\sigma=0.015$')
ax.set_xlabel('resistance (ohms)')

Here we have a normal distribution with \( \sigma=0.015 R \) (150 ohms).

  • about half of them are 1% resistors, with the green distribution: mostly uniformly distributed, with a slight clustering around nominal
  • about half are 5% resistors, with the yellow distribution: a normal distribution with a gap in the center; most are in the 1 – 2% tolerance range
  • around 0.08% of them are rejected because they're more than 5% from nominal

Numerical answers for this Gaussian distribution — rather than samples from a Monte Carlo process — can be determined using the cumulative distribution function scipy.stats.norm.cdf:

import scipy.stats

stdev = 0.015
def cdf_between(r1, r2=None):
    cdf1 = scipy.stats.norm.cdf(r1/stdev)
    if r2 is None:
        return 1-cdf1
    return scipy.stats.norm.cdf(r2/stdev)-cdf1

# the 2* is to capture left-side and right-side distributions
N1pct = 2*cdf_between(0,0.01)
N5pct = 2*cdf_between(0.01,0.05)
ranges = [0, 0.005, 0.01, 0.02, 0.05]
for i, r0 in enumerate(ranges):
    if i+1 < len(ranges):
        r1 = ranges[i+1]
        tail = False
    else:
        r1 = None
        tail = True
    fraction = 2*cdf_between(r0,r1)
    if not tail:
        print "%.1f%% - %.1f%%: %.6f (%.2f%% of %d%% tolerance)" % (r0*100,r1*100,fraction,
                                                         fraction/(N1pct if r0 < 0.01 else N5pct)*100,
                                                         1 if r0 < 0.01 else 5)
    else:
        print "     > %.1f%%: %.6f" % (r0*100,fraction)
print "%.6f: 1%% tolerance" % N1pct
print "%.6f: 5%% tolerance" % N5pct
0.0% - 0.5%: 0.261117 (52.75% of 1% tolerance)
0.5% - 1.0%: 0.233898 (47.25% of 1% tolerance)
1.0% - 2.0%: 0.322563 (63.98% of 5% tolerance)
2.0% - 5.0%: 0.181564 (36.02% of 5% tolerance)
     > 5.0%: 0.000858
0.495015: 1% tolerance
0.504127: 5% tolerance

  • 49.50% of these apocryphal resistors were graded as 1%

    • 52.75% of them less than 0.5% from nominal
    • 47.25% of them between 0.5% and 1% tolerance

  • 50.41% of these apocryphal resistors were graded as 5%

    • 63.98% of them between 1% and 2% tolerance
    • 36.02% of them between 2% and 5% tolerance

  • 0.09% of these apocryphal resistors were more than 5% from nominal

Grading may still be done for some electronic components (perhaps voltage references or op-amps), but it's not a great manufacturing strategy. The demand for the various grades can fluctuate over time, and is unlikely to match up perfectly with the yields of those grades. Suppose that you are the manufacturing VP of Danalog Vices, Inc., which produces the DV123 op-amp in two grades:

  • an "A" grade op-amp with input offset voltage less than 1mV
  • a "B" grade op-amp with 1mV – 5mV input offset.

Suppose also that the manufacturing process ends up with 40.7% in the "A" grade, 57.2% in the "B" grade, and 2.1% as yield failures.

Maybe in 2019, there were orders for 650,000 DV123A op-amps and 800,000 DV123B op-amps. To meet this demand, Danalog Vices fabricated wafers with enough dice for 1.8 million parts: 732,600 DV123A, 1,029,600 DV123B, and 37,800 yield failures, meeting demand with a little extra. At the end of the year, there are 82,600 excess DV123A in inventory and 229,600 excess DV123B in inventory.

Now in 2020, the forecasted orders are 900,000 DV123A and 720,000 DV123B op-amps. (Some major customer decided they needed higher precision.) You don't have many options here… making 2.2 million dice would produce 895,400 DV123A op-amps and 1,258,400 DV123B op-amps. Combined with the previous year's inventory, this would be enough to meet demand, plus 78,000 extra DV123A op-amps and 768,000 extra DV123B. Lots of excess B grade op-amps.
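The inventory arithmetic is easy to script, with the yield fractions and order quantities as assumed above:

```python
# Danalog Vices production arithmetic for the grading example.
yield_A, yield_B = 0.407, 0.572   # fraction of dice landing in each grade

def production(n_dice, orders_A, orders_B, inv_A=0, inv_B=0):
    made_A = n_dice * yield_A
    made_B = n_dice * yield_B
    # leftover inventory after filling orders
    return made_A + inv_A - orders_A, made_B + inv_B - orders_B

# 2019: 1.8M dice against 650K A-grade / 800K B-grade orders
inv_A, inv_B = production(1.8e6, 650e3, 800e3)
print("2019 leftovers: A=%d, B=%d" % (inv_A, inv_B))
# 2020: 2.2M dice against 900K / 720K orders, starting from 2019 inventory
inv_A, inv_B = production(2.2e6, 900e3, 720e3, inv_A, inv_B)
print("2020 leftovers: A=%d, B=%d" % (inv_A, inv_B))
```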

That’s not going to work very effectively. If the fraction of buyer orders of A grade elements is way increased than the pure yield of A grade elements, then there can be an extra of B grade elements. If we had too many A grade op-amps, Danalog Vices might package deal and promote them as B grade op-amps, however an extra of B grade op-amps will find yourself as scrapped stock.

There’s no life like approach to shift the manufacturing course of to make extra A grade op-amps by means of grading alone. We might add a laser-trimming step on the manufacturing line to enhance B-grade cube till they meet A-grade specs, which provides some price.

To jump out of this grading quagmire and back to the overall point I'm trying to make: you cannot be certain of the error distribution of components. Can't can't can't. The best you may be able to do is get characterization data from the manufacturer, but this might be for one sample batch and may not be representative of the manufacturing process over the product's full life cycle.

Characterization data

Whether you can find this characterization data in the datasheet is really hit-or-miss. Some datasheets don't have it at all. Some have limited information, as in the LM2903B datasheet:

The datasheet lists a specification of ±2.5mV offset voltage at 25°C. The characterization graph shows 62 samples within ±1.0mV offset voltage, with little variation over temperature.

A more detailed example of this kind of characterization data is in the MCP6001 op-amp datasheet, which shows histograms of input offset voltage, offset voltage tempco, the offset voltage curvature or quadratic temperature coefficient (!), and input bias current.

Here's Figure 2-1, showing an offset voltage histogram of around 65,000 samples:

The MCP6001 datasheet claims ±4.5mV maximum at 25°C. I crunched some numbers based on reading the histogram and came up with a mean of −0.3mV and a standard deviation of σ=1.04mV; if this were representative of the population as a whole, then the limits of ±4.5mV are roughly −4σ and +4.6σ, and for a normal distribution, would correspond to expected yield failures of roughly 32ppm below −4.5mV and 2ppm above +4.5mV. (These are just the results of scipy.stats.norm.cdf(-4) and scipy.stats.norm.cdf(-4.6).)

The main value of the characterization graphs (to me, at least) is not as numerical data that I can depend on directly, but rather that they show a roughly Gaussian distribution (and not, say, a uniform distribution) and show how conservative the manufacturer is in choosing minimum/maximum limits given this characterization data. You hear “six sigma” bandied about a lot — which can be interpreted in one way as having limits equal to six standard deviations from the mean — and for a Gaussian distribution, this represents about 2 failures per billion samples, covering both low-end and high-end tails. (2*scipy.stats.norm.cdf(-6))
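If you don't have scipy handy, the same tail probabilities can be computed with the standard normal CDF in Python's standard library; the z-values below are the rounded −4σ / +4.6σ limits estimated from the histogram:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF

# Spec limits of ±4.5 mV sit at roughly −4σ and +4.6σ from the estimated mean
ppm_low = phi(-4.0) * 1e6     # expected failures below the lower limit, in ppm
ppm_high = phi(-4.6) * 1e6    # expected failures above the upper limit, in ppm

# "Six sigma" limits on a Gaussian: both tails combined, per billion samples
ppb_six_sigma = 2 * phi(-6.0) * 1e9
```

Running this gives about 32ppm, 2ppm, and 2 failures per billion, matching the figures quoted above.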

Note, however, the fine print at the beginning of the Typical Performance Curves section of the MCP6001 datasheet:

The graphs and tables provided following this note are a statistical summary based on a limited number of
samples and are provided for informational purposes only. The performance characteristics listed herein
are not tested or guaranteed. In some graphs or tables, the data presented may be outside the specified
operating range (e.g., outside specified power supply range) and therefore outside the warranted range.

So use the specs! Ignore worst-case analysis entirely at your own peril.

Mitigation strategies

We've talked a lot about how different sources of error — resistance tolerance, comparator input offset voltage, temperature coefficients, etc. — contribute to the total uncertainty of a circuit parameter like a threshold voltage. It paints a grim picture; you'll find that aside from the simplest of circuits, it's hard to achieve an overall error of less than 1%.

There are, however, ways to compensate for the effects of component tolerances. I count at least three:

  • we can use ratiometric design techniques to reduce the effect of certain error sources
  • we can calibrate our circuitry
  • we can use digital signal processing to reduce the need for components with tight tolerances

Ratiometric design

Ratiometric design is a method of circuit design where measurements are made of the ratio of two quantities rather than their absolute values. If they share some common source of error, then that error will cancel out. I talked about this in an article on thermistor signal conditioning. If I have a 3.3V supply feeding a voltage divider, and the same 3.3V supply used as the reference for an analog-to-digital converter, then the ADC reading will be the voltage divider ratio \( R_2/(R_1+R_2) \) — plus ADC gain/offset/linearity errors — and will not be subject to any variation in the 3.3V supply itself.

Or, suppose there are reasons to avoid a voltage divider configuration, and instead I need a current source to drive a resistive sensor, as shown in the left circuit below:

Here the ADC reading (as a fraction of full-scale voltage \( V_{ref} \)) is \( I_0R_{sense}/V_{ref} \), which is sensitive to errors in both the current \( I_0 \) and the voltage reference \( V_{ref} \).

We can handle this resistive sensor ratiometrically with the circuit on the right, by using a reference resistor \( R_{ref} \) and a pair of analog multiplexers U1, U2. Here we have to take two readings, \( x_1 = I_0R_{sense}/V_{ref} \) and \( x_2 = I_0R_{ref}/V_{ref} \); if we divide them, we get \( x_1/x_2 = R_{sense}/R_{ref} \), which is sensitive only to tolerances in the two resistors; variations in current \( I_0 \) and voltage \( V_{ref} \) cancel out.
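A minimal numeric sketch of the two-reading scheme (all values hypothetical; \( I_0 \) and \( V_{ref} \) are deliberately off-nominal to show that they cancel):

```python
I0 = 1.05e-3       # current source: 5% high of a nominal 1 mA
Vref = 3.28        # ADC reference: slightly below the nominal 3.3 V
R_sense = 1234.0   # resistive sensor, unknown to the firmware
R_ref = 1000.0     # known reference resistor

x1 = I0 * R_sense / Vref   # reading 1: the sensor (fraction of full scale)
x2 = I0 * R_ref / Vref     # reading 2: the reference resistor

R_measured = (x1 / x2) * R_ref   # I0 and Vref cancel in the ratio
```

`R_measured` comes out equal to `R_sense` despite the errors in `I0` and `Vref`; the only remaining error sources are the tolerances of the two resistors (and the ADC itself).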


Calibration

Calibration involves a measurement of an accurate, known reference. If I have some device with a measurement error that is consistent — for example a gain and offset — then I can measure several known inputs during a calibration step, and use those measurements to compensate for device errors.

One very common instance of calibration is the use of a tare weight with a scale — the weight of a platform or container is unimportant, so when that platform or container is empty, we can weigh it and use the measurement as a reference to subtract from a second measurement. When you go to the deli counter at a supermarket and get a half-pound or 200g of sliced turkey, the scale is first calibrated to an empty measurement; then the weight of the turkey is determined using a measurement relative to the empty measurement.

That kind of measurement calibrates out the offset but not the gain; a gain calibration would require some standard weight to be used, like a standard 1kg weight.

Calibration can be done during manufacturing with external equipment (1kg weights, voltage or temperature standards, etc.) — this can be somewhat time-consuming or costly. After manufacturing, such measurements are possible only at limited times and with substantial expense.

The most important aspect of relying on calibration is to ensure that the calibration measurements remain valid.

If a circuit is prone to voltage offset, and we want to use calibration to compensate for that offset, we need to make sure the offset doesn't change significantly during the time of use: drift due to time and temperature changes can eliminate the benefit of calibration. In fact, excessive drift can make a measurement using calibration worse than without that calibration — suppose some device measures voltage and has a worst-case accuracy of ±2mV. The voltage offset during calibration might be +1.4mV; if it drifts to −1.4mV, then the resulting accuracy including calibration is 2.8mV of error. So measurement drift is a serious concern. Tare weight is an easy case for avoiding the effects of drift: people or sliced turkey or cars on a scale can be measured just a few seconds after a tare step calibrates out the offset, which is generally too short a time for temperature changes or time drift to matter. On the other hand, laboratory test equipment like oscilloscopes or multimeters is typically used for 12 months between calibrations, so it has to be designed for low drift and low temperature coefficient.
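Both halves of this argument — tare calibration and the drift caveat — fit in a few lines of Python (all numbers hypothetical):

```python
def scale_reading(true_weight_g, offset_g):
    """Hypothetical scale with a constant offset error."""
    return true_weight_g + offset_g

offset = 3.7                           # unknown platform/container offset, grams
tare = scale_reading(0.0, offset)      # weigh the empty container first
gross = scale_reading(200.0, offset)   # 200 g of sliced turkey
net = gross - tare                     # the offset cancels: 200.0 g

# Drift caveat: if the offset moves between calibration and use,
# the "calibrated" error can exceed the uncalibrated ±2 mV spec.
offset_at_cal = +1.4                   # mV, at calibration time
offset_in_use = -1.4                   # mV, after drift
residual_mV = abs(offset_in_use - offset_at_cal)   # 2.8 mV of error
```

The tare works because both readings are taken close together in time; the drift example fails because the two readings are not.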

Digital signal processing

DSP can also help remove the need for analog components that introduce errors. Analog signal conditioning is still necessary to handle low-level amplification and high-frequency issues, but other operations like squaring or logarithms or filtering or applying temperature compensation can be done in the digital domain, where numeric errors can be made arbitrarily small.

One of the major achievements of DSP has been in equalization in communications. 56K dialup modems and DSL both represent a triumph of DSP over the limitations of analog signal processing. We now take it for granted that we have Internet bandwidths of 40Mbps. I remember the old acoustic-coupled 300-baud modems: imagine transmitting a file at 30 bytes per second. That's about 2.6 megabytes per day. There's only so much you can do with analog signal conditioning before you run into the challenges of component tolerances. DSP eliminates all that — assuming you can sample and process the data fast enough.

Just make sure to avoid using software as an overused crutch — the mantra of “Oh, we can fix that in software” makes me cringe. If there are design errors in the analog domain, they can be much more complex and costly to fix (and verify!) in the digital domain. Mismatches in components or noise coupling are problems that should be handled before they get inside a microcontroller. One of my pet peeves is the use of single-ended current-sense amplifiers in motor drives. Current sense resistors are relatively inexpensive these days: you can buy 10mΩ 1% 1W 1206 chip resistors for less than 10 cents in quantity 1000 that can produce a high-quality current sense signal without adding a lot of extra voltage drop. But they should be used with a differential amplifier to remove the effects of common-mode voltage that results from parasitic circuit resistance and inductance. This common-mode voltage is difficult (if not impossible) to “just fix in software” — it can change with temperature, causes unwanted errors in analog overcurrent sensing, and can introduce coupling between current paths in the circuit.

Sources of measurement error should be well-understood. The overvoltage circuit I discussed in this article is a good example; if I just design a circuit and build it and consider it done because “well, it seems to work”, then every source of error represents a latent risk in my design. Understanding and bounding these risks is the key to a successful design.


Today we went on a grand tour of the idea of component tolerances.

We looked at this overvoltage detection circuit:

  • R1 = 121kΩ
  • R2 = 10.0kΩ
  • R3 = 2.00kΩ
  • R4 = 1.00kΩ
  • R5 = 1.00MΩ
  • R6 = 5.1kΩ
  • U1 = 1/2 LM2903B
  • U2 = NCP431B
  • C1 = 56pF
  • C2 = 100pF

We examined the various sources of component tolerance error, including:

  • static errors

    • mismatch of R1 / R2
    • other sources of error besides the “1%” tolerance listed on a bill of materials (for example, temperature coefficient and mechanical strain)
    • accuracy of voltage reference U2
    • comparator input offset voltage and input bias current

  • dynamic errors

    • noise filtering with C1
    • comparator response time vs. overdrive

We discussed some aspects of selecting the voltage reference and comparator, and covered comparator hysteresis.

We explored the use of statistical analysis (Monte Carlo methods) as a more optimistic alternative to worst-case analysis, and investigated the issues of component error distribution.

Finally, we looked at methods of mitigating component error:

  • ratiometric measurements
  • calibration
  • digital sign processing

Along the way, we touched on various minor tangents:

  • the price of 5% / 1% / 0.5% / 0.1% chip resistors
  • irreversible resistance changes upon soldering into a PCB
  • voltage divider error sensitivity to resistor tolerance, as a function of the nominal voltage divider ratio \( \alpha \), namely \( S \approx 2(1-\alpha) \). So a voltage divider ratio near 1 has hardly any error, whereas small voltage divider ratios nearly double the resistor tolerance: a 10:1 voltage divider using 1% resistors can have a worst-case error of roughly 2%.
  • the internal architecture of the LM393 comparator
  • grading of components based on measured values
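That divider-sensitivity rule of thumb is easy to spot-check numerically — here with a 10:1 divider and 1% resistors:

```python
def divider_ratio(r1, r2):
    """Nominal voltage divider ratio alpha = R2 / (R1 + R2)."""
    return r2 / (r1 + r2)

tol = 0.01                      # 1% resistors
r1, r2 = 9000.0, 1000.0         # a 10:1 divider, alpha = 0.1
alpha = divider_ratio(r1, r2)

# Worst case: R1 high and R2 low (or the reverse)
worst = divider_ratio(r1 * (1 + tol), r2 * (1 - tol))
rel_error = abs(worst - alpha) / alpha   # about 1.8% relative error
rule_of_thumb = 2 * (1 - alpha) * tol    # the S ~= 2*(1 - alpha) estimate
```

The exact worst-case error (≈1.79%) lands right next to the \( 2(1-\alpha) \) estimate of 1.8%, nearly double the 1% resistor tolerance.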

I hope you come away with some useful techniques for managing component error in your next project.

Thanks for reading!

© 2020 Jason M. Sachs, all rights reserved.
