
In my last post on worst-case tolerance analysis I concluded with the fact that the worst-case method, although extremely safe, is also extremely expensive.

Allow me to elaborate, then offer a resolution in the form of statistical tolerance analysis.

## cost

A worst-case tolerance analysis is great to make sure that your parts will always fit, but if you're producing millions of parts, ensuring each and every one works is expensive and, under most circumstances, impractical.

Consider these two scenarios.

- You make a million parts, and it costs you $1.00 per part to make sure that every single one works.
- You make a million parts, but decide to go with cheaper, less accurate parts. Now your cost is $0.99 per part, but 1,000 parts won't fit.

In the first scenario, your cost is:

$1.00/part * 1,000,000 parts = $1,000,000

In the second scenario, your cost is:

$0.99/part * 1,000,000 parts = $990,000,

but you have to throw away the 1,000 rejects which cost $0.99/part. So your total cost is:

$990,000 + 1,000 * $0.99 = $990,990, which means you save $9,010.
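To sanity-check the arithmetic, here's a quick Python sketch using the same make-believe numbers from the two scenarios above:

```python
PARTS = 1_000_000

# Scenario 1: precise parts at $1.00 each, zero rejects
cost_precise = 1.00 * PARTS

# Scenario 2: cheaper parts at $0.99 each, plus 1,000 rejects
# that you paid for but have to throw away
cost_cheap = 0.99 * PARTS + 1_000 * 0.99

savings = cost_precise - cost_cheap
print(f"${savings:,.0f}")  # $9,010
```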

Those actual numbers are make-believe, but the lesson holds true: by producing less precise (read: crappier) parts and throwing some of them away, you save money.

Sold yet? Good. Now let's take a look at the theory.

## statistical tolerance analysis: theory

The first thing you'll want to think of is the bell curve. You may recall the bell curve being used to explain that some of your classmates were smart, some were dumb, but most were about average.

The same principle holds true in tolerance analysis. The bell curve (only now it's called the "normal distribution") states that when you take a lot of measurements, be it of test scores or block thicknesses, some measurements will be low, some high, and most in the middle.

Of course, "some" and "most" don't help you get things done. Math does, and that's where the normal distribution (and Excel... attachment below) come in.

*sidebar: Initially I planned on diving deep into the math of RSS, but Hileman does such a good job on the details, I'll stick with the broad strokes here. I highly suggest printing out his post and sitting down in a quiet room, it's the only way to digest the heavy stuff.*

## the normal distribution and "defects per million"

Using the normal distribution, you can determine how many defects (defined as parts that come in outside of allowable tolerances) will occur. The standard unit of measure is "defects per million", so we'll stick with that.

There are two numbers you need to define a normal distribution, and they are represented by μ (pronounced "mew") and σ (pronounced "sigma"):

- μ is the mean, a measure of the "center" of a distribution.
- σ is the standard deviation, a measure of how spread out a distribution is. For example, the number sets {0,10} and {5,5} both have an average of 5, but the {0,10} set is spread out and thus has a higher standard deviation.
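The {0,10} vs. {5,5} comparison is easy to verify. Here's a short Python sketch (using the population standard deviation, `pstdev`, since we're treating each pair as the whole set):

```python
from statistics import mean, pstdev

a = [0, 10]
b = [5, 5]

# Both sets have the same mean...
print(mean(a), mean(b))      # 5 5

# ...but very different spread
print(pstdev(a), pstdev(b))  # 5.0 0.0
```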

Using one of our blocks (remember those?) as an example...

Let's say you measure five blocks like the one above (in practice it's best to measure at least 30, but we'll keep it at 5 for the example) and get the following results:

- x1 = 1.001"
- x2 = 0.995"
- x3 = 1.000"
- x4 = 1.001"
- x5 = 1.003"

The average (μ) is 1.000 and the standard deviation (σ) is .003. Plug those into a normal distribution, and your tolerances break down like this (see the 'after production' tab in the attached Excel file for formulas):
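If you'd rather not do it in a spreadsheet, Python's standard library computes both numbers directly (`stdev` is the sample standard deviation, which is what you want for a handful of measured parts):

```python
from statistics import mean, stdev

# The five block measurements, in inches
blocks = [1.001, 0.995, 1.000, 1.001, 1.003]

mu = mean(blocks)       # average
sigma = stdev(blocks)   # sample standard deviation (n - 1 in the denominator)

print(round(mu, 3), round(sigma, 3))  # 1.0 0.003
```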

If you require the blocks to be 1.000±.003 (±1σ), the blocks will pass inspection 68.27% of the time... 317,311 defects per million.

If you require the blocks to be 1.000±.006 (±2σ), the blocks will pass inspection 95.45% of the time... 45,500 defects per million.

If you require the blocks to be 1.000±.009 (±3σ), the blocks will pass inspection 99.73% of the time... 2,700 defects per million.

and so on.

Using the data above you can say with confidence (assuming you measured enough blocks!) that if you were to use a million blocks, all but 2700 of them would come in between 0.991 and 1.009.
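Those defect counts all fall out of one formula: the fraction of a normal distribution inside ±kσ is erf(k/√2). A minimal Python sketch (the function name `defects_per_million` is just for illustration):

```python
from math import erf, sqrt

def defects_per_million(k):
    """Expected parts per million falling outside ±k sigma
    of a normal distribution."""
    pass_rate = erf(k / sqrt(2))       # fraction inside ±k sigma
    return (1 - pass_rate) * 1_000_000

for k in (1, 2, 3):
    print(f"±{k}σ: {defects_per_million(k):,.0f} defects per million")
```

Run it and you get the same 317,311 / 45,500 / 2,700 figures quoted above.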

## root sum square and the standard deviation

If you've followed the logic closely you may notice a catch-22. Ideally, you want to do a tolerance analysis before you go to production, but how can you determine μ or σ without having samples to test... which you will only get after production?

You make (and state... repeatedly) assumptions.

The μ part is easy. You just assume that the mean will be equal to the nominal (in our case, 1.000). This is usually a solid assumption and only begins to get dicey when you talk about the nominal shifting (some like to plan for up to 1.5σ!) over the course of millions of cycles (perhaps due to tool wear), but that is another topic.

For σ, a conservative estimate is that your tolerance can be held to a quality of ±3σ, meaning that a tolerance of ±.005 will yield you a σ of 0.005/3 = 0.00167.

Let's play this out. If you are stacking five blocks @ 1.000±.005, you add up the five nominals to get μ, and take the square root of the sum of the squares of the standard deviations of the tolerances (wordy, I know), which looks like this: SQRT([.005/3]^2 + [.005/3]^2 + [.005/3]^2 + [.005/3]^2 + [.005/3]^2). (You divide by 3 because you are assuming that your tolerances represent 3 standard deviations.)
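That wordy formula is short in code. A sketch of the five-block RSS stack-up, under the same ±3σ assumption:

```python
from math import sqrt

# Five blocks, each 1.000 ± 0.005, assuming each tolerance spans ±3 sigma
nominal = 1.000
tol = 0.005
n = 5

mu_stack = n * nominal          # stack mean: nominals just add
sigma_each = tol / 3            # per-block sigma under the ±3 sigma assumption
sigma_stack = sqrt(n * sigma_each ** 2)  # root sum square of the five sigmas

print(round(mu_stack, 3), round(sigma_stack, 5))  # 5.0 0.00373
```

Note that the RSS stack sigma (≈.0037) is much smaller than the worst-case sum of the tolerances (.025) — that gap is where the cost savings live.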

That's as wordy as I'm going to get on the math (the post is already longer than I'd like); you can see it working for yourself in the 'before production' tab of the attached Excel file.

Just remember to treat those numbers with the respect they deserve, and that industry-accepted assumptions are no replacement for a heart-to-heart (and email trail) with your manufacturer. Trying to push a manufacturer to hold tolerances they aren't comfortable with is a draining and often futile exercise.

The tolerances dictate the design, not the other way around.

update: My series of posts on worst-case, root sum square, and Monte Carlo tolerance analysis started off as just a brief introduction to the basics. Since then I have heard from a number of you asking for a clear, concise (everything else out there is so heavy), usable guide to both the math behind tolerance analysis and real-world examples of when to use it. I'm currently working on it, but would love to hear what YOU would like out of it. Let me know in the comments or contact me through the site.