Subject: Re: Tolerance in text
From: Geoff Lane <geoff -at- gjctech -dot- co -dot- uk>
To: TECHWR-L <techwr-l -at- lists -dot- techwr-l -dot- com>
Date: Tue, 21 Feb 2006 17:21:48 +0000
On Tuesday, February 21, 2006, Jonathan West wrote:
>> Perhaps as important is implied tolerance. Any dimension has an
>> implicit rounding tolerance. So, "25 mm" actually means
>> "25 mm + 0.49 mm / - 0.50 mm" because any dimension in that range
>> rounds to 25 mm to the nearest millimetre, which is the accuracy
>> implied by omitting the first decimal place.
> I disagree with this concept of implied tolerance. My disagreement comes
> from my background as an author and/or editor of several thousand pages of
> national and international standards, where the correct writing and
> understanding of tolerances is important.
---
FWIW, I have to disagree with your disagreement. My disagreement comes
from decades on the "shop floor" in a variety of disciplines (and
another couple of decades writing about it) -- not from a theoretical
standpoint. If you're building a ship 254 ft long, the actual finished
length is permitted to be no less than 253 ft 6 in and must be less
than 254 ft 6 in. The naval architect doesn't need to write the
tolerance because *in that industry* the implied tolerance is
understood.
Theorists can pontificate all they want. They can say that no
inferences shall be made, but shop floor practices can make those
inferences. If nothing else, the precision with which a dimension is
given implies the precision with which you must measure it.
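To make that concrete, here's a minimal sketch (mine, not anything
from a standard) of the rounding convention described above: the
implied tolerance is half the smallest unit actually written, with
trailing zeros in a whole number treated as not significant, as in
example 1 below. The function name and interface are illustrative
only.

    def implied_tolerance(dimension_text):
        # Half the place value of the last significant digit written.
        if '.' in dimension_text:
            # "25.0" -> one decimal place -> +/- 0.05
            return 0.5 * 10 ** -len(dimension_text.split('.')[1])
        # Whole number: treat trailing zeros as not significant,
        # so "250" is read as "to the nearest ten" -> +/- 5.
        significant = dimension_text.rstrip('0')
        return 0.5 * 10 ** (len(dimension_text) - len(significant))

    print(implied_tolerance('25'))    # 0.5  -> 25 mm means 24.5 to 25.5 mm
    print(implied_tolerance('25.0'))  # 0.05 -> 25.0 mm means 24.95 to 25.05 mm
    print(implied_tolerance('250'))   # 5.0  -> 250 ft means 245 to 255 ft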
How tolerance is implied depends on the industry, which IIRC the OP
didn't specify. I'll grant that many industries don't infer accuracy
from the way a dimension is written. However, consider the following
examples:
1. 250 ft ± 1 in
Without the explicit tolerance, the implied tolerance is five feet.
Thus, the specified tolerance tells workers that the implied
tolerance is too inaccurate. The specified tolerance gives the
order of accuracy required. Because the tolerance is given to the
nearest inch, in some industries the worker is entitled to work to
the nearest inch and the finished dimension would be accepted if
more than 249 feet 10 1/2 inches and less than 250 feet 1 1/2
inches.
You can check this dimension with a surveyor's tape because it is
accurate enough over the specified distance.
2. 250 ft 0 in ± 1.0 in
Without the explicit tolerance, the implied tolerance is half an
inch. Again, the specified tolerance tells workers that the implied
tolerance is too inaccurate and indicates the degree of accuracy
required. Because the first decimal place is specified, the worker
must work to the nearest tenth of an inch and the work would be
accepted only if the finished dimension is greater than or equal
to 249 ft 10.95 in and less than 250 ft 1.05 in.
You cannot check this dimension with a surveyor's tape because it
isn't accurate enough.
So while the two specifications might seem equivalent, they are not
(see the sketch below). *In some industries*, the precision with
which a dimension is specified can indicate how critical that
dimension is.
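Here's a quick sketch (again mine, purely illustrative) that works
the two acceptance bands out in inches. The rule applied is the one
described above: the stated tolerance is widened by half the smallest
unit in which that tolerance is written, because the measurement
itself is only made to that unit.

    INCHES_PER_FOOT = 12
    nominal = 250 * INCHES_PER_FOOT  # 250 ft, expressed in inches

    def acceptance_band(tolerance, measuring_step):
        # Widen the stated tolerance by half the unit of measurement.
        slack = tolerance + measuring_step / 2
        return nominal - slack, nominal + slack

    # Example 1: +/- 1 in, measured to the nearest inch
    print(acceptance_band(1.0, 1.0))  # (2998.5, 3001.5)
                                      # 249 ft 10.5 in to 250 ft 1.5 in
    # Example 2: +/- 1.0 in, measured to the nearest tenth of an inch
    print(acceptance_band(1.0, 0.1))  # (2998.95, 3001.05)
                                      # 249 ft 10.95 in to 250 ft 1.05 in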
Consider the following example for a pin diameter to fit a 25 mm hole:
3. 25 mm + 0 / - 0.5 mm
This is obviously a loose fit (and with a half-millimetre
tolerance, a "rattling good fit" at that). The implied accuracy for
the oversize is absolute, whereas the implied accuracy for the
undersize is less rigorous. Functionally, it probably wouldn't
matter if the pin came out at 24.497 mm. However, it would matter
if the pin were 25.001 mm, because that would take the fit into a
different class.
Now 24.75 mm ± 0.25 mm might at first glance seem the same as
example 3 -- but it is not. Example 3 implies a loose fit that the
symmetric tolerance does not. Does it make a difference?
Resoundingly, "Yes!" If you can infer that a loose fit is required
and the assembly binds, you can adjust things to meet the designer's
intentions. It's part of that thing we call "craftsmanship".
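A final sketch (illustrative names, mine) of that point: the two
specifications admit exactly the same interval of diameters, yet the
asymmetric form tells the fitter which limit is the hard one and the
symmetric form does not.

    def limits_asymmetric(nominal, plus, minus):
        # e.g. 25 mm +0 / -0.5 mm
        return nominal - minus, nominal + plus

    def limits_symmetric(nominal, tolerance):
        # e.g. 24.75 mm +/- 0.25 mm
        return nominal - tolerance, nominal + tolerance

    print(limits_asymmetric(25.0, 0.0, 0.5))  # (24.5, 25.0)
    print(limits_symmetric(24.75, 0.25))      # (24.5, 25.0) -- same numbers,
    # but the asymmetric form's "never oversize" intent is lost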
During my apprenticeship I was taught to infer the criticality of a
dimension from the accuracy with which that dimension was given.
During my journeyman's time, I learned to concentrate on critical
dimensions and to use pragmatism on all others. Critical dimensions
must be checked, but non-critical dimensions can be transferred, etc.
without measurement. I learned that it was unacceptable to waste time
and money on getting something "just so" for no functional benefit.
The bottom line is to identify industry practices and write
accordingly. If a dimension is critical, those practices might require
the specification you write to imply that. That said, identifying
how critical a dimension is and writing accordingly will not be
misinterpreted even in industries that do not infer tolerances. At the
very least, it's good practice to indicate the precision with which
each dimension must be measured.