Total indicator reading

from Wikipedia
[Image: An indicator used to measure thickness]
[Image: Technical symbol for total run-out]

In metrology and the fields that it serves (such as manufacturing, machining, and engineering), total indicator reading (TIR), also known by the newer name full indicator movement (FIM), is the difference between the maximum and minimum measurements, that is, the range of readings of an indicator, on the planar, cylindrical, or contoured surface of a part,[1] showing its amount of deviation from flatness, roundness (circularity), cylindricity, concentricity with other cylindrical features, or similar conditions. The indicator traditionally would be a dial indicator; today dial-type and digital indicators coexist.

The earliest expansion of "TIR" was total indicated run-out and concerned cylindrical or tapered (conical) parts, where "run-out" (noun) refers to any imperfection of form that causes a rotating part such as a shaft to "run out" (verb), that is, to not rotate with perfect smoothness. These conditions include being out-of-round (that is, lacking sufficient roundness); eccentricity (that is, lacking sufficient concentricity); or being bent axially (regardless of whether the surfaces are perfectly round and concentric at every cross-sectional point). The purpose of emphasizing the "total" in TIR was to duly maintain the distinction between per-side differences and both-sides-considered differences, which requires perennial conscious attention in lathe work. For example, all depths of cut in lathe work must account for whether they apply to the radius (that is, per side) or to the diameter (that is, total). Similarly, in shaft-straightening operations, where calibrated amounts of bending force are applied laterally to the shaft, the "total" emphasis corresponds to a bend of half that magnitude. If a shaft has 0.1 mm TIR, it is "out of straightness" by half that total, i.e., 0.05 mm.

Today TIR in its more inclusive expansion, "total indicator reading", concerns all kinds of features, from round to flat to contoured. One example of how the "total" emphasis can apply to flat surfaces as well as round ones is in the topic of surface roughness, where both peaks and valleys count toward an assessment of the magnitude of roughness. Statistical methods such as root mean square (RMS) duly address the "total" idea in this respect.

The newer name "full indicator movement" (FIM) was coined to emphasize the requirement of zero cosine error. Whereas dial test indicators will give a foreshortened reading if their tips are on an angle to the surface being measured (cosine error), a drawing callout of FIM is defined as referring to the distance traveled by the extremity of the tip—not by the lesser amount that its lever-like action moves the needle. Thus a FIM requirement is only met when the measured part itself is truly in geometric compliance—not merely when the needle sweeps a certain arc of the dial.

The "TIR" abbreviation is still more widely known and used than "FIM". This is natural given that (1) many part designs that are still being manufactured are made from decades-old engineering drawings, which still say "TIR"; and (2) generations of machinists were trained with the term "TIR", whereas only recent curricula use "FIM".

from Grokipedia
Total indicator reading (TIR), also referred to as full indicator movement (FIM), is a fundamental metrological measurement in precision engineering that quantifies the total variation in a dial indicator's readings—specifically, the difference between the maximum and minimum values—when the indicator is applied to the surface of a rotating part or feature.[1][2] This metric captures discrepancies in geometry, such as deviations from true circularity, by recording the indicator's response over a full 360-degree rotation relative to a datum axis or centerline.[1][3] In manufacturing and mechanical engineering, TIR serves as a critical tool for evaluating part quality, particularly for features like shafts, cylinders, and couplings, where it assesses attributes including roundness, concentricity, and runout to ensure components meet tight tolerances before assembly into larger systems.[1] For instance, in measuring roundness, TIR represents the total diametric deviation, where a bilateral tolerance of ±0.004 inches translates to a TIR of 0.008 inches, highlighting the full extent of radial wobble or eccentricity.[3] Historically, TIR emphasized the dial face's observed change, but the ASME Y14.5 standard has shifted toward FIM to focus on the actual movement of the indicator tip, reducing potential errors from instrument mechanics and promoting clearer specifications on engineering drawings.[2] This evolution underscores TIR's role in quality control, where values exceeding specified limits signal the need for machining adjustments or rejection, thereby preventing issues like vibration, wear, or misalignment in operational machinery.[1][2]

Fundamentals

Definition

Total indicator reading (TIR) is a metrological measurement that captures the total variation observed on an indicator gauge as a part undergoes a full rotation or linear traversal, specifically defined as the difference between the maximum and minimum readings recorded during this process.[4] This technique is commonly applied using dial indicators or similar contact probes positioned against the surface of interest to detect deviations from an ideal geometry.[1] Mathematically, TIR is calculated as:
TIR = Maximum reading − Minimum reading
For instance, if the indicator shows a highest positive excursion of +0.002 inches and a lowest negative excursion of -0.001 inches, the TIR would be 0.003 inches.[4][5] This straightforward subtraction provides a direct quantitative measure of the overall indicator movement without requiring separate analysis of individual deviations.[6] The primary purpose of TIR is to quantify geometric deviations, such as those affecting roundness, cylindricity, or flatness, relative to a specified reference axis or plane in precision components.[4] In the context of rotating parts, TIR effectively represents the total diametric deviation, as the full range of indicator travel corresponds to the complete variation across the part's diameter when measured radially about the axis of rotation.[3] This makes it a key metric for assessing the functional integrity of cylindrical features, ensuring minimal eccentricity or out-of-roundness that could impact performance in assemblies.[6]
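The subtraction above can be sketched in a few lines of Python (an illustrative helper, not part of any cited standard; the readings list is hypothetical):

```python
def tir(readings):
    """Total indicator reading: the range (maximum minus minimum)
    of the signed indicator values recorded over one full rotation
    or traversal of the part."""
    return max(readings) - min(readings)

# Worked example from the text: highest excursion +0.002 in,
# lowest excursion -0.001 in, giving a TIR of 0.003 in.
readings_in = [0.0, 0.0015, 0.002, -0.001, 0.0005]
tir_in = tir(readings_in)
```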

Historical development

Dial indicators, essential tools for precise measurement in metrology, were introduced in the late 19th century to quantify small distances and angular deviations. The first patent for such a device was filed by John Logan, a watchmaker from Waltham, Massachusetts, on May 15, 1883, describing it as an improvement in gages for measuring shoulders, grooves, or holes in articles.[7] This innovation built on earlier mechanical gauges, enabling more accurate readings through a dial mechanism that amplified minute movements.[8]

Initially, the concept of total indicator reading emerged in the context of runout measurements for rotating parts, with the term "total indicated run-out" applied specifically to cylindrical or conical components. Here, "run-out" was the noun describing radial or axial deviations during rotation, as captured by the full sweep of the indicator over one revolution.[9] This terminology focused on the total variation observed, with half the reading often representing the eccentricity of cylindrical surfaces.[6]

As metrology advanced into the mid-20th century, the term evolved to "total indicator reading" (TIR) to accommodate broader applications beyond rotating cylindrical parts, including assessments of flat surfaces and non-conical geometries. This shift reflected the growing use of dial indicators in general precision inspection, where the full range of indicator movement quantified surface irregularities without implying rotation-specific runout.[5]

In contemporary standards, the term "full indicator movement" (FIM) has gained prominence as a more precise descriptor, particularly in international contexts and ASME Y14.5 guidelines, to emphasize the actual displacement of the indicator tip while accounting for potential cosine errors in measurement setup. Previously used U.S. terms like TIR and full indicator reading (FIR) align in meaning with FIM but have been standardized under this newer nomenclature for clarity in global manufacturing.[10][11]

Measurement principles

Basic setup and procedure

The basic setup for measuring total indicator reading (TIR) involves securing the workpiece on a stable rotation fixture, such as V-blocks or precision centers, to allow smooth 360-degree rotation around a defined datum axis while minimizing any external vibrations or deflections. The dial indicator is mounted on a rigid stand, like a magnetic base or height gauge, positioned so its probe contacts the surface perpendicularly with a light preload (typically 0.1-0.3 mm) to ensure consistent readings without excessive force. This configuration establishes a reliable reference for capturing variations in the surface relative to the datum, often on a surface plate for flatness control or a lathe spindle for rotational accuracy.[12][13][14]

The measurement procedure begins by zeroing the indicator at a starting position on the surface, then rotating the workpiece a full 360 degrees while observing and recording the maximum and minimum readings on the dial. For features requiring axial traversal, such as longer surfaces, the indicator is slowly moved parallel to the datum axis along the entire length, repeating the rotation at multiple points to capture the total variation; TIR is calculated as the difference between the overall maximum and minimum values observed. This process ensures the measurement reflects the full range of indicator movement, providing a direct assessment of surface uniformity relative to the reference.[12][13][14]

Datum selection is critical for consistent TIR measurements, requiring the establishment of a primary reference axis or plane—such as the centerline of a shaft or a mounting face—that simulates the functional orientation of the part in assembly. The datum must be clearly defined and fixtured to avoid introducing errors from misalignment, with the workpiece rotated about this axis to isolate true geometric deviations. For instance, in measuring a cylindrical shaft, the part is mounted between centers or in V-blocks aligned to the datum axis (e.g., an end face or bearing journal), and the indicator probe is placed against the outer diameter; rotation of the shaft allows recording of the TIR as the total diametric variation, highlighting any eccentricity or out-of-roundness.[12][13][14]
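The traversal procedure above can be expressed as a short sketch (illustrative Python; the station data and function name are invented for the example):

```python
def traverse_tir(stations):
    """TIR over a traversed feature: the difference between the
    overall maximum and minimum readings taken across every
    rotation station along the datum axis."""
    readings = [r for station in stations for r in station]
    return max(readings) - min(readings)

# Hypothetical shaft measured at three axial stations, four
# angular positions per full rotation (signed readings in mm).
shaft_mm = [
    [0.000, 0.004, 0.006, 0.003],   # near the fixture
    [-0.002, 0.001, 0.005, 0.002],  # mid-span
    [-0.003, 0.000, 0.004, 0.001],  # free end
]
total_mm = traverse_tir(shaft_mm)
```

Note that the total is taken over all readings at once, not as an average of per-station ranges.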

Types of indicators used

Mechanical dial indicators are analog precision instruments featuring a needle pointer on a graduated dial face, commonly employed in manual TIR setups for their reliability and ease of use in workshop environments. These gauges typically utilize a spring-loaded plunger that contacts the surface, converting linear motion into rotational movement via a rack-and-pinion mechanism to drive the pointer. Resolutions of these indicators generally range from 0.001 mm to 0.0001 inch (0.0025 mm), allowing for accurate capture of maximum and minimum readings during rotation to compute TIR.[15][16]

Digital indicators represent an electronic evolution of dial indicators, incorporating LCD displays for direct numerical readouts and enhanced functionality in TIR measurements. They often include built-in features such as automatic TIR calculation, data storage, and SPC (Statistical Process Control) output via USB or RS-232 interfaces, facilitating integration with computer-aided inspection systems. With resolutions matching or exceeding mechanical counterparts—typically 0.001 mm or finer—and improved repeatability, digital models reduce reading errors and support tolerance go/no-go judgments.[7][17]

Test indicators, also known as lever dial test indicators, are compact tools designed for probing edges and irregular surfaces, frequently used alongside standard dial indicators for preliminary alignment checks in TIR applications. Their lever-actuated contact point enables access to confined spaces where plunger types are impractical, providing resolutions around 0.01 mm or 0.0005 inch for detecting deviations in flatness or squareness. These indicators excel in setup verification for machining operations, ensuring workpiece positioning before full TIR assessment.[18][7]

Specialized indicators, including lever-type and plunger-type variants, are tailored to specific surface geometries in TIR evaluations, with lever types suited for angular or contoured profiles and plunger types for flat or cylindrical ones. Lever indicators, often synonymous with test models, offer flexible contact angles for oblique measurements, while plunger indicators provide direct axial probing for radial runout. Both types maintain high precision, with lever models typically limited to shorter travel (up to 2 mm) compared to plungers (up to 25 mm or more), optimizing their use based on the part's configuration.[19][18]

Applications

In machining and manufacturing

In machining and manufacturing, total indicator reading (TIR) is commonly employed to assess the roundness and concentricity of precision components such as shafts, bearings, and gears, ensuring they deviate minimally from a perfect circular form relative to a reference axis.[20][21] By rotating the part while a dial indicator captures the full range of deviation, TIR quantifies total runout, which encompasses both form errors like out-of-roundness and positional errors like eccentricity, thereby verifying that machined surfaces maintain uniform geometry.[22]

TIR plays a critical role in quality control during operations such as grinding, turning, and milling, where it verifies adherence to tight tolerances and helps prevent issues like excessive vibration, uneven wear, or premature failure in rotating assemblies.[20][21] For instance, in grinding processes, roundness measurements using TIR identify setup irregularities that could lead to lobing or waviness, allowing operators to adjust wheel balance or dressing to achieve smoother surfaces and extended component life.[20] In turning and milling, TIR checks confirm diametric uniformity, reducing friction and balancing forces that might otherwise cause dynamic imbalances in high-speed applications.[22]

A representative example is measuring TIR on a lathe-turned cylinder, where the workpiece is mounted between centers or in a chuck, and an indicator is positioned against the outer diameter; rotation reveals any non-uniformity, with acceptable TIR values typically held below 0.001 inches for precision fits.[22][21]

In modern CNC environments, TIR measurements have been integrated through automated probe systems that enable in-process monitoring, allowing real-time adjustments without halting production.[23] Touch probes, such as those from Renishaw's OMP series, contact the workpiece during turning or milling cycles to gauge runout and concentricity, feeding data back to the machine control for compensatory tool paths or alerts on deviations.[23] This automation enhances efficiency in grinding operations by scanning surfaces for form errors mid-process, minimizing scrap and ensuring consistent quality across batches.[23][20]

In shaft alignment and assembly

In shaft alignment and assembly, total indicator reading (TIR) plays a crucial role in laser-free methods for ensuring precise coupling of rotating machinery components, particularly by employing dial indicators to quantify misalignment without advanced optical tools. This approach measures coupling misalignment by recording TIR across the rims (for parallel misalignment) of the shafts, allowing technicians to assess deviations in shaft centerlines that could otherwise lead to operational inefficiencies.[24][25][26]

The reverse indicator method, a standard procedure in this context, involves mounting rigid brackets with two dial indicators (typically with 0.001-inch resolution and 0.100- to 0.300-inch travel range), each on one shaft contacting the rim of the opposite shaft at the coupling diameter and separated axially by the bracket span. Shafts are then rotated together through 360 degrees, with readings taken at the 12, 3, 6, and 9 o'clock positions to capture signed deflections (positive or negative relative to a zeroed top position); these values are averaged—such as (top + bottom)/2 for vertical offset—to determine parallel and angular misalignment after compensating for bracket sag. TIR is calculated as the difference between the maximum and minimum readings for each indicator, providing data for computational adjustments like angularity = (indicator 1 reading + indicator 2 reading) / (2 × coupling span distance), where readings are the TIRs from the two rim indicators.[24][25][26]

Achieving low TIR values—typically targeting offsets below 2 mils (0.002 inches) and angularity under 0.5 mils per inch for machinery operating at 0-4,000 RPM—is essential for preventing excessive vibration, which can accelerate bearing wear, increase energy consumption, and cause premature failure in coupled systems. These TIR measurements directly inform corrective actions, such as adding or removing shims under machine feet for vertical alignment or using jacking bolts for horizontal shifts, ensuring the shafts run within acceptable tolerances to maintain smooth rotation and extend equipment life.[24][25]

A representative example is the alignment of motor-pump shafts in industrial pumping systems, where TIR readings of 24 mils (moveable indicator) and 8 mils (probe indicator) in the vertical plane yield an angularity of approximately 1.23 mils per inch; adjustments based on these values, such as adding approximately 29 mils of shims at the front feet and 42 mils at the rear feet, reduce misalignment to under 2 mils offset and 0.5 mils per inch angularity, thereby minimizing vibration-induced energy losses and seal wear in continuous-operation environments.[24][25]
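The averaging and angularity formulas quoted above can be collected into a small sketch (illustrative Python; the 13-inch coupling span used in the comment is an assumed figure, since the text's worked example does not state its span):

```python
def vertical_offset(top, bottom):
    """Parallel (offset) misalignment in mils from the 12 and
    6 o'clock readings: (top + bottom) / 2."""
    return (top + bottom) / 2.0

def angularity(rim_tir_1, rim_tir_2, span_in):
    """Angular misalignment in mils per inch, per the formula in
    the text: (indicator 1 TIR + indicator 2 TIR) / (2 * span)."""
    return (rim_tir_1 + rim_tir_2) / (2.0 * span_in)

# With the worked readings of 24 and 8 mils and an assumed
# 13-inch span, this yields roughly 1.23 mils per inch.
ang = angularity(24.0, 8.0, 13.0)
```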

TIR versus runout

Runout in geometric dimensioning and tolerancing (GD&T) is represented by a specific symbol that encompasses both circular runout (radial variation at individual cross-sections) and total runout (variation across the entire feature surface relative to a datum axis), as defined in ASME Y14.5-2018.[27] This control ensures that features like shafts or cylinders maintain acceptable variation during rotation, preventing issues in assembly and function. Measurements for runout compliance typically involve rotating the part around the datum axis while using a dial indicator to capture deviations, a process closely aligned with total indicator reading (TIR) techniques.[27]

TIR serves as a fundamental measurement method in metrology, quantifying the full range of indicator movement—the difference between the maximum and minimum readings obtained during rotation—to evaluate geometric features.[12] In the context of runout assessment, TIR provides the raw data that directly informs whether a part meets the specified runout criteria, often by traversing the indicator along the feature while the part rotates.[12] Historically tied to the term "total indicated run-out," TIR has evolved as the practical basis for verifying runout under standards like ASME Y14.5.[27]

The primary distinction lies in their roles: runout is a formalized tolerance value specified in a feature control frame (e.g., 0.01 mm), defining the maximum allowable variation for compliance, whereas TIR is the empirical measurement value derived from the indicator.[27] To verify adherence, the obtained TIR is compared against the runout tolerance; if the TIR value falls within the limit, the feature is acceptable.[12] If the measured TIR exceeds the designated runout tolerance, the part is deemed out of specification, potentially resulting in rejection, rework, or further analysis to identify manufacturing defects.[27] This threshold evaluation underscores TIR's utility as a verification tool, ensuring runout controls maintain precision in applications like machining where excessive variation could compromise performance.[12]
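The threshold evaluation described above amounts to a simple go/no-go comparison, sketched here in Python (illustrative; the readings and the 0.01 mm tolerance are hypothetical):

```python
def runout_check(readings_mm, tolerance_mm):
    """Compare measured TIR against a runout tolerance from a
    feature control frame; returns (tir, within_spec)."""
    tir = max(readings_mm) - min(readings_mm)
    return tir, tir <= tolerance_mm

# Hypothetical feature toleranced at 0.01 mm total runout:
# a TIR of 0.009 mm falls within the limit, so the part passes.
measured_tir, ok = runout_check([-0.002, 0.003, 0.005, -0.004], 0.01)
```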

TIR versus full indicator movement

Total indicator reading (TIR) and full indicator movement (FIM) are closely related terms in metrology, both generally representing the total variation observed when assessing surface deviations with a dial or digital indicator. However, TIR typically refers to the difference between the maximum and minimum readings on the indicator dial, which may include small errors from the instrument's mechanics, such as cosine error in lever-type indicators. In contrast, FIM specifically denotes the actual linear movement of the indicator tip, excluding such mechanical influences for greater accuracy.[2][28] FIM is the preferred term in modern international standards, including ISO 1101, and is used exclusively in ASME Y14.5 to emphasize precise tip travel and avoid confusion with runout measurements.[29][2] TIR continues to be used in some legacy American engineering practices, though it is not defined in ASME Y14.5. Despite these nuances, the terms are often used interchangeably in practice when mechanical errors are negligible, with both calculated as the maximum minus minimum reading or movement.[30][31] This near-equivalence applies to both rotational and non-rotational assessments, such as checking flatness by traversing an indicator across a surface; here, FIM or TIR quantifies overall deviation without implying rotation. The adoption of FIM in global standards promotes measurement clarity, particularly in standardized or collaborative settings, while TIR lingers in established U.S. industries.[5][30]
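The cosine relationship mentioned above can be made concrete with a short sketch (illustrative Python; not a substitute for the standards' definitions): if a lever-type tip sits at an angle to the surface, the dial registers only the cosine component of the true tip travel, so recovering the FIM sense of the measurement means dividing the reading back out.

```python
import math

def tip_travel(dial_reading, tilt_deg):
    """Approximate true tip travel (the FIM sense) from a dial
    reading foreshortened by cosine error, assuming the simple
    model: reading = true_travel * cos(tilt)."""
    return dial_reading / math.cos(math.radians(tilt_deg))
```

At a 10-degree tilt the dial captures only about 98.5% of the tip's travel, so a 0.100 mm reading corresponds to roughly 0.1015 mm of actual movement.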

Standards and limitations

Measurement units and tolerances

TIR measurements are expressed in linear units aligned with prevailing regional standards. In imperial systems, TIR is typically reported in inches, with subdivisions into thousandths of an inch (mils) or ten-thousandths for enhanced precision. In metric systems, values are given in millimeters or micrometers (microns, where 1 micron = 0.001 mm). A key conversion factor is 1 mil = 0.001 inch = 25.4 microns.[32]

Acceptable TIR tolerances vary by application and component criticality to ensure operational reliability. For precision shafts up to 3 inches in diameter, a maximum TIR of 0.0005 inches is commonly required in workmanship standards. For general machinery components, tolerances up to 0.005 inches are typical in alignment and manufacturing contexts. These limits are adjusted based on factors such as rotational speed and functional demands.[33][34]

ASME Y14.5 establishes guidelines for geometric dimensioning and tolerancing (GD&T), including runout tolerances that control surface deviations during rotation and are often derived from or verified by TIR assessments. ISO 1101 defines geometric tolerances for form, orientation, location, and run-out, providing a framework where TIR measurements support compliance verification. TIR plays a direct role in assessing these runout specifications across industries.[12]
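The unit relationships above reduce to a single conversion constant, sketched here (illustrative Python):

```python
MICRONS_PER_MIL = 25.4  # 1 mil = 0.001 inch = 25.4 microns

def mils_to_microns(mils):
    """Convert thousandths of an inch to micrometers."""
    return mils * MICRONS_PER_MIL

def microns_to_mils(microns):
    """Convert micrometers to thousandths of an inch."""
    return microns / MICRONS_PER_MIL
```

For example, a 0.0005-inch (0.5 mil) shaft tolerance corresponds to 12.7 microns.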

Common limitations and error sources

One significant source of error in total indicator reading (TIR) measurements arises from operator handling, particularly improper mounting of the dial indicator, which can introduce cosine errors and parallax. Cosine error occurs when the indicator's plunger or lever arm is not perpendicular to the measured surface, causing the reading to be foreshortened by the cosine of the angle between the plunger axis and the normal to the surface; for instance, at a 10° angle, the measured value is approximately 98% of the true displacement, leading to underestimation of TIR by up to several percent in angled setups.[35] Parallax error, meanwhile, results from viewing the indicator dial at an oblique angle, displacing the apparent position of the needle relative to the scale and introducing inaccuracies of 0.1 mm or more in vernier readings.[35] These operator-induced errors are common in manual setups and can be mitigated by ensuring perpendicular alignment and direct line-of-sight viewing, though they remain prevalent in field applications like shaft alignment.[36]

Part-related factors, such as surface finish irregularities and thermal expansion, further compromise TIR accuracy by altering the indicator's contact and the part's geometry during measurement. Rough or uneven surface finishes, including tool chatter marks or grinding lobing, can cause the dial indicator tip to bounce or inconsistently contact the surface, inflating TIR readings by capturing micro-variations unrelated to true runout; for example, centerless grinding imperfections may distort radial measurements into apparent diametral deviations.[37] Thermal expansion exacerbates this by causing differential lengthening of the part material—calculated as ΔL = α × L × ΔT, where α is the coefficient of thermal expansion, L is length, and ΔT is temperature change—potentially shifting shaft positions by 0.004 inches over a 60°F rise in a 10-inch stainless steel shaft, thus invalidating static TIR readings taken at ambient conditions.[38] These issues are particularly problematic in machining environments where ambient heat or recent processing leaves surfaces non-uniform, leading to false deviations unless measurements are taken under controlled thermal conditions.[39]

Fixture limitations, such as inadequate rotation accuracy in V-blocks or mandrels, introduce systematic errors by imparting unintended wobble or tilt to the part during rotation. In V-block setups, angular deviations in the block's V-angle (e.g., a 0.02° mismatch between blocks) can cause guide tilt, resulting in profile errors up to 11.4 µm, which manifest as exaggerated TIR values since the part does not rotate about its true axis.[40] Mandrel inaccuracies, including slight bends or poor fit, similarly induce false eccentricities, with stick-slip friction in the fixture amplifying variability by up to 10% in multi-point readings.[37] Such errors reduce the method's overall accuracy to about 19% relative to reference radial techniques, underscoring the need for precision fixtures in high-stakes applications.[40]

Finally, resolution constraints in standard indicators limit the detection of subtle errors in high-tolerance components, where sub-micron deviations are critical. Conventional dial indicators typically offer resolutions of 0.001 inches (0.0254 mm) or 0.0005 inches (0.0127 mm), which may overlook TIR variations below 1 µm in precision parts like aerospace shafts requiring tolerances under 0.0001 inches.[41] This limitation arises from the mechanical scale's graduation density and hysteresis in the gear train, compounding with other errors to mask true form deviations; for sub-micron work, higher-resolution electronic or laser-based alternatives are often necessary to achieve verifiable accuracy.[35]
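The thermal-growth formula quoted above (ΔL = α × L × ΔT) is straightforward to evaluate; the sketch below uses an assumed α of 6.5e-6 per °F, a typical handbook figure for carbon steel (stainless alloys run somewhat higher, so the 0.004-inch figure in the text should be read as order-of-magnitude):

```python
def thermal_growth(alpha_per_degF, length_in, delta_t_degF):
    """Linear thermal expansion: dL = alpha * L * dT."""
    return alpha_per_degF * length_in * delta_t_degF

# Hypothetical 10-inch shaft over a 60 degF rise with an assumed
# alpha of 6.5e-6 per degF: about 0.004 inch of growth.
growth_in = thermal_growth(6.5e-6, 10.0, 60.0)
```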

