Time and frequency transfer
Time and frequency transfer is a scheme where multiple sites share a precise reference time or frequency. The technique is commonly used for creating and distributing standard time scales such as International Atomic Time (TAI). Time transfer solves problems such as enabling astronomical observatories to correlate observed flashes or other phenomena with each other, and allowing cell phone towers to coordinate handoffs as a phone moves from one cell to another.
Multiple techniques have been developed, typically transferring a reference clock's synchronization from one point to another, often over long distances. Accuracy approaching one nanosecond worldwide is economically practical for many applications. Radio-based navigation systems are frequently used as time transfer systems.
In some cases, multiple measurements are made over a period of time, and exact time synchronization is determined retrospectively. In particular, time synchronization has been accomplished by using pairs of radio telescopes to listen to a pulsar, with the time transfer accomplished by comparing time offsets of the received pulsar signal.
Examples
Examples of time and frequency transfer techniques include:
- Simultaneous observation methods, such as common-view GPS
- Two-way transfer methods, such as two-way satellite time and frequency transfer (TWSTFT)
- Network methods, such as the Network Time Protocol (NTP)
One-way
In a one-way time transfer system, one end transmits its current time over some communication channel to one or more receivers.[4]: 116 On reception, a receiver decodes the message and either reports the time directly or adjusts a local clock, which can provide hold-over time between messages. The advantage of one-way systems is that they can be technically simple and serve many receivers, since the transmitter is unaware of the receivers.
The principal drawback of one-way time transfer is that propagation delays of the communication channel remain uncompensated, except in some advanced systems. Examples of one-way time transfer include the clock on a church or town building and the ringing of its time-indication bells; time balls; radio time signals such as LORAN, DCF77 and MSF; and the Global Positioning System, which combines multiple one-way time transfers from different satellites with orbital information and other delay compensations so that the receiver can recover both time and position in real time.
Two-way
In a two-way time transfer system, the two peers both transmit and receive each other's messages, thus performing two one-way time transfers to determine the difference between the remote clock and the local clock.[4]: 118 The sum of these time differences is the round-trip delay between the two nodes. It is often assumed that this delay is split evenly between the two directions, in which case half the round-trip delay is the propagation delay to be compensated. A drawback is that the two-way propagation delay must be measured and used to calculate a delay correction; that function can be implemented either in the reference source, in which case the source capacity limits the number of clients that can be served, or by software in each client. NIST provides a time reference service to computer users on the Internet,[5] based on Java applets loaded by each client.[6] The two-way satellite time and frequency transfer (TWSTFT) system used for comparisons among time laboratories relies on a satellite as a common link between the laboratories. The Network Time Protocol uses packet-based messages over an IP network.
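The calculation can be sketched with the four timestamps exchanged in an NTP-style two-way transfer. The function and numeric values below are illustrative, not any protocol's actual implementation:

```python
# Minimal sketch of two-way time transfer arithmetic (NTP-style timestamps).
# All values are in seconds; the numbers below are illustrative only.

def two_way_offset(t1, t2, t3, t4):
    """Estimate remote-minus-local clock offset and round-trip delay.

    t1: local clock when the request leaves the local peer
    t2: remote clock when the request arrives at the remote peer
    t3: remote clock when the reply leaves the remote peer
    t4: local clock when the reply arrives back at the local peer
    """
    round_trip = (t4 - t1) - (t3 - t2)        # total path delay, both directions
    offset = ((t2 - t1) + (t3 - t4)) / 2.0    # assumes the delay is split evenly
    return offset, round_trip

if __name__ == "__main__":
    # Example: remote clock ahead by 2.5 ms, 30 ms symmetric round trip.
    offset, rtt = two_way_offset(t1=0.0000, t2=0.0175, t3=0.0180, t4=0.0305)
    print(f"estimated offset {offset*1e3:.2f} ms, round-trip delay {rtt*1e3:.2f} ms")
```

With symmetric paths the propagation delay cancels exactly; any asymmetry between the two directions appears directly as an error in the estimated offset.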
Historically, the telegraphic determination of longitude was an important way to connect two points. It could be used one-way or two-way, with each observatory potentially correcting the other's time or position. Telegraphy methods of the 19th century established many of the same techniques used in modern times, including round-trip time delay calculations and time synchronization in the 15 to 25 millisecond range.[7]
Common view
The time difference between two clocks may be determined by simultaneously comparing each clock to a common reference signal received at both sites.[8] As long as both end stations receive the same signal at the same time, the accuracy of the signal source is unimportant. The nature of the received signal also does not matter, although widely available timing and navigation systems such as GPS or LORAN are convenient.
The accuracy of time transferred in this way is typically 1–10 ns.[9]
GNSS
Since the advent of GPS and other satellite navigation systems, highly precise yet affordable timing has been available from many commercial GNSS receivers. The initial GPS system design expected general timing precision better than 340 nanoseconds using the low-grade "coarse" mode and 200 ns in precision mode.[10] A GPS receiver works by precisely measuring the transit time of signals received from several satellites; the corresponding distances, combined geometrically with precise orbital information, identify the location of the receiver. Precise timing is fundamental to an accurate GPS position fix. The time from an atomic clock aboard each satellite is encoded into the radio signal, and the receiver determines how much later it received the signal than it was sent. To do this, the local clock is corrected to GPS atomic clock time by solving for three position dimensions and time from four or more satellite signals.[11] Improvements in algorithms have led many modern low-cost GPS receivers to achieve better than 10-meter accuracy, which implies a timing accuracy of about 30 ns. GPS-based laboratory time references routinely achieve 10 ns precision.[12]
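The joint solution for position and receiver clock bias can be sketched as a small Gauss-Newton iteration over pseudoranges. The satellite coordinates, receiver position, and clock bias below are synthetic illustrations, not real ephemeris data:

```python
# Sketch of how a GNSS receiver solves for position and receiver clock bias
# from four or more pseudoranges. All inputs are synthetic, illustrative values.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position_and_clock(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solution for (x, y, z, receiver clock bias in seconds)."""
    x = np.zeros(4)  # initial guess at Earth's centre with zero clock bias
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = ranges + C * x[3]
        residual = pseudoranges - predicted
        # Jacobian: negated unit line-of-sight vectors, plus c for the clock term
        H = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.full((len(ranges), 1), C)])
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
    return x[:3], x[3]

if __name__ == "__main__":
    # Four synthetic satellites roughly 20,000 km up (ECEF-like coordinates, metres).
    sats = np.array([[15600e3, 7540e3, 20140e3],
                     [18760e3, 2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3, 610e3, 18390e3]])
    truth = np.array([1111e3, 2222e3, 6200e3])   # assumed receiver position, metres
    bias = 85e-9                                  # assumed receiver clock bias, seconds
    pr = np.linalg.norm(sats - truth, axis=1) + C * bias
    pos, b = solve_position_and_clock(sats, pr)
    print("position error (m):", np.linalg.norm(pos - truth))
    print("clock bias estimate (ns):", b * 1e9)
```

The recovered clock bias is the quantity a timing receiver uses to steer its local clock onto GPS time.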
References
- ^ Global Positioning System Carrier-Phase
- ^ Time and Frequency Transfer using the phase of the GPS Carrier
- ^ GPS and TV Time Comparison Techniques
- ^ a b Jones, T. (2000). Splitting the Second. Institute of Physics Publishing.
- ^ "Set your computer clock via the Internet using tools built into the operating system". National Institute of Standards and Technology. Retrieved 2012-12-22.
- ^ Novick, Andrew N.; et al. Time Distribution Using the World Wide Web (PDF). Archived from the original (PDF) on 2016-03-03.
- ^ Dreyer, John Louis Emil (1888). . Encyclopædia Britannica. Vol. XXIII (9th ed.). pp. 392–396.
- ^ Allan, David W.; Weiss, Marc A. (May 1980), "Accurate Time and Frequency Transfer During Common-View of a GPS Satellite" (PDF), 34th Annual Symposium on Frequency Control, pp. 334–346, doi:10.1109/FREQ.1980.200424
- ^ Marc Weiss, Common View GPS Time Transfer, NIST Time and Frequency Division, archived from the original on 2012-10-28, retrieved 2011-11-22
- ^ Department of Defense and Department of Transportation (1994). "USNO NAVSTAR Global Positioning System". Federal Radionavigation Plan. US Navy. Archived from the original on May 24, 2012. Retrieved 2008-11-13.
- ^ "Global Positioning System Timing". U.S. Coast Guard Navigation Center. Retrieved 2008-11-13.
- ^ "GPS and UTC Time Transfer". RoyalTek. Archived from the original on 2010-03-23. Retrieved 2009-12-18.
Time and frequency transfer
Introduction
Definition and Scope
Time and frequency transfer refers to the process of comparing and synchronizing time scales or frequency standards between remote locations, particularly where direct electrical connections are impractical due to distance or environmental barriers.[3] This involves transmitting timing signals or markers to align clocks for synchronization (achieving the same time-of-day) or to adjust oscillators for syntonization (matching frequency).[3] The core objective is to enable accurate dissemination of reference times, such as Coordinated Universal Time (UTC), or stable frequency references across global networks.[3]

The scope encompasses both time-of-day transfer, which delivers precise hours, minutes, seconds, and dates (e.g., UTC dissemination via radio or satellite signals), and frequency stability transfer, used for calibrating high-precision oscillators like atomic clocks.[3] It includes a range of methods: terrestrial approaches using radio broadcasts (e.g., low-frequency signals like WWVB), satellite-based systems (e.g., GPS for global coverage), and emerging optical techniques via fiber links for ultra-stable transfers.[3] These methods address transfers over local to intercontinental distances, with uncertainties typically in the nanosecond range or better.[4]

This field is critical for enabling precise timing in global systems, supporting applications in scientific research (e.g., atomic clock comparisons with stabilities better than 1×10⁻¹⁵), navigation (e.g., GPS positioning accurate to <10 m), and infrastructure like telecommunications and power grids, where sub-nanosecond precision is often required to prevent synchronization failures.[3][5] For instance, GPS time transfer achieves <20 ns accuracy, meeting demands for systems reliant on UTC traceability from over 40 international laboratories.[3]

Key concepts include relativistic effects, such as gravitational redshift and time dilation in satellite orbits, which are corrected to avoid systematic errors; multipath propagation, where signal reflections cause biases up to several nanoseconds in GPS receptions; and noise sources such as white phase noise or multipath-induced fluctuations that degrade short-term stability.[6][4][3]
Historical Background
The dissemination of time signals began in the 19th century through telegraph networks, enabling astronomers to synchronize observations across distant locations for longitude determination. At the United States Naval Observatory (USNO), time service via telegraph lines was initiated in 1865, with signals transmitted to the Navy Department and public clocks.[7] Simon Newcomb, a prominent USNO astronomer, advanced these efforts in the 1880s by refining telegraphic time distribution to support precise astronomical computations and navigation.[8] This marked an early milestone in time transfer, shifting from local mechanical clocks to networked synchronization.

The transition to radio-based signals occurred in the early 20th century, expanding global reach. In 1923, the National Bureau of Standards (now NIST) launched radio station WWV, initially broadcasting standard frequencies and time signals to calibrate receivers and synchronize clocks nationwide.[9] Mid-century advances in atomic timekeeping revolutionized precision; the first practical cesium atomic clock was developed in 1955 at the National Physical Laboratory by Louis Essen and J.V.L. Parry, providing a stable frequency reference far superior to quartz oscillators.[10] Considerations from the Lorentz transformations, formalized in 1905 for special relativity, began influencing clock comparisons in the post-1960s era to account for relativistic effects in time transfer.

Key institutional developments solidified atomic standards internationally. In 1967, the 13th General Conference on Weights and Measures (CGPM) defined the second in terms of cesium-133 transitions, leading to the creation of International Atomic Time (TAI) as a coordinated scale from global atomic clocks, computed by the International Bureau of Weights and Measures (BIPM).[11] A pivotal milestone for frequency stability assessment came in 1966, when David W. Allan introduced the Allan variance in his IEEE paper, offering a time-domain metric to quantify oscillator noise and drift, essential for evaluating atomic frequency standards.[12] The launch of the first GPS satellites in 1978 enabled precise global time and frequency transfer via satellite signals.[13]

Optical innovations further enhanced precision in the 1990s. The development of optical frequency combs, pioneered by Theodor W. Hänsch and John L. Hall, provided a method to directly link optical and microwave frequencies, supporting ultra-precise atomic clocks and earning them the 2005 Nobel Prize in Physics. GNSS systems, building on these foundations, now play a central role in modern time transfer networks.
Fundamental Principles
Time vs. Frequency Transfer
Time transfer involves aligning the phases or epochs of clocks located at different sites, with the principal objective of determining the absolute time offset Δt between them. This process enables the synchronization of clock readings to a common reference, essential for applications requiring precise epoch knowledge, such as coordinating events across distributed systems.[14] In contrast, frequency transfer focuses on comparing the rates of oscillators or frequency standards, emphasizing the measurement of the fractional frequency deviation y = Δf/f, where Δf represents the deviation from the nominal frequency f. This method prioritizes the stability of the frequency over extended periods, often achieved through averaging techniques to mitigate short-term fluctuations and reveal underlying oscillator performance.[14]

These processes are fundamentally interrelated, as discrepancies in frequency lead to accumulating errors in time alignment. The phase difference φ(t) arises from the integration of frequency deviations, given by the equation

φ(t) = φ(t₀) + 2π ∫ from t₀ to t of Δf(t′) dt′, or equivalently, for the accumulated time offset, x(t) = x(t₀) + ∫ from t₀ to t of y(t′) dt′,

which illustrates how time offsets build up as the cumulative effect of relative frequency instabilities over time. This relationship underscores that high stability in frequency transfer is crucial for maintaining long-term accuracy in time transfer.[14]

Distinct challenges arise in each domain: time transfer is highly sensitive to fixed, one-time delays (such as propagation effects through media) that require precise calibration to avoid systematic offsets in phase alignment. Frequency transfer, however, contends primarily with noise accumulation during the extended integration periods needed for stability assessment, where random fluctuations can degrade the precision of rate comparisons. Propagation effects influence both but are addressed through corrections that preserve the conceptual distinctions in their measurement requirements.[15]
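As a numerical illustration of this accumulation relation (a minimal sketch using a synthetic, constant fractional frequency offset rather than measured data):

```python
# Numerical illustration of the relation above: a constant fractional frequency
# offset y integrates into a linearly growing time offset. Values are synthetic.
import numpy as np

tau = 1.0                                   # measurement interval, seconds
y = np.full(86_400, 1e-12)                  # y = Δf/f for one day, 1 part in 10^12
time_offset = np.cumsum(y) * tau            # x(t) = x(0) + ∫ y dt, with x(0) = 0

print(f"accumulated time offset after one day: {time_offset[-1]*1e9:.1f} ns")
# A 1e-12 fractional frequency error accumulates ~86.4 ns of time error per day.
```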
Propagation Effects and Corrections
In time and frequency transfer, signal propagation through the Earth's atmosphere introduces significant delays that must be modeled and corrected to achieve high accuracy. The ionosphere, a layer of ionized plasma, causes dispersive delays proportional to the inverse square of the signal frequency, primarily due to free electrons along the propagation path. These delays typically range from 10 to 100 ns, depending on solar activity, time of day, and geographic location, and are quantified using the total electron content (TEC), measured in TEC units (TECU, where 1 TECU = 10^16 electrons/m²). A differential group delay of 1 ns at the L1 frequency corresponds to approximately 2.852 TECU.[16] Modeling involves mapping vertical TEC (VTEC) and projecting it to slant paths via a mapping function, often derived from dual-frequency GPS observations: the difference in ionospheric delay between L1 (1.575 GHz) and L2 (1.227 GHz) allows direct computation of TEC, with the group delay given by I = 40.3 · TEC / f² (in meters), enabling precise corrections.[16][17]

The troposphere contributes non-dispersive delays, affecting all frequencies similarly through refraction by neutral gases, with delays of roughly 2.5 meters at zenith rising to 20 meters or more at low elevation angles (about 8 to 67 ns). These are partitioned into hydrostatic (dry) and wet components, where the hydrostatic delay dominates (~90%) and can be modeled using zenith hydrostatic delay (ZHD) formulas based on surface pressure, latitude, and height. The Saastamoinen model provides a widely adopted empirical expression for ZHD:

ZHD = 0.0022768 · P / (1 − 0.00266 · cos(2φ) − 0.00028 · h),

where ZHD is in meters, P is surface pressure in hPa, φ is ellipsoidal latitude in radians, and h is height in km; this yields accuracies with RMS errors around 1.6 cm for ZHD.[18] The wet component, more variable and stochastic, requires estimation from meteorological data or GNSS observations, often using mapping functions like the Niell model to project zenith wet delay (ZWD) to slant paths.[18]

Relativistic effects arise from general and special relativity, necessitating corrections for both time and frequency transfers over large baselines or varying gravitational potentials. The Sagnac effect, due to Earth's rotation, introduces a kinematic time delay in rotating reference frames, particularly relevant for satellite-based transfers like GPS. The correction is given by

Δt_Sagnac = 2 Ω · A / c²,

where Ω is Earth's angular velocity vector (magnitude 7.292115 × 10⁻⁵ rad/s), A is the vector area enclosed by the propagation path, and c is the speed of light; this can reach hundreds of nanoseconds for transcontinental links, depending on the enclosed area.[19] Gravitational redshift, a frequency shift from differing gravitational potentials, affects atomic clocks; for GPS satellites at ~20,200 km altitude, this equates to a fractional shift of about 5.3 × 10⁻¹⁰, or roughly 45 μs per day if uncorrected.[20][19] These effects are computed using post-Newtonian approximations and applied as deterministic offsets in clock steering models.[20][19]

Multipath propagation and noise further degrade signal integrity, especially in satellite links, where reflections from nearby surfaces create geometric delays mimicking longer paths, introducing errors of up to several meters in pseudorange measurements. These effects are stochastic and site-dependent, exacerbating noise in time transfer solutions.
Correction techniques include multipath mitigation via antenna design (e.g., choke rings) and signal processing; for dispersive components intertwined with multipath, dual-frequency observations (L1/L2) are essential, as the ionospheric advance on carrier phase and delay on code allow separation and subtraction of first-order effects, reducing residuals to sub-nanosecond levels after TEC estimation.[17]

Hardware-induced delays specific to clocks and instrumentation, such as those from cables, antennas, and receivers, must be calibrated to avoid systematic biases in transfer results. Cable delays are linear with length and frequency, while antenna group delays vary with elevation and frequency band, often calibrated using common-view GNSS comparisons against reference stations. Calibration involves measuring the total receiver delay (D_X), encompassing antenna (X_S), cable (X_C), and internal (X_R) components, via common-clock setups or traveling receivers, achieving uncertainties below 2 ns for long-baseline links; these constants are then applied as fixed offsets in processing.[21]
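As a rough illustration of the delay models above (a sketch only; the pressure, TEC, and enclosed-area values are assumed for the example, not taken from any particular link):

```python
# Hedged sketch of the delay models discussed above: Saastamoinen zenith
# hydrostatic delay, first-order ionospheric group delay from TEC, and the
# Sagnac correction. Input values are illustrative assumptions.
import numpy as np

C = 299_792_458.0            # speed of light, m/s
OMEGA_E = 7.292115e-5        # Earth rotation rate, rad/s

def zenith_hydrostatic_delay(pressure_hpa, lat_rad, height_km):
    """Saastamoinen ZHD in metres (P in hPa, latitude in radians, height in km)."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * height_km)

def ionospheric_delay_m(tec_el_per_m2, freq_hz):
    """First-order ionospheric group delay in metres: I = 40.3 * TEC / f^2."""
    return 40.3 * tec_el_per_m2 / freq_hz**2

def sagnac_correction_s(area_enclosed_m2):
    """Sagnac time correction 2*Omega*A/c^2 for an equatorial-plane projected area."""
    return 2.0 * OMEGA_E * area_enclosed_m2 / C**2

if __name__ == "__main__":
    zhd = zenith_hydrostatic_delay(1013.25, np.radians(45.0), 0.1)
    iono = ionospheric_delay_m(20e16, 1.57542e9)       # 20 TECU at GPS L1
    sagnac = sagnac_correction_s(3.0e13)               # assumed ~3e13 m^2 enclosed area
    print(f"ZHD: {zhd:.3f} m (~{zhd/C*1e9:.1f} ns)")
    print(f"Ionospheric delay: {iono:.2f} m (~{iono/C*1e9:.1f} ns)")
    print(f"Sagnac correction: {sagnac*1e9:.1f} ns")
```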
Transfer Methods
One-Way Techniques
One-way time transfer techniques involve the unidirectional broadcast of a time or frequency signal from a reference clock at a transmitter to a remote receiver, where the time offset between the clocks is computed by subtracting the known emission time and an estimated propagation delay from the measured arrival time. This method relies on the receiver's local clock to timestamp the incoming signal, enabling synchronization without requiring feedback from the receiver. The simplicity of this approach makes it suitable for disseminating time from a central authority to multiple users, though it does not inherently compensate for path asymmetries or instabilities in the propagation medium.[22]

Implementations commonly use low-frequency (LF) or medium-frequency (MF) radio broadcasts, such as the NIST-operated WWVB station at 60 kHz, which transmits a binary-coded decimal time code modulated onto a carrier signal, providing UTC(NIST)-traceable time information across North America. For shorter distances, optical fiber links facilitate one-way transfer by propagating laser pulses or modulated signals from the reference site, often employing techniques like binary phase-shift keying (BPSK) for precise timestamping at the receiver. In fiber systems, the signal is typically generated from a stable atomic clock and transmitted over dedicated or shared dark fibers, with the receiver extracting the time code via photodetection and cross-correlation.[23]

The primary advantages of one-way techniques include their straightforward design, minimal infrastructure requirements, and low operational costs, allowing widespread dissemination without complex reciprocal measurements. However, disadvantages arise from uncorrected asymmetric delays, which introduce a fixed one-way bias that cannot be averaged out, as well as vulnerability to transmitter clock instabilities that propagate directly to all receivers. Propagation effects, such as ionospheric variations in radio signals or temperature-induced length changes in fibers (approximately 30 ps/K/km), must be estimated and subtracted, but without bidirectional verification, residual errors persist.[22][23]

The time offset is calculated as

Δt = t_rx − t_tx − d,

where t_rx is the arrival timestamp at the receiver, t_tx is the emission timestamp from the reference clock, and d is the estimated one-way propagation delay. For radio broadcasts like WWVB, d is approximated using the great-circle distance divided by the speed of light, adjusted for groundwave or skywave paths (e.g., ~3.3 ms per 1000 km for groundwave), though diurnal ionospheric shifts can introduce up to 1 µs variability over short paths without further corrections. In optical fiber, delay estimation incorporates the fiber's refractive index and length, monitored via auxiliary temperature sensors or dual-wavelength dispersion to compensate for environmental fluctuations, achieving stabilities better than 40 ps over kilometer-scale links. Detailed models for these propagation corrections are essential to mitigate biases.[22][23]

Limitations of one-way techniques include high susceptibility to errors in the transmitter's clock, as any offset or drift affects all downstream users equally, and overall accuracy typically reaches ~100 µs without applied corrections, limited by unmodeled delay variations and receiver hardware uncertainties (e.g., cycle ambiguity in WWVB signals of up to 500 µs if uncalibrated). For radio systems, received uncertainties often range from 100 µs to 1 ms in practical scenarios, while fiber implementations can approach 100 ps with active stabilization, though still inferior to bidirectional methods for precision applications.[22]
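A minimal sketch of this bookkeeping, assuming a purely geometric groundwave delay estimate (the distance and timestamps are invented for the example):

```python
# Minimal sketch of the one-way time-offset calculation described above.
# The emission timestamp, arrival timestamp, and path length are illustrative.
C = 299_792_458.0  # speed of light, m/s

def one_way_offset(t_rx, t_tx, path_delay):
    """Receiver-minus-reference clock offset: arrival - emission - propagation."""
    return t_rx - t_tx - path_delay

if __name__ == "__main__":
    distance_m = 1_200e3                       # assumed ~1200 km great-circle path
    d = distance_m / C                         # crude groundwave delay estimate (~4 ms)
    # Receiver timestamps the tick transmitted at t_tx = 0.000000 s.
    offset = one_way_offset(t_rx=0.004103, t_tx=0.0, path_delay=d)
    print(f"estimated receiver clock offset: {offset*1e6:.1f} us")
```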
Two-way techniques in time and frequency transfer involve bidirectional exchange of signals between two stations, allowing the calculation of clock offsets by averaging propagation times in both directions to cancel out common fixed delays such as atmospheric and equipment asymmetries.[24] This reciprocity principle enables high-precision comparisons without requiring precise knowledge of one-way path delays, making it suitable for metrology applications where sub-nanosecond accuracy is essential. Optical variants over fiber or free space, leveraging stabilized lasers and frequency combs, extend this to continental scales with fractional frequency instabilities below 10⁻¹⁸ as of 2025, supporting tests of fundamental physics.[24][1]

Implementations of two-way techniques include ground-based microwave links and satellite-based systems. Microwave links, operating in the 5–10 GHz range (such as X-band around 8–12 GHz), are commonly used for short- to medium-range transfers between metrology laboratories, such as the connection between the Naval Research Laboratory and the U.S. Naval Observatory, where line-of-sight propagation supports direct signal exchange with minimal multipath interference.[25] For longer distances, satellite two-way time transfer, particularly two-way satellite time and frequency transfer (TWSTFT) using geostationary satellites in the Ku-band (14 GHz uplink, 11 GHz downlink), facilitates intercontinental comparisons by relaying signals through the satellite transponder.[24]

The core equation for determining the clock offset derives from the differenced measurements of signal transit times. Consider two stations, A and B, with clock times T_A and T_B, where the offset is ΔT = T_A − T_B. Each station transmits a signal at its local epoch and records the local reception time of the incoming signal from the other station. For a synchronized exchange epoch, the signal from A reaches B after the total one-way delay d_AB (including propagation, equipment, and atmospheric effects), and the signal from B reaches A after the delay d_BA. The measured transit times, i.e., the interval counter readings of reception minus local transmission, are then TI(A) = d_BA + ΔT and TI(B) = d_AB − ΔT. Averaging over the symmetric delays (assuming d_AB = d_BA) yields the offset as

ΔT = ½ [TI(A) − TI(B)].

To ensure synchronization of exchange intervals, stations coarsely align transmission epochs using a common reference such as GPS common-view, with the two-way averaging mitigating residual timing errors in the intervals; multiple exchanges over synchronized periods (e.g., 1–2 minutes in TWSTFT) further average out noise.[24] Corrective terms for residual asymmetries, such as the difference between each station's transmit and receive equipment delays and propagation effects including the Sagnac term 2 Ω · A / c² (where Ω is Earth's angular velocity, A the projected area enclosed by the signal paths, and c the speed of light), are added to this half-difference of the counter readings. Relativistic corrections for signal exchanges are applied as needed to account for propagation effects.[24]

These techniques achieve sub-nanosecond precision over distances exceeding 1000 km, with TWSTFT demonstrating statistical uncertainties below 1 ns in real-time interlaboratory comparisons, such as those between European metrology institutes over 920 km links.[26][27] However, they require mutual visibility between stations (line-of-sight for microwave or shared satellite access for TWSTFT) and precise coordination of transmission schedules, increasing complexity and cost compared to one-way methods.[24] A variant, pseudo-two-way transfer using geostationary satellites, employs pseudo-random noise (PRN) codes in TWSTFT to enable continuous signal correlation without discrete bursts, enhancing stability by improving signal-to-noise ratios while maintaining the bidirectional cancellation principle.[26]
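The half-difference at the heart of this method can be sketched as follows (illustrative values only; real TWSTFT processing adds the equipment, Sagnac, and transponder correction terms):

```python
# Sketch of the two-way offset recovery described above: each station's counter
# reading is its local reception time minus its own transmission epoch; the
# symmetric path delay cancels in the half-difference. Values are illustrative.

def twstft_offset(ti_a, ti_b, equip_corr=0.0, sagnac_corr=0.0):
    """Clock A minus clock B from the two counter readings, plus residual corrections."""
    return 0.5 * (ti_a - ti_b) + equip_corr + sagnac_corr

if __name__ == "__main__":
    # Assumed geostationary path ~0.26 s each way; clock A ahead of B by 12.5 ns.
    path = 0.260
    dt_true = 12.5e-9
    ti_a = path + dt_true     # counter at A: reception of B's signal minus A's epoch
    ti_b = path - dt_true     # counter at B
    print(f"recovered offset: {twstft_offset(ti_a, ti_b)*1e9:.2f} ns")
```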
Common-View Methods
Common-view methods enable the indirect comparison of clocks at multiple remote stations by having them simultaneously observe signals from a shared third-party source, such as a satellite or radio beacon, thereby canceling out errors inherent to the source itself. In this approach, each station measures the propagation delay from the source to its location, and the differenced measurements between stations isolate the relative clock offsets while mitigating common errors like the source clock bias. This technique has been foundational in time transfer since the mid-20th century, evolving from ground-based systems to satellite-based implementations for enhanced global reach.[28]

The origins of common-view time transfer trace back to the 1960s with the use of Loran-C, a long-range navigation system in which stations differenced the arrival times of signals from common transmitters to achieve time comparisons with uncertainties of hundreds of nanoseconds over continental distances. By the early 1980s, as GPS became operational, the method transitioned to satellite signals, dramatically improving precision from hundreds of nanoseconds to a few nanoseconds thanks to the global coverage and stability of the atomic clocks on board GPS satellites. This evolution marked a shift from regional, ground-wave propagation systems like Loran-C to the ubiquitous GPS common-view protocol, which remains a standard for international time scale computations today.[29][30]

In the GPS common-view implementation, participating stations adhere to a predefined schedule from the International Bureau of Weights and Measures (BIPM), tracking specific satellites during synchronized 13-minute observation windows to ensure overlapping visibility. Receivers at each station record pseudorange measurements, which are then exchanged (typically via email or data networks) and processed to compute the time offset as the difference between the individual station-source delays plus modeled corrections for atmospheric and hardware effects:

Δt = (t1 − tsource) − (t2 − tsource) + corrections,

where t1 and t2 are the local clock readings at stations 1 and 2. For GPS-specific processing, this involves forming the inter-station single difference of pseudoranges, equivalent to a double difference when considering the baseline between stations and the satellite. The core time transfer equation is thus

Δt = [(PR1 − ρ1) − (PR2 − ρ2)] / c,

where PR1 and PR2 are the pseudoranges measured at the two stations to the same satellite, ρ1 and ρ2 are the corresponding geometric ranges, and c is the speed of light; additional double-differencing across epochs or frequencies may be applied to further suppress multipath and ionospheric residuals. Ionospheric corrections, such as those from dual-frequency measurements, are incorporated into these differences to refine accuracy, as detailed in propagation effect analyses.[28][30]

A primary advantage of common-view methods is the elimination of the need for a direct communication link between stations, relying instead on the broadcast nature of the common source, which facilitates low-cost, global-scale time comparisons with typical accuracies around 1 ns for intercontinental links using daily averages. This has made GPS common-view indispensable for synchronizing national time scales to UTC and maintaining International Atomic Time (TAI). However, the method is constrained by the geometry of common visibility, limiting observations to periods when both stations can track the same source (often requiring baselines under 7000 km), and it demands precise knowledge of station coordinates (to within 30 cm) to avoid geometric dilution of precision.[31][30][28]
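The differencing can be sketched as follows; the station and satellite coordinates and the "true" clock offsets are synthetic values chosen only to show that the shared source drops out of the difference:

```python
# Sketch of common-view differencing as described above: each station reduces
# its pseudorange by the geometric range to the shared satellite, and the
# difference of the two residuals gives clock 1 minus clock 2. Values synthetic.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def common_view_offset(pr1, rho1, pr2, rho2, corrections=0.0):
    """Clock offset (station 1 minus station 2) from simultaneous observations."""
    return (pr1 - rho1) / C - (pr2 - rho2) / C + corrections

if __name__ == "__main__":
    sat = np.array([15600e3, 7540e3, 20140e3])         # shared satellite (metres)
    stn1 = np.array([4027e3, 307e3, 4919e3])            # station coordinates (metres)
    stn2 = np.array([3980e3, 1007e3, 4875e3])
    rho1, rho2 = np.linalg.norm(sat - stn1), np.linalg.norm(sat - stn2)
    dt1, dt2 = 35e-9, 12e-9                              # true clock offsets vs GPS time
    pr1, pr2 = rho1 + C * dt1, rho2 + C * dt2            # idealized pseudoranges
    print(f"station1 - station2: {common_view_offset(pr1, rho1, pr2, rho2)*1e9:.1f} ns")
```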
