His Coordinates
“I am fascinated by the ability to use clocks, especially in space”
says Demetrios Matsakis, Chief Scientist for Time Services at the US Naval Observatory (USNO), in an interview with Coordinates magazine. He shares his views on a range of issues related to timing and the importance of satellite clocks.
What is the importance of satellite clocks?
Without precise satellite clocks, GNSS systems as they are designed today would not work, because a satellite’s chief utility comes from broadcasting the time of its clock in analog form. A satellite also broadcasts crucial digital information, such as where the satellite is and how far off its clock is – but those could in theory also be provided by other means, including the internet. Since it takes time for the GNSS analog signals to arrive, the receiver can use the difference between its internal time and the received time to infer the distance to the satellite. That quantity is called the pseudorange, and once you have it for four or more satellites it is only a matter of math and digital corrections to figure out where the receiver is, and the time as referenced to the GNSS constellation. One important correction is the difference between the time of a GNSS clock, which is free-running, and the system time. Since each satellite is only told its time and orbital elements at specific intervals – once a day for GPS but every 100 minutes for Galileo – any wandering its clock might do in time or space leads to an error in the positioning; for most applications that would of course be a small error.
In the distant future, however, it is possible that a GNSS system will be designed with crosslinks between satellites that allow almost immediate synchronization of their clocks. In that case, good oscillators could be substituted for clocks. Positioning just needs all the clocks to be at the same time, and the correction to UTC can be derived using constant uploads from the ground via intermediate satellites. Such a scheme is being proposed by researchers at DLR.
What are the more important GNSS applications and their timing requirements?
If we measure an application’s importance in terms of money, financial systems are being regulated with tighter and tighter requirements – right now the EU requires 100 microseconds for High Frequency Trading. The application with the greatest number of users would be mobile phones. They get their time from cell-phone towers, which usually get their time from GNSS and usually need it at the microsecond level so that the towers can communicate with each other. If we measure an application in terms of precision, nanosecond-level requirements come from pure and applied research applications that require synchronizing clocks over a distance. For example, Very Long Baseline Interferometry is based upon synchronizing radio telescopes located around the world so they act like one big telescope the size of the Earth. This is how the Event Horizon Telescope imaged a black hole recently, although in that case they supplemented GNSS with internal adjustments of their data. For many research purposes, such as verifying that neutrinos do not exceed the speed of light, there is no limit to the desired level of accuracy.
Please explain to our readers the concept of GNSS time transfer.
Time transfer is a term for measuring the difference between two clocks. It is never trivial at the nanosecond level, particularly for clocks too far apart to connect with a simple cable. While specialized means exist that use point-to-point connections, such as optical fibers and Two-Way Satellite Time and Frequency Transfer, GNSS is often the most practical means available. Here is how it works: users A and B apply properly calibrated GNSS systems to measure the time difference between their local clocks and the time of a GNSS satellite, system, or systems. If you call these differences A-GNSS and B-GNSS, the difference between the two ground clocks is A-B = (A-GNSS)-(B-GNSS). There are many different ways to do the measurements, but the basic idea is the same.
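The cancellation at the heart of this scheme can be shown in a tiny sketch. The function name is ours; the point is simply that when both stations observe the same GNSS time at the same epochs, the GNSS clock's own error drops out of the difference.

```python
def clock_difference(a_minus_gnss, b_minus_gnss):
    """Common-view time transfer: given simultaneous measurements of
    clock A minus GNSS time and clock B minus GNSS time (seconds),
    return A minus B at each epoch. Any error common to both
    measurements -- notably the GNSS clock itself -- cancels."""
    return [a - b for a, b in zip(a_minus_gnss, b_minus_gnss)]

# Example: both stations see the same GNSS time, so even if that time
# wanders, the recovered A-B stays a steady 15 ns.
diffs = clock_difference([25e-9, 26e-9, 27e-9], [10e-9, 11e-9, 12e-9])
```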
How is time transfer different from frequency transfer?
Often misunderstood, frequency transfer is just uncalibrated time transfer. This is because you can get the frequency from data that look like time transfer, whether or not your systems are calibrated. You do that by dividing the difference between consecutive time measurements by the interval between them – this is taking the derivative, if you know calculus. But it doesn’t go the other way. If you have only the frequency differences, you can’t generate the time differences by just adding up the frequency differences times their intervals.
You also need to know the time at the start – in terms of calculus that would be the constant of integration, and in terms of engineering that would be the calibration. As a result, if your data have time-transfer units but are not calibrated, what you have is frequency transfer: you can’t say anything about the time difference between the clocks, but you can say everything about the frequency difference between them.
Do different GNSS systems have different time references?
Yes. In order to operate, each GNSS system has to generate its own time reference. Since a user can get much improved positioning by combining data from multiple GNSS systems, the systems are moving towards broadcasting the difference between their own system time and the system times of cooperating GNSS systems. The GGTO (Galileo/GPS Time Offset) is an example – USNO has for years been measuring this offset and reporting it to the respective operations centers. Of course, the user’s receiver can also infer the difference as a parameter in its position-and-time solution. That effectively removes one satellite from the solution for each new GNSS system used, but it has the advantage of also correcting for any of the receiver’s GNSS-specific calibration biases. None of this provides an answer for which GNSS time to define as “correct” and which GNSS time to re-reference – the receiver will have to be told that by the user.
Is the time obtained from GNSS satellite signals related to the international time scale, UTC?
All GNSS systems seek to provide UTC. Technically speaking, UTC is realized only at a participating laboratory, k, and that realization is termed UTC(k). It is defined by an electronic signal generated at that laboratory. UTC can be derived from after-the-fact corrections to the UTC(k), published monthly by the International Bureau of Weights and Measures (BIPM) in its Circular T. Since each GNSS system time is steered by some sort of control loop to a national lab, or to a group of national labs in the case of Galileo, the time broadcast by GNSS is only a prediction of the UTC(k), which is itself an imperfect prediction/realization of what UTC will have been determined to be when the Circular T comes out. But these are very good predictions. In the end, the differences between GNSS predictions of UTC are at the level of a few nanoseconds. Most users don’t care, especially as their receivers may have biases much larger than that. Those who do care can usually wait until all the information is available and then correct their data. I can refer users who need traceability to UTC to an article published in the 2018 proceedings of ION-PTTI; a version of it has just appeared in the March/April 2019 GNSS Solutions. In this article Judah Levine and Michael Lombardi from the National Institute of Standards and Technology (NIST), along with me, describe what would be needed to use GNSS data for traceability to UTC.
The details of how GNSS systems provide UTC can differ. For all but GLONASS, the GNSS clock corrections are first given in terms of their system time, which are continuous time scales that do not jump when leap seconds are inserted. Instead, other digital corrections enable the receiver to infer the number of leap seconds as well as how to relate the satellite clocks to the relevant UTC(k)’s. For GLONASS, no continuous system time is broadcast. Rather, GLONASS clock corrections give a prediction of UTC(SU), SU being the identifier for their national timing lab, VNIIFTRI. So GLONASS clocks appear to jump with every leap second – let’s just call it a programmer’s nightmare.
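The GPS-style scheme can be sketched as follows. This is an illustration only, with a function name of our choosing: GPS time runs continuously from its 1980 epoch, and the receiver obtains UTC by subtracting the leap-second count it reads from the navigation message (18 seconds was the broadcast value as of 2019).

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)   # start of the continuous GPS time scale
LEAP_SECONDS_2019 = 18             # GPS-UTC offset broadcast in 2019

def gps_week_tow_to_utc(week, tow_seconds, leap_seconds=LEAP_SECONDS_2019):
    """Convert a GPS week number and time-of-week to UTC.  GPS time
    never jumps; UTC is recovered by subtracting the leap-second
    count taken from the broadcast navigation message.  (This sketch
    ignores the sub-second fine corrections also broadcast.)"""
    gps_time = GPS_EPOCH + timedelta(weeks=week, seconds=tow_seconds)
    return gps_time - timedelta(seconds=leap_seconds)
```

A GLONASS-style receiver has no such continuous scale to lean on, which is why its clocks appear to step at each leap second.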
Is there any multi-GNSS clock solution?
This can easily be done inside GNSS receivers, and people are actively working on how best to do it at the professional level. Although not important for a receiver in an automobile, there are biases between satellites and systems that need to be worked out. In work from 2004-2007 involving MITRE and USNO, it was found that biases can be a function of receiver type, receiver settings, and which individual satellite is being observed.
With the advent of satellites that broadcast more than two frequencies, such biases mean that measurements of the ionosphere, which require two frequencies, yield systematically different values depending on which pair of frequencies is used. Pinning these biases down will be an interesting problem for the years to come.
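To see why two frequencies measure the ionosphere, and why inter-frequency biases contaminate that measurement, here is a minimal sketch of the standard first-order dual-frequency estimate (the function name is ours; GPS L1/L2 frequencies are used as an example pair). Since the ionospheric delay scales as 1/f², the difference between the two pseudoranges isolates it – along with any uncalibrated bias between the two channels.

```python
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F_L2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def iono_delay_l1(p1, p2, f1=F_L1, f2=F_L2):
    """First-order ionospheric delay on the f1 pseudorange, in metres,
    from dual-frequency pseudoranges p1 and p2.  Because delay goes as
    1/f**2, p2 - p1 = I1 * (f1**2 - f2**2) / f2**2, so I1 follows from
    the difference.  Any uncalibrated inter-frequency bias in p1 or p2
    leaks directly into this estimate -- which is why a third frequency
    can give a systematically different answer."""
    return (p2 - p1) * f2**2 / (f1**2 - f2**2)
```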
How can we ensure that GNSS receivers are functioning correctly as a reliable source of time?
You are asking a deep question, in many ways equivalent to asking how we know that any clock is measuring the right time. Receiver manufacturers should be able to provide an estimate of how well calibrated their products are. Comparing multiple receivers, and/or receivers getting time from multiple GNSS constellations, will help. Sanity checks can be made using direct comparisons with time from a national lab, over the Internet using the Network Time Protocol (NTP), or as a last resort over the telephone. Depending on where you are, there may be low-frequency transmissions such as WWVB, JJY, or DCF77, or even LORAN.
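For the NTP route, the core of such a sanity check is the standard four-timestamp computation from RFC 5905, sketched below (the function name is ours). It estimates how far the local clock is from the server's, assuming the network path is roughly symmetric.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock-offset and round-trip-delay estimates from
    four timestamps (seconds): t1 = request sent (client clock),
    t2 = request received (server clock), t3 = reply sent (server),
    t4 = reply received (client).  The offset estimate assumes the
    outbound and return path delays are equal."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

If the computed offset is much larger than the receiver's claimed calibration, something in the chain deserves a closer look.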
Would you like to comment on the clock failures that some of the GNSS systems had to go through?
Any piece of equipment is vulnerable to failures of all sorts, and that is why GNSS systems typically have many safeguards. Unless diagnostic information leads to a satellite being proactively marked unhealthy, there will always be a lag between a failure and the satellite being flagged. The more redundancy we have in terms of satellites, and even satellite systems, the better job a receiver can do of detecting and eliminating failures on its own. Most of the GPS failures have not been due to clocks – as any reader of your magazine can infer. Many of those failures could have been avoided if receiver manufacturers had correctly implemented the ICD-200 – they certainly would have had no problems with the week rollover, leap seconds, or some of the other well-publicized failures.
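The week-rollover issue mentioned above is a good example of an avoidable receiver-side bug. The legacy GPS navigation message carries only a 10-bit week number, which wraps every 1024 weeks (as it did in April 2019); a receiver can disambiguate it against any reference week it knows to be within about 512 weeks of the truth, such as one derived from its firmware build date. The sketch below shows one way to do that – the function name and approach are ours, not taken from any particular receiver.

```python
def resolve_gps_week(broadcast_week_mod_1024, reference_week):
    """Recover the full GPS week number from the 10-bit broadcast
    value, given a reference week assumed to be within ~512 weeks of
    the true week.  Picks whichever candidate full week (differing by
    multiples of 1024) lies closest to the reference."""
    n = broadcast_week_mod_1024 % 1024
    base = reference_week - (reference_week % 1024)
    candidates = [base - 1024 + n, base + n, base + 1024 + n]
    return min(candidates, key=lambda w: abs(w - reference_week))
```

Receivers that instead hard-coded an epoch, or ignored the ambiguity entirely, were the ones that jumped 19.7 years at the rollover.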
Any research going on in the domain of clock development? What are the key challenges in clock development?
There is of course a widely publicized, rapidly advancing worldwide effort to create fully functional optical clocks, with precisions of 10⁻¹⁸ and better. For many applications it is not just a matter of being more precise. Especially for GNSS, it can also involve such things as reductions in weight, size, power consumption, temperature dependence, and sensitivity to shaking. As a scientist, I am fascinated by the ability to use clocks, especially clocks in space, to expand our knowledge of astronomy and for fundamental tests of relativity and quantum mechanics. But equally interesting from an engineering point of view are the benefits of clocks in space being better able to keep time between uploads, and of how a better clock inside a receiver might help it find its position faster.
What is your role as Chief Scientist for Time Services at the US Naval Observatory (USNO)?
According to my position description, I have no role in day-to-day operations aside from quality control. This leaves me free to conduct research on various things – in the past few years I have worked on a test of general relativity involving GPS observations worldwide, better ways to steer clocks, the precision of time delivery via the phase of electric power in the USA, how to better predict the rotation of the Earth so GPS can better account for its variations, and, as I mentioned, how to use GPS to establish traceability to UTC. I also looked into some subnanosecond peculiarities in our GPS receivers, which ultimately resulted in a resolution by the Consultative Committee for Time and Frequency (CCTF) that GNSS manufacturers take steps to minimize the code-phase latching biases. I have held or hold various offices in the IAU, URSI, and ITU, and serve on the technical advisory committee of ION-PTTI. It seems I am frequently asked to referee journal articles, which I like because it helps keep me informed.