Very interesting stuff. One of the fun bits of being a time-nut is that there are somewhere around 15 orders of magnitude available to play with. Getting clocks synced to within a few milliseconds is child's play, even for an amateur with $15 of hardware. Getting clocks synced within a few dozen microseconds is possible with GPS and some minor effort. Getting clocks synced within a few nanoseconds requires a lot more effort. I have never attempted anything below the ns range, but this process seems to produce clocks in sync to within hundreds of attoseconds, which is on the order of 10^-16 seconds. Quite amazing if you ask me.
I can hardly comprehend it. Napkin math says that moving a clock something like 50 nanometers within one second is, over time, enough to push these clocks out of sync because of relativity. I mean, don't points on Earth move that much by themselves if they are far enough apart?
It turns out that cheap chip-scale optical atomic clocks would be a sensor revolution. They are sensitive enough to tell the difference between the first and second floor of a building by gravitational effects alone.
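A sketch of the napkin math behind both of these comments, reading the parent's 50 nm as a height change so that the relevant effect is gravitational redshift (my reading; a minimal sketch, not anything from the article):

```python
g = 9.81             # m/s^2, surface gravity
c = 299_792_458      # m/s

def fractional_rate_shift(dh_m):
    """Weak-field clock-rate difference for a height difference dh: g*dh/c^2."""
    return g * dh_m / c**2

# One ~3 m floor of a building: ~3.3e-16, resolvable by the best optical clocks.
print(fractional_rate_shift(3.0))

# Raising a clock by 50 nm shifts its rate by ~5e-24; over a year (~3.15e7 s)
# that accumulates to ~1.7e-16 s, i.e. ~170 attoseconds, comparable to the
# 320 as synchronization quoted at the bottom of the thread.
print(fractional_rate_shift(50e-9) * 3.15e7)
```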
Nanosecond-level sync is fairly achievable with Precision Time Protocol [1]. This is common in most media systems, where a small amount of drift will cause phase issues with audio. Not my area, but I believe it's seeing some use in the DC space too [2], where there may be strong requirements around event ordering for distributed systems.
The White Rabbit Project [3] also extends it to the picosecond realm for research control and instrumentation systems.
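For anyone curious, the arithmetic at the core of PTP is a two-way exchange of four timestamps; here is a minimal sketch with made-up numbers (real PTP adds hardware timestamping, correction fields, and a servo loop on top):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric path, which is PTP's central assumption."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Hypothetical exchange: slave clock 150 ns fast, one-way delay 2 us.
t1 = 0.0
t2 = t1 + 2e-6 + 150e-9   # arrival stamped by the (fast) slave clock
t3 = t2 + 1e-6            # slave replies 1 us later
t4 = t3 - 150e-9 + 2e-6   # arrival stamped by the master clock
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"offset: {offset * 1e9:.0f} ns, delay: {delay * 1e6:.1f} us")
```

Any asymmetry between the two directions goes straight into the offset estimate, which is why White Rabbit cares so much about symmetric links.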
Quite possibly yes. I have always been trying to synchronize computer clocks. I was imagining the ms-level solution to be plain-jane NTP over the internet (which is really quite impressive in its own right). The ns-level solution requires a GPS module, but in my lab I am also using a 10 MHz OCXO for holdover, as my GPS reception is marginal and I lose it several times a day. This adds a bit to the cost and complexity. Getting ns-level timing distributed throughout an ethernet network using PTP is on my list, although my current Raspberry Pi PPS solution does not support PTP. I am hoping to find a single-board computer with a decent GPIO and an ethernet controller that supports PTP.
Regardless, this is a very deep rabbit hole to go down. For the right kind of nerd, it can be a fun and rewarding experience. :)
Randall Munroe, in one of the footnotes of What If? 2, mentions that one light-nanosecond is very close to one foot (11.8 in vs. 12 in), and "proposes" that we should just redefine the foot to be the same as a light-nanosecond.
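Quick check of the footnote's numbers:

```python
c = 299_792_458                  # m/s, exact by definition of the metre
light_ns_in = c * 1e-9 / 0.0254  # one light-nanosecond, in inches
print(light_ns_in)               # ~11.8 inches, vs. 12 for the foot
```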
I do! If you're interested, I'll send you brand-new nanoseconds including an Official Certificate stating that they are Officially Certified to be the same length as Hopper nanoseconds (within reasonable manufacturing standards™, of course).
Relevant for anyone who wants to know more about GPS, Bartosz Ciechanowski has this excellent interactive exhibit on how it works, from the basics of triangulation to orbital paths, signal structure, noise-avoidance, etc.
Thanks for the link. I had not heard of this project, though I knew about CERN's experiments with synchronous ethernet. Tiny nit though - the plan was never to sync over the internet, with its variable latency and multiple PHY formats, but instead to provide an ethernet network with links up to 10km long that can provide a timing and phase reference for the LHC.
The idea of utilizing the continuously active, bidirectional transmission lines of 1000BASE-T to essentially mutually PLL over the highly symmetric phase-delay wiring, repurposing the existing carrier/clock recovery and garnishing it with some actual data on top (clock comparison/time sync, configuration), is just ingenious.
IMO this counts as hacker spirit already.
Just a shame the hardware is completely out of the "fun project" price range, even if substantially less fancy components would suffice for most non-CERN use cases of White Rabbit...
e.g. the aforementioned synchronous Ethernet, and utilizing single-frequency network [0] capabilities for WiFi.
I wonder how this compares to the precision of the GRACE-FO Laser Ranging Interferometer. Maybe this new comb method would allow for newer, cheaper versions of the satellites.
LIGO detects changes in the distance between its mirrors down to 1/10,000th the width of a proton.
One atto-light-second is a few hydrogen atoms long. So it still seems like quite a few orders of magnitude are needed for gravitational-wave detection, but perhaps the long baselines involved help close the gap?
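A rough unit check on those two comments (napkin numbers only, and note that comparing a time-transfer budget to LIGO's strain sensitivity is somewhat apples-to-oranges):

```python
c = 299_792_458                   # m/s
print(c * 1e-18)                  # 1 atto-light-second: ~3e-10 m, a few H atoms
print(c * 320e-18)                # the 320 as Hawaii sync, as a length: ~1e-7 m

proton_diameter = 1.7e-15         # m, approximate
print(proton_diameter / 10_000)   # the LIGO figure above: ~1.7e-19 m, so the
                                  # gap is roughly 12 orders of magnitude
```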
Some more expertise is needed here. I would guess probably not, but it's also not so far off as to be crazy.
It will have application in gravitational wave astronomy, not least because tighter bounds on delta-t among detector sites mean tighter localization of sources on the sky.
More likely the technique will find its way into time transfer to and among spacecraft equipped with good atomic clocks, which will help with studies of the weak gravitation within the solar system (checking if GR is correct in that limit, and possibly doing some low-hanging fruit on local dark matter detection, e.g. if there is a tail or wake entrained to Jupiter or the sun), pulsar timings (a big announcement on that is coming next week from NANOGrav), and a number of other interesting experiments. If you're curious about the technical details of that, you can check out the Wolf, Salomon and Reynaud ACES and SAGAS IAU paper doi:10.1017/S1743921309990676, which can be found in its 2009 form as a PDF at <https://www.cambridge.org/core/services/aop-cambridge-core/c...>; it is a good starting point for verification of general relativity enabled by space clocks and time transfer to/among them.
On the gravitational wave astronomy front, good time distribution to enhanced LISA <https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Ant...> or its precursors would help with multimessenger observing: one might better correlate an IceCube neutrino detection and a LISA detection, for example, if at the South Pole one can benefit from ground-space-ground time distribution considered on p. 8-9 (and see Fig 5) of <https://arxiv.org/pdf/2212.12541.pdf>, which is the preprint of the paper summarized in the article linked at the top.
I hope this relates to what you were asking about. I'm afraid I don't understand the points raised in the other replies to your question.
> Last year, scientists drove up Mauna Loa volcano on Hawai‘i, aimed a laser at a reflector positioned on Haleakala peak on Maui, and beamed rapid pulses of laser light through 150 kilometers of turbulent air.
Stupid question ... why would they pick the tops of two volcanoes on two islands instead of two mountain peaks on the continent that have roads between them, overnight shipping for whatever components they may need, easier hiring, and no random lava flows destroying equipment?
Why take a trip to the mountains in Boulder's backyard when you can go to Hawaii? :-)
But also, I imagine it has to do with the Mauna Kea Observatory, where half the setup was.
I'm not sure if it's for the observatory (e.g., it says they used a light source there, or possibly because they have some scientific equipment set up there already), or maybe because of the environment. This is what Wikipedia says about the site: "The location is near ideal because of its dark skies from lack of light pollution, good astronomical seeing, low humidity, high elevation of 4,205 meters (13,796 ft), position above most of the water vapor in the atmosphere, clean air, good weather and low latitude location." Of course the astronomical parts don't matter, but some of the rest is likely relevant.
Also, many mountaintops that high will be covered with snow. Mauna Kea has some, but probably not as much.
If you set two clocks to the same time and put one at the bottom of the ocean and one at the top of a mountain ... over time, they will drift apart ... so is this ultra-precise time in space making up for gravitational time distortion as well?
It's much worse than that: last I heard, we can measure the difference in time passage between clocks separated by only a few vertical feet.
Ultra precise time in space absolutely has to account for relativity changing clock rates based on how deep you are in the gravity well. GPS would be all but useless without it.
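The standard textbook GPS figures (not from the article) give a feel for the size of the effect:

```python
# GPS satellite clocks run fast by ~45.7 us/day from gravitational blueshift
# and slow by ~7.2 us/day from orbital velocity; the net is ~38.5 us/day.
net_drift_s_per_day = (45.7 - 7.2) * 1e-6
c = 299_792_458                          # m/s
print(net_drift_s_per_day * c / 1000)    # ~11.5 km/day of ranging error
                                         # if left uncorrected
```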
The question is which time we consider to be the “correct” time. Turns out, we've decided to use a clock in Colorado as the time of record and then occasionally sync that clock with GPS satellites.
The USNO Alternate Master Clock at Schriever SFB is not the clock of record. It is synchronized to the USNO Master Clock in Washington DC.
The USNO Master Clock generates the US DOD’s official time, but it is also not the clock of record. There is also NIST’s clock, which is the official time for civilian use in the USA. And the NPL’s clock in Teddington for the UK. And ESA’s clock in Noordwijk for Galileo. And the PTB’s clock in Braunschweig for Germany. etc. usw.
All these clocks and many more contribute their measurements and cross-comparisons to the BIPM in Paris on a regular schedule. The BIPM calculates a consensus timescale from these measurements, which takes the form of retrospective corrections published in BIPM Circular T.
Circular T is the time of record. But it is not the most accurate time available because of its relatively short averaging time.
The best time is TT, terrestrial time, a uniform timescale that ticks at the same rate as the SI second as measured on the rotating geoid, i.e. the notional surface of equal gravitational potential, which is the general-relativity equivalent of mean sea level.
Well, not TT itself, but TT(year). The BIPM periodically publishes retrospective corrections going back several decades, saying what the error in TT was back then based on their best understanding now.
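The fixed part of that chain of relationships is simple; a minimal sketch (the retrospective TT(BIPM) corrections are the hard-won part and are not modeled here):

```python
# By IAU convention, TT = TAI + 32.184 s exactly; the BIPM's TT(BIPMyy)
# series then applies small retrospective corrections on top.
TT_TAI_OFFSET = 32.184  # seconds, by definition

def tt_from_tai(tai_seconds):
    """Conventional realization of Terrestrial Time from a TAI reading."""
    return tai_seconds + TT_TAI_OFFSET

print(tt_from_tai(0.0))  # 32.184
```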
I wonder: if you can use this to synchronize clocks with very low-power signals, could you use it to transmit data with very low-power signals too? If so, you could conceivably transmit data with very little power over vast distances.
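Napkin math using the article's own numbers, assuming a 1550 nm telecom-band comb (my assumption; the wavelength isn't stated here):

```python
h = 6.626e-34             # J*s, Planck constant
c = 3.0e8                 # m/s
wavelength = 1550e-9      # m, assumed telecom band
photon_energy = h * c / wavelength     # ~1.3e-19 J per photon

launched = 40e-6 / photon_energy       # 40 uW: ~3e14 photons/s leaving
received = launched * 1e-9             # "fewer than one photon in a billion"
print(f"{received:.1e} photons/s arriving")
# ~3e5 photons/s; at an idealized ~1 bit/photon that caps the channel
# around a few hundred kbit/s, so slow but very much nonzero.
```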
It took some digging, but I've found an accessible copy of the article referenced.[1] It seems the heart of this system uses "balanced detectors" to do the optical detection, allowing a lock (something I just learned exists).
The rest is phase-locked loops, something I already understand.
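For the curious, a toy software PLL (illustrative only; nothing like the paper's optical implementation): a mixer as phase detector, a PI loop filter, and an NCO pulling onto a tone that starts 50 Hz away.

```python
import numpy as np

fs = 1_000_000                      # sample rate, Hz (arbitrary)
f_in, f_nco = 10_000.0, 10_050.0    # input tone vs. initial NCO frequency
kp, ki = 0.05, 0.0005               # PI gains, hand-tuned for this demo

n = np.arange(200_000)
incoming = np.sin(2 * np.pi * f_in * n / fs + 0.7)   # unknown phase offset

nco_phase, integrator = 0.0, 0.0
base_step = 2 * np.pi * f_nco / fs                   # NCO increment per sample
errors = []

for x in incoming:
    # Mixing against the NCO quadrature yields ~sin(phase error)/2 plus a
    # double-frequency term that the loop dynamics average away.
    err = x * np.cos(nco_phase)
    integrator += ki * err            # integral path absorbs the 50 Hz offset
    nco_phase += base_step + kp * err + integrator
    errors.append(err)

# Once locked, the error (ripple aside) averages to roughly zero.
print("mean error, last 10k samples:", np.mean(errors[-10_000:]))
```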
Some notable numbers from the paper:
> the researchers’ time programmable frequency comb is capable of operating at this quantum limit, where fewer than one photon in a billion reaches its target device. It worked even when the laser was sending out only 40 microwatts of power, or about 30 times less than a laser pointer uses.
> the pulse time and phase are digitally controlled with ±2-attosecond accuracy
> Over 300 km between mountaintops in Hawaii with launched powers as low as 40 μW, distant timescales are synchronized to 320 attoseconds
> at 4.0 mW transmit power, this approach can support 102 dB link loss, more than sufficient for future time transfer to geosynchronous orbits