FPGA-based Time-to-Digital Converters

There are many ways to measure the time of an event precisely. Probably the most common is to use a counter, as in the timer units of micro-controllers. Unfortunately, the precision of this approach is limited by the clock at which the counter runs: a micro-controller usually gets to 10-20ns, while high-performance systems might get down to 1ns. If higher precision is needed, a method that runs independently of the system clock is required. The nice review paper by Józef Kalisz covers most of the architectures used today.

To minimize the number of analog components and to avoid relying on custom ASICs, a couple of ways to implement TDCs in FPGAs have been devised. The earliest report on such a system was again by Józef Kalisz in 1997, where he achieved 200ps resolution using a simple buffer-based tapped delay line on a QuickLogic FPGA together with a calibration system referenced to the external 100MHz clock. Using Vernier tapped delay lines and the Nutt interpolator, this was brought down to 100ps by Ryszard Szplet et al. in 2000, and further to 45ps by the same group in 2007 (see also their paper from 2009 for details).

A major limiting factor when using FPGAs as TDCs is the uneven delay between elements within the same slice and between elements in different slices; the bin size can vary by as much as a factor of 5. In 2008 Jinyuan Wu and Zonghan Shi presented a way around this limitation by sending multiple transitions down the same delay line, dubbed the "Wave Union", together with a calibration method to measure the width of each bin. Using both strategies together, they got the maximum bin width down to 65ps and the RMS error down to 20ps with "Method A" (using a train of 3 transitions), and an RMS error of 12ps with "Method B" (using a ring oscillator as the source of multiple transitions). Even with the Wave Union, the inherent resolution limit of the bin width defined by the delay of a single delay element remains the same.
To get below 10ps resolution, a different method has to be applied. A very interesting one was developed by Petr Panek and Ivan Prochazka in 2007, achieving an RMS error of 1ps. It is based on inducing oscillations in a high-Q filter and sampling the resulting voltage with a 100MHz ADC. They further expanded on the theory and the error model in 2008, and in 2013 they presented an improvement that got below 0.5ps of RMS error. Ripamonti et al. showed in 2010 that it is possible to achieve <10ps resolution with a simple setup: a switched LC resonator with a Q of just 30 and a 62MHz ADC.
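As a rough illustration of why sampling a resonator can beat the ADC's sample spacing, here is a toy model (my own sketch with assumed frequencies, not the actual circuit or processing from these papers): an event excites an ideal, lossless oscillation, a 100MHz ADC samples it, and a least-squares quadrature fit recovers the phase, and hence the event time, far more finely than the 10ns sample period.

```python
import math

F_RES = 10e6    # resonator frequency, 10 MHz (assumed value)
F_ADC = 100e6   # ADC sample rate, 100 MHz as in the 2007 paper
N = 60          # samples used for the fit: exactly 6 resonator periods

def event_to_samples(t_event):
    """ADC samples of an ideal oscillation cos(2*pi*F_RES*(t - t_event)).
    A real resonator decays and adds noise; this toy model is lossless."""
    return [math.cos(2 * math.pi * F_RES * (n / F_ADC - t_event))
            for n in range(N)]

def fit_event_time(samples):
    """Least-squares fit of a*cos(wt) + b*sin(wt). Over a whole number
    of periods the two basis functions are orthogonal, so projecting
    onto them gives the quadrature amplitudes directly; the phase
    atan2(b, a) divided by the angular frequency is the event time."""
    w = 2 * math.pi * F_RES
    a = 2 / N * sum(s * math.cos(w * n / F_ADC) for n, s in enumerate(samples))
    b = 2 / N * sum(s * math.sin(w * n / F_ADC) for n, s in enumerate(samples))
    return math.atan2(b, a) / w

t_true = 3.21e-9   # event at 3.21 ns, well inside one 10 ns ADC period
t_est = fit_event_time(event_to_samples(t_true))
print(f"true {t_true * 1e12:.1f} ps, estimated {t_est * 1e12:.1f} ps")
```

In this noiseless model the fit is exact to floating-point precision; in practice ADC noise, jitter, and the filter's finite Q set the picosecond-level limits analyzed in the papers, and the phase is only unambiguous within one resonator period, which is why such interpolators are combined with a coarse counter.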