Aspects of synchronization.
Ideally, a data acquisition (DAQ) system should acquire information about an event, regardless of signal type or location, without changing its timing. However, even recording a voltage and a current within the same circuit can be error prone because voltage and current probes typically have different delays. This problem is only accentuated when large distances separate signals.
As Bob Judd, director of marketing at United Electronic Industries, commented, "Synchronization means different things to different people. ... We have a number of customers who use UEI's PowerDNA Cubes with A/D boards connected to a variety of vibration sensing equipment. The actual sensors and cubes can be hundreds of miles apart, yet it's critical to have the ability to synchronize the samples of all the A/D converters. Taking advantage of the highly accurate 1-pps signal provided by today's GPS receivers, it's actually quite easy to synchronize sampling to the submicrosecond level."
Common to all DAQ applications is the limitation of timing resolution--the sampling period or the time between samples. The concept of a sampling period itself is further complicated by the choice of A/D converter. Fausto Soares, an application engineer at Data Translation, explained that the Model DT9847 DAQ module uses a sigma-delta A/D converter "which requires a master clock of several times the actual sample rate ... depending on the sample rate range: 256x for 1 kHz to 54 kHz, 128x for 54 kHz to 108 kHz, or 64x for 108 kHz to 216 kHz."
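The multiplier ranges Soares quoted can be captured in a short sketch. The function name and range handling below are illustrative, not part of any Data Translation API; the multipliers are taken directly from the quote.

```python
def master_clock_hz(sample_rate_hz):
    """Sigma-delta master clock for a given sample rate, using the
    multiplier ranges quoted for the DT9847:
    256x for 1-54 kHz, 128x for 54-108 kHz, 64x for 108-216 kHz."""
    if 1e3 <= sample_rate_hz <= 54e3:
        return 256 * sample_rate_hz
    if 54e3 < sample_rate_hz <= 108e3:
        return 128 * sample_rate_hz
    if 108e3 < sample_rate_hz <= 216e3:
        return 64 * sample_rate_hz
    raise ValueError("sample rate outside supported range")

print(master_clock_hz(48e3))    # 12288000.0 -> a 48-kHz rate needs a 12.288-MHz master clock
```

Note that the same audio-style master clock (12.288 MHz) serves several standard sample rates, which is one reason sigma-delta converters group rates into ranges.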
Soares discussed the need for a sync signal to ensure that all A/D converters are aligned to the same master clock cycle. In fact, a common reference clock, a sync signal, and a trigger are distributed to each interconnected DAQ module in a system.
Synchronization in Detail
After considering only a few application examples, it's obvious that synchronization may be required on several levels.
In very high-speed DAQ systems, the initial delays introduced by different types of sensors or probes or just unmatched wire lengths must be corrected before digitizing. Alternatively, known timing offsets can be adjusted post-acquisition, but direct channel-to-channel comparisons then involve interpolation. In critical cases, correction must be made for changes in sensor characteristics that may be functions of time, temperature, humidity, altitude, etc.
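The post-acquisition alternative mentioned above can be sketched as follows. This is a minimal illustration, assuming a known fixed delay and using linear interpolation; a production system would more likely use a sinc or FIR fractional-delay filter, and the function name is hypothetical.

```python
def correct_known_delay(samples, delay_s, sample_period_s):
    """Shift a channel earlier by a known sensor/cable delay using
    linear interpolation between neighboring samples."""
    shift = delay_s / sample_period_s      # delay expressed in samples
    n = int(shift)                         # whole-sample part
    frac = shift - n                       # fractional part
    out = []
    for i in range(len(samples)):
        j = i + n
        if j + 1 < len(samples):
            out.append(samples[j] * (1 - frac) + samples[j + 1] * frac)
        else:
            out.append(samples[min(j, len(samples) - 1)])  # hold last value at the end
    return out

# A channel delayed by 1.5 sample periods, corrected after acquisition:
print(correct_known_delay([0, 1, 2, 3, 4], delay_s=1.5, sample_period_s=1.0))
# [1.5, 2.5, 3.5, 4, 4]
```

The interpolation is exactly what the text warns about: once channels are shifted by fractional samples, direct channel-to-channel comparison involves interpolated rather than measured values.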
Jim Schwartz, product marketing manager for data acquisition at National Instruments, described a DAQ application that needed very tight synchronization. "At NI Week 2013, Dr. Marco Pallavicini from Italy's National Institute of Nuclear Physics unveiled a large-scale research project to search for dark matter. ... The experiment, located 1,400 meters below the Gran Sasso Mountains in Italy, required 200 input channels acquiring data at 5 GS/s synchronized to under 300 picoseconds--a timing error of 1 nanosecond equates to over a foot of error at the speed of light."
According to an article on Physicsworld.com, neutrinos in all three known forms are the basis of one hypothesis for dark matter, but detecting and measuring their activity are difficult. They are nearly massless, have no charge, and easily pass through most matter. Dr. Pallavicini's experiment proposes to detect the oscillation of neutrinos from one form to another as they interact with a source of other neutrinos or antineutrinos. In particular, he is looking for faster oscillations over a shorter time period that may indicate the existence of yet a fourth type of neutrino--a so-called sterile neutrino.
The article explained, "[the Short Distance Neutrino Oscillations with BoreXino experiment] will establish exactly where each of the source-induced neutrino interactions takes place ... by recording the precise time that the associated light emission reaches several of the photomultiplier tubes. If sterile neutrinos exist, then the number of interactions taking place as a function of distance from the source would show a small but distinct quasi-sinusoidal variation, with a wavelength on the scale of metres--far too short to be caused by normal neutrino oscillation." (1)
Users of slower-speed DAQ systems don't have time-alignment concerns on the same scale as nuclear physicists. If your system samples signals at a 10-kHz rate, an event's timing resolution can be no finer than 100 µs. Two digital signals could change state within a microsecond of each other and still be recorded as edges 100 µs apart. Equally, they could change 99 µs apart but within the same sampling period and be recorded as simultaneous events. In general, sensor delays much smaller than a sampling interval can be ignored because quantization has a larger effect on timing.
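The quantization effect is easy to demonstrate numerically. The sketch below assumes events are recorded at the first sample taken at or after the event; the function name is illustrative.

```python
import math

SAMPLE_RATE = 10_000             # 10-kHz sampling
PERIOD_US = 1e6 / SAMPLE_RATE    # 100 us between samples

def first_sample_at_or_after(event_time_us):
    """Index of the first sample taken at or after an event."""
    return math.ceil(event_time_us / PERIOD_US)

# Edges only 1 us apart can straddle a sample boundary and appear 100 us apart:
print(first_sample_at_or_after(99.5), first_sample_at_or_after(100.5))   # 1 2
# Edges 99 us apart can fall in the same period and appear simultaneous:
print(first_sample_at_or_after(0.5), first_sample_at_or_after(99.5))     # 1 1
```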
Nevertheless, signal alignment within microseconds is not an uncommon requirement. Microstar Laboratories' technical marketing director Larry Trammell explained how an array of microphones was used to map jet aircraft sound. He said, "In conjunction with OptiNav imaging researchers, a phased array of 252 electret microphones was distributed over a grid of width 31 meters and length 122 meters to collect data for 3-dimensions-over-time data-set models that reconstruct the wing-tip vortices from jet aircraft. These noisy but invisible vortices are produced by jet aircraft descending for a landing, and can become dangerous if they do not dissipate before subsequent aircraft attempt to land. ... The beam-forming analysis critically depends on propagation time, which shows up in data sets as small (and frequency-dependent) phase variations in acoustic signals picked up by the microphones."
If the inputs to a local DAQ system are time-aligned, each input has its own A/D converter, and all A/Ds are referenced to the same clock, the acquired data also will be time-aligned. However, in very large or geographically dispersed systems, it's virtually impossible to distribute hardwired clock or trigger signals. Instead, as UEI's Judd commented, GPS synchronization is used to provide unified timing regardless of location. Figure 1 shows a PowerDNA Cube and suggests synchronization across large distances. For some applications, IRIG or the IEEE 1588 Precision Time Protocol also may be appropriate.
IRIG stands for inter-range instrumentation group and, as the name implies, is closely associated with missile telemetry. IRIG-B is one of the more popular time encoding schemes within the IRIG family of codes and has two flavors. Unmodulated IRIG-B is a level-shifted logic signal that can achieve submicrosecond accuracy, perhaps down to a few tens of nanoseconds. However, because it needs a DC path, it is only suitable for direct, typically short, connections.
Modulated IRIG-B is treated more like an audio telephone signal and achieves accuracies in the tens of microseconds. Both forms of IRIG-B can be decoded to provide a pulse every 10 ms as well as a 1-pps signal. In addition, IRIG-B has the benefit of encoding the present time in a seconds, minutes, hours, days, years format, making it useful as a timestamp. (2)
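The time-of-day portion of an IRIG-B frame is BCD-coded, so decoding it reduces to summing weighted bits. The sketch below assumes the seconds, minutes, and hours bits have already been sliced out of the frame (LSB first, position-identifier and index bits removed); the function names are illustrative only.

```python
def bcd_field(bits, weights):
    """Sum the weighted bits of one BCD field (LSB first)."""
    return sum(w for b, w in zip(bits, weights) if b)

def irig_b_seconds_of_day(sec_bits, min_bits, hr_bits):
    """Combine the BCD seconds, minutes, and hours fields of an
    IRIG-B frame into seconds past midnight."""
    seconds = bcd_field(sec_bits, (1, 2, 4, 8, 10, 20, 40))
    minutes = bcd_field(min_bits, (1, 2, 4, 8, 10, 20, 40))
    hours = bcd_field(hr_bits, (1, 2, 4, 8, 10, 20))
    return hours * 3600 + minutes * 60 + seconds

# 12:34:56 encoded as BCD bit fields (units bits first, then tens bits):
print(irig_b_seconds_of_day(
    sec_bits=[0, 1, 1, 0, 1, 0, 1],    # 2+4 = 6, 10+40 = 50 -> 56 s
    min_bits=[0, 0, 1, 0, 1, 1, 0],    # 4, 10+20 = 30 -> 34 min
    hr_bits=[0, 1, 0, 0, 1, 0]))       # 2, 10 -> 12 h
# 45296  (= 12*3600 + 34*60 + 56)
```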
IEEE 1588 provides precise timing for Ethernet systems. It has the capability to update clock synchronization across a network that may dynamically change its configuration. Accuracy in the tens of nanoseconds is possible.
If you need to accurately synchronize A/D sampling between geographically distant points, GPS is the approach most experts recommend. As Barry Bolling, an applications engineer at Yokogawa, explained, "[the] ScopeCorder synchronizes multichannel waveform recordings with other remote (distant) multichannel waveform recordings using a combination of GPS and IRIG time code. The GPS hardware is completely onboard, [you] simply connect an antenna. The ... ScopeCorder achieves real-time waveform data synchronization by using IRIG ... time-code information and GPS clock data to both synchronize the sampling clock and synchronize the real-time clocks on multiple, remotely located DL850 ScopeCorders."
Time and time interval are two different concepts. Timestamping requires an unambiguous, accurate time. GPS transmits the current time in weeks and seconds relative to epoch midnight Jan. 5, 1980. Your receiver may translate this information into a more friendly seconds, minutes, hours, days, years IRIG format. GPS actually transmits data at an atomic clock-derived 50-Hz rate, and this is the basis of a receiver's 1-pps output. (3) Bolling cited today's increasing use of alternative energy sources as a typical application. "The synchronized ScopeCorders can be used at multiple remote locations to monitor the effect of ... intermittent energy sources, as well as the stability of the grid itself. Voltage, current, power, frequency, and distortion can all be remotely monitored. In addition, some sources of energy may be extremely remote, such as a hydroelectric plant that is accessible only by helicopter."
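Converting a GPS week number and seconds-into-week to a calendar time is simple arithmetic from the epoch. This sketch ignores leap seconds, which a real receiver must account for when reporting UTC, and ignores the 10-bit week-number rollover in the legacy broadcast format.

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)   # start of GPS week 0 (midnight ending Jan. 5, 1980)

def gps_to_datetime(week, seconds_of_week):
    """Convert GPS week + seconds-into-week to a calendar date/time
    (GPS time scale; leap seconds not applied)."""
    return GPS_EPOCH + timedelta(weeks=week, seconds=seconds_of_week)

print(gps_to_datetime(0, 0))    # 1980-01-06 00:00:00
```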
Bolling concluded, "The ScopeCorder's synchronization option features [+ or -]200-ns accuracy, meaning that the recorder's real-time clock and the sampling clock are both synchronized to the GPS clock. Likewise, a remote ScopeCorder equipped with this option also will have both its real-time clock and its sampling clock synchronized to the GPS. Thus, the two (or more) recorders are very well-synchronized with one another."
Measurement Computing's LGR-5320 series of data loggers features "synchronous operations beyond standard data loggers on the market today," according to Peter Anderson, the company's general manager. However, Anderson also explained, "The LGR-5320 subsystem is based on a scanning architecture. While all analog signals are not acquired at the exact same time, the digital signals are acquired with the first analog signal." The LGR-5320 samples at a 200-kS/s rate, so the delay to subsequent analog channels is equal to 5 µs times the position of the channel in the scan list.
With the LGR-5320, you can acquire the activity on several analog and digital channels with 5-µs resolution. Many applications, such as temperature monitoring, involve signals that change relatively slowly. A scanning data logger can execute the scan list at a high-speed sampling rate but repeat the list at a much lower scan rate.
For example, a group of 10 channels could be sampled in a 45-µs burst, but at a composite 100-Hz rate. You need to compare the 45 µs required to scan the channels vs. the 10,000 µs between scans to determine if the timing uncertainty caused by the burst is significant. In many cases, scanning as a burst of samples approximates simultaneous sampling well enough that the difference can be ignored. However, at high scan rates, you may not be able to compare signal values from different channels without interpolating.
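The comparison above amounts to a one-line ratio. The sketch below uses the article's numbers; the function name is illustrative.

```python
SCAN_RATE_HZ = 100        # scan list repeated 100 times per second
SAMPLE_PERIOD_US = 5      # 200-kS/s converter -> 5 us per channel in the scan

def channel_skew_us(channel_index):
    """Delay of a channel relative to the first channel in the scan."""
    return SAMPLE_PERIOD_US * channel_index

burst_us = channel_skew_us(9)             # last of 10 channels: 45 us after the first
scan_interval_us = 1e6 / SCAN_RATE_HZ     # 10,000 us between scan repetitions
print(burst_us / scan_interval_us)        # 0.0045 -> skew is ~0.45% of the scan interval
```

At less than half a percent of the scan interval, the burst is usually a good approximation of simultaneous sampling; at scan rates approaching the burst duration, it is not.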
Storing/Replaying the Data
After you've taken the necessary precautions to ensure synchronization across all the signals being acquired, how do you ensure it's not lost later? A common approach, and one used by UEI's PowerDNA datalogger, is to record metadata describing acquisition parameters in a file header and acquired data in the body of the file. A detailed description of this procedure is given in a UEI document:
"It is desirable to determine the time-stamp for each scan. Each LOG_n section of the header file contains two values called tStart and tStop, indicating the timestamp of the first and last scan of the corresponding data file. The value is the number of milliseconds that have passed since the start of acquisition. The acquisition start date and time are given by the creationDate and creationTime values in the FILE section of the header file. Also, the clockScan value in the CLOCK_O section of the header file gives the number of scans per second, which can be used to increment a running time-stamp counter that is initialized based on the aforementioned tStart, creationDate, and creationTime values." (4)
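The reconstruction the UEI document describes can be sketched as follows. The field names follow the quoted description, but the actual header encoding of creationDate and creationTime is not shown in the quote, so the date/time string format used here is an assumption.

```python
from datetime import datetime, timedelta

def scan_timestamps(creation_date, creation_time, t_start_ms, clock_scan, num_scans):
    """Reconstruct per-scan timestamps from UEI-style header values:
    creationDate/creationTime (acquisition start), tStart (ms offset
    of the first scan), and clockScan (scans per second)."""
    t0 = datetime.strptime(creation_date + " " + creation_time,
                           "%Y-%m-%d %H:%M:%S")          # assumed string format
    first = t0 + timedelta(milliseconds=t_start_ms)      # timestamp of the first scan
    scan_period = timedelta(seconds=1.0 / clock_scan)    # increment per scan
    return [first + i * scan_period for i in range(num_scans)]

ts = scan_timestamps("2013-12-01", "12:00:00", t_start_ms=250,
                     clock_scan=1000, num_scans=3)
print(ts[0])    # 2013-12-01 12:00:00.250000
```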
In this scheme, individual data-point timestamps are generated as the data is played back--they are not stored as part of a more complex data format. One reason for doing this is to save space, but a more important aspect is to save time. Streaming high-speed data requires an efficient format.
As described in an NI technical paper, "TDMS files organize data in a three-level hierarchy of objects. The top level [lead-in has] a single object that holds specific information [such as] author or title. [After that,] each file can contain an unlimited number of groups, and each group can have an unlimited number of channels." (5)
The TDMS (technical data management streaming) format provides a very structured way to describe the data in a file, but it also is very flexible (Figure 2). As noted on NI's website, you don't have to redesign the file format as your test requirements change--just extend it as necessary. The basic distinction between metadata and raw data supports both the need to document the details of an experiment as well as record the data from it.
Although the concepts used within TDMS are easy to understand and imitate, the detailed implementation is proprietary. According to an NI TDMS document, "If you need to read a TDMS file with software that [does not implement] native support for TDMS (without using any components provided by NI), you will not be able to interpret this data." (5) On the other hand, NI provides a free add-in that supports TDMS data viewing in Excel--a practical solution for small files.
An interesting aspect of TDMS metadata is the use of a timestamp similar to the GPS format. In the case of TDMS, time is measured from the epoch 01/01/1904 and stored as two 64-bit values: one (i64) in whole seconds and the other (u64) in positive fractions of a second.
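Under the assumption that the u64 fraction represents units of 2^-64 second, the two-word TDMS timestamp converts to a conventional date/time as sketched below; the function name is illustrative.

```python
from datetime import datetime, timedelta

TDMS_EPOCH = datetime(1904, 1, 1)   # TDMS/LabVIEW time epoch

def tdms_timestamp(whole_seconds, fraction_u64):
    """Convert a TDMS timestamp (i64 whole seconds since 1904-01-01
    plus a u64 positive fraction of a second, assumed to be in units
    of 2**-64 s) into a datetime. Timezone handling omitted."""
    frac = fraction_u64 / 2**64
    return TDMS_EPOCH + timedelta(seconds=whole_seconds + frac)

print(tdms_timestamp(0, 2**63))    # 1904-01-01 00:00:00.500000
```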
In another example, Microstar's Trammell explained, "We keep all archiving activity--logging, monitoring, digital filtering, statistical analysis, etc.--in a separate process on its own clock with no impact on the timing of the sampling hardware." Sample timing is related to a high-speed internal clock. Trammell continued, "For aligning intermittent blocks, we use a hardware-based triggering scheme. [After the trigger is received,] there is a temporary timing disruption to locate the nearest internal clock cycle with 20-ns resolution." Sampling then proceeds using the selected internal clock rate with approximately 2-ns resolution.
This kind of response to triggering is a direct consequence of mixing an analog, asynchronous trigger with a free-running sampling clock. Synchronous sampling captures signal activity at fixed intervals, which in general will not be aligned to the trigger instant. At least two solutions to the problem exist. In one, the clock is restarted or stuttered to align with the trigger. While this may be practical for long sampling periods, much greater timing consistency results from Microstar's digital approach.
In the other case, triggering is derived from the acquired samples--so-called digital triggering--and there can be no misalignment between the trigger and subsequent samples. Playback of Microstar data initially involves up to 20-ns data offset from the trigger because that amount of uncertainty was introduced when the data was recorded.
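Digital triggering as described above can be illustrated with a few lines of code: because the trigger condition is evaluated on the samples themselves, the trigger instant is defined in the same timebase as the data, and linear interpolation between the two straddling samples even yields a sub-sample estimate. This is a generic sketch, not Microstar's implementation.

```python
def digital_trigger(samples, level):
    """Find the first rising crossing of `level` in acquired data and
    estimate its position in fractional samples by linear interpolation.
    Returns None if no crossing occurs."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            frac = (level - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1) + frac
    return None

print(digital_trigger([0.0, 0.2, 0.8, 1.0], 0.5))   # 1.5
```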
Interestingly, Trammell listed a separate mode in which both timing and A/D samples are recorded as data. "DSP techniques are used to reconstruct samples at any desired location relative to the reference timing source with about 1/1,000 of the sample interval resolution and [+ or -]1/2 the original sample interval offset."
Further Applications Involving Synchronization
Based on this DSP capability, Microstar recently launched the Rotating Machinery Analysis Module, which makes use of two data streams recorded at the same time: one from a shaft encoder and the other from a reference clock. According to the company's press release, "The software applies data-driven synchronization for which a continuously varying rate is determined from pulses of a high-resolution rotary encoder..., and after that, DSP techniques can locate signal values at arbitrary locations relative to the pulse stream with approximately 1/1,000 sample interval precision in real time. You can even trigger data collection on 'rotation domain' data streams."
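The general idea of a rotation-domain stream, resampling a time-sampled signal at the instants of encoder pulses, can be sketched as below. This uses simple linear interpolation for illustration; per the press release, Microstar's module uses DSP reconstruction to reach roughly 1/1,000-sample-interval precision, well beyond this sketch.

```python
def resample_to_rotation(times, values, encoder_times):
    """Linearly interpolate a time-sampled signal at encoder pulse
    instants, producing one value per pulse (a 'rotation domain'
    stream). Assumes times are increasing and pulses fall within
    the sampled span."""
    out = []
    j = 1
    for t in encoder_times:
        while j < len(times) - 1 and times[j] < t:
            j += 1                                # find the straddling sample pair
        t0, t1 = times[j - 1], times[j]
        frac = (t - t0) / (t1 - t0)
        out.append(values[j - 1] * (1 - frac) + values[j] * frac)
    return out

# Signal sampled at 0, 1, 2, 3 s; encoder pulses at 0.5 s and 2.5 s:
print(resample_to_rotation([0, 1, 2, 3], [0, 10, 20, 30], [0.5, 2.5]))   # [5.0, 25.0]
```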
In a separate example, DATAQ Instruments' vice president Roger Lockhart described how the company's DAQ systems are used to monitor typical multistation rolling mill activity. "Monitoring one or several stations will expose a problem, but not the station causing it. Only a synchronous acquisition process shows the complete story, clearly differentiating between true causal and false echo events."
DATAQ uses a patented daisy-chain approach, shown in Figure 3, that distributes a 16-MHz clock over an unused twisted pair in the CAT-5 Ethernet cable linking the DAQ instruments in an extended system. The unit closest to the PC is designated the master and all the others slaves. PLLs inside each slave ensure that local clocks are locked to the master. Distances up to 100 meters between slave units are supported. Lockhart concluded, "We spec a maximum of 5-µs of jitter between individual instruments in the daisy chain, which supports a sampling throughput of 200 kHz per unit. We do not synchronize with any data source other than our own DAQ systems."
Synchronization does indeed mean different things to different people. However, most synchronization requirements can be accommodated through well-understood techniques. Equally as important as signal acquisition are data storage and subsequent analysis. Is the data format compatible with the analysis software package you want to use? Does the data file structure include all the elements necessary to comprehensively document the state of the experiment as well as that of the DAQ system?
If your DAQ application is very fast, large, or geographically dispersed or has some other unusual characteristic, meeting that need will drive many subsequent design and purchasing decisions.
For More Information
DATAQ Instruments www.rsleads.com/312ee-193
Data Translation www.rsleads.com/312ee-194
Measurement Computing www.rsleads.com/312ee-195
Microstar Laboratories www.rsleads.com/312ee-196
National Instruments www.rsleads.com/312ee-197
United Electronic Industries www.rsleads.com/312ee-198
Yokogawa Corp. of America www.rsleads.com/312ee-199
(1.) Cartlidge, E., "Sterile-neutrino hunt gathers pace at Gran Sasso," Physicsworld.com, June 21, 2013.
(2.) Dickerson, B., IRIG-B Time Code Accuracy and Connection Requirements with comments on IED and system design considerations, Arbiter Systems.
(3.) "Time Systems and Dates--GPS Time," Naval Postgraduate School, 2003.
(4.) PowerDNA UEILogger Data Conversion Procedure, United Electronic Industries, White Paper, May 2007.
(5.) "TDMS File Format Internal Structure," National Instruments, August 2013.
by Tom Lecklider, Senior Technical Editor
Special Report--Data Acquisition, Dec. 1, 2013