Data acquisition

akarsh mallya
7 min read · Oct 28, 2022

I’ve written a series of posts on test engineering which are listed in order here. In previous posts I covered the test system design process, sources of noise, and noise suppression techniques. If you’ve been reading the posts in order, you will by now have some knowledge of how to design a test system and select system components to reduce noise in a circuit. In this post I will discuss the next step: acquiring and analyzing data.

The purpose of a test system is to determine whether a Device Under Test (DUT) meets the test specifications. This requires data. A DAQ is used to digitize signals and convert them into data.

Figure 1: A DAQ. Image: DAQ Basics — DAQiFi

What is a DAQ? A DAQ is a device that digitizes signals and converts them into data that can be processed and analyzed by a computer, as seen in Figure 1. A DAQ consists of analog and digital inputs and outputs (IO), which form the interface between the DAQ and the physical world. The DAQ uses software drivers to communicate with the computer over USB or Ethernet, and provides APIs that enable programs to access the digitized data.

Why is a DAQ needed? A computer doesn’t work with voltage/current signals, it works with files and binary data. A DAQ converts real world signals into digital data that can be processed by computers.

How does a DAQ work? A DAQ uses analog-to-digital converter (ADC) chips to sample a signal at a set frequency, generating a digital representation of a continuous signal through a process called quantization. The idea is best explained with an image.

Figure 2. Image: https://www.open.edu/openlearn/science-maths-technology/exploring-communications-technology/content-section-3.2

In Figure 2 above, Ts is the sampling interval: the time between successive samples (data points). When the ADC samples the signal, it converts the value of the signal at that instant in time into a digital representation (a binary value) that can be used by computers. The end result of sampling is a series of discrete numeric values, e.g. [-0.5, 0.5, 1, 1.1, 0.8, -0.1, -0.6…].

Sampling is analogous to taking a photo — a snapshot of a particular instant in time. And if you take a series of photos (samples) of a dynamic scene (signal), you get a motion picture that seems to faithfully reproduce the event being captured (scatter plot of signal).
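The sampling-and-quantization process described above can be sketched in a few lines of Python. This is an illustrative model, not a real device: the function name is mine, and a toy 3-bit ADC is used so the quantization steps are visible (clipping at full scale is ignored).

```python
import math

def sample_and_quantize(freq_hz, fs_hz, n_samples, bits, v_range):
    """Sample a sine wave at interval Ts = 1/fs_hz, then quantize each
    sample to the nearest ADC code over a -v_range..+v_range span."""
    lsb = 2 * v_range / (2 ** bits)        # smallest step the ADC can resolve
    samples = []
    for n in range(n_samples):
        t = n / fs_hz                      # sampling instants t = n * Ts
        v = v_range * math.sin(2 * math.pi * freq_hz * t)
        code = round(v / lsb)              # quantization: snap to nearest code
        samples.append(code * lsb)         # the discrete value the computer sees
    return samples

# A 1 Hz sine sampled at 8 S/s by a 3-bit ADC over +/-1 V:
print(sample_and_quantize(1, 8, 8, 3, 1.0))
# [0.0, 0.75, 1.0, 0.75, 0.0, -0.75, -1.0, -0.75]
```

The continuous sine comes back as the kind of discrete series of values described above; with more bits the steps shrink, and with a higher fs_hz the points trace the waveform more closely.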

Types of DAQs: There are many different DAQs available on the market. Some example vendors are National Instruments, MCCDAQ, MAQ20, and LabJack. A digital multimeter (DMM) can also be used in place of a DAQ; many DMMs provide extensibility through add-on cards that offer multiplexing, digital IO, and thermocouple inputs.

Selecting a DAQ

How does one select a DAQ for a given application? First, we need to understand DAQ specifications. This section will focus on key specifications to consider when comparing and contrasting DAQs.

Resolution: Specified as the number of ADC bits (e.g., 12-bit, 16-bit, 24-bit). The resolution determines the smallest change in value that can be quantized by the ADC. Example: consider a 16-bit analog input channel with a range of 0–10V. A 16-bit ADC can represent 2¹⁶ = 65536 binary values, so the smallest voltage step that can be resolved is 10/65536 = 0.153 mV.

What this means is that the input voltage has to change by at least 0.153 mV to be detectable. The resolution is only useful for rough, back-of-the-envelope calculations, because the absolute accuracy error can be an order of magnitude larger than the resolution.
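The resolution arithmetic above generalizes to any bit depth. A minimal sketch (the function name is mine):

```python
def lsb_voltage(bits, v_span):
    """Smallest voltage step an ADC with the given number of bits
    can resolve over a span of v_span volts."""
    return v_span / (2 ** bits)

# The 16-bit, 0-10 V example above, plus the 24-bit case for comparison:
print(f"{lsb_voltage(16, 10.0) * 1e3:.3f} mV")  # 0.153 mV
print(f"{lsb_voltage(24, 10.0) * 1e6:.2f} uV")  # 0.60 uV
```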

Accuracy: Check the datasheet for an absolute accuracy specification, which is the sum of all errors (gain error, offset error, noise uncertainty) for a given input range. The absolute accuracy error is much larger than the ADC resolution.

Example: the NI USB-6351 (16-bit) has an absolute accuracy of 800 µV for the -5 to 5V range, whereas the theoretical resolution of a 16-bit ADC is 153 µV. The MCCDAQ USB-2408 (24-bit) has an absolute accuracy of 300 µV for the -5 to 5V range, whereas the resolution of a 24-bit ADC is 0.6 µV.

The absolute accuracy of commercial, general purpose 16-bit DAQs can be as high as 3 mV on the 10V range.

A/D converter type: Simultaneous or Multiplexed. A multiplexed DAQ contains a single ADC, whereas a simultaneous sampling DAQ contains one ADC per channel.

In a multiplexed DAQ a multiplexer (MUX) is used to increase channel count. During data acquisition each channel is connected to the MUX in succession to acquire the same amount of data on all channels. Once all channels have been scanned, the DAQ outputs data from all channels as one block. However, the channels are not synchronized; the data collected on each channel is phase shifted, as shown in Figure 3. Only a simultaneous sampling DAQ provides synchronous multi-channel data.
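The inter-channel skew of a multiplexed DAQ can be illustrated with a short sketch. The numbers and the function name are hypothetical; real scan timing depends on the device's convert clock and settling time.

```python
def mux_sample_times(n_channels, convert_rate_hz, n_scans):
    """Timestamps of each sample in a multiplexed DAQ: the single ADC
    converts the channels back-to-back, then repeats the scan."""
    dt = 1.0 / convert_rate_hz            # time for one ADC conversion
    scan_period = n_channels * dt         # one full pass over all channels
    return {ch: [s * scan_period + ch * dt for s in range(n_scans)]
            for ch in range(n_channels)}

# 4 channels sharing an ADC that converts at 1000 conversions/s:
times = mux_sample_times(4, 1000, 2)
print(times[3][0] - times[0][0])          # 0.003 -> channel 3 lags channel 0 by 3 ms
```

That 3 ms lag is the phase shift in Figure 3; in a simultaneous sampling DAQ every channel's timestamp list would be identical.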

Figure 3. Image: https://forums.ni.com/t5/Instrument-Control-GPIB-Serial/Question-on-multiplex-sampling-simultaneous-sample-amp-hold/td-p/3357506

Sampling Rate: A DAQ with a high maximum sampling rate isn’t necessarily better or more accurate. However, a high sampling rate might be a requirement for certain applications. Determine the required sampling rate before selecting a DAQ. A common rule of thumb is to set the DAQ sampling rate at least 10 times higher than the highest frequency of interest in the signal being sampled.

Using a DAQ

A DAQ has a driver and a software API component. The driver enables the DAQ to communicate with a PC. The API enables the test software to interact with the DAQ and perform actions such as turning outputs on/off, reading sampled data, enabling and disabling channels, changing the sampling rate, etc.
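As a sketch, the API surface tends to look something like the following. Every class and method name here is hypothetical, standing in for a vendor driver; real APIs (NI-DAQmx, LabJack LJM, etc.) all differ, so treat this only as the shape of the interaction.

```python
class MockDaq:
    """Toy stand-in for a DAQ driver API; all names are hypothetical."""

    def __init__(self):
        self.rate_hz = 1000
        self.enabled = set()

    def enable_channel(self, name):
        self.enabled.add(name)

    def set_sampling_rate(self, hz):
        self.rate_hz = hz

    def read(self, n_samples):
        # A real driver returns hardware samples; this mock returns zeros.
        return {ch: [0.0] * n_samples for ch in self.enabled}

daq = MockDaq()
daq.enable_channel("ai0")     # enable analog input channel 0
daq.set_sampling_rate(300)    # 300 S/s
data = daq.read(5)            # {'ai0': [0.0, 0.0, 0.0, 0.0, 0.0]}
```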

Sampling Frequency: Setting the correct sampling frequency is of critical importance to the performance of a test system. Indeed, all of the work done in selecting components and following good wiring practices can be undone if the proper sampling frequency (Fs) is not used.

A core concept to become familiar with is the Nyquist frequency (Fn), which is half of the sampling frequency: Fn = 0.5*Fs. To explain the importance of the Nyquist frequency we first need to understand the sampling theorem (Shannon’s sampling theorem).

The theorem states that a signal whose highest frequency component is f Hz can be completely determined by uniform samples taken at a rate of Fs samples per second, where Fs ≥ 2f.

Figure 4. Image: Nyquist — Shannon Signal Sampling Theorem

Figure 4 shows a time-domain graphical representation of the sampling theorem. A source signal of frequency f is being sampled at various frequencies. When Fs/f < 2, the perceived frequency calculated from the sampled data (dotted green line) will not match the source signal frequency.

So, the Nyquist/critical frequency Fn is a corollary of the sampling theorem and specifies the frequency limit of the sampling system. A sampler running at Fs can only accurately reproduce frequency components that are less than or equal to the Nyquist limit in the source signal.

What happens to frequency components that are higher than the Nyquist limit? It would be nice if they were simply rejected, but this is not the case. Frequencies above the Nyquist limit are sampled and reproduced as alias frequencies in the sampled data. This is known as folding/aliasing and is a common source of error.

Once high frequency noise is aliased into the data, the data is irretrievably corrupt and cannot be fixed through any method of post-processing. The aliased noise will be indistinguishable from the real signal.

Aliased frequency can be calculated using the following formula:

f_perceived = ABS(f - fs * NINT(f / fs))

where fs is the sampling frequency, NINT is the nearest integer function, f is a frequency of interest, and f_perceived is the frequency of the signal as it appears in the sampled data.

Example: Suppose we sample a signal at fs = 40 Hz (40 samples/s). What is the aliased frequency of 60 Hz powerline noise?

f_perceived = ABS(60 - 40 * NINT(60/40)) = ABS(60 - 80) = 20 Hz.

If a signal is being sampled at 40 Hz, then any 60 Hz noise induced in the signal will appear in the sampled data as a 20 Hz signal.
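The formula and the example above can be checked in a few lines of Python. Python's built-in round() serves as NINT here (note it rounds ties to the even integer, e.g. round(1.5) == 2, which is what this example needs):

```python
import math

def perceived_freq(f, fs):
    """f_perceived = ABS(f - fs * NINT(f / fs)), with round() as NINT."""
    return abs(f - fs * round(f / fs))

print(perceived_freq(60, 40))   # 20 -> 60 Hz noise aliases to 20 Hz
print(perceived_freq(15, 40))   # 15 -> below the 20 Hz Nyquist limit, unchanged

# The aliased tone really is indistinguishable from a 20 Hz signal:
# sampling a 60 Hz and a 20 Hz cosine at 40 S/s yields the same values.
s60 = [math.cos(2 * math.pi * 60 * n / 40) for n in range(8)]
s20 = [math.cos(2 * math.pi * 20 * n / 40) for n in range(8)]
print(all(abs(a - b) < 1e-9 for a, b in zip(s60, s20)))   # True
```

The last check is the point of the preceding paragraph: once the 60 Hz component is in the sampled data, no post-processing can tell it apart from a genuine 20 Hz signal.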

What is the ideal sampling rate?

With the caveat that in engineering the answer to any question is “it depends”, I’ll list two options based on how the input signal is wired to the DAQ.

  1. Signal Conditioning module with filter: Set the sampling frequency to at least two times the cutoff frequency Fc of the filter. The filter limits the input bandwidth, so the minimum sampling frequency required to accurately reproduce the band-limited input signal is 2*Fc. However, most test system applications require more precision than what is obtained by using Fs = 2*Fc for time-domain analysis. In that case set the sampling frequency to 10 times Fc, or some other suitable multiple, to achieve the required time-domain precision in terms of points per cycle.
  2. No analog filter: If you aren’t using an analog filter then the minimum sampling frequency should be a multiple of the powerline frequency. To make the test system compatible with both 50 and 60 Hz powerline frequencies worldwide, set the sampling frequency to 300 Hz: 300 is the least common multiple of 50 and 60. This sampling frequency is recommended even if the measurements are DC. 300 S/s isn’t considered “fast” and there should be no issue processing data collected at this rate. If you are measuring a high-speed, transient phenomenon such as inrush current, then the sampling frequency likely has to be in the kilohertz range. Properly measuring inrush current is challenging and deserves a standalone article.
    As mentioned previously if the shape of the waveform is important for time-domain analysis then increase the sampling frequency by a factor of 10.
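The powerline guidance above can be expressed as a tiny helper (the function name is mine, and math.lcm requires Python 3.9+):

```python
import math

def powerline_safe_fs(line_freqs=(50, 60), waveform_factor=1):
    """Minimum sampling frequency that is an integer multiple of every
    powerline frequency; scale by waveform_factor when the shape of the
    waveform matters for time-domain analysis."""
    return math.lcm(*line_freqs) * waveform_factor

print(powerline_safe_fs())                     # 300 -> lcm(50, 60)
print(powerline_safe_fs(waveform_factor=10))   # 3000 for time-domain detail
```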

The next article in the series will be on data analysis.

Testing leads to failure, and failure leads to understanding.

— Burt Rutan
