A modern oscilloscope consists first of a front end, whose purpose is to acquire the signal from the device or equipment of interest. The signal is amplified or attenuated as needed, then passed to the analog-to-digital converter (ADC) for digitization. The resulting digital data is then stored in memory.
In a digital oscilloscope, storage is a key concept. Storage requires memory, and memory always has a finite depth. Generally speaking, the more memory the better, but that is not always the case, as we shall see.
Sampling rate, memory depth and bandwidth are interrelated, and it is necessary to understand how these quantities work together if the oscilloscope is to be used effectively for debugging new designs. A typical failure is a harmful operating mode brought on by a runt waveform or other glitch arising from a power supply anomaly. To begin, it is necessary to see the bad trace segment so that it can be temporally correlated with the underlying cause, at which point corrective measures can be taken. This is where bandwidth, sampling rate and memory depth become relevant.
Why is a large amount of acquisition memory beneficial? With more memory, a high sampling rate can be maintained over a longer period of time. The higher sampling rate translates to a better chance of catching a bad waveform, and it keeps the oscilloscope's effective bandwidth high across the entire capture.
But there is a downside to a large memory depth. Under certain conditions it slows the oscilloscope. If the central processing unit cannot keep up with the demands of a deep memory, there will be more dead time.
Dead time is the interval an oscilloscope needs to trigger, process the captured data, and finally render it on the display; while this housekeeping runs, the instrument is blind to the input. Its reciprocal measure is the waveform update rate. The object of the exercise, when debugging, is to capture an infrequent event, so a short dead time (a high update rate) is highly desirable, and it is here that deep memory can degrade scope performance.
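The practical impact of dead time can be illustrated with a short calculation. The function and figures below are hypothetical, chosen only to show how capture time and update rate combine; real instruments vary widely.

```python
def blind_fraction(capture_time_s: float, update_rate_hz: float) -> float:
    """Estimate the fraction of real time the scope is NOT acquiring.

    capture_time_s: duration of one acquisition (memory depth / sample rate)
    update_rate_hz: waveforms acquired and displayed per second
    """
    live = capture_time_s * update_rate_hz  # fraction of time spent acquiring
    return max(0.0, 1.0 - live)

# Hypothetical numbers: 1 ms captures at 100 waveforms/s means the scope
# is blind roughly 90% of the time, so a rare glitch is likely to be missed.
print(blind_fraction(1e-3, 100))
```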
To review, an important equation to keep in mind is:
Measurement duration = memory depth/sampling frequency.
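This equation translates directly into code. A minimal sketch, using example numbers of my own choosing:

```python
def measurement_duration(memory_depth: int, sample_rate_hz: float) -> float:
    """Capture window in seconds: memory depth / sampling frequency."""
    return memory_depth / sample_rate_hz

# 10 Mpts of memory at 1 GS/s yields a 10 ms capture window.
print(measurement_duration(10_000_000, 1e9))  # 0.01
```

Halving the memory at the same sample rate halves the window; keeping the window while halving the memory forces the sample rate down.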
Deeper memory allows a high sampling rate to be sustained over a longer capture, so a longer stretch of the signal can be measured. Because the larger record takes longer to process, however, the update interval grows and the oscilloscope slows down. The risk in this scenario is that important waveform events may be missed during the added dead time. Alternate triggering methods can mitigate this difficulty.
Memory depth is linked to sample rate, a metric of great concern to the instrument's user. The sample rate is the memory depth divided by the total capture window, which is the time per division multiplied by the number of divisions:

Sample rate = memory depth/(time per division x number of divisions).
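The sample rate a scope can sustain is its memory depth divided by the total capture window, i.e. the time per division times the number of horizontal divisions. A minimal sketch, assuming a conventional 10-division graticule:

```python
def sustainable_sample_rate(memory_depth: int,
                            time_per_div_s: float,
                            n_divs: int = 10) -> float:
    """Sample rate = memory depth / (time per division x number of divisions)."""
    return memory_depth / (time_per_div_s * n_divs)

# 1 Mpt of memory at 100 us/div over 10 divisions supports about 1 GS/s.
rate = sustainable_sample_rate(1_000_000, 100e-6)
print(rate)
```

Slowing the timebase (larger time per division) with fixed memory forces the rate down, which is exactly the behavior discussed next.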
One thing to remember is that a digital oscilloscope does not always sample at its maximum sample rate. At slower timebase settings the instrument must throttle its sample rate so that the whole capture fits into memory, so the fidelity with which the analog input signal is displayed depends on the acquisition memory depth rather than on the peak sample rate quoted on the datasheet.
Another issue to bear in mind is that if the signal is to be displayed faithfully, the sample rate must exceed twice the signal's highest frequency component. This threshold, the Nyquist rate (formulated by Harry Nyquist in 1928), is the lower limit for sampling that will not be subject to aliasing. The problem is that when a continuous function is sampled at a constant rate, other, lower-frequency functions also fit the resulting sample set exactly. Staying above the Nyquist rate removes this ambiguity.
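Aliasing is easy to quantify: a tone above the Nyquist limit folds back to a lower apparent frequency. A minimal sketch of that folding calculation, with example numbers of my own choosing:

```python
def alias_frequency(signal_hz: float, sample_rate_hz: float) -> float:
    """Apparent frequency of a sampled tone after folding about Nyquist."""
    f = signal_hz % sample_rate_hz           # fold into one sampling period
    return min(f, sample_rate_hz - f)        # reflect about sample_rate / 2

# A 70 MHz tone sampled at 100 MS/s (Nyquist limit 50 MHz) appears at 30 MHz,
# so the scope would draw a plausible-looking but entirely wrong waveform.
print(alias_frequency(70e6, 100e6))  # 30000000.0
```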
To create a meaningful display, a digital oscilloscope must “connect the dots.” This amounts to interpolating among the sampled points, a computation that takes real time. A larger memory depth generates a greater number of data points, and the added processing can limit the effectiveness of the oscilloscope with respect to the basic debugging process.
The bottom line: Bigger is not always better when it comes to memory depth.