Block chain of work on telemetry data transmission for modern electronic machine networks
Introduction to Sampling Theorem
===========================
With the rapid advancement of data acquisition technology (i.e., analog-to-digital and digital-to-analog converters) and the explosive introduction of microcomputers, selected complex linear and nonlinear functions currently implemented with analog circuitry are instead being implemented with sampled-data systems.
I. An Intuitive Development:
The sampling theorem, published by C. E. Shannon in 1949, places restrictions on the frequency content of the time function signal f(t), and can be simply stated as follows: in order to recover the signal f(t) exactly, it is necessary to sample f(t) at a rate greater than twice its highest frequency component. Practically speaking, to sample an analog signal having a maximum frequency of 2 kHz, a sampling rate greater than 4 kHz is required to preserve and recover the waveform exactly. Sampling a signal at a rate less than twice its highest frequency component results in a phenomenon known as aliasing, in which one frequency mistakenly takes on the identity of an entirely different frequency when the signal is recovered.
In an attempt to clarify this, envision the ideal sampler of Figure 1(a), with the sample period T shown in (b), sampling the waveform f(t) pictured in (c). The sampled data points of f'(t), shown in (d), can be defined as the sample set of the continuous function f(t). Note in Figure 1(e) that another frequency component, a'(t), can be found that has the same sample set of data points as f'(t) in (d). Because of this it is difficult to determine, from the samples alone, which frequency is truly being observed. This effect is similar to that seen in western movies, where the spoked wheels of a rapidly moving stagecoach appear to rotate backwards at a slow rate. The effect arises because each individual frame of film acts as a discrete strobed sampling operation flashing at a rate slightly faster than that of the rotating wheel. Each observed sample point or frame catches the spoked wheel slightly displaced from its previous position, giving the appearance of a wheel rotating slowly backwards. Again, aliasing is evident, and in this example it becomes difficult to determine the true rotational frequency being observed.
On the surface it is easy to say that aliasing can be avoided by sampling at a rate greater than twice the maximum frequency found within the signal to be sampled. In the real world, however, most signals contain the entire spectrum of frequency components, from the desired ones to those present in white noise. To recover such information accurately, the system would require an unrealizably high sample rate. This difficulty is overcome by preconditioning the input signal with a band-limiting, or frequency-filtering, function applied before the sampled-data input. This prefilter, typically called an anti-aliasing filter, guarantees (in the low-pass case, for example) that the sampled-data system receives analog signals having a spectral content no greater than the frequencies passed by the filter.
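To make the 2 kHz / 4 kHz example above concrete, the short Python sketch below (added here for illustration; the 3 kHz tone and 4 kHz sample rate are assumed values) shows how a tone above half the sample rate produces exactly the same sample set as a lower-frequency tone:

```python
import numpy as np

# A minimal sketch showing how a tone above half the sample rate aliases to a
# lower frequency. Assumed values: a 3 kHz tone sampled at 4 kHz, so the
# Nyquist limit (fs/2 = 2 kHz) is violated.
fs = 4000.0          # sampling rate, Hz
f_in = 3000.0        # input tone, Hz (above fs/2, so it will alias)
n = np.arange(32)    # sample indices
samples = np.sin(2 * np.pi * f_in * n / fs)

# The same sample set is produced by a 1 kHz tone: |f_in - fs| = 1000 Hz
# (with inverted phase in this case).
f_alias = abs(f_in - round(f_in / fs) * fs)
reference = np.sin(2 * np.pi * f_alias * n / fs)

print(f"alias of {f_in} Hz at fs = {fs} Hz -> {f_alias} Hz")
print("sample sets identical:",
      np.allclose(samples, reference) or np.allclose(samples, -reference))
```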
Sampling Process:
The definition of proper sampling is quite simple. Suppose you sample a continuous signal in some manner. If you can exactly reconstruct the analog signal from the samples, you must have done the sampling properly. Even if the sampled data appears confusing or incomplete, the key information has been captured if you can reverse the process.
Convolution:
Convolution has been a standard topic in engineering and computing science for some time. Convolving two waveforms in the time domain is equivalent to multiplying their spectra (i.e., frequency content) in the frequency domain. By "multiplying" the spectra we mean that any frequency that is strong in both signals will be very strong in the convolved signal and, conversely, any frequency that is weak in either input signal will be weak in the output signal.
In practice, a relatively simple application of convolution is where we have the "impulse response" of a space. This is obtained by recording a short burst of a broad-band signal as it is processed by the reverberant characteristics of the space. When we convolve any "dry" signal with that impulse response, the result is that the sound appears to have been recorded in that space. In other words, it has been processed by the frequency response of the space similar to how that process would work in the actual space. In fact, convolution in this example is simply a mathematical description of what happens when any sound is "coloured" by the acoustic space within which it occurs, which is in fact true of all sounds in all spaces except an anechoic chamber. The convolved sound will also appear to be at the same distance as in the original recording of the impulse. If we convolve a sound twice with the same impulse response, its apparent distance will be twice as far away.
For instance, in a reverberant space, one might clap one's hands to get a sense of the acoustics of the space. However, a more accurate impulse response would be obtained by firing a starter pistol, as that sound's spectrum would be more evenly distributed and the sound is very short. Given the intrusiveness of such an action as firing a gun, a more acceptable approach would be breaking a balloon and recording the response of the space.
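A minimal sketch of this convolution-reverb idea, assuming the dry signal and impulse response are already available as NumPy arrays (the toy click train and decaying-noise "room" below are invented for illustration, not taken from any real recording):

```python
import numpy as np

# Minimal convolution-reverb sketch (illustrative only). We assume the "dry"
# signal and the room impulse response are floating-point sample arrays at the
# same sample rate.
def convolve_reverb(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with a room impulse response."""
    wet = np.convolve(dry, impulse_response)   # output length = len(dry) + len(ir) - 1
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet     # normalize to avoid clipping

# Toy example: a short click train "played" in a fake exponentially decaying room.
dry = np.zeros(1000)
dry[::250] = 1.0                               # clicks every 250 samples
ir = np.exp(-np.arange(2000) / 400.0) * np.random.randn(2000) * 0.1
wet = convolve_reverb(dry, ir)
print(len(dry), len(ir), len(wet))             # 1000 2000 2999
```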
However, the output duration of a convolved signal is the sum of the durations of the two inputs. With reverberation we expect the reverberated signal to be longer than the original, but this extension and the resultant "smearing" of attack transients also occurs when we convolve a sound with itself, or with another sound. Transients are smoothed out, and the overall sound is lengthened (by a factor of two in the case of convolving a sound with itself). When we convolve this stretched version with the impulse response of a space, the result appears to be half way between the original and the reverberant version, a "ghostly" version of the sound, so to speak.
Since most acoustic sounds (but not common electronic and digital sounds, unfortunately) have spectra that taper off with increasing frequency, the high frequencies may end up very weak when two such spectra are convolved. Therefore, some programs such as SoundHack allow the high frequencies to be boosted during convolution. This boost can also make the output "hissy," in which case equalization needs to be applied.
The counterpart of convolving two waveforms is multiplying them, as in ring modulation. In that case we are convolving their spectra, which is why ring modulation produces the sum and difference frequencies of each pair of components in the output, though an understanding of this result depends on the mathematics of the complex domain. In other words, the basic theorem relating the time domain and the frequency domain is that multiplication in one domain is equivalent to convolution in the other.
Finally, there is a technical difference between "direct convolution", which is a very slow process because every sample in each signal must be multiplied by every sample in the other signal, and the faster version used by programs like SoundHack, which analyzes each signal with an FFT (Fast Fourier Transform), multiplies the results, and performs an inverse FFT to return the result to the time domain. Besides increasing the speed of the calculation (thereby making it a reasonable working process), this brings other variables from the analysis phase into play, such as the window shape used in the analysis; in practice, this variable affects the result only quite subtly. If a rectangular window is used, convolving A * B is exactly the same as B * A (note that the * symbol denotes convolution, as opposed to multiplication). Other, curved window shapes produce a difference between the two versions, mainly in the overall amplitude shape of the result. (Figure: convolving a signal with two impulses, spaced farther apart and then closer together.)
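The sketch below (illustrative only, not SoundHack's actual implementation) compares direct convolution with the FFT-based approach and confirms that they produce numerically identical results:

```python
import numpy as np

# Illustrative comparison of direct convolution with FFT-based "fast"
# convolution. Both compute the same full-length result (len(a)+len(b)-1).
def fft_convolve(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    n = len(a) + len(b) - 1                      # full output length
    n_fft = 1 << (n - 1).bit_length()            # next power of two for the FFT
    spectrum = np.fft.rfft(a, n_fft) * np.fft.rfft(b, n_fft)  # multiply spectra
    return np.fft.irfft(spectrum, n_fft)[:n]     # back to the time domain

rng = np.random.default_rng(0)
a = rng.standard_normal(4096)
b = rng.standard_normal(1024)

direct = np.convolve(a, b)                       # O(N*M) multiply-adds
fast = fft_convolve(a, b)                        # O(N log N)
print("max difference:", np.max(np.abs(direct - fast)))   # ~1e-12: the same result
```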
Aliasing Error:-
The signals we use in the real world, such as our voices, are called "analog" signals. To process these signals in computers, we need to convert the signals to "digital" form. While an analog signal is continuous in both time and amplitude, a digital signal is discrete in both time and amplitude. To convert a signal from continuous time to discrete time, a process called sampling is used.
The value of the signal is measured at certain intervals in time. Each measurement is referred to as a sample. (The analog signal is also quantized in amplitude, but that process is ignored in this demonstration. See the Analog to Digital Conversion page for more on that.)
When the continuous analog signal is sampled at a frequency F, the resulting discrete signal has more frequency components than did the analog signal. To be precise, the frequency components of the analog signal are repeated at the sample rate. That is, in the discrete frequency response they are seen at their original position, and are also seen centered around ±F, around ±2F, etc.
How many samples are necessary to ensure we are preserving the information contained in the signal? If the signal contains high-frequency components, we will need to sample at a higher rate to avoid losing information that is in the signal. In general, to preserve the full information in the signal, it is necessary to sample at more than twice the maximum frequency of the signal; twice the maximum frequency is known as the Nyquist rate. The Sampling Theorem states that a signal can be exactly reproduced if it is sampled at a frequency F that is greater than twice the maximum frequency in the signal.
What happens if we sample the signal at a frequency that is lower than the Nyquist rate? When the signal is converted back into a continuous-time signal, it will exhibit a phenomenon called aliasing. Aliasing is the presence of unwanted components in the reconstructed signal; these components were not present when the original signal was sampled.
In addition, some of the frequencies in the original signal may be lost in the reconstructed signal. Aliasing occurs because signal frequencies can overlap if the sampling frequency is too low. Frequencies "fold" around half the sampling frequency - which is why this frequency is often referred to as the folding frequency.
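A small folding-frequency calculation (added here as an illustration of the rule just described; the example rate and tones are assumed):

```python
# Illustrative "folding" calculation: where does an input tone appear after
# sampling at fs? Frequencies fold (reflect) around fs/2, the folding frequency.
def folded_frequency(f: float, fs: float) -> float:
    f = f % fs                       # the spectrum repeats every fs
    return fs - f if f > fs / 2 else f

fs = 1000.0
for f in (100.0, 400.0, 600.0, 900.0, 1100.0):
    print(f"{f:6.0f} Hz sampled at {fs:.0f} Hz appears at {folded_frequency(f, fs):5.0f} Hz")
# 600 Hz and 900 Hz fold down to 400 Hz and 100 Hz; 1100 Hz wraps around to 100 Hz.
```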
Sometimes the highest frequency components of a signal are simply noise, or do not contain useful information. To prevent aliasing of these frequencies, we can filter out these components before sampling the signal. Because we are filtering out high frequency components and letting lower frequency components through, this is known as low-pass filtering.
Demonstration of Sampling
The original signal in this demonstration is composed of three sinusoids, each with a different frequency and amplitude: 28 Hz, 84 Hz, and 140 Hz. A filtering control can be used to filter out the higher-frequency components. The filter is an ideal low-pass filter, meaning that it exactly preserves any frequencies below the cutoff frequency and completely attenuates any frequencies above the cutoff frequency.
Notice that if you leave all the components in the original signal and select a low sampling frequency, aliasing will occur. This aliasing will result in the reconstructed signal not matching the original signal. However, you can try to limit the amount of aliasing by filtering out the higher frequencies in the signal. Also important to note is that once you are sampling at a rate above the Nyquist rate, further increases in the sampling frequency do not improve the quality of the reconstructed signal.
This is true because of the ideal low-pass filter. In real-world applications, sampling at higher frequencies results in better reconstructed signals. However, higher sampling frequencies require faster converters and more storage. Therefore, engineers must weigh the advantages and disadvantages in each application, and be aware of the tradeoffs involved.
The importance of frequency-domain plots in signal analysis cannot be overstated. The three plots on the right side of the demonstration are all Fourier-transform plots, and it is easy to see the effects of changing the sampling frequency by looking at them.
As the sampling frequency decreases, the separation between the repeated spectra also decreases. When the sampling frequency drops below the Nyquist rate, the spectra cross over and cause aliasing.
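The sketch below is not the applet itself but a numerical stand-in for it: the same three sinusoids are generated, an ideal low-pass filter removes the 140 Hz component, and the filtered signal can then be sampled at a lower rate without aliasing (the amplitudes, cutoff, and rates are assumed for illustration):

```python
import numpy as np

# Numerical stand-in for the demonstration: three sinusoids at 28, 84, 140 Hz.
fs_fine = 2000.0                                  # dense grid standing in for "continuous" time
t = np.arange(0, 1.0, 1.0 / fs_fine)
signal = (1.0 * np.sin(2 * np.pi * 28 * t)
          + 0.5 * np.sin(2 * np.pi * 84 * t)
          + 0.25 * np.sin(2 * np.pi * 140 * t))

def ideal_lowpass(x: np.ndarray, fs: float, cutoff: float) -> np.ndarray:
    """Ideal low-pass filter: zero every FFT bin above the cutoff frequency."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[freqs > cutoff] = 0.0
    return np.fft.irfft(spectrum, len(x))

# Removing the 140 Hz component lets us sample safely at, say, 200 Hz (> 2*84 Hz),
# whereas the unfiltered signal would need more than 2*140 = 280 Hz.
filtered = ideal_lowpass(signal, fs_fine, cutoff=100.0)
fs_new = 200.0
decimated = filtered[:: int(fs_fine / fs_new)]
print("samples kept per second:", len(decimated))
```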
Review of PCM:
Pulse-code modulation (PCM) is a digital representation of an analog signal where the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a digital (usually binary) code. PCM has been used in digital telephone systems and is also the standard form for digital audio in computers and the compact disc red book format. It is also standard in digital video, for example, using ITU-R BT.601.
However, straight PCM is not typically used for video in consumer applications such as DVD or DVR because it requires too high a bit rate (PCM audio is supported by the DVD standard but rarely used).
Instead, compressed variants of PCM are normally employed. However, many Blu-ray Disc movies use uncompressed PCM for audio. Very frequently, PCM encoding facilitates digital transmission from one point to another (within a given system, or geographically) in serial form.
In the Diagram, a sine wave (red curve) is sampled and quantized for PCM. The sine wave is sampled at regular intervals, shown as ticks on the x-axis. For each sample, one of the available values (ticks on the y-axis) is chosen by some algorithm (in this case, the ceiling function is used).
This produces a fully discrete representation of the input signal (shaded area) that can be easily encoded as digital data for storage or manipulation. For the sine wave example at right, we can verify that the quantized values at the sampling moments are 9, 11, 12, 13, 14, 14, 15, 15, 15, 14, etc.
Encoding these values as binary numbers would result in the following set of nibbles: 1001, 1011, 1100, 1101, 1110, 1110, 1111, 1111, 1111, 1110, etc. These digital values could then be further processed or analyzed by a purpose-specific digital signal processor or general purpose CPU.
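A minimal sketch of this sample-and-quantize step (the 4-bit range and the ceiling function follow the example above; the sine's amplitude scaling and sample count are assumptions, so the exact code values may differ from the figure):

```python
import numpy as np

# Sample a sine wave and quantize each sample to a 4-bit value (0..15) using
# the ceiling function, as in the example above. Amplitude scaling is assumed.
fs = 16                                     # samples per period (assumed)
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * t)                # the sampled sine wave

levels = 16                                 # 4-bit code
scaled = (sine + 1.0) / 2.0 * (levels - 1)  # map [-1, 1] onto [0, 15]
codes = np.ceil(scaled).astype(int)         # choose a level with the ceiling function
codes = np.clip(codes, 0, levels - 1)

nibbles = [format(c, "04b") for c in codes]
print(list(codes))
print(nibbles)                              # e.g. ['1000', '1011', ...]
```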
Several Pulse Code Modulation streams could also be multiplexed into a larger aggregate data stream, generally for transmission of multiple streams over a single physical link. This technique is called time-division multiplexing, or TDM, and is widely used, notably in the modern public telephone system.
There are many ways to implement a real device that performs this task. In real systems, such a device is commonly implemented on a single integrated circuit that lacks only the clock necessary for sampling, and is generally referred to as an ADC (analog-to-digital converter).
These devices will produce on their output a binary representation of the input whenever they are triggered by a clock signal, which would then be read by a processor of some sort.
Limitations:
It may be noted that there are two sources of impairment implicit in any PCM system:
- choosing a discrete value near the analog signal for each sample (quantization error)
- between samples no measurement of the signal is made; by the sampling theorem, this results in any frequency at or above fs/2 (fs being the sampling frequency) being distorted or lost completely (aliasing error). Half the sampling frequency is also called the Nyquist frequency.
As samples are dependent on time, an accurate clock is required for accurate reproduction. If either the encoding or decoding clock is not stable, its frequency drift will directly affect the output quality of the device. A slight difference between the encoding and decoding clock frequencies is not generally a major concern; a small constant error is not noticeable.
Clock error does become a major issue if the clock is not stable, however. A drifting clock, even with a relatively small error, will cause very obvious distortions in audio and video signals, for example.
DPCM (Differential or Delta Pulse Code Modulation):-
In conventional PCM, the analog signal may be processed (e.g. by amplitude compression) before being digitized. Once the signal is digitized, the PCM signal is usually subjected to further processing (e.g. digital data compression).
Some forms of PCM combine signal processing with coding. Older versions of these systems applied the processing in the analog domain as part of the A/D process; newer implementations do so in the digital domain.
These simple techniques have been largely rendered obsolete by modern transform-based audio compression techniques.
- Differential (or Delta) pulse-code modulation (DPCM) encodes the PCM values as differences between the current and the previous value (a minimal sketch follows this list). For audio, this type of encoding reduces the number of bits required per sample by about 25% compared to PCM.
- Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of the quantization step, to allow further reduction of the required bandwidth for a given signal-to-noise ratio.
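A minimal DPCM encode/decode sketch, assuming the simplest possible predictor (the previous reconstructed sample) and a unit quantization step; it is not any specific standard's algorithm:

```python
import numpy as np

# Minimal DPCM: encode each sample as the quantized difference from the
# previous reconstructed sample; decode by accumulating the differences.
def dpcm_encode(samples: np.ndarray, step: int = 1) -> list[int]:
    codes, prediction = [], 0
    for s in samples:
        diff = int(round((s - prediction) / step))  # quantized difference
        codes.append(diff)
        prediction += diff * step                   # track the decoder's state
    return codes

def dpcm_decode(codes: list[int], step: int = 1) -> np.ndarray:
    out, value = [], 0
    for c in codes:
        value += c * step
        out.append(value)
    return np.array(out)

samples = np.array([10, 12, 13, 13, 11, 8, 6, 6, 9, 12])
codes = dpcm_encode(samples)
print(codes)               # small differences need fewer bits than the raw samples
print(dpcm_decode(codes))  # reconstructs the original (step = 1, so lossless here)
```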
In telephony, a standard audio signal for a single phone call is encoded as 8000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital signal known as DS0. The default signal compression encoding on a DS0 is either μ-law (mu-law) PCM (North America and Japan) or A-law PCM (Europe and most of the rest of the world).
These are logarithmic compression systems where a 12 or 13 bit linear PCM sample number is mapped into an 8 bit value. This system is described by international standard G.711. An alternative proposal for a floating point representation, with 5 bit mantissa and 3 bit radix, was abandoned.
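As a rough illustration of this kind of logarithmic companding, the sketch below applies the continuous μ-law characteristic (μ = 255) with an assumed 8-bit rounding step; it is a simplification, not the segmented G.711 encoder itself:

```python
import numpy as np

# Continuous mu-law companding (mu = 255), a simplified stand-in for the
# segmented G.711 encoder: compress, quantize to 8 bits, expand.
MU = 255.0

def mu_law_compress(x: np.ndarray) -> np.ndarray:
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y: np.ndarray) -> np.ndarray:
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

x = np.linspace(-1, 1, 9)                        # linear samples in [-1, 1]
compressed = mu_law_compress(x)
codes = np.round(compressed * 127).astype(int)   # 8-bit signed code values
recovered = mu_law_expand(codes / 127.0)
print(codes)
print(np.round(recovered, 3))                    # small signals keep much finer resolution
```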
Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8 bit PCM samples into a series of 4 bit ADPCM samples.
In this way, the capacity of the line is doubled. The technique is detailed in the G.726 standard.
Later it was found that even further compression was possible and additional standards were published. Some of these international standards describe systems and ideas which are covered by privately owned patents and thus use of these standards requires payments to the patent holders.
Some ADPCM techniques are used in Voice over IP communication.
DM Code Converters:-
A converter from delta modulation to another permutation code comprises a shift register and an up-down counter in series. The counter produces a permutation code output indicative of the running history of the delta modulation input, while the shift register, through weighted coefficient multipliers, produces correction signals to reduce the noise content of the counter output.
In a digital message transmission system having a transmitting terminal and a receiving terminal, a delta modulator at the transmitting terminal samples and encodes the message to be transmitted as a binary pulse train, and a converter converts the binary pulse train directly to a permutation code signal having a plurality of digit combinations different from said delta modulation signal.
Said converter comprises first means for shifting the permutation code signal in one direction in response to one condition of the delta modulation signal and in the opposite direction in response to another condition of the delta modulation signal, and second means for modifying the permutation code signal generated by said first means to conform to the system noise requirements by sampling the delta modulation pulse train in advance of its introduction into said first means and generating correcting pulses in said permutation code.
It further includes means for combining the outputs of said first and second means to produce a modified signal, and means for sampling and transmitting the modified signal to said receiving terminal at the permutation code rate.
In a digital transmission system having a transmitting and a receiving terminal, the combination as claimed in claim 1, wherein said first converter means comprises means for producing an output based upon the history of the delta modulation signal, and said second converter means comprises means for producing an output that is an interpolation of a plurality of pulses of the delta modulation signal.
A converter for converting a delta modulation signal to a different permutation code signal comprises a shift register and an up-down counter in series, said shift register having a plurality of stages, each stage being connected to a weighted coefficient digital multiplier.
The coefficient value of each multiplier is determined in part by the particular stage of the shift register to which it is connected; the outputs of said up-down counter and said digital multipliers are coded in the particular permutation code desired, and means are provided for combining the outputs of the digital multipliers and the up-down counter to produce an output signal in the aforementioned permutation code.
A converter as claimed in claim 4 wherein said means for combining the outputs comprises a coefficient adder in which the outputs of said digital multipliers are combined and a term adder for combining the output of said up-down counter and the output of said coefficient adder.
In its most common form, delta modulation (ΔM) produces a train of single valued positive and negative pulses, or, as is more often the case, a pulse train of "1's" and "0's," indicative of the instantaneous amplitude difference between the message waveform being encoded and the integrated pulse train. Where the instantaneous amplitude of the message waveform is greater than the output of the integrator into which the pulse train is fed, a positive pulse is generated.
When the amplitude of the message waveform is less than the integrator output, a negative pulse (or no pulse) is produced. At the receiver, the transmitted train of pulses controls the polarity of locally generated pulses, which are in turn applied to an integrator, the output of which, after appropriate filtering, is a reproduction of the original message waveform.
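The sketch below (added for illustration; the step size and test waveform are assumed) encodes a message into a ΔM bit stream and decodes it with a running accumulation of ± steps, which is essentially the role of the up-down counter in the converter described above:

```python
import numpy as np

# Minimal delta modulation: output a 1 when the input exceeds the integrator
# output, otherwise a 0; the decoder integrates the same +/- steps.
def dm_encode(x: np.ndarray, step: float) -> list[int]:
    bits, integrator = [], 0.0
    for sample in x:
        bit = 1 if sample > integrator else 0
        bits.append(bit)
        integrator += step if bit else -step   # feedback integrator
    return bits

def dm_decode(bits: list[int], step: float) -> np.ndarray:
    out, acc = [], 0.0
    for bit in bits:
        acc += step if bit else -step          # up-down counter / integrator
        out.append(acc)
    return np.array(out)

t = np.linspace(0, 1, 200)
message = np.sin(2 * np.pi * 2 * t)            # assumed test waveform
bits = dm_encode(message, step=0.08)
staircase = dm_decode(bits, step=0.08)
print("mean tracking error:", np.round(np.mean(np.abs(message - staircase)), 3))
```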
Probability of Error:
In a binary PCM system, binary digits may be represented by two pulse levels. If these levels are chosen to be 0 and A, the signal is termed an on-off binary signal. If the level switches between -A/2 and A/2, it is called a polar binary signal.
Suppose we are transmitting digital information, and decide to do this using two-level pulses each with period T:
The binary digit 0 is represented by a signal of level 0 for the duration T of the transmission, and the digit 1 is represented by the signal level A. In what follows we do not consider modulating the signal; it is transmitted at baseband. In the event of a noisy Gaussian channel (with high bandwidth), the signal arriving at the receiver is a noise-corrupted version of these two levels.
Here the binary levels at the receiver are nominally 0 (signal absent) and A (signal present) upon receipt of a 0 or 1 digit respectively. The function of a receiver is to distinguish the digit 0 from the digit 1. The most important performance characteristic of the receiver is the probability that an error will be made in such a determination.
Consider the received signal waveform for the bit transmitted between time 0 and time T. Due to the presence of noise, the actual waveform y(t) at the receiver is y(t) = f(t) + n(t),
where f(t) is the ideal noise-free signal and n(t) is the additive noise. In the case described, the signal f(t) is 0 for the duration of a transmitted 0 and A for the duration of a transmitted 1.
In what follows, it is assumed that the transmitter and the receiver are synchronised, so the receiver has perfect knowledge of the arrival times of sequences of pulses. The means of achieving this synchronisation is not considered here. This means that without loss of generality we can always assume that the bit to be received lies in the interval (0, T).
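As a small worked example (added here; the excerpt does not specify the receiver, so the sketch assumes the simplest case of one sample per bit compared against a threshold of A/2, with Gaussian noise of standard deviation sigma), the resulting error probability is Q(A/(2·sigma)):

```python
import numpy as np
from math import erfc, sqrt

# Threshold detection of an on-off binary signal in Gaussian noise (assumed
# simplest receiver: one sample per bit, threshold at A/2, noise std sigma).
def q_function(x: float) -> float:
    return 0.5 * erfc(x / sqrt(2.0))

def simulate_ber(A: float, sigma: float, n_bits: int = 200_000, seed: int = 1) -> float:
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    received = bits * A + rng.normal(0.0, sigma, n_bits)  # level 0 or A plus noise
    decided = (received > A / 2).astype(int)              # compare with threshold A/2
    return float(np.mean(decided != bits))

A, sigma = 2.0, 0.5
print("theory    :", q_function(A / (2 * sigma)))         # Q(A/(2*sigma)) ~ 0.0228
print("simulation:", simulate_ber(A, sigma))
```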
Differential Encoding:-
Encoding in which the significant conditions of the signal represent binary data, such as "0" and "1", as changes relative to preceding values rather than with respect to a given reference.
Note:
An example of differential encoding is phase-shift keying (PSK) in which the information is not conveyed by the absolute phase of the signal with respect to a reference, but by the difference between phases of successive symbols, thus eliminating the requirement for a phase reference at the receiver.
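A minimal differential-phase encoding sketch (illustrative; binary DPSK with a 180° phase change for a "1" and an assumed initial reference symbol):

```python
import numpy as np

# Binary differential phase encoding (illustrative): a "1" flips the phase by
# 180 degrees relative to the previous symbol, a "0" leaves it unchanged.
# The receiver only compares successive phases, so it needs no phase reference.
def dpsk_encode(bits: list[int]) -> np.ndarray:
    phase, symbols = 0.0, []
    for b in bits:
        phase = (phase + np.pi) % (2 * np.pi) if b else phase
        symbols.append(np.exp(1j * phase))     # unit-amplitude carrier symbol
    return np.array(symbols)

def dpsk_decode(symbols: np.ndarray) -> list[int]:
    decisions = []
    previous = 1.0 + 0j                        # assumed initial reference symbol
    for s in symbols:
        decisions.append(1 if np.real(s * np.conj(previous)) < 0 else 0)
        previous = s
    return decisions

bits = [1, 0, 1, 1, 0, 0, 1]
print(dpsk_decode(dpsk_encode(bits)) == bits)  # True: recovered from phase differences only
```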
This is a simple example of the transform coding mentioned earlier and an instance of that approach.
Here:
- The difference between the actual value of a sample and a prediction of that value is encoded.
- Also known as predictive encoding.
- Examples of this technique include differential pulse code modulation, delta modulation and adaptive differential pulse code modulation; they differ in the prediction part.
- Suitable where successive signal samples do not differ much but are not zero, e.g. video (differences between frames) and some audio signals.
Bit Synchronizers:
The PCM Bit Synchronizer is a key component in the telemetry ground station. It must reconstruct its input data stream with the highest fidelity to allow the processing system to produce meaningful information.
The most demanding customers consider our bit synchronizers to be the GOLD Standard. GDP offers a series of bit synchronizers to satisfy a wide spectrum of applications. We offer units containing a single bit synchronizer or multiple synchronizers (up to 16).
Optional features include: a built-in simulator, Forward Error Correction (Viterbi), automatic bit rate determination, a built-in BERT, quadrature signal handling (QPSK, AQPSK, SOQPSK, etc.), and, in a multiple-synchronizer configuration, Best Source Selection, which produces a high-fidelity output assembled from the best bits from multiple sources.
NOTE:
The process of maintaining one operation in step with another. In data transmission, the sender and receiver should be synchronized at the bit level, which means that the sender's and receiver's clocks must be synchronized. This is done by the physical layer.
Frame Synchronizers:
A digital electronic device which synchronizes two or more video signals.
The frame synchronizer uses one of its inputs as a reference and genlocks the other video signals to the reference's sync and color burst signals.
By delaying the other signals so that each line and field starts at the same time, two or more video images can be blended, wiped and otherwise processed together. (A TBC takes this a step further by synchronizing both signals to a stable reference, eliminating time base errors from both sources.)
Note:
- Frame synchronization is the process in the telecommunications transmission system to align the digital channel (time slot) at the receiving end with the corresponding time slot at the transmission end as it occurs.
- Identifying the end of each minor frame period is the synchronization (sync) word, which is a unique sequence of 1's and 0's.
- The pattern is generally a pseudo-random sequence that is unlikely to occur randomly in the acquired data and usually occupies two words (or more) in the minor frame. The IRIG-106 Standard lists recommended patterns for lengths 16 through 33 bits.
- The first three bits transmitted in a frame sync pattern are always a "1," regardless of LSB or MSB alignment.
- The frame sync pattern is longer than typical data words to reduce the probability of actual data matching it.
- The frame sync should also be commensurate with the number of words in the minor frame (typically, it occupies 1 to 5 percent of the total minor frame).
- An identical pattern is repeated for every minor frame on the assumption that random data will not consistently match the defined pattern.
- The decommutator can then be programmed to lock onto this pattern to begin regenerating the original commutated measurands.
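A minimal sketch of this lock-on step (added here; the 16-bit pattern, frame length, and error threshold are assumptions for illustration, not values taken from IRIG-106):

```python
import numpy as np

# Search a received bit stream for a frame sync pattern by counting matching
# bits at every offset. The 16-bit pattern below is an assumed example, not an
# IRIG-106 recommended pattern; real decommutators also tolerate a small
# number of bit errors when declaring lock.
SYNC = np.array([1,1,1,0,1,0,1,1,1,0,0,1,0,0,0,0], dtype=np.uint8)

def find_sync(bits: np.ndarray, pattern: np.ndarray, max_errors: int = 0) -> list[int]:
    hits = []
    for offset in range(len(bits) - len(pattern) + 1):
        window = bits[offset:offset + len(pattern)]
        if np.count_nonzero(window != pattern) <= max_errors:
            hits.append(offset)
    return hits

rng = np.random.default_rng(7)
frame = np.concatenate([SYNC, rng.integers(0, 2, 112, dtype=np.uint8)])  # 128-bit minor frame
stream = np.concatenate([frame, frame, frame])                           # three minor frames

print(find_sync(stream, SYNC))  # expected: [0, 128, 256] (plus any chance matches in the data)
```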