Accordingly, we find that telephone circuits that respond well to this latter range of frequencies give quite satisfactory commercial telephone service.
Another important point that we have to keep in mind is the unavoidable presence of noise in a communication system. Noise refers to unwanted waves that tend to disturb the transmission and processing of message signals in a communication system. The source of noise may be internal or external to the system.
A quantitative way to account for the effect of noise is to introduce the signal-to-noise ratio (SNR) as a system parameter. For example, we may define the SNR at the receiver input as the ratio of the average signal power to the average noise power, both being measured at the same point. The customary practice is to express the SNR in decibels (dB), defined as 10 times the logarithm (to base 10) of the power ratio. For example, signal-to-noise ratios of 10, 100, and 1,000 correspond to 10, 20, and 30 dB, respectively.
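The decibel conversion just described can be sketched in a few lines of Python; the function name is an illustrative choice, not standard notation:

```python
import math

def snr_db(signal_power, noise_power):
    """Express a signal-to-noise power ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# Power ratios of 10, 100, and 1,000 correspond to 10, 20, and 30 dB.
for ratio in (10, 100, 1000):
    print(f"power ratio {ratio:>5} -> {snr_db(ratio, 1):.0f} dB")
```

Note that because the definition involves a ratio of powers, doubling both the signal power and the noise power leaves the SNR in dB unchanged.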
2. Sources of Information
The telecommunications environment is dominated by four important sources of information: speech, music, pictures, and computer data. A source of information may be characterized in terms of the signal that carries the information. A signal is defined as a single-valued function of time, with time playing the role of the independent variable; at every instant of time, the function has a unique value. The signal can be one-dimensional, as in the case of speech, music, or computer data; two-dimensional, as in the case of pictures; three-dimensional, as in the case of video data; and four-dimensional, as in the case of volume data over time. In the sequel, we elaborate on these different sources of information.
(1) Speech is the primary method of human communication. Specifically, the speech communication process involves the transfer of information from a speaker to a listener, which takes place in three successive stages:
(1) Production. An intended message in the speaker's mind is represented by a speech signal that consists of sounds (i.e., pressure waves) generated inside the vocal tract and whose arrangement is governed by the rules of language.
(2) Propagation. The sound waves propagate through the air at a speed of about 340 m/s, reaching the listener's ears.
(3) Perception. The incoming sounds are deciphered by the listener into a received message, thereby completing the chain of events that culminates in the transfer of information from the speaker to the listener.
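The propagation stage admits a simple worked example: the acoustic delay over a given distance follows directly from the speed of sound. The distances below are illustrative values, not taken from the text:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def acoustic_delay(distance_m):
    """Time, in seconds, for sound to travel the given distance through air."""
    return distance_m / SPEED_OF_SOUND

# A few representative speaker-to-listener distances (illustrative).
for d in (1.0, 10.0, 100.0):
    print(f"{d:>6.1f} m -> {acoustic_delay(d) * 1000:.1f} ms")
```

Even across a large lecture hall the delay is only tens of milliseconds, which is why acoustic propagation delay is rarely noticeable in face-to-face speech.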
The speech-production process may be viewed as a form of filtering, in which a sound source excites a vocal-tract filter. The vocal tract consists of a tube of nonuniform cross-sectional area, beginning at the glottis (i.e., the opening between the vocal cords) and ending at the lips. As the sound propagates along the vocal tract, the spectrum (i.e., frequency content) is shaped by the frequency selectivity of the vocal tract; this effect is somewhat similar to the resonance phenomenon observed in organ pipes. The important point to note here is that the power spectrum (i.e., the distribution of long-term average power versus frequency) of speech approaches zero at zero frequency and reaches a peak in the neighborhood of a few hundred hertz. To put matters into proper perspective, however, we have to keep in mind that the hearing mechanism is very sensitive to frequency. Moreover, the type of communication system being considered has an important bearing on the band of frequencies considered "essential" for the communication process. For example, as mentioned previously, a bandwidth of 300 to 3100 Hz is considered adequate for commercial telephonic communication.
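The source-filter view described above can be sketched numerically. The following is a minimal illustration (not a speech synthesizer): an impulse train stands in for the glottal source, and a single two-pole resonator stands in for one vocal-tract resonance. The sampling rate, resonance frequency, bandwidth, and pitch are all illustrative assumptions:

```python
import math

FS = 8000          # sampling rate in samples/s (illustrative)
F_RES = 500.0      # resonance frequency in Hz (illustrative "formant")
BANDWIDTH = 100.0  # resonance bandwidth in Hz (illustrative)

# Two-pole resonator: y[n] = x[n] + a1*y[n-1] + a2*y[n-2]
r = math.exp(-math.pi * BANDWIDTH / FS)   # pole radius set by the bandwidth
theta = 2 * math.pi * F_RES / FS          # pole angle set by the resonance frequency
a1, a2 = 2 * r * math.cos(theta), -r * r

def excite(n_samples, pitch_hz=100):
    """Impulse train standing in for the glottal sound source."""
    period = FS // pitch_hz
    return [1.0 if n % period == 0 else 0.0 for n in range(n_samples)]

def resonate(x):
    """Filter the source through the resonator (the 'vocal tract')."""
    y = [0.0, 0.0]  # two samples of zero initial state
    for n in range(len(x)):
        y.append(x[n] + a1 * y[-1] + a2 * y[-2])
    return y[2:]

output = resonate(excite(800))
```

Each glottal impulse "rings" the resonator at 500 Hz with a decaying envelope, which is the discrete-time analogue of the organ-pipe resonance mentioned above; a real vocal tract would be modeled with several such resonances in cascade.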
(2) The second source of information, music, originates from instruments such as the piano, violin, and flute. The note made by a musical instrument may last for a short time interval, as in the pressing of a key on a piano, or it may be sustained for a long time interval, as in the example of a flute player holding a prolonged note. Typically, music has two structures: a melodic structure consisting of a time sequence of sounds, and a harmonic structure consisting of a set of simultaneous sounds. Like a speech signal, a musical signal is bipolar. However, a musical signal differs from a speech signal in that its spectrum occupies a much wider band of frequencies, which may extend up to about 15 kHz. Accordingly, musical signals demand a much wider channel bandwidth than speech signals for their transmission.
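The bandwidth comparison can be made concrete with the sampling theorem, a standard result (not stated in the text above) that requires sampling at no less than twice the highest signal frequency. The band edges below are the ones quoted in the discussion above:

```python
def min_sampling_rate(highest_freq_hz):
    """Nyquist criterion: the sampling rate must be at least twice the highest frequency."""
    return 2 * highest_freq_hz

# Band edges from the text: telephone speech up to 3100 Hz, music up to about 15 kHz.
print(min_sampling_rate(3100))    # 6200 samples/s suffice for telephone-band speech
print(min_sampling_rate(15000))   # 30000 samples/s are needed for music
```

The roughly fivefold difference in minimum sampling rate is one way of quantifying why music demands a much wider channel bandwidth than telephone-quality speech.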