FM Receivers and De-emphasis

Revision History
Started 04 February 2009
Completed 15 February 2009
16 February 2009: Added photographs of three receivers
 

In sorting through my Elecraft K3 transceiver's transmitting performance, I looked briefly at its FM mode and the associated FM receiver section. That led me to look at some older FM receivers stashed away in the furthest reaches of my basement, and I thought the results might be of interest to the broader amateur radio community.

This discussion is in the context of "narrow band" FM of the type used in analog two-way radio equipment and amateur VHF and UHF FM communications. The prevailing standard for the last couple of decades or more is a maximum deviation of 5 KHz and a maximum modulating frequency of 3 KHz or so. These parameters result in an occupied bandwidth of 16 KHz, and the emission type used to be identified as 16F3, meaning a 16 KHz bandwidth, frequency-modulated voice emission. The current emission description system is more complex, and I'll stick with the 16F3 descriptor for this discussion because it matches the age of most of the equipment to be discussed.

As a reminder, in an FM system "deviation" is the frequency excursion of the signal with a zero frequency (DC) modulating waveform. In other words, if your FM transceiver were DC coupled throughout and you connected a battery to the microphone input, the transmitted carrier would shift in one direction by the amount of the maximum deviation. Reversing the battery polarity would cause it to shift in the opposite direction by the same amount. This assumes that the applied voltage is sufficient to drive the transmitter to its maximum frequency excursion, of course.

The "maximum modulating frequency" is the highest frequency applied to the FM transmitter's modulator. For voice, communications quality audio is generally considered to require a frequency range of 300 Hz to 3000 Hz, although narrower bandwidths are occasionally used.

FM broadcast uses 75 KHz deviation and the highest modulating frequency can be in the 90 KHz range when subcarriers are considered, although the subcarriers are injected at a much lower modulation, typically 10% or 7.5 KHz deviation per subcarrier. I've written about subcarriers at FM & TV Subcarriers.

The FM transmitter's frequency varies in proportion to the applied modulating voltage and it varies at a rate equal to the applied modulating frequency.

Sounds simple enough ... but it isn't. If the deviation is only ±5 KHz, how can the occupied bandwidth be 16 KHz? There's a complicated answer and a simple answer to this question. The simple answer is that the very act of changing a transmitter's frequency rapidly gives rise to a complicated series of sidebands, and when we change the transmitted frequency 3,000 times a second, it's fast. That's why our definition of deviation used a battery on the microphone; the modulating frequency is zero once the initial transient damps out.

For narrow band FM, a useful tool for estimating the required bandwidth was developed by J.R. Carson of the Bell Telephone Laboratories in 1922, which has since become known as "Carson's rule." Carson's rule defines the relationship between occupied bandwidth, maximum modulating frequency and peak deviation as:

OBW≈2(fmax+Δf)

where
OBW = Occupied bandwidth
fmax is the maximum modulating frequency
Δf is the peak deviation, i.e., the maximum permitted frequency excursion.
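
Carson's rule is simple enough to check with a few lines of code. The short Python sketch below is my own back-of-the-envelope helper (the function name is mine, not part of any standard); it reproduces the predictions used for the measurements that follow.

    # Carson's rule: OBW is roughly twice the sum of peak deviation and
    # maximum modulating frequency.
    def carson_obw_khz(deviation_khz, f_mod_khz):
        return 2.0 * (deviation_khz + f_mod_khz)

    # 5 KHz deviation with the modulating frequencies used below
    for f_mod in (0.1, 1.0, 3.0, 10.0):
        print(f"fmod = {f_mod:4.1f} KHz -> OBW = {carson_obw_khz(5.0, f_mod):5.1f} KHz")

    # FM broadcast: 75 KHz deviation, 15 KHz maximum audio frequency
    print(f"Broadcast FM -> OBW = {carson_obw_khz(75.0, 15.0):.0f} KHz")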

If the modulating waveform is normal speech or music, and for small modulation index values, Carson's rule provides a good estimate of the occupied bandwidth. It's not as good for digital data modulation of an FM transmitter, although if the data symbol rate is replaced by a Fourier series of the first few harmonics of the symbol rate, the estimate can be reasonably good.

Modulation index, β, is defined as the ratio of the deviation Δf to the modulating frequency fmax:

          β=Δf/fmax

Occupied bandwidth, by the way, has a normally accepted definition—the bandwidth within which 99% of the transmitted (or received) energy is contained. In theory, an FM signal's sidebands extend to infinity, but for most practical purposes, the signal power is not all that widely spread.
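
For the curious, the 99% definition is easy to exercise numerically. The Python sketch below is my own illustration (the parameters loosely mimic one of the test cases that follows, 5 KHz deviation and a 1 KHz tone): it builds a single-tone FM signal at complex baseband, takes its power spectrum, and widens a window about the carrier until the window holds 99% of the power. With these illustrative values the window reaches 99% at roughly 12 KHz.

    # Numerically estimating the 99% occupied bandwidth of a single-tone FM
    # signal at complex baseband. Parameters are illustrative.
    import numpy as np

    n = 200_000               # samples: 1 second at fs, giving 1 Hz resolution
    fs = 200_000.0            # sample rate, Hz
    f_mod = 1_000.0           # modulating tone, Hz
    deviation = 5_000.0       # peak deviation, Hz
    t = np.arange(n) / fs

    # For a sine-wave tone the phase swings by (deviation / f_mod) radians peak
    signal = np.exp(1j * (deviation / f_mod) * np.sin(2 * np.pi * f_mod * t))

    spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal))) ** 2
    cum = np.cumsum(spectrum)
    total = cum[-1]
    center = n // 2           # carrier bin after fftshift

    # Widen a symmetric window about the carrier until it holds 99% of the power
    for k in range(1, center):
        if cum[center + k] - cum[center - k - 1] >= 0.99 * total:
            print(f"99% OBW ~ {2 * k * fs / n:.0f} Hz")
            break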

Carson left his mark in other areas, including inventing single sideband transmission in 1915. (It was used for carrier transmission over wire by the Bell System years before it was used for radio transmissions.)

Let's look at some measured data.

We'll start with a 50 MHz signal from an HP8657A signal generator. The 8657A is set for FM modulation, 5 KHz deviation. (Remember that deviation is generally referred to as the single sided deviation.)

The image below shows a spectrum analyzer image of the signal with no modulation applied. The response is not a zero width spike because the spectrum analyzer has a finite resolution bandwidth, 300 Hz in this case.

 


As we discussed earlier, what happens when we modulate the signal generator with 5 KHz deviation but a very low modulating frequency? The bandwidth should be 10 KHz. We can determine this either from Carson's rule with the modulating frequency set to 0, or by applying common sense. If we move the signal generator's tuning dial ±5 KHz slowly, we expect the bandwidth of the resulting signal to be 10 KHz.

The spectrum analyzer image below shows the result of modulating the 8657A with a 5.33 Hz sine signal. I've set the spectrum analyzer for its "peak hold" function, so the image shows the composite of the slow frequency sweeping. The spectrum analyzer shows a bandwidth of 10.4 KHz, which differs slightly from the theoretical 10.0 KHz due to equipment tolerance.

 


If we increase the modulating frequency to 100 Hz, we see a similar-appearing response. This time, I've enabled the Advantest R3463 spectrum analyzer's "occupied bandwidth" (OBW) automatic measurement feature. OBW is usually defined as the 99% bandwidth; 99% of the transmitted energy is contained within the OBW. As we'll see later, even though 99% of the energy is within the OBW, we may still be concerned with the energy outside it.

Carson's rule says the OBW of this combination should be 2*(5+0.1) or 10.2 KHz. The spectrum analyzer computes it as 10.45 KHz, which again is reasonable considering the 8657A's deviation is only accurate to ±5%.

 


If we increase the modulating frequency to 1 KHz, we see the occupied bandwidth increases. Carson's rule says the OBW should be 2*(5+1) = 12.0 KHz. Again, we see very good agreement between Carson's prediction and the measured OBW. These parameters result in a modulation index, β, of 0.2.

 


Increasing the modulating frequency to 3 KHz increases the measured OBW to 17.9 KHz. Carson's rule predicts 2*(5+3) = 16 KHz. β is 0.6 in this case. We see Carson's rule beginning to under-predict the occupied bandwidth as β increases beyond around 0.5.

 


If we increase the modulating frequency to 10 KHz, the OBW as predicted with Carson's rule is 2*(5+10) = 30 KHz, whilst the spectrum analyzer measures 21.4 KHz. β in this case is 2, well beyond the 0.5 or so we regard as the upper limit of good accuracy from Carson's rule.

Note also that this combination of high modulating frequency and low deviation results in sidebands with significant energy ±20 KHz from the carrier, and not inconsiderable energy at ±30 KHz. With 3 KHz as the maximum modulating frequency, there's no measurable energy above ±15 KHz.

The relationship amongst deviation, modulating frequency and sideband spacing and amplitude (and therefore occupied bandwidth) is described by Bessel functions. In theory, the FM signal has sidebands extending to plus and minus infinity, but in practice the energy drops off to negligible levels at finite spans.
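
To make that concrete, the short SciPy sketch below (my own illustration, with arbitrary values) lists the relative amplitudes of the carrier and the first several sideband pairs for a single sine-wave modulating tone. The n-th pair sits n times the modulating frequency away from the carrier, with amplitude given by the Bessel function Jn evaluated at the peak phase deviation, which for a sine-wave tone is the deviation divided by the tone frequency.

    # Relative carrier and sideband amplitudes for single-tone FM, from
    # Bessel functions of the first kind. Values chosen for illustration only.
    from scipy.special import jv

    deviation_khz, f_mod_khz = 5.0, 1.0
    m = deviation_khz / f_mod_khz      # peak phase deviation in radians for a sine tone

    for n in range(9):
        label = "carrier" if n == 0 else f"sidebands at +/-{n * f_mod_khz:.0f} KHz"
        print(f"{label:>24s}: relative amplitude {jv(n, m):+.3f}")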
 


As said at the outset, Carson's rule is valid only where the modulation index is relatively small, say β < 0.5 or 0.6.

If, instead of a clean sine wave modulating signal, we feed broadband noise into the 8657A signal generator, the spectrum looks quite different, as seen below. The broadband noise is from the internal source of an HP3562A dynamic signal analyzer. The noise signal is centered at 5 KHz with 70% of the noise energy within 1 KHz of the center frequency.

 

Of course, these tests use a signal generator as the modulated source. The 8657A is designed to have a flat modulating frequency response, unlike a typical FM communications transmitter, which intentionally limits the audio frequency response by rolling off frequencies above 3 KHz or so.

FM Broadcast stations operate  with 75 KHz deviation and a maximum audio frequency response typically limited to 15 KHz. (This figure excludes various subcarriers also modulating the carrier.)

The modulation index in this case is 15 KHz / 75 KHz, or 0.2, so Carson's rule should provide decent accuracy. We predict the bandwidth with Carson's rule as 2*(15+75) = 180 KHz. We measure 191.5 KHz, representing about 6% error.
 


Pre-emphasis and de-emphasis

Absent explicit audio shaping, the noise output of an AM or SSB receiver is relatively flat across the receiver's selected bandwidth whether a signal is present or not.

For example, the spectrum analyzer image below shows the noise output of my K3 receiver in SSB mode, with 2.8 KHz bandwidth selected and a single signal present at a beat note of 400 Hz. Except for the low frequency rolloff below 250 Hz and the upper rolloff at 2.8 KHz, the noise level is flat. The noise power in a 10 Hz bandwidth centered at 500 Hz is nearly identical to the power in a 10 Hz bandwidth centered at 2500 Hz.
 

The image below is the same setup, but without the signal present. The level is different because the receiver's AGC tries to maintain a constant audio level. The point of interest, however, is that the noise spectrum is flat with frequency, whether there is a signal present or not.

FM receivers have a quite different noise characteristic—the noise is relatively flat when no signal is present, but when receiving a signal, two things happen. First, the overall noise level drops. Second, the remaining noise increases with frequency. When a signal is received, in theory, an FM receiver's demodulated noise energy in a small frequency interval δf centered at frequency f is proportional to f². This results in a rising noise characteristic, increasing at 6 dB/octave or 20 dB/decade.

The reason for an FM receiver's rising noise characteristic is well explained by D. Roddy in Satellite Communications 4th Ed., pp 270-271, reproduced below. Note that this analysis is valid only for the stated conditions—where a carrier much stronger than the input noise is present.

The upshot of this is that a theoretically perfect FM receiver has a noise characteristic at the output of the detector that increases 6 dB/octave or 20 dB/decade when a signal is present. (If the noise voltage spectral density is proportional to frequency, doubling the frequency, i.e., one octave, doubles the noise voltage, a 6 dB increase: 20*log10(2) ≈ 6 dB.)
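
The octave and decade figures are easy to verify; the short Python check below (the function name is my own) makes the arithmetic explicit.

    import math

    def noise_rise_db(f1_hz, f2_hz):
        # dB increase in demodulated FM noise voltage density between two
        # frequencies, assuming the ideal density-proportional-to-frequency law
        return 20.0 * math.log10(f2_hz / f1_hz)

    print(f"one octave: {noise_rise_db(1_000, 2_000):.2f} dB")   # ~6.02 dB
    print(f"one decade: {noise_rise_db(1_000, 10_000):.1f} dB")  # 20.0 dB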

It's easier for me to illustrate FM receiver noise concepts with a wideband receiver, in this case a Heath AR-29 AM/FM tuner I built many years ago. I've modified the tuner to bring a buffered sample of the discriminator output to the back panel.

With no input signal, the discriminator's output is noise, at a level essentially constant with frequency, as reflected in the image below.
 

What happens when we put an unmodulated signal into the AR-29's antenna connection? (Both the tuner and signal generator are set to the same frequency, of course.)

Two things happen, as mentioned before.

  • The absolute noise level decreases. At 10 KHz, for example, with no antenna connected, the noise is about -65 dBV/Hz. With the unmodulated carrier applied, the noise at 10 KHz drops to about -100 dBV/Hz, a 35 dB reduction. The reduction of noise with a signal is known as "quieting."
  • The noise spectrum shifts from flat to having a pronounced rise with frequency.
If we increase the applied signal to a very high level, -40 dBm for example, the noise reduction stops, as the noise level is then dominated by other noise sources in the signal generator, the tuner and the test equipment. This condition is known as "full quieting"; no further noise reduction is provided by increasing the signal level. At this point, the noise at 10 KHz is -120 dBV/Hz, representing a 55 dB reduction from the no-signal noise level.
 
The FM receiver's rising noise characteristic over most of its useful signal range suggests a strategy for noise reduction—use a low pass RC network after the discriminator to counteract the rising noise characteristic. This, of course, will roll off the signal as well, so we offset the roll off by using an RC high pass network to increase the modulation of higher frequency components of the modulating waveform. This process is called "pre-emphasis" at the transmitter and "de-emphasis" at the receiver.

This sounds a lot like getting something for nothing, but it works. In FM broadcast, if a fully modulated 1 KHz tone deviates the transmitter 75 KHz, then we expect a 10 KHz tone of the same level to deviate the transmitter much more than 75 KHz, due to pre-emphasis. In fact, the extra modulation amounts to a 14 dB increase, or 375 KHz deviation. Something doesn't sound right here, does it? The main reason pre-emphasis/de-emphasis works is that speech and music contain much less energy as the modulating frequency increases, and hence the level of the 10 KHz components is so low that even after pre-emphasis the deviation remains below the 75 KHz maximum.

In FM broadcasting in the US, the standard pre-emphasis / de-emphasis is "75 microseconds." This means that the RC network has a time constant of 75 microseconds. To convert this to a 3 dB corner frequency, take the reciprocal:

ω = 1/(75 x 10⁻⁶) = 13,333 radians/sec, which corresponds to f = ω/2π ≈ 2.1 KHz.
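
As a quick back-of-the-envelope check, the Python sketch below (names are my own) works out the 75 microsecond corner frequency and the boost a first-order pre-emphasis network applies at 10 KHz, which comes out near the 14 dB figure mentioned above.

    import math

    tau = 75e-6                               # 75 microsecond time constant
    omega = 1.0 / tau                         # ~13,333 radians/sec
    f_corner = omega / (2 * math.pi)          # ~2.12 KHz
    print(f"corner: {omega:.0f} rad/s, {f_corner:.0f} Hz")

    def preemphasis_boost_db(f_hz, tau_s=75e-6):
        # gain of a first-order pre-emphasis network relative to its
        # low-frequency (flat) region
        w_tau = 2.0 * math.pi * f_hz * tau_s
        return 10.0 * math.log10(1.0 + w_tau ** 2)

    print(f"boost at 10 KHz: {preemphasis_boost_db(10_000):.1f} dB")   # ~13.7 dB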

The image below shows the concept. Below 2.1 KHz, both the transmitter and receiver responses are flat. Above that, the pre-emphasis at the transmitter is exactly offset by the de-emphasis at the receiver, so the net result is a flat frequency response measured from transmitter input to receiver output.
 

The figure below shows how well the AR-29's de-emphasis network flattens out the noise increase. The blue trace is the demodulated signal taken directly from the tuner's discriminator; it shows the rising noise characteristic up to about 50 KHz. The red trace is the tuner's right channel output. In addition to de-emphasis, the tuner's audio output has a low pass filter cutting off around 17 KHz, with specific notches to reduce stereo pilot leakage (19 KHz) and stereo subcarrier leakage (centered at 38 KHz). All are visible in the audio output channel.

Sweeping the AR-29's frequency response with a signal generator in FM modulation mode and using the tracking sweep function of the HP3562A lets us look more closely at the de-emphasis curve.

The predicted -3 dB point is 2.12 KHz, and the measured -3 dB point is 2.107 KHz, certainly quite close.

Looking at the swept frequency response over a wider scale, we also see that it conforms well to the 20 dB/decade slope.
 
Returning to FM communications equipment, the same concept applies, except the de-emphasis curve is defined differently. EIA/TIA standard 603 establishes a 6 dB/octave or 20 dB/decade curve between 300 Hz and 3000 Hz. Below 300 Hz, a land mobile receiver's audio is sharply rolled off to prevent the user from hearing continuous tone-coded squelch system (CTCSS) tones. (CTCSS is also known by the trade names Private Line, Channel Guard and Quiet Channel, to name a few.) Above 3 KHz or so, the audio is sharply rolled off, as there is no communications benefit from passing these higher frequencies.
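
If you demodulate FM in software rather than hardware, a single-pole IIR low-pass is a common stand-in for the RC de-emphasis network. The Python sketch below is a generic illustration with names, sample rate and time constant of my own choosing; it is not the circuit used in any of the receivers discussed here.

    import numpy as np

    def deemphasize(audio, sample_rate_hz, tau_s):
        # First-order IIR approximation of an RC low-pass with time constant tau:
        # y[n] = y[n-1] + alpha * (x[n] - y[n-1]), with alpha = dt / (tau + dt)
        alpha = 1.0 / (1.0 + tau_s * sample_rate_hz)
        out = np.empty(len(audio))
        acc = 0.0
        for i, x in enumerate(audio):
            acc += alpha * (x - acc)
            out[i] = acc
        return out

    # Example: broadcast-style 75 microsecond de-emphasis on 48 KHz audio
    discriminator_out = np.random.randn(48_000)   # stand-in for demodulated audio
    audio = deemphasize(discriminator_out, 48_000.0, 75e-6)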
 

I measured Elecraft's de-emphasis implementation in my K3 transceiver, with the results shown below. The corner frequency is perhaps a bit higher than the 300 Hz standard, but it's a very good implementation overall. The low frequencies are sharply rolled off to reduce audible CTCSS tones.

I also ran similar sweeps on a couple of old land mobile receivers. The first is an  RCA Super-Basephone  receiver. This is an early all solid state VHF receiver, from the days when RCA was a player in the land mobile radio business.

The frequency response curve is curious, to say the least, with the 6 dB/octave de-emphasis taking effect around 1 KHz. I believe, but can't put my hands on the documents to prove it, that the land mobile standard in the 1950s and 60s, when this receiver was manufactured, was the same as the current standard.

The receiver is reasonably well constructed, with good shielding (note the finger stock on the hinged access panel, for example) and all the input/output leads filtered with feed through capacitors. This receiver has provisions for an optional tone squelch module, although it is not so equipped.
 

Going back further into history, I also looked at a General Electric Progress Line receiver. This was the last vacuum tube land mobile receiver GE made, and this particular receiver was manufactured before CTCSS came into use and hence has no low frequency rolloff. As with the RCA, the de-emphasis is implemented beginning at 1 KHz or so, and the 6 dB/octave standard is not all that well maintained.

This receiver includes the base station power supply rack frame into which the receiver strip attaches. The receiver strip is the same as used in the mobile Progress Line packages. The two large shields next to the power transformer are the 290 KHz IF transformer/filter section.
 

 

The last communications  receiver I looked at is a Motorola MX-360 hand portable unit dating from the mid 1980's. It applies de-emphasis beginning at 400 Hz, but at a rate of 12 dB/octave, not 6.

The MX-360 was made in several power levels and with several battery packages. The one I have is the high power version with a high capacity battery, and it's close to 12 inches tall.