JTRS – Continued

In my opening article I wrote briefly about modulation, including FM (Frequency Modulation) and AM (Amplitude Modulation). I also discussed PCM (Pulse Code Modulation), which is in the digital domain. Virtually all modern military communications equipment operates in the digital domain; that is, it converts the analog human input into a digital stream. Also recall that using standard PCM, we generate a data stream of 64,000 bits per second.
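That 64,000 bit-per-second figure follows from the usual voice-grade PCM parameters: an 8 kHz sampling rate at 8 bits per sample. Those are the standard telephony (G.711-style) assumptions, not something specific to any one radio, but they make for a quick sanity check:

```python
# Voice-grade PCM: sample the analog signal 8,000 times per second,
# quantize each sample to 8 bits. These are the standard telephony
# assumptions behind the 64,000 bit/s figure.
sample_rate_hz = 8_000    # samples per second
bits_per_sample = 8       # bits per sample

bit_rate = sample_rate_hz * bits_per_sample
print(bit_rate)  # 64000 bits per second
```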

For the time being, I’d like you to view that digital stream as a water pipe of some diameter “d”. In the paragraphs that follow I will discuss the concepts of spectrum and bandwidth utilization, as they will help you understand some of the challenges designers will need to address as JTRS (“jitters”) is implemented.

Spectrum is the total range of frequencies a system can operate in, given its design and antenna characteristics. Regrettably, spectrum is both costly and limited; therefore, designers need to do everything possible to optimize its utilization. Bandwidth, on the other hand, is the portion of the available spectrum that a given channel occupies. The graphic below should help you with that concept.

Figure 1

Figure 1 illustrates the concept of spectrum with a large circle representing the range of frequencies that SINCGARS operates within. Each smaller circle represents a single communications channel. What you should take away from this graphic is that there is a limit to the number of channels a given spectrum will accommodate. Therefore, to add capacity, designers must add spectrum, reduce the bandwidth of each channel, or do some combination of both. This leads us to a very high-level discussion of codecs (coder-decoders), which is how designers address the channel bandwidth requirements – the size of the smaller circles in Figure 1.
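To put rough numbers on Figure 1: a commonly cited profile for SINCGARS is the 30 to 88 MHz band with 25 kHz channel spacing. Treating those figures as illustrative assumptions rather than a specification, the channel count is simply the available spectrum divided by the per-channel bandwidth:

```python
# Channel count = available spectrum / bandwidth per channel.
# The SINCGARS-style figures below (30-88 MHz band, 25 kHz channels)
# are illustrative assumptions, not an authoritative specification.
spectrum_hz = 88_000_000 - 30_000_000   # 58 MHz of usable band
channel_bw_hz = 25_000                  # bandwidth of one channel

print(spectrum_hz // channel_bw_hz)         # 2320 channels
print(spectrum_hz // (channel_bw_hz // 2))  # 4640 channels: halve the
                                            # channel, double the capacity
```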

A codec is a fixed or programmable circuit that processes a digital stream to achieve a desired output. In voice communications, where we are bandwidth limited, we use codecs that aggressively compress our 64,000-bit-per-second stream into something much smaller. The concept of compression is hard to visualize, but I am going to take a stab at it.

A three-second PCM sample generates 3 x 64,000, or 192,000, bits; that’s a large amount of data. Keeping that in mind, let’s dissect a typical radio call.

  1. Sender:   Romeo Bravo. Romeo Bravo. Romeo Bravo this is Quebec Charlie. Radio check. Over.
  2. Receiver: Quebec Charlie this is Romeo Bravo. I read you 5 by 5. Over.

In this particular transmission there is silence between each letter and each word. Rather than representing that silence with an 8-bit digital stream of zeros (00000000), we could compress it to, say, a three-bit stream: one bit to indicate the presence of silence and two bits to indicate its duration, effectively going from “00000000” to “001”. By applying this processing to our PCM stream we can reduce the bandwidth requirement from 64,000 bits per second to, say, 12,000 bits per second, which makes it possible to shrink the small circles in our graphic so the available spectrum will carry more channels.
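Here is a toy sketch of that idea in Python: runs of silent 8-bit samples collapse into the 3-bit marker described above (one flag bit plus a two-bit run length). The encoding details are invented for illustration, and a real codec would need a scheme that keeps silence markers unambiguously distinguishable from speech bits; this only shows why suppressing silence shrinks the stream.

```python
# Toy silence compression: a run of up to three silent 8-bit PCM
# samples (value 0) collapses to 3 bits, a "0" flag + 2-bit run length.
# Speech samples are emitted as their full 8 bits. Illustrative only.
def compress(samples):
    out, i = [], 0
    while i < len(samples):
        if samples[i] == 0:
            run = 0
            while i < len(samples) and samples[i] == 0 and run < 3:
                run += 1
                i += 1
            out.append("0" + format(run, "02b"))   # 3 bits replace 8*run
        else:
            out.append(format(samples[i], "08b"))  # speech sample, 8 bits
            i += 1
    return "".join(out)

stream = [0, 0, 0, 200, 0, 150]  # silence run, speech, silence, speech
print(len(stream) * 8, "->", len(compress(stream)))  # 48 -> 22 bits
```

Note that a single silent sample becomes “001”, matching the example above, and a three-sample run of silence shrinks from 24 bits to 3.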

A secondary benefit of compressing the data rate is that the communications stream is less vulnerable to SNR (signal-to-noise ratio) impairments.

So far we’ve only discussed voice communications, but consider what happens when we add video and data transmissions to the system – we’ve now placed additional demands on the available spectrum. Finally, we place further demands on spectrum when we add an IP-based, networkable communications architecture.

In another article I’ll give you an overview of the SDR (Software Defined Radio) and some of the standards established to ensure interoperability. So stay tuned, and send any questions or comments my way; I’ll do the best I can to answer them for you.
