This will be a growing repository of information regarding DTx, DTS, and SFN technology for ATSC and ATSC M/H applications.
What do DTx, DTS, and SFN mean, and what are they good for?
Well, the acronyms themselves stand for:
DTx - Distributed Transmitter
DTS - Distributed Transmission System
SFN - Single Frequency Network
In the context of ATSC television, these terms are used to describe how two or more transmitters can simulcast the same signal from multiple locations. Each of the participating transmitters in such a network broadcasts identical information at exactly the same time. It is a technique that essentially takes multiple transmitters spread over a geographic region, synchronizes them, and makes them behave as if they were a single logical transmitter with many parts.
If you're wondering why anyone would want to do this, there are many reasons to consider. Some of the major reasons include:
1) The television spectrum has become an increasingly crowded space. If extra transmitters are needed to fill in coverage holes, it's not so easy to find a vacant channel for them to occupy. Instead of using translator stations (re-broadcasting on a new channel), the DTS approach lets these auxiliary transmitters share the same channel as the primary transmitter. This is where we get the term Single Frequency Network.
2) Mobile DTV is coming. This has many ramifications, but one of the big issues is how to get the signal to these portable devices. While stationary OTA users at home can install a rooftop antenna to pull in distant stations, mobile DTV users don't have that option. There need to be more transmitters installed closer to where the mobile DTV devices will be used.
3) Simply increasing the power of existing transmitters is not good enough. If transmitter power is increased to make up for the limited antenna of mobile DTV devices, this will create more problems than it solves. It would cause even more spectrum crowding / interference, the energy costs would be astronomical, and it's just impractical (perhaps even dangerous) to deal with so much transmitter power. With DTS, each transmitter can put out much less power since the distance between the user and the nearest transmitter will usually be shorter. The average signal strength throughout the coverage region becomes more uniform.
It is a bit like comparing the single high-power "monolithic" transmitter model to the multiple low-power "cellular" transmitter model. Of course, there are other things to consider, like the cost of deploying and maintaining multiple transmitter sites, but for the scope of this discussion, we'll focus on the technology itself more than the business decisions that go along with it.
How Does It Work?
A transmitter participating in a DTS network is functionally not that different from a regular DTV transmitter. The biggest difference is the requirement for a synchronization reference at each transmitter. This synchronization reference is what keeps all of the individual transmitters in lock step with each other and makes them behave as one.
In theory, any high-accuracy distributed timing system can be used, but for all practical purposes, GPS is the timing reference of choice. GPS is available globally, it has very good timing accuracy (atomic clocks are on board every satellite), it has long-term stability, and the receiver hardware is relatively inexpensive to implement.
Each of the remote transmitters can be told exactly when to emit their ATSC symbols relative to their local timing reference. If all goes according to plan, then all of the transmitters will be broadcasting the same symbols at the same time no matter where they are located.
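To make that concrete, here's a minimal sketch of the timing arithmetic. Everything here is illustrative: the function name, the per-transmitter offset parameter, and the idea of scheduling by symbol index are assumptions for the example, not the actual A/110 signaling. The principle, though, is the point: a shared GPS-derived epoch plus a shared symbol count pins every transmitter to the same emission instant.

```python
# Illustrative sketch only -- not the real ATSC A/110 signaling.
# The idea: every transmitter shares a GPS-derived epoch, so the same
# symbol index maps to the same instant in time everywhere.

ATSC_SYMBOL_RATE = 10.762e6   # approximate 8-VSB symbol rate, symbols/s

def symbol_emission_time(epoch_gps_s, symbol_index, tx_offset_s=0.0):
    """GPS time at which symbol number `symbol_index` should leave the antenna.

    epoch_gps_s  -- agreed network-wide start time (seconds, GPS timescale)
    symbol_index -- position of the symbol in the synchronized stream
    tx_offset_s  -- hypothetical per-transmitter trim (e.g., to compensate
                    for differing studio-to-transmitter link delays)
    """
    return epoch_gps_s + symbol_index / ATSC_SYMBOL_RATE + tx_offset_s

# Two transmitters given the same epoch and the same symbol index will
# emit that symbol at the same instant, regardless of where they sit.
print(symbol_emission_time(1_000_000_000.0, symbol_index=260_416))
```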
Of course none of these synchronization mechanisms existed in ATSC equipment before, so in most cases, DTS support will involve some equipment upgrades at the transmitters and the studio (where a lot of the communications and controls begin). GPS reception equipment (antennas, cables, and receivers) might also be new to some transmitter facilities.
The Challenge
All of this may seem straightforward enough with some potentially very nice benefits, but, like many things in life, there are some hidden complexities to be worked out.
In an ideal world, one would expect that a DTS makes everything easier for the DTV receiver. When there are multiple transmitters nearby, you'd expect the receiver to simply use the strongest available signal and decode a beautiful DTV picture. If we look at a simple two-transmitter example, where receiver performance is determined purely by the strongest signal available, we'd expect the coverage map to look something like this:
However, in reality this is not the case, because we are actually dealing with two signals here. While the receiver is trying to lock on to one of them, the other is acting as interference. That other signal might come from farther away and be weaker, but it interferes with the stronger signal nonetheless.
Depending on the location of the receiver, there will be places where one transmitter dominates over the other, or vice versa. As the two signals approach equal power, the stronger one might still be decodable, but the weaker one behaves like a noise floor that limits the usable margin, and there is a cross-over line where the signals are at exactly equal power. Using our simple two-transmitter example, the picture would look more like this:
But then you might ask yourself, "If these two transmitters are transmitting identical information, why would they interfere with each other?" Good question. The answer is that there is almost always a difference in the time of arrival for the signals. The receiver is almost always closer to one transmitter than the other.
The only points that are an equal distance from both transmitters lie on a line running straight down the middle between them (in our symmetric example); every other contour of constant arrival-time difference is a hyperbola curving toward one transmitter or the other. Every other place on the map will have different arrival times for the two signals. Even though the signals are traveling at the speed of light, most places will have one signal arriving before the other. And even though the transmitters are sending identical information, the receiver will be seeing a digital signal and an identical delayed copy of the same signal superimposed on itself...
Since the 8-VSB (eight-level VSB) signal contains mostly random data patterns, the resulting mix of constructive and destructive interference between the two copies will make an almost random, undecipherable mess that the receiver cannot decode. As you get closer to one transmitter and farther away from the other, the noise contribution of the weaker signal diminishes.
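To put some numbers on this, here is a small numpy sketch. The geometry and the 50% echo amplitude are made-up values for illustration; the symbol rate is the standard ~10.762 Msymbols/s, and the random eight-level stream is a crude stand-in for real 8-VSB data (no pilot, no syncs, no filtering):

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
SYMBOL_RATE = 10.762e6   # approximate 8-VSB symbol rate, symbols/s

# Made-up geometry: the receiver is 20 km from one transmitter and 25 km
# from the other, so the far signal arrives a little later.
d_near, d_far = 20_000.0, 25_000.0
delay_s = (d_far - d_near) / C
delay_symbols = delay_s * SYMBOL_RATE
print(f"arrival offset: {delay_s * 1e6:.1f} us = {delay_symbols:.0f} symbols")

# Model the received signal as the near stream plus a weaker, delayed copy
# of the exact same (random-looking) eight-level symbol stream.
rng = np.random.default_rng(0)
x = rng.choice([-7.0, -5.0, -3.0, -1.0, 1.0, 3.0, 5.0, 7.0], size=10_000)
a = 0.5                              # relative amplitude of the far signal
d = int(round(delay_symbols))        # echo delay in whole symbols
y = x.copy()
y[d:] += a * x[:-d]

# Without any correction, the delayed copy acts just like added noise.
echo = y - x
print(f"signal power: {np.var(x):.1f}, echo power: {np.var(echo):.2f}")
```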
Whenever two transmitters have overlapping coverage areas (imagine the coverage area of each transmitter by itself), there can be mutual interference problems. The severity of interference will depend on the relative signal strengths at each location. The worst interference will be in places where the signals are very close to equal power levels.
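A rough way to see where the trouble spots are is to look at the desired-to-undesired (D/U) power ratio as a function of position. Assuming equal transmitter powers and simple free-space (1/d^2) path loss, which is an oversimplification of real terrain, the ratio depends only on the two distances:

```python
import math

def du_ratio_db(d_desired_m, d_undesired_m):
    """Desired-to-undesired power ratio in dB, assuming equal transmitter
    powers and free-space (1/d^2) path loss -- a rough sketch only."""
    return 20.0 * math.log10(d_undesired_m / d_desired_m)

# Near one transmitter the other signal is far down; on the midline between
# them, D/U = 0 dB, which is where the interference is worst.
for d1, d2 in [(5_000, 45_000), (15_000, 35_000), (25_000, 25_000)]:
    print(f"{d1/1000:>4.0f} km vs {d2/1000:>4.0f} km -> "
          f"D/U = {du_ratio_db(d1, d2):5.1f} dB")
```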
By the way, these delayed signal overlap issues are exactly the same as what you would see in a static multipath interference scenario. If there is static multipath in the environment (e.g., reflections off walls, buildings, trees, mountains, etc.), the receiver will see multiple copies of the same signal delayed by various amounts and at different power levels. In the days of analog television, these afterimages could be seen as "ghosts" in the picture.
So if DTS networks have these interference issues, can they still work?
Equalizers To The Rescue
One very important feature of the ATSC signal structure is that it includes some equalizer training sequences in the data field sync.
Sidebar
ATSC data is organized into regular blocks. For every 828 symbols' worth of data, there are 4 segment sync symbols (the total length of one segment is 832 symbols). As an analogy to analog systems, you can think of one segment as being equivalent to one scan line of video, and the segment sync as equivalent to the horizontal sync pulse.
For every 312 segments, there is one data field sync (DFS) segment. You can think of the data field sync as the vertical sync pulse or vertical blanking interval in an analog system. One DFS and 312 payload segments (313 segments total) constitute one field. Two fields constitute one frame of data (just like how even and odd fields make one frame in analog TV).
In other words, the ATSC signal is a continuous stream of digital data interspersed with segment syncs and data field syncs at regular intervals.
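If you want to sanity-check those numbers, the arithmetic only takes a few lines (using the standard ~10.762 Msymbols/s symbol rate):

```python
# Back-of-the-envelope check of the ATSC framing numbers from the sidebar.
SYMBOL_RATE = 10.762e6          # approximate 8-VSB symbol rate, symbols/s

SEGMENT_SYMBOLS = 828 + 4       # 828 data symbols + 4 segment sync = 832
SEGMENTS_PER_FIELD = 313        # 312 payload segments + 1 data field sync
FIELDS_PER_FRAME = 2

field_symbols = SEGMENT_SYMBOLS * SEGMENTS_PER_FIELD
print(f"symbols per field: {field_symbols}")                          # 260416
print(f"field duration: {field_symbols / SYMBOL_RATE * 1e3:.1f} ms")  # ~24.2
print(f"frame duration: "
      f"{FIELDS_PER_FRAME * field_symbols / SYMBOL_RATE * 1e3:.1f} ms")
```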
Every data field sync contains some known data patterns (PN511 and PN63) and a few other control bits (VSB mode and extended data). The contents of the data field sync look something like this:
[Diagram: one DFS segment = Segment Sync (4 symbols) | PN511 (511) | PN63 ×3 (189) | VSB Mode (24) | Reserved/Extended (104)]
The fixed PN511 and PN63 symbol sequences in the data field sync (DFS) have a very special property that makes it possible for the receiver to tell if there is any multipath affecting the incoming signal. When the receiver does a pattern search for the known PN511 and PN63 sequences, it can detect the presence of "echo" images and estimate their relative timing delay and magnitude. This information can then feed back into an equalizer, which is a kind of filter that "subtracts out" the unwanted "echo" signal(s).
This has the effect of reconstructing a "cleaned-up" copy of the original signal with most of the multipath corruption removed. The ATSC decoder then has a much better chance of recovering the digital data that was intended to be in the payload. The special properties of the PN511 and PN63 sequences and their relationship to the equalizer is why they are called equalizer training sequences. In ATSC A/153 (the new ATSC standard for mobile and handheld devices), there are going to be even more equalizer training sequences interspersed throughout the data.
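Here's a toy illustration of that pattern search. A random ±1 sequence stands in for the real PN511 defined in A/53, and the echo parameters are made up, but the sliding correlation shows how each copy of the signal produces a distinct peak whose position and height reveal the echo's delay and magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the known PN511 training sequence (a real receiver uses the
# exact sequence defined in A/53; a random +/-1 sequence shows the idea).
pn = rng.choice([-1.0, 1.0], size=511)

# Received training region: the direct signal plus a weaker echo arriving
# 40 symbols late, with a little receiver noise on top.
echo_delay, echo_amp = 40, 0.4
rx = np.zeros(len(pn) + 100)
rx[:len(pn)] += pn
rx[echo_delay:echo_delay + len(pn)] += echo_amp * pn
rx += 0.05 * rng.standard_normal(len(rx))

# Sliding correlation against the known sequence: each strong peak marks
# one copy of the signal; its position gives the delay, its height the
# relative magnitude. This is what feeds the equalizer.
corr = np.correlate(rx, pn, mode="valid") / len(pn)
peaks = np.argsort(corr)[-2:]
for p in sorted(peaks):
    print(f"copy at delay {p} symbols, relative amplitude {corr[p]:.2f}")
```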
In principle, all equalizers are supposed to do the same thing, but in practice, there are many algorithms, techniques, tricks, and trade-offs that might make one equalizer do a better job than another. This may affect things like how strong or how early/late each "echo" can be and how cleanly they are removed from the incoming signal. Since every chip maker wants to differentiate their product from the competition, the specific details of an equalizer's design are usually well-kept trade secrets.
However, the industry realized early on that equalizer performance was an important part of receiver robustness in real-world situations (indoor antennas, random environments, etc.). Receivers that had poor equalizer performance were prone to signal detection problems, picture break-ups, and an overall bad consumer experience. To help with consumer acceptance of the new digital TV standards, the ATSC released a recommended practice document (A/74) that includes suggested minimum equalizer performance levels.
One of the big improvements in every "generation" of ATSC chipsets is their equalizer performance. In most cases, you will find that newer ATSC receivers do a better job of dealing with multipath than older ones, and a lot of the credit goes to better equalizer design.
So what does this mean for DTS networks? Well, it means that when you have multiple copies of the same signal coming into the ATSC receiver, the receiver's equalizer will be able to "subtract out" or "correct" some of the self-induced interference caused by having multiple transmitters. The fact that all the DTS transmitters are transmitting the same thing makes them all look like echoes of each other, and since equalizers are designed to "undo" the effects of multipath, this helps the DTS coverage in areas where signal overlap is present.
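Continuing the earlier toy model, the core "subtract out the echo" idea can be shown in a few lines. A real equalizer is an adaptive filter that handles many echoes of unknown and drifting parameters at once; this sketch assumes a single echo whose delay and amplitude are already known (say, from the correlation search above):

```python
import numpy as np

def cancel_single_echo(y, a, d):
    """Undo y[n] = x[n] + a*x[n-d] for one known echo (delay d, amplitude a).

    This is the inverse (IIR) filter x[n] = y[n] - a*x[n-d], stable for
    |a| < 1. Real equalizers are adaptive and handle many echoes at once;
    this shows only the core idea.
    """
    x_hat = np.array(y, dtype=float)
    for n in range(d, len(y)):
        x_hat[n] -= a * x_hat[n - d]
    return x_hat

# Continuing the toy model: a random 8-VSB-like stream plus a 50% echo.
rng = np.random.default_rng(2)
x = rng.choice([-7.0, -5.0, -3.0, -1.0, 1.0, 3.0, 5.0, 7.0], size=10_000)
a, d = 0.5, 180
y = x.copy()
y[d:] += a * x[:-d]

x_hat = cancel_single_echo(y, a, d)
print(f"residual error power: {np.var(x_hat - x):.2e}")   # essentially zero
```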
The bad news is that equalizer performance is still not perfect. Existing equalizers will only search for echoes over a limited range (in the time domain), and they can only "undo" the corruption of echoes up to a certain magnitude. We hope that the trend of improving equalizer performance continues with every generation of receivers, but we're still far from ideal.
When we include consideration of the equalizer, we can see that the coverage does improve a bit. In our simple example, the band of interference that went down the middle gets "repaired" slightly by the actions of the equalizer. For an average equalizer, the coverage map might look more like this:
Differences in equalizer performance might result in larger or smaller areas of "recoverable" coverage area. The "magnitude" of recovery (how close the coverage can be restored back to the "ideal" case) depends on how strong of an echo the equalizer can handle (y-axis in equalizer profiles above). The "width" of the recovered region depends on the time window over which the equalizer functions (x-axis in equalizer profiles above). An equalizer that can handle very strong echoes over a very wide time window will maximize the usable signal margin over the greatest area.
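One way to picture such a profile is as a simple box: any echo whose delay falls inside the equalizer's time window and whose strength is below its magnitude limit gets corrected, and everything else is lost. The numbers below are purely made up for illustration and don't describe any particular chip:

```python
# Toy model of an equalizer "profile" as a simple box: an echo inside the
# time window (x-axis) and below the magnitude limit (y-axis) is assumed
# correctable. All numbers here are illustrative only.

def echo_correctable(delay_us, echo_db,
                     window_us=(-5.0, 45.0), max_echo_db=-1.0):
    """True if an echo (delay in microseconds, level in dB relative to the
    main signal) falls within this hypothetical equalizer's capability."""
    return window_us[0] <= delay_us <= window_us[1] and echo_db <= max_echo_db

# A modest echo well inside the window is fine; a near-0 dB echo, or one
# far outside the window, is not -- that's the band of lost coverage.
for delay, level in [(16.7, -6.0), (16.7, -0.5), (80.0, -6.0)]:
    verdict = "correctable" if echo_correctable(delay, level) else "not correctable"
    print(f"delay {delay:5.1f} us, echo {level:5.1f} dB -> {verdict}")
```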