IP contribution and distribution

There is much talk about IP infrastructures at the moment. Typically, the talk centres on live production and interoperability between systems. This is, of course, extremely important. But there is another area in which IP has the potential to deliver transformative change.

To get content from a remote location back to the broadcast centre, traditionally you booked a contribution circuit (called backhaul in the USA). This was usually just that: a single circuit. So if you were covering, say, a major football match, you had a single feed from the outside broadcast truck back to the studio.

Major venues would have broadcast circuits permanently installed. These were video cables, and thus of no use for any other purpose, which meant that the provider – usually the local telco – had to charge a significant sum to cover the costs of installation and provision.

Where there were no video circuits available, productions were forced to use line-of-sight microwave links (which were limited in range and location) or satellite uplinks. Like fixed links, both radio solutions were expensive and offered limited resilience, and satellite links added significant latency.

With the arrival of real-time IP connectivity for professional audio and video, all this changed. Once converted to IP, the feed no longer needed a specialist link and could be carried as data over any bearer with sufficient bandwidth. In particular, as telcos installed high-capacity dark fibre across their territories, notably in the metropolitan areas that are home to major sports stadiums, the stream could be carried as data alongside other traffic.

This saved cost, as telcos tended to charge for the amount of data carried, so broadcasters only paid for what they used. It also increased resilience, as geographically diverse redundant paths could be used, with the receiving device switching seamlessly between the strongest signals.
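The redundant-path idea can be sketched in a few lines. The snippet below is an illustrative model, not a real implementation of a standard such as SMPTE ST 2022-7: it merges two copies of the same packet stream by sequence number, so a packet lost on one path is recovered from the other and duplicates are dropped.

```python
def merge_redundant_paths(path_a, path_b):
    """Combine two redundant streams of (seq, payload) packets.

    Duplicates are dropped; a packet lost on one path is recovered
    from the other. Packets are returned in sequence order.
    """
    received = {}
    for seq, payload in list(path_a) + list(path_b):
        received.setdefault(seq, payload)  # first copy wins, duplicate ignored
    return [received[seq] for seq in sorted(received)]

# Path A drops packet 2, path B drops packet 4; the merged output is complete.
path_a = [(1, "f1"), (3, "f3"), (4, "f4")]
path_b = [(1, "f1"), (2, "f2"), (3, "f3")]
print(merge_redundant_paths(path_a, path_b))  # -> ['f1', 'f2', 'f3', 'f4']
```

A real receiver does this per packet in real time with reorder buffers and timeouts, but the principle is the same: as long as each packet arrives on at least one path, the output is seamless.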

Multiple feeds

Successfully delivering contribution over IP depended on good signal routing, and on high quality codecs to achieve broadcast quality at the optimum bitrate. While H.264 was widely used, and H.265 is now being considered, this was seen as an ideal application for JPEG2000. This codec provides high video quality at a 10:1 compression ratio for contribution applications, and its wavelet algorithms are generally considered to degrade more gracefully than the discrete cosine transforms used in MPEG-type compression.
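A back-of-envelope calculation shows what a 10:1 contribution ratio means in practice. The figures below are illustrative (1080p50, 4:2:2 chroma subsampling, 10-bit samples), not a specification:

```python
# Back-of-envelope bitrate for a 10:1 JPEG2000 contribution feed.
width, height, fps = 1920, 1080, 50
samples_per_pixel = 2        # 4:2:2 -> one luma sample plus half of each chroma
bits_per_sample = 10
compression_ratio = 10       # contribution-grade JPEG2000

uncompressed_bps = width * height * samples_per_pixel * bits_per_sample * fps
compressed_bps = uncompressed_bps / compression_ratio

print(f"uncompressed: {uncompressed_bps / 1e9:.2f} Gbit/s")  # 2.07 Gbit/s
print(f"compressed:   {compressed_bps / 1e6:.0f} Mbit/s")    # 207 Mbit/s
```

Roughly 200 Mbit/s per feed is easily carried on dark fibre alongside other traffic, where the uncompressed 2 Gbit/s signal would once have demanded a dedicated video circuit.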

The same IP workflow also allows for uncompressed but packetised delivery, when no quality compromises can be tolerated.

Once broadcasters accepted the concept of mild compression on contribution circuits, and IP carriage of those streams, the obvious question became: can we carry more than one stream from the venue to the studio? It is this that is transformative for production.

It makes it possible to deliver multiple parallel feeds from an event. For rugby or football, you could have different cuts for each team. For an athletics event you could have separate track- and field-oriented feeds. You could provide an international feed alongside a domestic production which included an on-site studio for discussions and presentation.

It also means that you can deliver alternative content alongside the main feed, allowing the rights-holder to package an event in different ways for different platforms. All these provide new ways to engage with the audience, and to monetise the coverage of the event.

Local and remote distribution

Another important requirement is to deliver multiple feeds at the location. An obvious use case is for a video referee, who will want to look at multiple camera angles. Rigging a large number of video feeds is challenging and time-consuming: running in a single fibre is much simpler.

While video referees are usually located at the event, in time it may be that major sports will follow the lead of the NBA in America, which has a centralised video referee centre collecting feeds from all simultaneous games.

Broadcasters frequently provide courtesy feeds to stadium screens, and to other areas in the venue such as the press box and radio commentary positions. Again, distribution of multiple feeds over fibre is much easier to provide.

This concept can then be carried on to distribution, the delivery of streams from the broadcaster. Again, this is traditionally a single channel output over a video circuit, which is then adapted for each platform at the individual headend. So the broadcast signal is compressed by hardware at the terrestrial, cable and satellite headends before multiplexing; it is also transcoded for video-on-demand storage, and for live streaming across multiple platforms.

This architecture is inherently expensive, because it requires dedicated devices for each stream at each headend. It is also a quality risk, because the transcoders are remote and therefore outside the physical control of the broadcaster.

This same architecture, carrying multiple video streams over one or more strands of dark fibre, can be applied to distribution. It allows the broadcaster or content provider to maintain quality control by keeping all the encoding in house, distributing every required format ready packaged.

Comprimato Live transcoder

Software encoding

Central to moving these ideas from a theoretical discussion to a practical solution is the ability to encode and stream potentially large numbers of high quality streams in a cost-effective manner.

Comprimato’s core specialist skill lies in implementing high quality codecs in software, to run on standard IT platforms, and particularly on GPUs. This has allowed us to develop the Comprimato Live transcoder, which is a model for how this solution can now be delivered practically.

It runs on any standard x64 server architecture, which can be a physical device or virtualised in a data centre. The software supports up to 70 full HD streams in a single 1U server, so it is extremely compact.

All the functionality is implemented in software: it requires no additional proprietary hardware. The software is agile: new streams can be added in less than a minute. It is also readily extensible, allowing new formats like HDR or 4K to be incorporated as soon as the business case is ready.

Finally, it is codec agnostic, supporting MPEG-2, H.264, H.265, Google VP8 and VP9, and JPEG2000. Individual output streams can provide adaptive bitrate delivery as required.
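Adaptive bitrate delivery means encoding each output as a ladder of renditions and letting the player pick the highest rung its measured throughput can sustain. The ladder and selection logic below are a hypothetical illustration, not Comprimato product settings:

```python
# Hypothetical ABR ladder for one output stream (illustrative rates only).
ABR_LADDER = [
    {"name": "1080p", "bitrate_kbps": 6000},
    {"name": "720p",  "bitrate_kbps": 3000},
    {"name": "480p",  "bitrate_kbps": 1500},
    {"name": "360p",  "bitrate_kbps": 800},
]

def pick_rendition(measured_kbps):
    """Return the highest rung whose bitrate fits the measured throughput."""
    for rung in ABR_LADDER:                   # ordered highest first
        if rung["bitrate_kbps"] <= measured_kbps:
            return rung["name"]
    return ABR_LADDER[-1]["name"]             # fall back to the lowest rung

print(pick_rendition(4200))  # -> 720p
print(pick_rendition(500))   # -> 360p
```

In practice the selection runs in the player, switching rungs segment by segment, but the transcoder must produce every rung of the ladder, which is exactly where per-stream encoding density matters.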

This software approach allows the Live transcoder to deliver high quality at low latency: typically less than 700 ms end to end in a video delivery chain. It is also extremely cost-effective, eliminating the capital cost of multiple encoders and decoders. By running on industry-standard servers, the total cost is reduced to hundreds of dollars a stream.
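The "hundreds of dollars a stream" claim follows from simple division. The server price below is an assumption for illustration, not a quoted figure; only the 70-streams-per-1U density comes from the text above:

```python
# Illustrative cost-per-stream arithmetic for a software transcoder.
server_cost_usd = 15_000      # hypothetical price for a 1U server (assumption)
streams_per_server = 70       # full HD streams per 1U unit, as stated above

cost_per_stream = server_cost_usd / streams_per_server
print(f"${cost_per_stream:.0f} per stream")  # ~$214, i.e. hundreds of dollars
```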

IP connectivity provides new flexibility in both contribution and distribution circuits, allowing producers to create more targeted, engaging content while keeping quality carefully controlled by bringing all the processing in house. Software systems running on standardised hardware benefit from the IT industry's continuous improvement in performance and, through regular software updates, can add new functionality quickly, securely and cost-effectively.



