A Smarter Approach to Streaming
Artificial Intelligence (AI) used to be the subject of movies. Steven Spielberg even used it as the title of his 2001 futuristic classic. However, now AI is being used in real life to analyse thousands of assets as part of the streaming process. In doing this, AI has been shown to save operators around 30% of content delivery costs, while also improving the quality of this delivery.
Most operators now find that traditional streaming can result in buffering and other delays. Research by Conviva shows that the average viewer watching a half-hour show spends less than 18 seconds waiting for video to rebuffer. Even this short time, however, is too long when consumer expectations are high and the market is so competitive.
Currently, the industry debate centres on HEVC versus AV1 and which codec operators will prefer in years to come. AV1 is broadly on a par with HEVC in compression efficiency, but aims to challenge HEVC for OTT application deployment by offering a royalty-free licensing scheme, with OTT VOD as its main use case.
As AV1’s install base isn’t set to be significant until 2020, the more imminent conversation within the industry looks to be around adaptive streaming and its successor, content adaptive streaming. Adaptive streaming works by detecting a user’s bandwidth and CPU capacity in real time and adjusting the quality of a video stream accordingly. Although adaptive streaming is widely used, it means that for roughly half the content the bitrate will be too high, and for the other half too low. If it’s too high, playback may stall; if it’s too low, picture quality suffers. Either way, the content is never fully optimised.
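The bandwidth-driven selection described above can be sketched in a few lines. This is a minimal illustration, not any real player's logic; the bitrate ladder, safety factor, and function names are hypothetical.

```python
# Hypothetical bitrate ladder (kbps) for a set of encoded renditions.
LADDER_KBPS = [400, 800, 1600, 3200, 6400]

def pick_rendition(measured_bandwidth_kbps, safety_factor=0.8):
    """Choose the highest rendition that fits within a safety margin
    of the measured throughput, falling back to the lowest profile."""
    budget = measured_bandwidth_kbps * safety_factor
    viable = [b for b in LADDER_KBPS if b <= budget]
    return max(viable) if viable else LADDER_KBPS[0]

print(pick_rendition(5000))  # 3200: the 6400 kbps rendition exceeds the budget
```

Note that the decision here depends only on the connection, never on the content itself, which is exactly the shortfall content adaptive streaming addresses.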
As a result, industry pioneers such as Netflix have been working on remedying this shortfall. Netflix has been leading the way with per-title encoding and even recently announced per-shot encoding, but these are proprietary technologies and not available to other operators.
Recognising this shortfall, other developers have been working on encoding that adjusts bitrates based on the complexity of the content rather than just the internet connection. The result is content adaptive streaming, which uses AI to compute all the necessary information, such as motion estimation, to make intelligent allocation decisions. Using a variable bitrate to reach constant quality allows bits to be saved when complexity drops on slow scenes – using fewer profiles on easier content.
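The idea of spending bits where complexity demands them can be sketched as a simple per-scene allocation. The complexity scores and the linear rate model below are illustrative assumptions only, not a real encoder's rate control.

```python
def allocate_bitrates(scene_complexities, base_kbps=1000, max_kbps=6000):
    """Map each scene's complexity score (0..1, e.g. derived from
    motion estimation) to a bitrate, instead of encoding every scene
    at one fixed rate."""
    return [min(max_kbps, int(base_kbps + c * (max_kbps - base_kbps)))
            for c in scene_complexities]

# A slow talking-head scene vs. a fast action scene:
print(allocate_bitrates([0.1, 0.9]))  # [1500, 5500]
```

The easy scene gets far fewer bits at the same perceived quality, which is where the bandwidth and storage savings come from.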
The traditional approach keeps chunks at fixed lengths. Because the ecosystem usually requires each chunk to start with an I-frame so that profile switches can occur between chunks, fixed-size chunks imply arbitrary I-frame placement. A scene cut just before a chunking point therefore results in a major compression inefficiency, as the new scene's image is encoded twice: once at the cut and again at the chunk boundary.
Content adaptive streaming combines a scene cut detection algorithm in the video encoder with rules to keep chunk size reasonable and minimise drift, in order to prepare the asset for more efficient packaging. This not only brings cost saving benefits due to reduced traffic, storage and other overheads, but also improves the quality of experience for the consumer.
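The chunking strategy described above can be sketched as aligning boundaries with detected scene cuts while clamping chunk duration. This is a simplified illustration under assumed constraints; the timestamps, window sizes, and function names are hypothetical, not a real packager's rules.

```python
def place_boundaries(scene_cuts_s, total_s, min_s=2.0, max_s=6.0):
    """Return chunk boundary timestamps (seconds), snapped to a scene
    cut whenever one falls inside the allowed [min_s, max_s] window,
    so the cut's I-frame also serves as the chunk's I-frame."""
    boundaries, last = [], 0.0
    cuts = iter(sorted(scene_cuts_s))
    pending = next(cuts, None)
    while total_s - last > max_s:
        # Skip cuts too close to the previous boundary (chunk too short).
        while pending is not None and pending < last + min_s:
            pending = next(cuts, None)
        if pending is not None and pending <= last + max_s:
            last = pending            # align the boundary with the scene cut
            pending = next(cuts, None)
        else:
            last = last + max_s       # no usable cut: fall back to max length
        boundaries.append(round(last, 3))
    return boundaries

print(place_boundaries([4.5, 9.0, 20.0], total_s=24.0))
# [4.5, 9.0, 15.0, 20.0] -- two cuts reused as boundaries, one fallback split
```

Chunks stay within bounds (minimising drift from a nominal size) while scene-cut I-frames are reused as chunk starts, avoiding the double encode.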
In reality, the use of AI in the industry isn’t new – it has been used in the form of machine learning for years. However, this application uses it to full advantage – and saves substantial costs while ensuring quality of delivery too. There’s no reason why it shouldn’t come out top in the next industry debate.