Implementing AI In Media Production Takes Patience
Today’s content creators are challenged like never before to produce material for a multitude of distribution platforms in the shortest amount of time and with the least amount of effort. To do this, they need tools and production systems that streamline the processes involved in creating new materials and repurposing and monetizing existing assets. These tools must offer ease of use, cloud-based collaboration and automated capabilities to enable a producer or disparately located production team to focus on the art of visual storytelling, not the technology required to create it.
The emergence of artificial intelligence (AI) in the media industry has brought many benefits, both to organizations with large-scale archives – where it enables easy search and retrieval of assets – and to those that produce multi-venue sporting events via automated, remotely produced workflows with far smaller crews than were previously required.
However, it’s also come to light that AI can’t happen without patient and accurate machine learning, and that takes time. Software has to be integrated to capture recurring events and learn from a database of operations or commands. Metadata has to be leveraged in the most efficient way. Only then can the overall system complete those repetitive tasks quickly, having learned what was done in the past and found a better way to replicate it.
Like humans, machines must first gain an understanding of what specifically is being asked of them before they can produce the desired result. Most of the value in media lies in the production of complex content that requires judgment, interpretation, creativity, and communication – areas where humans continue to dominate algorithms and will do so for many years to come. Machines will have to be taught to acquire this intelligence.
This is not to say that AI won’t eventually replace humans for many of today’s media tasks. Take, for example, the writing of news content. When a magnitude 4.4 earthquake recently shook southern California, the first story about the quake on the Los Angeles Times’ website – a brief, factual account posted within minutes – was written entirely by an algorithm. Since then, “robot reporters” have produced stories in major news outlets on topics ranging from minor league baseball games to corporate earnings announcements. Some have speculated that future media will consist largely of content produced by AI – perhaps even this column! This remains to be seen.
The addition of AI can also help increase productivity among production teams that are struggling to create ever more content with the same resources. The process often starts with locating a series of audio, video and still assets related to a story or live sports production. This requires the use of a state-of-the-art media asset management (MAM) system. Most MAM platforms ingest an asset – which may or may not have metadata associated with it – and fall back on time code as the identifier. Other solutions allow users to search content at a more granular, contextual level by leveraging metadata from the outset, offering the ability to literally find a needle in a haystack.
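The distinction between timecode-based lookup and contextual metadata search can be sketched in a few lines. This is a minimal illustration, not a real MAM API – the `Asset` class, field names and sample catalog are all assumptions made for the example.

```python
# Sketch: timecode lookup vs. contextual metadata search in a
# MAM-style catalog. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    timecode_in: str                      # e.g. "01:00:00:00"
    metadata: dict = field(default_factory=dict)

def find_by_timecode(catalog, timecode):
    """Timecode lookup: the fallback when no descriptive metadata exists."""
    return [a for a in catalog if a.timecode_in == timecode]

def find_by_metadata(catalog, **criteria):
    """Contextual search: an asset must match every requested field."""
    return [a for a in catalog
            if all(a.metadata.get(k) == v for k, v in criteria.items())]

catalog = [
    Asset("clip-001", "01:00:00:00", {"sport": "soccer", "event": "goal"}),
    Asset("clip-002", "01:04:10:00", {"sport": "soccer", "event": "corner"}),
    Asset("clip-003", "02:10:00:00", {"sport": "basketball", "event": "dunk"}),
]

goals = find_by_metadata(catalog, sport="soccer", event="goal")
print([a.asset_id for a in goals])   # ['clip-001']
```

With rich metadata attached at ingest, the second style of query is what turns a massive archive from a pile of timecoded files into something searchable by meaning.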
But by tightly integrating a MAM platform with specialized machine learning software, users can easily find the exact media they need, while gaining new capabilities that allow broadcasters, content creators and distributors to repurpose and monetize their archives in ways not possible before. A single piece of content can now be searched for and retrieved quickly across a massive archive using facial recognition, color schemes and other methods (for example, whether the subject in question is holding a particular object).
Most advanced MAM systems – which include separate modules for ingest, transcoding, proxy editing, logging, management, metadata, audit trails, reporting, and much more – are based on a metadata-centric model that uses a series of data APIs to marry machine learning and human logging to deliver highly accurate results. By connecting these two technologies, media organizations can enable their teams to work more efficiently, produce content faster, make changes more quickly and launch new services on one integrated platform.
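One way to picture the marriage of machine learning and human logging is as a metadata merge in which a human logger’s entries correct and override automated tags. This is a hedged sketch of that idea only – the field names and precedence rule are assumptions, not any vendor’s actual data model.

```python
# Sketch: combining machine-generated tags with human logger entries
# for a single asset. Assumption: human entries win on conflict,
# ML tags fill the gaps the logger didn't cover.
def merge_metadata(ml_tags, human_log):
    """Return one metadata record; human logging overrides ML output."""
    merged = dict(ml_tags)
    merged.update(human_log)   # human values replace ML values on shared keys
    return merged

ml_tags   = {"speaker": "unknown", "location": "studio", "topic": "weather"}
human_log = {"speaker": "Jane Smith"}

record = merge_metadata(ml_tags, human_log)
print(record["speaker"])   # Jane Smith  (human correction kept)
print(record["topic"])     # weather     (ML tag retained)
```

The design choice is the point: automation supplies breadth across thousands of assets, while human judgment supplies the accuracy the system then learns from.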
Overall, these AI-capable systems make it possible to accurately analyze massive amounts of diverse information – audio, video and text – from multiple sources and formats in a faster, more scalable and more cost-effective way than humans can. And given the pace of technological development, implementing any single technology is a challenge, because it eventually becomes outdated. Most single-point solutions also cannot be expanded to add additional applications or cognitive capabilities.
To stay competitive in their respective markets, broadcasters and media organizations have to rethink their production strategies and streamline their handling of archived content in order to improve productivity. They must also work with production teams to figure out the best methods for on-screen directing, file serving and master control operations and then “teach” their technology platforms how to get it done quickly and accurately. The importance of metadata in ensuring accuracy cannot be overstated.
With the potential to improve every part of the production lifecycle, AI can be a powerful tool, but it must be implemented carefully and then given time to work. With it, media professionals gain the flexibility to access their content quickly anywhere via on-premises or cloud-based architectures and can begin working immediately. They can then put those assets to work in the most efficient way.
Patience and hard work: sounds like the human formula for advanced learning – machine learning, that is.