Storage Policy by Application


Configuring an end-to-end system storage solution for a new installation can be challenging. Systems used to be based on the theory that “storage tiers” were key dividing elements. Today, challenges include managing the movement of data between these tiers and adapting storage types for the best uses and applications.

Storage tier assignment was a model that separated “continuously active” storage from “not so active” storage, i.e., fast versus slow or bulk storage. Legacy tiered designs continually pushed data between tiers based on activities known at design time. As capacity requirements increased, more disks were “thrown” at the system. Mixing dissimilar disk capacities created performance differentials, and performance measurement became unpredictable.


As the cloud and other sophisticated storage architectures have come into play, a different science has emerged. Previously dedicated “hierarchical architectures” have changed in functionality to take advantage of software solutions that manage needs, capacity, performance, and schedule. By carefully modeling with the best data storage products, new solutions can be built using complex prediction and automation of data activity. It is no longer practical to use a single piece of storage, or the cloud alone, to support modern media production activities.

Manage by type to meet the need
The highly sophisticated frameworks of today’s media asset management (MAM) solutions allow storage administrators to apply variations in storage arrays across groups of individual topologies and workflows (Fig. 1). Software approaches paired with fast flash memory and modern interconnects (NVMe, PCIe) now allow the administrator to select workflow segments and combine them into independent systems appropriately configured for the organization’s specific workflows.

Fig. 1: This conceptual diagram depicts the relative scale of storage systems, performance expectations, and general workloads for a modern architecture related to news production with end-to-end systemization, from ingest (L) to distribution (R). Similar representations could be used when planning or expanding a new storage system for media-centric operations. (Image credit: Karl Paulsen)
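The “management by type” idea above can be reduced to a simple policy table that maps workflow segments onto storage pools. The sketch below is purely illustrative — the pool names, tier labels, throughput figures, and workflow categories are assumptions for the example, not any vendor’s API.

```python
# Hypothetical sketch: assigning workflow segments to storage pools by need.
# All names and numbers below are invented for illustration.

STORAGE_POOLS = {
    "nvme_flash":  {"tier": "fast",   "throughput_gbps": 25.0},
    "sas_raid":    {"tier": "medium", "throughput_gbps": 6.0},
    "object_bulk": {"tier": "bulk",   "throughput_gbps": 1.0},
}

WORKFLOW_POLICY = {
    "ingest":    "nvme_flash",   # live capture needs the fastest writes
    "editorial": "nvme_flash",   # many concurrent reads/writes
    "proxy":     "sas_raid",     # lighter bandwidth requirement
    "archive":   "object_bulk",  # capacity matters more than speed
}

def pool_for(workflow: str) -> str:
    """Return the storage pool configured for a workflow segment."""
    # Unknown workflows default to bulk storage rather than failing.
    return WORKFLOW_POLICY.get(workflow, "object_bulk")
```

The point of keeping the policy declarative is that adding a workflow segment, or retargeting one to a new array, becomes a configuration change rather than a redesign.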

Add to that a growing proportion of media technologies migrating to the cloud, and hybrid choices open up wider possibilities among on-premises and cloud-based workflows.

The cloud supports ingest operations by funneling content from many geographic locations. Depending on need, placing accumulated data in a cloud platform offers many advantages, including long-term storage of content in its raw form as well as the ability to consolidate content into a single virtualized allocation.

Knowing and defining the variety of steps required to produce deliverable content is key to determining whether data is stored on-premises, in the cloud, or both. Deciding when, why, and where to place data, weighed against the efficiency or performance prospects of specific or immediate workflows, is what defines “management by type and by need.”
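That placement decision can be sketched as a small helper that chooses on-premises, cloud, or both. The inputs and thresholds here are assumptions chosen for the example, not a prescribed rule set.

```python
# Illustrative decision helper for "management by type and by need":
# choose on-premises, cloud, or both, from a few workflow traits.
# The trait names are assumptions for this sketch.

def placement(latency_sensitive: bool,
              long_term_retention: bool,
              shared_across_sites: bool) -> set:
    targets = set()
    if latency_sensitive:
        targets.add("on_prem")      # immediate workflows stay local
    if long_term_retention or shared_across_sites:
        targets.add("cloud")        # raw archive / multi-site consolidation
    return targets or {"on_prem"}   # default to local storage
```

A breaking-news edit would land on-premises only; raw camera originals destined for the archive would land in both places.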

Different storage structures and elements
Organizations with varying needs may need immediate and fast access to content at speeds beyond the capabilities of a “cloud-only” architecture; some workflows may not need the additional features, functionality, or capabilities offered in a cloud solution.

For media organizations, a workflow can simply push “raw” content straight into editorial production. When faced with a breaking-news or live-news requirement, they may need to push new content directly to broadcast while other activities (including editorial) happen in parallel.

Often, workflows need to support multiple parallel processes, repetitive edits, and documentary-like preparation for specials or later features. In “live” production, raw content often requires at least a “technical preview” before being broadcast “live”. Producer approval or other reviews may also be required prior to live streaming.

Proxy generation, performed while simultaneously moving data from an ingest cache to downstream services, could gain efficiency from different storage platforms. While some workflows may only need tops-and-tails segmentation, others may only require mid-sequence removal or audio harmonization.
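Running the proxy transcode and the cache-to-storage move concurrently, so each I/O path can hit its own storage platform, might look like the following sketch. The `make_proxy` and `transfer` functions are placeholders standing in for a real transcoder and a real copy operation; the file names and target name are invented.

```python
# Sketch only: generate a proxy while the high-res file moves out of the
# ingest cache, so the two I/O paths can use different storage platforms.
from concurrent.futures import ThreadPoolExecutor

def make_proxy(clip: str) -> str:
    # Placeholder for a real transcode to a streamable proxy format.
    return clip.replace(".mxf", "_proxy.mp4")

def transfer(clip: str, target: str) -> str:
    # Placeholder for a real copy/move to the editorial store.
    return f"{target}/{clip}"

def ingest_step(clip: str) -> tuple:
    with ThreadPoolExecutor(max_workers=2) as pool:
        proxy_job = pool.submit(make_proxy, clip)
        move_job = pool.submit(transfer, clip, "editorial_store")
        return proxy_job.result(), move_job.result()
```

Because the two jobs target different arrays, neither steals bandwidth from the other — which is the efficiency the text describes.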

These examples of “fast or slow” workflows require the ability to quickly take content, turn it into a “streamable” format, and assemble it for purposes that demand fast delivery. Other content follows a longer markup path or, more conventionally, moves to editorial caching or short-term storage for other production purposes.

Pay attention to metadata
Tagging or associating metadata for any number of workflows or production/legal mandates is not uncommon. Metadata collection covers activities ranging from automated tagging and scene change detection to detailed analysis or evaluation of content with respect to people, place or purpose. These variants gain additional benefits by being placed on different types of storage depending on the level of effort required for each workflow.
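One way to realize “different storage depending on level of effort” is to score each metadata workload and map the score to a tier. The task names, effort scores, and tier labels below are invented for the sketch.

```python
# Hedged sketch: place metadata workloads on storage matched to their
# level of effort. Scores and tier names are assumptions for illustration.

METADATA_EFFORT = {
    "auto_tagging": 1,          # lightweight, can run against bulk storage
    "scene_change": 2,          # frame-level reads, medium bandwidth
    "face_place_analysis": 3,   # heavy, repeated full-resolution reads
}

def storage_for_metadata(task: str) -> str:
    effort = METADATA_EFFORT.get(task, 1)   # unknown tasks assumed light
    return {1: "bulk", 2: "medium", 3: "fast"}[effort]
```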

Such differentiation by activity helps with load balancing and can mitigate the “bottlenecks” that result from other processes involving multiple reads/writes or transfers between processing platforms. Other methodologies, such as Kubernetes and microservices frameworks, leverage AI principles to improve performance and speed up operations.

Depending on the workflow, data can be repositioned across fast disk arrays based on the best system approach using available targets. Storage that can render content in a particular format, managed by a MAM, and send it to the cloud or directly to air ahead of the edit may also be sought. Thus, we recognize that “not all storage is created equal.”

Proactively monitor
High-availability storage systems must be “monitored” to ensure that bottlenecks, over-provisioning, or other random events do not occur. Administrators who manage storage pools, manipulate best data paths, and proactively leverage file-based workflow activities keep tabs on storage volumes, overall system bandwidth, and the management of these processes.

Some less intense or non-immediate activities take longer to complete and are better shifted away from times when operations requiring higher performance must be processed immediately. For example, if the storage volumes associated with editing demand more reads and writes than the read-only load of transfers, move those less important transfer activities to nightly periods when fewer edits occur.
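The nightly-shift rule just described can be expressed as a tiny scheduling check. The 22:00–05:00 off-peak window and the job categories are assumptions for the sketch.

```python
# Sketch of shifting read-only transfer jobs to a nightly window so the
# edit volumes keep their read/write bandwidth during the day.
# The 22:00-05:00 window is an assumed off-peak period.
from datetime import time

OFF_PEAK_START, OFF_PEAK_END = time(22, 0), time(5, 0)

def run_now(job_kind: str, now: time) -> bool:
    """Edits run immediately; bulk transfers wait for the off-peak window."""
    if job_kind == "edit":
        return True
    # The off-peak window wraps past midnight, hence the OR.
    return now >= OFF_PEAK_START or now <= OFF_PEAK_END
```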

Administrators should review logs regularly to ensure availability, system load, and security are proactively managed.

Suggestions include scheduling upgrades for times when high availability is not needed. Establish a hierarchy of notifications and reports to determine when periods of peak performance require full accessibility. High-availability work periods, if proactively monitored and reported, can support most maintenance operations without downtime, and maintenance can occur at a flexible, convenient time based on a routine schedule.
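A hierarchy of notifications like the one suggested above might be sketched as a severity ladder keyed to system load and whether a peak-performance window is active. The thresholds and level names are invented for this example.

```python
# Illustrative notification hierarchy: escalate only when peak-performance
# periods need full accessibility. Thresholds and levels are assumptions.

def escalation(load_pct: float, peak_period: bool) -> str:
    if peak_period and load_pct > 90:
        return "page"      # a peak window is at risk: alert someone now
    if load_pct > 90:
        return "notify"    # high load off-peak: notify, no urgency
    if load_pct > 70:
        return "report"    # worth a line in the daily report
    return "log"           # routine: log it for periodic review
```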

Extreme hyper-scalability
Selecting the right storage components requires understanding the entire media ecosystem. Workflows vary, so storage system administrators must engage components scaled to handle average workloads while providing those “hyper-buses” through which the next level of availability can be activated if necessary. Flexibility and capacity overhead for burst periods should be considered across different workflows.

Speed and throughput are generally expected during “peak times,” but not necessarily at all times. When a workflow exceeds or approaches the limits of your storage infrastructure’s routine capacity, the architecture must be called upon to “spin up” to a higher level to meet short-term needs. Shifting when these activities take place (i.e., “out of hours”) can mitigate the risk of bottlenecks during peak periods.

Some vendors can autonomously improve system performance using “capacity module” extensions. This additional support is typically configured as flash storage in an NVMe-over-PCIe architecture. These modules do not need to be deployed in all storage subsystems and are likely relegated only to your “highest performance level” activities. Plan for potential needs for massive scalability, also known as “scale-up and scale-out”; this extra power may be needed for one-time or major events (e.g., the Super Bowl, the Final Four, or breaking news).
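Sizing such a burst works out to simple arithmetic: how far demand exceeds base throughput, divided by the throughput of one extension module, rounded up. This is not any vendor’s API — the function name and the 10 Gbps per-module figure are assumptions for the sketch.

```python
# Sketch, not a vendor API: decide how many "capacity module" extensions
# to bring online for a burst event. The 10 Gbps per module is assumed.
import math

def modules_needed(demand_gbps: float, base_gbps: float,
                   module_gbps: float = 10.0) -> int:
    """Extra modules required to cover demand beyond base throughput."""
    shortfall = demand_gbps - base_gbps
    if shortfall <= 0:
        return 0                # base system already covers the demand
    return math.ceil(shortfall / module_gbps)
```

For a one-time event demanding 45 Gbps against a 25 Gbps base, two assumed 10 Gbps modules cover the shortfall; they can be released when the event ends.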

We have only scratched the surface of modern intelligent storage management and its details. When considering an upgrade or a new storage platform for a current or new installation, be sure to account for the ongoing competitive changes occurring in the media storage industry. You may be surprised how many new additions to on-premises storage are changing the capabilities that are essential to multimedia production.

