Centralcasting is a product of our times: the need to supply many channels and the technology to deliver them nationwide and much further afield. Any centralcast operation must have the storage, playout facilities, asset management and telecommunications to deliver programmes on a large scale. With its entire library of programme assets to hand, such an operation could expand by opening services in new, distant markets.
From several aspects, operating further afield would look attractive were it not for the extra costs. Getting the video there might involve the expense of multiple satellite hops, and regionalising the station’s output for advertising, channel branding, time-shift and subtitles adds more. Ultimately, centralcasting is about reducing costs by using fewer staff and less technical equipment to service, for example, a whole TV network. The cost of expanding into another region would be examined very carefully and weighed against the potential revenue, which is mostly raised from advertising aimed at the new audience.
Increasing competition is squeezing channels’ advertising revenues, and they will be looking to cut costs in all areas – including playout. Such calculations are necessary, but sticking to the centralcasting ‘norm’ is not. ‘Our times’ is a moving target, and modern technology can change the workflow for the operators and the money for the accountants. To address areas currently seen as uneconomic due to costs and low potential viewing numbers, a fresh approach is needed: one that rebalances the economics in favour of a fully branded service with local content, while still running the station as part of the centralcasting operation.
The whole ethos of centralcasting revolves around a central store for all programmes, played out to the network’s stations according to each local channel’s schedule. This workflow assumes that the cost of replay, in equipment and its operation, is high. That was true for VTRs, but today computer hard-disc players are low cost and run perfectly well unattended. Centralcasters now hold their media on central disc-based stores for lower equipment and running costs.
Given the above, it could be better to place some storage at the remote stations and transfer programmes, and other material, ahead of transmission time. This is then replayed to air from the local storage, according to the station’s schedule. Distributed storage makes possible many changes to the classic centralcast model. Given solid reliability and more automation, the remote station can operate unattended, further reducing costs.
The traditional centralcaster’s live playout to each supported channel may no longer be required. For file-based media, replay from the central store no longer has to be in real time. This opens the door to using the internet as the only link to the remote station. It can be the sole delivery medium for content, schedules, control, monitoring, subtitles and even for sending local content back to the broadcast centre. Unbounded by satellite or fibre, this link reaches worldwide at low cost.
Exactly what data speed is required depends on the media format used at the remote location, the amount of new media needed per day, the bit rate chosen for playout and the available internet capacity. Programme data rates for MPEG4 IBP are typically 4 to 6 Mb/s for SD and 8 to 12 Mb/s for HD (add 50% for MPEG2) – the more the better. Another factor is the rate at which programmes are churned: repeating the same material all week requires far less data transfer than changing it every day. The internet provider, the grade of service (consumer, business, etc.) and the bit rate can be selected according to data transfer volumes and budgets.
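The sizing exercise above can be sketched with some simple arithmetic. This is an illustrative back-of-envelope calculation only; the bit rates and hours of fresh content are example figures, not recommendations.

```python
# Rough estimate of the internet capacity needed to refresh a remote
# station's local store. All figures here are illustrative assumptions.

def daily_transfer_gb(hours_of_new_media: float, bitrate_mbps: float) -> float:
    """Gigabytes to transfer per day for the given hours of fresh content."""
    seconds = hours_of_new_media * 3600
    megabits = seconds * bitrate_mbps
    return megabits / 8 / 1000  # Mb -> MB -> GB

def required_link_mbps(transfer_gb: float, window_hours: float = 24) -> float:
    """Sustained link speed needed to move that volume within the window."""
    megabits = transfer_gb * 1000 * 8
    return megabits / (window_hours * 3600)

# e.g. six hours of new SD material per day at 5 Mb/s MPEG4 IBP:
volume = daily_transfer_gb(6, 5)   # 13.5 GB per day
link = required_link_mbps(volume)  # 1.25 Mb/s sustained over 24 hours
```

A channel that churns its content daily would multiply the transfer volume accordingly, which is why the churn rate matters as much as the playout bit rate.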
There is any number of possibilities for the design of a remote unattended playout system, or Edge Server. Choices depend on the facilities of the centralcaster and the requirements of the remote broadcast service, so a modular equipment design and the integration of many third-party devices are both essential. Having the major elements built on standard IT platforms aids wide integration and keeps costs down to IT prices. Even though the solution is IT-based, system designers still need to understand the needs and technical standards of the broadcast television industry.
Budgets, the security of media and the continuity of output all influence the design. Fail-safe features such as error detection in data transfers, RAID-protected storage, redundant power supplies and playout servers complete with automatic switchover all cost more but help to maintain service. Above all, the basic system modules – ingest, playout, storage, on-air graphics and subtitles – must be highly reliable. Otherwise the idea of remote unattended operation makes no sense.
A typical design includes interfacing with the centralcast traffic system, conversations between the remote station and the central traffic system, mechanisms to send the required media and guard against data degradation, and firewalls at each end of the internet link to keep data and media secure. The bulk of the remote station comprises tried and tested standard playout solutions, complemented with items such as monitoring and internet communications, as well as software to handle the necessary conversations with the centralcaster.
Workflow may start with a daily playlist created in the centralcaster’s traffic system, which is integrated with the MAM. The list is delivered to the remote playout servers and converted, where necessary, to their native format. Error-detection algorithms can check that the delivered file matches the original. With a fully redundant playout, the list can be sent separately to both servers and cross-checked; then, at worst, one server should play out correctly.
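The cross-check described above can be as simple as comparing checksums. A minimal sketch, assuming a SHA-256 digest is sent with the list (the function names and report shape are hypothetical):

```python
# Verify that the playlist copies delivered to the main and backup
# playout servers both match the list sent by the traffic system.
import hashlib

def playlist_digest(playlist_text: str) -> str:
    """Checksum used to verify a delivered list against the original."""
    return hashlib.sha256(playlist_text.encode("utf-8")).hexdigest()

def cross_check(main_copy: str, backup_copy: str, original_digest: str) -> dict:
    """Report which server copies match the original playlist."""
    return {
        "main_ok": playlist_digest(main_copy) == original_digest,
        "backup_ok": playlist_digest(backup_copy) == original_digest,
    }
```

If only one copy verifies, that server’s list can be trusted for air while the other is re-sent, which is the “at worst, one server should play out correctly” behaviour.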
Details of the workflow and file handling will differ according to each broadcaster’s needs but might follow this example. After the remote has received the daily playlist, it checks for the media required to fulfil the list, searching its local store accordingly. Anything missing is flagged, and a ‘missing items list’ is generated and sent back to the MAM requesting the media and associated data, such as subtitle files. This routine is repeated until all the required media is present in the remote’s storage.
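The core of that routine is a set comparison between the playlist and the local store. A minimal sketch, with hypothetical media IDs and data shapes:

```python
# Build the 'missing items list' sent back to the MAM: media IDs that
# appear in the daily playlist but are not yet in the local store.

def missing_items(playlist_ids, local_store_ids):
    """Return playlist media not present locally, in playlist order,
    without duplicates."""
    present = set(local_store_ids)
    missing, seen = [], set()
    for media_id in playlist_ids:
        if media_id not in present and media_id not in seen:
            missing.append(media_id)
            seen.add(media_id)
    return missing
```

Re-running this after each delivery naturally terminates the loop: once every item is in the local store, the list comes back empty.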
The MAM is responsible for finding and delivering that media. The item may exist, but in a video or file format different from that required by the remote, eg SD or HD, bit rate, MPEG2 or MPEG4, etc. First the MAM checks for the media in the correct delivery format. If it is not found, the MAM searches for the media in any other format, preferably at a higher quality or bit rate than the remote requires. If found, the MAM sends the media for file transcoding and then delivers it to the remote’s playout server or servers. If the media is not available at all, the MAM generates a capture list for it to be ingested, transcoded and delivered.
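That search order can be expressed as a small decision function. The asset records and field names below are hypothetical; the three-way outcome follows the sequence described above:

```python
# Decide how the MAM fulfils one missing item: deliver an exact-format
# copy, transcode the best available copy, or request an ingest.

def resolve_media(media_id, assets, wanted_format, wanted_bitrate):
    """assets: list of dicts with 'id', 'format' and 'bitrate' keys.
    Returns an (action, asset) pair."""
    copies = [a for a in assets if a["id"] == media_id]
    # 1. Exact delivery format: send it as-is.
    for a in copies:
        if a["format"] == wanted_format and a["bitrate"] == wanted_bitrate:
            return ("deliver", a)
    # 2. Any other copy: transcode, preferring the highest bit rate.
    if copies:
        best = max(copies, key=lambda a: a["bitrate"])
        return ("transcode", best)
    # 3. Nothing on the store: add to the capture (ingest) list.
    return ("ingest", None)
```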
The outputs of any unattended remote playout station should look every bit as complete and regionalised as those delivered from the centralcast and broadcast via local channels. These may include graphics, from the basic channel ID bug/logo, to multi-layer on-air graphics, text rolls and crawls, live updates such as SMS-to-screen, voting and gaming, and displays driven from live databases, such as financial information, traffic, etc. Typically such graphics are delivered within a template, simplifying live operation to the addition of text or pictures that complete the unique presentation. Such activities can be scheduled in the traffic system, and any new designs can be downloaded as required.
Subtitles can be prepared at the centralcast, or independently, and sent to the remote playout which will associate the text in sync with the relevant programme material.
For confirmation of playout, the server can produce ‘As Run’ logs that are returned to the Traffic System for billing, etc. Compliance recording can also be performed and the results streamed back to the centralcast, if required.
Traffic management is a further housekeeping requirement at the remote. On the input side, an automated routine moves media arriving on the local file transfer server to the playout servers’ hard disc drives. Another automated routine removes ‘old’ media from that same server. By scanning the daily playlists and local drives, it can delete media not required for, say, the next seven days. If the media is required later, it can still be copied again from the file transfer server or the MAM. Purging a list of items from both the file transfer server and the playout servers can also be commanded by the centralcaster.
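The purge routine amounts to collecting every media ID scheduled inside the retention window and deleting whatever is local but unscheduled. A minimal sketch, assuming playlists are available per day and the seven-day window is configurable:

```python
# Identify local media safe to purge: anything not scheduled in any
# playlist within the next keep_days days. Data shapes are illustrative.

def purge_candidates(local_media, playlists_by_day, keep_days=7):
    """playlists_by_day[0] is today's playlist, [1] tomorrow's, etc.
    Returns the sorted list of media IDs eligible for deletion."""
    needed = set()
    for day, playlist in enumerate(playlists_by_day):
        if day < keep_days:
            needed.update(playlist)
    return sorted(set(local_media) - needed)
```

Because purged items remain on the file transfer server or in the MAM, an over-eager deletion costs only a re-transfer, not lost content.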
Monitoring and Control
Remote monitoring is imperative, especially where channels cannot be viewed via a backhaul feed. The status of the playout and graphics servers can be monitored and the resulting data sent back to the centralcast to raise any necessary alarms. All the remote servers are also connected to a VPN, accessible from anywhere according to the firewall’s rules. Remote video and audio monitoring via the net can also be devised, as well as monitoring for catastrophic failure of the main and redundant playout servers.
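The status data sent back could be as simple as a periodic report per server. The field names and alarm thresholds below are assumptions for illustration, not any vendor’s protocol:

```python
# Build a status report of the kind a remote station might send back
# over the VPN so the centralcast can raise alarms.
import json
import time

def build_status(server_name, on_air, disk_free_gb, last_frame_age_s):
    """Summarise one playout server's health as a JSON message."""
    alarms = []
    if not on_air:
        alarms.append("OFF_AIR")
    if disk_free_gb < 50:              # illustrative threshold
        alarms.append("DISK_LOW")
    if last_frame_age_s > 2:           # output appears frozen
        alarms.append("VIDEO_FROZEN")
    return json.dumps({
        "server": server_name,
        "timestamp": int(time.time()),
        "alarms": alarms,
        "ok": not alarms,
    })
```

A receiver at the broadcast centre would also treat a missing report as an alarm in itself, covering the catastrophic-failure case where the server cannot report at all.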
Various levels of protection can be applied to suit. Redundancy of servers and power supplies is common. The internet connections go via a firewall and a file transfer server, and can include a file transfer agent to accelerate transfers, as well as multiple internet paths that allow the validity of files to be checked. On the playout side, a smart switch can be added that monitors the main video and audio output and automatically switches to the backup in the event of a failure.
Although global brands may be relevant, most advertising time will be occupied by more local commercials. Two playback approaches can be used for commercials. One involves opting out of the programme feed and inserting the region’s local commercials from a local player. The other avoids the opt-out by including the commercials within the station’s playlist.
Remote playout systems that can run unattended over internet connections introduce many new opportunities for lowering the cost of centralcasting: from an economic way to extend services to viewers anywhere around the world, complete with local branding, to reducing dependency on fixed live-video links. In all cases, distributed storage and servers open the door to new ways of running centralcasting. Each application will differ, at least in detail, so modular, scalable systems and detailed system design are essential – as much as reliability.