As shown in Figure 7, the automatic scaling system consists of three main parts: the broker, the worker, and the controller. They cooperate to keep the automatic scaling system running smoothly. Since scaling down is more complicated, we describe it first.
Broker
As described in the System Architecture section, the broker is responsible for providing APIs for other programs to call. When the broker receives a task request from CMS, it validates the request parameters and returns an error message if validation fails. If validation passes, the broker prepares to dispatch the task: it first checks whether Redis holds a lock, which is set by the controller. If the lock is set, the broker waits for the controller to release it and then dispatches the task; otherwise, it dispatches the task directly.
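The broker's dispatch gate can be sketched as follows. This is a minimal illustration, not the actual implementation: the `lock_exists` and `dispatch` callables stand in for the real Redis lookup and message-queue call, and the polling interval and timeout are assumed values.

```python
import time

def dispatch_when_unlocked(task, lock_exists, dispatch,
                           poll_interval=0.1, timeout=30.0):
    """Block while the controller's Redis lock is held, then dispatch.

    lock_exists: callable returning True while the controller's lock is set
    dispatch:    callable that hands the task to the message queue
    """
    deadline = time.monotonic() + timeout
    while lock_exists():
        if time.monotonic() > deadline:
            raise TimeoutError("controller lock not released in time")
        time.sleep(poll_interval)
    # Lock is gone: the controller is not terminating workers right now,
    # so it is safe to dispatch the task.
    dispatch(task)
```

In production the lock check would be something like `redis_client.exists("scaling:lock")` (key name illustrative); injecting it as a callable keeps the sketch self-contained.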
Worker
The worker is a Docker container running on EC2 and is responsible for the content asset processing operations. When a worker starts executing a task, it adds a task record in Redis indicating that it is busy, and it updates the record when the task finishes. The controller uses these records to determine whether a worker is idle.
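The worker-side bookkeeping described above can be sketched as below. The record layout and key naming are assumptions for illustration; a plain dict stands in for the Redis store.

```python
import json
import time

def start_task(store, worker_id, task_id):
    """Record that this worker is busy with a task (store ~ Redis)."""
    store[f"task:{worker_id}"] = json.dumps(
        {"task_id": task_id, "status": "running", "started_at": time.time()})

def finish_task(store, worker_id):
    """Update the record when the task finishes."""
    record = json.loads(store[f"task:{worker_id}"])
    record.update(status="finished", finished_at=time.time())
    store[f"task:{worker_id}"] = json.dumps(record)

def is_idle(store, worker_id):
    """The controller reads the records to decide whether a worker is idle."""
    raw = store.get(f"task:{worker_id}")
    return raw is None or json.loads(raw)["status"] == "finished"
```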
Controller
The third part is the controller, which checks transcoding worker status at short intervals. Before each check, it sets a lock in Redis; if the broker receives a task at this time, it waits until the lock is released before dispatching the task. The lock prevents task loss when terminating workers: without it, a task could be dispatched to a worker that the controller has already decided to terminate, and that task would be lost. Terminating a worker takes two steps. First, the controller remotely closes the connection between the worker and the message queue, ensuring that the worker about to be stopped receives no new tasks. Terminating an AWS EC2 machine takes around 10 s, so without this step a task arriving during that gap would be lost. The controller then sends the terminate command to stop the worker and finally releases the lock in Redis.
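The termination sequence above can be expressed as a short ordered procedure. The four callables are placeholders for the real operations (setting the Redis lock, closing the worker's message-queue connection, calling the EC2 terminate API, releasing the lock); the point of the sketch is the ordering and the guarantee that the lock is always released.

```python
def terminate_idle_worker(set_lock, close_mq_connection, terminate_instance,
                          release_lock, worker_id):
    """Terminate a worker without losing tasks.

    Ordering matters: the lock stops the broker from dispatching, the queue
    disconnect ensures no task reaches the worker during the ~10 s EC2
    shutdown window, and the lock is released even if termination fails.
    """
    set_lock()
    try:
        # Step 1: the worker can no longer receive new tasks.
        close_mq_connection(worker_id)
        # Step 2: stop the EC2 machine running the worker.
        terminate_instance(worker_id)
    finally:
        # Let the broker dispatch tasks again.
        release_lock()
```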
The system scales the cluster up according to the number of queued tasks and the resources in the current cluster. When there are more tasks in the queue than the system's processing capacity, the scale-up procedure is triggered. After scaling up the cluster, we record the current time and enforce a 20-minute cooldown between two scale-ups. The cooldown is needed because each new machine must pre-install some libraries and download the Docker image from AWS S3, so starting a worker takes around 8 minutes. Since the checking interval is much shorter than this, without a cooldown the system would keep requesting new machines while the first batch was still starting up.
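The scale-up trigger plus cooldown can be sketched as a single predicate. The queue-versus-capacity comparison is a simplification of the real policy; only the 20-minute cooldown value comes from the text.

```python
import time

COOLDOWN_SECONDS = 20 * 60  # 20-minute cooldown between two scale-ups

def should_scale_up(queued_tasks, capacity, last_scaleup, now=None):
    """Scale up only when the queue exceeds processing capacity
    and the cooldown since the last scale-up has elapsed."""
    now = time.time() if now is None else now
    if queued_tasks <= capacity:
        return False  # the current cluster can absorb the queue
    return (now - last_scaleup) >= COOLDOWN_SECONDS
```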
This automatic scaling system makes it easy to add or change scale-up and scale-down strategies. We no longer need to scale the cluster up or down manually, which saves both time and money.
Management and Monitoring
Last but not least, the system has a management and monitoring module for easier maintenance. The module is built on Celery Flower [18].
The management function controls the parameters used in content asset processing, for example: how dark must a pixel be to count as part of a black border? What is the threshold for detecting aspect-ratio gaps? It also manages a set of preset parameter templates that help experts fine-tune the output content assets.
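A preset-plus-override scheme like the one described could look as follows. The parameter names and values here are purely illustrative, not the actual MX Content Factory configuration.

```python
# Hypothetical preset templates; keys and values are illustrative only.
PRESETS = {
    "default": {
        "black_border_luma_max": 16,    # pixel luma <= this counts as "black"
        "aspect_gap_threshold": 0.02,   # relative gap for aspect-ratio checks
    },
    "animation": {
        "black_border_luma_max": 24,
        "aspect_gap_threshold": 0.05,
    },
}

def resolve_params(preset="default", **overrides):
    """Start from a preset template and let experts fine-tune single values."""
    params = dict(PRESETS[preset])
    params.update(overrides)
    return params
```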
The monitoring function tracks the system's runtime metrics, such as task queue status, task count, worker cluster status, worker count, and failed tasks. It also detects systemic anomalies and sends out alerts.
MX Content Factory is a content asset processing system used in MX Player's US OTT business. It converts a source content's video, audio, subtitles, and thumbnails to standard formats and assembles them into a unified content asset, which is consumed by downstream systems and presented in the app clients. The system utilizes diverse techniques to provide the best user experience and high compatibility across devices. It is an auto-scaling system capable of processing large batches of tasks in a short time and at low cost.
[1] Distributed Video Transcoding System, https://medium.com/mx-player/distributed-video-transcoding-system-cdfff0d42294
[2] Celery: Distributed Task Queue, http://docs.celeryproject.org/en/latest
[3] Django: The web framework for perfectionists with deadlines, https://www.djangoproject.com
[4] RabbitMQ: Messaging that just works, https://www.rabbitmq.com
[5] Widevine: Leading Content Protection for Media, https://www.widevine.com
[6] FairPlay, https://developer.apple.com/streaming/fps
[7] DASH: Dynamic Adaptive Streaming over HTTP, https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[8] HLS: HTTP Live Streaming, https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[9] Shaka Packager, https://github.com/google/shaka-packager
[10] WebVTT: The Web Video Text Tracks Format, https://w3c.github.io/webvtt
[11] Yahoo Hecate: Automagically generate thumbnails, animated GIFs, and summaries from videos, https://github.com/yahoo/hecate
[12] To Click or Not To Click: Automatic Selection of Beautiful Thumbnails from Videos, https://arxiv.org/abs/1609.01388
[13] FreeType: a freely available software library to render fonts, https://www.freetype.org
[14] OpenCV, https://opencv.org/
[15] Guetzli: Perceptual JPEG encoder, https://github.com/google/guetzli
[16] Pngquant: a command-line utility and a library for lossy compression of PNG images, https://pngquant.org
[17] WebP: A new image format for the Web, https://developers.google.com/speed/webp
[18] Celery Flower: Real-time monitor and web admin for Celery distributed task queue, https://flower.readthedocs.io/en/latest