The program stream is designed for reasonably reliable storage media such as hard disk drives, optical discs and flash memory.
Applications of this specification cover, among other things, digital storage media, television broadcasting and communications. In the course of creating this specification, various requirements from typical applications have been considered, necessary algorithmic elements have been developed, and they have been integrated into a single syntax.
Hence this specification will facilitate the bitstream interchange among different applications. Considering the practicality of implementing the full syntax of this specification, however, a limited number of subsets of the syntax are also stipulated by means of "profile" and "level". These and other related terms are formally defined in clause 3 of this specification.
A "profile" is a defined sub-set of the entire bitstream syntax that is defined by this specification. Within the bounds imposed by the syntax of a given profile it is still possible for encoders and decoders to vary very widely in required performance, depending upon the values taken by parameters in the bitstream. For instance, it is possible to specify frame sizes as large as approximately 2^14 (16 384) pels wide by 2^14 lines high. It is currently neither practical nor economic to implement a decoder capable of dealing with all possible frame sizes.
In order to deal with this problem, "levels" are defined within each profile. A level is a defined set of constraints imposed on parameters in the bitstream. These constraints may be simple limits on numbers. Alternatively, they may take the form of constraints on arithmetic combinations of the parameters (e.g. frame width multiplied by frame height multiplied by frame rate).
Bitstreams complying with this specification use a common syntax. In order to achieve a sub-set of the complete syntax, flags and parameters are included in the bitstream that signal the presence or otherwise of syntactic elements that occur later in the bitstream.
In order to specify constraints on the syntax, and hence define a profile, it is thus only necessary to constrain the values of these flags and parameters that specify the presence of later syntactic elements. The syntax takes two basic forms. The first is the non-scalable syntax, whose main feature is the set of extra compression tools for interlaced video signals. The second is the scalable syntax, the key property of which is to enable the reconstruction of useful video from pieces of a total bitstream.
This is achieved by structuring the total bitstream in two or more layers, starting from a standalone base layer and adding a number of enhancement layers. The algorithm is not lossless, as the exact pixel values are not preserved during coding. The choice of techniques is based on the need to balance a high image quality and compression ratio against the requirement to allow random access to the coded bitstream.
Obtaining good image quality at the bitrates of interest demands very high compression, which is not achievable with intra picture coding alone. The need for random access, however, is best satisfied with pure intra picture coding. This requires a careful balance between intra- and interframe coding and between recursive and non-recursive temporal redundancy reduction.
A number of techniques are used to achieve high compression. The algorithm first uses block-based motion compensation to reduce the temporal redundancy. Motion compensation is used both for causal prediction of the current picture from a previous picture, and for non-causal, interpolative prediction from past and future pictures. Motion vectors are defined for each 16-pixel by 16-line region of the picture. The difference signal, i.e. the prediction error, is further compressed using the discrete cosine transform (DCT) to remove spatial correlation before it is quantised.
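The block-based motion estimation implied above can be sketched as a full search over candidate displacements that minimises the sum of absolute differences (SAD). This is only an illustration under simple assumptions (the specification deliberately leaves motion estimation to the encoder); the search range and function name are ours:

```python
import numpy as np

def motion_vector(ref, cur, by, bx, search=4, block=16):
    """Full-search block matching: find the (dy, dx) displacement into the
    reference picture that minimises the SAD for the block-size region at
    (by, bx) in the current picture."""
    target = cur[by:by + block, bx:bx + block].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference picture
            sad = int(np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum())
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# Toy check: the "current" picture is the reference shifted right by 2 pixels,
# so the block at (16, 16) matches the reference 2 samples to the left.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
cur = np.roll(ref, 2, axis=1)
mv = motion_vector(ref, cur, 16, 16)
```

A real encoder would trade off search range, sub-pel refinement and rate cost; this sketch only shows the basic matching criterion.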
Finally, the motion vectors are combined with the residual DCT information and encoded using variable length codes. Intra coded pictures (I-Pictures) are coded without reference to other pictures.
They provide access points to the coded sequence where decoding can begin, but are coded with only moderate compression. Predictive coded pictures (P-Pictures) are coded more efficiently using motion compensated prediction from a past intra or predictive coded picture and are generally used as a reference for further prediction. Bidirectionally-predictive coded pictures (B-Pictures) provide the highest degree of compression but require both past and future reference pictures for motion compensation.
Bidirectionally-predictive coded pictures are never used as references for prediction. The organisation of the three picture types in a sequence is very flexible.
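One consequence of B-pictures needing a future reference is that pictures are commonly transmitted in a different order than they are displayed: each B-picture follows the later reference picture it depends on. A minimal sketch under that assumption (the function name is ours, not the specification's):

```python
def coding_order(display):
    """Reorder a display-order sequence of picture types into coding
    (transmission) order: B-pictures are emitted after the next reference
    picture (I or P) that they need for backward prediction."""
    out, pending_b = [], []
    for i, picture_type in enumerate(display):
        if picture_type == 'B':
            pending_b.append((picture_type, i))   # held until the next reference
        else:                                     # I or P: a reference picture
            out.append((picture_type, i))
            out.extend(pending_b)
            pending_b = []
    return out

# Display order I0 B1 B2 P3 B4 B5 P6 becomes I0 P3 B1 B2 P6 B4 B5.
order = coding_order(list("IBBPBBP"))
```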
The choice is left to the encoder and will depend on the requirements of the application. A figure in the specification ("Example of temporal picture structure") illustrates the relationship among the three different picture types. For interlaced video, the specification allows either the frame to be encoded as a single picture, or the two fields to be encoded as two separate pictures.
Frame encoding or field encoding can be adaptively selected on a frame-by-frame basis. Frame encoding is typically preferred when the video scene contains significant detail with limited motion. Field encoding, in which the second field can be predicted from the first, works better when there is fast movement. Each macroblock can be temporally predicted in one of a number of different ways.
For example, in frame encoding, the prediction from the previous reference frame can itself be either frame-based or field-based. Depending on the type of the macroblock, motion vector information and other side information is encoded with the compressed prediction error signal in each macroblock.
The motion vectors are encoded differentially with respect to the last encoded motion vectors using variable length codes.
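The differential coding of motion vectors can be sketched as follows; this shows only the prediction step, with a predictor reset to zero at the start (in the real syntax the reset points and the variable length codes are defined by the specification):

```python
def encode_mvs(mvs):
    """Differentially encode motion vectors: each vector is represented as
    the difference from the previously coded vector."""
    pred, diffs = (0, 0), []
    for (dy, dx) in mvs:
        diffs.append((dy - pred[0], dx - pred[1]))
        pred = (dy, dx)
    return diffs

def decode_mvs(diffs):
    """Invert the differential coding by accumulating the differences."""
    pred, mvs = (0, 0), []
    for (ddy, ddx) in diffs:
        pred = (pred[0] + ddy, pred[1] + ddx)
        mvs.append(pred)
    return mvs

# Neighbouring vectors are usually similar, so the differences are small
# and compress well under variable length coding.
diffs = encode_mvs([(1, 2), (1, 3), (0, 0)])
```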
The maximum length of the vectors that may be represented can be programmed, on a picture-by-picture basis, so that the most demanding applications can be met without compromising the performance of the system in more normal situations. It is the responsibility of the encoder to calculate appropriate motion vectors. The specification does not specify how this should be done. This specification uses a block-based DCT method with visually weighted quantisation and run-length coding.
After motion compensated prediction or interpolation, the residual picture is split into 8 by 8 blocks. These are transformed into the DCT domain, where they are weighted before being quantised. After quantisation many of the coefficients are zero, so two-dimensional run-length and variable length coding is used to encode the remaining coefficients efficiently.

Among the noteworthy application areas addressed are video telecommunications, video on asynchronous transfer mode (ATM) networks, interworking of video standards, video service hierarchies with multiple spatial, temporal and quality resolutions, HDTV with embedded TV, and systems allowing migration to higher temporal resolution HDTV.
In scalable video coding, it is assumed that given an encoded bitstream, decoders of various complexities can decode and display appropriate reproductions of coded video. A scalable video encoder is likely to have increased complexity when compared to a single layer encoder. However, this standard provides several different forms of scalabilities that address nonoverlapping applications with corresponding complexities.
The basic scalability tools offered are: data partitioning, SNR scalability, spatial scalability and temporal scalability. Moreover, combinations of these basic scalability tools are also supported and are referred to as hybrid scalability. In the case of basic scalability, two layers of video referred to as the lower layer and the enhancement layer are allowed, whereas in hybrid scalability up to 3 layers are supported.
The tables in the specification give a few example applications of the various scalabilities.

Spatial scalability involves generating two spatial resolution video layers from a single video source, such that the lower layer is coded by itself to provide the basic spatial resolution, while the enhancement layer employs the spatially interpolated lower layer and carries the full spatial resolution of the input video source.
A further advantage is that spatial scalability facilitates interworking between video coding standards, since the two layers need not use the same standard. Moreover, spatial scalability offers flexibility in the choice of video formats to be employed in each layer.
An additional advantage of spatial scalability is its ability to provide resilience to transmission errors, as the more important data of the lower layer can be sent over a channel with better error performance, while the less critical enhancement layer data can be sent over a channel with poorer error performance.
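The two-layer spatial arrangement can be sketched as follows. For simplicity this uses 2:1 decimation and pixel repetition in place of the specification's interpolation filters, and it keeps the residual unquantised, so the sketch is lossless where a real coder would not be:

```python
import numpy as np

def encode_spatial(full):
    """Spatial scalability sketch: the lower layer is a half-resolution
    picture coded on its own; the enhancement layer carries the residual
    between the full-resolution input and the upsampled lower layer."""
    lower = full[::2, ::2]                                   # basic resolution
    upsampled = np.repeat(np.repeat(lower, 2, axis=0), 2, axis=1)
    enhancement = full - upsampled                           # residual layer
    return lower, enhancement

def decode_full(lower, enhancement):
    """Full-resolution decode: interpolate the lower layer, add the residual."""
    upsampled = np.repeat(np.repeat(lower, 2, axis=0), 2, axis=1)
    return upsampled + enhancement

# A lower-layer-only decoder simply displays `lower` at basic resolution.
pic = np.arange(16.0).reshape(4, 4)
lower, enh = encode_spatial(pic)
```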
SNR scalability involves generating two video layers of the same spatial resolution but different video qualities from a single video source, such that the lower layer is coded by itself to provide the basic video quality and the enhancement layer is coded to enhance the lower layer. The enhancement layer, when added back to the lower layer, regenerates a higher quality reproduction of the input video.
An additional advantage of SNR scalability is its ability to provide a high degree of resilience to transmission errors, as the more important data of the lower layer can be sent over a channel with better error performance, while the less critical enhancement layer data can be sent over a channel with poorer error performance.

In many cases, the lower temporal resolution video systems may be either the existing systems or the less expensive early-generation systems, the motivation being to introduce more sophisticated systems gradually.
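The SNR-scalable layering can be sketched numerically: a coarse quantiser produces the base quality, and the enhancement layer re-quantises the base-layer error more finely. The quantiser step sizes below are illustrative assumptions:

```python
import numpy as np

def encode_snr(picture, q_base=8, q_enh=2):
    """SNR scalability sketch: both layers have the same resolution.
    The base layer is coarsely quantised; the enhancement layer carries a
    finer re-quantisation of the base layer's error."""
    base = np.round(picture / q_base) * q_base      # basic video quality
    residual = picture - base
    enh = np.round(residual / q_enh) * q_enh        # quality refinement
    return base, enh

pic = np.array([[13.0, 100.0], [57.0, 5.0]])
base, enh = encode_snr(pic)
basic = base          # a simple decoder uses the base layer only
better = base + enh   # adding the enhancement layer reduces the error
```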
Temporal scalability involves partitioning the video frames into layers: the lower layer is coded by itself to provide the basic temporal rate, and the enhancement layer is coded with temporal prediction with respect to the lower layer. These layers, when decoded and temporally multiplexed, yield the full temporal resolution of the video source. The lower temporal resolution systems may only decode the lower layer to provide basic temporal resolution, whereas more sophisticated systems of the future may decode both layers and provide high temporal resolution video while maintaining interworking with earlier generation systems.
An additional advantage of temporal scalability is its ability to provide resilience to transmission errors, as the more important data of the lower layer can be sent over a channel with better error performance, while the less critical enhancement layer can be sent over a channel with poorer error performance.
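The temporal split and the multiplexing back to full rate can be sketched as follows, assuming for illustration that the lower layer carries the even-numbered frames (the function names are ours):

```python
def split_temporal(frames):
    """Temporal scalability sketch: even frames form the lower layer
    (basic frame rate); odd frames form the enhancement layer."""
    return frames[0::2], frames[1::2]

def merge_temporal(lower, enhancement):
    """Temporally multiplex the decoded layers back into the full
    frame-rate sequence."""
    out = []
    for i, frame in enumerate(lower):
        out.append(frame)
        if i < len(enhancement):
            out.append(enhancement[i])
    return out

# A basic decoder displays `lower` alone at half the frame rate.
full = list(range(10))
lower, enhancement = split_temporal(full)
```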
Data partitioning assumes that two transmission channels with different error performance are available. The bitstream is partitioned between these channels such that the more critical parts of the bitstream (such as headers, motion vectors and DC coefficients) are transmitted in the channel with the better error performance, and less critical data (such as the higher DCT coefficients) is transmitted in the channel with poorer error performance.
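This routing rule can be sketched as follows; the element names and the two-list representation of the channels are illustrative assumptions, not the normative partitioning syntax:

```python
# Hypothetical labels for the more critical syntactic elements.
CRITICAL = {"header", "motion_vector", "dc_coefficient"}

def partition(elements):
    """Data partitioning sketch: route critical elements to the channel
    with better error performance (channel 0), everything else to the
    channel with poorer error performance (channel 1)."""
    channel_0, channel_1 = [], []
    for kind, payload in elements:
        (channel_0 if kind in CRITICAL else channel_1).append((kind, payload))
    return channel_0, channel_1

elements = [("header", "seq"), ("dc_coefficient", 12),
            ("ac_coefficient", 3), ("motion_vector", (0, -2))]
good_channel, poor_channel = partition(elements)
```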
Thus, degradation due to channel errors is minimised, since the critical parts of the bitstream are better protected. Data from neither channel may be decoded on a decoder that is not intended for decoding data-partitioned bitstreams.

Scope: This Recommendation | International Standard specifies the coded representation of picture information for digital storage media and digital video communication, and specifies the decoding process. The representation supports constant bitrate transmission, variable bitrate transmission, random access, channel hopping, scalable decoding and bitstream editing, as well as special functions such as fast forward playback, slow motion, pause and still pictures.
This Recommendation | International Standard is primarily applicable to digital storage media, video broadcast and communication. The storage media may be directly connected to the decoder, or connected via communications means such as busses, LANs or telecommunications links.