
2D Compression Technologies (TS 26.928)

As of today, two codecs are prominently referenced and available, namely H.264/AVC [30] and H.265/HEVC [31]. Both codecs are defined as part of the TV Video Profiles in 3GPP TS 26.116 and are also the foundation of the VR Video Profiles in 3GPP TS 26.118.

Expected video coding standards performance and bitrate targets

Codec: AVC (H.264)
Coding performance (Random Access): reference for comparison
Targeted bitrate (Random Access):
  4K
  • Statmux: 20-25 Mbps
  • CBR: 35-50 Mbps
  8K
  • CBR: 80-100 Mbps
  • High quality: 100-150 Mbps

Codec: HEVC (H.265)
Coding performance (Random Access): -40% vs AVC (objective), -60% vs AVC (subjective)
Targeted bitrate (Random Access):
  4K
  • Statmux: 10-13 Mbps
  • CBR: 18-25 Mbps
  8K
  • CBR: 40-56 Mbps
  • High quality: 80-90 Mbps
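The HEVC rows can be sanity-checked against the AVC targets. A minimal Python sketch (values copied from the table above) applies the reported -40% objective and -60% subjective gains to the AVC bitrates; the HEVC targets fall roughly between the two resulting bounds.

```python
# Sanity-check sketch: apply the reported HEVC gains vs. AVC to the
# AVC target bitrates above (all values in Mbps, copied from the table).
AVC_TARGETS_MBPS = {
    "4K statmux": (20, 25),
    "4K CBR": (35, 50),
    "8K CBR": (80, 100),
    "8K high quality": (100, 150),
}

# Reported HEVC coding gains (Random Access): -40% objective, -60% subjective.
for label, gain in (("objective", 0.40), ("subjective", 0.60)):
    print(f"HEVC targets assuming the {label} gain ({gain:.0%} bitrate reduction):")
    for mode, (low, high) in AVC_TARGETS_MBPS.items():
        print(f"  {mode}: {low * (1 - gain):.0f}-{high * (1 - gain):.0f} Mbps")
```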

Furthermore, for XR formats beyond regular 2D video, two different approaches are taken for compression:

  1. use of existing 2D codecs, with pre- and post-processing to convert between the 3D signals and 2D representations that such codecs can handle (see the sketch after this list);
  2. use of dedicated compression technologies for specific formats.
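As an illustration of the first approach, here is a minimal Python/NumPy sketch of a simplified frame-packing scheme: a per-pixel depth map is quantised and stacked below the texture so the combined 2D frame can be fed to an unmodified 2D codec such as AVC or HEVC, then split apart again after decoding. The packing layout and function names are illustrative assumptions; deployed systems use more sophisticated projection and atlas schemes (e.g. MPEG's V-PCC).

```python
import numpy as np

def pack_texture_and_depth(texture_rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Pre-processing sketch for approach 1: fold a 3D signal (colour plus
    per-pixel depth) into one 2D frame for an unmodified 2D codec.
    The depth map is quantised to 8 bits and stacked below the texture."""
    depth_8bit = np.clip(depth / depth.max() * 255, 0, 255).astype(np.uint8)
    depth_rgb = np.repeat(depth_8bit[..., None], 3, axis=-1)  # grey as RGB
    return np.vstack([texture_rgb, depth_rgb])  # top: texture, bottom: depth

def unpack_texture_and_depth(frame: np.ndarray, max_depth: float):
    """Post-processing sketch: split the decoded frame back into the
    texture and an approximate depth map for 3D reconstruction."""
    half = frame.shape[0] // 2
    texture = frame[:half]
    depth = frame[half:, :, 0].astype(np.float32) / 255.0 * max_depth
    return texture, depth

# Toy usage: the packed frame is what would go to an AVC/HEVC encoder.
tex = np.zeros((4, 4, 3), dtype=np.uint8)
dep = np.linspace(0.1, 5.0, 16, dtype=np.float32).reshape(4, 4)
frame = pack_texture_and_depth(tex, dep)
tex2, dep2 = unpack_texture_and_depth(frame, max_depth=float(dep.max()))
```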

Format and Parallel Decoding Challenges

In XR applications, the buffers processed by rendering engines can be compressed with existing video codecs when they need to be transmitted over the network. Since large amounts of data are typically exchanged and operation must be power-efficient in constrained environments, XR applications rely on the video codecs already available on mobile platforms, for example those defined in 3GPP specifications. While this serves an immediate need and provides a kickstart for XR services, such video codecs may not be fully suitable for XR applications, for several reasons, some of which are listed below.

First of all, the buffer formats in XR and graphics applications may differ from traditional video, and a greater variety of them exists. Also, in certain cases not only textures but also 3D formats, such as meshes and point clouds, need to be supported.

Beyond this, XR applications may require that multiple buffers are served and synchronized in order to render an XR experience. This results in requirements for parallel decoding of multiple streams for multiple buffers (texture, geometry, etc.) as well as multiple objects. In many cases these buffers need to be made available to the rendering engine in a synchronized manner to ensure the highest quality of the rendered scene. Furthermore, the number of streams and the amount of data to be processed may vary heavily over the course of an XR session, requiring flexible video decoding architectures that also support efficient and low-latency processing.
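To make the parallel-decoding and synchronization requirement concrete, below is a minimal Python sketch: one decoder instance per elementary stream runs in its own thread, and the rendering side only ever receives complete, PTS-aligned sets of buffers. The decode() stub stands in for a real (hardware) video decoder, and all names (DecodedBuffer, synchronized_frames) are illustrative assumptions rather than anything defined in the 3GPP or MPEG specifications.

```python
import queue
import threading
from dataclasses import dataclass

@dataclass
class DecodedBuffer:
    stream_id: str   # e.g. "texture", "geometry": one per XR buffer type
    pts: int         # presentation timestamp shared across streams
    payload: bytes   # decoded data handed to the rendering engine

def decode(coded: bytes) -> bytes:
    """Stub standing in for a real (hardware) video decoder."""
    return coded

def run_decoder(stream_id, coded_frames, out_q):
    """One decoder instance per elementary stream, running in parallel."""
    for pts, coded in coded_frames:
        out_q.put(DecodedBuffer(stream_id, pts, decode(coded)))
    out_q.put(None)  # end-of-stream marker

def synchronized_frames(queues):
    """Yield one complete, PTS-aligned set of buffers per frame, so the
    renderer never sees a texture without its matching geometry."""
    while True:
        frame_set = {sid: q.get() for sid, q in queues.items()}
        if any(buf is None for buf in frame_set.values()):
            return  # a stream ended
        assert len({buf.pts for buf in frame_set.values()}) == 1
        yield frame_set

# Toy usage: two streams, three frames each, decoded concurrently.
streams = {
    "texture": [(i, b"tex") for i in range(3)],
    "geometry": [(i, b"geo") for i in range(3)],
}
queues = {sid: queue.Queue() for sid in streams}
workers = [threading.Thread(target=run_decoder, args=(sid, frames, queues[sid]))
           for sid, frames in streams.items()]
for w in workers:
    w.start()
for frame_set in synchronized_frames(queues):
    print({sid: buf.pts for sid, buf in frame_set.items()})
for w in workers:
    w.join()
```

In a real system the aligned delivery would more likely be driven by decoder output callbacks and a compositor clock rather than blocking queues, but the invariant is the same: the renderer never consumes a buffer without its counterparts for the same presentation time.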

As an example, MPEG is addressing several of these challenges in its MPEG-I project on immersive media coding. In particular, given the variety of applications, a flexible and powerful hardware-based decoding and processing architecture is desirable.