An FPGA-based processing pipeline for high-definition stereo video
13 pages
English

Description

This paper presents a real-time processing platform for high-definition stereo video. The system is capable of processing stereo video streams at resolutions up to 1,920 × 1,080 at 30 frames per second (1080p30). In the hybrid FPGA-GPU-CPU system, a high-density FPGA is used not only to perform low-level image processing tasks such as color interpolation and cross-image color correction, but also to carry out radial undistortion, image rectification, and disparity estimation. We show how the corresponding algorithms can be implemented very efficiently in programmable hardware, relieving the GPU from the burden of these tasks. Our FPGA implementation results are compared with corresponding GPU implementations and with other implementations reported in the literature.
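To make the warping stages mentioned above concrete, the arithmetic of a radial-undistortion mapping can be sketched with a two-coefficient Brown model. This is an illustrative sketch only, not the paper's implementation: the function name and the intrinsics and coefficients (fx, fy, cx, cy, k1, k2) are assumed values for demonstration.

```python
import numpy as np

def undistort_points(pts, fx, fy, cx, cy, k1, k2):
    """Apply a two-coefficient Brown radial model to pixel coordinates.

    pts is an (N, 2) array of (u, v) pixel positions; the intrinsics and
    distortion coefficients are illustrative, not calibrated values.
    """
    x = (pts[:, 0] - cx) / fx            # normalized camera coordinates
    y = (pts[:, 1] - cy) / fy
    r2 = x * x + y * y                   # squared radius from principal point
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xu = x * scale
    yu = y * scale
    return np.stack([xu * fx + cx, yu * fy + cy], axis=1)

# A point at the principal point is unaffected; with k1 < 0 (barrel
# distortion) an off-center point is pulled toward the image center.
pts = np.array([[960.0, 540.0], [1200.0, 700.0]])
out = undistort_points(pts, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0,
                       k1=-0.1, k2=0.01)
```

In a streaming hardware pipeline such a mapping is typically evaluated as an inverse (output-to-input) lookup per pixel, which is what makes a fixed-point, cache-friendly FPGA realization attractive.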

Information

Published 01 January 2011

Extract

Greisen et al. EURASIP Journal on Image and Video Processing 2011, 2011:18
http://jivp.eurasipjournals.com/content/2011/1/18
RESEARCH Open Access

An FPGA-based processing pipeline for high-definition stereo video

Pierre Greisen 1,2*, Simon Heinzle 2, Markus Gross 1,2 and Andreas P Burg 3
Keywords: Video processing pipeline, Stereoscopic video, FPGA, Disparity estimation, Image warping
1 Introduction

Multi-view camera systems are becoming ubiquitous in today's camera landscape. This trend has been driven by the advent of affordable high-quality imaging sensors on the one hand and by novel multi-view applications on the other hand. Prominent examples of such new applications are 360° vision panoptic cameras [1] that provide immersive video experiences and vision systems for automatic vehicle/robot control or for augmented reality [2]. A particularly interesting and commercially most relevant real-time application is stereoscopic 3D (S3D) movie and broadcast production [3]. Corresponding video capture systems often employ two spatially offset cameras; however, imperfections of such systems require real-time processing of the video streams that reaches beyond the processing routinely done today in monoscopic camera systems. In particular, different cameras usually exhibit different sensor responses that can lead to unpleasant viewer experiences. Also, the physical alignment is often not sufficiently accurate and the resulting videos can lead to eyestrain [3]. To correct for non-idealities in the camera system, the raw video streams need careful color correction to achieve similar color responses across all views. In addition, an image warping process must remove distortions and correct potential mis-alignments of the hardware.

Most recent camera systems furthermore rely on real-time analysis of the captured images for guiding the camera operator [4,5] and for automatic camera control [4]. A vital element of such systems is the analysis of the physical layout of a captured scene by analyzing screen-space disparities. Disparities are the resulting displacements of a scene point across both camera views, which in turn directly relate to the actual geometry of the scene.

Unfortunately, such processing of high-definition video streams for real-time applications is a challenging task, since computationally demanding algorithms need to be applied to high-resolution images. In addition, image processing tasks require a considerable amount of memory bandwidth. Current CPUs and GPUs are therefore often completely occupied when performing the full low-level processing and analysis pipeline, including disparity analysis, in real time. FPGA platforms, on the other hand, offer great potential for streaming-based tasks. Video pipelines can be implemented in parallel for low latency and high performance. Furthermore, optimal speed-precision-area tradeoffs can be accomplished using fixed-point arithmetic, and custom-tailored caching architectures can be employed to alleviate bandwidth bottlenecks.

Related work

Although some work using FPGA real-time hardware has been published recently, all existing pipelines …

* Correspondence: pierre.greisen@disneyresearch.com
1 ETH Zurich, 8092 Zurich, Switzerland
Full list of author information is available at the end of the article
© 2011 Greisen et al; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
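The link between disparity and scene geometry described above can be illustrated with the standard triangulation relation for a rectified stereo pair, Z = f · B / d. This is a generic sketch, not the paper's code; the function name and the focal length and baseline values below are illustrative assumptions.

```python
def depth_from_disparity(d_px, focal_px, baseline_m):
    """Depth Z of a scene point from its screen-space disparity d,
    for a rectified stereo pair: Z = f * B / d.

    d_px: disparity in pixels, focal_px: focal length in pixels,
    baseline_m: camera baseline in meters (all values illustrative).
    """
    if d_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / d_px

# Example: ~1000 px focal length and a 6.5 cm baseline; larger
# disparities correspond to nearer scene points.
z_near = depth_from_disparity(50.0, focal_px=1000.0, baseline_m=0.065)
z_far = depth_from_disparity(5.0, focal_px=1000.0, baseline_m=0.065)
```

The inverse dependence on d is why disparity estimation has to be accurate for nearby objects in particular: a one-pixel error changes the recovered depth far more at small disparities than at large ones.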