Institut für Informatik
Technische Universität München
Efficient Methods for the Display of
Highly Detailed Models
in Computer Graphics
Jens Schneider
Full reprint of the dissertation approved by the Fakultät für Informatik of the Technische Universität München for the award of the academic degree of
Doktor der Naturwissenschaften (Dr. rer. nat.)
Chair: Univ.-Prof. Dr. U. Baumgarten
Examiners of the dissertation: Univ.-Prof. Dr. R. Westermann
apl. Prof. Dr. T. Kuhlen
(Rheinisch-Westfälische Technische Hochschule Aachen)
The dissertation was submitted to the Technische Universität München on 25.11.2008 and accepted by the Fakultät für Informatik on 12.5.2009.

To my family and friends.

Abstract
In 1965, Intel co-founder Gordon Moore observed that the number of components (transistors, resistors, etc.) on an integrated circuit doubled roughly every twelve months. Ten years later he predicted that the number of transistors in CPUs would double every 24 months. A consequence of what is now known as "Moore's Law" is that processing power also increases exponentially, albeit not by a factor of two every two years. The consequences of this still remarkably accurate prognosis for today's society are astonishing. It is by now possible to rapidly generate or acquire data sets so large that processing or displaying them has become a severe issue, typically requiring both state-of-the-art hardware and sophisticated algorithms.
The recent introduction of graphics accelerators for mainstream PCs, collectively known as graphics processing units (GPUs), has offered the potential to explore these data sets at interactive rates. However, due to the still limited video memory of today's GPUs and the von Neumann architecture of modern PCs, the storage and bandwidth requirements arising during the visualization of large data sets have to be carefully analyzed.
In this thesis, we explore a class of visualization algorithms commonly referred to as level-of-detail (LOD) algorithms. These algorithms typically perform a hierarchical analysis of large, highly detailed data sets in a preprocessing step. At runtime, the amount and detail of the data necessary to form an image for a given set of camera parameters is determined, and the respective data is sent to the GPU to be displayed. Since the data is usually too large to reside in host memory, paging strategies that hide the latencies of external storage media are employed. We demonstrate that in this way highly interactive frame rates can be achieved for the visualization of massive point clouds, high-resolution terrain, large triangulated models, and gigapixel-sized images. Furthermore, we demonstrate that interactivity leads to immediate visual feedback loops for changes the user makes to the data set. This feedback makes it possible to design highly intuitive and powerful editing environments that follow the well-established WYSIWYG concept, e.g., for fractal landscapes and for digital filters that operate on gigapixel images.

Zusammenfassung
In 1965, Intel co-founder Gordon Moore observed that the number of components (transistors, resistors, etc.) roughly doubled every twelve months. Ten years later he formulated the prognosis now known as "Moore's Law": that the number of transistors in CPUs doubles every 24 months. Although processor performance grows more slowly than this, the prognosis, which remains accurate to this day, still implies an exponential speed-up of general-purpose CPUs. The consequences for today's society are astonishing. It is nowadays possible to generate or measure, within a short time, data sets whose sheer size poses serious problems for their display. As a consequence, state-of-the-art computers and sophisticated algorithms are typically required.

The graphics accelerators that have only recently become available for standard PCs, collectively known as GPUs (graphics processing units), have the potential to enable real-time exploration of such data sets. However, the storage and bandwidth requirements arising during the visualization of these data must be analyzed carefully, since both the available video memory and the available bandwidths of today's von Neumann architectures quickly prove to be limiting factors.

This dissertation investigates a class of visualization algorithms collectively known as level-of-detail (LOD) algorithms. These algorithms typically perform a hierarchical analysis of large, highly detailed data sets in a preprocessing step. At runtime, the parts of the data and the level of detail required to compute the final image under the current camera parameters are determined. These data are then sent to the GPU and displayed. Because the data sets are usually too large to be kept in main memory, suitable paging strategies are described that hide the latencies of external storage media. Using a range of different data sets, such as gigantic point clouds, high-resolution terrain data, and gigapixel images, we show that interactive display rates can be achieved in this way. This interactivity can then be exploited for editors with immediate visual feedback in the style of the WYSIWYG (What You See Is What You Get) concept. Such feedback enables the design of highly intuitive and powerful editing environments, as demonstrated by a fractal landscape editor and a rapid-prototyping environment for digital image filters on gigapixel images.

Acknowledgements
I would like to thank all the people who supported me and helped to make this work possible. First of all, I would like to thank my colleagues, current and former, namely (and in alphabetical order) Kai Bürger, Christian Dick, Roland Fraedrich, Raymund Fülöp, Dr. Joachim Georgii, Stefan Hertel, Dr. Peter Kipfer, Dr. Polina Kondratieva, Dr. Martin Kraus, Dr. Jens Krüger, Jörg Liebelt, Hans-Georg Menz, and Dr. Thomas Schiwietz. They have always been available for discussing ideas, and many have helped proofread this thesis. I would also like to thank Sebastian Wohner for his relentless effort to keep my desktop system running despite my natural gift for wrecking it in a wide variety of ways.
Also, I would like to thank some of my students, who helped me implement and validate several of the methods presented here, namely Moritz Bartl, Tobias Boldte, Dominik Meyer, Matthias Wagner, and Florian Wendel.
I would also like to thank my friends, who supported me with high morale and
well-received distractions of different kinds.
Last but not least, I would like to thank my advisor, Prof. Dr. Rüdiger Westermann, who provided me with the opportunity to conduct the studies presented in this thesis, who was always open for discussion, and who inspired many of the methods presented here. Without him, this thesis would not have been possible.
I would like to thank the many people and institutions who provided me with the data sets used in our publications and in this thesis, namely the people from the Digital Michelangelo Project for scanning the statues used in Chapter 3 and for making the data publicly available. Also, I would like to thank the people from the Lawrence Livermore National Laboratory in general, and Peter Lindstrom in particular, for providing the interface instability data set. I would like to thank Siemens Corporate Research for kindly providing the Wholebody CT scan.
Also, I would like to thank the DLR (Deutsches Zentrum für Luft- und Raumfahrt) Oberpfaffenhofen for the Alps data set, and the DLR and ISTAR for the Paris data set used in Chapter 4. Furthermore, I am thankful to the people of GA Tech's (Georgia Institute of Technology) large models project for making the Puget Sound and Grand Canyon data sets publicly available.
I would like to thank Thomas Heinrich for modeling and kindly providing the Devil Head data set used in Chapter 6. For all other meshes in that chapter that I did not generate myself, I have to thank the people of the AIM@SHAPE project for making such a plethora of meshes publicly available.
Finally, I would like to thank Johannes Kopf and Prof. Dr. Oliver Deussen for
providing us with one of their gigapixel images.