Automatic Hardening against Dependability and Security Software Bugs / submitted by Martin Süßkraut
176 pages
English



Automatic Hardening against Dependability and Security Software Bugs
Dissertation submitted for the academic degree of Doktoringenieur (Dr.-Ing.)
to Technische Universität Dresden, Fakultät Informatik
by Dipl.-Inf. Martin Süßkraut, born 08.01.1979 in Halle (Saale)
Reviewers: Prof. Christof Fetzer, PhD, Technische Universität Dresden; Prof. George Candea, PhD, École Polytechnique Fédérale de Lausanne
Date of defense: 21 May 2010
Dresden, June 2010
Abstract
It is a fact that software has bugs. These bugs can lead to failures. Especially dependability and security failures are a great threat to software users. This thesis introduces four novel approaches that can be used to automatically harden software at the user's site. Automatic hardening removes bugs from already deployed software. All four approaches are automated, i.e., they require little support from the end-user. However, some support from the software developer is needed for two of these approaches. The presented approaches can be grouped into error toleration and bug removal. The two error toleration approaches are focused primarily on fast detection of security errors. When an error is detected it can be tolerated with well-known existing approaches. The other two approaches are bug removal approaches. They remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, like the Apache webserver.
Acknowledgements
I am grateful to many people for their help with this work. First of all, I wish to thank my family, especially my wife Birgit. Without her support I would not have had the strength and time to write this thesis. I would like to acknowledge the debt I owe to my advisor Christof Fetzer. He taught me most of what I know about doing research, and I always enjoyed our discussions, which inspired much of this work. My colleague Ute Schiffel greatly improved the quality of this thesis with her tough questions. I also want to thank her and my wife Birgit for their ability to withstand and identify my English mistakes. This thesis is based on several published papers. These papers would not have been possible without my co-authors: Christof Fetzer, Stefan Weigert, Ute Schiffel, Thomas Knauth, Martin Nowack, Diogo Becker de Brum, Stephan Creutz, and Martin Meinhold. I also wish to thank my colleagues at the Systems Engineering Group; I have learned a lot from them that is important in my job. Last but not least, I want to thank my students: by teaching them I also learned a lot. My apologies if I have inadvertently omitted anyone to whom acknowledgement is due. While I believe that all of those mentioned above have contributed to improving this work, none is, of course, responsible for any remaining weaknesses.
Contents

1 Introduction 1
  1.1 Terminology 2
  1.2 Automatic Hardening 3
  1.3 Contributions 4
  1.4 Theses 5

2 Enforcing Dynamic Personalized System Call Models 9
  2.1 Related Work 12
  2.2 SwitchBlade Architecture 14
  2.3 System Call Model 17
    2.3.1 Personalization 19
    2.3.2 Randomization 20
  2.4 Model Learner 22
    2.4.1 Problem: False Positives 22
    2.4.2 Dataflow-Based Learner 26
  2.5 Taint Analysis 28
    2.5.1 TaintCheck 28
    2.5.2 Escaping Valgrind 29
    2.5.3 Replay of Requests 29
  2.6 Model Enforcement 30
    2.6.1 Loading the System Call Model 31
    2.6.2 Checking System Calls 31
  2.7 Evaluation 32
    2.7.1 Synthetic Exploits 32
    2.7.2 Apache 33
    2.7.3 Exploits 35
    2.7.4 Micro Benchmarks 36
    2.7.5 Model Size 38
    2.7.6 Stateful Application 39
  2.8 Conclusion 40

3 Speculation for Parallelizing Runtime Checks 43
  3.1 Approach 46
    3.1.1 Compiler Infrastructure 47
    3.1.2 Runtime Support 49
  3.2 Related Work 50
  3.3 Deterministic Replay and Speculation 52
    3.3.1 Interface 53
    3.3.2 Implementation 55
  3.4 Switching Code Bases 56
    3.4.1 Example 57
    3.4.2 Integration with parexc_chkpnt 58
    3.4.3 Code Transformations 59
    3.4.4 Stack-local Variables 67
  3.5 Speculative Variables 67
    3.5.1 Interface 68
    3.5.2 Deadlock Avoidance 69
    3.5.3 Storage Backends 69
  3.6 Parallelized Checkers 69
    3.6.1 Out-of-Bounds Checks 70
    3.6.2 Data Flow Integrity Checks 71
    3.6.3 FastAssert Checker 71
    3.6.4 Runtime Checking in STM-Based Applications 72
  3.7 Evaluation 73
    3.7.1 Performance 73
    3.7.2 Checking Already Parallelized Applications 77
    3.7.3 ParExC Overhead 78
  3.8 Conclusion 80

4 Automatically Finding and Patching Bad Error Handling 83
  4.1 Related Work 84
  4.2 Overview 86
  4.3 Learning Library-Level Error Return Values from System Call Error Injection 89
    4.3.1 Components 91
    4.3.2 Efficient Error Injection 91
    4.3.3 Obtain OS Error Specification 92
  4.4 Finding Bad Error Handling 92
    4.4.1 Argument Recording 93
    4.4.2 Systematic Error Injection 94
    4.4.3 Static Analysis 96
  4.5 Fast Error Injection using Virtual Machines 99
    4.5.1 The fork Approach 100
    4.5.2 Virtual Machines for Fault Injection 101
  4.6 Patching Bad Error Handling 102
    4.6.1 Error Value Mapping 103
    4.6.2 Preallocation 105
    4.6.3 Patch Generation 106
  4.7 Evaluation 108
    4.7.1 Measurements 109
    4.7.2 Bugs Found 111
  4.8 Conclusion 115

5 Robustness and Security Hardening of COTS Software Libraries 117
  5.1 Related Work 118
  5.2 Approach 119
  5.3 Test Values 122
    5.3.1 Ballista Type System 123
    5.3.2 Meta Types 124
    5.3.3 Feedback 125
    5.3.4 Type Templates 126
    5.3.5 Type Characteristics 128
    5.3.6 Reducing the Number of Test Cases 128
    5.3.7 Other Sources of Test Values 130
  5.4 Checks 130
    5.4.1 Check Templates 131
    5.4.2 Parameterized Check Templates 133
  5.5 Protection Hypotheses 134
    5.5.1 Minimizing the Truth Table 134
    5.5.2 Discussion 135
  5.6 Evaluation 136
    5.6.1 Coverage 137
    5.6.2 Autocannon as Dependability Benchmark 138
    5.6.3 Protection Hypotheses 139
  5.7 Conclusion 140

6 Conclusion 143
  6.1 Publications 144

References 147
List of Figures 159
List of Tables 163
Listings 165
1 Introduction
It is a fact that software is deployed with bugs. Barry Boehm and Victor R. Basili state: "About 40 to 50 percent of user programs contain nontrivial defects." [15] These bugs can lead to failures that decrease the dependability, the availability, and the security of a system. Studies have found that 5% to 24% of all failures in deployed high-performance systems can be attributed to software [101]. Software failures can be catastrophic: "In the last 15 years alone, software defects have wrecked a European satellite launch, delayed the opening of the hugely expensive Denver airport for a year, destroyed a NASA Mars mission, killed four marines in a helicopter crash, induced a U.S. Navy ship to destroy a civilian airliner, and shut down ambulance systems in London, leading to as many as 30 deaths." [69] Additionally, software failures have an economic impact: the National Institute of Standards and Technology estimates that in the U.S. software failures cost $59.5 billion annually [86].

The reasons why software is deployed with bugs are manifold. The most prominent are economic, legal, and technical reasons, as well as the human factor. Software makers have the ability to remove bugs in already deployed software by patching. This ability alone is an economic motivation to release early and fix later [6]. It is still an unresolved question whether software makers should be liable for their products' failures [30]. Up to now it has been common practice for software makers to explicitly exclude liability for software failures in their licence agreements. From a technical point of view, a recent study suggests that the likelihood of bugs depends on the components used [79]. Other studies show that certain programmers introduce more bugs than others [60, 102]. As already stated, the industry standard for dealing with bugs is to deploy software patches that remove them.
But these patches are not sufficient, because first, not every bug is fixed, and second, there is a window of exposure between the detection of a bug and the application of the patch that removes it. The same economic reasons that lead to releasing a product with bugs encourage software makers to fix only the most severe bugs; for example, a security vulnerability is more likely to be removed than a bug that can lead to a crash failure. Even when bugs are removed by patches from the software makers, there is a window of exposure. In 2008, the computer company Apple needed on average 9 days (worst case: 156 days) after the publication of a vulnerability for Apple's web browser Safari until a patch was publicly available [47]. On the other hand, users do not always apply patches instantaneously. Among web browsers, no more than 80% of Firefox users and 46% of Opera users had the most up-to-date version of their browser running on any day of 2007 [49]. A survey showed that 67.5% of Oracle database professionals do not install critical security patches [105]. Reasons for not applying patches are that patches incur the risk of new bugs [85] and that patching is sometimes uncomfortable for the software's