Jonathan Cederberg
Tutorial 2 Data Structures ’10
<jonathan.cederberg@it.uu.se>
Wednesday, September 15th, 2010
Outline

1 First assignment
2 Probability
3 Sorting
4 Invariants
5 Second assignment
6 Slack
Things to remember
The definitions of O, Ω, Θ
A sum grows according to its fastest-growing term (the left one)
Polynomials grow faster than polylogs
Exponentials grow faster than polynomials
Larger exponent ⇒ faster growth
Larger base ⇒ faster growth
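The hierarchy above can be sanity-checked numerically. A minimal Python sketch (not from the slides; the functions and the sample point n = 10 000 are chosen for illustration):

```python
import math

# Sample the functions at a single large n; the asymptotic ordering
# from the slide already shows up at n = 10_000.
n = 10_000

values = [
    ("lg^3 n", math.log2(n) ** 3),  # polylog
    ("n",      n),                  # polynomial, exponent 1
    ("n^2",    n ** 2),             # larger exponent => faster growth
    ("2^n",    2 ** n),             # exponential beats any polynomial
    ("3^n",    3 ** n),             # larger base => faster growth
]

# The list is already in increasing order of growth.
assert [v for _, v in values] == sorted(v for _, v in values)
print([name for name, _ in values])
```

Note that a single sample point only illustrates the ordering; for small n a polylog like lg³ n can still exceed n.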
Review of assignment 1

In increasing order of growth:
lg n ≺ n ≺ n lg n ≺ n² ≺ n³ ≺ 2^n ≺ 2^(2n) ≺ n! ≺ (n+1)! ≺ n^n
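The exact layout of this slide did not survive extraction, but the relative order of the recoverable functions is mathematically determined. A small Python check (an illustration at a finite sample point, not a proof, and n = 64 is an arbitrary choice):

```python
import math

n = 64  # sample point large enough for all of these comparisons

# The recoverable functions from the slide, listed in increasing
# asymptotic order of growth.
chain = [
    math.log2(n),           # lg n
    n,
    n * math.log2(n),       # n lg n
    n ** 2,
    n ** 3,
    2 ** n,
    2 ** (2 * n),           # = 4^n
    math.factorial(n),      # n!
    math.factorial(n + 1),  # (n+1)!
    n ** n,
]

# Strictly increasing at n = 64, matching the asymptotic ordering.
assert all(a < b for a, b in zip(chain, chain[1:]))
```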
Comments
I have still not received any questions by e-mail. Use this possibility to influence these sessions.
Deadlines are hard. Late submissions of the assignments will not be considered. You just have to get the 24 points.
When presenting proofs, use some words like “since” and “therefore”. The three dots are forbidden.
Answers without explicit justifications give 0 pts in an exam. Numerical justification is no justification.
Comments

O is not the same as Θ!
Exponentiation is right-associative
Use logarithms with care!
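The last two points can be made concrete. A short Python illustration (Python's ** operator stands in for the slide's exponentiation; the identity 4^(lg n) = n² is one example of why logarithms in exponents need care):

```python
import math

# Right associativity: a ** b ** c parses as a ** (b ** c).
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64  # the left-associative reading differs

# "Use logarithms with care": a^(lg b) = b^(lg a), so 4^(lg n)
# is really n^2, a polynomial in disguise.
n = 1024
assert 4 ** math.log2(n) == n ** 2
```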
Review of probability theory

A sample space S is the set of all possible outcomes.
An event A is a subset A ⊆ S.
A random variable is often denoted X.
The expected value E[X] of a random variable X is defined as
E[X] = Σ_x x · Pr{X = x}

Note that E[·] is a linear operator, i.e.
E[Σ_{i=1}^{n} a_i X_i] = Σ_{i=1}^{n} a_i E[X_i]
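Both the definition and the linearity property can be checked by direct enumeration. A sketch in Python (the fair die is my example, not the slides'; exact rationals avoid float noise):

```python
from fractions import Fraction

# Fair six-sided die: S = {1,...,6}, Pr{X = x} = 1/6 for each x.
pr = {x: Fraction(1, 6) for x in range(1, 7)}

# E[X] = sum over x of x * Pr{X = x}
ex = sum(x * p for x, p in pr.items())
assert ex == Fraction(7, 2)

# Linearity: E[2X + 3X] = 2 E[X] + 3 E[X], by direct enumeration.
e_sum = sum((2 * x + 3 * x) * p for x, p in pr.items())
assert e_sum == 2 * ex + 3 * ex
```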
Exercise: Bleaching

You have a function, Biased-Random, that returns 1 with probability p and 0 with probability 1 − p. Sadly, you do not know p. Design a function Unbiased-Random that returns 1 with probability 1/2 and 0 with probability 1/2.

Unbiased-Random
1 while true
2 do
3     x ← Biased-Random
4     y ← Biased-Random
5     if x ≠ y
6         then return x
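The pseudocode above translates directly to Python. A runnable sketch, where `biased_random` with p = 0.3 is an arbitrary stand-in for the slide's unknown-bias source:

```python
import random

def biased_random(p=0.3):
    # Stand-in for Biased-Random: 1 with probability p, else 0.
    return 1 if random.random() < p else 0

def unbiased_random():
    # The slide's loop: draw pairs until the two draws differ,
    # then return the first of the pair.
    while True:
        x = biased_random()
        y = biased_random()
        if x != y:
            return x

random.seed(0)
ones = sum(unbiased_random() for _ in range(100_000))
print(ones / 100_000)  # close to 0.5 despite the biased source
```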
Exercise (cont.): Bleaching

Why does this work? Because Unbiased-Random only returns when x = 0 and y = 1, or vice versa. Since

Pr{Unbiased-Random returns 0} = Pr{x = 0 ∧ y = 1} = (1 − p)p
= p(1 − p) = Pr{x = 1 ∧ y = 0} = Pr{Unbiased-Random returns 1}

and there are no other outcomes, Unbiased-Random is fair. Note that this relies on the calls to Biased-Random being independent.
Definition

An indicator random variable I{A} of an event A is defined as
I{A} = 1 if A occurs, 0 if A does not occur

By the definition of expected value we have, for any indicator random variable X_A:
E[X_A] = E[I{A}]
= 1 · Pr{A} + 0 · Pr{S \ A}
= Pr{A}
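The identity E[X_A] = Pr{A} can be checked by enumerating a small sample space. A Python sketch (the die and the event "roll is even" are my example, not the slides'):

```python
from fractions import Fraction

# Fair die; A is the event "the roll is even".
space = range(1, 7)
A = {2, 4, 6}

def indicator(outcome):
    # I{A}: 1 if A occurs, 0 if it does not.
    return 1 if outcome in A else 0

# E[I{A}] = 1 * Pr{A} + 0 * Pr{S \ A} = Pr{A}
e_ind = sum(Fraction(1, 6) * indicator(s) for s in space)
assert e_ind == Fraction(len(A), 6) == Fraction(1, 2)
```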
Example: Flipping a coin

S = {H, T}, A = {H}
X_H = I{Flip is H} = 1 if flip is H, 0 if flip is not H
E[X_H] = E[I{Flip is H}]
= 1 · Pr{Flip is H} + 0 · Pr{Flip is not H}
= 1 · (1/2) + 0 · (1/2) = 1/2