
 Author: Prof. Mor Harchol-Balter
 Published: 2024
 Publisher: Cambridge University Press (CUP)
 ISBN: 9781009309073
 Buy it via Amazon.com or via CUP
 Free POWERPOINT SLIDES below for selected chapters
 Instructors can get a free Examination Copy and other resources from CUP
 Book contains 400 exercises as well as numerous examples.
 Please report errors here

Probability theory has become indispensable in computer science. It is at the
core of machine learning and statistics, where one often needs to make decisions
under stochastic uncertainty. It is also integral to computer science theory, where
most algorithms today are randomized algorithms, involving random coin flips. It
is a central part of performance modeling in computer networks and systems,
where probability is used to predict delays, schedule jobs and resources, and
provision capacity.
This book gives an introduction to probability as it is used in
computer science theory and practice, drawing on applications and
current research developments as motivation and context. This is not a
typical counting and combinatorics book, but rather it is a book
centered on distributions and how to work with them. Every topic is
driven by what computer science students need to know. For example,
the book covers distributions that come up in computer science, such
as heavy-tailed distributions. There is a large emphasis on
variability and higher moments, which are very important in empirical
computing distributions. Computer systems modeling and simulation are
also discussed, as well as statistical inference for estimating
parameters of distributions. Much attention is devoted to tail bounds,
such as Chernoff bounds. Chernoff bounds are used for confidence
intervals and also play a big role in the analysis of randomized
algorithms, which themselves comprise a large part of the
book. Finally, the book covers Markov chains, as well as a bit of
queueing theory, both with an emphasis on their use in computer
systems analysis.
Table of Contents
 0 Preface and Contents pdf
Part I: Fundamentals and Probability on Events
 1 Before We Start ... Some Mathematical Basics pdf powerpoint slides
 1.1 Review of Simple Series
 1.2 Review of Double Integrals and Sums
 1.3 Fundamental Theorem of Calculus
 1.4 Review of Taylor Series and Other Limits
 1.5 A Little Combinatorics
 1.6 Review of Asymptotic Notation
 1.7 Exercises
 2 Probability on Events pdf powerpoint slides
 2.1 Sample Space and Events
 2.2 Probability Defined on Events
 2.3 Conditional Probabilities on Events
 2.4 Independent Events
 2.5 Law of Total Probability
 2.6 Bayes' Law
 2.7 Exercises
Part II: Discrete Random Variables
 3 Common Discrete Random Variables pdf powerpoint slides
 3.1 Random Variables
 3.2 Common Discrete Random Variables
 3.2.1 The Bernoulli Random Variable
 3.2.2 The Binomial Random Variable
 3.2.3 The Geometric Random Variable
 3.2.4 The Poisson Random Variable
 3.3 Multiple Random Variables and Joint Probabilities
 3.4 Exercises
 4 Expectation pdf powerpoint slides
 4.1 Expectation of a Discrete Random Variable
 4.2 Linearity of Expectation
 4.3 Conditional Expectation
 4.4 Computing Expectations via Conditioning
 4.5 Simpson's Paradox
 4.6 Exercises
 5 Variance, Higher Moments, and Random Sums pdf powerpoint slides
 5.1 Higher Moments
 5.2 Variance
 5.3 Alternative Definitions of Variance
 5.4 Properties of Variance
 5.5 Summary Table for Discrete Distributions
 5.6 Covariance
 5.7 Central Moments
 5.8 Sum of a Random Number of Random Variables
 5.9 Tails
 5.9.1 Simple Tail Bounds
 5.9.2 Stochastic Dominance
 5.10 Jensen's Inequality
 5.11 Inspection Paradox
 5.12 Exercises
 6 z-Transforms pdf powerpoint slides
 6.1 Motivating Examples
 6.2 The Transform as an Onion
 6.3 Creating the Transform: Onion Building
 6.4 Getting Moments: Onion Peeling
 6.5 Linearity of Transforms
 6.6 Conditioning
 6.7 Using z-Transforms to Solve Recurrence Relations
 6.8 Exercises
Part III: Continuous Random Variables
 7 Continuous Random Variables: Single Distribution pdf powerpoint slides
 7.1 Probability Density Functions
 7.2 Common Continuous Distributions
 7.3 Expectation, Variance, and Higher Moments
 7.4 Computing Probabilities by Conditioning on a R.V.
 7.5 Conditional Expectation and the Conditional Density
 7.6 Exercises
 8 Continuous Random Variables: Joint Distributions pdf powerpoint slides
 8.1 Joint Densities
 8.2 Probability Involving Multiple Random Variables
 8.3 Pop Quiz
 8.4 Conditional Expectation for Multiple Random Variables
 8.5 Linearity and Other Properties
 8.6 Exercises
 9 Normal Distribution pdf powerpoint slides
 9.1 Definition
 9.2 Linear Transformation Property
 9.3 The Cumulative Distribution Function
 9.4 Central Limit Theorem
 9.5 Exercises
 10 Heavy Tails: The Distributions of Computing pdf powerpoint slides
 10.1 Tales of Tails
 10.2 Increasing versus Decreasing Failure Rate
 10.3 UNIX Process Lifetime Measurements
 10.4 Properties of the Pareto Distribution
 10.5 The Bounded-Pareto Distribution
 10.6 Heavy Tails
 10.7 The Benefits of Active Process Migration
 10.8 From the 1990s to the 2020s
 10.9 Pareto Distributions Are Everywhere
 10.10 Summary Table for Continuous Distributions
 10.11 Exercises
 11 Laplace Transforms pdf powerpoint slides
 11.1 Motivating Example
 11.2 The Transform as an Onion
 11.3 Creating the Transform: Onion Building
 11.4 Getting Moments: Onion Peeling
 11.5 Linearity of Transforms
 11.6 Conditioning
 11.7 Combining Laplace and z-Transforms
 11.8 One Final Result on Transforms
 11.9 Exercises
Part IV: Computer Systems Modeling and Simulation
 12 The Poisson Process pdf powerpoint slides
 12.1 Review of the Exponential Distribution
 12.2 Relating the Exponential Distribution to the Geometric
 12.3 More Properties of the Exponential
 12.4 The Celebrated Poisson Process
 12.5 Number of Poisson Arrivals during a Random Time
 12.6 Merging Independent Poisson Processes
 12.7 Poisson Splitting
 12.8 Uniformity
 12.9 Exercises
 13 Generating Random Variables for Simulation pdf powerpoint slides
 13.1 Inverse Transform Method
 13.1.1 The Continuous Case
 13.1.2 The Discrete Case
 13.2 AcceptReject Method
 13.2.1 Discrete Case
 13.2.2 Continuous Case
 13.2.3 A Harder Problem
 13.3 Readings
 13.4 Exercises
 14 Event-Driven Simulation pdf powerpoint slides
 14.1 Some Queueing Definitions
 14.2 How to Run a Simulation
 14.3 How to Get Performance Metrics from Your Simulation
 14.4 More Complex Examples
 14.5 Exercises
Part V: Statistical Inference
 15 Estimators for Mean and Variance pdf powerpoint slides
 15.1 Point Estimation
 15.2 Sample Mean
 15.3 Desirable Properties of a Point Estimator
 15.4 An Estimator for Variance
 15.4.1 Estimating the Variance when the Mean is Known
 15.4.2 Estimating the Variance when the Mean is Unknown
 15.5 Estimators Based on the Sample Mean
 15.6 Exercises
 15.7 Acknowledgment
 16 Classical Statistical Inference pdf
 16.1 Towards More General Estimators
 16.2 Maximum Likelihood Estimation
 16.3 More Examples of ML Estimators
 16.4 Log Likelihood
 16.5 MLE with Data Modeled by Continuous Random Variables
 16.6 When Estimating More than One Parameter
 16.7 Linear Regression
 16.8 Exercises
 16.9 Acknowledgment
 17 Bayesian Statistical Inference pdf
 17.1 A Motivating Example
 17.2 The MAP Estimator
 17.3 More Examples of MAP Estimators
 17.4 Minimum Mean Square Error Estimator
 17.5 Measuring Accuracy in Bayesian Estimators
 17.6 Exercises
 17.7 Acknowledgment
Part VI: Tail Bounds and Applications
 18 Tail Bounds pdf powerpoint slides
 18.1 Markov's Inequality
 18.2 Chebyshev's Inequality
 18.3 Chernoff Bound
 18.4 Chernoff Bound for Poisson Tail
 18.5 Chernoff Bound for Binomial
 18.6 Comparing the Different Bounds and Approximations
 18.7 Proof of Chernoff Bound for Binomial: Theorem 18.4
 18.8 A (Sometimes) Stronger Chernoff Bound for Binomial
 18.9 Other Tail Bounds
 18.10 Appendix: Proof of Lemma 18.5
 18.11 Exercises
 19 Applications of Tail Bounds: Confidence Intervals and Balls and Bins pdf
 19.1 Interval Estimation
 19.2 Exact Confidence Intervals
 19.2.1 Using Chernoff Bounds to Get Exact Confidence Intervals
 19.2.2 Using Chebyshev Bounds to Get Exact Confidence Intervals
 19.2.3 Using Tail Bounds to Get Exact Confidence Intervals in General Settings
 19.3 Approximate Confidence Intervals
 19.4 Balls and Bins
 19.5 Remarks on Balls and Bins
 19.6 Exercises
 20 Hashing Algorithms pdf
 20.1 What is Hashing?
 20.2 Simple Uniform Hashing Assumption
 20.3 Bucket Hashing with Separate Chaining
 20.4 Linear Probing and Open Addressing
 20.5 Cryptographic Signature Hashing
 20.6 Remarks
 20.7 Exercises
Part VII: Randomized Algorithms
 21 Las Vegas Randomized Algorithms pdf
 21.1 Randomized versus Deterministic Algorithms
 21.2 Las Vegas versus Monte Carlo
 21.3 Review of Deterministic Quicksort
 21.4 Randomized Quicksort
 21.5 Randomized Selection and Median-Finding
 21.6 Exercises
 22 Monte Carlo Randomized Algorithms pdf
 22.1 Randomized Matrix-Multiplication Checking
 22.2 Randomized Polynomial Checking
 22.3 Randomized Min-Cut
 22.4 Related Readings
 22.5 Exercises
 23 Primality Testing pdf
 23.1 Naive Algorithms
 23.2 Fermat's Little Theorem
 23.3 Fermat Primality Test
 23.4 Miller-Rabin Primality Test
 23.4.1 A New Witness of Compositeness
 23.4.2 Logic Behind the Miller-Rabin Test
 23.4.3 Miller-Rabin Primality Test
 23.5 Readings
 23.6 Appendix: Proof of Theorem 23.9
 23.7 Exercises
Part VIII: Discrete-Time Markov Chains
 24 Discrete-Time Markov Chains: Finite-State pdf
 24.1 Our First DiscreteTime Markov Chain
 24.2 Formal Definition of a DTMC
 24.3 Examples of FiniteState DTMCs
 24.3.1 Repair Facility Problem
 24.3.2 Umbrella Problem
 24.3.3 Program Analysis Problem
 24.4 Powers of P: nStep Transition Probabilities
 24.5 Limiting Probabilities
 24.6 Stationary Equations
 24.7 The Stationary Distribution Equals the Limiting Distribution
 24.8 Examples of Solving Stationary Equations
 24.9 Exercises
 25 Ergodicity for Finite-State Discrete-Time Markov Chains pdf
 25.1 Some Examples on Whether the Limiting Distribution Exists
 25.2 Aperiodicity
 25.3 Irreducibility
 25.4 Aperiodicity plus Irreducibility Implies Limiting Distribution
 25.5 Mean Time Between Visits to a State
 25.6 LongRun Time Averages
 25.6.1 Strong Law of Large Numbers
 25.6.2 A Bit of Renewal Theory
 25.6.3 Equality of the Time Average and Ensemble Average
 25.7 Summary of Results for Ergodic Finite-State DTMCs
 25.8 What If My DTMC Is Irreducible but Periodic?
 25.9 When the DTMC Is Not Irreducible
 25.10 An Application: PageRank
 25.10.1 Problems with Real Web Graphs
 25.10.2 Google's Solution to Dead Ends and Spider Traps
 25.10.3 Evaluation of the PageRank Algorithm and Practical Considerations
 25.11 From Stationary Equations to Time-Reversibility Equations
 25.12 Exercises
 26 Discrete-Time Markov Chains: Infinite-State pdf
 26.1 Stationary = Limiting
 26.2 Solving Stationary Equations in Infinite-State DTMCs
 26.3 A Harder Example of Solving Stationary Equations in Infinite-State DTMCs
 26.4 Ergodicity Questions
 26.5 Recurrent versus Transient: Will the Fish Return to Shore?
 26.6 Infinite Random Walk Example
 26.7 Back to the Three Chains and the Ergodicity Question
 26.8 Why Recurrence Is Not Enough
 26.9 Ergodicity for Infinite-State Chains
 26.10 Exercises
 27 A Little Bit of Queueing Theory pdf
 27.1 What Is Queueing Theory?
 27.2 A Single-Server Queue
 27.3 Kendall Notation
 27.4 Common Performance Metrics
 27.5 Another Metric: Throughput
 27.5.1 Throughput for M/G/k
 27.5.2 Throughput for Network of Queues with Probabilistic Routing
 27.5.3 Throughput for Network of Queues with Deterministic Routing
 27.5.4 Throughput for Finite Buffer
 27.6 Utilization
 27.7 Introduction to Little's Law
 27.8 Intuitions for Little's Law
 27.9 Statement of Little's Law
 27.10 Proof of Little's Law
 27.11 Important Corollaries of Little's Law
 27.12 Exercises
 References and Index pdf