### Evaluation Metrics for Binary Classification

Notation: P and N are the numbers of positive and negative examples; TP, TN, FP, FN are true/false positives and negatives.
#### Metrics

- Accuracy: (TP+TN)/(P+N)
- TPR (sensitivity or recall) = TP/P; FPR = FP/N.
- ROC curve: plot of TPR vs FPR. If the output of a binary classifier is a real value, the boundary between the +/- classes can be moved by varying a threshold $\theta$; each value of $\theta$ gives one (FPR, TPR) point on the curve.
- AUC: Area Under the ROC Curve. Insensitive to imbalanced classes, since it depends only on the ranking of the predictions, not on any single threshold.
- AUC = $\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\mathbf{1}_{x_i \gt y_j}}{mn} = P(X \gt Y)$, where $x_1,\dots,x_m$ are the classifier's outputs on the positive examples and $y_1,\dots,y_n$ its outputs on the negative examples; X is the output on a random positive, Y on a random negative. A perfect classifier gives AUC 1; a random one gives 0.5. (http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2003_AA40.pdf)
- F-score: $F_1 = 2PR/(P+R)$, the harmonic mean of precision $P = TP/(TP+FP)$ and recall $R = TP/(TP+FN)$ (here P and R denote precision and recall, not the class counts above).
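A minimal sketch of the confusion-matrix metrics above (accuracy, TPR/FPR, precision, F-score) in plain Python; the function names and the 0/1 label convention are my own choices, not from the source.

```python
def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # (TP+TN)/(P+N)
    tpr = tp / (tp + fn)                          # recall / sensitivity = TP/P
    fpr = fp / (fp + tn)                          # FP/N
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)  # harmonic mean of P and R
    return {"accuracy": accuracy, "tpr": tpr, "fpr": fpr,
            "precision": precision, "f1": f1}
```

For example, `metrics([1, 1, 0, 0], [1, 0, 1, 0])` has one of each of TP, FN, FP, TN, so every metric comes out to 0.5.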