From 67708259da4d9c8465b8a8df9c2dbfb9bdab9de1 Mon Sep 17 00:00:00 2001
From: "Jip J. Dekker"
Date: Fri, 25 May 2018 17:56:36 +1000
Subject: [PATCH] Make the mathematics a bit more readable

---
 mini_proj/report/waldo.tex | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mini_proj/report/waldo.tex b/mini_proj/report/waldo.tex
index 37ac784..9390e02 100644
--- a/mini_proj/report/waldo.tex
+++ b/mini_proj/report/waldo.tex
@@ -260,7 +260,7 @@
 	To evaluate the performance of the models, we record the time taken by
 	each model to train, based on the training data and the accuracy with
 	which the model makes predictions. We calculate accuracy as
-	\(a = \frac{|correct\ predictions|}{|predictions|} = \frac{tp + tn}{tp + tn + fp + fn}\)
+	\[a = \frac{|correct\ predictions|}{|predictions|} = \frac{tp + tn}{tp + tn + fp + fn}\]
 	where \(tp\) is the number of true positives, \(tn\) is the number of
 	true negatives, \(fp\) is the number of false positives, and \(tp\) is
 	the number of false negatives.
@@ -299,11 +299,11 @@
 	network and traditional machine learning technique}
 	\label{tab:results}
 	\end{table}
-	
+
 	We can see by the results that Deep Neural Networks outperform our
 	benchmark classification models, although the time required to train
 	these networks is significantly greater.
-	
+
 	\section{Conclusion} \label{sec:conclusion}
 	Image from the ``Where's Waldo?'' puzzle books are ideal images to test