diff --git a/mini_proj/report/waldo.tex b/mini_proj/report/waldo.tex
index 37ac784..9390e02 100644
--- a/mini_proj/report/waldo.tex
+++ b/mini_proj/report/waldo.tex
@@ -260,7 +260,7 @@
 To evaluate the performance of the models, we record the time taken by
 each model to train, based on the training data and the accuracy with
 which the model makes predictions. We calculate accuracy as
-\(a = \frac{|correct\ predictions|}{|predictions|} = \frac{tp + tn}{tp + tn + fp + fn}\)
+\[a = \frac{|correct\ predictions|}{|predictions|} = \frac{tp + tn}{tp + tn + fp + fn}\]
 where \(tp\) is the number of true positives, \(tn\) is the number of
-true negatives, \(fp\) is the number of false positives, and \(tp\) is
+true negatives, \(fp\) is the number of false positives, and \(fn\) is
 the number of false negatives.
@@ -299,11 +299,11 @@
 network and traditional machine learning technique}
 \label{tab:results}
 \end{table}
-
+
 We can see by the results that Deep Neural Networks outperform our
 benchmark classification models, although the time required to train
 these networks is significantly greater.
-
+
 \section{Conclusion}
 \label{sec:conclusion}
-Image from the ``Where's Waldo?'' puzzle books are ideal images to test
+Images from the ``Where's Waldo?'' puzzle books are ideal images to test
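The accuracy definition being moved into display math above can be sanity-checked with a short Python sketch (the function name and the confusion-matrix counts below are illustrative, not from the patch):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correct predictions: (tp + tn) / (tp + tn + fp + fn)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion-matrix counts, for illustration only.
print(accuracy(tp=40, tn=45, fp=10, fn=5))  # 0.85
```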