However, such ground-truth labels are rare and mostly reserved for training. Deep learning approaches in particular are data-hungry algorithms, requiring a large number of labeled examples.
In this project we explore the possibility of using uncertainty measurements with Monte-Carlo dropout for the identification of model-induced misclassifications. In particular, we obtain uncertainty measures from several inferences performed with Monte-Carlo dropout. Furthermore, we examine how Markov Random Field optimization can reduce the number of misclassifications and facilitate their identification. We assess the extent to which these uncertainties provide information about misclassifications.
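The core idea of Monte-Carlo dropout can be sketched as follows: dropout is kept active at inference time, the network is run several times on the same input, and the spread of the resulting class probabilities yields an uncertainty measure such as the predictive entropy. The sketch below illustrates this with a toy single-layer softmax classifier; the weights, dropout rate, and number of samples are illustrative assumptions, not the setup used in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights of a single-layer softmax classifier
# (hypothetical stand-in for a trained network).
W = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, n_samples=50, p_drop=0.5):
    """Several stochastic forward passes with dropout kept active at inference."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop      # Bernoulli dropout mask
        h = (x * mask) / (1.0 - p_drop)          # inverted-dropout scaling
        probs.append(softmax(h @ W))
    probs = np.array(probs)                      # shape (n_samples, n_classes)
    mean_p = probs.mean(axis=0)                  # averaged class probabilities
    # Predictive entropy of the mean distribution as the uncertainty measure.
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum()
    return mean_p, entropy

x = rng.normal(size=8)
mean_p, H = mc_dropout_predict(x)
```

Pixels (or samples) with high predictive entropy are the candidates flagged as potential misclassifications.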
Our results show that 51 % of the misclassifications can be detected using uncertainties. Applying Markov Random Field optimization reduces the percentage of misclassifications while detecting 0.4 % more misclassifications than without it.