Custom R Evaluator
This experiment includes a custom R evaluation module that computes standard performance metrics for a classifier and compares it against random and majority-class baseline classifiers.
The module returns per-class metrics as well as metrics for the baseline classifiers: weighted and unweighted random classifiers and a majority-class classifier.
The module expects as input a dataset containing the actual and predicted class labels; the names of these columns can be specified in the module's properties section.
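To illustrate the kind of output such an evaluator produces, here is a minimal Python sketch (not the module's actual R code) that computes per-class precision, recall, and F1 from actual and predicted labels, along with the expected accuracies of the three baselines: a majority-class classifier, a uniform random classifier, and a class-frequency-weighted random classifier. All function and variable names here are illustrative assumptions.

```python
from collections import Counter

def evaluate(actual, predicted):
    """Sketch of per-class metrics plus baseline accuracies.

    This is an illustrative approximation, not the R module's API.
    """
    classes = sorted(set(actual))
    n = len(actual)
    per_class = {}
    for c in classes:
        tp = sum(1 for a, p in zip(actual, predicted) if a == c and p == c)
        fp = sum(1 for a, p in zip(actual, predicted) if a != c and p == c)
        fn = sum(1 for a, p in zip(actual, predicted) if a == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_class[c] = {"precision": prec, "recall": rec, "f1": f1}

    accuracy = sum(a == p for a, p in zip(actual, predicted)) / n
    counts = Counter(actual)
    # Baseline expected accuracies:
    # - majority-class: always predict the most frequent class
    # - uniform random: guess each class with equal probability
    # - weighted random: guess each class with its observed frequency
    baselines = {
        "majority": counts.most_common(1)[0][1] / n,
        "uniform_random": 1.0 / len(classes),
        "weighted_random": sum((v / n) ** 2 for v in counts.values()),
    }
    return per_class, accuracy, baselines
```

For example, with `actual = ["a", "a", "a", "b"]` and `predicted = ["a", "b", "a", "b"]`, overall accuracy is 0.75, the majority baseline is also 0.75, and the weighted random baseline is 0.75² + 0.25² = 0.625, showing how a skewed class distribution inflates simple baselines.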
The R code is available at [GitHub][1].
*Created by a Microsoft employee*
[1]: https://github.com/saidbleik/Evaluation