October 31, 2017
The result of this experiment is part of the MLBench benchmark (arXiv:1707.09562v3).

Dataset: D-SCH
Model: C-SVM
Hyperparameter tuning: on
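The configuration above (a C-SVM with hyperparameter tuning enabled) can be sketched with scikit-learn's grid search. This is a minimal illustration only: the synthetic dataset, parameter grid, and train/test split below are assumptions for demonstration, not the benchmark's actual setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a binary classification dataset (the real
# benchmark harvests its datasets from Kaggle competitions).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C-SVM with hyperparameter tuning: a cross-validated grid search over
# the penalty parameter C and the RBF kernel width gamma (illustrative grid).
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.score(X_test, y_test))
```

With tuning "off", the analogous run would simply fit `SVC(kernel="rbf")` with default parameters and skip the grid search.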
The details of the benchmark are summarized in our publication:

MLBench: How Good Are Machine Learning Clouds for Binary Classification Tasks on Structured Data? (arXiv:1707.09562v3)

Abstract: We conduct an empirical study of machine learning functionalities provided by major cloud service providers, which we call machine learning clouds. Machine learning clouds hold the promise of hiding all the sophistication of running large-scale machine learning: instead of specifying how to run a machine learning task, users only specify what machine learning task to run and the cloud figures out the rest. Raising the level of abstraction, however, rarely comes for free: a performance penalty is possible. How good, then, are current machine learning clouds on real-world machine learning workloads? We study this question with a focus on binary classification problems. We present mlbench, a novel benchmark constructed by harvesting datasets from Kaggle competitions. We then compare the performance of the top winning code available from Kaggle with that of running machine learning clouds from both Azure and Amazon on mlbench. Our comparative study reveals the strengths and weaknesses of existing machine learning clouds and points out potential future directions for improvement.

Note: if you feel that your copyright is violated and would prefer us to remove a certain dataset, please send an email to the last author of the mlbench paper.