Learning with Counts: Multiclass classification with NYC taxi data

July 16, 2015

This sample demonstrates how to use the learning with counts modules for performing multiclass classification on the publicly available NYC taxi dataset. We use a multiclass logistic regression learner to model this problem.
# Learning with Counts: Multiclass Classification with NYC taxi data

Learning with counts is a useful technique for efficiently encoding high-dimensional (also called "high-cardinality") categorical variables. In this experiment, we demonstrate how to use the **Learning with Counts** modules (**Build Counting Transform**, **Modify Count Table Parameters**) and the **Apply Transformation** module to generate compact representations of high-dimensional categorical variables. These derived features are then used in a multiclass classification model to predict whether a passenger will tip and which bin the tip falls in.

- For each unique value of a selected column, the **Build Counting Transform** module counts the number of examples belonging to each class. The module then outputs a transform that can be used to featurize the categorical values with default parameters.
- The **Modify Count Table Parameters** module can be used to change the parameters used when featurizing the categorical values.
- The **Apply Transformation** module applies the transform to a dataset with the same schema as the input of the **Build Counting Transform** module and replaces the original categorical values with features (such as log-odds, counts of each class, and an optional backoff column).

For more information about using counts in machine learning, see the online [help](https://msdn.microsoft.com/library/azure/81c457af-f5c0-4b2d-922c-fdef2274413c).

## Data

We use the New York City taxi dataset in this experiment, freely available [here](http://www.andresmh.com/nyctaxitrips/). The dataset consists of two sets of data: the trip data and the fare data. A few lines of the trip data are shown below:

```
medallion,hack_license,vendor_id,rate_code,store_and_fwd_flag,pickup_datetime,dropoff_datetime,passenger_count,trip_time_in_secs,trip_distance,pickup_longitude,pickup_latitude,dropoff_longitude,dropoff_latitude
89D227B655E5C82AECF13C3F540D4CF4,BA96DE419E711691B9445D6A6307C170,CMT,1,N,2013-01-01 15:11:48,2013-01-01 15:18:10,4,382,1.00,-73.978165,40.757977,-73.989838,40.751171
0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,1,N,2013-01-06 00:18:35,2013-01-06 00:22:54,1,259,1.50,-74.006683,40.731781,-73.994499,40.75066
0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,1,N,2013-01-05 18:49:41,2013-01-05 18:54:23,1,282,1.10,-74.004707,40.73777,-74.009834,40.726002
DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,1,N,2013-01-07 23:54:15,2013-01-07 23:58:20,2,244,.70,-73.974602,40.759945,-73.984734,40.759388
DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,1,N,2013-01-07 23:25:03,2013-01-07 23:34:24,1,560,2.10,-73.97625,40.748528,-74.002586,40.747868
```

We see that the trip data consists of driver details (medallion, hack\_license, vendor\_id) and trip details such as pickup and dropoff times, the number of passengers, trip time and distance, and the GPS coordinates of the pickup and dropoff.
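As a quick way to explore this data outside of Azure ML Studio, one could load one month of the trip files with pandas. The sketch below is purely illustrative; the local file name is an assumption based on how the public download is packaged, and the column names follow the excerpt above.

```python
import pandas as pd

# Load one month of the NYC taxi trip data (file name assumed from the public download).
trip = pd.read_csv(
    "trip_data_1.csv",
    parse_dates=["pickup_datetime", "dropoff_datetime"],
)

# The driver identifiers are high-cardinality categorical columns.
print(trip[["medallion", "hack_license", "vendor_id"]].nunique())

# Basic numeric summaries of the trip details.
print(trip[["trip_time_in_secs", "trip_distance", "passenger_count"]].describe())
```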
The fare data, on the other hand, contains the fare details of each trip; a few lines are shown below:

```
medallion, hack_license, vendor_id, pickup_datetime, payment_type, fare_amount, surcharge, mta_tax, tip_amount, tolls_amount, total_amount
89D227B655E5C82AECF13C3F540D4CF4,BA96DE419E711691B9445D6A6307C170,CMT,2013-01-01 15:11:48,CSH,6.5,0,0.5,0,0,7
0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,2013-01-06 00:18:35,CSH,6,0.5,0.5,0,0,7
0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,2013-01-05 18:49:41,CSH,5.5,1,0.5,0,0,7
DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,2013-01-07 23:54:15,CSH,5,0.5,0.5,0,0,6
```

We see that, in addition to some common fields such as the driver details, this dataset contains the fare amount, the tolls and surcharges, and the tip amount.

## Multiclass classification problem

The multiclass classification problem we pose here takes the form: given the driver and trip details, into which bin will a passenger's tip fall? We define the tip bins as follows:

- Class 0: Tip = $0
- Class 1: Tip > $0 and Tip < $1
- Class 2: Tip >= $1 and Tip < $5
- Class 3: Tip >= $5 and Tip < $20
- Class 4: Tip >= $20

After joining the trip and fare datasets on medallion, hack\_license, and vendor\_id and attaching an additional column called "tip\_bin\_value", which takes a bin value based on the above rules, we obtain a dataset of the form shown below:

```
medallion,hack_license,vendor_id,rate_code,pickup_datetime,dropoff_datetime,passenger_count,trip_time_in_secs,trip_distance,pickup_longitude,pickup_latitude,dropoff_longitude,dropoff_latitude,payment_type,fare_amount,surcharge,mta_tax,tip_amount,tolls_amount,total_amount,tip_bin_value,tipped
413F4FE8B13419006400C2A8517D7A44,01A7DEBB426ABA1C9CFD9DC4711EF497,CMT,1,2013-12-07 21:59:14,2013-12-07 22:08:18,1,543.0,1.7,-73.9702,40.757236,-73.952904,40.769035,CRD,9.0,0.5,0.5,2.5,0.0,12.5,1,1
413F4FE8B13419006400C2A8517D7A44,334CBA3C4F54A6A9E02BB506F74C674B,CMT,1,2013-06-29 01:31:18,2013-06-29 01:37:13,1,354.0,0.8,-73.993774,40.745815,-74.003891,40.742088,CSH,5.5,0.5,0.5,0.0,0.0,6.5,0,0
413F4FE8B13419006400C2A8517D7A44,FD1176E5658567D01B51C43525BA5672,CMT,1,2013-01-20 03:26:40,2013-01-20 03:52:49,3,1568.0,14.5,-74.008926,40.726002,-73.828033,40.68576,CRD,42.0,0.5,0.5,10.75,0.0,53.75,3,1
41410577B81EBF63D371BD07D3092DF9,2311C55F3F626C2956E79624CD0DA084,VTS,5,2013-11-17 01:54:00,2013-11-17 02:10:00,2,960.0,9.14,-73.989258,40.757542,-74.074799,40.764229,CRD,55.0,0.0,0.5,13.05,10.25,78.8,3,1
```

Note that we wish to predict the column "tip\_bin\_value". We also refer to it as the label column in what follows.

## Label distribution

Of interest in classification problems is the label distribution. The label distribution of our training data is shown below:

![][labelDistributionMulticlass]

We note that the label distribution is skewed; classes 0 and 1 make up almost 95% of the data, while classes 2, 3, and 4 make up merely 5% of the total.

## Experiment

We now show the experiment in full, and then describe its various components.

![][fullExperimentMulticlass]

### Accessing the train and test datasets

We use the **Reader** module to access the NYC taxi datasets via publicly available blobs. By choosing the "PublicOrSAS" option, we can access data stored in public blob storage. The train dataset is used for training our models; we evaluate model performance on the test dataset.
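Before moving on to feature engineering, here is a minimal sketch of how the join and the tip-bin labeling described above could be reproduced locally with pandas. The file names, the header whitespace clean-up, and the use of pickup\_datetime as an extra join key are assumptions; the bin rules follow the class definitions given earlier.

```python
import numpy as np
import pandas as pd

# File names are assumptions based on how the public download is packaged.
trip = pd.read_csv("trip_data_1.csv")
fare = pd.read_csv("trip_fare_1.csv")
# The fare header in the excerpt above has spaces after the commas; strip them.
fare.columns = fare.columns.str.strip()

# Join on the keys listed above; pickup_datetime is added here as an assumption
# to keep each trip's match unique.
keys = ["medallion", "hack_license", "vendor_id", "pickup_datetime"]
data = trip.merge(fare, on=keys, how="inner")

# Assign the tip bin according to the class definitions above.
tip = data["tip_amount"]
data["tip_bin_value"] = np.select(
    [tip <= 0, tip < 1, tip < 5, tip < 20],  # evaluated in order; first match wins
    [0, 1, 2, 3],
    default=4,
)
# Binary indicator that also appears in the joined excerpt above.
data["tipped"] = (tip > 0).astype(int)

print(data["tip_bin_value"].value_counts(normalize=True).sort_index())
```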
## Feature Engineering

As mentioned in the introduction, this experiment showcases how to produce a compact representation of high-dimensional categorical features by using the learning with counts approach. In our data, some of the high-dimensional categorical features are "medallion", "hack\_license", and the GPS coordinates. The number of unique values for a few of these categorical variables is listed below:

- medallion: more than 13,000 unique values
- hack_license: more than 39,000 unique values
- pickup_longitude: more than 42,000 unique values

As we see, the number of unique values (and hence the dimensionality) of these categorical variables is very large. We expect count features to help by producing a compact representation of this high-dimensional data. To use count features in our modeling, the first step is to use the **Build Counting Transform** module, as shown below, to generate the counting transform on our chosen categorical variables.

**Important note:** In this sample experiment, we compute the count features on the train dataset and then use that transform to compute count features on the test dataset. In practice, it is even better to use an entirely separate dataset just for computing the counts.

### Build Counting Transform

We use the [**Build Counting Transform**](https://msdn.microsoft.com/library/azure/166586ff-5bba-46a9-b469-20179f179b6c) module, configured as follows:

![][buildCountTransformMulticlass]

1. We first select the number of classes; in our case, this is 5, since we are performing multiclass classification over the 5 classes outlined above.
2. Next, we choose the number of bits of the hashing function used to construct the dictionary transform. We choose 23.
3. Next, we choose a random seed; this allows the transform to be reproduced if needed.
4. In "Module type", we choose "Dataset". Note that the module can also take data from an Azure Machine Learning dataset or from MapReduce (where the data is stored in HDFS).
5. In "Label column index or name", we select "tip\_bin\_value", the column chosen as the label.
6. In "Select columns to count", we select "medallion", "hack\_license", "vendor\_id", "pickup\_longitude", "pickup\_latitude", "dropoff\_longitude", and "dropoff\_latitude".
7. Finally, we specify the type of count table to be constructed: either a dictionary or a count-min sketch. We choose a dictionary here and do not delve into the differences between these approaches.

This module outputs a transform that can be used to featurize the selected data columns.

### Modify Count Table Parameters

To control the output of the count transform, we use the **Modify Count Table Parameters** module. For this experiment, we select the **LogOddsOnly** option for the output features and select the option **Ignore back off column**. We use default values for the other parameters. This is shown below.

![][modifyCountTableParamsMulticlass]

Note that to generate the count features, we use the **Apply Transformation** module, described next.

### Apply Transformation

To apply the counting transform to the train dataset, we simply use the **Apply Transformation** module and connect one of its ports to the modified count transform and the other to the train dataset. This is shown below (a similar procedure is repeated for the test dataset).

![][applyTransformationMulticlass]

An excerpt of the result of generating count features is shown below:

![][countFeaturesMulticlass]
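To make the counting transform concrete, the sketch below emulates a dictionary-based count table with log-odds-only output in pandas, fitted on the train set and then applied to both train and test. It is a minimal illustration of the idea rather than the module's implementation: the smoothing constant, the handling of unseen values, and the helper names are assumptions.

```python
import numpy as np
import pandas as pd

N_CLASSES = 5
SMOOTH = 1.0  # additive smoothing; the module's exact prior is an assumption here


def build_count_tables(df, columns, label_col="tip_bin_value"):
    """For each selected column, count examples of each class per unique value."""
    tables = {}
    for col in columns:
        # Rows: unique values of the column; columns: counts for classes 0..4.
        counts = (
            df.groupby([col, label_col]).size().unstack(fill_value=0)
            .reindex(columns=range(N_CLASSES), fill_value=0)
        )
        tables[col] = counts
    return tables


def apply_count_features(df, tables):
    """Replace each counted column with per-class log-odds features (LogOddsOnly)."""
    out = df.copy()
    for col, counts in tables.items():
        # Values unseen at apply time fall back to all-zero counts ("garbage bin").
        aligned = counts.reindex(out[col]).fillna(0.0).to_numpy()
        totals = aligned.sum(axis=1, keepdims=True)
        log_odds = np.log((aligned + SMOOTH) / (totals - aligned + SMOOTH))
        for k in range(N_CLASSES):
            out[f"{col}_logodds_{k}"] = log_odds[:, k]
        out = out.drop(columns=[col])
    return out


# Fit the tables on the train set only, then apply to both train and test,
# mirroring Build Counting Transform followed by Apply Transformation.
# count_cols = ["medallion", "hack_license", "vendor_id", "pickup_longitude",
#               "pickup_latitude", "dropoff_longitude", "dropoff_latitude"]
# tables = build_count_tables(train_df, count_cols)
# train_feat = apply_count_features(train_df, tables)
# test_feat = apply_count_features(test_df, tables)
```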
### Project Columns

At this stage, we are ready to filter out columns that are potential target leaks, as well as columns that we think are not essential to the modeling process. For this multiclass classification problem, we filter out the following target leaks: "tipped", "total\_amount", "tip\_amount", and "randnum". To do this, we use the **Project Columns** module shown below:

![][projectColumnsMulticlass]

After performing all of the above steps for both the train and test datasets, we are ready to build our multiclass classification model.

### Choice of learner

For the multiclass classification problem, we choose a multiclass logistic regression learner. To use this learner, we find **Multiclass Logistic Regression** using the Search toolbar and drag and drop the module onto the experiment canvas. We then do the same for the **Train Model** module. The inputs to the **Train Model** module are the train dataset and the learner, as shown below:

![][trainModelMulticlass]

For simplicity, we use the default values of the learner parameters here.

### Scoring the model on test data

After training is complete, we measure the performance of our model on the test data by using the **Score Model** module, as shown below.

![][scoreModelMulticlass]

## Model Performance

We can now evaluate model performance using the **Evaluate Model** module shown below.

![][evaluateModelMulticlass]

Since this is a multiclass classification problem, we use a class confusion matrix to measure model performance. Below, we show these metrics on our dataset.

![][modelPerfMulticlass]

We see that the prediction accuracy on the populous classes (classes 0 and 1) is quite good. In addition, the prediction accuracy on the rarer classes 2 and 3 is also reasonable, given that we have fewer examples to learn from. This performance can be improved further by two additional simple steps in the modeling process:

1. Use the **Clean Missing Data** module to sanitize missing values in columns.
2. Use the **Sweep Parameters** module to run parameter sweeps and pick the best multiclass logistic regression parameter values, as opposed to the default settings we chose here.
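As an outside-the-studio illustration of this train/score/evaluate flow, the sketch below fits a multinomial logistic regression on the count features and prints a confusion matrix with scikit-learn. It assumes the `train_feat` and `test_feat` frames from the earlier sketches; it is a hedged stand-in for the studio modules, not the experiment itself.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

label = "tip_bin_value"
# Drop the label and the target leaks identified above (the Project Columns step).
drop_cols = ["tipped", "total_amount", "tip_amount", "randnum", label]

# Keep only numeric columns (the log-odds count features, trip time, distance, ...).
X_train = train_feat.drop(columns=drop_cols, errors="ignore").select_dtypes("number")
y_train = train_feat[label]
X_test = test_feat.drop(columns=drop_cols, errors="ignore").select_dtypes("number")[X_train.columns]
y_test = test_feat[label]

# Multinomial logistic regression stands in for the Multiclass Logistic Regression module.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))  # rows: true tip bins, columns: predicted bins
```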
In this experiment, we demonstrated the use of the learning with counts technique, using the **Build Counting Transform**, **Modify Count Table Parameters**, and **Apply Transformation** modules to generate new count-based features for multiclass classification on the NYC taxi dataset.

## Summary

We use the learning with counts approach to succinctly represent high-dimensional categorical variables in our modeling. This typically results in smaller models, faster run times, and sometimes better model performance. In particular, by employing these compact representations of high-dimensional categorical variables, model performance on the rarer classes improves because the variance of the model is reduced. For a comparison of models with and without count features, we refer the reader to this [blog](http://blogs.technet.com/b/machinelearning/archive/2015/04/02/building-azure-ml-models-on-the-nyc-taxi-dataset.aspx).

Finally, we note that although, for simplicity, we have shown how to perform multiclass classification on the NYC taxi dataset using only a sample of the data, the learning with counts technique is very scalable and has been demonstrated internally on very large datasets.

<!-- Images -->
[labelDistributionMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/labelDIstributionMulticlass.PNG
[buildCountTransformMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/buildCountTransformMulticlass.PNG
[fullExperimentMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/fullExperimentMulticlass.PNG
[modifyCountTableParamsMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/modifyCountTableParamsMulticlass.PNG
[applyTransformationMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/applyTransformationMulticlass.PNG
[countFeaturesMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/countFeaturesMulticlass.PNG
[projectColumnsMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/projectColumnsMulticlass.PNG
[trainModelMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/trainModelMulticlass.PNG
[scoreModelMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/scoreModelMulticlass.PNG
[evaluateModelMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/evaluateModelMulticlass.PNG
[modelPerfMulticlass]:https://az712634.vo.msecnd.net/samplesimg/v1/42/modelPerfMulticlass.PNG