
MyMediaLite: Rating Prediction Tool

News

MyMediaLite 3.11 has been released.


MyMediaLite Rating Prediction 3.11

 usage:  rating_prediction --training-file=FILE --recommender=METHOD [OPTIONS]

  recommenders (plus options and their defaults):
   - BiPolarSlopeOne
   - FactorWiseMatrixFactorization num_factors=10 shrinkage=25 sensibility=1E-05 num_iter=10 reg_u=15 reg_i=10
       supports --find-iter=N
   - GlobalAverage
       supports --online-evaluation
   - ItemAttributeKNN k=80 correlation=BinaryCosine weighted_binary=False alpha=0; baseline predictor: reg_u=15 reg_i=10 num_iter=10
       needs --item-attributes=FILE
       supports --online-evaluation
   - ItemAverage
       supports --online-evaluation
   - ItemKNN k=80 correlation=BinaryCosine weighted_binary=False alpha=0; baseline predictor: reg_u=15 reg_i=10 num_iter=10
       supports --online-evaluation
   - MatrixFactorization num_factors=10 regularization=0.015 learn_rate=0.01 learn_rate_decay=1 num_iter=30
       supports --find-iter=N, --online-evaluation
   - SlopeOne
   - UserAttributeKNN k=80 correlation=BinaryCosine weighted_binary=False alpha=0; baseline predictor: reg_u=15 reg_i=10 num_iter=10
       needs --user-attributes=FILE
       supports --online-evaluation
   - UserAverage
       supports --online-evaluation
   - UserItemBaseline reg_u=15 reg_i=10 num_iter=10
       supports --find-iter=N, --online-evaluation
   - UserKNN k=80 correlation=BinaryCosine weighted_binary=False alpha=0; baseline predictor: reg_u=15 reg_i=10 num_iter=10
       supports --online-evaluation
   - TimeAwareBaseline num_iter=30 bin_size=70 beta=0.4 user_bias_learn_rate=0.003 item_bias_learn_rate=0.002 alpha_learn_rate=1E-05 item_bias_by_time_bin_learn_rate=5E-06 user_bias_by_day_learn_rate=0.0025 user_scaling_learn_rate=0.008 user_scaling_by_day_learn_rate=0.002 reg_u=0.03 reg_i=0.03 reg_alpha=50 reg_item_bias_by_time_bin=0.1 reg_user_bias_by_day=0.005 reg_user_scaling=0.01 reg_user_scaling_by_day=0.005
       supports --find-iter=N
   - TimeAwareBaselineWithFrequencies num_iter=40 bin_size=70 beta=0.4 user_bias_learn_rate=0.00267 item_bias_learn_rate=0.000488 alpha_learn_rate=3.11E-06 item_bias_by_time_bin_learn_rate=0.000115 user_bias_by_day_learn_rate=0.000257 user_scaling_learn_rate=0.00564 user_scaling_by_day_learn_rate=0.00103 reg_u=0.0255 reg_i=0.0255 reg_alpha=3.95 reg_item_bias_by_time_bin=0.0929 reg_user_bias_by_day=0.00231 reg_user_scaling=0.0476 reg_user_scaling_by_day=0.019 frequency_log_base=6.76 item_bias_at_frequency_learn_rate=0.00236 reg_item_bias_at_frequency=1.1E-08
       supports --find-iter=N
   - CoClustering num_user_clusters=3 num_item_clusters=3 num_iter=30
       supports --find-iter=N
   - Random
       supports --online-evaluation
   - Constant constant_rating=1
       supports --online-evaluation
   - LatentFeatureLogLinearModel num_factors=10 bias_reg=0.01 reg_u=0.015 reg_i=0.015 frequency_regularization=False learn_rate=0.01 bias_learn_rate=1 num_iter=30 loss=RMSE
       supports --find-iter=N
   - BiasedMatrixFactorization num_factors=10 bias_reg=0.01 reg_u=0.015 reg_i=0.015 frequency_regularization=False learn_rate=0.01 bias_learn_rate=1 learn_rate_decay=1 num_iter=30 bold_driver=False loss=RMSE max_threads=1 naive_parallelization=False
       supports --find-iter=N, --online-evaluation
   - SVDPlusPlus num_factors=10 regularization=0.015 bias_reg=0.33 frequency_regularization=False learn_rate=0.001 bias_learn_rate=0.7 learn_rate_decay=1 num_iter=30
       supports --find-iter=N, --online-evaluation
   - SigmoidSVDPlusPlus num_factors=10 regularization=0.015 bias_reg=0.33 frequency_regularization=False learn_rate=0.001 bias_learn_rate=0.7 learn_rate_decay=1 num_iter=30 loss=RMSE
       supports --find-iter=N, --online-evaluation
   - SocialMF num_factors=10 reg_u=0.015 reg_i=0.015 bias_reg=0.01 social_regularization=1 learn_rate=0.01 bias_learn_rate=1 num_iter=30 bold_driver=False loss=RMSE
       needs --user-relations=FILE
       supports --find-iter=N, --online-evaluation
   - SigmoidItemAsymmetricFactorModel num_factors=10 regularization=0.015 bias_reg=0.33 frequency_regularization=False learn_rate=0.001 bias_learn_rate=0.7 learn_rate_decay=1 num_iter=30 loss=RMSE
       supports --find-iter=N, --online-evaluation
   - SigmoidUserAsymmetricFactorModel num_factors=10 regularization=0.015 bias_reg=0.33 frequency_regularization=False learn_rate=0.001 bias_learn_rate=0.7 learn_rate_decay=1 num_iter=30 loss=RMSE
       supports --find-iter=N, --online-evaluation
   - SigmoidCombinedAsymmetricFactorModel num_factors=10 regularization=0.015 bias_reg=0.33 frequency_regularization=False learn_rate=0.001 bias_learn_rate=0.7 learn_rate_decay=1 num_iter=30 loss=RMSE
       supports --find-iter=N, --online-evaluation
   - NaiveBayes class_smoothing=1 attribute_smoothing=1
       needs --item-attributes=FILE
       supports --online-evaluation
   - ExternalRatingPredictor prediction_file=FILENAME
   - GSVDPlusPlus num_factors=10 regularization=0.015 bias_reg=0.33 frequency_regularization=False learn_rate=0.001 bias_learn_rate=0.7 learn_rate_decay=1 num_iter=30
       needs --item-attributes=FILE
       supports --find-iter=N, --online-evaluation
  method ARGUMENTS have the form name=value
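
  example: the command below trains the default matrix factorization recommender with custom
  hyperparameters passed as name=value pairs via --recommender-options (a sketch; the file names
  ml.train and ml.test are placeholders, and the quoted option string assumes a POSIX shell):

   rating_prediction --training-file=ml.train --test-file=ml.test \
                     --recommender=BiasedMatrixFactorization \
                     --recommender-options="num_factors=20 num_iter=50 reg_u=0.1 reg_i=0.1"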

  general OPTIONS:
   --recommender=METHOD             set recommender method (default BiasedMatrixFactorization)
   --recommender-options=OPTIONS    use OPTIONS as recommender options
   --help                           display this usage information and exit
   --version                        display version information and exit
   --random-seed=N                  initialize random number generator with N
   --rating-type=float|byte         store ratings internally as floats (default) or bytes
   --no-id-mapping                  do not map user and item IDs to internal IDs, keep original IDs
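
  example: a reproducible run that fixes the random number generator seed and stores ratings as
  bytes to save memory (a sketch; ratings.train is a placeholder file name, and --test-ratio is
  used here only to obtain a quick evaluation split):

   rating_prediction --training-file=ratings.train --recommender=MatrixFactorization \
                     --random-seed=1 --rating-type=byte --test-ratio=0.2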

  files:
   --training-file=FILE                   read training data from FILE
   --test-file=FILE                       read test data from FILE
   --test-no-ratings                      test data contains no rating column
                                          (needs both --prediction-file=FILE and --test-file=FILE)
   --file-format=movielens_1m|kddcup_2011|ignore_first_line|default
   --data-dir=DIR                         load all files from DIR
   --user-attributes=FILE                 file with user attribute information, 1 tuple per line
   --item-attributes=FILE                 file with item attribute information, 1 tuple per line
   --user-relations=FILE                  file with user relation information, 1 tuple per line
   --item-relations=FILE                  file with item relation information, 1 tuple per line
   --save-model=FILE                      save computed model to FILE
   --load-model=FILE                      load model from FILE
   --save-user-mapping=FILE               save user ID mapping to FILE
   --save-item-mapping=FILE               save item ID mapping to FILE
   --load-user-mapping=FILE               load user ID mapping from FILE
   --load-item-mapping=FILE               load item ID mapping from FILE
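
  example: a hypothetical two-step workflow that trains a model once, stores it together with the
  user and item ID mappings, and reloads everything in a later run (all file names are placeholders):

   rating_prediction --training-file=ratings.train --test-file=ratings.test \
                     --recommender=UserItemBaseline --save-model=baseline.model \
                     --save-user-mapping=users.map --save-item-mapping=items.map

   rating_prediction --training-file=ratings.train --test-file=ratings.test \
                     --recommender=UserItemBaseline --load-model=baseline.model \
                     --load-user-mapping=users.map --load-item-mapping=items.map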

  prediction options:
   --prediction-file=FILE         write the rating predictions to FILE
   --prediction-line=FORMAT       format of the prediction line; {0}, {1}, {2} refer to user ID,
                                  item ID, and predicted rating; default is {0}\t{1}\t{2}
   --prediction-header=LINE       print LINE to the first line of the prediction file
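
  example: writing predictions for user-item pairs without known ratings to a CSV-style file with a
  custom line format and header (a sketch; pairs.test and predictions.csv are placeholder file names):

   rating_prediction --training-file=ratings.train --test-file=pairs.test --test-no-ratings \
                     --recommender=ItemAverage --prediction-file=predictions.csv \
                     --prediction-line="{0},{1},{2}" --prediction-header="user_id,item_id,prediction"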

  evaluation options:
   --cross-validation=K                perform k-fold cross-validation on the training data
   --test-ratio=NUM                    use a ratio of NUM of the training data for evaluation (simple split)
   --chronological-split=NUM|DATETIME  use the last ratio of NUM of the training data ratings for evaluation,
                                       or use the ratings from DATETIME on for evaluation (requires time information
                                       in the training data)
   --online-evaluation                 perform online evaluation (use every tested rating for incremental training)
   --search-hp                         search for good hyperparameter values (experimental feature)
   --compute-fit                       display fit on training data
   --measures=LIST                     comma- or space-separated list of evaluation measures to display (default is RMSE, MAE, CBD)
                                       use --help-measures to get a list of all available measures
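
  example: 5-fold cross-validation on the training data, reporting only RMSE and MAE (a sketch;
  ratings.train is a placeholder file name):

   rating_prediction --training-file=ratings.train --recommender=UserKNN \
                     --cross-validation=5 --measures="RMSE MAE"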

  options for finding the right number of iterations (iterative methods):
   --find-iter=N                  give out statistics every N iterations
   --num-iter=N                   start measuring at N iterations
   --max-iter=N                   perform at most N iterations
   --epsilon=NUM                  abort iterations if the main evaluation measure exceeds the best result by more than NUM
   --cutoff=NUM                   abort if main evaluation measure is above NUM
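
  example: tracking evaluation measures every 5 iterations of an iterative recommender to find a
  good number of iterations, stopping after at most 100 iterations or once results stop improving
  (a sketch; the file names are placeholders):

   rating_prediction --training-file=ratings.train --test-file=ratings.test \
                     --recommender=SVDPlusPlus --find-iter=5 --max-iter=100 --epsilon=0.01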
