
COMPSCI 753

Algorithms for Massive Data

Assignment 2 / Semester 2, 2024

Recommender Systems

General instructions and data

Recommender systems are widely used in entertainment. In this assignment, we will explore one of the Goodreads review datasets using the recommendation algorithms learned in the lectures. To make the task feasible on most laptops and PCs, we have extracted a manageable dataset of reviews on Young Adult books (containing 2,389,900 reviews). We have split the dataset into training data (1,433,940 reviews), validation data (477,980 reviews) and test data (477,980 reviews). The corresponding files can be found on the assignment page. These files share the same format: each line includes a user id, item id, review id and rating.

Submission

Please submit (1) a file (.pdf or .html) that reports the requested answers of each task, and (2) a source code file containing detailed comments. Submit this on Canvas by 23:59 NZST, Sunday 8 September. The files must contain your student ID, UPI and name.

Penalty Dates

The assignment will not be accepted after the last penalty date unless there are special circumstances (e.g., sickness with certificate). Penalties will be calculated as follows:

• 23:59 NZST, Sunday 8 September – No penalty

• 23:59 NZST, Monday 9 September – 25% penalty

• 23:59 NZST, Tuesday 10 September – 50% penalty

Tasks (100 points)

This assignment is composed of three tasks. Some considerations you may want to follow:

1. Data is provided in json files. Some help on reading json files:

https://www.geeksforgeeks.org/read-json-file-using-python/

2. When developing your solution, it is recommended that you test your code on a small sample of the data and make sure it doesn't have bugs before running it on the whole dataset. This will speed up your development process.
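A minimal sketch of loading one of the review files, assuming it is newline-delimited JSON and that the fields are named user_id, item_id, review_id and rating (the key names are an assumption; adjust them to match the actual files):

```python
import json

def load_reviews(lines):
    """Parse newline-delimited JSON reviews into (user_id, item_id, rating) triples."""
    triples = []
    for line in lines:
        rec = json.loads(line)
        triples.append((rec["user_id"], rec["item_id"], float(rec["rating"])))
    return triples

# With a real file: triples = load_reviews(open("train.json"))
sample = ['{"user_id": "u1", "item_id": "b1", "review_id": "r1", "rating": 4}']
print(load_reviews(sample))  # [('u1', 'b1', 4.0)]
```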

Task 1 [10 points]: Explore biases

Calculate the global bias bg, user specific bias bi(user) and item specific bias bj(item) on the training data. Report:

(A) [4 points] The global bias bg

(B) [3 points] The user specific bias of user id= “91ceb82d91493506532feb02ce751ce7”

(C) [3 points] The item specific bias of item id = “6931234”.
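One common way to compute these biases, assuming the lecture definition in which a user's or item's bias is its mean rating minus the global mean bg (follow the slides if they define the biases differently):

```python
from collections import defaultdict

def compute_biases(ratings):
    """ratings: iterable of (user_id, item_id, rating) triples.
    Returns the global mean b_g, plus per-user and per-item biases,
    each defined as that user's/item's mean rating minus b_g."""
    ratings = list(ratings)
    b_g = sum(r for _, _, r in ratings) / len(ratings)
    user_acc = defaultdict(lambda: [0.0, 0])  # user -> [sum of ratings, count]
    item_acc = defaultdict(lambda: [0.0, 0])  # item -> [sum of ratings, count]
    for u, i, r in ratings:
        user_acc[u][0] += r; user_acc[u][1] += 1
        item_acc[i][0] += r; item_acc[i][1] += 1
    b_user = {u: s / n - b_g for u, (s, n) in user_acc.items()}
    b_item = {i: s / n - b_g for i, (s, n) in item_acc.items()}
    return b_g, b_user, b_item

toy = [("u1", "a", 4.0), ("u1", "b", 2.0), ("u2", "a", 3.0)]
b_g, b_user, b_item = compute_biases(toy)
print(b_g, b_item["a"])  # 3.0 0.5
```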

Task 2 [45 points]: Implement the regularized latent factor model without bias using SGD

(A) [30 points] Implement the regularized latent factor model without considering the bias. The optimization problem that needs to be solved is (see slide 8 of W5.2 lecture notes):

min_{P,Q} Σ_{(i,j)∈R} (r_ij − q_i · p_j)² + λ1 Σ_i ||q_i||² + λ2 Σ_j ||p_j||²

The initialization of P and Q should be random, from a normal distribution. Set the number of latent factors to k = 8. Use Stochastic Gradient Descent (SGD) to solve the optimization problem on the training data (see slide 9 of W5.2 lecture notes). Run SGD for 10 iterations (also called epochs), with a fixed learning rate η = 0.01 and regularization hyperparameters λ1 = λ2 = 0.3. Remember that the regularization terms involve the L2-norms of the qi and pj vectors for each user i and item j respectively.

Report the RMSE on the training data for each epoch, using the RMSE formula (see slide 36 of W4-5 lecture notes):

RMSE = sqrt( (1/|R|) Σ_{(i,j)∈R} (r_ij − r̂_ij)² ),  where r̂_ij = q_i · p_j
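A minimal SGD sketch for Task 2 (A), in pure Python for clarity. The updates absorb the factor of 2 from the squared-error gradient into the learning rate, and the 0.1 scale of the normal initialization is an assumption (check the slides for the exact conventions):

```python
import math, random

def train_lfm(ratings, k=8, epochs=10, lr=0.01, lam1=0.3, lam2=0.3, seed=0):
    """SGD for the regularized latent factor model without bias.
    ratings: list of (user_id, item_id, rating) triples."""
    rng = random.Random(seed)
    users = {u for u, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    # Random initialization from a normal distribution (scale 0.1 is an assumption)
    Q = {u: [rng.gauss(0, 0.1) for _ in range(k)] for u in users}
    P = {i: [rng.gauss(0, 0.1) for _ in range(k)] for i in items}
    for epoch in range(epochs):
        for u, i, r in ratings:
            qu, pi = Q[u], P[i]
            err = r - sum(a * b for a, b in zip(qu, pi))
            # Simultaneous update of user and item factors
            for f in range(k):
                qu[f], pi[f] = (qu[f] + lr * (err * pi[f] - lam1 * qu[f]),
                                pi[f] + lr * (err * qu[f] - lam2 * pi[f]))
        sse = sum((r - sum(a * b for a, b in zip(Q[u], P[i]))) ** 2
                  for u, i, r in ratings)
        print(f"epoch {epoch + 1}: train RMSE = {math.sqrt(sse / len(ratings)):.4f}")
    return Q, P
```

Iterating over the whole rating list in a fixed order is one valid SGD pass; shuffling the examples each epoch is another common choice.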

(B) [15 points] Use SGD to train the latent factor model on the training data for different values of k in {4, 8, 16}. For each value of k, train the model for 10 epochs/iterations. Report the RMSE for each value of k on the validation data. Pick the model that results in the best RMSE on the validation set and report its RMSE on the test data.

Task 3 [45 points]: Implement the regularized latent factor model with bias using SGD

(A) [30 points] Incorporate the bias terms bg, bi(user) and bj(item) into the latent factor model. The optimization problem that needs to be solved is (see slide 11 of W5.2 lecture notes):

min Σ_{(i,j)∈R} (r_ij − (bg + bi(user) + bj(item) + q_i · p_j))² + λ1 Σ_i ||q_i||² + λ2 Σ_j ||p_j||² + λ3 Σ_i bi(user)² + λ4 Σ_j bj(item)²

The initialization of P and Q should be random, from a normal distribution. Initialize the user bias bi(user) and item bias terms bj(item) using the values computed in Task 1. Set the number of latent factors k = 8. Run SGD for 10 epochs with a fixed learning rate η = 0.01 and regularization hyperparameters λ1 = λ2 = λ3 = λ4 = 0.3. Report the RMSE on the training data for each epoch. After finishing all epochs, report the learned user-specific bias of the user with user id = “91ceb82d91493506532feb02ce751ce7”, and the learned item-specific bias of the item with item id = “6931234”.
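The bias terms are updated inside the same SGD loop as the factors. A sketch extending the Task 2 updates, again absorbing the factor of 2 into η; a single lam stands in for λ1 = λ2 = λ3 = λ4 = 0.3, and bg is kept fixed at the global mean (an assumption — follow the slides if bg is also learned):

```python
import math, random

def train_lfm_bias(ratings, b_g, b_user, b_item,
                   k=8, epochs=10, lr=0.01, lam=0.3, seed=0):
    """SGD for the regularized latent factor model with bias terms.
    b_user / b_item hold the Task 1 biases used for initialization;
    users/items missing from them start at bias 0."""
    rng = random.Random(seed)
    users = {u for u, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    Q = {u: [rng.gauss(0, 0.1) for _ in range(k)] for u in users}
    P = {i: [rng.gauss(0, 0.1) for _ in range(k)] for i in items}
    bu = {u: b_user.get(u, 0.0) for u in users}
    bi = {i: b_item.get(i, 0.0) for i in items}
    for epoch in range(epochs):
        for u, i, r in ratings:
            qu, pi = Q[u], P[i]
            pred = b_g + bu[u] + bi[i] + sum(a * b for a, b in zip(qu, pi))
            err = r - pred
            bu[u] += lr * (err - lam * bu[u])  # user-bias update
            bi[i] += lr * (err - lam * bi[i])  # item-bias update
            for f in range(k):
                qu[f], pi[f] = (qu[f] + lr * (err * pi[f] - lam * qu[f]),
                                pi[f] + lr * (err * qu[f] - lam * pi[f]))
        sse = sum((r - (b_g + bu[u] + bi[i]
                        + sum(a * b for a, b in zip(Q[u], P[i])))) ** 2
                  for u, i, r in ratings)
        print(f"epoch {epoch + 1}: train RMSE = {math.sqrt(sse / len(ratings)):.4f}")
    return Q, P, bu, bi
```

After training, the two requested biases are simply bu["91ceb82d91493506532feb02ce751ce7"] and bi["6931234"].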

(B) [15 points] Similar to Task 2 (B), find the best k in {4, 8, 16} for the model you developed in Task 3 (A) on the validation set, by using RMSE to compare across these models, and apply the best of these models to the test data. Compare the resulting test RMSE with Task 2 (B). Analyse and explain your findings.

Note: In this case, you may have users and/or items in the validation or test set that are not in the training set (i.e. you may experience the cold start problem). Therefore, you will not have information about the bias of these users or items. For those users or items, use a bias of 0 in your calculations.
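For prediction on the validation/test sets, unseen users or items can be handled as the note suggests, with bias 0; using a zero latent vector for them, so the dot product contributes nothing, is an additional assumption beyond the note:

```python
def predict(u, i, b_g, Q, P, bu, bi, k=8):
    """Predict a rating; users/items absent from the training set
    get bias 0 and a zero latent vector (cold start)."""
    zero = [0.0] * k
    qu, pi = Q.get(u, zero), P.get(i, zero)
    return b_g + bu.get(u, 0.0) + bi.get(i, 0.0) + sum(a * b for a, b in zip(qu, pi))

# A fully unseen pair falls back to the global bias alone:
print(predict("new_user", "new_item", 3.0, {}, {}, {}, {}))  # 3.0
```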




