Diverse but Relevant Recommendations with Continuous Ant Colony Optimization
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
1. In the abstract, mention the other conventional methods used for comparison.
2. Highlight how your proposed method is advantageous over conventional ACO.
3. Mention the parameter settings for the best-performing proposed method discussed in the results section.
4. The conclusion should include quantified results.
5. What is the reason for selecting ACO for recommendations? Why not other algorithms?
6. What is the design aspect that has improved the performance of the proposed method?
Comments on the Quality of English Language
Minor
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
How does the AcoRec framework address the cold start problem for new users in recommendation systems, and what are the specific mechanisms it uses to generate recommendations in the absence of historical data?
In what ways does AcoRec improve personalized recommendations compared to traditional methods? How does the use of Continuous Ant Colony Optimization (ACOR) enhance the accuracy and relevance of these recommendations?
How does AcoRec reduce time complexity in recommendation systems? Specifically, what are the advantages of its parallelizable structure and row-based user recommendation approach in terms of computational efficiency?
How do normalized Discounted Cumulative Gain (nDCG) and Recall metrics specifically contribute to evaluating the quality of top-N recommendation lists in terms of relevance to the user?
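For context on the two metrics this question raises, a minimal sketch of binary-relevance nDCG@N and Recall@N follows. This is illustrative only; the function names and the binary-relevance assumption are ours, not taken from the paper under review.

```python
import math

def ndcg_at_n(ranked_items, relevant_items, n):
    """nDCG@N with binary relevance: each relevant item at position
    `rank` (0-based) contributes 1 / log2(rank + 2); the sum is
    normalized by the DCG of an ideal ranking."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:n])
              if item in relevant_items)
    ideal_hits = min(len(relevant_items), n)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_n(ranked_items, relevant_items, n):
    """Fraction of the user's relevant (held-out) items that
    appear anywhere in the top-N recommendation list."""
    hits = sum(1 for item in ranked_items[:n] if item in relevant_items)
    return hits / len(relevant_items) if relevant_items else 0.0
```

The key difference the question points at: Recall@N only counts whether relevant items made the top-N cut, while nDCG@N also rewards placing them higher in the list.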
What are the implications of modifying Cremonesi’s method by calculating the top-N lists based on all items the user did not click on, rather than selecting 1,000 items? How does this modification present a more challenging task, and what benefits does it provide?
How do the experimental results for AcoRec compare to other state-of-the-art recommendation models, and what conclusions can be drawn from the best and second-best results highlighted in the tables?
The paper mentioned that recommender systems mainly offer recommendation sets for each user based on their history on the system, which might turn out to be uncompelling and poor-quality recommendations for the users (lines 50-52). Some profilers in the performance domain also provide performance improvement recommendations based on users’ past history. For example, the paper “MobiCom’23 DroidPerf: Profiling Memory Objects on Android Devices” discusses how performance recommendations are made based on profiling memory objects.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The authors proposed a new approach to enhance item-based models through AcoRec, a heuristic model incorporating continuous ant colony optimization for hyperparameter tuning. I have the following concerns regarding the manuscript:
1. There is repetition in many places. For example, the conclusion section repeats your contributions. Rewrite it, reduce the repetition, and report quantified results rather than overstating your findings.
2. The ACO algorithm is a well-known algorithm; it is over-described.
3. As far as I can see, results are provided only for the ACO-based recommender system. However, for the results to be acceptable, the ACO-based results should be compared with the results of other algorithms.
In my opinion, a comparison against other algorithms is crucial, and it is missing in this paper.
Comments on the Quality of English Language
The English of the paper should also be improved.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
The authors responded satisfactorily to my comments.
Comments on the Quality of English Language
English is acceptable.