For any online reputation system, it is difficult to evaluate attack-resistance properties in practical settings due to the lack of realistic attack data. Even when rating data can be obtained from e-commerce websites, there is usually no ground truth about which ratings are dishonest. To understand human users' attack behaviors and evaluate TAUCA against non-simulated attacks, we held a cyber competition, the Competition of Attacking Network Trust (CANT), in 2008. Cash prizes were awarded in a variety of categories.

  • We collected real online rating data for 300 products from Douban, a well-known e-commerce website in China. The data set contains 300 users' ratings of these 300 products from day 1 to day 150.
  • We built a virtual reputation system containing the real rating data above. We treated this data set as normal rating data and provided it to the players participating in the competition.
  • Players submitted attack strategies to the virtual reputation system. Each player controlled at most 30 malicious user IDs, and the total number of ratings from malicious user IDs had to be fewer than 100. The players' goal was to downgrade the reputation score of product 1 (i.e., object O1).
  • The virtual reputation system calculated the reputation score of product O1 every 15 days. In other words, 10 reputation scores were calculated on day 15, day 30, … and day 150. The average of these 10 reputation scores was the overall reputation.
  • The competition rules encouraged players to adjust the resources they used (i.e., the number of malicious user IDs). Each player could make multiple submissions. The effectiveness of each attack submission was measured by the bias the malicious users introduced into the reputation score of product 1, and the performance of each player was measured by the effectiveness of his/her submissions relative to those of other players. The competition attracted 630 registered players from 70 universities in China and the United States, and we collected 826,980 valid submissions.
  • Each submission, also called an attack profile, used a specific number of malicious users, ranging from 1 to 30. Each malicious user could rate any of the 300 products but could not rate the same product more than once.
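The submission constraints above (at most 30 malicious user IDs, fewer than 100 total ratings, and no repeated ratings of the same product by one user) can be sketched as a simple validity check. The function name, tuple layout, and defaults below are illustrative assumptions, not the competition's actual submission format.

```python
# Sketch of the attack-profile validity rules; the (user_id, product_id,
# score) tuple layout is an assumption for illustration only.

def is_valid_profile(ratings, max_users=30, max_ratings=100, n_products=300):
    """ratings: list of (user_id, product_id, score) tuples."""
    if len(ratings) >= max_ratings:          # fewer than 100 ratings in total
        return False
    users = {u for u, _, _ in ratings}
    if len(users) > max_users:               # at most 30 malicious user IDs
        return False
    pairs = {(u, p) for u, p, _ in ratings}
    if len(pairs) != len(ratings):           # no user rates a product twice
        return False
    if any(not (1 <= p <= n_products) for _, p, _ in ratings):
        return False                         # only the 300 listed products
    return True
```

For example, a profile in which one user rates product 1 twice fails the duplicate check, while 31 distinct user IDs fail the user-count check even if each submits only a single rating.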

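The scoring described above (a reputation score every 15 days, an overall reputation as the average of the 10 scores, and attack effectiveness as the bias introduced into product 1's score) can be sketched as follows. This is a minimal sketch that assumes each checkpoint score is simply the mean of all ratings the product has received so far; the aggregation actually used by the virtual reputation system is not specified here.

```python
# Sketch of the windowed reputation scoring and the bias-based
# effectiveness measure. The per-checkpoint score is assumed to be the
# running mean of all ratings received so far, which may differ from
# the actual system's aggregation rule.

def window_scores(ratings, product_id=1, horizon=150, step=15):
    """ratings: list of (day, product_id, score) tuples.
    Returns one reputation score per 15-day checkpoint."""
    scores = []
    for checkpoint in range(step, horizon + 1, step):
        seen = [s for d, p, s in ratings if p == product_id and d <= checkpoint]
        scores.append(sum(seen) / len(seen) if seen else 0.0)
    return scores

def overall_reputation(ratings, product_id=1):
    """Overall reputation: average of the 10 checkpoint scores."""
    scores = window_scores(ratings, product_id)
    return sum(scores) / len(scores)

def attack_bias(normal_ratings, attack_ratings, product_id=1):
    """Effectiveness of an attack: the drop in overall reputation
    caused by adding the malicious ratings to the normal data."""
    return (overall_reputation(normal_ratings, product_id)
            - overall_reputation(normal_ratings + attack_ratings, product_id))
```

Under this sketch, an early low rating depresses every subsequent checkpoint score, so malicious ratings submitted on day 1 yield a larger bias than the same ratings submitted on day 140.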
A sample set of the competition data can be downloaded.

Copyright © 2020 University of Rhode Island.
