The Anti-Social Web WWW 2012
The 2nd Joint WICOW/AIRWeb Workshop on Web Quality, held in conjunction with WWW 2012 in Lyon, France, on April 16, 2012

News: WebQuality 2013 is planned to be held in conjunction with WWW 2013



Objectives

The objective of the workshop is to provide the research communities working on web spam, abuse, credibility, and reputation with a survey of current problems and potential solutions. It will also offer an opportunity for close interaction between practitioners who may previously have focused on more isolated sub-areas. In addition, we want to gather crucial feedback for the academic community from participants representing major industry players on how web content quality research can contribute to practice.

On the one hand, the joint workshop will cover the more blatant and malicious attempts to degrade web quality, such as spam, plagiarism, and various forms of abuse, along with ways to prevent them or neutralize their impact on information retrieval. On the other hand, it will also provide a venue for exchanging ideas on quantifying finer-grained issues of content credibility and author reputation, and on modeling them in web information retrieval.

Themes and Topics

The main themes of the workshop are evaluating web information credibility, and identifying and combating qualitatively extreme content (and related behavior) such as spam. These themes encompass a large set of often-related topics and subtopics, as listed below.

Assessing the credibility of content and people on the Web and social media

    Uncovering distorted and biased content
  • Detecting disagreement and conflicting opinions
  • Detecting disputed or controversial claims
  • Uncovering distorted or biased, inaccurate or false information
  • Uncovering common misconceptions and false beliefs
  • Search models and applications for finding factually correct information on the Web
  • Comparing authorized vs. unauthorized information (e.g., news articles vs. readers' comments)
  • Comparing and evaluating online reviews, product or service testimonials
    Measuring quality of web content
  • Information quality and credibility of web search results, on social media sites, of online mass-media and news, and on the Web in general
  • Estimation of information age, provenance, validity, coverage, and completeness or depth
  • Formation, change, and evolution of opinions
  • Sociological and psychological aspects of information credibility estimation
  • User studies of information credibility evaluation
    Modeling author identity, trust, and reputation
  • Estimating authors' and publishers' reputation
  • Evaluating authors' qualifications and credentials
  • Transparent ranking/reputation systems
  • Author intent detection
  • Capturing personal traits and sentiment
  • Modeling author identity, authorship attribution, and writing style
  • Systems for managing author identity on the Web
  • Revealing hidden associations between authors, commenters, reviewers, etc.
    Role of groups and communities
  • Role of groups, communities, and invisible colleges in the formation of opinions on the Web
  • Social-network-based credibility evaluation
  • Analysis of information dissemination on the Web
  • Common cognitive or social biases in user behavior (e.g., herd behavior)
  • Credibility in collaborative environments (e.g., on Wikipedia)
    Multimedia content credibility
  • Detecting deceptive manipulation or distortion of images and multimedia
  • Hiding content in images
  • Detecting incorrect labels or captions of images on the Web
  • Detecting mismatches between online images and the represented real objects
  • Credibility of online maps

Fighting spam, abuse, and plagiarism on the Web and social media

    Reducing web spam
  • Detecting various types of search engine spam (e.g., link spam, content spam, or cloaking)
  • Uncovering social network spam (e.g., serial sharing and lobbying) and spam in online media (e.g., blog, forum, wiki spam, or tag spam)
  • Identifying review and rating spam
  • Characterizing trends in spamming techniques
    Reducing abuses of electronic messaging systems
  • Detecting e-mail spam
  • Detecting SPIT (spam over Internet telephony) and SPIM (spam over instant messaging)
    Detecting abuses in internet advertising
  • Click fraud detection
  • Measuring information credibility in online advertising and monetization
    Uncovering plagiarism and multiple-identity issues
  • Detecting plagiarism in general, and in web communities, social networks, and cross-language environments in particular
  • Identifying near-duplicate and versioned content of all kinds (e.g., text, software, image, music, or video)
  • High-similarity retrieval technologies (e.g., fingerprinting and similarity hashing)
    Promoting cooperative behavior in social networks
  • Monitoring vandalism, trolling, and stalking
  • Detecting fake friendship requests with spam intentions
  • Creating incentives for good behavior in social networks
  • User studies of misuse of the Web
    Security issues with online communication
  • Detecting phishing and identity theft
  • Flagging malware (e.g., viruses and spyware)
  • Web forensics

Other adversarial issues

  • Modeling and anticipating responses of adversaries to counter-measures
  • New web infringements
  • Web content filtering
  • Bypassing censorship on the Web
  • Blocking online advertisements
  • Reverse engineering of ranking algorithms
  • Stealth crawling

Paper Submission

Full papers are limited to 8 pages and short papers to 4 pages. Submissions should be written in English and sent as PDF via the submission website. Papers should adhere to the ACM formatting guidelines. They must be original and must not have been submitted for publication elsewhere. Each submission will be evaluated by at least three reviewers. Accepted papers will be published in the ACM Digital Library (ISBN 978-1-4503-1237-0).

    Important Dates
  • Paper submission deadline: February 19, 2012, 23:59 Hawaii Time (extended from February 14, 2012)
  • Notification of acceptance: March 7, 2012 (originally March 4, 2012)
  • Camera ready copy deadline: March 20, 2012
  • Workshop date: April 16, 2012

Program

[8:45 - 10:35]

Web Quality Session:

  • "On Measuring the Lexical Quality of the Web" [slides] [paper]
    Ricardo Baeza-Yates and Luz Rello
  • "Measuring the Quality of Web Content using Factual Information" [slides] [paper]
    Elisabeth Lex, Michael Voelske, Marcelo Errecalde, Edgardo Ferretti, Leticia Cagnina, Christopher Horn, Benno Stein and Michael Granitzer
  • "A Breakdown of Quality Flaws in Wikipedia" [slides] [paper]
    Maik Anderka and Benno Stein
  • "A Deformation Analysis Method for Artificial Maps Based on Geographical Accuracy and Its Applications" [slides] [paper]
    Daisuke Kitayama and Kazutoshi Sumiya
[10:35 - 11:00]

** Coffee Break **

[11:00 - 12:30]

Online Credibility and Trust Session:

  • "Game-theoretic Models of Web Credibility" [slides] [paper]
    Thanasis Papaioannou, Katarzyna Abramczuk, Paulina Adamska, Adam Wierzbicki and Karl Aberer
  • "An Information Theoretic Approach to Sentimental Polarity Classification" [paper]
    Yuming Lin, Jingwei Zhang, Xiaoling Wang and Aoying Zhou
  • "Content-Based Trust and Bias Classification via Biclustering" [slides] [paper]
    David Siklosi, Balint Daroczy and Andras A. Benczur
[12:30 - 14:00]

** Lunch Break **

[14:00 - 15:10]

Abuse Detection and Prevention Session:

  • "Detecting Collective Attention Spam" [slides] [paper]
    Kyumin Lee, James Caverlee, Krishna Kamath and Zhiyuan Cheng
  • "Identifying Spam in the iOS App Store" [slides] [paper]
    Rishi Chandy and Haijie Gu
  • "kaPoW Plugins: Protecting Web Applications Using Reputation-based Proof-of-Work" [slides] [paper]
    Akshay Dua, Wu-Chang Feng and Tien Le

Organizers

Carlos Castillo (Yahoo! Research)
Zoltan Gyongyi (Google Research)
Adam Jatowt (Kyoto University)
Katsumi Tanaka (Kyoto University)

PC Members:
Ching-man Au Yeung (ASTRI)
Andras Benczur (Hungarian Academy of Sciences)
James Caverlee (Texas A&M University)
Matt Cutts (Google)
Brian Davison (Lehigh University)
Dennis Fetterly (Microsoft)
Andrew Flanagin (University of California, Santa Barbara)
Panagiotis Metaxas (Wellesley College)
Miriam Metzger (University of California, Santa Barbara)
Masashi Toyoda (University of Tokyo)
Steve Webb (Georgia Institute of Technology)
Xiaofang Zhou (University of Queensland)

Contact

Email: adam [at] dl [dot] kuis [dot] kyoto-u [dot] ac [dot] jp
Phone: +81-75-753-5909