Task Definition

Overview

Current web search engines usually return a ranked list of URLs in response to a query. The user then often has to visit several web pages and locate the relevant parts within long pages, which demands considerable effort and attention, especially for mobile users. For some classes of queries, however, the system should be able to return a concise summary of the relevant information directly, satisfying the user immediately after a single click on the search button.

The MobileClick task is defined as follows: given a query, return a structured textual output. In this round, we expect the output to be two-layered text: the first layer contains the most important information and an outline of the relevant information, while the second layer, which consists of several pieces of text, contains detailed information that can be accessed by clicking on an associated part of the first-layer text. In the example below, for the query "NTCIR-11", the system presents general information on NTCIR-11 and a list of core task links in the first layer. When the MobileClick link is clicked, the system shows the second-layer text associated with that link. Try a mockup.

[Figure: a two-layered X-string for the query "NTCIR-11"]

The MobileClick task focuses on evaluating textual output based on information units (iUnits) rather than document relevance. Moreover, systems are required to minimize the amount of text the user has to read or, equivalently, the time she has to spend to obtain the information. Systems are thus expected to search the web and return a multi-document summary of the retrieved relevant web pages that fits a mobile phone screen.
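The two-layered output described above can be represented with a simple data structure. The following is an illustrative sketch only; the class and field names (`XString`, `SecondLayerText`, `anchor`) are our own invention, not part of the official task specification.

```python
from dataclasses import dataclass, field

@dataclass
class SecondLayerText:
    """Detailed text revealed when its anchor in the first layer is clicked."""
    anchor: str  # the fragment of first-layer text this entry is linked to
    text: str    # the detailed (second-layer) content

@dataclass
class XString:
    """A two-layered textual output (X-string): a concise first layer
    plus a set of second-layer texts, each tied to part of the first layer."""
    first_layer: str
    second_layer: list = field(default_factory=list)

# Illustrative X-string for the query "NTCIR-11" (content is made up)
x = XString(
    first_layer="NTCIR-11 is an evaluation conference. Core tasks: MobileClick, ...",
    second_layer=[
        SecondLayerText(anchor="MobileClick",
                        text="MobileClick evaluates two-layered summaries ..."),
    ],
)
```

Clicking the "MobileClick" portion of the first layer would display the associated second-layer text.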

Queries

We will use a total of 50 queries for the English task and another 50 for the Japanese task, some of which overlap with those of 1CLICK-2@NTCIR-10 so that progress can be monitored. The queries will be selected from real mobile query logs. Unlike 1CLICK-2, we will not provide a precise list of query types and their frequencies in MobileClick before the formal run submission deadline.

Tasks

The Main Task of MobileClick is "given a query, return a structured textual output (X-string)," and is composed of two subtasks: the iUnit Retrieval Subtask and the iUnit Summarization Subtask.

iUnit Retrieval Subtask

In the iUnit Retrieval Subtask, systems are expected to generate a list of pieces of information (iUnits) ranked according to their importance for a given query.

There are two types of iUnit Retrieval Subtask runs:

  • MANDATORY Runs: Organizers will provide baseline search results and their page contents for each query. Participants must use only these contents to generate a list of iUnits. Note that any data resource may be used for estimating the importance of each iUnit.
  • OPEN Runs (OPTIONAL): Participants may search the live web on their own to generate a list of iUnits. Any run that extracts iUnits from even some privately obtained web search results is considered an OPEN run, even if it also uses the baseline data.

iUnit Summarization Subtask

The iUnit Summarization Subtask is defined as follows: for a given query and a given list of iUnits ranked according to their importance, generate a structured textual output (X-string).

More precisely, in MobileClick the X-string consists of two layers. The first layer is a single piece of text, while the second layer consists of a set of texts, each of which is associated with a part of the first-layer text.
NOTE: The length of the first-layer text is limited to L, and the length of each second-layer text is also limited to L. L is 280 characters for the English iUnit Summarization Subtask and 140 characters for the Japanese iUnit Summarization Subtask. Symbols (such as ',' and '(') are excluded from the character count. Excess text will be truncated in evaluation.
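The symbol-excluding length limit can be made concrete with a small sketch. The official symbol set is defined by the organizers; here we merely assume, for illustration, that Unicode punctuation (category P*) and symbol (category S*) characters are the ones excluded from the count.

```python
import unicodedata

def effective_length(text: str) -> int:
    """Length with symbols (e.g. ',' and '(') excluded from the count.
    Assumption: symbols are Unicode punctuation (P*) and symbol (S*) chars."""
    return sum(1 for ch in text
               if unicodedata.category(ch)[0] not in ("P", "S"))

def truncate(text: str, limit: int) -> str:
    """Keep the longest prefix whose effective length does not exceed limit."""
    kept, n = [], 0
    for ch in text:
        counted = unicodedata.category(ch)[0] not in ("P", "S")
        if counted and n == limit:
            break
        kept.append(ch)
        if counted:
            n += 1
    return "".join(kept)
```

For example, `effective_length("a,b(c")` is 3, since the comma and parenthesis are not counted toward L.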

The X-string is expected to include the most important information while minimizing the amount of text users have to read. For example,

  • an X-string that presents more important information earlier in the first layer is evaluated more highly;
  • for a query with few subtopics, an X-string that shows all the information in the first layer would receive a higher score than one that separates the information into text fragments in the second layer;
  • for a query with many subtopics, an X-string that hides the details of each subtopic in the second layer is evaluated more highly than one that shows all the information in the first layer, since users interested in different subtopics are spared the text they would otherwise have to read.
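The intuition behind these criteria can be sketched with a toy scoring function. This is not the official MobileClick evaluation measure; it is only an illustration of the principle that an iUnit's contribution should be discounted by how much text the user must read before reaching it. The decay function and numbers below are arbitrary assumptions.

```python
def toy_score(iunit_positions, decay=lambda pos: max(0.0, 1 - pos / 1000)):
    """Toy utility: each iUnit contributes its importance, discounted
    (linearly here, an arbitrary choice) by the amount of text read
    before the iUnit is reached."""
    return sum(imp * decay(pos) for imp, pos in iunit_positions)

# Same two iUnits (importance, reading offset in characters):
early = [(3.0, 10), (2.0, 50)]    # important information appears early
late = [(3.0, 400), (2.0, 600)]   # the same information buried later
```

Under this toy measure, `toy_score(early) > toy_score(late)`: presenting important information earlier yields a higher score, matching the first bullet above.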

There are two types of iUnit Summarization Subtask runs:

  • MANDATORY Runs: Participants must use only an iUnit list distributed by the organizers to generate summaries. Note that any data resource may be used for estimating the importance of each iUnit.
  • OPEN Runs (OPTIONAL): Participants may search the live web on their own to generate summaries. Any run that uses content from even some privately obtained web search results is considered an OPEN run, even if it also uses the baseline data.

Input/Output

iUnit Retrieval Subtask

Input

  • A query

Output

  • A list of information pieces (iUnits) ranked according to their importance

iUnit Summarization Subtask

Input

  • A query and a list of iUnits ranked according to their importance

Output

  • A two-layered textual output