ACM SIGCHI General Interest Announcements (Mailing List)


Surya Kallumadi <[log in to unmask]>
Sun, 2 Jun 2019 13:48:20 -0400
Call For Participation

The 2019 SIGIR Workshop on eCommerce is hosting the High Accuracy Recall
Task Data Challenge as part of the workshop. The data is provided by eBay
search. SIGIR eCom is a full-day workshop taking place on Thursday, July 25,
2019, in conjunction with SIGIR 2019 in Paris, France. Challenge
participants will have the opportunity to present their work at the
workshop.

*Challenge website:*

*Important Dates:*

   - Data Challenge opens: May 17, 2019
   - Final leaderboard: July 18, 2019
   - SIGIR eCom full-day workshop: July 25, 2019

*Task Description:*

This challenge targets a common problem in eCommerce search: identifying
the items to show when using non-relevance sorts. Users of eCommerce search
applications often sort by dimensions other than relevance, such as
popularity, review score, price, distance, or recency. This is a notable
difference from traditional information-oriented search, including web
search, where documents are surfaced in relevance order.
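The contrast can be sketched in a few lines of Python. The item fields,
scores, and the 0.5 relevance threshold below are illustrative assumptions,
not part of the challenge: a relevance sort needs only scores, while a
non-relevance sort (here, price) first requires a boolean relevant-or-not
recall decision, then orders the recall set by the chosen dimension.

```python
# Hypothetical catalog items with a relevance score and a price.
items = [
    {"title": "usb cable 2m", "score": 0.91, "price": 4.99},
    {"title": "usb charger",  "score": 0.40, "price": 12.50},
    {"title": "usb cable 1m", "score": 0.88, "price": 3.25},
]

# Relevance sort: rank by score; no hard in/out decision is needed,
# since marginal items simply sink to the bottom.
by_relevance = sorted(items, key=lambda it: it["score"], reverse=True)

# Price sort: first make an explicit relevant-or-not decision
# (the 0.5 threshold is an assumption for illustration), then
# order the resulting recall set by price.
recall_set = [it for it in items if it["score"] >= 0.5]
by_price = sorted(recall_set, key=lambda it: it["price"])

print([it["title"] for it in by_price])
# ['usb cable 1m', 'usb cable 2m']
```

Note that the weakly relevant charger never appears in the price-sorted
results at all: with a non-relevance sort, the recall decision alone
determines what the user sees.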

Relevance ordering obviates the need for explicit relevant-or-not decisions
on individual documents, and many well-studied search methodologies take
advantage of this. Non-relevance sort orders are less well studied, but
raise a number of interesting research topics: evaluation metrics, ranking
formulas, performance optimization, user experience, and more. These topics
are discussed in the High Accuracy Recall Task paper, published at the
SIGIR 2018 Workshop on eCommerce.

This search challenge focuses on the most basic aspect of this problem:
identifying the items to include in the recall set when using non-relevance
sorts. This is already a difficult problem, and it includes typical search
challenges such as ambiguity and multiple query intents.

*Participation and Data:*

The data challenge is open to everyone.

The challenge data consists of a set of popular search queries and a fairly
large set of candidate documents. Challenge participants make a boolean
relevant-or-not decision for each query-document pair. Human judgments are
used to create labeled training and evaluation data for a subset of the
query-document pairs. Evaluation of submissions will be based on the
traditional F1 metric, incorporating components of both recall and
precision.

Details about evaluation metrics and other aspects of the task can be found
at the website:
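Since submissions are scored on F1 over boolean decisions, the metric can
be computed as below. This is an illustrative sketch, not the official
scorer; the function name and the parallel-list input format are
assumptions for the example.

```python
def f1_score(predicted, gold):
    """Precision, recall, and F1 for boolean relevance labels.

    predicted, gold: parallel lists of bools, one entry per
    query-document pair (True = relevant). Hypothetical interface,
    not the challenge's official evaluation code.
    """
    tp = sum(p and g for p, g in zip(predicted, gold))        # true positives
    fp = sum(p and not g for p, g in zip(predicted, gold))    # false positives
    fn = sum(not p and g for p, g in zip(predicted, gold))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 3 true positives, 1 false positive, 1 false negative.
pred = [True, True, True, True, False]
gold = [True, True, True, False, True]
print(f1_score(pred, gold))  # (0.75, 0.75, 0.75)
```

Because F1 is a harmonic mean, it rewards balancing the two components: a
recall set that is either too broad (hurting precision) or too narrow
(hurting recall) will score poorly.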

    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see