ACM SIGCHI General Interest Announcements (Mailing List)


From: Bogdan Ionescu <[log in to unmask]>
Date: Thu, 23 Jan 2020 23:29:29 +0200
[Apologies for multiple postings]


Building websites requires a very specific set of skills. Currently,
the two main ways to build one are using a visual website builder or
writing code, and both approaches have a steep learning curve.
Enabling people to create websites by drawing them on a whiteboard or
on a piece of paper would make the webpage building process more
accessible.

A first step in capturing the intent expressed by a user through a
wireframe is to correctly detect the atomic user interface (UI)
elements in their drawing. The bounding boxes and labels resulting
from this detection step can then be used to generate a website
layout using various heuristics.

In this context, the hand-drawn website UI detection and recognition
task addresses the problem of automatically recognizing the
hand-drawn objects that represent website UI elements, so that they
can subsequently be translated into website code.

*** TASK ***
Given a set of images of hand drawn UIs, participants are required to
develop machine learning techniques that are able to predict the exact
position and type of UI elements.

*** DATA SET ***
The provided data set consists of 3,000 hand-drawn images inspired by
mobile application screenshots and actual web pages, covering 1,000
different templates. Each image comes with manual annotations giving
the bounding box position and type of every UI element. To avoid
ambiguity, a predefined shape dictionary with 21 classes is used,
e.g., paragraph, label, header. The development set contains 2,000
images, while the test set contains 1,000 images.
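The announcement does not specify the on-disk annotation format; purely as an illustration, each labeled image could be represented as a list of (class, bounding box) records along these lines (the field names and file name below are hypothetical, only the class names come from the shape dictionary above):

```python
# Hypothetical annotation record for one hand-drawn image. Field names
# and the image file name are illustrative, not the official format;
# "header", "paragraph", and "label" are classes from the task's
# 21-class shape dictionary.
annotation = {
    "image": "wireframe_0001.jpg",
    "elements": [
        {"class": "header",    "bbox": [34, 20, 610, 88]},   # x1, y1, x2, y2
        {"class": "paragraph", "bbox": [40, 120, 590, 310]},
        {"class": "label",     "bbox": [40, 340, 180, 372]},
    ],
}

# A detector is expected to output the same kind of structure,
# typically with a confidence score attached to each element.
for el in annotation["elements"]:
    print(el["class"], el["bbox"])
```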

*** METRICS ***
The performance of the algorithms will be evaluated using the
standard mean Average Precision at an IoU threshold of 0.5
(mAP@0.5), commonly used in object detection.
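Under this metric, a predicted box is matched to a ground-truth box of the same class only when their Intersection over Union reaches the 0.5 threshold. A minimal sketch of the IoU computation (corner-format boxes assumed):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus overlap.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# 25 px overlap / 175 px union ~= 0.143, below 0.5, so not a true positive.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

mAP@0.5 then averages, over all 21 classes, the area under each class's precision-recall curve built from these matches.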

- Task registration opens: December 20, 2019
- Development data release: January 13, 2020
- Test data release: March 16, 2020
- Run submission: May 11, 2020
- Working notes submission: May 25, 2020
- CLEF 2020 conference: September 22-25, Thessaloniki, Greece

*** REGISTER ***

Raul Berari, teleportHQ, Cluj Napoca, Romania
Paul Brie, teleportHQ, Cluj Napoca, Romania
Dimitri Fichou, teleportHQ, Cluj Napoca, Romania
Mihai Dogariu, University Politehnica of Bucharest, Romania
Liviu Daniel Stefan, University Politehnica of Bucharest, Romania
Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
Bogdan Ionescu, University Politehnica of Bucharest, Romania
