CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Date: Fri, 22 Jun 2012 15:47:57 +0000
From: Burak Turhan <[log in to unmask]>
Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
================================================
Call for papers
Automated Software Engineering Journal
Special Issue:  Next Generation of Empirical SE

GUEST EDITORS:

Stefan Wagner (University of Stuttgart)
[log in to unmask]

Tim Menzies (West Virginia University)
[log in to unmask]
================================================


It is well-established that, using data mining, we
can predict properties of a wide range of software
engineering products (e.g.  [1,2]).  Now, it is
time to ask  “what’s next?”.
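To make the starting point concrete, here is a minimal sketch of the kind of predictor in question: a 1-nearest-neighbour defect predictor over static code metrics. All metric values and labels are hypothetical, and this is an illustration, not any of the cited authors' methods; it uses only the Python standard library.

```python
import math

# Hypothetical training data: (lines_of_code, cyclomatic_complexity)
# pairs labelled with whether the module turned out to be defective.
TRAIN = [
    ((120, 4), False),
    ((300, 9), False),
    ((700, 25), True),
    ((950, 31), True),
]

def predict_defective(metrics):
    """Predict the label of the nearest training module (1-NN)."""
    nearest = min(TRAIN, key=lambda row: math.dist(row[0], metrics))
    return nearest[1]

print(predict_defective((800, 28)))   # close to the large, complex modules
```

Real studies replace the toy table with mined project data and the 1-NN rule with stronger learners, but the shape of the task is the same: metrics in, defect-proneness out.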

We seek papers that explore “what’s next”.
Drawing on all our past experience with data
mining for SE, what can we now say about:

o New pre-processing: before running the miners,
what extra processing do we now know is required?

o New generalities:  what new tools, methods or
generalities might be proposed?

o New usage issues: what new issues have emerged?

o New ways to use these tools: after a
prediction is made, what do we now know about how
different communities use these tools?

For this journal special issue, we seek archival
contributions (not speculative proposals). The
papers must describe mature results with strong
evaluations. Papers must discuss automated methods
for addressing issues relating to the next
generation of empirical SE.  Those issues include,
but are not restricted to the following:

o Data quality issues (e.g. [3]);

o Ensemble learning methods (e.g. [4]);

o Tool (mis)usability issues (e.g. [5]);

o Support for managerial decision-making issues
(e.g. [6]);

o Data privacy issues (e.g. [7]);

o Cross-company learning issues (e.g. [8]);

o Predicting the quality of a software system
(e.g. [9]).
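Several of these topics lend themselves to small sketches; ensemble methods in particular (cf. [4]) reduce to combining several estimators. The following sketch assumes three toy effort models (all formulas, coefficients, and project history are hypothetical, not taken from the cited work) and combines them by taking the median estimate:

```python
import statistics

# Three toy effort estimators (person-hours) for a project of a
# given size in KLOC. All models and numbers are hypothetical.
def linear_model(kloc):
    return 40 * kloc + 10

def power_model(kloc):
    return 35 * kloc ** 1.05

def analogy_model(kloc):
    # Estimate by the nearest past project in a (made-up) history
    # of (size_kloc, actual_effort) records.
    history = [(2, 95), (5, 215), (12, 520)]
    return min(history, key=lambda p: abs(p[0] - kloc))[1]

def ensemble_estimate(kloc):
    """Combine the individual estimates via their median."""
    models = (linear_model, power_model, analogy_model)
    return statistics.median(m(kloc) for m in models)

print(ensemble_estimate(5))
```

The median is one of many possible combination rules; the point of ensemble work such as [4] is to study which combinations of which base estimators are robust across data sets.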

This call for papers is open to all researchers.

IS IT A NEXT GEN PAPER?

All submissions to the special issue must include
a section called “Empirical SE, V2.0” that
discusses next-gen issues; i.e., how the work fits
into the broader picture beyond merely building a
predictor (see the notes above).

PUBLIC DATA

Papers are required to offer verifiable results;
i.e.  they must be based on public-domain data
sets or models. Submissions should come with an
attached note offering the URL of the data/model
used to make the paper's conclusions.  A condition
of publication for accepted papers is that their
data/model must be transferred to the PROMISE
repository (http://promisedata.org/data) prior to
final acceptance. That data/model must be in a
freely accessible format (i.e., no proprietary
formats).

DATES

Jan 1, 2013  : submission
April 1, 2013: reviews, round 1
June 1, 2013 : resubmit revised papers

SUBMISSION

Submit to  http://www.editorialmanager.com/ause/,
adhering to the instructions for authors at
http://www.springer.com/computer/ai/journal/10515.
On submission, please include a note saying "For
the special issue on Next Gen Empirical Methods".

REFERENCES

1. Hall, T.; Beecham, S.; Bowes, D.; Gray, D.;
Counsell, S.: "A Systematic Review of Fault
Prediction Performance in Software Engineering".
Pre-print, IEEE Transactions on Software
Engineering. http://goo.gl/FOiT9

2. Dejaeger, K.; Verbeke, W.; Martens, D.;
Baesens, B.: "Data Mining Techniques for Software
Effort Estimation: A Comparative Study". IEEE
Transactions on Software Engineering, 2012.
http://goo.gl/eZ8RS

3. Gray, D.; Bowes, D.; Davey, N.; Sun, Y.;
Christianson, B.: "The Misuse of the NASA Metrics
Data Program Data Sets for Automated Software
Defect Prediction". IET Seminar Digest, 2011.
http://goo.gl/QE5au

4. Kocaguneli, E.; Menzies, T.; Keung, J.: "On
the Value of Ensemble Effort Estimation". IEEE
Transactions on Software Engineering.
http://goo.gl/0LWKZ

5. Shepperd, M.; Hall, T.; Bowes, D.: "A
Meta-Analysis of Software Defect Prediction
Studies". http://goo.gl/qtc9o

6. Heaven, W.; Letier, E.: "Simulating and
Optimising Design Decisions in Quantitative Goal
Models". RE'11. http://goo.gl/7bGJc

7. Peters, F.; Menzies, T.: "Privacy and Utility
for Defect Prediction: Experiments with MORPH".
ICSE'12. http://goo.gl/hF4Un

8. Turhan, B.; Menzies, T.; Bener, A.; Distefano,
J.: "On the Relative Value of Cross-Company and
Within-Company Data for Defect Prediction".
Empirical Software Engineering, 2009.

9. Wagner, S.: "A Bayesian Network Approach to
Assess and Predict Software Quality Using
Activity-Based Quality Models". Information and
Software Technology, 2010.