Call for papers
Automated Software Engineering Journal
Special Issue: Next Generation of Empirical SE
Stefan Wagner (University of Stuttgart)
[log in to unmask]
Tim Menzies (West Virginia University)
[log in to unmask]
It is well-established that, using data mining, we
can predict properties of a wide range of software
engineering products (e.g. [1,2]). Now, it is
time to ask: what's next?
We seek papers that explore that question.
Based on all our past experience on data mining
for SE, what can we now say about:
o New pre-processing: before running the miners,
what extra processing do we now know is required?
o New generalities: what new tools, methods or
generalities might be proposed?
o New usage issues: what new issues have emerged?
o New ways to use these tools: after the
prediction is made, what do we now know about how
different communities use these tools?
For this journal special issue, we seek archival
contributions (not speculative proposals). The
papers must describe mature results with strong
evaluations. Papers must discuss automated methods
for addressing issues relating to the next
generation of empirical SE. Those issues include,
but are not restricted to the following:
o Data quality issues;
o Ensemble learning methods;
o Tool (mis)usability issues;
o Support for managerial decision making;
o Data privacy issues;
o Cross-company learning issues;
o Predicting the quality of a software system.
This call for papers is open to all researchers.
IS IT A NEXT GEN PAPER?
All submissions to the special issue must include
a section called "Empirical SE, V2.0" that
discusses next gen issues; i.e. how
their work fits into the broader picture beyond
just building a predictor (see notes above).
Papers are required to offer verifiable results;
i.e. they must be based on public-domain data
sets or models. Submissions should come with an
attached note offering the URL of the data/model
used to make the paper's conclusions. A condition
of publication for accepted papers is that their
data/model must be transferred to the PROMISE
repository (http://promisedata.org/data) prior to
final acceptance. That data/model must be in a
freely accessible format (i.e. no proprietary
formats).
IMPORTANT DATES
Jan 1, 2013: submission
April 1, 2013: reviews, round 1
June 1, 2013: resubmit revised papers
Submit to http://www.editorialmanager.com/ause/,
adhering to the journal's instructions for
authors.
On submission, please include a note saying "For
the special issue on Next Gen Empirical Methods".
REFERENCES
1. Hall, T.; Beecham, S.; Bowes, D.; Gray, D.;
Counsell, S.: "A Systematic Review of Fault
Prediction Performance in Software Engineering".
IEEE Transactions on Software Engineering,
pre-print.
2. Dejaeger, K.; Verbeke, W.; Martens, D.;
Baesens, B.: "Data Mining Techniques for Software
Effort Estimation: A Comparative Study". IEEE
Transactions on Software Engineering, 2012.
3. Gray, D.; Bowes, D.; Davey, N.; Sun, Y.;
Christianson, B.: "The Misuse of the NASA Metrics
Data Program Data Sets for Automated Software
Defect Prediction". IET Seminar Digest, 2011.
4. Kocaguneli, E.; Menzies, T.; Keung, J.: "On
the Value of Ensemble Effort Estimation". IEEE
Transactions on Software Engineering.
5. Shepperd, M.; Hall, T.; Bowes, D.: "A
Meta-Analysis of Software Defect Prediction".
6. Heaven, W.; Letier, E.: "Simulating and
Optimising Design Decisions in Quantitative Goal
Models". RE'11. http://goo.gl/7bGJc
7. Peters, F.; Menzies, T.: "Privacy and Utility
for Defect Prediction: Experiments with MORPH".
8. Turhan, B.; Menzies, T.; Bener, A.; Distefano,
J.: "On the Relative Value of Cross-Company and
Within-Company Data for Defect Prediction".
Empirical Software Engineering, 2009.
9. Wagner, S.: "A Bayesian Network Approach to
Assess and Predict Software Quality Using
Activity-Based Quality Models". Information and
Software Technology, 2010.