ACM SIGSOFT Empirical Standards for Software Engineering

The ACM SIGSOFT Empirical Standards are the official evidence standards for software engineering research: models of a community's expectations for conducting and reporting studies.

Review a paper

19 empirical standards to choose from

Different kinds of research have different norms, so each common research method has a unique standard

Checklists for authors and reviewers

Use standards-based checklists to improve your methods, papers, and peer reviews

Desirable and extraordinary attributes

Standards and checklists clearly differentiate must-haves from exceptional research

Features of the standards

Specific attributes

A list of properties the paper should possess, grouped into essential, desirable, and extraordinary

Example (Engineering Research standard): "Conceptually evaluates the proposed artifact; discusses its strengths, weaknesses, and limitations."

General quality criteria

Qualitative and quantitative quality criteria the paper should meet

Example (Experiment standard): "Conclusion validity, construct validity, internal validity, reliability, objectivity, reproducibility."

Acceptable deviations

Circumstances where the paper is permitted to deviate from a standard

Example (Data Science standard): "Data not shared because it is impractical (e.g., too large) or unethical (e.g., too sensitive)."

Antipatterns

Common problems with this methodology that papers should avoid

Example (Grounded Theory standard): "Data analysis focusing on counting words, codes, concepts, or categories instead of interpreting."

Invalid criticisms

Unreasonable arguments against a paper that reviewers should not make

Example (Replication standard): "The replication merely confirms the findings of the original study; no inconsistencies are reported."

Suggested readings

Additional scholarship on the method upon which the standard is based

Example (Systematic Review standard): Barbara Kitchenham and Stuart Charters. 2007. Guidelines for Performing Systematic Literature Reviews in Software Engineering.

Exemplars

Good examples of the method that authors should emulate

Example (Case Study standard): Diomidis Spinellis and Paris C. Avgeriou. 2019. Evolution of the Unix System Architecture: An Exploratory Case Study. IEEE Transactions on Software Engineering.

Interactive checklists

Interactive checklists based on the standards make peer review more specific, technical, and reliable. Customized diagnostics help reviewers make more reasonable and actionable suggestions.

  • More effective, transparent peer reviews
  • Faster publication times
  • Reduced reviewer workload
  • Higher-quality papers

History

MAY 2019

At the ICSE town hall, SIGSOFT launches the "Improving Paper and Peer Review Quality Initiative"

MAY 2020

First 8 empirical standards drafted

OCT 2020

Empirical Standards Report published on arXiv

JAN 2021

First 8 standards made available on GitHub for public comment

MAY 2021

First review checklists available on the web

JUNE 2021

First recommendation of the standards by a conference (EASE 2021)

JUNE 2023

Field experiment at EASE showing standards improve reliability

Roadmap

EASE experiment and standards published in journal

Empirical standards listed on EQUATOR Network

Ready to try?

Try a checklist

Cite the Empirical Standards