Case Study and Ethnography

"An empirical inquiry that investigates a contemporary phenomenon (the "case") in depth and within its real-world context, especially when the boundaries between phenomenon and context are unclear" (Yin 2017)

Application

This standard applies to empirical research that meets the following conditions.

  • Presents a detailed account of a specific instance of a phenomenon at a site. The phenomenon can be virtually anything of interest (e.g. Unix, cohesion metrics, communication issues). The site can be a community, an organization, a team, a person, a process, an internet platform, etc.
  • Features direct or indirect observation (e.g. interviews, focus groups)—see Lethbridge et al.’s (2005) taxonomy.
  • Is not an experience report (cf. Perry et al. 2004) or a series of shallow inquiries at many different sites.

A case study can be brief (e.g. a week of observation) or longitudinal (if observation exceeds the natural rhythm of the site; e.g., observing a product over many releases). For our purposes, case study subsumes ethnography.

If data collection and analysis are interleaved, consider the Grounded Theory Standard. If the study mentions action research, or intervenes in the context, consider the Action Research Standard. If the study captures a large quantitative dataset with limited context, consider the Data Science Standard.

Specific Attributes

Essential Attributes

  • justifies the selection of the case(s) or site(s) studied
  • describes the site(s) in rich detail
  • the site is a real (physical or virtual) place, not a toy example or imaginary world
  • reports the type of case study [1]
  • describes data sources (e.g. participants’ demographics and work roles)
  • defines unit(s) of analysis or observation
  • presents a clear chain of evidence from observations to findings

Desirable Attributes

  • provides supplemental materials such as interview guide(s), coding schemes, coding examples, decision rules, or extended chain-of-evidence tables
  • triangulates across data sources, informants or researchers
  • cross-checks interviewee statements (e.g. against direct observation or archival records)
  • uses participant observation (ethnography) or direct observation (non-ethnography) and clearly integrates these observations into results [2]
  • validates results using member checking, dialogical interviewing [3], feedback from non-participant practitioners, or research audits of coding by advisors or other researchers
  • describes external events and other factors that may have affected the case or site
  • describes how prior understandings of the phenomena were managed and/or influenced the research
  • uses quotations to illustrate findings [4]
  • EITHER: evaluates an a priori theory (or model, framework, taxonomy, etc.) using deductive coding with an a priori coding scheme based on the prior theory
    OR: synthesizes results into a new, mature, fully-developed and clearly articulated theory (or model, etc.) using some form of inductive coding (coding scheme generated from data)
  • researchers reflect on their own possible biases

Extraordinary Attributes

  • multiple, deep, fully-developed cases with cross-case triangulation
  • uses a team-based approach; e.g., multiple raters with analysis of inter-rater reliability (see the IRR/IRA Supplement; a minimal sketch follows this list)
  • published a case study protocol beforehand and made it publicly accessible (see the Registered Reports Supplement)
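
For the inter-rater reliability analysis mentioned above, agreement between coders is usually summarized with a chance-corrected statistic such as Cohen's kappa; the IRR/IRA Supplement discusses which statistic suits which design. The sketch below is illustrative only, with hypothetical codes and data; in practice, established tooling (e.g. sklearn.metrics.cohen_kappa_score) is usually preferable.

```python
# Minimal sketch, not part of the standard: Cohen's kappa for two coders who
# each assigned one code per interview segment. Codes and data are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of segments on which the two coders agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes two researchers assigned to the same ten transcript segments.
coder_1 = ["trust", "trust", "onboarding", "tooling", "trust",
           "tooling", "onboarding", "trust", "tooling", "trust"]
coder_2 = ["trust", "onboarding", "onboarding", "tooling", "trust",
           "tooling", "trust", "trust", "tooling", "trust"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # kappa = 0.68
```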

General Quality Criteria

Case studies should be evaluated using qualitative validity criteria such as credibility, multivocality, reflexivity, rigor, and transferability (see Glossary). Quantitative quality criteria such as replicability, generalizability and objectivity typically do not apply.

Types of Case Studies

There is no standard way of conducting a case study. Case study research can adopt different philosophies, most notably (post-)positivism (Lee 1989) and interpretivism/constructivism (Walsham 1995), and serve different purposes, including:

  • a descriptive case study describes, in vivid detail, a particular instance of a phenomenon
  • an emancipatory case study identifies social, cultural, or political domination “that may hinder human ability” (Runeson and Höst 2009), commensurate with a critical epistemological stance
  • an evaluative case study evaluates a priori research questions, propositions, hypotheses or technological artifacts
  • an explanatory case study explains how or why a phenomenon occurred, typically using a process or variance theory
  • an exploratory case study explores a particular phenomenon to identify new questions, propositions or hypotheses
  • an historical case study draws on archival data, for instance, software repositories
  • a revelatory case study examines a hitherto unknown or unexplored phenomenon

Antipatterns

  • Relying on a single approach to data collection (e.g. interviews) without corroboration or data triangulation
  • Oversimplifying and over-rationalizing complex phenomena; presenting messy, complicated things as simple and clean

Invalid Criticisms

  • Does not present quantitative data.
  • Sample of one; findings not generalizable. The point of a case study is to study one thing deeply, not to generalize to a population. Case studies should lead to theoretical generalization; that is, concepts that are transferable in principle.
  • Lack of internal validity. Internal validity only applies to explanatory case studies that seek to establish causality.
  • Lack of reproducibility or a “replication package”; data are not disclosed (qualitative data are often confidential).
  • Insufficient number or length of interviews. There is no magic number; what matters is that there is enough data that the findings are credible, and the description is deep and rich.

Suggested Readings

(Note: we recommend beginning with Yin’s book.)

Line Dubé and Guy Paré. 2003. Rigor in information systems positivist case research: current practices, trends, and recommendations. MIS Quarterly. 27, 4, 597–636. DOI: 10.2307/30036550

Shiva Ebneyamini and Mohammad Reza Sadeghi Moghadam. 2018. Toward Developing a Framework for Conducting Case Study Research. International Journal of Qualitative Methods. 17, 1 (Dec. 2018).

Barbara Kitchenham, Lesley Pickard, and Shari Lawrence Pfleeger. 1995. Case studies for method and tool evaluation. IEEE software. 12, 4 (1995), 52–62.

Timothy C. Lethbridge, Susan Elliott Sim, and Janice Singer. 2005. Studying software engineers: Data collection techniques for software field studies. Empirical Software Engineering. 10, 3 (2005), 311–341.

Matthew B. Miles, A. Michael Huberman, and Johnny Saldaña. 2014. Qualitative data analysis: A methods sourcebook. Sage.

Dewayne E. Perry, Susan Elliott Sim, and Steve M. Easterbrook. 2004. Case Studies for Software Engineers. In Proceedings of the 26th International Conference on Software Engineering. May 2004, Edinburgh, UK, 736–738.

Per Runeson and Martin Höst. 2009. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering. 14, 2, 131–164.

Per Runeson, Martin Höst, Austen Rainer, and Björn Regnell. 2012. Case study research in software engineering: Guidelines and examples. John Wiley & Sons.

Sarah J. Tracy. 2010. Qualitative Quality: Eight “Big-Tent” Criteria for Excellent Qualitative Research. Qualitative Inquiry. 16, 10, 837–851. DOI: 10.1177/1077800410383121

Geoff Walsham. 1995. Interpretive case studies in IS research: nature and method. European Journal of Information Systems. 4, 2, 74–81.

Robert K. Yin. 2017. Case study research and applications: Design and methods. Sage Publications.

Exemplars

Adam Alami and Andrzej Wąsowski. 2019. Affiliated participation in open source communities. In 2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM). 1–11.

Michael Felderer and Rudolf Ramler. 2016. Risk orientation in software testing processes of small and medium enterprises: an exploratory and comparative study. Software Quality Journal. 24, 3 (2016), 519–548.

Audris Mockus, Roy T. Fielding, and James D. Herbsleb. 2002. Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology (TOSEM). 11, 3 (2002), 309–346.

Helen Sharp and Hugh Robinson. 2004. An ethnographic study of XP practice. Empirical Software Engineering. 9, 4 (2004), 353–375.

Diomidis Spinellis and Paris C. Avgeriou. 2019. Evolution of the Unix System Architecture: An Exploratory Case Study. IEEE Transactions on Software Engineering.

Klaas-Jan Stol and Brian Fitzgerald. 2014. Two’s company, three’s a crowd: a case study of crowdsourcing software development. In Proceedings of the 36th International Conference on Software Engineering. 187–198.


[1] e.g. descriptive, emancipatory, evaluative, explanatory, exploratory, historical, revelatory
[2] Direct observation means watching research subjects without getting involved; participant observation means joining in with whatever participants are doing.
[3] L. Harvey. 2015. Beyond member-checking: A dialogic approach to the research interview. International Journal of Research & Method in Education. 38, 1, 23–38.
[4] Quotations should not be the only representation of a finding; each finding should be described independently of supporting quotations.
