SPLC '21: Proceedings of the 25th ACM International Systems and Software Product Line Conference - Volume A
SESSION: Variability modeling and analysis
Variability modules for Java-like languages
A Software Product Line (SPL) is a family of similar programs (called variants) generated from a common artifact base. A Multi SPL (MPL) is a set of interdependent SPLs (i.e., such that an SPL's variant can depend on variants from other SPLs). MPLs are challenging to model and implement efficiently, especially when different variants of the same SPL must coexist and interoperate. We address this challenge by introducing variability modules (VMs), a new language construct. A VM represents both a module and an SPL of standard (variability-free), possibly interdependent modules. Generating a variant of a VM triggers the generation of all variants required to fulfill its dependencies. Then, a set of interdependent VMs represents an MPL that can be compiled into a set of standard modules. We illustrate VMs by an example from an industrial modeling scenario, formalize them in a core calculus, provide an implementation for the Java-like modeling language ABS, and evaluate VMs by case studies.
SPLC '21: Proceedings of the 25th ACM International Systems and Software Product Line Conference - Volume B
SESSION: Doctoral symposium
LIFTS: learning featured transition systems
This PhD project aims to automatically learn transition systems capturing the behaviour of a whole family of software-based systems. Reasoning at the family level yields important economies of scale and quality improvements for a broad range of systems, such as software product lines and adaptive or configurable systems. Yet, to fully benefit from these advantages, a model of the system family's behaviour is necessary. Such a model is often prohibitively expensive to create manually due to the number of variants. For large, long-lived systems with outdated specifications, or for systems that continuously adapt, the modelling cost is even higher. Therefore, this PhD proposes to automate the learning of such models from existing artefacts. To advance research at a fundamental level, our learning target is Featured Transition Systems (FTSs), an abstract formalism that can provide a pivot semantics for a range of variability-aware state-based modelling languages. The main research questions addressed by this PhD project are: (1) Can we learn variability-aware models efficiently? (2) Can we learn FTSs in a black-box fashion (i.e., with access to execution logs but not to source code)? (3) Can we learn FTSs in a white/grey-box testing fashion (i.e., with access to source code)? (4) How do the proposed techniques scale in practice?
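As a rough illustration (not taken from the thesis), an FTS can be thought of as a transition system whose transitions carry feature-expression guards, so that projecting onto a concrete product keeps only the transitions its features enable. The sketch below simplifies guards to sets of required features rather than full boolean feature expressions, and all names (the vending-machine states, the `Milk` feature) are hypothetical:

```java
import java.util.*;

// Minimal sketch of a featured transition system (FTS): each transition
// carries a guard, simplified here to a set of required features.
// Projecting onto a product keeps only the transitions whose guard is
// satisfied by the product's feature selection.
public class FtsSketch {
    record Transition(String from, String action, String to, Set<String> requiredFeatures) {}

    static List<Transition> project(List<Transition> fts, Set<String> product) {
        List<Transition> kept = new ArrayList<>();
        for (Transition t : fts) {
            if (product.containsAll(t.requiredFeatures)) {
                kept.add(t);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Transition> fts = List.of(
            new Transition("idle", "brew", "brewing", Set.of()),
            new Transition("idle", "addMilk", "frothing", Set.of("Milk")));
        // A product without the Milk feature keeps only the base transition.
        System.out.println(project(fts, Set.of()).size());        // prints 1
        System.out.println(project(fts, Set.of("Milk")).size());  // prints 2
    }
}
```

Learning an FTS then means recovering both the transitions and their guards from observations of individual variants, rather than learning one plain transition system per variant.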
SPLC '20: Proceedings of the 24th ACM Conference on Systems and Software Product Line - Volume A
SESSION: Adoption and experiences
PAxSPL: a feature retrieval process for SPL reengineering
In this extended abstract, we provide a journal-first summary of our work published in the Journal of Software: Practice and Experience (SPE) [1].
SPLC '20: Proceedings of the 24th ACM International Systems and Software Product Line Conference - Volume B
DEMONSTRATION SESSION: Demonstrations and Tools
Many-objective Search-based Selection of Software Product Line Test Products with Nautilus
The Variability Testing of Software Product Lines (VTSPL) problem concerns selecting the most representative products to be tested according to specific goals. Works in the literature use a wide variety of objectives and distinct algorithms; however, they neither address all the objectives at the same time nor offer an automated tool to support this task. To this end, this work introduces Nautilus/VTSPL, a tool that addresses the VTSPL problem, created by instantiating the Nautilus Framework. Nautilus/VTSPL allows the tester to experiment with and configure different objectives and categories of many-objective algorithms. The tool also supports visualizing the generated solutions, easing the decision-making process.
SQUADE '18: Proceedings of the 1st International Workshop on Software Qualities and Their Dependencies
Software quality through the eyes of the end-user and static analysis tools: a study on Android OSS applications
Source code analysis tools have been the vehicle for measuring and assessing the quality of a software product for decades. However, many recent studies have shown that post-deployment end-user reviews provide a wealth of insight into the quality of a software product and how it should evolve and be maintained. For example, end-user reviews help identify missing features or inform developers about incorrect or unexpected software behavior. We believe that analyzing end-user reviews alongside analysis tools is a crucial step towards understanding the complete picture of a software product's quality, as well as towards reasoning about its evolution history. In this paper, we investigate whether the two methods correlate with one another; in other words, we explore whether there is a relationship between user satisfaction and the application's internal quality characteristics. To conduct our research, we analyze a total of 46 actual releases of three Android open source software (OSS) applications on the Google Play Store. For each release, we employ multiple static analysis tools to assess several aspects of the application's software quality. We retrieve and manually analyze the complete set of reviews after each release of each application from its store page, totaling 1004 reviews. Our initial results suggest that high or low code quality does not necessarily ensure overall user satisfaction.
SQUADE 2019: Proceedings of the 2nd ACM SIGSOFT International Workshop on Software Qualities and Their Dependencies
SESSION: Papers
A heuristic fuzz test generator for Java native interface
It is well known that once a Java application uses native C/C++ methods through the Java Native Interface (JNI), any security guarantees provided by Java might be invalidated by the native methods, so any vulnerability in this trusted native code can compromise the security of the Java program. Fuzz testing is an approach to software testing whereby the system under test is bombarded with inputs generated by another program. When using a fuzzer to test JNI programs, accurately reaching the JNI functions and running through them to find the sensitive system APIs is a precondition of the test. In this paper, we present a heuristic fuzz generator method for JNI vulnerability detection based on the program's branch predicate information. Our experimental results show that the method needs fewer fuzzing iterations to reach more sensitive Windows APIs in Java native code.
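To make the trust boundary concrete (a hedged sketch, not from the paper): the Java side of a JNI program only declares the native method, while its C/C++ body, compiled outside the JVM, is not subject to Java's runtime checks. All names below are hypothetical, and the native implementation is assumed rather than shown:

```java
// Sketch of the JNI trust boundary. The native method is declared but not
// implemented here; in a real program it would be written in C/C++ and
// loaded via System.loadLibrary(...).
public class NativeBufferDemo {
    // Hypothetical native method; its C body could use memcpy without any
    // bounds checking, unlike Java array writes.
    public static native int copyIntoBuffer(byte[] dst, byte[] src);

    public static void main(String[] args) {
        byte[] buf = new byte[4];
        // Pure-Java code is bounds-checked by the JVM: this out-of-bounds
        // write throws an exception instead of corrupting memory.
        try {
            buf[10] = 1;
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Java caught out-of-bounds write");
        }
        // A buggy copyIntoBuffer could perform the same out-of-bounds write
        // in C and silently corrupt memory, bypassing Java's guarantees.
    }
}
```

This asymmetry is why a fuzzer targeting JNI code must first steer inputs through the Java layer into the native functions, where such unchecked operations live.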
SSE 2014: Proceedings of the 6th International Workshop on Social Software Engineering
SESSION: Collaboration
Can collaborative tagging improve user feedback? a case study
Supporting collaboration of heterogeneous teams in an augmented team room
SESSION: Human Factors
Eliciting and visualising trust expectations using persona trust characteristics and goal models
One size doesn't fit all: diversifying "the user" using personas and emotional scenarios
Towards discovering the role of emotions in stack overflow
SESSION: Empirical Studies
An empirical investigation of socio-technical code review metrics and security vulnerabilities
Developer involvement considered harmful?: an empirical examination of Android bug resolution times
SSE 2016: Proceedings of the 8th International Workshop on Social Software Engineering
SWAN 2016: Proceedings of the 2nd International Workshop on Software Analytics
SESSION: API Analytics and Security
Addressing scalability in API method call analytics
Vulnerability severity scoring and bounties: why the disconnect?
SESSION: Defects and Effort Estimation
A replication study: mining a proprietary temporal defect dataset
A hybrid model for task completion effort estimation
SESSION: Crowdsourcing
Analyzing on-boarding time in context of crowdsourcing
Software crowdsourcing reliability: an empirical study on developers behavior
SESSION: Design and Clones
FourD: do developers discuss design? revisited
Sampling code clones from program dependence graphs with GRAPLE
SWAN 2017: Proceedings of the 3rd ACM SIGSOFT International Workshop on Software Analytics
Copyright (c) 2020 - 2025, SIGSOFT; all rights reserved.
