MODELSWARD 2021 Abstracts


Area 1 - Applications and Software Development

Full Papers
Paper Nr: 8
Title:

An MDE Method for Improving Deep Learning Dataset Requirements Engineering using Alloy and UML

Authors:

Benoît Ries, Nicolas Guelfi and Benjamin Jahić

Abstract: Since the emergence of deep learning (DL) a decade ago, only a few software engineering development methods have been defined for systems based on this machine learning approach. Moreover, few DL approaches specifically address requirements engineering. In this paper, we define a model-driven engineering (MDE) method based on traditional requirements engineering to improve dataset requirements engineering. Our MDE method is composed of a process supported by tools to aid customers and analysts in eliciting, specifying and validating dataset structural requirements for DL-based systems. Our model-driven engineering approach uses the UML semi-formal modeling language for the analysis of dataset structural requirements, and the Alloy formal language for the requirements model execution based on our informal translational semantics. The model execution results are then presented to the customer to improve the dataset validation activity. Our approach aims at validating DL-based dataset structural requirements by modeling and instantiating their datatypes. We illustrate our approach with a case study on the requirements engineering of the structure of a dataset for the classification of five-segment digit images.

Paper Nr: 32
Title:

A Model-driven Implementation of PSCS Specification for C++

Authors:

Maximilian Hammer, Ralph Maschotta, Alexander Wichmann, Tino Jungebloud, Francesco Bedini and Armin Zimmermann

Abstract: OMG’s PSCS specification extends the execution model of fUML with precise runtime semantics for UML composite structures. With composite structures being a concept for describing structural properties of a model, the majority of execution semantics specified by PSCS concern the analysis and processing of static information about the model’s fine-grained structure at runtime. By using model-to-text transformation to generate source code that serves as an input for PSCS’s actual execution environment, the runtime level of model execution can be relieved by outsourcing the analysis and processing of static information to the level of code generation. By inserting this preprocessing step, the performance of the actual model execution at runtime can be improved. This paper introduces an implementation of the PSCS specification for C++ based on code generation using model-to-text transformation. Moreover, it presents a set of test models validating the correct functionality of the implementation as well as a performance benchmark. The PSCS implementation presented in this paper was developed as a part of the MDE4CPP project.

Short Papers
Paper Nr: 24
Title:

Interfacing Digital and Analog Models for Fast Simulation and Virtual Prototyping

Authors:

Daniela Genius and Ludovic Apvrille

Abstract: The paper presents an enhancement for the virtual simulation of analog/mixed-signal systems from high-level SysML models. Embedded systems, e.g. in robotics, feature both digital and analog circuits such as sensors and actuators. Simulation of these systems requires handling both domains (digital and analog); our aim is thus to make the interactions between these two domains explicit. To this end, the paper first defines new send and receive procedures, then explains how to check semantic aspects of models to ensure a correct simulation. A running example illustrates the basic concepts of our approach. A proof of concept based on an existing rover is also presented.

Paper Nr: 29
Title:

Empirical and Theoretical Evaluation of USE and OCLE Tools

Authors:

Carlos Vera-Mejia, Maria F. Granda and Otto Parra

Abstract: Validating the conceptual model (CM) is a key activity in ensuring software quality and saving costs, especially when adopting any type of Model-driven Software Engineering methodology, in which standard modelling languages such as UML and tool support for validation become essential. This paper analyses and evaluates the main characteristics of tools that support test-based validation of CMs. For this, two research approaches were used: (1) an empirical evaluation comparing the effectiveness and fault-detection efficiency in a CM and analysing the ease of use of two tools used to validate requirements in UML conceptual models, and (2) a complementary theoretical analysis. The study focuses on the class diagram, the most common type of UML diagram, and on two tools widely used by the modelling community for test-based validation: USE and OCLE. Theoretical and empirical comparisons were carried out with the aim of selecting an appropriate tool to validate UML-based CMs, with OCLE achieving the better score.

Area 2 - Methodologies, Processes and Platforms

Full Papers
Paper Nr: 3
Title:

From User Stories to Models: A Machine Learning Empowered Automation

Authors:

Takwa Kochbati, Shuai Li, Sébastien Gérard and Chokri Mraidha

Abstract: In modern software development, manually deriving architecture models from software requirements expressed in natural language is a tedious and time-consuming task, particularly for complex systems. Moreover, the increasing size of the developed systems raises the need to decompose the software system into sub-systems at early stages, since such a decomposition aids in better designing the system architecture. In this paper, we propose a machine learning based approach to automatically break down the system into sub-systems and generate preliminary architecture models from natural language user stories in the Scrum process. Our approach consists of three pillars. Firstly, we compute word-level similarity of requirements using word2vec as a prediction model. Secondly, we extend it to requirement-level similarity computation using a scoring formula. Thirdly, we employ the Hierarchical Agglomerative Clustering algorithm to group the semantically similar requirements and provide an early decomposition of the system. Finally, we implement a set of specific Natural Language Processing heuristics in order to extract relevant elements that are needed to build models from the identified clusters. Ultimately, we illustrate our approach by generating sub-systems expressed as UML use-case models and demonstrate its applicability using three case studies.
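The similarity-and-clustering pipeline this abstract describes can be illustrated with a toy sketch. This is not the authors' implementation: the word vectors, the averaging step, and the similarity threshold below are all hypothetical stand-ins for a trained word2vec model and the paper's scoring formula.

```python
import numpy as np

def requirement_similarity(req_a, req_b, word_vectors):
    """Score two requirements: average their word vectors, then take
    the cosine similarity of the two averages."""
    def embed(req):
        vecs = [word_vectors[w] for w in req.split() if w in word_vectors]
        return np.mean(vecs, axis=0)
    a, b = embed(req_a), embed(req_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_requirements(reqs, word_vectors, threshold=0.8):
    """Naive average-linkage agglomerative clustering: repeatedly merge
    the most similar pair of clusters until no pair exceeds the
    similarity threshold."""
    clusters = [[r] for r in reqs]
    def link(c1, c2):  # mean pairwise similarity between two clusters
        return np.mean([requirement_similarity(a, b, word_vectors)
                        for a in c1 for b in c2])
    while len(clusters) > 1:
        pairs = [(link(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        best, i, j = max(pairs)
        if best < threshold:
            break  # no sufficiently similar clusters remain
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

With toy two-dimensional vectors in which "login"/"password" point one way and "sensor"/"motor" another, the four user stories below separate into two clusters, giving an early two-sub-system decomposition in the spirit of the approach.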

Paper Nr: 10
Title:

Reusing (Safety-oriented) Compliance Artifacts while Recertifying

Authors:

Julieth C. Ardila and Barbara Gallina

Abstract: Revisions of safety-related standards lead to the release of new versions. Consequently, products and processes need to be recertified. To support that need, product line-oriented best practices have been adopted to systematize reuse at various levels, including the engineering process itself. As a result, Safety-oriented Process Line Engineering (SoPLE) is introduced to systematize reuse of safety-oriented process-related artifacts. To systematize reuse of artifacts during automated process compliance checking, SoPLE was conceptually combined with a logic-based framework. However, no integrated and tool-supported solution was provided. In this paper, we focus on process recertification (interpreted as the need to show process plan adherence with the new version of the standard) and propose a concrete technical and tool-supported methodological framework for reusing (safety-oriented) compliance artifacts while recertifying. We illustrate the benefits of our methodological framework by considering ISO 14971 versions, and measuring the enabled reuse.

Paper Nr: 11
Title:

Process Digitalization using Blockchain: EU Parliament Elections Case Study

Authors:

Marek Skotnica, Marta Aparício, Robert Pergl and Sérgio Guerreiro

Abstract: Blockchain aims to be a disruptive technology in many aspects of our lives. It attempts to disrupt those aspects of our lives that were hard to digitize, such as democratic elections, or that were digitized in a centralized way, such as the financial system or social media. Despite the initial success of blockchain platforms, many hacks, design errors, and a lack of standards have been encountered. This paper aims to provide a methodical approach to the design of blockchain-based systems and to help eliminate these issues. The approach is then demonstrated in an EU parliament elections case study.

Paper Nr: 14
Title:

Performance Aspects of Correctness-oriented Synthesis Flows

Authors:

Fritjof Bornebusch, Christoph Lüth, Robert Wille and Rolf Drechsler

Abstract: When designing electronic circuits, available synthesis flows focus either on accelerating the synthesized circuit or on correctness. In the quest for ever-faster hardware designs, the correctness of these designs is often neglected. Thus, designers need to trade off between correctness and performance. The question is: how large is this trade-off? This work presents a systematic comparison of two representative synthesis flows: the LegUp HLS framework as a representative of flows focusing on hardware acceleration, and a flow based on the proof assistant Coq focusing on correctness. For evaluation purposes, a 32-bit MIPS processor is synthesized using the two flows, and the final HDL implementations are compared regarding their performance. Our evaluation allows a quantitative analysis of the trade-off, showing that correctness-oriented synthesis flows are competitive concerning performance.

Paper Nr: 27
Title:

Integrating Kahn Process Networks as a Model of Computation in an Extendable Model-based Design Framework

Authors:

Omair Rafique and Klaus Schneider

Abstract: This work builds upon an extendable model-based design framework called SHeD that enables the automatic software synthesis of different classes of dataflow process networks (DPNs) which represent different kinds of models of computation (MoCs). SHeD proposes a general DPN model that can be restricted by constraints to special classes of DPNs. It provides a tool chain including different specialized code generators for specific MoCs and a runtime system that finally maps models using a combination of different MoCs on cross-vendor target hardware. In this paper, we further extend the framework by integrating Kahn process networks (KPNs) in addition to the existing support for dynamic and static/synchronous DPNs. The tool chain is extended for automatically synthesizing the modeled systems for the target hardware. In particular, a specialized code generator is developed and the runtime system is extended to implement models based on the underlying semantics of the KPN MoC. We modeled and automatically synthesized a set of benchmarks for different target hardware based on all supported MoCs of the framework, including the newly integrated KPN MoC. The results are evaluated to analyze and compare the code size and the end-to-end performance of the generated implementations of all MoCs.

Short Papers
Paper Nr: 9
Title:

Multi-view-Model Risk Assessment in Cyber-Physical Production Systems Engineering

Authors:

Stefan Biffl, Arndt Lüder, Kristof Meixner, Felix Rinker, Matthias Eckhart and Dietmar Winkler

Abstract: The engineering of complex, flexible production systems, Cyber Physical Production Systems (CPPSs), requires integrating models across engineering disciplines. A CPPS Engineering Network (CEN), an integrated multi-domain multi-view model, facilitates the assessment of risks to CPPS and product designs, i.e., risks stemming from several engineering disciplines. However, traditional risk assessment, e.g., Failure Mode and Effect Analysis (FMEA), provides informal cause-effect hypotheses, which may be hard to test without interdisciplinary links through the CEN to CPPS data sources. This paper aims to improve the effectiveness of model-based cause identification and validation for risks to CPPS functions that come from modeling in several CPPS disciplines by introducing the CPPS Risk Assessment (CPPS-RA) approach for representing FMEA cause-effect hypotheses and linking them to a CEN. These links provide the basis to specify CPPS engineering and operational data required for hypothesis testing. We evaluate the CPPS-RA approach in a feasibility study on a representative use case from discrete manufacturing. In the study context, domain experts found the CPPS-RA meta-model sufficiently expressive and the CPPS-RA method useful to validate FMEA results.

Paper Nr: 22
Title:

Transforming Data Flow Diagrams for Privacy Compliance

Authors:

Hanaa Alshareef, Sandro Stucki and Gerardo Schneider

Abstract: Most software design tools, such as Data Flow Diagrams (DFDs), focus on functional aspects and thus cannot model non-functional aspects like privacy. In this paper, we provide an explicit algorithm and a proof-of-concept implementation to transform DFDs into so-called Privacy-Aware Data Flow Diagrams (PA-DFDs). Our tool systematically inserts privacy checks into a DFD, generating a PA-DFD. We apply our approach to two realistic applications from the construction and online retail sectors.
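The core transformation idea (insert privacy checks into a DFD) can be sketched as follows. This is an illustrative simplification only, not the paper's algorithm: the PA-DFD elements in the actual work are richer, and the flow tuples, the `personal_data` set, and the `privacy_check_*` node names here are all hypothetical.

```python
def to_pa_dfd(flows, personal_data):
    """Reroute every flow carrying personal data through a newly
    inserted privacy-check node, yielding the flows of a PA-DFD.
    A flow is a (source, destination, data_item) triple."""
    pa_flows, counter = [], 0
    for src, dst, data in flows:
        if data in personal_data:
            check = f"privacy_check_{counter}"  # hypothetical node name
            counter += 1
            pa_flows.append((src, check, data))  # into the check node
            pa_flows.append((check, dst, data))  # out of the check node
        else:
            pa_flows.append((src, dst, data))    # non-personal: unchanged
    return pa_flows
```

For example, in a small online-retail DFD, a flow carrying a customer address would be split in two around a check node, while a flow carrying an anonymous order identifier would pass through unchanged.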

Paper Nr: 43
Title:

Continuous Integration in Multi-view Modeling: A Model Transformation Pipeline Architecture for Production Systems Engineering

Authors:

Felix Rinker, Laura Waltersdorfer, Kristof Meixner, Dietmar Winkler, Arndt Lüder and Stefan Biffl

Abstract: Background. Systems modeling in Production Systems Engineering (PSE) is complex: multiple views from different disciplines have to be integrated, while semantic differences stemming from various descriptions must be bridged. Aim. This paper proposes the Multi-view Modeling Framework (MvMF) approach and the architecture of a model transformation pipeline. The approach aims to ease the setup and reduce the configuration effort of multi-view modeling operations and to support the reusability of modeling environments, e.g. for additional view integration. Method. We combine multi-view modeling with principles from distributed, agile workflows, i.e., Git and Continuous Integration. Results. The MvMF provides a light-weight modeling operation environment for AutomationML (AML) models. We show MvMF capabilities and demonstrate the feasibility of MvMF with a use case covering fundamental model operation features, such as compare and merge. Conclusion. Increasing requirements on the traceability of changes and the validation of system designs call for improved and extended model transformations and integration mechanisms. The proposed architecture and prototype design represent a first step towards an agile PSE modeling workflow.

Paper Nr: 4
Title:

SeGa4Biz: Model-Driven Framework for Developing Serious Games for Business Processes

Authors:

Faezeh Khorram, Masoumeh Taromirad and Raman Ramsin

Abstract: Organizations look for effective ways to teach their business processes to their employees. The application of serious games for teaching business processes has recently been gaining traction. However, existing works are by and large business-specific and few of them aim at teaching business processes in general; moreover, the development of such games inherently suffers from a lack of precise and clear development approaches. This paper presents SeGa4Biz, a model-driven framework for developing serious games for teaching business processes. Modeling supports different levels of abstraction and hence increases user involvement throughout the development. SeGa4Biz particularly provides metamodels for creating Educational Serious Game (ESG) and Game-Aware Process (GAP) models, and automates considerable parts of the modeling and development activities via model transformation. The effectiveness and applicability of SeGa4Biz are examined through a serious game development project in a software development company.

Paper Nr: 55
Title:

Towards Evolutionary Multi-layer Modeling with DMLA

Authors:

Sándor Bácsi, Dániel Palatinszky and Máté Hidvégi

Abstract: State-of-the-art meta-model based methodologies are facing increasing pressure under new challenges originating from practical applications. In such cases, there is a strong need for approaches that support continuous, fine-grained, incremental refinement of concepts. To address these challenges, our research group started working on a new modeling framework, the Dynamic Multi-Layer Algebra (DMLA), a few years ago. DMLA follows a completely new modeling paradigm, referred to as multi-layer modeling. Multi-layer modeling originates from multi-level modeling and offers a highly flexible abstraction management approach in a level-blind fashion through its advanced deep instantiation and evolutionary snapshot management. One of the key features of DMLA is its self-validation mechanism based on a built-in, completely modeled operation language. Our initial solution had its limitations: since interactive editing was not supported, modelers could interact only with a single snapshot of the model. To overcome these limitations, we have created a virtual machine and an interpreter. In this paper, we present the novel architecture of our solution and demonstrate the feasibility of our approach by a walk-through of the concrete model management steps of an illustrative example, showing the benefits of evolutionary model editing in DMLA.

Area 3 - Modeling Languages, Tools and Architectures

Full Papers
Paper Nr: 2
Title:

Combining a Declarative Language and an Imperative Language for Bidirectional Incremental Model Transformations

Authors:

Matthias Bank, Thomas Buchmann and Bernhard Westfechtel

Abstract: Bidirectional incremental model transformations are crucial for supporting round-trip engineering in model-driven software development. A variety of domain-specific languages (DSLs) have been proposed for the declarative specification of bidirectional transformations. Unfortunately, previous proposals fail to provide the expressiveness required for solving practically relevant bidirectional transformation problems. To address this shortcoming, we propose a layered approach: on the declarative level, a bidirectional transformation is specified concisely in a small and light-weight external DSL. From this specification, code is generated into an object-oriented framework, on top of which the behavior of the transformation may be complemented and adapted in an imperative internal DSL. An evaluation with the help of a well-known transformation case demonstrates that this layered hybrid approach is not only concise and expressive, but also scalable.

Paper Nr: 37
Title:

RCM: Requirement Capturing Model for Automated Requirements Formalisation

Authors:

Aya Zaki-Ismail, Mohamed Osama, Mohamed Abdelrazek, John Grundy and Amani Ibrahim

Abstract: Most existing automated requirements formalisation techniques require system engineers to (re)write their requirements using a set of predefined requirement templates with a fixed structure and known semantics to simplify the formalisation process. However, these techniques require understanding and memorising the requirement templates, which are usually fixed in format, limit the requirements that can be captured, and do not allow more diverse requirements to be captured. To address these limitations, we need a reference model that captures key requirement details regardless of their structure, format or order. Then, using NLP techniques, we can transform textual requirements into the reference model. Finally, using a suite of transformation rules, we can convert these requirements into formal notations. In this paper, we introduce the first and key step in this process, a Requirement Capturing Model (RCM) - as a reference model - that models the key elements of a system requirement regardless of their format or order. We evaluated the robustness of the RCM model against 15 existing requirements representation approaches and a benchmark of 162 requirements. Our evaluation shows that the RCM breakdowns support a wider range of requirement formats than the existing approaches. We also implemented a suite of transformation rules that transforms RCM-based requirements into temporal logic(s). In the future, we will develop an NLP-based RCM extraction technique to provide an end-to-end solution.

Paper Nr: 51
Title:

Dedicated Model Transformation Languages vs. General-purpose Languages: A Historical Perspective on ATL vs. Java

Authors:

Stefan Götz, Matthias Tichy and Timo Kehrer

Abstract: Model transformations are among the key concepts of model-driven engineering (MDE), and dedicated model transformation languages (MTLs) emerged with the popularity of the MDE paradigm about 15 to 20 years ago. MTLs claim to increase the ease of development of model transformations by abstracting from recurring transformation aspects and hiding complex semantics behind a simple yet intuitive syntax. Nonetheless, MTLs are rarely adopted in practice, there is still no empirical evidence for this claim, and the argument of abstraction deserves a fresh look in the light of modern general-purpose languages (GPLs) which have undergone a significant evolution in the last two decades. In this paper, we report on a study in which we compare the complexity of model transformations written in three different languages, namely (i) the Atlas Transformation Language (ATL), (ii) Java SE5, and (iii) Java SE14; the Java transformations are derived from an ATL specification using a translation schema we developed in the course of our study. In a nutshell, we found that some of the new features in Java SE14 compared to Java SE5 help to significantly reduce the complexity of transformations written in Java. At the same time, however, the relative amount of complexity that stems from aspects that ATL can hide from the developer stays about the same. Based on these results, we indicate potential avenues for future research on the comparison of MTLs and GPLs in a model transformation context.

Short Papers
Paper Nr: 15
Title:

Improving Digital Twin Experience Reports

Authors:

Bentley J. Oakes, Ali Parsai, Simon Van Mierlo, Serge Demeyer, Joachim Denil, Paul De Meulenaere and Hans Vangheluwe

Abstract: Digital twins (DTs) are prevalent throughout industrial domains as evidenced by the rapid pace of experience reports in the literature. However, there remains disagreement about the precise definition of a DT and the essential characteristics in the DT paradigm, such as the scope of the system-under-study and the time-scale of its communication with the DT. These experience reports could therefore be hampering further classification and research insights by not reporting all of these relevant details about the DT solutions. We address these concerns by providing a conceptual structure for DTs as a common understanding and checklist for researchers and practitioners to precisely describe the characteristics and capabilities of their DT solutions. We express five experience reports using our structure to demonstrate its applicability and role as a guideline to improve the reporting of characteristics and increase the clarity of future experience reports.

Paper Nr: 17
Title:

Towards a Model Transformation based Code Renovation Tool

Authors:

Norbert Somogyi, Gábor Kövesdán and László Lengyel

Abstract: Maintaining legacy software has always required considerable effort in software engineering. To alleviate these efforts, extensive research has been dedicated to automating the modernization of such systems. The process includes several challenges, such as the syntactic translation of the old software to a modern programming language, the mapping of the type systems of the source and target languages, and the paradigm shift if the two languages use different approaches, such as transforming procedural code to the object-oriented or functional paradigm. In the case of procedural to object-oriented transformations, the state-of-the-art solutions are not capable of automatically producing satisfactory results, and some researchers suggest that complete automation will never be achieved. In our paper, we report on our work in progress on using recent advances in the fields of modeling and model transformation to build a software modernization tool. Our solution capitalizes on the advantages of the Ecore-based modeling ecosystem of Eclipse and focuses not just on the syntactic translation of the system, but also on the paradigm shift of procedural to object-oriented transformations. Our approach builds a semantic model from the original source code written in C and produces Java code by analysing and transforming this model.

Paper Nr: 18
Title:

From Quantities in Software Models to Implementation

Authors:

Steve McKeever

Abstract: In scientific and engineering applications, physical quantities expressed as units of measurement (UoM) are used regularly. If the algebraic properties of a system’s UoM information are incorrectly handled at run-time, then catastrophic problems can arise. Much work has gone into creating libraries, languages and tools to ensure developers can leverage UoM information in their designs and codes. While there are technical solutions that allow units of measurement to be specified at both the model and code level, a broader assessment of their strengths and weaknesses has not been undertaken. Inspired by a survey of practitioners, we review four competing methods that support unit checking of code bases. The most straightforward solution is for the programming language to Natively support UoM, as this allows for efficient unit conversion and static checking. Alas, none of the mainstream languages provide such support. Libraries might seem compelling, and all popular programming languages have a myriad of options, but they are cumbersome in practice and incur specific performance costs. Libraries are best suited to applications in which UoM checking is desirable at run-time. Lightweight methods, such as Component based checking or Black Box testing, provide many benefits of UoM libraries with minimal overheads but sacrifice coverage and thus robustness. By separating and analysing the various options, we hope to enable scientific developers to select the most appropriate approach to transferring UoM information from their software models to their programs.
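The "Libraries" option mentioned above boils down to a familiar mechanism, sketched here as a minimal, hypothetical run-time unit checker (none of the surveyed libraries is reproduced): each value carries exponents of base units, addition demands matching units, and multiplication and division combine exponents.

```python
class Quantity:
    """A value tagged with exponents of the base units (m, kg, s).
    Unit errors surface at run-time, as with typical UoM libraries."""
    def __init__(self, value, m=0, kg=0, s=0):
        self.value, self.units = value, (m, kg, s)

    def __add__(self, other):
        # Addition only makes sense for identical dimensions.
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, *self.units)

    def __mul__(self, other):
        # Multiplying quantities adds the unit exponents.
        units = tuple(a + b for a, b in zip(self.units, other.units))
        return Quantity(self.value * other.value, *units)

    def __truediv__(self, other):
        # Dividing quantities subtracts the unit exponents.
        units = tuple(a - b for a, b in zip(self.units, other.units))
        return Quantity(self.value / other.value, *units)

# Dividing a distance by a time yields a quantity with units m^1 s^-1,
# while adding metres to seconds raises a TypeError at run-time.
speed = Quantity(100.0, m=1) / Quantity(9.58, s=1)
```

This also illustrates the trade-off the abstract notes: the check happens only when the offending operation executes, unlike native or static approaches.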

Paper Nr: 26
Title:

A Modeling Workbench for the Development of Situation-specific Test Co-migration Methods

Authors:

Ivan Jovanovikj, Anu T. Thottam, Vishal J. Vincent, Enes Yigitbas, Stefan Sauer and Gregor Engels

Abstract: Reusing existing test cases is a widely used validation technique in software migration projects. When performing a test case migration, a transformation method is required which serves as a technical guideline and describes the activities to be performed, the tools to be used, and the roles to be involved. The transformation method should consider the situational context, as it influences the quality of and the effort required for the test case migration. On the one hand, the development of a situation-specific transformation method is a very important task, as it influences the overall success of the migration project in terms of effectiveness and efficiency. On the other hand, the development and enactment of situation-specific test transformation methods without proper tool support and guidance is a complex and cumbersome task. Therefore, in this paper, we present a modeling workbench implemented in Eclipse Sirius that supports the development of situation-specific test case co-migration methods. Initial evaluation results show the benefit of the modeling workbench in terms of efficiency, effectiveness, and user satisfaction.

Paper Nr: 28
Title:

HERO vs. Zombie: Identifying Zombie Guests in a Virtual Machine Environment

Authors:

Yael Elinav, Alex Moshinky, Lior Siag and Nezer J. Zaidenberg

Abstract: Virtual servers are important in many data-centers. Multiple guest virtual machines are consolidated on several hosts on-site or on the cloud, and serve the organization’s computational needs. However, virtual machines not cleared from the system, known as zombie machines, waste resources and pose a security risk. We present a novel tool to optimize resource use by tracking down zombie machines: HERO (Host Environment Resource Optimization). HERO leverages multiple testing approaches and machine learning to assist system administrators in locating “zombie” machines.

Paper Nr: 33
Title:

Model-based Analysis Support for Dependable Complex Systems in CHESS

Authors:

Alberto Debiasi, Felicien Ihirwe, Pierluigi Pierini, Silvia Mazzini and Stefano Tonetta

Abstract: The challenges related to dependable complex systems are heterogeneous and involve different aspects of the system. On the one hand, the decision-making processes need to take many options into account. On the other hand, the design of the system's logical architecture must consider various dependability concerns such as safety, reliability, and security. Moreover, in the case of high-assurance systems, the analysis of such concerns must be performed with rigorous methods. In this paper, we present the new developments of CHESS, a cross-domain, model-driven, component-based and open-source tool for the development of high-integrity systems. We focus on the recently distributed new version of CHESS, which supports extended model-based development and analyses for safety and security concerns.

Paper Nr: 38
Title:

RCM-Extractor: Automated Extraction of a Semi Formal Representation Model from Natural Language Requirements

Authors:

Aya Zaki-Ismail, Mohamed Osama, Mohamed Abdelrazek, John Grundy and Amani Ibrahim

Abstract: Formal verification requires system requirements to be specified in formal notations. Formalising system requirements manually is a time-consuming and error-prone process, and requires engineers to have strong mathematical and domain expertise. Most existing requirements formalisation techniques assume requirements to be specified in pre-defined templates, and employ pre-defined transformation rules to transform requirements specified in these templates into formal notations. These techniques tend to have limited expressiveness and, more importantly, require system engineers to re-write their system requirements following the templates. In this paper, we introduce an automated extraction technique (RCM-Extractor) that extracts the key constructs of a comprehensive and formalisable semi-formal representation model from textual requirements. We have evaluated RCM-Extractor on a dataset of 162 requirements curated from the literature. RCM-Extractor achieved 95% precision, 79% recall, 86% F-measure and 75% accuracy.

Paper Nr: 44
Title:

A Framework for Projectional Multi-variant Model Editors

Authors:

Johannes Schröpfer, Thomas Buchmann and Bernhard Westfechtel

Abstract: Model-driven software product line engineering (MDSPLE) combines the productivity gains achieved by model-driven software engineering and software product line engineering. In MDSPLE, multi-variant models are created in domain engineering which are configured into single-variant models that are adapted further (if required) in application engineering. Since multi-variant models are inherently complex, tools are urgently needed which provide specific support for editing multi-variant models. In this paper, we present a framework for projectional multi-variant editors which do not hide complexity but make it manageable by a user-friendly representation. At all times, a domain engineer is aware of editing a multi-variant model which is necessary to assess the impact of changes on all model variants. Projectional multi-variant editors provide a novel approach to representing variability information which is displayed non-intrusively and supports a clear separation of the product space (the domain model) from the variant space (variability annotations). Furthermore, the domain engineer may employ a projectional multi-variant editor to adapt the representation of the multi-variant domain model in a flexible way, according to the current focus of interest.

Paper Nr: 46
Title:

SciModeler: A Metamodel and Graph Database for Consolidating Scientific Knowledge by Linking Empirical Data with Theoretical Constructs

Authors:

Raoul Nuijten and Pieter Van Gorp

Abstract: An important purpose of science is building and advancing general theories from empirical data. This process is complicated by the immense volume of empirical data and scientific theories in some fields. Particularly, the systematic linking of empirical data with theoretical constructs is currently lacking. Within this article, we propose a prototypical solution (i.e., a metamodel and graph database) for consolidating scientific knowledge by linking theoretical constructs with empirical data. We conducted a case study within the field of health behavior change where the system is used to record three scientific theories and three empirical studies as well as their mutual links. Finally, we demonstrate how the system can be queried to accumulate knowledge.

Paper Nr: 53
Title:

Verification of Scenario-based Behavioural Models using Capella and PyNuSMV

Authors:

Simon Busard, Christophe Ponsard and Charles Pecheur

Abstract: Scenarios are widely used to capture a set of key system behaviours. They are part of standardised modelling languages like UML and SysML. Precise semantics enable analysing them at a formal level. In this paper, we show how scenarios can be used to perform early checks on behavioural models in an industrial context by providing a bridge between system modelling with Capella and the NuSMV model checker, through the PyNuSMV integration library and using hMSC semantics. Both the modelling front-end and the verification back-end are discussed and illustrated on a case study of unmanned aerial vehicles. Some interesting extensions that would increase the value of the integration are also identified and discussed.

Paper Nr: 58
Title:

Towards Repairing Scenario-Based Models with Rich Events

Authors:

Guy Katz

Abstract: Repairing legacy systems is a difficult and error-prone task: often, limited knowledge of the intricacies of these systems could make an attempted repair result in new errors. Consequently, it is desirable to repair such systems in an automated and sound way. Here, we discuss our ongoing work on the automated repair of Scenario-Based Models: fully executable models that describe a system using scenario objects that model its individual behaviors. We show how rich, scenario-based models can be model-checked, and then repaired to prevent various safety violations. The actual repair is performed by adding new scenario objects to the model, and without altering existing ones — in a way that is well aligned with the principles of scenario-based modeling. In order to automate our repair approach, we leverage off-the-shelf SMT solvers. We describe the main principles of our approach, and discuss our plans for future work.

Paper Nr: 5
Title:

TranspLanMeta: A Metamodel for TranspLan Modeling Language

Authors:

Deniz Cetinkaya and Mahmood Hosseini

Abstract: Transparency and transparent decision making are essential requirements in information systems. To this end, a modeling language called TranspLan has been proposed. TranspLan is a domain-specific modeling language designed for analysing and modeling transparency requirements in information systems. This paper presents a metamodel for transparency requirements modeling. We introduce a model-driven approach to the TranspLan language specifications to facilitate more efficient use of the language in real-life cases. Metamodeling is an effective method for formally defining domain-specific languages and moving from specifications to computer-aided modeling. In this paper, we propose a metamodel for the TranspLan modeling language, called TranspLanMeta. The metamodeling process helps us transfer the TranspLan language specifications into a machine-readable format. The metamodel has been developed with GME (Generic Modelling Environment), which is a configurable toolkit for creating domain-specific modeling and program synthesis environments. By developing TranspLanMeta with GME, an automatically generated modeling tool for the TranspLan language is provided as well. In this way, an effective approach for accelerating software development is followed and the auto-generated modeling editor is used to define various models. This work provides a formal and practical solution for transparency modeling and a well-defined basis for using transparency requirements models in the further steps of the business process.

Paper Nr: 6
Title:

Addressing Industrial Needs with the Fulib Modeling Library

Authors:

Albert Zündorf, Adrian Kunz and Christoph Eickhoff

Abstract: Fulib is a new lightweight modeling tool providing code generation and model transformations. Code generated by Fulib does not need a Fulib runtime library. Fulib has been designed to be integrated into agile software development processes. Fulib collaborates with versioning tools like Git. These features address practical problems with modeling tools that frequently prevent their usage in industry.

Paper Nr: 13
Title:

Textual Approach for Designing Database Conceptual Models: A Focus Group

Authors:

Jonnathan Lopes, Maicon Bernardino, Fábio Basso and Elder Rodrigues

Abstract: Different approaches to software development are based on at least three database models: conceptual, logical, and physical, requiring distinct abstraction levels to represent complementary concepts. The selection of a conceptual database modeling approach depends, among other things, on the problem domain and on the knowledge and preferences of the developer. This paper presents a new domain-specific textual language devoted to conceptual relational database modeling, i.e., entity-relationship modeling. Furthermore, we present a preliminary evaluation conducted to analyze our proposed DSL, comparing two grammar versions using the focus group research method.

Paper Nr: 23
Title:

Direct Model-checking of SysML Models

Authors:

Alessandro T. Calvino and Ludovic Apvrille

Abstract: Model-checking intends to verify whether or not a property is satisfied by a model. Model-checking of high-level models, e.g. SysML models, usually first requires a model transformation to a low-level formal specification. The present paper proposes a new model-checker that can be applied (almost) directly to the SysML model. The paper first explains how this model-checker works. Then, we explain how it can efficiently check CTL-like properties. Finally, the paper discusses the performance of this model-checker as integrated in the TTool framework.

Paper Nr: 30
Title:

Characterization of Software Design and Collaborative Modeling in Open Source Projects

Authors:

Khandoker Rahad, Omar Badreddin and Sayed M. Reza

Abstract: Software design is fundamental to developing high-quality, sustainable, maintainable software. Design languages, such as UML, have become the de facto standard in software design, but the extent of their adoption in mainstream practice remains unclear. Recent studies suggest significant and increasing uptake in mainstream and open source spheres. These studies are often underpinned by mining repositories and software modeling artifacts, and focus on counting instances of modeling artifacts as an indicator of adoption. This study aims to characterize this uptake in greater depth by analyzing the instances of models in open source projects. The goal is to uncover the profiles of developers who tend to create modeling artifacts and of those who maintain them throughout the project life cycle, and to uncover the timelines of model creation and manipulation with reference to project evolution. This study sheds light on the nature of model-based collaboration and interactions and characterizes the role of model-based artifacts well beyond mining their presence in open source repositories. The study finds that, despite the nominal increase in the presence of model-based artifacts, these artifacts are rarely maintained and are typically created by a small and distinct set of practitioners. Models are often created early in the project life cycle and do not play any significant role in the collaborative development activities of the subject projects. The life span of these model files is shorter than that of code files. Unexpectedly, models tend to be more frequently updated and maintained when the project has relatively few models.

Paper Nr: 40
Title:

SRCM: A Semi Formal Requirements Representation Model Enabling System Visualisation and Quality Checking

Authors:

Mohamed Osama, Aya Zaki-Ismail, Mohamed Abdelrazek, John Grundy and Amani Ibrahim

Abstract: Requirements engineering is pivotal to the successful development of any given system. The core artifact of this phase is the requirements specification document. Requirements can be specified in informal, semi-formal, and formal notations. The majority of requirements across many fields and domains are written in natural language. However, natural language is inherently ambiguous and imprecise, and such requirements cannot be automatically validated. Formal notations, on the other hand, enable automated testing and validation but are only comprehensible to experts and require rewriting the requirements. Semi-formal notations strike a good balance between comprehension and checking for several systems. However, the majority of the existing representation models mandate that the requirements be (re)written to adhere to certain templates. They also do not support automated checking. In this paper, we present SRCM, a semi-formal requirements representation model based on a comprehensive requirements capturing model (RCM) that does not impose many limitations on how the requirements can be written. We also provide an automated approach to construct SRCM from RCM. In addition to providing a unified visualisation of the system entities and the relations between the requirements' key components, SRCM also enables automated quality checking of the requirements.

Paper Nr: 45
Title:

Approach for Evolving Sensing and Actuation Devices in Cyberphysical Systems Architectures

Authors:

Diego C. Sales and Leandro B. Becker

Abstract: The constant technological evolution, in this case targeting sensing and actuation (S&A) devices, causes designers to evaluate potential modifications (upgrades) to the architecture of cyberphysical systems (CPS). Including or exchanging S&A devices in the architecture is a complex activity that requires a systematic approach. This paper presents the CPS Architectural Evolution Approach (CAvA), which provides means to evaluate the impacts on architecture components, requirements, and software quality characteristics, mitigating risks and possible failures in meeting design goals. CAvA proposes a set of phases and activities to guide designers through selection, evaluation, integration, and compatibility via the analysis of software quality attributes, based on ISO/IEC 25010. A software tool was designed to support the application of CAvA. A case study is presented to highlight the benefits of CAvA for evolving the architecture of a small-scale UAV, showing a set of possible scenarios according to predefined goals.

Paper Nr: 48
Title:

Towards a Theory of Models in Systems Development Modeling

Authors:

Andrea Hillenbrand

Abstract: Despite decades of experience in the development of software systems, the controversies over competing methodologies have not subsided. The pivotal element of reasoning and justification for any perspective taken thereon is arguably the recourse to models. Specifically, by means of a logical conceptualization of the notion of model-being, judgments on models can then be assessed as to whether they are justified, because, as model judgments, they are based on typical contextual constructive relationships. With such a conceptualization, the potential lies in the realization of an epistemic architecture that emerges as a systematic structure of relations between models at object and meta levels. Each model application is then embedded in a systematic process during which a software system is developed. In this article, the combinatorics of model interweavements of such an epistemic architecture is presented, thereby providing the means to assess the development of a particular system as well as best practices and methodologies of systems development in general.

Paper Nr: 49
Title:

Layer Modeling and Its Code Generation based on Context-oriented Programming

Authors:

Chinatsu Yamamoto, Ikuta Tanigawa, Kenji Hisazumi, Mikiko Sato, Takeshi Ohkawa, Nobuhiko Ogura and Harumi Watanabe

Abstract: This paper addresses the runtime cross-cutting concerns problem with a layer structure model based on UML (Unified Modeling Language) and code generation for COP (Context-Oriented Programming). In software development, the cross-cutting concerns problem is well known to cause complicated models, because one cross-cutting concern affects multiple objects. These problems also occasionally occur at runtime. Recently, this problem has become more challenging: modern software, such as IoT systems, usually connects many machines and devices and changes context-dependent behavior at runtime. Thus, runtime cross-cutting problems will occur increasingly often. To solve this problem, we focus on COP, which can gather scattered cross-cutting concerns into a single module called a layer and change layers at runtime. However, UML lacks notation for COP as well as corresponding code generation. Therefore, as a first step towards solving the runtime cross-cutting concerns problem, we propose a layer structure model in UML and COP code generation from this model.

Paper Nr: 56
Title:

Extending BPM(N) to Support Face-to-Virtual (F2V) Process Modeling

Authors:

Sasha M. Rudan, Sinisa Rudan and Birger Møller-Pedersen

Abstract: In this paper, we present research on CoPI4P (Community of Practice & Interest for Purpose) communities, which consist of knowledge and creative workers in domains such as education, environmentalism, inter-disciplinary research, and engaged art, and which practice F2V (face-to-virtual) processes, i.e. processes that consist of both face-to-face and virtual (online) activities. Standards and methods for modeling business processes have successfully solved various crucial problems, such as providing boundary objects (or a common language) between business analysts, domain experts, process performers and business solutions developers. Performers can execute business processes with standards that support the execution of such processes. However, for CoPI4P communities, BPM standards 1) require additional domain specialization and 2) are either too imperative or fail to provide enough guidance. This paper identifies these challenges and provides partial solutions applicable to modeling and supporting CoPI4P communities and the corresponding F2V business processes. In addition, results of focus group interviews and surveys are provided that shed light, notably, on the way CoPI4P communities have been impacted by COVID-19 and how they have coped with digital disruptions to their working models. Our contribution to business process management lies in researching and identifying the peculiarities of such communities and their workflows, followed by interventions in the BPM, and particularly the BPMN, domain. We then propose a toolset of BPMN extensions required to match the observed F2V workflows and thus to digitalize CoPI4P business models. In this context, we introduce the notion of sub-process palettes as a means to reduce the rigidity of processes and to introduce their personalization.