MODELSWARD 2020 Abstracts


Area 1 - Applications and Software Development

Full Papers
Paper Nr: 24
Title:

Towards Ontology Driven Provenance in Scientific Workflow Engine

Authors:

Anila S. Butt, Nicholas Car and Peter Fitch

Abstract: Most workflow engines automatically capture and provide access to their workflow provenance, which enables their users to trust and reuse scientific workflows and their data products. However, the task of instrumenting a workflow engine to capture and query provenance data is burdensome. It may require adding hooks to the workflow engine, which can perturb execution. An alternative approach is intelligent logging and a careful analysis of logs to extract critical information about workflows. However, rapid growth in the size of the logs and the cloud-based multi-tenant nature of the engines have made this solution increasingly inefficient. We propose ProvAnalyser, an ontology-based approach to capture the provenance of workflows from event logs. Our approach reduces provenance use cases to SPARQL queries over the captured provenance and is capable of reconstructing complete data and invocation dependency graphs for a workflow run. The queries can be performed on nested workflow executions and can return information generated from one or several executions.
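The abstract's core step, recovering invocation dependencies from event logs, can be sketched in plain Python (assuming a hypothetical log format of (activity, read/write, data-id) triples; ProvAnalyser itself maps logs to an ontology and answers such questions with SPARQL):

```python
from collections import defaultdict

def dependency_graph(events):
    """events: (activity, 'read'|'write', data_id) triples from a log."""
    producers = {}                # data_id -> activity that wrote it
    graph = defaultdict(set)
    for activity, direction, data_id in events:
        if direction == "write":
            producers[data_id] = activity
    for activity, direction, data_id in events:
        if direction == "read" and data_id in producers:
            graph[activity].add(producers[data_id])  # invocation dependency
    return {a: sorted(deps) for a, deps in graph.items()}

log = [
    ("fetch",   "write", "d1"),
    ("clean",   "read",  "d1"),
    ("clean",   "write", "d2"),
    ("analyse", "read",  "d2"),
]
print(dependency_graph(log))  # {'clean': ['fetch'], 'analyse': ['clean']}
```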

Paper Nr: 38
Title:

Graph-based Model Inspection Tool for Multi-disciplinary Production Systems Engineering

Authors:

Felix Rinker, Laura Waltersdorfer, Manuel Schüller and Dietmar Winkler

Abstract: Background. In Production Systems Engineering (PSE), the planning of production systems involves domain experts from various domains, such as mechanical, electrical and software engineering, collaborating and modeling their specific views on the system. These models, describing entire plants, can reach a large size (up to several GBs) with complex relationships and dependencies. Due to the size, ambiguous semantics and diverging views, consistency of data and awareness of changes are challenging to track. Aim. In this paper we explore visualization mechanisms for a model inspection tool to support consistency checking and the awareness of changes in multi-disciplinary PSE environments, as well as more efficient handling of AutomationML (AML) files. Method. We explored various visualization capabilities that are suitable for the hierarchical structures common in PSE and identified requirements for a model-inspection tool for PSE purposes based on workshops with our company partner. A proof-of-concept software prototype was developed based on the elicited requirements. Results. We evaluate the effectiveness of our Information Visualisation (InfoVis) approach in comparison to a standard modeling tool in PSE, the AutomationML Editor. The evaluation showed promising results for handling large-scale engineering models based on AML for the selected scenarios, but also revealed areas for future improvement, such as more advanced capabilities. Conclusion. Although InfoVis was found useful in the evaluation context, an in-depth analysis with domain experts from industry regarding usability and features remains future work.

Paper Nr: 46
Title:

Guarded Deep Learning using Scenario-based Modeling

Authors:

Guy Katz

Abstract: Deep neural networks (DNNs) are becoming prevalent, often outperforming manually-created systems. Unfortunately, DNN models are opaque to humans, and may behave in unexpected ways when deployed. One approach for allowing safer deployment of DNN models calls for augmenting them with hand-crafted override rules, which serve to override decisions made by the DNN model when certain criteria are met. Here, we propose to bring together DNNs and the well-studied scenario-based modeling paradigm, by expressing these override rules as simple and intuitive scenarios. This approach can lead to override rules that are comprehensible to humans, but are also sufficiently expressive and powerful to increase the overall safety of the model. We describe how to extend and apply scenario-based modeling to this new setting, and demonstrate our proposed technique on multiple DNN models.
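The override-rule idea can be illustrated with a toy sketch (our own example, not the paper's scenario-based formalism; the policy, state fields and thresholds are invented):

```python
def dnn_policy(speed):
    # Stand-in for an opaque DNN controller (invented behaviour).
    return "accelerate" if speed < 120 else "hold"

def apply_override(state, action):
    # Scenario: "whenever an obstacle is closer than 10 m, force braking,
    # regardless of the DNN's decision."
    if state["obstacle_distance_m"] < 10:
        return "brake"
    return action

def guarded_policy(state):
    return apply_override(state, dnn_policy(state["speed"]))

print(guarded_policy({"speed": 80, "obstacle_distance_m": 5}))   # brake
print(guarded_policy({"speed": 80, "obstacle_distance_m": 50}))  # accelerate
```

The override rule stays readable on its own, which is the comprehensibility benefit the abstract claims for scenario-based rules.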

Short Papers
Paper Nr: 36
Title:

Development of Health Software using Behaviour Driven Development - BDD

Authors:

Mohammad Z. Anjum, Silvana M. Mahon and Fergal McCaffery

Abstract: The health software industry is facing an immense challenge of managing quality and preventing software failures. Poorly defined requirements are one of the significant causes of health software failures. Agile practices are being increasingly used by the software industry to develop systems on time and within budget with improved software quality and user acceptance. Behaviour-driven development (BDD) is an agile software engineering practice that can vastly improve health software quality. BDD achieves this by prioritising the illustration of software’s behaviour using ubiquitous language, followed by automated acceptance testing to assess if the illustrated behaviour was achieved. This paper presents a review of BDD literature, including the characteristics of BDD, and examines how BDD can benefit health software quality. The paper reviews health software standards and guidelines to examine their compatibility with a BDD approach. Finally, the paper details future plans for the development of a framework that provides health software companies with a detailed step-by-step guideline on how to use BDD to develop safer health software.
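A minimal illustration of the BDD cycle the abstract describes: a Given-When-Then scenario in ubiquitous language, followed by a hand-rolled acceptance test (real projects would use a framework such as Cucumber or behave; the health-care rule here is invented):

```python
# A scenario written in ubiquitous language, as a BDD team would phrase it.
SCENARIO = """
Given a patient with a recorded allergy to penicillin
When a clinician prescribes penicillin
Then the system raises an allergy alert
"""

def prescribe(patient, drug):
    # Invented system-under-test: flag prescriptions that hit an allergy.
    if drug in patient["allergies"]:
        return {"status": "alert", "reason": f"allergy to {drug}"}
    return {"status": "ok"}

# Automated acceptance test mirroring the scenario above.
patient = {"allergies": {"penicillin"}}
result = prescribe(patient, "penicillin")
assert result["status"] == "alert"
print("scenario passed:", SCENARIO.strip().splitlines()[0])
```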

Paper Nr: 12
Title:

Towards a Model-based Fuzzy Software Quality Metrics

Authors:

Omar Masmali and Omar Badreddin

Abstract: Code smells and technical debt are two common notions that are often referred to for quantifying codebase quality. Quality metrics based on such notions often rely on rigid thresholds and are insensitive to a project's unique context, such as development technologies, team size, and the desired code qualities. This challenge often manifests itself in inadequate quantification of code qualities and potentially numerous false-positive cases. This paper presents a novel approach that formulates code quality metrics with thresholds derived from software design models. This method results in metrics that, instead of adopting rigid thresholds, formulate unique and evolving thresholds specific to each code module. This paper presents the novel methodology and introduces some novel code quality formulas. To evaluate the proposed formulas, we evaluate them against an open-source codebase developed by experienced software engineers. The results suggest that the proposed methodology results in code quality quantification that provides a more adequate characterization.
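The key idea, thresholds derived from the design model rather than fixed constants, can be sketched as follows (the derivation formula and module attributes are our own invention, not the paper's):

```python
def method_length_threshold(module):
    # Hypothetical derivation: modules with more modeled operations are
    # allowed longer methods before a "long method" smell is flagged.
    base = 20
    return base + 5 * module["modeled_operations"]

def smells(module):
    t = method_length_threshold(module)
    return [name for name, length in module["method_lengths"].items() if length > t]

util = {"modeled_operations": 1, "method_lengths": {"parse": 40, "fmt": 10}}
core = {"modeled_operations": 6, "method_lengths": {"plan": 40}}
print(smells(util))  # ['parse']  (threshold 25)
print(smells(core))  # []         (threshold 50: same length, no smell)
```

The same 40-line method is a smell in one module and acceptable in another, which is exactly the context sensitivity a rigid threshold cannot express.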

Paper Nr: 64
Title:

Automatic Verification of Behavior of UML Requirements Specifications using Model Checking

Authors:

Saeko Matsuura, Sae Ikeda and Kasumi Yokotae

Abstract: With the development of information and communication technology (ICT), services have often been provided through a collection of systems of various architectures interoperating with each other. System development must incorporate non-functional requirements in addition to traditional functional requirements. However, to determine the requirements of multiple cooperative systems, it is necessary a) to consider hardware architecture, user characteristics, and system safety requirements and b) to verify these at an early stage of development. UML is a well-known general-purpose modeling language through which it is possible to define functional requirements and to support design and implementation efforts based on a specified use case model. However, it is difficult to verify such inter-system cooperation using use case models in UML. Moreover, confirming the correct, concurrently exhibited behaviors of a system of multiple interoperating systems is difficult using the static models found in UML. This study proposes a method of transforming a model of mutually cooperating multiple systems described in UML into a model for the model-checking tool UPPAAL and verifying whether parallel behaviors can occur without deadlock. Consequently, a method, applicable at an early stage of development, of guaranteeing the correctness of the concurrent operation and cooperation of multiple systems is demonstrated.
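The property being checked, deadlock freedom of concurrently interoperating systems, can be illustrated with a tiny explicit-state search (UPPAAL works on timed automata with far more sophisticated algorithms; the lock-ordering example below is invented):

```python
def find_deadlocks(initial, graph):
    # Exhaustively explore the state space; a deadlock is a reachable
    # state with no outgoing transition.
    seen, stack, deadlocks = set(), [initial], []
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        successors = graph.get(s, [])
        if not successors:
            deadlocks.append(s)
        stack.extend(successors)
    return sorted(deadlocks)

# Two processes, each acquiring the other's lock second: state "AB" means
# process 1 holds lock A while process 2 holds lock B.
graph = {
    "start": ["A", "B"],
    "A": ["AB", "Adone"],
    "B": ["AB", "Bdone"],
    "AB": [],               # each process waits for the other's lock
    "Adone": ["start"],
    "Bdone": ["start"],
}
print(find_deadlocks("start", graph))  # ['AB']
```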

Paper Nr: 68
Title:

Defining Controlled Experiments Inside the Access Control Environment

Authors:

Said Daoudagh and Eda Marchetti

Abstract: In ICT systems and modern applications, access control systems are important mechanisms for managing resources and data access. Their criticality requires high security levels and, consequently, the application of effective and efficient testing approaches. In this paper we propose standardized guidelines for correctly and systematically performing the testing process, in order to avoid errors and improve the effectiveness of the validation. We focus in particular on Controlled Experiments, and we provide a characterization of the first three steps of the experiment process (i.e., Scoping, Planning and Operation) through the adoption of the Goal-Question-Metric template. The specialization of the three phases is illustrated through a concrete example.

Area 2 - Methodologies, Processes and Platforms

Full Papers
Paper Nr: 5
Title:

A Model based Toolchain for the Cosimulation of Cyber-physical Systems with FMI

Authors:

David Oudart, Jérôme Cantenot, Frédéric Boulanger and Sophie Chabridon

Abstract: Smart Grids are cyber-physical systems that interface power grids with information and communication technologies in order to monitor them, automate decision making and balance production and consumption. Cosimulation with the Functional Mock-up Interface standard allows the exploration of the behavior of such complex systems by coordinating simulation units that correspond to the grid part, the communication network and the information system. However, FMI has limitations when it comes to cyber-physical system simulation, particularly because discrete-event signals exchanged by cyber components are not well supported. In addition, industrial projects involve several teams with different skills and methods that work in parallel to produce all the models required by the simulation, which increases the risk of inconsistency between models. This article presents a way to exchange discrete-event signals between FMI artifacts, which complies with the current 2.0 version of the standard. We developed a DSL and a model-based toolchain to generate the artifacts that are necessary to run the cosimulation of the whole system, and to detect potential inconsistencies between models. The approach is illustrated by the use case of an islanded grid implementing diesel and renewable sources, battery storage and intelligent control of the production.

Paper Nr: 43
Title:

Early Synthesis of Timing Models in AUTOSAR-based Automotive Embedded Software Systems

Authors:

Padma Iyenghar, Lars Huning and Elke Pulvermueller

Abstract: Development of AUTOSAR-based embedded systems in Unified Modeling Language (UML) tools is an emerging state-of-the-art practice in the automotive industry. In the case of such automotive systems with strict timing requirements (e.g. advanced driver assistance systems), not only the correctness of the computation results but also their timeliness is important. This necessitates comprehensive and early verification and validation procedures to ensure the desired software quality without overshooting the budget. The main input required for such timing analysis, in specialized timing analysis tools, is the AUTOSAR timing model corresponding to the timing-annotated AUTOSAR design model. Thus, the synthesis of such a timing analysis model from an AUTOSAR-based design model (developed in UML tools), early in the development stages, constitutes an important step and addresses a research gap. In this direction, this paper presents a systematic approach for the extraction and synthesis of timing analysis models from AUTOSAR-based embedded system design models developed in UML tools. A prototype of model transformations for the synthesis of timing models and its evaluation in an automotive use case are presented.

Short Papers
Paper Nr: 23
Title:

Integer Overflow Detection in Hardware Designs at the Specification Level

Authors:

Fritjof Bornebusch, Christoph Lüth, Robert Wille and Rolf Drechsler

Abstract: In this work, we present a hardware design approach that allows the detection of integer overflows by describing finite integer types at the specification level. This contrasts with the established design flow, which uses infinite integer types at the specification level, causing a semantic gap between these infinite types and the finite integer types used at the model level. The proposed design approach uses dependent types in combination with proof assistants. This combination allows reasoning about the behavior of finite integer types, which is used to detect integer overflows at the specification level. To achieve this, we utilize the CompCert integer library, which describes finite data types as dependent types.
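The paper works with dependent types in a proof assistant (via the CompCert integer library); as a loose executable analogy only, a finite 8-bit integer type can surface overflows instead of silently wrapping:

```python
class Int8:
    # A finite integer type: every construction re-checks the 8-bit range,
    # so an overflowing operation fails loudly rather than wrapping.
    MIN, MAX = -128, 127

    def __init__(self, v):
        if not (Int8.MIN <= v <= Int8.MAX):
            raise OverflowError(f"{v} does not fit in 8 bits")
        self.v = v

    def __add__(self, other):
        return Int8(self.v + other.v)  # range check happens in __init__

print((Int8(100) + Int8(27)).v)        # 127, still in range
try:
    Int8(100) + Int8(28)               # 128 overflows the 8-bit range
except OverflowError as e:
    print("overflow detected:", e)
```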

Paper Nr: 32
Title:

Improving Multi-domain Stakeholder Communication of Embedded Safety-critical Development using Agile Practices: Expert Review

Authors:

Surafel Demissie, Frank Keenan, Róisín Loughran and Fergal McCaffery

Abstract: The development of embedded safety-critical software differs from ordinary software development in that it needs to be coordinated with the hardware development. A typical embedded system project involves multi-domain experts such as business units, software developers, hardware engineers and firmware developers. Agile methods have been successfully adopted in software engineering in general, and more recently in embedded safety-critical development. A previous systematic literature review (SLR), conducted as part of this research, reported that one of the challenges of embedded safety-critical software development is multi-domain stakeholder communication. It also investigated suitable agile practices that have been used in embedded safety-critical domains. This earlier work proposed a process using a combination of suitable agile practices to support multi-domain stakeholder communication. In order to validate this proposed process, an expert review has been conducted. This paper outlines the proposed process and the findings of the expert validation.

Paper Nr: 40
Title:

The Seamless Low-cost Development Platform LoRra for Model based Systems Engineering

Authors:

Sven Jacobitz and Xiaobo Liu-Henke

Abstract: This paper presents a seamless low-cost Rapid Control Prototyping (RCP) development platform, LoRra for short, based on the open source software Scilab / Xcos. The model-based, verification-oriented RCP development process is introduced to master the increasing system complexity in ever-shortening development cycles. Within this process, Model-in-the-Loop (MiL), Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) simulations are performed for testing and optimization. Based on requirements derived from the process, the concept of the LoRra platform is developed first. It contains model libraries, a code generator, a real-time interface, real-time hardware and a human-machine interface for measurement and calibration tasks. Subsequently, the design of each component is discussed. Finally, a first validation and optimization of the platform is carried out using state-of-charge estimation for lithium-ion batteries.

Paper Nr: 51
Title:

Business Process Model Recommendation as a Transformation Process in MDE: Conceptualization and First Experiments

Authors:

Hadjer Khider, Slimane Hammoudi and Abdelkrim Meziane

Abstract: Business Process (BP) model repositories have been proposed to store models of BPs and make them available to their stakeholders for future reuse. One of the challenges facing users of such repositories concerns the retrieval of models that suit their business needs in a given situation, which is not provided by current repositories. In order to overcome this lack, one important issue to investigate is the recommendation of BP models based on the user profile, as the most important way to better meet the user's business needs and thereby promote BP model reusability. In this paper, we propose a conceptual framework for BP model recommendation based on the user's social profile, implemented as a transformation process in model-driven engineering (MDE). In our experiments, the LinkedIn social network is used to extract the users' business interests. These business interests are then used to recommend the appropriate BP models that could fit the user's needs. Our proposed framework is based on model-driven architecture (OMG's MDA approach), where techniques of models, metamodels, transformation and weaving are used to implement a generic recommendation process.

Paper Nr: 55
Title:

Model Transformation by Example with Statistical Machine Translation

Authors:

Karima Berramla, El Abbassia Deba, Jiechen Wu, Houari Sahraoui and Abou H. Benyamina

Abstract: In the last decade, Model-Driven Engineering (MDE) has experienced rapid growth in the software development community. In this context, model transformation occupies an important place, automating the transitions between development steps during application production. Implementing this transformation process requires mastering languages and tools, but more importantly the semantic equivalence between the involved input and output metamodels. This knowledge is in general difficult to acquire, which makes transformation writing complex, time-consuming, and error-prone. In this paper, we propose a new model-transformation-by-example approach to simplify model transformations, using Statistical Machine Translation (SMT). Our approach exploits the power of SMT by converting models into natural-language texts and by processing them using translation models trained with the IBM Model 1 algorithm.
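The statistical core the approach builds on can be demonstrated with a few EM iterations of IBM Model 1 on a toy parallel corpus (the corpus and token choices are invented for illustration):

```python
from collections import defaultdict

# Toy "parallel corpus": pairs of source-side and target-side token lists.
corpus = [(["class", "Person"], ["Klasse", "Person"]),
          (["class", "Car"],    ["Klasse", "Auto"])]

targets = {t for _, ts in corpus for t in ts}
t_prob = defaultdict(lambda: 1.0 / len(targets))   # t(f|e), uniform init

for _ in range(20):                                # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for es, fs in corpus:                          # E-step: expected counts
        for f in fs:
            z = sum(t_prob[(f, e)] for e in es)    # normalise over alignments
            for e in es:
                c = t_prob[(f, e)] / z
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():                # M-step: re-estimate t(f|e)
        t_prob[(f, e)] = c / total[e]

# Co-occurrence evidence pushes t(Klasse|class) towards 1.0.
print("t(Klasse|class) =", round(t_prob[("Klasse", "class")], 3))
```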

Paper Nr: 57
Title:

High-level Partitioning and Design Space Exploration for Cyber Physical Systems

Authors:

Daniela Genius, Ilias Bournias, Ludovic Apvrille and Roselyne Chotin

Abstract: Virtual prototyping and co-simulation of mixed analog/digital embedded systems have emerged as a promising research topic, but usually assume an already hardware/software-partitioned system. The paper presents a new approach for the high-level partitioning of such mixed systems, expressing the structure and the behaviour of the analog parts with SysML diagrams. A tool that was already able to handle some aspects of analog design after partitioning has been extended to handle partitioning itself, thus completing the methodology. As a real-world case study, we show the design of the hardware part of a medical application.

Paper Nr: 58
Title:

Determination of ISO 22400 Key Performance Indicators using Simulation Models: The Concept and Methodology

Authors:

Mateusz Kikolski

Abstract: The study focuses on developing an approach to determining production key performance indicators (KPIs). Different types of KPIs are defined and their distribution is determined. The article deals with the problem of how to determine indicators. A review of KPIs and ISO 22400 was carried out, and the author's own methodology for the simulation-based determination of indicators is proposed. The conducted case studies were prepared on the basis of sample processes in order to illustrate the mechanism of the author's methodology. The research used one of the available systems for designing and optimizing virtual models of production processes and showed the possibilities of its use in the analysis of production processes.
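As a worked example of the KPI family involved, the ISO 22400-2 formulas for availability, effectiveness and quality ratio combine into OEE (the figures below are made up):

```python
planned_busy_time = 480          # PBT: minutes planned for production
actual_production_time = 432     # APT: PBT minus downtime
produced_quantity = 900          # PQ: units produced
good_quantity = 855              # GQ: units without defects
planned_runtime_per_item = 0.45  # PRI: minutes per unit at nominal speed

availability = actual_production_time / planned_busy_time
effectiveness = planned_runtime_per_item * produced_quantity / actual_production_time
quality_ratio = good_quantity / produced_quantity
oee = availability * effectiveness * quality_ratio

print(f"availability  = {availability:.3f}")   # 0.900
print(f"effectiveness = {effectiveness:.3f}")  # 0.938
print(f"quality ratio = {quality_ratio:.3f}")  # 0.950
print(f"OEE           = {oee:.3f}")            # 0.802
```

A simulation model supplies exactly these inputs (downtimes, cycle times, scrap counts) for processes that do not yet exist physically.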

Area 3 - Modeling Languages, Tools and Architectures

Full Papers
Paper Nr: 1
Title:

Resilient BPMN: Robust Process Modeling in Unreliable Communication Environments

Authors:

Frank Nordemann, Ralf Tönjes and Elke Pulvermüller

Abstract: Process modeling languages help to define and execute processes and workflows. The Business Process Model and Notation (BPMN) 2.0 is used for business processes in commercial areas such as banks, shops, production and the supply industry. Due to its flexible notation, BPMN is increasingly being used in non-traditional business process domains like the Internet of Things (IoT) and agriculture. However, BPMN does not fit well to scenarios taking place in environments with limited, delayed, intermittent or broken connectivity. In BPMN, communication is simply assumed to work: characteristics of message transfers, their priorities and connectivity parameters are not part of the model. No backup mechanism for communication issues exists, resulting in error-prone and failing processes. This paper introduces resilient BPMN (rBPMN), a valid BPMN extension for process modeling in unreliable communication environments. The meta-model addition of opportunistic message flows with Quality of Service (QoS) parameters and connectivity characteristics makes it possible to verify and enhance process robustness at design time. Modeling of explicit or implicit, decision-based alternatives ensures optimal process operation even when connectivity issues occur. In case of no connectivity, locally moved functionality guarantees stable process operation. An evaluation using an agricultural slurry application showed significant robustness enhancements and prevented process failures due to communication issues.
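The opportunistic-message-flow idea can be illustrated with a toy fallback selection (flow names and QoS thresholds are invented; rBPMN expresses this declaratively in the process model rather than in code):

```python
# Alternatives for one message flow, ordered by preference; the last
# entry is locally moved functionality that needs no connectivity.
flows = [
    {"name": "cloud_upload",  "min_bandwidth_kbps": 500},
    {"name": "sms_summary",   "min_bandwidth_kbps": 10},
    {"name": "store_locally", "min_bandwidth_kbps": 0},
]

def choose_flow(bandwidth_kbps):
    # Pick the most preferred alternative whose QoS requirement is met.
    for flow in flows:
        if bandwidth_kbps >= flow["min_bandwidth_kbps"]:
            return flow["name"]

print(choose_flow(1000))  # cloud_upload
print(choose_flow(50))    # sms_summary
print(choose_flow(0))     # store_locally
```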

Paper Nr: 6
Title:

Themulus: A Timed Contract-calculus

Authors:

Alberto A. García, María-Emilia Cambronero, Christian Colombo, Luis Llana and Gordon J. Pace

Abstract: Over the past years, formal reasoning about contracts between parties has been increasingly explored in the literature. There has been a shift from viewing contracts simply as properties to be satisfied by the parties, to viewing them as first-class syntactic objects that can be reasoned about independently of the parties’ behaviour. In this paper, we present a real-time deontic contract calculus, Themulus, to reason about contracts, abstracting the parties’ behaviour through the use of a simulation relation. In doing so, we can compare real-time deontic contracts in terms of their strictness over permissions, prohibitions and obligations.

Paper Nr: 7
Title:

Towards Model Transformation from a CBM Model to CEP Rules to Support Predictive Maintenance

Authors:

Alexandre Sarazin, Sébastien Truptil, Aurélie Montarnal, Jacques Lamothe, Julien Commanay and Laurent Sagaspe

Abstract: Over the past decades, the development of predictive maintenance strategies, like Prognostics and Health Management (PHM), has brought new opportunities to the maintenance domain. However, implementing such systems raises several challenges. First, all information related to the system description and failure definition must be collected and processed. In this regard, using an expert system (ES) seems promising. The second challenge, when monitoring complex systems, is to deal with the high volume and velocity of the input data. To reduce them, Complex Event Processing (CEP) can be used to identify relevant events based on predefined rules. These rules can be extracted from the ES knowledge base using model transformation. This process consists in transforming concepts from a source model into a target model using transformation rules. In this paper, we propose to transform part of the knowledge of a condition-based maintenance (CBM) model into CEP rules. After further explaining the motivations behind this work and defining the principles behind model-driven architecture and model transformation, the transformation from a CBM model to a “generic rules” model will be proposed. This model will then be transformed into an Event Processing Language (EPL) model. Examples will be given as illustrations of each transformation.

Paper Nr: 16
Title:

Towards Abstract Test Execution in Early Stages of Model-driven Software Development

Authors:

Noël Hagemann, Reinhard Pröll and Bernhard Bauer

Abstract: Over the last decades, the inherent complexity of systems has significantly increased. In order to cope with the emerging challenges during the development of such systems, modeling approaches have become an indispensable part. While many process steps are applicable at the model level, there are no sufficient realizations for test execution yet. We therefore present a semi-formal approach enabling developers to perform abstract test execution directly on the modeled artifacts, supporting the overarching objective of a shift left of verification and validation tasks. Our concept challenges an abstract test case (derived from a test model) against a system model utilizing an integrated set of domain-specific models, i.e. the omni model. Driven by an optimistic dataflow analysis based on a combined view of an abstract test case and its triggered system behavior, possible test verdicts are assigned. Based on a prototypical implementation of the concept, the proof of concept is demonstrated and put into the context of related research.

Paper Nr: 17
Title:

A Methodological Assistant for Use Case Diagrams

Authors:

Erika R. Aquino, Pierre de Saqui-Sannes and Rob A. Vingerhoeds

Abstract: Use case driven analysis is the cornerstone of software and systems modeling in UML and SysML, respectively. Although many books and tutorials have discussed the use of use case diagrams, students and industry practitioners regularly face methodological problems in writing good use cases. This paper defines a methodological assistant that helps design use case diagrams, relying on formalized rules and the reuse of previous diagrams. The methodological assistant is implemented in Python. It is interfaced with the free SysML software TTool, and with Cameo Systems Modeler.

Paper Nr: 34
Title:

A Technique for Automata-based Verification with Residual Reasoning

Authors:

Shaun Azzopardi, Christian Colombo and Gordon Pace

Abstract: Analysing programs at a high level of abstraction reduces the effort required for verification, but may abstract away details required for full verification of a specification. Working at a lower level, e.g. through model checking or runtime verification of program code, can avoid this problem of abstraction, at the expense of much larger resource requirements. To reduce the resources required by verification, analysis techniques at decreasing levels of abstraction can be combined in a complementary manner through partial verification or residual analysis, where any useful partial information discovered at a high level is used to reduce the verification problem, leaving an easier residual problem for lower-level analyses. Our contribution in this paper is a technology-agnostic symbolic-automata-based framework to project verification effort onto different verification stages. Properties and programs are both represented as symbolic automata, with an event-based view of verification. We give correctness conditions for residual analysis based on equivalence with respect to verification of the original problem. Furthermore, we present an intraprocedural residual analysis to identify parts of the property respected by the program, and parts of the program that cannot violate the property.

Paper Nr: 65
Title:

Verifying OCL Operational Contracts via SMT-based Synthesising

Authors:

Hao Wu and Joseph Timoney

Abstract: The set of operational contracts written in the Object Constraint Language can be used to describe the behaviour of a system. These contracts are specified as pre/post conditions to constrain inputs and outputs of operation calls defined in a UML class diagram. Hence, a sequence of operation calls conforming to pre/postconditions is crucial to analyse, verify and understand the behaviour of a system. In this paper, we present a new technique for synthesising property-based call sequences from a set of operational contracts. This technique works by reducing a synthesis problem to a satisfiability modulo theories (SMT) problem. We distinguish our technique from existing approaches by introducing a novel encoding that supports high levels of expressiveness, flexibility and performance. This encoding not only allows us to synthesise call sequences at a much larger scale but also maintains high performance. The evaluation results show that our technique is effective and scales reasonably well.
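The paper encodes this search as an SMT problem; purely to illustrate what is being synthesised, here is a naive bounded enumeration of call sequences whose pre/postconditions all hold over a toy account model (the contracts are invented):

```python
from itertools import product

# Each operation: (precondition, postcondition-as-state-transformer).
ops = {
    "open":     (lambda s: not s["open"],
                 lambda s: {**s, "open": True}),
    "deposit":  (lambda s: s["open"],
                 lambda s: {**s, "balance": s["balance"] + 10}),
    "withdraw": (lambda s: s["open"] and s["balance"] >= 10,
                 lambda s: {**s, "balance": s["balance"] - 10}),
}

def synthesise(goal, init, max_len=4):
    # Bounded search: try every sequence up to max_len whose
    # preconditions hold along the way and whose end state meets goal.
    for n in range(1, max_len + 1):
        for seq in product(ops, repeat=n):
            state, ok = init, True
            for name in seq:
                pre, post = ops[name]
                if not pre(state):
                    ok = False
                    break
                state = post(state)
            if ok and goal(state):
                return list(seq)
    return None

print(synthesise(lambda s: s["balance"] == 20, {"open": False, "balance": 0}))
# ['open', 'deposit', 'deposit']
```

An SMT encoding replaces this exponential enumeration with a single constraint-solving query, which is what lets the paper's technique scale.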

Short Papers
Paper Nr: 9
Title:

Chaining Model Transformations for System Model Verification: Application to Verify Capella Model with Simulink

Authors:

Christophe Duhil, Jean-Philippe Babau, Eric Lepicier, Jean-Luc Voirin and Juan Navas

Abstract: In the context of Model-Based System Engineering (MBSE), Thales has developed a method called Arcadia and its dedicated workbench Capella. This approach provides engineers with generic practices and tools to design system models in a coherent way. As models grew in complexity, the need emerged for model simulation and verification. In this paper, a model-based approach is proposed to provide an interpretation of the Capella dynamic behavior description of modeled systems. The approach allows targeting different semantics and facilitates the reuse of legacy semantics. The idea is to enforce separation of concerns in semantics definition by defining a chain of five transformations. The approach ensures traceability between Capella source models and target models, facilitating the interpretation of the verification results. We apply our approach to analyze dataflow diagrams of a Capella "clock radio" model. For this purpose we transform the Capella dataflow model into a Simulink model. The experimentation on the use case demonstrates the ability of the tool to catch model inconsistency problems.

Paper Nr: 10
Title:

Model-to-Model Transformations for Efficient Time-domain Verification of Concurrent Models by NuSMV Modules

Authors:

Miguel Carrillo, Vladimir Estivill-Castro and David A. Rosenblueth

Abstract: We introduce and describe an algorithmic transformation from the formalism of arrangements of logic-labelled finite-state machines (LLFSMs) into NuSMV modules (and its implementation as a model-to-model ATL transformation from an Ecore meta-model to the NuSMV language). Our transformation benefits from using modules and integers of NuSMV to improve the efficiency in the construction and verification of the model. Moreover, we can handle predicates about time. Thus, we enable verification of LLFSMs in the time domain. Our transformation is a considerable improvement in efficiency. Compared with earlier transformation algorithms developed by us, the one presented here produces concise NuSMV files (in an example, 130,295 lines were reduced to 418). We thus show that it is possible to automatically translate arrangements of LLFSMs to concise models that can be efficiently and formally verified.

Paper Nr: 11
Title:

Domain-specific Language and Tools for Strategic Domain-driven Design, Context Mapping and Bounded Context Modeling

Authors:

Stefan Kapferer and Olaf Zimmermann

Abstract: Service-oriented architectures and microservices have gained much attention in recent years; companies adopt these concepts and supporting technologies in order to increase agility, scalability, and maintainability of their systems. Decomposing an application into multiple independently deployable, appropriately sized services and then integrating such services is challenging. With strategic patterns such as Bounded Context and Context Map, Domain-driven Design (DDD) can support business analysts, (enterprise) architects, and microservice adopters. However, existing architecture description languages do not support the strategic DDD patterns sufficiently; modeling tools for DDD primarily focus on its tactical patterns. As a consequence, different opinions on how to apply strategic DDD exist, and it is not clear how to combine its patterns. Aiming for a clear and concise interpretation of the patterns and their combinations, this paper distills a meta-model of selected strategic DDD patterns from the literature. It then introduces Context Mapper, an open source project that a) defines a Domain-specific Language (DSL) expressing the strategic DDD patterns and b) provides editing, validation, and transformation tools for this DSL. As a machine-readable description of DDD, the DSL provides a modeling foundation for (micro-)service design and integration. The models can be refactored and transformed within an envisioned tool chain supporting the continuous specification and evolution of Context Maps. Our validation activities (prototyping, action research, and case studies) suggest that the DDD pattern clarification in our meta-model and the Context Mapper tool indeed can benefit the target audience.

Paper Nr: 15
Title:

Real Models are Really on M0 - Or How to Make Programmers Use Modeling

Authors:

Joachim Fischer, Birger Møller-Pedersen and Andreas Prinz

Abstract: This paper discusses the term 'model' and the role of the level M0 in the four-layer metamodeling architecture of MOF/OMG. It illustrates the failures of the OMG MOF standard and how a model is an abstraction, not a description. We apply two simple approaches: (1) observing the use of models (of real or planned systems) in system development, including prototyping, simulations, and models in general, and (2) comparing modeling with programming. These approaches lead to the conclusion that models should be placed on M0, while UML models are model descriptions. This conclusion leads to a better understanding of InstanceSpecification for the description of snapshots, and of metamodeling applied to ontologies.

Paper Nr: 18
Title:

Aocl: A Pure-Java Constraint and Transformation Language for MDE

Authors:

Don Batory and Najd Altoyan

Abstract: OCL is a standard MDE language to express constraints. OCL has been criticized for being too complicated, over-engineered, and difficult to learn. But beneath OCL’s complicated exterior is an elegant language based on relational algebra. We call this language Aocl; it has a straightforward implementation in Java. Aocl can be used to write OCL-like constraints and model transformations in Java. A simple MDE tool generates an Aocl Java 8 package from an input class diagram, ready for Aocl to be used.

Paper Nr: 19
Title:

A DSL-Driven Development Framework for Components to Provide Environmental Data in Simulation based Testing

Authors:

Liqun Wu and Axel Hahn

Abstract: Developing components that produce data representing simulated environments for spatial-aware simulations can be difficult and error-prone. Knowledge of the required outputs of these components and of the computational models of the environmental phenomena is often held by different roles in the development. Miscommunication may arise among the involved roles due to their different perspectives on environmental phenomena. Consequently, the requirements of simulated environments in simulation scenarios may not be correctly preserved in the developed components. This paper presents a domain-specific development framework to overcome this problem. It focuses on bridging the gap between human-view requirement descriptions of simulated environments and system-view component design models that produce digital representations of these environments. It specifies a CIM (Computation-Independent Model)-layer language which supports system-of-interest modelers in documenting the required context of simulated environments in their simulation scenarios in a semi-formal manner. Transformation rules from these CIMs are established to derive the necessary data structures and computation flows as PIM (Platform-Independent Model)-layer models of simulated environment components. These transformations are further combined with general Model-Driven Development (MDD) solutions to create platform-specific component skeletons.

Paper Nr: 20
Title:

A UML Profile for Automatic Code Generation of Optimistic Graceful Degradation Features at the Application Level

Authors:

Lars Huning, Padma Iyenghar and Elke Pulvermueller

Abstract: Safety standards such as ISO 26262 or IEC 61508 recommend a variety of safety mechanisms for the development of safety-critical systems. One of these mechanisms is graceful degradation, which aims to provide a degraded service of an application after an error has occurred. While several safety standards recommend graceful degradation, they do not provide any concrete development or implementation assistance. This paper employs model-driven development to realize such an automated approach for optimistic graceful degradation, which is a specific variant of the graceful degradation safety mechanism. We introduce a UML profile that may be used to model optimistic graceful degradation at the application level within a UML class diagram. We leverage this model representation to automatically generate productive source code that is capable of optimistic graceful degradation. This source code is generated without requiring any additional developer actions.

Paper Nr: 26
Title:

Correctness of an ATL Model Transformation from SysML State Machine Diagrams to Promela

Authors:

Georgiana Caltais, Stefan Leue and Hargurbir Singh

Abstract: In this paper we discuss the correctness of an ATL-based model transformation from the systems engineering modelling language SysML into Promela, the input language of the SPIN model checker. More precisely, we reduce the correctness of the transformation to a notion we refer to as observational equivalence between the SysML models and the generated Promela models. This paves the way to a proof technique that could be further exploited in order to argue the correctness of model transformations from SysML to various model checkers, based on the observable actions generated by the systems under analysis.

Paper Nr: 28
Title:

Impact of Security Measures on Performance Aspects in SysML Models

Authors:

Maysam Zoor, Ludovic Apvrille and Renaud Pacalet

Abstract: Because embedded systems are now frequently connected, security must be taken into consideration during system modeling. However, adding security features can degrade performance. In this paper, the trade-off between security and performance is tackled with a new model-based method that can automatically assess the impact of security measures on performance. The contribution is illustrated with an industrial motor control taken from the H2020 AQUAS project.

Paper Nr: 29
Title:

A Generic Projectional Editor for EMF Models

Authors:

Johannes Schröpfer, Thomas Buchmann and Bernhard Westfechtel

Abstract: The Eclipse Modeling Framework (EMF) constitutes a popular ecosystem for model-driven development. In the technological space of EMF, a wide variety of model-based tools have been developed, including tools for transforming and editing models. Model editors may display models in different representations such as diagrams, trees, or tables. Due to the increasing popularity of human-readable textual syntax, there is a growing demand for textual model editors. In EMF, this demand is currently satisfied by syntax-based editors which persist models as text files. In contrast, we propose a projectional editor that persists models natively as EMF models; the textual representation constitutes a projection of the underlying EMF model. Projectional editing not only excludes syntactic errors; persistently maintaining the underlying model also facilitates tool integration. The projectional editor is generic; it may be instantiated for different modeling languages by declarative definitions of their concrete syntax. So far, model editors for subsets of Java and ALF (Action Language for Foundational UML) have been built to demonstrate the feasibility of the generic approach.

Paper Nr: 30
Title:

Multi-level Modeling without Classical Modeling Facilities

Authors:

Ferenc A. Somogyi, Zoltán Theisz, Sándor Bácsi, Gergely Mezei and Dániel Palatinszky

Abstract: Multi-level modeling is a modeling paradigm that has become increasingly popular in recent years. Its ultimate goal is to reduce the accidental complexity that may arise from modeling methodologies that are not suitable for describing multi-level models. However, most current multi-level modeling approaches depend on classical modeling facilities, such as OMG’s four-level modeling architecture. The potency notion is one of the most widely used enablers of multi-level modeling; it governs the depth at which model elements can be instantiated. In this paper, we propose an alternative implementation of the first incarnation of the potency notion, the so-called classic potency notion. Its multi-level nature is mapped into our multi-layer modeling framework, which liberates it from direct dependence on classical modeling facilities. We examine the proposed mapping in detail and also demonstrate it via a simple case study. This work is intended as a first step towards a generic multi-level modeling framework in which researchers can freely experiment with novel multi-level modeling ideas and can share and compare their results without worrying about any sort of tool dependence.

Paper Nr: 31
Title:

Operator-based Viewpoint Definition

Authors:

Johannes Meier, Ruthbetha Kateule and Andreas Winter

Abstract: With the increase in size, complexity and heterogeneity of software-intensive systems, it becomes harder for single persons to manage such systems in their entirety. Instead, various parts of systems are managed by different stakeholders based on their specific concerns. While the concerns are realized by viewpoints, the conforming views contain projected parts of the system under development. After analyzing existing techniques to develop new viewpoints and views on top of existing systems, this paper presents an approach that defines new viewpoints and views in a coupled way based on operators. This operator-based approach is used to define new viewpoints and views for an existing Module-based description of architectures.

Paper Nr: 39
Title:

Defining Referential Integrity Constraints in Graph-oriented Datastores

Authors:

Thibaud Masson, Romain Ravet, Francisco B. Ruiz, Souhaila Serbout, Diego S. Ruiz and Anthony Cleve

Abstract: Nowadays, the volume of data manipulated by our information systems is growing so rapidly that it can no longer be efficiently managed and exploited by standard relational data management systems alone. Hence the recent emergence of NoSQL datastores as alternative or complementary choices for big data management. While NoSQL datastores are usually designed with high performance and scalability as primary concerns, this often comes at the cost of tolerating (temporary) data inconsistencies. This is the case, in particular, for managing referential integrity in graph-oriented datastores, for which no support currently exists. This paper presents an MDE-based, tool-supported approach to the definition and enforcement of referential integrity constraints (RICs) in graph-oriented NoSQL datastores. The approach relies on a domain-specific language allowing users to specify RICs as well as the way they must be managed. This specification is then exploited to support the automated identification and correction of RIC violations in a graph-oriented datastore. We illustrate the application of our approach, currently implemented for Neo4J, through a small experiment.

Paper Nr: 50
Title:

Refining Automation System Control with MDE

Authors:

Pascal André and Mohammed A. Tebib

Abstract: Software plays an increasingly important role in control systems such as cyber-physical systems and pervasive computing. Beyond the reliability and performance requirements, the software must continuously evolve and adapt to new needs and constraints from the physical world or from technical support (reconfiguration and maintenance). Model engineering aims to shorten the development cycle by focusing on abstractions and by partially automating code generation. In this article, we explore assistance for the stepwise transition from models to code in order to reduce application development time. The model covers the structural, dynamic and functional aspects of the control system. The target code is that of a system distributed over several devices. To conduct the experiments, the models are written in UML (or SysML) and the programs are deployed on Android and Lego EV3. We report the lessons learnt for future work.

Paper Nr: 52
Title:

Assessment of EMF Model to Text Generation Strategies and Libraries in an Industrial Context

Authors:

Christophe Ponsard, Denis Darquennes, Valery Ramon and Jean-Christophe Deprez

Abstract: Model-Based System Engineering is increasingly adopted. However, it is centred on the notion of model, while industry is still strongly document-based. In order to enable a smooth evolution between those paradigms, the interplay between documents and models needs to be managed; in particular, models can be used to efficiently derive up-to-date documents matching standard templates, from initial requirements to final certification. The purpose of this paper is to review and assess current strategies for managing models in document generators, in order to meet industrial requirements such as document complexity, document/model scalability, quick generation time, and maintainability and evolvability of document templates. After exploring different generation strategies, our work focuses on toolchains based on the Eclipse Modelling Framework and compares a few shortlisted libraries in the light of our requirements, using documents derived from industry cases.

Paper Nr: 54
Title:

Towards Metrics for Analyzing System Architectures Modeled with EAST-ADL

Authors:

Christoph Etzel, Florian Hofhammer and Bernhard Bauer

Abstract: The quality of system and software architectures plays a major role in today’s development to reduce the complexity of systems during design and to ensure maintainability and extensibility. Metrics can provide system architects with information about the quality of system architectures and support them in ensuring the quality characteristics defined in ISO 25010. In this position paper, we look at selected metrics from code and object-oriented design analysis and apply them to the evaluation of system architectures modeled with EAST-ADL. To this end, the metrics and metric collections Lines of Code, Cyclomatic Complexity, Chidamber and Kemerer, and MOOD are examined for their applicability to EAST-ADL models.

Paper Nr: 60
Title:

Towards a Generalized Queuing Network Model for Self-adaptive Software Systems

Authors:

Davide Arcelli

Abstract: A Self-adaptive Software System (SASS) is composed of a managing and a managed subsystem. The former comprises the system’s adaptation logic and controls the latter, which provides the system’s functionalities by perceiving and affecting the environment through its sensors and actuators, respectively. Such control often conforms to a MAPE-K feedback loop, i.e. a Knowledge-based architecture model that divides the adaptation process into four activities, namely Monitor, Analyze, Plan and Execute. Performance modeling notations, analysis methods and tools have been coupled with other kinds of techniques (e.g. control theory, machine learning) for modeling and assessing the performance of managing subsystems, possibly aimed at supporting the identification of more convenient architectural alternatives. The contribution of this paper is a generalized Queuing Network (QN) model for SASSs, in which the managed subsystem is explicitly modelled, thus widening the performance modeling and analysis scope to the whole system. Job classes flowing through the QN represent activities of a global feedback control loop, which is based on the system’s mode profile and implemented by class-switches operating in conformance with proper predefined class-switching and routing probabilities. Results obtained by means of a proof-of-concept addressing a realistic case study show that the generalized QN model can usefully support performance-driven architectural decision-making.

Paper Nr: 61
Title:

On a Metasemantic Protocol for Modeling Language Extension

Authors:

Ed Seidewitz

Abstract: A metaobject protocol is an object-oriented interface that allows a programming language to be efficiently extended by users of that language from within the language itself. A metasemantic protocol is a generalization of that idea, providing a mechanism to allow users of a formally defined modeling language to syntactically and semantically extend that language from within the language. Such an approach is fundamental to the language architecture being developed for the proposed second version of the Systems Modeling Language (SysML). SysML v2 is being effectively defined as an extension to a foundational Kernel Modeling Language (KerML), and then users can define domain-specific languages in the same way as extensions of SysML. This approach is already being worked out in the ongoing pilot implementation of SysML v2, but there is still much to do before the vision of a true metasemantic protocol is fully realized.

Paper Nr: 4
Title:

Automated Synthesis of ATL Transformations from Metamodel Correspondences

Authors:

Kevin Lano and Shichao Fang

Abstract: In this paper we describe techniques for semi-automatically synthesising transformations from metamodel correspondences, in order to accelerate transformation development. We provide a strategy for synthesising complete ATL transformations from correspondences, and evaluate the approach using examples from the ATL zoo.

Paper Nr: 8
Title:

Classifying Unstructured Models into Metamodels using Multi Layer Perceptrons

Authors:

Walmir O. Couto, Emerson C. Morais and Marcos D. Fabro

Abstract: Models and metamodels created using model-based approaches have strict conformance relations. However, there has been an increase in semi-structured or schema-free data formats, such as document-oriented representations, which are often persisted as JSON documents. Despite not having an explicit schema/metamodel, these documents could be categorized to discover their domain and to partially conform to a metamodel. Recent approaches are emerging to extract information or to couple modeling with cognification. However, there is a lack of approaches exploring the classification of semi-structured formats. In this paper, we present a methodology to analyze and classify JSON documents according to existing metamodels. First, we describe how to extract metamodel elements into a Multi-Layer Perceptron (MLP) network to be trained. Then, we translate the JSON documents into the input format of the encoded MLP. We present the step-by-step tasks to classify JSON documents according to existing metamodels extracted from a repository. We have conducted a series of experiments showing that the approach is effective in classifying the documents.

Paper Nr: 21
Title:

An Architecture-independent Data Model for Managing Information Generated by Human-chatbot Interactions

Authors:

Massimiliano Luca, Alberto Montresor, Carlo Caprini and Daniele Miorandi

Abstract: This paper introduces a data model for representing human-chatbot interactions. Although there are many models for representing the usage and behaviour of bots, a service that can store information from any conversational agent, regardless of its architecture, is still missing. With this work, we introduce a general-purpose data model to store both messages and logs. To this end, we raise the level of abstraction of the other analyzed models and focus on the core logic of chatbots: conversations and interactions with the users.

Paper Nr: 25
Title:

CLARVA: Model-based Residual Verification of Java Programs

Authors:

Shaun Azzopardi, Christian Colombo and Gordon Pace

Abstract: Runtime verification (RV) is an established approach that utilises monitors synthesized from a property language (e.g. temporal logics or some form of automata) to observe program behaviour at runtime, determining the program's compliance with the property. An issue with RV is that it introduces runtime overheads, while identifying a violation at runtime may be too late. This can be tackled by introducing light analyses that attempt to prove parts of the property with respect to the program, leaving a residual property that induces a smaller monitoring footprint at runtime and encodes some static guarantees. In this paper we present CLARVA, a tool developed to this end for the RV tool LARVA. CLARVA transforms Java code into an automaton-based model and allows for the incorporation of control-flow analyses that analyse this model against Dynamic Automata with Timers and Events or DATES (the property language used by LARVA), to produce residuals that yield an equivalent judgement at runtime.

Paper Nr: 41
Title:

ArchiMEO: A Standardized Enterprise Ontology based on the ArchiMate Conceptual Model

Authors:

Knut Hinkelmann, Emanuele Laurenzi, Andreas Martin, Devid Montecchiari, Maja Spahic and Barbara Thönssen

Abstract: Many enterprises face the increasing challenge of sharing and exchanging data from multiple heterogeneous sources. Enterprise Ontologies can be used to effectively address this challenge. In this paper, we present an Enterprise Ontology called ArchiMEO, which is based on an ontological representation of the ArchiMate standard for modeling Enterprise Architectures. ArchiMEO has been extended to cover various application domains such as supply risk management, experience management, workplace learning and business process as a service. These extensions have successfully demonstrated that our Enterprise Ontology is beneficial for enterprise application integration purposes.

Paper Nr: 56
Title:

Concept-based Co-migration of Test Cases

Authors:

Ivan Jovanovikj, Enes Yigitbas, Stefan Sauer and Gregor Engels

Abstract: Software testing plays an important role in software migration as it verifies its success. As the creation of test cases is an expensive and time consuming activity, whenever test cases are existing, their reuse should be considered, thus implying their co-migration. During co-migration of test cases, two main challenges have to be addressed: situativity and co-evolution. The first one suggests that when a test migration method is developed, the situational context has to be considered as it influences the quality and the effort regarding the test case migration. The latter suggests that the changes that happen to the system have to be considered and eventually reflected to the test cases. We address these challenges by proposing a solution that applies situational method engineering extended with co-evolution analysis. The development of the test migration method is centered upon the identification of concepts describing the original tests and original system. Furthermore, the impact of the different realization of the system concepts in source and target environments is analyzed as part of the co-evolution analysis. Lastly, based on this information, a selection of suitable test migration strategies is performed.