Providing Model-Extraction-as-a-Service for Architectural Performance Models. J. Walter; S. Eismann; N. Reed; S. Kounev; in Proceedings of the 2017 Symposium on Software Performance (SSP) (2017).
Architectural performance models can be leveraged to explore performance properties of software systems at design time and run time. We see a reluctance in industry to adopt model-based analysis approaches due to the required expertise and modeling effort. Building models from scratch in an editor does not scale for medium- and large-scale systems in an industrial context, and existing open-source performance model extraction approaches imply significant initial effort, which can be challenging for non-expert users. To simplify usage, we provide the extraction of architectural performance models from application monitoring traces as a web service. Model-Extraction-as-a-Service (MEaaS) addresses the usability problem and lowers the initial effort of applying model-based analysis approaches.
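A minimal sketch of how a client might submit monitoring traces to such an extraction service; the endpoint URL, query parameter, file names, and response format are illustrative assumptions, not the actual MEaaS interface.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ExtractionClient {

    public static void main(String[] args) throws Exception {
        // Hypothetical MEaaS endpoint; the real service interface may differ.
        URI endpoint = URI.create("https://meaas.example.org/extract?formalism=PCM");

        // Submit a monitoring trace archive (e.g., application monitoring logs) for extraction.
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/zip")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("monitoring-traces.zip")))
                .build();

        // The response is assumed to carry the extracted architectural performance model.
        HttpResponse<Path> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofFile(Path.of("extracted-model.pcm")));

        System.out.println("Extraction finished with status " + response.statusCode());
    }
}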
CASPA: A Platform for Comparability of Architecture-based Software Performance Engineering Approaches. T. F. Düllmann; R. Heinrich; A. van Hoorn; T. Pitakrat; J. Walter; F. Willnecker; in Proceedings of the 2017 IEEE International Conference on Software Architecture (ICSA 2017) (2017).
Setting up an experimental evaluation for architecture-based Software Performance Engineering (SPE) approaches requires enormous effort, including the selection and installation of representative applications, usage profiles, supporting tools, infrastructures, etc. Quantitative comparisons with related approaches are hardly possible because previous experiments can rarely be repeated by other researchers. This paper presents CASPA, a ready-to-use and extensible evaluation platform that already includes example applications and state-of-the-art SPE components such as monitoring and model extraction. The platform explicitly provides interfaces to replace applications and components with custom(ized) ones and builds on state-of-the-art technologies such as container-based virtualization.
An Expandable Extraction Framework for Architectural Performance Models. J. Walter; C. Stier; H. Koziolek; S. Kounev; in Proceedings of the 3rd International Workshop on Quality-Aware DevOps (QUDOS’17) (2017).
Providing users with Quality of Service (QoS) guarantees and preventing performance problems are challenging tasks for software systems. Architectural performance models can be applied to explore performance properties of a software system at design time and run time. At design time, architectural performance models support reasoning on the effects of design decisions. At run time, they enable automatic reconfigurations by reasoning on the effects of changing user behavior. In this paper, we present a framework for the extraction of architectural performance models from monitoring log files that generalizes over the targeted architectural modeling language. Using the presented framework, creating a performance model extraction tool for a specific modeling formalism requires only the implementation of a key set of object creation routines specific to that formalism. Our framework integrates them with extraction techniques that apply to many architectural performance models, e.g., resource demand estimation techniques. Through a high level of reuse, this significantly lowers the effort of implementing performance model extraction tools. We evaluate our framework by presenting builders for the Descartes Modeling Language (DML) and the Palladio Component Model (PCM). For the extracted models, we compare simulation results with measurements and obtain accurate results.
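A minimal sketch of the kind of formalism-specific builder interface such a framework could expose; the interface and method names are assumptions for illustration, not the published framework API.

/**
 * Illustrative builder interface that a formalism-specific plug-in (e.g., for DML or PCM)
 * might implement. The generic extraction pipeline derives structure and resource demands
 * from monitoring logs and delegates element creation to these routines.
 */
public interface PerformanceModelBuilder {

    /** Create an empty model instance for the target formalism. */
    void initModel(String systemName);

    /** Add a software component discovered in the monitoring traces. */
    void addComponent(String componentId);

    /** Add a call relationship between two components, weighted by observed frequency. */
    void addCall(String callerId, String calleeId, double callProbability);

    /** Annotate an operation with an estimated resource demand (e.g., CPU time in ms). */
    void setResourceDemand(String componentId, String operation, double demandMs);

    /** Serialize the assembled model, e.g., to a DML or PCM file. */
    void save(java.nio.file.Path target);
}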
Mapping of Service Level Objectives to Performance Queries. J. Walter; D. Okanovic; S. Kounev; in Proceedings of the 2017 Workshop on Challenges in Performance Methods for Software Development (WOSP-C’17) co-located with 8th ACM/SPEC International Conference on Performance Engineering (ICPE 2017) (2017).
The concept of service level agreements (SLAs) defines the idea of a reliable contract between service providers and their users. SLAs provide information on the scope, the quality, and the responsibilities of a service and its provider. Service level objectives (SLOs) define the detailed, measurable conditions of the SLAs. After service deployment, SLAs are monitored to detect potentially dangerous situations that could lead to SLA violations. However, the SLA monitoring infrastructure is usually specific to the underlying system infrastructure, lacks generalization, and is often limited to measurement-based approaches. This makes it hard to apply the results from SLA monitoring in other stages of the software life-cycle. In this paper, we propose the mapping of concerns defined in SLAs to performance queries using the Descartes Query Language (DQL). The benefit of our approach is that the same performance query can then be reused to evaluate performance concerns throughout the entire life-cycle, regardless of which evaluation approach is used.
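A hedged sketch of the mapping idea: an SLO threshold on response time is translated into a metric query that different evaluation back ends can answer. The SLO record, the query string, and the method names are illustrative assumptions; the query only approximates the DQL style and does not reproduce the exact DQL grammar.

import java.time.Duration;

public class SloToQueryMapper {

    /** Simple SLO representation: a service, a metric, an aggregate, and a threshold. */
    record Slo(String service, String metric, String aggregate, Duration threshold) {}

    /**
     * Translate an SLO into a performance query string. The string below is an
     * illustration in the spirit of DQL; the exact grammar is defined by the
     * Descartes Query Language specification.
     */
    static String toQuery(Slo slo) {
        return "SELECT " + slo.service() + "." + slo.aggregate() + slo.metric()
                + " FOR SERVICE '" + slo.service() + "'";
    }

    public static void main(String[] args) {
        Slo slo = new Slo("checkout", "ResponseTime", "avg", Duration.ofMillis(500));
        System.out.println(toQuery(slo));
        // The same query can be answered by a monitoring back end at run time or by a
        // model-based solver at design time; the result is then compared against the
        // SLO threshold of 500 ms.
    }
}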
Online Learning of Run-time Models for Performance and Resource Management in Data Centers. J. Walter; A. D. Marco; S. Spinner; P. Inverardi; S. Kounev; in Self-Aware Computing Systems, S. Kounev, J. O. Kephart, A. Milenkoski, X. Zhu (Eds.) (2017).
In this chapter, we explain how to extract and learn run-time models that a system can use for self-aware performance and resource management in data centers. We abstract from concrete formalisms and identify extraction aspects relevant for performance models. We categorize the learning aspects into: (i) model structure, (ii) model parametrization (estimation and calibration of model parameters), and (iii) model adaptation options (change point detection and run-time reconfiguration). The chapter identifies alternative approaches for the respective model aspects. The type and granularity of each aspect depend on the characteristics of the concrete performance model.
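As a worked example of the model parametrization aspect, the sketch below estimates a per-request resource demand from measured utilization and throughput using the well-known service demand law (D = U / X); it illustrates one simple estimation technique, and the class and variable names are assumptions for illustration.

public class ServiceDemandLaw {

    /**
     * Estimate the resource demand per request via the service demand law:
     * demand D = utilization U / throughput X.
     *
     * @param utilization observed resource utilization in [0, 1]
     * @param throughputPerSecond observed request throughput (requests/s)
     * @return estimated demand per request in seconds
     */
    static double estimateDemand(double utilization, double throughputPerSecond) {
        if (throughputPerSecond <= 0.0) {
            throw new IllegalArgumentException("throughput must be positive");
        }
        return utilization / throughputPerSecond;
    }

    public static void main(String[] args) {
        // Example: 60% CPU utilization at 120 requests/s yields 5 ms CPU demand per request.
        double demandSeconds = estimateDemand(0.6, 120.0);
        System.out.printf("Estimated CPU demand: %.1f ms per request%n", demandSeconds * 1000);
    }
}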