
Language

DQL has a declarative textual syntax to represent queries. The language structure of DQL is borrowed from the Structured Query Language (SQL), but differs conceptually: DQL has no relational model. All queries in DQL belong to query classes that group expressions with similar semantics. In the following, we introduce the query classes of DQL in a straightforward manner. The necessary steps to conduct a performance analysis using DQL are (i) to obtain knowledge about the structure of a referenced descriptive performance model, (ii) to obtain knowledge about the available performance metrics for specific performance-relevant model entities, and (iii) to execute performance predictions and/or extract the performance metrics of interest.
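
These three steps map directly onto the query classes introduced in the following subsections. As a preview, the condensed workflow below reuses the connector reference and the placeholder entity identifiers ('id1', 'id2') from the examples that follow: first, all entities are listed; second, the available metrics for the selected entities are listed; third, the metrics of interest are computed.

LIST ENTITIES USING connector@'modelLocation';

LIST METRICS (RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService)
USING connector@'modelLocation';

SELECT cpu.utilization, webService.responseTime
FOR RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService
USING connector@'modelLocation';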

Model Structure Queries

A Model Structure Query is used to obtain information on the structure of a performance model. We focus on typical architecture-level performance models that consist of performance-relevant entities to model resources and services. In terms of DQL, a resource is an entity that is demanded by a service to process a given user request, i.e., to process workload. When users execute Model Structure Queries, they (i) obtain a type-mapping of model entities to the concepts of DQL and (ii) obtain information about the available performance metrics for given entities. The type-mapping is necessary to bridge from a specific performance model to the concepts of DQL and to obtain the absolute identifiers of model instances. Once users have identified the model entities of interest, they can proceed to obtain the available performance metrics for these entities. As DQL is designed independently of any specific performance modeling formalism or performance prediction approach, such interpretations of model instances are necessary at run-time, since the language cannot contain this information statically without a significant loss of flexibility.

Example: List all Entities

The following query returns the identifiers of all performance-relevant entities that are part of a performance model. In the USING clause of the query, the user chooses which DQL Connector to use to access a performance model at any kind of location supported by that connector, e.g., a file on a file system or a model instance persisted in a database. The DQL Connector then executes the listing operation and reports all resources and services to the user.

 

LIST ENTITIES USING connector@'modelLocation'; 

 

Example: List all Metrics for given Entities

This example shows the subsequent step of a performance analysis in DQL. Here, the user requests a listing of all performance metrics that are available for specific, performance-relevant entities. The referenced DQL Connector interprets the performance model instance and determines which metrics can be computed by the available performance prediction tools or extracted from performance data repositories.

 

LIST METRICS (RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService) 
USING connector@'modelLocation';

 


Performance Metrics Queries

A Performance Metrics Query is used to control performance predictions and to extract the demanded performance metrics. The declarative language design of DQL simplifies tasks that users otherwise have to carry out manually. Common approaches for performance predictions force users to (i) prepare and calibrate descriptive performance models, (ii) configure model-to-model transformations from a descriptive performance model into a predictive performance model, (iii) start the simulation or solution of the predictive performance model and, finally, (iv) extract the demanded performance metrics manually after the process has completed.

In a Performance Metrics Query, users describe the demanded result, specify the relevant model entities in the descriptive performance model to tailor the transformation process, and finally specify which descriptive performance model to use. All manual tasks are hidden by the components of DQL, and users obtain a tailored result set that contains only the demanded performance metrics. As DQL is designed independently of a specific performance prediction or modeling approach, the structure of Performance Metrics Queries is generic; once users are familiar with DQL, they can employ it across different performance prediction approaches. Developers of performance prediction approaches can provide a DQL Connector, and once such a connector is available, users can employ the corresponding approach without having to learn new syntax or semantics, or having to adapt custom result-processing tools to new result formats (a short sketch after the first SELECT example below illustrates this).

Performance Metrics Queries can be extended to reflect dynamics in descriptive performance models through so-called Degrees-of-Freedom (DoFs). In descriptive performance models, DoFs specify the valid configuration space of model entities, e.g., a resource can model a compute server that may consist of one to four CPUs or an amount of RAM ranging from 1 to 64 GB. Such configuration options typically arise in dynamic computing environments like Cloud Computing. Here, users are typically interested in finding a suitable sizing for such a compute server so that a software system can be deployed onto it without violating constraints like Service-Level Agreements (SLAs) for the system in operation. An SLA can, for instance, bound the response time of a service of the software system that is accessed by users, e.g., ordering a product from a web shop or purchasing stocks.

Example: Computation of Performance Metrics

This example shows how to obtain performance metrics using DQL. As the user has already discovered the available performance-relevant entities and performance metrics, the user can request the computation of performance metrics through the DQL Connector. Here, the utilization of the entity aliased as cpu and the response time of the entity aliased as webService are requested as performance metrics. The DQL Connector hides all modeling formalism-specific tasks and returns the resulting performance metrics in a tailored result set, as requested by the user.

 

SELECT cpu.utilization, webService.responseTime 
FOR RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService
USING connector@'modelLocation';
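
To illustrate the portability described above, the same query could be issued against a different performance prediction approach simply by referencing another DQL Connector; the connector name and model location in the following sketch are hypothetical placeholders.

SELECT cpu.utilization, webService.responseTime
FOR RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService
USING otherConnector@'otherModelLocation';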

 

Example: Constrained Computation of Performance Metrics

This query extends the prior query with a constraint. In DQL, constraints specify a trade-off for the computation of performance metrics. At system run-time, users may need to obtain performance metrics within strict time bounds. Such a constraint might be satisfied at the price of lower accuracy, e.g., by less detailed transformations from a descriptive performance model into a predictive performance model or by using cached results.

 

SELECT cpu.utilization, webService.responseTime 
CONSTRAINED AS "fastResponse"
FOR RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService
USING connector@'modelLocation';

 


Example: Evaluation of Degrees-of-Freedom

This query again extends the prior query for the computation of performance metrics. Here, the parameter space of a Degree-of-Freedom (DoF) is varied. For each variation, one result set of the requested performance metrics is returned to the user. In this case, the number of users accessing the system is the referenced DoF, and the vector <1, 100, 1000> specifies the user counts to be used in the simulation of the performance model instance.

 

SELECT cpu.utilization, webService.responseTime 
EVALUATE DOF
VARYING 'id3' AS userWorkload <1, 100, 1000>
FOR RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService
USING connector@'modelLocation';
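
The same mechanism applies to the compute-server sizing scenario described in the introduction to Performance Metrics Queries above. The following sketch varies the number of CPUs from one to four; the DoF identifier 'id4' and the alias cpuCount are hypothetical placeholders.

SELECT cpu.utilization, webService.responseTime
EVALUATE DOF
VARYING 'id4' AS cpuCount <1, 2, 3, 4>
FOR RESOURCE 'id1' AS cpu, SERVICE 'id2' AS webService
USING connector@'modelLocation';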

Performance Issue Queries

A Performance Issue Query automates the interpretation of performance metrics and of a descriptive performance model to identify issues, e.g., bottlenecks[3]. We are currently working on this query class as a first step towards Goal-oriented Queries. In contrast to Performance Metrics Queries, Performance Issue Queries and the superordinate Goal-oriented Queries do not focus on performance metrics as a result; instead, they enable users to specify What-If questions[4][5], and the results provide insight into optimization problems, reconfiguration scenarios, or systems management challenges.