Modeling user perceived unavailability due to long response times

HAL Id: hal-01212162

https://hal.archives-ouvertes.fr/hal-01212162

Submitted on 7 Oct 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Modeling user perceived unavailability due to long response times

Magnos Martinello, Mohamed Kaâniche, Karama Kanoun, Carlos Aguilar Melchor

To cite this version:

Magnos Martinello, Mohamed Kaâniche, Karama Kanoun, Carlos Aguilar Melchor. Modeling user perceived unavailability due to long response times. 20th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2006), Apr 2006, Rhodes Island, Greece. 8 p.


Modeling User Perceived Unavailability due to Long Response Times

Magnos Martinello, Mohamed Kaâniche, Karama Kanoun and Carlos Aguilar Melchor

LAAS-CNRS

7, Av. du Colonel Roche 31077 Toulouse - France

(magnos,kaaniche,kanoun,caguilar)@laas.fr

Abstract

In this paper, we introduce a simple analytical modeling approach for computing service unavailability due to long response time, for infinite and finite single-server systems as well as for multi-server systems. Closed-form equations of system unavailability based on the conditional response time distributions are derived, and sensitivity analyses are carried out to analyze the impact of long response time on service unavailability. The evaluation provides practical quantitative results that can help distributed system developers in design decisions.

1 Introduction

Distributed systems widely sustain our day-to-day life, providing service for a large number of enterprises, producing business opportunities and offering new services to customers. Such systems should ideally remain operational, supporting correct service despite the occurrence of undesirable events.

Service unavailability may result from several causes such as i) failures in service hosts, in the communication infrastructure [1] or in the user site, or ii) heavy loads leading to overloaded servers. From the user viewpoint, the service is perceived as degraded or even unavailable if the response time is too long compared to what he or she is expecting. A long response time may discourage some users, who will visit other service providers. For instance, if a request takes 30 seconds to complete, users may consider the request failed.

* M. Martinello has a researcher fellowship from CAPES-Brazil.

The goal of this paper is to provide a modeling approach for computing service unavailability due to long response time, relying on Markov reward models and queueing theory. We analyze the long response time effects on service unavailability without distinguishing the various causes leading to long response time. We introduce a flexible mathematical abstraction that is general enough to capture the essence of unavailability behavior due to long response time. We analyze this measure at steady state, starting with a single-server queueing system; then multi-server queueing systems are considered. The aim of our work is twofold:



• Evaluate the effects of long response times on service unavailability.

• Provide practical quantitative results that can help in design decisions.

The derivation of the response-time distribution is widely recognized to be non-trivial [8]. Some interesting approaches have been proposed to define measures combining performance and dependability issues [7, 6, 3]. [7, 6] introduced a model for hard and soft real-time systems, while [3] considered transactional systems in which failures may be due to frequent violation of response time constraints.

The modeling approach presented in this paper builds on and extends the work introduced in [3]. The latter work uses i) a Markov model to evaluate system availability, and ii) a tagged job approach to compute the response time distribution. We take a step further by providing closed-form equations for the response-time distribution and for service unavailability due to long response time in single and multi-server systems. The closed-form equations are derived using the well-known gamma function. We present several sensitivity analyses to illustrate how designers can use the models to guide the system design.

The paper is organized as follows. Section 2 defines the availability measure based on response time. In section 3, we introduce the modeling approach using single server queueing systems. This is followed by sensitivity analysis results illustrating the measure behavior. Section 4 provides a modeling approach using multi-server queueing systems with sensitivity analysis. Section 5 concludes the paper.


2 Availability measure definition

Steady-state service availability may be defined as the long-term fraction of time of actually delivered service. We assume that the service states as perceived by the users are partitioned into two sets: i) a set of states in which the service is perceived as available and ii) the complementary set in which the service is perceived as unavailable.

Let E denote the set of all service states, and let \pi_i be the probability that the service is in state i at steady state.

In order to define the service availability based on the response time, let us introduce:

• R(i): the random variable denoting the response time of a request given that the system is in state i at steady state;

• d: the maximum acceptable response time (i.e., if the response time is longer, the service is considered as unavailable); a long response time can reflect network delays or an overloaded server; d is also referred to as the maximum response time requirement;

• \phi: the quality of service requirement (or accepted quality of service), representing the minimum fraction of requests that must satisfy the maximum response time requirement;

• P(R(i) \le d): the conditional response-time distribution (i.e., the probability that the response time of a request is lower than or equal to d, given that the system is in state i at steady state).

Using the definitions above, the service is said to be available in state i if the following condition is satisfied:

P(R(i) \le d) \ge \phi    (1)

Let us denote by K the largest state index for which the service is available (i.e., equation (1) is satisfied for all states i = 0 to K). Thus, the service availability A and the service unavailability UA are given by:

A = \sum_{i=0}^{K} \pi_i , \qquad UA = 1 - \sum_{i=0}^{K} \pi_i    (2)

The evaluation of the unavailability measure based on response time is carried out according to the following steps. First, one needs to specify the service model, describing in particular the distribution of request arrivals and processing times as well as the servers' capacity. Based on this specification, P(R(i) \le d) and \pi_i can be obtained for a given d. For a given \phi, K is derived using P(R(i) \le d) \ge \phi, and the availability is then computed by equation (2). It is worth mentioning that the two parameters d and \phi characterize the quality of service and should be specified a priori. For example, one can specify \phi to be equal to 0.9 and d = 5 seconds. This means that the service response time should be less than 5 seconds for at least 90% of all requests.
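The two-step procedure just described maps directly onto a short computation. The sketch below is our own illustration (the helper names and the stopping rule are assumptions, not from the paper): given a conditional response-time CDF and the steady-state probabilities, it returns K and UA.

    # Minimal sketch of the evaluation procedure described above (names are ours).
    def find_K(resp_time_cdf, d, phi, i_max=100000):
        """Largest i such that P(R(i) <= d) >= phi; returns -1 if even i = 0 fails."""
        K = -1
        for i in range(i_max):
            if resp_time_cdf(i, d) >= phi:
                K = i
            else:
                break          # P(R(i) <= d) is non-increasing in i, so we can stop
        return K

    def unavailability(pi, K):
        """UA = 1 - sum_{i=0..K} pi_i, for a given state-probability function pi(i)."""
        return 1.0 - sum(pi(i) for i in range(K + 1))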

In the following sections, we will i) build analytical models for single-server systems and for multi-server systems, and ii) derive closed-form equations for the conditional response time probability and the service unavailability.

3 Single server queueing systems

In this section, we assume that the server is modeled as a single queueing system with exponential arrival and service times. The modeling approach using single server queueing systems is carried out in two steps. First, we model the availability measure based on the response time distribution at steady-state and then some numerical sensitivity analysis results are presented.

3.1 Modeling

3.1.1 Conditional response time distribution

In this simplest example, we assume that there is only a single process serving the incoming requests at a constant rate \mu requests/sec. The system is assumed to be in state i when there are i requests in the system (i - 1 waiting for service in the queue and one being served). By definition, if a request arrives given that there are already i requests in the system, then the total time spent in the system by the request, denoted as R(i), is given by a sum of i + 1 random variables. Since the random variables are independent and identically distributed with mean 1/\mu, it can be shown that R(i) is described by an Erlang distribution [2] as follows:

P(R(i) \le d) = 1 - \sum_{j=0}^{i} \frac{(\mu d)^j}{j!} e^{-\mu d}    (3)

Let us consider the incomplete gamma function¹ defined as \Gamma(n+1, \mu d) = \int_{\mu d}^{\infty} e^{-t} t^{n} \, dt. Using the fact that

\left(1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}\right) e^{-x} = \frac{\Gamma(n+1, x)}{\Gamma(n+1)},

P(R(i) \le d) can be expressed as follows:

P(R(i) \le d) = 1 - \frac{\Gamma(i+1, \mu d)}{\Gamma(i+1)}    (4)

¹Note that \Gamma(n+1, x) is defined by the integral \int_{x}^{\infty} t^{n} e^{-t} \, dt. If n is a positive integer, then \Gamma(n+1) = n!.
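Equations (3) and (4) are easy to evaluate numerically. The snippet below is a minimal sketch (ours, not the authors' code) that computes the conditional response-time distribution through the finite sum of equation (3), which for integer i equals the gamma ratio of equation (4).

    import math

    def p_response_le_d(i, mu, d):
        """P(R(i) <= d) for a single server, equations (3)/(4)."""
        x = mu * d
        term = math.exp(-x)           # j = 0 term of the sum: e^{-x}
        total = term
        for j in range(1, i + 1):     # accumulate e^{-x} x^j / j! iteratively
            term *= x / j
            total += term
        return 1.0 - total

    # Example: with mu*d = 12.5 and phi = 0.9, the largest i satisfying
    # P(R(i) <= d) >= 0.9 is i = 7, i.e. K = 7 (as in Figure 1).
    K = max(i for i in range(200) if p_response_le_d(i, 12.5, 1.0) >= 0.9)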


3.1.2 Service unavailability

Consider a system accessible to a very large population. The arrival process is characterized by requests arriving at an average arrival rate of \lambda requests/sec. This assumption is known as the single class or homogeneous workload. We first assume that all arriving requests are queued for service; this assumption is known as infinite buffer. Then we will consider the finite buffer case. All the analyses presented assume that the system being analyzed is in operational equilibrium.

3.1.3 Infinite buffer

Requests arrive at the system at a rate of \lambda requests/sec, queue for service, get served at rate \mu requests/sec and depart. Such a system is a traditional M/M/1 queue [2], in which the probability \pi_i that there are i requests at steady state is well known:

\pi_i = (1 - \rho)\rho^i    (5)

where \rho = \lambda / \mu refers to the load.

Therefore, equation (2) becomes A = \sum_{i=0}^{K} (1 - \rho)\rho^i. Using the fact that \sum_{i=0}^{K} \rho^i = \frac{1 - \rho^{K+1}}{1 - \rho}, we obtain

A = 1 - \rho^{K+1}, \qquad UA = \rho^{K+1}    (6)

3.1.4 Finite buffer

For a system supporting at most b requests including the request being processed (finite buffer), denoted by M/M/1/b queue system, we have

A = \frac{1 - \rho^{K+1}}{1 - \rho^{b+1}}, \qquad UA = 1 - \frac{1 - \rho^{K+1}}{1 - \rho^{b+1}}    (7)

where K \le b.

Table 1 summarizes the equations for user perceived unavailability due to long response time in single server queueing systems. Recall that the computation of UA requires calculating K, which corresponds to the maximum value of i satisfying equation (1).

3.2 Sensitivity analysis

In this section, some numerical results are presented in order to illustrate the behavior of UA using the equations derived in Table 1.

Conditional response time probability: P(R(i) \le d) = 1 - \Gamma(i+1, \mu d) / \Gamma(i+1)

Unavailability due to long response time for an M/M/1 queue system: UA = \rho^{K+1}

Unavailability due to long response time for an M/M/1/b queue system: UA = 1 - \frac{1 - \rho^{K+1}}{1 - \rho^{b+1}}

Table 1. Closed-form equations for single server queueing systems
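As a direct transcription of Table 1 (a sketch with our own function names; d is expressed in the same time unit as 1/\mu), the two unavailability formulas become:

    def ua_mm1(rho, K):
        """Table 1: UA = rho^(K+1) for an M/M/1 queue (assumes rho < 1)."""
        return rho ** (K + 1)

    def ua_mm1b(rho, K, b):
        """Table 1: UA for an M/M/1/b queue (assumes K <= b and rho != 1)."""
        return 1.0 - (1.0 - rho ** (K + 1)) / (1.0 - rho ** (b + 1))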

3.2.1 Variation of response time

An important consideration for design purposes is to define when the service is "too slow". In fact, one has to specify the threshold d for the acceptable response time. Considering the example of web servers, practical experience has suggested that ten seconds is well above the normal response time for all the sites studied in [5]. The latter divides timing problems affecting site availability into "medium" (ten seconds) and "severe" (thirty seconds) problems.

Figure 1 shows the conditional response time distribution P(R(i) \le d), given in Table 1, as a function of the number of requests i, considering different values for \mu d. In fact, the evaluation of this distribution allows us to determine the states i for which P(R(i) \le d) \ge \phi.

As can be seen, the response time probability is directly affected by the product \mu d. It is noteworthy that \mu d corresponds to the average number of requests processed by the server during a period of time d. Another observation is that K increases with \mu d. For example, setting the quality of service parameter \phi to 0.9, K = 7 for \mu d = 12.5 and K = 63 for \mu d = 75. In fact, the greater is \mu d, the higher is the probability that the response time is lower than d. Figure 1 also shows that lower values of the quality of service parameter \phi (e.g., \phi = 0.8) clearly lead to greater values of K. In other words², the greater is K, the more requests arriving at the server are likely to be satisfied within the acceptable response time.

Such analyses are useful for design decisions, since the expected level of degradation of the response time probability as a function of the number of requests queued in the system can be evaluated.

Figure 1. P(R(i) \le d) for single server systems (\mu d = 12.5, 25, 50, 75; \phi = 0.9; the corresponding values of K are 7, 18, 40 and 63)

² It can be noticed that, for a given d, the longer is the response time …

Once K is evaluated, one can compute the service unavailability using equation (6). Table 2 shows how UA varies as \mu d increases for different loads \rho and for \phi = 0.9. We clearly observe that UA decays faster as \rho decreases.

\mu d    K     \rho = 0.9   \rho = 0.8   \rho = 0.7   \rho = 0.6
12.5     7     4.3e-01      1.6e-01      5.7e-02      1.6e-02
25       18    1.3e-01      1.4e-02      1.1e-03      6.0e-05
50       40    1.3e-02      1.0e-04      4.4e-07      8.0e-10
75       63    1.2e-03      6.2e-07      1.2e-10      6.3e-15

Table 2. UA as \mu d increases (\phi = 0.9)
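Any row of Table 2 can be reproduced by chaining the two earlier sketches (hypothetical helpers p_response_le_d and ua_mm1 defined above); for instance, for the \mu d = 50 row:

    mu, d, phi = 50.0, 1.0, 0.9
    K = max(i for i in range(500) if p_response_le_d(i, mu, d) >= phi)   # K = 40
    for rho in (0.9, 0.8, 0.7, 0.6):
        print(rho, ua_mm1(rho, K))   # ~1.3e-02, ~1.0e-04, ~4.4e-07, ~8.0e-10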

3.2.2 Effects of K and \rho on UA

"#! provides a useful indicator to analyze the impact of

re-sponse time on service unavailability. By definition,

param-eter  represents the set of states for which the response

time is acceptable for a given quality of service requirement. Figure 2 shows "#! as a function of for different loads

 . The system is assumed to be composed by one server

with infinite buffer (M/M/1). Note that by definition,‘N,

implies"#!$-, and ¼u½(¾ %3¿

`

"#!$ .

From the figure, we can see that"#! is very sensitive to

the load ."#! decays slowly for heavy loads . In contrast,

for ”light” loads À› w5

¶ , the unavailability due to long

response time is negligible. On the other hand, the greater is K, the lower is "#! . Such behaviour is better illustrated

on Figure 3 that plots"#! as a function of the load (



w5 ¶ )

for different values of . In particular, for systems in which

0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 UA K ρ = 0.9 ρ = 0.8 ρ = 0.6

Figure 2. "! as a function of  (M/M/1

sys-tem) 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 UA Load ρ K=50 K=10 K=20

Figure 3."! as a function of (M/M/1 system)





8 , there is a small probability that the service is

perceived as unavailable due to long response time. Thus, according to the figure, it is likely that service unavailability due to long response time will be very low.

3.2.3 Finite buffer effects on UA

All the evaluations presented in the previous section can also be applied to a system with a finite buffer (i.e., considering an M/M/1/b queue).

Figure 4 shows a comparison between a system with a finite buffer (M/M/1/b) and a system with an infinite buffer (M/M/1). The results for M/M/1/b (dotted lines) are obtained using equation (7). As expected, the greater is b, the lower is the difference between the models. For \rho = 0.9, M/M/1/40 is very close to M/M/1. The difference is significant for M/M/1/20. However, for lower loads (\rho = 0.6 and 0.7), M/M/1/20 behaves practically as M/M/1.

Figure 4. The effect of the finite buffer size b on UA (M/M/1 vs. M/M/1/20 and M/M/1/40; \rho = 0.6, 0.7, 0.9)

For a given request, the greater is its position in the buffer, the lower is the probability that it is served within the maximum response time requirement. In finite queueing systems, the requests arriving when the buffer capacity b is full are rejected, and therefore they are not considered as leading to service unavailability due to long response time. This explains the fact that UA for M/M/1/b is lower than UA for M/M/1.
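This ordering can be checked numerically with the earlier M/M/1 and M/M/1/b sketches (our own helpers; the buffer sizes b = 20 and b = 40 match those used in Figure 4):

    # Compare the three models at rho = 0.9 for K up to the smaller buffer size
    # (equation (7) assumes K <= b).
    rho = 0.9
    for K in range(0, 21, 5):
        print(K, ua_mm1(rho, K), ua_mm1b(rho, K, 40), ua_mm1b(rho, K, 20))
    # M/M/1/20 gives the lowest UA and M/M/1/40 stays close to M/M/1,
    # illustrating the behaviour discussed above.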

3.2.4 Approximation for UA

The evaluation of UA requires the calculation of parameter K, which represents the set of states after which all arriving requests probably perceive the service as unavailable. K is not known a priori; it is computed in an intermediate step. We have investigated a more direct approach for computing UA based on an approximation of K, in order to obtain an analytical equation of UA as a function of only well-known parameters, such as the service rate (\mu) and the required maximum response time (d), without needing to compute K in an intermediate step based on the conditional response time distribution.

By analyzing the properties of the finite series f(n) = \sum_{j=0}^{n} (\mu d)^j / j!, we found the following approximation³ for K:

K \approx \mu d - C\sqrt{\mu d}    (8)

where C is a constant that can be set to support a given quality of service (e.g., C = 1.35 for \phi = 0.9). Accordingly, UA can be obtained as follows:

UA \approx \rho^{\mu d - C\sqrt{\mu d}}    (9)

³In fact, K is obtained through a sub-linear approximation to the inflexion point of f(n) around \mu d.

Figure 5. UA as a function of \mu d: comparison of equation (6) and equation (9) for \rho = 0.6, 0.8, 0.9, 0.95

Thus, UA can be evaluated directly as a function of \mu d and \rho only, for a given \phi. Figure 5 shows a comparison between the unavailability computed by equation (6) (where K is an integer value obtained from equation (4)) and the approximation given by equation (9) (C = 1.35). UA is plotted as a function of \mu d, where dotted lines represent the approximation. As can be seen, the approximation is very accurate. It differs from the exact value only for \mu d \le 5. In other words, for any \mu d \ge 5 (which is the case for most server systems, capable of handling much more than 5 requests per second), there is no difference between the exact and the approximate values of UA.
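A quick numerical check of this agreement (a sketch; the constant C = 1.35 and the square-root form of equation (8) are as reconstructed above, and p_response_le_d is the earlier single-server helper):

    import math

    def ua_exact(rho, mu, d, phi=0.9):
        """Equation (6): UA = rho^(K+1), with K computed from equation (4)."""
        K = max(i for i in range(10000) if p_response_le_d(i, mu, d) >= phi)
        return rho ** (K + 1)

    def ua_approx(rho, mu, d, C=1.35):
        """Equation (9): UA ~ rho^(mu*d - C*sqrt(mu*d)), no intermediate K needed."""
        x = mu * d
        return rho ** (x - C * math.sqrt(x))

    for mud in (12.5, 25.0, 50.0):
        print(mud, ua_exact(0.9, mud, 1.0), ua_approx(0.9, mud, 1.0))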

Equation (9) and the results of Figure 5 show that UA approaches 0 as \mu d approaches infinity (i.e., \lim_{\mu d \to \infty} UA = 0). It means that, for a given d, with a server having an infinite service rate \mu \to \infty there is no service unavailability due to long response time. At the same time, for a given service rate \mu, there is no service unavailability for a user with infinite patience d \to \infty.

4 Multi-server queueing systems

Let us consider a multi-server queueing system consisting of a queueing buffer of finite or infinite size, with multiple identical servers. Such an elementary queueing system is also referred to as a multi-server system. In section 4.1 we present closed-form equations for the conditional response time distribution and the service unavailability. Sensitivity analysis results are presented in section 4.2.

4.1 Modeling

4.1.1 Conditional response time distribution

Let us suppose that the multi-server system is composed of c identical servers, where each server is capable of handling \mu requests/sec. Let R_c(i) be the random variable denoting the response time, in steady state, of a request arriving at a system with c servers and i requests. If a request arrives when there are already i requests in the system, two different cases can be distinguished to model the corresponding conditional response time distribution:

• If i < c, the new arrival is processed immediately by one of the free servers. Thus, R_c(i) is an exponential random variable with parameter \mu.

• If i \ge c, the new arrival must wait for i - c + 1 service completions before receiving service (if i = c, the new request must wait for one service completion; if i = c + 1, two service completions are required, etc.). In this case, R_c(i) is the sum of an Erlang random variable W corresponding to the request waiting time and an exponential random variable S denoting the service time. Therefore, by convolution,

P(R_c(i) \le d) = P(W + S \le d) = \int_{0}^{d} F_W(d - y)\, s(y)\, dy,

where F_W(x) = P(W \le x) = 1 - \sum_{j=0}^{i-c} \frac{(c\mu x)^j}{j!} e^{-c\mu x} and s(y) = \mu e^{-\mu y}.

After a set of transformations (see [4] for all details), we obtain equation (10):

P(R_c(i) \le d) = 1 - e^{-\mu d}, \quad \text{if } i < c

P(R_c(i) \le d) = 1 - \left(\frac{c}{c-1}\right)^{i-c+1} e^{-\mu d} \left[ 1 - \frac{\Gamma(i-c+1,\, (c-1)\mu d)}{\Gamma(i-c+1)} \right] - \frac{\Gamma(i-c+1,\, c\mu d)}{\Gamma(i-c+1)}, \quad \text{if } i \ge c    (10)
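For integer arguments the gamma ratios in equation (10) reduce to finite sums, so the distribution can be evaluated with elementary operations. The sketch below is our own illustration (valid for c >= 2; for c = 1 the single-server formula of equation (4) applies):

    import math

    def gamma_ratio(k, x):
        """Gamma(k, x)/Gamma(k) for integer k >= 1, i.e. e^{-x} * sum_{j<k} x^j/j!."""
        term = math.exp(-x)
        total = term
        for j in range(1, k):
            term *= x / j
            total += term
        return total

    def p_resp_multi(i, c, mu, d):
        """Equation (10): P(Rc(i) <= d) for c >= 2 servers."""
        if i < c:
            return 1.0 - math.exp(-mu * d)        # a free server handles the request at once
        k = i - c + 1                             # number of service completions to wait for
        # Note: (c/(c-1))**k can overflow for very large i; fine for the ranges used here.
        waiting = ((c / (c - 1.0)) ** k * math.exp(-mu * d)
                   * (1.0 - gamma_ratio(k, (c - 1) * mu * d)))
        return 1.0 - waiting - gamma_ratio(k, c * mu * d)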

4.1.2 Service unavailability

Let us take the same system consisting of c identical servers, where each server is capable of handling \mu requests/sec. We need to compute the probability, denoted \pi_i(c), that the system with c servers has i requests at steady state. Assuming that the sequence of interarrival times is described by independent and identically distributed exponential random variables of rate \lambda (a traditional M/M/c queue, in which \pi_i(c) is well known [2], with \rho = \lambda/(c\mu)), we obtain for the service unavailability the following closed-form equation (see [4] for all details):

"#!$ âä ä å ä äæ ,/.ê ) ç Ë %–•—Íè × ¨nъë ìˆí ç Ë %–•˜— Ñ , if´› Ö ,/.êR) ç Ë × è × ¨Íцëìˆí ç Ë × Ñ J Ë × ¨Íцì ×î Ë — U¨ ©ï ì ªg« Ñ — U¨ , if´Ø Ö (11) To summarize, forð \k,\Ífq\ 5u5(5 requests, ×  3 

can be computed using equation (10). Based on this

distri-bution, K is obtained as the maximum value of satisfying

#×l  ”

2]I

. Then, we can proceed to calculate"#!

for multi-servers in an infinite and finite queueing systems using equation (11).
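A direct way to evaluate this measure (a sketch with our own naming, for the infinite-buffer M/M/c case with \rho = \lambda/(c\mu) < 1) is to compute \pi_0 from the standard M/M/c formula and sum the state probabilities up to K; a small helper converts UA into the days:hours:minutes-per-year format used in Tables 4 and 5.

    import math

    def ua_mmc(lam, mu, c, K):
        """UA = 1 - sum_{i=0..K} pi_i for an M/M/c queue (equivalent to equation (11))."""
        rho = lam / (c * mu)                  # per-server utilization, must be < 1
        a = c * rho                           # offered load lambda/mu
        pi0 = 1.0 / (sum(a ** n / math.factorial(n) for n in range(c))
                     + a ** c / (math.factorial(c) * (1.0 - rho)))
        total = 0.0
        for i in range(K + 1):
            if i < c:
                total += pi0 * a ** i / math.factorial(i)
            else:
                total += pi0 * a ** c / math.factorial(c) * rho ** (i - c)
        return 1.0 - total

    def per_year(ua):
        """Format UA as days:hours:minutes of perceived unavailability per year."""
        minutes = int(round(ua * 365 * 24 * 60))
        d, rem = divmod(minutes, 24 * 60)
        h, m = divmod(rem, 60)
        return f"{d:02d}:{h:02d}:{m:02d}"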

4.2 Sensitivity analysis

In this section, we study the effect of the number of servers on UA using the modeling approach developed in the previous section. The analysis is divided into two parts. First, in 4.2.1 the response time distribution is studied in order to quantify how the response time is affected by the number of servers; a simple example system is presented, illustrating some possible configurations. Then, we evaluate UA itself, taking into account the response time variation, the load effects in 4.2.2, and the number of servers in 4.2.3.

4.2.1 Variation of response time distribution

Figure 6 shows the response time distribution P(R_c(i) \le d) as a function of the number of requests, computed by equation (10). This function is evaluated varying the number of servers c and the product \mu d. As can be seen, as c or \mu d increases, the response time probability is improved. This is illustrated by the increase of K. Clearly, the greater is K, the lower is UA. The effect of K on UA for multi-server systems is similar to the single-server case (discussed in section 3.2.2).

These results can be used for supporting design decisions. For instance, let us define by \mu c the aggregated service rate provided by c servers. We consider a set of system configurations designed to support the same aggregated service rate of \mu c = 150 requests/sec. Table 3 identifies four possible system configurations using only multi-servers, i) to iv), and one configuration, v), with a single server.

The values of K obtained for configurations i), ii), iii), iv) and v) increase with the service rate of the individual servers. This result shows that a configuration with only one server provides the greatest K. Clearly, the response time is longer when the aggregated service rate is split among the servers. This fact explains why K decreases for configurations that employ several servers with low service rates (e.g., K = 116 for c = 12 and \mu = 12.5, compared to K = 133 for c = 2 and \mu = 75).


Figure 6. P(R_c(i) \le d) variation for multi-server queueing systems (panels for \mu d = 12.5, 25, 50, 75 and c = 1 to 5, with \phi = 0.9; for \mu d = 75, K = 63, 133, 205, 277, 349 for c = 1, ..., 5, and for \mu d = 50, K = 40, 86, 133, 181, 229)

Configuration   c    \mu
i)              12   12.5
ii)             6    25
iii)            3    50
iv)             2    75
v)              1    150

Table 3. Configurations for \mu c = 150 req/sec.

4.2.2 Load effects on UA

Table 4 shows the impact of c and \mu d on UA for different loads \rho (setting d to 1 to simplify the analysis), assuming the same aggregated service rate of \mu c = 150 requests/sec. The service unavailability is given in days:hours:minutes per year for various loads \rho \in \{0.8, 0.9, 0.95\}. UA is computed using equation (11) based on the value of K obtained from equation (10). Note that for \rho = 0.8, there is a very small unavailability due to long response time.

According to Table 4, it can be seen that the lowest UA is obtained for a single powerful server (configuration v). Nevertheless, for all configurations, UA is less than 5 min and 30 sec per year for \rho = 0.9, which is relatively low. UA is significantly higher for \rho = 0.95.

UA in days:hours:minutes per year

Configuration   \rho = 0.8   \rho = 0.9   \rho = 0.95
i)              0            00:00:05     00:32:28
ii)             0            00:00:01     00:15:17
iii)            0            0            00:11:07
iv)             0            0            00:10:16
v)              0            0            00:09:04

Table 4. Impact of \rho on UA (\mu c = 150).

4.2.3 Impact of the number of servers c on UA

Table 5 shows the impact of c for three values of the service rate \mu \in \{25, 50, 75\}, when the load is set to \rho = 0.9. It is important to note that increasing c is efficient for reducing UA, especially when the load is not heavy (\rho \le 0.9). For instance, if \mu = 25 and \rho = 0.9, then UA is 49 days per year for c = 1, compared to a few hours per year or less for c \ge 3 (see Table 5). It becomes more efficient as \mu increases: for \mu = 50, UA is about 5 days per year for c = 1, compared to about one hour per year for c = 2.


c    \mu = 25    \mu = 50    \mu = 75
1    49:07:22    04:20:29    00:10:16
2    06:07:24    00:01:09    0
3    00:15:51    0           0
4    00:01:38    0           0
5    00:00:07    0           0

Table 5. UA in days:hours:minutes per year for \rho = 0.9.
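The trend of Table 5 can be reproduced by combining the earlier multi-server sketches (our own helpers; K is recomputed for each c, using the single-server formula p_response_le_d when c = 1 and p_resp_multi otherwise):

    def find_K_multi(c, mu, d, phi):
        """Largest i with P(Rc(i) <= d) >= phi (the probability is non-increasing in i)."""
        i = 0
        while True:
            p = p_response_le_d(i, mu, d) if c == 1 else p_resp_multi(i, c, mu, d)
            if p < phi:
                return i - 1
            i += 1

    mu, d, phi, rho = 25.0, 1.0, 0.9, 0.9
    for c in range(1, 6):
        K = find_K_multi(c, mu, d, phi)
        print(c, K, per_year(ua_mmc(rho * c * mu, mu, c, K)))
    # c = 1 gives roughly 49 days of perceived unavailability per year; the value
    # drops sharply as c grows, in line with the mu = 25 column of Table 5.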

5 Conclusion

In this paper we have provided an analytical modeling approach for computing service unavailability due to long response times using queueing systems theory. Closed-form equations of the response time distribution and the service unavailability were developed in order to illustrate fundamental availability issues for single and multi-server systems.

The models presented in this paper are aimed at allowing system developers to draw some interesting and practical conclusions concerning the impact of various parameters on the user perceived unavailability. For example, the results have shown that for "light" loads (i.e., \rho < 0.7), the unavailability due to long response time is negligible. From the designer perspective, the evaluations suggest that systems with a low service rate subject to a heavy load (\rho \ge 0.9) tend to exhibit the highest unavailability due to long response time. The effect of heavy loads on UA is less substantial as the aggregate service rate increases. Also, the obtained results have suggested that the difference in UA among the configurations becomes negligible as the aggregate service rate increases. Finally, increasing the number of servers c has been shown to be efficient for reducing UA, especially for low loads (\rho < 0.9).

It has been shown that it is possible to provide a service satisfying a response time requirement using only servers with a low service rate, although this is not the optimal configuration. In fact, we should employ either a powerful single server or several servers, preventing as much as possible the overloaded periods (\rho \ge 0.9). For multi-server systems, the response time is longer when the aggregated service rate is shared among the servers. This fact explains why configurations that employ several servers with low service rates are not optimal.

All the analyses presented have focused on the unavailability due to long response time, assuming that all the servers are available. We emphasize that if we take into account the failures of one or more servers, the impact of long response time on service unavailability should be more significant. Although the optimal configuration consists of a powerful single server, it represents a single point of failure from the availability viewpoint. Therefore, an alternative configuration employing more than a single server should provide a better tradeoff, supporting degradable service in the presence of failures.

Finally, it is recognized that service unavailability in the context of widely distributed server systems might be due to problems with the host (e.g., the remote host is too busy handling other requests), problems with the underlying network (e.g., a proper route to the site does not exist), or problems in the user host. In this paper, our attention was devoted to the service unavailability due to long response time concentrated at the server side. In order to analyze the impact of the response time on the end-to-end service unavailability as perceived by users, it is necessary to include the other components affecting the time spent by a user request, e.g. the network delay (latency and transmission time), etc.

References

[1] M. Dahlin, B. Chandra, L. Gao, and A. Nayate. End-to-end WAN service availability. IEEE/ACM Transactions on Networking, 11(2), 2003.

[2] L. Kleinrock. Queueing Systems, volume I - Theory. Wiley, 1975.

[3] V. Mainkar. Availability analysis of transaction processing systems based on user-perceived performance. Proceedings of the 16th Symposium on Reliable Distributed Systems, pages 10–17, 1997.

[4] M. Martinello. Availability Modeling and Evaluation of Web-based Services: A Pragmatic Approach. PhD thesis, LAAS-CNRS, (05552), 2005.

[5] M. Merzbacher and D. A. Patterson. Measuring end-user availability of the Web: practical experience. Dependable Computing and Network, 2002.

[6] J. K. Muppala, S. P. Woolet, and K. S. Trivedi. Real-time systems performance in the presence of failures. IEEE Computer, pages 37–47, 1991.

[7] K. G. Shin and C. M. Krishna. New performance measures for design and evaluation of real-time multiprocessors. Computer Systems Science and Engineering, 4(1):179–192, 1986.

[8] K. S. Trivedi, S. Ramani, and R. Fricks. Recent advances in modeling response-time distributions in real-time systems. Proceedings of the IEEE, 91(7):1023–1037, 2003.
