
SYSTEMS OF PARALLEL COMMUNICATING RESTARTING AUTOMATA

Marcel Vollweiler¹ and Friedrich Otto¹

Abstract. Several types of systems of parallel communicating restarting automata are introduced and studied. The main result shows that, for all types of restarting automata X, centralized systems of restarting automata of type X have the same computational power as non-centralized systems of restarting automata of the same type and the same number of components. This result is proved by a direct simulation. In addition, it is shown that for one-way restarting automata without auxiliary symbols, systems of degree at least two are more powerful than the component automata themselves. Finally, a lower and an upper bound are given for the expressive power of systems of parallel communicating restarting automata.

Mathematics Subject Classification. 68Q45.

1. Introduction

Here we define and investigate systems of parallel communicating restarting automata in detail. These systems, which were announced in [4], combine two well-known concepts: the restarting automaton and the problem solving strategy known as the classroom model.

The restarting automaton was originally developed by Jančar et al. to model the linguistic technique of "analysis by reduction" [6]. This technique is used for checking the syntactical correctness of sentences of a natural language with free word order. A given sentence is simplified stepwise by a sequence of local rewrite steps until either a correct core sentence is obtained or until an error is located.

Meanwhile, many variants and extensions of the original model of the restarting

Keywords and phrases. Parallel communicating system, restarting automaton, language class.

1 Fachbereich Elektrotechnik/Informatik, Universität Kassel, 34109 Kassel, Germany.

{vollweiler,otto}@theory.informatik.uni-kassel.de

Article published by EDP Sciences © EDP Sciences 2014


automaton have been defined, relating them to various well-known classes of formal languages (see [13] for an overview). Also cooperating distributed systems of restarting automata have been studied. In these systems, several components work together in processing an input string in a sequential manner [8]. These systems are closely related to nonforgetting restarting automata [8], and recently it has been shown that some classes of trace languages can be characterized by certain types of cooperating distributed systems of a very simple kind of restarting automaton [10, 11].

While the cooperating distributed systems can be seen as an abstraction of the problem solving strategy known as the blackboard model, in which a finite group of agents take turns in working on a solution of a common instance of some problem, the parallel communicating systems are an abstraction of the classroom model, where a finite group of agents work in a parallel manner, each on its own copy of a common instance of some problem. While they work in parallel, but independently of each other, the agents (the components) are allowed to exchange information by passing messages (see, e.g., [2]). Parallel communicating systems have been introduced in the context of phrase-structure grammars in [15]. These so-called parallel communicating grammar systems are discussed in detail in [2].

Since then, parallel communicating systems of finite-state automata [7] and of pushdown automata [3] have also been studied.

Here we combine a finite set of restarting automata into a parallel communicating system (PC system). These restarting automata, which are called the components of the PC system, work in parallel and independently of each other on their own working tapes (starting with the same input), but they can exchange information by sending and receiving messages. There are various options for realizing this message passing. Here we use particular internal states, called communication states, to realize a communication in the form of a two-way handshake.

Our communication protocol is quite different from those of the above-mentioned PC systems. There, the components work in a synchronous way, i.e., there is a global clock that forces all components to execute exactly one computation step in each unit of time. Synchronization can be seen as hidden communication. For instance, a PC system of finite automata with three components is able to accept a non-context-free language without any (explicit) communication by simply using synchronization [7]. As synchronization is not natural for distributed systems, the local computations of the components of our systems are not synchronized. This means that the numbers of steps of the local computations of two components that are executed between two communications may differ. Moreover, the communication of the mentioned PC systems is one-directional, i.e., only the component that requests some information from another component determines when and with which other component a communication takes place. The component that sends information to the requesting one is not aware of that communication and thus is only passively involved in that communication step. In addition, particularly in PC systems of finite automata, the requesting component forgets everything about its previous computation in each computation step. Here, we want to avoid


these particular properties. Therefore, we use a completely different asynchronous point-to-point communication protocol for PC systems of restarting automata.

This protocol was also applied for PC systems of finite automata (asynchronous PC systems of finite automata [16]) and (in a modified version) for PC systems of pushdown automata (asynchronous PC systems of pushdown automata [14]).

In general, a communication can occur between any two components of a PC system. If there is a particular component, called the master of the system, such that all communications involve this component, then the PC system is called centralized. While for PC grammar systems and (synchronous) PC systems of deterministic finite automata the centralized variants are in general less powerful than the non-centralized variants, we will see that for PC systems of restarting automata, the centralized variants have exactly the same computational power as the non-centralized variants. In fact, our main result states that each non-centralized system of degree n ≥ 2 of restarting automata of any type X can be simulated by a centralized system of the same degree of restarting automata of the same type X. As it turns out, it is the fact that each restart operation resets the state of a restarting automaton to its initial state that causes the main problems for this simulation.

This paper is organized as follows. After restating in short the necessary notions and notation on restarting automata, we define the PC systems of restarting automata (PCRA systems, for short), and explain in detail the way in which they work, using a simple example. In Section 3, we state our main result and provide a rather detailed proof. Then we compare the computational power of PCRA systems to the power of restarting automata in Section 4, and to that of systems of parallel communicating finite automata and of multi-head automata in Section 5.

In this way, the power of PCRA systems is related to the space complexity classes L (deterministic logarithmic space), NL (nondeterministic logarithmic space), and CSL (nondeterministic linear space).

2. Restarting automata and PCRA systems

A restarting automaton M consists of a finite-state control and a flexible tape with a read/write window. Formally, it is described by an 8-tuple M = (Q, Σ, Γ, c|, $, q0, k, δ), where Q is a finite set of states, Σ is a finite input alphabet, Γ is a finite tape alphabet containing Σ, and q0 ∈ Q is the initial state. The symbols in Γ \ Σ are called auxiliary symbols, the symbols c|, $ ∉ Γ are used as left and right delimiters of the tape, and the integer k ≥ 1 denotes the size of the read/write window. Further, δ is a transition relation that assigns finite sets of possible operations to pairs of the form (p, α), where p ∈ Q and α ∈ {ε, c|}·Γ*·{ε, $} is a possible content of the window. Here ε denotes the empty word.

There are five different operations that M can use:

– A move-left step is of the form (q, MVL) ∈ δ(p, α). It causes M to change into state q and to move the window one position to the left. However, the window cannot move if it already contains the left delimiter c|.


– A move-right step is of the form (q, MVR) ∈ δ(p, α). It causes M to change into state q and to move the window one position to the right. However, the window cannot move if it just contains the right delimiter $.

– A rewrite step is of the form (q, β) ∈ δ(p, α), where |β| < |α|. It causes M to change into state q, to replace α by β, thereby also shortening the tape accordingly, and to move the window immediately to the right of β. Some restrictions apply in that the symbols c| and $ cannot be rewritten.

– A restart step is of the form Restart ∈ δ(p, α). It causes M to change into the initial state q0 and to reposition the window on the left end of the tape.

– An accept step is of the form Accept ∈ δ(p, α). It causes M to halt and accept.

If δ(p, α) = ∅, then M gets stuck (without acceptance) in a corresponding situation. It is required that in any computation, rewrite and restart steps occur alternatingly, with a rewrite step coming first. In general, a restarting automaton M is nondeterministic, but if δ is a (partial) function, then M is a deterministic restarting automaton.

A configuration κ of a restarting automaton M is either of the form κ = Accept, which is called an accepting configuration, or of the form κ = αqβ, where q ∈ Q and either α = ε and β ∈ {c|}·Γ*·{$} or α ∈ {c|}·Γ* and β ∈ Γ*·{$}. Here q is the current state of M, and αβ is the current tape content with the window containing the first k symbols of β, if |β| ≥ k, or all symbols of β including the right end marker $, otherwise. For an input word w ∈ Σ*, q0c|w$ is the corresponding initial configuration, and any configuration of the form q0c|u$ with u ∈ Γ* is called a restarting configuration.

The transition relation δ induces a binary relation ⊢_M on the set of configurations of M, the so-called single-step computation relation. Its reflexive and transitive closure ⊢*_M is the computation relation of M. The language L(M) that is accepted by M is defined as L(M) = {w ∈ Σ* | q0c|w$ ⊢*_M Accept}.

The behaviour of a restarting automaton M can be described informally as follows: beginning in a restarting configuration q0c|u$ for some u ∈ Γ* (this includes in particular the initial configurations), M can move its window along the tape, then it rewrites a part of the tape, then it can move the window again, and then M performs a restart step. This takes M back into a restarting configuration, and so this process can be repeated until an accepting configuration is reached or until M gets stuck.
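As an illustration of this cycle structure (our own sketch, not part of the paper's formal development), the following Python function simulates a concrete deterministic restarting automaton for the language { aⁿbⁿ | n ≥ 0 }, with window size k = 3. The rule set and the symbol "<" standing in for the left delimiter c| are our own choices; this particular automaton happens to restart immediately after each rewrite and needs no explicit state component, since its window content alone determines the next step.

```python
# A deterministic restarting automaton for { a^n b^n | n >= 0 }.
# "<" plays the role of the left delimiter c|, "$" is the right delimiter;
# the window size is k = 3, and the automaton restarts right after rewriting.

def run_r_automaton(word, k=3):
    tape = "<" + word + "$"
    while True:                      # one iteration of this loop = one cycle
        pos = 0                      # Restart: window back at the left end
        rewritten = False
        while not rewritten:
            window = tape[pos:pos + k]
            if window == "<$":
                return True          # Accept: the core sentence is empty
            if window in ("<aa", "<ab", "aaa", "aab"):
                pos += 1             # MVR
            elif window in ("abb", "ab$"):
                tape = tape[:pos] + tape[pos + 2:]  # delete "ab", then restart
                rewritten = True
            else:
                return False         # delta is undefined: the automaton is stuck

print(run_r_automaton("aabb"))   # True
print(run_r_automaton("abab"))   # False
```

Each outer iteration shortens the tape by one rewrite (deleting one "ab" at the a/b boundary), mirroring the restart cycles described above.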

The type of restarting automaton defined above is called an RLWW-automaton.

By putting additional restrictions on the definition, various restricted types of restarting automata can be obtained. An RRWW-automaton is an RLWW-automaton that is not allowed to use any MVL-steps. Thus, apart from the restart steps, such an automaton can move its window only from left to right. Accordingly, this type is also called a one-way restarting automaton. An RWW-automaton is an RRWW-automaton that has to restart immediately after performing a rewrite step.

In other words, an RWW-automaton is not allowed to move the window between a rewrite and a restart step, and hence, the rewrite and the restart steps can be merged into a single operation for these automata. Furthermore, also the rewrite operation can be restricted. An RLWW-automaton (RRWW-, RWW-automaton) is an RLW-automaton (RRW-, RW-automaton), if no auxiliary symbols are available, that is, if Σ = Γ. Finally, an RLW-automaton (RRW-, RW-automaton) is an RL-automaton (RR-, R-automaton), if all its rewrite operations are of the form (q, β′) ∈ δ(p, β), where β′ is a scattered subword of β. This means that β′ can be obtained by deleting one or more symbols from β. To simplify the notation in what follows, we set T = {RLWW, RLW, RL, RRWW, RRW, RR, RWW, RW, R}. To denote types of deterministic restarting automata, we use the prefix det. Thus, a deterministic automaton of type X ∈ T is a det-X-automaton. The class of languages that are accepted by automata of type (det-)X, X ∈ T, is denoted by L((det-)X).
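The scattered-subword condition on the rewrites of RL-, RR-, and R-automata can be checked mechanically. The following snippet (our own illustration, not from the paper) tests whether a word β′ arises from β by deleting at least one symbol:

```python
def is_proper_scattered_subword(beta_prime, beta):
    """True iff beta_prime is obtained from beta by deleting at least one
    symbol, i.e. beta_prime is a proper scattered subword of beta."""
    if len(beta_prime) >= len(beta):
        return False                 # a rewrite step must shorten the tape
    it = iter(beta)
    # each symbol of beta_prime must be found, in order, within beta;
    # membership testing on the iterator consumes the symbols it skips
    return all(symbol in it for symbol in beta_prime)

print(is_proper_scattered_subword("ab", "aabb"))   # True
print(is_proper_scattered_subword("ba", "aabb"))   # False: order not preserved
```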

Now we come to the main definition of this paper. A system of parallel communicating restarting automata (a PCRA system, for short) of degree n is given by an n-tuple M = (M1, M2, . . . , Mn), where Mi = (Qi, Σ, Γi, c|, $, qi, ki, δi), 1 ≤ i ≤ n, are restarting automata, which are called the components of the system. The system M works as follows. Given an input w ∈ Σ*, all components start from their corresponding initial configurations for input w. They perform local operations, concurrently and independently of each other. At some point, a component Mi may request information from another component Mj, or Mj may have information for Mi. The former situation is realized by Mi entering a special internal state req^j_d, called a request state, where the superscript j indicates that Mi needs information from Mj, and the subscript d is some local information that Mi wants to remember. The other situation is realized by Mj entering a special internal state res^i_{d,c}, called a response state, where the superscript i indicates that Mj has information for Mi, the subscript c encodes this particular information, and the subscript d is some local information that Mj wants to remember.

If Mi is in state req^j_d and Mj is in state res^i_{d′,c}, then a communication takes place: Mi enters the internal state rec^j_{d,c}, which is called a receive state, indicating that it has received information c from Mj, and Mj enters its internal state ack^i_{d′,c}, called an acknowledge state, which indicates that it has just sent information c to Mi. Observe how the superscripts of these states of Mi and Mj indicate the communication partners, the subscript c encodes the information transmitted, and the subscripts d and d′ denote local information that is preserved by the communication step. If only one of Mi and Mj is in the internal state indicated above, then this component is suspended until the other component reaches its corresponding internal state. If that never happens, then the former component is blocked for the rest of the computation. However, also with one or more blocked components, the system continues with the computation using only the non-blocked components.

Accordingly, we assume that, for each component Mi, 1 ≤ i ≤ n, the set of internal states Qi contains a finite number of communication states, that is, request states, response states, receive states, and acknowledge states. Notice that there are only finitely many communication states altogether, which implies that the number of different messages c that are communicated by any communication is finite, too.


As a communication step can be executed if and only if a component Mi reaches a request state req^j_d through its local computation, and the component Mj reaches a corresponding response state res^i_{d′,c}, we admit transition steps of the form req^j_d ∈ δi(q, α) and res^j_{d,c} ∈ δi(q, α) for all components Mi. By a communication step, the component Mi being in state req^j_d is put into a receive state rec^j_{d,c}, and the component Mj is put into the acknowledge state ack^i_{d′,c}, whereby both components keep their current window positions. Thereafter, both components continue with their independent local computations. Thus, the receive and acknowledge states can be interpreted as the successor states of the request and response states, respectively.
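This two-way handshake can be sketched in a few lines of Python (our own hypothetical encoding of the communication states as tuples, not notation from the paper): a single function checks that a request of component i aimed at j matches a response of j aimed at i, and, if so, produces the receive/acknowledge successor states.

```python
# Communication states encoded as tuples (a hypothetical encoding):
#   ("req", j, d)     -- request state req^j_d
#   ("res", i, d, c)  -- response state res^i_{d,c}
#   ("rec", j, d, c)  -- receive state rec^j_{d,c}
#   ("ack", i, d, c)  -- acknowledge state ack^i_{d,c}

def try_communicate(i, state_i, j, state_j):
    """Attempt the handshake between components i and j.

    Returns the successor states (receive state for the requester,
    acknowledge state for the responder) if the two states match;
    otherwise None, meaning a component that is already in a
    communication state stays suspended until its partner arrives."""
    if (state_i[0] == "req" and state_i[1] == j and
            state_j[0] == "res" and state_j[1] == i):
        _, _, d = state_i            # requester's preserved local information
        _, _, d2, c = state_j        # responder's local information and message
        return ("rec", j, d, c), ("ack", i, d2, c)
    return None

# M1 requests from M2 while M2 has information "c" for M1:
print(try_communicate(1, ("req", 2, "d"), 2, ("res", 1, "e", "c")))
# M2 is still computing locally, so M1 stays suspended:
print(try_communicate(1, ("req", 2, "d"), 2, ("mvr",)))
```

Note that both local-information fields d and d2 survive the step unchanged, exactly as in the definition above.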

The type of the components of a PCRA system determines the type of the system. If all its components are restarting automata of type X for some X ∈ T, then the system is said to be of type PC-X, and is called a PC-X-system. A system of type X and degree n is called a PC-X(n)-system. Moreover, if the size of the windows of all components is limited by the constant k, then we talk about PC-X(n, k)-systems.

For a formal description of the behaviour of a PCRA system, we need to define the concepts of configurations and computations. A configuration of a PCRA system M = (M1, M2, . . . , Mn) of degree n is an n-tuple K = (κ1, κ2, . . . , κn), where κi is a configuration of component Mi, 1 ≤ i ≤ n. For an input word w ∈ Σ*, the corresponding initial configuration of M is K0 = (q1c|w$, q2c|w$, . . . , qnc|w$), where qi denotes the initial state of Mi, 1 ≤ i ≤ n, and each configuration that includes a component Accept is called an accepting configuration. A computational step of M is described by the binary relation ⊢_M. Let K = (κ1, κ2, . . . , κn) and K′ = (κ′1, κ′2, . . . , κ′n) be two configurations. Then K ⊢_M K′ if and only if, for all i ∈ {1, 2, . . . , n}, one of the following conditions holds:

1. κi ⊢_{Mi} κ′i (a local computation step);

2. ∃j ∈ {1, 2, . . . , n} \ {i}: κi = ui req^j_{di} vi, κj = uj res^i_{dj,c} vj, κ′i = ui rec^j_{di,c} vi, κ′j = uj ack^i_{dj,c} vj (a communication);

3. ∃j ∈ {1, 2, . . . , n} \ {i}: κi = ui res^j_{di,c} vi, κj = uj req^i_{dj} vj, κ′i = ui ack^j_{di,c} vi, κ′j = uj rec^i_{dj,c} vj (a communication);

4. κ′i = κi, κi ≠ Accept, and no local operation (MVR, MVL, rewrite, or restart) or communication is possible for Mi.

Observe that the fourth item above leads to a non-synchronous behaviour of local computations, as a component can wait arbitrarily long for the execution of a communication. This implies in particular that a component does not know the exact number of local steps that its communication partner has executed since the last communication took place. Thus, the only synchronization between various components is by explicit communications.

In addition, a component is only allowed to keep its current configuration if it either waits for a communication or if it is stuck. Whenever a local computation step or a communication is possible, it is executed immediately. Thus, the only reason why the configuration of a system is not changed during a computation step is that all components are stuck or are in a communication deadlock.

The reflexive and transitive closure ⊢*_M of the relation ⊢_M is the computation relation of M, and the language accepted by a PCRA system M with input alphabet Σ is defined as

L(M) = {w ∈ Σ* | (q1c|w$, . . . , qnc|w$) ⊢*_M (κ1, . . . , κn) and {κ1, . . . , κn} ∩ {Accept} ≠ ∅}.

The class of languages accepted by PC-X-systems is denoted by L(PC-X). A PCRA system is called centralized, if every communication involves the first component, which is then called the master of the system, that is, every communication takes place between the master and some other component. We use the prefix c to denote centralized PCRA systems, that is, a cPC-X-system is a centralized PC-X-system. In a centralized system, the superscripts of the communication states (that denote the communication partner) can be omitted (except for the master).

The class of languages accepted by cPC-X-systems is denoted by L(cPC-X).

A PCRA system is called nondeterministic, if at least one of its components is nondeterministic. If all components are deterministic, then the system is called deterministic, which is denoted by the prefix det.

We close this section with a detailed example of a PCRA system.

Example 2.1. Let Lpal = {w#w^R | w ∈ {a, b}^+} be the marked mirror language.

This language is accepted by a PC-R-system M = (M1, M2) that consists of two deterministic components with window size one. The system M behaves as follows, where the numbers in brackets correspond to the transitions used (see below): initially, M1 moves its window two steps to the right, storing the first symbol to the right of the c|-symbol within its internal control (1). The automaton M2 moves its window initially one step to the right (14). Now, the window of M1 is exactly one position further to the right than the window of M2.

If the second symbol to the right of the c|-symbol is the #-symbol, then M1 checks whether the tape content is of the form c|a#a$ or c|b#b$. In the affirmative M1 accepts (2–4). Otherwise, both windows move right stepwise and synchronously until M1 reads the #-symbol (5–8 and 15–18). The window of M2 is now exactly on the last symbol to the left of #.

Then, M1 moves across the # (9), reads the first symbol of the second syllable, and sends it to M2 (10). So M1 and M2 can compare the symbols positioned directly before and behind the #-symbol. If both symbols are different, then M2 gets stuck, and M1 cannot accept anymore. If both symbols are equal, then M1 deletes the first symbol of the second syllable (11), applies a restart operation (12), moves its window again two steps to the right (1), and requests information from M2. In parallel, M2 sends a message to M1 telling it to delete a symbol of the first syllable (19). Thus, M1 deletes the current symbol from the tape (5, 13) and applies one more restart. Meanwhile M2 deletes the symbol to the left of # and also applies a restart operation (20, 21). Now the computation is repeated with the modified tape contents.

Observe that M2 never reads or changes the second syllable, and the window of M2 never moves across the #-symbol. On the other hand, M1 not only works on the second syllable during the comparison, but it shortens the first syllable as well, so that the leftmost syllables of both M1 and M2 have the same length.

This is important for counting the number of MVR steps, so that the window of M2 is positioned exactly one position before #. For that, M1 deletes the second symbol of the first syllable after each comparison. Thus, the first symbol of the first syllable remains unchanged and can therefore be used in the last cycle, where M1 compares this symbol with the last symbol of the second syllable.

Formally, M1 and M2 are defined as follows:

M1 = ({q0, q1, q2, qa, qa2, qb, qb2, qe, req, recmvr, resmvr, ackmvr, resa, resb, acka, ackb, qr, recdel}, {a, b, #}, {a, b, #}, c|, $, q0, 1, δ1), with δ1 defined as follows:

1) δ1(q0, c|) = (q0, MVR), δ1(q0, a) = (qa, MVR), δ1(q0, b) = (qb, MVR),
2) δ1(qa, #) = (qa2, MVR), δ1(qb, #) = (qb2, MVR),
3) δ1(qa2, a) = (qe, MVR), δ1(qb2, b) = (qe, MVR),
4) δ1(qe, $) = Accept,
5) δ1(qa, a) = δ1(qa, b) = δ1(qb, a) = δ1(qb, b) = req,
6) δ1(recmvr, a) = δ1(recmvr, b) = (q1, MVR),
7) δ1(q1, a) = δ1(q1, b) = δ1(q1, #) = resmvr,
8) δ1(ackmvr, a) = δ1(ackmvr, b) = (q1, MVR),
9) δ1(ackmvr, #) = (q2, MVR),
10) δ1(q2, a) = resa, δ1(q2, b) = resb,
11) δ1(acka, a) = δ1(ackb, b) = (qr, ε),
12) δ1(qr, a) = δ1(qr, b) = δ1(qr, #) = Restart,
13) δ1(recdel, a) = δ1(recdel, b) = (qr, ε).

M2 = ({q0, q1, qr, resmvr, ackmvr, recmvr, req, reca, recb, resdel, ackdel}, {a, b, #}, {a, b, #}, c|, $, q0, 1, δ2), where δ2 is defined as follows:

14) δ2(q0, c|) = (q0, MVR),
15) δ2(q0, a) = δ2(q0, b) = resmvr,
16) δ2(ackmvr, a) = δ2(ackmvr, b) = req,
17) δ2(recmvr, a) = δ2(recmvr, b) = (q1, MVR),
18) δ2(q1, a) = δ2(q1, b) = req,
19) δ2(reca, a) = δ2(recb, b) = resdel,
20) δ2(ackdel, a) = δ2(ackdel, b) = (qr, ε),
21) δ2(qr, a) = δ2(qr, b) = δ2(qr, #) = Restart.

A computation of M for the input word ab#ba is shown in Figure 1.


(q0 c|ab#ba$, q0 c|ab#ba$) ⊢_M (c| q0 ab#ba$, c| q0 ab#ba$)
⊢_M (c|a qa b#ba$, c| resmvr ab#ba$) ⊢_M (c|a req b#ba$, c| resmvr ab#ba$)
⊢_M (c|a recmvr b#ba$, c| ackmvr ab#ba$) ⊢_M (c|ab q1 #ba$, c| req ab#ba$)
⊢_M (c|ab resmvr #ba$, c| req ab#ba$) ⊢_M (c|ab ackmvr #ba$, c| recmvr ab#ba$)
⊢_M (c|ab# q2 ba$, c|a q1 b#ba$) ⊢_M (c|ab# resb ba$, c|a req b#ba$)
⊢_M (c|ab# ackb ba$, c|a recb b#ba$) ⊢_M (c|ab# qr a$, c|a resdel b#ba$)
⊢_M (q0 c|ab#a$, c|a resdel b#ba$) ⊢_M (c| q0 ab#a$, c|a resdel b#ba$)
⊢_M (c|a qa b#a$, c|a resdel b#ba$) ⊢_M (c|a req b#a$, c|a resdel b#ba$)
⊢_M (c|a recdel b#a$, c|a ackdel b#ba$) ⊢_M (c|a qr #a$, c|a qr #ba$)
⊢_M (q0 c|a#a$, q0 c|a#ba$) ⊢_M (c| q0 a#a$, c| q0 a#ba$)
⊢_M (c|a qa #a$, c| resmvr a#ba$) ⊢_M (c|a# qa2 a$, c| resmvr a#ba$)
⊢_M (c|a#a qe $, c| resmvr a#ba$) ⊢_M (Accept, c| resmvr a#ba$).

Figure 1. A computation of M for the input word ab#ba.

3. Centralized versus non-centralized PCRA systems

As centralized PCRA systems are a special case of non-centralized PCRA systems, it is clear that non-centralized systems are at least as powerful as centralized systems. Therefore, it remains to determine whether the property of centralization actually decreases the expressive power of PCRA systems. For example, this phenomenon occurs in the setting of (synchronous) deterministic parallel communicating finite automata [1] and for PC grammar systems with regular or linear components [2]. Here we will show that in the case of PCRA systems, centralized systems have the same expressive power as non-centralized systems with the same type of components.

This result will be proved in two main steps. First, we establish it by introducing an additional component that serves as the master of the new system. Then, we present a construction that allows us to get rid of the additional component, in this way obtaining a centralized system of the same degree and with the same type of components as the given non-centralized system.

Proposition 3.1. For any X ∈ T and any n ≥ 2, one can effectively construct a centralized PC-X-system M′ of degree n + 1 from a given PC-X-system M of degree n such that L(M′) = L(M) holds.

Proof. Let X ∈ T, and let M = (M1, M2, . . . , Mn) be a PC-X-system of degree n, where Mi = (Qi, Σ, Γi, c|, $, q0^(i), k, δi) (1 ≤ i ≤ n). From M we construct a centralized PC-X-system M′ = (M0, M′1, M′2, . . . , M′n) such that L(M′) = L(M). Here, M0 is a new master component that controls all communications of M′, that is, while M1, . . . , Mn can communicate with each other, M′1, . . . , M′n can only communicate with M0. The main idea is that the master cyclically asks the clients M′1, . . . , M′n about their current situations. For this, it will be important to ensure that computation loops and communication deadlocks of the clients do not cause the master to get stuck.


In the following, we describe the behaviour of the components M′1, . . . , M′n, and then we give their formal definition. Afterwards we do the same for the new master component. The bracketed annotations refer to the corresponding parts of the definition below.

The clients are modified as follows: the operations MVR, Restart, and rewrite are retained unchanged (A1). Observe that there are exactly three possibilities of how a local computation of a component Mi can end: it may accept (A4), it may get stuck (A3), or it may reach a communication state (A5, A7). For these situations the component M′i sends a corresponding message to the master by entering an appropriate response state: [accept], [⊥], [req(j)], or [res(j, c)]. After sending the response, the component is either stuck (if [accept] or [⊥] was sent) or it enters a request state to await information from the master about the currently simulated communication step. The original communication states are included in the subscript (the local information) of the new communication states in order to enable the master to simulate the communication steps correctly.

By using MVL operations together with MVR operations, a client may get into an infinite loop during a local computation. This would lead to a deadlock of the system when the master tries to communicate with that component. This is avoided as follows: whenever a component is about to perform an MVL operation, it sends the message [mvl] to the master, stores the successor state within the index of the communication state, and continues the local computation after this communication has been completed (A2).

Formally, a component M′i (1 ≤ i ≤ n) is obtained by modifying the transition relation of Mi as follows. Here q, q′ ∈ Qi, α is a possible content of the window, j ∈ {1, 2, . . . , n}, and c and d are strings:

A1. For all (q′, MVR) ∈ δi(q, α), (q′, MVR) ∈ δ′i(q, α) (analogously for Restart and rewrite operations).

A2. For all (q′, MVL) ∈ δi(q, α),

res_{[q′],[mvl]} ∈ δ′i(q, α) and

δ′i(ack_{[q′],[mvl]}, α) = {(q′, MVL)}.

A3. For all δi(q, α) = ∅, δ′i(q, α) = {res_[⊥]}.

A4. For all Accept ∈ δi(q, α), res_[accept] ∈ δ′i(q, α).

A5. For req^j_d ∈ δi(q, α),

res_{[req(j,d)],[req(j)]} ∈ δ′i(q, α) and

req_{[req(j,d)]} ∈ δ′i(ack_{[req(j,d)],[req(j)]}, α).

The local information d of the original communication state is stored within the local information of the new communication state, but it is not sent to the master. After sending the current communication state to the master, M′i requests the information from the master of whether its communication partner has sent a corresponding communication state. If the communication partner does not do so, then M′i is stuck just like Mi in this situation.


A6. For A ∈ δi(rec^j_{d,c}, α), A ∈ δ′i(rec_{[req(j,d)],[res(j,c)]}, α).

The action A can be performed by M′i when the master tells it that the communication partner Mj has sent the corresponding communication state.

A7. For res^j_{d,c} ∈ δi(q, α),

res_{[res(j,d,c)],[res(j,c)]} ∈ δ′i(q, α) and

req_{[res(j,d,c)]} ∈ δ′i(ack_{[res(j,d,c)],[res(j,c)]}, α).

This case is similar to A5.

A8. For A ∈ δi(ack^j_{d,c}, α), A ∈ δ′i(rec_{[res(j,d,c)],[req(j)]}, α).

This case is similar to A6.

The sets of states of the modified components are given indirectly by the definition of their transition relations. The initial state of M′i is the same as that of Mi except in the special case that it is a communication state. If the initial state of Mi is req^j_d (or res^j_{d,c}), then the initial state of M′i is res_{[req(j,d)],[req(j)]} (or res_{[res(j,d,c)],[res(j,c)]}).

The master component M0 has communication states only. In its index (as local information) each of these states contains a situation tuple that has a component for each of the automata M′1 to M′n. For example, the tuple (∗, ⊥, res(1, c)) describes the situation that the master does not (yet) know the current situation of M′1 (that is, the master still has to ask for it), M′2 is stuck, and M′3 wants to send a response message to M′1 with information c.

The initial state of the master is req^1_{(∗,...,∗)}. Thus, initially the master does not have any information about the current situations of the other components, and it starts by asking M′1 for the result of its local computation. Observe that the superscript of the communication state does indeed refer to the index of the corresponding component and not to its position within the system M′. In the following, the master asks all components for which its current situation tuple contains the symbol ∗ for the results of their local computations by contacting them one after the other in a cyclic manner. If the current situation tuple is (t1, t2, . . . , tn), and if M′m is the last component that the master has communicated with, then the next component M′m′ to be contacted is chosen as follows. If N1 = {i | m < i ≤ n ∧ ti = ∗} and N2 = {i | 1 ≤ i < m ∧ ti = ∗}, then

    m′ = min(N1), if N1 ≠ ∅,
    m′ = min(N2), if N1 = ∅ ∧ N2 ≠ ∅.

If there is no ∗ left in the situation tuple, and hence N1 = N2 = ∅, then the system M′ is stuck, and therewith the current computation does not accept. In addition, observe that the successor component M′m′ is obtained deterministically if it exists.
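The cyclic choice of the next component can be expressed as a small function (our own sketch; the tuple entries are plain strings, with "*" standing for the symbol ∗, and components are numbered 1 through n):

```python
def next_component(situation, m):
    """Index of the next component the master contacts after talking to
    component m, or None if no entry of the situation tuple is "*"
    (in which case the system is stuck)."""
    n = len(situation)
    n1 = [i for i in range(m + 1, n + 1) if situation[i - 1] == "*"]
    n2 = [i for i in range(1, m) if situation[i - 1] == "*"]
    if n1:
        return n1[0]        # min(N1): continue forward in the cycle
    if n2:
        return n2[0]        # min(N2): wrap around to the front
    return None             # no "*" left: the master is stuck

print(next_component(("*", "bot", "*"), 1))   # 3
print(next_component(("*", "bot", "*"), 3))   # 1
```

Because the lists are built in increasing index order, the choice is deterministic whenever it exists, as noted above.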

As in the centralized system M′ each communication step must involve the master M0, the communication steps between two components Mi and Mj of the original system M must be simulated by a sequence of communication steps between M′i and the master and between M′j and the master. This will be done in such a way that the master controls and forwards all communication demands. The information about the original communication demand (component Mi enters state req^j_d or res^j_{d,c}) is first stored within the situation tuple of the master (see B3 "otherwise" and B6 "otherwise" below). When the master receives a corresponding message from M′j, it immediately informs both communication partners about the respective other message (see B3 "if" to B5 and B6 "if" to B8 below). After these communication steps have been performed, the symbol ∗ is stored in the new situation tuple for both communication partners, and then the master continues with asking the next component (see above). Moreover, when receiving the message [mvl], the master just goes on with communicating with the (cyclic) successor component without changing the information about M′m in its situation tuple (B9). Therefore, M′m will be asked again after the master has communicated with all other components that are not yet stuck or waiting in a communication state. In between, M′m continues performing local computation steps. Finally, if the master receives the information [accept] from some component, then it accepts itself, and therewith the system M′ accepts (B1).

Below, the formal definition of the transition function of the master M is given, where the transitions are defined for all 1 ≤ m ≤ n:

B1. δ(rec^m_{t1,...,tn}, [accept], α) = Accept,

B2. δ(rec^m_{t1,...,tn}, [⊥], α) = req^{m'}_{t1,...,tm−1,⊥,tm+1,...,tn},

B3. δ(rec^m_{t1,...,tn}, [req(j)], α) =
        res^m_{t1,...,tm−1,req(j),tm+1,...,tj−1,res(m,c),tj+1,...,tn, [res(j,c)]}   if tj = res(m,c),
        req^{m'}_{t1,...,tm−1,req(j),tm+1,...,tn}   otherwise,

B4. δ(ack^m_{t1,...,tm−1,req(j),tm+1,...,tj−1,res(m,c),tj+1,...,tn, [res(j,c)]}, α) =
        res^j_{t1,...,tm−1,∗,tm+1,...,tj−1,∗,tj+1,...,tn, [req(m)]},

B5. δ(ack^j_{t1,...,tm−1,∗,tm+1,...,tj−1,∗,tj+1,...,tn, [req(m)]}, α) =
        req^{m'}_{t1,...,tm−1,∗,tm+1,...,tj−1,∗,tj+1,...,tn},

B6. δ(rec^m_{t1,...,tn}, [res(j,c)], α) =
        res^m_{t1,...,tm−1,res(j,c),tm+1,...,tj−1,req(m),tj+1,...,tn, [req(j)]}   if tj = req(m),
        req^{m'}_{t1,...,tm−1,res(j,c),tm+1,...,tn}   otherwise,

B7. δ(ack^m_{t1,...,tm−1,res(j,c),tm+1,...,tj−1,req(m),tj+1,...,tn, [req(j)]}, α) =
        res^j_{t1,...,tm−1,∗,tm+1,...,tj−1,∗,tj+1,...,tn, [res(m,c)]},

B8. δ(ack^j_{t1,...,tm−1,∗,tm+1,...,tj−1,∗,tj+1,...,tn, [res(m,c)]}, α) =
        req^{m'}_{t1,...,tm−1,∗,tm+1,...,tj−1,∗,tj+1,...,tn},

B9. δ(rec^m_{t1,...,tm−1,∗,tm+1,...,tn}, [mvl], α) = {req^{m'}_{t1,...,tm−1,∗,tm+1,...,tn}}.
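The bookkeeping performed by the master in rules B2, B3, and B6 can be illustrated by the following sketch. The Python names and the tuple encoding are our own, hypothetical choices, and the communication sequences B4/B5 resp. B7/B8 are collapsed into a single returned 'relay' action:

```python
def master_receives(situation, m, msg):
    """Sketch of rules B2, B3 and B6: how the master updates its
    situation tuple on receiving `msg` from component M_m. Entries are
    '*', 'bottom' (standing in for the symbol for a stuck component),
    ('req', j), or ('res', j, c). Encoding and names are ours, not the
    paper's. Returns ('relay', j, c) when a stored demand is matched
    (rules B4/B5 resp. B7/B8 then resolve the communication), else None.
    """
    kind = msg[0]
    if kind == 'bottom':                           # B2: M_m is stuck
        situation[m - 1] = 'bottom'
        return None
    if kind == 'req':                              # B3
        j = msg[1]
        t_j = situation[j - 1]
        if isinstance(t_j, tuple) and t_j[:2] == ('res', m):
            c = t_j[2]
            situation[m - 1] = situation[j - 1] = '*'
            return ('relay', j, c)                 # inform both partners
        situation[m - 1] = ('req', j)              # store the demand
        return None
    if kind == 'res':                              # B6, symmetric to B3
        j, c = msg[1], msg[2]
        if situation[j - 1] == ('req', m):
            situation[m - 1] = situation[j - 1] = '*'
            return ('relay', j, c)
        situation[m - 1] = ('res', j, c)
        return None
```

After a resolved relay, both entries hold ∗ again, so both partners will be polled in later rounds of the cyclic schedule.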

Finally, whenever a component Mi reaches the accepting configuration, then Mi enters the state res_{[accept]}, and M, and therewith M', accepts the input. In particular, even if one component (or more) is in an infinite loop of computation, M continues with asking all components cyclically. On the other hand, if none of the components reaches the accepting configuration, then none of the components M1 to Mn can enter the state res_{[accept]}, and thus, M cannot reach the accepting configuration, and M' does not accept the input. Hence, L(M') = L(M), which completes the proof.

In the construction presented in the proof of Proposition 3.1, the master component is introduced as an additional component of the centralized PCRA system. Thus, the simulating centralized PCRA system M' has more components than the non-centralized PCRA system M being simulated. However, we can actually get rid of this extra component. Observe that the master component in the above construction does not perform any local computation steps at all, that is, it does not access its tape at all. Hence, we can combine the master with component M1 into a new component M̂1 that is essentially the direct product of M and M1. However, the situation is complicated by the fact that M1 may perform restart steps, during which its internal state is reset to its initial state. Accordingly, also M̂1 will have to perform restart steps, causing it to lose all information about the situation tuple that is currently stored by its subcomponent M. To overcome this problem, we need another component, say M2, that stores the information about the situation tuple immediately before M̂1 performs a restart step, so that M̂1 can restore this information immediately after completing a restart step by requesting it from M2. Based on this idea, the following result can be derived.
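The idea of parking the situation tuple in a helper component around a restart can be sketched as follows. All class and attribute names here are hypothetical, illustrative choices; the actual construction proceeds via communication steps, as detailed in the proof below:

```python
class TupleStore:
    """Plays the role of the helper component M2 in this sketch."""
    def __init__(self):
        self._saved = None

    def save(self, info):
        self._saved = info

    def load(self):
        return self._saved


class MergedM1:
    """Sketch of the combined component: M1 fused with the master.
    A restart resets M1's half of the state to its initial state; the
    master's situation tuple is parked in the store immediately before
    the restart and fetched back immediately after. All names here are
    illustrative, not taken from the paper.
    """
    def __init__(self, store, n):
        self.store = store
        self.situation = ('*',) * n   # the master's half of the state
        self.m1_state = 'q0'          # M1's half of the state

    def restart(self):
        self.store.save(self.situation)      # before the restart step
        self.m1_state = 'q0'                 # the restart forgets...
        self.situation = None                # ...both halves
        self.situation = self.store.load()   # restored afterwards
```

The point of the design is that the helper never restarts itself, so the parked tuple survives the forgetful restart of the merged component.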

Proposition 3.2. For any X ∈ T and any n ≥ 2, one can effectively construct a centralized PC-X-system M' of degree n from a given PC-X-system M of degree n such that L(M') = L(M) holds.

Proof. For simplicity, transitions of the form q ∈ δ(p, α) are used in this proof, such that a component can change its internal state without performing any operation on the tape. It can easily be shown by successive elimination of these transitions that each system using this kind of transitions can be transformed into an equivalent system without such transitions.
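One way to carry out this elimination, sketched here under our own naming and a dictionary encoding of the transition relation (the paper only asserts that the elimination is possible), is to compute for each state its closure under state-only transitions and let the state inherit every tape operation enabled anywhere in that closure:

```python
def closure(p, eps):
    """All states reachable from p via state-only transitions; `eps`
    maps a state to the states it can silently change into."""
    seen, stack = {p}, [p]
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen


def eliminate(states, symbols, delta, eps):
    """Build a transition relation without state-only steps: a state p
    may directly use every tape operation that some state in its
    closure could use. `delta` maps (state, symbol) to a set of
    operations."""
    new_delta = {}
    for p in states:
        for a in symbols:
            ops = set()
            for q in closure(p, eps):
                ops |= delta.get((q, a), set())
            if ops:
                new_delta[(p, a)] = ops
    return new_delta
```

The resulting system performs exactly the tape operations of the original one, with the silent state changes folded away.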

Now, let M' = (M, M1, M2, . . . , Mn) be the centralized system that was constructed in the proof of Proposition 3.1. Since the master component M only performs communication steps independently of its tape content, we merge M and M1 into a new component M̂1 in the first step of this proof. However, in order to ensure that M̂1 does not lose the information on the actual state of M when performing a restart, we use annotated restart states for M̂1, that is, we take M̂1 to be a non-forgetting X-automaton (see, e.g., [8]). This annotation will be eliminated in the second part of the proof.

Each computation of M̂1 starts with the new initial state (req^1_{∗,...,∗}, q_0^{(1)}), which combines the original initial states of M and M1. Now, M̂1 behaves as follows. Whenever M wants to communicate with M1 (as for the initial state) and the latter is still not in a communication state, M̂1 keeps the current state of M and simulates the computation of M1 until M1 reaches a corresponding communication state:

δ̂1((req^1_d, q), α) = {((req^1_d, q'), MVR) | (q', MVR) ∈ δ1(q, α)}
                     ∪ {((req^1_d, q'), MVL) | (q', MVL) ∈ δ1(q, α)}
                     ∪ {((req^1_d, q'), β) | (q', β) ∈ δ1(q, α)}
                     ∪ {(Restart, (req^1_d, q_0^{(1)})) | Restart ∈ δ1(q, α)},

δ̂1((res^1_{d,c}, q), α) = {((res^1_{d,c}, q'), MVR) | (q', MVR) ∈ δ1(q, α)}
                        ∪ {((res^1_{d,c}, q'), MVL) | (q', MVL) ∈ δ1(q, α)}
                        ∪ {((res^1_{d,c}, q'), β) | (q', β) ∈ δ1(q, α)}
                        ∪ {(Restart, (res^1_{d,c}, q_0^{(1)})) | Restart ∈ δ1(q, α)}.
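The lifting of M1's transitions performed by δ̂1 above can be mimicked as follows (an illustrative sketch with names of our own choosing): the master's communication state is frozen while M1 moves, and a restart is annotated with the state in which the computation resumes.

```python
def lift(delta1, comm_state, q0):
    """Pair each transition of M1 with the master's frozen
    communication state `comm_state`; a Restart is annotated with the
    state to resume in (non-forgetting behaviour). `delta1(q, alpha)`
    yields either the token 'Restart' or pairs (q', op). Names and
    encoding are ours, not the paper's."""
    def step(q, alpha):
        lifted = set()
        for t in delta1(q, alpha):
            if t == 'Restart':
                # annotated restart: the combined state survives
                lifted.add(('Restart', (comm_state, q0)))
            else:
                q_next, op = t   # (q', op) with op in {MVR, MVL, beta}
                lifted.add(((comm_state, q_next), op))
        return lifted
    return step
```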

As mentioned before, the restart operation is annotated by a state. If M1 enters a corresponding communication state, then the communication step between M and M1 is resolved by M̂1:

δ̂1((req^1_d, res_{d,c}), α) = {(rec^1_{d,c}, ack_{d,c})},
δ̂1((res^1_{d,c}, req_d), α) = {(ack^1_{d,c}, rec_{d,c})}.

If M1 does not reach a corresponding communication state, then M̂1 is blocked in the system M' just as the master M is in this situation. Thereafter, the next communication step of M is initiated immediately. For communications between M and M1 we therefore define:

δ̂1((rec^1_{d,c}, ack_{d,c}), α) = {(p, ack_{d,c}) | p ∈ δ(rec^1_{d,c}, α)},
δ̂1((ack^1_{d,c}, rec_{d,c}), α) = {(p, rec_{d,c}) | p ∈ δ(ack^1_{d,c}, α)}.

For communications between M and any other component, M̂1 enters the corresponding communication state and, moreover, keeps the current internal state of M1 within the local information of its communication state:

δ̂1((req^i_d, q), α) = {req^i_{[d,q]}},
δ̂1(rec^i_{[d,q],c}, α) = {(p, q) | p ∈ δ(rec^i_{d,c}, α)},
δ̂1((res^i_{d,c}, q), α) = {res^i_{[d,q],c}},
δ̂1(ack^i_{[d,q],c}, α) = {(p, q) | p ∈ δ(ack^i_{d,c}, α)},

for i = 2, . . . , n.

Above, we used annotations for the restart operations of M̂1 in order not to lose its current internal state during restarts. Now, in the second part of the proof, we construct a system M'' = (M̂1, M2, M3, . . . , Mn), where we modify M̂1 and M2 in such a way that this annotation is not necessary anymore. The basic idea is that M̂1 controls the computation of M2 in the following sense: M2 always requests instructions from M̂1. There are only two different situations in which M2 has to execute computation steps: either M̂1 needs M2 for storing its current state during a restart operation, or M̂1 wants to simulate a communication between M̂1 and M2. In the second case it is important that M2 simulates the local
