

HAL Id: hal-00019161

https://hal.archives-ouvertes.fr/hal-00019161

Preprint submitted on 17 Feb 2006


Learning rational stochastic languages

François Denis, Yann Esposito, Amaury Habrard

To cite this version:

François Denis, Yann Esposito, Amaury Habrard. Learning rational stochastic languages. 2006. ⟨hal-00019161⟩


Learning rational stochastic languages

François Denis, Yann Esposito, Amaury Habrard

Laboratoire d'Informatique Fondamentale de Marseille (L.I.F.), UMR CNRS 6166
{fdenis,esposito,habrard}@cmi.univ-mrs.fr

Abstract. Given a finite set of words $w_1, \ldots, w_n$ independently drawn according to a fixed unknown distribution law $P$ called a stochastic language, a usual goal in Grammatical Inference is to infer an estimate of $P$ in some class of probabilistic models, such as Probabilistic Automata (PA). Here, we study the class $S_{\mathbb{R}}^{rat}(\Sigma)$ of rational stochastic languages, which consists of the stochastic languages that can be generated by Multiplicity Automata (MA) and which strictly includes the class of stochastic languages generated by PA. Rational stochastic languages have a minimal normal representation which may be very concise, and whose parameters can be efficiently estimated from stochastic samples. We design an efficient inference algorithm DEES which aims at building a minimal normal representation of the target. Despite the fact that no recursively enumerable class of MA computes exactly $S_{\mathbb{Q}}^{rat}(\Sigma)$, we show that DEES strongly identifies $S_{\mathbb{Q}}^{rat}(\Sigma)$ in the limit. We study the intermediary MA output by DEES and show that they compute rational series which converge absolutely to one and which can be used to provide stochastic languages which closely estimate the target.

1 Introduction

In probabilistic grammatical inference, it is supposed that data arise in the form of a finite set of words $w_1, \ldots, w_n$, built on a predefined alphabet $\Sigma$, and independently drawn according to a fixed unknown distribution law on $\Sigma^*$ called a stochastic language. A usual goal is then to infer an estimate of this distribution law in some class of probabilistic models, such as Probabilistic Automata (PA), which have the same expressivity as Hidden Markov Models (HMM). PA are identifiable in the limit [6]. However, to our knowledge, there exists no efficient inference algorithm able to deal with the whole class of stochastic languages that can be generated from PA. Most previous works use restricted subclasses of PA such as Probabilistic Deterministic Automata (PDA) [5,12].

On the other hand, Probabilistic Automata are particular cases of Multiplicity Automata (MA), and the stochastic languages which can be generated by multiplicity automata are special cases of rational series, which we call rational stochastic languages. MA have been used in grammatical inference in a variant of the exact learning model of Angluin [3,1,2], but not in probabilistic grammatical inference. Let us denote by $S_K^{rat}(\Sigma)$ the class of rational stochastic languages over the semiring $K$. When $K = \mathbb{Q}^+$ or $K = \mathbb{R}^+$, $S_K^{rat}(\Sigma)$ is exactly the class of stochastic languages generated by PA with parameters in $K$. But when $K = \mathbb{Q}$ or $K = \mathbb{R}$, we obtain strictly greater classes which provide several advantages and at least one drawback: elements of $S_{K^+}^{rat}(\Sigma)$ may have a significantly smaller representation in $S_K^{rat}(\Sigma)$, which is clearly an advantage from a learning perspective; elements of $S_K^{rat}(\Sigma)$ have a minimal normal representation, while such normal representations do not exist for PA; parameters of these minimal representations are directly


related to probabilities of natural events of the form $u\Sigma^*$, which can be efficiently estimated from stochastic samples; lastly, when $K$ is a field, rational series over $K$ form a vector space and efficient linear algebra techniques can be used to deal with rational stochastic languages. However, the class $S_{\mathbb{Q}}^{rat}(\Sigma)$ presents a serious drawback: there exists no recursively enumerable subset of MA which exactly generates it [6]. Moreover, this class of representations is unstable: arbitrarily close to an MA which generates a stochastic language, we may find MA whose associated rational series $r$ takes negative values and is not absolutely convergent: the global weight $\sum_{w\in\Sigma^*} r(w)$ may be unbounded or not (absolutely) defined. However, we show that $S_{\mathbb{Q}}^{rat}(\Sigma)$ is strongly identifiable in the limit: we design an algorithm DEES such that, for any target $P \in S_{\mathbb{Q}}^{rat}(\Sigma)$ and given access to an infinite sample $S$ drawn according to $P$, will converge in a finite but unbounded number of steps to a minimal normal representation of $P$. Moreover, DEES is efficient: it runs within polynomial time in the size of the input and it computes a minimal number of parameters with classical statistical rates of convergence. However, before converging to the target, DEES outputs MA which are close to the target but which do not compute stochastic languages. The question is: what kind of guarantees do we have on these intermediary hypotheses and how can we use them for a probabilistic inference purpose? We show that, since the algorithm aims at building a minimal normal representation of the target, the intermediary hypotheses $r$ output by DEES have a nice property: they converge absolutely to 1, i.e. $\overline{r} = \sum_{w\in\Sigma^*} |r(w)| < \infty$ and $\sum_{k\geq 0} r(\Sigma^k) = 1$. As a consequence, $r(X)$ is defined without ambiguity for any $X \subseteq \Sigma^*$, and it can be shown that $N_r = \sum_{r(u)<0} |r(u)|$ tends to 0 as the learning proceeds. Given any such series $r$, we can efficiently compute a stochastic language $p_r$, which is not rational, but which has the property that $e^{-N_r/\overline{r}} \leq p_r(u)/r(u) \leq 1$ for any word $u$ such that $r(u) > 0$. Our conclusion is that, despite the fact that no recursively enumerable class of MA represents the class of rational stochastic languages, MA can be used efficiently to infer such stochastic languages.

Classical notions on stochastic languages, rational series, and multiplicity automata are recalled in Section 2. We study an example which shows that the representation of rational stochastic languages by MA with real parameters may be very concise. We introduce our inference algorithm DEES in Section 3 and we show that $S_{\mathbb{Q}}^{rat}(\Sigma)$ is strongly identifiable in the limit. We study the properties of the MA output by DEES in Section 4 and show that they define absolutely convergent rational series which can be used to compute stochastic languages which are estimates of the target.

2 Preliminaries

Formal power series and stochastic languages. Let $\Sigma^*$ be the set of words on the finite alphabet $\Sigma$. The empty word is denoted by $\varepsilon$ and the length of a word $u$ is denoted by $|u|$. For any integer $k$, let $\Sigma^k = \{u \in \Sigma^* : |u| = k\}$ and $\Sigma^{\leq k} = \{u \in \Sigma^* : |u| \leq k\}$. We denote by $<$ the length-lexicographic order on $\Sigma^*$. A subset $P$ of $\Sigma^*$ is prefixial if for any $u, v \in \Sigma^*$, $uv \in P \Rightarrow u \in P$. For any $S \subseteq \Sigma^*$, let $pref(S) = \{u \in \Sigma^* : \exists v \in \Sigma^*, uv \in S\}$ and $fact(S) = \{v \in \Sigma^* : \exists u, w \in \Sigma^*, uvw \in S\}$.
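As an illustration, the prefix and factor sets of a finite sample can be computed directly; a minimal Python sketch (the function names pref and fact mirror the notation above, the sample is our own):

```python
def pref(S):
    """pref(S): all prefixes of the words of S (the empty word included)."""
    return {w[:i] for w in S for i in range(len(w) + 1)}

def fact(S):
    """fact(S): all factors (contiguous subwords) of the words of S."""
    return {w[i:j] for w in S for i in range(len(w) + 1)
                   for j in range(i, len(w) + 1)}

S = ["abb", "ba"]
assert "ab" in pref(S) and "bb" in fact(S) and "" in pref(S)
```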

Let $\Sigma$ be a finite alphabet and $K$ a semiring. A formal power series is a mapping $r$ of $\Sigma^*$ into $K$. In this paper, we always suppose that $K \in \{\mathbb{R}, \mathbb{Q}, \mathbb{R}^+, \mathbb{Q}^+\}$. The set of all formal power series is denoted by $K\langle\langle\Sigma\rangle\rangle$. Let us denote by $supp(r)$ the support of $r$, i.e. the set $\{w \in \Sigma^* : r(w) \neq 0\}$.


A stochastic language is a formal series $p$ which takes its values in $\mathbb{R}^+$ and such that $\sum_{w\in\Sigma^*} p(w) = 1$. For any language $L \subseteq \Sigma^*$, let us denote $\sum_{w\in L} p(w)$ by $p(L)$. The set of all stochastic languages over $\Sigma$ is denoted by $S(\Sigma)$. For any stochastic language $p$ and any word $u$ such that $p(u\Sigma^*) \neq 0$, we define the stochastic language $u^{-1}p$ by $u^{-1}p(w) = \frac{p(uw)}{p(u\Sigma^*)}$; $u^{-1}p$ is called the residual language of $p$ wrt $u$. Let us denote by $res(p)$ the set $\{u \in \Sigma^* : p(u\Sigma^*) \neq 0\}$ and by $Res(p)$ the set $\{u^{-1}p : u \in res(p)\}$.
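For illustration, the empirical residual $u^{-1}P_S$ of a sample can be computed directly from word counts; a small sketch under our own helper names (empirical, residual), not taken from the paper:

```python
from collections import Counter

def empirical(S):
    """Empirical distribution P_S of a sample S (a list of words)."""
    counts, n = Counter(S), len(S)
    return {w: c / n for w, c in counts.items()}

def residual(p, u):
    """Residual language u^{-1}p(w) = p(uw) / p(u Sigma^*), as a dict."""
    mass = sum(pw for w, pw in p.items() if w.startswith(u))  # p(u Sigma^*)
    if mass == 0:
        raise ValueError("u is not in res(p)")
    return {w[len(u):]: pw / mass for w, pw in p.items() if w.startswith(u)}

p = empirical(["a", "ab", "ab", "b"])
print(residual(p, "a"))  # {'': 0.333..., 'b': 0.666...}
```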

We call sample any finite sequence of words. Let $S$ be a sample. We denote by $P_S$ the empirical distribution on $\Sigma^*$ associated with $S$. A complete presentation of $P$ is an infinite sequence $S$ of words independently drawn according to $P$. We denote by $S_n$ the sequence composed of the $n$ first words of $S$. We shall make frequent use of the Borel-Cantelli Lemma, which states that if $(A_k)_{k\in\mathbb{N}}$ is a sequence of events such that $\sum_{k\in\mathbb{N}} Pr(A_k) < \infty$, then the probability that only a finite number of the $A_k$ occur is 1.

Automata. Let $K$ be a semiring. A $K$-multiplicity automaton (MA) is a 5-tuple $\langle\Sigma, Q, \varphi, \iota, \tau\rangle$ where $Q$ is a finite set of states, $\varphi: Q \times \Sigma \times Q \to K$ is the transition function, $\iota: Q \to K$ is the initialization function and $\tau: Q \to K$ is the termination function. Let $Q_I = \{q \in Q \mid \iota(q) \neq 0\}$ be the set of initial states and $Q_T = \{q \in Q \mid \tau(q) \neq 0\}$ be the set of terminal states. The support of an MA $A = \langle\Sigma, Q, \varphi, \iota, \tau\rangle$ is the NFA $supp(A) = \langle\Sigma, Q, Q_I, Q_T, \delta\rangle$ where $\delta(q, x) = \{q' \in Q \mid \varphi(q, x, q') \neq 0\}$. We extend the transition function $\varphi$ to $Q \times \Sigma^* \times Q$ by $\varphi(q, wx, r) = \sum_{s\in Q} \varphi(q, w, s)\varphi(s, x, r)$ and $\varphi(q, \varepsilon, r) = 1$ if $q = r$ and $0$ otherwise, for any $q, r \in Q$, $x \in \Sigma$ and $w \in \Sigma^*$. For any finite subset $L \subseteq \Sigma^*$ and any $R \subseteq Q$, define $\varphi(q, L, R) = \sum_{w\in L, r\in R} \varphi(q, w, r)$.

For any MA $A$, let $r_A$ be the series defined by $r_A(w) = \sum_{q,r\in Q} \iota(q)\varphi(q, w, r)\tau(r)$. For any $q \in Q$, we define the series $r_{A,q}$ by $r_{A,q}(w) = \sum_{r\in Q} \varphi(q, w, r)\tau(r)$. A state $q \in Q$ is accessible (resp. co-accessible) if there exist $q_0 \in Q_I$ (resp. $q_t \in Q_T$) and $u \in \Sigma^*$ such that $\varphi(q_0, u, q) \neq 0$ (resp. $\varphi(q, u, q_t) \neq 0$). An MA is trimmed if all its states are accessible and co-accessible. From now on, we only consider trimmed MA.
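Concretely, an MA can be encoded with one transition matrix per letter, so that $r_A(x_1\cdots x_n) = \iota\, M_{x_1}\cdots M_{x_n}\,\tau$; a minimal numpy sketch, with an encoding of our own choosing:

```python
import numpy as np

class MA:
    """Multiplicity automaton: iota (initial weights), M[x] (one matrix per
    letter x), tau (termination weights), over n states."""
    def __init__(self, iota, transitions, tau):
        self.iota = np.asarray(iota, dtype=float)
        self.M = {x: np.asarray(m, dtype=float) for x, m in transitions.items()}
        self.tau = np.asarray(tau, dtype=float)

    def weight(self, w):
        """r_A(w) = iota . M_{x_1} ... M_{x_n} . tau for w = x_1 ... x_n."""
        v = self.iota
        for x in w:
            v = v @ self.M[x]
        return float(v @ self.tau)

# One-state example on Sigma = {a}: r_A(a^n) = 2^{-(n+1)}, a stochastic language.
A = MA(iota=[1.0], transitions={"a": [[0.5]]}, tau=[0.5])
print([A.weight("a" * n) for n in range(4)])   # [0.5, 0.25, 0.125, 0.0625]
```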

A Probabilistic Automaton (PA) is a trimmed MA $\langle\Sigma, Q, \varphi, \iota, \tau\rangle$ such that $\iota$, $\varphi$ and $\tau$ take their values in $[0,1]$, $\sum_{q\in Q} \iota(q) = 1$ and, for any state $q$, $\tau(q) + \varphi(q, \Sigma, Q) = 1$. Probabilistic automata generate stochastic languages. A Probabilistic Deterministic Automaton (PDA) is a PA whose support is deterministic.

For any class $C$ of multiplicity automata over $K$, let us denote by $S_K^C(\Sigma)$ the class of all stochastic languages which are recognized by an element of $C$.

Rational series and rational stochastic languages. Rational series have several characterizations ([11,4,10]). Here, we shall say that a formal power series over $\Sigma^*$ is $K$-rational iff there exists a $K$-multiplicity automaton $A$ such that $r = r_A$, where $K \in \{\mathbb{R}, \mathbb{R}^+, \mathbb{Q}, \mathbb{Q}^+\}$. Let us denote by $K^{rat}\langle\langle\Sigma\rangle\rangle$ the set of $K$-rational series over $\Sigma^*$ and by $S_K^{rat}(\Sigma) = K^{rat}\langle\langle\Sigma\rangle\rangle \cap S(\Sigma)$ the set of rational stochastic languages over $K$. Rational stochastic languages have been studied in [7] from a language-theoretical point of view. Inclusion relations between classes of rational stochastic languages are summarized in Fig. 1. It is worth noting that $S_{\mathbb{R}}^{PDA}(\Sigma) \subsetneq S_{\mathbb{R}}^{PA}(\Sigma) \subsetneq S_{\mathbb{R}}^{rat}(\Sigma)$.

Let $P$ be a rational stochastic language. The MA $A = \langle\Sigma, Q, \varphi, \iota, \tau\rangle$ is a reduced representation of $P$ if (i) $P = P_A$, (ii) $\forall q \in Q$, $P_{A,q} \in S(\Sigma)$ and (iii) the set $\{P_{A,q} : q \in Q\}$ is linearly independent. It can be shown that $Res(P)$ spans a finite dimensional vector subspace $[Res(P)]$ of $\mathbb{R}\langle\langle\Sigma\rangle\rangle$. Let $Q_P$ be the smallest subset of $res(P)$ such that $\{u^{-1}P : u \in Q_P\}$ spans $[Res(P)]$. It is a finite prefixial subset of $\Sigma^*$. Let $A = \langle\Sigma, Q_P, \varphi, \iota, \tau\rangle$ be the MA defined by:


[Figure 1 displays the inclusion relations between these classes, among which: $S^{rat}_{\mathbb{Q}^+}(\Sigma) = S^{PA}_{\mathbb{Q}^+}(\Sigma)$, $S^{rat}_{\mathbb{R}^+}(\Sigma) = S^{PA}_{\mathbb{R}^+}(\Sigma)$, $S^{rat}_{\mathbb{Q}}(\Sigma) = S^{rat}_{\mathbb{R}}(\Sigma) \cap \mathbb{Q}\langle\langle\Sigma\rangle\rangle$, $S^{PDA}_{\mathbb{R}}(\Sigma) = S^{PDA}_{\mathbb{R}^+}(\Sigma)$ and $S^{PDA}_{\mathbb{Q}}(\Sigma) = S^{PDA}_{\mathbb{Q}^+}(\Sigma) = S^{PDA}_{\mathbb{R}}(\Sigma) \cap \mathbb{Q}\langle\langle\Sigma\rangle\rangle$, all inside $S(\Sigma)$.]

Figure 1. Inclusion relations between classes of rational stochastic languages.

$\iota(\varepsilon) = 1$, $\iota(u) = 0$ otherwise; $\tau(u) = u^{-1}P(\varepsilon)$;
$\varphi(u, x, ux) = u^{-1}P(x\Sigma^*)$ if $u, ux \in Q_P$ and $x \in \Sigma$;
$\varphi(u, x, v) = \alpha_v u^{-1}P(x\Sigma^*)$ if $x \in \Sigma$, $ux \in (Q_P\Sigma \setminus Q_P) \cap res(P)$ and $(ux)^{-1}P = \sum_{v\in Q_P} \alpha_v v^{-1}P$.

It can be shown that $A$ is a reduced representation of $P$; $A$ is called the prefixial reduced representation of $P$. Note that the parameters of $A$ correspond to natural components of the residuals of $P$ and can be estimated by using samples of $P$.

We give below an example of a rational stochastic language which cannot be generated by a PA. Moreover, for any integer $N$, there exists a rational stochastic language which can be generated by a multiplicity automaton with 3 states and such that the smallest PA which generates it has $N$ states. That is, considering rational stochastic languages makes it possible to deal with stochastic languages which cannot be generated by PA; it also permits to significantly decrease the size of their representation.

Proposition 1. For any $\alpha \in \mathbb{R}$, let $A_\alpha$ be the MA described in Fig. 2. Let $S_\alpha = \{(\lambda_0, \lambda_1, \lambda_2) \in \mathbb{R}^3 : r_{A_\alpha} \in S(\Sigma)\}$. If $\alpha/(2\pi) = p/q \in \mathbb{Q}$ where $p$ and $q$ are relatively prime, $S_\alpha$ is the convex hull of a polygon with $q$ vertices, which are the residual languages of any one of them. If $\alpha/(2\pi) \notin \mathbb{Q}$, $S_\alpha$ is the convex hull of an ellipse, any point of which is a stochastic language which cannot be computed by a PA.

Proof (sketch). Let $r_{q_0}$, $r_{q_1}$ and $r_{q_2}$ be the series associated with the states of $A_\alpha$. We have

$$r_{q_0}(a^n) = \frac{\cos n\alpha - \sin n\alpha}{2^n}, \quad r_{q_1}(a^n) = \frac{\cos n\alpha + \sin n\alpha}{2^n} \quad \text{and} \quad r_{q_2}(a^n) = \frac{1}{2^n}.$$

The sums $\sum_{n\in\mathbb{N}} r_{q_0}(a^n)$, $\sum_{n\in\mathbb{N}} r_{q_1}(a^n)$ and $\sum_{n\in\mathbb{N}} r_{q_2}(a^n)$ converge since $|r_{q_i}(a^n)| = O(2^{-n})$ for $i = 0, 1, 2$. Let us denote $\sigma_i = \sum_{n\in\mathbb{N}} r_{q_i}(a^n)$ for $i = 0, 1, 2$. Check that

$$\sigma_0 = \frac{4 - 2\cos\alpha - 2\sin\alpha}{5 - 4\cos\alpha}, \quad \sigma_1 = \frac{4 - 2\cos\alpha + 2\sin\alpha}{5 - 4\cos\alpha} \quad \text{and} \quad \sigma_2 = 2.$$

Consider the 3-dimensional vector subspace $V$ of $\mathbb{R}\langle\langle\Sigma\rangle\rangle$ generated by $r_{q_0}$, $r_{q_1}$ and $r_{q_2}$, and let $r = \lambda_0 r_{q_0} + \lambda_1 r_{q_1} + \lambda_2 r_{q_2}$ be a generic element of $V$. We have $\sum_{n\in\mathbb{N}} r(a^n) = \lambda_0\sigma_0 + \lambda_1\sigma_1 + \lambda_2\sigma_2$. The equation $\lambda_0\sigma_0 + \lambda_1\sigma_1 + \lambda_2\sigma_2 = 1$ defines a plane $H$ in $V$. Consider the constraints $r(a^n) \geq 0$ for any $n \geq 0$. The elements $r$ of $H$ which satisfy all the constraints $r(a^n) \geq 0$ are exactly the stochastic languages in $H$.


If $\alpha/(2\pi) = k/h \in \mathbb{Q}$ where $k$ and $h$ are relatively prime, the set of constraints $\{r(a^n) \geq 0\}$ is finite: it delimits a convex regular polygon $\mathcal{P}$ in the plane $H$. Let $p$ be a vertex of $\mathcal{P}$. It can be shown that its residual languages are exactly the $h$ vertices of $\mathcal{P}$ and that any PA generating $p$ must have at least $h$ states. If $\alpha/(2\pi) \notin \mathbb{Q}$, the constraints delimit an ellipse $E$. Let $p$ be an element of $E$. It can be shown, by using techniques developed in [7], that its residual languages are dense in $E$ and that no PA can generate $p$.
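The closed forms for $\sigma_0$, $\sigma_1$ and $\sigma_2$ can be checked numerically against the series; a quick sketch, using the expressions for $r_{q_i}(a^n)$ reconstructed above:

```python
import math

def sigma_numeric(alpha, terms=200):
    """Partial sums of r_{q_i}(a^n) = (cos(n*alpha) -/+ sin(n*alpha)) / 2^n."""
    s0 = sum((math.cos(n * alpha) - math.sin(n * alpha)) / 2**n for n in range(terms))
    s1 = sum((math.cos(n * alpha) + math.sin(n * alpha)) / 2**n for n in range(terms))
    s2 = sum(1 / 2**n for n in range(terms))
    return s0, s1, s2

alpha = math.pi / 6
s0, s1, s2 = sigma_numeric(alpha)
d = 5 - 4 * math.cos(alpha)
assert abs(s0 - (4 - 2 * math.cos(alpha) - 2 * math.sin(alpha)) / d) < 1e-12
assert abs(s1 - (4 - 2 * math.cos(alpha) + 2 * math.sin(alpha)) / d) < 1e-12
assert abs(s2 - 2) < 1e-12
```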

Matrices. We consider the Euclidean norm on $\mathbb{R}^n$: $\|(x_1, \ldots, x_n)\| = (x_1^2 + \ldots + x_n^2)^{1/2}$. For any $R \geq 0$, let us denote by $B(0, R)$ the set $\{x \in \mathbb{R}^n : \|x\| \leq R\}$. The induced norm on the set of $n \times n$ square matrices $M$ over $\mathbb{R}$ is defined by $\|M\| = \sup\{\|Mx\| : x \in \mathbb{R}^n, \|x\| = 1\}$. Some properties of the induced norm: $\|Mx\| \leq \|M\| \cdot \|x\|$ for all $M \in \mathbb{R}^{n\times n}$, $x \in \mathbb{R}^n$; $\|MN\| \leq \|M\| \cdot \|N\|$ for all $M, N \in \mathbb{R}^{n\times n}$; $\lim_{k\to\infty} \|M^k\|^{1/k} = \rho(M)$ where $\rho(M)$ is the spectral radius of $M$, i.e. the maximum magnitude of the eigenvalues of $M$ (Gelfand's formula).
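These quantities are directly available in numpy; a short illustration of the induced 2-norm and Gelfand's formula on an example matrix of our own:

```python
import numpy as np

M = np.array([[0.5, 0.4],
              [0.1, 0.3]])

# Induced (spectral) 2-norm: the largest singular value of M.
print(np.linalg.norm(M, 2))

# Spectral radius: maximum magnitude of the eigenvalues of M.
rho = max(abs(np.linalg.eigvals(M)))

# Gelfand's formula: ||M^k||^(1/k) -> rho(M) as k -> infinity.
for k in (1, 10, 100):
    print(k, np.linalg.norm(np.linalg.matrix_power(M, k), 2) ** (1 / k), rho)
```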

[Figure 2 displays three one-letter automata: the 3-state MA $A_\alpha$, whose states $q_1, q_2, q_3$ carry initial weights $\lambda_0, \lambda_1, \lambda_2$, with termination weights and $a$-transitions built from $\frac{\cos\alpha}{2}$, $\pm\frac{\sin\alpha}{2}$ and $\frac{1}{2}$; the MA $B$; and the PA $C$ (both with approximate values such as 0.575, 0.632 and 0.69 on their transitions).]

Figure 2. When $\lambda_0 = \lambda_2 = 1$ and $\lambda_1 = 0$, the MA $A_{\pi/6}$ defines a stochastic language $P$ whose prefixial reduced representation is the MA $B$ (with approximate values on transitions). In fact, $P$ can be computed by a PDA and the smallest PA computing it is $C$.

3 Identifying $S_{\mathbb{Q}}^{rat}(\Sigma)$ in the limit

Let $S$ be a non-empty finite sample of $\Sigma^*$, let $Q$ be a prefixial subset of $pref(S)$, let $v \in pref(S) \setminus Q$, and let $\epsilon > 0$. We denote by $I(Q, v, S, \epsilon)$ the following set of inequations over the set of variables $\{x_u \mid u \in Q\}$:

$$I(Q, v, S, \epsilon) = \left\{\left|v^{-1}P_S(w\Sigma^*) - \sum_{u\in Q} x_u\, u^{-1}P_S(w\Sigma^*)\right| \leq \epsilon \;:\; w \in fact(S)\right\} \cup \left\{\sum_{u\in Q} x_u = 1\right\}.$$
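Deciding whether $I(Q, v, S, \epsilon)$ has a solution is a linear feasibility problem: each constraint $|v^{-1}P_S(w\Sigma^*) - \sum_u x_u u^{-1}P_S(w\Sigma^*)| \leq \epsilon$ splits into two linear inequalities, and $\sum_u x_u = 1$ is one equality. A sketch of such a test with scipy's linear programming routine; the helper names (mass, res_row, has_solution) and the data layout are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def mass(S, u):
    """Empirical P_S(u Sigma^*): fraction of the sample words having prefix u."""
    return sum(w.startswith(u) for w in S) / len(S)

def res_row(S, u, tests):
    """Vector (u^{-1}P_S(w Sigma^*))_{w in tests}, assuming mass(S, u) > 0."""
    m = mass(S, u)
    return np.array([mass(S, u + w) / m for w in tests])

def has_solution(S, Q, v, tests, eps):
    """Feasibility of I(Q, v, S, eps) over the variables (x_u)_{u in Q}."""
    U = np.array([res_row(S, u, tests) for u in Q])   # |Q| x |tests|
    b = res_row(S, v, tests)
    # |b_w - x.U[:,w]| <= eps  becomes  x.U[:,w] <= b_w + eps  and  -x.U[:,w] <= eps - b_w
    A_ub = np.vstack([U.T, -U.T])
    b_ub = np.concatenate([b + eps, eps - b])
    res = linprog(np.zeros(len(Q)), A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, len(Q))), b_eq=[1.0],
                  bounds=[(None, None)] * len(Q))   # the x_u may be negative
    return res.success
```

Since the objective is constant, linprog simply reports whether a feasible point exists; any solution it returns is a valid assignment of the $x_u$.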

Let DEES be the following algorithm:

    Input: a sample S
    Output: a prefixial reduced MA A = ⟨Σ, Q, φ, ι, τ⟩
    Q ← {ε}, ι(ε) = 1, τ(ε) = P_S(ε), F ← Σ ∩ pref(S)
    while F ≠ ∅ do {
        v = ux = Min F where u ∈ Σ* and x ∈ Σ, F ← F \ {v}
        if I(Q, v, S, |S|^(-1/3)) has no solution then {
            Q ← Q ∪ {v}, ι(v) = 0, τ(v) = P_S(v)/P_S(vΣ*),
            φ(u, x, v) = P_S(vΣ*)/P_S(uΣ*), F ← F ∪ {vx ∈ res(P_S) | x ∈ Σ} }
        else {
            let (α_w)_{w∈Q} be a solution of I(Q, v, S, |S|^(-1/3))
            φ(u, x, w) = α_w P_S(vΣ*)/P_S(uΣ*) for any w ∈ Q } }

Lemma 1. Let $P$ be a stochastic language and let $u_0, u_1, \ldots, u_n \in res(P)$ be such that $\{u_0^{-1}P, u_1^{-1}P, \ldots, u_n^{-1}P\}$ is linearly independent. Then, with probability one, for any complete presentation $S$ of $P$, there exist a positive number $\epsilon$ and an integer $M$ such that $I(\{u_1, \ldots, u_n\}, u_0, S_m, \epsilon)$ has no solution for every $m \geq M$.

Proof. Let $S$ be a complete presentation of $P$. Suppose that for every $\epsilon > 0$ and every integer $M$, there exists $m \geq M$ such that $I(\{u_1, \ldots, u_n\}, u_0, S_m, \epsilon)$ has a solution. Then, for any integer $k$, there exists $m_k \geq k$ such that $I(\{u_1, \ldots, u_n\}, u_0, S_{m_k}, 1/k)$ has a solution $(\alpha_{1,k}, \ldots, \alpha_{n,k})$. Let $\rho_k = Max\{1, |\alpha_{1,k}|, \ldots, |\alpha_{n,k}|\}$, $\gamma_{0,k} = 1/\rho_k$ and $\gamma_{i,k} = -\alpha_{i,k}/\rho_k$ for $1 \leq i \leq n$. For every $k$, $Max\{|\gamma_{i,k}| : 0 \leq i \leq n\} = 1$. Check that

$$\forall k \geq 0, \quad \left|\sum_{i=0}^n \gamma_{i,k}\, u_i^{-1}P_{S_{m_k}}(w\Sigma^*)\right| \leq \frac{1}{\rho_k k} \leq \frac{1}{k}.$$

There exists a subsequence $(\alpha_{1,\phi(k)}, \ldots, \alpha_{n,\phi(k)})$ of $(\alpha_{1,k}, \ldots, \alpha_{n,k})$ such that $(\gamma_{0,\phi(k)}, \ldots, \gamma_{n,\phi(k)})$ converges to some $(\gamma_0, \ldots, \gamma_n)$. We show below that we should have $\sum_{i=0}^n \gamma_i u_i^{-1}P(w\Sigma^*) = 0$ for every word $w$, which contradicts the independence assumption since $Max\{|\gamma_i| : 0 \leq i \leq n\} = 1$.

Let $w \in fact(supp(P))$. With probability 1, there exists an integer $k_0$ such that $w \in fact(S_{m_k})$ for any $k \geq k_0$. For such a $k$, we can write

$$\gamma_i u_i^{-1}P = (\gamma_i u_i^{-1}P - \gamma_i u_i^{-1}P_{S_{m_k}}) + (\gamma_i - \gamma_{i,\phi(k)})\, u_i^{-1}P_{S_{m_k}} + \gamma_{i,\phi(k)}\, u_i^{-1}P_{S_{m_k}}$$

and therefore

$$\left|\sum_{i=0}^n \gamma_i u_i^{-1}P(w\Sigma^*)\right| \leq \sum_{i=0}^n |u_i^{-1}(P - P_{S_{m_k}})(w\Sigma^*)| + \sum_{i=0}^n |\gamma_i - \gamma_{i,\phi(k)}| + \frac{1}{k},$$

which converges to 0 when $k$ tends to infinity.

Let $P$ be a stochastic language over $\Sigma^*$, let $\mathcal{A} = (A_i)_{i\in I}$ be a family of subsets of $\Sigma^*$, let $S$ be a finite sample drawn according to $P$, and let $P_S$ be the empirical distribution associated with $S$. It can be shown [13,9] that for any confidence parameter $\delta$, with probability greater than $1 - \delta$, for any $i \in I$,

$$|P_S(A_i) - P(A_i)| \leq c\,\sqrt{\frac{VC(\mathcal{A}) - \log\frac{\delta}{4}}{Card(S)}} \qquad (1)$$

where $VC(\mathcal{A})$ is the Vapnik-Chervonenkis dimension of $\mathcal{A}$ and $c$ is a universal constant.

When $\mathcal{A} = (\{w\Sigma^*\})_{w\in\Sigma^*}$, $VC(\mathcal{A}) \leq 2$. Indeed, let $r, s, t \in \Sigma^*$ and let $Y = \{r, s, t\}$. Let $u_{rs}$ (resp. $u_{rt}$, $u_{st}$) be the longest prefix shared by $r$ and $s$ (resp. $r$ and $t$, $s$ and $t$). One of these 3 words is a prefix of the two other ones. Suppose that $u_{rs}$ is a prefix of $u_{rt}$ and $u_{st}$. Then, there exists no word $w$ such that $Y \cap w\Sigma^* = \{r, s\}$. Therefore, no subset containing more than two elements can be shattered by $\mathcal{A}$.

Let $\Psi(\epsilon, \delta) = \frac{c^2}{\epsilon^2}\left(2 - \log\frac{\delta}{4}\right)$.

Lemma 2. Let $P \in S(\Sigma)$ and let $S$ be a complete presentation of $P$. For any precision parameter $\epsilon$, any confidence parameter $\delta$ and any $n \geq \Psi(\epsilon, \delta)$, with probability greater than $1 - \delta$, $|P_{S_n}(w\Sigma^*) - P(w\Sigma^*)| \leq \epsilon$ for all $w \in \Sigma^*$.

Proof. Use inequality (1).

Check that for any $\alpha$ such that $-1/2 < \alpha < 0$ and any $\beta < -1$, if we define $\epsilon_k = k^\alpha$ and $\delta_k = k^\beta$, there exists $K$ such that for all $k \geq K$, we have $k \geq \Psi(\epsilon_k, \delta_k)$. For such choices of $\alpha$ and $\beta$, we have $\lim_{k\to\infty} \epsilon_k = 0$ and $\sum_{k\geq 1} \delta_k < \infty$.
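This can be checked numerically; a tiny sketch, where the unspecified universal constant $c$ of inequality (1) is set to 1 for illustration:

```python
import math

c = 1.0  # the universal constant of inequality (1); set to 1 for illustration

def psi(eps, delta):
    """Psi(eps, delta) = (c^2 / eps^2) * (2 - log(delta / 4))."""
    return (c / eps) ** 2 * (2 - math.log(delta / 4))

alpha, beta = -0.4, -1.5     # any -1/2 < alpha < 0 and beta < -1 work
for k in (10, 10**6, 10**9):
    eps_k, delta_k = k ** alpha, k ** beta
    print(k, k >= psi(eps_k, delta_k))   # False, False, True: holds for k large enough
```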

Lemma 3. Let $P \in S(\Sigma)$, $u_0, u_1, \ldots, u_n \in res(P)$ and $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ be such that $u_0^{-1}P = \sum_{i=1}^n \alpha_i u_i^{-1}P$. Then, with probability one, for any complete presentation $S$ of $P$, there exists $K$ such that $I(\{u_1, \ldots, u_n\}, u_0, S_k, k^{-1/3})$ has a solution for every $k \geq K$.

Proof. Let $S$ be a complete presentation of $P$. Let $\alpha_0 = 1$ and let $R = Max\{|\alpha_i| : 0 \leq i \leq n\}$. With probability one, there exists $K_1$ such that $\forall k \geq K_1$, $\forall i = 0, \ldots, n$, $|u_i^{-1}S_k| \geq \Psi([k^{1/3}(n+1)R]^{-1}, [(n+1)k^2]^{-1})$. Let $k \geq K_1$. For any $X \subseteq \Sigma^*$,

$$\left|u_0^{-1}P_{S_k}(X) - \sum_{i=1}^n \alpha_i u_i^{-1}P_{S_k}(X)\right| \leq |u_0^{-1}P_{S_k}(X) - u_0^{-1}P(X)| + \sum_{i=1}^n |\alpha_i|\, |u_i^{-1}P_{S_k}(X) - u_i^{-1}P(X)|.$$

From Lemma 2, with probability greater than $1 - 1/k^2$, for any $i = 0, \ldots, n$ and any word $w$, $|u_i^{-1}P_{S_k}(w\Sigma^*) - u_i^{-1}P(w\Sigma^*)| \leq [k^{1/3}(n+1)R]^{-1}$ and therefore $|u_0^{-1}P_{S_k}(w\Sigma^*) - \sum_{i=1}^n \alpha_i u_i^{-1}P_{S_k}(w\Sigma^*)| \leq k^{-1/3}$.

For any integer $k \geq K_1$, let $A_k$ be the event: $|u_0^{-1}P_{S_k}(w\Sigma^*) - \sum_{i=1}^n \alpha_i u_i^{-1}P_{S_k}(w\Sigma^*)| > k^{-1/3}$. Since $Pr(A_k) < 1/k^2$, the probability that only a finite number of the $A_k$ occur is 1.

Therefore, with probability 1, there exists an integer $K$ such that for any $k \geq K$, $I(\{u_1, \ldots, u_n\}, u_0, S_k, k^{-1/3})$ has a solution.

Lemma 4. Let $P \in S(\Sigma)$, let $u_0, u_1, \ldots, u_n \in res(P)$ be such that $\{u_1^{-1}P, \ldots, u_n^{-1}P\}$ is linearly independent and let $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ be such that $u_0^{-1}P = \sum_{i=1}^n \alpha_i u_i^{-1}P$. Then, with probability one, for any complete presentation $S$ of $P$, there exists an integer $K$ such that $\forall k \geq K$, any solution $(\hat\alpha_1, \ldots, \hat\alpha_n)$ of $I(\{u_1, \ldots, u_n\}, u_0, S_k, k^{-1/3})$ satisfies $|\hat\alpha_i - \alpha_i| \leq O(k^{-1/3})$ for $1 \leq i \leq n$.

Proof. Let $w_1, \ldots, w_n \in \Sigma^*$ be such that the square matrix $M$ defined by $M[i,j] = u_j^{-1}P(w_i\Sigma^*)$ for $1 \leq i, j \leq n$ is invertible. Let $A = (\alpha_1, \ldots, \alpha_n)^t$ and $U_0 = (u_0^{-1}P(w_1\Sigma^*), \ldots, u_0^{-1}P(w_n\Sigma^*))^t$. We have $MA = U_0$. Let $S$ be a complete presentation of $P$, let $k \in \mathbb{N}$ and let $\hat\alpha_1, \ldots, \hat\alpha_n$ be a solution of $I(\{u_1, \ldots, u_n\}, u_0, S_k, k^{-1/3})$. Let $M_k$ be the square matrix defined by $M_k[i,j] = u_j^{-1}P_{S_k}(w_i\Sigma^*)$ for $1 \leq i, j \leq n$, let $A_k = (\hat\alpha_1, \ldots, \hat\alpha_n)^t$ and $U_{0,k} = (u_0^{-1}P_{S_k}(w_1\Sigma^*), \ldots, u_0^{-1}P_{S_k}(w_n\Sigma^*))^t$. We have

$$\|M_k A_k - U_{0,k}\|^2 = \sum_{i=1}^n \left[u_0^{-1}P_{S_k}(w_i\Sigma^*) - \sum_{j=1}^n \hat\alpha_j u_j^{-1}P_{S_k}(w_i\Sigma^*)\right]^2 \leq n k^{-2/3}.$$

Check that
