represents its performance with a user-provided clustering, while the other represents its performance when it computes the clustering itself. That is, for the first column we give the clustering information to StrataGEM; for the second one we let it compute the clustering itself. One can consider the first column as the usage of the tool by an experienced user, and the second one as its usage by a non-expert user.
The clustering algorithm uses a heuristic to group sets of places that share internal transitions among them. Given a PN, the algorithm creates a partition of the set of places. The goal is that each set in the partition has the fewest possible transitions connecting it to other sets in the partition. We have tried several heuristics for this problem. The heuristic used in the current version of StrataGEM has proven to be very slow for models with a great number of arcs; however, when it finishes, it provides very good partitions. For example, for the Shared Memory problem, the clustering algorithm takes a long time (roughly 20 minutes, hence it is not included in Table 8.1), but the clustering it finds is capable of beating Marcie.
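The idea behind such a heuristic can be sketched as a greedy agglomerative procedure: repeatedly merge the two clusters connected by the most transitions, so that few transitions end up crossing cluster boundaries. This is a minimal illustrative sketch with hypothetical names, not StrataGEM's actual algorithm:

```python
def cluster(places, transitions, k):
    """Greedily partition places into k clusters, merging the pair of
    clusters joined by the most transitions at each step."""
    clusters = [frozenset([p]) for p in places]

    def crossing(a, b):
        # Number of transitions touching places in both clusters.
        return sum(1 for t in transitions if t & a and t & b)

    while len(clusters) > k:
        # Pick the pair of clusters with the most connecting transitions.
        a, b = max(
            ((x, y) for i, x in enumerate(clusters) for y in clusters[i + 1:]),
            key=lambda pair: crossing(*pair),
        )
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
    return clusters
```

On a toy net where one transition touches p1 and p2 and another touches p3 and p4, asking for two clusters groups {p1, p2} and {p3, p4}, which keeps every transition internal to a cluster. The quadratic pair search is what makes such heuristics slow on nets with many arcs.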
Another thing that we need to take into account is that the clustering heuristic does not always provide optimal clusterings. We can see this in the FMS problem: it is very easy to determine the right clustering for the FMS problem visually.
However, our algorithm does not find the very best clustering directly. In Table 8.1 we can see that the clustering found by our algorithm for this problem makes it lose against Marcie.
Luckily for us, StrataGEM enables the user to construct the clustering manually (as we did for column 3 of Table 8.1). One might argue that this is not feasible for large models like our SharedMemory model. However, large models are not created by hand. They are usually derived from high-level formalisms. This opens the possibility of creating the clustering directly from the high-level specification. Indeed, this approach has already been used in the MCC 2014: the organizers of the MCC included extra information about the PN models in the Nested-Unit Petri Nets (NUPN) format [GS90]. NUPN is a format that allows one to describe clusters of places and clusters of clusters.
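The essence of such a nested description can be sketched as a recursive structure: a unit owns some places and may contain sub-units, so a flat clustering can be read off the leaves of the hierarchy. This is an illustrative sketch of the idea, not the actual NUPN syntax:

```python
from dataclasses import dataclass, field

@dataclass
class PlaceUnit:
    """A unit owns some places and may nest sub-units (clusters of clusters)."""
    places: list
    subunits: list = field(default_factory=list)

    def flat_clusters(self):
        # Leaf-level clustering: one cluster per unit's own places.
        clusters = [self.places] if self.places else []
        for u in self.subunits:
            clusters.extend(u.flat_clusters())
        return clusters

# A root unit grouping one flat cluster and one nested one.
root = PlaceUnit([], [PlaceUnit(["p1", "p2"]),
                      PlaceUnit(["p3"], [PlaceUnit(["p4", "p5"])])])
```

Here `root.flat_clusters()` yields the place partition a tool could consume directly, while the nesting itself records the coarser cluster-of-clusters structure.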
The results shown in Table 8.1 empirically show that the great generality of our approach does not hurt its performance, as our prototype proves to be more than competitive against a tool based on similar techniques. We should also note that our tool is implemented in a small number of lines of Scala code (3700 without the parser), thanks to its very general and generic principles.
It is worth noting that our tool also participated in the MCC. The reader can check the results of our benchmarks in [KGH+14].
We also ran some benchmarks against DiViNE. We keep these benchmarks in a different section because DiViNE is an explicit-state parallel model checker. We also ran the tests on another machine because we did not manage to compile DiViNE on our usual machine. Each test is run on a 32-core Linux cluster node with 64 GB of RAM,
102 Chapter 8. Practical Results and Tool
Model          Scale      DiViNE     StrataGEM   Number of
               parameter  time (s)   time (s)    states
FMS            2          0.92       3.99        3444
               5          69.36      7.72        2895018
               10         time/out   16.77       2501413200
               20         time/out   126.44      6029168852784
Kanban         05         60.97      6.04        2546432
               10         time/out   19.31       1005927208
               20         time/out   148.75      805422366595
SharedMemory   05         0.98       4.80        1863
               10         268.90     10.96       1830519
               20         time/out   192.39      445146141861

Table 8.2 – Runtime comparison with clustering enabled
with a wall-clock time limit of 15 minutes. Tests, taken from the Model Checking Contest [KGH+14], are Petri nets, directly translated to the input format of DiViNE;
StrataGEM can read them natively. For a fair comparison, we run StrataGEM using automatic clustering.
Results are presented in Table 8.2. Three scalable models are considered: a Flexible Manufacturing System (FMS), a classical Kanban system, and a shared-memory mutual-exclusion model. The tools were asked to generate their state spaces. The results are self-explanatory: not only does StrataGEM handle much bigger instances than DiViNE, it also runs much faster on models handled by both tools.
One might argue that the results might be different if we translated from the DiViNE format to PNML to make the comparison. It is hard to answer this question in general.
In Section 6.5 we present a translation from DiViNE directly to StrataGEM. The goal of that translation was to demonstrate the expressivity of our approach, not to obtain performance.
We have run our translation of the Peterson problem (Appendix B) against the native version in DiViNE and confirmed that it is not very efficient. The cause lies not only in the translation but also in a weakness of DDs in treating global state.
In fact, in native DiViNE models, processes communicate mainly through a global variable. Our translation, however, does not take any steps to optimize the SR strategies to handle global variables.
There are further considerations: the Peterson problem not only has a global variable but is also an extremely synchronous problem, i.e., the processes synchronize all the time through the global variable. Finally, the Peterson problem does not have a big state space. Hence, DiViNE can handle this problem much better than StrataGEM. For example, DiViNE can compute the state space of Peterson up to scale parameter 8 very fast (less than a second); StrataGEM, however, can only do it up to scale parameter 3, in three minutes. Nevertheless, for this type of problem StrataGEM is in the same league as other symbolic model checkers: in the MCC 2014, the best tool on Peterson (Marcie, a symbolic model checker) could only handle scale parameter 3.
In this section we have presented our tool StrataGEM, how to use it, and some benchmarks. In particular, we highlight that our complete theory is supported by an implementation. The implementation can outperform other state-of-the-art tools and is usable. A new user can describe models either in PNML or directly in SR. Thus, people wanting to write translations to SR can write a translator for their formalism that only needs to produce a transition system in a textual format.
In this work we developed an entire theory to describe transition systems using rewriting and strategies. Our development was driven by the shortcomings of DDs.
In this final chapter we present a summary of our work, and some perspectives for the future.
9.1 Summary of this work
9.1.1 ΣDD
Our journey starts by considering different kinds of DDs and their evolution. We notice that, throughout the years, DDs have evolved towards more expressive structures. This evolution eases the work of translating a formalism to DDs. However, this translation remains difficult to date. Even though some DDs are capable of easily representing complex data structures, defining efficient user-defined operations on them remains a complex affair.
ΣDD is a type of DD capable of representing complex data structures. Indeed, ΣDDs encode sets of terms. ΣDDs were originally introduced by Hostettler [Hos11]; we propose a new mathematical presentation for them. Operations on ΣDDs are represented using rewrite rules. These rewrite rules are implemented by homomorphisms.
A problem with this approach is that rewrite-rule evaluation is governed by a fixed strategy (innermost or outermost application of the rules, as well as some clustering strategies). Modifying this strategy to make it more efficient implies directly writing ΣDD homomorphisms, a daunting task for non-experts. Our new formulation has the advantage of being more general: it simplifies operations and allows one to easily write custom evaluation strategies.
Our formulation is based on IIPFs, very general lattice-like data structures. Indeed, other types of DDs, like SDDs and IDDs, can be represented using IIPFs. Our presentation aims to be easier to understand and also to implement.
IIPFs are very generic and define only generic lattice operations: meet and join. We instantiate IIPFs to ΣDDs and enrich them with a specific operation that allows the rewriting of a set of terms. This operation tries to be as simple as possible, without sacrificing expressivity and efficiency. The most important property of ΣDDs is preserved: the ability to rewrite several terms in one rewrite step.
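The interface just described can be sketched abstractly: a set of terms supports the lattice meet and join, plus one rewriting operation that transforms every term of the set in a single step. This sketch only illustrates the signatures, not the ΣDD encoding itself (which shares structure between terms):

```python
class TermSet:
    """A plain set of terms exposing the operations ΣDDs provide."""
    def __init__(self, terms):
        self.terms = frozenset(terms)

    def join(self, other):
        # Lattice join: set union.
        return TermSet(self.terms | other.terms)

    def meet(self, other):
        # Lattice meet: set intersection.
        return TermSet(self.terms & other.terms)

    def rewrite(self, rule):
        # Apply one rewrite rule to all terms in a single step;
        # a rule returns its argument unchanged when it does not match.
        return TermSet(rule(t) for t in self.terms)
```

Note how rewriting a set can shrink it: if the rule sends two distinct terms to the same result, the results collapse into one element, which is exactly the kind of sharing a real ΣDD exploits.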
9.1.2 Set rewriting
As mentioned in the previous subsection, the evaluation strategy of rewrite rules in ΣDDs was fixed, or difficult to modify. In this work we introduced Set Rewriting (SR) to cope with that problem. SR is a framework inspired by term rewriting strategies. It is a set of operations (called strategies) that enable control of the rewriting process, coupled with a language to describe entire transition systems in terms of rewrite rules.
The strategies we introduced are inspired by those already proposed in term rewriting frameworks such as Tom [BKK+96] and ELAN [Plo04]. However, we extended the semantics of those strategies to work on sets of terms. We also introduced some new strategies that only make sense for sets of terms.
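The flavor of such combinators, lifted to sets, can be sketched in a few lines. Here a strategy is simply a function from a set of terms to a set of terms, and combinators build bigger strategies from smaller ones (illustrative names, not StrataGEM's actual API):

```python
def identity(terms):
    # The strategy that leaves the set of terms unchanged.
    return terms

def sequence(s1, s2):
    # Apply s1, then s2 to its result.
    return lambda terms: s2(s1(terms))

def union(s1, s2):
    # A set-specific combinator: keep the results of both strategies.
    return lambda terms: s1(terms) | s2(terms)

def fixpoint(s):
    # Apply s repeatedly until the set of terms stops changing.
    def run(terms):
        nxt = s(terms)
        while nxt != terms:
            terms, nxt = nxt, s(nxt)
        return terms
    return run
```

`union` is an example of a strategy that only makes sense on sets: combined with `fixpoint`, a one-step successor strategy immediately yields a reachability computation, e.g. `fixpoint(union(identity, step))`.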
Using these strategies as building blocks, we defined a complete formalism to describe a transition system. Our formalism is entirely based on rewrite rules, terms, and strategies. In particular, it completely hides the complexity of ΣDDs and their native operations.
9.1.3 Usage of Set Rewriting
Set rewriting adds a new layer on top of DDs. We understand that as SR gains popularity, some examples of its capabilities must be given. We have provided two case studies to help new users of SR familiarize themselves with it: we defined formal translations from two well-known formalisms, PNs and DiViNE.
To make our translation easier to understand, we used another intermediate layer for the sake of the explanation. Indeed, we defined our translation in terms of a subset of the well-known SOS rules. Our main assumption was that readers are more familiar with SOS rules. Hence, new users of SR can start by representing their semantics using SOS rules and then use the method we propose to translate their rules to SR.
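The shape of such a translation can be hinted at with a toy example: an SOS rule with a premise (a guard on the state) and a conclusion (a state change) becomes a guarded rewrite rule that fires only where the premise holds. This is purely illustrative and not the translation of Section 6.5:

```python
def sos_to_rewrite(premise, conclusion):
    """Turn an SOS-style rule (premise => conclusion) into a rewrite
    rule on terms: rewrite only the terms satisfying the premise."""
    def rule(term):
        # Terms not matching the premise are left unchanged,
        # mirroring how an SOS premise restricts the transition.
        return conclusion(term) if premise(term) else term
    return rule

# Toy rule: "if the counter is below 3, it may be incremented".
incr = sos_to_rewrite(lambda n: n < 3, lambda n: n + 1)
```

Applied over a set of terms (as in SR), such a rule advances exactly the states whose guard holds and keeps the rest.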
9.1.4 Model checking
SR was born from a need to perform model checking on high-level formalisms that were not easily handled by state-of-the-art DDs. Thus, we tackled the problem directly by showing how SR can be used to perform model checking.
We focused on two things: state space computation, and simple property checking.
We showed that SR is not only a good fit to describe a transition system but also to describe a model checking computation.
The model checking computation is, however, a delicate matter: it requires extreme efficiency to handle large state spaces. We have shown how common DD optimizations, such as clustering, anonymization, and saturation, can be expressed plainly as SR strategies.
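To give a feel for one of these, cluster-wise saturation can be sketched as nested fixpoints: exhaust the transitions of each cluster before interleaving clusters, instead of running one global fixpoint over all rules at once. This is a simplified sketch of the idea, not StrataGEM's strategy code:

```python
def fixpoint(step):
    """Accumulating fixpoint: add successors until nothing new appears."""
    def run(states):
        nxt = states | step(states)
        while nxt != states:
            states, nxt = nxt, nxt | step(nxt)
        return states
    return run

def saturate(clusters):
    """clusters: list of one-step successor functions, one per cluster."""
    local = [fixpoint(c) for c in clusters]
    def step(states):
        for f in local:        # exhaust each cluster in turn
            states = f(states)
        return states
    return fixpoint(step)      # outer fixpoint over the interleaving
```

The inner fixpoints keep each cluster's work local (where a DD benefits most from sharing), and the outer fixpoint propagates states between clusters until the whole state space is reached.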