
In the current HLT Steering and TriggerDB implementation, PS and PT factors can be changed for every new run. LVL1 PS factors, however, can be adapted on a luminosity block basis (of the order of 1 min). This provides the trigger operator with the flexibility needed to react quickly to changes in beam and detector conditions.

Upcoming extensions of the ATLAS trigger PS mechanism are:

1. Change the HLT PS factors on a luminosity block basis. The technical challenge is to keep all HLT farm nodes synchronised and to ensure that all events are treated with the correct set of PS factors (see footnote 9 below). This new HLT feature is foreseen to be tested in the next major software release.

2. Dynamic change of PS factors to compensate for the falling accelerator luminosity and thus make full use of the available data processing bandwidth. An automatic feedback system is in development to monitor the total output rate and adapt LVL1 PS factors accordingly.

This has been successfully realised for instance in the CDF experiment [101].
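As an illustration of the second point, the following minimal Python sketch scales a set of LVL1 PS factors by the ratio of the measured to the target output rate; all names and numbers are invented for the example and do not correspond to the actual ATLAS feedback system.

```python
# Minimal sketch (not the ATLAS implementation) of a rate-feedback step that
# rescales LVL1 PS factors so that the output rate stays near a target value
# while the luminosity decays. All names and numbers are illustrative.

def rescale_prescales(prescales, measured_rate_hz, target_rate_hz, min_ps=1.0):
    """Scale each PS factor by measured/target; a rate below target loosens
    the prescales (smaller factors) and recovers the freed bandwidth."""
    scale = measured_rate_hz / target_rate_hz
    return {item: max(min_ps, ps * scale) for item, ps in prescales.items()}

# Example: the output rate has dropped to 150 Hz against a 200 Hz target.
current = {"EM3": 50.0, "MU4": 20.0, "J18": 10.0}
print(rescale_prescales(current, measured_rate_hz=150.0, target_rate_hz=200.0))
# -> {'EM3': 37.5, 'MU4': 15.0, 'J18': 7.5}
```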

4.9 Monitoring

For successful data-taking, continuous monitoring of the trigger and its performance is essential. The shift crew must be able to react immediately to malfunctions of the system in order to minimise the loss of data. Periods with bad trigger conditions or detector performance have to be identified to allow their exclusion from the off-line data analysis.

Different aspects of the monitoring of the HLT can be considered: the monitoring of the HLT Steering decision and trigger rates, a persistent data quality check of the events processed by the HLT and the operational monitoring of the HLT Steering. Data quality checks are more important for the off-line quality assessment. Rate measurements, on the other hand, are sensitive to the stability of the HLT on-line operation, the accelerator and beam conditions, and the performance of the sub-detectors that are used in the trigger.

The HLT Steering provides a monitoring framework based on ROOT [102] histograms. The individual trigger algorithms running in the HLT use this framework to fill histograms with variables that are sensitive to the trigger behaviour and the algorithms' performance. The Steering code itself is independent of the monitoring code, and the monitoring histograms are configurable within the TriggerDB. All histograms can be filled continuously. They can also be individually reset at convenient data-taking intervals such as runs and luminosity blocks.
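The following PyROOT sketch illustrates how an algorithm-level monitoring histogram of this kind could be booked, filled and reset at run or luminosity-block boundaries; the class and names are illustrative, not the actual HLT Steering monitoring interface.

```python
import ROOT  # PyROOT; the HLT monitoring framework is based on ROOT histograms

class MonitoredHistogram:
    """Illustrative wrapper around a ROOT histogram used for monitoring."""

    def __init__(self, name, title, nbins, xmin, xmax):
        self.hist = ROOT.TH1F(name, title, nbins, xmin, xmax)

    def fill(self, value):
        self.hist.Fill(value)

    def reset(self):
        # invoked e.g. at the start of a new run or luminosity block
        self.hist.Reset()

# Example: LVL2 cluster transverse energy, 50 bins between 0 and 50 GeV
et_mon = MonitoredHistogram("L2CaloEt", "LVL2 cluster E_{T};E_{T} [GeV];ROIs",
                            50, 0.0, 50.0)
et_mon.fill(5.2)   # one cluster with E_T = 5.2 GeV
et_mon.reset()     # new luminosity block: start counting afresh
```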

In the on-line environment the monitoring histograms from the individual HLT farm nodes are combined (summed up or averaged) by the on-line histogramming service (OH) [103] and made available for further processing. The histograms serve as a basis for the on-line and off-line assessment of data quality and the trigger performance monitoring, as well as for software validation of the HLT Steering and algorithm code.
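A minimal sketch of the kind of gathering the histogramming service performs is given below: identically defined histograms from several farm nodes are summed into a single one. This only illustrates the principle; it is not the OH implementation.

```python
import ROOT

def gather(node_histograms):
    """Sum a list of identically binned TH1 histograms (one per HLT node)."""
    combined = node_histograms[0].Clone("combined")
    combined.Reset()
    for h in node_histograms:
        combined.Add(h)
    return combined

# Example with two dummy per-node histograms filled with random entries
node_hists = []
for node in range(2):
    h = ROOT.TH1F("roi_eta_node%d" % node, "ROI #eta;#eta;ROIs", 60, -3.0, 3.0)
    h.FillRandom("gaus", 1000)  # stand-in for real monitoring entries
    node_hists.append(h)

total = gather(node_hists)
print(total.GetEntries())  # about 2000 entries after summing
```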

Footnote 9: Due to the asynchronous and distributed nature of the HLT system, two events processed in parallel by two nodes do not necessarily belong to the same luminosity block. Therefore, HLT nodes have to hold a list of PS sets where each set is associated with one luminosity block number.
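The bookkeeping described in this footnote can be pictured with the following minimal sketch, in which each node keeps a map from luminosity-block number to PS set and looks up the set valid for the event at hand; the class and chain names are hypothetical.

```python
class PrescaleCache:
    """Illustrative per-node cache of PS sets keyed by luminosity block."""

    def __init__(self):
        self._sets_by_lb = {}  # first lumi block of validity -> {chain: PS factor}

    def update(self, lumi_block, prescale_set):
        """Register the PS set that becomes valid at this luminosity block."""
        self._sets_by_lb[lumi_block] = prescale_set

    def lookup(self, event_lumi_block):
        """Return the PS set valid for the luminosity block of the event."""
        valid = [lb for lb in self._sets_by_lb if lb <= event_lumi_block]
        if not valid:
            raise KeyError("no PS set loaded for this luminosity block")
        return self._sets_by_lb[max(valid)]

cache = PrescaleCache()
cache.update(1,   {"L2_e10": 1.0, "L2_mu6": 2.0})
cache.update(120, {"L2_e10": 2.0, "L2_mu6": 4.0})
print(cache.lookup(130))  # an event from lumi block 130 uses the set loaded at 120
```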

Figure 4.9: HLT on-line monitoring examples, based on the SIM data set (see text in Section 4.9). Distributions of the LVL2 cluster algorithm's (T2CaloEgamma) transverse energy (left) and φ (right); in both panels the vertical axis gives the number of ROIs.

In the following, two data sets are used to illustrate the HLT on-line monitoring of trigger algorithms and the Steering, as well as to show the Steering performance and validation tests (Section 4.10). The two data sets are:

SIM data: Simulation of the ATLAS LVL2 and EF, using athenaMT and athenaPT respectively.

The employed raw event data file has been obtained from an official ATLAS enhanced minimum bias sample (√s = 14 TeV). In the enhanced minimum bias sample, the following lowest unprescaled LVL1 trigger selections are applied: EM3, MU4, J18, FJ18, XE25, TE250. This provides a higher-statistics sample with little bias.

COSMIC data: Cosmic ray data taken with the ATLAS detector in the autumn of 2008. The detector and trigger configuration varied over the runs, since many different aspects were studied. It is noteworthy, however, that the trigger system was configured with a special cosmic commissioning setup. Details of the trigger menu will be discussed where needed.

In the example of the LVL2 cluster algorithm (T2CaloEgamma), the list of monitored variables includes: ET, η and φ of the electromagnetic clusters; the ratio of the core-cell energy to the total energy; and the number of electron candidates as a function of the pseudorapidity η. Furthermore, the algorithm's execution time is monitored. Fig. 4.9 shows the T2CaloEgamma on-line monitoring distributions of ET (left) and φ (right), obtained from running on the SIM data set. Note that the ET distribution peaks around 5 GeV, the point where the EM3 trigger is fully efficient.

In addition to the monitoring of variables inside the algorithms, the monitoring of the HLT Steering itself is performed after each trigger level (after the result building). At this stage, access to the full trigger information is available for accepted and also rejected events.

Examples of the Steering on-line monitoring are given in Fig. 4.10, which shows the number of LVL1 ROIs per event for the SIM data set (left) and COSMIC data (right).

Further HLT on-line monitoring examples are shown in Fig. 4.11: the left-hand plot shows the LVL2 muon (muFast) φ distribution, obtained from COSMIC data. As expected, a peak can be seen at φ ≈ 1.6, corresponding to muons originating from cosmic rays (mainly protons) which enter the ATLAS cavern primarily through the access shafts. No peak is present around φ ≈ −1.6 because the muon spectrometer was inactive in that region at that time.


Figure 4.10: The number of LVL1 ROIs per event from SIM data (left) and COSMIC data (right, run 92226), as obtained from the Steering on-line monitoring.

Figure 4.11: Left plot: distribution of LVL2 muons (muFast) in φ from COSMIC data (run 93730). Note that the muon spectrometer around φ ≈ −1.6 was inactive in this run. Right plot: number of data requests to the ROSs per event during LVL2 processing (COSMIC run 92226), for accepted (green/striped) and rejected (orange/filled) events.

The right-hand plot of Fig. 4.11 shows the number of data requests during LVL2 processing, also obtained from COSMIC data, for accepted (green/striped histogram) and rejected (orange/filled histogram) events. Following the early-rejection principle, rejected events cause fewer data requests than accepted events. A certain fraction of the accepted events, however, generates no or only a few data requests. These events have no or only a few ROIs, and are passed through at LVL2 (COSMIC HLT menu).

All LVL2 and EF chain results are monitored separately, before and after pre-scale and pass-through factors have been applied. Furthermore, all chain results are monitored on a per-step level, which allows the trigger's step-wise event reduction to be assessed. The information can also be monitored for groups of chains.

One important client of the chain result monitoring is the HLT on-line rate calculation. Since the HLT is an asynchronous and distributed system, overall trigger rates cannot be calculated at each node separately. Instead, the combined information from all nodes is used. The HLT trigger rates comprise the total-acceptance rate as well as rates of all individual chains at each step. The LVL2 (EF) rates are determined by multiplying the LVL1 (LVL2) input rate with the ratio of accepted to total input events. LVL1 information is available via the CTP.
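The rate determination described above can be summarised by the following minimal sketch; the counter values are invented for the example and the function is not the actual on-line rate calculation code.

```python
def hlt_rate(input_rate_hz, accepted, processed):
    """Output rate of a chain: input rate times the accepted fraction,
    with the counters summed over all HLT farm nodes."""
    return 0.0 if processed == 0 else input_rate_hz * accepted / processed

# Example: LVL1 delivers 40 kHz to LVL2; a chain accepts 1200 of 80000 events
lvl2_rate = hlt_rate(40000.0, accepted=1200, processed=80000)  # 600 Hz
# The EF rate follows in the same way, with the LVL2 output rate as input
ef_rate = hlt_rate(lvl2_rate, accepted=30, processed=1200)     # 15 Hz
print(lvl2_rate, ef_rate)
```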

Extensive timing monitoring has been implemented. It comprises, for both trigger levels, the following timers (a minimal timer sketch is given after the list):

• overall HLT Steering execution time, including all sub-systems and the data retrieval time;

• overall HLT Steering execution time broken down into rejected and accepted events;

• total execution time of the combined trigger chains as well as the individual chains, including all executed algorithms;

• total execution time of each trigger sequence, including all executed algorithms;

• total execution time of each algorithm;

• execution time of the Steering components: result builder, level converter, and the monitoring itself.
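A minimal sketch of such a timer is given below: a context manager accumulates the wall-clock time of a code region under a given name. The class and the timer names are illustrative, not the Steering's actual timer service.

```python
import time

class ScopedTimer:
    """Illustrative scoped timer accumulating wall-clock time per name."""

    totals = {}  # timer name -> accumulated seconds

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        elapsed = time.perf_counter() - self.start
        ScopedTimer.totals[self.name] = ScopedTimer.totals.get(self.name, 0.0) + elapsed

# Example: time one (stand-in) algorithm execution within a chain
with ScopedTimer("L2_e10_calo_algo"):
    sum(range(100000))  # placeholder for the algorithm's work

print(ScopedTimer.totals)
```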

Further monitored quantities include the HLT errors that occur during event processing (cf. Section 4.7). The TE numbers are counted and monitored for every type, including the TEs which represent the LVL1 ROIs (and LVL1 thresholds). The TE monitoring information allows for additional monitoring of the trigger sequences' performance and selectivity. Additionally, the η and φ of the LVL1 and HLT ROIs are monitored in order to spot malfunctioning sub-detectors or triggers. Differences between two trigger steps or levels provide a good handle to evaluate the refined reconstruction in the step-wise event processing.
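As a final illustration, the per-type TE counting described above can be sketched with a simple running counter over TE type names; the TE names used here are made up.

```python
from collections import Counter

te_counts = Counter()  # running tally: TE type name -> number of occurrences

def monitor_event(active_te_names):
    """Accumulate the number of active TEs of each type seen in one event."""
    te_counts.update(active_te_names)

# Two illustrative events with hypothetical TE names
monitor_event(["EM3", "L2_e10calo", "L2_e10id"])
monitor_event(["MU4", "EM3"])
print(te_counts)  # Counter({'EM3': 2, 'L2_e10calo': 1, 'L2_e10id': 1, 'MU4': 1})
```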