
The LHC's first run started in 2010 and ended in 2013; data-taking periods were separated by technical stops and technical shutdowns. A technical stop usually started at the end of the year and ended around spring, signalling the start of a new data-taking period with different beam settings (see Tab. 2.1). The accelerator delivered proton–proton collisions at centre-of-mass energies of √s = 7 TeV and √s = 8 TeV in 2010–2011 and 2012 respectively. Short runs of proton–lead and lead–lead collisions were also part of the LHC's programme and were delivered at the end of each scheduled pp data-taking period.

Figure 2.13 (left) shows the total integrated luminosity of pp collisions delivered by the accelerator and recorded by the ATLAS detector during the years 2011–2012. The instantaneous luminosity can be expressed as a function of the number of inelastic interactions per bunch crossing µ [73]:

L_Lumi = µ n_b f_rev / σ_inel    (2.2)

where σ_inel is the total pp inelastic cross section, n_b is the number of colliding bunch pairs and f_rev is the revolution frequency. The number of inelastic collisions per bunch crossing is also referred to in this document as pile-up. The detector records responses from multiple collisions happening in the same bunch crossing ("in-time pile-up"), but also signal remnants from previous bunch crossings ("out-of-time pile-up"). Several methods exist for measuring µ or µ/σ_inel and for distinguishing the detector response from in-time collisions

Figure 2.13: On the left, the cumulative luminosity versus time delivered to ATLAS (green), recorded by ATLAS (yellow), and certified to be good quality data (blue) during stable beams for pp collisions at 7 and 8 TeV centre-of-mass energy in 2011 and 2012. The delivered luminosity accounts for the luminosity delivered from the start of stable beams until the LHC requests ATLAS to put the detector in a safe standby mode to allow a beam dump or beam studies. The recorded luminosity reflects the DAQ inefficiency, as well as the inefficiency of the so-called warm start: when the stable beam flag is raised, the tracking detectors undergo a ramp of the high voltage and, for the pixel system, the preamplifiers are turned on. The data quality assessment shown corresponds to the "All Good" efficiency shown in Tab. 2.2. The luminosity shown uses the 7 TeV and 8 TeV luminosity calibrations [72]. On the right, the luminosity-weighted distribution of the mean number of interactions per crossing (µ) for the full 2011 and 2012 pp runs is shown [73].

and out-of-time responses. The referenced documentation [73] provides the reader with more details on the detector components used for this purpose.
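As a numerical illustration of Eq. 2.2, the following sketch evaluates the instantaneous luminosity for assumed, representative 2012 running conditions. The values of µ, n_b, f_rev and σ_inel below are illustrative placeholders, not the measured ones used in the calibration:

```python
# Instantaneous luminosity from the mean number of inelastic
# interactions per bunch crossing (Eq. 2.2):
#   L = mu * n_b * f_rev / sigma_inel
# All numerical inputs below are assumed, order-of-magnitude values.

MB_TO_CM2 = 1e-27  # 1 millibarn in cm^2


def inst_luminosity(mu, n_b, f_rev, sigma_inel_mb):
    """Return L in cm^-2 s^-1 given the mean pile-up mu, the number of
    colliding bunch pairs n_b, the revolution frequency f_rev (Hz) and
    the pp inelastic cross section in mb."""
    return mu * n_b * f_rev / (sigma_inel_mb * MB_TO_CM2)


# Illustrative 2012-like parameters (hypothetical):
L = inst_luminosity(mu=20.7, n_b=1380, f_rev=11245.0, sigma_inel_mb=73.0)
print(f"L ~ {L:.2e} cm^-2 s^-1")  # of order 1e33-1e34
```

Inverting the same relation, L σ_inel / (n_b f_rev) recovers µ, which is how a luminosity measurement translates into the pile-up distributions shown in Fig. 2.13 (right).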

A less precise determination of µ can be achieved by using tracking information, of which an example is shown in Fig. 2.14. This method was used as an indicator of the tracking performance when compared to dedicated measurements of µ.
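The tracking-based determination can be caricatured as follows: count the reconstructed primary vertices per event and divide by an assumed vertex reconstruction efficiency. The efficiency value and the event counts below are hypothetical, and at high pile-up nearby vertices merge, making the response non-linear, which is one reason this method is less precise than the dedicated luminosity measurements:

```python
# Sketch of a tracking-based pile-up estimate: mu is approximated by
# the mean number of reconstructed primary vertices per event divided
# by an assumed vertex reconstruction efficiency. Efficiency and event
# counts are hypothetical illustrations, not ATLAS values.


def mu_from_vertices(n_vertices_per_event, eps_vtx=0.7):
    """Estimate mu from a list of per-event reconstructed-vertex
    counts, correcting by an assumed efficiency eps_vtx."""
    mean_n_vtx = sum(n_vertices_per_event) / len(n_vertices_per_event)
    return mean_n_vtx / eps_vtx


# Hypothetical vertex counts from a handful of events:
mu_est = mu_from_vertices([14, 15, 13, 16, 14])
print(f"estimated mu ~ {mu_est:.1f}")
```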

Recorded data are scrutinised by collaborators and a quality assessment is made. Only the portion of data passing stringent quality requirements, associated with DAQ and subsystem performance, is flagged as "good" for further analysis. Table 2.2 shows the portion of good data recorded by the experiment with respect to the total integrated luminosity delivered by the LHC.

The next two sections are dedicated to an overview of the operations and performance of the SCT subsystem.

2.7.1 SCT operational experience

More than 99% of the 6.3 million strips were functional and available for tracking in all data-taking periods. Constant work by shifters and experts during data taking and technical stop periods was crucial in maintaining this high efficiency [60]. The SCT crew consisted of a shifter present at all times in the ATLAS Control Room, with a turnover of 8 hours, and a pool of experts being on call

Figure 2.14: Measurement of µ versus the event time stamp during an ATLAS run. The number of in-time collisions was determined by reconstructing the primary collision vertices. The measurement was performed on-line, i.e. progressively as events were recorded, from the "express" stream. This quantity was used for on-line data quality monitoring.

Year   PIXEL   SCT    TRT    LAr    Tile   MDT    RPC    CSC    TGC
2011   99.8    99.6   99.2   98.7   99.2   99.4   98.8   99.4   99.1
2012   99.9    99.1   99.8   99.1   99.6   99.6   99.8   100.0  99.6

Table 2.2: Fraction of good quality data delivered by the subsystems during data-taking periods in pp collisions for 2011 and 2012 (√s = 7 TeV and √s = 8 TeV respectively). Runs taken between March 13th and October 30th 2011 correspond to ∫ L_Lumi dt = 5.23 fb⁻¹ [74]. Runs recorded between April 4th and December 6th 2012 correspond to ∫ L_Lumi dt = 21.3 fb⁻¹ [75]. Numbers are shown in percent.

in weekly blocks.

The SCT data acquisition (DAQ) proved to be highly reliable, with excellent data-taking efficiency. There are two potential sources of inefficiency: (i) errors from the front-end ASICs, for which data were flagged as "non-usable" for tracking purposes, and (ii) a BUSY signal from the SCT Readout Drivers (RODs) preventing ATLAS from taking data. The operational issues that impacted data-taking efficiency and data quality were as follows, listed in order from the most to the least significant:

1. High occupancy and high rates. In 2012 the SCT operated with a pile-up of up to ∼30 interactions per bunch crossing and an occupancy reaching ∼1%. The high occupancy and rate exposed shortcomings in the DAQ processing and decoding of the data, which led to an increasing rate of BUSYs. Although this was the most significant issue impacting data-taking efficiency, it was mitigated by introducing the ability to disable the source of the BUSY in the ROD, reconfigure the affected modules, and then re-integrate the ROD without interruption to ATLAS data taking.

2. High leakage current. A small number of the SCT modules were assembled using sensors from a different vendor (CiS) compared to the majority (Hamamatsu) [76]. A small but significant fraction of those sensors exhibited high leakage currents at high luminosities, correlated with high noise levels. It is suspected that intense radiation ionises the nitrogen gas surrounding the silicon, and that the corresponding charge accumulated on the oxide is responsible for the increase in current. Between data-taking periods, the bias was decreased to 5 V with respect to the standby value of 50 V, and the high noise and currents were eventually mitigated by reducing the potential difference from the nominal 150 V while keeping it above the depletion voltage of the sensors (typically >90 V).

3. Humidity affecting optical transmitters. The optical transmitters (TXs) used by the RODs to broadcast the commands and triggers to the front-end modules have been problematic in all data taking so far. Individual channel failures within the 12-channel Vertical-Cavity Surface-Emitting Laser (VCSEL) arrays led to a loss of data from modules until the TX was replaced or repaired. Early failures were due to the ingress of humidity into the VCSELs, which was addressed by introducing dry air into the racks. Humidity-resistant VCSEL arrays were installed afterwards.

4. Single event upsets. Single Event Upsets (SEUs) can corrupt front-end chip registers, leading to high or low noise from that chip, or to desynchronisation of the chips with the rest of ATLAS. In 2011, an automatic reconfiguration of individual modules was implemented and invoked when a desynchronisation was detected. In addition, to target SEU-induced noise issues, a global reconfiguration of all modules, with negligible dead-time, was invoked every 30 minutes. With these measures, the fraction of the ∼8000 data links giving errors was typically ∼0.2%.

The increase of ROD BUSYs discussed in item 1 and the significant increase of the leakage current in a portion of the modules at high luminosity discussed in item 2 were dominant in 2012, while items 3 and 4 dominated up to 2011.