
5. EVALUATION OF THE SIMULATOR TRAINING PROVIDED

5.1. Sources of input data

5.1.1. Trainee feedback

Trainee feedback is often referred to as level 1 feedback. If not managed properly, level 1 feedback will fail to provide useful information for improving the simulator training programme. Level 1 feedback from the trainee is normally collected immediately after all aspects of a training session have been completed, while the experience is still fresh in the student's mind and therefore easier for them to recollect and record.

The feedback could be recorded by each student on completion of each training scenario during the day, including the subsequent debriefs, or at the end of the day's training on a specially prepared feedback form. A combination of the two is another alternative; its advantage is that the feedback for each exercise is more likely to contain detail from the students' immediate recollection and experience of what occurred, followed by an overall summary at the end of the day, giving the students time to reflect on and discuss the day's training before providing feedback individually and/or as a team.

Feedback requested at the end of a course that may have run for several days or a week of instruction often does not contain detail useful to the instructors for the development and improvement of the programme. Such feedback can be too general, or rushed by the students, as the detail from previous days of training is no longer fresh in the trainees' memory; they may also no longer feel as strongly about some aspect of the training as they did on completion of the scenario, and so may be less likely to provide that initial feedback, which may have had some significance.

If feedback is requested towards the end of the course, typically after the exam and course summary, the students may feel that the summary discussion was their feedback and are less likely to provide details. Satisfactory results are more likely when feedback is collected before completion of the course summary and exam. The end-of-course summary discussion could be facilitated by the instructor or shift manager as a flip-chart exercise, so that it can be captured by the training team for the final course review to be completed by the department, shift and training managers.

Instructors will usually obtain more considered feedback if they encourage it throughout the course with short, directed questions and comments, rather than as a final 'can you just fill in your feedback form before you leave' request to the students. If the instructor does not demonstrate the value and purpose of the feedback process, the students certainly will not engage with it sufficiently.

Trainees' level 1 feedback is typically collected using pre-prepared feedback/training response questionnaires. It is desirable that each individual completes one, to ensure that their individual thoughts on the course are captured before any team feedback/response is requested from the crew, as group feedback may be overly influenced or controlled by a particularly strong individual or subgroup.

It is considered good practice to ask about the following aspects on the feedback forms, using a scoring system:

• Feedback on the pre-course preparation of the participants (joining instructions, pre-course work);

• Timeliness/effectiveness of the training provided;

• Suitability of the training to the trainee's position/normal duties;

• Level/degree of difficulty of the scenario;

• Simulator fidelity/performance;

• Scenario realism;

• Instructor performance;

• Areas of strength or improvement required.

The scoring facilitates post-course trending, but it is good practice to ensure that students provide supporting comments for the scores they give before including them in any trend analysis. Asking open questions that invite written comments, which can be used to analyse the participants' feedback further, is considered far better practice than scoring alone.
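As a minimal illustration, and not a prescribed tool, the following Python sketch shows one way the scored responses could be aggregated for post-course trending. The question identifiers and form fields are hypothetical; the point is that a score is only counted towards the trend when the trainee has provided a supporting comment, in line with the practice described above.

from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List

# Illustrative question identifiers based on the aspects listed above.
QUESTIONS = [
    "pre_course_preparation",
    "timeliness_effectiveness",
    "suitability_to_duties",
    "scenario_difficulty",
    "simulator_fidelity",
    "scenario_realism",
    "instructor_performance",
]

@dataclass
class FeedbackResponse:
    """One trainee's level 1 feedback form for a scenario or training day."""
    trainee_id: str
    course_id: str
    scores: Dict[str, int] = field(default_factory=dict)    # question -> score 1..5
    comments: Dict[str, str] = field(default_factory=dict)  # question -> free text

def trend_scores(responses: List[FeedbackResponse]) -> Dict[str, float]:
    """Average each question's score, counting only scores backed by a comment."""
    trended = {}
    for question in QUESTIONS:
        supported = [
            r.scores[question]
            for r in responses
            if question in r.scores and r.comments.get(question, "").strip()
        ]
        if supported:
            trended[question] = mean(supported)
    return trended

The open-comment text itself would still be reviewed qualitatively; the numeric trend is only a prompt for that review.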

The process will gain credibility and future support from the MCR (main control room) teams if a summary of the actions arising from their feedback can be discussed and decided upon by the responsible managers, through the relevant training committee, and subsequently fed back to all the shifts for comment as part of the final completion and review of the whole suite of courses provided to the shifts. This facilitates:

• Higher levels of training effectiveness feedback. It is deemed good practice to solicit such feedback a short time after the training, and prior to further continuing/shift revision/assessment training being carried out on the simulator for the MCR crews; for example, by issuing a follow-up questionnaire to the shift 3-6 months after the training, summarising the training provided and asking for feedback on any aspects of the training that related to or supported their role on return to the line, or on any topic that should be considered for the next round of revision/continuing training;

• Encouragement of the MCR crews to raise reports/CRs (condition reports) on good and bad occurrences experienced when working back on the plant that may have been influenced by the training they received.

5.1.2. Instructor reports

Instructor reports on individual and crew performance, training issues and simulator fidelity are summarised on completion of any suite of training courses provided, e.g.:

• Shift revision training provided to all the shifts over several weeks;

• Individual and MCR team 2-3 yearly re-authorisation assessments;

• Continuing, IPTE (infrequently performed test or evolution) or JIT (just-in-time) training for a modification, or infrequent operation training for the shift crews.

Feedback from the tutors on the effect the training has on student performance must go back to line management periodically throughout a long training programme (e.g. initial training over an extended period of 1-2 years). This engages the line manager, at an early stage of an individual's training programme, in helping to rectify any identified weakness reported back to them by the tutors. A plan for remedial action, with line support, will be required to ensure that any weaknesses are worked on and remedied well before the completion of the training programme and before the student appears before an authorisation panel.

After a shift-team course or initial training has been conducted, the performance of the individuals involved needs to be fed back to the line manager for analysis. The shift manager (if not in attendance during the training) and the department manager will need to discuss any remediation of an individual between themselves and decide, at the appropriate overview committee, whether any identified outcomes, trends or gaps require addressing as a department and how they intend to do this.

This is an important stage in closing the loop of the SAT (systematic approach to training) process, enabling assessment of the value of the training provided and the identification of any gaps in process or personnel, so as to facilitate the continuous improvement of the training. For example, gaps may be identified in:

• Operator knowledge, experience and practical application of operator fundamentals;

• Plant performance limitations that might reduce plant availability, or human-machine interfaces in the main control room that may not be of optimum design or implementation;

• Processes or administrative documentation that set the operator up to fail, e.g. missing, incorrect or difficult to understand procedures or processes;

• Simulator performance issues that may have a detrimental effect on the training provided.

Following the SAT process ensures that information gathered during the training is captured and reviewed in a systematic manner, so that any lessons learnt are taken into account for future training improvement.

Decisions and supporting actions for any changes to the training process are taken at the appropriate committee, with membership and input from all those with a vested interest in the required output of the training. Based on the analysis of the reports, the implementation of any changes should go through the SAT process to confirm that the training need is, in fact, still valid.

5.1.3. Managers' feedback

The collective performance of NPP personnel should be continually evaluated to identify any gaps, omissions or AFIs (areas for improvement) in their training programmes. Simulator training provides a unique opportunity for NPP managers to observe performance and reinforce the desired managerial standards for control room personnel in areas such as reactor/plant configuration and control, and responding to infrequent or abnormal operations or to any emergency situations they may encounter and have to react to during an event. This is important for demonstrating high standards of reactor and process safety to all those within the business, to stakeholders and investors, to the wider nuclear community, and to the regulatory bodies and the general public.

Thus, NPP managers, particularly those responsible for control room operations, should establish schedules to periodically observe simulator training activities for all shift teams/crews.

These observations should be collectively evaluated to identify overall strengths and opportunities for improvement in control room standards and personnel performance. Similarly, this performance should be benchmarked against the operator shift performance observed in the plant MCR to ensure that management standards and expectations are being maintained in a consistent manner in the simulator and control room alike. Simulator instructors should also facilitate and participate in such observations and evaluations.

5.1.4. Performance indicators

Identifying trends in performance, to prevent standards drifting away from the desired performance levels over time in any part of the NPP business, is typically supported by identifying and monitoring a number of KPIs (key performance indicators). The operations organization will have a number of KPIs identified across the different facets of its business, which are constantly under review. Operator performance levels can be based on these KPIs, which can therefore be used to monitor and measure the effectiveness of operator training interventions, if used correctly. Well-conceived KPIs that are monitored, measured and acted upon are a powerful tool for prioritizing and directing resources to the aspects of the business that yield the biggest effect or gain in safety and production for the time and money spent. Reviewing performance improvement or degradation, with KPIs as one tool, can indicate or predict degraded training programme trends and/or effectiveness before they result in an unwanted event on the actual plant.

The principal challenge is to identify appropriate KPIs, to monitor and trend the information they provide accurately, and to review the information obtained for its significance. These indicators should be based upon, and have a visible link to, the organization's current policies, philosophies and principles, leading to the setting of departmental aims, goals and objectives. The relevant departmental manager is then made responsible for providing the resources to enable their team/department to act upon these KPIs accordingly.

Detailed operational KPIs could result from safety and/or performance issues highlighted when the NPP examines occurrences and their underlying causes, such as:

• Technical Specification non-conformances (operations rule breaches at varying levels of significance);

• Number of planned/controlled reactor shutdowns;

• Number of unplanned reactor shutdowns or trips;

• Safety rules non-conformances;

• Plant configuration and control misalignments, etc.

Some of the shortcomings and causal factors above may lead to training being highlighted as one of the tools for rectifying the problem. This in turn might lead to more specific KPIs being monitored, to confirm that any adopted training remedy is actually trained upon, is measured and has the desired effect on performance.
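As a hedged example of how such a training remedy might be checked against a KPI, the short Python sketch below compares the mean value of a KPI (for example, monthly unplanned trip counts) before and after a training intervention date. The function name and data layout are illustrative assumptions rather than part of any standard KPI system, and a favourable shift would still need causal review before being credited to the training.

from datetime import date
from statistics import mean
from typing import List, Optional, Tuple

def intervention_effect(
    kpi_series: List[Tuple[date, float]], intervention_date: date
) -> Optional[float]:
    """Change in the mean KPI value after a training intervention.

    For a KPI that counts undesirable events (e.g. unplanned trips), a
    negative result suggests the intervention may have had the desired
    effect; a positive result flags the need for further review.
    """
    before = [value for when, value in kpi_series if when < intervention_date]
    after = [value for when, value in kpi_series if when >= intervention_date]
    if not before or not after:
        return None  # not enough history on one side of the intervention
    return mean(after) - mean(before)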

Examples of interventions leading to operator-training-specific KPIs related to simulator training, which could influence the KPIs above, include:

• Operator training hours over a set period (operator time on simulator scenarios);

• Review of failures in the operator fundamentals, analysed, measured and assessed in their component parts to see whether there are common themes or weaknesses in:

  - Knowledge (e.g. operator knowledge related to unit trips);

  - Monitoring;

  - Controlling;

  - Teamwork (including human factors related events);

  - Decision making;

• Periodic assessment performance/exam results;

• Failures to apply performance tools to the required standard;

• Simulator availability or downtime;

• Cancelled training activities.

Continually reviewing and trending these KPI values, amongst any others identified, is seen as good operational practice and is one of the ways an NPP can demonstrate its ability to meet compliance conditions to its regulator. Alongside recording and trending plant performance, the KPIs should include a log of which training observations and coaching activities have been carried out, and by whom. The aim is to relate plant performance issues, whether human or machine related, to any missing or incorrect training.
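A minimal sketch of how the observation/coaching log and a KPI trend check could sit side by side is given below, assuming a simple rolling-window comparison; the record fields, window size and function names are illustrative only and not taken from any particular utility's system.

from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import List, Optional

@dataclass
class ObservationRecord:
    """Log entry for a training observation or coaching activity."""
    observed_on: date
    observer: str     # e.g. shift manager, department manager, instructor
    activity: str     # e.g. "simulator crew observation", "MCR coaching"
    notes: str = ""

def kpi_drift(values: List[float], window: int = 6) -> Optional[float]:
    """Compare the recent rolling-window mean of a KPI with its earlier baseline.

    A sustained adverse drift would prompt a review of the observation log
    and recent training activity to see whether a training gap, rather than
    a plant or process issue, is the more likely contributor.
    """
    if len(values) < 2 * window:
        return None  # not enough history for both a baseline and a recent window
    recent = mean(values[-window:])
    baseline = mean(values[:-window])
    return recent - baseline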

5.2. Methods