Professor in Social Psychology/ Department of Social Psychology/ University of Almería/ La Cañada de San Urbano/ 04120-Almería, Spain
+34 950 015261/ ealonso@ual.es
Llacuna-Morera, Jaume
Lecturer in Social Psychology/ Department of Social Psychology/ University of Barcelona/ Mundet, Ponent, PG. Vall d’Hebron, 171/ 08035-Barcelona, Spain
jaumellacuna@ub.edu
Pozo-Muñoz, Carmen
Professor in Social Psychology/ Department of Social Psychology/ University of Almería/ La Cañada de San Urbano/ 04120-Almería, Spain
+34 950 015732/ cpozo@ual.es
Martos-Méndez, Mª José
Lecturer in Social Psychology/ Department of Social Psychology/ University of Almería/ La Cañada de San Urbano/ 04120-Almería, Spain
+34 950 015732/ mjmartos@ual.es
Salvador-Ferrer, Carmen Mª
Lecturer in Social Psychology/ Department of Social Psychology/ University of Almería/ La Cañada de San Urbano/ 04120-Almería, Spain
+34 950 015732/ cmsalva@ual.es
ABSTRACT
Evaluation plays an important role: it provides a systematic procedure for obtaining information about the training needs of an organisation's staff with regard to risk prevention and, consequently, pinpoints the aspects that need to be stressed in training. This study proposes a guide for evaluating training programmes on risk prevention at work, making it possible to analyse the basic elements of the training, i.e., the definition of the prevention objectives, the design of the programme, the implementation of the plan, its results and its impact.
Key words:
Programme, Training, Evaluation, Prevention
INTRODUCTION
The role of training in the area of risk prevention at work is crucial in providing the employees involved with the competencies required for their job. To this end, it is necessary to design training programmes that lead organisations to apply risk prevention principles effectively, encouraging an appropriate preventive culture in the work environment rather than remaining a mere statement of intention. Merely formal or bureaucratic fulfilment of preventive duties entails a transmission of information that lacks persuasiveness and assertiveness and will not encourage the retention, and hence the application, of the contents and knowledge acquired. Thus, in order to avoid this situation it is necessary to devise coherent and efficient training plans that allow us to attain specific operational results that can be observed and assessed [1].
For all these reasons, we must point out that the learning acquired through these training programmes should bring about a change in the way risk is perceived. This perception factor plays a fundamental role insofar as individuals are regarded as essentially cognitive beings who seek and process information rationally. However, psychosocial factors must also be taken into consideration, since they challenge the view of individuals as merely cognitive beings. Thus, training programmes on risk prevention at work aimed at modifying reckless behaviour in workers must take into account, besides cognitive aspects (information, knowledge, etc.), certain psychosocial factors (attitudes, rules, behaviour habits, etc.) that influence the perception of risk, as well as other variables of a more global or macrosocial nature that refer to the improvement of the work environment [2].
It is in this context that evaluation becomes a necessary instrument for extracting information on the shortcomings of an organisation and on the skills, abilities or behaviour of the individuals within it (identifying their problems and analysing their needs), thus revealing which aspects need to be emphasised for the training to be appropriate. Specifically, evaluation allows us to define the objectives of training, and to establish in which areas training is required and who needs to be trained, thereby making it possible to select the different groups or agents to participate in this activity [3]. Training programmes can then be designed, organised and implemented with a view to improving professional competencies with regard to working conditions. Likewise, through evaluation we can learn the results obtained by the programmes in question (summative or results evaluation) and, at the same time, verify whether the objectives have been achieved (efficacy) and what impact the results have on the organisation (impact evaluation) [4].
In conclusion, given the need to put into action plans that do not promote purely formal training, it is necessary to analyse beforehand their basic characteristics, the theories on which they are based, and the limitations or barriers that may be encountered in their implementation; for these purposes, evaluation becomes all the more vital.
PHASES OF THE EVALUATION OF TRAINING PROGRAMMES
We shall now present the main phases of which the evaluation of risk prevention training programmes should consist. From the outset we adopt a notion of evaluation focused on improving and perfecting training activities; hence our choice of combining a summative evaluation (carried out at the end of the programme to specify the achievements attained) with a formative evaluation (which takes place while the programme is being implemented, in order to improve it) [5].
Approach to Evaluation
Table 1 briefly summarises the activities of this phase, during which the evaluator holds a meeting with the managers or those responsible for the training programme in order to clarify relevant matters linked to the evaluation. It should be borne in mind that at this first meeting important questions are decided, such as the type of evaluation to be developed and the aims it pursues, which will guide the whole subsequent evaluation process.
Table 1. Evaluation activities: evaluation approach
• Title and venue of the training programme
• The aim of the evaluation
• Type of evaluation to be carried out
Planning Phase
Needs: At this stage, it is necessary to examine the needs analysis carried out by those responsible for the programme design before its implementation. Identifying who is affected by the problem will be the key result of this phase. The training programme designers will have identified the people affected by the problem, described their basic characteristics, and identified and prioritised their needs, so that the training focuses on the most relevant ones [6].
Objectives: Once the needs have been identified and prioritised, we must ensure that the training programme seeks to respond to them. This is the time to analyse the proposed objectives in order to detect whether they correspond to the needs, whether they are formulated precisely, and whether their fulfilment would confirm that the problem of the target group has been eliminated or at least mitigated. If the programme objectives are not clearly defined, the evaluator should list and define them correctly. It is also necessary to distinguish between the goal, the general objective and the specific objectives.
Pre-evaluation: This section has a key function in the planning process, for it is at this stage that it is decided which actions or interventions are most appropriate to solve the problems detected at the beginning. To this end, besides a clear specification of the problem situation and the most appropriate solution for it, it is necessary to review training programmes that have achieved good results in other contexts, to examine the specialised literature on risk prevention, and to analyse the resources that the training programme will require in order to accomplish the planned aims. The components of this phase are summarised in Table 2.
Table 2. Evaluation activities: planning evaluation
Needs
• Problem at which the programme is aimed: causal theory (antecedents and consequents)
• Judgement of the pertinence of the programme
Objectives
• Identification of the goals of the programme
Pre-evaluation
• Pilot or experimental studies
• Potential assessment barriers
Programme Design Phase
There are two key requirements for any training programme on risk prevention at work: firstly, it should be fully specified (the actions, means, resources, users and every other detail concerning the programme should be set down in writing); secondly, it must be designed in an organised manner. If the programme fulfils these requirements, the evaluator's task will be greatly simplified.
The tasks to be developed in this phase are described in Table 3. Beyond the activities listed, it must be clear that the programme has to be adapted at every step to the previously established objectives. This will allow us to express an evaluation, at least at the theoretical level, of whether the training programme is sufficient and appropriate to meet the aims set from the start and consequently pursued through the planned activities.
Table 3. Evaluation activities: evaluation of the programme design
Programme Implementation Evaluation: Formative Evaluation
Implementing the programme entails putting it into practice exactly as it was initially designed; its evaluation therefore seeks to determine to what extent implementation meets all the requirements established in the programme, both with regard to the actions and to the rest of the elements (timetable, predicted users, methodology, use of resources, etc.), in order to improve it. This type of evaluation likewise allows us to establish not only whether the programme components are being applied correctly, but also whether the training programme is progressing consistently, whether the whole target group is included and whether it is effective in achieving the goals set [7].
To carry out this type of evaluation, we need to decide what kind of information we wish to obtain, in other words, which programme variables need to be analysed. To a great extent, these decisions are determined by the aims of the evaluation and the type of information required by those who are going to make use of it [8]. Thus, depending on the aspects we intend to evaluate, a decision will have to be made about the instruments required to gather the information and the sources from which it will be obtained. If no specific techniques were available to evaluate what we have in mind, it would be necessary to design them ad hoc.
Without attempting an exhaustive compilation, the evaluator's tasks in this phase can be seen in Table 4. To summarise, process or formative evaluation should analyse each stage of the implementation of the programme, so that it can be established whether its functioning matches what was previously programmed.
Table 4. Evaluation activities: programme implementation evaluation (formative evaluation)
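As a purely illustrative sketch, using indicators assumed here rather than prescribed by the programme, the degree to which implementation matches the original design can be summarised with two simple proportions:

Fidelity = number of activities delivered as designed / number of activities planned
Coverage = number of target-group members actually reached / size of the target group

Values close to 1 would suggest that the programme is being applied as planned and is reaching its intended users; substantial deviations would point to aspects of the implementation that need correcting.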
Evaluation of the Results: Summative Evaluation
Once the programme has been implemented, the evaluation of its results, effects and impact can start. This evaluation therefore complements the formative evaluation carried out in the previous phase. It needs to be stressed that this type of evaluation aims at reporting the most representative findings, thoroughly examining the programme and fulfilling its social responsibility [9].
This evaluation phase will again lead us to express a series of judgements, in this case concerning efficacy (the level of compliance with the intervention objectives), effectiveness (other effects produced by the programme that were not included in its planning), efficiency (the cost/benefit relation) and, finally, the impact of the programme (the effects of the programme on the non-target population and on the implementation context). Table 5 shows the typical evaluation activities that can be developed at this stage.
To be able to assess the efficacy, effectiveness, efficiency and impact of the training programme and to draw conclusions from the final results of the evaluation, it is crucial to be especially careful when recording and analysing data, deciding which indicators to use to report the intervention results, which measuring instruments to employ to verify the information, and from which sources it will come. The evaluation design we have chosen will thus guide the whole process of gathering and analysing data. We must bear in mind that the data will have to be converted into results, and the results will have to be duly justified in the final report, providing answers to the questions asked by those who requested the evaluation.
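As an illustration only, with assumed rather than prescribed indicators, the efficacy and efficiency criteria defined above can be expressed as simple ratios:

Efficacy = number of intervention objectives achieved / number of intervention objectives set
Efficiency = value of the benefits obtained / total cost of the programme

For example, a programme that meets four of its five stated objectives would show an efficacy of 0.8; whether that figure is judged satisfactory depends on the criteria agreed with those who requested the evaluation.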
Table 5. Evaluation activities: results evaluation (summative evaluation)
Final Evaluation Report
Once the evaluation of the training programme is finished, reporting on the results is the final stage of the process. This guarantees that information on the weaknesses and strengths found is relayed to the different decision-making stakeholders. However, the report cannot be just a dossier listing the most significant findings; these have to be accompanied by the value judgements resulting from the evaluative activities and by the most important recommendations for perfecting and improving the programmes. The report should be adapted to the needs of the audience it targets in order to guarantee an appropriate use of the evaluation results: it should be clear and concise, written in simple language free of technical jargon, and the information should be structured on the basis of the different phases of the evaluation process previously described.
REFERENCES
1. Llacuna, J. (2006). Proscribir el cumplimiento meramente formal de la enseñanza. Seguridad y Salud en el Trabajo, 39, 16-23.
2. Alonso-Morillejo, E. & Pozo, C. (2002). La percepción del riesgo en la prevención de accidentes laborales. Apuntes de Psicología, 20 (3), 415-426.
3. Alonso-Morillejo, E., Pozo, C. & Martos, M.J. (2008). Intervención Psicosocial y Evaluación de Programas en el Ámbito de la Salud. Jaén: Formación Alcalá.
4. Alonso-Morillejo, E., Pozo, C. & Hernández, J.M. (1999). La prevención de riesgos laborales en la organización: propuesta de un programa de formación. Apuntes de Psicología, 17 (1-2), 137-146.
5. Scriven, M. (1996). Types of evaluation and types of evaluators. Evaluation Practice, 17, 151-161.
6. Altschuld, J.W. & Witkin, B.R. (2000). From needs assessment to action: Transforming needs into solution strategies. Thousand Oaks: Sage.
7. Pozo, C., Alonso-Morillejo, E. & Hernández, S. (2004). Teoría, modelos y métodos en evaluación de programas. Granada: Grupo Editorial Universitario.
8. Chen, H.T. (1996). A comprehensive typology for program evaluation. Evaluation Practice, 17, 121-130.
9. Wholey, J.S. (1996). Formative and summative evaluation: Related issues in performance measurement. Evaluation Practice, 17, 145-146.