The issue of training evaluation raises several questions:
Why is evaluation being done?
What is being evaluated?
Who should set the learning standards?
Who will be conducting the evaluation, i.e., who will judge the results of the training (participants, facilitators, both of these, outside individuals or groups)?
How is the evaluation to be done, i.e., how will results be monitored/evaluated? By what measures? By what criteria?
The answers to the first two questions will help to answer the overall question: “Should evaluation be done?” Evaluation is not always necessary, and unnecessary evaluation may not be a good idea, because it is time consuming and expensive and because it generates expectations that something will be done with the data obtained. So the answer to the “should” question almost always is either “Yes, if . . .” or “Not unless . . . .” Yes, if it is driven by a purpose: to determine something or to justify something. No, if the results will not be used, if the trainers or the client do not care what the results are, or if the subject matter or results may be too sensitive.
The purpose of evaluation is to obtain information. Before initiating or agreeing to an evaluation effort, it is wise to ask: What kind of information do you need? What kinds of questions are you trying to answer? What questions will give you that information?
The impetus to begin training and development in an organization often comes from management’s belief that training is an important benefit to employees, that it is a worthwhile investment, and that it will help employees to fulfill their potential. However, management also hopes that it will increase personal and job satisfaction, increase motivation and productivity, and decrease turnover.

In today’s organizations, the emphasis often is on “the bottom line,” return on investment. Managers and others who contract for training programs need to understand that it is virtually impossible to measure the effects of training in such terms. One would have to measure all the other factors in the organization, over a stipulated period of time, in order to isolate the part that training played. Obviously, this would be almost impossible, and certainly more time consuming and expensive than is realistic. However, many managers still ask for training to be measured in terms of “increased productivity” or “effect on morale” or similar results. The HRD staff must educate such people in the realities of measurement and research. Behavior does not change at the moment of training alone. A host of personal and organizational factors affect how well the training “takes” and whether changed attitudes or behaviors are permitted, supported, and reinforced in the workplace. Too often, the people who expect an evaluation are as confused about what is to be measured as they are about why the evaluation is being done.
Probably the best reason for evaluating training is to help the facilitators to examine the design and to improve it, if necessary. Probably the worst reason is to prove that the training was worth the time and effort that it took. If those who are sponsoring the training (this problem occurs primarily in organizational contexts) do not understand the intangible effects of human resource development, the trainers would be wise to educate them or to seek work elsewhere.
What can be measured realistically is whether the participants were satisfied with the training; whether they felt valued because of having been offered the training; whether they thought it was interesting, helpful, or useful; and whether they think that they will use the skills, change their attitudes or behaviors, or have achieved some type of self-development as a result of the training. Some discrete skills also can be measured in a short period of time.
The most important thing in deciding to do evaluation is to be clear about why you are doing it, what or whom you are doing it for, and what or whom you are evaluating. Evaluation done for the purpose of justification is different from evaluation done for the purpose of documentation, and that is quite different from evaluation done to determine something.
The evaluation forms or survey materials should be geared toward obtaining the responses or the quantity and quality of information that you need. For example, justification might include the need to show that the trainees were satisfied with the training. The evaluation form then would not ask “Were you satisfied with the training?”; rather, it would contain questions such as “Which activity (or part of the training) was the most satisfying?” The report then could say that the data show that ____ percent of the trainees found ____ portion of the training to be the most satisfying. For documentation, you may need to show that so many people attended, that there was follow-up, or that the training was timely or was what was requested; or you may need to keep a head count in order to show that so many people were trained per year or that so many managers were included in the HRD efforts. In order to determine something, you need to frame the inquiry so as to elicit useful information (e.g., What other job skills would be useful in this training program? How do you plan to use this training?). The techniques used to obtain information for evaluation purposes are basically the same as those used to obtain information for the needs assessment.
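The percentage reporting described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original text: the question wording and the response labels (“role play,” “lecture,” “case study”) are hypothetical examples of the kind of answers an open multiple-choice item might collect.

```python
from collections import Counter

def summarize(responses):
    """Tally multiple-choice answers and return each option's share (percent).

    `responses` is a list of the options chosen, e.g. the answers to
    "Which part of the training was the most satisfying?"
    """
    counts = Counter(responses)
    total = len(responses)
    return {option: round(100 * n / total, 1) for option, n in counts.items()}

# Hypothetical answers from eight trainees:
answers = ["role play", "lecture", "role play", "role play",
           "case study", "role play", "lecture", "case study"]
print(summarize(answers))
# {'role play': 50.0, 'lecture': 25.0, 'case study': 25.0}
```

A summary like this supports the justification report (“50 percent of the trainees found the role play to be the most satisfying”) without ever asking the bald question “Were you satisfied?”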
End of Training Questionnaire
Instructions: The organizers of this program want your frank evaluation of its value and how it was conducted. You need not write your name on this questionnaire; no attempt will be made to identify the responses of any individual. It is hoped that your replies will be useful in improving future programs. For each multiple-choice item, write a check mark in the box next to the response that is most appropriate for you. When comments are called for, please print.
- I regard this program as:
o Very valuable for my work.
o Definitely useful for my work.
o Somewhat useful for my work.
o Of little or no use for my work.
- The instruction in this program was:
o Very interesting and highly effective.
o Fairly interesting and reasonably effective.
o Marginally satisfactory.
o Boring—should be improved.
- The best feature of this program was __________
- The feature of this program that I found least satisfactory was __________
- The investment made by my organization in my training in this program:
o In the long run will pay big dividends.
o Should be considered worthwhile.
o Is neither good nor bad.
o Should be considered a waste of money.
- To someone in my situation, I would recommend this program:
o As very good.
o With reservations.
o As something to avoid.
Follow-Up Questionnaire
Instructions: Some time ago you participated in (give title and place of program). The organizers of this program are interested in your present evaluation of the program and the degree to which it has been helpful to you in your work. Please print all comments.
- Read the responses listed below and check the box next to the one that best indicates how useful the program turned out to be in terms of helping you in your work:
o Valuable but not indispensable.
o Somewhat useful.
o Of no value.
- The most useful aspect was __________
- The part that was least useful or least satisfactory was __________
- During the training, did you establish a goal to achieve at work? Check the appropriate box.
o Yes.
o No.
- If you answered yes, what was the goal, how did you attempt to achieve it, and what were the results?
- If you achieved your goal satisfactorily, try to estimate the dollar value of your success to the organization over a period of one year.__________
- What suggestions do you have for improving this training?
- List topics or areas that should have been included or emphasized more strongly so that the program would have had greater value for you.
- What topics might be dropped or given less emphasis?
If the training facilitators are not to be involved in the evaluation phase, they should be permitted to review the evaluation methods and to know who the evaluators will be. This is necessary for two reasons. The first is that one cannot design effectively until one knows what will be evaluated. When the goals of the training and the outcomes to be measured are specified clearly and are related to each other, the training staff has a clear notion of what to design for.
The second reason to ask questions about evaluation before beginning is related to professional ethics, if not self-preservation. If it is not clear that the evaluation has a realistic purpose, that the proper issues or people are being assessed, that the methodology suits the purpose, and that the evaluators are qualified to conduct the inquiry, then the facilitators may well question whether they want to accept a training assignment that will be evaluated inappropriately.