The world of vocational training is undergoing profound changes that are leading it to integrate numerous complementary formats: e-learning (online learning), face-to-face classroom courses, distance learning, collective and individual formats and, in the context of digital learning, synchronous distance learning (virtual classrooms) and asynchronous distance learning (social learning on forums).
The so-called Blended Learning approach is today an effective way of bringing these different modalities together. Blended Learning takes the strengths of each approach and integrates them at the right time into a coherent training programme, so that knowledge reaches learners more effectively.
In this context, various studies show that Blended Learning is experiencing sustained development, with steady double-digit growth in recent years.
In its 6th annual survey, the ITSF reported that 74% of companies that deploy training courses have already used Blended Learning (Source). This appetite for Blended Learning concerns companies of all sizes, with particularly strong interest among large and medium-sized companies (ETIs).
Evaluating Blended Learning programmes from the LMS training platform
As we have said, the Blended Learning approach is developing very strongly and generating appreciable results.
Such a programme nevertheless requires supervision, management and evaluation, with training and pedagogical engineering adapted to each project. In this respect, evaluation occupies a special place. The approach created in the 1950s by Donald Kirkpatrick is a reference for evaluating traditional training plans and applies perfectly to an exhaustive evaluation of Blended Learning campaigns. It addresses four levels of evaluation (Reaction, Learning, Behaviour, Results), as shown in the diagram below:
(source : https://www.formaperf.eu/)
These four levels of Kirkpatrick's evaluation simply need to be adapted to the multimodal nature of Blended Learning training.
In this context, new indicators specific to digital learning make it possible to see whether the training has been a success. LMS platforms automatically feed back a great deal of information: training completion rate, time spent on each module, and so on.
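As a minimal sketch of how such indicators could be derived from a raw LMS activity export (the record shape and field names `learner`, `module`, `completed` and `minutes` are illustrative assumptions, not a real LMS API):

```python
from collections import defaultdict

def lms_indicators(records, total_modules):
    """Compute completion rate and time spent per learner
    from raw LMS activity records (one record per module attempt)."""
    completed = defaultdict(set)
    minutes = defaultdict(int)
    for r in records:
        if r["completed"]:
            completed[r["learner"]].add(r["module"])
        minutes[r["learner"]] += r["minutes"]
    return {
        learner: {
            "completion_rate": len(completed[learner]) / total_modules,
            "time_spent_min": minutes[learner],
        }
        for learner in minutes
    }

# Hypothetical export: two learners on a three-module course
records = [
    {"learner": "alice", "module": "m1", "completed": True,  "minutes": 20},
    {"learner": "alice", "module": "m2", "completed": True,  "minutes": 35},
    {"learner": "bob",   "module": "m1", "completed": True,  "minutes": 25},
    {"learner": "bob",   "module": "m2", "completed": False, "minutes": 10},
]
stats = lms_indicators(records, total_modules=3)
```

The same aggregation logic applies whatever the platform, as long as activity can be exported per learner and per module.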
As for employees' immediate ("hot") and delayed ("cold") feedback, it can be gathered by means of a simple survey on the LMS.
The last level of evaluation, according to Kirkpatrick, relates to the actual increase in the skills of the trained employees, and allows the company to derive the ROI (return on investment) of the training it has deployed for its teams.
This stage of the evaluation is the most complex, because it requires deploying other evaluation tools at the employees' workstations and collecting this data both before and after the training in order to obtain a real comparison.
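The principle of this before/after comparison can be sketched as follows (the employee names and scores are purely illustrative): pair each employee's pre-training and post-training assessment scores and measure the gain.

```python
def skill_gains(pre_scores, post_scores):
    """Pair pre- and post-training assessment scores per employee
    and return the individual gains plus the average gain."""
    gains = {
        emp: post_scores[emp] - pre
        for emp, pre in pre_scores.items()
        if emp in post_scores  # keep only employees assessed both times
    }
    avg = sum(gains.values()) / len(gains) if gains else 0.0
    return gains, avg

# Hypothetical assessment scores out of 100
pre  = {"alice": 55, "bob": 60, "carol": 70}
post = {"alice": 75, "bob": 68}  # carol missed the post-assessment
gains, avg_gain = skill_gains(pre, post)
```

Note that employees missing either assessment are excluded, which is exactly the practical difficulty the text describes: without both data points, no real comparison is possible.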
It is also a time-consuming evaluation stage that can raise reservations about the direct link between training and production, especially if the organization introduced new production processes during the training period, or if another change outside the training itself affected production.
All these complementary elements can contribute to the full success of a training plan, and they bring us into a very fine-grained analysis of additional data.
Training teams must therefore take this important point into account before launching courses and help to define relevant evaluation indicators (drafting specifications, building tools, implementing evaluation processes, etc.).
Evaluation engineering, an integral part of training engineering
One thing is certain: it is important to be able to manage and evaluate the entire training system from a single point.
Assessment criteria must then be taken into account, such as multiple evaluation questionnaires (which can be carried out in different environments), interviews with employees, and surveys on the qualitative aspects of the training: skills acquired, skills development and skills transfer.
In parallel with these elements, factual data analysis must be carried out on the LMS platform and/or within the Information System (IS): training completion rates, participation rates, scores achieved, connection times, activity on the platform, etc.
Through these different elements, it is then possible to put in place the necessary foundations to carry out a 360° evaluation.
Beyond these first structuring elements, it is then necessary to define evaluation criteria at two levels: for trained employees, but also for training managers.
It should also be noted that the indicators should not be set in stone, but should be continuously enriched for an ever finer analysis and evaluation of the training provided. It is also important not to wait until the end of the training course to evaluate the quality of the tutoring and to gather learner feedback throughout the course.
Like the open and constantly changing professional world in which we live, training and evaluation systems must be flexible, agile and dynamic.
Evaluate to increase the value of the training program and offering
Remember that the purpose of the assessment is to enter into a cycle of continuous improvement. The objective is to measure how curricula and formats can evolve by making relevant changes.
Once these improvements have been defined, it will then be necessary to develop an action plan and manage the schedule to integrate these changes and launch the new training courses.
It is also important to remember that training represents an increasingly large share of a company's budget. The impact and quality of the courses provided must therefore be quickly measurable.
The aim here is to preserve, or rather optimise, companies' capacity to invest in training by delivering campaigns that genuinely develop employees' skills. The notion of return on investment (ROI) is therefore a fundamental criterion in the evaluation system. At Rise Up, at the request of our clients, we cross-reference their internal data with our training data to derive an ROI.
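As an illustration of the principle only (the figures and the way the benefit is monetised are assumptions, not Rise Up's actual method), the classic training ROI is the net gain relative to cost:

```python
def training_roi(total_cost, monetised_benefit):
    """Classic ROI formula: (benefit - cost) / cost."""
    return (monetised_benefit - total_cost) / total_cost

# Hypothetical figures: a 40,000 euro campaign whose estimated
# productivity benefit is valued at 52,000 euros
roi = training_roi(40_000, 52_000)  # 0.30, i.e. a 30% return
```

The hard part in practice is not the formula but the monetisation step: turning skills gains into a credible benefit figure, which is precisely why the before/after measurement described above matters.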
Facilitate the interpretation of results for training managers and teams
In order to properly evaluate the relevance of training campaigns (in a multi-channel logic), it is also important to be able to interpret the results easily. Training teams must therefore be able to access summaries, indicators and visualisations from a single point. Some LMSs make this possible, such as Rise Up, in particular through its Data Lab service and its predefined evaluation dashboards.
A good evaluation system therefore also relies on the ability to use ergonomic digital tools designed to synthesize and integrate all the evaluation indicators mentioned above.
Be careful not to underestimate this key element, which will serve as a real dashboard for the teams in charge of training management.
The evaluation of training courses is thus a strategic issue. Its complexity leads companies and organizations to raise their game in order to successfully carry out a project that integrates many variables and many formats.
It is by taking all this data into account that the evaluation can be treated as a whole, and that training managers can truly analyze the relevance and effectiveness of their training offer.