Data on the specific indicators is collected continuously during the implementation of the project. This provides constant information on the project’s progress, the objectives achieved and the use of the available means.
Many organisations have introduced deadline, cost and quality controls for this purpose. They compare activity plans with interim reports on outputs, and budgets with actual expenditure. From time to time they inspect the on-site implementation of activities for themselves.
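The plan-versus-actual comparison described above can be sketched in a few lines of code. This is a minimal illustration only; the items, figures and the 10% review threshold are hypothetical assumptions, not part of the source.

```python
# Minimal plan-versus-actual check; all figures and the 10%
# threshold are illustrative assumptions, not prescribed values.

PLAN = {"training_sessions": 20, "expenditure": 50_000}
ACTUAL = {"training_sessions": 14, "expenditure": 46_500}


def variance(plan: float, actual: float) -> float:
    """Relative deviation of the actual value from the plan."""
    return (actual - plan) / plan


for item in PLAN:
    dev = variance(PLAN[item], ACTUAL[item])
    # Deviations beyond the (assumed) tolerance are flagged for analysis.
    flag = "REVIEW" if abs(dev) > 0.10 else "ok"
    print(f"{item}: plan={PLAN[item]}, actual={ACTUAL[item]}, "
          f"deviation={dev:+.0%} [{flag}]")
```

In this sketch, a flagged item does not by itself trigger a corrective measure; it marks where the reasons for and consequences of the deviation should be analysed.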
One of the roles of monitoring is to ensure that the data required for outcome and impact assessment is collected and recorded reliably. Interfaces with project and risk management may need to be clarified. There is no consensus among experts on whether a monitoring system should cover only the output level, with outcome and impact data added later, or whether it should include the outcome and impact levels as well, as proposed, for example, by the World Bank’s Results-Based Monitoring approach.
If actual service provision is found to diverge from the planned output, the reasons for and consequences of this divergence should be analysed. Corrective measures can then be taken so that the planned results are not jeopardised.
In practice, there is a danger that the monitoring system will be either too superficial or too complicated. In the first case, too little data is collected; in the second, the result is so-called “data cemeteries” that are never used – planning is often too ambitious, and the chosen indicators cannot be measured. Such a monitoring system is quickly abandoned. As early as the planning stage, adequate attention should therefore be paid to ensuring that the monitoring system is feasible and that the effort is commensurate.