
Copyright © 2025
Print ISSN: 2960-1541
Online ISSN: 2960-155X
Inclusive Society Institute
PO Box 12609
Mill Street
Cape Town, 8000
South Africa
235-515 NPO

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means without permission in writing from the Inclusive Society Institute.

DISCLAIMER
Views expressed in this report do not necessarily represent the views of
the Inclusive Society Institute or those of its Board or Council
members.
JANUARY 2025
by Nondumiso Alice Sithole
Abstract
Monitoring and evaluation (M&E) systems are the heartbeat of effective public management. Although these systems were often neglected by governments in the past, they have become a critical part of the functioning of a democratic society. And yet, in many developing and emerging countries, M&E systems are underdeveloped and insufficiently institutionalised. A recent study showed that, while 91% of national development strategies approved after 2015 explicitly refer to the 2030 Agenda and SDGs, only 35% of them have the required data and systems to track implementation. The study found a lack of a conducive legal and regulatory environment, insufficient capabilities, weak accountability mechanisms, and fragile frameworks to institutionalise the use of M&E. In South Africa, the ill effects are particularly evident at municipal level, where, despite improvements in access to basic services, the outcomes are still well below standard.
This paper investigates the challenges and possibilities M&E systems present. It explains how the stand-alone concepts of monitoring and evaluation are architecturally intertwined. It touches on the various types of M&E systems or methods and explores both the challenges of implementing these systems and the way forward. Finally, the paper offers practical recommendations for the actions that stakeholders, namely NGOs, think tanks, and governments, should take with regard to M&E.
Introduction
There is continuous expansion and development of “think tank” structures or systems occurring across the globe. These cross-cultural mechanisms undertake important work on an array of critical issues in governance, politics, and research. Or, better put, they are focused on researching and analysing data in their respective sectors or industries, with the aim of shaping policies at all levels. The shaping of these policies is not only regional but can also extend to the national and even the global scale. However, at the July gathering of the 7th China-Africa People’s Forum and 7th China-Africa Young Leaders Forum, it was established that think tanks appear to be overlooking an important component: measuring the impact and effectiveness of bilateral work, and of their work in general, in different countries.
As Valadez and Bamberger (1994) view it, perhaps there is still an absence of a theoretical framework for international and comparative evaluation. This lack of a framework probably has a detrimental impact on evaluation as well as on the very processes and outcomes that evaluation, as a measuring tool, hopes to improve and enhance within monitoring and evaluation (M&E) systems. Without the means to synthesise findings, test hypotheses, develop laws, and cumulate knowledge, international and comparative evaluation of programmes produces disconnected, invalid, and unreliable findings in so far as regional, national, and global policy initiatives are concerned.
Lusthaus et al. (1999) point out that although there have been significant strides made in terms of technology and economic solutions, not only in South Africa’s context but also globally, there are still countries where there has been inadequate improvement in the conditions for a large number of people. Some nations still struggle to learn how to create appropriate roles for the state in development within their own contexts; how to organise and manage their systems so that they can identify priority problems, formulate policies, and create ways to have these policies implemented in a sustainable way (Hilderbrand & Grindle, 1994).
South Africa falls within this category. Although it has made great strides in many aspects, there still exists a weakness in its monitoring and evaluation systems or mechanisms, particularly when it comes to the local sphere of government.
This paper advocates for an assessment of whether M&E systems are being utilised effectively by government, more specifically by local government, in their partnerships, bilateral relations, and policy implementation, and of what can be done to strengthen the capacity of government. It further recommends a greater focus on M&E systems as a tool to measure the impact of programmes and policy initiatives not only on a local scale but also on a global one. Finally, this paper encourages the implementation of monitoring and evaluation as an oversight mechanism and tracking tool of accountability, which will aid in bridging the gap between the people being governed and the government or administrations that are voted into power. This paper aims to highlight progress and/or weaknesses in key governance areas. The findings of this paper will demonstrate that, thus far, governments have not made a concerted effort to utilise monitoring and evaluation systems as much as they should.
Monitoring and evaluation deconstructed
Monitoring and evaluation is an important part of public management. Although it was often neglected by governments in the past, it has become a critical part of the functioning of a democratic society. Monitoring and evaluating the outcomes of a project or public campaign allows managers to determine how successful it has been in achieving its desired goals (Wits, N.d.). It is also aligned with accountability: jointly, monitoring and evaluation activities tend to ensure that projects meet upward, downward, and horizontal accountability demands (Okafor, 2021).
It is essential to understand both the distinct elements of monitoring and evaluation and the relationship between them in order to have a complete or holistic understanding of the combined concept. The World Bank defines monitoring as “a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds” (World Bank, N.d.).
Monitoring is a process of continuous and periodic surveillance of the physical implementation of a programme or policy, through timely gathering of systematic information on work schedules, inputs, delivery, targeted outputs, and other variables of the programme or policy, in order to have the desired effects and impact. It is an integral part of a management support function: it relates to monitoring a programme or policy and its components; managing the use of resources; guiding the progress of the programme or policy towards the desired end; and making sure that planned activities occur. The data gained from monitoring activities feed into and guide the decisions of managers or implementors of policy. Monitoring is also an integral part of the Management Information System; thus, it is a management tool (UWC, N.d.).
Shapiro (N.d.) describes monitoring as the systematic collection and analysis of information as a project or a programme progresses, with the aim of improving the efficiency and effectiveness of a project or organisation. It is based on targets set and activities mapped out during the planning phase of work. Monitoring helps to keep the work on track and alerts management when things are going wrong. If executed properly, it is an invaluable tool for good management and provides a useful base for evaluation. It enables an organisation, or policy implementors, to determine whether the resources that have been availed are sufficient and are being utilised well, whether there is appropriate capacity, and whether the project implementors are following through with the original plans (Shapiro, N.d.).
Valadez and Bamberger further add that monitoring is more of a programme activity – its role is to determine whether project activities are implemented as planned. If the finding is negative, it determines the cause of the anomaly and what can be done to address it (Valadez & Bamberger, 1994).
On the other hand, evaluation is defined by the World Bank as "the process of determining the worth or significance of a development activity, policy or programme … to determine the relevance of objectives, the efficacy of design and implementation, the efficiency of resource use, and the sustainability of results. An evaluation should (enable) the incorporation of lessons learned into the decision-making process of both partner and donor" (World Bank, N.d.).
Evaluation is a process to determine (as systematically and objectively as possible) the extent to which programme needs and results have been or are being achieved and analyse the reasons for any discrepancy (UWC, N.d.). It is the periodic assessment of ongoing and/or completed projects, policies, or programmes using a systematic and objective approach (Noltze et al, 2021).
It is important to note that evaluation is the second component or stage of the M&E system process and is supposed to provide responses or answers to be implemented once the situation has been assessed during the first stage or phase of monitoring. The University of the Western Cape (N.d.) describes evaluation as being used for measuring programme effectiveness, and evaluation processes may be used to demonstrate to planners, donors, and other decision-makers that the programme activities have achieved measurable improvements.
Monitoring and evaluation can also indicate whether and where resources are being used efficiently and where strategies for resource allocation may need to be considered or reconsidered. In addition, the following pertinent questions should be asked: Is the programme doing what it set out to do? Or, is the programme succeeding in what it set out to accomplish? Is it providing a useful – or needed – service? Is it providing services to the intended audience? Have there been measurable changes – improvements – in the conditions that the programme set out to address? Have resources been used efficiently?
Evaluations are meant to furnish an objective view, using rigorous research methods, to inform conclusions about performance and the reasons for good or poor performance, and to suggest recommendations for improvement in respect of programmes and policies.
Monitoring makes contributory inputs for evaluation, and this makes it an integral part of the overall evaluation process. Nyonje, Ndunge and Mulwa (2012) opined that monitoring is descriptive in nature and provides information on the status of a project intervention in relation to the assigned project targets and outcomes. Contrastingly, evaluation is seen as an assessment of ongoing and/or concluded projects in an organised, systematic, and objective way, with the aim of providing, on a timely basis, an assessment of relevance/importance, efficiency and effectiveness, as well as impact, sustainability, and overall progress. Monitoring and evaluation, applied as a function, is a fundamental part of project management that involves reflection and communication to support efficient and effective project implementation through informed/evidence-based decision-making (Nuguti, 2009).
When using this definition in the context of policy monitoring, policy refers to a programme of action to give effect to specific goals and objectives aimed at changing (and preferably improving) an existing unsatisfactory situation. Evidence-based policy is an approach to policy analysis and management that helps people make well-informed decisions about policies, programmes and projects by putting the best available evidence at the heart of policy development and implementation (Cloete, 2009).
Policy monitoring itself is the regular, systematic collection of data on the basis of specified indicators to determine levels of progress and achievement of goals and objectives. This is normally a very important project implementation and management tool but has over time been linked to the concept of evaluation. The results of the monitoring process are normally regularly reported in prescribed standardised formats.
In contrast to monitoring, policy evaluation is a systematic judgement or assessment of policy programmes. It can include a systematic assessment of resources, organisational processes to convert such resources into policy outputs or products, and the extent to which these policy programmes have the intended results in the form of outputs, outcomes or impacts, measured against envisaged goals and objectives. Systematic, evidence-based assessment can only be undertaken if the evidence is available to assess. The evidence is collected, stored and processed through systematic, rigorous monitoring and reporting processes. This establishes the link between these processes, which have over time become known as monitoring and evaluation (M&E) (Cloete, 2009).
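To make the link between monitoring and evaluation concrete, the short Python sketch below is purely illustrative: the indicator (households with piped water), the baseline, target, and "on track" threshold values are all hypothetical, and "progress" is simplified to the share of the planned baseline-to-target movement achieved. It shows, under those assumptions, how routinely collected monitoring data on a specified indicator can feed a periodic evaluative judgement against planned targets.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A specified indicator with a baseline and a planned target (hypothetical)."""
    name: str
    baseline: float
    target: float
    observations: list[float] = field(default_factory=list)

    def record(self, value: float) -> None:
        """Monitoring: continuous, systematic collection of data on the indicator."""
        self.observations.append(value)

    def progress(self) -> float:
        """Share of the planned baseline-to-target movement achieved so far."""
        latest = self.observations[-1] if self.observations else self.baseline
        planned_change = self.target - self.baseline
        return (latest - self.baseline) / planned_change if planned_change else 1.0

def evaluate(indicators: list[Indicator], threshold: float = 0.8) -> dict[str, str]:
    """Evaluation: a periodic, systematic judgement of results against objectives."""
    return {i.name: "on track" if i.progress() >= threshold else "off track"
            for i in indicators}

# Hypothetical example: quarterly monitoring of a household water-access indicator.
water = Indicator("households_with_piped_water_pct", baseline=62.0, target=80.0)
for reading in (64.0, 68.5, 73.0):  # quarterly monitoring reports
    water.record(reading)
print(evaluate([water]))  # {'households_with_piped_water_pct': 'off track'}

In this sketch the monitoring function (record) simply accumulates evidence, while the evaluation function (evaluate) makes a judgement only when asked, mirroring the continuous/periodic distinction drawn above.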
Governments across the world have specialised, institutionalised monitoring and evaluation systems for a variety of reasons: they are essential for effective public policymaking; they enable institutions to assess the effectiveness of policy decisions and programmes, and to monitor progress towards national goals and the Sustainable Development Goals (SDGs); they direct the focus of governments as needed to accelerate progress in policy programmes and initiatives; they enhance public accountability for results; and they provide opportunities for dialogue between citizens and public institutions.
The 2015 Evaluation Year, along with the United Nations (UN) General Assembly’s adoption of the 2030 Agenda and the SDGs, strengthened the movement towards increased governmental awareness of and interest in evaluation, while raising expectations about the pace of monitoring and evaluation capacity development at national level. A strong demand for individual evaluation training and capacity development also emerged from this movement, and some countries took steps towards the institutionalisation or implementation of evaluation.
Types of M&E systems at a glance
Different countries use different terminology to describe evaluations, but the underlying principles of the evaluation process remain the same. The set of types of evaluation is grounded in the base logic model (cause-effect) – linking inputs to activities, outputs, outcomes, and impacts – which is also used in the Framework for Managing Programme Performance Information. It is important to take account of the fact that government interventions are implemented within socio-economic contexts that are complex, dynamic, and structurally inequitable. For example, while South Africa has over the past 25 years made remarkable strides in mitigating patriarchal norms and practices, gender inequalities persist, and many gender gaps remain unaddressed.
According to Rabie and Cloete (2009:11), the ongoing or process performance evaluation is done at different intervals “when a policy project or programme is still being implemented”. This type of evaluation is used to assess what has actually been accomplished at a particular time during the implementation process. Ongoing or process performance evaluation is undertaken to keep track of the timeframe and the spending patterns of the programme.
Secondly, formative evaluation is conducted in order to determine the policy outcomes of a generally unknown future and relies on complex technology-based trend-projection techniques that are not necessarily known to all evaluators. Cloete (2009) argues that formative evaluation is frequently required at a very early stage in the policy planning process to undertake a formal assessment or appraisal of the feasibility of the different policy options that one can choose from (Cloete, 2009:296).
Lastly, summative evaluation takes place after the completion of the policy, project, or programme – for example, at the end of the financial year or the term for which the policy was planned. “Summative evaluation focuses on both the short-term end products (outputs), the medium-term sectoral outcomes, and the long-term intersectoral impacts or changes that the end product brought about” (Cloete, 2009:296). Determining whether an evaluation is ongoing or is conducted at the end of the implementation process requires data both about the status quo ante (so-called baseline data: before the policy project was initiated), and data at the cut-off point, which signals the end of the evaluation period (so-called end or culmination data) (Cloete, 2017).
An important distinction to make here is between outputs and outcomes, as these areas in monitoring and evaluation often overlap. The Organization for Economic Co-operation and Development (OECD) Development Assistance Committee’s (DAC) definition acknowledges this confusion by saying that an output “may also include changes resulting from [an] intervention which are relevant to the achievement of outcomes”. Even with clearly delineated guidelines on these concepts, an organisation will need to deal with the different interpretations staff may have on a case-by-case basis (Intrac, 2016).
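As a purely illustrative aid to the results-chain terminology used above, and to the output/outcome distinction in particular, the following Python sketch lays out a logic model for a hypothetical municipal water programme; the level names follow the inputs-to-impacts chain described earlier, while every example entry is invented rather than drawn from any actual programme.

# Hypothetical logic model for an illustrative municipal water programme.
# Each level of the results chain answers a different question:
#   inputs / activities -> what was put in and done? (tracked through monitoring)
#   outputs             -> what was directly delivered? (short-term end products)
#   outcomes / impacts  -> what changed as a result? (medium- and long-term effects)
logic_model = {
    "inputs": ["budget allocation", "engineers", "pipes and pumps"],
    "activities": ["lay reticulation network", "install household connections"],
    "outputs": ["12 000 households connected to piped water"],  # direct deliverables
    "outcomes": ["less time spent fetching water", "fewer waterborne illnesses"],
    "impacts": ["improved household health and productivity"],
}

def describe(model: dict[str, list[str]]) -> None:
    """Print the cause-effect chain from inputs through to impacts."""
    for level, entries in model.items():
        print(f"{level:>10}: {'; '.join(entries)}")

describe(logic_model)

Keeping outputs (what is delivered) in a separate level from outcomes (what changes for people) is one simple way an organisation can reduce the interpretive confusion the OECD DAC definition acknowledges.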
Are the South African Government and Global Community Utilising M&E Systems?
In many developing and emerging economies, M&E systems are somewhat underdeveloped. Furthermore, they are not sufficiently institutionalised. It goes without saying that resources and capabilities to provide quality M&E services are therefore insufficient. A recent study by the Global Partnership for Effective Development Co-operation (GPEDC) showed that, while 91% of national development strategies approved after 2015 explicitly refer to the 2030 Agenda and SDGs, only 35% of them have the required data and systems to track implementation (OECD, 2019). The study’s findings pointed to a lack of a conducive legal and regulatory environment to create demand for M&E services on the part of public institutions; insufficient capabilities to procure, provide, and use evaluations; weak accountability mechanisms on the use of evidence and results; and weak frameworks to institutionalise the use of M&E in decision-making.
In the South African context, a dedicated Ministry of Performance Monitoring and Evaluation (PME) was created within the president’s office in 2009 as a result of concern that, while there was a slight improvement in access to basic services, the outcomes, or the reality on the ground, were still below standard. The quality of services – for example, in education and health – was substandard or poor in many areas across the country. Moreover, massive increases in budget expenditure on services have not always brought the results anticipated. The underlying reasons for this vary from, among others, lack of political will, inadequate leadership, and management weaknesses to inappropriate institutional design and misaligned decision rights (Engela & Ajam, 2010).
The above remains the status quo, to a certain degree. Few municipalities have, develop, or consistently maintain adequately operational, institutionalised M&E departments within their structures (local authorities are used as the example here because they are the sphere of government closest to the citizenry). Therefore, it certainly makes sense that there should be a strong focus on the maintenance of such systems.
There is a general weakness in respect of resources and capacities at national level in governments on the African continent and in developing countries in other parts of the world, including countries in the Middle East and North Africa (MENA) region. The weakness lies in the lack of support for the institutionalisation of evaluation. There is also a demand for greater accountability that is aligned to economic, cultural, and political backgrounds (World Bank, N.d.).
Like other African countries, South Africa faces multiple deeply rooted challenges with regard to transformation. Amid interconnected crises – sanitary, geopolitical, economic, humanitarian, and migratory – the development and strengthening of M&E systems, and the capacities of both public actors and local M&E stakeholders, remain crucial factors in the achievement of national development goals and the SDGs. There must be consistent awareness building, advocacy efforts, and the training of M&E stakeholders in all spheres of government. Efforts to promote national evaluation policies and strategies and to build more effective national M&E systems need to be fostered and strengthened.
The World Bank reported that the Independent Evaluation Group (IEG) launched the Global Evaluation Initiative (GEI), a partnership developed with the objective of being a catalyst for M&E systems. Furthermore, its aim was to provide a pool of key actors and experts in the evaluation field to assist governments in developing countries in placing evidence at the heart of decision-making. The Every Newborn Action Plan (ENAP) is one of the key implementation partners of the GEI, as it is ideally positioned to help the GEI achieve its mandate in some parts of the African continent and in the MENA regions.
South Africa has also taken some action, having developed a revised National Evaluation Policy Framework for the period 2019 to 2024, based on a review of the successes and challenges of the first Policy Framework adopted in 2011. The quest to sustain public sector reforms demands sustained and deliberate coordination interventions to foster compliance with and accountability for the implementation of development policies. Policymaking and monitoring are mainly informed by the quest to improve quality. Therefore, any benchmarking that is conducted is meant to contribute to a better perspective on how South Africa compares to other developing countries and the global community at large. It is also aimed at setting above-average standards of improvement. Linked to accountability and sound governance is the economical use of resources in compliance with prescriptive management systems (Masombuka & Thani, 2023).
Until 2005, only individual staff performance evaluations were institutionalised and regularly and systematically carried out in the South African government. Policy programme monitoring and evaluation, however, were not undertaken, managed, and coordinated systematically in the South African public service. These activities were undertaken sporadically by line function departments for purposes of their annual departmental reports. Some departments were more rigorous than others in this process, while the Public Service Commission (PSC) undertook to monitor and evaluate the South African government’s adherence to a restricted number of principles of good governance that the PSC derived from the Constitution of 1996 (Cloete, 2009).
The concept of a developmental state has been firmly reflected in almost all policy development and the trajectory of government since 2009. The focus on performance management and critical assessment of outcomes and the impact of government programmes through the results-based approach became the hallmark of then President Jacob Zuma’s administration. Both Netshitenzhe (2015) and Gumede (2017:10) allude to the prominence of improving government performance through the adoption of the National Development Plan (NDP) by the Cabinet and Parliament in 2012 as a development trajectory for the country. The formalisation of monitoring and evaluation capacity and of performance agreements to improve accountability throughout the three spheres of government became a seal of political will.
There seems to be a commitment towards improving the quality of service delivery in South Africa. However, typical or relatively accurate indicators of improvements and success are yet to find expression in terms of measurement of policy outcomes. Masombuka and Thani (2023) state that the government policy reviews that have been conducted on the performance of the South African government over the past 25 years reveal the following: the agility, capability, and responsiveness of the state to socio-economic challenges, among others, are questionable; the constant changes in political leadership and corruption have impacted the impetus for accountability; dwindling public trust and poor management of the administrative interface, strategic and institutional capacity, technical capacity, and organisational culture are persistent challenges for attention in realising the objectives of a developmental state; and the Covid-19 pandemic demonstrated the weaknesses in the system by bringing to the forefront shortcomings in governance and accountability that manifested in corruption, besides also exposing the glaring inequalities that exist.
The question here is: what has become “practice”, or what are essentially deemed the strategies or practical methods for assessing, monitoring and evaluating the performance of policy initiatives or progress by government in line with its mandates? Firstly, reform in respect of public expenditure is meant to be spearheaded by the National Treasury through the Public Finance Management Act 1 of 1999. The National Treasury is tasked with regulating financial management across the three spheres of government. In addition, there is the development of departmental strategic and annual plans to ensure accountability for commitments within the departments of the three spheres of government (Masombuka & Thani, 2023).
Secondly, there is the Department of Public Service and Administration (DPSA), which, through the White Paper on the Transformation of the Public Service (1995), focuses on performance management systems, knowledge management, continuous learning, and the introduction of monitoring and evaluation. Strengthening intergovernmental relations and the role of leadership in institutionalising monitoring and evaluation as a strategic management function remains a crucial instrument in respect of the three spheres of government. Clear roles and responsibilities, and adequate investment of resources, are further prerequisites for complying with the principles and values governing public administration, monitoring, and evaluation (Masombuka & Thani, 2023).
It is still a challenge to compare the implementation experiences of the emerging M&E systems in South Africa (discussed above) with those of other countries, due to the different developmental and governance contexts and dynamics, which are often country specific. Until recently, empirical literature tended to have a donor perspective, rather than a government perspective. Where the literature is related to government, the focus tends to be on the project, programme, or sector level rather than from a government-wide perspective (Engela & Ajam, 2010).
Moreover, the M&E framework is a developed-country framework, and some do not recognise that, in a developing country, additional layers of complexity may seriously affect the selection, implementation, and success of a local programme. How, for example, does global economic penetration affect the selection, implementation, and sustainability of social programmes? Who is measuring “development”, the developing country or an agent of the developed world, such as the World Bank? What weight should be assigned to the values of various stakeholders in the developing country versus the international institutions? What possibly inappropriate assumptions is the evaluator making about the selection, implementation, and success of a programme? Under what conditions might it be inappropriate to evaluate a social programme in a developing country?
The uniqueness of African M&E systems is that most of them are still in the early phase of development. Africa has taken lessons from countries, like Chile, that have adopted a whole-of-government approach that is centrally driven, focusing on three dimensions: utilisation of M&E information, sustainability, and good quality M&E information (Mackay, 2007). African countries that have developed their M&E systems have also modelled their systems along these three dimensions. However, the approach has proven to be weak, as there are low levels of ownership, especially by ministries, departments and agencies (MDAs) (Chirau et al, 2022; Mackay, 2007).
In some African countries – such as Ghana, South Africa, and Tanzania – M&E systems exist in different MDAs and subnational governments, but they work in silos and are therefore seldom coordinated (Masuku & Ijeoma, 2015:15). Despite M&E capacity strengthening efforts made by these governments, the M&E infrastructure remains biased towards producing monitoring data as the main performance management input and accountability mechanism. Evaluation remains on the periphery. The emphasis on accountability and the overemphasis on monitoring have led to a culture of malicious compliance. Too much attention is paid to measuring inputs and activities, without attention to the outcomes and impact of programmes (Chirau et al, 2022).
The above is seemingly the global position. A common issue faced by all countries is capacity – the capacity of evaluators in a country to conduct evaluations and the capacity in government to commission, undertake, manage, and use evaluations. Until training in evaluation becomes more widespread, this will remain a major constraint.
Can governments, NGOs and think tanks use M&E as a tracking tool for oversight and holding leaders accountable?
Different scholars in the field have laid down key considerations for a monitoring and evaluation plan. These factors complete the M&E plan and give better coverage in terms of providing oversight and direction to the project during implementation. Among these considerations are: financial resources and human capacity to carry out M&E activities (Brignall & Modell, 2000); and feasibility, timeline, and ethical considerations (Armstrong & Baron, 2013). These considerations raise important questions that require project teams to provide answers, which in turn shape and guide implementation.
An understandable monitoring and evaluation strategy is one of the best ways to help think tanks achieve the greatest impact in the most cost-effective way. Yet, measuring the impact and results of a think tank’s work is challenging, as these are often intangible (e.g., building relationships with policymakers, playing a key role in debates or networks, etc.). The ultimate goal is policy change, but this takes a long time and often cannot be attributed to a specific action or organisation, but rather, it is the result of many factors and actors.
Think tanks have become critical as key stakeholders in the realm of governance across the globe. Various networks of think tanks have been established with the aim of assisting and aiding government in terms of research. These research networks can play an integral role in the field of monitoring and evaluation. It is suggested that think tanks should aim to do the following when undertaking or conducting M&E work: ensure that their mission and projects address challenges in their area of focus by having a crystal clear policy influence objective; select their policy influence strategies or plans of action and determine how these strategies and/or plans can and will be measured; and, lastly, ensure that they not only have buy-in from the relevant stakeholders but also the requisite resources to achieve results (MERICS, 2023).
Where think tanks establish sound or stable monitoring and evaluation plans, and carry out changes and improvements based on the lessons from the M&E work that they conduct, these plans can both help them become more effective in achieving their objectives and serve, to some extent, as an “oversight” mechanism.
To be effective, think tanks must also communicate their findings to a range of stakeholders. The effectiveness of these communication efforts can be monitored and evaluated, and organisations can learn how to improve their communications to achieve greater impact. Think tanks must be able to package their research in a manner that is useful to their governments or stakeholders and that makes the findings easy to disseminate, so that the research feeds meaningfully into policy debates (MERICS, 2023).
Organisational M&E systems involve implementing effective communication processes that support various strategies. The importance of communication in M&E lies in ensuring that employees have enough information to provide feedback for progress reports related to service delivery. Effective implementation and sustainability of an M&E system requires the development of institutional capacity, encompassing critical technical and human skills (Kusek, Rist & White, 2005). Communication advances coordination, cooperation, and general support tasks, which are crucial for a successful M&E system (Kadel, Ahmad & Basnet, 2020). In addition, clear performance indicators are essential for monitoring and providing information about progress towards achieving goals.
Efforts to provide the above resources must be spearheaded by governments themselves, as these are the essential tools required. Furthermore, municipalities must ensure that managers and staff align their roles with the priorities and objectives outlined in the municipality’s integrated development plan (Van der Waldt, 2018). The organisational challenges include poor alignment with municipalities’ strategic plans, a lack of coordination, poor management, and limited government M&E of these organisations within their jurisdictions (Ngumbela & Mle, 2019). These challenges are caused by a lack of M&E training opportunities and networks for most personnel in government institutions and municipalities, which is considered a significant drawback (Engela & Ajam, 2010). Adequate training is essential for both the custodians of the system and end users (Ile et al, 2012). Once adequate training has been provided, performance agreements can be designed to address the legacy of institutions underperforming. Specifically, adequate training will reduce the lack of accountability that has become characteristic of South Africa’s local government (Van der Westhuizen, 2016).
M&E systems are important in aiding government MDAs to measure the results (outputs, outcomes, and impact) achieved by their respective development policies, programmes, and projects. In reality, although there is an M&E system at the central level, MDAs and local government tend to have their own M&E systems that co-exist within a broader centralised M&E system (Goldman et al, 2012). Both systems provide information on the performance or non-performance of government policies, projects and programmes at the national, sector and local government levels. Importantly, in the process of measuring the results at various levels of outputs, outcomes and impacts, the M&E system should be able to identify what works and what does not, and why (Mackay, 2012).
Monitoring and evaluation systems help improve government performance and help development programmes to achieve their objectives. In so doing, M&E systems provide vital evidence to ensure accountability to citizens, legislatures, and civil society (Mackay, 2012). Sound evidence is equally critical in improving programme planning, budgeting, policymaking, and decision-making. There is evidence that points to the value of developing a system (both an M&E system and evaluation system) and an M&E policy, although one (an M&E system or evaluation system) can come before the other (an M&E policy or evaluation policy). Chirau, Waller and Blaser-Mapitsa (2018) argue that there is a direct link between national evaluation policies and the development of strong national evaluation systems (NESs). Effective evaluation systems are dependent on evaluation policies for framing the purpose of M&E and a delineation of institutional M&E responsibilities.
M&E should be considered as one of several tools governments can use to assess whether public policies and expenditures are achieving their objectives in the most cost-effective manner. If adequately orchestrated with other tools such as audits, regulatory impact assessments, performance budgeting and spending reviews, M&E can prove to be a highly impactful source of information for sound and smart policy and resource allocation decisions (Organisation for Economic Co-operation and Development, 2023).
An overview of M&E challenges in the context of local government and governments in general
The Local Government Municipal Structures Act of 1998 outlines the establishment of municipal committees tasked with formulating, implementing, monitoring, and evaluating the activities and operations of municipal councils and their service delivery to communities. Likewise, the Municipal Systems Act of 2000 is unambiguous about the importance of M&E in local government, to the extent that it sets out how a municipal council exercises its legislative and executive authority to implement M&E systems. Concerning M&E, Section 11(3) states that a municipal council exercises its legislative and executive authority by monitoring and regulating municipal services, monitoring the impact and effectiveness of any services, policies, programmes, or plans, and establishing and implementing performance management systems. This legislative framework leaves municipalities with no excuse for failing to implement M&E systems under any circumstances (Yekani et al, 2024).
According to the Auditor-General’s reports, there is a widespread lack of financial controls and project monitoring, as well as an ongoing culture of a lack of accountability and of tolerance for transgressions, which results in a further regression in audit outcomes in municipalities and makes improvements rare; the general trend over the past three years has remained negative. Eight municipalities could not adequately support the information reported in their financial statements and received disclaimed audit opinions (National Treasury, 2020). This evidence highlights the ongoing challenge of inadequate M&E systems for the effectiveness and efficiency of initiatives and interventions at the local government level (Yekani et al, 2024).
Therefore, there is a need to further investigate the challenges associated with implementing M&E systems within municipalities. However, these investigations must not only be carried out when or if a crisis arises. The assessment of where local governments, and governments in the global community at large, go wrong, or rather the collection of data by M&E systems, must not be a merely reactive response.
The main challenges faced by the local authorities can be attributed to various underlying issues. For one, the knowledge, skills, and competence required for those aspiring to perform and those already performing tasks related to M&E of public projects are limited. Municipal officials of the various projects also fail to understand the importance of M&E at local government level. Ultimately, local governments have failed to develop an appropriate institutional M&E system (including M&E plans, indicators, and tools) (Mthethwa & Jili, 2016).
This demonstrates that although much has been achieved in terms of providing services to the majority of South Africans, much still needs to be done in terms of training, workshops, and dialogue on the what, when, and how of M&E systems, and on their suitability and implementation at local government level, to enhance service delivery. Moreover, the definition of an M&E system requires that such a system be established across provinces to attain effective and efficient service delivery.
Another challenge is that many organisations at the level of local government need to attract and retain highly skilled workers from an increasingly diverse and mobile labour market. Currently, local municipalities are losing suitably qualified staff due to a host of issues. Proper and effective planning is a critical aspect: the impact of a lack of capacity, or rather of the human resources departments within these organisations not planning adequately to attract and retain a diverse and capable workforce for the benefit of the organisation, cannot be overstressed (Mthethwa & Jili, 2016).
Municipalities must ensure that the right people with the right skills are in the right place at the right time, and that they are able to perform their duties successfully to add value to the organisation, for example, by employing people with skills, knowledge, and experience in monitoring and evaluating a project at local government level.
In addition, inadequate financial planning is and has been a constant systemic weakness facing project management in local governments, including most municipalities in South Africa. Projects continue to be abandoned in local governments due to inadequate funds. Unfortunately, the flow of funds cannot be fully guaranteed, especially as municipalities are confronted with fluctuations in world oil prices, inflation, mismanagement, corruption, and failure to explore internal sources of revenue and to use scarce resources efficiently (Mthethwa & Jili, 2016).
Mismanagement of funds and corruption hinder successful M&E and the completion of projects at local government level, which in turn leads to dissatisfaction among residents and culminates in violent service delivery protests. A municipality should involve the local community in the planning, initiation, formulation, and execution of projects to ensure their success. Local communities should be carried along at every stage of a project. They should be consulted first, so that the municipality can deliver services according to the people’s preferences and needs. This way, M&E can be utilised as a critical tool to aid effective monitoring with regard to outputs (Mthethwa & Jili, 2016).
The people’s understanding of the environment and support for a project create a moral basis for its success. Project planning should take a bottom-up approach to bring citizens directly into the process of running projects that will improve their quality of life. To some degree, community participation legitimises projects that are meant for the residents and ensures that there is a nexus between the residents being served and their government. When residents have adequate awareness of what projects are being implemented by their government, a relationship is built between the two parties, which leads to the residents becoming interested in participating in the affairs of the government.
Lastly, one of the most fundamental obstacles to successful M&E of projects and effective policy implementation is a lack of expertise. Clearly, knowledge is power: the standard or level of success in the completion of a project depends to a large extent on the amount of accurate information available to local government project managers.
Recommendations: the way forward for entrenching M&E systems
The guiding principles or recommendations discussed hereunder are general. They are applicable to working with a ministry, department, or agency in any country, but with obvious regard for the context or the socio-economic conditions that exist in different countries.
Securing political and administrative buy-in and will is crucial to making sure that M&E becomes a valued practice in governance and development practices. To this effect, raising awareness about the value of M&E among high-ranking political and administrative leaders is paramount.
The use of political leaders or “champions” can exert significant influence on ministries, departments, agencies, and subnational governments, as they are strategically positioned to promote and advocate for the institutionalisation of M&E practice in planning, budgeting, policy design, implementation, and general decision-making within governments.
Leadership is an integral component. Empirical desktop research findings regarding municipalities reveal a lack of leadership support and institutional readiness for change management in the context of M&E. Municipal employees’ perceptions can be very obstructive in some instances; therefore, the implementation of M&E requires considerable change management. Some officials in government, especially at local level, are so used to and comfortable with their old ways of operating that they are not evolving and do not align themselves with the changing times. Municipal employees should be exposed to new systems and processes. Effective implementation of critical aspects of governance such as M&E systems becomes challenging due to this resistance to change and the organisational culture (Mthethwa & Jili, 2016).
Countries that have already established national M&E systems are key to M&E work, and this factor should be part of the criteria when considering which countries to form a working relationship with.
Partnerships with local institutions and individuals that are knowledgeable about the country context should be another key criterion when selecting a country to work with. The approaches used must be unique and must make use of the country’s local entities to strengthen their M&E systems. Evaluation capacity development partnerships with government and development partners are crucial in developing networks for capacity and as support infrastructure.
Linking monitoring and evaluation with the budget process is a politically sensitive reform. It requires an interface between administrative practices and political support for a joint effort, to ensure that the supply and demand of monitoring and evaluation are accounted for within the existing budgetary framework. The need for this is evident in the current use of monitoring and evaluation, where findings and results are not used to inform planning and budgeting, departmental plans are still fragmented, and instability is apparent in the administrative leadership within the public services.
Legal frameworks constitute the key basis for embedding the practice of evaluations across government in a systematic way. Around two-thirds of responding countries have created a legal basis for requiring and enabling policy evaluation.
Policy frameworks can give strategic direction to a specific sector or thematic area and can help to support the implementation of quality evaluation. They also have the potential to provide high-level guidance and clarity. The centre of government is the main actor that provides strategic direction for policy evaluation.
There must be operational detail, emanating from the national level, that cascades down to the other spheres of government. The overall strategic vision and priorities of the national government, and the criteria and procedures to be followed, must be clear and precise in order to achieve those goals.
One useful mechanism to ensure clarity and viability of objectives is to break the policy processes into parts. Organisations should not only focus their goals on driving direct policy change but should also try to affect what happens before, throughout, and afterwards.
M&E is closely associated with top management, highlighting the need for a shift in management perception. An ongoing issue is that a lack of M&E training opportunities and networks for M&E personnel in most government institutions and ministries is one of the main drawbacks to achieving an effective M&E system.
NGOs and think tanks that form partnerships with governments must be clear about what their policy influence objectives are and make sure that these are realistic, given the work that these stakeholders do. These objectives will be drawn from the organisational mission, project proposals, and strategic discussions, and should be informed by research into the problem that these stakeholders are trying to address. Conducting a situation analysis, mapping exercise, or diagnostic analysis could inform these objectives. After this exercise is completed, the NGO, think tank, or stakeholder must align itself with a department or ministry whose work is in accord with its own.
Conclusion
This paper sought to assess whether M&E systems are being utilised effectively by government, more specifically by local government, in their partnerships, bilateral relations, and policy implementation, and to analyse what can be done to strengthen the capacity of government. Its aim was to highlight progress and/or weaknesses in key governance areas. The findings demonstrate that, thus far, governments have not made a concerted effort to utilise monitoring and evaluation systems as much as they should.
A good starting point for improving this position is the essential task of fully understanding the current status of M&E in the country. The situation analysis tool of the Centre for Learning on Evaluation and Results – Anglophone Africa (CLEAR-AA) looks at the “wider ecosystem of M&E”, which means it looks at the government itself, higher education institutions, civil society organisations (CSOs), Parliament, and voluntary organisations for professional evaluation (VOPEs). The analysis should zoom in on the supply and demand of M&E information, examining strengths and weaknesses, as well as opportunities and threats. The analysis naturally feeds into an M&E capacity strengthening strategy. This strategy then explains the results that need to be achieved for the country to be able to improve its individual, institutional, and systemic M&E capacities, along with the specific approaches to be followed for achieving these capacities. The strategy further indicates who will do what and when, what resources will be required, and where these will come from (Chirau et al, 2022).
Without a clear theory to guide evaluation in developing countries, selection and application of appropriate research designs and methods becomes situational. Arguably, methods are always somewhat situational in that they are dependent upon the country in question, the quality and availability of the data, and the skill of the evaluator.
M&E can support an evidence-informed policymaking approach by bringing an understanding of how existing policies are performing and if they are effective. As such, strong M&E frameworks can support governments in addressing complex policy challenges by increasing the understanding of policy trade-offs and impacts.
However, an adequate supply of trained personnel (including those with both monitoring and technical evaluation skills) is key to the sustainability of monitoring and evaluation systems (Lahey, 2012). Training on its own is not sufficient and should be supplemented by technical assistance, coaching, and mentoring to ensure that knowledge and skills acquired through training are put to suitable use.
This is a holistic approach to developing sustainable M&E systems, ensuring an equilibrium between the supply and demand of M&E information and use thereof across the M&E ecosystem of a country – composed of individual and institutional M&E capacities, demand for M&E information by decision-makers, and ultimately, an evaluative culture.
References
Armstrong, M. & Baron, A. 2013. Performance Management: The New Realities. Chartered Institute of Personnel and Development.
Brignall, S. & Modell, S. 2000. An Institutional Perspective on Performance Measurement and Management in the “New Public Sector”, Management Accounting Research, 11(3):281-306.
Chirau, T.J., Waller, C. & Blaser-Mapitsa, C. 2018. The National Evaluation Policy landscape in Africa: A comparison. Johannesburg: Twende Mbele.
Chirau, T., Dlakavu, A. & Masilela, B. 2022. Strengthening Anglophone Africa M&E systems: A CLEAR-AA perspective on guiding principles, challenges and emerging lessons, African Evaluation Journal, 10(1).
Cloete, F. 2009. Evidence-based policy analysis in South Africa: Critical assessment of the emerging governmentwide monitoring and evaluation system, South African Journal of Public Administration, 44(2): 293-311.
Cloete, F. 2017. Evidence-based Policy Making and Policy Evaluation, 3rd International Conference on Public Policy (ICPP3), Singapore, June 28-30.
Engela, R. & Ajam, T. 2010. Implementing a Government-wide Monitoring and Evaluation System in South Africa, ECD Working Paper Series, 21.
Goldman, I., Engela, R., Akhalwaya, I., Gasa, N., Leon, B., Mohamed, H. et al. 2012. Establishing a national M&E system in South Africa (English), PREM Notes; no. 21, Special series on the Nuts and Bolts of Monitoring and Evaluation (M&E). Washington, DC: World Bank.
Gumede, V. 2017. Presidencies and policy in post-apartheid South Africa. Pretoria: Unisa Press.
Hilderbrand, M.E. & Grindle, M.S. 1994. Building Sustainable Capacity: Challenges for the Public Sector, Paper prepared for the United Nations Development Programme, Pilot Study of Capacity Building (INT/92/676). Cambridge.
Ile, I.U., Allen-Ile, C. & Eresia-Eke, C.E. 2012. Monitoring and Evaluation of Policies, Programmes and Projects. South Africa: Van Schaik Publishers.
Intrac. 2016. Outputs, outcomes and impact. [Online] Available at: https://www.intrac.org/app/uploads/2016/06/Monitoring-and-Evaluation-Series-Outcomes-Outputs-and-Impact-7.pdf [accessed: 13 February 2025].
Kadel, I.M., Ahmad, F. & Basnet, N. 2020. Rethinking monitoring and evaluation practices: Lessons from the COVID-19 pandemic. [Online] Available at: https://www.icimod.org/article/rethinking-monitoring-and-evaluation-practices-lessons-from-the-covid-19-pandemic [accessed: 13 February 2025].
Kusek, J.Z., Rist, R.C. & White, E.M. 2005. How Will We Know the Millennium Development Goal Results When We See Them?: Building a Results-based Monitoring and Evaluation System to Give Us the Answers, Evaluation, 11(1): 7-26.
Lahey, R. 2012. The Canadian M&E system, in G. Lopez-Acevedo, P. Krause & K. Mackey (eds.), Building better policies: The nuts and bolts of monitoring and evaluation systems. Washington, DC: World Bank.
Lusthaus, C., Adrien, M. & Perstinger, M. 1999. Capacity Development: Definitions, Issues and Implications for Planning, Monitoring and Evaluation, Development, 35: 1-21.
Mackay, K. 2007. How to Build Monitoring and Evaluation Systems to Support Better Government. Washington DC: World Bank.
Mackay, K. 2012. The Australian Government M&E system, in G. Lopez-Acevedo, P. Krause & K. Mackey (eds.), Building better policies: The nuts and bolts of monitoring and evaluation systems. Washington, DC: World Bank.
Masombuka, S.S.N. & Thani, X.C. 2023. Challenges and Successes of the Government-wide Monitoring and Evaluation System, Administratio Publica, 31(3).
Masuku, N. & Ijeoma, E. 2015. A Global Overview of Monitoring and Evaluation (M&E) and its Meaning in the Local Government Context of South Africa, Africa’s Public Service Delivery & Performance Review, 3(2).
Mercator Institute for China Studies (MERICS). 2023. Monitoring, evaluation and learning for think tanks: How to match strategy with objectives for different areas of work. [Online] Available at: https://merics.org/en/think-tank-toolbox/monitoring-evaluation-and-learning-think-tanks-how-match-strategy-objectives [accessed: 13 February 2025].
Mthethwa, R.M. & Jili, N.N. 2016. Challenges in implementing monitoring and evaluation (M&E): the case of the Mfolozi Municipality, African Journal of Public Affairs, 9(4).
National Treasury. 2020. The State of Local Government Finances and Financial Management as at 30 June 2020, 2019/20 financial year, Analysis Document.
Netshitenzhe, J. 2015. Class dynamics and state transformation in South Africa, Journal of Public Administration, 50(3):549–561.
Ngumbela, X. & Mle, T.R. 2019. Assessing the role of civil society in poverty alleviation: A case study of Amathole district in the Eastern Cape province of South Africa, The Journal for Transdisciplinary Research in Southern Africa, 15(1).
Noltze, M., Köngeter, A., Römling, C. & Hoffmann, D. 2021. Monitoring, evaluation and learning for climate risk management, OECD Development Co-operation Working Papers, 92.
Nuguti, E.O. 2009. Understanding Project Monitoring and Evaluation. Nairobi, Kenya: EKON Publishing.
Nyonje, R.O., Ndunge, K.D. & Mulwa, A.S. 2012. Monitoring and Evaluation of Projects and Programs - A Handbook for Students and Practitioners. Nairobi, Kenya: Aura Publishers.
Okafor, A. 2021. Influence of Monitoring and Evaluation System on the Performance of Projects, IJRDO - Journal of Social Science and Humanities Research, 6(8): 34-49.
Organization for Economic Co-operation and Development (OECD). 2019. Making development co-operation more effective: How development partners are promoting effective, country-led partnerships, Part II of The Global Partnership 2019 Progress Report. [Online] Available at: https://www.effectivecooperation.org/system/files/2020-06/%5BTitle%5D.pdf [accessed: 13 February 2025].
Organization for Economic Co-operation and Development (OECD). 2023. Public policy monitoring and evaluation. [Online] Available at: https://www.oecd.org/en/topics/public-policy-monitoring-and-evaluation.html [accessed: 13 February 2025].
Shapiro, J. N.d. Monitoring and evaluation. [Online] Available at: https://civicus.org/view/media/Monitoring%20and%20Evaluation.pdf [accessed: 13 February 2025].
University of the Western Cape (UWC). N.d. Defining Monitoring and Evaluation, Monitoring and Evaluation for Health Services Improvement I, Unit 2.
University of the Witwatersrand (Wits). N.d. M&E for improved decision-making and project planning. [Online] Available at: https://online.wits.ac.za/blogs/me-for-improved-decision-making [accessed: 13 February 2025].
Valadez, J. & Bamberger, M. 1994. Monitoring and evaluating social programs in developing countries: a handbook for policymakers, managers and researchers. Washington DC: World Bank.
Van der Waldt, G. 2018. Local economic development for urban resilience: The South African experiment, Local Economy, 33(7): 694-709.
Van der Westhuizen, E.J. 2016. Human Resource Management in Government: A South African perspective on theories, politics and processes. South Africa: Juta.
World Bank. N.d. What is monitoring and evaluation? [Online] Available at: https://ieg.worldbankgroup.org/what-monitoring-and-evaluation [accessed: 13 February 2025].
Yekani, B., Ngcamu, B. & Pillay, S. 2024. Management and leadership considerations for managing effective monitoring and evaluation systems in South African municipalities, Journal of Local Government Research and Innovation, 5: a154.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This report has been published by the Inclusive Society Institute
The Inclusive Society Institute (ISI) is an autonomous and independent institution that functions independently from any other entity. It is founded for the purpose of supporting and further deepening multi-party democracy. The ISI’s work is motivated by its desire to achieve non-racialism, non-sexism, social justice and cohesion, economic development and equality in South Africa, through a value system that embodies the social and national democratic principles associated with a developmental state. It recognises that a well-functioning democracy requires well-functioning political formations that are suitably equipped and capacitated. It further acknowledges that South Africa is inextricably linked to the ever transforming and interdependent global world, which necessitates international and multilateral cooperation. As such, the ISI also seeks to achieve its ideals at a global level through cooperation with like-minded parties and organs of civil society who share its basic values. In South Africa, ISI’s ideological positioning is aligned with that of the current ruling party and others in broader society with similar ideals.
Email: info@inclusivesociety.org.za
Phone: +27 (0) 21 201 1589
Web: www.inclusivesociety.org.za