ICT in education in developing countries: considering the evidence and implications for South Africa

At the best of times, it is hard to know which interventions work best, and under what conditions, in public schools in developing countries. What makes this harder still is that most developing countries lack the financial muscle to experiment with solutions that yield no immediate results.

This is why our latest workshop, which examined the impact of mobile technologies on a range of educational outcomes, could not have come at a better time. We wanted to do two things at the workshop. First, we wanted to thank the mentees from the Department of Basic Education (DBE), who were among the first inductees in our ground-breaking mentorship programme. They have enabled us to experiment with new ideas about how to build the capacity of government officials to use evidence in decision-making. We were also grateful to host the DBE mentees at this closing workshop because, through the work we have done with them and other mentees, we have moved beyond mentoring individuals and begun testing team mentorship models in which an entire unit in a department participates in a structured, goal-oriented mentorship.

Second, we used this workshop to introduce important research that our programme has pioneered and to share its initial findings, as a way of jump-starting a bigger conversation on the role of ICT evidence in government decision-making. Our programme researcher, Mr Laurenz Langer, made a few telling points, including:

  • His research shows that, compared to other education interventions, the use of mobile technologies has a positive and statistically significant effect on a number of educational outcomes.
  • These effects are often larger than those of other interventions (for example, smaller class sizes or the introduction of a new curriculum).
  • Our programme intervention, which focused on refining the implementation of the e-Education White Paper in South Africa, was therefore well-grounded.

The discussions that followed Mr Langer's presentation covered a range of issues, but the contributions were united by a desire to assess the implications of this research for the local context. It was pointed out that many of these studies assume a certain level of ICT infrastructure provisioning, which is often absent in remote rural locations in South Africa. Given that the studies were mostly located in urban areas, both the presenter and members of the audience cautioned against applying the review's findings uncritically to local conditions.

Other questions focused on the quality of the included studies and how this could alter the results, whether the results are representative of effects over time, and the limited scope of the review given the overall complexity of the ICT in education field. In response, Mr Langer indicated that the systematic review did perform sensitivity analyses and that the results appear robust when these quantitative factors were considered. The review did not examine whether the effects of mobile technology persist over time, and he agreed that the assumption that mobile technology infrastructure is in place cannot be sustained in the South African context. On the review's limited scope, Mr Langer responded that the scope is determined by clients and is not set independently by researchers.

Arguably, one of the most important inputs of the day saw the presenter mapping portions of the existing e-Education policy against some of the evidence that has emerged from his research. This line of enquiry offers useful support to those who wish to translate research evidence into policy while also understanding the inherent limitations of research in a practical, applied policy setting. It represents another method that policy-makers can use as they increase their engagement with research evidence.

Our programme for the day provided for two discussions: the first related to the presentations made by Mr Langer, while the second focused on the big picture of evidence uptake, how to increase evidence uptake in government, and the potential role of programmes such as UJ-BCURE in this policy space.

Members of the audience indicated that a number of key ingredients ought to be in place to support the increased use of evidence in government departments. These included:

  • Building solid and viable relationships with officials in government and being respectful of the agenda and policy cycles that are inherent to the work of the government;
  • Making sure that researchers and organisations such as UJ-BCURE deliver timely and relevant research; and
  • Finding ways of packaging research for policy-makers that can be easily digested and where the policy implications of the evidence are spelled out clearly.

The role of our programme was further reinforced when attendees indicated that the research and policy spaces are different and need a mediating agent who can create safe spaces where these two sets of stakeholders can interact. Attendees qualified this by noting that such work requires a specific skill set (understanding both research methods and the policy environment, the ability to build strong relationships, strong communication skills, etc.) and that we exhibit many of the desired qualities.

In response to concerns that many variables may influence whether evidence is used in government decision-making, we were urged to take a 'long view.' An analogy was drawn with the promotion and adoption of outcome-based Monitoring and Evaluation systems, which took a while to take root and will take even longer to implement appropriately.

In the end, our workshop struck the right balance between providing (credible) detail and teasing out potential implications for policy-makers at the national and provincial levels of government. While everyone who attended acknowledged the challenges in promoting the use of this kind of evidence, our programme management were urged to be patient about the work we do and reminded that prudent action is required to support the increasing uptake of evidence in government decision-making.