The promise of technology in the classroom is great: enabling personalized, mastery-based learning; saving teacher time; and equipping students with the digital skills they will need for 21st-century careers. Indeed, controlled pilot studies have shown meaningful improvements in student outcomes through personalized blended learning.1 During this time of school shutdowns and remote learning, education technology has become a lifeline for the continuation of learning.
As school systems begin to prepare for a return to the classroom, many are asking whether education technology should play a greater role in student learning beyond the immediate crisis and what that might look like. To help inform the answer to that question, this article analyzes one important data set: the 2018 Programme for International Student Assessment (PISA), published in December 2019 by the Organisation for Economic Co-operation and Development (OECD).
Every three years, the OECD uses PISA to test 15-year-olds around the world on math, reading, and science. What makes these tests so powerful is that they go beyond the numbers, asking students, principals, teachers, and parents a series of questions about their attitudes, behaviors, and resources. An optional student survey on information and communications technology (ICT) asks specifically about technology use—in the classroom, for homework, and more broadly.
In 2018, more than 340,000 students in 51 countries took the ICT survey, providing a rich data set for analyzing key questions about technology use in schools. How much is technology being used in schools? Which technologies are having a positive impact on student outcomes? What is the optimal amount of time to spend using devices in the classroom and for homework? How does this vary across different countries and regions?
From other studies we know that how education technology is used, and how it is embedded in the learning experience, is critical to its effectiveness. The PISA data focus on the extent and intensity of use, not on the pedagogical context of each classroom. They therefore cannot answer questions about the eventual potential of education technology—but they can powerfully tell us the extent to which that potential is being realized today in classrooms around the world.
Five key findings from the latest results help answer these questions and suggest potential links between technology and student outcomes:
- The type of device matters—some are associated with worse student outcomes.
- Geography matters—technology is associated with higher student outcomes in the United States than in other regions.
- Who is using the technology matters—technology in the hands of teachers is associated with higher scores than technology in the hands of students.
- Intensity matters—students who use technology intensely or not at all perform better than those with moderate use.
- A school system’s current performance level matters—in lower-performing school systems, technology is associated with worse results.
This analysis covers only one source of data, and it should be interpreted with care alongside other relevant studies. Nonetheless, the 2018 PISA results suggest that systems aiming to improve student outcomes should take a more nuanced and cautious approach to deploying technology once students return to the classroom. It is not enough to add devices to the classroom, check the box, and hope for the best.
What can we learn from the latest PISA results?
PISA data have their limitations. First, these data relate to high-school students, and findings may not be applicable in elementary schools or postsecondary institutions. Second, these are single-point observational data, not longitudinal experimental data, which means that any links between technology and results should be interpreted as correlation rather than causation. Third, the outcomes measured are math, science, and reading test results, so our analysis cannot assess important soft skills and nonacademic outcomes.
It is also worth noting that technology for learning has implications beyond direct student outcomes, both positive and negative. PISA cannot address these broader issues, and neither does this paper.
But PISA results, which we’ve broken down into five key findings, can still provide powerful insights. The assessment strives to measure the understanding and application of ideas, rather than the retention of facts derived from rote memorization, and the broad geographic coverage and sample size help elucidate the reality of what is happening on the ground.
Finding 1: The type of device matters
The evidence suggests that some devices have more impact than others on outcomes (Exhibit 1). Controlling for student socioeconomic status, school type, and location,2 the use of data projectors3 and internet-connected computers in the classroom is correlated with performance nearly a full grade level higher on the PISA assessment (assuming approximately 40 PISA points to every grade level).4
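For readers who want to see the shape of such an adjusted comparison, the sketch below runs a simple regression of reading scores on classroom projector use with the controls named above. It assumes a hypothetical flat file with columns reading_score, uses_projector, escs, school_type, and location; a rigorous replication would instead work from the OECD microdata, with its plausible values and replicate weights.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical flat file and column names; real PISA microdata use OECD variable
# codes, ten plausible values per subject, and replicate weights, all omitted here.
df = pd.read_csv("pisa_2018_ict_sample.csv")

# OLS of reading score on classroom projector use (coded 0/1), adjusting for the
# controls named in the text: socioeconomic status (ESCS), school type, and location.
model = smf.ols(
    "reading_score ~ uses_projector + escs + C(school_type) + C(location)",
    data=df,
).fit()

points = model.params["uses_projector"]  # association expressed in PISA points
print(f"{points:.1f} PISA points, roughly {points / 40:.2f} grade levels")
```

The final line applies the same rule of thumb used in the text: roughly 40 PISA points correspond to one grade level of learning.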
On the other hand, students who use laptops and tablets in the classroom have worse results than those who do not. For laptops, the impact of technology varies by subject; students who use laptops score five points lower on the PISA math assessment, but the impact on science and reading scores is not statistically significant. For tablets, the picture is clearer—in every subject, students who use tablets in the classroom perform a half-grade level worse than those who do not.
Some technologies are more neutral. At the global level, there is no statistically significant difference between students who use desktop computers and interactive whiteboards in the classroom and those who do not.
Finding 2: Geography matters
Looking more closely at the reading results, which were the focus of the 2018 assessment,5 we can see that the relationship between technology and outcomes varies widely by country and region (Exhibit 2). For example, in all regions except the United States (representing North America),6 students who use laptops in the classroom score between five and 12 PISA points lower than students who do not use laptops. In the United States, students who use laptops score 17 PISA points higher than those who do not. It seems that US students and teachers are doing something different with their laptops than those in other regions. Perhaps this difference is related to learning curves that develop as teachers and students learn how to get the most out of devices. A proxy to assess this learning curve could be penetration—71 percent of US students claim to be using laptops in the classroom, compared with an average of 37 percent globally.7 We observe a similar pattern with interactive whiteboards in non-EU Europe. In every other region, interactive whiteboards seem to be hurting results, but in non-EU Europe they are associated with a lift of 21 PISA points, a total that represents a half-year of learning. In this case, however, penetration is not significantly higher than in other developed regions.
Finding 3: It matters whether technology is in the hands of teachers or students
The survey asks students whether the teacher, student, or both were using technology. Globally, the best results in reading occur when only the teacher is using the device, with some benefit in science when both teacher and students use digital devices (Exhibit 3). Exclusive use of the device by students is associated with significantly lower outcomes everywhere. The pattern is similar for science and math.
The regional differences are instructive here as well. Looking again at reading, we note that US students get a significant lift (three-quarters of a year of learning) when either the teacher alone or the teacher and students together use devices, while students who use a device on their own score significantly lower (half a year of learning) than students who do not use devices at all. Exclusive use of devices by the teacher is associated with better outcomes in Europe too, though the size of the effect is smaller.
Finding 4: Intensity of use matters
PISA also asked students about intensity of use—how much time they spend on devices,8 both in the classroom and for homework. The results are stark: students who either shun technology altogether or use it intensely are doing better, with those in the middle flailing (Exhibit 4).
The regional data show a dramatic picture. In the classroom, the optimal amount of time to spend on devices is either “none at all” or “greater than 60 minutes” per subject per week in every region and every subject (this is the amount of time associated with the highest student outcomes, controlling for student socioeconomic status, school type, and location). In no region is a moderate amount of time (1–30 minutes or 31–60 minutes) associated with higher student outcomes. There are important differences across subjects and regions. In math, the optimal amount of time is “none at all” in every region.9 In reading and science, however, the optimal amount of time is greater than 60 minutes for some regions: Asia and the United States for reading, and the United States and non-EU Europe for science.
The pattern for using devices for homework is slightly less clear-cut. Students in Asia, the Middle East and North Africa (MENA), and non-EU Europe score highest when they spend “no time at all” on devices for their homework, while students who spend a moderate amount of time (1–60 minutes) score best in Latin America and the European Union. Students in the United States who spend more than 60 minutes on devices for homework achieve the best outcomes.
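As a concrete illustration of the bucket comparison above, the sketch below tabulates unadjusted mean reading scores by region and time-on-device category. It again assumes a hypothetical flat file with columns region, class_time_bucket, homework_time_bucket, and reading_score; the figures cited above additionally control for socioeconomic status, school type, and location.

```python
import pandas as pd

# Hypothetical bucket labels mirroring the survey categories discussed above.
buckets = ["none", "1-30 min", "31-60 min", ">60 min"]

df = pd.read_csv("pisa_2018_ict_sample.csv")
for col in ["class_time_bucket", "homework_time_bucket"]:
    df[col] = pd.Categorical(df[col], categories=buckets, ordered=True)

# Unadjusted mean reading score by region and weekly device time in class...
in_class = (
    df.groupby(["region", "class_time_bucket"], observed=False)["reading_score"]
    .mean()
    .unstack("class_time_bucket")
)

# ...and the same comparison for time spent on devices for homework.
homework = (
    df.groupby(["region", "homework_time_bucket"], observed=False)["reading_score"]
    .mean()
    .unstack("homework_time_bucket")
)

print(in_class.round(1))
print(homework.round(1))
```

A table laid out this way makes the U-shaped pattern easy to spot: for most regions the highest means sit in the “none” or “>60 min” columns, with the middle buckets lagging.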
One interpretation of these data is that students need to get a certain familiarity with technology before they can really start using it to learn. Think of typing an essay, for example. When students who mostly write by hand set out to type an essay, their attention will be focused on the typing rather than the essay content. A competent touch typist, however, will get significant productivity gains by typing rather than handwriting.
Finding 5: The school system’s overall performance level matters
Diving deeper into the reading outcomes, which were the focus of the 2018 assessment, we can see the magnitude of the impact of device use in the classroom. In Asia, Latin America, and Europe, students who spend any time on devices in their literacy and language arts classrooms perform about a half-grade level below those who spend none at all. In MENA, they perform more than a full grade level lower. In the United States, by contrast, more than an hour of device use in the classroom is associated with a lift of 17 PISA points, almost a half-year of learning improvement (Exhibit 5).
At the country level, we see that systems at what we would call the “poor-to-fair” stage of the school-system journey10 have the worst relationships between technology use and outcomes. For every poor-to-fair system taking the survey, the amount of time on devices in the classroom associated with the highest student scores is zero minutes. Good and great systems are much more mixed. Students in some very high-performing systems (for example, Estonia and Chinese Taipei) perform best with no device use, but students in other systems (for example, Japan, the United States, and Australia) get the best scores with more than an hour of use per week in their literacy and language arts classrooms (Exhibit 6). These data suggest that multiple approaches are effective for good-to-great systems, but poor-to-fair systems—which are not well equipped to use devices in the classroom—may need to rethink whether technology is the best use of their resources.
What are the implications for students, teachers, and systems?
Looking across all these results, we can say that the relationship between technology and outcomes in classrooms today is mixed, with variation by device, how that device is used, and geography. Our data do not permit us to draw strong causal conclusions, but this section offers a few hypotheses, informed by existing literature and our own work with school systems, that could explain these results.
First, technology must be used correctly to be effective. Our experience in the field has taught us that it is not enough to “add technology” as if it were the missing, magic ingredient. The use of tech must start with learning goals, and software selection must be based on and integrated with the curriculum. Teachers need support to adapt lesson plans that optimize the use of technology, and they should use the technology themselves or in partnership with students, rather than leaving students alone with devices. These lessons hold true regardless of geography. Another ICT survey question asked principals about their schools’ capacity to use digital devices. Globally, students performed better in schools where there were sufficient numbers of devices connected to fast internet service; where they had adequate software and online support platforms; and where teachers had the skills, professional development, and time to integrate digital devices in instruction. This was true even accounting for student socioeconomic status, school type, and location.
Second, technology must be matched to the instructional environment and context. One of the most striking findings in the latest PISA assessment is the extent to which technology has had a different impact on student outcomes in different geographies. This corroborates the findings of our 2010 report, How the world’s most improved school systems keep getting better. Those findings demonstrated that different sets of interventions were needed at different stages of the school-system reform journey, from poor-to-fair to good-to-great to excellent. In poor-to-fair systems, limited resources and teacher capabilities, as well as poor infrastructure and internet bandwidth, are likely to constrain the benefits of student-based technology. Our previous work suggests that more prescriptive, teacher-based approaches and technologies (notably data projectors) are more likely to be effective in this context. For example, social enterprise Bridge International Academies equips teachers across several African countries with scripted lesson plans using e-readers. In general, these systems would likely be better off investing in teacher coaching than in a laptop per child. For administrators in good-to-great systems, the decision is harder, as technology has quite different impacts across different high-performing systems.
Third, technology involves a learning curve at both the system and student levels. It is no accident that the systems in which the use of education technology is more mature are getting more positive impact from tech in the classroom. The United States stands out as the country with the most mature set of education-technology products, and its scale enables companies to create software that is integrated with curricula.11 A similar effect also appears to operate at the student level; those who dabble in tech may be spending their time learning the tech rather than using the tech to learn. This learning curve needs to be built into technology-reform programs.
Taken together, these results suggest that systems that take a comprehensive, data-informed approach may achieve learning gains from thoughtful use of technology in the classroom. The best results come when significant effort is put into ensuring that devices and infrastructure are fit for purpose (fast enough internet service, for example), that software is effective and integrated with curricula, that teachers are trained and given time to rethink lesson plans integrating technology, that students have enough interaction with tech to use it effectively, and that technology strategy is cognizant of the system’s position on the school-system reform journey. Online learning and education technology are currently providing an invaluable service by enabling continued learning over the course of the pandemic; this does not mean that they should be accepted uncritically as students return to the classroom.