It’s common practice to roll out a new technology to all students as quickly as possible. But what if you could learn how likely a new technology is to work at your school, in your district, or in your unique setting before you roll it out at scale? Here are some things to think about to help you get the data needed to make evidence-based decisions.
When we are interested in understanding how outcomes changed as a result of using an educational technology, we may wonder why we can’t just look at growth over the course of the year for the individuals who used it. The trouble is that students grow for many reasons: maturation, regular classroom instruction, and other programs all contribute. Without a comparison group of similar students who did not use the technology, there is no way to tell how much of that growth the technology itself produced.
One way to test whether an educational technology is moving the needle is to conduct a matched comparison design evaluation. This method matches technology users to similar non-users and then compares the differences in outcomes between the two groups, as in the sketch below. Here are some things to think about when selecting two similar groups.
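Concretely, matching can be as simple as pairing each user with the non-user who looks most like them at baseline. Below is a minimal sketch in Python, assuming hypothetical student records with a prior-year score, a free/reduced-price lunch flag, and an end-of-year outcome; the `Student` class and `match_and_compare` function are illustrative, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class Student:
    baseline: float  # prior-year test score used for matching
    frl: bool        # free/reduced-price lunch status (a matching characteristic)
    outcome: float   # end-of-year test score

def match_and_compare(users: list[Student], nonusers: list[Student]) -> float:
    """Match each technology user to the most similar non-user
    (same FRL status, closest baseline score) and return the
    average difference in outcomes across the matched pairs."""
    diffs = []
    available = list(nonusers)
    for u in users:
        candidates = [n for n in available if n.frl == u.frl]
        if not candidates:
            continue  # no acceptable match for this user; drop them
        match = min(candidates, key=lambda n: abs(n.baseline - u.baseline))
        available.remove(match)  # match without replacement
        diffs.append(u.outcome - match.outcome)
    return sum(diffs) / len(diffs) if diffs else float("nan")
```

Matching without replacement, as above, keeps each comparison student in only one pair; real evaluations often match on several characteristics at once, for example with propensity scores.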
It is common for your context to limit the types of questions you can ask when conducting an evaluation. For example, if every classroom in your school adopted the technology at the same time, you cannot compare users to similar non-users within that school. Think through those limitations before crafting a research question, so that your evaluation is built on a question you can actually answer.
When you test your technology, your data will reflect the observed outcomes for the students or teachers who used it. But those values are only a sample of the possible outcomes. To know the true effect of the technology, you would need to test it with all students everywhere. Since that isn’t possible, your results will always carry some uncertainty about the technology’s effectiveness.
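One common way to express that uncertainty is a confidence interval around the estimated effect. Below is a minimal sketch, assuming the matched-pair differences produced by the earlier matching sketch; the numbers in the example are hypothetical, and the 1.96 multiplier is the standard normal approximation for a 95% interval.

```python
import statistics

def effect_with_interval(diffs: list[float]) -> tuple[float, float, float]:
    """Return the estimated effect and an approximate 95% confidence interval."""
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5  # standard error of the mean
    return mean, mean - 1.96 * se, mean + 1.96 * se

# Example: matched-pair differences in scale-score points (hypothetical numbers)
effect, lo, hi = effect_with_interval([3.0, -1.0, 4.5, 2.0, 0.5, 3.5, -0.5, 2.5])
print(f"Estimated effect: {effect:.1f} points (95% CI: {lo:.1f} to {hi:.1f})")
```

If the interval includes zero, the data are consistent with the technology having no effect; a wider interval means more uncertainty, which generally narrows as more matched pairs are included.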