What are frequently asked questions (FAQ) about Rapid Cycle Evaluation (RCE)?

This guide answers some of the most common questions about Rapid Cycle Evaluation (RCE) in LearnPlatform.

What are usage clusters and how are they formed?

Usage clusters are subsets of students grouped together based on how much they use an edtech product (e.g., low use, moderate use, or high use). Rapid Cycle Evaluation (RCE) generates these clusters statistically from natural usage patterns, using an algorithm that identifies the optimal number of clusters based on similarities in total product usage (e.g., total minutes using the edtech product). RCE then compares usage patterns and product effectiveness across these usage clusters.
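As a rough illustration of how such clusters might be formed, the sketch below uses k-means clustering with the cluster count chosen by silhouette score. This is an assumption for illustration only; the FAQ does not name RCE's actual algorithm, and the data and names here are made up.

```python
# Hypothetical sketch: cluster students on total product usage with k-means,
# choosing the number of clusters (k) by silhouette score. RCE's actual
# algorithm is not specified in this FAQ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_usage(total_minutes, max_k=5):
    """Group students into usage clusters (e.g., low/moderate/high use)."""
    X = np.asarray(total_minutes, dtype=float).reshape(-1, 1)
    best_k, best_score, best_labels = 2, -1.0, None
    for k in range(2, max_k + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)  # higher = better-separated clusters
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Made-up total minutes of product use per student
minutes = [5, 8, 12, 40, 45, 50, 120, 130, 140]
k, labels = cluster_usage(minutes)
print(k, labels)  # e.g., 3 clusters corresponding to low, moderate, high use
```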

What is a trial (or pilot) and how is it integrated into Rapid Cycle Evaluation (RCE)?

A trial (or pilot) uses a research-backed survey to help users gather feedback and insight from educators about the perceived effectiveness of an edtech product. It allows stakeholders to generate qualitative and quantitative data (i.e., product grades based on the core criteria of the LearnPlatform Grading Rubric, plus open-ended comments) from educators across an entire school, district, or state. In addition to product feedback sourced from verified educators in LearnPlatform, RCE integrates trial results into the Feedback section of the RCE report. This lets users see how their own educators, and those in the LearnCommunity, evaluate the product on the core criteria that matter most when trying, buying, or using an edtech product.

How does the RCE divide the sample into treatment and control groups?

  • Control study design: In a Control study design, treatment and control (or comparison) groups are determined by the school or district. If a school or district assigns students to the treatment and control groups (with or without random assignment), these pre-defined groups are used in the RCE.
  • Comparative study design: Many schools and districts run widespread edtech implementations rather than conducting a trial (or pilot) via an experimental design. Alternatively, schools and districts may provide historical data to evaluate edtech usage and impact without having previously employed a research design. In cases like these, the treatment group consists of students who used the edtech product, and the control group consists of students who did not.
  • Correlative study design: Correlative studies do not include a control group; they include only a treatment group that received the intervention. In these study designs, RCE examines the relationship between product usage and an educational outcome while statistically controlling for covariates, as in the sketch below.
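As a concrete, purely illustrative sketch of the correlative case, an ordinary least squares regression can estimate the usage/outcome relationship while adjusting for a covariate. The variable names and simulated data below are hypothetical, not RCE's actual model.

```python
# Hypothetical sketch of a correlative analysis: regress an outcome on
# product usage while statistically controlling for a covariate (pretest).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
usage = rng.uniform(0, 300, n)       # minutes of product use
pretest = rng.normal(70, 10, n)      # covariate: prior achievement
posttest = 0.02 * usage + 0.8 * pretest + rng.normal(0, 5, n)  # simulated outcome

X = sm.add_constant(np.column_stack([usage, pretest]))
model = sm.OLS(posttest, X).fit()
print(model.params[1])  # usage coefficient: the usage/outcome relationship,
                        # adjusted for the pretest covariate
```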

Does RCE account for the variety of variables impacting edtech effectiveness?

Beyond the effect of edtech on student achievement, other factors influence the effectiveness of any given intervention, such as quality of instruction and differences in student demographics or prior achievement. RCE can account for student-, class-, and school-level variables such as grade level, previous performance, student demographics, and many other factors. RCE accounts for all covariates included in the data and statistically adjusts the effect size accordingly.
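One common way to make such an adjustment (assumed here for illustration; the FAQ does not specify RCE's exact model) is to include a treatment indicator alongside the covariates in a single regression, so the estimated effect is adjusted for those covariates:

```python
# Hypothetical sketch of covariate adjustment: the coefficient on the
# treatment indicator is the effect estimate after adjusting for covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
treated = rng.integers(0, 2, n)      # 1 = used the edtech product
grade = rng.integers(3, 9, n)        # covariate: grade level
prior = rng.normal(70, 10, n)        # covariate: previous performance
outcome = 2.0 * treated + 0.7 * prior + 0.5 * grade + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([treated, prior, grade]))
fit = sm.OLS(outcome, X).fit()
print(f"Covariate-adjusted treatment effect: {fit.params[1]:.2f}")
```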

How does effect size within performance quintiles inform decisions on closing the achievement gap?

An examination of performance quintiles allows RCE to determine whether an edtech product demonstrates the ability to close the achievement gap.

  • Comparative/Control study designs: First, within the treatment and control groups, students are grouped into quintiles based on their prior performance (e.g., GPA before the intervention, a previous test score). RCE then computes an effect size within each achievement group (i.e., the standardized mean difference between the posttest scores of treatment and control students at that achievement level), showing how well an edtech product works for students at different achievement levels. Products that show a large, positive effect size for historically low-performing students may help close the achievement gap. For example, within a Control or Comparative design, if RCE finds that effect sizes for an edtech product are positive and higher for students in the low achievement quintiles, then the product demonstrates potential effectiveness at closing the achievement gap.
  • Correlative study designs: RCE groups treatment students into quintiles based on their prior performance. An effect size is then computed within each achievement group, showing the relationship between product usage and posttest student achievement. Positive effect sizes indicate that achievement generally increases as usage increases, whereas negative effect sizes indicate that achievement generally decreases as usage increases. For example, within a Correlative design, if RCE finds a negative effect size for a product for students in the Lowest Achievement quintile, greater product usage was associated with lower posttest scores for this specific student group.
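To make the quintile computation concrete, here is a minimal sketch for the Comparative/Control case. It assumes the standardized mean difference is computed as Cohen's d with a pooled standard deviation; the data and column names are made up for illustration.

```python
# Hypothetical sketch: split students into prior-performance quintiles, then
# compute a standardized mean difference (Cohen's d) between treatment and
# control posttest scores within each quintile.
import numpy as np
import pandas as pd

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "prior": rng.normal(70, 10, 500),    # e.g., prior GPA or test score
    "treated": rng.integers(0, 2, 500),  # 1 = used the edtech product
    "posttest": rng.normal(75, 10, 500),
})
df["quintile"] = pd.qcut(df["prior"], 5, labels=[1, 2, 3, 4, 5])

for q, grp in df.groupby("quintile", observed=True):
    t = grp.loc[grp["treated"] == 1, "posttest"]
    c = grp.loc[grp["treated"] == 0, "posttest"]
    print(f"Quintile {q}: d = {cohens_d(t, c):.2f}")
```

A positive d in the lowest quintile would match the pattern described above: the product appears to help historically low-performing students.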

How are the results shared with stakeholders?

Administrators have complete flexibility and control over sharing results across their organizations and with key stakeholders. LearnPlatform lets administrators share RCE reports, teacher feedback results, and usage dashboards via a unique URL to the report and/or in printed format. Administrators can set login permissions so that each type of user can access the results relevant to their role. In addition, all graphics and visual displays in the RCE can be exported (e.g., as PNG, JPEG, or SVG).