You’ve invested in the tools. Teachers are logging in. Students are clicking through activities. Usage reports show green across the board. But in leadership conversations with board members and parents, one question inevitably surfaces: Is this actually helping students learn? If that question feels difficult to answer with confidence, you’re not alone.
Most campus administrators are sitting on mountains of ed tech usage data and very little outcome data. That gap, between what tools track and what actually matters, is exactly where ed tech investments break down. The real measure isn’t return on investment in the financial sense. It’s Return on Instruction (ROI): the degree to which a tool produces measurable gains in student learning. That’s the standard every tech tool on your campus should be held to, and most aren’t being evaluated that way.
This post gives you a practical, vendor-neutral framework to change that. You’ll walk away with three diagnostic questions to apply to every tool in your inventory, a simple audit process you can run with your technology team, and a free worksheet to get started today.
The Problem with Counting Clicks
Usage metrics feel like proof. Login rates, session counts, time-on-task reports: they show up in vendor dashboards as colorful graphs, and they’re easy to present at a budget meeting. The problem is they measure activity, not learning.
Think of it this way. A student can log into a reading platform every day for a semester and show zero growth in reading comprehension. The dashboard says the tool is working. The assessment data tells a different story.
Activity is not evidence. Engagement is not achievement. Until administrators make that distinction clear, technology budgets will continue to be driven by the loudest vendor pitch rather than the strongest student outcomes.
The good news is that most districts already have the data they need to make better decisions. They just aren’t connecting it to the right questions.
What Real Educational Technology ROI Actually Looks Like
In education, return on investment isn’t just financial. Real ed tech ROI has three dimensions.
Learning gains are the most important. Is there measurable evidence that students using this tool are making academic progress? This can come from benchmark assessments, progress monitoring data, or state assessment trends over time.
Teacher efficiency matters, too. A tool that saves teachers meaningful planning or grading time has real value, as long as it isn’t replacing instructional quality with automated busywork.
Equity of access is the dimension most often skipped. A tool that produces strong average outcomes but leaves your most vulnerable learners behind isn’t delivering full value. It may actually be widening gaps.
When all three dimensions show positive results, you have a tool worth keeping. When one or more falls short, you have a decision to make.
The Three Questions Every Administrator Should Ask
These questions work whether you’re evaluating a tool you’ve had for three years or one you’re considering for a pilot.
Is it being used as intended, and could it be used more effectively?
Most tech tools are designed around a specific usage model: a certain number of minutes per week, a particular sequence of activities, a defined level of teacher facilitation. When students and teachers use tools differently than intended, the outcomes the vendor promises don’t apply.
Before drawing any conclusions about a tool’s effectiveness, verify that it’s being implemented as designed. If adoption is low or inconsistent, the problem may be training, fit, or competing demands on teacher time, not the tool itself. That distinction matters before you renew or cancel.
But implementation fidelity is only the first half of this question. Even when a tool is being used exactly as intended, it’s worth asking whether it’s being used as powerfully as it could be. Many tools have features that go largely untouched: built-in formative assessment functions, collaborative activity modes, real-time feedback loops, or interactive content formats that move students from passive consumption to active engagement. A tool that’s technically in use but limited to low-level tasks, digital worksheets, or independent practice drills may be delivering a fraction of its potential instructional value.
It’s also worth recognizing that the same tool may look very different across content areas and grade levels. A platform used for vocabulary practice in a fourth-grade ELA class may serve an entirely different instructional function in a high school science classroom. That variation isn’t a problem. It’s actually valuable information. Understanding how a tool is being adapted across your campus tells you a great deal about its flexibility and instructional range.
One of the best places to surface these insights is the PLC meeting. Bring the tool into the conversation and ask teachers directly: How is this tool supporting the learning process in your classroom? Where is it working well, and where is it falling short? How could it be used more effectively to engage students, deepen understanding, or support assessment? These conversations often surface instructional strategies and creative uses that never show up in a usage report.
Critically, ask teachers to connect their observations to data. Anecdotal impressions are a starting point, not a conclusion. Encourage teachers to identify what they’re seeing in student work, formative assessment results, or progress monitoring data that supports or challenges what they observe. When teacher insight and classroom data align, you have a much stronger basis for an instructional decision.
The right question isn’t just “Are teachers following the usage model?” It’s “Are students experiencing deeper learning because of this tool than they would without it?”
Is it correlated with measurable outcomes?
This is the core question, and it’s more answerable than most administrators think. Start by identifying what learning goal the tool was purchased to address. Then pull the outcome data most closely tied to that goal: benchmark scores, formative assessment results, attendance trends, or course completion rates.
Look for patterns. Are students who use the tool consistently performing better on that measure than those who don’t? You don’t need a formal research study to spot a meaningful signal in your own campus data. A simple side-by-side comparison, organized by usage level and outcome, will tell you a great deal.
Cross-referencing tool usage reports with your existing assessment data is the single highest-leverage move in an ed tech audit.
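If someone on your campus or district team is comfortable with a short script, that cross-reference takes only a few lines. The sketch below is a minimal illustration, not a prescribed method: the file names, column names, and usage-tier cutoffs (usage_export.csv, benchmark_scores.csv, total_minutes, and so on) are hypothetical placeholders to swap for whatever your vendor export and assessment system actually provide.

```python
import pandas as pd

# Hypothetical exports: adjust file and column names to your own systems.
usage = pd.read_csv("usage_export.csv")        # columns: student_id, total_minutes
scores = pd.read_csv("benchmark_scores.csv")   # columns: student_id, fall_score, spring_score

# Join the vendor usage report to your internal assessment data by student ID.
merged = usage.merge(scores, on="student_id", how="inner")

# Bucket students by how consistently they used the tool (cutoffs are placeholders).
merged["usage_tier"] = pd.cut(
    merged["total_minutes"],
    bins=[0, 300, 900, float("inf")],
    labels=["Low", "Moderate", "High"],
)

# Compare average benchmark growth (spring minus fall) across usage tiers.
merged["growth"] = merged["spring_score"] - merged["fall_score"]
print(merged.groupby("usage_tier", observed=True)["growth"].agg(["count", "mean"]).round(1))
```

The output is exactly the side-by-side comparison described above: how many students fall in each usage tier and their average growth. It shows correlation, not causation, but it is usually enough to tell you whether a tool deserves a closer look or a harder conversation.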
Is it working equitably?
Disaggregate your outcome data by student group. Are English learners, students receiving special education services, and economically disadvantaged students experiencing the same benefits as the broader student population? If a tool produces strong average results but underserves specific groups, that’s a significant finding.
Equity analysis doesn’t require sophisticated data infrastructure. It requires the willingness to ask the question and act on what you find.
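The equity check can piggyback on the same analysis. The block below continues the earlier sketch (it reuses the merged table built there) and assumes a hypothetical student_groups.csv export from your student information system with one group label per student; most districts already maintain these indicators.

```python
# Hypothetical demographics export from your SIS: one group label per student.
groups = pd.read_csv("student_groups.csv")     # columns: student_id, student_group

# Disaggregate the same growth comparison by student group and usage tier.
equity = (
    merged.merge(groups, on="student_id", how="left")
          .groupby(["student_group", "usage_tier"], observed=True)["growth"]
          .agg(["count", "mean"])
          .round(1)
)
print(equity)
```

If the high-usage rows show strong growth for the campus overall but flat growth for a specific group, that is the finding to bring to the renewal conversation.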
How to Run a Simple Tool Audit
You don’t need a full semester or a dedicated research team. A focused audit can be completed in a few weeks with your existing technology and instructional leadership staff.
- Inventory your tools. List every software program your campus is actively paying for. Include the cost, the intended instructional purpose, and the grade levels or subject areas it serves.
- Map each tool to a learning goal. If you can’t articulate what measurable outcome the tool is supposed to support, that’s your first finding.
- Pull the data. For each tool, gather usage reports from the vendor and outcome data from your internal systems. Look for correlation between consistent use and the targeted outcome.
- Make a decision for each tool. Keep it, cut it, or move it to a structured pilot with clearer success metrics and a defined evaluation timeline.
Download the Ed Tech ROI Evaluation Worksheet to run this process with your team.
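If your team wants to prototype the inventory digitally before or alongside the worksheet, the same fields translate directly into a small table. The example below is purely illustrative: the tool names, costs, goals, and decisions are made up, and the columns simply mirror the audit steps above.

```python
import pandas as pd

# Hypothetical audit inventory: one row per tool, mirroring the audit fields above.
inventory = pd.DataFrame([
    {"tool": "Reading Platform A", "annual_cost": 12000, "grades": "3-5",
     "learning_goal": "Reading comprehension growth",
     "outcome_measure": "Fall/spring reading benchmark", "decision": "Pilot"},
    {"tool": "Math Practice B", "annual_cost": 8500, "grades": "6-8",
     "learning_goal": "Algebra readiness",
     "outcome_measure": "District math benchmark", "decision": "Keep"},
])

# Total spend by decision frames the budget conversation in concrete dollars.
print(inventory.groupby("decision")["annual_cost"].sum())
```

Even two or three rows like these make the keep, cut, or pilot conversation concrete.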
Making the Case to Stakeholders
When you present findings to your school board, teachers, or community, lead with outcomes, not features. “This tool is associated with a measurable gain in third-grade math benchmark scores among consistent users” is a far more compelling case than “Students spent 40,000 minutes on the platform last semester.”
Outcome language builds trust. It also sets a standard that vendors will need to meet, which raises the quality of every future purchasing conversation.
Key Takeaways
- Usage data measures activity. Outcome data measures learning. They are not the same thing.
- Real ed tech ROI has three dimensions: learning gains, teacher efficiency, and equity of access.
- Three questions drive every effective ed tech evaluation: Is it used as intended? Is it correlated with outcomes? Is it working equitably?
- A simple audit (inventory, goal mapping, data review, and decision) can be completed with existing staff and data.
- Outcome language strengthens every budget and stakeholder conversation.
Tools should earn their place in your campus budget the same way any instructional resource does: by demonstrating measurable value for students. The framework above gives you a starting point to make those evaluations confidently and consistently.
Download the Ed Tech ROI Evaluation Worksheet to begin your audit. And if you’d like deeper support, TCEA offers campus and district-level technology audits that provide personalized insight and recommendations based on the tools you’re using and how they’re actually being used. To learn more about how we can help, email Dr. Bruce Ellis at bellis@tcea.org.
