
Your sprint ended. The release went out. But the PM is asking why a user story that was "just a small change" took 18 days to ship. Your developer worked on it for two days. So where did the other 16 days go?
This is the question that cycle time vs lead time data in Jira is built to answer, and most project managers running Jira boards are not tracking either one with enough precision to respond. They're watching velocity charts and burndowns while the real delivery data sits untouched in Jira's issue history.
This guide breaks down cycle time vs lead time vs throughput in Jira: what each metric measures, how they differ, where Jira's native reporting falls short, and how to use them to run more predictable, evidence-based projects.
Before you can use these metrics to improve your Jira workflow, you need a clear picture of how they differ. Confusion between cycle time and lead time in Jira is common: both measure time, both involve the same issue, and both end at the same "Done" status. The difference is where the clock starts.
Lead time starts the moment an issue is created in Jira. It ends when the issue is resolved. It captures everything: time sitting in the backlog, time waiting for prioritization, time in active development, time in review, time waiting for deployment. Lead time is the customer's clock. It answers the question your stakeholders are always asking: "How long did I have to wait?"
Cycle time starts when your team begins working on an issue, typically when it moves to "In Progress" in Jira, and ends when it reaches the same "Done" status. It excludes all pre-work waiting time. Cycle time is the team's clock. It answers the internal question: "How fast do we build once we start?"
Throughput is different in nature from the other two. Rather than measuring how long a single issue takes, it measures how many issues your team completes in a given time period: per week, per sprint, or per month. It is the delivery volume signal. Where cycle time and lead time tell you about the speed of individual items moving through your Jira workflow, throughput tells you about the consistency and capacity of the system as a whole.
The most important thing to understand about cycle time vs lead time in Jira: cycle time is always less than or equal to lead time. The gap between them is pure waiting time: issues sitting in the backlog, stuck in a review queue, or waiting for someone to pick them up. For most Jira teams, that gap accounts for 60–80% of total lead time. That is where the delivery problem usually lives, and that is what your velocity chart will never tell you.
Formula: Lead Time = Issue Resolution Date – Issue Creation Date
In Jira terms, this means:
- The clock starts at the issue's Created timestamp.
- The clock stops at the issue's Resolution date, when it reaches "Done".
- Everything in between counts: backlog time, prioritization, active development, review, and deployment waits.
Example: An issue is created on March 1 and resolved on March 19. Lead time = 18 days, even if active development took only two of them.
What to watch for: If your backlog is messy or issues sit unprioritized, your lead time will spike even if your team is fast at execution. That's not a bug; it's the signal.
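The lead time formula is simple enough to compute directly from two issue fields. A minimal Python sketch, using simplified ISO timestamps (Jira's REST API returns `created` and `resolutiondate` with millisecond and timezone suffixes that need extra parsing, so treat this as an illustration rather than production code):

```python
from datetime import datetime

# Hypothetical issue fields, simplified from what Jira's REST API
# returns for GET /rest/api/2/issue/{key}?fields=created,resolutiondate
issue = {
    "created": "2024-03-01T09:00:00",
    "resolutiondate": "2024-03-19T09:00:00",
}

def lead_time_days(issue):
    """Lead time = resolution date minus creation date, in whole days."""
    created = datetime.fromisoformat(issue["created"])
    resolved = datetime.fromisoformat(issue["resolutiondate"])
    return (resolved - created).days

print(lead_time_days(issue))  # → 18
```

Eighteen days of lead time, exactly the number your stakeholder sees, regardless of how few of those days involved active work.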
Formula: Cycle Time = Done Date – In Progress Date
In Jira terms:
- The clock starts when the issue moves to "In Progress".
- The clock stops when the issue reaches "Done".
- Backlog and prioritization time is excluded.
Example: An issue moves to "In Progress" on March 17 and reaches "Done" on March 19. Cycle time = 2 days, no matter how long it sat in the backlog first.
Important nuance: If issues move in and out of "In Progress" multiple times, you need to decide:
- whether the clock starts at the first move to "In Progress" or the most recent one, and
- whether to count only the time spent in active statuses or the full elapsed time from first start to done.
Jira's default reports don't handle this cleanly; that's where time-in-status data becomes critical.
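To make one of those counting rules concrete, here is a short Python sketch that sums only the time an issue actually spends in "In Progress", skipping blocked intervals. The transition data is hypothetical; in practice it would come from the issue changelog:

```python
from datetime import datetime

# Hypothetical status-transition history for one issue:
# (timestamp, status the issue moved INTO)
transitions = [
    ("2024-03-15T10:00:00", "In Progress"),
    ("2024-03-16T10:00:00", "Blocked"),
    ("2024-03-17T10:00:00", "In Progress"),
    ("2024-03-18T10:00:00", "Done"),
]

def cycle_time_days(transitions, active_status="In Progress"):
    """Sum only the intervals spent in the active status.

    This implements the 'count active time only' rule; the elapsed-time
    rule would instead subtract the first active timestamp from the
    Done timestamp.
    """
    total = 0.0
    entered = None
    for stamp, status in transitions:
        t = datetime.fromisoformat(stamp)
        if status == active_status:
            entered = t  # clock starts (or restarts)
        elif entered is not None:
            total += (t - entered).total_seconds() / 86400
            entered = None  # clock stops on any other status
    return total

print(cycle_time_days(transitions))  # → 2.0
```

Under the elapsed-time rule the same issue would show 3 days (March 15 to March 18); under the active-time rule it shows 2. Neither is wrong, but your team has to pick one and apply it consistently.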
Formula: Throughput = Number of issues completed in a given time period
In Jira terms:
- Count the issues that reached "Done" (or were resolved) within the period.
- Measure it per week, per sprint, or per month, and keep the period consistent so results are comparable.
Example: Your team resolves 42 issues in one two-week sprint and 44 in the next. Throughput is roughly 43 issues per sprint, and holding steady.
What matters here: Consistency > spikes. A team completing 40–45 issues every sprint is far more predictable than one jumping between 20 and 70.
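Throughput is just a count per time bucket. A Python sketch grouping hypothetical resolution dates by ISO week (with real data these dates would come from a JQL search such as `resolved >= -30d`):

```python
from collections import Counter
from datetime import datetime

# Hypothetical resolution dates for completed issues
resolved_dates = [
    "2024-03-04", "2024-03-05", "2024-03-07",
    "2024-03-12", "2024-03-13", "2024-03-14", "2024-03-15",
]

def throughput_per_week(dates):
    """Count completed issues per (ISO year, ISO week) bucket."""
    counts = Counter()
    for d in dates:
        year, week, _ = datetime.fromisoformat(d).isocalendar()
        counts[(year, week)] += 1
    return dict(sorted(counts.items()))

print(throughput_per_week(resolved_dates))
# → {(2024, 10): 3, (2024, 11): 4}
```

Plotting those weekly counts over a quarter is what reveals whether delivery is steady or spiky, which is the signal that matters here.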
Jira gives you a solid starting point for tracking delivery metrics like lead time, cycle time, and throughput. You can use built-in reports like control charts and sprint reports, along with basic time tracking in Jira, to get a high-level view of how work is moving.
However, when you want to go deeper, like:
- seeing how long issues spend in each individual status,
- pinpointing which workflow stage (review, waiting states, handoffs) causes the longest delays, or
- handling issues that re-enter "In Progress" multiple times,
you'll need more granular visibility than Jira's native reports typically provide. That's where Jira add-ons come in.
With the right Jira plugin, you can:
- measure the time issues spend in each individual status,
- see where work waits, whether in review queues, blocked states, or handoffs, and
- turn raw issue history into reports you can act on.
For example, Jira tools like RVS Time in Status Report extend Jira’s reporting by pulling detailed data directly from issue history and presenting it in an easy-to-act-on format.

While lead time, cycle time, and throughput give you the what, you still need clarity on the where. That’s where Time in Status Report by RVS Softek adds value.
Instead of looking at total durations alone, it breaks those numbers down into time spent across each workflow stage. This makes it easier to connect your metrics to actual workflow behavior, whether delays are happening in review, waiting states, or handoffs between teams.
With structured views of status-level data and flexible reporting, teams can:
- spot the statuses where issues accumulate the most waiting time,
- compare time-in-status patterns across sprints or issue types, and
- back workflow changes with evidence instead of intuition.
This level of detail turns your metrics from static numbers into diagnostic signals, helping you improve delivery predictability with precision.
The cycle time vs lead time distinction in Jira is not a semantics argument. It is the difference between knowing your team is slow and knowing where they are slow, and those two insights lead to completely different actions.
Lead time tells you what your stakeholders experience. Cycle time tells you where your process breaks down. Throughput tells you whether your improvements are holding. For project managers running Jira, these three metrics are the foundation of evidence-based delivery. They're already in your Jira data. The question is whether you're set up to see them.
