Windows 10 Task Manager ‘% CPU’ skew – A Tale of Two Metrics

EDIT: My co-worker, Aaron Margosis, wrote his own take on this issue; you can read it here: Task Manager's CPU Numbers Are All But Meaningless!

Windows 10 Task Manager is often used by end users to gauge the performance of their machines, especially when they think something is amiss. There are several reasons why it isn't really a good performance gauge.

  1. It’s a point-in-time measurement that lacks context of the overall scale of resource usage.
  2. It doesn't see inside processes, so it can't show the impact of antivirus and other security software on them.
  3. At the time of writing, Task Manager's CPU stats are deceptive and inconsistent (the rest of this post explains why).

Say what?! A primer on this can be found at CPU usage exceeds 100% in Task Manager and Performance Monitor if Intel Turbo Boost is active.

However, Intel Turbo Boost is not the only scenario. Anything that moves CPU cores away from their nominal 100% output will skew the results in Task Manager.

Thermal throttling, Intel SpeedStep, Intel Turbo Boost, AMD Precision Boost 2, AMD Precision Boost Overdrive, and C-state management for power savings under the Balanced or Power Saver power plans (Balanced is the default, by the way) all modify the speed of one or more cores inside a CPU. The point of this article is not that these technologies are bad; it's that Task Manager currently does not take their modifications to a core's output into account in its calculations, per se. Or rather, it does, but it doesn't tell you.

One would normally expect a CPU to simply range from 0-100%, with an app using, say, 5% of it. But if the CPU has 8 cores and the core running the app's thread is in boost mode, we're not measuring against 100% of nominal capacity; it's more like 112%. So now the Processes tab is effectively showing you a percentage of 112%, and so on.
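As a rough sketch of the two views (my own model, not Task Manager's actual internals; the 8-core count and 1.12 boost factor are illustrative assumptions):

```cpp
// Back-of-the-envelope model of the two Task Manager views.
// The core count and 1.12 boost factor are illustrative assumptions,
// not values read from any real machine.
#include <cstdio>

int main() {
    const int    cores       = 8;    // logical processors (assumption)
    const double busyCores   = 1.0;  // one thread spinning at 100% on one core
    const double boostFactor = 1.12; // boosted core at 112% of nominal clock

    // Details-tab style: % Processor Time, each core capped at a flat 100%.
    double processorTime = busyCores / cores * 100.0;

    // Processes-tab style: % Processor Utility, scaled by actual frequency
    // relative to nominal (a simplification of how the counter behaves).
    double processorUtility = busyCores * boostFactor / cores * 100.0;

    std::printf("%% Processor Time   : %5.2f\n", processorTime);    // 12.50
    std::printf("%% Processor Utility: %5.2f\n", processorUtility); // 14.00
}
```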

For example, my co-worker Aaron Margosis wrote a utility that can run a thread at 100% CPU on a core. In the screenshot below, I'm running a single thread at 100% CPU on one core of an AMD Ryzen 5900X.
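Aaron's utility does considerably more, but a minimal sketch of the basic idea, pinning one thread to a single logical processor with the Win32 API and spinning, looks roughly like this:

```cpp
// Minimal sketch: pin the current thread to logical processor 0 and spin.
// Aaron's actual tool is far more configurable; this just reproduces the
// "one thread at 100% on one core" scenario used in the screenshots.
#include <windows.h>
#include <cstdio>

int main() {
    // Affinity mask with only bit 0 set = logical processor 0.
    if (SetThreadAffinityMask(GetCurrentThread(), 1) == 0) {
        std::fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::puts("Spinning on logical processor 0; press Ctrl+C to stop.");
    volatile unsigned long long sink = 0;
    for (;;) ++sink;  // pure busy loop: 100% of one core
}
```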

[Screenshot: Single Core Details]

Which is accurate? It depends on what you want to know. If the question is how much of all the cores' nominal capacity is in use, one fully busy logical processor out of 24 works out to about 4% CPU, which is what the Details tab shows. But the core the thread is on is likely boosted above its nominal frequency by AMD's chip technology, and the Processes tab scales for that extra output, which is why it reports closer to 6%.
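To make the arithmetic concrete, we can work backwards from the two readings. The numbers below come from the single-thread test (the Processes tab showed roughly 6.6%); treating Utility as Time multiplied by a frequency-scaling factor is my simplification of the counter's behavior:

```cpp
// Recover the frequency scaling implied by the two Task Manager readings
// from the single-thread test on a 24-logical-processor Ryzen 5900X.
// Model: Utility = Time * scale (a simplification, not the exact formula).
#include <cstdio>

int main() {
    const double logicalProcessors = 24.0;
    const double processorTime     = 100.0 / logicalProcessors; // ~4.17%
    const double processorUtility  = 6.6;                       // Processes tab
    std::printf("Details-style reading: %.2f%%\n", processorTime);
    std::printf("Implied boost factor : %.2f\n",
                processorUtility / processorTime);               // ~1.58
}
```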

Does this seem like a large inconsistency? A 2-point gap is no big deal, right?

Let's expand the experiment to 8 cores (the chip has 24 logical processors, so there's plenty of headroom for this test).

This is a slightly larger variance. If I were a user complaining that my machine was slow, I'd naturally think my CPU was being eaten by this test program at 42.4%, when in reality it's using 33.333% (8 of 24 logical processors). That's a variance of roughly 9 points. Not huge in the scheme of things, but still confusing, especially since the CPU tooltip in both tabs of Task Manager says the same thing: "Total processor utilization across all cores."

Below is the same test running on 12 cores of my AMD 5900X (24 logical processors).

So now we're seeing a 13.5-point variance, over a point per core. My system's BIOS is not set to aggressively overclock the CPU; I could probably produce bigger variances by doing so. Maybe that's post #2 for this topic.
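Extending the same back-of-the-envelope model across all three runs shows the gap growing roughly linearly with the number of busy (and likely boosted) cores. The 12-thread Processes-tab reading isn't quoted directly above, so I've reconstructed it from the stated 13.5-point variance (50% + 13.5 = 63.5%):

```cpp
// Gap between the two Task Manager views across the three test runs.
// Observed Processes-tab values: 6.6 and 42.4 from the screenshots;
// 63.5 reconstructed from the stated 13.5-point variance at 12 threads.
#include <cstdio>

int main() {
    const double logical    = 24.0;
    const int    threads[]  = {1, 8, 12};
    const double observed[] = {6.6, 42.4, 63.5}; // Processes tab readings

    for (int i = 0; i < 3; ++i) {
        double nominal = threads[i] / logical * 100.0; // Details-style %
        std::printf("%2d threads: nominal %6.2f%%, observed %5.1f%%, "
                    "gap %5.2f points\n",
                    threads[i], nominal, observed[i], observed[i] - nominal);
    }
}
```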

These numbers tie back to Performance Monitor as well. The more accurate data points for CPU measurements on Windows 8 / Server 2012 and above are

  • Processor Information\% Processor Utility
  • Processor Information\% Privileged Utility

which is where the "Processes" view gets its values.
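If you want to watch the two families of counters side by side outside of Task Manager, a minimal sketch using the standard Win32 Performance Data Helper (PDH) API could look like this. I'm sampling % Processor Time alongside % Processor Utility for contrast; link against pdh.lib:

```cpp
// Sample "% Processor Time" and "% Processor Utility" for _Total once per
// second using the Performance Data Helper (PDH) API. Link with pdh.lib.
#include <windows.h>
#include <pdh.h>
#include <cstdio>

#pragma comment(lib, "pdh.lib")

int main() {
    PDH_HQUERY   query;
    PDH_HCOUNTER timeCtr, utilCtr;

    if (PdhOpenQueryW(nullptr, 0, &query) != ERROR_SUCCESS) return 1;
    PdhAddEnglishCounterW(query,
        L"\\Processor Information(_Total)\\% Processor Time", 0, &timeCtr);
    PdhAddEnglishCounterW(query,
        L"\\Processor Information(_Total)\\% Processor Utility", 0, &utilCtr);

    PdhCollectQueryData(query); // first sample primes the rate counters
    for (;;) {
        Sleep(1000);
        PdhCollectQueryData(query);
        PDH_FMT_COUNTERVALUE t, u;
        PdhGetFormattedCounterValue(timeCtr, PDH_FMT_DOUBLE, nullptr, &t);
        PdhGetFormattedCounterValue(utilCtr, PDH_FMT_DOUBLE, nullptr, &u);
        std::printf("%% Processor Time: %6.2f   %% Processor Utility: %6.2f\n",
                    t.doubleValue, u.doubleValue);
    }
}
```

On a machine with boost active and work running, the Utility column will typically read higher than the Time column, mirroring the Processes-vs-Details gap shown in the screenshots above.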

So one can think of the Processes tab in Task Manager as roughly "% of the CPU's actual available performance being used" versus the Details tab, which is more "% of CPU time used, out of a flat 100% per core."

Microsoft is continually looking at better ways to display all of this, cognizant that end users rely on Task Manager to gauge performance, not, say, Perfmon or an ETW trace with the Windows ADK.

It is worth noting that this variance does not appear to affect virtual machines, as far as I've been able to observe at this point.

Happy performance profiling

Jeff

 

6 Comments

  1. This is great information, which helps a lot given the ever-increasing core counts and core speeds that vary depending on a wealth of factors. Will Task Manager also show the cores of a big/little CPU differently? That one will be challenging to visualize.
    To be honest, so far I have never had to look at the Power section in WPA. Are there any public blogs/examples online showing which issues you can typically troubleshoot with it? Most columns except for C-states do not tell me much, but even when the CPU is in a lower C-state, how can I deduce from that how much slower it is compared to a higher C-state?

  2. You say “so it’s really 6% of the 4% capable core due to boosting.”

    I don't understand that phrase. Can you give us an example of the kind of arithmetic that might result in that 6.6% we are seeing on the Processes tab?

    • I owe you a response on this, because yeah you’re right, it’s unclear. I’m in the middle of doing some writing, let me unpack this data again and do a sample of how it works. Probably this weekend.

  3. Great article! I have been wondering about this for a while. And in fact, right now perfmon tells me that my processor time is on average 1.44% whereas my processor utility is at around 11%. I still can't explain that big of a difference though, especially as Task Manager shows that my CPU speed is pretty much at its nominal value.
