Comment 6 for bug 513848

Chase Douglas (chasedouglas) wrote :

@Peter Matulis,

First, please edit the description for consistency. It first says that running Tomcat on Karmic does not produce the expected load, then says that "stress tools applied to Karmic does result in an increase in load (as expected)." I am guessing you mean that Jaunty reports the load average correctly but Karmic does not.

Load average is a time-weighted average of the length of the CPU run queue. For example, a load average of 5 means that roughly 5 processes are ready to run at any given moment. On a dual-core machine that is an overload, because only 2 of those processes can run at once; on an eight-core machine all 5 can run, so it is not overloaded.
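On Linux, the same numbers that top and uptime report can be read straight from the kernel. A minimal sketch, assuming a Linux /proc filesystem and coreutils:

```shell
#!/bin/bash
# /proc/loadavg fields: 1-min 5-min 15-min running/total last-pid
read -r one five fifteen sched lastpid < /proc/loadavg
echo "1-min: $one  5-min: $five  15-min: $fifteen"

# Compare against the core count to judge whether the box is overloaded:
# a 1-min average persistently above $(nproc) means runnable processes
# are waiting for a CPU.
echo "cores: $(nproc)"
```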

One can easily test load avg results by running a simple test case. In bash, run the following:

  while true; do (( i++ )); done

This runs as a single process. It is CPU-bound and never waits on disk or anything else, so it is almost always runnable. Run it for a minute and you should see the 1-minute load average increase by roughly 1 (the 1-minute figure is noisy; after 15 minutes the 15-minute average stabilizes better than the 1-minute one). Running two shells like this should increase the load average by 2. I have tested this on Karmic and found it to be the case.
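To avoid tying up interactive shells, the same experiment can be scripted by launching the busy loops in the background. A rough sketch (the 60-second sleep is only long enough for the 1-minute average to move noticeably, not to fully stabilize):

```shell
#!/bin/bash
# Start two always-runnable busy loops in the background.
pids=()
for i in 1 2; do
  ( while true; do :; done ) &
  pids+=("$!")
done

sleep 60                         # let the 1-minute average climb
awk '{print "1-min load avg:", $1}' /proc/loadavg

kill "${pids[@]}"                # clean up the busy loops
```

With two loops running you would expect the 1-minute average to rise by roughly 2 over its idle baseline.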

Can you provide further details on why you believe there is an issue with Karmic? Also, can you verify that you are running the exact same tests on the exact same hardware, and provide comparable screenshots from both Karmic and Jaunty? It is difficult to see the real differences when the screenshots are not easily comparable.

Thanks