Understanding Linux IOWait (2024)

I have seen many Linux performance engineers treat the “IOWait” portion of CPU usage as an indicator of whether the system is I/O-bound. In this blog post, I will explain why this approach is unreliable and what better indicators you can use.

Let’s start by running a little experiment – generating heavy I/O usage on the system:

```shell
sysbench --threads=8 --time=0 --max-requests=0 fileio --file-num=1 --file-total-size=10G --file-io-mode=sync --file-extra-flags=direct --file-test-mode=rndrd run
```

CPU Usage in Percona Monitoring and Management (PMM):


```shell
root@iotest:~# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so     bi    bo    in    cs us sy id wa st
 3  6      0 7137152  26452 762972    0    0  40500  1714  2519  4693  1  6 55 35  3
 2  8      0 7138100  26476 762964    0    0 344971    17 20059 37865  3 13  7 73  5
 0  8      0 7139160  26500 763016    0    0 347448    37 20599 37935  4 17  5 72  3
 2  7      0 7139736  26524 762968    0    0 334730    14 19190 36256  3 15  4 71  6
 4  4      0 7139484  26536 762900    0    0 253995     6 15230 27934  2 11  6 77  4
 0  7      0 7139484  26536 762900    0    0 350854     6 20777 38345  2 13  3 77  5
```

So far, so good: the I/O-intensive workload clearly corresponds to high IOWait (the “wa” column in vmstat).
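If you want to pull that “wa” value out of vmstat output programmatically, it is just a fixed field position. A minimal sketch, using one line captured from the `vmstat 10` run above as sample input:

```shell
# A sample line from the 'vmstat 10' output above; fields follow the header:
# r b swpd free buff cache si so bi bo in cs us sy id wa st
line=" 3  6      0 7137152  26452 762972    0    0  40500  1714  2519  4693  1  6 55 35  3"

# "wa" is the 16th whitespace-separated field
wa=$(echo "$line" | awk '{print $16}')
echo "iowait=${wa}%"   # prints: iowait=35%
```

In a live pipeline you would feed `vmstat` output into the same awk expression; the sample line here just makes the field arithmetic concrete.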

Let’s continue running our I/O-bound workload and add a heavy CPU-bound load:

```shell
sysbench --threads=8 --time=0 cpu run
```

[Figure: CPU usage in PMM after the CPU-bound load is added]

```shell
root@iotest:~# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so     bi    bo    in    cs us sy id wa st
12  4      0 7121640  26832 763476    0    0  48034  1460  2895  5443  6  7 47 37  3
13  3      0 7120416  26856 763464    0    0 256464    14 12404 25937 69 15  0  0 16
 8  8      0 7121020  26880 763496    0    0 325789    16 15788 33383 85 15  0  0  0
10  6      0 7121464  26904 763460    0    0 322954    33 16025 33461 83 15  0  0  1
 9  7      0 7123592  26928 763524    0    0 336794    14 16772 34907 85 15  0  0  1
13  3      0 7124132  26940 763556    0    0 386384    10 17704 38679 84 16  0  0  0
 9  7      0 7128252  26964 763604    0    0 356198    13 16303 35275 84 15  0  0  0
 9  7      0 7128052  26988 763584    0    0 324723    14 13905 30898 80 15  0  0  5
10  6      0 7122020  27012 763584    0    0 380429    16 16770 37079 81 18  0  0  1
```

What happened? IOWait is completely gone, and now this system does not look I/O-bound at all!

In reality, though, nothing changed for our first workload: it continues to be I/O-bound; it just became invisible when we look at “IOWait”!

To understand what is happening, we really need to understand what “IOWait” is and how it is computed.

There is a good article that goes into more detail on the subject, but basically, “IOWait” is a kind of idle CPU time. If a CPU core goes idle because there is no work to do, the time is accounted as “idle.” If, however, the core went idle because a process is waiting on disk I/O, the time is counted towards “IOWait.”

However, if a process is waiting on disk I/O but other processes on the system can use the CPU, the time will be counted towards their CPU usage as user/system time instead.
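The per-state counters behind this accounting live in `/proc/stat`, which is what vmstat and top read. A minimal sketch of the arithmetic, using a made-up “cpu” line (the jiffy values are hypothetical, not taken from the runs above):

```shell
# Hypothetical aggregate line in /proc/stat format:
# cpu  user nice system idle iowait irq softirq steal
stat="cpu  10000 0 5000 60000 25000 0 0 0"

# idle and iowait are both "CPU had nothing to run" time; they differ only in
# whether some task was blocked on disk I/O at that moment
breakdown=$(echo "$stat" | awk '{ total = $2+$3+$4+$5+$6+$7+$8+$9
    printf "idle=%.0f%% iowait=%.0f%%", 100*$5/total, 100*$6/total }')
echo "$breakdown"   # prints: idle=60% iowait=25%
```

The key point is that iowait is carved out of what would otherwise be idle time; once runnable work appears, the same wall-clock time is charged to user/system instead.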

Because of this accounting, other interesting behaviors are possible. Now, instead of running eight I/O-bound threads, let’s run just one I/O-bound process on a four-core VM:

```shell
sysbench --threads=1 --time=0 --max-requests=0 fileio --file-num=1 --file-total-size=10G --file-io-mode=sync --file-extra-flags=direct --file-test-mode=rndrd run
```

[Figure: CPU usage in PMM during the single-threaded I/O-bound run]

```shell
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so     bi    bo    in    cs us sy id wa st
 3  1      0 7130308  27704 763592    0    0  62000    12  4503  8577  3  5 69 20  3
 2  1      0 7127144  27728 763592    0    0  67098    14  4810  9253  2  5 70 20  2
 2  1      0 7128448  27752 763592    0    0  72760    15  5179  9946  2  5 72 20  1
 4  0      0 7133068  27776 763588    0    0  69566    29  4953  9562  2  5 72 21  1
 2  1      0 7131328  27800 763576    0    0  67501    15  4793  9276  2  5 72 20  1
 2  0      0 7128136  27824 763592    0    0  59461    15  4316  8272  2  5 71 20  3
 3  1      0 7129712  27848 763592    0    0  64139    13  4628  8854  2  5 70 20  3
 2  0      0 7128984  27872 763592    0    0  71027    18  5068  9718  2  6 71 20  1
 1  0      0 7128232  27884 763592    0    0  69779    12  4967  9549  2  5 71 20  1
 5  0      0 7128504  27908 763592    0    0  66419    18  4767  9139  2  5 71 20  1
```

Even though this process is completely I/O-bound, we can see that IOWait (wa) is not particularly high: less than 25%. On larger systems with 32, 64, or more cores, such completely I/O-bottlenecked processes will be all but invisible, generating single-digit IOWait percentages.
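The ceiling here is simple arithmetic: a single thread that is always blocked on I/O can account for at most one core’s worth of not-busy time, so on an otherwise idle N-core machine IOWait tops out around 100/N percent. A quick sketch of that bound:

```shell
# Maximum IOWait one fully I/O-blocked thread can produce on an otherwise
# idle N-core machine: 100/N percent (assumes no other load on the box)
for cores in 4 8 32 64; do
  awk -v n="$cores" 'BEGIN { printf "%2d cores -> at most %.1f%% iowait\n", n, 100/n }'
done
```

For the four-core VM above that bound is 25%, matching the ~20% “wa” we observed; at 64 cores it drops to about 1.6%, which is why such processes vanish from the graph.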

As such, high IOWait shows that many processes in the system are waiting on disk I/O, but even with low IOWait, disk I/O may be the bottleneck for some processes on the system.

If IOWait is unreliable, what can you use instead to give you better visibility?

First, look at application-specific observability. A well-instrumented application tends to know best whether it is bound by the disk and which particular tasks are I/O-bound.

If you only have access to Linux metrics, look at the “b” column in vmstat, which counts processes blocked on disk I/O. It will show such processes even when a concurrent CPU-intensive load masks IOWait:

[Figures: PMM graphs of blocked processes and CPU usage under the combined load]
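The “b” column counts tasks in uninterruptible sleep (state “D”), which is almost always disk I/O. A rough equivalent can be computed by hand from /proc; this is a sketch (it races with processes exiting, hence the error suppression):

```shell
# Count tasks currently in state "D" (uninterruptible sleep, typically disk I/O).
# The state is the field right after the comm field in /proc/<pid>/stat; comm
# may contain spaces, so strip everything up to the closing parenthesis first.
blocked=0
for f in /proc/[0-9]*/stat; do
  state=$(sed 's/^[^)]*) //' "$f" 2>/dev/null | cut -d' ' -f1)
  [ "$state" = "D" ] && blocked=$((blocked+1))
done
echo "blocked=$blocked"
```

On an idle machine this usually prints zero; under the sysbench fileio workload above you would expect it to hover near the number of I/O threads, matching vmstat’s “b” column.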

Finally, you can look at per-process statistics to see which processes are waiting for disk I/O. For Percona Monitoring and Management, you can install a plugin as described in the blog post Understanding Processes Running on Linux Host with Percona Monitoring and Management.

[Figure: per-process states in PMM]

With this extension, we can clearly see which processes are runnable (running or blocked on CPU availability) and which are waiting on disk I/O!

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today
