EBS Performance Tests

Hi!


There is plenty of positive and negative information about EBS on the Internet, as well as quite a few benchmarks of its performance.

I decided to run a few tests myself and do a small investigation of my own.

So, the test machine was an m1.large instance with the following volumes attached:

  • EBS Standard, 100 GB
  • EBS IO-1 500 IOPS, 100 GB
  • EBS IO-1 1000 IOPS, 100 GB
  • EBS IO-1 2000 IOPS, 200 GB
  • 8x EBS Standard, 30 GB, RAID 10 (assembled roughly as sketched below)
  • Ephemeral, 450 GB
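
For context, here is a minimal sketch of how such a set of volumes and the RAID 10 array can be put together. The availability zone, device names and mount point below are illustrative placeholders, not the exact ones used in this test.

# Provisioned-IOPS (io1) volume, e.g. the 1000 IOPS / 100 GB one
aws ec2 create-volume --size 100 --availability-zone us-east-1a \
    --volume-type io1 --iops 1000

# RAID 10 over eight 30 GB standard EBS volumes already attached to the instance
mdadm --create /dev/md0 --level=10 --raid-devices=8 \
    /dev/xvdo /dev/xvdp /dev/xvdq /dev/xvdr \
    /dev/xvds /dev/xvdt /dev/xvdu /dev/xvdv
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid10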
Several tests were run:

# hdparm -tT /dev/xvdX



# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024



# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw prepare
# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
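
To keep the runs comparable, the same sequence can be wrapped in a small script and executed once per volume. A minimal sketch, assuming each disk is already formatted and mounted (the script name, DEVICE and MOUNTPOINT are placeholders, not names from the actual setup):

#!/bin/bash
# Usage: ./bench.sh /dev/xvdX /mnt/volume   (hypothetical script name and arguments)
DEVICE=$1
MOUNTPOINT=$2

# 1. Linear read speed, three passes (cached and buffered reads)
for i in 1 2 3; do
    hdparm -tT "$DEVICE"
done

# 2. Sequential write, then sequential read with the page cache dropped,
#    so that dd measures the disk and not RAM
cd "$MOUNTPOINT"
dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=5M count=1024

# 3. Random read/write test with sysbench (prepare, run, clean up)
sysbench --num-threads=16 --test=fileio --file-total-size=5G \
    --file-test-mode=rndrw prepare
sysbench --num-threads=16 --test=fileio --file-total-size=5G \
    --file-test-mode=rndrw run
sysbench --num-threads=16 --test=fileio --file-total-size=5G \
    --file-test-mode=rndrw cleanup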

Console output:

EBS Standard, 100 GB

# hdparm -tT /dev/xvdj

/dev/xvdj:
 Timing cached reads:   4866 MB in  2.00 seconds = 2428.53 MB/sec
 Timing buffered disk reads: 242 MB in  3.00 seconds =  80.54 MB/sec
 Timing cached reads:   5146 MB in  2.00 seconds = 2579.25 MB/sec
 Timing buffered disk reads: 294 MB in  3.01 seconds =  97.59 MB/sec
 Timing cached reads:   4870 MB in  2.00 seconds = 2440.55 MB/sec
 Timing buffered disk reads: 306 MB in  3.00 seconds = 101.89 MB/sec

# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 161.222 s, 33.3 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 86.4683 s, 62.1 MB/s

# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.

Operations performed:  6000 Read, 4000 Write, 12800 Other = 22800 Total
Read 93.75Mb  Written 62.5Mb  Total transferred 156.25Mb  (69.816Mb/sec)
 4468.25 Requests/sec executed

Test execution summary:
    total time:                          2.2380s
    total number of events:              10000
    total time taken by event execution: 1.5942
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.16ms
         max:                                 65.00ms
         approx.  95 percentile:               0.02ms

Threads fairness:
    events (avg/stddev):           625.0000/249.48
    execution time (avg/stddev):   0.0996/0.03

EBS IO-1 500 IOPS, 100 GB

# hdparm -tT /dev/xvdh

/dev/xvdh:
 Timing cached reads:   4314 MB in  2.00 seconds = 2161.08 MB/sec
 Timing buffered disk reads: 72 MB in  3.05 seconds =  23.57 MB/sec
 Timing cached reads:   3646 MB in  2.00 seconds = 1826.09 MB/sec
 Timing buffered disk reads: 76 MB in  3.04 seconds =  25.01 MB/sec
 Timing cached reads:   4346 MB in  2.00 seconds = 2175.61 MB/sec
 Timing buffered disk reads: 76 MB in  3.03 seconds =  25.12 MB/sec

# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 271.993 s, 19.7 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 182.106 s, 29.5 MB/s

# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.

Operations performed:  6000 Read, 4000 Write, 12800 Other = 22800 Total
Read 93.75Mb  Written 62.5Mb  Total transferred 156.25Mb  (16.794Mb/sec)
 1074.78 Requests/sec executed

Test execution summary:
    total time:                          9.3042s
    total number of events:              10000
    total time taken by event execution: 0.2975
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.03ms
         max:                                 30.70ms
         approx.  95 percentile:               0.02ms

Threads fairness:
    events (avg/stddev):           625.0000/553.34
    execution time (avg/stddev):   0.0186/0.02

EBS IO-1 1000 IOPS, 100 GB

# hdparm -tT /dev/xvdf

/dev/xvdf:
 Timing cached reads:   5090 MB in  2.00 seconds = 2550.81 MB/sec
 Timing buffered disk reads: 104 MB in  3.03 seconds =  34.30 MB/sec
 Timing cached reads:   5000 MB in  2.00 seconds = 2505.62 MB/sec
 Timing buffered disk reads: 98 MB in  3.10 seconds =  31.64 MB/sec
 Timing cached reads:   5046 MB in  2.01 seconds = 2507.34 MB/sec
 Timing buffered disk reads: 98 MB in  3.04 seconds =  32.19 MB/sec

# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 167.252 s, 32.1 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 126.793 s, 42.3 MB/s

# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.

Operations performed:  6001 Read, 4000 Write, 12800 Other = 22801 Total
Read 93.766Mb  Written 62.5Mb  Total transferred 156.27Mb  (37.871Mb/sec)
 2423.73 Requests/sec executed

Test execution summary:
    total time:                          4.1263s
    total number of events:              10001
    total time taken by event execution: 0.8466
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.08ms
         max:                                 22.31ms
         approx.  95 percentile:               0.02ms

Threads fairness:
    events (avg/stddev):           625.0625/318.25
    execution time (avg/stddev):   0.0529/0.02

EBS IO-1 2000 IOPS, 200 GB

# hdparm -tT /dev/xvdi

/dev/xvdi:
 Timing cached reads:   4846 MB in  2.00 seconds = 2428.51 MB/sec
 Timing buffered disk reads: 90 MB in  3.02 seconds =  29.80 MB/sec
 Timing cached reads:   5122 MB in  2.05 seconds = 2503.64 MB/sec
 Timing buffered disk reads: 100 MB in  3.07 seconds =  32.56 MB/sec
 Timing cached reads:   4330 MB in  2.04 seconds = 2123.52 MB/sec
 Timing buffered disk reads: 102 MB in  3.05 seconds =  33.41 MB/sec

# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 161.549 s, 33.2 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 155.51 s, 34.5 MB/s

# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

128 files, 40960Kb each, 5120Mb total
Creating files for the test.

[root@ip-10-98-91-92 4]# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.

Operations performed:  6000 Read, 4000 Write, 12801 Other = 22801 Total
Read 93.75Mb  Written 62.5Mb  Total transferred 156.25Mb  (74.645Mb/sec)
 4777.28 Requests/sec executed

Test execution summary:
    total time:                          2.0932s
    total number of events:              10000
    total time taken by event execution: 1.0015
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.10ms
         max:                                 10.10ms
         approx.  95 percentile:               0.02ms

Threads fairness:
    events (avg/stddev):           625.0000/177.29
    execution time (avg/stddev):   0.0626/0.02

8x EBS Standard, 30 GB, RAID 10

# hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   3964 MB in  1.93 seconds = 2048.84 MB/sec
 Timing buffered disk reads: 230 MB in  3.53 seconds =  65.13 MB/sec
 Timing cached reads:   3994 MB in  1.99 seconds = 2002.16 MB/sec
 Timing buffered disk reads: 398 MB in  3.00 seconds = 132.64 MB/sec
 Timing cached reads:   4334 MB in  2.03 seconds = 2138.00 MB/sec
 Timing buffered disk reads: 302 MB in  3.02 seconds =  99.84 MB/sec

# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 113.234 s, 47.4 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 97.9346 s, 54.8 MB/s

# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of IO requests: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  5998 reads, 4002 writes, 12800 Other = 22800 Total
Read 93.719Mb  Written 62.531Mb  Total transferred 156.25Mb  (87.287Mb/sec)
 5586.40 Requests/sec executed

General statistics:
    total time:                          1.7901s
    total number of events:              10000
    total time taken by event execution: 1.1625s
    response time:
         min:                                  0.01ms
         avg:                                  0.12ms
         max:                                  8.99ms
         approx.  95 percentile:               0.03ms

Threads fairness:
    events (avg/stddev):           625.0000/171.83
    execution time (avg/stddev):   0.0727/0.01

Ephemeral, 450 GB

# hdparm -tT /dev/xvdb

/dev/xvdb:
 Timing cached reads:   4048 MB in  2.00 seconds = 2027.97 MB/sec
 Timing buffered disk reads: 1794 MB in  3.12 seconds = 575.84 MB/sec
 Timing cached reads:   4854 MB in  2.00 seconds = 2432.18 MB/sec
 Timing buffered disk reads: 1830 MB in  3.00 seconds = 609.94 MB/sec
 Timing cached reads:   3434 MB in  2.00 seconds = 1719.73 MB/sec
 Timing buffered disk reads: 770 MB in  3.13 seconds = 245.97 MB/sec

# dd if=/dev/zero of=tempfile bs=5M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 95.9093 s, 56.0 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=5M count=1024
1024+0 records in
1024+0 records out
5368709120 bytes (5.4 GB) copied, 55.5027 s, 96.7 MB/s

# sysbench --num-threads=16 --test=fileio --file-total-size=5G --file-test-mode=rndrw run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.

Operations performed:  6000 Read, 4000 Write, 12800 Other = 22800 Total
Read 93.75Mb  Written 62.5Mb  Total transferred 156.25Mb  (11.263Mb/sec)
 720.82 Requests/sec executed

Test execution summary:
    total time:                          13.8731s
    total number of events:              10000
    total time taken by event execution: 0.1603
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.02ms
         max:                                  2.34ms
         approx.  95 percentile:               0.02ms

Threads fairness:
    events (avg/stddev):           625.0000/117.83
    execution time (avg/stddev):   0.0100/0.00



Processing the results

1. Read/write speed comparison via dd, MB/s:

    Volume                          Write, MB/s   Read, MB/s
    EBS Standard, 100 GB                 33.3         62.1
    EBS IO-1 500 IOPS, 100 GB            19.7         29.5
    EBS IO-1 1000 IOPS, 100 GB           32.1         42.3
    EBS IO-1 2000 IOPS, 200 GB           33.2         34.5
    8x EBS Standard, RAID 10             47.4         54.8
    Ephemeral, 450 GB                    56.0         96.7

2. Requests per second via sysbench, requests/s:

    Volume                          Requests/s
    EBS Standard, 100 GB               4468.25
    EBS IO-1 500 IOPS, 100 GB          1074.78
    EBS IO-1 1000 IOPS, 100 GB         2423.73
    EBS IO-1 2000 IOPS, 200 GB         4777.28
    8x EBS Standard, RAID 10           5586.40
    Ephemeral, 450 GB                   720.82



Results

So, the ephemeral disk looks like the fastest one, as expected, yet it showed the lowest requests/sec in sysbench.

Sequential read/write speed on standard EBS turned out higher than on the provisioned-IOPS volumes, even the 2000 IOPS one. In random I/O, however, the provisioned-IOPS EBS delivers more requests per second, and the RAID 10 array beats them all.
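
The sysbench throughput and requests-per-second figures are two views of the same measurement: every random request is one 16 KB block, so requests/sec multiplied by 16 KB reproduces the reported MB/s. A quick check against the numbers above:

# requests/sec * 16 KB block size = throughput reported by sysbench
awk 'BEGIN { printf "%.3f MB/s\n", 4777.28 * 16 / 1024 }'   # io1 2000 IOPS: ~74.645 MB/s
awk 'BEGIN { printf "%.3f MB/s\n", 5586.40 * 16 / 1024 }'   # RAID 10:       ~87.287 MB/s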

In the end, as one would expect, provisioned-IOPS EBS volumes win on operations per second and on ease of setup, although a RAID array built from them would be faster still.

If file access speed is what matters to you, use ephemeral storage or standard EBS.

Tags: #aws #Amazon Web Services #ebs #storage #benchmark
