QNAP released three 9-bay 10G NAS models over the course of 2018. A 9-bay layout of five 3.5" bays plus four 2.5" bays was a first on the market. Not long after I picked up the 5-bay TS-531X, QNAP announced the TS-932X, its first ARM-based 9-bay model, priced only slightly above the TS-531X while adding four 2.5" bays.

After the TS-932X came the TS-963X, built around an AMD x86 CPU, and I ordered one right away. Not long after I placed the order, QNAP announced the TS-951X with an Intel x86 CPU, again priced only slightly above the TS-963X. You can never keep up with the pace at which vendors release new models.

For the spec differences among these three 9-bay models, see QNAP's official site. (Official page: QNAP 9-bay NAS lineup)

This unboxing covers the TS-963X, the AMD-based 10GbE model.

The TS-963X comes home

[Photos: QNAP TS-963x 9-bay 10G NAS unboxing]


Quick performance test

A quick first test of this 10GbE NAS's read/write performance.

The test drives are a Micron 250GB SSD and Toshiba DT01ACA300 3TB desktop drives.

A single hard drive transfers at about 190 MB/s; the fifth bay holds an HGST 3TB NAS drive, which reads at about 150 MB/s.
The Micron SSD reads at around 460 MB/s in this NAS.


With two Micron SSDs in RAID 0, I expected read/write throughput of roughly 1000 MB/s. Testing internal throughput with fio, the output below reads as
roughly 900 MB/s and up for reads, and roughly 700 MB/s and up for writes. Write performance came in somewhat below expectations.
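The fio.conf job file itself isn't shown in the post; a job file consistent with the output below (two stonewalled groups running 64 KiB sequential read then write through libaio at iodepth 16, over a 16 GiB working file) might look like this. The directory path and direct=1 are my assumptions:

```ini
[global]
ioengine=libaio
iodepth=16
bs=64k
direct=1
size=16g
directory=/share/Multimedia

[read]
rw=read
stonewall

[write]
rw=write
stonewall
```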


[admin@NAS28ABD6 Multimedia]# fio fio.conf
read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 2 processes
Jobs: 1 (f=1): [_(1),W(1)] [100.0% done] [0KB/670.1MB/0KB /s] [0/10.8K/0 iops] [eta 00m:00s]
read: (groupid=0, jobs=1): err= 0: pid=455: Tue Jun 5 03:28:26 2018
read : io=16384MB, bw=965651KB/s, iops=15088, runt= 17374msec
slat (usec): min=31, max=1088, avg=36.37, stdev= 4.75
clat (usec): min=140, max=8634, avg=1020.04, stdev=428.44
lat (usec): min=184, max=8676, avg=1057.10, stdev=428.43
clat percentiles (usec):
| 1.00th=[ 221], 5.00th=[ 350], 10.00th=[ 466], 20.00th=[ 612],
| 30.00th=[ 756], 40.00th=[ 892], 50.00th=[ 1012], 60.00th=[ 1144],
| 70.00th=[ 1272], 80.00th=[ 1416], 90.00th=[ 1576], 95.00th=[ 1704],
| 99.00th=[ 1864], 99.50th=[ 1912], 99.90th=[ 2008], 99.95th=[ 2672],
| 99.99th=[ 7648]
bw (KB /s): min=937600, max=973184, per=100.00%, avg=965620.71, stdev=6593.75
lat (usec) : 250=1.89%, 500=10.83%, 750=16.83%, 1000=19.23%
lat (msec) : 2=51.11%, 4=0.07%, 10=0.03%
cpu : usr=10.89%, sys=59.37%, ctx=159199, majf=0, minf=266
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=982: Tue Jun 5 03:28:26 2018
write: io=16384MB, bw=772041KB/s, iops=12063, runt= 21731msec
slat (usec): min=25, max=3551, avg=40.33, stdev= 9.37
clat (usec): min=150, max=158904, avg=1246.84, stdev=1485.86
lat (usec): min=190, max=158943, avg=1287.91, stdev=1485.89
clat percentiles (usec):
| 1.00th=[ 652], 5.00th=[ 964], 10.00th=[ 972], 20.00th=[ 1048],
| 30.00th=[ 1128], 40.00th=[ 1192], 50.00th=[ 1208], 60.00th=[ 1272],
| 70.00th=[ 1304], 80.00th=[ 1368], 90.00th=[ 1448], 95.00th=[ 1464],
| 99.00th=[ 1864], 99.50th=[ 3536], 99.90th=[ 4896], 99.95th=[ 5152],
| 99.99th=[95744]
bw (KB /s): min=542208, max=793728, per=99.99%, avg=771976.93, stdev=49383.89
lat (usec) : 250=0.23%, 500=0.42%, 750=0.64%, 1000=11.19%
lat (msec) : 2=86.70%, 4=0.53%, 10=0.26%, 50=0.01%, 100=0.01%
lat (msec) : 250=0.01%
cpu : usr=47.23%, sys=50.48%, ctx=1421, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=965650KB/s, minb=965650KB/s, maxb=965650KB/s, mint=17374msec, maxt=17374msec

Run status group 1 (all jobs):
WRITE: io=16384MB, aggrb=772040KB/s, minb=772040KB/s, maxb=772040KB/s, mint=21731msec, maxt=21731msec

Disk stats (read/write):
dm-7: ios=262144/260582, merge=0/0, ticks=267386/113937, in_queue=381780, util=99.24%, aggrios=262144/262164, aggrmerge=0/0, aggrticks=267049/114216, aggrin_queue=381489, aggrutil=99.17%
dm-6: ios=262144/262164, merge=0/0, ticks=267049/114216, in_queue=381489, util=99.17%, aggrios=262144/262164, aggrmerge=0/0, aggrticks=264995/112075, aggrin_queue=377557, aggrutil=99.16%
dm-4: ios=262144/262164, merge=0/0, ticks=264995/112075, in_queue=377557, util=99.16%, aggrios=65536/65543, aggrmerge=0/0, aggrticks=66158/27924, aggrin_queue=94184, aggrutil=99.18%
dm-0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-3: ios=262144/262172, merge=0/0, ticks=264633/111699, in_queue=376738, util=99.18%



Next, I tested 4k seq. read & write with iometer. The client is a Windows 10 PC (Intel i5 CPU), connected to the NAS through a QNAP QSW-1208-8C 10G switch.

These numbers are clearly lower than the fio results above. My reading is that the bottleneck is the 10GbE network: the NAS's internal disk I/O reaches 900+ MB/s reads and 700+ MB/s writes, but throughput drops substantially once traffic crosses the 10GbE link.
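As a rough sanity check on that reading: a 10GbE link tops out at 1250 MB/s on the wire before Ethernet/IP/TCP and SMB overhead, so the internal fio figures were already close to the link's ceiling. Converting the numbers (fio reports KiB/s):

```shell
# 10GbE raw line rate: 10 Gbit/s divided by 8 bits/byte, in decimal MB/s
awk 'BEGIN { printf "10GbE line rate: %.0f MB/s\n", 10e9 / 8 / 1e6 }'

# fio reported 965651 KB/s read and 772041 KB/s write (KiB/s); as MiB/s:
awk 'BEGIN { printf "internal read: %.0f MiB/s, write: %.0f MiB/s\n",
             965651 / 1024, 772041 / 1024 }'
```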

Of course, these are just my own measurements; I'd suggest also checking the numbers published on QNAP's site.

Compared against the TS-932X, I can't help wondering: might the TS-932X actually post better read/write numbers than the TS-963X?
FB: Pctine
The first technical issue

To get closer to real-world use, this round uses five 3TB HDDs in RAID6. Even with resync speed set to the maximum on the TS-963X, synchronization is remarkably slow.

Resync started out around 90 MB/s but has kept slowing down; it now runs at only about 40-50 MB/s. The firmware is already the latest version, so this will need a support case with QNAP.



Update: the 5 x 3TB RAID6 resync has finally finished.
It ran from 03:39:26 to 20:07:19, a total of 16:37:53. With 3TB being written to multiple drives concurrently, that averages out to roughly 53 MB/sec of resync throughput, which is clearly too low.
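Back-checking that average with simple arithmetic (assuming a flat 3 TB per member) gives a figure in the same ballpark as the ~53 MB/s quoted above:

```shell
# Resync ran 03:39:26 -> 20:07:19, i.e. 16h 37m 53s
awk 'BEGIN {
  secs = 16*3600 + 37*60 + 53
  printf "elapsed: %d s\n", secs
  printf "avg resync: ~%.0f MB/s\n", 3e12 / secs / 1e6
}'
```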
About the system beep
Many newer QNAP NAS models have a built-in speaker that announces system status, and when something goes wrong the system gives a voice alert.

This is a very human-friendly design, but I found a blind spot: if nobody is on site when a fault occurs, the system speaks a one-time alert but does not keep beeping afterwards.

That is a real drawback: maintenance staff can miss an urgent condition. The system does offer other notification channels, such as email, but in an emergency a continuous beep is the most direct warning there is, and that behavior should not have been removed.
RAID6 rebuild test

Using five 3TB HDDs in RAID6, I simulated a disk #5 failure and rebuild.


Rebuilding 3TB took 08:16:33, averaging about 106 MB/sec. (Note: resync priority was set to favor synchronization.)
uname & cpuinfo

For reference:
Linux 4.2.8
AMD GX-420MC 2.0 GHz quad-core, 2MB L2 Cache



[~] # uname -a
Linux NAS28ABD6 4.2.8 #1 SMP Mon May 28 07:31:36 CST 2018 x86_64 GNU/Linux


[~] # cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 22
model : 48
model name : AMD GX-420MC SOC
stepping : 1
microcode : 0x7030105
cpu MHz : 2000.000
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt topoext perfctr_nb bpext perfctr_l2 arat hw_pstate npt lbrv svm_lock nrip_save tsc_scale flushbyasid decodeassists pausefilter pfthreshold vmmcall bmi1 xsaveopt
bugs : fxsave_leak sysret_ss_attrs
bogomips : 3992.84
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate [12] [13]

(processors 1-3 repeat the same output, differing only in the processor, core id, apicid, and initial apicid fields)
A look inside the TS-963X architecture

To map out the TS-963X's hardware layout, I wanted to check the whole configuration with lspci, but QNAP removed that command from the TS-963X, so I had to add it back myself.

Install Entware, then install the pciutils package:
https://github.com/Entware/Entware-ng/wiki/Install-on-QNAP-NAS
opkg update
opkg install pciutils


[/share/CACHEDEV1_DATA/.qpkg/Entware-ng/sbin] # ./lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1566
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Device 1567
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 156b
00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 16h Processor Functions 5:1
00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 16h Processor Functions 5:1
00:02.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 16h Processor Functions 5:1
00:02.5 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 16h Processor Functions 5:1
00:08.0 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 1537
00:10.0 USB controller: Advanced Micro Devices, Inc. [AMD] FCH USB XHCI Controller (rev 11)
00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 40)
00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD] FCH USB EHCI Controller (rev 39)
00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD] FCH USB EHCI Controller (rev 39)
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 42)
00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD] FCH Azalia Controller (rev 02)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 11)
00:14.7 SD Host controller: Advanced Micro Devices, Inc. [AMD] FCH SD Flash Controller (rev 01)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1580
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1581
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1582
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1583
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1584
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1585
01:00.0 PCI bridge: Integrated Device Technology, Inc. [IDT] Device 8063 (rev 01)
02:02.0 PCI bridge: Integrated Device Technology, Inc. [IDT] Device 8063 (rev 01)
02:04.0 PCI bridge: Integrated Device Technology, Inc. [IDT] Device 8063 (rev 01)
02:06.0 PCI bridge: Integrated Device Technology, Inc. [IDT] Device 8063 (rev 01)
03:00.0 Ethernet controller: Aquantia Corp. Device 0001 (rev 02)
04:00.0 SATA controller: ASMedia Technology Inc. Device 0625 (rev 01)
05:00.0 SATA controller: ASMedia Technology Inc. Device 0625 (rev 01)
06:00.0 SATA controller: ASMedia Technology Inc. Device 0625 (rev 01)
07:00.0 SATA controller: ASMedia Technology Inc. Device 0625 (rev 01)
08:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5720 Gigabit Ethernet PCIe
08:00.1 Ethernet controller: Broadcom Limited NetXtreme BCM5720 Gigabit Ethernet PCIe


A few key takeaways:
* The 10GbE port uses an Aquantia chip
* The Gigabit LAN ports use a Broadcom BCM5720
* Plus four ASMedia SATA controllers

This AMD SoC has two built-in SATA ports; four additional SATA controllers each sit on a PCIe Gen2 x1 link, for a total of 10 SATA ports (this model exposes 9 bays).

A drive not on the compatibility list: tragic performance

As pictured below, I used the five 3TB drives on hand to test RAID6 performance: disks #1-#4 are Toshiba DT01ACA300 3TB, and disk #5 is an HGST 3TB NAS drive. The Toshiba 3TB was bought on sale a while back and has been tested in plenty of NAS units.



In QTS, its sequential read reaches 190 MB/sec.

These five 3TB HDDs form the RAID6 array. I ran 64k sequential read & write with fio, originally expecting at least 500 MB/s.
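The 500 MB/s expectation follows from the usual RAID estimate: in a five-drive RAID6, sequential full-stripe I/O spreads across n - 2 data drives, so at ~190 MB/s per drive:

```shell
# RAID6 sequential throughput estimate: (n - 2) data drives x per-drive rate
awk 'BEGIN { n = 5; rate = 190
             printf "expected: ~%d MB/s\n", (n - 2) * rate }'
```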


[/share/test] # fio fio.conf
read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 2 processes
Jobs: 1 (f=1): [_(1),W(1)] [73.3% done] [0KB/118.9MB/0KB /s] [0/1901/0 iops] [eta 00m:35s]
read: (groupid=0, jobs=1): err= 0: pid=27257: Wed Jun 6 22:26:41 2018
read : io=16384MB, bw=478174KB/s, iops=7471, runt= 35086msec
slat (usec): min=15, max=9061, avg=38.33, stdev=25.10
clat (usec): min=133, max=66037, avg=2099.06, stdev=3360.25
lat (usec): min=187, max=66074, avg=2138.10, stdev=3359.99
clat percentiles (usec):
| 1.00th=[ 167], 5.00th=[ 191], 10.00th=[ 207], 20.00th=[ 290],
| 30.00th=[ 374], 40.00th=[ 446], 50.00th=[ 548], 60.00th=[ 660],
| 70.00th=[ 812], 80.00th=[ 3536], 90.00th=[ 8768], 95.00th=[10304],
| 99.00th=[12096], 99.50th=[12736], 99.90th=[16320], 99.95th=[18048],
| 99.99th=[25728]
bw (KB /s): min=415745, max=513792, per=100.00%, avg=478462.61, stdev=19086.13
lat (usec) : 250=14.93%, 500=30.86%, 750=21.60%, 1000=7.66%
lat (msec) : 2=1.91%, 4=4.01%, 10=12.97%, 20=6.05%, 50=0.01%
lat (msec) : 100=0.01%
cpu : usr=5.22%, sys=31.16%, ctx=153675, majf=0, minf=265
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=28098: Wed Jun 6 22:26:41 2018
write: io=6751.6MB, bw=115216KB/s, iops=1800, runt= 60005msec
slat (usec): min=16, max=74619, avg=73.20, stdev=408.82
clat (usec): min=440, max=127830, avg=8767.65, stdev=8846.50
lat (usec): min=500, max=133935, avg=8841.66, stdev=8873.25
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6],
| 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7],
| 70.00th=[ 8], 80.00th=[ 8], 90.00th=[ 19], 95.00th=[ 24],
| 99.00th=[ 46], 99.50th=[ 62], 99.90th=[ 101], 99.95th=[ 113],
| 99.99th=[ 123]
bw (KB /s): min=51097, max=144384, per=100.00%, avg=115427.11, stdev=13847.96
lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.10%, 4=4.06%, 10=81.08%, 20=6.77%, 50=7.14%
lat (msec) : 100=0.73%, 250=0.11%
cpu : usr=7.76%, sys=12.88%, ctx=39881, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=108024/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16


Run status group 0 (all jobs):
READ: io=16384MB, aggrb=478174KB/s, minb=478174KB/s, maxb=478174KB/s, mint=35086msec, maxt=35086msec

Run status group 1 (all jobs):
WRITE: io=6751.6MB, aggrb=115215KB/s, minb=115215KB/s, maxb=115215KB/s, mint=60005msec, maxt=60005msec


Disk stats (read/write):
dm-15: ios=262144/108033, merge=0/0, ticks=549724/927124, in_queue=1477248, util=99.81%, aggrios=262144/108068, aggrmerge=0/0, aggrticks=549374/927322, aggrin_queue=1476841, aggrutil=99.78%
dm-14: ios=262144/108068, merge=0/0, ticks=549374/927322, in_queue=1476841, util=99.78%, aggrios=262144/108068, aggrmerge=0/0, aggrticks=533067/909952, aggrin_queue=1443356, aggrutil=98.06%
dm-12: ios=262144/108068, merge=0/0, ticks=533067/909952, in_queue=1443356, util=98.06%, aggrios=65585/27022, aggrmerge=0/0, aggrticks=133700/227797, aggrin_queue=361538, aggrutil=98.06%
dm-8: ios=197/0, merge=0/0, ticks=2081/0, in_queue=2081, util=2.18%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-9: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-10: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=262144/108090, merge=0/0, ticks=532720/911191, in_queue=1444073, util=98.06%


You read that right: reads come in above roughly 400 MB/sec, but writes manage only about 100 MB/sec, lower than a single drive.

This Toshiba 3TB isn't on the compatibility list, but numbers this low still came as a surprise.
Is it a compatibility problem with the Toshiba DT01ACA300 3TB, or a QTS problem?

I decided to test this drive's compatibility step by step: Basic, RAID0, RAID1, RAID5...

BASIC

[/share/test] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid1 sdd3[0]
2920311616 blocks super 1.0 [1/1] [U]
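A small aside on reading that mdstat output: md reports capacity in 1 KiB blocks, so the block count maps back to the ~3 TB data partition:

```shell
# /proc/mdstat block counts are 1 KiB units
awk 'BEGIN { printf "%.2f TiB (%.2f TB)\n",
             2920311616 / (1024*1024*1024), 2920311616 * 1024 / 1e12 }'
```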


[/share/test] # hdparm -i -Tt /dev/sdd

/dev/sdd:

Model=TOSHIBA DT01ACA300 , FwRev=MX6OABB0, SerialNo= 978T534AS
Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=56
BuffType=DualPortCache, BuffSize=0kB, MaxMultSect=16, MultSect=?0?
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2
AdvancedPM=yes: disabled (255) WriteCache=enabled
Drive conforms to: unknown:

* signifies the current active mode

Timing cached reads: 6488 MB in 2.00 seconds = 3243.79 MB/sec
Timing buffered disk reads: 590 MB in 3.01 seconds = 196.29 MB/sec



fio test

[/share/test] # fio fio.conf
read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 2 processes
read: Laying out IO file(s) (1 file(s) / 16384MB)
write: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 1 (f=1): [_(1),W(1)] [81.2% done] [0KB/193.2MB/0KB /s] [0/3103/0 iops] [eta 00m:28s]
read: (groupid=0, jobs=1): err= 0: pid=13196: Wed Jun 6 22:55:55 2018
read : io=11242MB, bw=191846KB/s, iops=2997, runt= 60005msec
slat (usec): min=28, max=362, avg=39.56, stdev= 3.50
clat (usec): min=299, max=184670, avg=5293.97, stdev=2401.86
lat (usec): min=339, max=184704, avg=5334.22, stdev=2401.72
clat percentiles (msec):
| 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5],
| 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6],
| 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 7], 95.00th=[ 7],
| 99.00th=[ 7], 99.50th=[ 9], 99.90th=[ 24], 99.95th=[ 33],
| 99.99th=[ 124]
bw (KB /s): min=51200, max=208128, per=99.99%, avg=191821.24, stdev=19035.43
lat (usec) : 500=0.06%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.03%, 4=0.27%, 10=99.35%, 20=0.14%, 50=0.11%
lat (msec) : 100=0.02%, 250=0.02%
cpu : usr=2.55%, sys=13.06%, ctx=179768, majf=0, minf=266
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=179871/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=14029: Wed Jun 6 22:55:55 2018
write: io=11089MB, bw=189064KB/s, iops=2954, runt= 60057msec
slat (usec): min=23, max=29658, avg=47.41, stdev=78.78
clat (usec): min=788, max=90338, avg=5327.62, stdev=2485.51
lat (usec): min=915, max=93850, avg=5375.76, stdev=2489.51
clat percentiles (usec):
| 1.00th=[ 2960], 5.00th=[ 4384], 10.00th=[ 4448], 20.00th=[ 4576],
| 30.00th=[ 4640], 40.00th=[ 5216], 50.00th=[ 5344], 60.00th=[ 5408],
| 70.00th=[ 5472], 80.00th=[ 5536], 90.00th=[ 6304], 95.00th=[ 6304],
| 99.00th=[12992], 99.50th=[13888], 99.90th=[55040], 99.95th=[59136],
| 99.99th=[66048]
bw (KB /s): min=144640, max=209408, per=100.00%, avg=189241.01, stdev=14354.28
lat (usec) : 1000=0.01%
lat (msec) : 2=0.13%, 4=1.83%, 10=96.79%, 20=1.00%, 50=0.10%
lat (msec) : 100=0.14%
cpu : usr=12.84%, sys=14.81%, ctx=175475, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=177416/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: io=11242MB, aggrb=191846KB/s, minb=191846KB/s, maxb=191846KB/s, mint=60005msec, maxt=60005msec

Run status group 1 (all jobs):
WRITE: io=11089MB, aggrb=189064KB/s, minb=189064KB/s, maxb=189064KB/s, mint=60057msec, maxt=60057msec

Disk stats (read/write):
dm-15: ios=179871/177288, merge=0/0, ticks=953687/940388, in_queue=1894212, util=99.88%, aggrios=179871/177758, aggrmerge=0/0, aggrticks=953495/942656, aggrin_queue=1896252, aggrutil=99.89%
dm-14: ios=179871/177758, merge=0/0, ticks=953495/942656, in_queue=1896252, util=99.89%, aggrios=179871/177758, aggrmerge=0/0, aggrticks=952045/940344, aggrin_queue=1892520, aggrutil=99.85%
dm-12: ios=179871/177758, merge=0/0, ticks=952045/940344, in_queue=1892520, util=99.85%, aggrios=44989/44446, aggrmerge=0/0, aggrticks=237979/235434, aggrin_queue=473432, aggrutil=99.85%
dm-8: ios=87/0, merge=0/0, ticks=86/0, in_queue=86, util=0.07%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-9: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-10: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=179871/177784, merge=0/0, ticks=951831/941736, in_queue=1893642, util=99.85%


As a single BASIC-mode drive, reads and writes both run at roughly 190 MB/s and up; everything works as expected.
Disk#1~#4 RAID0

Toshiba DT01ACA300 3TB HDD*4, RAID0


[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid0 sdg3[3] sdf3[2] sde3[1] sdd3[0]
11681245184 blocks super 1.0 512k chunks



[/share/Multimedia] # fio fio2.conf
read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 2 processes
read: Laying out IO file(s) (1 file(s) / 16384MB)
write: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 1 (f=1): [_(1),W(1)] [100.0% done] [0KB/502.5MB/0KB /s] [0/8038/0 iops] [eta 00m:00s]
read: (groupid=0, jobs=1): err= 0: pid=16654: Wed Jun 6 23:36:21 2018
read : io=16384MB, bw=744133KB/s, iops=11627, runt= 22546msec
slat (usec): min=25, max=837, avg=38.60, stdev= 3.80
clat (usec): min=141, max=73701, avg=1333.50, stdev=1726.25
lat (usec): min=189, max=73731, avg=1372.80, stdev=1726.22
clat percentiles (usec):
| 1.00th=[ 179], 5.00th=[ 195], 10.00th=[ 215], 20.00th=[ 318],
| 30.00th=[ 414], 40.00th=[ 494], 50.00th=[ 588], 60.00th=[ 708],
| 70.00th=[ 1096], 80.00th=[ 2736], 90.00th=[ 3792], 95.00th=[ 4384],
| 99.00th=[ 5408], 99.50th=[ 5856], 99.90th=[15552], 99.95th=[22144],
| 99.99th=[39168]
bw (KB /s): min=346880, max=803712, per=100.00%, avg=744237.51, stdev=75117.55
lat (usec) : 250=12.91%, 500=27.52%, 750=22.92%, 1000=6.18%
lat (msec) : 2=5.36%, 4=16.93%, 10=8.02%, 20=0.10%, 50=0.06%
lat (msec) : 100=0.01%
cpu : usr=8.25%, sys=48.62%, ctx=153891, majf=0, minf=265
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=16870: Wed Jun 6 23:36:21 2018
write: io=16384MB, bw=507969KB/s, iops=7937, runt= 33028msec
slat (usec): min=22, max=10368, avg=48.95, stdev=23.40
clat (usec): min=153, max=79402, avg=1926.10, stdev=3004.34
lat (usec): min=217, max=79435, avg=1975.82, stdev=3004.05
clat percentiles (usec):
| 1.00th=[ 207], 5.00th=[ 342], 10.00th=[ 458], 20.00th=[ 652],
| 30.00th=[ 836], 40.00th=[ 1012], 50.00th=[ 1144], 60.00th=[ 1320],
| 70.00th=[ 1448], 80.00th=[ 1624], 90.00th=[ 3600], 95.00th=[ 9792],
| 99.00th=[11840], 99.50th=[12480], 99.90th=[20608], 99.95th=[57088],
| 99.99th=[75264]
bw (KB /s): min=381312, max=676864, per=100.00%, avg=508081.14, stdev=69594.24
lat (usec) : 250=2.74%, 500=9.37%, 750=13.51%, 1000=13.73%
lat (msec) : 2=44.62%, 4=6.84%, 10=4.55%, 20=4.53%, 50=0.06%
lat (msec) : 100=0.05%
cpu : usr=31.14%, sys=39.49%, ctx=82542, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=744132KB/s, minb=744132KB/s, maxb=744132KB/s, mint=22546msec, maxt=22546msec

Run status group 1 (all jobs):
WRITE: io=16384MB, aggrb=507969KB/s, minb=507969KB/s, maxb=507969KB/s, mint=33028msec, maxt=33028msec

Disk stats (read/write):
dm-1: ios=262144/262189, merge=0/0, ticks=349713/406498, in_queue=756956, util=99.39%, aggrios=262144/262592, aggrmerge=0/0, aggrticks=349362/406524, aggrin_queue=756136, aggrutil=99.34%
dm-0: ios=262144/262592, merge=0/0, ticks=349362/406524, in_queue=756136, util=99.34%, aggrios=262144/262592, aggrmerge=0/0, aggrticks=347185/402104, aggrin_queue=749830, aggrutil=99.19%
dm-12: ios=262144/262592, merge=0/0, ticks=347185/402104, in_queue=749830, util=99.19%, aggrios=65568/65651, aggrmerge=0/0, aggrticks=86736/100666, aggrin_queue=187504, aggrutil=99.19%
dm-8: ios=128/0, merge=0/0, ticks=138/0, in_queue=138, util=0.25%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-9: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-10: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=262144/262606, merge=0/0, ticks=346808/402664, in_queue=749881, util=99.19%



In this fio test, reads run around 700 MB/s and writes around 500 MB/s. These numbers look fine too.
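That is close to the naive RAID0 scaling estimate on the read side (four stripes at ~190 MB/s each), which suggests the drives and SATA controllers behave normally when striped and points back at the RAID6 write path as the problem:

```shell
# Ideal 4-drive RAID0 scaling at ~190 MB/s per drive
awk 'BEGIN { printf "ideal RAID0: ~%d MB/s (measured: ~744 read, ~508 write)\n", 4 * 190 }'
```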
This unit is actually excellent value for money
For a company on a tight budget this model is just enough: it can host VMs, do tiered storage, and has built-in 10G networking; it also makes a decent backup target.