Re: New !! Open unofficial storage performance thread

It's not my post, but I implemented the same settings in our environment and saw roughly 10-20% improvement. Anyway, your numbers don't look bad at all for a 1 Gbit network with 7.2k RPM disks.

esxcli storage nmp psp roundrobin deviceconfig set -d $i -B 800 -t bytes;

800 bytes is not where a path switch should occur. With Jumbo Frames (MTU 9000), you should switch paths every 8800 payload bytes, roughly the amount of data that fits into a single jumbo frame once the protocol headers are subtracted. If you're not using Jumbo Frames, stick with your IOPS policy instead. Apart from that your commands look fine, and you can check the result with:

# esxcli storage nmp psp roundrobin deviceconfig get -d naa.6000eb38ccef4544000000000000017d
   Byte Limit: 8800
   Device: naa.6000eb38ccef4544000000000000017d
   IOOperation Limit: 1000
   Limit Type: Bytes
   Use Active Unoptimized Paths: false


# esxcli storage nmp device list | grep "Policy Device Config"
   Path Selection Policy Device Config: {policy=bytes,iops=1000,bytes=8800,useANO=0;lastPathIndex=0: NumIOsPending=1,numBytesPending=4096}
   Path Selection Policy Device Config: {policy=bytes,iops=1000,bytes=8800,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Config: {policy=bytes,iops=1000,bytes=8800,useANO=0;lastPathIndex=1: NumIOsPending=1,numBytesPending=4096}
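
If you want to apply the 8800-byte limit to all of your Round Robin devices in one go instead of typing each NAA ID, a small shell loop on the ESXi host does the job. This is only a sketch: the naa.6000eb38 prefix is taken from your output above, so adjust the grep pattern to match your own devices and check what it returns before running the set command.

# for i in `esxcli storage nmp device list | grep '^naa.6000eb38'`; do esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 8800; done   # prefix is just an example from your output, adjust it to your devices

Afterwards the get command above should report Limit Type: Bytes and Byte Limit: 8800 for each device.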

Run your tests again with these values (if you're using Jumbo Frames!). What about 802.3x flow control, by the way?
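
Regarding 802.3x: it's worth checking whether pause frames are actually enabled on the uplinks carrying your iSCSI traffic (and on the matching switch ports, since both ends have to agree). As a rough sketch, vmnic0 here is just a placeholder for your iSCSI vmnics, and the exact output depends on your driver and ESXi version:

# ethtool -a vmnic0   # vmnic0 is a placeholder, use your iSCSI uplinks
# esxcli network nic get -n vmnic0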

