IOPS usage in a VMware 5.1 HP FATA disk group

You need to invert the network masks, i.e. instead of 255.255.0.0 you use the wildcard mask 0.0.255.255.

Example:

ip access-list standard external_traffic
 deny   172.16.0.0 0.15.255.255
 deny   192.168.0.0 0.0.255.255
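If you want to sanity-check the inversion, Python's ipaddress module computes the wildcard (host) mask for you; a minimal sketch, using the two prefixes from the ACL above:

import ipaddress

# hostmask is exactly the inverted netmask that Cisco ACLs expect
for prefix in ("172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(prefix)
    print(prefix, "->", net.hostmask)
# 172.16.0.0/12 -> 0.15.255.255
# 192.168.0.0/16 -> 0.0.255.255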

Also, you do realize you need a router between the VLAN'd subnets, right? (Not trying to sound snotty, I'm just not sure of your level of experience.)

Edit: is there a reason you're using /24 subnets for addressing but a /16 for your ACL?

My suggestion would be to drop the ACL on 222 and put the restrictions on the 111 subnets, since that's what you actually want to restrict anyway.

ip access-list extended block-icmp
 deny icmp 10.100.1.0 0.0.0.255 10.200.1.0 0.0.0.255
 permit ip any any
!
interface vlan 111
 ip access-group block-icmp in
asked 8 March 2014 at 22:05
2 answers

As Shane mentioned, sequential IO is generally faster than random IO, but there's one other thing in play here: caching. It's actually quite hard these days to benchmark a system in a way that truly gives you a baseline, because there's caching all over the place: on the disks, the channel controllers, the SAN controllers, the HBAs, the OS, etc. It's great that you care about this, many don't, but some of the unexpected performance gains will just be regular caching.
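To make that concrete, here's a minimal sketch (assuming Python 3 on a Unix-ish guest, and a hypothetical scratch file testfile.bin sitting on the datastore under test) that times a sequential pass and a random pass over the same file:

import os, random, time

PATH = "testfile.bin"   # hypothetical scratch file; make it much larger than guest RAM,
                        # or the OS page cache will serve most reads and inflate the numbers
BLOCK = 4096            # 4 KiB, a typical small-IO size
COUNT = 20000           # number of reads per pass

blocks = os.path.getsize(PATH) // BLOCK

def measure(offsets, label):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off * BLOCK)   # one small read at the given block offset
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"{label}: {COUNT / elapsed:,.0f} IOPS")

measure(range(COUNT), "sequential")                                  # contiguous offsets
measure([random.randrange(blocks) for _ in range(COUNT)], "random")  # scattered offsets

Purpose-built tools such as fio can open the file with O_DIRECT to take the OS page cache out of the picture, but the caches on the array, the controllers and the HBAs are still in play, which is exactly why a true baseline is so hard to get.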

I do have some other thoughts, however, if that's OK. I work for just about the largest EVA/P6k buyer HP have; we have hundreds of them, so I know my way around them, I know what I love about them, and I know what's caused us problems along the way.

The single biggest storage issue we've had in the last decade, across a 180,000-employee business, was those 1TB FATA disks. They're not rated for 24/7 usage, only 30% of the day, and if you do run them for longer they die very quickly. Your use case may well be fine with this, maybe only using those disks for a few hours a day, but there's a big kicker: levelling. When one of those disks dies, and it will, you replace it, and that starts a levelling process that may run for more than 30% of the day, often causing more disk failures. This is what we saw: a single disk died and caused an avalanche of failures. We only ran with those disks for a year or less, but saw about 60% of them fail during that time. HP were massively apologetic, but did point out in the fine print that they were only rated for a 30% duty cycle, so we ended up swapping them out, hundreds of them, for 'full duty cycle' disks, i.e. not FATAs. Since then we've obviously avoided them and have in fact now retired all of our non-P6k EVAs. I can't stress enough how much I dislike that disk, and I'd urge you to move off them ASAP.

Oh, and on a smaller note: those older EVAs perform much better if your disk groups come in multiples of 8 disks (8, 16, 24, 32, etc.) rather than, say, 18; it's just how they do their striping.

Hope this helps.

answered 4 December 2019 at 14:02

Do you mean that your storage is rated at 1000-1300 IOPS, but you're observing 2000 IOPS?

There doesn't seem to be anything out of the ordinary there: small sequential operations are much faster than large random ones. If that 1000-1300 rating comes from the vendor, they're probably targeting more of a worst-case random IO workload, so observing more operations per second than that is normal.
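For a sense of where a rating like that comes from, here's a back-of-the-envelope per-spindle estimate; the seek time and RPM below are generic assumptions, not your array's actual specs:

# Rough worst-case random IOPS for a single 10k RPM spindle (assumed figures)
avg_seek_ms = 4.5                    # assumed average seek time
rotational_ms = 60000 / 10000 / 2    # half a revolution at 10k RPM = 3 ms
print(1000 / (avg_seek_ms + rotational_ms))   # ~133 random IOPS per spindle

A group of eight to ten such spindles lands right around 1000-1300 random IOPS. Sequential IO skips the seek almost entirely, so observing well above the rated figure is expected.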

answered 4 December 2019 at 14:02
