What happens to missed writes after a zpool clear?

It is easier to declare the servers that are addressed by a domain name, and to declare a default server that will catch direct IP accesses.

For example, your domain configuration could be used for, say, mydomain.com:

server {
        listen       80;
        server_name  mydomain.com *.mydomain.com;
        root /var/www/html/mydomain.com;
        ...
}

With the regular files living under /var/www/html/mydomain.com.

And the default entry would only give access to a few specific files kept somewhere else.
Note the default keyword after listen 80.

server {
        listen       80 default;
        server_name  _ *;
        root /var/www/html/restricted;
        ...
}

And if you put the default files into /var/www/html/restricted, that is what will be served for localhost and for raw IP addresses.

4
asked 19 June 2013 at 23:07
2 answers

Please note:

  1. Every data block in ZFS carries a checksum, so in a redundant setup ZFS knows which disk holds the correct data when something fails. Running zpool scrub ZFSPOOL will repair the data, or redistribute it across all the working disks in a RAIDZ (see the sketch after this list).
  2. ZFS uses Reed-Solomon error correction, which is best suited for error bursts. A missing disk is exactly the kind of error burst that RS can correct.
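
As a rough illustration of the scrub step mentioned in point 1 (the pool name ZFSPOOL is just the placeholder used above), the commands might look like this:

# Start a scrub: ZFS re-reads every block, verifies its checksum and
# rewrites any block whose redundant copy or parity shows it is bad.
zpool scrub ZFSPOOL

# Check progress and see whether any errors were found and repaired.
zpool status -v ZFSPOOL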

I had a lot of DMA errors on disks when there was an air-conditioning problem in the data center, and ZFS was able to fix that mess. And that was just a mirror.

I remember a promo video released by SUN when they introduced ZFS... they built a RAIDZ out of USB flash drives plugged into an 8-port USB hub, and then randomly moved some of them to different ports on the hub while I/O was running against that pool, without powering anything off.

1
answered 3 December 2019 at 03:31

"Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state, and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though."

Actually, this is almost exactly what it can do in this situation. You see, every time a disk in a ZFS pool is written to, the current global pool transaction id is written to the disk. So say, for instance, that the scenario you describe occurs, and the total time between the connection loss and the recovery is less than 127 * txg_timeout (that's making a lot of gross assumptions about load on the pool and a few other things, so say half that for typical safety's sake: if txg_timeout is 10 seconds, then 600 seconds, or 10 minutes, is a reasonable time to expect this to still work).
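
For a ballpark figure on your own system: on ZFS on Linux the transaction group timeout is exposed as a module parameter (the path below is specific to that port; illumos and FreeBSD expose it as a kernel tunable instead), so a quick sanity check might be:

# Current txg timeout, in seconds (ZFS on Linux module parameter).
cat /sys/module/zfs/parameters/zfs_txg_timeout

# With the common default of 5 s, the rule of thumb above gives
# 127 * 5 / 2 ~= 317 s, i.e. roughly five minutes of grace.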

At the moment before disconnection, the pool was able to successfully commit the writes belonging to transaction id 20192. Time passes, and the disk comes back. By the time the disk is once again available, the pool has had a number of transaction groups go through and is at transaction id 20209. At this point, there is still every possibility that ZFS can do what is called a 'quick resilver', where it resilvers the disk, but ONLY for transaction ids 20193 through 20209, as opposed to a full resilver of the drive. This quickly and efficiently brings the disk back in sync with the rest of the pool.

However, the method to kick off that activity is not 'zpool clear'. If everything works as it should, the resilver should have been kicked in automatically the moment the disk became healthy again. In fact, it may have been so fast, you never saw it. In which case, 'zpool clear' would be the proper activity to clear up the still-visible error count that would have appeared when the device disappeared in the first place. Depending on the version of zfs you're using, what OS it is on, in what manner the device is being listed by zfs at the moment and how long it has been in that state, the 'proper' way to fix this varies. It could actually be 'zpool clear' (clearing up the errors, and the next access of the drive should notice the out of sync txg id and kick in the resilver) or you might need to use 'zpool online' or 'zpool replace'.
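
For reference, the three commands mentioned above take roughly this shape (the pool name tank and the device name sdb are placeholders; device naming differs per platform):

# Clear the error counters; the next access to the returned disk should
# notice the stale txg id and kick off the (quick) resilver.
zpool clear tank

# Or explicitly bring a device that is still listed as offline/unavail
# back online.
zpool online tank sdb

# Or, as a heavier hammer, replace the device (with itself or a new
# disk), which forces a full resilver onto it.
zpool replace tank sdb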

What I'm used to seeing, when all this works properly, is the disk disappearing and the drive going into a state of OFFLINE or DEGRADED or FAULTED or UNAVAIL or REMOVED. Then, when the drive becomes accessible again at an OS level, FMA and other OS mechanisms kick in and ZFS becomes aware the disk has returned, and there's a quick resilver and the device appears in zpool status as ONLINE again, but may still have an error count associated with it. The key is it is in ONLINE status, which would indicate automatic repair (resilver) success. You can test it on any drive by pulling it out, waiting a few seconds and checking 'zpool status', and then plugging the disk back in and checking 'zpool status' again and seeing what happens. ZFS isn't the only moving piece here - ZFS actually relies in large part on other OS mechanics to inform it of the disk's status, and if those mechanics fail you'll get different symptoms than if they succeed.
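
A sketch of that pull-and-reinsert test, with tank again a placeholder; the fmadm line applies only on Solaris/illumos, where FMA is the mechanism referred to above:

# Before pulling the disk, and again a few seconds after reinserting it:
zpool status -v tank

# On Solaris/illumos, see what the fault manager recorded in between:
fmadm faulty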

In any event, either the quick resilver can be run and succeeds, or it is not possible (or fails). In the latter case, the disk will have to complete a full resilver before returning to duty, so the two problems listed at the bottom of your post shouldn't usually be possible unless an administrative override has allowed a disk with a mismatched txg id to re-enter a pool without any form of correction for that disparity (which should not normally be possible). If that were to happen, I would expect the next access to the drive to either kick off that quick resilver (and succeed, or fail and knock the disk down to a full resilver), or end up kicking the disk out of the pool entirely -- or possibly panic, due to the txg id disparity. In any of those cases, what would not happen is data loss or a return of incorrect data to a request.

3
answered 3 December 2019 at 03:31
