/dev/sdb became /dev/sdc - now SWRAID1 is broken

I'm running a RAID check via Icinga on my dedicated server. About an hour ago I was alerted about a CRITICAL RAID status on that server.

After some investigation I realized that the /dev/sda hard drive is still working fine, while the old /dev/sdb hard drive has become /dev/sdc. Therefore mdstat is showing a RAID sync error.

I simply have no idea how that could happen, nor how to move /dev/sdc back to /dev/sdb and get the SW-RAID1 back in sync. If that is not easily possible, it would also be OK for me to remove the "broken"/no-longer-existing /dev/sdb device from the RAID and add the /dev/sdc hard drive to it instead (see the sketch after the listings below).

root@vnode01 ~ # cat /proc/mdstat  
Personalities : [raid1] 
md2 : active raid1 sda3[0] sdb3[1](F)
      1936077888 blocks super 1.2 [2/1] [U_]
      bitmap: 10/15 pages [40KB], 65536KB chunk

md1 : active raid1 sda2[0] sdb2[1](F)
      523712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0] sdb1[1](F)
      16760832 blocks super 1.2 [2/1] [U_]

--

root@vnode01 /dev # ls | grep "sd"
sda
sda1
sda2
sda3
sdc
sdc1
sdc2
sdc3
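
For reference, from what I have read so far I assume the remove-and-re-add route would look roughly like this. This is untested and assumes the disk itself is still healthy and that sdc1/sdc2/sdc3 still carry the old sdb partition layout:

mdadm /dev/md0 --remove failed    # drop the members already marked (F)
mdadm /dev/md1 --remove failed
mdadm /dev/md2 --remove failed

mdadm /dev/md0 --add /dev/sdc1    # re-add the renamed disk's partitions
mdadm /dev/md1 --add /dev/sdc2
mdadm /dev/md2 --add /dev/sdc3    # arrays then resync; watch /proc/mdstat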
asked 12 September 2016 at 18:46

1 answer

Show the SMART output (`smartctl --all /dev/sdc`). Also check `dmesg` for other messages. You probably have some problems with that disk. I have seen this kind of behavior on some of our servers, usually right before the disk fails completely ;).
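
For example, something along these lines (the short self-test is an extra step beyond the two commands named above):

smartctl --all /dev/sdc        # full SMART report: check Reallocated_Sector_Ct,
                               # Current_Pending_Sector and the error log at the bottom
smartctl -t short /dev/sdc     # run a short self-test...
smartctl -l selftest /dev/sdc  # ...and read its result a few minutes later
dmesg | grep -iE 'sd[bc]|ata'  # the kernel log often shows the drive dropping off
                               # the bus and re-appearing under a new name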

answered 5 December 2019 at 09:33
