Ungracefully terminating pods

To test that the high-availability mode does what it should, I need to kill the instances during testing, and I need to kill them in such a way that they can't report that they are disconnecting or anything similar.

For practical reasons I need to do the testing on a smaller environment that does not have a separate node for each pod, so I can't test by turning off the machines and need to kill the processes instead. I can kill them with docker kill, but that requires logging into the node and finding the Docker ID of the container. I would like to achieve a similar effect via kubectl exec and kill, but sending SIGKILL to process ID 1 from inside the container is not allowed.
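For context, the node-level workaround mentioned above looks roughly like this; it assumes SSH access to the node, Docker as the container runtime, and a hypothetical pod name my-app-0:

docker ps --filter "name=my-app-0" --format "{{.ID}}"   # find the container ID of the pod's container
docker kill <container-id>                               # SIGKILL from the host PID namespace, no chance to clean up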

I also see delete being used, but in my case there is a difference: when the pod is deleted, the container is recreated with a clean state, whereas merely restarting it keeps the state, and the deployment I am testing actually looks at that state during start-up and has problems starting. So I need to test the case where the pod is not deleted.
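One way to observe that difference, with a hypothetical pod name my-app-0: a container restarted in place keeps the same pod UID and increments its restart count, while a deleted pod comes back with a new UID:

kubectl get pod my-app-0 -o jsonpath='{.metadata.uid} {.status.containerStatuses[0].restartCount}'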

Can I forcibly terminate pods, without giving them any chance to clean up, via the Kubernetes API/kubectl?

asked 2 November 2018 at 13:09
1 answer

You can try using the command

kubectl delete pods <pod> --grace-period=0 --force

The --grace-period flag is the time during which Kubernetes waits for the Pod to shut down gracefully. If it is 0, SIGKILL is sent immediately to every process in the pod. The --force flag must be specified for such an operation in Kubernetes 1.5 and above.

For more information, you can check the official documentation.
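A minimal usage sketch, again with a hypothetical pod name my-app-0; the second command just watches the controller bring up the replacement pod:

kubectl delete pod my-app-0 --grace-period=0 --force
kubectl get pods -w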

answered 3 December 2019 at 06:54
