Monitoring Varnish with Heartbeat and Pacemaker

With the host-only option, the VM can only ping the host machine, not other VMs; if you want the VMs to be able to ping each other, you need to change the network settings.

asked 26 July 2012 at 02:37
1 answer

Your cluster architecture confuses me: it seems you are running services that should be cluster-managed (like Varnish) standalone on both nodes at the same time, and letting the cluster resource manager (CRM) just juggle IP addresses around.

What is it you want to achieve with your cluster setup? Fault tolerance? Load balancing? Both? Mind you, I am talking about the cluster resources (Varnish, IP addresses, etc), not the backend servers to which Varnish distributes the load.

To me it sounds like you want an active-passive two-node cluster, which provides fault tolerance. One node is active and runs Varnish, the virtual IP addresses and possibly other resources, and the other node is passive and does nothing until the cluster resource manager moves resources over to the passive node, at which point it becomes active. This is a tried-and-true architecture that is as old as time itself. But for it to work you need to give the CRM full control over the resources. I recommend following Clusters from Scratch and modelling your cluster after that.

Edit after your updated question: your CIB looks good, and once you patched the Varnish init script so that repeated calls to "start" return 0 you should be able to add the following primitive (adjust the timeouts and intervals to your liking):

primitive p_varnish lsb:varnish \
    op monitor interval="10s" timeout="15s" \
    op start interval="0" timeout="10s" \
    op stop interval="0" timeout="10s" 
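The init-script patch mentioned above can be sketched like this (the PID-file path and function name are assumptions; adapt them to the actual /etc/init.d/varnish on your distribution). The key point is that an LSB-compliant script must exit 0 when "start" is invoked on a service that is already running, which is what Pacemaker expects from an lsb: resource:

```shell
#!/bin/sh
# Sketch of an idempotent "start" action for the varnish init script.
# PIDFILE location is an assumption; use whatever your script defines.
PIDFILE="${PIDFILE:-/var/run/varnishd.pid}"

varnish_start() {
    # If a PID file exists and the process is alive, report success
    # instead of failing -- repeated "start" must return 0.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "varnishd is already running"
        return 0
    fi
    # ... original daemon-start logic goes here (omitted in this sketch) ...
    return 0
}
```

Without this fix, a second "start" returns non-zero, the CRM marks the start operation as failed, and it triggers unnecessary recovery actions.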

Don't forget to add it to the balancer group (the last element in the list):

group balancer eth0_gateway eth1_iceman_slider eth1_iceman_slider_ts \
    eth1_iceman_slider_pm eth1_iceman_slider_jy eth1_iceman eth1_slider \
    eth1_viper eth1_jester p_varnish
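If you maintain these definitions in crm shell syntax, one way to apply and verify them is the following (the file name varnish.crm is an assumption; `crm` and `crm_mon` are the standard Pacemaker management tools):

```shell
# Load the updated primitive/group definitions into the live CIB,
# then take a one-shot look at cluster status.
crm configure load update varnish.crm   # file containing the snippets above
crm_mon -1                              # p_varnish should show as Started
                                        # on the currently active node
```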

Edit 2: To decrease the migration threshold add a resource defaults section at the end of your CIB and set the migration-threshold property to a low number. Setting it to 1 means the resource will be migrated after a single failure. It is also a good idea to set resource stickiness so that a resource that has been migrated because of node failure (reboot or shutdown) does not automatically get migrated back later when the node is available again.

rsc_defaults $id="rsc-options" \
    resource-stickiness="100" \
    migration-threshold="1" 
answered 3 December 2019 at 03:46
