Inbound traffic to domain controller dropped after RDP connection attempt on Azure

Using Azure IaaS (via ARM) I have a configuration with some non-domain-authenticated RDP gateways. These devices are used as stepping stones onto the virtual network, which then allows onward connection to all the domain-registered devices on the virtual network.

I don't have any reputation, so I can't insert images inline. Find a diagram here

In general this works as intended, allowing RDP access to all the devices on the domain (shown as subnets 10.1.10.0/24 and 10.1.11.0/24), with the exception of the domain controllers, which cannot be reliably logged on to in this manner.

The most frequent symptom is a failure to connect, but occasionally the RDP session is correctly initiated and then subsequently fails (after a couple of seconds).

  • If I set up a perpetual ping to a domain controller from (for example) a device in the 10.1.10.0/24 subnet, those pings are responded to until the instant the RDP connection fails, at which point they are dropped. The instant the RDP failure message returns or the RDP connection is terminated, the pings are responded to again.

  • If I set up a perpetual ping from the domain controller to (for example) a device in the 10.1.10.0/24 subnet those pings are always responded to, even when the pings in the other direction fail.

  • If I surface the domain controllers with a publicly routable IP address and permit that inbound traffic through the Domain Controller Network Security Group I can reliably connect (using the same credentials as above) from a public IP address, but that vector is exactly what I am trying to avoid with this configuration.

  • If I set up a perpetual ping to the publicly routable IP address it always succeeds even in the scenario where the internal ping (first bullet point, above) fails.

  • If I create another default installation (non-domain joined) virtual server in the Domain Controller subnet it doesn't have the same problem. If I join the new server to the domain it still doesn't have the problem. It seems clear it's something to do with being a DC.

The DNS records for all the Azure devices point to the domain controllers and everything else is ostensibly OK, though the domain controllers don't have any other services enabled.

All servers are Windows Server 2016.

What am I missing? What is causing the domain controller to lose inbound connectivity when it has a persistent connection from within the vNet, but not from outside?

1 Answer

It looks like the answer to this question comes down to my own failure to follow instructions.

Microsoft states that static IP addresses for domain controller virtual machines must be set via the Azure network interface configuration. What is less obvious is how important it is that you do not configure static addresses inside the guest OS at all. Petri.com is stronger in its wording:

"You should never configure the IP configuration of an Azure virtual machine inside the guest OS. The new domain controller will complain about the DHCP configuration - let it complain, because no harm will be done if you follow the correct procedures."

The symptoms described above look like exactly the behaviour you would expect if you don't follow that advice.
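
For illustration, here is a minimal sketch of pinning the domain controller's private IP on the Azure side rather than in the guest OS, using the azure-mgmt-network Python SDK. The resource group, NIC name, and address are hypothetical, and the method names assume a reasonably recent SDK version; this is not the exact procedure from the linked articles.

```python
# Sketch: make the DC's private IP static on the Azure NIC, leaving the guest OS on DHCP.
# Requires azure-identity and azure-mgmt-network. Names and IPs below are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-core"     # hypothetical resource group
NIC_NAME = "dc01-nic"          # hypothetical NIC attached to the DC VM
STATIC_IP = "10.1.12.4"        # hypothetical address in the DC subnet

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the current NIC, switch its primary IP configuration to a static allocation,
# and write it back. Nothing is changed inside the guest operating system.
nic = client.network_interfaces.get(RESOURCE_GROUP, NIC_NAME)
ipconf = nic.ip_configurations[0]
ipconf.private_ip_allocation_method = "Static"
ipconf.private_ip_address = STATIC_IP

client.network_interfaces.begin_create_or_update(RESOURCE_GROUP, NIC_NAME, nic).result()
print(f"{NIC_NAME}: {ipconf.private_ip_address} ({ipconf.private_ip_allocation_method})")
```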
