
How I Set Up Azure for Server Migrations

Christian Scott

When I set up Azure for an IT shop planning to migrate on-premises servers, I use the same Microsoft Cloud Adoption Framework (CAF) foundation as any other Azure deployment - management groups, landing zones, RBAC, policy, budgets. The governance structure is identical. What changes is everything below it: instead of container registries and Front Door, you're deploying VPN gateways, domain controllers, backup vaults, and the Azure Migrate appliance.

This article covers that foundation. A separate article will cover disaster recovery with Azure Site Recovery, which builds on this same platform setup.

1. Management Groups and Landing Zones

The management group hierarchy for server migrations follows the same CAF "start small and expand" structure as any other deployment. The key difference from an online application setup is the landing zone label. Rather than an Online landing zone (for user-facing web applications), migration workloads go into a Corp landing zone - Corp being the conventional CAF label for corporate line-of-business applications: internal systems, ERP, file services, databases, and the workloads you'd find in a traditional datacenter.

The structure is: Tenant Root Group at the top, your Organization management group, branching into Platform and Landing Zones. Platform holds a Shared subscription for infrastructure that spans everything - networking, domain controllers, backup, and the migration tooling. Landing Zones holds the Corp management group, which holds a Dev/Test subscription and a Prod subscription where migrated servers will live.

Why management groups instead of just subscriptions? Subscriptions are Azure's billing and quota boundary, but they have no native hierarchy. If you manage governance at the subscription level - assigning policies, RBAC, and budgets to each subscription individually - every new subscription requires manual configuration, and there's no consistent enforcement baseline across all of them. Management groups sit above subscriptions and let you apply governance once at a level that cascades automatically to every subscription underneath, including subscriptions added in the future. That's the core value: governance that scales without repetition.

Why separate Platform from Landing Zones? Platform infrastructure - your hub VNet, VPN Gateway, domain controllers, DNS - is shared by every workload team. It has different ownership, a different change cadence, and different access requirements than the servers running business applications. Separating Platform into its own management group and subscription means the networking team can have Contributor access to platform resources without any access to Corp workload subscriptions, and vice versa. Platform governance policies don't interfere with workload policies, and the two can evolve independently. If a workload team misconfigures something in their Corp subscription, it has no blast radius into the shared networking infrastructure that every other workload depends on.

Why multiple landing zone subscriptions instead of one? Azure subscriptions have hard limits, not soft ones - per-region VM core quotas, storage account limits, resource group caps. More importantly: if Dev/Test and Prod share a subscription, a misconfigured policy or a runaway deployment in Dev/Test can affect Prod, and billing is co-mingled. Separate subscriptions give each environment its own quota pool, its own billing boundary, and its own blast radius. Adding a new environment is a new subscription - the management group hierarchy handles governance automatically with no additional configuration.

Policies at the Corp level enforce what matters for server workloads: no accidental public IP assignments, required tagging for cost tracking, allowed regions. RBAC and budgets cascade down automatically.

# Create management group hierarchy
az account management-group create \
  --name "org-contoso" \
  --display-name "Contoso"

az account management-group create \
  --name "platform" \
  --display-name "Platform" \
  --parent "org-contoso"

az account management-group create \
  --name "landing-zones" \
  --display-name "Landing Zones" \
  --parent "org-contoso"

az account management-group create \
  --name "corp" \
  --display-name "Corp" \
  --parent "landing-zones"

# Deny public IPs at the Corp landing zone level
az policy assignment create \
  --name "deny-public-ips" \
  --scope "/providers/Microsoft.Management/managementGroups/corp" \
  --policy "6c112d4e-5bc7-47ae-a041-ea2d9dccd749"

# Move subscriptions into the right management groups
az account management-group subscription add \
  --name "platform" \
  --subscription <shared-subscription-id>

az account management-group subscription add \
  --name "corp" \
  --subscription <corp-prod-subscription-id>
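The tagging and allowed-regions policies mentioned earlier can be assigned at the same scope. A sketch, assuming the built-in "Allowed locations" definition ID and resolving the tag-requirement policy by display name rather than hard-coding its GUID; the regions and tag name here are examples, not recommendations:

```shell
# Restrict Corp deployments to approved regions
# (e56962a6-... is the built-in "Allowed locations" policy definition)
az policy assignment create \
  --name "allowed-locations" \
  --scope "/providers/Microsoft.Management/managementGroups/corp" \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{ "listOfAllowedLocations": { "value": ["eastus", "eastus2"] } }'

# Look up the built-in tag policy by display name instead of hard-coding a GUID
TAG_POLICY=$(az policy definition list \
  --query "[?displayName=='Require a tag on resources'].name" -o tsv)

# CostCenter is an example tag name for cost tracking
az policy assignment create \
  --name "require-costcenter-tag" \
  --scope "/providers/Microsoft.Management/managementGroups/corp" \
  --policy "$TAG_POLICY" \
  --params '{ "tagName": { "value": "CostCenter" } }'
```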

2. Use Your M365 Tenant - Identity and Access

If your organization already has a Microsoft 365 tenant, your Azure deployment should live in that same tenant. Entra ID is shared between M365 and Azure - your existing user accounts, groups, and conditional access policies all carry over. When you assign Azure RBAC roles, you're assigning them to the same identities your users already authenticate with. No second directory to manage, no duplicate accounts.

Creating a separate Azure tenant from your M365 tenant means managing two directories, two identity sets, and either duplicate accounts or a guest B2B relationship that adds friction everywhere. Start in the right place.

Once you're in the right tenant, three access controls matter most:

MFA enforcement. All users accessing Azure should require MFA. A Conditional Access policy scoped to the "Microsoft Azure Management" app requiring phishing-resistant MFA closes a large attack surface in about 10 minutes.
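There's no az command group for Conditional Access, but the policy can be created through Microsoft Graph with az rest. A minimal sketch, assuming the well-known "Microsoft Azure Management" app ID and a placeholder target group; it starts in report-only mode, and enforcing phishing-resistant MFA specifically would use an authentication strength rather than the built-in mfa control shown here:

```shell
# Create a report-only Conditional Access policy requiring MFA for Azure
# management. 797f4846-ba00-4fd7-ba43-dac1f8f63013 is the well-known app ID
# for "Microsoft Azure Management"; the group and break-glass IDs are
# placeholders. Flip state to "enabled" after reviewing report-only results.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require MFA for Azure management",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "applications": { "includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"] },
      "users": {
        "includeGroups": ["<azure-users-group-id>"],
        "excludeUsers": ["<break-glass-account-id-1>", "<break-glass-account-id-2>"]
      }
    },
    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
  }'
```

Excluding the break-glass accounts here is deliberate - it's the same exclusion discussed below under break-glass accounts.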

Privileged Identity Management (PIM). Admins should not have standing Contributor or Owner access. Set up PIM eligible assignments so that when an admin needs elevated access, they activate their role with a justification and it expires after a set window - typically 1 to 8 hours. Day-to-day work happens as Reader. A compromised account with Reader access does far less damage than one with standing Contributor.

Break-glass accounts. Two accounts with Owner role at the Organization management group level, excluded from all Conditional Access policies including MFA, with long random passwords stored offline in a physical vault or a break-glass password manager. These exist for one scenario: your identity infrastructure breaks - tenant lockout, Conditional Access misconfiguration, PIM outage - and you need to get back in. They are never used for normal operations. Alert immediately on any sign-in activity from these accounts.

# Assign permanent Reader to the infra admins group at org level
az role assignment create \
  --role "Reader" \
  --assignee-object-id <infra-admins-group-id> \
  --scope "/providers/Microsoft.Management/managementGroups/org-contoso"

# PIM eligible assignments are configured in the Entra ID portal:
# Entra ID > Identity Governance > Privileged Identity Management
# > Azure Resources > <management group> > Eligible assignments
# Assign Contributor as eligible (not active) to the infra admins group

3. Hub Networking - VPN Gateway and VNet Peering

Server migrations require a private network path between your datacenter and Azure. That path lives in the Shared subscription and is then extended to Corp subscriptions via VNet peering.

The Shared subscription gets a hub VNet. A GatewaySubnet inside that VNet hosts either a VPN Gateway (site-to-site IPsec tunnel back to your on-premises firewall) or a Virtual WAN hub (better for multiple branch locations or if you want Azure to manage routing topology). For a single-datacenter migration, a standalone VPN Gateway is the right call - lower cost, simpler to operate, and sufficient bandwidth for the replication traffic.

Corp VNets peer to the hub VNet with --use-remote-gateways. This means Corp VNets inherit the VPN Gateway from Shared and get on-premises connectivity without deploying their own gateway. The Shared VNet sets --allow-gateway-transit on its side to permit this.

# Shared subscription - create resource group and hub VNet
az group create \
  --name rg-shared-network \
  --location eastus

az network vnet create \
  --name vnet-shared \
  --resource-group rg-shared-network \
  --address-prefixes 10.0.0.0/16

# GatewaySubnet is a required name for VPN Gateway
az network vnet subnet create \
  --name GatewaySubnet \
  --resource-group rg-shared-network \
  --vnet-name vnet-shared \
  --address-prefixes 10.0.0.0/27

# Subnet for shared services (domain controllers, etc.)
az network vnet subnet create \
  --name snet-shared-services \
  --resource-group rg-shared-network \
  --vnet-name vnet-shared \
  --address-prefixes 10.0.1.0/24

az network public-ip create \
  --name pip-vpngw-shared \
  --resource-group rg-shared-network \
  --sku Standard \
  --allocation-method Static

# VPN Gateway takes 30-45 minutes to provision
az network vnet-gateway create \
  --name vpngw-shared \
  --resource-group rg-shared-network \
  --vnet vnet-shared \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --public-ip-address pip-vpngw-shared

# Corp subscription - create VNet with AD DNS servers
az network vnet create \
  --name vnet-corp-prod \
  --resource-group rg-corp-network \
  --address-prefixes 10.1.0.0/16 \
  --dns-servers 10.0.1.4 10.0.1.5

# Capture VNet resource IDs for peering
SHARED_VNET_ID=$(az network vnet show \
  --name vnet-shared \
  --resource-group rg-shared-network \
  --query id -o tsv)

CORP_VNET_ID=$(az network vnet show \
  --name vnet-corp-prod \
  --resource-group rg-corp-network \
  --query id -o tsv)

# Peering from Shared to Corp - enables gateway transit
# (create this side first; --use-remote-gateways fails until transit is enabled)
az network vnet peering create \
  --name peer-shared-to-corp \
  --resource-group rg-shared-network \
  --vnet-name vnet-shared \
  --remote-vnet $CORP_VNET_ID \
  --allow-vnet-access \
  --allow-gateway-transit

# Peering from Corp to Shared - inherits the VPN Gateway
az network vnet peering create \
  --name peer-corp-to-shared \
  --resource-group rg-corp-network \
  --vnet-name vnet-corp-prod \
  --remote-vnet $SHARED_VNET_ID \
  --allow-vnet-access \
  --use-remote-gateways
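The gateway alone doesn't establish the tunnel - the on-premises side still has to be represented in Azure. A sketch, assuming placeholder values for the firewall's public IP, the datacenter address space, and the pre-shared key; the same key is configured on the on-premises device:

```shell
# Represent the on-premises VPN device and its address space
# (192.168.0.0/16 is a placeholder for the datacenter ranges)
az network local-gateway create \
  --name lgw-onprem \
  --resource-group rg-shared-network \
  --gateway-ip-address <onprem-firewall-public-ip> \
  --local-address-prefixes 192.168.0.0/16

# Site-to-site IPsec connection between the VPN Gateway and the local gateway
az network vpn-connection create \
  --name cn-shared-to-onprem \
  --resource-group rg-shared-network \
  --vnet-gateway1 vpngw-shared \
  --local-gateway2 lgw-onprem \
  --shared-key <pre-shared-key>
```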

For outbound internet traffic from migrated servers, there are three options. Forced tunneling routes all outbound traffic back through the VPN Gateway and out through your on-premises firewall - useful if your security team requires existing inspection infrastructure, at the cost of hairpin routing and added latency. A network virtual appliance (NVA) deployed in Azure (Palo Alto, Fortinet, etc.) mirrors your on-premises firewall in Azure - familiar operationally but adds VM management overhead. The cleanest long-term option is a secured Virtual WAN hub with Azure Firewall - platform-native, no VMs to manage - but it carries a higher per-hour cost. The right choice depends on your security requirements and how much operational overhead you want to carry in Azure.
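If forced tunneling is the choice, it's implemented as a user-defined route on the workload subnets. A sketch assuming a hypothetical snet-corp-workloads subnet; note that the on-premises side must also advertise or accept the default route for traffic to actually flow end to end:

```shell
# Route table that sends all outbound traffic back through the VPN Gateway
az network route-table create \
  --name rt-force-tunnel \
  --resource-group rg-corp-network

az network route-table route create \
  --name default-to-onprem \
  --resource-group rg-corp-network \
  --route-table-name rt-force-tunnel \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualNetworkGateway

# Attach the route table to the workload subnet
# (snet-corp-workloads is a placeholder subnet name)
az network vnet subnet update \
  --name snet-corp-workloads \
  --resource-group rg-corp-network \
  --vnet-name vnet-corp-prod \
  --route-table rt-force-tunnel
```

For the NVA option, the same route would use --next-hop-type VirtualAppliance with --next-hop-ip-address pointing at the appliance's internal IP.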


4. Shared Platform Resources - Domain Controllers and Backup

Migrated servers are almost always domain-joined. Active Directory domain controllers need to be reachable from Azure before the first server lands. Deploy two Windows Server VMs into the shared services subnet of the Shared VNet, promote them as AD DS domain controllers joining the existing on-premises domain over the VPN tunnel, and configure them as additional DCs in the existing AD site or in a new site dedicated to Azure.
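The VM deployment itself is ordinary az vm create work. A sketch for the first DC, assuming a hypothetical rg-shared-identity resource group and placeholder credentials - a static private IP matching the DNS addresses used elsewhere in this article, and no public IP:

```shell
# Resolve the shared-services subnet ID (the VNet lives in a different
# resource group, so the full subnet resource ID is needed)
SUBNET_ID=$(az network vnet subnet show \
  --resource-group rg-shared-network \
  --vnet-name vnet-shared \
  --name snet-shared-services \
  --query id -o tsv)

# First domain controller: static private IP, no public IP
az vm create \
  --resource-group rg-shared-identity \
  --name vm-dc-01 \
  --image Win2022Datacenter \
  --size Standard_D2s_v5 \
  --subnet "$SUBNET_ID" \
  --private-ip-address 10.0.1.4 \
  --public-ip-address "" \
  --admin-username <admin-username> \
  --admin-password <admin-password>

# Repeat for vm-dc-02 with --private-ip-address 10.0.1.5
```

AD DS promotion itself happens inside the guest (Install-ADDSDomainController) once the VM can reach the on-premises domain over the tunnel.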

Once the DCs are up, update the Corp VNet DNS servers to point to the DC IPs - which is why --dns-servers in the VNet create command above references those addresses. Migrated servers need to resolve AD DNS queries (SRV records, domain controller locator records) for domain join and Kerberos authentication to work. Do not leave Corp VNets pointing at Azure default resolvers.
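If the Corp VNet was created before the DCs existed, the DNS servers can be set in place; running VMs pick up the change only after a restart or a DHCP lease renewal:

```shell
# Point the Corp VNet at the domain controllers for DNS resolution
az network vnet update \
  --name vnet-corp-prod \
  --resource-group rg-corp-network \
  --dns-servers 10.0.1.4 10.0.1.5
```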

Azure Backup gets deployed in two places. A Recovery Services Vault in the Shared subscription backs up the domain controllers. A separate vault in each Corp subscription backs up the migrated servers. Keeping vaults per subscription means restore operations stay within subscription boundaries, and Corp teams can manage their own backup policies without touching platform infrastructure.

# Recovery Services Vault for shared infrastructure (backs up domain controllers)
az backup vault create \
  --name rsv-shared \
  --resource-group rg-shared-backup \
  --location eastus

# Recovery Services Vault for Corp subscription (backs up migrated servers)
az backup vault create \
  --name rsv-corp-prod \
  --resource-group rg-corp-backup \
  --location eastus

# Enroll a migrated VM in backup once it's in Azure
# (--resource-group here is the vault's resource group)
az backup protection enable-for-vm \
  --resource-group rg-corp-backup \
  --vault-name rsv-corp-prod \
  --vm <vm-name> \
  --policy-name DefaultPolicy

5. Azure Migrate - Discovery and Migration

With the platform in place - management groups, networking, domain controllers, backup - you can set up Azure Migrate. The project lives in the Shared subscription because migration is a platform-level activity that spans multiple Corp landing zones. There's no reason to create a new Migrate project per destination subscription.

Create the Azure Migrate project in the Azure portal (search "Azure Migrate", create a project in a resource group in the Shared subscription). Then download and deploy the Azure Migrate appliance into your on-premises environment. The appliance is a pre-built VM image available for Hyper-V, VMware/vCenter, or physical servers. It registers back to your Azure Migrate project over HTTPS on port 443 - no inbound firewall rules needed, just outbound internet access from the appliance VM. If you prefer to keep registration traffic private, it can route through the VPN tunnel instead.

# Create resource group in Shared subscription for Azure Migrate
az group create \
  --name rg-azure-migrate \
  --location eastus

# Azure Migrate project creation is done in the Azure portal:
# portal.azure.com > Azure Migrate > Create project
# Select the rg-azure-migrate resource group

# The commands below require the offazure CLI extension:
# az extension add --name offazure

# Once the appliance is deployed on-premises, check registered appliances
az offazure hyperv site show \
  --resource-group rg-azure-migrate \
  --name <site-name>

# List discovered machines after appliance connects
az offazure hyperv machine list \
  --resource-group rg-azure-migrate \
  --site-name <site-name>

Once the appliance is running and connected to vCenter or Hyper-V, discovery starts automatically. Within a few hours you'll see all discovered VMs in the Azure portal under the Migrate project. From there, enable agentless dependency analysis - the appliance collects network connection data from discovered VMs without installing any agent, letting you visualize which servers communicate with which others. Use this to define move groups: sets of servers that should be migrated together because they have application-layer dependencies.

Run performance-based assessments to get right-sized Azure VM SKU recommendations based on actual CPU and memory utilization over the last 30 days rather than provisioned specs. This typically reveals significant over-provisioning in on-premises environments and produces a more accurate cost estimate for the migrated environment.

When ready to migrate, start replication from the Migrate project. Replication is agentless for VMware workloads and uses the Hyper-V provider for Hyper-V environments. Data replicates to a staging storage account in the Shared subscription, then you perform a test migration to a test VNet to validate the VM boots and is reachable before cutting over to the Corp production VNet. Migrated VMs land in resource groups you specify in the Corp Dev/Test or Prod subscription, attached to the Corp VNet, which through peering can reach the domain controllers in Shared and on-premises resources through the VPN Gateway.

What Comes Next

This foundation - management groups, M365 tenant alignment, PIM, hub VNet with VPN Gateway, shared domain controllers, backup vaults, and Azure Migrate - is the platform layer. It applies equally to lift-and-shift migrations and to disaster recovery setups using Azure Site Recovery. The initial Azure setup is identical for both; the difference is in what you layer on once the platform is ready.

DR with Site Recovery is covered in a separate article: the same management groups, the same hub VNet, the same domain controllers in Shared - but Site Recovery vaults and replication policies instead of Migrate projects and appliances.

This is the approach covered in the Azure Engineering Bootcamp, a 12-session hands-on program that takes your engineers from zero to migrating or deploying production workloads on Azure. Get in touch to discuss your needs.