Azure Networking Basics

Akins IT • June 5, 2019

This blog provides a quick, high-level overview of what's required to build out a basic Azure network infrastructure.


VIRTUAL NETWORK ADDRESS SPACE AND SUBNETS


Before any Azure VMs can be deployed, a virtual network (VNET) must first be implemented within your Azure tenant. During the VNET provisioning process, you will also be required to specify a resource group for the VNET to exist in. Resource groups are logical containers for Azure objects.


When deciding on your Azure VNET address space, keep in mind that it must be unique and cannot overlap with your existing network ranges. If you have an on-prem network of 10.10.0.0/24, for example, your Azure VNET address space can be 10.20.0.0/16, which ensures that each virtual subnet you provision within Azure remains unique and routable.


Azure subnets are carved out of your configured VNET address space. A /16 address space typically provides a large enough range to accommodate most design requirements. From the 10.20.0.0/16 example above, subnets of 10.20.1.0/24, 10.20.2.0/24, 10.20.3.0/24, and so on can be provisioned.
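As a rough sketch, the resource group, VNET, and subnets described above can be provisioned with the Azure CLI. The resource and subnet names (rg-network, vnet-azure, snet-web, snet-app) are placeholders for illustration, not names from this article:

```shell
# Create a resource group to hold the network resources
az group create --name rg-network --location westus2

# Create the VNET with a /16 address space and an initial /24 subnet
az network vnet create \
  --resource-group rg-network \
  --name vnet-azure \
  --address-prefix 10.20.0.0/16 \
  --subnet-name snet-web \
  --subnet-prefix 10.20.1.0/24

# Carve an additional /24 subnet out of the same address space
az network vnet subnet create \
  --resource-group rg-network \
  --vnet-name vnet-azure \
  --name snet-app \
  --address-prefix 10.20.2.0/24
```

The same layout can also be built in the Azure portal; the CLI is shown here because it makes the address-space math explicit.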


The subnets provide network segmentation similar to an on-prem infrastructure utilizing VLANs. VMs can be deployed within each subnet based on their tier (e.g., front-end web server, application, database) and then segmented and secured using network security groups.

By default, subnets within a VNET have complete access to communicate with one another. To restrict access between subnets, a network security group (NSG) must be deployed and associated with each subnet where access control is needed (or applied directly to a VM's network interface for per-VM access control). NSGs can also be used to restrict or limit access between Azure subnets and on-prem networks.
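A minimal sketch of the NSG workflow with the Azure CLI, assuming the placeholder names from earlier (rg-network, vnet-azure) plus a hypothetical database subnet snet-db at 10.20.3.0/24 and an app subnet at 10.20.2.0/24:

```shell
# Create an empty NSG
az network nsg create \
  --resource-group rg-network \
  --name nsg-db

# Add a rule denying inbound traffic from the app subnet to the database subnet
az network nsg rule create \
  --resource-group rg-network \
  --nsg-name nsg-db \
  --name deny-app-to-db \
  --priority 200 \
  --direction Inbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes 10.20.2.0/24 \
  --destination-address-prefixes 10.20.3.0/24 \
  --destination-port-ranges '*'

# Associate the NSG with the database subnet to enforce the rule
az network vnet subnet update \
  --resource-group rg-network \
  --vnet-name vnet-azure \
  --name snet-db \
  --network-security-group nsg-db
```

Lower priority numbers are evaluated first, so explicit allow rules for permitted database traffic would be added with priorities below 200.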


ON-PREM TO AZURE CONNECTIVITY


On-prem to Azure connectivity can be quickly achieved by deploying Azure's virtual network gateway (VNG). The VNG acts as a public gateway for site-to-site connectivity using IPsec VPN tunneling. It's also important to note that the VNG is strictly for site connectivity and does not provide any security features.
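A hedged sketch of deploying the VNG with the Azure CLI, reusing the placeholder names from the earlier examples. The dedicated GatewaySubnet is an Azure requirement; the address range, public IP name, and SKU shown here are illustrative assumptions:

```shell
# The VNG requires a subnet named exactly "GatewaySubnet"
az network vnet subnet create \
  --resource-group rg-network \
  --vnet-name vnet-azure \
  --name GatewaySubnet \
  --address-prefix 10.20.255.0/27

# Public IP address for the gateway's VPN endpoint
az network public-ip create \
  --resource-group rg-network \
  --name pip-vng

# Create the gateway itself (provisioning can take 30+ minutes)
az network vnet-gateway create \
  --resource-group rg-network \
  --name vng-s2s \
  --vnet vnet-azure \
  --public-ip-address pip-vng \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1
```

Completing the site-to-site tunnel would additionally require a local network gateway representing the on-prem VPN device and a connection object binding the two.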


For added security, it is recommended that a virtual security appliance/firewall be deployed in Azure and used as the VPN gateway instead. A virtual FortiGate, for example, would provide site-to-site IPsec VPN connectivity as well as the usual UTM features like AV, IPS, and web and application filtering.


When a virtual security appliance is used to secure the perimeter of the Azure network, a user-defined route (UDR) must also be implemented. The UDR overrides Azure's default system routes so that traffic is forced through the virtual security appliance.
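The UDR setup can be sketched with the Azure CLI as follows, again using the placeholder names from earlier. The firewall's internal IP (10.20.0.4) is an assumption for illustration; it would be the private address of the virtual appliance's trusted interface:

```shell
# Create a route table to hold the user-defined routes
az network route-table create \
  --resource-group rg-network \
  --name rt-via-firewall

# Default route sending all outbound traffic to the virtual appliance
az network route-table route create \
  --resource-group rg-network \
  --route-table-name rt-via-firewall \
  --name default-via-fw \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.20.0.4

# Associate the route table with a workload subnet to apply the route
az network vnet subnet update \
  --resource-group rg-network \
  --vnet-name vnet-azure \
  --name snet-web \
  --route-table rt-via-firewall
```

The route table only takes effect on subnets it is associated with, so each workload subnet that should egress through the firewall needs the same association.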
