GCP Networking Best Practices: Global VPC, Shared VPC, and Cloud Interconnect

Master Google Cloud networking with this practical guide. Learn global VPC design, firewall rules, Shared VPC, Cloud Interconnect, and VPC Service Controls with production-ready examples.


GCP Networking: Production-Ready Guide

This is Part 3 of our Cloud Networking series.


Quick Start: Deploy Your First GCP VPC

# Run in Google Cloud Shell
# Create custom VPC
gcloud compute networks create production-vpc \
  --subnet-mode=custom \
  --bgp-routing-mode=regional

# Create subnet in us-central1
gcloud compute networks subnets create us-central1-subnet \
  --network=production-vpc \
  --region=us-central1 \
  --range=10.0.0.0/20 \
  --enable-private-ip-google-access \
  --enable-flow-logs

# Create Cloud NAT for outbound internet
gcloud compute routers create nat-router \
  --network=production-vpc \
  --region=us-central1

gcloud compute routers nats create nat-config \
  --router=nat-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges

Cost: $30-60/month (Cloud NAT + data)

GCP VPC Architecture

Key Difference: GCP VPCs are global by default! This is fundamentally different from AWS and Azure, where VPCs/VNets are regional. In GCP, you create one VPC and then add regional subnets to it as needed. This simplifies multi-region architectures significantly.

What this means for you:

  • No VPC peering needed between regions—it’s all one VPC
  • Internal IPs are routable globally within your VPC
  • Simplified network architecture for global applications
  • Lower latency between regions (traffic stays on Google’s network)
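Subnets are always globally reachable; the VPC's dynamic routing mode only controls whether Cloud Routers advertise routes across regions. If you started with the regional mode from the Quick Start, here is a minimal sketch of switching an existing VPC to global routing:

# Let Cloud Routers in this VPC advertise routes to all regions
gcloud compute networks update production-vpc \
  --bgp-routing-mode=global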

Architecture overview: a single global VPC (10.0.0.0/16) contains regional subnets in us-central1 (10.0.0.0/20) and europe-west1 (10.1.0.0/20). Each region runs VM instances and a GKE cluster, with a regional Cloud NAT handling outbound internet traffic. A Global Load Balancer fronts both regions, and internal traffic flows between regions over Google's backbone.

Key GCP Networking Concepts

1. Global VPC

Unique to GCP: VPCs span all regions globally!

Global VPC Benefits

  • Single VPC globally - One VPC spans all regions, unlike AWS/Azure
  • No VPC peering - No need to peer VPCs between regions
  • Simplified architecture - Reduces complexity for multi-region deployments
  • Global routing - Internal IPs routable across all regions

Global VPC Best Practices

  • Plan around a /16 - A GCP VPC has no CIDR of its own; reserving a contiguous /16 for its subnets keeps addressing clean and leaves room to grow
  • Regional subnets - Create subnets in regions as needed
  • Global routing mode - Enable for multi-region applications
  • Plan secondary ranges - Essential for GKE pods and services

2. Subnets (Regional)

In GCP, subnets are regional resources that span all zones within a region. This is different from AWS where subnets are tied to a single Availability Zone. This design simplifies high availability—you don’t need to create separate subnets for each zone.

Key characteristics:

  • Regional scope: One subnet spans all zones in a region (e.g., us-central1-a, us-central1-b, us-central1-c)
  • Expandable: You can expand the CIDR range without downtime or recreating the subnet (see the command below)
  • Secondary ranges: Support additional IP ranges for GKE pods and services
  • Private Google Access: VMs without external IPs can access Google APIs

Why this matters: When you deploy VMs across multiple zones for high availability, they can all use the same subnet. No need to manage separate subnets per zone like in AWS.
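For example, growing the Quick Start subnet's primary range from /20 to /19 is a single online command (a sketch; /19 is chosen so the expanded range doesn't overlap the GKE secondary ranges defined later):

# Expand the primary range in place; no downtime or recreation
gcloud compute networks subnets expand-ip-range us-central1-subnet \
  --region=us-central1 \
  --prefix-length=19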

Example Terraform:

resource "google_compute_network" "vpc" {
name = "production-vpc"
auto_create_subnetworks = false
routing_mode = "GLOBAL"
mtu = 1460
}
resource "google_compute_subnetwork" "us_central1" {
name = "us-central1-subnet"
ip_cidr_range = "10.0.0.0/20"
region = "us-central1"
network = google_compute_network.vpc.id
# Secondary ranges for GKE
secondary_ip_range {
range_name = "pods"
ip_cidr_range = "10.0.64.0/18"
}
secondary_ip_range {
range_name = "services"
ip_cidr_range = "10.0.32.0/20"
}
private_ip_google_access = true
log_config {
aggregation_interval = "INTERVAL_5_SEC"
flow_sampling = 0.5
metadata = "INCLUDE_ALL_METADATA"
}
}
resource "google_compute_subnetwork" "europe_west1" {
name = "europe-west1-subnet"
ip_cidr_range = "10.1.0.0/20"
region = "europe-west1"
network = google_compute_network.vpc.id
secondary_ip_range {
range_name = "pods"
ip_cidr_range = "10.1.64.0/18"
}
secondary_ip_range {
range_name = "services"
ip_cidr_range = "10.1.32.0/20"
}
private_ip_google_access = true
log_config {
aggregation_interval = "INTERVAL_5_SEC"
flow_sampling = 0.5
metadata = "INCLUDE_ALL_METADATA"
}
}

3. VPC Firewall Rules

GCP's firewall rules work differently from AWS Security Groups and Azure NSGs. They're defined at the VPC level but enforced at each VM instance. Think of them as distributed firewalls that follow your VMs wherever they go.

How they work:

  • Stateful: Like AWS Security Groups, return traffic is automatically allowed
  • Applied by tags or service accounts: Instead of applying to subnets or instances, you target VMs by network tags (e.g., web-server) or service accounts
  • Priority-based: Rules are evaluated in priority order (0-65535, lower = higher priority)
  • Default behavior: Deny all ingress, allow all egress

Why tags matter: Using network tags is powerful—you can apply firewall rules to VMs based on their role, not their location. This makes it easy to enforce consistent security policies across regions.
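A sketch of the pattern with gcloud (my-vm is a hypothetical instance; the rule mirrors the Terraform example below):

# Tag the VM by role
gcloud compute instances add-tags my-vm \
  --zone=us-central1-a \
  --tags=web-server

# The rule targets the tag, so it follows tagged VMs in any region
gcloud compute firewall-rules create allow-https-web \
  --network=production-vpc \
  --direction=INGRESS \
  --action=allow \
  --rules=tcp:443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=web-server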

Rule evaluation:

Firewall rules are evaluated in priority order; the first match wins:

  1. Priority 1000: Allow HTTPS from the internet
  2. Priority 2000: Allow SSH from IAP
  3. Priority 3000: Allow internal traffic
  4. Priority 65534: Deny all remaining traffic

Production Example:

# Allow HTTPS from internet to web servers
resource "google_compute_firewall" "allow_https" {
  name    = "allow-https"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["web-server"]
  priority      = 1000
}

# Allow SSH via Identity-Aware Proxy (IAP)
resource "google_compute_firewall" "allow_iap_ssh" {
  name    = "allow-iap-ssh"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  # IAP's IP range
  source_ranges = ["35.235.240.0/20"]
  priority      = 1000
}

# Allow internal traffic
resource "google_compute_firewall" "allow_internal" {
  name    = "allow-internal"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["0-65535"]
  }

  allow {
    protocol = "udp"
    ports    = ["0-65535"]
  }

  allow {
    protocol = "icmp"
  }

  source_ranges = ["10.0.0.0/8"]
  priority      = 2000
}

# Deny all other traffic (explicit)
resource "google_compute_firewall" "deny_all" {
  name    = "deny-all"
  network = google_compute_network.vpc.name

  deny {
    protocol = "all"
  }

  source_ranges = ["0.0.0.0/0"]
  priority      = 65534
}

4. Cloud NAT

Cloud NAT is GCP’s fully managed NAT service. Like AWS NAT Gateway and Azure NAT Gateway, it provides outbound internet connectivity for VMs without public IPs. The key difference? Cloud NAT is configured on a Cloud Router, not directly on subnets.

How it works:

  • Cloud NAT is configured on a Cloud Router (required component)
  • You specify which subnets or VMs can use the NAT
  • Automatically allocates public IPs or uses specified IPs
  • Scales automatically based on demand
  • Regional service—deploy one per region
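To sanity-check a NAT after creating it, you can inspect the configuration on its Cloud Router (a sketch using the names from the Quick Start):

# Show the NAT config attached to the router
gcloud compute routers nats describe nat-config \
  --router=nat-router \
  --region=us-central1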

Cost: $0.044/hour ($32/month per region) + $0.045/GB data processed

Cloud NAT Benefits

  • No VM management - Fully managed service, no NAT instances
  • Automatic scaling - Handles traffic spikes automatically
  • Regional HA - Built-in high availability within region
  • Cost-effective - ~$32/month per region + data processing

Implementation:

# Cloud Router (required for Cloud NAT)
resource "google_compute_router" "router" {
  name    = "nat-router"
  network = google_compute_network.vpc.name
  region  = "us-central1"

  bgp {
    asn = 64514
  }
}

# Cloud NAT
resource "google_compute_router_nat" "nat" {
  name                               = "cloud-nat"
  router                             = google_compute_router.router.name
  region                             = google_compute_router.router.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }
}

5. Identity-Aware Proxy (IAP)

Identity-Aware Proxy is one of GCP’s most underrated features. It provides zero-trust access to your VMs and applications without requiring VPNs, bastion hosts, or public IPs. IAP verifies user identity and context before granting access.

Why it’s better than traditional approaches:

  • No VPN needed: Users connect directly through their browser or gcloud CLI
  • No bastion hosts: No jump boxes to manage, patch, or secure
  • Context-aware: Can enforce access based on user identity, device security status, and more
  • Free: No additional cost for IAP TCP forwarding (SSH/RDP)
  • Audit trail: All access is logged for compliance

How it works:

  1. User requests access to a VM
  2. IAP checks user’s identity and context against access policies
  3. If approved, IAP creates an encrypted tunnel to the VM
  4. User connects over the tunnel using standard SSH/RDP
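The same tunnel mechanism forwards arbitrary TCP ports, not just SSH. A sketch for RDP to a hypothetical Windows VM:

# Forward local port 3389 to the VM through an IAP tunnel
gcloud compute start-iap-tunnel my-windows-vm 3389 \
  --local-host-port=localhost:3389 \
  --zone=us-central1-a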

Identity-Aware Proxy (IAP) Benefits

  • No bastion hosts - Eliminates need for jump boxes
  • No VPN required - Direct access without VPN setup
  • Google Identity integration - Uses existing Google accounts
  • Free! - No additional cost for IAP access

Setup:

# Enable IAP for SSH
gcloud compute firewall-rules create allow-iap-ssh \
  --direction=INGRESS \
  --action=allow \
  --rules=tcp:22 \
  --source-ranges=35.235.240.0/20

# Connect to VM via IAP
gcloud compute ssh my-vm \
  --zone=us-central1-a \
  --tunnel-through-iap

Shared VPC: Multi-Project Architecture

Enterprise pattern for centralized network management.

Architecture overview: a host project (owned by the network admin team) contains the Shared VPC and its firewall rules; service projects 1-3 (Teams A, B, and C) each attach to it and run their applications on the shared network.

When to Use Shared VPC

  • Multiple teams/projects - Separate projects sharing network infrastructure
  • Centralized network management - Single team manages networking
  • Shared services - Common DNS, monitoring, and security services
  • Cost allocation - Track costs by project while sharing network

Setup:

# Enable Shared VPC in host project
gcloud compute shared-vpc enable host-project-id

# Associate service projects
gcloud compute shared-vpc associated-projects add service-project-1 \
  --host-project host-project-id

gcloud compute shared-vpc associated-projects add service-project-2 \
  --host-project host-project-id

# Grant subnet access to service project
gcloud projects add-iam-policy-binding host-project-id \
  --member="serviceAccount:service-account@service-project-1.iam.gserviceaccount.com" \
  --role="roles/compute.networkUser"

Terraform Example:

# Host project
resource "google_compute_shared_vpc_host_project" "host" {
  project = "host-project-id"
}

# Service project association
resource "google_compute_shared_vpc_service_project" "service1" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "service-project-1"
}

# Grant subnet access
resource "google_project_iam_member" "subnet_user" {
  project = "host-project-id"
  role    = "roles/compute.networkUser"
  member  = "serviceAccount:service-account@service-project-1.iam.gserviceaccount.com"
}

Cloud Interconnect

Cloud Interconnect provides dedicated, private connectivity between your on-premises network and Google Cloud. Unlike Cloud VPN which goes over the public internet, Cloud Interconnect uses a direct physical connection, offering higher bandwidth, lower latency, and more predictable performance.

Two types of Cloud Interconnect:

  1. Dedicated Interconnect: Direct physical connection to Google’s network (10 or 100 Gbps)
  2. Partner Interconnect: Connection through a supported service provider (50 Mbps to 50 Gbps)

Why use Cloud Interconnect:

  • Higher bandwidth: Up to 100 Gbps vs Cloud VPN’s 3 Gbps per tunnel
  • Lower latency: Direct connection to Google’s network backbone
  • Predictable performance: Dedicated bandwidth, not shared with internet traffic
  • Cost savings: Lower egress costs for high-volume data transfer
  • Compliance: Some regulations require private connectivity

How it works:

  1. Order a connection at a supported colocation facility (Dedicated) or through a partner (Partner)
  2. Google provisions a VLAN attachment in your VPC
  3. Configure BGP routing between your router and Google’s Cloud Router
  4. Traffic flows over the private connection, bypassing the internet
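For Dedicated Interconnect, step 1 means picking a supported facility; you can list them directly:

# List colocation facilities that support Dedicated Interconnect
gcloud compute interconnects locations list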

Comparison:

Cloud Interconnect Options

Dedicated Interconnect:

  • Bandwidth: 10 Gbps or 100 Gbps
  • Best for: Large enterprises, high bandwidth needs
  • Setup: Weeks (requires colocation facility access)
  • Cost: Highest (~$1,700/month for 10 Gbps)

Partner Interconnect:

  • Bandwidth: 50 Mbps to 50 Gbps (flexible)
  • Best for: Medium enterprises without colocation facility access
  • Setup: Days (through service provider)
  • Cost: Medium (varies by provider and bandwidth)

Cloud VPN:

  • Bandwidth: Up to 3 Gbps per tunnel
  • Best for: Small to medium workloads, backup connectivity
  • Setup: Minutes (fully self-service)
  • Cost: Lowest (~$36/month per tunnel)

Implementation:

# Cloud Router for BGP
resource "google_compute_router" "interconnect_router" {
  name    = "interconnect-router"
  network = google_compute_network.vpc.name
  region  = "us-central1"

  bgp {
    asn = 64514
  }
}

# Dedicated Interconnect attachment (VLAN)
resource "google_compute_interconnect_attachment" "dedicated" {
  name          = "dedicated-interconnect"
  region        = "us-central1"
  type          = "DEDICATED"
  router        = google_compute_router.interconnect_router.id
  interconnect  = "my-interconnect-id"
  vlan_tag8021q = 1000
  bandwidth     = "BPS_10G"
}

# BGP session: the attachment itself carries no BGP settings;
# they live on a Cloud Router interface and peer
resource "google_compute_router_interface" "attachment_interface" {
  name                    = "interconnect-interface"
  router                  = google_compute_router.interconnect_router.name
  region                  = "us-central1"
  ip_range                = "169.254.0.1/30"
  interconnect_attachment = google_compute_interconnect_attachment.dedicated.id
}

resource "google_compute_router_peer" "on_prem_peer" {
  name                      = "on-prem-peer"
  router                    = google_compute_router.interconnect_router.name
  region                    = "us-central1"
  interface                 = google_compute_router_interface.attachment_interface.name
  peer_ip_address           = "169.254.0.2"
  peer_asn                  = 65001 # example on-premises ASN
  advertised_route_priority = 100
}

# Cloud VPN (as backup)
resource "google_compute_ha_vpn_gateway" "vpn_gateway" {
  name    = "ha-vpn-gateway"
  network = google_compute_network.vpc.id
  region  = "us-central1"
}

resource "google_compute_vpn_tunnel" "tunnel1" {
  name        = "ha-vpn-tunnel1"
  region      = "us-central1"
  vpn_gateway = google_compute_ha_vpn_gateway.vpn_gateway.id
  # assumes a google_compute_external_vpn_gateway "external" defined elsewhere
  peer_external_gateway           = google_compute_external_vpn_gateway.external.id
  peer_external_gateway_interface = 0
  shared_secret                   = var.vpn_shared_secret
  router                          = google_compute_router.interconnect_router.id
  vpn_gateway_interface           = 0
}

VPC Service Controls

VPC Service Controls is an advanced security feature that creates a security perimeter around Google Cloud resources to prevent data exfiltration. Think of it as a virtual fence that controls which services can be accessed from where, and by whom.

The problem it solves: Even with proper IAM permissions, a compromised credential could allow someone to copy data from Cloud Storage or BigQuery to an external location. VPC Service Controls prevents this by restricting access to Google Cloud services based on network context.

How it works:

  1. Create an Access Policy at the organization level
  2. Define Access Levels (who can access: IP ranges, user identities, device attributes)
  3. Create Service Perimeters (security boundaries around projects and services)
  4. Specify which services are protected and which access levels are allowed

Key capabilities:

  • Context-aware access: Allow access only from specific IP ranges or VPCs
  • Service restrictions: Control which Google Cloud APIs can be accessed
  • Data exfiltration prevention: Block copying data outside the perimeter
  • VPC-SC bridges: Allow controlled communication between perimeters

Example scenario: You have sensitive data in Cloud Storage and BigQuery. With VPC Service Controls, you can ensure this data can only be accessed from your VPC or office IP ranges, preventing a compromised credential from being used to exfiltrate data from an external location.

VPC Service Controls Use Cases

  • Protect sensitive data - Create security perimeters around data
  • Compliance requirements - Meet HIPAA, PCI-DSS, and other standards
  • Prevent data exfiltration - Stop accidental or malicious data exposure
  • Control API access - Restrict access to Google Cloud APIs

Example:

# Access Policy
resource "google_access_context_manager_access_policy" "policy" {
  parent = "organizations/${var.org_id}"
  title  = "Production Access Policy"
}

# Access Level
resource "google_access_context_manager_access_level" "access_level" {
  parent = "accessPolicies/${google_access_context_manager_access_policy.policy.name}"
  name   = "accessPolicies/${google_access_context_manager_access_policy.policy.name}/accessLevels/production_access"
  title  = "Production Access"

  basic {
    conditions {
      # IP conditions require public CIDRs (e.g., your office egress range);
      # private ranges like 10.0.0.0/8 are rejected by Access Context Manager
      ip_subnetworks = ["203.0.113.0/24"]
      members        = ["user:admin@example.com"]
    }
  }
}

# Service Perimeter
resource "google_access_context_manager_service_perimeter" "perimeter" {
  parent = "accessPolicies/${google_access_context_manager_access_policy.policy.name}"
  name   = "accessPolicies/${google_access_context_manager_access_policy.policy.name}/servicePerimeters/production_perimeter"
  title  = "Production Perimeter"

  status {
    restricted_services = [
      "storage.googleapis.com",
      "bigquery.googleapis.com"
    ]

    access_levels = [
      google_access_context_manager_access_level.access_level.name
    ]

    vpc_accessible_services {
      enable_restriction = true
      allowed_services = [
        "storage.googleapis.com",
        "bigquery.googleapis.com"
      ]
    }
  }
}

GKE Networking

Google Kubernetes Engine (GKE) networking requires careful IP planning because each cluster needs three separate IP ranges: one for nodes, one for pods, and one for services. Understanding this is crucial to avoid running out of IPs as your cluster scales.

Why GKE needs three IP ranges:

  1. Node subnet (primary): IPs for the GKE nodes themselves (VMs)
  2. Pod range (secondary): IPs for Kubernetes pods running on those nodes
  3. Service range (secondary): IPs for Kubernetes services (cluster IPs)

VPC-native vs Routes-based clusters:

  • VPC-native (recommended): Pods get IPs from your VPC’s secondary ranges, routable within your VPC
  • Routes-based (legacy): Pods get IPs from a separate range, requires routes

The IP math you need to know:

  • Each GKE node can run up to 110 pods (the default maximum), and GKE reserves a /24 (256 IPs) per node for them
  • A /18 pod range (16,384 IPs) therefore supports up to 64 nodes; a 100-node cluster needs a /17 or larger
  • Always plan for growth: use larger ranges than you think you need

IP Planning for GKE:

VPC: 10.0.0.0/16
  Node subnet (primary):     10.0.0.0/20  (4,096 IPs for nodes)
  Pod range (secondary):     10.0.64.0/18 (16,384 IPs for pods)
  Service range (secondary): 10.0.32.0/20 (4,096 IPs for services)

Rule: GKE reserves a /24 (256 pod IPs) per node by default,
so a /18 pod range supports up to 64 nodes.

Implementation:

resource "google_container_cluster" "primary" {
name = "production-gke"
location = "us-central1"
# VPC-native cluster
network = google_compute_network.vpc.name
subnetwork = google_compute_subnetwork.us_central1.name
ip_allocation_policy {
cluster_secondary_range_name = "pods"
services_secondary_range_name = "services"
}
# Private cluster
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = false
master_ipv4_cidr_block = "172.16.0.0/28"
}
# Master authorized networks
master_authorized_networks_config {
cidr_blocks {
cidr_block = "10.0.0.0/8"
display_name = "Internal"
}
}
# Network policy
network_policy {
enabled = true
provider = "CALICO"
}
}
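The gcloud equivalent, as a sketch that assumes the VPC, subnet, and secondary ranges defined earlier:

# VPC-native private cluster reusing the existing secondary ranges
gcloud container clusters create production-gke \
  --region=us-central1 \
  --network=production-vpc \
  --subnetwork=us-central1-subnet \
  --enable-ip-alias \
  --cluster-secondary-range-name=pods \
  --services-secondary-range-name=services \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.0.0/28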

Cloud Armor

Cloud Armor is GCP’s web application firewall (WAF) and DDoS protection service. It sits in front of your global load balancers, protecting your applications from common web attacks, DDoS attempts, and malicious traffic before it reaches your backend services.

What it protects against:

  • DDoS attacks: Automatic detection and mitigation of volumetric attacks
  • OWASP Top 10: SQL injection, cross-site scripting (XSS), and other web vulnerabilities
  • Bot attacks: Identify and block malicious bots
  • Geographic attacks: Block traffic from specific countries or regions
  • Application-layer attacks: Custom rules to protect your specific application logic

How it works:

  1. Attach a Cloud Armor security policy to your backend service (behind a load balancer)
  2. Define rules with priorities (lower number = evaluated first)
  3. Rules can allow, deny, or rate-limit traffic based on conditions
  4. Traffic is inspected before reaching your backends

Key features:

  • IP allowlist/blocklist: Block or allow specific IP ranges
  • Rate limiting: Limit requests per IP (prevent abuse)
  • Geo-based control: Allow/deny traffic by country
  • Custom rules: Use Common Expression Language (CEL) for complex logic
  • Preconfigured rules: OWASP ModSecurity Core Rule Set integration
  • Adaptive protection: ML-based detection of L7 DDoS attacks

Cost: Free for basic DDoS protection; advanced features ~$5/policy/month + $0.75 per million requests
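Attaching an existing policy to a backend service is a one-liner (names match the Terraform example below):

# Attach the security policy to a global backend service
gcloud compute backend-services update backend-service \
  --security-policy=production-security-policy \
  --global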

Example:

resource "google_compute_security_policy" "policy" {
name = "production-security-policy"
# Block specific IP ranges
rule {
action = "deny(403)"
priority = "1000"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["203.0.113.0/24"]
}
}
description = "Block malicious IPs"
}
# Rate limiting
rule {
action = "rate_based_ban"
priority = "2000"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
rate_limit_options {
conform_action = "allow"
exceed_action = "deny(429)"
enforce_on_key = "IP"
rate_limit_threshold {
count = 100
interval_sec = 60
}
ban_duration_sec = 600
}
description = "Rate limit: 100 req/min"
}
# Default allow
rule {
action = "allow"
priority = "2147483647"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
description = "Default allow"
}
}
# Attach to backend service
resource "google_compute_backend_service" "default" {
name = "backend-service"
health_checks = [google_compute_health_check.default.id]
security_policy = google_compute_security_policy.policy.id
}

Cost Optimization

Monthly Cost Example:

Cloud NAT (us-central1): $32
Cloud NAT (europe-west1): $32
Cloud Load Balancer: $18
Data Transfer (inter-region): $20
Total: ~$100/month

Optimization Strategies

Enable Private Google Access (FREE)

Allow VMs without external IPs to access Google APIs and services at no cost. Eliminates Cloud NAT charges for Google service traffic.

Single Cloud NAT for Dev/Test

Deploy one Cloud NAT per region in non-production environments instead of multiple NATs to save ~$32/month per additional NAT.

Use Standard Tier Networking

Switch from Premium to Standard tier for non-critical workloads. Can reduce egress costs by up to 50% with acceptable latency trade-offs.
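A sketch of both options; the default-tier change affects only newly created resources, and my-batch-vm is a hypothetical instance:

# Make Standard tier the project default for new resources
gcloud compute project-info update --default-network-tier=STANDARD

# Or choose the tier per instance at creation time
gcloud compute instances create my-batch-vm \
  --zone=us-central1-a \
  --network-tier=STANDARD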

Implement Cloud CDN

Use Cloud CDN to cache content at edge locations, reducing origin egress charges and improving performance for global users.
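Enabling Cloud CDN on an existing global backend service is a flag flip (backend-service is the service from the Cloud Armor example):

# Enable edge caching for the backend service
gcloud compute backend-services update backend-service \
  --enable-cdn \
  --global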

Analyze VPC Flow Logs

Review VPC Flow Logs to identify high-traffic patterns, optimize routing, and reduce unnecessary inter-region data transfer.
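A sketch of pulling recent flow-log entries for offline analysis; the filter matches the vpc_flows log written by subnets with flow logs enabled:

# Read recent VPC Flow Logs entries
gcloud logging read \
  'resource.type="gce_subnetwork" AND logName:"vpc_flows"' \
  --limit=20 \
  --format=json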

Optimize GKE IP Allocation

Right-size GKE secondary IP ranges to avoid wasting IP space. Use /24 for services and /18-/20 for pods based on actual needs.

Common Issues

Issue 1: Can’t SSH to VMs

Solution: Use Identity-Aware Proxy (IAP)

gcloud compute ssh my-vm --tunnel-through-iap

Issue 2: High Cloud NAT Costs

Solution: Enable Private Google Access

gcloud compute networks subnets update SUBNET \
  --region=REGION \
  --enable-private-ip-google-access

Issue 3: GKE Out of IPs

Prevention: Plan secondary ranges generously

  • Each node needs ~110 pod IPs
  • Use /18 or larger for pod range

Troubleshooting Tools

GCP Networking Troubleshooting Tools

  • Network Intelligence Center - Topology visualization and network insights
  • Connectivity Tests - Verify and diagnose network paths (see the example below)
  • VPC Flow Logs - Detailed traffic analysis and monitoring
  • Cloud Logging - Firewall rule logs and audit trails
  • Performance Dashboard - Network performance metrics and diagnostics
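For example, a Connectivity Test can verify a TCP path between two VMs (a sketch; the project, zones, and instance names are placeholders):

# Test TCP:443 reachability from web-vm to db-vm
gcloud network-management connectivity-tests create web-to-db-test \
  --source-instance=projects/my-project/zones/us-central1-a/instances/web-vm \
  --destination-instance=projects/my-project/zones/us-central1-b/instances/db-vm \
  --protocol=TCP \
  --destination-port=443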

This completes the three-part cloud networking series. Hope this was helpful!



Need help? Contact Quabyt for GCP networking architecture support.
