Auto-Anchor Mobility Fundamentals

Having a fully redundant guest network architecture can be beneficial for service availability. Depending on business and operational requirements, many organizations use the guest architecture for purposes where traffic needs to be tunneled to a single point in the network, not necessarily just "guest" traffic in the traditional sense.

In a Cisco Unified Wireless Network deployment, this is accomplished with Auto-Anchor Mobility Tunneling by establishing mobility peer relationships between internal production controllers and isolated controllers (typically in a bastion host or DMZ segment firewalled from the internal network). When using the auto-anchor mobility feature, these controllers do not need to share the same mobility group name, because no layer 2 fast roaming or session state synchronization needs to occur between them (layer 3 roaming is still performed so that clients retain their IP addresses).

Typically, the DMZ anchor controllers are simply termination points within the isolated network segment and do not directly control any lightweight access points. This allows them to be smaller-scale controllers sized for bandwidth and throughput rather than AP licensing; 4402-25 or 5508-25 controllers are commonly used. Because no APs join the anchors, layer 2 roaming between APs on those controllers never comes into play.

Also, note that each production internal controller should have a mobility peer relationship with every DMZ anchor controller to which it will send traffic. However, each DMZ anchor controller only needs mobility peers with the production internal controllers, not with the other DMZ anchor controllers.

Mobility communication between controllers occurs using their management interfaces, and uses the following protocols:

  • UDP port 16666 is used for mobility control traffic between peers (Control Path)
  • IP Protocol 97 is used for Ethernet-in-IP traffic tunneling of client traffic (Data Path)

To set up the mobility peer relationship, navigate to the Controller > Mobility Management > Mobility Groups section. Add a new mobility group member by specifying the peer's management interface MAC address, IP address, and its mobility group name (the peer's group name, not the local controller's).
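The same peer entry can also be added from the CLI. A sketch, where the MAC address, IP address, and group name are placeholders for your peer's actual values:

```
! On the production internal controller, add the DMZ anchor as a peer
> config mobility group member add 00:1a:2b:3c:4d:5e 192.0.2.10 DMZ-Anchors
> save config
```

A mirror-image entry pointing back at the internal controller's management interface is added on the DMZ anchor.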

The status will initially show both the Control and Data paths down. Once communication is established, the status will show as "Up". Mobility peer connection establishment and keep-alive is performed at a periodic interval which defaults to every 10 seconds.

Note - The control and data paths may individually be shown as down if communication can be established using one protocol but not another. Check network ACLs or firewalls for traffic restrictions if this is the case.
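If a firewall separates the internal and DMZ controllers, rules along these lines are needed in both directions. An illustrative IOS-style extended ACL sketch, using placeholder management addresses (192.0.2.5 = internal controller, 192.0.2.10 = DMZ anchor):

```
! Mobility control path (UDP 16666) and data path (IP protocol 97 / EtherIP)
permit udp host 192.0.2.5  host 192.0.2.10 eq 16666
permit 97  host 192.0.2.5  host 192.0.2.10
! Mirror the rules for the return direction
permit udp host 192.0.2.10 host 192.0.2.5  eq 16666
permit 97  host 192.0.2.10 host 192.0.2.5
```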

To verify connectivity and peer keep-alive timers at any time, the following CLI commands may be useful:

  • mping peer-ip-address - used to test the Control Path between mobility peers
  • eping peer-ip-address - used to test the Data Path between mobility peers
  • show mobility summary - used to view mobility configuration and timers
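For example, against a hypothetical DMZ anchor at 192.0.2.10:

```
! Test the control path (UDP 16666), then the data path (EtherIP)
> mping 192.0.2.10
> eping 192.0.2.10
! Confirm peer status and keep-alive timer values
> show mobility summary
```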

Next, anchor the guest WLAN to multiple anchor controllers in the DMZ for round-robin client load balancing and redundancy. All mobility anchor controllers are used (active/active operation). This is accomplished from WLANs > (the WLAN's blue arrow drop-down button) > Mobility Anchors.

On the production internal controller - specify one or more DMZ anchor controllers as the mobility anchors for the WLAN.

On each DMZ anchor controller - specify its own IP address (local) as the mobility anchor for the WLAN, since it will be the termination point for the client traffic.
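The anchoring can also be configured from the CLI. A sketch, assuming WLAN ID 2 and a placeholder anchor address of 192.0.2.10 (the WLAN must be disabled while changing anchors):

```
! On the production internal (foreign) controller:
> config wlan disable 2
> config wlan mobility anchor add 2 192.0.2.10
> config wlan enable 2

! On the DMZ anchor controller (192.0.2.10), anchor the WLAN to itself:
> config wlan disable 2
> config wlan mobility anchor add 2 192.0.2.10
> config wlan enable 2
```

The `show mobility anchor` command can then be used to verify the anchor list per WLAN.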

It is also important to mention that the WLAN configuration on the production internal and DMZ anchor controllers must be identical, with only one exception:

  • Interface - the production internal controllers should have the "management" interface assigned to the WLAN to allow the client traffic to be tunneled to the DMZ anchor controllers. The DMZ anchor controllers should have a dynamic interface assigned to the WLAN to forward client traffic out to the network (this is where clients will obtain IP addresses).
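As a sketch, the differing interface assignment from the CLI, where WLAN ID 2 and the dynamic interface name "guest-dmz" are placeholders:

```
! On the production internal controllers - tunnel client traffic to the anchor
> config wlan interface 2 management

! On the DMZ anchor controllers - drop client traffic onto the guest subnet
> config wlan interface 2 guest-dmz
```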

In a failover scenario, once the production internal controller recognizes that an anchor controller is no longer reachable (at the next keep-alive interval), it marks the anchor as Down, de-authenticates the affected clients, forces them to re-authenticate, and anchors them to one of the remaining Up anchor controllers. Failover could occur because of a controller failure, or even the failure of a single port link if link aggregation (LAG) is not in use. In the event of a single port failure, the anchor controller migrates the affected logical interface to the backup port assigned to that interface; the production controllers fail clients over to the other active DMZ anchors in the meantime, and the mobility relationship is re-established within 10 seconds of the backup port becoming active, at which point new clients are allowed to terminate on that anchor again.

Note - this may result in clients obtaining a new IP address if the anchor controllers are not attached to the same client subnets (perhaps they are in different data centers for instance).

In my testing, failover occurs within 6-10 seconds of taking the anchor controller offline.

There are a ton of related features, requirements, and design considerations for auto-anchor mobility, but this should provide a basic overview.