Aerohive Dynamic Airtime Scheduling

By Lisa Phifer

July 06, 2009

Aerohive’s QoS booster stops slow Wi-Fi clients from hogging more than their fair share of the air.

Aerohive HiveOS 3.2 Dynamic Airtime Scheduling 

Pros:  Fully automated, protocol independent, optimizes downlink without penalizing anyone
Cons: Little visible improvement on uplink, cannot be enabled through HiveUI

In Part 1 of our review, we used Aerohive’s Web-based HiveUI and 300 Series HiveAPs to create a cooperative wireless “hive.” Here in Part 2, we use Aerohive’s HiveManager to optimize our HiveAP’s performance by combining industry standard quality-of-service (QoS) controls with Dynamic Airtime Scheduling, a patent-pending feature introduced in HiveOS 3.2.

Despite the “high throughput” in its name, many 802.11n WLANs fail to achieve their full potential. Reasons vary, but one common culprit is the drag induced by slower Wi-Fi clients. When we conducted open air IxChariot tests, Aerohive’s Dynamic Airtime Scheduling consistently squeezed more juice from faster Wi-Fi downlinks without penalizing slower clients—including distant 802.11n clients.

Unruly competition

All 802.11 clients—whether a, b, g, or n—use carrier sense multiple access with collision avoidance (CSMA/CA) to share a channel’s “air time.” Specifically, Wi-Fi clients implement MAC layer coordination functions (DCF, HCF) to give everyone the same number of transmit opportunities. Because channels are shared media, only one device should transmit at a time. To avoid collisions, any client with data to send must first listen to see if the channel is busy. If the channel is free, the client can transmit a frame (e.g., single or aggregate MPDU). If the channel is busy, the client must wait a random back-off period before trying again.
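
To make those contention mechanics concrete, below is a minimal Python sketch of DCF-style binary exponential back-off. The slot time and contention window constants are the 802.11 OFDM PHY values; the function itself is an illustration, not any vendor’s implementation.

```python
import random

SLOT_US = 9                 # 802.11a/g/n OFDM slot time, in microseconds
CW_MIN, CW_MAX = 15, 1023   # contention window bounds for ordinary data

def backoff_slots(retries: int) -> int:
    """Random back-off in slots; the window doubles per failed attempt, capped at CW_MAX."""
    cw = min(CW_MAX, (CW_MIN + 1) * (2 ** retries) - 1)
    return random.randint(0, cw)

# A client that found the channel busy on successive attempts defers longer each time:
for attempt in range(3):
    slots = backoff_slots(attempt)
    print(f"attempt {attempt}: defer {slots} slots (~{slots * SLOT_US} us)")
```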

Ideally, if every client had the same amount of data to send, every client would get an equal slice of “air time.” In a lightly-used WLAN, clients with more data to send get to utilize more of the channel’s free time, while clients with less to send still get an equal shot at transmitting whenever they want.

However, transmission times depend on frame length and data rate. A frame sent at 1 Mbps takes twice as long to complete as the same frame sent at 2 Mbps. Even in legacy 802.11b WLANs, this disparity existed between clients operating at 11 or 5.5 Mbps and those that fell back to 2 Mbps: clients slowed by distance and interference degrade aggregate WLAN throughput. To optimize 802.11a/g WLANs, many administrators configure APs with higher minimum data rates, trading smaller cell size (AP coverage) for better throughput and lower latency.
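
The arithmetic behind that disparity is straightforward. This toy calculation (payload only, ignoring preambles and MAC overhead) shows how the same frame’s airtime balloons at lower rates:

```python
frame_bits = 1500 * 8  # one 1500-byte frame

# Mbps is conveniently bits-per-microsecond, so dividing yields airtime directly.
for rate_mbps in (1, 2, 5.5, 11, 54, 300):
    airtime_us = frame_bits / rate_mbps
    print(f"{rate_mbps:>5} Mbps -> {airtime_us:8.1f} us on air")
```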

But in 802.11n WLANs, maximum data rates are roughly six times higher, and the differences between clients are far greater. Furthermore, 802.11n client performance varies more, influenced by obstacles that cause multipath and by each client’s use of high-throughput options (e.g., number of antennas, frame aggregation). Unexpectedly slow clients can gobble up long chunks of airtime, leaving fewer transmit opportunities for 802.11n clients that could otherwise jump in, send data quickly at 300 Mbps, and be done.

Restoring order

Many standard and proprietary mechanisms can influence the way that Wi-Fi clients share the air. With Dynamic Airtime Scheduling, Aerohive has combined conventional QoS control techniques with its own protocol-independent optimized scheduling algorithm.

For example, all 802.11a/g/n APs (including Aerohive’s) support slow 802.11b data rates for compatibility, but can be set to disallow feeble 802.11b clients and costly 11b protection mechanisms. Many dual-radio 802.11n APs (like our Series 300 HiveAPs) can support 802.11g clients in the 2.4 GHz band while communicating only with newer clients in the 5 GHz band (or at least on a different channel). These common approaches segregate legacy clients to reduce their drag on 802.11n, but they don’t eliminate distant or under-performing 802.11n clients—including early Draft 1 and consumer-grade Draft 2 clients.

Today’s enterprise 802.11n APs (including the HiveAP 320/340s we tested) support industry-standard 802.11e Wi-Fi Multimedia (WMM) prioritization. WMM defines four wireless access categories—voice, video, best effort, and background—with different transmit queues, inter-frame spacing, random back-off periods, and 802.1d priority mappings. Frames in higher-priority WMM categories can be sent more frequently and/or for longer durations. For example, WMM lets voice handsets send short SIP/RTP frames at consistent intervals in converged WLANs otherwise dominated by bandwidth-hungry data clients.
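
For reference, the four access categories differ in their EDCA channel-access parameters. The values below are the commonly published WMM station defaults (TXOP limits shown are the 802.11a/g values); treat them as illustrative rather than any particular AP’s configuration.

```python
# Commonly published EDCA station defaults for the four WMM access categories.
# Lower AIFSN/CW means earlier, more frequent channel access; a TXOP of 0
# means one frame per channel access.
WMM_EDCA = {
    "voice (AC_VO)":       {"aifsn": 2, "cw_min": 3,  "cw_max": 7,    "txop_ms": 1.504},
    "video (AC_VI)":       {"aifsn": 2, "cw_min": 7,  "cw_max": 15,   "txop_ms": 3.008},
    "best effort (AC_BE)": {"aifsn": 3, "cw_min": 15, "cw_max": 1023, "txop_ms": 0},
    "background (AC_BK)":  {"aifsn": 7, "cw_min": 15, "cw_max": 1023, "txop_ms": 0},
}

for ac, p in WMM_EDCA.items():
    print(f"{ac:22} AIFSN={p['aifsn']} CW=[{p['cw_min']},{p['cw_max']}] TXOP={p['txop_ms']} ms")
```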

However, WMM alone cannot distribute airtime equally across clients that send frames at the same priority, but different speeds. Several vendors have taken proprietary whacks at allocating airtime to reduce slow client drag. Meru was the first to add “airtime fairness” to its controller-directed virtual cell architecture, subsequently refined into virtual ports. Last summer, Aruba released an Adaptive Radio Management upgrade that could prioritize 802.11n clients over 802.11a/g clients. This February, Aerohive entered the fray by introducing Dynamic Airtime Scheduling to improve performance in congested WLANs.

Seeing is believing

As CWNP’s Devin Akin observed in his blog post “What is fairness anyway?”, vendors disagree about what constitutes fairness and have tackled related challenges in diverse ways. Our goal was not to compare Dynamic Airtime Scheduling to other vendors’ optimization approaches. Rather, we wanted to put Dynamic Airtime Scheduling through its paces with our own mix of Wi-Fi clients to see how much it would really improve our own live hive.

When announcing Dynamic Airtime Scheduling, Aerohive published a paper demonstrating this feature’s impact using VeriWave WiMix test results. VeriWave tools are widely used for performance and stress testing by equipment manufacturers. This is a common and effective way to subject APs to generated test traffic representing real-world applications and clients.

Aerohive’s test results were compelling. For example, when Dynamic Airtime Scheduling was enabled for a HiveAP with three 802.11a clients (operating at 54, 12, and 6 Mbps), TCP downlink throughput increased four-fold for the fastest client, without delaying the slowest client’s test completion. The more diverse the example client mix, the more striking the demonstrated improvements.

However, those tests were conducted in a closed environment, without live contention between clients. Closed tests avoid external RF interference, are easier to control, and are generally more reliable than open air tests. But open air tests can be uniquely insightful—especially for a feature like this. Aerohive showed us some informal open air test results, obtained with Ixia IxChariot test scripts. So we decided to begin our test drive there, using IxChariot to measure Dynamic Airtime Scheduling’s impact on our own live 802.11n clients.

Controlling QoS

As explained in Part 1, HiveAPs are configured using a HiveUI Web interface or HiveManager appliance. QoS parameters, such as WMM priorities, can be tweaked using HiveUI, but Dynamic Airtime Scheduling must be enabled via HiveManager or the HiveAP CLI.

Dynamic Airtime Scheduling, enabled per WLAN policy, determines how a HiveAP shares a channel across all of its clients. This simple toggle works in conjunction with other existing QoS control mechanisms, including traffic classification, per-user queuing, rate limits, scheduling, and WMM prioritization.

Within each HiveAP, a policy-driven QoS engine maps arriving frames into per-user queues, based on MAC address, interface, SSID, TCP/UDP protocol, and/or markers (e.g., 802.1p priority, WMM category, DiffServ tag). Each client has eight user queues, one per WMM category. Scheduling parameters control when frames move from user queues onto WMM hardware queues—for example, voice queues can be serviced by strict priority while best effort queues are emptied by weighted round robin. Maximum rates can also be applied to users, groups, or queues to limit total bandwidth consumption (below).
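
As a rough illustration of that classification step, consider this hypothetical sketch. The field names and the DSCP fallback rule are our own inventions for clarity, not Aerohive’s actual data model.

```python
from collections import defaultdict, deque

# (client MAC, category) -> queue; HiveOS keeps eight queues per client,
# but a few categories are enough to show the idea.
user_queues = defaultdict(deque)

def classify(frame: dict) -> str:
    """Pick a category from whichever marker the frame carries."""
    if "wmm_category" in frame:
        return frame["wmm_category"]      # explicit WMM marking wins
    if frame.get("dscp", 0) >= 46:        # DiffServ EF: treat as voice
        return "voice"
    return "best_effort"                  # unmarked traffic

def enqueue(frame: dict) -> None:
    user_queues[(frame["dst_mac"], classify(frame))].append(frame)

enqueue({"dst_mac": "aa:bb:cc:00:00:01", "dscp": 46, "payload": b"rtp"})
enqueue({"dst_mac": "aa:bb:cc:00:00:02", "payload": b"http"})
print({key: len(q) for key, q in user_queues.items()})
```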

Figure 1. Aerohive QoS Policies.

Turning Dynamic Airtime Scheduling on causes this engine to schedule traffic by airtime instead of bandwidth. Normally, two clients with the same weight are allowed to send the same amount of data at a given priority, while a client with twice the weight can send double the data. With Dynamic Airtime Scheduling, a HiveAP makes its priority and weighting decisions based not on frame count and size, but on the time frames actually take to transmit.

Thus, as a client’s data rate starts to fall, Aerohive’s scheduler can react. Faster clients will start receiving more transmissions sooner instead of waiting around for the slower client to finish. Conversely, as faster clients finish, that slower client gets to use bigger chunks of airtime, making up lost ground. This algorithm uses each HiveAP’s visibility into actual transmit times, adjusting allocations when otherwise speedy clients hit dead spots or interference. But importantly, WMM priorities and rate limits are still enforced—for example, even slow VoIP handsets still need to exchange top-priority SIP frames frequently enough to avoid jitter.
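
To see why charging for airtime rather than bytes changes the outcome, consider this minimal deficit-round-robin sketch. It is a toy model under stated assumptions (a fixed per-round quantum, airtime estimated as frame size over PHY rate), not Aerohive’s patented algorithm.

```python
def airtime_us(frame_bytes: int, rate_mbps: float) -> float:
    """Estimated time on air, ignoring PHY preamble and MAC overhead."""
    return frame_bytes * 8 / rate_mbps

class AirtimeScheduler:
    """Deficit round robin where each frame 'costs' airtime, not bytes."""

    def __init__(self, quantum_us: float = 1000.0):
        self.quantum_us = quantum_us
        self.clients = {}  # name -> {"queue", "rate", "credit", "weight"}

    def round(self):
        sent = []
        for name, c in self.clients.items():
            c["credit"] += self.quantum_us * c["weight"]  # grant this round's airtime
            while c["queue"] and airtime_us(c["queue"][0], c["rate"]) <= c["credit"]:
                c["credit"] -= airtime_us(c["queue"][0], c["rate"])
                sent.append((name, c["queue"].pop(0)))
        return sent

s = AirtimeScheduler()
s.clients["fast"] = {"queue": [1500] * 50, "rate": 270.0, "credit": 0.0, "weight": 1.0}
s.clients["slow"] = {"queue": [1500] * 50, "rate": 54.0,  "credit": 0.0, "weight": 1.0}
sent = s.round()
for name in ("fast", "slow"):
    n = sum(1 for who, _ in sent if who == name)
    print(f"{name}: {n} frames this round")
```

With equal weights, both clients get roughly the same slice of air per round; the fast client simply pushes far more data through its slice, which matches the downlink behavior we observed.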

Creating our test WLAN

To exercise this feature in our own live hive, we defined two WLAN policies: one for 2.4 GHz and another for 5 GHz (below). We tied each to a single SSID policy linking our “PerfTest” SSID to one radio band, supported data rates, MAC and traffic filters (null), and beaconed capabilities like WPA/WEP encryption (off) and WMM prioritization (on). We configured our 5 GHz radio profile to permit only 802.11n clients on a 40 MHz channel, with MAC frame aggregation on and no short guard interval (SGI). We configured our 2.4 GHz radio profile to permit 802.11g/n clients on a 20 MHz channel, without aggregation or SGI.
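
Expressed as data, the two radio profiles we pushed looked roughly like this (a hypothetical representation for readability; HiveManager’s actual schema differs):

```python
# Hypothetical summary of our two test radio profiles (not HiveManager syntax):
radio_profiles = {
    "PerfTest-5GHz": {
        "clients_allowed": "802.11n only",
        "channel_width_mhz": 40,
        "frame_aggregation": True,
        "short_guard_interval": False,
    },
    "PerfTest-2.4GHz": {
        "clients_allowed": "802.11g/n",
        "channel_width_mhz": 20,
        "frame_aggregation": False,
        "short_guard_interval": False,
    },
}
```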

Figure 2. WLAN Policy.

These single-SSID, single-band policies let us control multi-band clients without touching them mid-test. To change scenarios, we used our HiveManager to push a new policy under test to one HiveAP. Once activated, all clients automatically reassociated to that AP’s “PerfTest” WLAN using the desired band/channel and data rates. This helped us limit the variables that changed during each run—even minor differences in client location or orientation can affect data rate.

Of course, in open air tests, one can’t control everything. We used a WLAN analyzer to measure air quality and configured our HiveAP to use the cleanest available channel in each band, realizing there would be some interference at 2.4 GHz. But our objective wasn’t to achieve the highest possible throughput. We wanted to measure the difference between our own clients, operating with and without slow client drag, and with and without Dynamic Airtime Scheduling.

At 5 GHz, we staged four 802.11n clients: three dual-band Linksys WPC600n clients and one TrendNET dual-band TEW-664UB client. All four were placed close to the HiveAP, operating reliably at 270 Mbps. From an IxChariot console, we launched Ixia’s High Performance Throughput script, sending each Wi-Fi client a 1 MB file over TCP from a GigE laptop tethered to the HiveAP’s 10/100/1000 Ethernet port. This established our downlink baseline: 50 seconds per completed test run, averaging ~30 Mbps throughput per client.

Introducing drag

We then moved one WPC600n to a location where its Tx/Rx rate consistently fell to 81.5 Mbps, repeating the test without Dynamic Airtime Scheduling. As expected, the under-performing client hurt everyone. Not only did our distant client take 2:40 to finish, but fast clients now took nearly two minutes apiece (below). Moving one 802.11n client—identical in type and configuration—had degraded our entire WLAN’s throughput.

Figure 3. Downlink IxChariot High Throughput Tests.

Next, we left those four 802.11n clients in place, checked the “Enable Dynamic Airtime Scheduling” box on our 5 GHz WLAN profile, pushed it to our HiveAP, and waited for associations to be reestablished. Fast client transfer times were cut in half. Even our distant client’s download completed in 1:50 (vs. 2:40). Without a doubt, we saw better aggregate WLAN throughput with Dynamic Airtime Scheduling enabled than without.

Don’t suspect beginner’s luck or a one-time fluke. We repeated all of these 5 GHz tests several times, and ran similar tests at 2.4 GHz, varying client device populations. Dynamic Airtime Scheduling consistently improved our faster client download durations and throughputs, without visibly lengthening slower client downloads. We found this to be true whether clients were slowed by distance, interference, or protocol type. However, we found that client Tx/Rx rates had to drop to produce these results—a “sticky client” that stubbornly refused to adjust its rate triggered varied outcomes.

Looking up

Note that these results illustrate Dynamic Airtime Scheduling’s impact on downlink performance: predominantly one-way file transfers, with some uplink TCP ACK and control traffic. In real life, downlink traffic very often exceeds uplink traffic, but whether this is true for your WLAN really depends upon application mix.

An AP-driven optimization like Dynamic Airtime Scheduling cannot exert tight control over uplink performance because, using standard 802.11 DCF/HCF, APs cannot stop clients from transmitting. According to Aerohive VP of Product Management Adam Conway, uplink data is also impacted more heavily by each client’s Packet Error Rate (PER).

However, Dynamic Airtime Scheduling does take total client airtime (downstream plus upstream) into account when deciding how to schedule traffic. It can also influence the rate at which TCP applications send data upstream by slowing the return of TCP ACKs, applying upper-layer back-pressure to throttle selected clients. While this isn’t as effective as airtime scheduling, Aerohive argues that it can still have a positive effect on uplink performance—especially when the overall traffic mix is bi-directional.
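
The back-pressure argument follows from basic TCP behavior: a sender can have at most one window of unacknowledged data in flight per round trip, so holding ACKs stretches the effective RTT and caps the sender’s rate. A back-of-the-envelope illustration, assuming a 64 KB window:

```python
window_bytes = 64 * 1024   # assumed receive window

# Throughput is bounded by window / RTT, so stretching the RTT throttles the sender.
for rtt_ms in (5, 10, 20, 40):
    cap_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:>2} ms -> upstream capped near {cap_mbps:6.1f} Mbps")
```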

During our uplink-only tests, our four fast clients’ upstream throughputs ranged from 30 to 50 Mbps, with an average run time of 30 seconds. Adding a slow client resulted in lower throughput and a longer run for that one client, but did not have a noteworthy impact on faster clients. In this case, turning Dynamic Airtime Scheduling on did not visibly change our WLAN’s performance.

According to Conway, “The reason you don’t see much improvement [on your uplink test] is that there is a lot less precision with upstream performance (because we hold the RTT and don’t actually affect the 802.11 MAC).” Furthermore, “the PER is more meaningful than Airtime Scheduling so you get desired results with or without Airtime Scheduling. If the PER is the same for all clients (like with VeriWave) then [Dynamic Airtime Scheduling] is really easy to see.” He suggested that tests with more clients and bi-directional tests might generate more readily-visible benefits.

We did run bi-directional tests, composed of three downloads and one upload. Slowing one download client degraded the throughputs experienced by other download clients, without impacting our upload client. When we enabled Dynamic Airtime Scheduling, all fast client downloads improved—but we still didn’t see significant uplink impact. Conway demonstrated more impact in his own bi-directional IxChariot tests using mostly Intel a/b/g and n clients.

It may be that small bi-directional tests are more strongly influenced by client-specific behaviors. This is why we conducted our tests with three identical clients and one different client, sticking to 802.11n. Moving one of the identical clients let us cause location-based differences without being distracted by client-specific behaviors. On the other hand, including one different client gave us confidence that our results were not unique to one card.

Bottom line

Our tests, while conducted methodically and repeated to produce consistent results, are still informal open air tests. Although we ran 2.4 GHz tests, we chose to describe only 5 GHz results because they had no outside interferers. Similarly, although we tested some mixed 802.11g/n scenarios, we focused on our 11n results because they cannot be achieved through allocation based on protocol type alone.

In the end, it doesn’t much matter whether a slow client is a cranky or mis-configured 802.11n device or a legacy 802.11g NIC that maxes out at 54 Mbps. Any client that requires longer transmit times can dominate the downlink—and we saw for ourselves that Dynamic Airtime Scheduling made a difference. Even though we did not experience much uplink benefit, we never paid a penalty for enabling Dynamic Airtime Scheduling. Admins may as well turn this feature on; it is available at no additional charge to Aerohive customers running HiveOS 3.2 or later.

Real-world benefits depend on client and application mix, so we encourage admins to run their own open air tests, with representative devices and traffic. You may not see improvement in lightly-used WLANs where clients operate at similar data rates, but throughput increases are likely in congested, dense WLANs where competition is high, and in sparse or diverse WLANs where one really bad apple can spoil the whole bunch.

Lisa Phifer owns Core Competence, a consulting firm focused on business use of emerging network and security technologies. She has been involved in the design, implementation, assessment, and testing of NetSec products and services for over 25 years.
