Sunday, January 22, 2012

Measuring Wi-Fi Performance using Ruckus Wireless Zap

Recently, I was provided a free copy of a Wi-Fi performance testing tool for iOS called ZapPerf, by Zaib (@WLANBook). This tool is a port of the open-source Zap test tool released in 2010 by Ruckus Wireless. This spurred me into doing a little performance testing using both the iOS version and the Mac version.

The Zap test tool is a bit different from other performance test tools out there, such as iperf, in that it measures the consistency of performance rather than peak or average throughput. This translates into a better measurement of a Wi-Fi network's ability to handle real-time traffic that cannot tolerate momentary performance degradation, such as voice and video. For example, streaming an H.264-compressed high-definition Blu-ray quality video could require a consistent 25-35Mbps of throughput. If performance drops even momentarily, the video quality could degrade, resulting in pixelation or freezing that would be unacceptable to the viewer.

Ruckus published a whitepaper on Characterizing Wireless Network Performance, in which they describe the design of the Zap tool:
Prior to Zap, the focus of existing tools in the market has been on measuring average throughput, not worst-case throughput. Ruckus engineers originally designed Zap to measure and predict what type of performance they could expect most of the time (not just some of the time), using a large number of samples.
The output of the tool is a Cumulative Distribution Function (CDF), which is a representation of the probability of achieving a given throughput level.

[Figure: Cumulative Distribution Function example showing the percentage of time the system can achieve a given throughput value]
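The CDF reading used throughout this post (e.g. "achieved X Mbps at all times, 100th percentile") can be reproduced from raw throughput samples. Here is a minimal sketch; the function name and sample values are hypothetical, not output from the tests in this post:

```python
def achievable_throughput(samples, fraction):
    """Return the throughput achieved at least `fraction` of the time.

    In this reading, the 100th percentile is the worst-case (minimum) sample.
    """
    ordered = sorted(samples, reverse=True)  # best to worst
    # Index of the last sample still within the requested fraction of time
    idx = max(0, int(round(fraction * len(ordered))) - 1)
    return ordered[idx]

# Hypothetical throughput samples in Mbps
samples = [212, 215, 210, 208, 214, 179, 211, 213, 209, 212]

print(achievable_throughput(samples, 0.90))  # -> 208, achieved 90% of the time
print(achievable_throughput(samples, 1.00))  # -> 179, worst case
```

Plotting throughput (x-axis) against the fraction of samples at or above each value (y-axis) yields the CDF curve shown in the Zap output.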

I decided to do some testing with two systems, an iPad 1 and a MacBook Pro (Mid 2010). I also ran multiple tests on each system with different QoS values to determine how Wi-Fi quality of service impacts device performance. Both systems were tested with two QoS values: Best Effort (DSCP 0, Zap option -q0x00) and Voice (DSCP 46, Zap option -q0xB8).

Note - The Zap QoS option is a bit confusing, as it requires a hexadecimal value representing the entire 8-bit IP ToS byte instead of only the 6-bit DSCP portion. Hence the hex values above account for the two low-order ECN bits as well.
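The conversion is just a two-bit left shift: the DSCP value occupies the upper six bits of the ToS byte, with the ECN bits (zero here) in the low two. A quick sanity check (the helper name is mine, not part of Zap):

```python
def dscp_to_tos_hex(dscp):
    """Shift the 6-bit DSCP into the high bits of the 8-bit ToS byte (ECN = 0)."""
    return "0x{:02X}".format(dscp << 2)

print(dscp_to_tos_hex(0))   # Best Effort  -> 0x00
print(dscp_to_tos_hex(46))  # Voice (EF)   -> 0xB8
```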

iPad 1 Results
This is a first-generation iPad, which sports an 802.11n 1x1:1 radio chipset capable of a 65Mbps raw Wi-Fi data rate.

You can see that the performance of the iPad 1 was very similar for both QoS queues. This is likely because testing was performed using an 11n access point in a non-congested location, with little traffic other than the system under test. The AP and RF spectrum were able to deliver a good user experience above 30Mbps at all times (100th percentile).

MacBook Pro (Mid 2010) Results
This is a MacBookPro6,2 Core i7 2.66GHz (Mid 2010) model, which sports an 802.11n 2x2:2 radio chipset capable of 300Mbps raw Wi-Fi speed.

Contrast the performance of the iPad with the MBP. The MacBook is capable of 2 spatial streams and 300Mbps raw speed, which saturates the AP and RF spectrum capabilities more thoroughly. In this scenario, we can see that the Voice queue clearly outperforms the Best Effort queue, all other variables held constant. The Voice queue is able to achieve 179Mbps at all times (100th percentile), while the Best Effort queue is only able to match that performance about 90% of the time. The small amount of competing Wi-Fi traffic in the area reduces the guaranteed service level of the Best Effort traffic down to 96Mbps.

Concurrent Voice and Best Effort Traffic
Next, I decided to stress the network a little bit more to see the difference in performance between QoS queues when the network is under heavier load. For this test I used two MacBooks, both capable of 300Mbps, with one in the Voice queue and one in the Best Effort queue.

I was expecting to see a significant performance gap between the Voice and Best Effort queue, highlighting the benefit of QoS traffic prioritization. Here's what resulted:

My results matched my expectations. The stream in the Voice queue significantly outperformed the stream in the Best Effort queue. The Voice stream was able to achieve over 212Mbps for 98% of the time and resulted in worst-case performance of 179Mbps. You will also notice that the performance remained very consistent throughout the test, as indicated by the relatively flat line across the graph.

The Best Effort stream, on the other hand, saw more varied performance, indicated by the sloped line in the graph. This stream was able to achieve 146Mbps for 98% of the time, but only 124Mbps worst-case.

Revolution or Evolution? - Andrew's Take
When it comes to Wi-Fi and preparing a network for rich multimedia applications such as video streaming, testing consistency of performance becomes more important than measuring peak or average performance, which can mask temporary degradation when sampling a large amount of data. The Zap tool by Ruckus Wireless offers a way to measure the sustainable throughput of a Wi-Fi network that more closely resembles the expected user experience.



  1. Interesting tool. What type of AP was used in this test?

  2. I tested with a Ruckus 7962 access point, but the tool can be used between two endpoints on wireless equipment from any vendor.


  3. If you want really consistent tests across all device types, always use a laptop on the other end connected via a non-blocking medium like gig ethernet (not sure what to do with .11ac tho...)

    I find that when running tests directly to an AP, the processor limitations on some models running embedded implementations of ZAP can cause the reading to be lower than the actual value.

    This is only on really really good links though (>170 Mbit/s).

  4. £13.99 on app store? Not sure how they can justify that in comparison to other apps.

    1. ... which other apps? I would expect the price to relate to the development time and the number of expected downloads, or the value provided by the app. The wireless AP test market is much smaller than the "killing time on the bus" market. There are plenty of other expensive apps out there. The one I see is developed by a third party ... good on them!

  5. Thanks for that information Rob. I actually plan on using two laptops to test across non-Ruckus wireless equipment.

  6. ZAP source code is available on Google Code:

  7. Hi Andrew - for the first 2 tests using the iPad and MBP you mentioned that you ran the tests on non-loaded networks. I fail to understand how you could get different results for BE vs. prioritized traffic if the network is not loaded. Prioritization of traffic only makes sense when there are multiple flows competing and the ones with higher DSCP get prioritized. But if there's only one flow (test BE and then non-BE) then there's no question of competing flows.

    Can you please explain.


    1. Hi mo,
      I can see your confusion, especially if you're familiar with wired network QoS. In Wi-Fi, QoS prioritization also has components that affect network overhead due to collision avoidance. Specifically, the inter-frame spacing and contention window size are different for the various traffic priorities. These differences provide a statistically lower wait time between frames for transmission.

      You can read more about that in my QoS series, see parts 4 and 5:
      Wireless QoS Part 4 - Arbitration Interframe Spacing
      Wireless QoS Part 5 - Contention Window and Final Thoughts
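As a rough illustration of the EDCA differences described in that reply, the sketch below compares the mean channel-access delay for the Voice and Best Effort access categories, using the default WMM/EDCA parameters for an OFDM PHY (9 µs slot, 16 µs SIFS, AIFSN and CWmin per access category). It ignores retries, TXOP, and contention with other stations, so it is illustrative only:

```python
SLOT_US = 9    # OFDM slot time, microseconds
SIFS_US = 16   # OFDM SIFS, microseconds

# Default WMM/EDCA parameters per access category
EDCA = {
    "AC_VO (Voice)":       {"aifsn": 2, "cwmin": 3},
    "AC_BE (Best Effort)": {"aifsn": 3, "cwmin": 15},
}

def mean_access_delay_us(aifsn, cwmin):
    """AIFS plus the mean of the initial random backoff (uniform over 0..CWmin slots)."""
    aifs = SIFS_US + aifsn * SLOT_US
    mean_backoff = (cwmin / 2) * SLOT_US
    return aifs + mean_backoff

for ac, p in EDCA.items():
    print(ac, mean_access_delay_us(p["aifsn"], p["cwmin"]), "us")
```

With these defaults, Voice frames wait about 47.5 µs on average before transmitting versus about 110.5 µs for Best Effort, which is the statistical edge that shows up in the test results above.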


  8. May I ask how you used the zap tool on your MacBook? I'm having trouble installing it on Mac OS after downloading zap_mac_20100111 as indicated on the Ruckus blog page

    1. It's not obvious, but zapd must be running (even on the client). In other words, you run zapd on the 'server', zapd on the 'client' and then zap on the client.

  9. Good day, for those who are in need of a Zap GUI for Windows and Linux, you could try zPerf (inspiration came from jperf). One zPerf feature I think is useful is being able to generate a performance graph easily from Zap output and compare it with other results by combining them into one graph. You could visit (WIP) for more info and here's the direct link. Let me know what you think of it, thanks.

    @andrew nice article by the way