Breaking News

Wi-Fi news from LitePoint

New Wi-Fi standard will provide breakthrough speed and performance for video streaming, gaming, and virtual reality applications

SAN JOSE, California — Dec. 15, 2021 – LitePoint, a leading provider of wireless test solutions, today announced it is collaborating with leading chipset maker MediaTek to deliver turnkey design validation test solutions for MediaTek’s Wi-Fi 6/6E Filogic chips. The collaboration will enable MediaTek customers to simplify testing and speed time to results for this new family of high-performance connectivity chipsets.
Wi-Fi 6/6E provides breakthrough connectivity experiences delivering multigigabit, low latency Wi-Fi and reliable connectivity for applications like streaming, gaming, AR/VR and more.

Through this strategic collaboration, LitePoint worked with MediaTek to create a version of LitePoint IQfact+ test automation software that enables MediaTek customers to test their Wi-Fi 6/6E designs with a complete and easy-to-use test flow. IQfact+ software works with the LitePoint IQxel family of test systems to provide MediaTek customers with chipset-specific test development software that enables rapid design validation through volume manufacturing with minimal engineering effort.

“LitePoint continues to drive innovation in test solutions for Wi-Fi products and we are pleased to be working with MediaTek to enable their customers to bring Wi-Fi 6/6E devices to market,” said Anna Smith, Vice President Worldwide Sales, LitePoint. “IQfact+ and IQxel offer MediaTek’s customer base a complete and easy-to-use testing platform for rapid design validation.”

“As the market changes and more people are working and studying from home, many applications rely on stable networking service, like video conferencing and 4K streaming. MediaTek’s Filogic family delivers fast, reliable and always-on connected solutions to give consumers a smooth, fast and interactive connected experience with incredible video quality,” said Alan Hsu, Corporate Vice President & General Manager, Intelligent Connectivity at MediaTek. “LitePoint’s testing platform will help our customers quickly and easily validate our Wi-Fi 6/6E solutions to speed up the time to market for the next generation of devices.”

Technical Details
LitePoint’s IQxel family of fully integrated RF PHY test platforms includes signal generation and analysis for Wi-Fi connectivity, meeting the needs of product development and high-volume manufacturing. The IQxel product family includes support for WLAN IEEE 802.11a/b/g/n/ac/ax/be with a frequency range from 400 MHz to 7300 MHz for validation in the 6 GHz band, as well as a full range of connectivity standards such as Bluetooth 5.x, Zigbee and Z-Wave.
To learn more about LitePoint’s IQxel product family, please visit: https://www.litepoint.com/products/iqxel-mw-7g

LitePoint’s Rex Chen is the author of this blog, which is based on his presentation at the RCR Wireless News Editorial Webinar titled Minding the mid-band: C Band and CBRS efforts.

Wireless spectrum has become one of the most important resources for staying connected because spectrum is the utility underlying all wireless communication. Mid-band spectrum, specifically in the 2 GHz – 4 GHz range, is playing an important role in 5G, both globally and in the U.S. Key mid-band allocations include:

  • Citizens Broadband Radio Service (CBRS): 150 MHz of spectrum in the 3550 MHz to 3700 MHz band in the U.S. It was previously used by the U.S. government until the FCC identified it as spectrum that could be used for broadband services.
  • C band: frequencies between 4000 MHz and 8000 MHz that are now being used for satellite communications, radar systems and unlicensed use cases such as some Wi-Fi enabled devices.

In 2020, the U.S. C Band spectrum auction raised more than $81 billion, demonstrating the immense value and potential of mid-band spectrum for wireless communication. Each mobile network operator has a different mid-band strategy based on spectrum holdings – both in terms of how much spectrum they own and whether they need more low or high band frequencies.

Mid-band spectrum has the advantage of known transmission characteristics that are similar to those of existing 4G frequency bands. Mid-band frequencies are considered “Goldilocks spectrum” because they combine good coverage (similar to low-band frequencies) with fast data rates (similar to high-band frequencies). But these RF bands pose challenges because of how they have been divided among incumbent users, so MNOs need to devise spectrum sharing and spectrum usage strategies.

Smartphone designs using mid-band spectrum can support world phones with global cellular coverage capabilities. In many instances the spectrum is contiguous, and the short wavelength allows innovations like MIMO with 8 to 16 spatial streams. This enables a high degree of spatial multiplexing, where operators can use beamforming techniques to target users spatially, increasing capacity.

Why Mid-Band is so Valuable
5G mid-band deployments have been occurring across multiple countries, including the U.S., Europe, Korea and China, over the past year and a half. This trend will continue as MNOs start to work with higher frequency ranges of the spectrum.

Mid-band spectrum provides the opportunity for contiguous spectrum, with the potential for channels that are 200 to 400 MHz wide. This presents new challenges for filters and other RF components, with requirements that must be met by both mobile devices and base stations. LitePoint is seeing instances where, in actual deployments, carriers aren’t using the entire capability because they’re still testing to make sure it works across the board.

The increase in remote work, video conferencing and other high-bandwidth applications means that uplink capacity is becoming more important than ever. This in turn is leading carriers and chipset makers to rethink how they mix and match spectrum resources. The many inherent use cases for 5G can make deployment challenging, but the variety of device requirements has made mobile network carriers understand that mid-band is a necessary part of 5G growth moving forward.

To gain more insights on 5G mid-band status and wireless test challenges in this spectrum, as well as hear what other industry leaders have to say, watch this editorial webinar. You can access it on the LitePoint webinar page.

LitePoint’s Eve Danel is the author of this blog. In this post, you’ll learn about updates to the Bluetooth, Ultra-Wideband and Wi-Fi specifications that improve location capabilities.

Recent standards updates to Bluetooth, Ultra-Wideband and Wi-Fi have introduced new, more accurate location capabilities that allow wireless device designers to use these technologies to support new indoor location applications. In this blog post, we will review these methods and compare the improved capabilities each provides.

These location capabilities can be used to find people or objects with high accuracy where GPS and other satellite technologies – which are primarily for outdoor applications – lack precision or are blocked by buildings or other obstructions. While other indoor positioning systems can detect the location of an object, their accuracy can be on the order of several meters – which means that they cannot be used in many applications where more precision is required.

Both Bluetooth and Wi-Fi have already been used in many indoor-location deployments. Previous versions of these technologies based their location capabilities primarily on the received signal strength indicator (RSSI). This method estimates the distance between the transmitter and the receiving point by measuring the received signal strength, then performs positioning calculations based on the corresponding signal attenuation. The accuracy of the RSSI method is limited by several factors: above all, obstacles between the transmitter and the receiver add attenuation that reduces measurement accuracy. The signal strength measurements can also be off by a few dBs if they are not calibrated, making the location calculation inaccurate.
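As a rough sketch of how an RSSI-based distance estimate works (the 1 m reference power and path-loss exponent below are illustrative assumptions, not values from the Bluetooth or Wi-Fi specifications), the log-distance path-loss model can simply be inverted:

```python
def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model to estimate range in meters.

    ref_power_dbm: RSSI expected at 1 m from the transmitter (a calibration
    value; -40 dBm is an illustrative assumption).
    path_loss_exponent: ~2.0 in free space, 3-4 indoors with obstacles.
    """
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# 20 dB below the 1 m reference in free space implies 10 m ...
print(rssi_to_distance(-60.0))  # 10.0
# ... but the same -60 dBm reading under heavy indoor attenuation
# implies a much shorter range, which is why uncalibrated or obstructed
# RSSI readings translate into meters of position error.
print(rssi_to_distance(-60.0, path_loss_exponent=3.5))  # ~3.7
```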

The RSSI method is also inherently not very secure. It can be subjected to a “man-in-the-middle” (MITM) attack, in which an attacker amplifies the transmission to make the transmitter appear closer than it really is.

The next generation of positioning technology aims to unlock an even higher level of security as well as accuracy down to the tens of centimeters. Micro-location opens up a new generation of use cases allowing users to interact very precisely with their environment and the objects around them.

Bluetooth Low Energy 5.1

The recent Bluetooth 5.1 standard adds several direction-finding enhancements that provide angle information for the incoming signal. The standard specifies two methods: angle of arrival (AoA) and angle of departure (AoD).

For direction finding, Bluetooth 5.1 devices transmit specially formatted packets, and Bluetooth 5.1 receivers use an antenna array with at least two antennas to compute the angle of incidence from the phase difference between the antennas, the signal’s wavelength and the distance between the antennas.

Combining RSSI information with either AOA or AOD provides the ability to determine the device location with better accuracy.

Figure 1: Bluetooth Low Energy 5.1 Direction Finding

Ultra-Wideband (UWB)

Ultra-Wideband, or UWB, based on the IEEE 802.15.4z-2020 HRP ERDEV (Enhanced Ranging Devices) amendment, provides high-accuracy, secure positioning.

UWB doesn’t use signal strength to evaluate distance. Instead, it uses time of flight (ToF).

ToF measures the propagation time it takes for the signal to travel from the transmitter to the receiver. Signals travel at the speed of light regardless of environment, so distance estimation based on timing is more robust and is not affected as much by the environment as the RSSI methods described earlier.
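The arithmetic behind ToF ranging is simply distance = speed of light × propagation time; a minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, the same in any propagation environment

def tof_to_distance(tof_s):
    """Convert a one-way time of flight (seconds) to distance (meters)."""
    return SPEED_OF_LIGHT * tof_s

# 1 ns of flight time is roughly 30 cm, which is why centimeter-level
# ranging demands sub-nanosecond timestamp resolution.
print(tof_to_distance(1e-9))  # ~0.3 m
```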

UWB is more robust to the man-in-the-middle attack described earlier because a signal cannot be intercepted and retransmitted without adding latency. In addition, frames containing a scrambled timestamp sequence (STS) with AES-128 encryption provide an added layer of security.

Figure 2: UWB Time of Flight

Wi-Fi IEEE 802.11az

The IEEE 802.11az Next Generation Positioning (NGP) standard is nearing completion. It builds upon the fine timing measurement (FTM) feature that was part of the IEEE 802.11mc standard release. Unlike the RSSI method, which uses signal strength to estimate distance, FTM uses round-trip time information to estimate the distance between Wi-Fi enabled stations and access points.

The 802.11az standard is designed to improve upon legacy FTM with several NGP enhancements that are aligned with the 802.11ax/Wi-Fi 6 standard. The Wi-Fi Alliance is also working on the next-generation Wi-Fi Location R2 program based on NGP.

Figure 3: Wi-Fi Timing Measurement Round Trip Time

FTM uses time of flight to measure distance, with a mechanism very similar to UWB’s ToF described earlier. Wi-Fi client stations and access points exchange a series of messages with time-of-departure and time-of-arrival timestamps that allow them to derive the round-trip time. The round-trip time mechanism doesn’t require the clocks on the devices to be synchronized.
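A sketch of why clock synchronization isn’t needed (the timestamp names follow the usual FTM description; the clock offset and turnaround values are illustrative): each side only subtracts timestamps taken on its own clock, so a constant offset between the two clocks cancels out.

```python
C = 299_792_458.0  # speed of light, m/s

def ftm_distance(t1, t2, t3, t4):
    """Estimate distance from one FTM message exchange.

    t1: initiator transmits the FTM frame  (initiator's clock)
    t2: responder receives it              (responder's clock)
    t3: responder transmits the ACK        (responder's clock)
    t4: initiator receives the ACK         (initiator's clock)
    Only (t4 - t1) and (t3 - t2) are used, so a constant offset
    between the two clocks cancels.
    """
    rtt = (t4 - t1) - (t3 - t2)
    return C * rtt / 2

# Illustration: a 10 m link, a responder clock offset of 5 s and a
# 100-microsecond turnaround still yield the correct distance.
tof = 10.0 / C
print(ftm_distance(0.0, 5.0 + tof, 5.0 + tof + 100e-6, 2 * tof + 100e-6))
```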

802.11az is designed to improve upon the legacy FTM by utilizing the latest features in the 802.11ax standard.

  • For improved accuracy, 802.11az makes use of wider channel bandwidths, up to 160 MHz for Wi-Fi 6, and this could be extended to 320 MHz for 802.11be (the Wi-Fi 7 generation), since wider bandwidth provides higher resolution. For better resilience to multipath effects, MIMO signal properties are utilized.
  • For improved protocol efficiency, 802.11az reuses the 802.11ax PHY null data packet (NDP) frames already defined for beamforming sounding. The timestamps are based on the exchange of HE ranging NDP frames or HE trigger-based (TB) ranging NDPs.
  • The multi-user capabilities of Wi-Fi 6 also benefit 802.11az: when using trigger-based ranging with uplink and downlink OFDMA, the AP can get ranging information from multiple stations in a single transmit opportunity. This significantly reduces the overhead needed for exchanging ranging information and improves scalability to more stations.
  • 802.11az introduces PHY- and MAC-layer security enhancements to prevent spoofing of or eavesdropping on ranging messages.

Testing Concerns

What’s true of all these wireless location technologies is that careful design validation testing is required to ensure the best performance. LitePoint’s wireless test product portfolio has the right solution to enable these technologies.

The IQgig-UWB™ tester is a fully integrated, one-box test solution for physical-layer testing of UWB-enabled devices. It supports all the required transmitter and receiver testing, and it also has a specific mechanism for testing UWB ranging: a precision trigger-and-response mechanism that allows calibration and verification of ToF and AoA measurements.

The IQxel-MW 7G is designed to meet the demands of Wi-Fi connectivity testing up to 7.3 GHz for Wi-Fi 6/6E. It also supports Bluetooth device standards testing, including Bluetooth LE 5.1.

To listen to my entire webinar on this topic, please go to the LitePoint webinar page.

LitePoint releases turnkey manufacturing test solution for Microchip’s next generation wireless ICs

SAN JOSE, California – LitePoint, a leading provider of wireless test solutions, today announced a collaboration with Microchip Technology Inc. to deliver simplified design validation and turnkey manufacturing test solutions for next-generation Internet of Things (IoT) systems based on Microchip’s Bluetooth and Wi-Fi chipsets.

As part of the collaboration, LitePoint has released a version of its IQfact+™ test automation software tailored for Microchip’s new WFI32E01 series of Wi-Fi MCU modules.

LitePoint’s IQfact+ is a turnkey, chipset-specific test development software that enables rapid-volume manufacturing with minimal engineering effort. With IQfact+, wireless system developers get access to proven software that enables control of devices under test and the test system with calibration routines that are uniquely optimized to reduce time and maximize throughput.

Microchip Technology is a leading provider of Bluetooth and Wi-Fi ICs, modules and reference designs that ensure robust, reliable and safe connections. Microchip’s comprehensive wireless connectivity portfolio meets the range, data rate, interoperability, frequency and topology needs of many IoT applications.

“The rapid growth of IoT is driving the deployment of billions of wirelessly connected products, making the wireless connection quality of these devices paramount,” said Adam Smith, Director of Product Marketing at LitePoint. “We’re excited to work with Microchip to bring a simplified wireless test solution that will ensure their customers are able to set up and test the wireless quality of Microchip-based devices in a matter of hours, compared to traditional test development, where setup and deployment can take weeks or even months.”

“LitePoint provides a wireless test system that offers significant time-to-market advantages which are critical to our customers who want to integrate our chip-down reference designs into their applications,” said Steve Caldwell, VP of Microchip’s wireless solutions group. “Combining our innovation with LitePoint’s high volume production test technology provides our customers the tools they need to meet their customers’ demand for increasingly complex products, delivered at the highest quality.”

Technical Details
LitePoint’s IQxel fully integrated test platforms include signal generation and analysis for Wi-Fi connectivity, meeting the needs of product development and high-volume manufacturing. The IQxel product family includes support for WLAN IEEE 802.11a/b/g/n/ac/ax/be, as well as a full range of connectivity standards (Bluetooth 5.x, Zigbee, Z-Wave), with a frequency range from 400 MHz to 6000 MHz, upgradable to 7300 MHz for the 6 GHz band.

For faster time to market, turnkey IQfact+ software solutions offer customized testing of leading chipsets and enable thorough design verification and rapid volume manufacturing with minimal engineering effort.
All trademarks are the property of their respective companies.

Testing UWB’s New Features

Wireless technologies that enable new micro-location capabilities are expanding the use cases of applications, including those that demand a higher level of security. In the automotive industry, for example, there is demand for more secure authentication between the car and the key fob. This has created a perfectly timed opportunity for ultra-wideband (UWB) technology, whose unique advantages in localization, position detection and direction finding, compared with other technologies, make it a good fit for the increased security needed. UWB is ideal for applications that enable two devices to securely find each other and understand where they are relative to one another in space.

UWB uses a technique called time of flight (TOF) to enable its secure method of authentication. Unlike other technologies, which estimate the distance separating two devices by signal strength, TOF involves sending and receiving signals and calculating distance based on the time it takes for a complete send/receive cycle. It is relatively easy to hack signal-strength measurements, but time is nearly impossible to fake. This high level of security makes UWB relevant today.

The three primary use cases for UWB are:

Access control: Embedding UWB into a device such as a key fob allows tight access control, whether to a secure building or a car. A UWB-based system can estimate proximity to within 10 centimeters. The access system can be set to allow access only when an authorized user is within a given distance, and anyone not carrying the UWB device will be denied entry automatically.

Location-based services: The precise proximity detection enables location-based services such as navigation for indoor spaces and contextual content based on the user’s location. As you move from room to room, the system provides information relevant to each room. Think of a museum, or an office complex, where there is information readily available specific to a certain location within the facility.

Device-to-device communication: UWB allows devices to securely share information, and this can be combined with its localization capabilities to push relevant information from one device to another.

UWB Ranging: A Closer Look
UWB uses electronic pulses that are short in time and wide in frequency—hence the name ultra-wideband. Essentially, one UWB device emits a signal, and a device that detects the signal sends back a response. This is called single-sided ranging, or single-sided two-way ranging. By measuring the time between sending the signal (called a ping) and receiving the reply (pong), the sending device can calculate the distance separating them with precision.

In some cases, the sending device will send another signal upon receiving the reply, which allows both devices to measure the time. This technique, called double-sided two-way ranging, adds a second layer of security and accuracy to the positioning.
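A sketch of the double-sided arithmetic (this is the commonly cited asymmetric DS-TWR formula from the 802.15.4z literature; the variable names and timing values are illustrative): each device measures one round trip and one reply delay on its own clock, and combining them cancels most of the clock-drift error between the two devices.

```python
def ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric double-sided two-way ranging time-of-flight estimate.

    t_round1: initiator's ping -> pong round trip    (initiator's clock)
    t_reply1: responder's turnaround before the pong (responder's clock)
    t_round2: responder's pong -> final round trip   (responder's clock)
    t_reply2: initiator's turnaround before 'final'  (initiator's clock)
    """
    return (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)

# With a true one-way flight time of 100 ns and unequal turnarounds,
# the formula recovers the flight time despite neither side knowing
# the other's clock.
tof = 100e-9
print(ds_twr_tof(2 * tof + 200e-6, 200e-6, 2 * tof + 300e-6, 300e-6))
```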

Crucial UWB Quality Tests

When evaluating UWB device performance, there are three areas where measurements are important.

Regulatory: UWB signals overlap licensed spectrum, so it is important to validate that the device is not running afoul of regulations established by each country’s spectrum governing bodies. While absolute transmitter power and the power spectrum mask are the main regulatory metrics, other aspects of the device’s performance indicate whether it will meet the power mask requirements, including transmitter calibration and pulse shape testing. The more power the device emits, the more energy will splatter out of the desired channel and potentially interfere with neighboring devices or, worse, violate licensed frequency bands. The mask refers to the device’s ability to prevent that signal bleed, whether caused by spurious emissions or transmitter nonlinearity, while the power output affects the operating range of the end device. Good calibration maximizes power while maintaining valid spectral mask performance.

Interoperability: A key metric for UWB interoperability is frequency accuracy. Wireless devices use tuning capacitors to adjust the crystal that determines the broadcast frequency. If this tuning isn’t done properly, the result is carrier frequency offset (CFO), or frequency error. Put simply, the device isn’t operating at exactly the same frequency as the other devices. If the CFO is small, the device will still be able to communicate, but ranging measurements will have errors. If the CFO is large, the badly tuned device might not communicate with others at all. Frequency calibration is therefore a key part of ensuring device interoperability. Another tool is a measurement called the normalized mean square error (NMSE), which captures several different effects in the system and provides a summary measurement.
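A sketch of what an NMSE-style summary metric computes (this is the generic error-vector definition; the exact normalization in a given UWB test specification may differ): the residual error power between the measured and ideal waveforms, normalized by the ideal signal power.

```python
import numpy as np

def nmse_db(measured, ideal):
    """Error power relative to signal power, in dB (lower is better)."""
    measured = np.asarray(measured, dtype=complex)
    ideal = np.asarray(ideal, dtype=complex)
    err_power = np.mean(np.abs(measured - ideal) ** 2)
    sig_power = np.mean(np.abs(ideal) ** 2)
    return 10 * np.log10(err_power / sig_power)

# A uniform 10% amplitude error on every sample gives -20 dB NMSE;
# distortion, noise and CFO residue all raise this figure together,
# which is what makes it a useful single-number summary.
ideal = np.exp(1j * np.linspace(0, np.pi, 64))
print(round(nmse_db(1.1 * ideal, ideal), 1))  # -20.0
```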

Time of flight: There are two schools of thought about calibrating TOF. One is to calibrate the system at the printed circuit board level and assume that the over-the-air performance with the antenna attached will be within a narrow tolerance range. The other approach is to test the assembled device and calibrate TOF using its actual over-the-air performance. Either approach is effective, but the key takeaway here is that there are many different components and sources of variation in the system. The goal is to minimize variation and achieve the positional accuracy that UWB can provide.

The Angle of Arrival: What it is and How it Works
UWB can work in two modes, triangulation and peer-to-peer. Triangulation requires a series of “anchor” devices placed in the environment that a UWB “tag” device can use to compare its position relative to the multiple anchors. Peer-to-peer operation allows two devices to communicate directly without the need for anchor devices in the environment, but it requires that a device determine both distance and direction. Distance comes from ToF; direction is determined by angle of arrival (AoA). AoA requires that the device have at least two antennas: it measures the time (or phase) difference between the antennas as they detect an incoming signal, and this difference enables the device to calculate the angle from which the signal is coming.

The FiRa Consortium
The FiRa Consortium, founded in 2019, seeks to bring together industry and product experts and to develop an ecosystem in which UWB products are interoperable and work seamlessly with each other. Its membership has grown dramatically in the past year. The consortium will launch a certification program to ensure devices meet FiRa specifications for interoperability; the program will cover conformance test cases for both physical-layer (PHY) and medium access control (MAC) functions.

Certification means the device has been tested at an independent authorized testing lab (ATL) and found, after rigorous testing, to conform to FiRa specifications. That gives manufacturers and users a high degree of confidence that the devices will be interoperable with other devices.

LitePoint Solutions
LitePoint’s UWB testing system includes both hardware and software that enables pre-programmed testing scripts to maximize the testing of a particular chip. LitePoint’s IQgig-UWB is a fully-integrated, FiRa-validated system, designed for UWB PHY layer testing. Turnkey test solutions for UWB chipset and device manufacturers are available, including a FiRa PHY Conformance automation solution. IQgig-UWB is ideal for lab or high-volume manufacturing environments, for UWB-enabled modules and end-products and for UWB-enabled automotive, mobile, asset tracking or healthcare devices.

IQfact+, developed in close collaboration with leading UWB chipset vendors, is a software-based turnkey calibration solution for chipset manufacturers. It supports all of the key wireless connectivity technologies, including UWB.

Conclusion
UWB technology offers powerful capabilities for a wide range of settings where access control, location-based services and device-to-device communications are important. However, the performance achieved in the end application depends on ensuring that the key RF parameters meet a high degree of accuracy. These key measurements seek to ensure that devices meet regulatory guidelines and interoperability requirements as well as deliver a positive user experience. From a test point of view, calibrating a UWB device’s frequency and transmitter power is crucial, and precise time-of-flight and angle-of-arrival measurements are needed to validate the centimeter-level accuracy that UWB devices can provide. LitePoint’s solutions provide that testing, helping to unlock UWB’s power.

LitePoint’s Eve Danel is the author of this blog. In this post, you’ll learn about the latest Bluetooth specifications and new test requirements for Bluetooth 5.1.

In 2010, Bluetooth Low Energy (Bluetooth LE) was introduced with the Bluetooth 4.0 standard. Compared to previous generations of Bluetooth, which were known as classic, Bluetooth LE operates at lower power, increasing the battery life of host devices. In 2016, the Bluetooth 5 standard was introduced, doubling the data rate and increasing the operation range. Bluetooth 5.1 was released in 2019, adding direction-finding enhancements that enable location tracking applications.

Bluetooth 5.1 Use Cases

With the direction-finding enhancements in Bluetooth 5.1, the technology can provide GPS-like positioning capabilities indoors. Use cases range from asset tracking to locating people in buildings, indoor direction finding and object tracking, and they apply to nearly every market vertical, including industrial, medical, retail, hospitality, enterprise, home and transportation.

RSSI-Based Proximity Sensing

Bluetooth is already a popular technology for proximity sensing using Bluetooth beacons. In this method, the locator estimates distance using the received signal strength indicator (RSSI) value, which represents the strength of the signal that is measured on the locator’s receiver. RSSI is used to estimate the distance between a transmitter and a receiver. Knowing the transmitter’s power, it’s possible to estimate the location of an item by measuring signal attenuation.

Figure 1: RSSI based method

RSSI only provides a crude estimate of distance because the method is biased by the environment. Obstacles between the transmitter and receiver add attenuation; a crowd of people, for example, can attenuate the signal significantly and therefore decrease the accuracy of the distance estimate. In addition, the RSSI method can only detect that the device is located somewhere in a circular zone around the receiver, as shown in Figure 1 above, because it provides no directional information. By deploying multiple locators and using signal trilateration, it is possible to locate the device with more accuracy, but this also increases the complexity of the system.
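A sketch of the trilateration step (the locator coordinates and distances below are made up for illustration): subtracting the first locator’s circle equation from the others removes the quadratic terms, turning position estimation into a small linear least-squares solve.

```python
import numpy as np

def trilaterate(locators, distances):
    """Estimate a 2-D position from locator (x, y) coordinates and
    per-locator distance estimates (e.g. derived from RSSI)."""
    p = np.asarray(locators, dtype=float)
    d = np.asarray(distances, dtype=float)
    # (x - xi)^2 + (y - yi)^2 = di^2; subtracting the i=0 equation from
    # the rest removes the quadratic terms, leaving A @ [x, y] = b.
    A = 2 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three locators and exact distances to the point (3, 4):
locs = [(0, 0), (10, 0), (0, 10)]
dists = [5.0, np.hypot(7, 4), np.hypot(3, 6)]
print(trilaterate(locs, dists))  # ~[3. 4.]
```

With noisy RSSI-derived distances the least-squares solve still returns the best-fit point, which is why adding locators improves accuracy at the cost of system complexity.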

Increasing Accuracy with Angle of Arrival and Angle of Departure

Bluetooth 5.1 increases the accuracy of the RSSI method by providing angle of arrival (AoA) and angle of departure (AoD) information.

AoA is designed for use in applications like asset tracking, where a moving AoA transmitter (a mobile phone, for example) sends Bluetooth LE direction-finding signals using a single antenna, and a fixed AoA receiver (installed on the ceiling, for example) is equipped with an antenna array with a minimum of two antennas. The array is used to determine the direction of the transmitter from the phase differences between the antennas. The angle-of-arrival calculation is based on the signal’s wavelength, the distance between the antennas and the phase of the received signal.

Figure 2: AoA method
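The phase-to-angle relationship can be sketched as follows (the 2.402 GHz channel frequency and half-wavelength antenna spacing are illustrative assumptions, not requirements of the specification):

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def angle_of_arrival(phase_diff_rad, freq_hz=2.402e9, spacing_m=0.0625):
    """Angle of incidence (radians from broadside) for a two-antenna array.

    sin(theta) = phase_diff * wavelength / (2 * pi * spacing); a spacing of
    about half a wavelength (~6.25 cm at 2.4 GHz) keeps the result unambiguous.
    """
    wavelength = SPEED_OF_LIGHT / freq_hz
    return math.asin(phase_diff_rad * wavelength / (2 * math.pi * spacing_m))

print(angle_of_arrival(0.0))                        # 0.0: signal from broadside
print(math.degrees(angle_of_arrival(math.pi / 2)))  # ~30 degrees off broadside
```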

AoD is designed for use in applications like wayfinding for indoor navigation, where the AoD receiver (a phone, for example) receives the direction-finding signal transmitted by a fixed AoD transmitter (installed on the ceiling, for example) that is equipped with an antenna array with a minimum of two antennas. Just like with AoA, the angle is determined using the signal’s phase information.

Figure 3: AoD method

Bluetooth 5.1 Constant Tone Extension (CTE)

To enable AoA and AoD, the signal’s phase must be measured. For direction finding, the Bluetooth 5.1 core specification adds a constant tone extension (CTE) field, a sequence of bits with a variable duration from 16 microseconds to 160 microseconds. CTE is only supported for Bluetooth LE operating at 1 Mbps (mandatory) or, optionally, 2 Mbps. The CTE field contains a series of modulated 1 bits that must be transmitted at a single frequency with a constant wavelength in order to measure the phase of the received signal. Accordingly, the field is not subject to whitening, the process that scrambles signals to ensure there are no long strings of 1s or 0s.

Figure 4: CTE

How is Bluetooth 5.1 Tested?

The key to a successful technology introduction for location services is how accurately the location can be measured; Bluetooth 5.1 aims to provide sub-one-meter accuracy. Validation of the solution plays an important role in ensuring the system’s performance.

The Bluetooth SIG has updated the Bluetooth PHY test specifications to include validation of these new direction-finding capabilities. New test cases were added for transmitter and receiver tests of the AoA and AoD methods, across the supported combinations of Bluetooth data rates (1 Mbps and 2 Mbps) and the 1-microsecond and 2-microsecond switching and sampling times supported in the standard. Overall, 23 new test cases were added to the PHY tests to cover AoA and AoD.

CTE is a new concept for Bluetooth and the test cases are designed to ensure that it can be properly generated by the transmitter. On the receiver side, it’s important to ensure that the IQ measurements on the receiver CTE can be used to accurately derive the phase of the signal.

Figure 5: Complete AoA and AoD test setup

Figure 5 above shows a complete test setup for AoA and AoD transmitter and receiver testing. LitePoint’s IQxel-MW 7G supports the new direction-finding test cases. LitePoint also offers IQfact+ software packages that provide completely automated DUT and tester control for leading Bluetooth 5.1 chipsets.

AoA Transmitter and Receiver Testing

In an AoA transmitter (typically a mobile device), there is only a single antenna, and the CTE field is transmitted continuously, without switching, at the end of the PDUs. On the received signal, the tester verifies the maximum peak and average power of the CTE signal, as well as the carrier frequency offset and carrier drift of the transmitted signals.

The AoA receiver is more complex because it has multiple antennas and a switch. When receiving the CTE signal transmitted by the tester, the receiver switches between the multiple antennas in the array according to a predetermined pattern. The switching time is either 1 microsecond or 2 microseconds, as defined in the standard. The DUT also samples the received CTE during the allocated sampling slots of 1 or 2 microseconds, as defined in the standard. The IQ samples taken by the DUT are transmitted to the tester for analysis to ensure that the phase measurements are within spec.

AoD Receiver and Transmitter Testing

An AoD receiver (mobile device with single antenna) needs to sample the received CTE signal at the correct sampling slots. To verify this, the IQ samples that are taken by the DUT are transmitted to the tester for processing and analysis. Just like with the AoA receiver test, this test verifies that the phase values derived from the sampling are within specs.

AoD transmitter devices have an antenna array with a minimum of 2 antennas, therefore the transmitter test verifies that the antenna switching occurs during the proper allotted slots of the CTE and that the proper switching pattern is followed within the antenna array.

Please visit the IQxel-MW 7G and IQfact+ pages for more information on LitePoint’s test solutions for Bluetooth 5.1.

For more information download our application note on Testing BT 5.1 or download the replay of our webinar on this topic.

LitePoint’s Eve Danel is the author of this two-part blog series. Throughout these posts, you’ll learn about the key RF-PHY changes made in Wi-Fi 6/802.11ax compared to Wi-Fi 5 and older generations: the addition of a new frequency band, higher modulation rates, smaller subcarrier spacing, longer guard intervals and the introduction of the OFDMA modulation technique providing capacity improvements and latency reduction.

In my previous blog post, I explored some of the key RF-PHY changes in Wi-Fi 6, including a fundamental change to the way that Wi-Fi 6 operates, which is using orthogonal frequency division multiple access (OFDMA). Let’s take a closer look at Wi-Fi 6 OFDMA.

Resource Units (RUs)

With OFDMA, the channel bandwidth is divided into resource units (RUs) of various sizes. The size of an RU can vary from 26 subcarriers (about 2 MHz) up to 996 tones (about 77.8 MHz). The size and the location of the RUs are defined for 20 MHz, 40 MHz, 80 MHz channels, and 80+80 or 160 MHz channels.
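These bandwidth figures follow directly from the 78.125 kHz subcarrier spacing used by 802.11ax. As a quick sanity check (an illustrative Python snippet, not tied to any tester API):

```python
SUBCARRIER_SPACING_HZ = 78.125e3  # 802.11ax OFDMA subcarrier spacing

def ru_bandwidth_mhz(tones):
    """Occupied bandwidth of a resource unit with the given tone count."""
    return tones * SUBCARRIER_SPACING_HZ / 1e6

for tones in (26, 52, 106, 242, 484, 996):
    print(f"{tones}-tone RU ~ {ru_bandwidth_mhz(tones):.1f} MHz")
# 26-tone RU ~ 2.0 MHz ... 996-tone RU ~ 77.8 MHz
```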

Figure 1: RU location in 80 MHz channel

RUs are comprised of:

  • Data subcarriers, which carry the data and make up the majority of the subcarriers in an RU
  • Pilot subcarriers, used for phase tracking and channel estimation
  • DC subcarriers, located at the center frequency of the channel
  • Guard band/null subcarriers, used at the band edges to protect against interference from neighboring RUs

Different numbers and sizes of RUs can be allocated for transmissions to different users, based on how much data each station needs. The access point (AP) is responsible for RU assignment and coordination. For example, applications that require a lot of data, like streaming video, can be assigned a large RU, while applications that require very little data can be assigned a small RU. Each RU can use a different modulation scheme, coding rate and power level, and RU assignments can vary on a frame-by-frame basis.

Figure 2: OFDMA transmission showing 80 MHz channel divided into 8 RUs

For a downlink transmission, an HE MU PPDU carrying a mixture of 26-, 52-, 106-, 242-, 484-, and 996-tone RUs is transmitted from the AP to the stations. The HE MU PPDU preamble is transmitted over the whole channel and contains a new field called the SIG-B, which tells each station its RU size and frequency allocation, the modulation and coding scheme (MCS) and the number of spatial streams allocated by the AP.

Figure 3: HE MU PPDU

The HE-SIG-B field is only included in HE MU PPDU (downlink) and it includes the information for OFDMA and MU-MIMO resource allocation. The HE-SIG-B field includes a common field followed by user specific fields.

The common field of an HE-SIG-B content channel contains information regarding the RU assignment, the RUs allocated for MU-MIMO and the number of users in MU-MIMO allocations. The SIG-B user specific fields contain information for all users in the PPDU on how to decode their payload.

Wi-Fi 6 OFDMA creates a large number of permutations of possible RU sizes, frequency allocations and power levels that need to be validated to ensure the transmitter and receiver performance meets the standard criteria. All of these permutations can have different error vector magnitude (EVM) and spectral performance.

Wi-Fi 6 OFDMA uplink transmissions are even more complex than downlink operation, because in the uplink, the traffic must be transmitted simultaneously from multiple stations to the AP. In the uplink transmission, the AP acts as an operations and transmission coordinator.

First, the AP sends a trigger frame to all the stations that will be involved in the upcoming transmission, and then these stations transmit simultaneously on their respective RUs in response to the trigger frames. Based on the trigger frame, the client station will need to tune its timing, frequency and power levels to participate in this transmission.

Timing synchronization

Client stations participating in an OFDMA transmission must transmit within 400 ns of each other. To synchronize the clients, the AP transmits a trigger frame that contains the OFDMA subcarrier RU assignment for each station. In response, the participating clients must start their uplink transmission one short inter-frame space (SIFS) of 16 µs, with a tolerance of ±400 ns, after the end of the trigger frame, as mandated by the 802.11ax standard.
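The timing requirement amounts to a simple window check. In illustrative Python (the constants come from the standard, the helper name is ours):

```python
SIFS_US = 16.0        # short inter-frame space mandated by 802.11ax
TOLERANCE_US = 0.4    # +/- 400 ns allowed around the SIFS boundary

def uplink_start_in_spec(start_offset_us):
    """True if a station's uplink start time, measured from the end of
    the trigger frame, falls within SIFS +/- 400 ns."""
    return abs(start_offset_us - SIFS_US) <= TOLERANCE_US

uplink_start_in_spec(16.2)  # within the window -> True
uplink_start_in_spec(16.5)  # 500 ns late -> False
```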

Figure 4: Timing synchronization of uplink OFDMA

Frequency synchronization

To prevent inter-carrier interference (ICI) between the clients transmitting simultaneously, all stations participating in the transmission need to pre-compensate for carrier frequency offset (CFO). The client stations adjust their carrier frequency based on the trigger frame received from the AP. The 802.11ax standard requires the residual CFO error after compensation to be less than 350 Hz.

Figure 5: Wi-Fi OFDMA frequency synchronization

Power control

When traffic is transmitted between the AP and client stations located at various distances, power control is needed to ensure that stations closer to the AP do not drown out users farther away that are transmitting simultaneously. The 802.11ax standard requires the stations to adjust their power based on the estimated path loss to the AP: devices closer to the AP transmit at lower power, while devices farther away transmit at higher power, so that all signals arrive at the AP receiver at the same level. There are two classes of devices defined in the standard, based on how accurately they can control their power: class A devices control their transmit power within ±3 dB and class B devices within ±9 dB.
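The arithmetic behind this adjustment is simple. In the hypothetical Python sketch below (the helper name and the target receive level are illustrative, not taken from the standard), a station combines the target level at the AP with its estimated path loss:

```python
def required_tx_power_dbm(target_rx_dbm, path_loss_db, device_class="A"):
    """Nominal transmit power so the signal arrives at the AP at the
    target level, plus the power-control tolerance for the device class
    (class A: +/-3 dB, class B: +/-9 dB). Illustrative helper only."""
    tolerance_db = 3.0 if device_class == "A" else 9.0
    return target_rx_dbm + path_loss_db, tolerance_db

# A station seeing 60 dB of path loss, aiming for -65 dBm at the AP,
# should transmit near -5 dBm; a class A device realizes this within 3 dB.
nominal, tol = required_tx_power_dbm(-65.0, 60.0, "A")
```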

Figure 6: Wi-Fi OFDMA power control

The accuracy of the synchronization between the AP and the client is critical because a single bad actor (station) that doesn’t perform as expected will degrade performance for all other users that are sharing the same transmission.

Unused Tone EVM

Another concept introduced in Wi-Fi 6 OFDMA is the unused tone EVM metric. As discussed above, when the stations transmit in the uplink direction on their assigned RU, it is important that emissions do not spill over into other RUs, otherwise that will reduce the system capacity for other users.

Figure 7: Unused tone EVM

To evaluate the station’s performance, the 802.11ax standard introduced this new metric, a measure of the in-channel emissions generated by the stations. IEEE specifies RU leakage as a staircase mask requirement called unused tone EVM. It measures the EVM floor of each 26-tone RU across the whole channel bandwidth, excluding the position(s) of the active RU.
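Conceptually, the measurement walks the channel in 26-tone blocks and evaluates the leakage in every block that does not overlap the active RU. The hypothetical Python sketch below shows a flat-average version of the idea only; the actual IEEE requirement is a staircase mask whose limits depend on the distance from the active RU:

```python
import math

def unused_tone_evm_db(error_power_per_tone, active_tones, ru_size=26):
    """Average leakage power per 26-tone block across the channel,
    skipping blocks that overlap the station's active RU (simplified)."""
    active = set(active_tones)
    per_block_db = []
    for start in range(0, len(error_power_per_tone) - ru_size + 1, ru_size):
        block = range(start, start + ru_size)
        if active.intersection(block):
            continue  # the occupied RU is excluded from the check
        avg = sum(error_power_per_tone[i] for i in block) / ru_size
        per_block_db.append(10 * math.log10(avg))
    return per_block_db

# A flat -40 dB leakage floor over 104 tones, with the second 26-tone
# RU active, yields three evaluated blocks at -40 dB each.
floor = unused_tone_evm_db([1e-4] * 104, range(26, 52))
```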

The validation of uplink multi-user transmission in Wi-Fi 6 OFDMA is one of the most challenging areas of 802.11ax testing. It requires the tester’s signal generation and signal analysis functions to be coordinated in a single test sequence, with nanosecond-level measurement accuracy, to verify the required 400 ns timing of the uplink transmission.

Figure 8: Wi-Fi 6 trigger-based testing

When testing an AP design, the tester emulates a station, and it decodes the received trigger frame information in order to generate a trigger-based physical layer protocol data unit (PPDU) with the correct RU allocation. The tester also needs to adjust its power to the AP requirements in real time because it needs to respond to the AP with a nanosecond level of accuracy.

When testing a client station, the tester emulates an AP: it generates a trigger frame and then measures the trigger-based PPDU the station generates in response. The tester needs to measure the timing, frequency and power accuracy of this transmission to ensure it meets the 802.11ax standard requirements, including the RU leakage requirements.

Conclusion

Beyond testing the traditional transmit and receiver metrics that are necessary for any Wi-Fi generation, the chart below shows an overview of the new test areas that are specific to 802.11ax. These test sequences for Wi-Fi 6 OFDMA require a higher level of test complexity compared to previous Wi-Fi generations.

Figure 9: Wi-Fi 6 key test areas

If the number of test cases and complexity of Wi-Fi 6 OFDMA seems overwhelming, I encourage you to explore LitePoint’s IQfact+ automation solution. The software is tailored to control both the tester and the device under test (DUT) and it provides calibration and verification automation. LitePoint already offers IQfact+ packages for Wi-Fi 6 and Wi-Fi 6E chips for AP and client devices.

 

By Eve Danel

May 25, 2021

LitePoint’s Eve Danel is the author of this two-part blog series. Throughout these posts, you’ll learn about the key RF-PHY changes made in Wi-Fi 6/802.11ax compared to Wi-Fi 5 and older generations: the addition of a new frequency band, higher modulation rate, smaller subcarrier spacing, longer guard interval and the introduction of the OFDMA modulation technique providing capacity improvements and latency reduction. 

Drivers for Wi-Fi 6 vs. Wi-Fi 5: Capacity Improvement

In previous generations, updates to the Wi-Fi standard were mainly focused on increasing the raw throughput. Wi-Fi 6 is faster as well, but different in that it also focuses on improving user experience, especially in dense, congested environments with a large number of users, such as airports, stadiums or educational settings. In these environments, the user experience suffers from inefficiencies when too many users are competing for bandwidth, and multiple overlapping networks interfere with each other.

Wi-Fi 6 adds features that improve average throughput per user by up to four times over Wi-Fi 5 in these highly congested environments. Wi-Fi 6 is also very well suited for use in home networks, where phones and tablets join televisions, home automation controls and other systems in competing for bandwidth. Wi-Fi 6 is particularly important in applications such as video conferencing and content streaming that require very low latency.

Wi-Fi 6 vs. Wi-Fi 5: Key Changes to the RF Physical Layer

Wi-Fi 6, also known as high efficiency (HE) Wi-Fi, is based on the 802.11ax standard and operates in the 2.4 GHz and 5 GHz frequency bands. Wi-Fi 6E, also based on the 802.11ax standard, operates in the 6 GHz frequency band.

Wi-Fi 6 vs. Wi-Fi 5 and Wi-Fi 4


The chart above summarizes the key RF-PHY changes made to increase capacity and efficiency in Wi-Fi 6 vs. Wi-Fi 5. Some of the new features are designed to improve Wi-Fi top speeds, others to improve the user experience in outdoor spaces or in high multipath fading environments, and still others to improve capacity and reduce latency.

Other features not listed in the table, but that still have an impact on the network and devices include target wake time (TWT) for better battery savings and BSS coloring that is designed to improve spatial reuse in congested environments.

802.11ax Modulation

Wi-Fi 6 increases the modulation rates to improve peak data rates. The 802.11ax standard uses 1024-state quadrature amplitude modulation (1024QAM), with a peak data rate 25% higher than that of the 802.11ac standard, which used 256QAM. 1024QAM in Wi-Fi 6 encodes 10 bits of data per subcarrier, while 256QAM in Wi-Fi 5 encodes 8 bits per subcarrier.
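The 25% figure comes straight from the bits-per-subcarrier counts. As a quick illustration in Python:

```python
import math

def bits_per_symbol(qam_order):
    """Bits encoded per subcarrier for an M-ary QAM constellation."""
    return int(math.log2(qam_order))

# 256QAM carries 8 bits and 1024QAM carries 10 bits per subcarrier,
# so the peak rate gain is 10/8 - 1 = 25%.
gain = bits_per_symbol(1024) / bits_per_symbol(256) - 1
```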

Wi-Fi QAM Rate Evolution


When encoding data at 1024QAM, the main challenge is ensuring high transmitter performance so that signals are correctly demodulated at the receiver. Error vector magnitude (EVM) is the metric used to ensure modulation accuracy: it measures the deviation, expressed in dB, between the ideal and the actual I/Q positions on a constellation diagram. The IEEE-defined EVM requirement for 1024QAM is -35 dB or better, which is 3 dB more stringent than the requirement for 256QAM.

When measuring the EVM for a device with a tester, that tester’s EVM floor is critical. To ensure accuracy, the tester’s EVM floor should be 10 dB better than the device that is being measured. For 802.11ax, a tester that can achieve at least -45 dB to -48 dB residual EVM is needed.

OFDM Subcarrier Spacing and Symbol Duration

Another change to the PHY layer is that 802.11ax uses four times as many subcarriers as 802.11ac, so the subcarrier spacing is one quarter of what it was in 802.11ac: 78.125 kHz between subcarriers. The symbol duration is inversely proportional to the subcarrier spacing and therefore increases by a factor of four over Wi-Fi 5, from 3.2 µs in 802.11ac to 12.8 µs in 802.11ax.
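The spacing and the symbol duration are two views of the same number, since an OFDM symbol (excluding the guard interval) lasts the reciprocal of the subcarrier spacing. A quick illustrative check in Python:

```python
# 802.11ac vs. 802.11ax OFDM numerology (illustrative arithmetic)
AC_SPACING_HZ = 312.5e3                # 802.11ac subcarrier spacing
AX_SPACING_HZ = AC_SPACING_HZ / 4      # 78.125 kHz in 802.11ax
ac_symbol_us = 1e6 / AC_SPACING_HZ     # 3.2 us, excluding guard interval
ax_symbol_us = 1e6 / AX_SPACING_HZ     # 12.8 us
```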


There are a few advantages to this change in subcarrier spacing. The first is that with more subcarriers there can be finer resource unit (RU) granularity. In OFDMA, resource units divide the channel bandwidth into smaller sub-channels assigned to different users. The smallest RU contains 26 subcarriers (around 2 MHz), allowing up to 74 users in a 160 MHz channel (each user being assigned a 26-tone RU).

The second advantage is that 802.11ax is designed with a higher ratio of subcarriers that carry data vs. those that don’t (null and pilot subcarriers). The increase in the number of data subcarriers increases the efficiency of the transmission, resulting in about a 10 percent improvement over 802.11ac.

It is important to keep in mind that because subcarrier spacing has decreased and the signal processing required is also more complex, Wi-Fi 6 devices need better frequency accuracy for proper demodulation.

Guard Interval / Cyclic Prefix

The standard also introduces new guard interval options. In Wi-Fi 6, longer guard intervals are designed to provide improved performance in environments with multipath and delay spread. These longer guard intervals help prevent inter-symbol interference in outdoor environments and therefore improve coverage and performance. 802.11ac had two guard interval (GI) options: long GI (0.8 µs) and short GI (0.4 µs). 802.11ax has three GI options: normal (0.8 µs), double (1.6 µs) and quadruple (3.2 µs). Overall efficiency is preserved: because the 802.11ax symbol duration is four times longer than in 802.11ac, the overhead generated by even the longest guard interval is the same percentage of the symbol time as in the previous generation, with the added benefit of a longer guard interval.
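The "same percentage" claim is easy to verify: the guard-interval overhead is the GI duration divided by the total symbol time. An illustrative check in Python:

```python
def gi_overhead(gi_us, symbol_us):
    """Fraction of each transmitted symbol spent on the guard interval."""
    return gi_us / (symbol_us + gi_us)

wifi5 = gi_overhead(0.8, 3.2)    # long GI over a 3.2 us 802.11ac symbol
wifi6 = gi_overhead(3.2, 12.8)   # quadruple GI over a 12.8 us 802.11ax symbol
# Both come out to 20%: the 4x longer symbol absorbs the 4x longer GI.
```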


OFDMA

Wi-Fi 6 utilizes orthogonal frequency division multiple access (OFDMA). This is a fundamental change to the way that Wi-Fi operates. In previous OFDM-based Wi-Fi standards, bandwidth could be increased, but a user transmission would take the full channel whether or not there was enough data to fill its entire bandwidth. Other senders would queue for access to the channel, creating a big problem in networks that supported many simultaneous users.

By contrast, in OFDMA the channel is divided into subchannels called resource units (RUs). Each RU is made of a pre-defined number of subcarriers. The smallest RU has 26 subcarriers, a little less than 2 MHz of spectrum, and the largest can be as wide as 996 subcarriers, close to 80 MHz of spectrum. In OFDMA, each RU carries one user.

Each RU is assigned to a different client station and the size and number of RUs allocated to each station is determined by the access point (AP), based on the data transmission requirements of each station.

For example, an application that requires a lot of data, such as content streaming, can be assigned a large RU. A device that requires less data, an IoT sensor for example, can be assigned a very small RU. Each RU can use a different modulation scheme, encoding rate and power level. RU assignments can vary on a frame-by-frame basis, based on what the AP determines is the most efficient use of the spectrum.

OFDMA does not directly impact Wi-Fi’s raw link speed, but instead, it increases its efficiency and will reduce delay for users that are sharing the spectrum. This provides benefits, especially in congested environments.

In my next blog post, I will expand on OFDMA and how it works, and will present important test considerations for Wi-Fi 6 vs. Wi-Fi 5. In the meantime, please visit LitePoint’s Wi-Fi 6 page to learn more about our test and measurement solutions for Wi-Fi 6 and click here for a replay of my webinar on this topic.

LitePoint IQxel-MW Validates the Performance of Morse Micro System-on-Chip Family

SAN JOSE, California — LitePoint, a leading provider of wireless test solutions, today announced that Morse Micro, developer of the smallest Wi-Fi HaLow single-chip solution, has standardized on the LitePoint IQxel-MW for design verification of its Wi-Fi HaLow system-on-chip family.

Customers and manufacturing partners integrating Morse Micro’s Wi-Fi HaLow SoC based on IEEE 802.11ah into their IoT design will be able to use the IQxel-MW to test the wireless functionality of their product, helping bring the design to market.

“The number of connected Internet of Things devices is growing rapidly and the low power and long-range capabilities of Wi-Fi HaLow will open many more possibilities for IoT applications,” said Vahid Manian, Chief Operating Officer of Morse Micro. “Original equipment manufacturers and original design manufacturers can now develop Wi-Fi HaLow IoT products with confidence using the IQxel-MW platform for design validation and manufacturing testing.”

Wi-Fi HaLow is well suited for a variety of IoT applications like video cameras, industrial automation, occupancy sensors, motion detectors and more, offering a longer signal range of approximately 1 kilometer, lower-power connectivity and the ability to penetrate obstacles.

“Morse Micro provides one of the leading chipsets in the Wi-Fi HaLow market. We’re pleased to be collaborating with them in this space,” said Adam Smith, Director of Product Marketing at LitePoint. “LitePoint’s IQxel-MW platform is the ideal solution for thorough verification of 802.11ah OFDM RF PHY operation in the unlicensed Sub-1 GHz frequency bands. It allows Morse Micro’s customers to accelerate their Wi-Fi HaLow design and ensure optimal performance of their IoT products.”

Technical Details

LitePoint’s IQxel-MW platform is a leading test solution for Wi-Fi connectivity, meeting the needs of product development and high-volume manufacturing. The IQxel-MW supports 802.11a/b/g/n/ac/ax and 802.11ah as well as a wide range of connectivity and IoT technologies, with a frequency range from 400 MHz to 6000 MHz, upgradable to 7300 MHz for future Wi-Fi 6E bands.

For more information, visit LitePoint’s IQxel-MW test solution.

– End –

By Eve Danel
March 10th, 2021

EVM (Error Vector Magnitude) is the key metric used to evaluate RF transmitter performance, because it provides a consistent “yardstick” to characterize the transmitter regardless of the receiver implementation, and it encapsulates a wide range of possible impairments in the transmit chain into a single measurement. New Wi-Fi generations have increased the modulation constellation size with 1024-QAM and 4096-QAM, placing even higher requirements on transmitter accuracy. In this blog, we will look at what EVM measures in Wi-Fi and how it is measured.

Quick Facts about QAM Constellation

With each new Wi-Fi generation, higher data rates are achieved by encoding more data bits per symbol. Higher-order QAM transmits more data bits while maintaining the same spectrum footprint, achieving better efficiency.

  • 802.11ac (Wi-Fi 5) supports up to 256-QAM where 8 bits of data are encoded per symbol
  • 802.11ax (Wi-Fi 6) supports up to 1024-QAM, where 10 bits of data are encoded per symbol
  • 802.11be introduces 4096-QAM modulation with 12 bits of data per symbol

With a higher-order modulation, the constellation points are closer together and are therefore more susceptible to noise and non-linearities. The digital communication channel requires a better signal-to-noise ratio (SNR) to operate error-free than at lower modulation rates, because the separation between constellation points is reduced, and so is the decision distance in the receiver. The transmitter also needs to perform with significantly better accuracy; Error Vector Magnitude (EVM) is the metric used to quantify that accuracy.
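The shrinking decision distance can be quantified. For a square M-QAM constellation normalized to unit average power, the minimum distance between points is d = sqrt(6/(M-1)), so moving from 256-QAM to 1024-QAM roughly halves it, costing about 6 dB of SNR. An illustrative check in Python:

```python
import math

def qam_min_distance(order):
    """Minimum distance between points of a unit-average-power square
    M-QAM constellation: d = sqrt(6 / (M - 1))."""
    return math.sqrt(6 / (order - 1))

penalty_db = 20 * math.log10(qam_min_distance(256) / qam_min_distance(1024))
# ~6 dB: each 4x growth of the constellation roughly halves the
# decision distance in the receiver.
```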

What is EVM?

Error Vector Magnitude is the most commonly used modulation quality metric in digital communications. It is a measure of the deviation of the actual constellation points from their ideal locations in the constellation diagram. The Root-Mean-Square (RMS) error is averaged over subcarriers, frequency segments, OFDM frames and spatial streams, and measured at the symbol clock transitions. It is expressed in %RMS or dB.

EVM is a comprehensive measure of transmit quality because it reflects signal defects that affect the magnitude or phase of the transmitted symbol. It captures the sum of imperfections in the device implementation that impact the transmitted symbol’s accuracy. Possible impairments can arise in the baseband, IF or RF elements of the transmit chain.
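In code, the core of the RMS EVM calculation over a set of constellation points can be sketched as follows (illustrative Python; a real tester also averages over subcarriers, frequency segments, OFDM frames and spatial streams, as described above):

```python
import math

def evm_db(measured, ideal):
    """RMS EVM in dB: error power between measured and ideal complex
    constellation points, normalized to the average ideal power."""
    err = sum(abs(m - i) ** 2 for m, i in zip(measured, ideal))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 10 * math.log10(err / ref)

# A QPSK-like constellation with a 1% gain error and a small I/Q offset
ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
measured = [p * 1.01 + 0.01j for p in ideal]
evm = evm_db(measured, ideal)  # strongly negative values mean high accuracy
```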

Some common types of corruption are:

  • I/Q gain and phase mismatch: gain mismatch originates from an amplitude difference between the two baseband modulation input signals (I and Q) at the upconverter. Phase (or quadrature) mismatch occurs when the two modulated baseband signals are not exactly in quadrature (90°) when upconverted.
  • Symbol clock error: originating at the encoder; the symbol clock controls the frequency and timing of the transmission of the individual symbols
  • Phase noise: originating at the LO (RF or IF)
  • PA compression and non-linearity: originating in the power amplifier being operated in its compression region or exhibiting non-linearity

IEEE EVM Requirements

The IEEE defines the maximum allowed transmitter constellation error as part of the standard. The maximum allowed error depends on the data rate, i.e. the constellation size, since a higher-order constellation requires tighter modulation accuracy.

  • For 1024-QAM, the EVM requirement is ≤ -35 dB with amplitude drift compensation enabled in the test equipment, and ≤ -32 dB with amplitude drift compensation disabled.
  • For 4096-QAM, the EVM requirements are defined in IEEE 802.11be Draft 0.2.

How is EVM Measured?

The test is performed over at least 20 frames. For 802.11ax HE-MU PPDU and HE-TB PPDU, if the occupied RU has 26 tones, the PPDUs under test shall be at least 32 data OFDM symbols long. For occupied RUs that have more than 26 tones, the PPDUs under test shall be at least 16 data OFDM symbols long. The frames should contain random data.

For an HE-TB PPDU with an RU smaller than a 2×996-tone RU, the test shall also include transmit modulation accuracy for the unoccupied subcarriers of the PPDU.

Because Wi-Fi operates on 3 frequency bands (2.4 GHz, 5 GHz and 6 GHz) the transmitter EVM performance should be verified at all transmit power levels and frequencies where the transmitter will operate.

Test equipment used for EVM measurements should support converting the transmitted signals into a stream of complex samples at 160 MHz or more, with sufficient accuracy in terms of I/Q amplitude and phase balance, dc offsets, phase noise, and analog-to-digital quantization noise to ensure a low error margin in the measurements.

EVM Correction Items

EVM shows a dependency on the analysis options in the test equipment (such as Phase Tracking, Channel Estimate, Symbol Timing Tracking, Frequency Sync, and Amplitude Tracking). Because analysis parameters can improve EVM, they should be chosen carefully. Addition of non-standard EVM correction methods can artificially improve the DUT’s EVM measurement and hide defects that would have otherwise been detected. It is recommended to only apply IEEE standard defined EVM analysis methods to ensure an objective characterization of the DUT’s transmitter performance.

The following lists the EVM measurement and correction methods:

IEEE defined EVM analysis:

  • Frequency correction: Because the receiver and the transmitter have separate clocks, this correction removes the transmitter’s frequency error at the receiver.
  • Phase correction: Because the receiver and the transmitter have separate clocks, devices must correct for clock misalignment.
  • Preamble channel estimate: The channel between the tester and the DUT is calculated and corrected based on the preamble’s training fields.

IEEE optional EVM analysis:

  • Amplitude Drift Compensation: The 802.11ax standard sets a different target EVM depending on whether amplitude drift compensation is enabled in the test equipment. Amplitude drift compensation, also called amplitude tracking, removes variations due to amplitude changes between symbols. When EVM is measured with the minimum IEEE-required PPDU size of 16 data OFDM symbols, the effects of amplitude drift may not be noticeable. However, when longer PPDUs are used (e.g. A-MPDU), amplitude drift may have a more pronounced effect and result in degraded EVM. If defects that cause amplitude drift are present, enabling amplitude tracking in the test equipment will hide them and yield a better EVM measurement. While most modern DUT Wi-Fi receivers are able to compensate for amplitude drift, defects causing it, such as poor PA thermal performance, can be masked by the use of amplitude tracking. During design evaluation and troubleshooting, EVM should be measured with and without amplitude drift compensation; a large delta between the two measurements indicates an underlying condition.

Non-standard EVM corrections:

  • Full Packet Channel Estimate: With this correction, the channel between the tester and the DUT is estimated using all the data in the packet. When the channel model is estimated from the full packet, the EVM improves because the model more closely fits the packet. However, a real-world receiver processes packets in real time and estimates the channel based on the header only. This correction artificially improves the EVM measurement and can hide phase noise, analog flatness or IQ mismatch issues (bad VCO, XTAL, compression, thermal issues, power supply).
  • Channel estimate equalization: This correction removes phase noise and amplitude response from the channel; it assumes a flat channel response. It can hide phase noise and analog flatness issues (bad PA, matching).

Using non-standard EVM correction can artificially improve measurements and hide underlying EVM impairments, therefore care should be taken in the selection of these parameters.

Test Equipment and Error Margin

As the requirements for transmitter modulation accuracy rise, so do the requirements on the equipment used to test it. Measuring the DUT transmitter’s EVM with a small measurement error requires the test equipment’s own EVM floor to be even lower than the DUT’s EVM.

For measuring the same DUT performance, a larger margin between DUT performance and the tester’s EVM floor will result in a smaller measurement error.

As shown in the chart above:

  • A 16 dB margin between the DUT’s EVM and the tester’s EVM floor results in 0.1 dB of error uncertainty in the measurement
  • A 6 dB margin between the DUT’s EVM and the tester’s EVM floor results in 1 dB of error uncertainty in the measurement
  • A 0 dB margin between the DUT’s EVM and the tester’s EVM floor results in 3 dB of error uncertainty in the measurement
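These uncertainty figures follow from a simple model in which the tester's residual error power adds to the DUT's true error power. A hypothetical Python sketch of that worst-case arithmetic:

```python
import math

def evm_measurement_error_db(margin_db):
    """Worst-case error added to a DUT EVM reading when the tester's own
    EVM floor sits margin_db below the DUT's true EVM, assuming the two
    error contributions add in power."""
    return 10 * math.log10(1 + 10 ** (-margin_db / 10))

evm_measurement_error_db(16)  # ~0.1 dB
evm_measurement_error_db(6)   # ~1 dB
evm_measurement_error_db(0)   # ~3 dB
```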

For high-order modulations such as 1024-QAM or 4096-QAM, which require stringent transmitter accuracy, selecting test equipment with a low EVM floor is critical; otherwise the error uncertainty contributed by the test equipment reduces confidence in the final measurement. In extreme cases, where the tester’s EVM floor equals that of the DUT, the measurement error is too large to determine whether the DUT meets the IEEE EVM requirements.

Key Takeaways

EVM provides a concise “one number” summary of transmitter quality, as it encapsulates a wide range of possible impairments in the transmit chain. EVM is used during the design phase to characterize devices and uncover underlying sources of distortion. Because of its simplicity, it is also used in manufacturing to guarantee that transmitters will operate properly in real-world environments. However, it is important to understand that EVM is a calculated metric, and numerous correction terms can modify the measurement. The test equipment’s EVM floor is an equally important factor in the accuracy of the measurement. LitePoint’s IQxel family of testers provides best-in-class EVM performance to ensure high confidence in EVM measurements for the latest Wi-Fi generations.

To learn more about EVM: Read our Application Note.