Improving BLE immunity on nRF52840 with dynamic frequency hopping

I am a software engineer working on a product that will need to be certified for EMC in the near future. The product uses the nRF52840 for BLE, and the software is being developed using Zephyr and the NRF-SDK.

During an EMC pre-scan at the test lab that will also handle the certification, the BLE connection would get interrupted when a 3V/m field was applied at 2.4GHz, which we would expect considering BLE operates between 2.4GHz and 2.48GHz. But it also occurred around 2.2GHz and 2.6GHz, which according to the test engineers at the certification lab is outside of the acceptable range for certification.

The hardware engineers on this product are working on a way to improve this in hardware. But the test lab also recommended we look into ways this could be improved through changes in the software. They specifically mentioned "listen before talk" and "dynamic frequency hopping" as potential fixes.

I couldn't find much information about listen before talk with BLE. As far as I can tell, dynamic frequency hopping should be possible with BLE, but I can't find any indication in the Zephyr/NRF-SDK documentation that this feature is supported, and the following post suggests as much: Adaptive frequency hopping with Bluetooth LE audio broadcast

I did find a way to manually set a channel map to exclude specific channels, but if I understand correctly how it's supposed to work, BLE connections should hop between channels all the time and the device should automatically figure out which channels yield the best performance.
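For reference, this is roughly what that manual approach looks like. A minimal sketch using Zephyr's bt_le_set_chan_map(); the particular channels masked out here are just an example, not the ones we would actually exclude:

```c
#include <zephyr/bluetooth/bluetooth.h>

/* The channel map is 5 bytes = 37 bits, one per BLE data channel
 * (0-36); a set bit means the channel may be used. This example
 * disables data channels 0-8 (roughly 2404-2420 MHz) and keeps the
 * rest enabled.
 */
static int exclude_low_channels(void)
{
	uint8_t chan_map[5] = {
		0x00, /* channels 0-7 disabled */
		0xFE, /* channel 8 disabled, 9-15 enabled */
		0xFF, /* channels 16-23 enabled */
		0xFF, /* channels 24-31 enabled */
		0x1F, /* channels 32-36 enabled; top 3 bits are unused */
	};

	/* Only the central's map affects the hopping sequence in a
	 * connection; the peripheral has to follow whatever map the
	 * central distributes.
	 */
	return bt_le_set_chan_map(chan_map);
}
```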

Is it correct that there is currently no support for either of these features? If so, will it be added in the near future?

Or is this something we could implement ourselves fairly easily?

The certification test engineers also mentioned they had run into similar issues certifying other Nordic devices and got better results using the "Direct Test Mode". We looked at this while preparing for the EMC pre-scan and concluded it's a nice tool that lets you transmit on specific channels or sweep across channels, but it wouldn't necessarily represent real-world usage.

Is direct test mode representative of how the device would behave in practice when maintaining a BLE connection with an application built with Zephyr and the NRF-SDK, and should we just use that for the certification tests?

Or is it just a tool that can help track down a specific issue, meaning the certification test should be done with the actual application that will ship with the product?

Any other advice on how immunity can be improved in hardware or software would also be appreciated. 

Many thanks in advance,

Thomas Gooijers

  • Hi

    We recommend that certification tests are done with the radio_test sample project for 2.4GHz short range, as it's the most configurable radio sample. You want certification to be done with the barebones radio peripheral, and then test your product's application separately to verify that it behaves as intended.

    BLE indeed does not have listen before talk. It does already use adaptive frequency hopping to find the least busy channels for data transmissions, which is AFAIK very similar to dynamic frequency hopping, but on the 2.4GHz band and suited to BLE.

    I don't have any good suggestions as to why the device is affected by the 2.2GHz and 2.6GHz fields, so I have asked one of our HW experts to take a look. Unfortunately he's out of office today, but I'll get back to you as soon as I hear from him.

    I'd also suggest opening a HW review ticket so we can take a look at your schematics and PCB layout to make sure everything looks okay from a HW point of view. You can create a private ticket in DevZone that will be handled confidentially by Nordic engineers.

    Best regards,

    Simon

  • Hi Simon,

    Thanks for your quick response. Good to know adaptive frequency hopping is already used and listen before talk is not supported.

    We are already using the radio_test sample for emission testing, to check the harmonics emitted by the device. The problem we are trying to fix is related to immunity: the connection is lost when a field is applied externally at 2.2GHz and 2.6GHz.

    We are interested to know whether direct test mode is a suitable way to test EMC immunity, or whether it should be tested using the actual application, since that's more representative of how it would work in practice.

    Best regards,

    Thomas

  • So I heard back from one of our HW experts regarding the EMC immunity tests and here is their feedback:

    EMC immunity should be tested with the stack running and the final firmware, I'm told. There is a 120MHz "exclusion band" on both sides of the 2.4GHz band (EN 301 489-17). The fact that you're failing by losing the connection is strange, though: what connection parameters are you using? Do you have enough time to re-send the data before a link loss/connection loss is triggered? The pass/fail criterion for these tests is usually that "the user shouldn't notice anything", meaning the device shouldn't disconnect or become noticeably slower. However, in an application like this, more often than not re-transmitting some data packets is required.
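    To give the link more room to re-send, the application can request different parameters at runtime. A minimal sketch using Zephyr's bt_conn_le_param_update(); the values are placeholders chosen to illustrate the units, not a recommendation:

    ```c
    #include <zephyr/bluetooth/conn.h>

    /* Request connection parameters that leave a number of connection
     * events for retransmission before the supervision timeout fires.
     * Units: intervals in 1.25 ms steps, timeout in 10 ms steps.
     */
    static int request_retransmit_headroom(struct bt_conn *conn)
    {
    	struct bt_le_conn_param param = {
    		.interval_min = 24, /* 24 * 1.25 ms = 30 ms */
    		.interval_max = 40, /* 40 * 1.25 ms = 50 ms */
    		.latency = 0,       /* peripheral listens every event */
    		.timeout = 200,     /* 200 * 10 ms = 2 s */
    	};

    	/* With a 50 ms interval and a 2 s timeout there are roughly
    	 * 40 connection events in which lost packets can be retried.
    	 */
    	return bt_conn_le_param_update(conn, &param);
    }
    ```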

    Best regards,

    Simon

  • Hi Simon,

    Good to know the direct test mode is not representative enough and that we should indeed use the actual application we are going to ship with the device.

    The amount of time to re-send data could indeed be a factor. We have the supervision timeout set to 300ms because we are working with an application where timing and latency matter, so our product disconnects fairly quickly when a disturbance causes packets to get lost. As an experiment we could check whether increasing this makes any difference.

    We also have another question about how Nordic's implementation of adaptive frequency hopping works.

    Does it just hop between channels continuously, so that even if a certain channel is busy, packets would still get through via the channels that work better?

    Or does it go further than that and keep a record of how well each channel performs, so that the channels with the best chance of getting a packet through are used more often?

    Also, I didn't mention yet that we are using Coded PHY (S=8). Does this make any difference, and is adaptive frequency hopping also supported in this case?

    Once again thanks for your quick response,

    Thomas Gooijers

  • Hi Thomas

    300ms as supervision timeout sounds a bit low in my opinion. Do you also have details on the other intervals/parameters? 

    An update and correction to my last reply: BLE is not adaptive by design as defined by ETSI, but we have a QoS module that can be used to enable adaptivity. Since it is the central that decides the channel map, it's not always possible to achieve proper adaptivity, and advertising will always happen on three channels. No BLE stack is adaptive by design; the Bluetooth spec does not have any requirements for adaptivity. So if adaptivity is needed, it has to be a custom implementation handled by the application (see the sketch further down). The peripheral must always do what the central tells it to, including connection parameters and channel map.

    I updated my initial reply to better reflect this. Sorry about that, I got confused myself as I found some contradicting information.
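    As a starting point for such a custom implementation on the central, the rough shape would be: collect per-channel quality statistics and periodically push an updated map with bt_le_set_chan_map(). In this sketch, qos_get_crc_errors() is hypothetical, standing in for wherever you get per-channel error counts from (e.g. QoS connection event reports):

    ```c
    #include <string.h>
    #include <zephyr/bluetooth/bluetooth.h>
    #include <zephyr/sys/util.h>

    #define BLE_DATA_CHANNELS 37
    #define ERROR_THRESHOLD   20 /* placeholder: max errors per window */

    /* Hypothetical source of per-channel CRC error counts. Not a real
     * API; replace with your own bookkeeping.
     */
    extern uint32_t qos_get_crc_errors(uint8_t channel);

    /* Disable channels whose recent error count exceeds a threshold
     * and apply the result. Only meaningful on the central, which
     * owns the channel map.
     */
    static int update_channel_map(void)
    {
    	uint8_t chan_map[5] = {0};
    	int enabled = 0;

    	for (uint8_t ch = 0; ch < BLE_DATA_CHANNELS; ch++) {
    		if (qos_get_crc_errors(ch) < ERROR_THRESHOLD) {
    			chan_map[ch / 8] |= BIT(ch % 8);
    			enabled++;
    		}
    	}

    	/* The Link Layer requires at least two data channels in use;
    	 * fall back to the full map rather than starving the link.
    	 */
    	if (enabled < 2) {
    		memset(chan_map, 0xFF, sizeof(chan_map));
    		chan_map[4] = 0x1F; /* the top 3 bits are reserved */
    	}

    	return bt_le_set_chan_map(chan_map);
    }
    ```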

    Best regards,

    Simon
