Battery level reading using ADC pin from nRF chip shows significant fluctuation

Hi everyone,

We're reading the battery (BAT) level from an ADC pin; the setup diagram is shown below.

We tested our device at a hospital, and we observed significant fluctuation in its BAT level (highlighted in red). For example, it went from 65% to 20%, then jumped back up to 60%.

We then took the device and battery back to the office and tried to reproduce the issue on the benchtop. However, we couldn't reproduce it; instead, the following is what we got, and it worked as expected.

Our hypothesis is that it could be due to an impedance mismatch between the ADC input pin and the output of the voltage divider. However, based on our calculation, the output impedance of the voltage divider is equivalent to R13//R15 (50 kOhm), while the input impedance of the ADC pin is more than 1 MOhm, so this should not cause an impedance mismatch.
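
For reference, the nRF52 SAADC spec ties the minimum acquisition time to the source impedance: with roughly 50 kOhm looking back into the divider, an acquisition time of about 10 us (rather than the 3 us default) is needed. A minimal channel-config sketch with the nRF5 SDK driver is below; the AIN0 channel is an assumption, not our actual pin.

    // Sketch only (nRF5 SDK legacy SAADC driver); assumes nrf_drv_saadc_init() has run.
    // With ~50 kOhm source impedance, TACQ of 10 us or more is recommended, so the
    // default 3 us acquisition time may be too short.
    nrf_saadc_channel_config_t channel_config =
        NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0);
    channel_config.acq_time = NRF_SAADC_ACQTIME_10US;
    APP_ERROR_CHECK(nrf_drv_saadc_channel_init(0, &channel_config));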

I would greatly appreciate it if anyone has any suggestions for this issue. Thanks.

  • Hi Tai,

    I dare not claim to be an expert, but I have been assigned the case, and will try to help.

    This behavior is indeed abnormal. The SAADC on the nRF52 series doesn't have any limitation that would explain that.

    Could you please elaborate on what you did in the bench test? I notice that the discharge curve in the field test isn't similar to the one in the bench test, assuming that the drain is constant in both scenarios.

    Is it possible that the device entered any special mode, or was exposed to abnormal environmental conditions during the time of the measurement dip?
    For example, extreme EM noise requiring multiple retries would keep the radio on for longer and increase the current draw.

    Is this observed multiple times or just once, and with one particular unit, or multiple units?

    Hieu

  • Hi Hieu. 

    Could you please elaborate on what you did in the bench test? I notice that the discharge curve in the field test isn't similar to the one in the bench test, assuming that the drain is constant in both scenarios.

    I used the exact same hardware device and battery that were used at the hospital. I basically turned the device on, connected it to our mobile application, then let it run until the battery was fully depleted. I left the device on my desk, which is a normal working environment in the office.

    Side note: the first discharge curve was only a 1.5-hour measurement, while the second one, conducted on the bench, was 8 hours long.

    Is it possible that the device entered any special mode, or was exposed to abnormal environmental conditions during the time of the measurement dip?

    It was running in normal mode. As for the environmental conditions at the hospital, there is a lot of medical equipment, and that equipment emits 2.4 GHz frequency components. In other words, it is pretty noisy, and we have observed several BLE disconnects due to that before.

    However, for this case I did check the RSSI values recorded in our application, and they looked normal.

    Is this observed multiple times or just once, and with one particular unit, or multiple units?

    It has happened to us twice, in different hospitals. In the other case, we put in a brand new battery, but the HW device showed low BAT right away and a BLE disconnect happened after a couple of minutes.

  • Could you also share the SAADC init code? It looks like a lot of work is being done within an interrupt callback, complete with floating-point calculations that aren't clearly defined. The compiler also needs to know whether the hardware FPU is being used so that it can correctly handle the FPU interrupt registers, and there are known hardware FPU power-consumption issues which may or may not be handled by your version of the SDK. It's often best to simply use fixed-point 32-bit integers with appropriate scaling to avoid those issues.
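
    (For reference, the FPU power issue mentioned above is usually handled by clearing any pending FPU exception and interrupt before going to sleep. A sketch of the commonly documented workaround is below; it is not taken from this project's code.)

    // Clear pending FPU exceptions and the FPU IRQ before WFE/sd_app_evt_wait(),
    // otherwise the pending interrupt can keep the CPU from staying in low-power sleep.
    #if (__FPU_USED == 1)
        __set_FPSCR(__get_FPSCR() & ~(0x0000009F));
        (void) __get_FPSCR();
        NVIC_ClearPendingIRQ(FPU_IRQn);
    #endif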

  • Oops. I just added the SAADC init code to my previous comment. Thanks!

  • Using fixed-point 32-bit integers sounds like a good idea. I'm thinking of sending the raw ADC values and converting them to a BAT level in % on the app side, which would reduce the complexity of what the interrupt callback needs to handle.
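
    Roughly along these lines, as a sketch of the integer-only conversion, wherever it ends up running (BAT_RAW_MIN and BAT_RAW_MAX are placeholder calibration values, not our real ones):

    // Sketch only: convert a raw SAADC reading to battery percent x100 using
    // integer math, so the interrupt callback never touches the FPU.
    #define BAT_RAW_MIN  450   /* placeholder: raw value at 0 % */
    #define BAT_RAW_MAX  805   /* placeholder: raw value at 100 % */

    static int32_t battery_percent_x100(int32_t raw)
    {
        int32_t pct = (raw - BAT_RAW_MIN) * 10000 / (BAT_RAW_MAX - BAT_RAW_MIN);
        if (pct < 0)     pct = 0;       // clamp below empty
        if (pct > 10000) pct = 10000;   // clamp above full
        return pct;                     // e.g. 6338 == 63.38 %
    }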

  • On the firmware side, a potential flaw is the application being kept awake waiting for a transmission to complete instead of being put to sleep. It's worth checking whether this is happening and avoiding it.

    Hi Hieu. Just wanted to follow up on this. Could you elaborate on it a bit more? There is a parameter called connection supervision timeout that determines how long a connection is considered valid after no data packets are received from the central device (i.e., the tablet in my case). From your comment, I'm not sure at what point I should put the device to sleep. Thanks, Tai!

  • Tai said:
    On the firmware side, a potential flaw is the application being kept awake waiting for a transmission to complete instead of being put to sleep. It's worth checking whether this is happening and avoiding it.

    Hi Hieu. Just wanted to follow up on this. Could you elaborate on it a bit more?

    I mean avoid these kinds of approaches:

    // Busy-wait retry loop: the CPU stays awake until the notification buffer frees up
    ret_code = try_send_notification(...);
    while (ret_code == BUFFER_FULL_TRY_AGAIN) {
        ret_code = try_send_notification(...);
    }

    An example is when sd_ble_gatts_hvx() returns NRF_ERROR_RESOURCES.

    This kind of approach keeps the CPU up and running for as long as the send attempt isn't successful, and thus draws a lot of power. Instead, it is better to put the system to sleep and try again later.
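
    Roughly like this, as a sketch (assuming the nRF5 SDK with a SoftDevice; m_conn_handle, m_hvx_params and send_measurement() are placeholder names, not your actual code):

    static volatile bool m_tx_pending = false;

    static void send_measurement(void)
    {
        // m_conn_handle and m_hvx_params are assumed to be set up elsewhere
        ret_code_t err = sd_ble_gatts_hvx(m_conn_handle, &m_hvx_params);
        if (err == NRF_ERROR_RESOURCES) {
            m_tx_pending = true;   // buffers full: remember it and return instead of spinning
        }
    }

    static void on_ble_evt(ble_evt_t const * p_ble_evt)
    {
        // A notification buffer was freed: retry the send from the event handler
        if (p_ble_evt->header.evt_id == BLE_GATTS_EVT_HVN_TX_COMPLETE && m_tx_pending)
        {
            m_tx_pending = false;
            send_measurement();
        }
    }

    // ... and in the main loop, sleep until the next event instead of busy-waiting:
    // (void) sd_app_evt_wait();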

    I tried to look at your battery calculation. I am a little confused about how the max physical voltage of 1.69 V is associated with a reference voltage value of 805.

    Nonetheless, assuming that is a typo and the physical max is 1.609 V, I calculated this:

    V_physical   voltage_var   temp_voltage_percent
    1.65         825           105.63425
    1.60         800            98.592
    1.55         775            91.54975
    1.50         750            84.5075
    1.45         725            77.46525
    1.40         700            70.423
    1.35         675            63.38075
    1.30         650            56.3385
    1.25         625            49.29625
    1.20         600            42.254
    1.15         575            35.21175
    1.10         550            28.1695
    1.05         525            21.12725
    1.00         500            14.085
    0.95         475             7.04275
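
    (In other words, these percentages follow a linear mapping of roughly temp_voltage_percent ≈ (voltage_var − 450) × 100 / (805 − 450), so 805 corresponds to 100 % and about 450, i.e. 0.90 V, corresponds to 0 %. The 450 endpoint is back-calculated from the table, so treat it as an assumption rather than a value from your code.)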

    Assuming the reading dip reflects an actual physical voltage dip caused by consumption, the dip would be roughly from 1.35 V to 1.00 V.

    I am quite weak when it comes to hardware, so I don't know whether such a dip is explainable by increased consumption. However, I think we can replicate the lossy environment in your bench test by intentionally worsening the conditions, for example by putting a metal plate on top of the antenna. You could try setting that up, confirm via a sniffer or the Power Profiler that the connection is actually bad, and then monitor the battery calculation.
