Hello,
a follow-up/corresponding question to this topic:
https://devzone.nordicsemi.com/f/nordic-q-a/4517/how-to-calibrate-the-nrf51-adc-to-correct-offset-and-gain-error/426557
Since that thread is already 8 years old, have there been any changes regarding the offset and gain error calculations?
Does the ADC now apply the stored error values automatically, or do they still have to be applied manually?
This question applies to both nRF51802 and nRF51822 devices.
I am asking because the measurement seems to be more accurate when the error values are NOT applied in the measurement calculation.
For explanation:
I have a signal to measure in the range of 10-30 mV. It depends a little on the PCBA and the components used,
as we are measuring the voltage drop across a MOSFET. We apply 1 A to the output and measure the voltage
across the MOSFET (the same goes for measuring the voltage drop across a resistor).
If I use our FLUKE TRMS meter, I read values of ~23-24 mV on the PCB test points.
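For reference: with the 1.171875 mV-per-LSB conversion used below, ~23 mV corresponds to only about 20 raw ADC counts, so the whole signal spans just a handful of LSBs.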
A) If I use this code to measure the voltage, I get a value of ~32 mV (here I already tried to implement a TRMS calculation).
note: "1.171875" is the calculated value of the ADC to mV conversion (1200/1024). the voltages are measured, calculated,
squared (voltage_squared) and summed until the "SAMPLES_IN_BUFFER" value is reached - in this case 20. Then the sum (sum_squares)
is divided through the amount of samples taken (SAMPLES_IN_BUFFER) and from this value the square root is calculated:
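Written out as a formula, the intention is: U_STATE = sqrt((v1² + v2² + ... + vN²) / N), with N = SAMPLES_IN_BUFFER = 20.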
void get_fetstate_voltage(void)
{
    U_STATE = 0;
    float voltage = 0;
    float sum_squares = 0;
    float voltage_squared = 0;
    float mean_squares = 0;

    for (int i = 0; i < SAMPLES_IN_BUFFER; i++)
    {
        nrf_delay_ms(1);
        nrf_drv_adc_sample_convert(&ADC_U_STATE, &adc_buffer[i]);
        voltage = adc_buffer[i] * 1.171875;   // raw counts -> mV (1200/1024)
        voltage_squared = voltage * voltage;
        sum_squares += voltage_squared;
    }
    mean_squares = sum_squares / SAMPLES_IN_BUFFER;
    U_STATE = sqrt(mean_squares);             // RMS of the sampled voltages
}
B) If I use this code, I get this value: ~38 mV.
This is basically the same code, only with the additional gain & offset error calculation.
Note: here I have implemented the code shown in that thread from 27.10.2015. I have also read out the calibration values via nrfjprog
and got the same values, so the readout seems to be correct. The memory read of 0x10000024 shows FFFF0105, i.e. byte 0 (OFFSET) = 5 and byte 1 (GAIN) = 1.
ADC_RES_10BIT = 1024.
void get_fetstate_voltage(void)
{
    U_STATE = 0;
    float voltage = 0;
    float sum_squares = 0;
    float voltage_squared = 0;
    float mean_squares = 0;
    static uint32_t ficr_value_32;
    static int8_t offset_error;
    static int8_t gain_error;

    ficr_value_32 = *(uint32_t *)0x10000024;   // calibration word (per the linked thread)
    offset_error = ficr_value_32;              // byte 0: offset error
    gain_error = ficr_value_32 >> 8;           // byte 1: gain error

    for (int i = 0; i < SAMPLES_IN_BUFFER; i++)
    {
        nrf_delay_ms(1);
        nrf_drv_adc_sample_convert(&ADC_U_STATE, &adc_buffer[i]);
        voltage = adc_buffer[i] * (ADC_RES_10BIT + gain_error) / ADC_RES_10BIT + offset_error - 0.5;
        voltage = voltage * 1.171875;          // corrected counts -> mV
        voltage_squared = voltage * voltage;
        sum_squares += voltage_squared;
    }
    mean_squares = sum_squares / SAMPLES_IN_BUFFER;
    U_STATE = sqrt(mean_squares);
}
C) If I use standard code to read the ADC (no TRMS), I get this value: ~36 mV (with the gain/offset error calculation).
void get_fetstate_voltage(void)
{
    U_STATE = 0;
    static uint32_t ficr_value_32;
    static int8_t offset_error;
    static int8_t gain_error;

    ficr_value_32 = *(uint32_t *)0x10000024;
    offset_error = ficr_value_32;
    gain_error = ficr_value_32 >> 8;

    nrf_drv_adc_sample_convert(&ADC_U_STATE, &adc_buffer[0]); // get HE current
    U_STATE = adc_buffer[0] * (ADC_RES_10BIT + gain_error) / ADC_RES_10BIT + offset_error - 0.5;
    U_STATE = U_STATE * 1.171875;
}
D) If I use the simplest code for this, I get this value: ~30 mV (no gain/offset error calculation).
void get_fetstate_voltage(void)
{
    U_STATE = 0;
    nrf_drv_adc_sample_convert(&ADC_U_STATE, &adc_buffer[0]);
    U_STATE = adc_buffer[0] * 1.171875;
}
So it looks like the gain and offset error calibration somehow fails on my side, and I am not sure why:
measurement A) is taken without error calibration and is more accurate than measurement B),
and measurement D) (no error calibration) is likewise more accurate than measurement C).
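To show the size of the correction, here is a small standalone sketch (plain C, compilable on a PC) that traces the formula from B) with the FICR values GAIN=1 / OFFSET=5; the raw count of 27 is an assumption I derived from the ~32 mV reading in A):

    #include <stdio.h>
    #include <stdint.h>

    #define ADC_RES_10BIT 1024

    int main(void)
    {
        int16_t raw          = 27;  /* assumed raw count: ~32 mV / 1.171875 mV per LSB */
        int8_t  offset_error = 5;   /* FICR word FFFF0105, byte 0 */
        int8_t  gain_error   = 1;   /* FICR word FFFF0105, byte 1 */

        float uncorrected = raw * 1.171875f;
        float corrected   = (raw * (ADC_RES_10BIT + gain_error) / ADC_RES_10BIT
                             + offset_error - 0.5f) * 1.171875f;

        printf("uncorrected: %.1f mV\n", uncorrected);  /* ~31.6 mV, close to A) */
        printf("corrected:   %.1f mV\n", corrected);    /* ~36.9 mV, close to B) */
        return 0;
    }

So with these calibration values the correction shifts the result by roughly 5-6 mV, which is about the difference I see between A) and B).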
****************************************************************************************************************
If I do the same measurements with a different PCB (and a different nRF51802 device):
Memory read shows 0x10000024: FFFF0001 (GAIN=0, OFFSET=1),
FLUKE TRMS = ~22mV
Measurement A) = 25mV
Measurement B) = 26mV
Measurement C) = 24mV
Measurement D) = 23mV
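Note: with GAIN=0 and OFFSET=1 the correction only shifts the result by about one ADC count, i.e. roughly 1.2 mV, which matches the much smaller differences between A)/B) and C)/D) on this board.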
The same applies to this measurement: with the error calibration the output seems to have more error than without.
Could someone please have a look at this? Am I somehow adding additional error to my measurement through the calibration?
Thanks, Philip