
TWI HW producing shortened ACK clock pulse

Hi everyone,

we are trying to use the TWI interface of the nRF52832 to communicate with a TI BQ28Z610 fuel gauge, using the TWI driver from nRF5_SDK_13.0. For the last few days we have been seeing the same behaviour every time: when we try to transmit more than 1 byte of data, the device address and the first byte are transmitted correctly, but the second byte produces a NACK.
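
For reference, the failing transfer is essentially the textbook blocking write with the SDK 13 TWI driver, roughly like the sketch below. Pin numbers and the register are placeholders from our setup, 0x55 should be the gauge's 7-bit address (please double-check against the data sheet), and TWI0 has to be enabled in sdk_config.h:

    #include "nrf_drv_twi.h"
    #include "app_util_platform.h"
    #include "app_error.h"

    static const nrf_drv_twi_t m_twi = NRF_DRV_TWI_INSTANCE(0);

    static void twi_init(void)
    {
        const nrf_drv_twi_config_t config =
        {
            .scl                = 27,                   // placeholder pins
            .sda                = 26,
            .frequency          = NRF_TWI_FREQ_100K,
            .interrupt_priority = APP_IRQ_PRIORITY_LOW,
            .clear_bus_init     = false
        };
        // NULL event handler -> blocking mode
        APP_ERROR_CHECK(nrf_drv_twi_init(&m_twi, &config, NULL, NULL));
        nrf_drv_twi_enable(&m_twi);
    }

    // Write a 16-bit value to one of the gauge's standard command registers.
    static ret_code_t gauge_write(uint8_t reg, uint16_t value)
    {
        uint8_t buf[3] = { reg, (uint8_t)(value & 0xFF), (uint8_t)(value >> 8) };
        // Address and first byte go through fine, the NACK shows up on the second data byte.
        return nrf_drv_twi_tx(&m_twi, 0x55, buf, sizeof(buf), false);
    }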

We've been able to narrow the problem down to the ACK clock pulse after the first byte. As you can see from the attached scope screenshots, the relevant clock pulse is roughly a quarter of the length of a normal clock pulse. I assume this behaviour is a technical advantage in most cases (the shorter clock pulse frees the interface earlier for subsequent communication), but it leads to problems with the BQ28Z610.

My workaround at this point is to use the deprecated bit-banging methods from twi_sw_master.c. I've inserted a 5 ms delay after the function checks for the ACK bit, to delay the SCL line going low again.
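
The gist of that change, as a standalone sketch rather than a patch against twi_sw_master.c (pin numbers and delays are placeholders, external pull-ups are assumed, and clock-stretch handling is left out for brevity):

    #include <stdbool.h>
    #include <stdint.h>
    #include "nrf_gpio.h"
    #include "nrf_delay.h"

    #define SW_TWI_SCL_PIN  27   // placeholder pins, external pull-ups assumed
    #define SW_TWI_SDA_PIN  26

    // Emulate open-drain: "release" = input (pulled high externally), "low" = output driving 0.
    static void pin_release(uint32_t pin)   { nrf_gpio_cfg_input(pin, NRF_GPIO_PIN_NOPULL); }
    static void pin_drive_low(uint32_t pin) { nrf_gpio_pin_clear(pin); nrf_gpio_cfg_output(pin); }

    // Clock one byte out MSB first, return true if the slave ACKed.
    static bool sw_twi_write_byte(uint8_t byte)
    {
        for (int i = 7; i >= 0; i--)
        {
            if (byte & (1u << i)) { pin_release(SW_TWI_SDA_PIN); }
            else                  { pin_drive_low(SW_TWI_SDA_PIN); }
            nrf_delay_us(5);
            pin_release(SW_TWI_SCL_PIN);       // SCL high
            nrf_delay_us(5);
            pin_drive_low(SW_TWI_SCL_PIN);     // SCL low
        }

        pin_release(SW_TWI_SDA_PIN);           // release SDA so the slave can drive the ACK
        nrf_delay_us(5);
        pin_release(SW_TWI_SCL_PIN);           // ACK clock pulse
        nrf_delay_us(5);
        bool ack = (nrf_gpio_pin_read(SW_TWI_SDA_PIN) == 0);

        nrf_delay_ms(5);                       // the workaround: keep SCL high much longer
                                               // before pulling it low again
        pin_drive_low(SW_TWI_SCL_PIN);
        return ack;
    }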

Now the question is, since we're not entirely happy about not being able to use the TWI hardware driver: is there any way to lengthen the relevant clock pulse using the TWI hardware? Has anybody else experienced problems with this behaviour?

Attachments: tek00000.png, tek00001.png, tek00002.png (scope screenshots)

  • I think you're seeing the same thing the devzone.nordicsemi.com/.../ post talks about.

    It's not really a shortened clock pulse, and it's got nothing to do with freeing the interface earlier. I believe what you're seeing is the TWI attempting to output a normal clock pulse, but the slave, your TI device, is holding the clock low and stretching it. So for all that time it's low, the TWI interface has it high, but the TI chip is pulling it down. It's actually a very LONG clock pulse.

    As soon as the slave releases the clock and stops stretching, the clock jumps high, the TWI interface sees that it has been released and is now high (where the TWI has been trying to hold it for some time), and it can then do what it wanted to do a while back: drive the clock line low again to finish that stretched clock pulse.

    There is a minimum time the TWI hardware is required to hold the clock high after the slave stops stretching it; I had read 4 µs at 100 kbit/s but I don't have the spec handy. I can't instantly tell from those traces whether the TWI is compliant and holding the end of the clock pulse long enough after the slave releases it. Once that ACK is done, the TWI interface is free to start another cycle.

    So I'm not sure which side is the non-standards-compliant bit here.

  • Thank you for your reply. By "freeing the interface earlier" I only meant that I assume the TWI tries to free the lines as soon as possible after reading the ACK bit, i.e. after holding the clock high for the minimum time required by the I2C spec.

    According to my scope, the width of the SCL pulse for the ACK bit is around 1 µs.

    The link you posted apparently describes the same behaviour we're experiencing. However, even if the TWI is compliant with the I2C spec, the problem remains that the TI chip seems to need a longer clock pulse there to recognise it. Any idea how to accomplish that while using the hardware driver?

  • There are no options in the nRF TWI hardware to change any of this. I checked out the data sheet on the TI chip (capable chip!) and, as is rather common, there isn't a lot of information about I2C timings. It does claim a minimum 600 ns clock-high time but doesn't really say where it needs that. Since they appear to define "high" as 90%, you may not even have that; I'm really not sure. You could ask TI, but I doubt you'll get a great answer.

    So the options are: find another sensor chip, or find a bridge chip that produces I2C your TI chip can handle and interface to that over its other protocol instead.

    If you can live with a lower data rate on the I2C bus and you're in a hole, I might try adding just a little capacitance to the I2C lines to see if that stretches the pulse (rough numbers below). I don't particularly love that idea, but I'd probably try it for interest.
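
    As a rough feel for the numbers (the pull-up value is my own assumption, it isn't stated anywhere in this thread): with 4.7 kΩ pull-ups, an extra 100 pF on the bus gives

        tau    = R * C   = 4.7 kΩ * 100 pF ≈ 470 ns
        t_rise ≈ 2.2 tau ≈ 1 µs   (10% to 90%)

    which is already at the 1 µs rise-time limit for 100 kHz standard mode, so it really is a last-resort hack.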

  • Okay, thank you very much. I think we will go with the bit-banging method for now, since it doesn't really have an impact on system performance; it just would have been somewhat nicer to be able to rely on the HW interface. I was thinking about reporting the whole thing to Nordic as a bug, but since (almost) nobody else seems to have this problem, it might be a problem of the TI chip rather than of the nRF52.

  • The Nordic guys are really helpful; just open a My Page case and reference this thread, if nothing else to get them to check what the clock-high time after a clock stretch is and discuss whether that's within the I2C spec (to the extent there is an I2C spec). Then if it's out of spec it can go in as an erratum; if it's within spec, or there just isn't a clear value, they can at least document it.
