<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://devzone.nordicsemi.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/f/nordic-q-a/105546/uart-rx-using-the-asyncapi-generates-massive-irq-load</link><description>Hi, 
 in our application we observed the receiving of data to be unstable while using the async uart api. 
 While troubleshooting we encountered the following problem: As soon as the function uart_rx_enable() is called, the system is &amp;quot;flooded&amp;quot; with IRQ</description><dc:language>en-US</dc:language><generator>Telligent Community 13</generator><lastBuildDate>Thu, 16 Nov 2023 13:09:16 GMT</lastBuildDate><atom:link rel="self" type="application/rss+xml" href="https://devzone.nordicsemi.com/f/nordic-q-a/105546/uart-rx-using-the-asyncapi-generates-massive-irq-load" /><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455980?ContentTypeID=1</link><pubDate>Thu, 16 Nov 2023 13:09:16 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:cd03ec38-afea-49d6-b01e-8f9e7f393c73</guid><dc:creator>ovrebekk</dc:creator><description>&lt;p&gt;Hi Jürgen&lt;/p&gt;
&lt;p&gt;Thanks for the detailed report.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;It is interesting that the shorter timeouts would affect the buffering. I assume the number of RX ready events is more or less the same since there is no gap between bytes, but obviously there will be more interference from the timer interrupts.&amp;nbsp;&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;McVertex: &amp;quot;Does this point to&amp;nbsp;a possibility to replace the default software timers by hardware timers ?&amp;quot;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Is the&amp;nbsp;UART_0_NRF_HW_ASYNC configuration enabled or not in your case?&amp;nbsp;&lt;/p&gt;
&lt;p&gt;In ASYNC mode it is necessary to count the number of bytes received since you might want to forward the data to the application before the buffers fill up (timeout), but the underlying UARTE peripheral doesn&amp;#39;t make this count easily available. For this reason there are two available methods to handle the byte counting.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;There is a software method which relies on an interrupt every time a byte is received, or a hardware method that uses a PPI channel and a TIMER module to count the bytes in hardware.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The latter method is more efficient and reliable, but requires the use of an additional TIMER module which you might need for other purposes.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;So enabling this configuration will not solve the issue of getting a lot of interrupts when you use a low timeout value, but if you have been testing with it disabled so far, it could be interesting to redo your tests with this configuration enabled to see if it makes the buffering more reliable at low timeout values.&amp;nbsp;&lt;/p&gt;
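If you want to try it, a minimal prj.conf fragment could look like the following. This is a sketch only: the timer instance number here is an example, and it must be a TIMER that is otherwise unused in your application (you may also need to enable the corresponding nrfx timer driver, as shown):

```
# Sketch: enable hardware RX byte counting for UART0 (example values)
CONFIG_UART_0_NRF_HW_ASYNC=y
# TIMER instance used for byte counting - pick one that is free
CONFIG_UART_0_NRF_HW_ASYNC_TIMER=2
CONFIG_NRFX_TIMER2=y
```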
&lt;p&gt;Best regards&lt;br /&gt;Torbjørn&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455783?ContentTypeID=1</link><pubDate>Wed, 15 Nov 2023 13:48:34 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:d57403ea-7443-48a5-a886-12a96911a402</guid><dc:creator>McVertex</dc:creator><description>&lt;p&gt;Hi&amp;nbsp;Torbj&amp;oslash;rn,&lt;/p&gt;
&lt;p&gt;Maybe not dropped bytes - it seems to be more subtle. Something appears to be wrong with the read position of the used &lt;a href="https://docs.zephyrproject.org/latest/kernel/data_structures/ring_buffers.html"&gt;zephyr ring buffer&lt;/a&gt;, see the following &lt;span&gt;(sorry for the length)&amp;nbsp;&lt;/span&gt;description:&lt;/p&gt;
&lt;p&gt;In our application we have the following scenario:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a host application sends requests as&amp;nbsp;packets via uart&amp;nbsp;to the nRF5340&lt;/li&gt;
&lt;li&gt;the packet length varies from request to request (~10 ... 240 bytes)&lt;/li&gt;
&lt;li&gt;the baud rate is 115200 baud&amp;nbsp;&amp;nbsp;&lt;/li&gt;
&lt;li&gt;there are no gaps between the bytes inside the packet&lt;/li&gt;
&lt;li&gt;the&amp;nbsp;nRF5340 receives the packets via uart using the async api&amp;nbsp;&lt;/li&gt;
&lt;li&gt;the size of the&amp;nbsp;dma buffers (RX_BUF_REQUEST) is 120 bytes&lt;/li&gt;
&lt;li&gt;the&amp;nbsp;packet receiving part is aware of&amp;nbsp;packets being&amp;nbsp;split into multiple &lt;br /&gt;rx events, since this happens regularly when there is a switchover to the next dma buffer&lt;/li&gt;
&lt;li&gt;all received parts of a packet are copied into a zephyr ring buffer, and the packet receiving part in the main thread is signaled via a message queue&lt;/li&gt;
&lt;li&gt;the packet receiving part reads from that ring buffer&amp;nbsp;and checks the completeness and consistency of a received packet by CRC&lt;/li&gt;
&lt;li&gt;if the packet is complete and consistent, it is processed by the application&lt;/li&gt;
&lt;li&gt;after processing the request packet, a reply packet is sent back to the host&lt;/li&gt;
&lt;/ul&gt;
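The RX path described above can be sketched roughly as follows. This is an illustration with hypothetical names (rx_ring, rx_msgq, uart_cb are mine, not our actual code), using Zephyr's async UART API and ring buffer; error handling and buffer management are omitted:

```c
/* Sketch of the described RX path (hypothetical names, error handling omitted). */
#include <zephyr/kernel.h>
#include <zephyr/drivers/uart.h>
#include <zephyr/sys/ring_buffer.h>

RING_BUF_DECLARE(rx_ring, 1024);                 /* holds raw packet bytes  */
K_MSGQ_DEFINE(rx_msgq, sizeof(uint32_t), 8, 4);  /* signals the main thread */

static void uart_cb(const struct device *dev, struct uart_event *evt, void *user_data)
{
    switch (evt->type) {
    case UART_RX_RDY: {
        /* copy the received chunk (possibly only part of a packet) */
        uint32_t n = ring_buf_put(&rx_ring,
                                  evt->data.rx.buf + evt->data.rx.offset,
                                  evt->data.rx.len);
        k_msgq_put(&rx_msgq, &n, K_NO_WAIT);     /* wake the packet parser */
        break;
    }
    case UART_RX_BUF_REQUEST:
        /* hand the driver the next 120-byte DMA buffer here */
        break;
    default:
        break;
    }
}
```

The main thread then drains rx_ring after each message and runs the CRC consistency check on complete packets.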
&lt;p&gt;Using an rx&amp;nbsp;&lt;span&gt;timeout value&amp;nbsp;below 500 us,&amp;nbsp;the packet consistency check fails in roughly&amp;nbsp;1% of the packets.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Using an rx&amp;nbsp;timeout value&amp;nbsp;of 500&amp;nbsp;us,&amp;nbsp;the consistency check fails less often (about&amp;nbsp;0.1%).&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;The analysis of received data in case of a failed consistency check showed that the ring buffer contains all received data, but the readout position of the ring buffer seemed to be off&amp;nbsp;by 2 bytes. Once this happens, the offset error persists even when new packets arrive, which leads to one missing response and subsequent responses that are out of sync with the requests...&lt;/p&gt;
&lt;p&gt;We searched for errors in the code, but did not find any reason that could cause this strange behavior. We then analyzed the application with SystemView and were surprised by the high IRQ load...&lt;/p&gt;
&lt;p&gt;Having no special requirement for the response time, we increased the uart rx timeout to 16 ms. The only reason to choose 16 ms was to reduce the ISR load below 1%, see the measurements in my second post.&lt;/p&gt;
&lt;p&gt;Using 16ms for the uart rx timeout, we never observed failed consistency checks...&lt;/p&gt;
&lt;p&gt;So currently I think a safe level for the rx timeout depends highly on the application, and maybe especially on the IRQ load already present without the uart.&lt;/p&gt;
&lt;p&gt;IMHO good advice would be to use&amp;nbsp;an rx timeout value &amp;quot;as high as possible&amp;quot; and to point out that an (unexpected) IRQ load is generated that may&amp;nbsp;cause problems - especially for timeout values below ~5ms, since these lead to an IRQ frequency of 1kHz and above.&lt;br /&gt;&lt;br /&gt;In general it would be better to have the option to use a hardware timer for this purpose.&lt;/p&gt;
&lt;p&gt;Searching for information about this, I found the following:&lt;br /&gt;&lt;br /&gt;&lt;pre class="ui-code" data-mode="text"&gt;config UART_0_NRF_HW_ASYNC
	bool &amp;quot;Use hardware RX byte counting&amp;quot;
	depends on UART_0_NRF_UARTE
	depends on UART_ASYNC_API
	help
	  If default driver uses interrupts to count incoming bytes, it is possible
	  that with higher speeds and/or high cpu load some data can be lost.
	  It is recommended to use hardware byte counting in such scenarios.
	  Hardware RX byte counting requires timer instance and one PPI channel

config UART_0_NRF_ASYNC_LOW_POWER
	bool &amp;quot;Low power mode&amp;quot;
	depends on UART_0_NRF_UARTE
	depends on UART_ASYNC_API
	help
	  When enabled, UARTE is enabled before each TX or RX usage and disabled
	  when not used. Disabling UARTE while in idle allows to achieve lowest
	  power consumption. It is only feasible if receiver is not always on.

config UART_0_NRF_HW_ASYNC_TIMER
	int &amp;quot;Timer instance&amp;quot;
	depends on UART_0_NRF_HW_ASYNC&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;Does this point to&amp;nbsp;a possibility to replace the default software timers by hardware timers?&lt;br /&gt;&lt;br /&gt;It even contains&amp;nbsp;the statement &amp;quot;higher speeds and/or high cpu load some data can be lost&amp;quot;...&lt;br /&gt;&lt;br /&gt;Unfortunately,&amp;nbsp;I have not yet found sufficient documentation on these configuration values - or did I miss something?&lt;/p&gt;
&lt;p&gt;Best regards,&lt;/p&gt;
&lt;p&gt;J&amp;uuml;rgen&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;/span&gt;&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455539?ContentTypeID=1</link><pubDate>Tue, 14 Nov 2023 13:42:05 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:87c51c80-f234-4071-8a26-0916e2a7dd24</guid><dc:creator>ovrebekk</dc:creator><description>&lt;p&gt;Hi Jürgen&lt;/p&gt;
&lt;p&gt;You mean to say you were dropping bytes when using lower timeout values?&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If so we should probably try to figure out the range of values that will work reliably, and update the driver and documentation to warn the user if they try to use a value that is too low.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If you can provide a bit more information about the issues you experienced (other than high CPU usage) I will discuss it with the developers, and request an update either in the docs or the driver.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Best regards&lt;br /&gt;Torbjørn&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455389?ContentTypeID=1</link><pubDate>Mon, 13 Nov 2023 16:43:11 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:792caf2a-9dc9-4f2b-9d82-5f169a2b1762</guid><dc:creator>McVertex</dc:creator><description>&lt;p&gt;Hi&amp;nbsp;Torbj&amp;oslash;rn,&lt;/p&gt;
&lt;p&gt;&lt;span lang="en"&gt;&lt;/span&gt;&lt;span lang="en"&gt;Thanks for the feedback.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span lang="en"&gt;I think it would be great to point this out to the users of the async api, as long as the &lt;br /&gt;implementation shows this behavior since the cause of possible errors is not obvious.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;In our case, we had sporadic communication errors in our application - we did not&lt;br /&gt;expect such a relation to the timeout duration&amp;nbsp;&lt;span class="emoticon" data-url="https://devzone.nordicsemi.com/cfs-file/__key/system/emoji/1f609.svg" title="Wink"&gt;&amp;#x1f609;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Best regards&lt;br /&gt;J&amp;uuml;rgen&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455357?ContentTypeID=1</link><pubDate>Mon, 13 Nov 2023 14:47:59 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:1d9f0d0a-22ca-43f7-ae98-dc9e512d16ae</guid><dc:creator>ovrebekk</dc:creator><description>&lt;p&gt;Hi Jürgen&lt;/p&gt;
&lt;p&gt;I discussed this with the developers,&amp;nbsp;and they&amp;nbsp;provided some details of the driver implementation which seem to explain&amp;nbsp;your findings.&lt;/p&gt;
&lt;p&gt;To avoid using too many hardware timer modules, the timeout is implemented using a software timer, and instead of resetting the timer every time a byte is received, the timeout timer runs continuously at an interval equal to the timeout value divided by 5.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;When the interrupt has run 5 times consecutively without any RX bytes received the RX timeout will be triggered.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;So when you have a timeout duration of 500us you would expect a new interrupt every 100us, which seems to more or less match your original diagram.&amp;nbsp;&lt;/p&gt;
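The counting scheme described above can be modeled in a few lines of plain C. This is an illustration of the mechanism only, not the actual driver source:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of the scheme described above: a periodic tick runs at
 * timeout/5, and the RX timeout is reported once 5 consecutive ticks
 * pass with no new bytes received. */
struct rx_timeout_model {
    int idle_ticks;   /* consecutive ticks without received data */
    bool fired;       /* RX timeout has been reported */
};

static void model_on_byte(struct rx_timeout_model *m)
{
    m->idle_ticks = 0;            /* any received byte restarts the count */
}

static void model_on_tick(struct rx_timeout_model *m)
{
    if (!m->fired && ++m->idle_ticks >= 5) {
        m->fired = true;          /* deliver the RX timeout event here */
    }
}
```

One consequence of this scheme is that the timeout is detected somewhere between 1.0 and 1.2 times the configured value, since the last byte can arrive anywhere within a tick interval.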
&lt;p&gt;Knowing this I agree with you that the implementation is not very efficient, and using small timeout values is not recommended for the time being.&amp;nbsp;In particular it should be relatively easy to&amp;nbsp;change the driver to stop the timeout timer when no data is received, and I have asked whether this can be improved in a future update.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Best regards&lt;br /&gt;Torbjørn&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455135?ContentTypeID=1</link><pubDate>Fri, 10 Nov 2023 13:41:45 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:b299ec04-9f6e-433e-afdd-7528229bda9b</guid><dc:creator>McVertex</dc:creator><description>&lt;p&gt;Hi &lt;span&gt;Torbj&amp;oslash;rn&lt;/span&gt;&lt;span&gt;,&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="Y2IQFc" style="font-family:arial, helvetica, sans-serif;" lang="en"&gt;Thank you for the response, i already suspected that, &lt;/span&gt;&lt;span class="Y2IQFc" style="font-family:arial, helvetica, sans-serif;" lang="en"&gt;but the current behavior is hardly acceptable&lt;br /&gt;&lt;/span&gt;&lt;span class="Y2IQFc" style="font-family:arial, helvetica, sans-serif;" lang="en"&gt;since i (and possibly others too) expect the following:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;no CPU load&amp;nbsp;while no data is coming in&amp;nbsp;&lt;/li&gt;
&lt;li&gt;no CPU load during&amp;nbsp;data reception of a data packet (as long as there is no switchover to another DMA buffer)&lt;/li&gt;
&lt;li&gt;a single ISR call when some data came in and the given timeout has elapsed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I expect this behavior because many different STM32 MCUs use exactly this pattern for DMA reception.&lt;/p&gt;
&lt;p&gt;&lt;span class="Y2IQFc" style="font-family:arial, helvetica, sans-serif;" lang="en"&gt;I&amp;#39;m quite new to the nRF MCU&amp;#39;s and Zephyr, so i&amp;#39;m not deep inside the details of the peripheral &lt;/span&gt;&lt;span style="font-family:arial, helvetica, sans-serif;"&gt;units and drivers...&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="font-family:arial, helvetica, sans-serif;"&gt;But i &lt;strong&gt;never expected&lt;/strong&gt; a heavy ISR load&amp;nbsp;as seen here, using a relatively moderate timeout duration of 500&amp;micro;s...&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="font-family:arial, helvetica, sans-serif;"&gt;In our application we can workaround this by increasing the timeout to 16 ms, since we do not have a hard &lt;br /&gt;requirement on this.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;BTW, I&amp;nbsp;already&amp;nbsp;checked&amp;nbsp;that the rx data&amp;nbsp;has no gaps:&lt;/p&gt;
&lt;p&gt;&lt;img style="max-height:56px;max-width:640px;" alt=" " height="56" src="https://devzone.nordicsemi.com/resized-image/__size/1280x112/__key/communityserver-discussions-components-files/4/pastedimage1699621509321v1.png" width="640" /&gt;&lt;/p&gt;
&lt;p&gt;Now this is what a&amp;nbsp;complete request&amp;nbsp;and reply packet with a 16 ms timeout look like (115200 baud):&lt;/p&gt;
&lt;p&gt;&lt;img style="max-height:147px;max-width:637px;" alt=" " height="147" src="https://devzone.nordicsemi.com/resized-image/__size/1274x294/__key/communityserver-discussions-components-files/4/pastedimage1699622834212v3.png" width="637" /&gt;&lt;/p&gt;
&lt;p&gt;If there is any way to eliminate the ISR load produced by the async driver implementation, I would highly recommend doing so, since this is promised by the async api and IMHO expected by most users.&lt;br /&gt;&lt;br /&gt;Best regards,&lt;/p&gt;
&lt;p&gt;J&amp;uuml;rgen&amp;nbsp;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455098?ContentTypeID=1</link><pubDate>Fri, 10 Nov 2023 11:00:34 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:7c58c46d-6dc8-43dc-8333-76bffd93a82b</guid><dc:creator>ovrebekk</dc:creator><description>&lt;p&gt;Hi&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This sounds like expected behavior, given the way the&amp;nbsp;underlying UARTE peripheral works with the async driver.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;When you have such a short timeout the driver&amp;nbsp;might generate a timeout for every byte transmitted, unless the bytes are sent completely back to back.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;For optimal performance with UART you want to let the DMA buffers fill up before having to process the data, and the larger the buffers the less CPU impact.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Is fast response to incoming data a requirement, since you are testing such short timeout values?&amp;nbsp;&lt;/p&gt;
&lt;p&gt;In order to analyze this further it would be interesting to see the data stream on a scope, to see whether or not data is sent back to back, or if there are a lot of gaps in the data.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Best regards&lt;br /&gt;Torbjørn&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: UART RX using the AsyncAPI generates massive IRQ load</title><link>https://devzone.nordicsemi.com/thread/455062?ContentTypeID=1</link><pubDate>Fri, 10 Nov 2023 08:12:48 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:4ac80d52-f8ac-4917-8687-86d7f3713b81</guid><dc:creator>McVertex</dc:creator><description>&lt;p&gt;Additional information:&lt;/p&gt;
&lt;p&gt;The IRQ overhead directly depends on the timeout value passed to uart_rx_enable():&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;table style="margin-left:auto;margin-right:auto;" height="193"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;&amp;nbsp;timout [us]&amp;nbsp;&lt;/td&gt;
&lt;td style="text-align:center;"&gt;&amp;nbsp;&lt;span&gt;IRQ overhead&amp;nbsp;&lt;/span&gt;&lt;span&gt;[%]&lt;/span&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="text-align:center;"&gt;&amp;nbsp;&lt;span&gt;ISR 37 period [ms]&lt;/span&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;100&lt;/td&gt;
&lt;td style="text-align:center;"&gt;23&lt;/td&gt;
&lt;td style="text-align:center;"&gt;0.122&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;200&lt;/td&gt;
&lt;td style="text-align:center;"&gt;23&lt;/td&gt;
&lt;td style="text-align:center;"&gt;&lt;span&gt;0.122&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;500&lt;/td&gt;
&lt;td style="text-align:center;"&gt;21&lt;/td&gt;
&lt;td style="text-align:center;"&gt;&lt;span&gt;0.122 ... 0.152&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;1000&lt;/td&gt;
&lt;td style="text-align:center;"&gt;14&lt;/td&gt;
&lt;td style="text-align:center;"&gt;0.213&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;2000&lt;/td&gt;
&lt;td style="text-align:center;"&gt;7.1&lt;/td&gt;
&lt;td style="text-align:center;"&gt;0.427&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;4000&lt;/td&gt;
&lt;td style="text-align:center;"&gt;3.7&lt;/td&gt;
&lt;td style="text-align:center;"&gt;0.824&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;8000&lt;/td&gt;
&lt;td style="text-align:center;"&gt;1.9&lt;/td&gt;
&lt;td style="text-align:center;"&gt;1.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;16000&lt;/td&gt;
&lt;td style="text-align:center;"&gt;0.95&lt;/td&gt;
&lt;td style="text-align:center;"&gt;3.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align:center;"&gt;32000&lt;/td&gt;
&lt;td style="text-align:center;"&gt;0.48&lt;/td&gt;
&lt;td style="text-align:center;"&gt;6.4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;br /&gt;A timeout of 16 ms is necessary to reduce the &lt;span&gt;IRQ&amp;nbsp;&lt;/span&gt;overhead to less than 1%.&lt;/p&gt;
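As a side note: the measured "ISR 37 period" column looks consistent with a timer interval of timeout/5 rounded up to whole kernel ticks, assuming Zephyr's default 32768 Hz system tick on the nRF5340. This is purely my hypothesis, not something I found documented:

```c
/* Hypothesis (mine, not from the docs): timer interval = timeout/5,
 * rounded up to whole kernel ticks at an assumed 32768 Hz tick rate. */
static double isr_period_us(double timeout_us)
{
    const double tick_us = 1e6 / 32768.0;      /* ~30.52 us per kernel tick */
    double interval_us = timeout_us / 5.0;
    long ticks = (long)(interval_us / tick_us);
    if ((double)ticks * tick_us < interval_us) {
        ticks++;                               /* round up to a whole tick */
    }
    return (double)ticks * tick_us;
}
```

This reproduces the measured column for the larger timeouts within rounding: 1000 us gives 7 ticks = 213.6 us, 4000 us gives 27 ticks = 824.0 us, and 16000 us gives 105 ticks = 3204 us (measured: 0.213, 0.824 and 3.2 ms). It does not explain the constant 0.122 ms for the 100 and 200 us rows, which may be a minimum interval somewhere in the timer implementation.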
&lt;p&gt;Is there a way to reduce that overhead?&lt;/p&gt;
&lt;p&gt;Maybe it&amp;#39;s unintended behaviour of the async rx implementation?&lt;/p&gt;
&lt;p&gt;All measurements done with &lt;br /&gt;- nRF SDK 2.5.0&lt;br /&gt;- hardware: nRF5340-DK&lt;br /&gt;- build configuration: nrf5340dk_nrf5340_cpuapp&lt;br /&gt;- build optimization level: default &lt;br /&gt;- application: fund_less5_exer1_solution (see previous post)&lt;br /&gt;- extended prj.conf to enable tracing (see previous post)&lt;br /&gt;- only RECEIVE_TIMEOUT has been changed between measurements&lt;/p&gt;
&lt;p&gt;Regards&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item></channel></rss>