Fixes in internal logic of sending packets #124

Merged: 40 commits merged into FreeRTOS:main from fix/send-packet-infinite-loop on Dec 7, 2020

Conversation

aggarw13 (Contributor) commented Dec 5, 2020

This PR fixes two different issues, one in the sendPacket internal function and one in the sendPublish internal function, both related to sending packets over the network.

Issue 1

For sendPacket, there is a possibility of an infinite loop if the TransportSend_t implementation always reports zero bytes sent. The PR addresses this by adding timeout logic for retrying zero-byte send attempts; the timeout can be configured through the MQTT_SEND_RETRY_TIMEOUT_MS macro.
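
For context, a minimal sketch of the retry-timeout idea follows (this is not the actual coreMQTT implementation: the helper name and overall structure are illustrative assumptions, while MQTT_SEND_RETRY_TIMEOUT_MS, NetworkContext_t, and TransportSend_t are the library's names).

#include <stdint.h>
#include <stddef.h>
#include "transport_interface.h"

/* Placeholder default so the sketch is self-contained; the real default
 * lives in the library's configuration headers. */
#ifndef MQTT_SEND_RETRY_TIMEOUT_MS
    #define MQTT_SEND_RETRY_TIMEOUT_MS    ( 10000U )
#endif

static int32_t sendWithRetryTimeout( NetworkContext_t * pNetworkContext,
                                     TransportSend_t transportSend,
                                     uint32_t ( * getTimeMs )( void ),
                                     const uint8_t * pBuffer,
                                     size_t bytesToSend )
{
    size_t bytesRemaining = bytesToSend;
    int32_t totalBytesSent = 0;
    uint32_t lastSendTimeMs = getTimeMs();

    while( bytesRemaining > 0U )
    {
        int32_t bytesSent = transportSend( pNetworkContext,
                                           &pBuffer[ bytesToSend - bytesRemaining ],
                                           bytesRemaining );

        if( bytesSent < 0 )
        {
            /* Transport error: propagate the failure immediately. */
            totalBytesSent = bytesSent;
            break;
        }
        else if( bytesSent > 0 )
        {
            /* Progress was made: consume the sent bytes and reset the retry timer. */
            bytesRemaining -= ( size_t ) bytesSent;
            totalBytesSent += bytesSent;
            lastSendTimeMs = getTimeMs();
        }
        else if( ( getTimeMs() - lastSendTimeMs ) >= MQTT_SEND_RETRY_TIMEOUT_MS )
        {
            /* Zero bytes sent for longer than the configured timeout:
             * bail out instead of looping forever. */
            totalBytesSent = -1;
            break;
        }
    }

    return totalBytesSent;
}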

Issue 2

For the sendPublish function (which calls sendPacket to send the complete packet over the network), there is a bug where a partial or zero-byte return value from sendPacket is treated as success. This PR updates the logic so that partial or zero transmission of a packet is treated as a failure in the MQTT_Publish API call.
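
As a rough illustration of the second fix, the caller-side check looks something like this (a simplified sketch, not the exact library code; sendPacket's signature here is inferred from the diff discussed later in this conversation).

/* Simplified sketch: anything short of a full transmission of the PUBLISH
 * header is now treated as a send failure by the caller. */
int32_t bytesSent = sendPacket( pContext,
                                pContext->networkBuffer.pBuffer,
                                headerSize );

if( bytesSent < ( int32_t ) headerSize )
{
    /* Zero or partial transmission is no longer treated as success. */
    status = MQTTSendFailed;
}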

aggarw13 and others added 30 commits December 3, 2020 03:05
Co-authored-by: Muneeb Ahmed <54290492+muneebahmed10@users.noreply.github.com>

networkBuffer.size = 1000;
incomingPacket.type = MQTT_PACKET_TYPE_PUBLISH;
incomingPacket.remainingLength = MQTT_SAMPLE_REMAINING_LENGTH;

globalEntryTime = UINT32_MAX - MQTT_OVERFLOW_OFFSET;
- expectedFinalTime = MQTT_TIMER_CALLS_PER_ITERATION * numIterations - MQTT_OVERFLOW_OFFSET;
+ expectedFinalTime = globalEntryTime + ( numIterations * MQTT_TIMER_CALLS_PER_ITERATION ) + 1;
muneebahmed10 (Contributor) commented Dec 5, 2020

Nit: Why this change?

aggarw13 (Contributor, Author)

This is correcting the calculation of the expected time counter after the call to MQTT_ProcessLoop. The +1 is for the call made to getTime before all iterations of the loop in the API.

Contributor

What's the correction? This value is the same as before, isn't it?
(UINT32_MAX - MQTT_OVERFLOW_OFFSET) + (numIterations * MQTT_TIMER_CALLS_PER_ITERATION) + 1 is the same as
((0) - MQTT_OVERFLOW_OFFSET) + (numIterations * MQTT_TIMER_CALLS_PER_ITERATION), or
MQTT_TIMER_CALLS_PER_ITERATION * numIterations - MQTT_OVERFLOW_OFFSET.

Also, basing expectedFinalTime off of globalEntryTime results in a calculation that always yields the final value, regardless of whether the counter rolls over back to 0 or not, which is what this test seems to be checking (e.g. if someone accidentally changed the value of globalEntryTime so there's no longer a rollover, this test would still pass).

Ultimately it doesn't matter, since the value is the same as it was before, but that's why this change looked weird to me.
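
For reference, a standalone check of that equivalence under 32-bit unsigned wraparound (the constant values below are placeholders, not the test's actual macro values):

#include <stdint.h>
#include <stdio.h>

int main( void )
{
    /* Placeholder stand-ins for MQTT_OVERFLOW_OFFSET,
     * MQTT_TIMER_CALLS_PER_ITERATION, and numIterations. */
    const uint32_t offset = 3U;
    const uint32_t callsPerIteration = 6U;
    const uint32_t numIterations = 1667U;

    uint32_t globalEntryTime = UINT32_MAX - offset;
    uint32_t oldExpected = ( callsPerIteration * numIterations ) - offset;
    uint32_t newExpected = globalEntryTime + ( numIterations * callsPerIteration ) + 1U;

    /* uint32_t arithmetic wraps modulo 2^32, so both expressions print the
     * same value (9999 with these placeholders). */
    printf( "%u %u\n", ( unsigned ) oldExpected, ( unsigned ) newExpected );
    return 0;
}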

aggarw13 (Contributor, Author)

> Also, basing expectedFinalTime off of globalEntryTime results in a calculation that always yields the final value, regardless of whether the counter rolls over back to 0 or not,

This is the main motivation from a resiliency and readability perspective.

muneebahmed10 (Contributor) commented Dec 7, 2020

I was saying it's less resilient: before, accidentally setting globalEntryTime incorrectly could be caught because the test would fail. Now, because expectedFinalTime is calculated from globalEntryTime, the test will pass regardless of the initial value of the entry time.

- uint8_t numIterations = ( MQTT_TIMER_OVERFLOW_TIMEOUT_MS / MQTT_TIMER_CALLS_PER_ITERATION ) + 1;
+ uint8_t numIterations = ( ( MQTT_TIMER_OVERFLOW_TIMEOUT_MS % MQTT_TIMER_CALLS_PER_ITERATION ) == 0 ) ?
+     ( MQTT_TIMER_OVERFLOW_TIMEOUT_MS / MQTT_TIMER_CALLS_PER_ITERATION ) :
+     ( ( MQTT_TIMER_OVERFLOW_TIMEOUT_MS / MQTT_TIMER_CALLS_PER_ITERATION ) + 1 );
Contributor

Is this just ceil( a / b )? I believe something like ( MQTT_TIMER_OVERFLOW_TIMEOUT_MS + MQTT_TIMER_CALLS_PER_ITERATION - 1 ) / MQTT_TIMER_CALLS_PER_ITERATION would be equivalent and somewhat shorter, right? And a comment explaining that this is just a / b rounded up?
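
A minimal illustration of that integer ceiling division (CEIL_DIV is a hypothetical helper name, and the example values are made up since the actual macro values are not shown in this thread):

/* Integer ceiling division for positive a and b: ceil( a / b ). */
#define CEIL_DIV( a, b )    ( ( ( a ) + ( b ) - 1U ) / ( b ) )

/* With made-up values, e.g. a timeout of 10 and 3 timer calls per iteration,
 * CEIL_DIV( 10, 3 ) == 4, matching the ternary expression above. */
uint8_t numIterations = ( uint8_t ) CEIL_DIV( MQTT_TIMER_OVERFLOW_TIMEOUT_MS,
                                              MQTT_TIMER_CALLS_PER_ITERATION );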

aggarw13 (Contributor, Author)

Good suggestion, have made the suggested changes.

@@ -1386,7 +1408,7 @@ static MQTTStatus_t sendPublish( MQTTContext_t * pContext,
pContext->networkBuffer.pBuffer,
headerSize );

- if( bytesSent < 0 )
+ if( bytesSent < ( int32_t ) headerSize )
Contributor

Are you going to make the same change in the other places where sendPacket is called, or will that be a separate PR?

aggarw13 (Contributor, Author)

Separate PR, to avoid scope creep in this one.

* If the timeout expires, the #MQTT_ProcessLoop and #MQTT_ReceiveLoop functions
* return #MQTTSendFailed.
*
* <b>Possible values:</b> Any positive integer up to SIZE_MAX. Recommended to
Contributor

Now that I think about it, why do we say up to SIZE_MAX if it will only be compared to a uint32_t? On our dev systems, isn't that 2^64 - 1? Shouldn't this (and others) be "any positive 32-bit integer"?

aggarw13 (Contributor, Author)

Sure, good point. Made the change.
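
For illustration, an application-level override of this macro might look like the following (the 20000 ms value is an arbitrary example, and core_mqtt_config.h is assumed as the usual place for coreMQTT configuration overrides):

/* In the application's core_mqtt_config.h. The value must fit in a uint32_t,
 * since it is compared against millisecond timestamps from the user-supplied
 * 32-bit time function. */
#define MQTT_SEND_RETRY_TIMEOUT_MS    ( 20000U )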

@@ -183,7 +183,10 @@ typedef int32_t ( * TransportRecv_t )( NetworkContext_t * pNetworkContext,
* @param[in] pBuffer Buffer containing the bytes to send over the network stack.
* @param[in] bytesToSend Number of bytes to send over the network.
*
- * @return The number of bytes sent or a negative error code.
+ * @return The number of bytes sent or a negative value to indicate error.
+ * If no data is transmitted over the network due to a full TX buffer and
Contributor

Nit: We used @note here when making a similar comment for TransportRecv_t. Also, I think there would have to be an empty line between the @return and @note blocks, judging from the other doxygen comments.

aggarw13 (Contributor, Author)

Made the API doc for send and receive consistent.
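
To make the documented contract concrete, here is a rough sketch of a transport send built on a non-blocking POSIX socket (the NetworkContext definition and the function name are assumptions of this example, not code from the library or this PR):

#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* The application defines struct NetworkContext; this sketch assumes it
 * simply wraps a non-blocking POSIX socket descriptor. */
struct NetworkContext
{
    int socketFd;
};
typedef struct NetworkContext NetworkContext_t;

/* Rough sketch of a send implementation following the documented contract:
 *  > 0 : number of bytes actually written,
 *    0 : nothing written because the TX buffer is full (send would block),
 *  < 0 : unrecoverable transport error. */
int32_t Posix_TransportSend( NetworkContext_t * pNetworkContext,
                             const void * pBuffer,
                             size_t bytesToSend )
{
    ssize_t result = send( pNetworkContext->socketFd, pBuffer, bytesToSend, 0 );

    if( ( result < 0 ) && ( ( errno == EWOULDBLOCK ) || ( errno == EAGAIN ) ) )
    {
        /* Full TX buffer: report zero bytes sent so that the caller can keep
         * retrying until its send timeout expires. */
        result = 0;
    }

    return ( int32_t ) result;
}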

muneebahmed10 previously approved these changes Dec 7, 2020


Co-authored-by: Muneeb Ahmed <54290492+muneebahmed10@users.noreply.github.com>
@gkwicker gkwicker merged commit 0a2172b into FreeRTOS:main Dec 7, 2020
@aggarw13 aggarw13 deleted the fix/send-packet-infinite-loop branch December 22, 2020 08:04