
[Mellanox] Support SN4600C-C64 as T1 switch in dual-ToR scenario #11261

Merged: 7 commits merged into sonic-net:master from dual-tor-t1-c64 on Jul 20, 2022

Conversation

@stephenxs (Collaborator) commented on Jun 27, 2022:

Why I did it

Support Mellanox-SN4600C-C64 as a T1 switch in the dual-ToR scenario.
This is to port #11032 and #11299 from 202012 to master.

  1. Support an additional queue and PG in the buffer templates, covering both the traditional and the dynamic buffer model.
  2. Support mapping DSCP 2/6 to lossless traffic in the QoS template (see the map sketch after this list).
  3. Add macros to generate the additional lossless PGs in the dynamic model.
  4. Adjust the order in which the generic and dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in the common template buffers_config.j2 (see the Jinja2 sketch after this list).
    • Buffer tables are rendered using macros.
    • Both generic and dedicated macros are defined on our platform. Previously, the generic macro was called whenever it was defined, so it was always the one invoked on our platform. To avoid this, the dedicated macro is now checked and called first, and only then the generic one.
  5. Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues (see the map sketch after this list).
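
For item 2, a minimal sketch of the DSCP_TO_TC_MAP fragment the QoS template would render, assuming the conventional AZURE map name and that DSCP 3/4 remain the standard lossless code points; only the lossless entries are shown, and the values are illustrative rather than copied from this PR:

```jinja2
{#- Fragment of the rendered QoS config: DSCP 2 and 6 join 3 and 4 as
    lossless code points, each mapped to a matching traffic class. -#}
"DSCP_TO_TC_MAP": {
    "AZURE": {
        "2": "2",
        "3": "3",
        "4": "4",
        "6": "6"
    }
}
```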
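For item 4, a minimal sketch of the reordered macro dispatch in buffers_config.j2. The names generate_queue_buffers_with_extra_lossless_queues and generate_queue_buffers stand in for the dedicated and generic macros; the PR's actual macro names and signatures may differ, and the same check order would apply to the PG-generating macros:

```jinja2
{#- Probe the dedicated macro first, so a platform that defines both
    no longer falls through to the generic rendering path. -#}
{%- if generate_queue_buffers_with_extra_lossless_queues is defined %}
{{ generate_queue_buffers_with_extra_lossless_queues(port_names_active) }}
{%- elif generate_queue_buffers is defined %}
{{ generate_queue_buffers(port_names_active) }}
{%- else %}
{#- common default BUFFER_QUEUE rendering goes here -#}
{%- endif %}
```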
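For item 5, the PFC-priority-to-PG map on ports with the additional lossless queues would plausibly render along these lines (a sketch: the AZURE key and the exact priority set are assumptions derived from the DSCP 2/3/4/6 mapping above, not taken from the PR):

```jinja2
{#- Each lossless priority maps to the priority group of the same
    number, so a PFC pause frame stops the matching PG. -#}
"PFC_PRIORITY_TO_PRIORITY_GROUP_MAP": {
    "AZURE": {
        "2": "2",
        "3": "3",
        "4": "4",
        "6": "6"
    }
}
```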

On Mellanox-SN4600C-C64, the T1 buffer configuration is calculated as follows (a sketch of the resulting tables appears after this list):

  • 40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
  • 16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues
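
A minimal sketch of the BUFFER_PG entries this implies once rendered, assuming lossless PGs 2-4 and 6 on downlinks (matching the DSCP 2/3/4/6 mapping above) and the standard 3-4 on uplinks; the port names, cable lengths, and profile names follow common SONiC conventions but are illustrative, not taken from this PR:

```jinja2
{#- Downlink (e.g. Ethernet0): 4 lossless PGs (2-4 and 6) plus lossy PG 0.
    Uplink (e.g. Ethernet256): 2 lossless PGs (3-4) plus lossy PG 0. -#}
"BUFFER_PG": {
    "Ethernet0|0"     : { "profile": "ingress_lossy_profile" },
    "Ethernet0|2-4"   : { "profile": "pg_lossless_100000_5m_profile" },
    "Ethernet0|6"     : { "profile": "pg_lossless_100000_5m_profile" },
    "Ethernet256|0"   : { "profile": "ingress_lossy_profile" },
    "Ethernet256|3-4" : { "profile": "pg_lossless_100000_40m_profile" }
}
```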

Signed-off-by: Stephen Sun <stephens@nvidia.com>

How I did it

How to verify it

Run regression tests.

Which release branch to backport (provide reason below if selected)

  • 201811
  • 201911
  • 202006
  • 202012
  • 202106
  • 202111
  • 202205

Description for the changelog

Link to config_db schema for YANG module changes

A picture of a cute animal (not mandatory but encouraged)

@stephenxs marked this pull request as ready for review July 13, 2022 09:32
@stephenxs requested review from a team and lguohan as code owners July 13, 2022 09:32
@stephenxs (Collaborator, Author):
Cleanly cherry-picking this to 202205 depends on #11448.

@stephenxs (Collaborator, Author):
/azpw run

@mssonicbld (Collaborator):
/AzurePipelines run

@azure-pipelines:
You have several pipelines (over 10) configured to build pull requests in this repository. Specify which pipelines you would like to run by using the /azp run [pipelines] command. You can specify multiple pipelines using a comma-separated list.

@stephenxs (Collaborator, Author):
/azpw run Azure.sonic-buildimage

@mssonicbld (Collaborator):
/AzurePipelines run Azure.sonic-buildimage

@azure-pipelines:
Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs (Collaborator, Author):
/azpw run Azure.sonic-buildimage

@mssonicbld (Collaborator):
/AzurePipelines run Azure.sonic-buildimage

@azure-pipelines:
Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs (Collaborator, Author) commented on Jul 19, 2022:
Previously it failed because the t1-lag timeout was reached; #11478 extends it.

@stephenxs closed this Jul 19, 2022
@stephenxs reopened this Jul 19, 2022
@stephenxs (Collaborator, Author):
/azpw run azure.sonic-buildimage

@mssonicbld (Collaborator):
/AzurePipelines run azure.sonic-buildimage

@azure-pipelines:
Azure Pipelines successfully started running 1 pipeline(s).

@liat-grozovik merged commit 8f4a1b7 into sonic-net:master Jul 20, 2022
@liat-grozovik changed the title from "Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario" to "[Mellanox] Support SN4600C-C64 as T1 switch in dual-ToR scenario" Jul 20, 2022
@stephenxs deleted the dual-tor-t1-c64 branch July 20, 2022 06:50
yxieca pushed a commit that referenced this pull request Jul 28, 2022
skbarista pushed a commit to skbarista/sonic-buildimage that referenced this pull request Aug 17, 2022