
[action] [PR:19681] [Mellanox] Adding SKUs Mellanox-SN4700-O32 and Mellanox-SN4700-V64 (#19681) #19822

Closed
wants to merge 1 commit

Conversation

mssonicbld (Collaborator)

New SKUs for the MSN4700 platform: Mellanox-SN4700-O32 and Mellanox-SN4700-V64

Requirements for Mellanox-SN4700-O32:

8 x 400Gbps uplinks to T2 switches (O13 to O20)
24 x 400Gbps downlinks to T0 switches (O1-O12, O21-O32)
Breakout mode: none; all ports operate in 400Gb mode
FEC mode: RS
Type of transceiver: 400Gb optical
Warm boot supported: "No for T1 role"
VxLAN source port range set: N/A
Static Policy Based Hashing supported: N/A
Cable length: "T0-T1 40m default, 300m max; T1-T2 2000m"
Traditional buffer model is a must: "Yes"
Shared headroom supported: "Yes"
Over-subscription ratio: "2"
Requirements for Mellanox-SN4700-V64:

16 x 200Gbps uplinks to T1 switches (V-25&26 to V-39&40)
48 x 200Gbps downlinks to servers (left-panel downlink ports: V-1&2 to V-23&24; right-panel downlink ports: V-41&42 to V-63&64)
Breakout mode: split from 400Gbps ports (2x200)
FEC mode: RS
Type of transceiver: 200Gb AOC between T0 and T1; 200Gb DAC between T0 and host
Warm boot supported: "Yes for T0 role"
VxLAN source port range set: N/A
Static Policy Based Hashing supported: N/A
Cable length: "T0-T1 40m default, 300m max; T0-Server 5m"
Traditional buffer model is a must: "Yes"
Shared headroom supported: "Yes"
Over-subscription ratio: "2"
Additional Details:

QoS configs for Mellanox-SN4700-V64 updated to fulfill Dual-ToR buffer (+ DSCP remapping) requirements
Independent-module support added for both SKUs, so auto-negotiation is changed to NO

Signed-off-by: Andriy Yurkiv <ayurkiv@nvidia.com>

@mssonicbld (Collaborator, Author)

Original PR: #19681

@bingwang-ms (Contributor)

/azp run


Azure Pipelines successfully started running 1 pipeline(s).

@ayurkiv-nvda (Contributor)

/azp run


Commenter does not have sufficient privileges for PR 19822 in repo sonic-net/sonic-buildimage

@bingwang-ms (Contributor)

/azp run


Azure Pipelines successfully started running 1 pipeline(s).

@ayurkiv-nvda (Contributor)

PR checks are failing because of broken soft links to the file device/mellanox/x86_64-nvidia_sn4280-r0/ACS-SN4280/pg_profile_lookup.ini:

device/mellanox/x86_64-mlnx_msn4700-r0/Mellanox-SN4700-O32/pg_profile_lookup.ini -> ../../x86_64-nvidia_sn4280-r0/ACS-SN4280/pg_profile_lookup.ini
device/mellanox/x86_64-mlnx_msn4700-r0/Mellanox-SN4700-V64/pg_profile_lookup.ini -> ../../x86_64-nvidia_sn4280-r0/ACS-SN4280/pg_profile_lookup.ini

This causes an error during the build:

# Build the package
dpkg-buildpackage -rfakeroot -b -us -uc -j12 --admindir /sonic/dpkg/tmp.UYkaSmwdoV
popd
mv sonic-device-data_1.0-1_all.deb /sonic/target/debs/bullseye/
/sonic/src/sonic-device-data/src /sonic/src/sonic-device-data
cp: cannot stat '../../../device/mellanox/x86_64-mlnx_msn4700-r0/Mellanox-SN4700-O32/pg_profile_lookup.ini': No such file or directory
cp: cannot stat '../../../device/mellanox/x86_64-mlnx_msn4700-r0/Mellanox-SN4700-V64/pg_profile_lookup.ini': No such file or directory

The links are broken because the following PR wasn't cherry-picked to 202311: #19312
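The failure mode above can be reproduced in miniature: a relative symlink is only valid if its target exists relative to the link's location, and GNU `find -xtype l` flags dangling links before `dpkg-buildpackage` trips over them with `cp: cannot stat`. The sketch below uses a throwaway directory under /tmp as an illustrative stand-in for the sonic-buildimage device tree, not the real repo checkout.

```shell
# Recreate the situation: a SKU directory whose pg_profile_lookup.ini is a
# relative symlink to a file that is missing on this branch.
base=/tmp/skudemo/device/mellanox
mkdir -p "$base/x86_64-mlnx_msn4700-r0/Mellanox-SN4700-O32"
cd "$base/x86_64-mlnx_msn4700-r0/Mellanox-SN4700-O32"

# Same relative target as in the failing build; the SN4280 file does not
# exist here, so the link dangles.
ln -sf ../../x86_64-nvidia_sn4280-r0/ACS-SN4280/pg_profile_lookup.ini \
    pg_profile_lookup.ini

# -xtype l matches symlinks whose target cannot be resolved, i.e. broken links.
find "$base" -xtype l
```

Running the same `find` over `device/` in a checkout of the target branch would surface the dangling links before the package build fails.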

@yanmo96

yanmo96 commented Aug 16, 2024

/azp run


Commenter does not have sufficient privileges for PR 19822 in repo sonic-net/sonic-buildimage

@yanmo96

yanmo96 commented Aug 16, 2024

/azp run


Commenter does not have sufficient privileges for PR 19822 in repo sonic-net/sonic-buildimage

@bingwang-ms (Contributor)

/azp run Azure.sonic-buildimage


Azure Pipelines successfully started running 1 pipeline(s).

@bingwang-ms (Contributor)


@ayurkiv-nvda Can you check the comments in PR #19312? Is #19312 required or not?

@ayurkiv-nvda (Contributor)

PR checks are failing because of broken soft links to the file device/mellanox/x86_64-nvidia_sn4280-r0/ACS-SN4280/pg_profile_lookup.ini

Another PR is needed to merge V64 and O32 into 202311: #20002
Closing this one.
