
Peer storage doesn't scale #6652

Open · morehouse opened this issue Sep 7, 2023 · 3 comments
morehouse (Contributor) commented Sep 7, 2023
If SCBs get larger than 64KB, the peer_storage message sent to peers gets truncated. This means that later, when the SCB is sent back in the your_peer_storage message, it fails the MAC check.

Based on typical scb_chan serialized sizes, peer storage starts to break at ~500 channels. At maximum scb_chan serialized sizes, things break at ~200 channels.

One potential solution is to packetize large SCBs for transfer to/from peers.
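
For reference, a back-of-the-envelope check of those figures. The per-channel sizes below are assumptions reverse-engineered from the counts above, not measured values; the 65535-byte cap is the BOLT #1 limit on a single Lightning message:

```python
# Rough arithmetic behind the channel counts above. Lightning messages
# are capped at 65535 bytes (BOLT #1), so the peer_storage payload
# cannot exceed ~64KB.
MAX_MSG_BYTES = 65535

# Assumed per-channel serialized sizes, back-derived from the figures
# in this issue -- not measured values.
TYPICAL_SCB_CHAN_BYTES = 130
MAX_SCB_CHAN_BYTES = 327

print(MAX_MSG_BYTES // TYPICAL_SCB_CHAN_BYTES)  # ~504 channels
print(MAX_MSG_BYTES // MAX_SCB_CHAN_BYTES)      # ~200 channels
```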

vincenzopalazzo added the protocol and discussion labels Sep 7, 2023
vincenzopalazzo added this to the v23.11 milestone Sep 7, 2023
vincenzopalazzo self-assigned this Sep 7, 2023
vincenzopalazzo (Collaborator) commented:
> One potential solution is to packetize large SCBs for transfer to/from peers.

I need to look at the protocol specification again, but this looks like a good option.

The only comment I have is that we may also want to impose a limit on the number of packets used to split a message. Alternatively, we could create a BLIP that lets nodes specify the maximum size the receiver will accept, as in the sketch below.

I think one of the current limitations of the peer_storage proposal is that it only supports small nodes, because if you have 200 channels you certainly want to use some other backup method. However, I think allowing a bigger peer_storage is useful in case of disaster.
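
A minimal sketch of the sender-side guard this would imply, assuming a hypothetical negotiated limit on total size and packet count (none of these fields exist in the spec or any BLIP yet):

```python
# Hypothetical sender-side guard: refuse to send a backup that exceeds
# what the receiver advertised. The parameter names and limits here are
# illustrative only.
MAX_MSG_BYTES = 65535  # BOLT #1 cap on a single message

def plan_packets(scb: bytes, peer_max_bytes: int, peer_max_packets: int):
    """Split an SCB into <=64KB packets, or fail if the peer won't take it."""
    if len(scb) > peer_max_bytes:
        raise ValueError("backup exceeds peer's advertised storage limit")
    packets = [scb[i:i + MAX_MSG_BYTES]
               for i in range(0, len(scb), MAX_MSG_BYTES)]
    if len(packets) > peer_max_packets:
        raise ValueError("backup needs more packets than the peer allows")
    return packets
```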

adi2011 (Collaborator) commented Nov 26, 2023

I agree this scheme doesn't scale for big node operators, but ideally they shouldn't be relying on it anyway. Still, I have an idea to scale it further; please give your feedback on it.

What if we introduce new message types that append to the existing backup? This way, nodes could distribute more than 65KB of data, similar to the *_continue messages in the commando plugin.

I know this could lead to spamming, but we can pre-specify the amount of data we'll be sending to the peer (in any pre-existing message, if they support the peer_storage feature), so they can safely ignore us if we send a larger amount.

For example, if my peer offers a rate of 1 sat/KB, I could send 200KB of data using peer_storage and peer_storage_continue (~4 packets). After receiving the acknowledgements (your_peer_storage and your_peer_storage_continue), I would then pay them 200 satoshis.
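
A rough sketch of that flow, using the proposed peer_storage_continue name and the 1 sat/KB rate from the example; the announcement mechanism and message encoding are invented for illustration:

```python
import math

MAX_MSG_BYTES = 65535  # BOLT #1 cap on a single message

def send_backup(backup: bytes, rate_sat_per_kb: int):
    """Announce the total size, stream the chunks, then compute the fee.

    peer_storage carries the first chunk and each (proposed)
    peer_storage_continue carries one more; the peer can drop the
    transfer if we exceed the announced total.
    """
    total_kb = math.ceil(len(backup) / 1000)
    announce = ("announce", total_kb)  # piggybacked on an existing message

    chunks = [backup[i:i + MAX_MSG_BYTES]
              for i in range(0, len(backup), MAX_MSG_BYTES)]
    messages = [("peer_storage", chunks[0])]
    messages += [("peer_storage_continue", c) for c in chunks[1:]]

    fee_sat = total_kb * rate_sat_per_kb
    return announce, messages, fee_sat

# The example from the comment: 200KB at 1 sat/KB.
_, msgs, fee = send_backup(b"\x00" * 200_000, rate_sat_per_kb=1)
print(len(msgs), fee)  # 4 packets, 200 sat
```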

cdecker (Member) commented Mar 9, 2024

Well, your problem is actually the solution: having a lot of peers. Just don't deliver the same emergency recovery information to each of them.

Pick a partitioning of your data and use error-correcting codes to encode N backups into M packets, where N is the number of channels and M is the number of peers. Then give each peer its packet. The redundancy isn't quite as good as giving each peer a full copy, but combining enough shares will reconstruct the full backup. Each packet can even include hints about where to get the next segment of data.
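
As a toy illustration of the shape of this approach, here is a sketch using a single XOR parity share in place of a real erasure code: k data shares plus one parity share, so any one lost share can be rebuilt from the rest. A real deployment would use a proper (N, M) code such as Reed-Solomon to tolerate more losses:

```python
from functools import reduce

def make_shares(backup: bytes, k: int):
    """Split a backup into k data shares plus one XOR parity share.

    Toy stand-in for a real (N, M) erasure code: with k+1 peers each
    holding one share, any single lost share is recoverable.
    """
    size = -(-len(backup) // k)  # ceiling division
    padded = backup.ljust(size * k, b"\x00")
    shares = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shares))
    return shares + [parity]

def recover(shares, missing_index: int, k: int):
    """Rebuild the one missing share by XORing the remaining ones."""
    present = [s for i, s in enumerate(shares) if i != missing_index]
    rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*present))
    data = shares[:missing_index] + [rebuilt] + shares[missing_index + 1:]
    return b"".join(data[:k])  # drop the parity share, keep padded data

shares = make_shares(b"emergency recovery data", k=4)
restored = recover(shares, missing_index=2, k=4)
assert restored.rstrip(b"\x00") == b"emergency recovery data"
```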
