p2p: Advertise NODE_FULL_RBF and connect to 4 outbound full-rbf peers if -mempoolfullrbf is set #25600
Conversation
Isn't this bad for privacy?
I think a mass-spying node could automatically connect to a node and broadcast an opt-out transaction, then a higher-fee replacement one, to detect if [...]. The other concern could be a few spying [...]. Though if you have more transaction-relay deanonymization attacks in mind, I'm listening.
A service flag makes it easier to build a list of nodes with a full-RBF policy. These nodes could be banned, or this information could be used in different attacks. I don't think it is required; asking LN users and some miners to use full RBF would be enough. Writing 'help' when lost on an island could result in 2 things:
This is a good point which I did not consider earlier, though I am not sure about announcing replacement transactions only to those peers.
Edit: Or just mark it as a temporary service bit like existing usage does.
Some initial general thoughts:
- As I understand it, the game theory works out such that as soon as there is a connected cluster of supporting nodes and enough connected mining power, full-rbf will work for everyone who wants to do it, and non-supporting nodes don't matter: they can't stop it from happening, nor rely on full-rbf not being possible (which they already shouldn't today, but some may do regardless).
- Because of the connectivity of the p2p network, even without any preferential peering, the percentage `p` of nodes that would need to support full-rbf for it to work with high reliability is rather small, way below 50%. I tried some simulations (which I need to test more, so I'm not too confident in the exact numbers), and the threshold for a >90% chance of a full-rbf-supporting route to exist between two full-rbf-supporting nodes seems to be around `p = 10%` for reasonably realistic network sizes (10k listening peers, 50k non-listening). See https://github.com/mzumsande/fullrbf_simulation for the code.
- This PR would lower the threshold `p` drastically, close to zero, so merging it would be a strong statement that we want full-rbf to be possible, even if it's just a small minority of nodes that support it. On the other hand, it adds additional complexity to the networking code, and I'm not sure if making p2p peering decisions based on policy is something we'd want in general. But if adding support for automatic preferential peering is such a strong statement anyway, why don't we just change the default of `mempoolfullrbf` to `true` instead and save us all the added complexity?
- Finally, just mentioning the possibility of false signaling: if other nodes want to prevent or delay full-rbf, they could signal full-rbf but not implement it. I'm not sure if we can detect or punish them for that.
Well, that's a fundamental trade-off between signaling policy to ease peering and withholding information. I think the relevant question is whether this additional information allows, or significantly eases, a concrete attack against node operators or funds safety. That said, even if such an attack is qualified, the risk could be bearable for the operator if the documentation is clear in that sense. E.g., in LN we recently specified zero-conf channels; they come with a lot of safety trade-offs, though they're a UX improvement for sure.
Beware life is risky :)
In case of uncertain privacy/security risks, security-by-compartmentalization.
Well, I used the same experimental bit 26 as the patchset from @petertodd. Let me know if you think I should use another one in the range of bits 24-31 because of conflicts with existing bitcoin software.
src/protocol.cpp
Outdated
@@ -196,6 +196,7 @@ static std::string serviceFlagToStr(size_t bit)
     case NODE_WITNESS: return "WITNESS";
     case NODE_COMPACT_FILTERS: return "COMPACT_FILTERS";
     case NODE_NETWORK_LIMITED: return "NETWORK_LIMITED";
     case NODE_REPLACE_BY_FEE: return "FULL_RBF";
case NODE_REPLACE_BY_FEE: return "FULL_RBF";
case NODE_REPLACE_BY_FEE: return "REPLACE_BY_FEE?";
It's also fine to call this service bit `NODE_FULL_RBF`. The `NODE_REPLACE_BY_FEE` name comes from code so old that the terminology hadn't been settled yet and opt-in RBF didn't exist.
Renamed to `NODE_FULL_RBF`.
For more perspective, I would like to recall that seeing full-rbf more widely deployed on the p2p network makes multi-party funded transactions more robust, at the price of making zero-conf-accepting services more vulnerable to double-spend attacks. I think it raises an interesting question: how, as the Bitcoin Core project, should we weigh a policy A against a policy B when they have different sets of impacts on Bitcoin applications? In my opinion, the discussion can be tie-broken on the hard physics of the system, recognizing there is no such thing as a global mempool in a distributed system, and that making assumptions about transaction ordering is therefore inherently unsafe. All of that to say, I would prefer not to adopt a game-theory reasoning framework, as ultimately game theory, at least for zero-sum games, relies on the assumption of moves made by sets of players in pursuit of a gain at the detriment of the other subsets. I think of this change more as an "organic" policy release engineering practice, namely giving node operators the tools to progressively activate full-rbf and build the full-rbf-supporting routes. Even if we can assume an active subset of node operators quickly picking up the changes, given social inertia it won't happen overnight. This should give time for zero-conf-accepting services to either deprecate this mode of payment, move to safe instant payments like LN, or deploy (non-perfect) distributed mempool monitoring systems. However, I acknowledge this is an idealized policy release practice: adding automatic preferential peering might be too strong a statement, the timing might be too short (should we give another buffer release before introducing automatic preferential peering?), or we should give more time for technical dissent from zero-conf service providers or anyone else to be expressed. Posting on the mailing list was done for this purpose.

Beyond that, I think this discussion is still interesting even absent a policy reversal like full-rbf: when we deploy new transaction-relay policy changes (e.g. Erlay, package relay, new RBF rules), we might firewall the new policy changes behind settings to first observe the performance, privacy-preserving, or robustness properties of the new features. We could clean up the added code complexity once we estimate the feature is mature. I think it's interesting to have a robust connectivity model of the p2p network, with factors like the percentage `p` of supporting nodes.
@@ -1053,6 +1053,8 @@ bool CConnman::AddConnection(const std::string& address, ConnectionType conn_typ
     case ConnectionType::BLOCK_RELAY:
         max_connections = m_max_outbound_block_relay;
         break;
     case ConnectionType::FULLRBF:
         max_connections = m_max_outbound_fullrbf_relay;
     // no limit for ADDR_FETCH because -seednode has no limit either
This is getting out of scope for this review. But as an aside, IIUC this function includes manually added connections in its max connections count. I'm not sure that's correct. If I manually add a bunch of nodes that match some category, that doesn't necessarily mean I only want my node connecting to the manually added nodes.
Oh, actually, looks like this code doesn't do anything in practice, as `AddConnection` is only used by the `addconnection` RPC call.
Oh right, that's correct, my mistake. IMHO, we do have a confusing connection-type model, with a "flat" approach instead of an attribute-based one, as argued previously in #18044 (review). I'll fix that soon.
As you raised, manually adding a bunch of nodes shouldn't necessarily mean that you only want your node to connect to manually added nodes, as you might like to benefit from the automatic discovery and connection logic. It sounds really artificial for us to constrain node operators in their peer topology when there is a deliberate choice to deviate from the spontaneous topology generated by `ThreadOpenConnections`.

That said, this approach of enforcing connection-limit controls per connection type is already what `AddConnection` implements; not subjecting outbound full-rbf peers to limit controls would deviate from that behavior. So I think the real discussion we should have is whether those checks really make sense in `ThreadOpenConnections`. Though as that point precedes this PR, I would leave it as is.

Beyond that, I wonder if it really matters, as `addconnection` is regtest-only (cf. L347 in `src/rpc/net.cpp`).
src/net.h
Outdated
@@ -64,6 +64,8 @@ static const unsigned int MAX_PROTOCOL_MESSAGE_LENGTH = 4 * 1000 * 1000;
 static const unsigned int MAX_SUBVERSION_LENGTH = 256;
 /** Maximum number of automatic outgoing nodes over which we'll relay everything (blocks, tx, addrs, etc) */
 static const int MAX_OUTBOUND_FULL_RELAY_CONNECTIONS = 8;
 /** Maximum number of automatic fullrbf peers */
 static const int MAX_FULLRBF_RELAY_CONNECTIONS = 1;
I believe this should be higher. With just ~1 full-RBF peer per node, a full-RBF tx propagates through a network that is "chain-like": at each propagation step, the number of new nodes receiving the tx is approximately one.

You should set this higher, to 4 or so, to ensure that propagation is "exponential" in nature, with each propagation step reaching more and more nodes.
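The chain-vs-exponential point can be made concrete with a back-of-the-envelope branching count. This sketch ignores overlap between peers and any extra full-rbf links arising from the regular 8 outbound connections, so the figures are optimistic upper bounds rather than a propagation model.

```python
def reach(fanout, hops):
    """Upper bound on nodes reached after `hops` relay steps when every
    full-rbf node forwards to `fanout` dedicated full-rbf peers."""
    total, frontier = 1, 1
    for _ in range(hops):
        frontier *= fanout
        total += frontier
    return total

print(reach(1, 6))  # chain-like: 7 nodes after 6 hops
print(reach(4, 6))  # exponential: 5461 nodes after 6 hops
```

With a fan-out of 1 the transaction crawls down a chain; with 4 it covers thousands of candidate nodes within a handful of hops.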
You also have to take into account that each node makes 8 automatic outgoing connections that might by chance connect to more full-RBF peers. This chance becomes higher if more peers run full-RBF. So it really comes down to how small of a minority of supporting nodes we require for full-RBF to still work reliably.
Also note that increasing automatic outbound slots too much could be problematic if full-rbf was adopted by many nodes, because there might not be sufficient inbound capacity. Studies like https://arxiv.org/pdf/2108.00815v2.pdf (Fig. 3) suggest that a significant number of nodes already operate close to their maximum inbound capacity today.
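The inbound-capacity concern can be sized with a rough slot budget. All the figures below are illustrative assumptions (60k nodes, 1-in-6 listening, 5% full-rbf adoption, 4 extra outbound full-rbf connections each, ~115 usable inbound slots per listener), not measurements.

```python
# Rough inbound-slot budget for the capacity concern above.
nodes, listening_share = 60_000, 1 / 6
adoption, extra_outbound = 0.05, 4
inbound_slots = 115  # default ~125 connections minus ~10 outbound

fullrbf_nodes = nodes * adoption                     # supporters network-wide
fullrbf_listeners = fullrbf_nodes * listening_share  # only these take inbound
extra_inbound = fullrbf_nodes * extra_outbound / fullrbf_listeners
print(f"~{extra_inbound:.0f} extra inbound connections per full-rbf listener")
```

Under these assumptions each listening full-rbf node would absorb roughly 24 extra inbound connections, a noticeable fraction of its ~115 inbound slots, which is the effect the linked study's capacity figures make relevant.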
> You also have to take into account that each node makes 8 automatic outgoing connections that might by chance connect to more full-RBF peers.

The chance is pretty low at small %'s of full-RBF nodes. I know none of my nodes were connected to RBF-advertising peers by accident without this patch.

> Also note that increasing automatic outbound slots too much could be problematic if full-rbf was adopted by many nodes

I think that's quite unlikely. I highly doubt more than a few % of nodes would enable this manually. Meanwhile, once a few % of nodes and miners have, any remaining arguments against enabling full-rbf by default are certainly bunk, as getting non-opt-in replacements mined will be easy.
I think the chance of being connected to a full-rbf subgraph by relying only on the 8 automatic outgoing connections sounds really low, even assuming you're a listening node. If you model something with ~60k nodes, ~125 connection slots, and 100 full-rbf nodes across the network, the success probability is something like ~1/60? This could explain the observed lack of accidental connections to RBF-advertising peers.

Thinking back to the model laid out above, rather than defining success as the existence of a full-rbf route between two full-rbf-supporting nodes, another success definition could be the propagation of a full-rbf tx to the quasi-unanimity of the full-rbf peers. Therefore, to achieve success we should aim to eradicate disconnected graphs of full-rbf peers. The exact value of `MAX_FULLRBF_RELAY_CONNECTIONS` would be a function of the set size of peers we assume enable full-rbf by option.

I think the concern about increased consumption of inbound capacity is qualified; however, I would say the best we can do is give ourselves a slot budget, i.e. assume a set size of full-rbf peers, a number of new automatic outbound connections, and an occupancy rate of the inbound slots of the listening peers, and gauge whether the result is acceptable.
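For what it's worth, the back-of-the-envelope arithmetic behind that estimate, using the same assumed numbers (100 full-rbf nodes among ~60k candidates, 8 automatic outbound connections, uniform random peer selection), lands in the same order of magnitude:

```python
full_rbf_nodes, candidates, outbound = 100, 60_000, 8

# P(at least one of the 8 outbound connections hits a full-rbf node)
p_hit = 1 - (1 - full_rbf_nodes / candidates) ** outbound
print(f"p = {p_hit:.4f} (about 1 in {1 / p_hit:.0f})")
```

That works out to roughly 1 in 75 under uniform selection, close to the ~1/60 figure quoted above and consistent with the reported absence of accidental connections.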
> The exact value of `MAX_FULLRBF_RELAY_CONNECTIONS` would be a function of the set size of peers we assume enable full-rbf by option.

No, that's not how this works. Since we're doing preferential peering, the total number of peers with full-rbf on the network isn't relevant. We simply need a figure high enough that a sufficient number of the full-rbf-signalling peers we connect to are, in fact, actual full-rbf peers that likewise do preferential peering. Second, by "sufficient number" we need something greater than 1, so we end up with a well-connected group rather than disconnected chains. 4+ should be fine.

So the actual figure we care about is: what % of peers with the full-rbf service bit set are running this code (or something like it)? There are a handful of nodes running Knots that advertise the service bit but don't do preferential peering. Once a reasonable number of people (a few dozen) use this flag, they probably won't matter, as AFAICT there are less than half a dozen such Knots-running nodes.
> We simply need a figure high enough that a sufficient number of full-rbf-signalling peers we connect to are in fact actual full-rbf peers that likewise have preferential peering. Second, by "sufficient number", we need something greater than 1 so we end up with a well-connected group rather than disconnected chains. 4+ should be fine.

Yes, I think it would be interesting to have a consistent propagation model to determine what exact "sufficient number" would ensure a well-connected full-rbf group rather than disconnected chains. I wonder if such models have been established in past Core discussions about p2p modifications or extensions. Otherwise, landing on the number 4+ sounds like it was found by empiricism and historical observation? (Not a bad method; we should just be aware of its limits.)
src/protocol.cpp
Outdated
@@ -196,6 +196,7 @@ static std::string serviceFlagToStr(size_t bit)
     case NODE_WITNESS: return "WITNESS";
     case NODE_COMPACT_FILTERS: return "COMPACT_FILTERS";
     case NODE_NETWORK_LIMITED: return "NETWORK_LIMITED";
     case NODE_REPLACE_BY_FEE: return "FULL_RBF";
It's also fine to call this service bit `NODE_FULL_RBF`. The `NODE_REPLACE_BY_FEE` name comes from code so old that the terminology hadn't been settled yet and opt-in RBF didn't exist.

It's not enough. As @mzumsande notes, you need a significant % of nodes supporting full-RBF propagation before txs propagate reliably; once the percolation threshold is reached, propagation suddenly becomes quite reliable. Also, as per my review, connecting to a single full-RBF peer is insufficient to achieve good propagation.
Banning nodes in the Bitcoin P2P network is pretty much pointless, as transactions will just propagate around the ban. In fact, preferential peering helps mitigate that (minor) attack, by making it even more likely that transactions will successfully propagate around the ban. Re: attacking nodes with the service bit set, I had a similar version of this patch years ago, prior to the development of Lightning, when full-RBF was much more contentious. To the best of my knowledge, no one ever attacked any full-RBF-advertising nodes.
Not sure if this is worth it. Now that the wallet/RPC signals RBF by default already (to support fee bumping), this pull would mostly benefit adversarial settings (such as Lightning), where signalling isn't possible. So assuming that in those adversarial settings there exists a direct connection to a full-rbf miner (either by running the code here, or with a manual [...])
Though this patch avoids the necessity of full-rbf miners identifying themselves, as well as making it easy to use multiple full-rbf miners at once. Without full-rbf by default, this is definitely the next best thing.

My preference as well.
FYI I tried setting [...]. Putting the [...]
Force-pushed from bbc5d58 to 3622534.
Sorry for the delay; updated at 3622534 with a few changes. From my understanding, there are a few conceptual viewpoints expressed about this change. One argues that releasing a full-rbf service bit and the corresponding automatic preferential peering is a strong statement toward the wide deployment of full-rbf, and that a more neutral position should be adopted by the project, at least for now. Another would shortcut the added complexity and go directly to turning on fullrbf by default. A last viewpoint would be to release only the p2p tools for a node operator to opt in and enforce the replacement policy adequate for its application, without aiming for coordinated network-wide deployment. To be clear, I still favor the last viewpoint: in light of a more complex ecosystem, I think we should aim for gradual deployment of new policy rules, not only to test the robustness/privacy/performance of the changes themselves but also to enable gradual adaptation by node operators and Bitcoin services. I think stability and predictability of our p2p and policy rules matter.
This doesn't make sense. The current de facto "position" is anti-RBF. Making it an option disabled by default is the very minimum to be neutral.
This isn't a new rule to the network, just Core. Knots and other software have supported it for years.
Policy is primarily a per-node matter, not really a per-software matter. P.S. Don't forget to rename the PR
To be clear, we always have a gradual deployment, because adoption of new Bitcoin Core releases is always a gradual process. Just enabling full-rbf by default would be a gradual deployment from that point of view. On the other hand, if you're talking about gradual deployment in the sense of new transaction types getting to miners, we always have a non-gradual deployment if preferential peering is not used, because the math behind percolation means there is a sharp transition between "not enough nodes to reliably propagate" and "enough nodes to reliably propagate".
I will reply to the summary of arguments from this comment.
I think a service bit enabled by a
This is a hard part for me. The opponents of
Updated at 0112763. Review comments about [...] Still pending on more tests.
Thanks for the review. Note that the proposal of introducing automatic preferential peering with a limited number of full-rbf-signaling peers is a procedural novelty itself in the way we release policy changes, imo. Rather than turning on by default [...]

In the former case, it's unclear to me what exact interest of an honest node is damaged by a malicious peer refusing connection, as another more valuable peer will be sought. In the latter case, while it sounds correct to me that you would waste an outbound slot of an honest node, this can already be achieved with any regular connection. I don't think we make strong assumptions of peer reliability in our p2p stack, hence the redundancy of connections. As laid out by other contributors, as long as there is 1 honest full-rbf peer, there is already a full-rbf propagation efficiency gain.
If the `mempoolfullrbf` config option is set, announce to our peers that we support fullrbf by advertising NODE_REPLACE_BY_FEE in our service flags. Ensure the service name is displayed by net-related RPC calls.

If the `mempoolfullrbf` config option is set, try to connect to at least 4 outbound-fullrbf-relay peers (`ConnectionType::FULLRBF`) in `ThreadOpenConnections`. An outbound-fullrbf-relay peer is considered as one signaling the NODE_REPLACE_BY_FEE service flag.
Force-pushed from 888ab75 to 0112763.
@@ -293,6 +293,8 @@ enum ServiceFlags : uint64_t {
     // collisions and other cases where nodes may be advertising a service they
     // do not actually support. Other service bits should be allocated via the
     // BIP process.
     NODE_FULL_RBF = (1 << 26),
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I assume the comment should be added?
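For readers following along, the service bit added in the diff above is just a bitmask in the peer's advertised `services` field. A minimal sketch of the check in Python (the constants mirror the bit positions from the diff; `has_full_rbf` is a hypothetical helper for illustration, not Bitcoin Core code):

```python
NODE_NETWORK = 1 << 0     # from the existing ServiceFlags enum
NODE_FULL_RBF = 1 << 26   # the new bit, in the experimental 24-31 range

def has_full_rbf(services: int) -> bool:
    """True if the peer's advertised service flags include full-rbf."""
    return bool(services & NODE_FULL_RBF)

print(has_full_rbf(NODE_NETWORK | NODE_FULL_RBF))  # True
print(has_full_rbf(NODE_NETWORK))                  # False
```

The same mask test is what both the RPC display code and the outbound-selection logic rely on.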
@@ -437,6 +438,7 @@ class NetinfoRequestHandler : public BaseRequestHandler
     if (conn_type == "block-relay-only") return "block";
     if (conn_type == "manual" || conn_type == "feeler") return conn_type;
     if (conn_type == "addr-fetch") return "addr";
     if (conn_type == "fullrbf") return "full-rbf";
This gets kinda ugly: "fullrbf" is also "outbound-full-relay", (well, so is "manual").
Perhaps "outbound-full-relay" should be renamed to "generic-outbound-full-relay" or something, I dunno.
@@ -1675,13 +1678,15 @@ void CConnman::ThreadOpenConnections(const std::vector<std::string> connect)
     // Only connect out to one peer per network group (/16 for IPv4).
     int nOutboundFullRelay = 0;
     int nOutboundBlockRelay = 0;
     int nOutboundFullRBF = 0;
nit: why use an ugly variable name in new code?
What do you suggest? The variable matches the naming in the previous two lines.
@@ -41,7 +41,8 @@ const std::vector<std::string> CONNECTION_TYPE_DOC{
     "inbound (initiated by the peer)",
     "manual (added via addnode RPC or -addnode/-connect configuration options)",
     "addr-fetch (short-lived automatic connection for soliciting addresses)",
     "feeler (short-lived automatic connection for testing addresses)",
     "fullrbf (long-lived automatic connection for full-rbf peers)"
This seems like new slang: what you call "full-rbf" is really "outbound full-rbf". I'd expand this, but as I mention in a prior comment, our categorization of peers is not that consistent anyway.
@@ -3177,6 +3177,13 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
         return;
     }
     if (pfrom.IsFullRBF() && !HasFullRBFServiceFlag(nServices))
     {
         LogPrint(BCLog::NET, "peer=%d does not offer fullrbf as expected; disconnecting\n", pfrom.GetId());
According to the current code, this is impossible? Because the only way to become `IsFullRBF()` is if the latter condition is true. In that case, should it be `Assume` instead?
This is an optional, opt-in, feature that only a subset of nodes will ever use. We do not need to analyze to death every single possibility, and in the event of these unlikely attacks, the minority affected can just turn the flag off (potentially with getting a new IP to disassociate from their old "identity").
Anyway, the phrasing "it's unclear to me what exact interest of a honest node" is usually just a polite way of saying "no one has convinced me this is a problem"
I don't believe your continued discussion here is either productive or in good faith.
Did you make all those assumptions in the last Core CVE, where something was not documented? Anyway, I am serious that this change will affect Bitcoin. I will report every change. If it still gets ignored, we can't do anything.
@ariard I am surprised that someone who guided me through a lot of RBF attacks and CVE-2022-35913 wants to go ahead with this. There are some issues which I don't want to repeat and which have been mentioned in the pull request. We can put our heads in the sand and go ahead merging this because we cannot read anything negative from new contributors. I will keep testing this after merging, and please don't blame me if there are any CVEs in any bitcoin projects.
Note: this is the first time the bitcoin network will have multiple replacement policies, afaik. We should be careful, and later make the default whatever is used most by the network, not by developers.
Bitcoin has always had different replacement policies: each release with different policies means there are different replacement policies. And of course, there have been full-rbf nodes for ages, including ones with preferential peering.
Re: CVE-2022-35913, that's an example that would be less of a problem with full-rbf, not more.
Closing this PR for now. I think there have been a lot of high-level conceptual and procedural issues not addressed during the last few weeks, which I'm aiming to work on more.
If this does get re-opened in the future, I noticed that the output of [...] needs a column-width fix:

diff --git a/src/bitcoin-cli.cpp b/src/bitcoin-cli.cpp
index 18aeea779..81a078e5c 100644
--- a/src/bitcoin-cli.cpp
+++ b/src/bitcoin-cli.cpp
@@ -525,7 +525,7 @@ public:
// Report detailed peer connections list sorted by direction and minimum ping time.
if (DetailsRequested() && !m_peers.empty()) {
std::sort(m_peers.begin(), m_peers.end());
- result += strprintf("<-> type net mping ping send recv txn blk hb %*s%*s%*s ",
+ result += strprintf("<-> type net mping ping send recv txn blk hb %*s%*s%*s ",
m_max_addr_processed_length, "addrp",
m_max_addr_rate_limited_length, "addrl",
m_max_age_length, "age");
@@ -534,7 +534,7 @@ public:
for (const Peer& peer : m_peers) {
std::string version{ToString(peer.version) + peer.sub_version};
result += strprintf(
- "%3s %6s %5s%7s%7s%5s%5s%5s%5s %2s %*s%*s%*s%*i %*s %-*s%s\n",
+ "%3s %8s %5s%7s%7s%5s%5s%5s%5s %2s %*s%*s%*s%*i %*s %-*s%s\n",
peer.is_outbound ? "out" : "in",
ConnectionTypeForNetinfo(peer.conn_type),
peer.network,
@@ -559,7 +559,7 @@ public:
IsAddressSelected() ? peer.addr : "",
IsVersionSelected() && version != "0" ? version : "");
}
- result += strprintf(" ms ms sec sec min min %*s\n\n", m_max_age_length, "min");
+ result += strprintf(" ms ms sec sec min min %*s\n\n", m_max_age_length, "min");
}
// Report peer connection totals by type.
I added your patch and a subsequent fix to my full-rbf tree: https://github.com/petertodd/bitcoin/tree/full-rbf-v24.0. If anyone has any further improvements for it, feel free to open a pull-req on my repo.
This comment would be valuable to address: #25600 (comment); the other ones, if I remember correctly, are not functional.
Related: https://twitter.com/dammkewl/status/1614920179221135360
In the context of #27926, I think we might have to put automatic transaction-relay preferential peering back on the table, therefore allowing full-node operators to opt in with [...]. Beyond such use-cases, I think the preferential peering could be abstracted, as we have for consensus-deployment code in [...].
`mempoolfullrbf` allows a node to accept replace-by-fee transactions without requiring replaceability signaling. While the replacement transaction is locally accepted, it will be rejected by all your opt-in-rbf peers (the default behavior). As such, the odds are low that the replacement transaction propagates to miners, unless full-rbf transaction-relay paths exist between the node's mempool and those miners' mempools. This PR aims to improve this situation by enabling the setup of full-rbf transaction-relay links, in both manual and automated fashions.

The first commit introduces a `NODE_REPLACE_BY_FEE` service flag to advertise support of full-rbf to our peers if `mempoolfullrbf` is set to true. From the results of `getpeerinfo` or `getnetworkinfo`, a node operator can find full-rbf peers on the p2p network to which to manually connect. The service flag bit selected is the 26th, to be compatible with previous full-rbf experiments.

The second commit introduces a `ConnectionType::FULLRBF`, used if `mempoolfullrbf` is set to true. In `ThreadOpenConnections`, we attempt to keep open at least 4 outbound full-rbf transaction-relay peers. An outbound full-rbf relay peer is considered as one signaling the `NODE_REPLACE_BY_FEE` service flag.

I think if we have concerns about the network-wide resource consumption from those new outbound full-rbf relay peers, we could deduct them from `MAX_ADDNODE_CONNECTIONS`. It should be weighed whether we have other concerns about the implications for transaction-relay topology robustness, or about spying peers flagging themselves as `NODE_REPLACE_BY_FEE` to attract early full-rbf traffic. What else?

I don't hold to the approach of re-using the current automatic connection logic in `ThreadOpenConnections`; if there are other approaches, I'm interested to consider them. I can split the PR in two, if automatic connection requires more thinking.

TODO:
- Make `MAX_FULLRBF_RELAY_CONNECTIONS` configurable up to `MAX_ADDNODE_CONNECTIONS`?
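The selection behavior described for the second commit can be sketched as follows. This is a simplified model with assumed names, not the actual `ThreadOpenConnections` code: it ignores netgroup limits, feelers, eviction, and address-manager details, and only shows the idea of dialing `NODE_REPLACE_BY_FEE`-signaling addresses until the outbound full-rbf target is met.

```python
MAX_FULLRBF_RELAY_CONNECTIONS = 4
NODE_FULL_RBF = 1 << 26  # bit 26, as in the PR

def pick_fullrbf_targets(candidates, current_fullrbf):
    """Return addresses to dial so that the outbound full-rbf target is met.

    candidates: iterable of (address, service_flags) pairs, e.g. from addrman.
    current_fullrbf: number of outbound full-rbf connections already open.
    """
    needed = MAX_FULLRBF_RELAY_CONNECTIONS - current_fullrbf
    picked = []
    for addr, services in candidates:
        if needed <= 0:
            break
        if services & NODE_FULL_RBF:  # only full-rbf signalers qualify
            picked.append(addr)
            needed -= 1
    return picked

addrs = [("a", NODE_FULL_RBF), ("b", 0), ("c", NODE_FULL_RBF | 1)]
print(pick_fullrbf_targets(addrs, current_fullrbf=2))  # ['a', 'c']
```

With 2 full-rbf connections already open, only 2 more signaling addresses are dialed; non-signaling addresses are skipped regardless of how many remain.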