Batch claim much more aggressively even on pinnable outputs
When we first added batch claiming, we only did so for claims which
we thought were not pinnable, i.e. those for which only we can
claim the output.

This was the conservative choice - any outputs which we think are
potentially pinnable might be pinned, leaving our entire batch
unable to confirm on chain. However, if we assume that pinnable
outputs are, indeed, pinnable, and that attempts to pin a
transaction and keep it from confirming have highly correlated
success rates, there's no reason we shouldn't also batch claims of
pinnable outputs together, separately from unpinnable ones.

Sadly, aggregating other types of inputs is rather nontrivial, as
evidenced by the size of this commit:

HTLC-Timeout claims have locktimes fixed by our counterparty's
signature and thus can only be aggregated with other HTLCs of the
same CLTV, which we have to check for.
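
For illustration, here is a minimal sketch of that locktime constraint. The
types and helper below are hypothetical, not LDK's actual structures - the
real check is the `can_merge_with` call visible in the onchaintx.rs diff
below.

```rust
// Hypothetical sketch: why HTLC-Timeout claims can only be aggregated when
// their CLTV expiries match.
struct HtlcTimeoutClaim {
    /// The HTLC's CLTV expiry. The counterparty's signature commits to a
    /// transaction with nLockTime set to this value, so we cannot change it.
    cltv_expiry: u32,
}

fn can_share_transaction(a: &HtlcTimeoutClaim, b: &HtlcTimeoutClaim) -> bool {
    // A transaction has a single nLockTime, so two HTLC-Timeout claims can
    // only share one when their required locktimes are identical.
    a.cltv_expiry == b.cltv_expiry
}
```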

Further, our concept of "pinnable" is deliberately somewhat fuzzy -
outputs which we believe are not pinnable will eventually become
pinnable at the height where our counterparty can spend them. Thus,
we treat outputs as pinnable once the chain reaches that height,
and if we're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE`
blocks of that height we treat them as neither pinnable nor
not-pinnable, aggregating such claims only with other claims which
will become pinnable at the same height.
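
As a rough sketch of that three-way split, using hypothetical names (the
actual classification lives in LDK's claim/package code, which is not part
of the diff shown below):

```rust
// Hypothetical sketch of the fuzzy "pinnable" classification described above.
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

#[derive(Clone, Copy, PartialEq, Eq)]
enum AggregationCluster {
    /// The counterparty can already spend the output: treat the claim as
    /// pinnable and batch it only with other pinnable claims.
    Pinnable,
    /// The counterparty will be able to spend the output soon: batch the claim
    /// only with claims that become pinnable at exactly the same height.
    BecomesPinnableAt(u32),
    /// Only we can spend the output for a while yet: batch freely with other
    /// not-pinnable claims.
    Unpinnable,
}

fn classify(counterparty_spendable_height: u32, cur_height: u32) -> AggregationCluster {
    if counterparty_spendable_height <= cur_height {
        AggregationCluster::Pinnable
    } else if counterparty_spendable_height
        <= cur_height + COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE
    {
        AggregationCluster::BecomesPinnableAt(counterparty_spendable_height)
    } else {
        AggregationCluster::Unpinnable
    }
}

/// Two claims are batched only if they fall in the same cluster, so a pinned
/// claim can only ever delay other claims the counterparty could have pinned
/// anyway.
fn can_batch(a: AggregationCluster, b: AggregationCluster) -> bool {
    a == b
}
```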

However, the complexity required is worth it - aggregation can save
our users a lot of money in the case of a force-close, and directly
impacts the number of UTXOs they need to hold in reserve for
anchors, so we want to aggregate as much as possible.

Fixes lightningdevkit#3064
TheBlueMatt authored and wvanlint committed Sep 19, 2024
1 parent c783468 commit adcaa1d
Showing 5 changed files with 647 additions and 671 deletions.
13 changes: 10 additions & 3 deletions lightning/src/chain/channelmonitor.rs
@@ -223,9 +223,16 @@ impl_writeable_tlv_based!(HTLCUpdate, {
(4, payment_preimage, option),
});

/// If an HTLC expires within this many blocks, don't try to claim it in a shared transaction,
/// instead claiming it in its own individual transaction.
pub(crate) const CLTV_SHARED_CLAIM_BUFFER: u32 = 12;
/// If an output goes from claimable only by us to claimable by us or our counterparty within this
/// many blocks, we consider it pinnable for the purposes of aggregating claims in a single
/// transaction.
pub(crate) const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

/// When we go to force-close a channel because an HTLC is expiring, we should ensure that the
/// HTLC(s) expiring are not considered pinnable, allowing us to aggregate them with other HTLC(s)
/// expiring at the same time.
const _HTLCS_NOT_PINNABLE_ON_CLOSE: u32 = CLTV_CLAIM_BUFFER - COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE;

/// If an HTLC expires within this many blocks, force-close the channel to broadcast the
/// HTLC-Success transaction.
/// In other words, this is an upper bound on how many blocks we think it can take us to get a
9 changes: 6 additions & 3 deletions lightning/src/chain/onchaintx.rs
@@ -772,8 +772,11 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
for j in 0..i {
if requests[i].can_merge_with(&requests[j], cur_height) {
let merge = requests.remove(i);
requests[j].merge_package(merge);
break;
if let Err(rejected) = requests[j].merge_package(merge, cur_height) {
requests.insert(i, rejected);
} else {
break;
}
}
}
}
@@ -1101,7 +1104,7 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
OnchainEvent::ContentiousOutpoint { package } => {
if let Some(pending_claim) = self.claimable_outpoints.get(package.outpoints()[0]) {
if let Some(request) = self.pending_claim_requests.get_mut(&pending_claim.0) {
request.merge_package(package);
assert!(request.merge_package(package, height).is_ok());
// Using a HashMap guarantees that if we have multiple outpoints getting
// resurrected, only one bump claim tx is going to be broadcast
bump_candidates.insert(pending_claim.clone(), request.clone());