
Relax closing fee requirements, implement quickclose #4599

Merged
merged 10 commits into ElementsProject:master from nifty/closing-fee-range on Sep 9, 2021

Conversation

niftynei
Collaborator

@niftynei niftynei commented Jun 15, 2021

Implementation of lightning/bolts#847 (Update: now merged!).

Depends on #4755.

@rustyrussell
Contributor

Looks good! Needs some testing, and we can really only support this with EXPERIMENTAL_FEATURES since it's not in the spec yet?

@rustyrussell
Contributor

OK, this turned into a pretty big rework!

There's now an `--experimental-quick-close` option (enabled by default if it's an anchor_outputs channel, which is EXPERIMENTAL_FEATURES only anyway), which means we send the tlv. If we both send and receive it, we quick-close.

On the way I found that our fee estimation for closing txs was not actually using the weight of the closing tx, but that of the final commitment tx. This is very different for anchor_outputs!
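For concreteness, a minimal sketch of the difference (not c-lightning's actual code; the weights below are ballpark numbers I'm assuming for a two-output closing tx versus a no-HTLC anchor-outputs commitment tx):

```python
def tx_fee_sat(feerate_perkw: int, weight: int) -> int:
    """Fee in satoshis for a tx of `weight` weight units at `feerate_perkw` sat per kiloweight."""
    return feerate_perkw * weight // 1000

# Assumed ballpark weights (not exact figures):
CLOSING_TX_WEIGHT = 672             # funding input + two P2WPKH outputs
ANCHOR_COMMITMENT_TX_WEIGHT = 1124  # BOLT 3 base weight with anchor outputs, no HTLCs

feerate = 2500  # sat per kiloweight, example value
print(tx_fee_sat(feerate, CLOSING_TX_WEIGHT))            # 1680 sat: fee based on the closing tx itself
print(tx_fee_sat(feerate, ANCHOR_COMMITMENT_TX_WEIGHT))  # 2810 sat: what the commitment-tx weight gives
```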

@niftynei, might you want me to break off the first commits for this release?

@rustyrussell rustyrussell added the closingd, protocol and fee labels Jun 28, 2021
@rustyrussell rustyrussell marked this pull request as ready for review June 28, 2021 05:17
@t-bast

t-bast commented Jun 28, 2021

I'm having some trouble testing this on regtest for the case where c-lightning is fundee: c-lightning keeps insisting on a range containing a single value (instead of a range) strictly below 1 sat/byte, so I can't get eclair to propose such a range...

Can you help me figure out how to configure c-lightning to use higher on-chain fee estimates? Is there a configuration I can set to hard-code feerates on regtest or something similar?

Note that it made me realize that I got something wrong in eclair: the node operator chooses a fee range when initiating shutdown. But if the remote peer rejects that fee range entirely (with a warning message + disconnect) eclair will just retry the same fee range on reconnection, and I didn't provide a hook for the node operator to send a closing_signed with a different fee range (once shutdown has been initiated, I've made it so the fee range is fixed). I need to change that.

This also highlights a difference in behavior between eclair and c-lightning where I'd like to have your feedback. In my opinion, it never makes sense when you're fundee to reject the funder's proposed fee range like c-lightning does. You're fundee so you're not paying that fee:

  • if you think it's too high, then it's great for you as you're not paying it and it will confirm faster
  • if you think it's too low, then choose the highest fee in the proposed range and just go with it; you can always CPFP it later if needed, but since you didn't initiate the shutdown, it's likely your peer will take care of CPFP-ing it (they started the shutdown, so they need these funds back on-chain more than you do). I agree that this second case is more arguable: by refusing that fee maybe the funder will propose a higher fee, which is better for you, but you're kind of strong-arming your peer over something they will pay 100% of, so it's a bit unfair. (Quick CPFP arithmetic sketch below.)
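(The CPFP arithmetic for the "too low" case, with made-up example numbers, just to show the bump is always available to the fundee:)

```python
# Made-up example sizes and rates: a ~150 vbyte closing tx paid at 1 sat/vB,
# bumped by a ~110 vbyte child spending the fundee's output.
closing_vbytes, closing_fee = 150, 150   # parent pays 1 sat/vB
child_vbytes = 110
target_rate = 10                         # sat/vB we want for the (parent + child) package

# CPFP: the child fee must bring the whole package up to the target rate.
child_fee = target_rate * (closing_vbytes + child_vbytes) - closing_fee
print(child_fee)  # 2450 sat, paid out of our own output
```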

WDYT?

@rustyrussell
Contributor

Totally right that the upper fee should be infinite. But a lower fee risks them blocking you by creating low-fee children. So we do need a lower bound.

Spec should note this! I'll come up with some language

@rustyrussell
Contributor

I'm having some trouble testing this on regtest for the case where c-lightning is fundee: c-lightning keeps insisting on a range containing a single value (instead of a range) strictly below 1 sat/byte, so I can't get eclair to propose such a range...

Can you help me figure out how to configure c-lightning to use higher on-chain fee estimates? Is there a configuration I can set to hard-code feerates on regtest or something similar?

In our test suite we pretend to be bitcoind and feed fake fees that way, but you're right, we should just include an option for regtest fees. I'll add a feature request.
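(For illustration, a test pins fake feerates roughly like this; this is a sketch assuming pyln-testing's `node_factory` fixture and `set_feerates` helper, and the meaning of the tuple entries is my assumption.)

```python
# Sketch: pin feerates in a test by feeding fake bitcoind fee estimates
# (assumes pyln-testing's node_factory fixture and LightningNode.set_feerates;
#  the order/meaning of the four tuple entries below is an assumption).
def test_close_with_pinned_fees(node_factory):
    l1, l2 = node_factory.line_graph(2)
    # Pretend bitcoind reports these feerates, e.g.
    # (opening, mutual_close, unilateral_close, delayed_to_us) in sat per kiloweight.
    l1.set_feerates((15000, 11000, 7500, 3750), wait_for_effect=True)
    l1.rpc.close(l2.info['id'])
```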

Note that it made me realize that I got something wrong in eclair: the node operator chooses a fee range when initiating shutdown. But if the remote peer rejects that fee range entirely (with a warning message + disconnect) eclair will just retry the same fee range on reconnection, and I didn't provide a hook for the node operator to send a closing_signed with a different fee range (once shutdown has been initiated, I've made it so the fee range is fixed). I need to change that.

This also highlights a difference in behavior between eclair and c-lightning where I'd like to have your feedback. In my opinion, it never makes sense when you're fundee to reject the funder's proposed fee range like c-lightning does. You're fundee so you're not paying that fee:

* if you think it's too high, then it's great for you as you're not paying it and it will confirm faster

* if you think it's too low, then choose the highest fee in the proposed range and just go with it; you can always CPFP it later if needed, but since you didn't initiate the shutdown, it's likely your peer will take care of CPFP-ing it (they started the shutdown, so they need these funds back on-chain more than you do). I agree that this second case is more arguable: by refusing that fee maybe the funder will propose a higher fee, which is better for you, but you're kind of strong-arming your peer over something they will pay 100% of, so it's a bit unfair.

OK, I've fixed this so we allow an infinite max if we're the non-funder. We still require estimatefee 100 as a minimum though; we could possibly drop it below that, but not to the relay minimum, I think.
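In other words, the non-funder's acceptance check now looks roughly like this (a sketch of the behaviour described above, not the actual closingd code; the names are mine):

```python
def ranges_overlap_as_nonfunder(their_min: int, their_max: int,
                                our_min_estimate: int) -> bool:
    """Non-funder: no upper bound (the funder pays the fee), but keep a floor so a
    too-low closing fee can't be used to pin us with low-fee children."""
    our_min = our_min_estimate   # e.g. a slow, estimatefee-100-style estimate
    our_max = float('inf')       # effectively infinite max when we're not paying
    # Accept if their advertised range overlaps ours at all.
    return their_max >= our_min and their_min <= our_max
```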

WDYT?

@t-bast

t-bast commented Jun 29, 2021

Thanks, I ran more tests with 71cf497 and here is what I found:

  • When eclair is funder:
    • c-lightning now correctly accepts my higher-than-expected fee
    • but eclair isn't happy with max_fee_satoshis = 0xffffffffffffffff because that's actually an invalid value for a satoshi amount, since it's bigger than 21 million BTC. Since c-lightning is simply accepting a value inside eclair's proposed fee range, I don't think it's necessary to set this to 0xffffffffffffffff (and I think our internal check here makes sense; see the sketch below)
  • When c-lightning is funder:
    • c-lightning sent fee_satoshis = 138 and fee_range = [138, 685]
    • eclair answered with fee_satoshis = 138 and fee_range = [138, 138] and published the closing tx
    • c-lightning sent another closing_signed (a duplicate of the first one): it's harmless, but c-lightning shouldn't send that third message; it should publish the closing tx (or do nothing if it has already seen it in the mempool)

Note that when we're fundee and accept a fee from the funder's proposed fee range, I chose to answer with a fee range containing only that fee (in the example above [138, 138]) since we both agree on it. Let me know if you think that could cause internal issues for c-lightning (the fundee could instead choose to send back the same fee range it received from the funder; I'm not sure which makes more sense).
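For concreteness, the two checks discussed here as a sketch (field names follow `closing_signed`; the constant and the function are mine):

```python
MAX_MONEY_SAT = 21_000_000 * 100_000_000  # 21 million BTC in satoshis

def check_closing_signed(fee_satoshis: int, min_fee: int, max_fee: int) -> None:
    # eclair's objection: 0xffffffffffffffff is not a valid satoshi amount.
    if max_fee > MAX_MONEY_SAT:
        raise ValueError("max_fee_satoshis exceeds total bitcoin supply")
    # Basic consistency: the proposed fee must lie inside the advertised range.
    if not (min_fee <= fee_satoshis <= max_fee):
        raise ValueError("fee_satoshis outside fee_range")

# The exchange reported above:
check_closing_signed(138, 138, 685)  # c-lightning's offer
check_closing_signed(138, 138, 138)  # eclair's reply: range collapsed to the agreed fee
```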

@rustyrussell
Contributor

OK, I'll fix the range to match (as the spec suggests, I was lazy!)

I'll check the c-lightning-as-funder case using lnprototest, which will catch this weirdness immediately. Thanks!

@rustyrussell rustyrussell force-pushed the nifty/closing-fee-range branch 2 times, most recently from a626cb9 to 574f2b2 on July 2, 2021 00:08
@t-bast

t-bast commented Jul 2, 2021

I have tested cross-compatibility between c-lightning 574f2b2 and eclair ACINQ/eclair@c37a2ca and everything looks good, thanks!

@rustyrussell
Contributor

I have tested cross-compatibility between c-lightning 574f2b2 and eclair ACINQ/eclair@c37a2ca and everything looks good, thanks!

Thank you! Beers are owed. Given the number of beers I now owe you, I suspect we're going to have to catch up for several weeks somewhere! :)

@t-bast

t-bast commented Jul 4, 2021

Beers are owed. Given the number of beers I now owe you, I suspect we're going to have to catch up for several weeks somewhere! :)

I'm down for that; it sounds like a very good plan. Beers are owed both ways for everything you've done as well ;)

@niftynei niftynei added this to the v0.10.2 milestone Aug 2, 2021
@rustyrussell rustyrussell self-assigned this Aug 30, 2021
@rustyrussell rustyrussell changed the title from "rfc-proposal: relax closing fee requirements" to "Relax closing fee requirements, implement quickclose" on Sep 1, 2021
@rustyrussell rustyrussell force-pushed the nifty/closing-fee-range branch 2 times, most recently from 396b13a to b40cbe0 on September 2, 2021 04:48
rustyrussell and others added 5 commits September 8, 2021 09:35
…nt secret.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This check is going away anyway (only Electrum enforced it), but we
know that all wumbo peers expect large HTLCs to work today.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: Allow sending large HTLCs if peer offers option_support_large_channel (> 4294967295msat)
This includes the new bolt11 test vectors, and also removes the
requirement that HTLCs be less than 2^32 msat.  We keep that for now
because Electrum enforced it on receive: in two releases we will stop
that too.

So no longer warn about needing mpp in that case either.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: Protocol: No longer restrict HTLCs to
…_tx.

This touches a lot of text, mainly to change "if `option_anchor_outputs`"
to "if `option_anchors`"

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It also gets rid of the requirement that the close negotiation fee maximum
be the fee of the old commitment transaction.  We still do that, however, to
avoid surprising old peers.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@rustyrussell rustyrussell force-pushed the nifty/closing-fee-range branch 2 times, most recently from 663f4ab to 0b6b40b on September 8, 2021 03:19
Based on a commit by @niftynei, but:
- Separated quickclose logic from main loop.
- I made it independent of anchor_outputs, using an option instead.
- Disable if they've specified how to negotiate.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This follows lightning/bolts#847.

For anchor_outputs, we pass down a max_feerate to closingd, and set the
fee ceiling to MAX.  It uses that to estimate the desired closing fee.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: Anchor output mutual close allows a fee higher than the final commitment transaction (as per lightning-rfc #847)
This is now allowed for anchors (as per lightning/bolts#847).

We need to play with feerates, since we don't put a discount on anchor
commitments yet.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This affects the range we offer even without quick-close, but it's
more critical for quick-close.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSONRPC: `close` now takes a `feerange` parameter to set min/max fee rates for mutual close.
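As a usage sketch via pyln-client (the changelog above only says `feerange` sets the min/max fee rates; the exact value format shown here, and the socket path and peer id, are my assumptions):

```python
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # hypothetical RPC socket path
peer_id = "02...nodeid..."                    # hypothetical peer node id

# Mutual close with the negotiated fee constrained to [min, max]
# (feerate string format is an assumption):
rpc.call("close", {"id": peer_id, "feerange": ["253perkw", "5000perkw"]})
```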
That was quick!

We remove the 50% test, since the default is now to use quickclose.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: We now perform quick-close if the peer supports it.
@rustyrussell
Contributor

I need to create a fake account to Ack my PRs ;)

Ack 8a9e7e0

@rustyrussell rustyrussell merged commit 84957be into ElementsProject:master Sep 9, 2021
SimonVrouwe added a commit to SimonVrouwe/lightning that referenced this pull request Nov 6, 2021
shutdown_subdaemons frees the channel and calls destroy_close_command_on_channel_destroy, see gdb:

    0  destroy_close_command_on_channel_destroy (_=0x55db6ca38e18, cc=0x55db6ca43338) at lightningd/closing_control.c:94
    1  0x000055db6a8181b5 in notify (ctx=0x55db6ca38df0, type=TAL_NOTIFY_FREE, info=0x55db6ca38e18, saved_errno=0) at ccan/ccan/tal/tal.c:237
    2  0x000055db6a8186bb in del_tree (t=0x55db6ca38df0, orig=0x55db6ca38e18, saved_errno=0) at ccan/ccan/tal/tal.c:402
    3  0x000055db6a818a47 in tal_free (ctx=0x55db6ca38e18) at ccan/ccan/tal/tal.c:486
    4  0x000055db6a73fffa in shutdown_subdaemons (ld=0x55db6c8b4ca8) at lightningd/lightningd.c:543
    5  0x000055db6a741098 in main (argc=21, argv=0x7ffffa3e8048) at lightningd/lightningd.c:1192

Before this PR, there was no io_loop after shutdown_subdaemons and the client side raised a
general `Connection to RPC server lost.`
Now we test the more specific `Channel forgotten before proper close.`, which is good!

BTW, this test was added recently in PR ElementsProject#4599.
rustyrussell pushed a commit that referenced this pull request Nov 30, 2021