Add HolderCommitmentPoint struct to track commitment points #3086

Merged

Conversation

alecchendev
Contributor

This is the first of several upcoming PRs to complete async signing. It adds the HolderCommitmentPoint struct to consolidate our logic for getting commitment points, which will make things easier when this operation returns a result type in an upcoming PR. This refactor helps prepare for async signing, but still assumes the signer is synchronous.

I left a bunch of TODOs that should get removed in upcoming PRs; let me know if they're too much.
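For orientation, here is a condensed sketch of the state machine this PR ends up with (the derive, variant, and field names come from the diff excerpts and review discussion below; the Uninitialized variant shown in the first excerpt was dropped during review, and the accessor bodies here are illustrative rather than the exact upstream code):

use bitcoin::secp256k1::PublicKey;

#[derive(Debug, Copy, Clone)]
enum HolderCommitmentPoint {
    // We have the current per-commitment point, but the next one still has to
    // be fetched from the signer (possibly asynchronously in later PRs).
    PendingNext { transaction_number: u64, current: PublicKey },
    // Both the current point and the next one are cached.
    Available { transaction_number: u64, current: PublicKey, next: PublicKey },
}

impl HolderCommitmentPoint {
    fn transaction_number(&self) -> u64 {
        match self {
            HolderCommitmentPoint::PendingNext { transaction_number, .. }
            | HolderCommitmentPoint::Available { transaction_number, .. } => *transaction_number,
        }
    }

    fn current_point(&self) -> PublicKey {
        match self {
            HolderCommitmentPoint::PendingNext { current, .. }
            | HolderCommitmentPoint::Available { current, .. } => *current,
        }
    }

    // The PR also adds an advance() method (not sketched here) that moves to
    // the next commitment number and asks the signer for the new next point.
}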

@codecov-commenter

codecov-commenter commented May 30, 2024

Codecov Report

Attention: Patch coverage is 92.40506% with 12 lines in your changes missing coverage. Please review.

Project coverage is 91.83%. Comparing base (df01208) to head (cf545b4).
Report is 99 commits behind head on main.

Files                         Patch %   Lines
lightning/src/ln/channel.rs   92.40%    6 Missing and 6 partials ⚠️


Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3086      +/-   ##
==========================================
+ Coverage   89.90%   91.83%   +1.92%     
==========================================
  Files         117      119       +2     
  Lines       97105   113897   +16792     
  Branches    97105   113897   +16792     
==========================================
+ Hits        87303   104596   +17293     
+ Misses       7243     6976     -267     
+ Partials     2559     2325     -234     


@@ -1119,6 +1119,81 @@ pub(crate) struct ShutdownResult {
pub(crate) channel_funding_txo: Option<OutPoint>,
}

#[derive(Debug, Copy, Clone)]
enum HolderCommitmentPoint {
Uninitialized { transaction_number: u64 },
Collaborator

Why even have this variant given we always immediately call request_next? ISTM we could drop the panics below if we just elided this and requested in new?

Contributor Author

#3109 should hopefully provide more context for this now

Collaborator

Right...related question, how are y'all handling the ChannelSigner::pubkeys call? Are you just blocking for the initial signer setup there or are you doing something more clever to preload the pubkeys (that could also apply to the initial commitment point)?

Contributor Author

Oh whoops, thought I responded to this - we basically preload pubkeys upon creation of a node for all channels, so what we use for that probably can't directly help with the first commitment point. We could make it so that for async signing you need to pass the initial per-commitment point into ChannelManager::create_channel or accept_inbound_channel to avoid the Uninitialized variant - and on our side we'd just have to wait for the response there, which isn't a ton different from how we're waiting already.

@alecchendev
Contributor Author

alecchendev commented Jun 6, 2024

Squashed because this is still in its early phases. The major changes were:

  • merged HolderCommitmentPoint::request_next into advance
  • current_point now returns an Option, which gets rid of the panics, but in several cases I don't want to handle that in this PR, so I've left some .expect("TODO")s for upcoming PRs. Going to open the first PR resolving some of these within the hour, which should provide more context.

Since this is a pretty big change, my goal is to make it as easy to review as possible by splitting it up into separate PRs. Some of this is a bit hard (.expect("TODO") is my best solution at the moment...), so let me know if there are better/preferred ways of doing these things.

@TheBlueMatt
Collaborator

I almost wonder if we shouldn't introduce the machinery here with nothing fallible, so that we don't need any extra expects, and then add those in the next PR where we have to actually handle them anyway? We could even do dances like let next_point = Some(advance()); match next_point { None => future code, .. } to land things progressively.
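For illustration, a rough standalone sketch of the "land it progressively" dance suggested above (everything here is hypothetical scaffolding, not code from this PR):

fn handle_next_point<T>(produce_point: impl Fn() -> T) {
    // Today the point is always produced, but wrapping it in an Option now
    // means the None arm that later async-signer PRs will need can already be
    // written, without sprinkling .expect("TODO") around.
    let next_point = Some(produce_point());
    match next_point {
        Some(_point) => {
            // Current, synchronous-signer path.
        }
        None => {
            // Placeholder for the upcoming async-signer PRs, where the point
            // may genuinely be unavailable and we would pause here.
        }
    }
}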

@alecchendev force-pushed the 2024-05-holder-commitment branch 3 times, most recently from e22f940 to 9bc514e on June 10, 2024 00:59
@alecchendev
Contributor Author

Dropped the Uninitialized variant, made current_point infallible

Collaborator

@TheBlueMatt left a comment

LGTM, please squash the fixups and let's land this!

@alecchendev
Contributor Author

squashed!

@valentinewallace self-requested a review June 10, 2024 16:52
Contributor

@valentinewallace left a comment

Nothing blocking!

Comment on lines 6537 to 6549
if self.context.signer_pending_funding {
// TODO: set signer_pending_channel_ready
log_debug!(logger, "Can't produce channel_ready: the signer is pending funding.");
return None;
}
Contributor

Looks like this check exists already about 60 lines up?

Contributor Author

oops just moved the log + TODO up there

Contributor Author

Sike, just removed the one above. I realized in the next PR we should only set signer_pending_channel_ready after we've passed all the other checks.

Contributor Author

Sike x2: that causes a bug, presumably because it makes us do this check after modifying state.

Comment on lines 1124 to 1134
PendingNext { transaction_number: u64, current: PublicKey },
Available { transaction_number: u64, current: PublicKey, next: PublicKey },
Contributor

IMO these could use some docs at some point, but feel free to hold off if it would make more sense to add them in following PRs.

Contributor Author

Oh true, I'll add some interim docs and probably expand them more in an upcoming PR.

Contributor Author

done

@alecchendev force-pushed the 2024-05-holder-commitment branch 2 times, most recently from 317426a to 74297f4 on June 10, 2024 18:42
@TheBlueMatt
Collaborator

Looks like tests are failing: thread 'ln::async_signer_tests::test_async_commitment_signature_for_funding_signed_0conf' panicked at lightning/src/ln/async_signer_tests.rs:238:9:

This includes when building TxCreationKeys, as well as for open_channel
and accept_channel messages. Note: this is only for places where we are
retrieving the current per commitment point, which excludes
channel_reestablish.
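As a reader aid, a hedged before/after sketch of the call-site change described above (the "before" call shape matches the direct signer calls still visible in the diff excerpts below; the "after" line uses the accessor discussed in this thread; transaction_number is a stand-in for however the caller tracked it before this PR):

// Before: ask the signer directly each time the current point is needed.
let point = self.context.holder_signer.as_ref()
    .get_per_commitment_point(transaction_number, &self.context.secp_ctx);

// After: read the point cached in the new HolderCommitmentPoint state.
let point = self.context.holder_commitment_point.current_point();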
@alecchendev
Contributor Author

looks to be this, should be fixed now

Comment on lines +1132 to +1133
/// Our current commitment point is ready, we've cached our next point,
/// and we are not pending a new one.
Contributor

nit: are "we've cached our next point" and "we are not pending a new one" the same thing? If so, I think one of the clauses could be removed since it sounds like separate things atm

@@ -5417,7 +5417,10 @@ impl<SP: Deref> Channel<SP> where
}

fn get_last_revoke_and_ack(&self) -> msgs::RevokeAndACK {
let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.holder_commitment_point.transaction_number(), &self.context.secp_ctx);
debug_assert!(self.context.holder_commitment_point.transaction_number() <= INITIAL_COMMITMENT_NUMBER + 2);
Contributor

Trying to understand the + 2. The BOLTs seem to read like our transaction number will never be > INITIAL_COMMITMENT_NUMBER, let me know what I'm missing here!

Contributor Author

ah oops, this should actually be - 2, to assert we have always advanced our commitment point twice before we ever call here

Contributor

Ah that makes sense lol
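For reference, the corrected assertion discussed above (the same line as in the diff excerpt, with the + 2 flipped to - 2):

debug_assert!(self.context.holder_commitment_point.transaction_number() <= INITIAL_COMMITMENT_NUMBER - 2);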

@@ -5417,7 +5417,10 @@ impl<SP: Deref> Channel<SP> where
}

fn get_last_revoke_and_ack(&self) -> msgs::RevokeAndACK {
let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.holder_commitment_point.transaction_number(), &self.context.secp_ctx);
debug_assert!(self.context.holder_commitment_point.transaction_number() <= INITIAL_COMMITMENT_NUMBER + 2);
// TODO: handle non-available case when get_per_commitment_point becomes async
Contributor

For my understanding, is this referring to when we implement this TODO to add the new HolderCommitmentPoint variant (which I think means that current_point() will become Optional)?

Contributor Author

Yep! Technically it should never be unavailable by the time we get here, but figured I'd leave a note to think about it again when things change.

Contributor

It seems like PendingNext would also work here, so the assertion below still doesn't totally make sense to me, but it probably gets clearer in the follow-ups.

@valentinewallace merged commit f2237a7 into lightningdevkit:main Jun 10, 2024
14 of 16 checks passed