
Pool Cleanup #652
Merged: 13 commits merged into main from mconcat/502-pool-cleanup on Jan 6, 2022
Conversation

@mconcat (Collaborator) commented on Dec 10, 2021

Closes: #502

Description


For contributor use:

  • Targeted PR against correct branch (see CONTRIBUTING.md)
  • Updated relevant documentation (docs/) or specification (x/<module>/spec/)
  • Added a relevant changelog entry to the Unreleased section in CHANGELOG.md
  • Re-reviewed Files changed in the Github PR explorer

x/gamm/keeper/pool.go: outdated review thread (resolved)
@daniel-farina daniel-farina added this to the 2021 - December Milestone milestone Dec 13, 2021
@ValarDragon (Member) commented:

Is the idea of setting a destruction time so that a pool can be deleted in the current block (e.g. at an upgrade), or a deletion can be scheduled in the future?

Shouldn't we make it so that pool deletion also deletes the pool from state? Or is it a goal (or low cost) to keep the literal pool in state, as long as it's marked inactive, for integrators?

Comment on lines 132 to 134
if !pool.GetTotalShares().IsZero() {
panic("pool total share should be zero after cleanup")
}
Member:

Can zero total shares be assured every time (division factors, etc.)?
My only concern is that it might not be: I remember that somewhere in the code we iterate through the coins and send part of them on each iteration, and on the last iteration we send all of the remainder.

@mconcat (Collaborator, Author):

Great point. I originally thought the total shares are only added and subtracted, never divided; I will check whether there are any cases of division.

Member:

Is this fixed? / checked?

@mconcat (Collaborator, Author):

Because shares are just an sdk.Coin, they could be IBC'd out or sent into module logic and end up held by a module account. I am pretty sure that if we count all the address-held shares plus the module-held shares, they would sum to GetTotalShares(). Let me add the logic for module accounts too (or are they already iterated by GetAllBalances?).

Member:

Can you investigate whether module accounts are iterated over? I think we should definitely exclude certain module accounts, e.g. IBC's.

@mconcat (Collaborator, Author):

The iteration does include module accounts. I will exclude module-account-held LP tokens.

@mconcat (Collaborator, Author):

Okay, annoyingly, there is no way to distinguish a module address from a normal address using only the address itself. We could read all the module addresses and compare against them on each iteration over the list of addresses, but at this stage that feels a little ad hoc. Let me think about some alternatives a bit.

@mattverse (Member) commented:
Similar to but different from @ValarDragon's comment: I'm curious when this CleanupBalancerPool is designed to be run.

@mattverse (Member) commented:

  • A friendly reminder to fix the tests!

@mconcat (Collaborator, Author) commented on Dec 16, 2021

> Is the idea of setting a destruction time so that a pool can be deleted in the current block (e.g. at an upgrade), or a deletion can be scheduled in the future?
>
> Shouldn't we make it so that pool deletion also deletes the pool from state? Or is it a goal (or low cost) to keep the literal pool in state, as long as it's marked inactive, for integrators?

I originally intended pool destruction to leave enough state for a pool to be resurrected in the future, but I don't think we need to store the actual time here. I also agree that it does not really make sense to merely pause the pool in this use case, since we completely destruct the pool and refund all of the assets. I will change it to state removal.

> Similar to but different from @ValarDragon's comment: I'm curious when this CleanupBalancerPool is designed to be run.

It is designed to be run at chain-upgrade time, so we don't have much of a computation-time restriction.
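Since the cleanup is designed to run at chain-upgrade time, the natural call site would be a Cosmos SDK upgrade handler. Below is a minimal sketch of that wiring; the upgrade name, the app field names, and the CleanupBalancerPool signature are all assumptions for illustration, not taken from this PR:

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/module"
	upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"
)

// setPoolCleanupUpgradeHandler runs pool cleanup once, inside the upgrade
// block, where a long iteration over state is acceptable.
func setPoolCleanupUpgradeHandler(app *OsmosisApp, poolIdsToClean []uint64) {
	app.UpgradeKeeper.SetUpgradeHandler("pool-cleanup", // hypothetical upgrade name
		func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
			for _, poolId := range poolIdsToClean {
				// Hypothetical signature: refund all LP shares, then delete the pool.
				if err := app.GAMMKeeper.CleanupBalancerPool(ctx, poolId); err != nil {
					return nil, err
				}
			}
			return app.mm.RunMigrations(ctx, app.configurator, vm)
		})
}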

@codecov-commenter commented on Dec 17, 2021

Codecov Report

Merging #652 (0bd19df) into main (8342388) will increase coverage by 0.04%.
The diff coverage is 30.58%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #652      +/-   ##
==========================================
+ Coverage   18.69%   18.73%   +0.04%     
==========================================
  Files         171      171              
  Lines       23816    23901      +85     
==========================================
+ Hits         4452     4478      +26     
- Misses      18601    18649      +48     
- Partials      763      774      +11     
Impacted Files               Coverage          Δ
x/gamm/types/query.pb.gw.go   0.00% <0.00%>   (ø)
x/lockup/keeper/lock.go      61.26% <14.28%>  (-1.34%) ⬇️
x/gamm/keeper/pool.go        69.67% <52.08%>  (-11.41%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8342388...0bd19df. Read the comment docs.

@mconcat (Collaborator, Author) commented on Dec 17, 2021

R4R (ready for review).

@ValarDragon (Member) commented:

Notes from the call earlier today/yesterday. The flow is (see the sketch after this list):
  • DeletePool ->
  • Force-unlock every LP share (ForceUnlock)
  • Iterate through the bank, and migrate every LP share to its constituent assets
  • Delete the pool data (via ClearPool)
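A rough Go sketch of that flow, for orientation only: ForceUnlock, ClearPool, and IterateAllBalances are named in this thread, but the enclosing function, the refundShares helper, and every signature here are assumptions rather than the PR's actual code:

// cleanupPool force-unlocks LP shares, refunds holders pro rata, and then
// removes the pool from state, following the flow outlined above.
func (k Keeper) cleanupPool(ctx sdk.Context, pool types.PoolI, shareDenom string) error {
	// 1. Force-unlock every lockup holding this pool's LP shares,
	//    ignoring the locks' remaining durations.
	for _, lock := range k.lockupKeeper.GetLocksLongerThanDurationDenom(ctx, shareDenom, 0) {
		if err := k.lockupKeeper.ForceUnlock(ctx, lock); err != nil {
			return err
		}
	}

	// 2. Iterate over all bank balances; for every holder of this pool's
	//    shares, burn the shares and refund the underlying assets.
	k.bankKeeper.IterateAllBalances(ctx, func(addr sdk.AccAddress, coin sdk.Coin) bool {
		if coin.Denom == shareDenom {
			k.refundShares(ctx, pool, addr, coin.Amount) // hypothetical helper
		}
		return false // keep iterating
	})

	// 3. Delete the pool object itself from state.
	k.ClearPool(ctx, pool.GetId())
	return nil
}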

poolAssets := pool.GetAllPoolAssets()

// first iterate through the share holders and burn them
k.bankKeeper.IterateAllBalances(ctx, func(addr sdk.AccAddress, coin sdk.Coin) (stop bool) {
@ValarDragon (Member) commented on Dec 21, 2021:

Let's file an issue to revisit the performance of this. Either we:

Member:

Wait, aren't we going to be iterating over IBC's module accounts like this? If so, we will be making IBC'd tokens never redeemable. Do we need to care about this? cc @sunnya97

@mconcat (Collaborator, Author):

Per the original issue, I think the IBC'd-out LP tokens are never redeemable for now (there is also no use case for IBCing out LP tokens yet). I could apply the first and the third suggestions now.

return
}

shareAmount := coin.Amount
Member:

Let's alias coin to userPoolShare here. I think it makes a lot of the subsequent lines easier to read.


// Refund assets
for _, asset := range poolAssets {
assetAmount := asset.Token.Amount.Mul(shareAmount).Quo(totalShares)
Member:

Suggested change:
- assetAmount := asset.Token.Amount.Mul(shareAmount).Quo(totalShares)
+ // lpShareEquivalentTokens = (amount in pool) * (your shares) / (total shares)
+ lpShareEquivalentTokens := asset.Token.Amount.Mul(shareAmount).Quo(totalShares)
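To make the pro-rata formula concrete, a worked example with invented numbers: if the pool holds 1000 units of an asset, total shares are 500, and a holder owns 50 shares, the refund is 1000 * 50 / 500 = 100 units. In sdk.Int arithmetic:

// Invented numbers, for illustration only.
amountInPool := sdk.NewInt(1000) // the pool's balance of one asset
userShares := sdk.NewInt(50)     // shareAmount in the loop above
totalShares := sdk.NewInt(500)

// Multiplying before dividing minimizes truncation error; integer division
// still truncates, so a small remainder of each asset can be left behind.
lpShareEquivalentTokens := amountInPool.Mul(userShares).Quo(totalShares) // = 100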

@ValarDragon (Member) commented:
> I originally intended pool destruction to leave enough state for a pool to be resurrected in the future, but I don't think we need to store the actual time here. I also agree that it does not really make sense to merely pause the pool in this use case, since we completely destruct the pool and refund all of the assets. I will change it to state removal.

This still needs to be done, right? (I'm still seeing the destruction-time logic.)

Comment on lines 222 to 226
func (k Keeper) GetLocksDenom(ctx sdk.Context, denom string) []types.PeriodLock {
unlockings := k.getLocksFromIterator(ctx, k.LockIteratorDenom(ctx, true, denom))
notUnlockings := k.getLocksFromIterator(ctx, k.LockIteratorDenom(ctx, false, denom))
return combineLocks(notUnlockings, unlockings)
}
Member:

Pretty surprised we don't already have a function for this lying around; I'm pretty sure we use one like it in distribution.

@mattverse (Member) commented on Jan 5, 2022:

Hmmm... True... Did we / can we use GetLocksLongerThanDurationDenom with duration set to 0?

@mconcat (Collaborator, Author):

That worked!
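For reference, the replacement presumably collapses GetLocksDenom into a single call along these lines (a sketch; the behavior of GetLocksLongerThanDurationDenom is inferred from its name):

// Every lock is "longer than" a zero duration, so this returns both the
// unlocking and the not-unlocking locks for the denom in one call.
locks := k.GetLocksLongerThanDurationDenom(ctx, denom, 0)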

Comment on lines 583 to 606
// remove lock from store object
store := ctx.KVStore(k.storeKey)
store.Delete(lockStoreKey(lock.ID))

var keyprefix []byte
if lock.IsUnlocking() {
keyprefix = types.KeyPrefixUnlocking
} else {
keyprefix = types.KeyPrefixNotUnlocking
}

// delete lock refs from the unlocking queue
err = k.deleteLockRefs(ctx, keyprefix, lock)
if err != nil {
return err
}

// remove from accumulation store
for _, coin := range lock.Coins {
k.accumulationStore(ctx, coin.Denom).Decrease(accumulationKey(lock.Duration), coin.Amount)
}

k.hooks.OnTokenUnlocked(ctx, owner, lock.ID, lock.Coins, lock.Duration, lock.EndTime)
return nil
Member:

Can we reuse logic from the existing methods for a lock going from NotUnlocking -> Unlocking, and from Unlocking -> finished unlocking?

@mconcat (Collaborator, Author):

Will do so!

@mconcat (Collaborator, Author):

The problem is that the existing method checks whether the time specified by the lock duration has passed, and ForceUnlock needs to ignore that check.

@mconcat (Collaborator, Author):

Fixed by separating the duration validation from the actual unlocking logic.
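That split presumably looks something like the sketch below (imports elided; the unlockInternal name and both wrapper signatures are hypothetical, and only the idea of separating validation from execution comes from the comment above):

// Unlock is the normal user path: it validates that the lock has matured
// before performing the unlock.
func (k Keeper) Unlock(ctx sdk.Context, lock types.PeriodLock) error {
	if lock.EndTime.After(ctx.BlockTime()) {
		return fmt.Errorf("lock %d has not matured yet", lock.ID)
	}
	return k.unlockInternal(ctx, lock)
}

// ForceUnlock skips the maturity check entirely; used by pool cleanup.
func (k Keeper) ForceUnlock(ctx sdk.Context, lock types.PeriodLock) error {
	return k.unlockInternal(ctx, lock)
}

// unlockInternal carries the shared state changes: deleting the lock, its
// refs, and its accumulation-store entries, then firing the unlock hooks
// (the logic shown in the snippet from lines 583 to 606 above).
func (k Keeper) unlockInternal(ctx sdk.Context, lock types.PeriodLock) error {
	// ... shared deletion logic ...
	return nil
}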

@mconcat (Collaborator, Author) commented on Dec 22, 2021

A few modifications are needed:

  • Remove the pool destruction time; instead, delete the pool from state
  • Optimize the refund logic's performance
  • Check whether module accounts are iterated over

@ValarDragon (Member) commented:

We don't need to optimize performance for this PR; we can just write an issue. (Performance gets basically solved if we just wait, lol.)

Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
@daniel-farina daniel-farina removed this from the 2021-12 Milestone milestone Dec 24, 2021
@mconcat (Collaborator, Author) commented on Jan 6, 2022

  • Pools are now deleted when cleaned up
  • Existing unlock methods are reused
  • Module accounts are excluded
  • The cleanup process is now batched (a single iteration over the account space)

x/gamm/keeper/pool.go: outdated review threads (resolved)
Comment on lines +113 to +116
moduleAccounts := make(map[string]string)
for _, module := range excludedModules {
moduleAccounts[string(authtypes.NewModuleAddress(module))] = module
}
Member:
Nice!
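The map gives O(1) lookups while iterating over every bank balance. A hypothetical use inside the IterateAllBalances callback (the skip logic here is illustrative, not quoted from the diff):

k.bankKeeper.IterateAllBalances(ctx, func(addr sdk.AccAddress, coin sdk.Coin) bool {
	// Skip excluded module accounts (e.g. IBC) so their held LP shares
	// are not burned out from under them.
	if _, isExcludedModule := moduleAccounts[string(addr)]; isExcludedModule {
		return false // continue iterating
	}
	// ... burn-and-refund logic for ordinary holders ...
	return false
})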

@ValarDragon (Member) left a review:

Awesome, looks good to merge to me! Thanks for fixing everything and addressing comments

@ValarDragon ValarDragon merged commit ac37680 into main Jan 6, 2022
@ValarDragon ValarDragon deleted the mconcat/502-pool-cleanup branch January 6, 2022 20:49
@github-actions github-actions bot mentioned this pull request Apr 15, 2024
Labels: none yet
Projects: archived in project
Development: successfully merging this pull request may close "x/GAMM: Create a function to refund all LP shares for a pool" (#502)
6 participants