
Fix #1270 Call price is inconsistent when MCR changed #1324

Merged
merged 36 commits on Feb 5, 2019

Conversation

abitmore (Member) commented Sep 13, 2018

PR for #1270.

  • test cases
    • modify old test cases related to margin calls; see if anything is broken after the new hard fork time
    • reproduce the issue before the hard fork time, i.e. verify that changing MCR doesn't trigger margin calls (or push some call orders out of margin call territory)
      • go past the hard fork time and check whether margin calls are triggered (or no longer triggered) as expected
    • change MCR again after the hard fork time and check that margin calls are triggered correctly

@abitmore self-assigned this Sep 13, 2018
oxarbitrage (Member) commented Nov 13, 2018

@abitmore, let me know what you think about this possible framework for modifying the current test cases to cover the new post-HF logic; I am trying to do this without duplicating too much code.

With the flag available inside each test case, we can adapt the logic of the existing test cases to test the new features.

abitmore (Member, Author) commented Nov 18, 2018

@oxarbitrage we can have a better test framework for hard forks, but I don't think your solution is good enough. It's incorrect to add a flag to all test cases; instead, we should do it on a case-by-case basis for now. It doesn't make much sense to have one flag at the top, and worse, we (e.g. Travis) won't modify it when running test cases.

Ideally, some test cases should be executed once after EVERY hard fork time. From this point of view, we should modify all the test cases and have a mechanism to avoid future modifications. IMHO this is out of this PR's scope, although it may be helpful for future work (we need to make a decision about priority).

Update:

  1. if you need a flag which can be used in different test cases, put it in database_fixture. Using a global variable is troublesome.
  2. your new test cases are failing, which (imho) means they are not well designed. We need to understand them and modify only the necessary parts.
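A fixture-level flag of the kind suggested here could be sketched as follows. This is a hypothetical illustration only (the member and method names are assumptions, not the actual bitshares-core `database_fixture` API): each test case opts in to post-hardfork behavior explicitly instead of reading a file-scope global.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch, NOT the real database_fixture: a per-fixture
// flag replaces a global variable, so each test case controls its
// own pre/post-hardfork behavior.
struct database_fixture_sketch {
   bool after_core_hardfork_1270 = false; // default: pre-hardfork logic

   // A test calls this before generating blocks past the hardfork time.
   void enable_hf_core_1270() { after_core_hardfork_1270 = true; }

   // Test logic branches on the fixture flag instead of a global.
   int64_t expected_margin_calls( int64_t before_hf, int64_t after_hf ) const
   {
      return after_core_hardfork_1270 ? after_hf : before_hf;
   }
};
```

With this shape, two instantiations of the same test body (one with the flag set, one without) can share all setup code while asserting different expectations.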

oxarbitrage (Member), quoting the above:

> your new test cases are failing, that (imho) means they are not well designed. Need to understand them and modify only necessary parts.

Yes, that is what I am trying, but I was thinking of modifying only the needed parts, via the flag, in the same test case.

My approach was/is to identify where they fail, then try to understand why and work around it. I was able to identify where the first 2-3 cases fail this way.

Anyway, I understand your point; for now I can create separate new test cases for each of the current tests, advancing to the HF date and modifying what is needed there.

Thanks.

oxarbitrage (Member) commented

@abitmore please take a look, when you can, at oxarbitrage@d9bb527 -- specifically the cross test (the last one). Thank you.

pmconrad (Contributor) left a comment:

Very good. The only "serious" issue is the virtual op IDs in global settlement.

Is it possible to get rid of a sub-index in a boost multi_index? That would free some resources. It would have to be delayed until LIB is past the HF time, though.

libraries/chain/asset_evaluator.cpp (resolved)
libraries/chain/market_object.cpp (outdated, resolved)
wlog( "Done updating all call orders for hardfork core-343 at block ${n}", ("n",db.head_block_num()) );
}

/// Reset call_price of all call orders to (1,1) since it won't be used in the future.
pmconrad (Contributor) commented:

This is superfluous since, as you say, it won't be used anymore in the future. Getting rid of this, and not tying the hf to a maintenance interval, will simplify the PR significantly.

abitmore (Member, Author) replied:

My concerns:

  1. For the UI: if old positions' call_price values are unchanged but new positions' are 1, people may get confused.
  2. I guess it's faster to compare 1 with 1 when inserting new data (when creating new positions), so it benefits in the long run at the cost of a one-time process.

BTW, as you noticed later, we need to tie the hf to a maintenance interval anyway.

@@ -62,15 +62,14 @@ void database::globally_settle_asset( const asset_object& mia, const price& sett
const asset_dynamic_data_object& mia_dyn = mia.dynamic_asset_data_id(*this);
auto original_mia_supply = mia_dyn.current_supply;

const call_order_index& call_index = get_index_type<call_order_index>();
const auto& call_price_index = call_index.indices().get<by_price>();
const auto& call_index = get_index_type<call_order_index>().indices().get<by_collateral>();
pmconrad (Contributor) commented:

Switching to a different index here has the potential to change the order in which call orders are closed. This shouldn't change the outcome in terms of amounts, but it can change the IDs of virtual operations. We usually try to avoid that.

abitmore (Member, Author) replied:

Will revert this.

abitmore (Member, Author) replied:

Done via b494609.

@@ -100,6 +100,9 @@ void graphene::chain::asset_bitasset_data_object::update_median_feeds(time_point
if( current_feed.core_exchange_rate != median_feed.core_exchange_rate )
feed_cer_updated = true;
current_feed = median_feed;
// Note: perhaps can defer updating current_maintenance_collateralization for better performance
if( after_core_hardfork_1270 )
current_maintenance_collateralization = current_feed.maintenance_collateralization();
pmconrad (Contributor) commented:

Unconditionally (i.e. always) setting this will simplify the code significantly. I believe the performance penalty will be insignificant, since you get rid of a method parameter and several checks at the same time.

abitmore (Member, Author) replied:

Because there are several 128-bit multiplications and divisions in maintenance_collateralization(), I thought it would cost more. Anyway, we can benchmark it (I haven't done so).
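For context on the cost being discussed, here is a minimal illustration of why 128-bit intermediates appear at all. This is a hedged sketch: the function name and formula below are assumptions for illustration, not the actual maintenance_collateralization() implementation. Graphene amounts are int64_t, so scaling a collateral amount by the MCR can overflow 64 bits and must be widened first.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch, NOT the real maintenance_collateralization():
// it only shows the widen-multiply-divide pattern that makes such
// functions cost more than a plain 64-bit comparison. MCR is assumed
// to be expressed in units of 1/1000 (as bitshares-core's
// GRAPHENE_COLLATERAL_RATIO_DENOM is 1000).
int64_t scale_by_mcr( int64_t collateral, uint16_t mcr )
{
   // collateral * mcr can exceed 64 bits, so widen before multiplying.
   __int128 wide = static_cast<__int128>( collateral ) * mcr;
   return static_cast<int64_t>( wide / 1000 );
}
```

For large collateral amounts (e.g. 4'000'000'000'000'000'000 satoshis at an MCR of 1750) the intermediate product does not fit in 64 bits, which is exactly the situation the 128-bit math guards against. (`__int128` is a gcc/clang extension; graphene uses its own fc::uint128 type instead.)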

libraries/chain/db_maint.cpp (resolved)
Because all positions will be closed anyway, which index is used doesn't matter in terms of amounts; however, using another index may change the IDs of historical virtual operations, specifically `fill_order_operation` -- we usually try to avoid this.
abitmore (Member, Author) commented Feb 1, 2019

WRT getting rid of a sub-index in a boost multi_index, I have an idea:

  • replace it with a secondary index that has a variable indicating whether the secondary index is enabled;
    • before the hard fork, we use the secondary index;
    • at hf time (or after LIB has passed the hf time?) we update the variable, so the secondary index will be disabled and thus save resources in the future.

I guess it's possible to implement, but it would be a bit complicated; I'm not sure if it's worth the effort.
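The idea above could be sketched roughly as follows. This is a simplified toy, not the actual graphene::db secondary-index API; all names here are hypothetical, and the real implementation would hook into the object database's insert/remove notifications.

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <map>

// Toy sketch of a switchable secondary index: it tracks call orders
// by price only while enabled; once LIB has passed the hardfork time
// it is disabled and its storage released.
class switchable_call_price_index {
   bool enabled_ = true;
   std::multimap<int64_t, uint64_t> by_price_; // price key -> order id

public:
   void object_inserted( int64_t price_key, uint64_t id )
   {
      if( enabled_ ) by_price_.emplace( price_key, id );
   }

   void object_removed( int64_t price_key, uint64_t id )
   {
      if( !enabled_ ) return;
      auto range = by_price_.equal_range( price_key );
      for( auto itr = range.first; itr != range.second; ++itr )
         if( itr->second == id ) { by_price_.erase( itr ); break; }
   }

   // Called once LIB has passed the hardfork time: further bookkeeping
   // is skipped and the memory the sub-index held is freed.
   void disable()
   {
      enabled_ = false;
      by_price_.clear();
   }

   std::size_t size() const { return by_price_.size(); }
};
```

Compared to a permanent sub-index inside the boost multi_index, this shape lets the maintenance cost and memory drop to zero after the switch, at the price of the extra indirection the comment above calls "a bit complicated".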

pmconrad previously approved these changes Feb 4, 2019
pmconrad (Contributor) commented Feb 4, 2019

Good to go, IMO. Any potential optimizations can be implemented after the release.

pmconrad (Contributor) left a comment:

Thanks!
