Drop py27 more #63

Merged: 18 commits merged into SIPp:drop_py27 from drop_py27_more, Jan 14, 2021

Conversation

@kontsaki (Contributor)

@goodboy would something like this suffice?

goodboy and others added 15 commits July 5, 2019 12:24
The `defaults` kwarg to `pysipp.agent.Scenario` isn't used, so drop it and instead comb through the provided `kwargs`, popping values passed by the user into `pysipp.agent.Scenario.defaults`; these become the "new" defaults applied to agents. Also, rename the default-settings dicts more explicitly.

Resolves #20
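
A hedged sketch of the described kwarg combing; the template contents and names below are invented for illustration, not pysipp's actual internals:

class Scenario:
    # hypothetical template of per-agent default settings
    _defaults_template = {"recv_timeout": 5000, "rate": 1}

    def __init__(self, agents, **kwargs):
        self.defaults = dict(self._defaults_template)
        # comb through the provided kwargs, popping recognized settings
        # so they become the "new" defaults applied to agents
        for key in list(kwargs):
            if key in self.defaults:
                self.defaults[key] = kwargs.pop(key)
        self.agents = agents
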
These fancy tuple-attribute-properties are supposed to be strictly
assigned tuple types; let's enforce that.

Resolves #12
As per the change for #20, handle the case where the user passes a `defaults` kwarg to `pysipp.scenario` and use it to override the normal template. To fully solve #12, also support assignment of `None`, which results in the tuple attribute implicitly taking the value `(None, None)`.
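
A minimal sketch of how such a tuple-enforcing property could work; the descriptor below is illustrative, not pysipp's actual code:

class TupleAttr:
    """Attribute that only accepts tuples; assigning None yields (None, None)."""

    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self._name, (None, None))

    def __set__(self, obj, value):
        if value is None:
            value = (None, None)  # None implicitly becomes (None, None)
        if not isinstance(value, tuple):
            raise ValueError("{} must be a tuple".format(self._name[1:]))
        setattr(obj, self._name, value)
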
After attempting to find an OS-portable way to spawn subprocesses using the stdlib and coming away unsatisfied, I've decided to use the new subprocess launching support in `trio`! This will of course require that the project move to Python 3.6+, giving us access to a lot of neat features of modern Python, including async/await support and adherence to the structured-concurrency principles prominent in `trio`. It turns out this is a good fit since SIPp already has a built-in cancellation mechanism via the SIGUSR1 signal.

There are a lot of "core" changes to go over in this commit:
- drop the "run protocol" and "runner creation" related hooks since they
  really shouldn't be overridden until there's some need for it, and it's
  likely smarter to keep those "machinery" details strictly internal for now
- the run "protocol" has now been relegated to an async function:
  `pysipp.launch.run_all_agents()`
- many routines have been converted to async functions, particularly at the
  runner (`pysipp.TrioRunner.run()`, `.get()`) and scenario
  (`pysipp.Scenario.arun()`) levels, allowing us to expose both a sync and
  an async interface for running subprocesses / agents
- drop all the epoll/select loop stuff as this is entirely delegated to
  `trio.open_process()` and its underlying machinery and APIs

Resolves #53
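
A rough sketch (not the actual `pysipp.launch.run_all_agents()` body) of what launching agents via trio and cancelling through SIPp's SIGUSR1 mechanism can look like; the timeout value and command list are illustrative:

import signal
import trio

async def run_all_agents(cmds, timeout=180):
    # launch each SIPp agent as a subprocess
    procs = [await trio.open_process(cmd) for cmd in cmds]
    with trio.move_on_after(timeout) as cancel_scope:
        for proc in procs:
            await proc.wait()
    if cancel_scope.cancelled_caught:
        # SIPp's built-in cancellation: SIGUSR1 asks agents to stop gracefully
        for proc in procs:
            proc.send_signal(signal.SIGUSR1)
    return procs

# sync callers can drive it with trio.run():
# trio.run(run_all_agents, [["sipp", "-sn", "uas"]])
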
@kontsaki (Contributor, Author)

It seems that I haven't picked up the commit that resolved the conflicts, even though I forked after that.

@kontsaki (Contributor, Author) commented Jan 5, 2021

Ping @goodboy, anything else to do here?

assert (
    self._prepared_agents and self._runner
), "Must run scenario before finalizing."
return trio.run(
@goodboy (Member)

Ahh right, this is triggered from sync code, yes?

Sorry, trying to get my head back into this change set!

@kontsaki (Contributor, Author)

Yes, it's not the most elegant solution. Do you want me to try something that doesn't require abusing object attributes and is more functional? (It may change the API a bit, though.)

@goodboy (Member)

If you have something functional that you think is cleaner, then yeah, for sure.

I have no real problem with this; I just forgot this is how we have to do it.

@kontsaki (Contributor, Author)

Maybe we can just return a handle to finalize from Scenario.run(); at the moment we return the runner, which exposes internals.

@goodboy (Member)

Yeah, this is a good idea - feels like a continuation/future then.
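
A hedged sketch of that "finalize handle" idea; the `_runner` attribute and the `functools.partial` handle here are illustrative, not the actual implementation:

import functools
import trio

class Scenario:
    def run(self, timeout=180):
        # ... spawn agents via the trio runner ...
        # instead of returning the runner itself, hand back a callable
        # "finalize" handle so the runner's internals stay hidden
        return functools.partial(trio.run, self._runner.get, timeout)

# usage:
# finalize = scen.run()
# cmds2procs = finalize()  # blocks until agents complete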

@@ -56,8 +56,7 @@ async def run(
         for cmd in cmds:
             log.debug(
                 "launching cmd:\n\"{}\"\n".format(cmd))
-            # proc = await trio.open_process(
-            proc = trio.Process(
+            proc = await trio.open_process(
@goodboy (Member)

Do we want the async context manager interface here?

I'm pretty sure there's some teardown we may find useful?

I'm not sure how this is going to change though, given python-trio/trio#1568.

@kontsaki (Contributor, Author)

Aren't the changes in this trio PR going to make it so you can use trio.open_process inside a nursery?
In pysipp we want the Scenario API to not be async, right?

@kontsaki (Contributor, Author)

> do we want the async context manager interface here?

What do you have in mind? Isn't the teardown part of finalize, which will be called from sync code?

@goodboy (Member)

> in pysipp we want the Scenario API to not be async right?

Right, I think that's what we have to aim for at the outset.

> isn't the teardown part of finalize that will be called from sync code?

Yes, you're right. So I guess I was thinking we should trigger the shutdown code I linked to, but because it's getting called from sync code we might have to either call Process.aclose() explicitly or use an async exit stack. I'm thinking the former is simpler and cleaner?
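
A minimal sketch of the explicit-aclose() route, assuming the scenario keeps a list of trio.Process objects around; the helper names are illustrative:

import trio

async def _aclose_all(procs, timeout):
    with trio.move_on_after(timeout):
        for proc in procs:
            # waits for the child to exit; kills it if the scope is cancelled
            await proc.aclose()

def finalize(procs, timeout=180):
    # sync wrapper so the public Scenario API stays non-async
    trio.run(_aclose_all, procs, timeout)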

async def finalize(runner, agents, timeout):
    """Block up to `timeout` seconds for all agents to complete."""
    # this might raise TimeoutError
    cmds2procs = await runner.get(timeout=timeout)
@goodboy (Member)

Hmm, could we maybe do this in a more trionic style using with trio.fail_after(timeout):?

Not sure what would have to get factored out of .get().
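
Roughly what that suggestion could look like with the timeout factored out of .get() (a sketch; .get()'s real signature may differ):

import trio

async def finalize(runner, agents, timeout):
    """Block up to `timeout` seconds for all agents to complete."""
    # raises trio.TooSlowError instead of TimeoutError on expiry
    with trio.fail_after(timeout):
        cmds2procs = await runner.get()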

@kontsaki (Contributor, Author)

I will give it a try and see if we can refactor finalize() and .get().

@goodboy (Member)

Yah, I mentioned the spot from my original branch here. Not sure if I'm convinced though, given that a lot of users might not be familiar with this style?

What do you think?

@kontsaki (Contributor, Author)

Oh yes, I would prefer a common global timeout; we can also document it.

> users might not be familiar with this style

Which style are you referring to?

@goodboy (Member)

with trio.fail_after(1):
    scen.finalize()

I'm actually now thinking we can't do this anyway...
If we want to keep a sync API, this won't work I don't think (fail_after() won't work outside a call to trio.run(), pretty sure). I just don't have my head in this code, so I'm probably making bad suggestions.

This should work if we offer an async .aclose() method to do finalization.

@goodboy (Member) commented Jan 9, 2021

Oh wait, nm, .get() is async now - see what I mean? Obviously I haven't looked at this in a while.

And, duh, we await that in the above code.
So yeah, I think maybe pulling out the timeout is the more trionic way to do this, thus giving the user more control over waiting / cancellation. The only downside is we'll have to document this style for async users.
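
For async users, the documented style might look something like this (a sketch; pysipp.scenario() and Scenario.arun() are names from this change set, the rest is illustrative):

import trio
import pysipp

async def main():
    scen = pysipp.scenario()
    # the caller now owns the timeout / cancellation policy
    with trio.fail_after(10):
        await scen.arun()

trio.run(main)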

@goodboy (Member) left a review comment

@kontsaki Thanks so much for this!

I think the one major thing we will need is a .dev0 bump to the setup.py, and maybe we need to start a changelog?

@goodboy (Member) commented Jan 8, 2021

We might also need to tweak the docs, which have some comments about the old runner.

@goodboy (Member) commented Jan 11, 2021

@kontsaki I think I might just merge this into #55 and we'll go off that branch.

Sound good?

@kontsaki (Contributor, Author)

@goodboy sure!

We can continue the discussion there.

@goodboy merged commit 50cf7a1 into SIPp:drop_py27 on Jan 14, 2021.
@kontsaki deleted the drop_py27_more branch on Jan 14, 2021 at 17:24.
@goodboy mentioned this pull request on Mar 22, 2021.