Drop py27 more #63
Changes from all commits: 701dca4, b833789, c3d6bbd, c275334, 4bfe5db, 7544b5d, 8ecbdb0, a41df32, 1ecc513, 060e9b5, de44f50, 3911de9, d11fe44, 53bf511, 77cf11a, 576b5be, e58c5ed, 2d30b87
```diff
@@ -43,6 +43,9 @@ nosetests.xml
 coverage.xml
 *,cover

+# virtual environment
+venv
+
 # Translations
 *.mo
 *.pot
```
```diff
@@ -56,8 +56,7 @@ async def run(
         for cmd in cmds:
             log.debug(
                 "launching cmd:\n\"{}\"\n".format(cmd))
-            # proc = await trio.open_process(
-            proc = trio.Process(
+            proc = await trio.open_process(
                 shlex.split(cmd),
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.PIPE
```

Review thread on the `proc = await trio.open_process(` line:

- do we want the async context manager interface here? I'm pretty sure there's some teardown we may find useful? I'm not sure how this is going to change though given python-trio/trio#1568.
- aren't changes in this trio PR going to make it so you can use …
- what do you have in mind? isn't the teardown part of finalize that will be called from sync code?
- Right, I think that's what we have to aim for at the outset. Yes you're right, so I guess I was thinking we should trigger the shutdown code I linked to, but because it's getting called from sync code we might have to either call …
```diff
@@ -157,32 +156,32 @@ def clear(self):
         self._procs.clear()


-async def run_all_agents(runner, agents, timeout=180):
-    """Run a sequencec of agents using a ``TrioRunner``.
-    """
-    async def finalize():
-        # this might raise TimeoutError
-        cmds2procs = await runner.get(timeout=timeout)
-        agents2procs = list(zip(agents, cmds2procs.values()))
-        msg = report.err_summary(agents2procs)
-        if msg:
-            # report logs and stderr
-            await report.emit_logfiles(agents2procs)
-            raise SIPpFailure(msg)
-
-        return cmds2procs
-
+async def run_all_agents(runner, agents, timeout=180, block=True):
+    """Run a sequencec of agents using a ``TrioRunner``."""
     try:
-        await runner.run(
-            (ua.render() for ua in agents),
-            timeout=timeout
-        )
-        await finalize()
+        await runner.run((ua.render() for ua in agents), timeout=timeout)
+        if block:
+            await finalize(runner, agents, timeout)
+        return runner
     except TimeoutError as terr:
         # print error logs even when we timeout
         try:
-            await finalize()
+            await finalize(runner, agents, timeout)
         except SIPpFailure as err:
-            assert 'exit code -9' in str(err)
+            assert "exit code -9" in str(err)
         raise terr
+
+
+async def finalize(runner, agents, timeout):
+    """Block up to `timeout` seconds for all agents to complete."""
+    # this might raise TimeoutError
+    cmds2procs = await runner.get(timeout=timeout)
+    agents2procs = list(zip(agents, cmds2procs.values()))
+    msg = report.err_summary(agents2procs)
+    if msg:
+        # report logs and stderr
+        await report.emit_logfiles(agents2procs)
+        raise SIPpFailure(msg)
+
+    return cmds2procs
```

Review thread on the `cmds2procs = await runner.get(timeout=timeout)` line:

- Hmm could be maybe do this in more trionic style using … Not sure what will have to get factored out of …
- i will give it a try and see if we can refactor
- Yah I mentioned the spot from my original branch here. Not sure if I'm convinced though given a lot of users might not be familiar with this style? What do you think?
- oh yes i would prefer a common global timeout, we can also document it

  which style are you referring to?
- ```python
  with trio.fail_after(1):
      scen.finalize()
  ```
  I'm actually now thinking we can't do this anyway.. This should work if we offer an async …
- Oh wait, nm And, duh, we …
Review thread on the new `finalize`:

- Ahh right this is triggered from sync code yes? Sorry trying to get my head back in this change set!
- yes, it's not the most elegant solution, do you want me to try something that doesn't require abusing objects attributes and be more functional? (it may change the API a bit though)
- If you have something functional that you think is cleaner then yeah for sure. I have no real problems with this I just forgot this is how we have to do it.
- maybe we can just return a handle to finalize from `Scenario.run()`, atm we return the runner which is exposing internals.
- Yeah this is a good idea - feels like a continuation/future then.