[Question] How to get the result of the example? #331
Comments
A similar issue was reported in #329. We are working to reproduce and fix this problem. Could you provide more information about your running environment? e.g.
We sincerely hope to resolve your issue. Please provide more detailed information so that we can better pinpoint the problem.
Thank you so much for your reply and help.
I attempted to replicate your issue on a headless Linux server but was unsuccessful. After exporting my Python environment (it begins with: absl-py 2.1.0), I found a related issue: lengstrom/fast-style-transfer#253
This issue is quite unfamiliar to me, but I found a related issue that might help you: mcfletch/pyopengl#90
Thanks to your help, I have successfully implemented the solution. Your support has been invaluable. Thank you!
Required prerequisites
Questions
After I trained the PPOLag model using

python train_policy.py --algo PPOLag --env-id SafetyPointGoal1-v0 --parallel 1 --total-steps 10000000 --device cpu --vector-env-nums 1 --torch-threads 1

I cannot obtain the result when running `omnisafe eval runs/PPOLag-{SafetyPointGoal1-v0}` and encounter the following problem.
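For reference, the full workflow described in the question might look like the shell sketch below. Note that the exact run-directory name under `runs/` is an assumption: `train_policy.py` typically appends seed or timestamp subdirectories, so the path passed to `omnisafe eval` should be checked against what was actually created on disk.

```shell
# Train PPOLag on SafetyPointGoal1-v0 (command taken from the question)
python train_policy.py --algo PPOLag --env-id SafetyPointGoal1-v0 \
  --parallel 1 --total-steps 10000000 --device cpu \
  --vector-env-nums 1 --torch-threads 1

# Inspect what the trainer actually wrote, then point `omnisafe eval`
# at that concrete directory rather than a literal brace pattern
ls runs/
omnisafe eval "runs/PPOLag-{SafetyPointGoal1-v0}"   # adjust to the real path shown by ls
```

A common cause of "no result" at evaluation time is passing a directory name that does not match the one the trainer created (e.g. quoting or brace-expansion differences in the shell), so verifying the path with `ls` first is a cheap sanity check.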