2.0 Some Specific Technical Changes #542
I would add the following points:
Thanks Jerry, I appreciate the discussion.
Just to clarify, I meant to suggest removing
I am sorry, but I don't believe we can close this one just yet. Let me repeat what I mentioned in issue #395 for clarity:

I like this approach: it's clear and simple. Unfortunately it's also repetitive and quite error-prone as files are updated over time. I see only one obstacle preventing this approach: we have not formulated how we test or otherwise ensure that the require statements are correct in every file of our gem. If we formulate the testing strategy, then this will be the best option.

In the approach suggested above (having a list of requirable files), each requirable file represents a logical unit of the gem, e.g. a data structure or a concurrent abstraction. All tests for a given logical unit can then require just that unit and run. If the tests are executed in isolation, this ensures that the given unit works when required separately. I have a hard time formulating something similar for every individual file. (I am assuming a class-per-file policy, not all classes for a given logical unit in one file.) I would like to have a solution for this problem before we commit to making every file requirable by users.
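One possible testing strategy for the concern above is to load each library file in a fresh interpreter, so that a missing `require` in any file fails loudly. This is a minimal sketch, not the gem's actual test suite; the file names and layout are invented for illustration:

```ruby
require "tmpdir"
require "rbconfig"

FAILURES = Dir.mktmpdir do |root|
  lib = File.join(root, "lib")
  Dir.mkdir(lib)

  # A tiny stand-in gem: tuple.rb explicitly requires synchronization.rb.
  File.write(File.join(lib, "synchronization.rb"), "module Synchronization; end\n")
  File.write(File.join(lib, "tuple.rb"),
             "require 'synchronization'\nclass Tuple; include Synchronization; end\n")

  # Load each file in a fresh interpreter: a missing `require` in any
  # file surfaces as a NameError/LoadError in that subprocess.
  Dir[File.join(lib, "*.rb")].reject do |file|
    system(RbConfig.ruby, "-I", lib, "-r", File.basename(file, ".rb"),
           "-e", "", out: File::NULL, err: File::NULL)
  end
end

puts FAILURES.empty? ? "every file is requirable in isolation" : "broken: #{FAILURES}"
```

Running such a check in CI over every file in `lib/` would give the guarantee discussed here without hand-maintaining a list of requirable files.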
After thinking about this more, I think we should keep both pools, for non-blocking and for blocking tasks.

Simple usage does not require both. A user with a few async tasks can just post them to the blocking pool and everything will be fine. While non-blocking tasks can be posted to the blocking pool, blocking tasks must not be posted to the non-blocking pool (it would be good to have at least some best-effort detection of this).

Problems arise when a user has many tasks to process. If all of them are simply fed to the blocking pool, performance suffers (context switching) as the pool tries to create a thread for every task. It can also deadlock when all tasks that were able to obtain a thread are blocked on tasks still in the queue. If non-blocking tasks dominate, then posting [non-]blocking tasks to their respective pools is a good approach: work is naturally throttled by the non-blocking pool (which has a thread per CPU core and executes as fast as possible). The remaining tasks sit on the blocking pool, which can keep up, since the assumption in this case is that there are not many of them.

If there are more than a few blocking tasks, they need to be throttled by some other mechanism (which is why we should provide one). Currently users can create extra pools, which is far from ideal since it creates additional threads. Or they can create pools of actors to represent e.g. a pool of 10 DB connections, which can be somewhat heavyweight.

For the problems described above, I think we should keep both pools, do a much better job of educating our users on how to use them properly, and prototype some throttling tools. We currently have nice shortcuts using the Symbols `:fast`/`:io`. The non-/blocking names don't feel good; would you have other ideas?
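The throttling tool mentioned above could be as small as a counting-semaphore wrapper: a `SizedQueue` of permits caps how many blocking tasks run at once, so something like 10 DB connections can be modeled without spinning up extra pools or actors. This is only a stdlib sketch of the idea, not concurrent-ruby API; the `Throttle` name and the limit of 2 are invented for illustration:

```ruby
# Throttle: a SizedQueue of permits caps how many tasks run concurrently.
class Throttle
  def initialize(limit)
    @permits = SizedQueue.new(limit)
    limit.times { @permits << :permit }
  end

  # Blocks the caller until a permit is free, runs the task, then releases.
  def throttled
    token = @permits.pop
    yield
  ensure
    @permits << token if token
  end
end

throttle = Throttle.new(2)   # e.g. model 2 DB connections
lock     = Mutex.new
current  = 0
$max_seen = 0

threads = Array.new(10) do
  Thread.new do
    throttle.throttled do
      lock.synchronize { current += 1; $max_seen = [$max_seen, current].max }
      sleep 0.01             # stand-in for blocking work
      lock.synchronize { current -= 1 }
    end
  end
end
threads.each(&:join)

puts "max tasks running at once: #{$max_seen}"
```

Blocking tasks wrapped this way could still be posted to the blocking pool, but no more than the permitted number would ever be in flight at once.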
Yeah, we do not have to have a stable solution, but a prototype should be in place for 2.0; users have asked for both. I already have working prototypes.
Other classes are using it but I'd rather enforce the contract through common tests. The module itself has become unwieldy.
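One way to enforce a contract through common tests is to express the contract as a shared, plain-Ruby check that every participating class must pass. This is only an illustrative sketch: the method list and the toy executor classes below are invented, not the gem's real API (though `post`/`shutdown` echo the executor interface discussed in 1.0):

```ruby
# A "common test" expressed as a shared contract check. The method set
# is an assumption for illustration, not the real required interface.
EXECUTOR_CONTRACT = %i[post shutdown shuttingdown?].freeze

def satisfies_contract?(klass, methods)
  methods.all? { |m| klass.public_method_defined?(m) }
end

# Two toy implementations run against the same shared check:
class ImmediateExecutor
  def post(*args)
    yield(*args)
  end

  def shutdown; end

  def shuttingdown?
    false
  end
end

class BrokenExecutor   # forgets #shutdown, so the shared check fails
  def post(*args)
    yield(*args)
  end

  def shuttingdown?
    false
  end
end

puts satisfies_contract?(ImmediateExecutor, EXECUTOR_CONTRACT)  # true
puts satisfies_contract?(BrokenExecutor,  EXECUTOR_CONTRACT)    # false
```

In a real suite the same idea is usually expressed as RSpec shared examples included into each implementor's spec, which keeps the contract in one place without a large shared module.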
We don't need to test it. So long as every developer makes a concerted effort to
Any user experiencing this will be able to create their own thread pool and use the
Users with the knowledge to understand the difference will also have the knowledge to create their own thread pool and use the
We can prototype in Edge. If you get this done in time for 2.0 then we can consider releasing them, but if they aren't ready when 2.0 is ready I don't want to delay 2.0 until they are done. I hope you would agree with that, since you are the one who wants to move fast on 2.0.
A lot of time has passed. We'll re-evaluate what we want from 2.0 in the future.
Concurrent Ruby 1.0 has been very successful, but we made some mistakes along the way. We had a few ideas that didn't work out as well as we had hoped. We discovered that some features were more trouble than they were worth. And we gained a better understanding of what our users want. Based on this learning we will make several specific changes while designing and building 2.0.
- We won't use `autoload` in 2.0. We inherited `autoload` from two separate gems that we merged in; it worked much better in those smaller, more focussed gems.
- Every file will explicitly `require` every file it depends on. Using a few `require` statements to load the entire gem was problematic in 1.0: because we eat our own dog food, we ran into issues with the load order. This is one case where we will be non-idiomatic and instead mimic Java and C#.
- Users should be able to load only the parts that they need. Requiring the entire gem imposes an unnecessary and unwanted memory and load-time dependency.
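The explicit-require policy can be demonstrated end to end: when each file requires exactly what it depends on, loading one part pulls in only its dependencies and nothing else. The file names below (`atomic.rb`, `utility.rb`, `big.rb`) are invented for illustration, not the real gem layout:

```ruby
# Sketch of the 2.0 policy: every file explicitly requires its
# dependencies, so users can load one part without the whole gem.
require "tmpdir"

Dir.mktmpdir do |root|
  lib = File.join(root, "lib")
  Dir.mkdir(lib)

  File.write("#{lib}/utility.rb", "module Utility; end\n")
  # atomic.rb requires exactly what it uses, Java/C# style:
  File.write("#{lib}/atomic.rb",
             "require 'utility'\nclass Atomic; include Utility; end\n")
  # big.rb stands in for an unrelated, expensive part of the gem:
  File.write("#{lib}/big.rb", "BIG_LOADED = true\n")

  $LOAD_PATH.unshift(lib)
  require "atomic"                 # pulls in utility.rb transitively

  LOADED_ATOMIC = Object.const_defined?(:Atomic)
  LOADED_BIG    = Object.const_defined?(:BIG_LOADED)
  puts "atomic loaded: #{LOADED_ATOMIC}, big loaded: #{LOADED_BIG}"
end
```

Here `require "atomic"` loads `atomic.rb` and its one dependency, while the unrelated `big.rb` is never read, which is exactly the memory and load-time saving the point above describes.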