⚠️ Please check that this feature request hasn't been suggested before.
I searched previous Ideas in Discussions and didn't find any similar feature requests.
I searched previous Issues and didn't find any similar feature requests.
🔖 Feature description
My proposal is to make Axolotl more production-friendly by adding a RESTful web server layer. The server would act as a gateway, allowing users to interact with Axolotl through specific URLs like https://axolotl-server/train and https://axolotl-server/finetune. Users would send configurations as simple JSON payloads, specifying exactly what they need Axolotl to do. This makes it straightforward to communicate with the system and also offers flexible options for saving the trained models, either locally or on cloud storage (the details of cloud storage can be worked out later).
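To make the idea concrete, here is a minimal sketch of what a payload sent to the proposed `/train` endpoint could look like. The field names are illustrative assumptions, not a finalized schema:

```python
import json

# Hypothetical JSON payload for POST https://axolotl-server/train.
# All field names here are illustrative; the actual schema would be
# decided during implementation.
payload = {
    "base_model": "meta-llama/Llama-2-7b-hf",
    "dataset": {"path": "my_dataset.jsonl", "type": "alpaca"},
    "output": {"destination": "local", "dir": "./outputs/run-1"},
    "hyperparameters": {"num_epochs": 3, "learning_rate": 2e-4},
}

# Serialized body as it would go over the wire
body = json.dumps(payload)
```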
✔️ Solution
The goal is to make Axolotl ready for real-world use, whether on cloud platforms like Kubernetes or on remote machines with GPUs. By setting up a RESTful server, teams can easily integrate Axolotl into their own systems and start training or fine-tuning models with a few API calls. This setup removes the technical hurdles and lets anyone manage machine learning tasks through simple web requests.
For this, I am open to using any web framework, whether Django, FastAPI, or Flask.
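Regardless of the framework chosen, the core request handling could look something like the framework-agnostic sketch below. The route names (`/train`, `/finetune`) come from the proposal above; the validation rules and job-id scheme are assumptions for illustration:

```python
import json
import uuid

# Illustrative: which fields a payload must carry before we accept it.
REQUIRED_FIELDS = {"base_model", "dataset"}

def handle_request(path: str, body: str) -> dict:
    """Validate a JSON payload for /train or /finetune and accept a job."""
    if path not in ("/train", "/finetune"):
        return {"status": 404, "error": f"unknown route {path}"}
    config = json.loads(body)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        return {"status": 400, "error": f"missing fields: {sorted(missing)}"}
    # A real server would hand the config to Axolotl's trainer in a
    # background worker and return immediately with a job id.
    return {"status": 202, "job_id": str(uuid.uuid4())}
```

With FastAPI, for example, this logic would live inside a `@app.post("/train")` handler, with the validation expressed as a Pydantic model instead of a manual field check.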
❓ Alternatives
I haven't seen any alternative to Axolotl, so building a wrapper on top of Axolotl is the only way I see to expose it as a web server. I am open to suggestions.
📝 Additional Context
I would be happy to contribute to this enhancement.
Acknowledgements
My issue title is concise, descriptive, and in title casing.
I have searched the existing issues to make sure this feature has not been requested yet.
I have provided enough information for the maintainers to understand and evaluate this request.
I think that, similarly to vLLM providing OpenAI-compatible inference endpoints, it would also be a useful target to emulate the OpenAI fine-tuning endpoint with Axolotl backing it.
That would get you much closer to full coverage of their major API endpoints with open-source alternatives.
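As a sketch of what that emulation could involve: the request shape below follows OpenAI's public `POST /v1/fine_tuning/jobs` API (`model`, `training_file`, optional `hyperparameters`), while the mapping onto Axolotl-style config keys is purely illustrative and would need to match Axolotl's actual config schema:

```python
# Illustrative mapping from an OpenAI-style fine-tuning request to an
# Axolotl-style config. The request fields mirror OpenAI's
# /v1/fine_tuning/jobs API; the config keys on the Axolotl side are
# assumptions for the sake of the sketch.
def to_axolotl_config(request: dict) -> dict:
    hp = request.get("hyperparameters", {})
    return {
        "base_model": request["model"],
        "datasets": [{"path": request["training_file"]}],
        "num_epochs": hp.get("n_epochs", 3),  # assumed default
    }
```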