Hi, thanks for checking it out! You bring up some great points.
When it comes to SWEs doing ML, our goal is to bring together a suite of apps that get people most of the way to something they can bring into production.
I appreciate your skepticism of online IDEs, and it’s unlikely we’ll ever completely replace notebooks or whatever IDEs people prefer hacking on locally. Instead, we’d like to take a hybrid approach: our online IDE is sufficient for making minor tweaks to a model after it’s in production, but the brunt of development can still happen locally, with changes pushed to Git and synced with our online IDE.
It’s true that SWEs can deploy their own APIs, but that feels like an unnecessary annoyance to take for granted. At a high level you’re just setting up an API, but really you’re also going to set up a versioning system, a Dockerfile, and monitoring, and all of that adds up to a lot of cognitive overhead.
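To make that concrete, here’s a rough sketch (stdlib only, with a stand-in model and an invented artifact-naming scheme, not anything Slai-specific) of the pieces you end up owning when you hand-roll serving: serializing the model, versioning the artifact, and wiring up an HTTP endpoint — before you’ve written a line of monitoring.

```python
# Sketch of the moving parts in DIY model serving. DummyModel and the
# content-hash versioning scheme are illustrative assumptions.
import hashlib
import json
import pickle
from http.server import BaseHTTPRequestHandler


class DummyModel:
    """Stand-in for a trained model."""

    def predict(self, xs):
        return [x * 2 for x in xs]


def artifact_version(model):
    """Tag a pickled model with a content hash (poor man's versioning)."""
    return hashlib.sha256(pickle.dumps(model)).hexdigest()[:8]


class PredictHandler(BaseHTTPRequestHandler):
    """Minimal /predict endpoint: deserialize, score, serialize."""

    model = DummyModel()

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps({"predictions": self.model.predict(body["inputs"])})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload.encode())
```

And that still leaves out metrics, logging, retraining hooks, and deployment config.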
BTW - the Jupyter-like cell execution can be turned on by clicking the “Interactive Mode” button on the bottom right corner of the IDE.
Makes a lot of sense! ML models really do live in their own little world with their training loop, and should interact with everything else through a REST API. Every now and then new data gets added for training, or the API needs to change a bit, or we tweak the labels. You’ve managed to encapsulate that part. I might port one of our text models to it to try it out :)
The core code editor itself uses Monaco (the same thing under the hood of VSCode), but everything else is custom (i.e. file browser, language server, syntax highlighting, tabs, etc.)
We think developer experience is the factor that has been sorely lacking from the ML tooling space. I’m curious how your experience using Sagemaker has been?
Hi, thanks! The main difference is that HuggingFace contains a huge repository of pretrained models, whereas we're providing the scaffolding to build your own end-to-end applications. For example, in Slai you could actually embed a HuggingFace model (or maybe two models), and combine them into one application, along with API serialization/deserialization, application logic, CI/CD, versioning, etc.
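The "two models in one application" idea looks roughly like this sketch. The model functions here are stand-ins (in practice they might be HuggingFace pipelines, which we avoid calling directly here); the point is the application-level glue: one handler that deserializes a request, runs both models, and serializes a combined response.

```python
# Composing two models plus request/response handling into one service.
# detect_language and summarize are hypothetical stand-ins for real models.
import json


def detect_language(text):
    # Stand-in: a real deployment might use a language-ID model here.
    return "en"


def summarize(text):
    # Stand-in: naive "summary" = the first sentence.
    return text.split(".")[0] + "."


def handler(raw_request: bytes) -> bytes:
    """Deserialize, run both models, serialize one combined response."""
    req = json.loads(raw_request)
    text = req["text"]
    out = {"language": detect_language(text), "summary": summarize(text)}
    return json.dumps(out).encode()
```

A model hub gives you the two functions; the scaffolding is everything around them.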
You can think of us as being a store of useful ML based microservices, and not just a library of pre-trained models.
Thanks for checking us out! We sort of anticipated that — we just launched the unit test feature this week, and it’s not easily distinguished from the test panel. We’ll probably rename the test panel to something more descriptive, like API runner.
Interactive mode is a feature for setting interactive breakpoints in your code; it’s really useful for iterating on individual blocks of code without having to run the entire app E2E.
Hi, thanks for the questions! We're planning on adding a way to synchronize a git repo to a sandbox, should be out by April.
Right now, you can upload a trained binary to a sandbox and return it from the train function, and then use it for inference. So it's a bit manual at this point, but we're planning on improving that workflow shortly.
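The pattern being described — a train function that hands back a serialized artifact, which inference later loads — can be sketched like this. Everything here (the trivial "model," the function names) is illustrative, not Slai's actual API.

```python
# Hypothetical train/inference pair: train returns a serialized binary,
# predict loads it back. The "model" is just a stored mean, for illustration.
import pickle


def train(data):
    """'Train' by computing the mean; return the model as bytes."""
    model = {"mean": sum(data) / len(data)}
    return pickle.dumps(model)


def predict(artifact, x):
    """Load the trained artifact and score a new input against it."""
    model = pickle.loads(artifact)
    return x - model["mean"]
```

The manual step today is uploading that binary yourself; automating the handoff is the workflow improvement mentioned above.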
We built linting and testing into the sandbox, but testing is currently triggered manually. We're planning on building both into our CI/CD system (scheduled training).
Thanks, that means a lot! We've spent a lot of time thinking about the pitch, given how many other ML tools are out there. We're hoping to strike a chord with simplicity and developer experience.