If there's one tool I'd recommend everyone learn, it's rebasing. Especially interactive rebasing, because it enables things like splitting commits apart, squashing them together, and even running automated programs.
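For anyone who hasn't tried it: the todo list that `git rebase -i` opens is just a text file, so you can even script the whole flow. A minimal sketch in a throwaway repo (all names and messages here are made up), using `GIT_SEQUENCE_EDITOR` to stand in for the editor and fold one commit into another with `fixup`:

```shell
#!/bin/sh
set -e

# Throwaway repo; every name here is invented for the demo.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

git commit -q --allow-empty -m "base"
echo one > file && git add file && git commit -q -m "feature part 1"
echo two >> file && git add file && git commit -q -m "feature part 2"

# The rebase todo list is plain text; instead of an interactive
# editor, rewrite line 2 from "pick" to "fixup" so part 2 is folded
# into part 1. An "exec <cmd>" line in the same file could run an
# automated program (e.g. the test suite) after each picked commit.
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -i -q HEAD~2

git log --oneline   # base plus one squashed feature commit
```

The same `sed`-as-editor trick works for `squash`, `edit`, `reword`, and friends; interactively you'd just type the keyword in the todo list instead.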
The interactive tools exist outside of rebase, too. You can also split commits apart as you work on them, for example with `git add -i`, without needing to rebase.
I've met junior developers I'd never trust to get rebase right, but interactive add was an important tool to teach them.
`git add -i` is indeed powerful and lets you interactively craft your index before you create a commit. It doesn't enable you to split up an existing commit, though.
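For completeness, splitting an existing commit is exactly where `rebase -i` comes in: mark the commit with `edit`, reset it, and re-commit in pieces. A rough sketch in a throwaway repo (names are made up), scripting the editor via `GIT_SEQUENCE_EDITOR` so it runs unattended:

```shell
#!/bin/sh
set -e

# Throwaway repo; all names are invented for the demo.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

git commit -q --allow-empty -m "base"
echo one > a
echo two > b
git add a b && git commit -q -m "two changes in one commit"

# Mark the last commit as "edit" (scripted here; interactively you
# would change "pick" to "edit" in the todo list yourself).
GIT_SEQUENCE_EDITOR='sed -i "1s/^pick/edit/"' git rebase -i -q HEAD~1

# The rebase stops at the marked commit: undo it, keep the files in
# the working tree, and commit the pieces separately.
git reset -q HEAD~1
git add a && git commit -q -m "change one"
git add b && git commit -q -m "change two"
git rebase --continue
```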
Interactively splitting hunks in the index maps to the mental model of "splitting the work-in-progress commit". That may be all a developer needs in a "rebase never" workflow.
Over the past two months I made it a challenge for myself to write an ebook about rebasing in Git, and today I'm happy to launch the first version of it!
I see a lot of people struggling to build a fundamental understanding of how Git works. Rebasing is probably one of the tool's most powerful features and can make everyone extremely productive when it comes to version control and working with peers on projects.
I just took a look at your course platform and it looks really interesting! I like the interactive editor and also the fact that it comes with an integrated test runner. Well done!
You might want to look into making the landing pages a bit more attractive, though, as they're not selling your platform "strongly" enough.
That's correct! You'll be able to execute your machine learning code in our cloud, plus you can share it with others for easy exploration.
For uploading datasets we're working on a resolver strategy that lets you specify endpoints from which the data can be downloaded before the lab is executed.
We'll provide containers with all common libraries. In the future you'll be able to configure your own containers which will allow you to run pretty much whatever you want.
Do you have public code for containers as Floydhub has? What machines are you using in your cloud? Can I specify the machine CPUs and RAM size? Do you have time/memory limit for job execution?
> Do you have public code for containers as Floydhub has?

Can you elaborate on what you mean by that?
> What machines are you using in your cloud? Can I specify the machine CPUs and RAM size?
We'll have different hardware categories. In the end it boils down to VPSes on Google Compute Engine running Ubuntu. That said, thinking long term, we'd love to give people access to things like the Nvidia Tesla V100.
> Do you have time/memory limit for job execution?
Your experiments can run for as long as you wish (but you'll pay for everything that goes beyond the free hours). Memory limits will be defined by the hardware category you choose when you run your lab.
Do you have specific requirements regarding memory or CPU/GPU power? We'd love to learn from people like you what the community needs.
I don't have any specific requirements regarding memory or CPU, just asking. It would be nice if I could use your service to run some Kaggle kernels that take too long on Kaggle.