These frameworks are inevitably much more generic than any in-house solution. On one hand this is great, because it forces the abstractions to be sensible - but it also means there is much more code to maintain. So yes, sometimes maintaining your own solution will be easier than maintaining an abandoned open-source solution. Of course, even better would be to use a solution that will not be abandoned.
Also note that these frameworks lock you into design & feel of their choice. This might work for some use cases for now, but when the requirements change, you often end up fighting the framework. I don't have a solution for that, though. :)
> So yes, sometimes maintaining your own solution will be easier than maintaining an abandoned opensource solution.
I'd argue this depends on your needs. If you're using one of these for your consumer-facing product, perhaps so. Otherwise, if your UI is not what you're selling, or your product/team is big enough to end up creating one of these from scratch anyway, then no. In the latter case, that's probably why there are so many of these to begin with.
> Also note that these frameworks lock you into design & feel of their choice.
Some of these frameworks actually let you re-theme them. For example, Ant Design can mostly be re-themed.
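To illustrate, here is roughly what Ant Design's re-theming looked like at the time (a sketch assuming webpack with less-loader; `@primary-color` is one of antd's documented theme variables, the surrounding config is illustrative):

```javascript
// webpack.config.js (sketch): override Ant Design's Less variables at build time.
module.exports = {
  module: {
    rules: [
      {
        test: /\.less$/,
        use: [
          { loader: 'style-loader' },
          { loader: 'css-loader' },
          {
            loader: 'less-loader',
            options: {
              modifyVars: {
                '@primary-color': '#1DA57A',    // your brand color instead of antd blue
                '@border-radius-base': '4px',   // another theme variable
              },
              javascriptEnabled: true,
            },
          },
        ],
      },
    ],
  },
};
```

This only changes colors, spacing, and the like - the overall look & feel is still very much antd's.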
I made the opposite journey. While working in C and Java I commented heavily, but in Python I prefer to just leave a comment here and there when I'm not sure the reason for the code and its intent are clear - and at the same time, a comment is also a powerful signal for me to rethink that piece of code and make it more obvious.
I find both of your examples readable to about the same level; the MySQL one because the naming is helpful (`number_of_bytes_read_from_buffer`) and the SQLite one because of the comments - I would be lost without them. For example, why is a function that does "Release the STATIC_MASTER mutex" called "leaveMutex"? On the other hand, the SQLite comments explain much more about how the code is used and what assumptions are made, so that is really a bonus.
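To illustrate the "comment as a signal to rethink" point (a contrived Python sketch, not taken from either codebase):

```python
# Before: the comment carries the meaning.
def handle(n):
    # n is the number of bytes read from the buffer; 0 means end of stream
    if n == 0:
        return False
    return True


# After: the names carry the meaning, so the comment becomes redundant.
def should_continue_reading(number_of_bytes_read_from_buffer):
    end_of_stream = number_of_bytes_read_from_buffer == 0
    return not end_of_stream
```

Both behave identically; the second just no longer needs the comment to be understood.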
> Why is docker better than virtualenv with a requirements.txt file?
It's not - it solves a different problem. If Python is your only dependency, a requirements.txt is fine (well, the user needs to install the correct versions of Python, pip, and virtualenv / pipenv, but that's doable). But as soon as your app is actually composed of nginx / apache, Python, some background process in Rust, bash scripts for cron jobs,... then you have a problem with app delivery, which Docker solves nicely. Just package everything in a Dockerfile and distribute the image. Bonus: you can now test it locally, with the same installation.
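A minimal sketch of what that packaging could look like (all file names and the entrypoint are illustrative, not a recommended layout):

```dockerfile
# Illustrative only: bundle the Python app, nginx, and a cron job in one image.
FROM python:3-slim

RUN apt-get update && apt-get install -y --no-install-recommends nginx cron \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
COPY nginx.conf /etc/nginx/nginx.conf
COPY jobs.cron /etc/cron.d/jobs

# hypothetical entrypoint script that starts nginx, cron, and the app
CMD ["./start.sh"]
```

Then `docker build -t myapp .` once, and the customer only runs the resulting image.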
I jumped on the Docker wagon very early for this exact reason. I don't care about hype, but it does solve these kinds of problems.
Old-timer to two other old-timers: Docker does solve a problem, and this is quite close to it... We had a solution that we needed to install on servers for multiple clients. Before Docker, we had to make sure the libraries were correct (on CentOS, Debian, Ubuntu,... you name it), which was a nightmare. We were literally debugging on customers' installations to see why it broke there.
Docker solves this nicely. Apart from the kernel you take all dependencies with you.
In this case, it provides an easy installation of all the dependencies in a single command, while still allowing for customizations, and all in a standard way. Not to mention that certificates and the like are all handled.
Forget about hype and just try Docker. It's a tool like any other. It can be abused, sure, but it's useful too and it's here to stay.
EDIT: the difference between Java and Docker is that containers depend on kernel capabilities, not on some 3rd-party JAR maintainers. In my experience Docker simply works unless you are doing something really weird.
I've been using Docker for lots of things. To be honest, the dependency part has been helpful but still weak: we inevitably end up finding that our containers are built from things that go stale (apt-get sources, pip, broken URLs, stale SSL certs, etc.). I guess there is a skill to doing this well so that it doesn't happen, but there you are: it didn't really solve the problem, it's now an extra skill we need to have to solve a problem we had already mostly solved.
What has really sold me is more the orchestration side, with docker-compose allowing us to easily launch a whole fleet of independent services. I assume Swarm, Kubernetes, etc. make this even better. It's really this that lets us decompose our software into finer-grained independent components, which has a number of other benefits. I don't think we could manage it otherwise.
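A sketch of the kind of compose file that makes launching a fleet this easy (service names, images, and paths are made up for illustration):

```yaml
# docker-compose.yml (illustrative): one command brings up all the services.
version: "3"
services:
  web:
    build: ./web          # e.g. the Python app
    ports:
      - "8000:8000"
    depends_on:
      - db
  worker:
    build: ./worker       # e.g. a background process in another language
    depends_on:
      - db
  db:
    image: postgres:10
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

With this in place, `docker-compose up` starts the whole stack, and each component stays an independently buildable unit.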
You still need to build frequently to detect broken dependencies, but at least setting up a daily Docker build in CI is so much simpler and faster (and cross-platform!) than doing this with Vagrant or live systems. At least you'll catch problems a lot sooner.
If you want to protect yourself from breaking dependencies, mirror them locally, and use a network-isolated docker build to verify you've actually got everything you need.
The build process might break because of stale links, like you say, but already built images should continue to function everywhere unless you're doing something really weird.
A few days ago I tried to install LibreOffice because a (legal) copy of MS Word gave me trouble and I needed a fast solution. However, the maximum download speed I could reach was ~50 KB/s, bringing the download time to almost an hour. A net speed test on the same computer reached full speed with no problem. Downloading from the same URL on another (Linux) computer took a matter of minutes. All of the mirror sites I tried had similar speeds.
I thought it was just some weird misconfiguration, but now I wonder... Could MS be actively slowing down downloads from LibreOffice to discourage users from using it?
Just to show how absurd such a practice is: if Mozilla wanted to do the same, they would have to intercept requests for Google.com and Bing.com and advise users to use Mozilla's search engine instead. I doubt Google would keep paying them if they did that, though.
Which is the real problem with Mozilla being dependent on money from Google. Google can do whatever they want against Firefox, but Mozilla is very limited in its response because they can't risk upsetting Google too much.
Presumably their customers don't intend to patch the bugs without help from the vendor. They likely intend to exploit the bugs (in the name of state surveillance). Many/most global governments (all?) aren't trustworthy in this regard. If you disclose a bug to Zerodium, can you trust them not to have "bad" governments as customers? Also, consider a Sybil attack: Zerodium is an untrustworthy government front.
> ZERODIUM customers are mainly government organizations in need of specific and tailored cybersecurity capabilities, as well as major corporations from defense, technology, and financial sectors, in need of protective solutions to defend against zero-day attacks.
> In particular, I've always wondered why "GPL or ask me for permission" isn't being explored more.
Maybe because then you need to have CLAs (Contributor License Agreements)? Otherwise, who is the "you" that they need to ask? It's not just your code; it's based on the work of many other contributors.
Possibly-ignorant legal question: Could the "CLA" be as simple as a checkbox on the PR submission form that says something to the effect of "You agree that your contributions to this repository, while owned by and credited to you, belong to (ownername) for the purposes of copyright and license enforcement?"
Basically, making anyone who contributes aware that the contribution doesn't give them a claim in the copyright of the project. Short and simple.
A CLA is basically as simple as you describe. It's just that some people (how many?) don't want their contributions to ever be closed source and might not agree to that term. Worst case, they fork your project and apply their changes to their fork.
It's the social issue, not the legal issue, that's annoying about CLAs.
I am currently exploring the option of open-sourcing a project of mine and have researched CLAs a bit, but would love to be corrected if I'm mistaken.
It is as simple - and it is not. You need to take care of special cases, like contributions from company employees in their free time (their company could still own the rights to that work), people contributing other people's code (SO answers), patents, and whatnot. Fortunately, there are existing CLA agreements to reuse (Apache's, for instance).
As for this being a social issue, I don't know yet how big a problem that is. It has never bothered me before: I recognize that the maintainer might want to take the project in another direction in the future, and since I don't want to maintain it myself, I'm just happy that they are doing it for me. I will use a CLA for my project, and if someone doesn't like it, that's fine too - I don't mind forks (if they are well maintained, I'll just switch to them ;), and if someone doesn't want to contribute because of this, I don't want their contribution in my code anyway, because it limits my options in the future. There's a new post today on HN [0] that presents the options pretty nicely, and it is very aligned with the conclusions I came to.
> Moreover, projects do fail even with a fully experienced, fully vetted staff running the show.
Sure - but in that case they usually don't fail because of technical decisions (which is what the OP was talking about). There are, however, many other ways a project can fail (mostly bad management).
I'm not sure if this is what you're looking for, but I have been using headless Chrome through Selenium (via nightwatch.js) for some time now and it works great. You will probably need to rewrite your tests, though, but then they will support FF too. YMMV.
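The relevant part of the Nightwatch setup looks roughly like this (a sketch; chromedriver/geckodriver installation and Selenium server settings are omitted, and folder names are illustrative):

```javascript
// nightwatch.conf.js (sketch): run Chrome headless through Selenium,
// with a second environment so the same tests can target Firefox.
module.exports = {
  src_folders: ['tests'],
  test_settings: {
    default: {
      desiredCapabilities: {
        browserName: 'chrome',
        chromeOptions: {
          args: ['--headless', '--no-sandbox'],
        },
      },
    },
    firefox: {
      desiredCapabilities: {
        browserName: 'firefox',
        'moz:firefoxOptions': {
          args: ['-headless'],
        },
      },
    },
  },
};
```

Then something like `nightwatch` runs against headless Chrome, and `nightwatch --env firefox` runs the same suite against Firefox.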