LOL. I'm not sure what you mean? He called me up on the phone and asked if I wanted to buy a lot of Beanies. When I went to meet him at his house it was obvious he was legit. I only paid cash on delivery. I got whatever sealed boxes I got. He didn't have the ability to pick and choose which pallet I got and I don't know if we ever figured out what was in each box from the outside.
Terminating a bus factor 1 employee is not the problem. The real problem is not recreating BF1 reloaded with the new hire. Even a team of two is not completely ideal.
Many, mostly small, companies just cannot justify an IT department to mitigate BF1/BF2 risks. And the bus factor problem often manifests in non-IT roles too: sales, contract law, accounting.
Small companies need to first become big companies to afford de-risking from BF1. Part of the growth story really.
SQLite not supporting "stored procedures" is a deal-breaker for me. The idea behind stored procs is not to "put the process as close to the data" but simply to have a single place for language-agnostic encapsulation of data procedures.
SQLite is an in-process database. If you need language-agnostic encapsulation of data procedures, SQLite is not for you. I would suggest you consider PostgreSQL.
I don't think I've ever needed language-agnostic procedures in a project where sqlite is also a fit. I like them both but at different times. I'd love to hear your use case though. Do you have microservices in different languages running on the same machine that share a db file? Or maybe a web + command line interface?
SQLite's internals actually could support something like this: it has a bytecode engine (https://www.sqlite.org/opcode.html) that's oriented more toward executing query plans. It's missing some pieces (e.g. it has no stack, only registers), but much of the machinery is there to expand it into stored procedures.
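For what it's worth, the closest thing SQLite offers out of the box is application-defined functions, which are registered per connection in the host process. A minimal sketch (the `normalize_email` function is a made-up example) that also shows why this is *not* the language-agnostic encapsulation the parent wants:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Application-defined function: lives in the host process and must be
# re-registered by every client, in every language, on every connection --
# which is exactly what a real stored procedure would avoid.
def normalize_email(s):
    return s.strip().lower() if s is not None else None

conn.create_function("normalize_email", 1, normalize_email)
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('  Alice@Example.COM ')")
row = conn.execute("SELECT normalize_email(email) FROM users").fetchone()
print(row[0])  # alice@example.com
```

Handy for a single-language app, but the moment a second runtime opens the same db file, the "procedure" isn't there.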
A whole new generation of developers is learning what the previous generation termed "briefcase applications". When client-server applications were the thing in the late '90s and early 2000s, internet speeds were a serious limitation. This forced many architectures to work with "local-first", disconnected-dataset, eventually-synchronised desktop applications. Microsoft ADO famously touted the "disconnected dataset" for this after Borland's "client dataset" pioneered the concept for the Windows desktop. Eventually even Java got disconnected datasets. All these techs more than two decades ago solved real practical problems: one I worked on involved collecting water flow data from several thousand rivers in Southern Africa. Hundreds of users had mobile laptops that fired up desktop apps with authentication and business rules. Users then came back to head office to synchronise, or used a branch with a network connection to upload.
They worked and they were necessary.
Things changed when internet connectivity eventually became cheap and ubiquitous and the complexity of developing such applications then didn't merit the effort.
Now, the swing back to "local-first" is mainly for user-experience reasons. The same theoretical complexities of "eventually synchronised" that existed for the "briefcase app" are still present.
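To make that concrete, here is a minimal sketch (all names hypothetical) of the classic sync hazard that both the old briefcase apps and today's local-first apps must deal with: two disconnected replicas edit the same record, and a naive last-write-wins merge silently drops one of the edits.

```python
from dataclasses import dataclass

@dataclass
class Record:
    value: str
    updated_at: int  # logical timestamp assigned at write time

def merge_lww(a: Record, b: Record) -> Record:
    # Last-write-wins: keep whichever replica wrote most recently.
    return a if a.updated_at >= b.updated_at else b

laptop = Record("flow=120 m3/s", updated_at=5)  # edited in the field
office = Record("flow=118 m3/s", updated_at=7)  # corrected at head office

merged = merge_lww(laptop, office)
print(merged.value)  # flow=118 m3/s -- the field reading is silently lost
```

Whether that silent loss is acceptable, or you need per-field merges, CRDTs, or manual conflict resolution, is the same design question it was twenty-five years ago.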
The top Reddit response cites some good reasons why this may be so. The writer mentions they needed to move off Node to improve "throughput" (and .NET was suggested). In what way would .NET improve performance where Node can't keep up? (Aren't Node's worker threads sufficiently close to ASP.NET/ISAPI multi-threaded scenarios?)
In my opinion, you can go a bit more low level in C# and have control over allocations, data copying, threads, etc. Also, .NET is a bit more general purpose than Node. If the workload is more about computation (CPU/GPU intensive stuff) or has some unusual technical requirements (uncommon protocols, interop, etc.), you can really benefit from being lower level and more generic.
But all this is very project specific... it's hard to make these sorts of claims about performance.
Yes, somewhat. Not as well as it should, but it does the job. Especially with events like splits, mergers, and renames you can see it's not a very professional setup. I've had to hack around it by adding and then nulling some assets like "emission claims". Or, in my country, dividend tax is withheld immediately, so I now hack around that by adding that tax as a "fee" in Ghostfolio. It works, but it isn't a replacement for actual book- and portfolio-keeping.
A manual action where I multiply and divide all transactions. It works, but barely. I won't do it again. Next time, I'll probably do a sell of all the old stock at X per share, then a buy back at X/splitamt per share, to handle this.
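The arithmetic of that workaround can be sketched as follows (a hypothetical helper, not anything Ghostfolio provides): record a synthetic sell of the pre-split position at price X, then a synthetic buy of split-ratio-times as many shares at X divided by the split ratio, so the total position value is unchanged.

```python
def split_workaround(shares, price, split_ratio):
    """Synthetic sell/buy pair that mimics a stock split in a tracker
    that has no native split support. Position value is preserved."""
    sell = {"action": "sell", "shares": shares, "price": price}
    buy = {"action": "buy",
           "shares": shares * split_ratio,
           "price": price / split_ratio}
    return sell, buy

# e.g. a 3-for-1 split of 10 shares held at 300.0
sell, buy = split_workaround(shares=10, price=300.0, split_ratio=3)
assert sell["shares"] * sell["price"] == buy["shares"] * buy["price"]
print(buy)  # {'action': 'buy', 'shares': 30, 'price': 100.0}
```

The downside, as noted above, is that these synthetic trades pollute the transaction history and any realised-gains reporting built on it.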
* assuming that the 100 years hyperbole is accepted