The trackballs are listed as "completely open-source" on their website, but the mouse is listed as "open-source firmware". This is consistent with what's available on their GitHub page: the Altium and STEP files for the trackballs are published there, and the firmware for all their products (including the mouse) has been upstreamed[1].
The website Retraction Watch[1] aggregates these retractions and provides a database that you can query. Reference management software like Zotero[2] can use this to monitor your collection of papers and notify you when one is retracted.
Yeah. They could. But few (zero?) studies are retracted simply for being proven incorrect later. And, to be fair, retracting them would be ridiculous. Imagine having your career nullified because, when you're 60, some major breakthrough shows that your studies aren't relevant anymore. Your work was good when you did it, but now there's something new. That's more or less the definition of scientific progress.
However, as a counterexample: in my very narrow specialty there is a well-known lab that has produced highly cited bogus studies. I've personally published opposing results and said, "these studies are wrong for these reasons" using almost exactly those words. Should they be retracted? Absolutely. Will they ever be? No. Because, of course, the publisher and the authors just point the finger back at me and say "no, you're wrong!" and that's more than enough to keep the vague debate going.
Just a heads-up, if you're interested. Only SC-IM was fit for the non-database-format files I need to work with. But it does not have a search function! So I'm writing an app in Python for browsing spreadsheets:
https://github.com/dotancohen/osheet
> 5. https://github.com/andmarti1424/sc-im if I want
> a TUI. This is closest to what you were asking for.
Thank you! Yes, this seems to be almost perfect. I cannot believe that there is no "arbitrary string search" feature, but I can grep in another terminal window at least.
I'm just speculating (and haven't read the paper yet), but it may be possible to achieve similar speedups on GPUs by pruning the smallest 20% of blocks of size ≥K×K to produce block-sparse weights[0], rather than pruning the smallest 20% of weights.
The following instructions are for Linux or macOS. They may work on Windows too, but I'm not very experienced with Windows. No special hardware is required: you can run it on a CPU, no GPU needed.
Install pyenv[0] and pyenv-virtualenv[1]. Clone the repo and set up an environment:
I worked at two small ISPs, each with a fiber network spanning a few US states. They had their own databases to track this information, but as far as I know there was no central database. One of them used OSPInsight backed by a self-hosted SQL Server database. I don't know what the other place used.
It's not the case that malloc always works under Linux. It may return NULL if the memory is not available. Try it out! Write a program that allocates half your memory, fills it with 0xFF, then sleeps. Run two instances of it. The malloc will fail in the second one.
According to [1], the OOM killer is a consequence of the fork syscall, and can't be removed without breaking backward compatibility.
The man page for fork indicates that it can fail and return ENOMEM just fine. Technically, it's only supposed to be returned in special conditions where the kernel doesn't have enough memory, but this seems like much less of a breakage than going around killing random processes.
It certainly is permitted to fail by spec. The goal of the entire overcommit / OOM killer business is to yield better real-world outcomes.
If I have overcommit off, and a program that has allocated more than half of my total memory (RAM + swap), it can't fork - even if it's going to immediately exec a tiny program. If I have overcommit on, it can fork and exec, and nobody gets OOM killed.
The tradeoff is that, if I have overcommit on and the child process starts modifying every page instead of exec'ing, the OOM killer triggers. The bet is that (a) the child process will probably be the one killed, since it has the most physical memory in use (assuming the parent has not touched every page it allocated), so this is no worse than blocking the fork in the first place, and (b) this situation is rare.
The point is more that processes holding on to a lot of memory often fork and exec and don’t actually need 2X their current allocation in-between.
Yes, there are alternative APIs that avoid the need to temporarily hold on to that memory in both processes, but the idiom is still incredibly common. And it’s not unusual for server-style processing to fork without an exec and largely share their parent’s allocation via copy-on-write. That defers allocation until there’s a page fault on a simple memory write, which doesn’t have a particularly helpful mapping to C if you want to return failure to the copying process.
I had the same issue. I disabled them in xmodmap for several years, then eventually decided I was never going to use them and cut out the rubber dome. Here's what it looks like: https://i.imgur.com/Bm6KtIN.jpg
[1] https://github.com/qmk/qmk_firmware/tree/master/keyboards/pl...