Hacker News | walrus's comments

The trackballs are listed as "completely open-source" on their website, but the mouse is listed only as having "open-source firmware". This is consistent with their GitHub page: the Altium and STEP files for the trackballs are published there, and the firmware for all their products (including the mouse) has been upstreamed[1].

[1] https://github.com/qmk/qmk_firmware/tree/master/keyboards/pl...


A journal can publish a retraction.

The website Retraction Watch[1] aggregates these retractions and provides a database that you can query. Reference management software like Zotero[2] can use this to monitor your collection of papers and notify you when one is retracted.

[1] https://retractionwatch.com/

[2] https://www.zotero.org/blog/retracted-item-notifications/


Yeah. They could. But few (zero?) studies are retracted merely for being proven incorrect later. And, to be fair, retracting them would be ridiculous. Imagine having your career nullified because, when you're 60, some major breakthrough shows that your studies aren't relevant anymore. Your work was good when you did it, but now there's something new. That's practically the definition of scientific progress.

However, as a counterexample, in my very narrow specialty there is a well known lab that has produced highly cited bogus studies. I've personally published opposing results and said, "these studies are wrong for these reasons" using almost exactly those words. Should they be retracted? Absolutely. Will they ever be? No. Because, of course, the publisher and the authors just point the finger back at me and say "no, you're wrong!" and that's more than enough to keep the vague debate going.


It depends what I'm doing, but I typically use one of the following (starting with the most frequent):

1. IPython with Pandas if I expect to do any in-depth exploration/manipulation of the data.

2. A short Python script like `python -c 'import csv, sys; r = csv.reader(sys.stdin); ...' <data.csv` if it needs to run on someone else's machine.

3. https://github.com/BurntSushi/xsv if I want to quickly munge some data or extract a field.

4. Gnumeric if I want a GUI.

5. https://github.com/andmarti1424/sc-im if I want a TUI. This is closest to what you were asking for.
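For what it's worth, the one-liner in option 2 can be expanded into a small readable script. This is just a sketch — the column name and input format are hypothetical examples, assuming a CSV with a header row:

```python
import csv
import sys

# Minimal filter in the spirit of option 2: read CSV from stdin
# and print every value in one named column.
def column(rows, name):
    reader = csv.reader(rows)
    header = next(reader)
    idx = header.index(name)          # raises ValueError if the column is missing
    return [row[idx] for row in reader]

if __name__ == "__main__":
    for value in column(sys.stdin, sys.argv[1]):
        print(value)
```

Usage would be something like `python extract.py price <data.csv`, with the same "runs anywhere Python is installed" property as the one-liner.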


Just a heads-up, if you're interested. Only SC-IM was fit for the non-database-format files I need to work with. But it does not have a search function! So I'm writing an app in Python for browsing spreadsheets: https://github.com/dotancohen/osheet


  > 5. https://github.com/andmarti1424/sc-im if I want
  > a TUI. This is closest to what you were asking for.
Thank you! Yes, this seems to be almost perfect. I cannot believe that there is no "arbitrary string search" feature, but I can grep in another terminal window at least.

Thank you.


> Money laundering is the process of making illegally-gained proceeds (i.e. "dirty money") appear legal (i.e. "clean").

Someone should let FinCEN know that their definition is incorrect: https://www.fincen.gov/history-anti-money-laundering-laws


I'm just speculating (and haven't read the paper yet), but it may be possible to achieve similar speedups on GPUs by pruning the smallest 20% of blocks of size ≥K×K to produce block-sparse weights[0], rather than pruning the smallest 20% of weights.

[0] https://openai.com/blog/block-sparse-gpu-kernels/
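To make the idea concrete, here's a rough NumPy sketch of magnitude-based block pruning (not the paper's method — the block size, the L1 norm criterion, and the function name are my own assumptions):

```python
import numpy as np

def prune_blocks(w, k=8, frac=0.2):
    """Zero out the `frac` of k-by-k blocks with the smallest L1 norm.

    Sketch only: assumes w's dimensions are multiples of k, and uses
    L1 block magnitude as the pruning criterion.
    """
    h, wd = w.shape
    assert h % k == 0 and wd % k == 0
    blocks = w.reshape(h // k, k, wd // k, k)
    norms = np.abs(blocks).sum(axis=(1, 3))      # one magnitude per block
    cutoff = np.quantile(norms, frac)            # smallest 20% by default
    mask = (norms >= cutoff)[:, None, :, None]   # broadcast mask over each block
    return (blocks * mask).reshape(h, wd)
```

The resulting weight matrix has a block-sparsity pattern that a block-sparse GPU kernel can exploit, whereas unstructured per-weight pruning generally can't be accelerated the same way.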


The following instructions are for Linux or macOS. It may work on Windows too, but I'm not very experienced with Windows. No special hardware is required (you can run it on a CPU; no GPU needed).

Install pyenv[0] and pyenv-virtualenv[1]. Clone the repo and set up an environment:

  git clone https://github.com/taki0112/UGATIT
  cd UGATIT
  pyenv install 3.6.10  # [2]
  pyenv virtualenv 3.6.10 UGATIT
  pyenv local UGATIT
  pyenv activate
  pip install opencv-python==4.2.0.34 tensorflow==1.14.0
Download the pretrained weights from https://github.com/taki0112/UGATIT/issues/50#issuecomment-53.... Extract them:

  tar xf ugatit100.tar.xz
  mkdir checkpoint
  mv UGATIT_selfie2anime* checkpoint
  mkdir -p dataset/selfie2anime/{train,test}{A,B}
Crop your images in a 1:1 aspect ratio so that they contain only the head. Place them in the dataset/selfie2anime/testA/ directory. Run the program:

  python main.py --dataset selfie2anime --phase test
Open results/*/index.html in your browser to see the results.

[0] https://github.com/pyenv/pyenv#installation

[1] https://github.com/pyenv/pyenv-virtualenv#installation

[2] Other versions may work, but this is the version mentioned in https://github.com/taki0112/UGATIT#requirements


Thank you so much! I'll give this a crack on windows and see how it turns out.


I got it working via WSL on windows. Thanks!


My experience corroborates this.

I worked at two small ISPs, each with a fiber network spanning a few US states. They had their own databases to track this information, but as far as I know there was no central database. One of them used OSPInsight backed by a self-hosted SQL Server database. I don't know what the other place used.


Yeah right. They'll find a way to sell it while claiming they don't share it.


Today: They won't sell real-time data anymore.

Tomorrow: Sell one-minute-delayed data...


It's not the case that malloc always works under Linux. It may return NULL if the memory is not available. Try it out! Write a program that allocates half your memory, fills it with 0xFF, then sleeps. Run two instances of it. The malloc will fail in the second one.
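The experiment can be sketched in Python, with an anonymous mmap standing in for malloc (the sizes below are placeholders — to reproduce the test, set `size` to about half your RAM and run two copies):

```python
import mmap

def try_fill(size, chunk=1 << 20):
    """Map `size` bytes of anonymous memory and fill it with 0xFF.

    Returns True on success, False if the mapping itself fails.
    Caveat: with overcommit enabled, the mapping may succeed and the
    process may instead be OOM-killed while the pages are touched.
    """
    try:
        buf = mmap.mmap(-1, size)       # anonymous mapping, roughly malloc
    except OSError:                     # e.g. ENOMEM under strict accounting
        return False
    ones = b"\xff" * chunk
    for off in range(0, size, chunk):
        buf.write(ones[: size - off])   # actually touch every page
    return True
```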

According to [1], the OOM killer is a consequence of the fork syscall, and can't be removed without breaking backward compatibility.

[1] https://drewdevault.com/2018/01/02/The-case-against-fork.htm...


The man page for fork indicates that it can fail with ENOMEM just fine. Technically, ENOMEM is only supposed to be returned in special conditions where the kernel doesn't have enough memory, but that seems like much less of a breakage than going around killing random processes.


It certainly is permitted to fail by spec. The goal of the entire overcommit / OOM killer business is to yield better real-world outcomes.

If I have overcommit off, and a program that allocates more than half of my physical memory (RAM + swap), it can't fork - even if it's going to immediately exec a tiny program. If I have overcommit on, it can fork and exec, and nobody gets OOM killed.

The tradeoff is that, if I have overcommit on and the child process starts modifying every page instead of exec'ing, then the OOM killer triggers. The bet is that a) probably the child process will be killed, since it has the most physical memory in use (assuming the parent process has not touched every page it has allocated), so this isn't worse than preventing the child process in the first place, and b) this situation is rare.
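The fork-then-exec pattern described above can be sketched in Python, with `true` standing in for the "tiny program" and a buffer standing in for the large parent allocation:

```python
import os

# The parent holds a large allocation, but the child replaces its image
# immediately, so under overcommit the fork never needs a second copy.
big = bytearray(64 * 1024**2)  # stand-in for "more than half of RAM"

def fork_and_exec(argv):
    """Fork, exec `argv` in the child, and return the child's exit status."""
    pid = os.fork()
    if pid == 0:
        try:
            os.execvp(argv[0], argv)   # child: discards the COW mappings
        finally:
            os._exit(127)              # only reached if exec itself fails
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(fork_and_exec(["true"]))  # prints 0
```

With strict accounting (overcommit off), the fork would need `big`'s worth of commit charge for the instant between fork and exec, even though none of it is ever copied.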


The point is more that processes holding on to a lot of memory often fork and exec and don’t actually need 2X their current allocation in-between.

Yes, there are alternative APIs that avoid the need to temporarily hold on to that memory in both processes, but the idiom is still incredibly common. And it’s not unusual for server-style processing to fork without an exec and largely share their parent’s allocation via copy-on-write. That defers allocation until there’s a page fault on a simple memory write, which doesn’t have a particularly helpful mapping to C if you want to return failure to the copying process.


I had the same issue. I disabled them in xmodmap for several years, then eventually decided I was never going to use them and cut out the rubber dome. Here's what it looks like: https://i.imgur.com/Bm6KtIN.jpg

