Instead of writing out frames at a fixed frame rate (e.g., 60 FPS), the driver would send out updates to individual pixels at precise moments in time, and the unaddressed pixels would remain unchanged. I'm not sure whether, or where, this kind of technology is used in practice.
Yeah, I've been searching for this kind of display technology for the past couple of years and concluded I'm either using the wrong search terms or it just doesn't exist. Technically possible with OLED / microLED.
Funny coincidence. I just picked up Database in Depth from AbeBooks a week ago... ChatGPT brought the book up in a fever-dream Q&A session about database normalization and I figured I'd give it a look.
Er, on my laptop? It might be more precise to say that I can't find any way to make it use anything else: manpages give me nothing, grepping for "layout" in the source tree gives me nothing relevant (window layouts show up), and there's an open issue (https://gitlab.com/cardboardwm/cardboard/-/issues/30) asking for it, all of which makes me conclude that it doesn't support doing anything but QWERTY. That said, I said "AFAICT" for a reason; if you can prove me wrong I'll be quite grateful (because, seriously, I would like to use this thing).
Oh I think I understand what you're saying now. You have a laptop with a QWERTY hardware layout, and rely on the WM to remap the keys to a different layout? Sorry I can't help out. Just wanted to understand the issue.
Basically, yes. The hardware is QWERTY, and something in software has to be fiddled with to get my preferred layout - on the Linux console that's loadkeys, in X11 it's setxkbmap or a config file, and other systems do their own thing.

The annoying thing to me is that in X11, the X server (usually Xorg) was the single point of configuration, and setxkbmap would work with any window manager without any special support needed. In Wayland, the compositor replaces both Xorg and the window manager, so every single compositor has to implement its own support for changing the keyboard layout. As a sibling comment notes, wlroots appears to make this easier to implement, but as far as I can make out (I'm not much of a programmer) the dev still has to actually wire it up, and cardboard doesn't seem to have done so.

In the general case, I'm annoyed because even if every compositor did implement it, each one would have its own special setting, whereas before I could either run setxkbmap or set it in the Xorg configuration file and it would work regardless of which window manager I happened to be running that day.
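To make the contrast concrete (sway is used here purely as an example of a wlroots compositor that did wire this up; the layout and variant names are placeholders):

```
# X11: one command, works under any window manager
setxkbmap us -variant dvorak

# Sway: that compositor's own config syntax, in ~/.config/sway/config
input "type:keyboard" {
    xkb_layout us
    xkb_variant dvorak
}
```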
Regarding the speed: many believe that instant / unconfirmed transactions are safe for day-to-day activities up to about 10,000 USD. It's very difficult to double-spend, or to inject numerous conflicting malicious transactions in the hope that the victim sees a legitimate transaction while a fraudulent one gets mined.
Regarding the ease of use, have you tried featherwallet.org or mymonero.com for mobile?
This view seems to imply the double-spend problem wasn't really a problem after all, which would invalidate the reason a peer-to-peer proof-of-work/stake system was implemented in the first place.
I wonder if any of these companies would be interested in paying in Monero and just avoiding taxes entirely. Taxes are such a huge headache for both parties. Of course it's illegal, but hard to trace if done right. I also think it would expose them to a wider variety of talent.
How would you convert Monero back to fiat without alerting your bank? (Converting to fiat is also necessary, since you can't pay your veggie bills in Monero.) Any deposit to your bank would easily raise suspicion. Monero to cash is possible, but it's more difficult to arrange and the chances of a honeypot are higher.
1) As more IDs are generated, the probability of collision increases
2) A non-integer primary key is slow
For a single database instance, it's far more performant to leave the PK as an auto-incrementing integer and, when it needs to be exposed, encrypt it in the backend before sending it out. Why not hashids.org? Because it's insecure: https://carnage.github.io/2015/08/cryptanalysis-of-hashids
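One way to sketch the "encrypt the integer before sending it out" idea with only the standard library is a small Feistel network over the 64-bit ID space. To be clear, this is an illustration, not a vetted cipher; the key, round count, and function names are all made up for the example, and for production you'd want a reviewed format-preserving encryption scheme:

```python
import hashlib

KEY = b"server-side secret"  # hypothetical key, never sent to clients
ROUNDS = 4

def _round(half: int, rnd: int) -> int:
    # Round function: hash one 32-bit half together with the key and round number.
    data = KEY + rnd.to_bytes(1, "big") + half.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def encrypt_id(n: int) -> int:
    # Feistel network: split the 64-bit id into two 32-bit halves and mix.
    left, right = n >> 32, n & 0xFFFFFFFF
    for rnd in range(ROUNDS):
        left, right = right, left ^ _round(right, rnd)
    return (left << 32) | right

def decrypt_id(n: int) -> int:
    # Run the rounds in reverse to invert the permutation exactly.
    left, right = n >> 32, n & 0xFFFFFFFF
    for rnd in reversed(range(ROUNDS)):
        left, right = right ^ _round(left, rnd), left
    return (left << 32) | right
```

Because a Feistel construction is a permutation of the 64-bit space, two different row IDs can never map to the same public value, and the backend can always recover the original integer.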
> I built something better that's actually secure and performant here
That's quite a bold claim, isn't it? I also don't see how it's an alternative to OP considering it doesn't even mention how to integrate the code with any database, let alone Postgres.
As an aside, how does your approach deal with collisions? Why would the chance of collisions be any lower than with the random approach if you're using what seems to come down to a cryptographically secure hash function?
Technically, it can be integrated with Postgres with https://github.com/wasmerio/wasmer-postgres. That said, it makes little sense to integrate this at the database level since it'll bloat up the column size, unless just the encrypted integer is stored (without converting to a base58 string).
As for collisions: a hash function maps an input of any length to an output of fixed length, so collisions will happen eventually. What I'm using is encryption, which cannot collide; otherwise decryption would be impossible: https://crypto.stackexchange.com/questions/60473/can-collisi...
It only states that near the end of the page. What is the purpose of it if it's not secure? It only prevents an attacker from performing naive attacks (+1 to get the next id, for example), which might be good enough... But anyone really determined can crack this.
Left the PK as an int or UUID, but added another column, with a trigger to auto-populate, that created a base-64 kind of thing. The trigger also detected collisions and tried again, up to 3 times, before giving up and erroring.
There's no rule that says what you expose to the public has to be the PK at all. It seemed to me a good idea for it not to be.
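In application code (rather than a database trigger) the same retry-on-collision pattern looks roughly like this; the column's uniqueness check is stubbed out as a set, and the token length is an arbitrary choice for the example:

```python
import secrets

def make_public_id(existing: set, attempts: int = 3) -> str:
    # Mirrors the trigger's logic: generate a short random token, retry on
    # collision, and give up with an error after 3 tries.
    for _ in range(attempts):
        candidate = secrets.token_urlsafe(6)  # 6 bytes -> 8 chars, base64-url alphabet
        if candidate not in existing:
            existing.add(candidate)
            return candidate
    raise RuntimeError("could not generate a unique public id")
```

With 48 bits of randomness per token, three attempts is plenty until the table holds millions of rows; the retry limit just keeps a pathological case from looping forever.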
In a database like MySQL where a clustered index exists for the primary key, or even a NoSQL DB like DynamoDB, it often makes sense to expose some version of the PK to the public if your lookup pattern for that resource is going to be by PK, versus having some other "public" primary key field like you describe.
If you look up the resource by some other field, that means that you now need to support two indexes - one for the primary key, and one for the "public" primary key. This obviously requires more storage and comes with the performance overhead of keeping the second index updated on modifications to the table. Additionally, for something like DynamoDB where you pay per index, it could be cost prohibitive.
A better pattern is to simply encrypt/decrypt the primary key before exposing it publicly, such as in a URL. This requires no additional database overhead.
Xid uses the Mongo Object ID algorithm to generate globally unique ids with a different serialization (base64) to make it shorter when transported as a string: https://docs.mongodb.org/manual/reference/object-id/
- 4-byte value representing the seconds since the Unix epoch,
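That leading timestamp can be read straight off the hex representation; here's a small sketch using a well-known example ObjectID from the MongoDB docs:

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex: str) -> datetime:
    # The first 4 bytes (8 hex chars) of an ObjectID are seconds since the Unix epoch.
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(objectid_timestamp("507f1f77bcf86cd799439011"))
```

This also shows why IDs of this family sort roughly by creation time: the timestamp occupies the most significant bytes.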
Ah, I wish I would've known about format-preserving encryption when I did something similar a few years back! I ended up using IDEA, which (at the time) was still in most crypto libraries and has a 64 bit block size.
It is somewhat slower if you are looking up records by key. You still need an index to do that, and non-integer PKs such as UUIDs can be twice as large as the integer alternative, taking longer to search while requiring more memory.
That being said, in PostgreSQL you are correct: having a UUID (or something similar) as a PK is usually fine, assuming you understand the implications. However, I would absolutely avoid it in a DB like MySQL, where PKs are clustered.
The alternative I would suggest you consider is I/O expander chips, so you can use any MCU with a handful of pins. The MCP23xxx series seems popular; see e.g. https://www.abelectronics.co.uk/p/54/io-pi-plus
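For the I2C MCP23017 variant, driving a pin comes down to two register writes. The sketch below assumes the default BANK=0 register map (IODIRA = 0x00, OLATA = 0x14 - check the datasheet) and abstracts the bus behind an smbus-style object so it runs without hardware; on a Raspberry Pi you'd pass a real `smbus2.SMBus(1)` instead of the fake:

```python
IODIRA = 0x00  # port A direction register (1 = input, 0 = output), BANK=0 map
OLATA = 0x14   # port A output latch, BANK=0 map

class Expander:
    """Minimal MCP23017 port-A driver; `bus` needs an smbus-style write_byte_data."""
    def __init__(self, bus, addr=0x20):  # 0x20 = default address, A0-A2 grounded
        self.bus, self.addr, self.latch = bus, addr, 0x00
        bus.write_byte_data(addr, IODIRA, 0x00)  # configure all 8 port-A pins as outputs

    def set_pin(self, pin: int, value: bool) -> None:
        # Track the latch state locally so unrelated pins aren't disturbed.
        if value:
            self.latch |= 1 << pin
        else:
            self.latch &= ~(1 << pin) & 0xFF
        self.bus.write_byte_data(self.addr, OLATA, self.latch)

class FakeBus:
    """Records writes so the example runs anywhere."""
    def __init__(self):
        self.writes = []
    def write_byte_data(self, addr, reg, val):
        self.writes.append((addr, reg, val))

bus = FakeBus()
ex = Expander(bus)
ex.set_pin(3, True)
print(bus.writes)  # [(32, 0, 0), (32, 20, 8)]
```

One of these gives you 16 extra GPIOs off two I2C pins, and they can be chained (up to 8 addresses) for more.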