The list of small things is for data structures. However, the language is a lot less dynamic than Python:

> Since Codon performs static type checking ahead of time, a few of Python's dynamic features are disallowed. For example, monkey patching classes at runtime (although Codon supports a form of this at compile time) or adding objects of different types to a collection.

While monkey patching is maybe not done so much in Python (outside of unit testing), adding objects of different types to a collection is definitely a common operation!


From what I understand, this will be possible in the future with implicit union types. Wouldn't work with _arbitrary_ types, but with a set of types that can be computed at compile time (my guess is that this is possible in most real-world cases).
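
To make that concrete, here is a rough sketch in plain Python of what an implicit union would have to cover; the Union annotation is just illustrative, and Codon's actual inference rules may well differ:

    # Plain CPython happily mixes types in one list, because the list is
    # dynamically typed. A static compiler has to give the list a single
    # element type, which with implicit unions could be inferred as
    # something like Union[int, str, float], since the set of member
    # types is known at compile time.
    from typing import List, Union

    mixed: List[Union[int, str, float]] = [1, "two", 3.0]

    for item in mixed:
        # You still need to dispatch on the runtime type to use the items.
        if isinstance(item, str):
            print(item.upper())
        else:
            print(item * 2)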


Yup. One of my previous jobs was at a very tiny company where everyone was busy, yet my job was basically this type of performance optimization. The reason? We were building a real-time processing product, and we were not hitting our frame time, meaning the product could not be shipped. In those situations, convincing the PM that you are doing important performance work is trivial.


Blocking them inside Iran is done because US companies are forbidden to do business with Iran, due to economic sanctions. Maybe they could with government approval, but why would they try?


From context we can tell that they used to be working before the protest broke out, so the explanation must be something else.


Yup, there's also been a slew of companies rescinding offers to candidates in these regions lately as well. I think there has been some tightening of sanctions recently or something.


They were working before. Chat applications are not sanctioned.

This is Facebook actually violating sanctions by helping the Iranian government.


Yes, but support for implementing block devices as part of userland is a feature of the next kernel, if I understand correctly [0], while FUSE, which this is using, is available right away.

[0] https://lwn.net/Articles/903855/


It could act as an NBD server. The in-kernel client for that has been around since Linux 2.1.x.


What about loopback devices using a FUSE file as the host file?


Chromium does not use Bazel, but GN. I would assume they still build most of their dependencies from source, as this is the way Google tends to set up build systems.


One of these being libv8, which can take quite some time itself.

It can also consume a lot of memory, so as soon as you hit swap things slow down a lot.


How do you prevent someone from just copying the key to another box, though? I think the key argument for CandyCodes is that they are massively more expensive to copy than they are for the legitimate producer to generate.


For the third point, couldn't you have some overlap between agents, so that each customer is served by N agents, with each agent having a different set of M customers? It is a super expensive arrangement, though.
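
Concretely, the assignment could be as simple as round-robin with overlap; the names and numbers below are made up just to illustrate the idea:

    # Toy sketch: give each customer N agents, so no single agent can see
    # the whole customer base. Hypothetical names throughout.
    from collections import defaultdict

    def assign(customers, agents, n_per_customer):
        """Map each agent to the set of customers they may access."""
        access = defaultdict(set)
        for i, customer in enumerate(customers):
            for k in range(n_per_customer):
                agent = agents[(i + k) % len(agents)]
                access[agent].add(customer)
        return access

    customers = [f"customer-{c}" for c in range(12)]
    agents = [f"agent-{a}" for a in range(4)]
    for agent, served in sorted(assign(customers, agents, 2).items()):
        print(agent, sorted(served))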


Yeah, that would work, but like you said it is expensive - or at least it is a cost that doesn't exist in the current model and would need specific attention. And we cannot assume customer requests/issues are uniformly distributed - it's not unlikely that 80% of issues come from 20% of customers. But if you give 80% of agents access to the same 20% of customers, you haven't really solved the original problem. Although the risk factor is certainly lower and I would prefer this setup, who knows if that justifies the additional overhead and access-control management.

Technically, the current model is a full M customers x N agents mapping: all customers served by all agents. Adding some zeroes to that mapping should certainly be possible without sacrificing much of the current system. There is no way to know how many is reasonable, though.

All that said, there is a natural logic that arises from this 'analysis': limit access to customers who are inactive, or who rarely have issues. A common-sense addition would be to raise their default priority for when they do interact.


I just want to point out that drunk driving also kills innocent bystanders: pedestrians, cyclists, sober drivers around, etc.


At my previous job I had to write a device driver for a (simple) I2C device. I already had experience writing drivers for microcontrollers, but to move to Linux I found it was pretty simple to take the code for an existing device similar to what I wanted to do and use it as a reference. If you look at the code for a simple serial port [1], with the knowledge gained in the article, you can probably figure out what the different parts are.

[1] https://github.com/torvalds/linux/blob/master/drivers/tty/se...


But in a way you are doing distributed systems; that's what the "with replication" part of your architecture is saying. Now you can choose to never ever test that your replication setup is working, but I think that "distributed systems testing is for big tech" is bad advice. Something that would work pretty well in your setup: take down the Postgres leader, and see if your website is still up. That's already Chaos Engineering.
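
The minimal version of that experiment is a few lines of scripting; the container name and health URL below are placeholders, assuming a Docker Compose style setup:

    # Kill the Postgres primary, wait for failover, check the site is up.
    # "postgres-primary" and the health URL are hypothetical.
    import subprocess
    import time
    import urllib.request

    def site_is_up(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except Exception:
            return False

    subprocess.run(["docker", "stop", "postgres-primary"], check=True)
    time.sleep(10)  # give the replica time to be promoted
    print("site still up:", site_is_up("https://example.com/health"))
    subprocess.run(["docker", "start", "postgres-primary"], check=True)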


Well yes exactly.

That's kind of why I brought up the idea of clock errors.

Single-leader distributed systems are honestly an entirely different paradigm from multi-leader ones. You have a main node; the main node controls routing and which data goes where.

There can only be one main node at one time, hence no latency/clock errors/multi leader fail-over/PAXOS/voting/stale keys/exotic application specific conflict resolution logic.

Done, simple.

(Secretly not so simple, all of the above problems are actually hugely important to resolve in any monolithic database. However, Postgres sorts it out for you.)

Really, what I am calling "distributed systems problems" are the subset with multiple simultaneous, geographically separate leaders.


> There can only be one main node at one time, hence no latency/clock errors/multi leader fail-over/PAXOS/voting/stale keys/exotic application specific conflict resolution logic.

This is not accurate.

You can certainly have clock errors in "single leader distributed systems", simply by virtue of the fact that the primary rotates throughout the cluster.

And, perhaps even more surprisingly, you can certainly have clock errors in "single node non-distributed systems like Postgres", i.e. clock errors are completely orthogonal to distributed consensus protocols (unless you're doing something dangerous like leader leases, which I don't recommend).

The problem is simple: what if the system time strobes or gets stuck in the future? How does this affect things like recording financial transactions? What if NTP stops working? Distributed systems aside, it's pretty tricky.
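
One cheap way to notice that class of failure is to compare the wall clock against a monotonic clock and flag large disagreements. This is only a sketch with an arbitrary tolerance, not how TigerBeetle actually does it (see the link below):

    # The wall clock can jump backwards, forwards, or get stuck; the
    # monotonic clock only moves forwards. If their elapsed durations
    # disagree, something stepped or froze the system time.
    import time

    TOLERANCE_S = 0.5
    wall_start = time.time()
    mono_start = time.monotonic()

    def wall_clock_looks_sane():
        wall_elapsed = time.time() - wall_start
        mono_elapsed = time.monotonic() - mono_start
        return abs(wall_elapsed - mono_elapsed) < TOLERANCE_S

    time.sleep(1)
    print("wall clock sane:", wall_clock_looks_sane())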

I've worked on this problem of clock errors specifically for TigerBeetle [1], a financial accounting database that can process a million transactions a second. In our domain, we can't afford to timestamp transactions with the wrong timestamp, because this could mean that financial transactions (e.g. credit card payments) don't roll back if the other party to the transaction times out, so liquidity could get stuck and tied up for the duration of the clock error.

[1] https://www.tigerbeetle.com/post/three-clocks-are-better-tha...


Fair point.


>> There can only be one main node at one time

This is how most distributed systems solve the consensus problem: the nodes automatically run a leader election algorithm so that eventually only one leader remains.
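
As a toy illustration of "eventually only one leader remains" (real protocols like Raft or ZAB add terms, votes, and log replication on top, but converge on the same end state):

    # Every live node applies the same deterministic rule, e.g. "highest
    # reachable id wins", so they all converge on a single leader.
    def elect_leader(live_node_ids):
        return max(live_node_ids)

    cluster = {1, 2, 3, 4, 5}
    print(elect_leader(cluster))        # 5 is elected
    print(elect_leader(cluster - {5}))  # 5 fails; everyone converges on 4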

>> (Secretly not so simple, all of the above problems are actually hugely important to resolve in any monolithic database. However, Postgres sorts it out for you.)

Except that in the case of a network partition of the primary node, Postgres can't automatically promote a new primary from the replicas, unlike consensus systems such as etcd/ZooKeeper.


In a way, yes. But that’s not what most people are referring to when they talk about “distributed systems”, so I think you’ve constructed a straw man.


What do those people mean by "distributed systems"? Are they running a single-node server with the web app and database on the same node? Is there a load balancer/cache/replica/cluster? If it's more than one machine, it is distributed.

In your opinion what would qualify as a "distributed system"?


Distributed system probably isn't precise enough in and of itself.

But I was very much thinking of multiple leader distributed systems.

Anecdotal, but dealing with clock errors and multi-leader issues is of far greater complexity than caching data.

Replicas, load balancers, and all the vagaries of single leader systems are not absurdly difficult technical problems.

They're problems that can be solved by a team with a single senior engineer who knows what they're doing.


I'm not taking a position on what would qualify as a distributed system, and neither was that the point of my comment.

I believe that when most people talk about "distributed systems", they're talking about some form of microservices architecture, where components of a single system are arbitrarily separated by network requests for nebulous reasons.

