Hacker News | dureuill's comments

More than just that, Result in general also prevents you from accessing the value when there is an error, and from accessing the error when there is a value.


The absence of that safeguard in Go is a feature. It's used when the error isn't that critical and the program can merrily continue with the default value.

Of course, this is also scarily non-explicit.
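A minimal sketch of both behaviors (the `parse_port` function is a made-up illustration, not from the thread):

```rust
// A fallible function returning Result: the value and the error
// can never be confused with one another.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse()
}

fn main() {
    // Rust's safeguard: you cannot touch the value without
    // acknowledging the error case.
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // The Go-style "merrily continue with the default value" exists
    // too, but it has to be spelled out explicitly:
    let port = parse_port("not-a-port").unwrap_or(80);
    assert_eq!(port, 80);
}
```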


Global type inference is not a positive in my book. In my experience it becomes very hard to understand the types involved if they are not explicit at system boundaries.

I can also imagine that it must be hard to maintain: a change in one place can accidentally alter the inferred types elsewhere.


Being hard to maintain and having no static types at all did not stop Python from rising to conquer the world. Type inference allows us to at least give those users the succinctness they are used to, if not the semantics. Those who like explicit types can add as many annotations as they need in OCaml.


> Those who like explicit types can add as many annotations as they need in OCaml.

They cannot add it in other people's libraries.

> did not stop Python from rising to conquer the world

I wasn't talking about popularity, I was talking about maintainability. Python is not a stellar example of maintainability (source: maintained the Python API for a timeless debugger for 5 years).

Python's ubiquity is unfortunate; thankfully, there seems to be a movement away from typeless signatures, both with Python's gradual typing (an underwhelming implementation of gradual typing, unfortunately) and with TypeScript.


> They cannot add it in other people's libraries.

Does it matter that much how the internals of someone else's library are implemented? The tooling will tell you the types anyway and module interfaces will have type signatures.

> Python's ubiquity is unfortunate,

Well that we can agree on!


There's a trade-off: was the mistake here or there? The type checker cannot know. But for those few cases, you can add an annotation. Then the situation is, in the worst case, as good as when types are mandatory.


> But for those few cases you can add an annotation.

Not in other people's code. My main concern is that gradual typing makes understanding other people's code more difficult.

Idiomatic Haskell warns against missing signatures[1], and Rust makes them mandatory. Rather than global inference, local inference that stops at function boundaries is the future, if you ask me.

[1]: https://wiki.haskell.org/Type_signatures_as_good_style


Huh? If you are consuming someone else's code that you can't change, then I presume it's a published package? Then the IDE will tell you the types.


> the situation is, in the worst case, as good as when types are mandatory

The worst case is actually worse than when types are mandatory, since you can get an error in the wrong place. For example, if a function has the wrong type inferred, then you get an error when you use it, even though the actual location of the error is at the declaration site. Type inference is good, but there should be some places (e.g. function declarations) where annotations are required.
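A small Rust sketch of the same idea (the names are purely illustrative): without an annotation, inference flows backwards from a use site, so that is where a mismatch would be reported; an annotation pins the intent at the declaration.

```rust
fn main() {
    // Without an annotation, the element type of `ids` is inferred
    // from the later `push`; a mistake there would be reported at the
    // use site, far from the declaration.
    let mut ids = Vec::new();
    ids.push(1u64);

    // An annotation pins the intent at the declaration, so any
    // mismatch is reported here rather than at some distant use:
    let names: Vec<String> = vec!["a".to_string()];

    assert_eq!(ids.len(), names.len());
}
```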


As personal anecdata, this checks out.

10 years of C++, mostly C++11 once it started to exist and become available.

After a few years of Rust on the side, I was 2x to 3x more productive in Rust than in C++.

I regularly think that the way I express code in Rust would be very tedious to do in C++. The ingredients of the magic recipe are:

- Proper sum types (Rust's enum)

- Exhaustive pattern matching, destructuring, the initialization syntax

- The expression-orientedness

Other things that make working with Rust comparatively a joy:

- 99.9% of the time, no need to hunt for UB during reviews

- quality and "standard-ness" of the tooling: rustfmt, rustdoc, clippy, cargo
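A minimal sketch combining the first three ingredients (the `Shape` enum is a made-up example):

```rust
// A sum type: a value is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

// `match` is an expression (expression-orientedness), must cover every
// variant (exhaustiveness), and destructures the fields in each arm.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

fn main() {
    // Struct-like initialization syntax for the enum variants.
    let r = Shape::Rect { width: 3.0, height: 4.0 };
    assert_eq!(area(&r), 12.0);
}
```

Adding a new variant to `Shape` would turn every non-exhaustive `match` into a compile error, which is exactly the property that makes this style tedious to emulate in C++.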


In the Q&A from the video, there's a question very similar to yours.

As an answer, Lars specifies that these are regular engineers who were asked to learn Rust, and not early adopters who like shiny things.


Hello, I have a few questions:

- how much time to insert 15 million vectors of 768 f32s?

- how much RAM is needed for this operation?

- when inserting another vector, how incremental is the insertion? Is it faster than reindexing the 15M + 1 vectors from scratch?

- does the structure need to stay in RAM, or can it be efficiently queried from a serialized representation?

- how fast is the search in the 15M vectors on average?


I can answer #3. HNSW allows for incremental index building, so each additional insert is a sublinear (but greater than constant-time) operation.


I can answer how it would be in Qdrant, if you're interested. The index will take around 70 GB of RAM. New vectors are first placed in a non-indexed segment and are immediately available for search while the index is being built. The vectors and the index can be offloaded to disk. Search will take a few milliseconds.


> it'll still down your service and damn you to a crashloop.

Nope. Your service will be structured such that it is subdivided into tasks, each task wrapped in a `catch_unwind`[1], so that a panic merely kills the task it occurs in, not the whole service.

[1]: https://doc.rust-lang.org/std/panic/fn.catch_unwind.html
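A minimal sketch of that structure, using a made-up `run_task` function standing in for a unit of service work:

```rust
use std::panic::catch_unwind;

// A "task" that panics on some inputs.
fn run_task(input: i32) -> i32 {
    if input < 0 {
        panic!("negative input");
    }
    input * 2
}

fn main() {
    let mut results = Vec::new();
    // The loop stands in for the service's task dispatcher.
    for input in [1, -1, 3] {
        // A panic in one task is caught here; the service keeps running.
        match catch_unwind(|| run_task(input)) {
            Ok(v) => results.push(v),
            Err(_) => eprintln!("task {input} panicked; service continues"),
        }
    }
    // The two non-panicking tasks still completed.
    assert_eq!(results, vec![2, 6]);
}
```

In a real service the tasks would typically be dispatched by an async runtime or a thread pool, which applies the same containment for you.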


> What do you do now?

Look for another job

> You’d be amazed at how many C++ codebases in the wild are a core part of a successful product earning millions and basically do not compile.

Wow, I really hope this is hyperbole. I feel like I was lucky to work on a codebase that had CI to test on multiple computers with -Werror.


> Wow I really hope this is hyperbole.

I am sure it's not. I don't have much experience, as I have worked in only 3 companies in the last 25 years, but so far I have found no relation between code quality and company earnings.


> so far I have found no relation between code quality and company earnings.

This! What matters is market fit and customer experience. You can deliver a lot of value with average programmers working on a shitty code base.


I started to joke that in order to have a successful software startup, you essentially need to write the most godawful code you can get away with. The money is much better spent on a good/aggressive sales strategy.

Elegant technology never wins on its own merits.


> Much has been spoken at various occasions about drivers and I feel that the consensus is to wait for now.

Interesting, I did not follow that development. I thought the plan was to use Rust for some out-of-tree/optional drivers. What changed?


There was opposition to building interfaces for toy drivers, and in the last thread, suggestions for a Rust filesystem interface were rebuffed with the argument that they should try rewriting the ext2 driver in Rust to prove it was usable for real filesystems rather than toy ones. I'd guess similar thought processes fuelled this decision.


It's already used for an in-tree driver, but a lot of the infrastructure for writing Rust drivers has not been upstreamed.


`let` defines a new binding, as opposed to changing the value of an existing binding. You can't remove `let` to keep only `=` because it has a different use case.

Not indicating the type is idiomatic in Rust, but you can annotate it:

    let commit_message: String = repo.head().peel_to_commit().ok()?.message()?.into();
Here this is useful to specify the type `into` should convert to. However, if rewritten as:

    let commit_message = repo.head().peel_to_commit().ok()?.message()?.to_string();
Then the annotation would be useless, because we're already specifying the type of the variable by using `to_string`.

Note that IDEs display type hints anyway (without you having to type them), so you don't have to suffer when walking through a codebase where people annotate their types too little for your comfort.
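A small sketch of the `let`-vs-`=` distinction (the values are purely illustrative): `let` always introduces a new binding, even when it shadows an old one, while a plain `=` reassigns an existing `mut` binding.

```rust
fn main() {
    let x = "42";                    // new binding, of type &str
    let x: i32 = x.parse().unwrap(); // another new binding, shadowing the first
    let mut y = 1;
    y += x;                          // plain assignment: no `let`, requires `mut`
    assert_eq!(y, 43);
}
```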


Interesting, I was missing this context. Could you provide me with more information? Until now I had only seen atomic_shared_ptr come up in discussions about bugs.


https://www.youtube.com/watch?v=gTpubZ8N0no

The target of this optimization is low-latency code. Rendezvous will not work for that.

