Hacker News | noescape's comments

Keep studying. I hope you have more success in the future.


Why do rejection emails say "good luck" at the end?


Because you, the recipient, must now continue the job hunt, so luck in finding a better employer would come in handy.


But why is this pleasantry about "luck" and not something else? You don't hear employers say "keep studying" or "we wish you a good effort" or "we hope you build cool stuff".

Isn't the employer referring to luck an admission from the employer that interviewing has a lot to do with luck?


Because "keep studying" can be construed negatively: You're not good enough. Unsolicited advice isn't usually a good idea politically.

Saying "luck" implies several things:

- You just weren't the right fit.
- I don't wish to imply your skills are lacking.
- I hope you succeed, even though I wasn't able to help.


> Because "keep studying" can be construed negatively: You're not good enough. Unsolicited advice isn't usually a good idea politically.

They're rejecting you because you weren't good enough (or because they suck at actually identifying talent).

> Saying "luck" implies several things:

None of which are good things about the process, because a process should be designed to drive the percent influence of luck to as close to zero as practical.

> - You just weren't the right fit.

Bullshit non-reason.

> - I don't wish to imply your skills are lacking.

We also don't want to admit that we may not have any idea what we are looking for or how to evaluate it.

> - I hope you succeed, even though I wasn't able to help.

If I don't need improvement but I succeed elsewhere in a similar job, that just means you screwed up.


I think it roughly translates to: I hope you have more success in the future


Because "keep studying" would just sound condescending.


I personally find "good luck" and "we reject many good candidates" to be even more condescending.


It's a pleasantry. It's a part of communicating in a tactful and empathetic way.


It cleanly cuts the applicant loose, leaving zero ambiguity as to their chances with the current prospect company, while at the same time saying "you got spunk, kid; I like you and I'm rooting for you", all in two little words.


It shows that they have no ill will towards you.


It's happening for the operating system. Will it happen for the programming language?


What if, somehow, there was a guarantee that the code can't harm your machine -- would you use it then?

This isn't a joke.


It's not just about harming your machine - it can harm your* data, or data passing through your* system!

* Where "your" can mean you, your employer, your customer, another company, etc.


What if it can't harm your data, or data passing through your system?

What if it can't harm anything?


How could a system that executed arbitrary code ever make such guarantees? I mean sure, if there were a magical thing that only ever produced correct code for what you wanted to do I guess people would use it.


Is it impossible to do hardware development in C and automatically convert C to Verilog? https://en.wikipedia.org/wiki/C_to_HDL

What does one lose if they do this? Why isn't this conversion more common?


>Is it impossible to do hardware development in C and automatically convert C to Verilog?

Yes, it is. C can be converted to Verilog only if you structure the C code as if it were Verilog.

You have to understand that, fundamentally, the CPU is a lie. It is an illusory abstraction layer fashioned out of bare transistors that pretends to be a von Neumann architecture machine, with a nice set of registers, an instruction pointer, and so forth, that chugs dutifully through assembly language instructions, one after another. (Note that nothing about the von Neumann architecture requires binary; for example, the Babbage Analytical Engine computed in decimal.)

At the transistor / gate level, _everything_ is parallel until you impose some order on it and build in clocks and flipflops and so forth to impose some sort of structure and chronological ordering on things. Few, if any, high-level languages are equipped to describe that in any sensible way.
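As a rough illustration of that ordering difference, here is a minimal Python sketch (illustrative only; `clock_edge` is a made-up helper, not any real HDL API). In synchronous hardware, every register computes its next value from the *old* state and all registers commit at once on the clock edge, rather than statement by statement as in C:

```python
# Python sketch of non-blocking assignment: all next-state values are
# computed from the OLD state, then committed together -- there is no
# "first statement, then second statement" ordering as in C.

def clock_edge(state):
    """Model one clock edge: read old values, commit all at once."""
    return {
        "a": state["b"],  # a <= b  (reads the old b)
        "b": state["a"],  # b <= a  (reads the old a: the two swap)
    }

state = {"a": 0, "b": 1}
state = clock_edge(state)
print(state)  # {'a': 1, 'b': 0} -- a clean swap, which sequential
              # assignments cannot do without a temporary
```

In real Verilog the same swap is two non-blocking assignments (`a <= b; b <= a;`) inside one clocked block.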

Verilog, VHDL, and so forth are not "weird". These HDLs, while not without their warts, describe the underlying reality of computing. It is assembly language and the high-level languages that are "weird".

(edit: grammar fixes + clarifications)


Perhaps I'm reading too far into what you wrote, but I'm not sure I agree that the CPU is fundamentally different from other layers of abstraction in computer systems.

All abstractions “lie” in the sense that they present a perspective of the world that is slightly different to the reality — function calls “lie” about the operations that are really being performed, the ISA “lies” about the electronics of the CPU, and transistors “lie” about the underlying behaviour of the universe.


With a CPU, everything is done procedurally. With an FPGA, all the code kind of runs at once: you build your modules and wire them together, and the state of the HDL layout is static. There is no stack or heap. If a module is turned on, it's doing whatever it's supposed to do all the time, whether or not you are feeding a real signal into it. Imagine threading every possible function you might need in a typical C program and "wiring" everything up with global variables that cannot be initialized at startup. Printf is always printing something, and will print out random garbage at startup unless you tell it to print something else.

Simulating larger HDL designs takes a lot of memory because you have to model everything throughout the entire simulation. Simulating a CPU in something like C is much less intensive, since you can call an instruction whenever you are ready for it.


I completely appreciate that “writing hardware” is a totally different problem to writing software, and that hardware comes with its own set of quirks and challenges. I'm just saying that I don't think the CPU is fundamentally different than other abstractions in the stack.

C obviously isn't a good fit for a hardware language — it's designed for software! That doesn't mean that there doesn't exist similarly abstract ways of writing hardware though (that express the inherent parallelism, etc.). It is likely that these “higher level languages for hardware” would result in less efficient hardware solutions, but that's always a trade-off that is made through abstractions. Writing a program in properly scheduled assembly code is going to be a lot more efficient than writing the same program in C.

The difference with hardware in terms of language abstractions is not that it behaves differently to software. We could easily define a programming language that expresses parallelism in a way that would map nicely to hardware. The problem, from my perspective at least (please correct me if you think I'm wrong) is that hardware needs to be extremely efficient — particularly because it cannot be easily changed. As a result, hardware languages don't tend to be particularly abstract. But this doesn't mean that hardware languages couldn't be more abstract!


Probably the highest level of abstraction for Verilog is using something like Altera's IP cores. They are binary blobs intended for a specific FPGA model that can be hooked up like any other module, but they are configurable for things like bandwidth, latency, and various inputs. You can use things like floating point arithmetic modules, or cores used to create things like phase-locked loops (a way to convert an input frequency to a different frequency, typically using a multiplier and divider). You don't need to know how these modules work underneath; it's proprietary anyway, but there are usually reference designs you can look at. For example, with a few clicks you can create something like a VGA driver that could interface with a synthesized CPU to create a terminal for it. You can do some neat things with IP cores, but there are probably some issues with using them in a commercial form.


You know how you can't just compile normal C to a GPU and expect a speed up because the underlying computational models are so different? It's like that, just a few orders of magnitude worse.


To look at it another way, what does one gain? Hardware is highly parallelizable and HDLs give you that capability by default. Verilog is a really easy language to learn. The hard part is learning how to do hardware design.


One gains not needing to learn another language. I know C, I don't know Verilog or VHDL. I know how to debug C, I don't know how to debug Verilog.

If I can write a working C program, why can't I make working hardware?


Knowing how to debug C won't help you. There is no state machine running your code. Stepping through instructions doesn't apply.

Debugging HDL involves looking at timing waveforms in a simulator.
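A rough flavor of that workflow, sketched in Python (the signal names and the text rendering are made up for illustration): instead of stepping through lines of code, you record every signal's value on every clock cycle, then read the resulting waveform.

```python
# Poor man's waveform viewer: log each signal every cycle, then render
# the history as a text waveform, the way a simulator's wave window
# would show it graphically.
signals = {"clk": [], "q": []}
q = 0
for cycle in range(4):
    clk = cycle % 2          # clock alternates 0, 1, 0, 1
    if clk == 1:             # toggle q on each rising edge
        q ^= 1
    signals["clk"].append(clk)
    signals["q"].append(q)

for name, values in signals.items():
    print(f"{name:>3} " + "".join("-" if v else "_" for v in values))
# clk _-_-
#   q _--_
```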


Could one write a state machine running the code? VMs do.


You can do a lot with Verilog as long as you don't intend for it to run on an actual FPGA. You can initialize registers to values, print to a console, and add in hardware delays for timing. Simulation is what Verilog was meant for in the first place. Once you want to actually synthesize the code and run it on some hardware, you have to get more creative in how you lay out your code, like having a reset input for the module so it can trigger certain events whenever you want (e.g. register initialization). So when you write your testbench, you must assert the reset input (or have something similar to a reset button on your actual hardware) before trying to use the module; otherwise all the registers will just have random junk values. In my class projects we would always have a dedicated reset button on the board that had to be pressed before you could test the design. In a production design, the reset would be tied to some sort of automatic reset (like a one-shot timer or a microcontroller) when the device powers up.
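The reset pattern described above can be modeled with a short Python sketch (a toy model; the `Register` class is invented for illustration, not any real tool's API): until reset is asserted, the register holds whatever junk it powered up with.

```python
import random

class Register:
    """Toy model of a synthesized register: it powers up holding an
    undefined (junk) value and only becomes predictable after reset."""
    def __init__(self, width=8):
        self.value = random.randrange(2 ** width)  # power-up junk

    def clock(self, reset, d):
        if reset:
            self.value = 0   # synchronous reset: force a known state
        else:
            self.value = d   # normal operation: latch the input

r = Register()
r.clock(reset=True, d=0)     # "press the reset button" first...
assert r.value == 0
r.clock(reset=False, d=42)   # ...then the design behaves predictably
assert r.value == 42
```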


> We're growing fast

Are you growing faster than 5% per week? If you are, that'd be fast.
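For scale, a quick back-of-the-envelope in Python shows what that rate compounds to over a year:

```python
# 5% weekly growth, compounded over 52 weeks, is roughly 12.6x a year.
annual = 1.05 ** 52
print(round(annual, 1))  # 12.6
```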


I wonder if it would have been simpler for Uber to use RethinkDB instead of this complicated scheme they set up.


In the future please remove the words "Berlin, New York, Paris, LA" from your post. They conflict with search for positions in specific cities.

Since the position is in London, please only use London in the post.


And, what is this particular skill set?


Well, at the time something in the list of "C++ (Boost, STL), Python, SQL (PostgreSQL, SQLite), git" caught his eye.

Since it was RethinkDB, I suspect the combination of C++ (which RethinkDB is written in), Python (one of the 4 languages for which there are/were "official" libraries), experience with 2 different DBMSs, and an interest in "a challenge as a backend engineer to further hone my C++ skills" were probably all factors.


Can a Reflect employee take 6 months vacation?


If we hired someone who wants 6 months off, we've made a mistake elsewhere.

