Hacker News | cdavid's comments

Well, it is both an easy way to compute in a dataframe context and a reactive programming paradigm. Combined, they give a powerful paradigm for throwing together data-driven UIs, albeit one that doesn't scale (in terms of maintenance, etc.).


One of the largest, if not the largest, Python codebases in the world implements similar ideas to model financial instrument pricing: https://calpaterson.com/bank-python.html.


The issue with executive mandates likely comes from the context of large corporations. They create fatigue, even though the underlying technology can be used very effectively to do "real work". It becomes hard for people to see where the tech is valuable (reducing the cost of testing ideas, accelerating a push into a new area, etc.) vs where it is just a BS generator.

Those are typical dysfunctions of larger companies w/ weak leadership. They are magnified by a few factors: AI is indistinguishable from magic for non-tech leadership, demos can be slapped together quickly but don't actually work w/o real investment (which is what leadership wanted to avoid in the first place), and ofc there is the promise of reduced costs.

This happens in parallel with people using it to do their own work in a more bottom-up manner. My anecdotal observation is that it is overused for low-value/high-visibility work. E.g. it replaces "writing as proof of work" by reducing the cost of writing the bland, low-information documents used by middle management, which increases bureaucratic load.


My observation is the latter, but I agree the results fall short of expectations. Business stakeholders often want last-minute changes in reporting, don't get what they want at the right time because of a lack of analysts, and hope that having "infinite speed" will solve the problem.

But ofc the real issue is that if your report metrics change at the last minute, you're unlikely to get a good report. That's a symptom of not thinking much about your metrics.

Also, reports / analysis generally take time because the underlying data are messy, lots of business knowledge is encoded "out of band", and the data infrastructure is poor. The smarter analytics leaders will use the AI push to invest in the foundations.


Given the context, I am assuming this is on the "behavioural" side of the IV (aka what most companies call culture fit). And I am assuming you are applying to "traditional" companies, that is, companies that have a defined hiring process and are large enough. This includes all the FAANGs and the like.

My advice:

  - write down the stories (use cases) before the actual IV
  - for each story, focus on what you learnt and what succeeded
  - for the really negative ones, focus on the learning
  - for the other ones, focus on the outcomes, mentioning things that worked, maybe some things that did not, and how you handled them
This is the part where you have to play the game and avoid being too transparent. Dwelling too much on the negatives will be seen as a red flag by most hiring managers and recruiters.


This article makes the naive assumption that "what is making money" is an objective thing.

Once a company reaches a certain size, basically once it has a financial planning department w/ different VPs owning their PnL, who is making money increasingly becomes a social construct. "How to get promoted" by spakhm is much closer to how a large, successful org works IMO: https://spakhm.substack.com/p/how-to-get-promoted

I've seen this multiple times in my career. For example, when I was involved in search at some companies, we would demonstrate through A/B testing that we would make X more money per month. Then the executive team changed, decided that "A/B testing does not work and slows us down", the definition of making money changed, and overnight we went from a money-making org to a cost center.

Nowadays, most companies are pushing GenAI everywhere. Most of those things don't make money, and yet a lot of promotions will be obtained across the board until the music stops.


Thank you. "Spearheading" is always valued and the way to do it is to leave for some new initiative somewhat before everyone realises the current one is a turd. The blame then attaches to the people left holding the turd.


But still, there are objective metrics by which you can track which products are making money and which are not. You can do all the financial manipulation you want to an extent, but everyone above the bragging VPs/Directors knows which product lines are making money yoy.


Say you are Meta. You know that a big stream of revenue is ads. You are an engineer working on one of the myriad ML models around feeds, ads click prediction, whatever. Those ML models are in production, and cost a lot of money to maintain / operate. How much of a cost center or a money maker you are will depend on a lot of non-objective choices.

The essential issue is attribution, which fundamentally requires making choices about how money is actually made. Even when everybody is acting in good faith, there are multiple reasonable ways to make those choices. And people are rarely in good faith around those things.


Ads are still the bread and butter for Meta. The first to go will be the folks who are in vogue because of cultural reasons, e.g. diversity, green initiatives, etc. I would even go to the extent of saying that a lot of open source maintainers will also be axed the day Meta stops making ad money and fights for survival. The issue of `attribution` is to be decided last, when the house is already partially burned and the company is fighting for survival. I am talking about the initial phase, where a lot of engineers work on teams that make no monetary/value sense for the company and are there because managers/CEOs don't care or are kind (mostly the former).


Sure, but things like DEI and OSS are a tiny minority in most companies. At least they were in the companies I've worked at.

You mentioned that attribution is to be decided when the house is burning, but that is certainly not my observation. Which department is responsible for which revenue is something senior leadership fights over all the time, whether times are good or not.


You have chosen a very, very poor example in calling out Meta Ads. I can assure you that the entire org has a massive magnifying glass over it, with insane amounts of data analysis; every % change in revenue is absolutely attributable to exactly which group brought it about, whether it be DC hardware, ML teams, or backend infra. It is the lifeblood of Meta, with many billions flowing through it; of course it isn't just being cowboy'd with random changes that have undefined impact on yoy revenue.


I am not saying mature companies are "YOLO-ing" this randomly, but that many assumptions are made about how the input metrics trickle back to revenue/profits, and those assumptions can change. The attribution exists, but how it is done is far from objective. E.g. how do you translate CTR into revenue? How do you value an additional user?
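
To make that concrete, here is a toy calculation (all numbers invented for illustration) in which two defensible attribution rules give very different answers for the same CTR lift:

```python
# Toy attribution example: every number here is made up for illustration.
clicks_lift = 1_000_000        # extra clicks/month credited to the ML model
rev_per_click_cents = 5        # assumed average revenue per click, in cents

# Rule 1: the ML team gets credit for all marginal click revenue.
ml_revenue = clicks_lift * rev_per_click_cents // 100   # in dollars

# Rule 2: finance splits the same revenue across infra, sales, and ML.
ml_share_pct = 30
ml_revenue_shared = ml_revenue * ml_share_pct // 100    # in dollars

# Same lift, two "reasonable" numbers for what the team "makes".
print(ml_revenue, ml_revenue_shared)  # 50000 15000
```

Both rules are internally consistent; which one finance adopts is exactly the kind of non-objective choice described above.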

This can also be seen with cost savings. There are numerous examples on HN of people wondering why reducing the cost of something by X million was not recognized (e.g. https://x.com/danluu/status/802971209176477696). Based on my own experience, the most likely explanation is that there was no item related to it in the financial planning, so there was nothing for the saving to be recognized against.


Hadn’t read “how to get promoted” before!

Damn that’s a good read and sadly rings true


It is at least in part because of how recruiting works, especially in large tech companies. Companies have many candidates for each role and can't easily review all of them. Between the bureaucracy, how busy the hiring manager is, etc., many candidates get lost in the process.

This is why it really helps to 1) have a referral and 2) tell the recruiter when you have competing offers. Neither will change much about whether you get the role, but they will really help with prioritization and keep you from getting lost.

Paradoxically, that effect is bigger nowadays, when the market is not as hot as it used to be, because recruiting teams are more stretched. Even some FAANGs are very understaffed on the recruiting side.


That's my main use case not yet supported by uv. It should not be too difficult to add a feature or wrapper to uv so that it works like pew/virtualenvwrapper.

E.g. calling that wrapper uvv, something like

  1. uvv new <venv-name> --python=... ...  # venvs stored in a central location
  2. uvv workon <venv-name>  # now you are in the virtualenv
  3. deactivate  # now you are out of the virtualenv
You could imagine additional features, such as keeping a log of the installed packages inside the venv so that you could revert to an arbitrary state, etc., as goodies, given how much faster uv is.
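
A minimal sketch of what such a wrapper could look like, as shell functions built on `uv venv` (the `uvv_*` names, the central `~/.uvv` location, and the POSIX `bin/activate` path are all my assumptions, not an existing tool):

```shell
# Hypothetical "uvv" sketch built on `uv venv`; central venvs live under
# ~/.uvv (overridable via UVV_HOME). POSIX shells only: this uses
# bin/activate, not the Windows Scripts/ layout.
UVV_HOME="${UVV_HOME:-$HOME/.uvv}"

uvv_new() {          # uvv_new <name> [extra uv args, e.g. --python 3.12]
    name="$1"; shift
    mkdir -p "$UVV_HOME"
    uv venv "$UVV_HOME/$name" "$@"
}

uvv_workon() {       # activate a venv by name; plain `deactivate` to leave
    . "$UVV_HOME/$1/bin/activate"
}

uvv_ls() {           # list centrally managed venvs
    ls -1 "$UVV_HOME" 2>/dev/null
}

uvv_rm() {           # delete a venv by name
    rm -rf "${UVV_HOME:?}/$1"
}
```

Because everything lives under one directory, list/remove-by-name falls out for free, which is most of what pew/virtualenvwrapper provide.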


So this is probably just me not understanding your use case, but surely this is a nearly identical workflow?

  1. uv init <folder-name>  # venv stored in folder-name/.venv
  2. cd <folder-name>  # running stuff with uv run will automatically pick up the venv
  3. cd ..  # now you are out of the virtualenv


Yes, that's what I do today.

The UX improvement would be centralized management of the venvs (a central location, the ability to list/rm/etc. by name instead of by path).


Negotiation is very much a thing, even at FAANG. If anything, it can help you land at the top of your assigned band. Other things can be negotiated too.

Source: I am no great negotiator, but I've always negotiated my salary and got 10-15% more than what I would have had w/o asking for anything. This compounds after a few stints. And I've been a manager in startup/mid-size/big tech: always negotiate.


There is also the signing bonus, which is usually available for a recruiter to sweeten things if the first offer is marginal. Google recruiters for years claimed it was non-negotiable, all the while negotiating it for people who were willing to play the game.


A lot of path dependency, but essentially

  1. A good python solution needs to support native extensions. Few other languages solve this well, especially across unix + windows.
  2. Python itself does not come with a package manager.
I am not sure solving 2 alone is enough, because it would then be hard to fix 1. And ofc 2 would need a solution for older Python versions.

My guess is that we're stuck in a local maximum for a while, with uv looking like a decent contender.


PHP and composer do. You can specify native extensions in the composer.json file, along with an optional version requirement, and install them using composer just fine. Dependencies can in turn depend on specific extensions, or just recommend them without mandating an installation. This works across UNIX and Windows, as far as I’m aware.
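
For illustration, a sketch of how that looks in a composer.json (the package names are just examples; `ext-yaml`/`ext-apcu` follow composer's `ext-*` platform-package convention):

```json
{
    "require": {
        "php": ">=8.1",
        "ext-yaml": "*",
        "symfony/yaml": "^7.0"
    },
    "suggest": {
        "ext-apcu": "Speeds up caching if available"
    }
}
```

Note that composer checks these platform requirements and fails fast with a clear message if an extension is missing; actually installing the extension (via PECL or the distro package manager) remains a separate step.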


> PHP and composer do.

Is that a new feature? I'm pretty sure it didn't a few years ago. If the thing I needed required the libfoo C library, then I first had to install libfoo on my computer using apt/brew/etc. If a new version of the PHP extension came out that used libfoo 2.0, it was up to me to update libfoo first. There was no way for composer to install and manage libfoo.


It does not seem so... Something as simple as "yaml" already requires reaching for apt-get: http://bd808.com/pecl-file_formats-yaml/

> Php-yaml can be installed using PHP's PECL package manager. This extension requires the LibYAML C library version 0.1.0 or higher to be installed.

    $ sudo apt-get install libyaml-dev
This is basically how "pip" works, and while it's fine for basic stuff, it gets pretty bad if you want to install a fancy numerical or cryptography package on an LTS Linux system that's at the end of its support period.

I am guessing that PHP might simply have less need for native packages, being more web-oriented.

