I just do not understand how Azure has the scale it does. You only need to log in and click around for a bit to see this is not a coherent system designed by competent people, let alone actually try to build something on it.
From my old experience in IT, people just default to Microsoft for everything. They don't want the hassle of learning anything else and assume better the devil you know. Glad I'm out of that world, but it's wild what people will put up with.
People and organizations that built things on top of Microsoft tech. Especially with a long history going back to NT times.
HN, YC, the startup environment, and academia are a Unix bubble. They all feed into each other, especially because Linux is gratis, which helped all of them deploy projects/products/papers cheaply. Unix systems traditionally lack much of the upper layers, so it is the responsibility of the company, the people, the developers to deal with OS minutiae. You need sysadmins, devops, SREs. Those are common roles in this Unix bubble. The dependency chains here are usually flatter, which keeps mid-term costs lower.
Other organizations, like governments and bigger orgs such as banks, prioritize having somebody else liable (i.e. somebody they can blame) and prefer not to hire technical competence in-house but to rely on other companies. This is where Microsoft gets a lot of clients. You buy a bunch of server licenses. Your Microsoft support person installs them and sets up IIS via the GUI. Then you just upload your code every now and then. The OS updates, the IIS server, etc. are all the responsibility of Microsoft and the middleman companies. Minimal competence from the original org is required. There are multiple middleman businesses who all give zero fucks about anything but whatever is immediately downstream of them. This is more usual in huge, already publicly traded businesses. Moreover, the investors actually mandate certain things that only these layers of irresponsibility can deliver :) So you see this kind of switch happening towards IPOs.
Azure is Microsoft slapping the cloud label on the first paradigm and forcing it over the second for their products. It got lots of support because shareholders liked it. I don't think the original NT design and Microsoft's business model were bad; they actually worked very well. However, shareholders gonna shareholder. So they pushed hard for Microsoft and their clients to move to the "cloud". Microsoft executives saw the huge profit and share-value potential of pushing Azure the brand too. It was the AI of the 2010s, after all.
If you put me in front of AWS I'd have the same reaction. Or GCP for that matter, where I did have your reaction.
It's familiarity and knowing how the beast operates. I know how to read the docs and understand the licensing.
Any one piece of software could be a pile of shit with a terrible UX, but you're going to find those who are so familiar with it that everything else looks alien.
Because for some it works. At least I haven't yet heard the stories I see here at my workplace. I also use some Azure, and apart from some weird UI bugs I've never had really big issues.
Let's say I am a junior SWE in the EU. I incorporate in Estonia and issue my employer an invoice from said company. That company pays for my house, my car, my dental care and whatnot, and what's left I take as an employee salary.
I pay local tax for that salary, but that's only a fraction of what I've billed my employer.
There's also the CFC rule, which means that within the EU, if you control a foreign corporation, your country of tax residence can tax undistributed profits.
Often tax offices don't bother and you might not get caught, but 'not getting caught' is not the same as it being legit.
> that company pays for my house, my car, my dental service and whatnot, and what's left I take as a employee salary ... I pay local tax for that salary
Until you get audited by your local tax authority who rules that all of that is disguised salary, or the Estonian tax authority says that that's technically (taxable) profit being paid to the director.
If you're currently doing this, I suggest throwing yourself at the mercy of your local tax authority with the help of a lawyer and an accountant, as it's possible they'll show some leniency if you go to them first and not add penalty fines in addition to needing to back-pay the tax and late payment fines.
You've got to get up pretty early in the morning to fool The Revenue.
That would not be possible in Germany; everything you just listed is considered a "geldwerter Vorteil" (benefit in kind) and falls under income tax. Even if the company gives you a car that you need to do your job, you will have to pay taxes on it.
To me it feels disgraceful to live in a country, benefit from the taxes that everyone else pays there, and try to avoid paying the taxes yourself. It is true that the ultra-wealthy do this. What we should do is try to make them pay their due taxes as well, not try to imitate them. That path leads to an impoverished country where you have to live in a gated community with armed guards.
Totally agree! My goal also wasn't to avoid paying taxes. I just would like to have a fully digital and less bureaucratic system for it. But well, I guess I have to wait until Germany finally gets that.
Pretty much every country has a concept of benefit in kind. Some countries will allow some expenses to be covered (part of your phone/internet subscription if you work from home, meal vouchers; some countries have codified WFH arrangements), but you absolutely won't be able to pay for everything tax-free.
You'd be far better-off jumping between countries to leverage the 30% ruling/Beckham Law/HSM tax arrangements if you can.
With Claude Code I created an agent that spawns 5 copies of itself, each in its own git worktree branched from the main branch, using subagents so no context leaks into their instructions. Every 60 seconds the agent analyzes the performance of each copy (each runs for about 40 minutes), asking "what would you do differently?". After they finish the task, the parent updates the .claude/ files: reverting if the copies performed worse, keeping the changes if they performed better. Then it creates 5 copies of itself branching git worktrees from the main branch ..........
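The keep-or-revert loop described above is essentially a noisy hill climb over the agent's own instructions. A minimal sketch of just that loop, with everything else stubbed out (the `quality` scores stand in for however you'd judge a copy; the real version would branch git worktrees and run subagents instead):

```python
import random

def spawn_copies(config, n=5):
    # Stand-in for branching n git worktrees and running subagents:
    # each "copy" gets a noisy score that tracks the config's quality.
    return [config["quality"] + random.uniform(-0.1, 0.1) for _ in range(n)]

def self_improve(config, rounds=10):
    best = sum(spawn_copies(config)) / 5          # baseline performance
    for _ in range(rounds):
        # "what would you do differently?" -> a tweaked candidate config
        candidate = {"quality": config["quality"] + random.uniform(-0.05, 0.05)}
        score = sum(spawn_copies(candidate)) / 5
        if score > best:                          # copies did better: keep it
            config, best = candidate, score
        # otherwise revert, i.e. keep the previous config
    return config

print(self_improve({"quality": 0.5}))
```

The interesting (and risky) part in the real setup is the scoring: with a noisy judge, the loop will happily "improve" on noise, which is why reverting on worse performance matters.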
After 43 iterations, it can turn any website using any transport (WebSocket, GraphQL, gRPC-Web, SSE, JSON API (XHR), Encoded API (base64, protobuf, msgpack, binary), Embedded JSON, SSR, HLS/Media, Hybrid) into a typed JSON API in about 10 - 30 minutes.
Next I'm going to set it loose on a 263 GB database of every stock quote and options trade in the past 4 years. I bet it achieves successful trading strategies.
> Next I'm going to set it loose on a 263 GB database of every stock quote and options trade in the past 4 years. I bet it achieves successful trading strategies.
I bet it doesn't achieve a single successful (long term) trading strategy for FUTURE trades. Easy to derive a successful trading strategy on historical data, but so naive to think that such a strategy will continue to be successful in the long term into the future.
If you do, come back to me and I'll give you one million USD to use it - I kid you not. The only condition is that your successful future trading strategy must be based solely on historical data.
Let us perform a thought experiment. You do this. Many others, enthusiastic about both LLMs, and stocks/options, have similar ideas. Do these trading strategies interfere with each other? Does this group of people leveraging Claude for trading end up doing better in the market than those not? What are your benchmarks for success, say, a year into it? Do you have a specific edge in mind which you can leverage, that others cannot?
People used to laugh at quant strategies the same way; I wouldn't count it out so quickly. One of my friends is already turning meaningful profits with agent-driven trading (though he has some experience in trading to begin with).
Classic AI psychosis, you can do it with a single prompt, etc. etc.
If you find such a db with options, it will find "successful trading strategies". It will employ overnight gapping, momentum fades, it will try various option deltas likely to work. Maybe it will find something that reduces overall volatility compared to beta, and you can leverage it to your heart's content.
Unfortunately, it won't find anything new. More unfortunately, you probably need 6-10 years of data and a walk-forward test to see if the overall method is trustworthy.
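Walk-forward here means fitting on a trailing window and evaluating only on the following out-of-sample window, then rolling forward, so the strategy never sees the data it's judged on. A minimal sketch (the trading rule and the return series are placeholders, not a real strategy):

```python
# Minimal walk-forward sketch: fit on a trailing window, evaluate on the
# next out-of-sample window, roll forward, and collect per-window P&L.
def walk_forward(returns, train_size, test_size):
    results = []
    start = 0
    while start + train_size + test_size <= len(returns):
        train = returns[start:start + train_size]
        test = returns[start + train_size:start + train_size + test_size]
        go_long = sum(train) / len(train) > 0          # toy "fitted" rule
        pnl = sum(r if go_long else -r for r in test)  # out-of-sample P&L
        results.append(pnl)
        start += test_size                             # roll forward
    return results

# Toy daily-return series: a rule that looks fine in-sample can still lose
# out-of-sample; only the rolled-forward P&L sequence tells you that.
series = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02, -0.005, 0.01, -0.03, 0.004]
print(walk_forward(series, train_size=4, test_size=2))
```

With years of data you'd do the same thing over months-long windows; a method whose out-of-sample P&L holds up across many rolls is the thing worth trusting, not the in-sample fit.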
> Next I'm going to set it loose on a 263 GB database of every stock quote and options trade in the past 4 years.
Options quotes alone for US equities (or things that trade as such, like ADSs/ADRs) represent 40 Gbit per second during options trading hours. There are more than 60 million trades (not quotes, only trades) per day. As the stock market is open approximately 250 days per year (a bit more), that's more than 60 billion actual options trades in 4 years. If we're talking about quotes for options, you can add several orders of magnitude to these numbers.
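For scale, the arithmetic above can be checked in a couple of lines (the per-day trade count is the comment's own estimate):

```python
# Back-of-the-envelope check of the trade counts above.
trades_per_day = 60_000_000   # >60M options trades per trading day
trading_days_per_year = 250   # approximate, per the comment
years = 4

total_trades = trades_per_day * trading_days_per_year * years
print(f"{total_trades:,}")    # prints 60,000,000,000 -> "more than 60 billion"

# Even at a single byte per trade that's ~60 GB before any quote data,
# and real trade records are tens of bytes each - so 263 GB cannot
# possibly hold every quote and every trade.
```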
And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?
I see, I said "stock quote" when I meant "minute aggregates". You are correct that the full data set is much larger; at ~1.5 TB a year [0], I did not download 6 TB of data onto my laptop. Every settled trade, options or stocks, isn't that big.
I have the Claude Code Max $200-a-month plan. I ran it aggressively for 4 days, 16 hours a day, and burned through 80% of my weekly Opus 4.6 allotment. Today and tomorrow I will wait until 5pm PST because they have a 50% special to run with the remaining tokens.
The problem was testing it against 5 websites at a time after every change to the instructions to ensure there weren't any regressions. The orchestrator agent tracks all token expenditure and updates its own instructions to optimize.
I use TimescaleDB, which is fast with compression. People say there are better options, but I don't think I could fit another year of data on my disk drive either way.
I don't understand your question? Are you saying the source of the data I linked to is corrupt or lies? Should I be concerned they are selling me false data?
I think the name "massive" combined with the direct link to the docs is a bit misleading; it's not at all obvious from where you land w/ that link that they are selling the actual data. (It kind of sounds like they're selling software that helps you deal with massive data in general, which, no.)
I might be regressing at communicating with other humans after using natural language in prompts 10 hours a day, 10 days straight. My spelling is improving; however, I need to focus more on context with humans.
you can have it build an execution engine that interfaces with any broker with minimal effort.
how do you have it build a "trading strategy"? it's like asking it to draw you the "best picture".
it will ask you so many questions you end up building the thing yourself.
if you do get something, given that you didn't write it and might not understand how to interpret the data its using - how will you know whether it's trading alpha or trading risk?
I couldn't care less about scraping and web automation, and I will likely never use that application.
I am interested in solving a certain class of problems and getting Claude to build a proxy API for any website is very similar to getting Claude to find alpha. That loop starts with Claude finding academic research, recreating it, doing statistical analysis, refining, the agent updating itself, and iterate.
Claude building a proxy JSON API for any website and building trading strategies are the same problem with the same class of bugs.
The bigger question is: does Anthropic have a big enough moat to matter?
I've used/use both, and find them pretty comparable, as far as the actual model backing the tool. That wasn't the case 9 months ago, but the world changes quickly.
I don’t believe there will ever be a real moat in terms of technology, at least not for the next year or so. The arms race between the major players is still changing month to month, and they will all be able to do what their competitors were doing three months ago.
None of them are particularly sticky - you can move between them with relative ease in vscode for instance.
I think the only moat is going to be based on capacity, but even that isn't going to last long as the products move away from the cloud and closer to your end devices.
It matters to me. Claude Code is more extensible. They've put a lot of effort into hooks and plugins. Codex may get the job done today, but Claude will evolve faster.
None of that matters if the model is worse. I say this as someone who uses both Claude Code and Codex all day every day — I agree with others in this thread that CC has much better UX and evolves faster, but I still use Codex more often because it's simply the better coder. Everything else is a distant second to model quality.
What kind of tasks are you having success with on Codex? I've had the opposite experience. I'll occasionally compare solutions between the latest Opus and Codex, with Codex on x-high thinking. Sometimes I do get a solution from Codex that is impressive because it discovered an edge case that Claude missed.
I did notice that Codex, like Claude, is now better about auto-delegating to agents to keep the context focused and running agents in parallel.
The Claude desktop app is way worse than the Codex desktop app
Even the AI itself is goofy. So many false positives during reviews immediately backtracked with "You're right, I'm sorry" in the next response.
It seems like there's either a paid pro-Anthropic PR campaign on HN, because the comments fawning over it don't match my experience with Claude at all, or I keep getting the worse end of the A/B testing stick.
I used to hate Golang for not having generics and how verbose getting basic things done was. Then I read posts like this and realise, my god, Rob Pike was so, so right.
Do these people ever ship anything? Or is it just endless rearranging of deckchairs?
The most annoying part is that you can't just go to the source code or docs and understand some code. I still can't do it after spending many years using it. You have to wade through 7 layers of macros and traits to understand some basic thing most of the time.
It is easier to understand the musl libc code than an HTTP library in Rust, which is just insane to me.
If they're smart they'll have multiple accounts with bets each way netting to ~0 on all kinds of topics, and just omit the losing trade on the information they actually have.
I visited Fiji and stayed in the "locals" area rather than in one of the tourist resorts. Everywhere I went, I would get stopped by locals and asked how my day was going, where I was going, what I was up to.
Shamefully my tourist-shields were at maximum after experiences in Morocco/Ethiopia and similar, and many people I ignored and kept walking as fast as I could.
Eventually I found myself in a conversation I couldn't easily escape from and I realised... they're just being friendly. They were all just being friendly. I spoke to dozens afterwards and had nice little chats, with no motives, no scams, no sales, no brothers-uncle's shop that I must visit.
(I did get scammed in the taxi though, by someone who didn't make conversation :) )
Solar installer costs are broadly comparable, as Australians are better qualified, and even if they weren't comparable, that fraction of the cost isn't enough to explain the total difference.
There are various studies comparing the two countries; Tesla did one and identified various technical-approach changes and permitting reforms. It suggests labor is 7% of the cost in the US, while soft costs around acquisition, sales, and marketing can be 18%.
Who are the customers? Who is buying this shit?