Hacker News | danielvaughn's comments

I also hate these sharp edges. After a long working session I have deep grooves in my wrists, and my skin is red with irritation. It's uncomfortable enough that it distracts me from work. It's the very antithesis of good design.

Disagree with the overall argument. Human effort is still a moat. I've been spending the past couple of months creating a codebase that is almost entirely AI-generated. I've gotten way further than I would have otherwise at this pace, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.

There's some truth in there that judgement is as important as ever, though I'm not sure I'd call it taste. I'm finding that you have to have an extremely clear product vision, along with an extremely clear language used to describe that product, for AI to be used effectively. Know your terms, know how you want your features to be split up into modules, know what you want the interfaces of those modules to be.

Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.


I feel like you're pretty strongly agreeing that taste is important: "I'm finding that you have to have an extremely clear product vision..."

A clear product vision -- that you're building the right thing in the right way -- involves a lot of taste to get right. Good PMs have this. Good engineers have this. Visionary leaders have this....

The execution of using AI to generate the code and other artifacts is a matter of skill. But that skill is worth little without the taste to know that you're building the right thing, with the right features, in a revolutionary way that will be delightful to use....

I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best. The founders don't see it yet. And like the article says, they're just setting themselves up for mediocrity. I think any really good PM would be able to improve all these apps I looked at almost immediately.


The way I understood it, the original article is saying the _only_ remaining differentiator is taste and the comment you replied to is saying "wrong, there are also other things, such as effort".

I don't necessarily interpret the comment you replied to as saying that "taste is not important", which seems like what you are replying to, just that it's not the only remaining thing.

I agree that taste gets you far. And I agree with all the examples of good taste that you brought up.

But even with impeccable taste, you still need to learn, try things, have ideas, change your mind, etc. Putting all of that in the bucket of "taste" is stretching it.

However, having good taste while putting in the effort gets you further than effort alone. In fact, effort alone gets you nowhere, and taste alone gets you nowhere. Only when you marry the two do you get somewhere.


Aren’t you just making their point stronger? Effort is what is being replaced here, with some taste and a pile of AI (formerly effort) you can go to the moon.

But you still need effort; it's not only taste. "Only" would mean you can do it with no effort.

In other words, it requires a tremendous amount of effort to fully communicate your tastes to the AI. Not everybody wants to expend the time or mental effort doing this! (Once we have more direct brain/computer interfaces, this effort will go down, but I expect it will not be eliminated fully)

This is the second time in two days I've seen a subthread here with folks seemingly debating whether or not defining and communicating requirements counts as work if the target of those requirements is an LLM system.

I'm confused as to why this is even a question. We used to call this "systems analysis" and it was like... a whole-ass career. LLMs seem to be remarkably capable of using the output, but they're not even close to the first software systems sold as being able to take requirements and turn them into working code (for various definitions of "requirements" and "working").

I'm also skeptical that direct brain interfaces would make this any less work; I don't think "typing" or "English" are the major barriers here, any more than "drafting" is the major barrier to folks designing their own cars and houses... Any fool thinks they know what they need!


Thinking might even be more difficult: Unfiltered thoughts, intrusive thoughts, people with no inner voice to encode as text...

At some point, just an idea will be enough for your Neurolink to spawn an agent to create 1000 different versions of your idea along with things that mimic your tendencies. There will be no effort, only choice.

As both a software engineer and a creative, I absolutely do not want 1,000 versions of what I am trying to make generated for me. I don't care if it's free or even cheap. I want to make things.

I know this is a concept deeply alien to a lot of HN's userbase but I did not get into programming or making art to have finished products; that's a necessary function that is lovely when it's reached, but ultimately, I derive my enjoyment from The Process. The process of finding a problem a user has, and solving it.

And yes I'm sure Claude could do it faster than me (and only at the cost of a few acres of rainforest!) but again, you're missing the point. I enjoy the work. That is not a downside to me.


Could I even remember 1000 versions of a thing and still distinctly know which one is which?

Deciding between 1000 different versions is a lot of effort IMO. With manual coding, you're mostly deciding one decision point at a time, which is easier when you think about it. It just requires foresight, which comes from experience.

That deciding between 1000 things is a lot of effort is so clear that I have to wonder if the person you're responding to was being ironic.

> Effort is what is being replaced here

Not really. The effort required to produce the same result has declined, but it has been on the decline for many decades already. That is nothing new. Of course, in the real world, nobody wants the same result over and over, so expectations will always expand to consume all of your available effort.

If there is some future where your effort has been replaced, it won't be AI that we're talking about.


Effort is still (and probably will always be) the hardest thing to replace.

Any time someone says AI can do this, and do that, and blah blah. I say ok, take the AI and go do that.. the barrier to entry is so low you should be able to do whatever you want. And they say, oh, no, I don't want to do that (or can't, or whatever). But it should be able to be done.. And I just nod, and sip my drink, and ...

.. and I'd like to point out these are seasoned professionals that I've seen put effort into other things in their careers, who have the capacity to literally do whatever it is they want to do, especially now.. and they choose not to, at least not without someone guaranteeing them a paycheck or telling them they have to do it to survive.


"I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best."

Are you doing this altruistically for friends - or as a consultant?


Both: a) to help a friend out, and b) to help non-technical founders I've met at some Meetups/AI events launch their product. My short-term goal is to put together a checklist/cheatsheet of all the technical things someone needs to do to launch a business, because it's not just having a webapp running on Vercel with Supabase. And if they do have an app, whether it's a complete mess or not.

I think the solo-founder hype is overplayed unless the person has the right skills, has even worked at a tech company, and knows what they're getting into. Alerting and monitoring, for example, is one of like 30 things they should be aware of.


> Disagree with the overall argument.

It's leaning in a good direction, but the author clearly lacks the language and understanding to articulate the actual problem, or a solution. They simply don't know what they don't know.

> Human effort is still a moat.

Also slightly off the mark. If I sat you down with all the equipment and supplies to make a pair of pants, the majority of you (by a massive margin) would produce a terrible pair of pants.

That's not due to lack of effort, but lack of skill.

> judgement is as important as ever,

Not just important: critical. And it is a product of skill and experience.

Usability (a word too often unused), cost, and utility are all things people want in a product. Reliability is a requirement: to quote The Social Network, "we don't crash". And if you want to keep pace, maintainability.

> issue devs would run into before AI - the codebase becomes an incoherent mess

The big ball of mud (https://www.laputan.org/mud/) is 27 years old and still applies. But all codebases tend to acquire cruft (from edge cases) that doesn't have good inline explanations and lacks durable artifacts. Find me an old codebase and I bet you we can find a comment referencing a bug number in a system that no longer exists.

We might, as an industry, need to be honest that we also have to become better librarians and archivists.
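On the bug-number bet above: a few lines of Python can flag comments referencing ticket-style IDs from systems that may no longer exist. This is a hedged sketch; the tracker prefixes (BUG, JIRA, TICKET) are made up and would need adjusting for a real codebase:

```python
import re

# Match a comment (after a '#') containing a tracker-style ID like BUG-4521.
# The prefixes here are hypothetical examples, not a real convention.
TICKET = re.compile(r"#.*\b(?:BUG|JIRA|TICKET)-\d+\b")

def find_ticket_comments(lines):
    """Return (line_number, text) pairs for comments referencing tracker IDs."""
    return [(i, line.strip()) for i, line in enumerate(lines, 1) if TICKET.search(line)]
```

Run over an old codebase, the hits are exactly the durable-artifact problem: comments whose only explanation lives in a dead system.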

That having been said, the article should get credit, it is at least trying to start to have the conversations that we should be having and are not.


It doesn’t really matter how good your taste is if you are drowning in the ocean of crap.

Customers can’t find you


This is an underrated comment. You could have the best product out there, but AI has not only lowered the effort for competitors, it has also flooded the traditional ways to get your product known, from outbound sales to content marketing. It sometimes makes you question whether there are any customers anymore.

You make a really salient point about having a clear vision and using clear language. Patrick Zgambo says that working with AI is spellcasting; you just need to know the magic words. The more I work with AI tools, the more I agree.

Now, figuring out those words? That's the hard part.


> Now, figuring out those words? That's the hard part.

To be clear, this is the hard part for comp sci majors who can't parse other disciplines. Language isn't a black box for everyone.


Jensen Huang said he commands thousands of AGIs but still feels pretty useful.

Founders and CEOs are still needed to set direction, bring a unique vision to life, and build relationships for long-term partnerships -- as long as humans still control the economy, that is.


> Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.

We have a term for this and it is called "Comprehension Debt" [0] [1].

[0] https://arxiv.org/abs/2512.08942

[1] https://medium.com/@addyosmani/comprehension-debt-the-hidden...


I'm not sure I agree the term applies. Comprehension debt, as I understand it, is just the dependency trap mentioned in that arxiv paper you linked. It means that the AI might have written something coherent or not, but you as a human evaluator have little means to judge it, because you've relied on it too much and the scope of the code has exceeded the feasibility of reading it manually.

When I talk about an incoherent mess, I'm talking about something different. I mean that as the codebase grows and matures, subtle details and assumptions naturally shift. But the AI isn't always cleaning up the code that expressed those prior assumptions. These issues compound to the point that the AI itself gets very confused. This is especially dangerous for teams of developers touching the same codebase.

I can't share too much detail here, but some personal experience I ran into recently: we had feature ABC in our platform. Eventually another developer came in, disagreed with the implementation, and combined some aspects of it into a new feature XYZ. Both were AI generated. What _should_ have happened is that feature ABC was deleted from the code or refactored into XYZ. But it wasn't, so now the codebase has two nearly identical modules ABC and XYZ. If you ask Claude to edit the feature, you've got a 50/50 shot on which one it chooses to target, even though feature ABC is now dead, unreachable code.

You might say that resolving the above issue is easy, but these inconsistencies become quite numerous and unsustainable in a codebase if you lean on AI too much, or aren't careful. This is why I say that having a super clear vision up front is important, because it reduces this kind of directional churn.
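A mechanical first pass can at least surface orphaned modules like the ABC one. This is a hypothetical sketch (not something from the actual codebase) using Python's stdlib `ast` to list files that nothing else imports:

```python
import ast
import os

def find_unimported_modules(root):
    """Report .py modules under `root` that no other file imports (dead-code candidates)."""
    sources = {}      # module name -> file path
    imported = set()  # every top-level name imported anywhere
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            sources[name[:-3]] = path
            try:
                with open(path, encoding="utf-8") as f:
                    tree = ast.parse(f.read(), filename=path)
            except SyntaxError:
                continue  # skip unparseable files in this rough pass
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    imported.update(a.name.split(".")[0] for a in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    imported.add(node.module.split(".")[0])
    # Modules nobody imports may be dead, like feature ABC above.
    return sorted(m for m in sources if m not in imported and m != "__init__")
```

It's crude (entry points and dynamic imports show up as false positives), but it turns "which module is actually dead?" from a guess into a short review list.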


> This is why I say that having a super clear vision up front is important, because it reduces this kind of directional churn.

I'm on my 6th or 7th draft of a project. I've been picking away at this thing since the end of January; I keep restarting because the core abstractions get clearer and clearer as I go. AI has been great in this discovery process because it speeds up iteration so much. I know it's starting to drift into a mess when I no longer have a clear grasp of the work it's doing. To me, that indicates that some mental model I had and communicated was not sufficiently precise.


Yep, for sure. Restarting is the right choice IMO, it's way easier than trying to untangle from a previous iteration.

> I've gotten way further than I would have otherwise at this pace, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.

By the time I'm done learning about the structure of the code that AI wrote, and reviewing it for correctness and completeness, it seems to be as much effort as if I had just written it myself. And I fear that will continue to be the reality until AIs can be trusted.


Well, that is not how anyone is doing agentic coding, though. That sounds like just a worse version of traditional coding. Most people are building test suites to verify correctness and not caring about the code.

Test suites don't verify correctness. They just ensure you haven't broken something so badly that the specific cases the tests assert turn into failures. You can have a factorial function where the tests most likely only check a few numbers. That doesn't guarantee correctness: anyone who knows the test cases can just add a switch and return the correct response for those specific inputs.

The compromise is worth it in traditional coding because someone will care about the implementation. The test cases are more like the canary in the coal mine: a failure warrants investigation, but an all-green run is not a guarantee of success.
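The factorial point can be made concrete. Here is a deliberately "cheating" implementation that goes green on its tiny test suite while being wrong for every untested input:

```python
# A factorial that special-cases exactly the inputs the tests check.
def factorial(n):
    known = {0: 1, 1: 1, 5: 120}  # hardcoded answers for the known test cases
    if n in known:
        return known[n]
    return 0  # wrong for every other input

# The "green" test suite: all assertions pass...
assert factorial(0) == 1
assert factorial(1) == 1
assert factorial(5) == 120

# ...yet the function is incorrect elsewhere: 4! is 24, but this returns 0.
assert factorial(4) == 0
```

No agent would write something this pathological on purpose, but subtler versions of the same shape (logic that happens to satisfy the asserted cases) are exactly what an all-green suite can't rule out.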


Regardless, that is why most people are moving faster than they would writing by hand.

Moving faster towards what? The biggest diff? The most tickets closed? Or working software?

Working software/new features.

I think you're missing the point. Effort is a moat now because centaurs (human+AI) still beat AIs, but that gap gets smaller every year (and will ostensibly be closed).

The goal is to replicate human labor, and they're closing that gap. Once they do (maybe decades, but probably will happen), then only that "special something" will remain. Taste, vision... We shall all become Rick Rubins.

Until 2045, when they ship RubinGPT


> but that gap gets smaller every year (and will ostensibly be closed)

As long as you build software for humans (and all software we build is for humans, ultimately), you'll need humans at the helm to steer the ship towards a human-friendly solution.


The thing is, do humans _need_ most software? The fewer surfaces that need to interact with humans, the less you need humans in the loop to design those surfaces.

In a hypothetical world where AI agents or assistants do the vast majority of random tasks for you, does it matter how pleasing the DoorDash website looks to you? If anything, it should look "good" to an AI agent so that it's easier to navigate. And maybe "looking good" just amounts to exposing a public API to do various things.

UIs are wrappers around APIs. Agents only need to use APIs.


> And maybe "looking good" just amounts to exposing some public API to do various things.

Maybe, but you still need humans to make that call. The software is still built for humans no matter how much indirection you add.

There is a conceivable day where that is no longer true, but when you have reached that point it is no longer AI.


> do humans _need_ most software?

Yes, if it's not redundant software. The ultimate utility is to a human. Sure, at some point humans stopped writing assembly language and employed a compiler instead, so the abstraction level and interfaces change, but it's all still there to serve humans.

To use your example, do you think humans will want to interact with AI agents using a chat interface only? For most tasks humans use computers for today, that would be very unwieldy. So the UI will migrate from the website to the AI agent interface. It all transforms, becoming more powerful (hopefully!), but it won't go away. And just as the advent of compilers led to an increase in the number of programmers in the world, so will AI agents. This is connected to Jevons paradox as well.


Yeah. The UI will still be there, but it'll be a guy.

A little guy who lives in your phone and who's really good at APIs. (And, by that point, hopefully good at keeping track of things, too!)


I imagine that the gap with current work can largely be closed, but are we really confident that this will hold with the new work that pops up? Increasingly I think we’re lacking imagination as to what work can be in a post AI world. I.e. could an abacus wielder imagine all the post computer jobs?

Do you need taste if you can massively parallel A/B test your way to something that is tasteful? Say you take your datacenter of geniuses and have a Rubin-loop supervising and testing different directions. Shouldn't that be close enough?

"Taste" here is an intractable problem. Just look at how architecture has varied throughout the history of mankind -- building materials, assembly, shape, flow -- all of it boils down to taste. Some of it can be reduced to 'efficiency', like the three-point system for designing kitchens, but even that is a matter of taste.

Find three professional chefs and they will give you three distinct visions for how a kitchen should be organized.

The same goes for any professional field, including software engineering.


Can infinite monkeys produce Shakespeare?

It was the best of times. It was the blurst of times.

That approach leads you to products like Instagram.

Isn't this a temporary situation, though?

Today: Ask AI to "do the thing", manual review because don't trust the AI

Tomorrow: Ask AI to "do the thing"

I'm just getting started on my AI journey. It didn't take long before I upgraded from the $17-a-month Claude plan to the $100-a-month plan, and I can see myself picking up the $200-a-month plan soon. This is for hobby projects.

At the moment I'm reviewing most of the code for what I'm working on, and I have tests and review those too. But, seeing how good it is (sometimes), I can imagine a future where the AI itself has both the tech chops and the taste, and I can just say "Make me an app to edit photos" and it will spit out a user-friendly clone of Photoshop with good UX.

We already kind of see this with music: it's able to spit out "bangers". How long until it can spit out hit rom-coms, crime shows, recipes, apps? I don't think the answer is "never". I think the answer is more likely N years, where N is probably a single digit.


No, I don't think it is temporary. As AI becomes more powerful, we'll simply ask it to do more difficult things. There's a level of complexity where "do the thing" is insufficient. We'll never be at a place where AI can infer vast amounts of nuance from simple human requests, which means that humans will always need to be able to describe precisely what they want. This has always been the core skill for software developers, and I just don't see that changing.

Do you believe a junior developer now will never surpass you?

Why couldn’t AI do the same?


It's not a matter of whether it surpasses me. In some respects it already has - I watch Claude Code spitting out long terminal commands that I've never even seen in my 15 year career.

The question is whether AI will ever become good enough to magically infer information where none is provided.

For instance, I've had this startup idea for an itemized physical storage company. We'll never reach a point where I can simply say "Hey AI, create all the software necessary for an itemized physical storage company". It's not because AI won't continue to improve, it's because there's literally not enough detail in that statement to understand what I mean. It's too vague. I'm sure the AI of tomorrow could do a pretty good job in guessing what I mean by it, but the chance of it capturing my vision is literally 0%.


It might have a better vision than you and pursue that vision instead. Why should the AI wait for your impetus when countless founders and CEOs didn’t?

AI has no intrinsic way to align its efforts to solve human problems. In order to solve that problem, you'd need an enormous amount of nearly real-time data feeding into the model. Then the model would need to routinely look for patterns and identify ways to improve human life in some way. It would make today's models look tiny by comparison.

What we're building today isn't even remotely close to that.


> We already kind of see this with music - it's able to spit out "Bangers"

“Bangers” being roughly equivalent to garbage mass marketed radio pop? Or “We are Charlie Kirk” lol


> ... for AI to be used effectively.

I'm continually fascinated by the huge differences in individual ability to produce successful results with AI. I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well, and have some irrational belief that they can get AI to brute force their way to a solution.

For me I don't even use the more powerful models (just Sonnet 4.6) and have yet to have a project not come out fairly successful in a short period of time. This includes graded live coding examples for interviews, so there is at least some objective measurement that these are functional.

Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.


"AI" tools I've got at work (and am mandated to use, complete with usage tracking) aren't a wide-open field of options like what someone experimenting on their own time might have, so I'm stuck with whatever they give me. The projects are brown-field, integrate with obscure industry-specific systems, are heavy with access-control blockers, are already in-flight with near-term feature completion expectations that leave little time for going back and filling in the stuff LLMs need to operate well (extensive test suites, say), and must not wreck the various databases they need to interact with, most of which exist only as a production instance.

I'm sure I could hack together some simple SaaS products, with goals and features I'm defining myself, in a weekend with these tools all on my own (no communication/coordination overhead, too!). I mean, for an awful lot of potential products I could have done that with just Rails and some gems and no LLM any time I liked over the last 15+ years, but now I could do it in TypeScript or Rust or Go etc. with LLMs, for whatever that's worth. At work, with totally different constraints, the results are far less dramatic, and I can't even feasibly attempt some of the (reputedly) most productive patterns of working with these things.

Meanwhile, LLMs are making all the code-adjacent stuff like slide decks, diagrams, and ticket trackers, incredibly spammy.

[EDIT] Actually, I think the question "why didn't Rails' extreme productivity boost in greenfield tiny-team or solo projects translate into vastly-more-productive development across all sectors where it might have been relevant, and how will LLMs do significantly better than that?" is one I'd like to see, say, a panel of learned LLM boosters address. Not in a shitty troll sort of way, I mean their exploration of why it might play out differently would actually be interesting to me.


> The projects are brown-field, integrate with obscure industry-specific systems, are heavy with access-control blockers

These are cases where I've seen agentic solutions perform the best. My most successful and high impact projects have been at work, getting multiple "obscure industry-specific systems" talking to each other in ways that unblocks an incredible amount of project work.


> I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well

I've been through a handful of "anyone can do this" epiphanies since the 90s and have come to realize the full statement should be "anyone can do this if they care about the problem space".


If every project you have tackled has come out successful, then you are managing to never tackle a problem that is secretly literally impossible, which is a property of whatever prefilter you are applying to potential problems. Given that your prefilter has no false positives, the main bit of missing information is how many false negatives it has.

> Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.

This feels a bit like a strawman. How do you assess it to be bad software without being an engineer yourself? What constitutes successful for you?

If anything, AI tools have revealed that a lot of people have hubris about building software, with non-engineers believing they're creating successful work without realizing it's a facade of a solution and a ticking time bomb.


> without being an engineer yourself?

When did I say I'm not a software engineer? I have a software engineering background (I've written reasonably successful books on software), I've just done a lot of other stuff as well that people tend to find more valuable.

> What constitutes successful for you?

The problem I need to solve is solved? I'm not sure what other measure you could have. Honestly, people really misunderstand how to use agents. If your aim is to "build software" you're going to get in trouble; if your aim is to "solve problems" then you're more aligned with where these tools work most effectively.


> graded live coding examples for interviews

Yeah, for those you can just relax and trust the vibes. It's for complex software projects that you need those software engineering chops; otherwise you end up with an intractable mess.


If it's for a complex software project the first question you need to ask is "does this really need to be software at all?"

Honestly, this is where most traditional engineers get stuck. They keep attacking the old problem with new tools and getting frustrated. I agree that agents are not a great way to build "complex software projects", but I think the problem space that is best solved by a complex software project is rapidly shrinking.

I've had multiple vendors try to sell my team a product whose core functionality we could build ourselves in an afternoon. We don't need that functionality to scale to multiple users, serve a variety of needs, or be adaptable to new use cases: we're not planning to build a SaaS company with it, we just need a simple problem solved.

But these comments are a treasure trove of anecdotes proving exactly my point.


I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this, where you need to find out whether an exact version of a package has ever been installed on your machine, all I can say is good luck.

The Python ecosystem provides too many nooks and crannies for malware to hide in.
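A starting point for that hunt is enumerating what's currently installed by scanning for `.dist-info` metadata directories left by pip. This is only a rough sketch: the search roots are up to you (home directories, project venvs, CI caches), and it cannot see packages that were installed and later removed:

```python
import os
import re

def find_installed_versions(package, roots):
    """Scan directory trees for <package>-<version>.dist-info dirs left by pip installs."""
    # pip normalizes dashes to underscores in dist-info directory names.
    pattern = re.compile(
        rf"^{re.escape(package.replace('-', '_'))}-(?P<version>[^-]+)\.dist-info$"
    )
    hits = []
    for root in roots:
        for dirpath, dirnames, _ in os.walk(root):
            for d in dirnames:
                m = pattern.match(d)
                if m:
                    hits.append((os.path.join(dirpath, d), m.group("version")))
    return hits
```

For the "was it ever installed" question you'd also have to check pip's download/wheel caches and shell history, which is exactly the "good luck" part.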


Glad this was one of the objects captured, it's absolutely stunning to see in person: https://www.metmuseum.org/art/collection/search/24671

I wish they had captured one of their Faberge eggs; those are almost more impressive.


Incredible. Why isn't it in France?


Not sure, but there's also a Van Gogh in that 3D collection, you could ask the same question for that one.


Probably the same reason there are french imperial eagles in British museums.


The provenance according to the Met:

>Henry II, King of France (until d. 1559);

>Carl August, Grand Duke of Saxe-Weimar-Eisenach, Residenzschloss, Weimar (by 1804–d. 1828);

>by descent to Wilhelm Ernst, Grand Duke of Saxe-Weimar-Eisenach, Residenzschloss, Weimar, later Schloss Heinrichau, Lower Silesia, Germany (now Henryków, Poland) (1901–d. 1923);

>his widow, Feodora, Grand Duchess of Saxe-Weimar-Eisenach, Schloss Heinrichau (1923–1929; sold in May, 1929, to Kahlert & Sohn);

>[E. Kahlert & Sohn, Berlin, 1929; sold on December 14, 1929, for $135,000, to Sir Joseph Duveen for Mackay];

>Clarence H. Mackay, New York (1929–d. 1939; his estate, 1939, inv. no. A-17; sold through Jacques Seligmann & Co. on May 15, 1939, to MMA).

Unfortunately, this does not answer "why did it leave France?"

However, the book "Merchants of Art, 1880-1960: Eighty Years of Professional Collecting" (1961) by the rather famous art dealer Germain Seligman offers this missing link:

>Parade armor of King Henri II, embossed, damascened and gilded. Later presented by King Louis XIII to Bernhard von Weimar.


Thanks


The museum helpfully has a "Provenance" tab that gives you the answer to this question. (the answer in this case is market capitalism)


In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.


That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords; it's in the naming of things like variables, classes, etc.

There are languages that are already pretty sparse with keywords, e.g. in Go you can write 'func hello() string' with no need to declare that it's public, static, etc. So combining a less verbose language with 'codegolfing' the variable names might be enough.


I'm not an expert in LLMs, but I don't think character length matters. Text is deterministically tokenized before being fed as context to the LLM, so in theory `mySuperLongVariableName` uses the same number of tokens as `a`. Happy to be corrected here.


Running it through https://platform.openai.com/tokenizer: "mySuperLongVariableName" takes 5 tokens; "a" takes 1. "mediumvarname" is 3, though, while "though" is 1.


You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.


Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.


Maybe not a perfect example but it’s more lightweight than Java at least haha


If by lightweight you mean less verbose, then absolutely not.

In Go, every third line is a noisy `if err != nil` check.


Well LLMs are made to be extremely verbose so it's a good match!


I think there's a huge range here - ChatGPT to me seems extra verbose on the web version, but when running with Codex it seems extra terse.

Claude seems more consistently _concise_ to me, both in web and cli versions. But who knows, after 12 months of stuff it could be me who is hallucinating...


To you maybe, but Go is running a large amount of internet infrastructure today.


How does that relate to Go being a verbose language?


It's not verbose to some of us. It is explicit in what it does, meaning I don't have to wonder if there's syntactic sugar hiding intent. Drastically more minimal than equivalent code in other languages.


Verbosity is an objective metric.

Code readability is another, correlated metric, but a more subjective one. To me Go scores pretty low here: code flow would be readable were it not for the huge amount of noise you get from error "handling" (it is mostly just syntactic ceremony, often failing to properly handle the error case, and people are so desensitized to these blocks that code reviews are more likely to miss them).

For function signatures, they made it terser, in my subjective opinion, at the expense of readability. There were two very mainstream schools of thought regarding type signature syntax, `type ident` and `ident : type`. Go opted for a third one that is unfamiliar to both bases, while not even having the benefits of the second syntax (e.g. easy type syntax, and, subjectively, that `:` helps the eye "pattern match" these expressions).


Every time I hear complaints about error handling, I wonder if people have next to no try/catch blocks, or if they just do magic to hide that detail away in other languages. Because I still have to do roughly the same error handling in other languages. Am I missing something?


Exceptions travel up the stack on their own. Given that most error cases can't be handled immediately at the call site (otherwise they would be handled there and not returned as an error), but higher up (e.g. a web server deciding to return an error code), exceptions save you a lot of boilerplate: you only have the throw at the source and the catch at the handler.

Meanwhile Go has some boilerplate at every single level.

Errors as values can be made ergonomic: there is the FP-heavy monadic solution with `do`, or an operator like Rust's `?`. Go has none of these.


Lots of non-go code out there on the Internet if you ever decide you want to take a look.


You’re not missing anything. I’ve worked with many developers who are clueless about error handling, who treat it as a mostly optional side quest. It’s not surprising that folks see the explicit error handling in Go as a grotesque interruption of the happy path.


That’s a pretty defensive take.

You don’t have to hate Go to agree that Rust’s `?` operator is much nicer when all you want to do is propagate the error.
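The tradeoff the thread is describing can be sketched in a few lines of Go. `parse` and `double` here are made-up helpers; the point is the three-line check that every level of the call chain repeats, which Rust's `?` collapses to a single character:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parse converts a string to an int, wrapping the error with context.
func parse(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse: %w", err)
	}
	return n, nil
}

// double calls parse; note it must repeat the same three-line
// propagation block. In Rust this whole check is `parse(s)?`.
func double(s string) (int, error) {
	n, err := parse(s)
	if err != nil {
		return 0, err
	}
	return n * 2, nil
}

func main() {
	if v, err := double("21"); err == nil {
		fmt.Println(v) // 42
	}
	// The %w wrapping preserves the original cause for errors.Is.
	_, err := double("oops")
	fmt.Println(errors.Is(err, strconv.ErrSyntax)) // true
}
```

Whether that explicitness is "noise" or "clarity" is exactly the disagreement above; the boilerplate itself is not in dispute.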


I think I remember seeing research right here on HN that terse languages don't actually help all that much


I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.



People make fun of me but I'll never skip a chance to complain about how large these phones are. I hate it so much. I have a standard iPhone, not a max, and it causes real pain in my wrist if I use it too much. Was honestly thinking about downgrading to the last SE model even though it's several years out of date.


I want to +1 every comment in this thread. Phones are too big now. I don't understand Apple's weird obsessions: first trying to make all the phones so thin you cut your hand holding them, then making them too big to fit comfortably in your pocket unless you're walking around in camo pants.

You know what I would like? When I tap on the search and type the first few letters of an app on my phone, and the app appears, and I click on that -- I would like the app to open. Only happens about half the time now. UI is getting worse with every release.


Many people used low iPhone mini sales to point at the idea that small phones aren't popular anymore.

They might be right, but the "Mini" was more like a return to the size of the 6 & 8; not the same size as the 5 or prior SE. So for me it was still too large.

https://imgur.com/a/iphone-mini-vs-iphone-5-vs-iphone-6-case...

The "usable screen" is where my thumb can reach, not whatever idea people have in their heads about the total size of the phone or anything, truthfully.

Anyway; hit recognition of the keyboard is so far behind where it was in the iPhone 4/5 generation that I doubt modern iOS would even be functional; even if you excused the padding issues that would inevitably be an issue.


> Anyway; hit recognition of the keyboard is so far behind where it was in the iPhone 4/5 generation that I doubt modern iOS would even be functional; even if you excused the padding issues that would inevitably be an issue

Right?? It is worse than I remember right? I'm not crazy.


Using the iOS 26 keyboard on an iPhone SE 2/3 is a truly miserable experience now. Upgrading from 18 was a terrible mistake


POS Apple just made me upgrade my iPhone Mini to 26 so that I could pair my new Apple Watch, because I just broke the old one.

I wasn't sure I wanted another Apple Watch, but it was the easiest thing to buy, and I don't have to figure out how to transfer all the data and set it up somewhere else.

But I definitely regret going the "easy" way; iOS 26 is truly awful, what the fuck.

I'm going to figure out what fitness/sport watch I really want to use next because I doubt I'll be sticking to iPhone with what they have on offer these days...


Luckily, hearing all the complaints early adopters of 26 had, I disabled auto-updates on my SE. Since you can't go back to a previous iOS version, leaving it on is a bit risky in general.


I recently switched from an iPhone 16 to an Air and my experience is the opposite. I type way more accurately on the Air (even with the dictionaries reset on both, and no screen protector that could make the touch less sensitive). I do not know why.


> The "usable screen" is where my thumb can reach, not whatever idea people have in their heads about the total size of the phone or anything, truthfully.

In the early days of the phablets I had an observation that has mostly held true all these years later. At the time I noticed you could accurately predict whether someone wanted the large or small form factor based on their usage patterns. Did they tend to use their device while sitting down? Or did they tend to use their device while on the move? This indicated whether or not they typically used 2 hands vs 1 hand.

It turned out the 2 handers dominated the market, unfortunately for people like you & I.


Until you hit 'Search' at the bottom right, it shows you a preview result set that can differ completely from the one you get afterwards. Because two result sets are not enough, they added a third with 'Siri Suggestions' as the top row, which is configured not in the Search settings but in the ones for Siri. The iOS docs[1] misname the toggle as 'Suggest App' when it is called 'Suggest Apps Before Searching', which only the iPadOS docs[2] get right. Did I mention they cut useful info from the iOS 26 version[3] and changed the URL?

[1]: https://support.apple.com/guide/iphone/about-siri-suggestion...

[2]: https://support.apple.com/guide/ipad/about-siri-suggestions-...

[3]: https://support.apple.com/guide/iphone/turn-siri-suggestions...


The silver lining, if AI takes all of our jobs, will hopefully be that the people responsible for all of this become destitute as well.


woah


Apple follows the market. There just aren’t many people who want small phones, HN notwithstanding. If they sold like hotcakes they’d have a full lineup.

And I kind of get it. Philosophically I want a small phone. Realities of age and eyesight forbid.

The market is basically people who don’t read or watch videos on their phone, and who have excellent eyesight, and who don’t care about having the best cameras. 100% legit market segment, but that Venn intersection is too small to be worth it.


I don't agree with this. In my view, there are plenty of cases where the product changes are shoved down our throats.

I think the problem is that the product folks don't actually listen to the market. They read Jobs' biography and are convinced that they will tell their users what product they will like and that they will see the light later on.

The sad reality is: they are not Jobs (and even he was not faultless). So, we get Mac like Windows interfaces, we get mail clients losing features, we get AI in every single app you see, etc.

Just my 2c.


Why do you buy things you don’t like?

And if you’re convinced that most people don’t like most products… why don’t you make a fortune building what people actually want?


> Why do you buy things you don’t like?

In my case, because it's the only option in some situations.

For example, I wanted a phone that has an unlockable bootloader and a decent/Qualcomm CPU.

I'm immediately down to Motorola and OnePlus.

If I want future proof performance, I'm down to the OP 15 only.

Yes, I have RSI today, but with any luck that can go away. However, a Vivo will not get a BL unlock tomorrow magically.


The iPhone Air isn’t popular either, and yet here we are. They preferred releasing a huge thin phone rather than a tiny thin phone. Even if the % of clients is small, there are still millions of potential Mini clients.


I interpret the same facts differently: I see Apple realizing that the SE form factor doesn’t sell enough to be worth it, and trying something different with the Air. It sounds like the Air will likely go the way of the SE, with occasional updates but not every year.

Apple is very good at market research and understanding users… but not perfect. I think they genuinely believed the Air would sell a lot more than it did.

And “millions” is not necessarily a lot. Apple sells 250 million phones a year. A SKU that sells 3 million is a distraction with much lower ROI against R&D than a mainline phone. It takes just as much engineering to create and as much manufacturing to produce, so fixed costs are spread among many fewer units.


> Realities of age and eyesight forbid

Am old. Am experiencing presbyopia. Am still very much tied to my mini on the default font size. When I can't read something I just pinch/zoom. Meanwhile it's easy to hold & use in one hand while walking down the street, and fits into normal sized pockets.


Why do almost all phones have to be in that narrow band of 6.5 to 6.9 inches?

I wish there were more size choices on both ends of the spectrum. While most people prefer more choice below 6", I would like some choice above 7", since I keep my phone in my belly pouch, and never use it one-handed. My current Huawei Mate20X is actually ok at 7.2" (but worse than the Mediapad X1 I had before which at 7" was actually wider) but is way behind on Android updates, and will soon stop running my banking app.


Quick reality check:

- 7" used to be tablet category, e.g the Nexus 7

- anything above 6" would be considered phablet

Phones are really just like cars now, size inflation included.


While I agree with the spirit of the thread and dearly love my mini, I think this reasoning doesn’t account for a substantial reduction in bezels: my iPhone 5S had more than a centimetre of black bars above and below its 4" display (altogether it was 5.4" in diagonal), I bet those phablets you mentioned had even bigger bezels and were closer to modern 8.5" phones.


I loved the size of my iPhone 6, and every iPhone that I’ve used after that has been too big.


It's not an obsession. It's calculation. They noticed bigger phones lead to customers buying more services and apps.


That’s an entertaining construction of “people are more likely to buy things they get more utility from” that somehow removes agency from consumers.


Yes. It's Apple's product-first philosophy that Steve Jobs repeated again and again:

"A lot of times, people don't know what they want until you show it to them."

"Some people say, 'Give the customers what they want.' But that's not my approach. Our job is to figure out what they're going to want before they do."

"You can't just ask customers what they want and then try to give that to them. By the time you get it built, they'll want something new."

"If I had asked customers what they wanted, they would have said 'a faster horse.'"


Is that the direction of causality, or is it the other way around? Maybe people buy larger screens because they want to watch Netflix or TikTok on their phone more comfortably than on smaller screens. I do love small and light phones (an A40 right now), but I watch movies on a tablet. If I were often on the move or sharing a home with many people, maybe I would use a larger phone.


> first trying to make all the phone so thin you cut your hand holding it,

Except the cameras that stick out. Why do I want a phone thinner than the camera lenses?


> I don't understand Apple's weird obsessions

Selling you Apple Watch ?


I dived into the niche world of small phones recently while looking for a replacement for my malfunctioning Pixel 4a (which is apparently now considered a compact phone). There are a few small manufacturers in China making some, with 4-inch or 5-inch screens, like Aiphor or Unihertz. And by "small" I mean "they use Kickstarter to fund their R&D" small.

Other than that... Nobody's really bothering with compact phones anymore, in the US or in the rest of the world. Bummer.


> Nobody's really bothering with compact phones anymore, in the US or in the rest of the world. Bummer.

And the worst thing is that app developers don't bother with testing their apps on small phones. So even if someone produced a small phone, many apps would be broken on that UI. So there's no way back.

PS 4 inch is not a small phone. iPhone 4S had 3.5" display and it wasn't small, it was normal. Small is something like 2" screen I suppose. All modern phones including these "iPhone Minis" are egregiously huge.


I would not go as far as calling the iPhone Minis "egregiously huge", keep in mind that screen size is not a great measure for phone sizes across different generations. You could easily fit a 4+ inch display into the form factor of the 4S with modern technology, the bezels on those phones were huge. Unless my math is off, the housing of the 4S has a diagonal of just over 5 inches.


> All modern phones including these "iPhone Minis" are egregiously huge.

Agreed - going from the original SE to the mini meant a big downgrade in usability for me, as it's now hard to reach the top of the screen.


My assumption is that very few people who like Dom Joly sized phones use them one handed


I don't give a stuff about the vast majority of "apps". Webpages work fine.

Built in ones work fine - mail, safari, music, maps, photos

Major ones work fine - bbc sounds, slack+teams, whatsapp, various authenticator programs


Yep. Aiphor's BlueFox NX1 with a 4-inch screen is roughly the same size as the original iPhone, but has a larger screen (the iPhone had bigger bezels and the home button underneath). To me it feels a bit too small for things like typing/texting.

Unihertz Jelly Star has 3 inch screen, that's way too small for me.

But they exist and so do people who buy them.


We do, and it is a pain. It is incredibly easy to defeat any kind of design, or in fact human interface guidelines, by cranking text size to the max on these smaller devices.


> Nobody's really bothering with compact phones anymore,

They need to show all those ads somewhere, right?


The phones get larger and the UI gets less information dense every release. More padding, more offsets, more dead space.


Yeah, this is what bothers me so much. Retina displays for low-density content? We could've remained at 800x600.


I'd pay good money for a small phone with nothing but a unix terminal


Termux on Android. Lots of hardware choices.


Apple needs all that space so that the aesthetic can fit in.


I have a pet theory about increasing phone sizes

> Screen size is area (x^2) and battery size is volume (x^3). As battery life is a critical feature, a bigger screen supports (a nonlinear) better battery life.

https://news.ycombinator.com/item?id=44588733
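A quick back-of-the-envelope check of the quoted theory (assuming all dimensions scale together, which real phones only partly do, since thickness is often held roughly constant):

```go
package main

import "fmt"

// If every linear dimension of a phone scales by factor f, screen area
// grows like f^2 while battery volume grows like f^3. This is only a
// sketch of the theory above, not a claim about any actual phone.
func main() {
	f := 1.2 // a phone 20% larger in each dimension
	area := f * f
	volume := f * f * f
	fmt.Printf("screen area: x%.2f, battery volume: x%.3f\n", area, volume)
}
```

So a modest 20% size bump buys roughly 44% more screen but 73% more battery, which is one way to rationalize the industry-wide drift toward bigger phones.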


This does not square with especially Apple's unending obsession to make phones as thin as possible. Which is doubly stupid when it makes them so fragile that the first thing you do after taking it out of the box is to wrap it in a thick rubber shell.


What obsession about making thin phones? iPhones are pretty thick and have been that way for years. The Air being an outlier, of course, but it's an intentionally thin phone in a lineup of thick and heavy ones.


I think it’s even better than that. Your cellular modem (on all the time) scales at O(1) with phone size. Same for on-board tasks that do not involve the screen. Powering your RAM (also on all the time) is similar, but larger (more expensive) phones may tend to have more RAM.


I had a Sony Xperia Z1 mini, that was close to the size of a SE but had double the battery lifetime.


We made fun of phablets, only for them to become the default.


well we have galaxy fold tablet-as-phone, so... maybe not all is lost?


I have found the iPhone Air much easier to hold than the iPhone 13 Pro it replaced because of how light it is, even though the iPhone Air has a bigger screen.


The 17e weighs roughly the same at a smaller size, and the mini weighs significantly less. Not to mention the first SE, compared to which even the mini is heavy. Yes the Air is lightweight compared to the Pro, but that’s a low bar.

The other thing with the Air is that you can’t really use it one-handed, which is what most people who like small phones are after, besides pocketability.


The first SE was the best form factor I've ever owned.

Incredibly small. Incredibly light. Pretty thin, even in a case. Had a headphone jack, Lightning and Touch ID.

The only thing I like about the new iPhone designs is the action button. Having an automation which automatically turns silent mode off or on based on whether I'm home or not is pretty cool. You can't do that with a physical switch.


Agreed about the SE. I’m still depressed anytime I take it in hand that we don’t have something like it anymore.


I don’t think anyone should make fun of you for it but I’m in the opposite boat. I’m so glad that they make the pro max variants because most smartphones are so small that it hurts my fingers to bend them in the unnaturally inward way it requires to hold and interact with them.


It wouldn't be so bad if both options were available. By all means, have your giant pro max or whatever if you want, but that shouldn't be the only reasonable option.


I agree, and ideally neither should be tied to the phone’s technical specs.


For me it's not the fingers, it's the eyesight.


Boban Marjanovic posts on HN? Would never have guessed.


Great reference. It's a shame most people seeing this comment won't get it.


I switched from a pixel 3 to a pixel 9 pro over a year ago, and I still miss the smaller form factor. the pixel 3 really was the perfect size for me and I am sad I can no longer get a smallish phone with a high end processor.


I switched from Pixel 4a to Unihertz Max (5G phone with 5 inch screen from a small Chinese startup). Love the form factor, I can keep the phone in my front pants pocket again, next to my keys or wallet. I'm somewhat reluctant to put anything sensitive on that phone (like my email), but happy overall.


I still have my Pixel3. I use it without a SIM for random stuff, and miss the small form factor. It is half the thickness of a Pixel 10, my current phone!


I’m still running an SE2020. I was expecting the latest update (with Liquid Glass) to be the death of it. But performance has actually improved significantly! Very unexpected.

It’s been a great phone!


Same. I'll probably try to run it out for another year and end up with the 18e. It's been a great phone, but its days have to be numbered.


I just bought a refurbished SE. It works great with newest iOS, Liquid Glass, etc. Do it!


Which gen of SE?


3rd is the only one still supported.



Funnily, the large display is the most important thing for me. I find my efficiency directly proportional to display size (which holds for laptops too).

If a 30 second task can be done in just 20 on a device with a larger display, that's absolutely worth it for me.

Also larger device tends to imply longer battery life too.


If the task can’t be done in a few taps I feel I’m better off opening a laptop anyways.

However the market agrees with you so I must be missing something. I used to think it was driven by media consumption on phones, and that I try to avoid, but this isn’t the first time I have heard people tout phone productivity gains from a slightly larger screen.


> I must be missing something

I wouldn't assume that.

The expression 'fat fingers' describes the phenomenon where users (including myself) lack the eyesight and fine motor skills required to type accurately on a small keyboard, so a slightly larger display makes all the difference.

Perhaps you simply have those fine motor skills (and good eyesight), so a larger device isn't necessary to prevent typos and remain productive.


I was able to thumb type at high speed and accuracy on the 3.5 inch iPhones. On modern iPhones, I produce more typos than ever, because apparently Apple thinks it knows which key I meant to hit better than I do, even with all the autocorrect and suggestions turned off.

I've banned social and don't use my phone much anymore, so it's less of an issue than it used to be, but it's really frustrating when I'm clearly hitting the right key and it insists on pretending I hit an adjacent key.


It’s so strange. Like, the obviously correct thing is to have a small ML model that learns the user’s typing patterns, which of their own typos they fix, which auto- and suggested fixes they reject, what rare, made-up, and jargon words they use, what acronyms they use, etc.

Instead, after 20 years of iPhone usage, I am not allowed to type the names of projects I use all the time without fixing the autocorrect every time, or (as you say) carefully hitting the left side of the F key because dead center will produce a G.


My preferred conspiracy theory is that larger, brighter screens hold attention better, so everyone involved in the whole “user experience” (phone manufacturer, application developers, advertisers, etc.) prefers (whether they consciously realize it or not!) phones to have a larger screen. Smaller phones make fewer demands; who would want to make a device like that?


I believe you are correct.


I have my phone with me all of the time and it has an always on connection. My laptop has neither trait


> I have my phone with me all of the time and it has an always on connection

That's a bug, not a feature. You don't need to be able to do every task all the time. In fact, it's nice to be able to separate that aspect.


Yes I can just print out directions on Mapquest before I leave home, tell people to page me and I will call them back from the nearest pay phone, carry around my Walkman and my Polaroid camera with me.

Have you ever thought that with 80% of web traffic coming from mobile, you might be the outlier?

What next? The old Slashdot meme “I haven’t watched TV in 20 years. Do people still watch TV?”


What a ridiculous exaggeration.

I said you don't have to do every task, not do no tasks.

> Have you ever thought that with 80% of web traffic coming from mobile, you might be the outlier?

Wow, snark too. In recent years, I've taken a much more luddite stance against mobile device usage for my own mental wellbeing. Maybe other people should follow suit.

"You should do your taxes on the train". No, I don't think that I will. You're free to stress yourself out like that. Have fun.


So park_match is the arbiter of what tasks should and should not be done on your phone?

> You should do your taxes on the train". No, I don't think that I will. You're free to stress yourself out like that. Have fun.

I along with 90% of the taxpayers in the US take the standard deduction - meaning my taxes are stupid simple.

I logged into the TurboTax app, it offered to download my W2s, I answered five questions, entered the date that I wanted the IRS to take out the taxes we owed, and we were done. I didn't even have to file state taxes for the state I live in.

How would that have been easier from a computer? In fact it would have been harder if I had to use a computer because the other option I had to submit my W2 was to take a picture of it.


I believe the GP was talking about trying to do “real work” on a phone, which is something many people try to do — but which many others find a repugnant idea, as they currently use the excuse of the impracticality of doing work on a phone as a lever to push back on letting work intrude on their personal life.


Have you thought that a lot of people work remotely and don’t sit at their desk all day? I have deliverables and deadlines to meet like everyone else. But sometimes I would rather go for a swim in the middle of the day in the heated pool when the sun is still out (benefit of living in Florida in the winter) and work late and be contactable (wearing my watch), or go to the gym during the day (downstairs). Business travel is also a thing (much less than I used to), as is working with people in different time zones, where I’m not going to refuse to answer a message from a coworker in India if they need me.

It’s a fair trade off. My company gives me a lot of leeway during the day and I am flexible about time zones.


Is this really a driving factor for people? If I anticipate tasks that I can't wait to get back to a good work environment to do, I'll bring my laptop and tether on my phone. It's a fantastically more productive setup than trying to ssh in via a phone keyboard or even write a long email. 1 inch extra on the phone screen diagonal won't move the needle there for me.


Yes and even though you haven’t watched TV in 20 years ((c) Slashdot) people still watch TV.

The feigned ignorance on HN that most normal people don’t pull their laptops out to do everything in 2026 is amazing


It's not feigned. I'm astonished to learn how hard people will work for the (seemingly to me) false convenience of doing things on their phone which would be (to me) much more straightforward to do on a more suitable device.

So I tend to assume that these stories are often the outliers, and that my personal experience is more common. I recognize the fallacy, and I suspect we're both wrong and we're both right. I just honestly don't know which one of us is more of which.

It probably devolves to a question of what kind of work we're talking about. The work that I do (or the way I do it), I do not believe could be done effectively on a phone or tablet most of the time. I work with people whose work can be done there. And there are probably more of them than there are of me. But that does not mean I could become one of them.

(addressing your comment on another subthread): if music, camera, and web are a person's "work", then sure. But that does not resemble "work" for me in any way.


So it’s not feigned ignorance…

Again, you can look at the worldwide penetration of cell phones vs laptops, where most web traffic comes from, the amount of resources spent on mobile development vs desktop, the amount of revenue globally of phone sales vs PC sales, etc

I also don’t spend all day working and I definitely don’t take out my laptop when I’m not working


Worldwide is not relevant, and mobile-vs-desktop dev is not relevant.

Mobile-vs-web dev is probably a better metric. And developed, mature markets only. Anything else introduces the second- and third-generation tech gap inconsistencies.


Yes Japan and S. Korea who led in mobile penetration for decades are poor countries..

Are you really arguing in 2026 about time spent on mobile vs PCs?


This is non-responsive to my comments.

Also, you're being unnecessarily unpleasant in these threads; I wish I had read down further before replying initially, but I'm done now.


> Anything else introduces the second- and third-generation tech gap inconsistencies

This is completely responsive to your thread if you think countries that use their phones more than the US does is some type of signal that they are third-world countries.


Only about 70% of Americans even own a laptop[1]. Factor in many of those being ancient with 15 minute battery life, plus user preferences… it’s hard to see how that could be the majority use case.

1. https://www.statista.com/statistics/228589/notebook-or-lapto...


It's also generational. My 18-year-old sister-in-law is now applying for colleges, and the word "application" immediately made her look for an app. That the whole process happened on a (not mobile-friendly) website was rather surprising to her.

(English is her 3rd language)


I am 51. The amount of Ludditism on HN shouldn’t come as a surprise to me. But it does. Most older 70+ year old people I know don’t own a computer at all and would never use one. But they do know how to get to things they need on their phones.


It's not feigned ignorance, it's disbelief that people are comfortable working in such an inefficient and frankly unpleasant way.

Can I file my taxes on my phone? Probably. But I could also set myself on fire, and I think that might be more fun. Why would I not want to use a tool that is 100x faster and 1000x easier to use for any task more complex than writing a sentence?

I'm a developer. I've heard of developers SSH'ing from their phone and developing that way. It's impressive, in the same way removing all your fingernails is impressive.


Really? I did file my taxes by phone. It took me all of five minutes.

90% of taxpayers claim the standard deduction, meaning their taxes are really simple.

I launched TurboTax, it offered to download my and my wife’s W2s, I clicked through a few buttons on a wizard and I was done. It had all of my information from the prior year so it already knew my employer.

As far as speed, have you compared the fastest iPhone to a low-to-midrange x86 PC? The latest A-series chips in the iPhone are faster in single-core performance than an M1 MacBook Air, which is no slouch. But all that is beside the point. How fast of a computer do you think you need to file taxes? There was tax-filing software for the 1MHz Apple //e in 1986. You just had to print it out.

I entered maybe one number?

I live in a state without state taxes so I didn’t even have to file states.

FWIW, I also shopped for the house we had built in 2016, and did all of the paperwork before closing, from my phone.


The things that require more than a few taps to do aren't things that need to be done at a moment's notice. Those things can wait until I'm at my laptop.


Just Thursday, I left home at 6 AM, got in an Uber, waited at the airport, got on a plane for an hour and a half, waited at another airport, got on another plane for four hours, took an Uber to the Airbnb, and while I was out to dinner that night, my wife and I were planning a trip we were taking during the summer.

Are you suggesting that I just queue everything up until I set my laptop up?

Again, you realize you’re the odd one out, right, with most activity these days taking place on mobile?


Is there anything you need to do during that time? Or are you just looking to fill that time with whatever keeps you occupied and entertained?

If it's the former, you lead a very different life from me. There are very few things in my life that show up and require immediate action (or action within 24-ish hours for that matter. Most things can wait). If it's the latter, I try to fill that time with reading.


Again, are you so much in the HN bubble that you don’t realize that most people don’t wait to get home to their laptop (if they even have a laptop) to get things done in 2026?

Is it really that hard to look at stats and realize that you might not be the normal one?


I'm sure they do it that way. I'm also not convinced there's any actual need to do it that way.

You also didn't answer my question. Nothing in your travel scenario there, if I were in your shoes, would need me to use my phone for more than a few taps per actual task, while the rest of my phone use would go to mindless browsing or reading. What specific tasks are you imagining popping up here that I would then queue to my laptop?


Have you ever thought that the HN crowd’s superiority complex over the “commoners” and unwashed masses may be unwarranted?

And no I’m not a young guy - my first computer was in 1986 in 6th grade…


I'm not trying to say my way is superior. On the contrary, I'm asking what use cases you have that you are unable to solve. If you have a genuine need to send emails from your phone at a moment's notice, then I can't argue with that; if you can't wait to respond to the emails you receive, there's nothing else to really do about it. That's why I'm asking what needs you have. I'm trying to better understand your situation, trying to put myself in your shoes.

But if you have no desire to actually respond to my inquiry, I shall remain in the dark.


Yes you will if you think most communication personally or even work related is happening via email…

You know sending email via mobile has been popular since 2003 right?


> Yes you will if you think most communication personally or even work related is happening via email…

The same principles apply to Slack, Teams or whatever else you may use. I don't do work outside of work hours, so what would I know. Email was just the example I thought of in the moment. Again, I'm asking you a question out of a desire to better understand your situation.

Personal correspondence doesn't take many taps to do. It's rarely more than 25 characters at a time in my experience.

> You know sending email via mobile has been popular since 2003 right?

'sending' and 'popular' are doing some pretty heavy lifting here. Reading, sure, I'll buy that. Sending? I'm not sure sending emails longer than two sentences from any device without a keyboard has ever been popular, for most values of 'popular'. It's probably more popular than ever given that touch keyboards make it reasonably possible, but James S. Casual isn't sending a lot of emails from his phone just through the sheer power of not sending many emails to begin with.

And 'popular' for that matter. Possible, sure, but how many people ever even had a mobile device that could send email before the iPhone came out?

I'm sure sarcasm and implying I'm stupid are great ways to convince your interlocutor, or the unseen masses for that matter.


I’m not implying you are stupid. I’m saying straight out that you’re feigning ignorance (ie not that you are ignorant) and you know how the world works in 2026.

Myself personally, I work remotely. I might be running errands during the day and still be monitoring Slack so I can be on a call at 6 or 7 at night with someone in another time zone.

I also travel for work - consulting - and travel personally during the work day and may work after I land. Even if not for work, do you wait to get to your computer to respond to text messages? Check HN?


Believe it or not, I'm not feigning ignorance. I just lead a very different life from you.

> Myself personally, I work remotely. I might be running errands during the day and still be monitoring Slack so I can be on a call at 6 or 7 at night with someone in another time zone.

> I also travel for work - consulting - and travel personally during the work day and may work after I land.

See, I would never do this. A.) I don't work remotely (not out of a desire not to, but it's just not viable with my current line of work), and B.) If I did, that work would be zoned off away from my personal life. If there's downtime, I can kill time by browsing whatever, but I wouldn't be out and about but also 'at work' at the same time. Work-time and personal time basically never mix in my life, and I'd like to keep it that way.

If you're 'at work' for 48 hours at a time, while travelling, then having to respond instantly at any given time makes a lot more sense, although I'd probably still want to defer those responses until I can get some downtime during any given travels to then type up my responses on an actual keyboard. I can however understand if that's not really viable in your life of work.

> do you wait to get to your computer to respond to text messages?

I've never(?) sent a text message longer than maybe 100 characters. Most are a fair bit shorter than that, and I don't send that many to begin with. Same goes for Discord, although confirming that is harder, since it's contaminated with messages written with an actual keyboard.

> Check HN?

To read? Sure. I even read books on my phone. Respond to a comment? Not unless my response is really short.


You're being pretty defensive / aggressive about what some might call a phone addiction.

Most on HN know the data: healthier people tend to enforce boundaries with their devices. The average person is addicted, yes, but I'm not sure being "the odd one" in an era of actually decreasing literacy and numeracy and attention span is the insult that you seem to think.


No I’m not living in some Luddite bubble. I am sure you’re also surprised that I’m not running Linux and using KDE Connect.

Again, look at the statistics..


I was ready to agree with you, as that was my belief. (I also agree it's a sign of a dangerous addiction, but just like everyone in the 60s smoked, everyone today uses phones.)

Then I came across this, showing a roughly even split between laptop and phone:

https://tgmstatbox.com/stats/united-kingdom-device-usage-bre...

I'd assumed it was more like 80% phone


The statistics suggest that being perpetually glued to a phone is negative for your life across essentially every dimension.


Yes I’m sure that using my phone for things that in the before times I would have used a desktop computer to do over a 2400 baud modem is a negative for my life. Those negatives are around social media


> while I was out to dinner that night, my wife and I were planning a trip

Were you out to dinner with your wife?


Yes, during the first night of our 45 day stay in another country. She got a text from someone she is meeting on the first leg of our 45 day domestic trip this summer, asking if we could come 3 days earlier. We were looking at our calendar, our Hyatt points, flights etc. while enjoying live music and planning our next getaway.

I’m sure you would have thought we should have waited to take out my laptop when we got back home.


I don't understand why you are downvoted. Are people in this thread really pulling out a laptop and trying to get it connected (or paying for one with a cellular modem) every time they need to respond with two words to an email, call an Uber, or look up the nearest coffee shop that is open at an odd hour?

HN seems to have some really weirdly prescriptive view of how people ought to use their devices, in a way that is almost like Steve Jobs.


> every time they need to respond with two words to an email

I don't have my work email on my phone, and personal emails basically never need any actual response.

> call an Uber

This is a few clicks and not a big ask regardless of the exact device. You can order an Uber regardless of screen size.

> look up the nearest coffee shop that is open at an odd hour?

Google Maps works fine on smaller screens. Ask me how I know.


And they probably are also surprised that I’m using an iPhone where I can’t use Docker and have JavaScript enabled on my browser.


> I don't understand why you are downvoted. Are people in this thread really pulling out a laptop and trying to get it connected (or paying for one with a cellular modem) every time they need to respond with two words to an email, call an Uber, or look up the nearest coffee shop that is open at an odd hour?

Because some of us read the original comment and thought maybe the discussion should be responsive to it:

> If the task can’t be done in a few taps I feel I’m better off opening a laptop anyways.

Talking about Uber, email and directions in Maps are literally "task[s] that can be done in a few taps". Perhaps being less "weirdly" defensive and taking the time to think about the discussion you're about to jump into would be helpful?


Surely your laptop has a mic on it and probably a camera. It also has blueteeth, wifi and stuff. Your phone has much the same and can act as a proxy to whatever is missing on your laptop and vice versa. Obviously, getting your laptop to fit under or within your "lap" is a bit of an ask!

Things like KDE Connect provide a direct bridge and a bit of imagination does the rest.

If your laptop isn't cutting the mustard then ditch it ...

... Oh your phone has a tiny screen and a shit mic and speakers, unless you stick it in your ear?

Horses for courses.


Oddly enough, I don’t carry around my laptop in my pocket all of the time. You do realize that in 2026 most people do most of their day-to-day non-work tasks on phones, don’t you?

Yes most people use KDE Connect..


At least for me, the effect is real, and is driven not by media consumption but ergonomics of use. But at the same time, I'd say you're not missing that much. I always preferred large screens because of productivity gains[2], but even as screens kept getting larger, the set of things that "I feel I’m better off opening a laptop" for remained the same for me.

That is, until I switched to a foldable phone (Galaxy Z Fold 7) half a year ago, and - I kid you not - I haven't used my personal laptop since that day.

FWIW, I still have a proper desktop PC; In the past decade+, I've been using a PC at home, and a "sidearm" on the go / away from home: always a 2-in-1 Windows laptop with top specs[0]. Being always with me, this laptop often replaced use of PC at home too, because of convenience & portability.

So by amount of productive use, for the past 10+ years it was sidearm >> PC >> smartphone. But getting a foldable flipped it around. Having twice the screen size of a regular (large) phone is a big productivity win[1], but it's the folding that makes the actual qualitative difference. Folded, the device becomes a regular smartphone - i.e. something that fits in my pocket, meaning it's always on me, in my hands, or less than 1 second away. Contrast that with tablets, whose form factor makes them basically just shitty laptops (same logistics as an ultraportable, but the toy OS of a phone).

I didn't expect this. I didn't even feel this change - I only noticed two months later that my laptop has been sitting unused on my desk, covered by a pile of stuff. Doing "laptop tasks" on a mobile device is still annoying (no keyboard, toy OS), but combining tablet-sized screen with portability of a phone makes them less annoying than logistics overhead of a laptop - and at least in my case, this eliminated the entire[3] space between "smartphone" and "PC".

--

[0] - Think Microsoft Surface, except I could get better specs at half the price if I bought an off-lease but pristine Dell or Lenovo.

[1] - It's not immediately obvious to people, but as things are today, a foldable phone isn't any better at media consumption than a regular one, because almost all cinema, TV, videogames, etc. are produced for widescreen - meanwhile, the inner screen of my Fold is approximately square, so e.g. for most TV, half or more of it is black at all times. However, all that extra space allows me to effectively use multiple (3+) apps on screen, not to mention it makes spreadsheets actually usable.

[2] - Bigger screen = less scrolling and tapping in menus, but also with text size scaled to minimum, my previous phone (S22) had a big enough screen that running two apps in split-screen became useful on a regular basis.

[3] - Well, almost. There are some tasks I really like a physical keyboard and larger screen for - but for those, I just plug the phone into the screen via USB-C, and voila, it turns into a regular desktop. A shitty one, but good enough for occasional use.
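The black-bar observation in footnote [1] checks out with quick arithmetic (a sketch assuming an exactly 1:1 inner screen, which is an approximation of the real panel):

```python
# Fraction of a square (1:1) screen left black when letterboxing 16:9 video.
video_w, video_h = 16, 9

# Scale the video to the full screen width; its height becomes 9/16 of it.
scaled_h = video_h / video_w     # 0.5625 of the screen height
black_fraction = 1 - scaled_h    # top + bottom bars combined
print(f"{black_fraction:.0%}")   # 44%
```

So roughly 44% of an ideal square screen goes dark for widescreen video; the real, slightly non-square panel shifts that a bit, in the same ballpark as "half or more".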


That's interesting. I knew foldables have been selling well, and I assumed they were basically the promise that tablets were trying to sell but as you said- usable this time. I've never heard anyone's actual story laid out like this before though.

Now I'm having second thoughts on what I'll do myself because I would have never guessed a foldable would be ideal as you described.

I've been trying to avoid building an $8,000 tech stack of redundant devices that I don't need, which is what Apple is all about, and then some. It's not the initial investment that bothers me, it's calculating replacement costs over time. Pretty quickly you have half a new vehicle's worth of redundant electronics. It leaves you asking: why?

So while I appreciate the longevity and durability of my iPhone 12 mini, along with seamless Airdrop and the Airtag network being as handy as it gets, I'm thinking about going back to Android for docking support. This is a feature I don't think Apple will ever add until the end of time, so I may as well bite the bullet now and get another OS switch over with.

I'm not entirely convinced I would love a foldable like you do, but I am rethinking that now. I've been sold on the idea that Microsoft's partnership with Samsung for Phone Link features will make my life delightful at my desktop battlestation, and DeX with a lapdock will cover any mobile needs. A lapdock really does create an alternative to the battery life offered by the M-series Macbooks, while leaving me with only two devices to maintain and replace with my desktop and phone.

It's amazing, given the flexibility and options offered in the Android space, whether it be my proposal or your foldable experience, that they don't have more market share. I think the issue is marketing; people need to be shown what they can do with a product, and Apple makes Continuity and closed ecosystem features seem like a value add. When it's kind of a lure to an iCloud subscription and an $8,000 personal tech stack.


You could put a sim card in a tablet in that case. Might look a little funny when doing a phone call though.


What, ummm, efficiency benefits are you finding on a smart phone? Is it related directly to the keyboard size when typing? That's kind of all I can think of, other than a really tiny display + big fingers being an issue.

I find my efficiency directly proportional to the distance from my smart phone.


I did downgrade back to my SE (from iPhone 16). Big selling point (aside from its size and rounded corners) is the physical button with fingerprint. I missed that even more than I disliked carrying a big phone around.


I'm typing this on an iPhone SE (2020).

It runs the latest iOS, although it's likely missing some of the new bits.

I prefer the size, although the screen that spans the entire front surface would be the superior device; I like the iPhone 13 Mini.


My SO has the latter and switched from the former when it started behaving erratically.

It's the very last reasonably sized iPhone and one of the very last in this category overall.


I rather have the fingerprint button!


Former small phone person here: I went from a small iPhone to a large one just so I wouldn't have to carry around my iPad. I really wish the iPhone Fold would get here sooner.


Same. The Pixel 4a was the perfect phone for me: Light, screen exactly the right size to navigate with a single thumb whilst holding the phone in one hand, enough battery life, small enough to fit in my jean pockets comfortably.

But people buy big phones in preference to small ones, so that’s what Google & Apple manufacture. Nobody (from the POV of Apple/Google decision makers) buys these smaller phones.


Completely agree. And to make matters worse, I can't even switch to android without losing the ability to reliably send quality video to iPhone users.

Apple suffered for decades from Microsoft's anticompetitive OS monopoly, and turned around and did the same thing to the android ecosystem.

I have no idea why this sub is full of Apple fanboys. I was an Apple fan 10 years ago, but these days they no longer deserve your support.


> the ability to reliably send quality video to iPhone users.

Just curious but why? Is it iMessage lock in?


Yes exactly. RCS exists but unless an iPhone user goes and turns it on it won't work. Which means no one has it turned on.


I don't think that's true. Every iPhone user I've texted in the last 6 months at least has had rcs turned on, and that's including some very non tech savvy friends that I doubt did it manually


It's also dependent on the telco supporting it. Australian telcos don't.


RCS compatibility with iMessages relies on the telcos to implement it, particularly for group chats.

iMessages work using SMS 1-to-1, but group chats require the telcos to enable RCS instead of SMS.

Of the 3 telco network operators in Australia, none of them have enabled it.


And no one uses WhatsApp, Telegram, Line or another cross platform messaging service?


You should look into grips that attach via Magsafe.


>it causes real pain in my wrist if I use it too much

LMAO


I really enjoyed Obj-C when I did some iOS work back in 2015/2016. It was my first non-JS language, and it taught me so much that I didn't understand since I started out doing web dev.


It's very obviously not "the easy part", it's definitely hard. It's just not the only hard part. And there may be other parts that are harder in some sense.


Something can be hard and also be the easy part. Imagine you got to see into the future and use a popular app before it was released, and you decided to make it yourself and reap the profits. Would be an absolute cinch to copy it compared to trying to make a successful app from a blank page.


Some code is hard. Most business logic (in the sense of pushing data around databases) isn't. The value is in analysis and action, which the system enables, not the machine itself.

Creating a high performance general purpose database is hard, but once it exists and is properly configured, the SQL queries are much easier. They'd better be or we wasted a lot of time building that database.
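That asymmetry is easy to demonstrate (a minimal sketch with a hypothetical orders table; SQLite stands in for any engine): the engine took person-decades to build, but once it exists, the typical business-logic query is a few lines.

```python
import sqlite3

# The hard part - the database engine itself - already exists.
# Using it for everyday business logic is the easy part.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(19.99,), (5.00,), (42.50,)])

# "Pushing data around": aggregate and report.
(total,) = conn.execute("SELECT SUM(total) FROM orders").fetchone()
print(round(total, 2))  # 67.49
```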


Also totally depends on what kind of coding you are doing.

Yeah building a web app can become somewhat easy, but is distributed machine learning? What about low level kernel code?


Still easy. You just have to learn different concepts. Someone in web needs to be familiar with databases, some protocols like HTTP, DNS, some basic Unix admin... Someone in low level kernel code will need to know about hardware architecture, operating system concepts, assembly... But the coding skill stays mostly the same. You're still typing code, organizing files, and fiddling with build systems.


Different types of "coding skills" and different types of complexity make these two impossible to put into the same bucket of "still easy". You've probably never done the latter, so you're under the impression that it is easy. I assure you it is not. Grasping the concepts of a new framework vs. doing algorithmic and state of the art improvements are two totally different and incomparable things. Of the ~30M software engineers around the globe, only a handful are doing the latter, and there's a reason for it - it's much, much more complicated.


You are conflating problem solving and the ability to write code. Web dev has its own challenges, especially at scale. There aren't a lot of people writing web servers, designing distributed protocols, and resolving sandboxing issues either.


I'm not conflating one with the other; I am saying that "coding skill" when dealing with difficult topics at hand is not just a "coding skill" anymore. It's part of the problem.


Not knowing C after a course on operating systems will block you from working on FreeBSD. Knowing C without a grasp on operating systems will prevent you from understanding the problem the code is solving.

Both are needed to do practical work, but they are orthogonal.


Exactly, but they are not as orthogonal as you make them out to be. That's just trivializing things too much, ignoring all the nuance. You sound like my uncle, who spent a career in IT but never really touched programming, yet nevertheless has a strong opinion on how easy and trivial programming really is, and how it was never super interesting to him because it's work done by some other unimportant folks. In reality, you know, he just cannot admit that he was not committed enough, or shall I say likely not capable enough, to end up in that domain, and instead he ended up writing test specifications or whatnot. A classic example of the Dunning-Kruger effect.

There is a nuance in what you say. You say it is "still easy" but it is not. It is not enough to take a course on operating systems and learn C to start contributing to an operating system kernel in an impactful way. Apart from the other software "courses" that you need to take, such as algorithms, advanced data structures, concurrency, lock-free algorithms, probably compilers etc., the one which is really significant and is not purely a software domain is the understanding of the hardware. And this is a big one.

You cannot write efficient algorithms if you don't know the intricacies of the hardware, and if you don't know how to make the best out of your compiler. This cannot be taught out of context as you suggest, so in reality all of these skills are actually intertwined and not quite orthogonal to each other.


I do agree with you that there's a skill tree for any practical work to be done. And nodes can be simple or hard. But even if there are dependencies between them, the nodes are clearly separated from each other and some are shared between some skill sets.

If you take the skill tree you need to be a kernel contributor, it does not take much to jump over to database systems development, or writing GUIs. You may argue that the barrier to entry for web dev is lower, but that's because of all the foundational work that has been done to add guardrails. In kernel work, they are too expensive, so there's no hand holding there. But in webdev, often enough, you'll have to go past the secure boundary of those guardrails, and the same skill nodes, like advanced data structures and concurrency, will be helpful there.

Kernel dev is not some mythical land full of dragons. A lot of the required knowledge can be learned while working in another domain (or if you're curious enough).


No, it's not mythical, but it is vastly more difficult and more complex than the majority of other software engineering roles. Entry barriers being lower elsewhere is not something I would argue at all. It's common sense. Unless you're completely delusional. While there are a lot of skills you can translate from the systems programming domain elsewhere, there are not a lot of skills you can translate vice-versa.


The easy part and hard are not mutually exclusive.


Honestly, it really is the easy part of the job. Really truly.

It's difficult when you're first learning but there are definitely much harder skills to learn.


99.9% of the code I write is easy, but that's just because of the sort of work I do. It's not far from basic CRUD. Maybe with pubsubs and caching thrown in for fun.

But that doesn't mean there isn't some tricksy stuff. All the linear algebra in graphics programming takes a while to wrap your head around. Actually, game programming in general I find a bit hard. Physics, state management, multi threading, networking...
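To make the linear algebra point concrete, here's the classic 2D rotation matrix applied to a point (a minimal sketch; real graphics code typically uses 4x4 homogeneous matrices, but the idea is the same):

```python
import math

def rotate_2d(x, y, angle_rad):
    """Rotate (x, y) about the origin by angle_rad, counter-clockwise."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # Matrix-vector product with [[c, -s], [s, c]].
    return (c * x - s * y, s * x + c * y)

# Rotating the point (1, 0) by 90 degrees lands on (0, 1).
x, y = rotate_2d(1.0, 0.0, math.pi / 2)
print(round(x, 9), round(y, 9))  # 0.0 1.0
```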


For most of the tricky stuff the hard part is thinking clearly. Deciding how to approach the problem, what to prioritize, etc

Translating that into whatever language you chose is not that hard.


Right, the actual engineering part is hard. Typing out the code without botching syntax usually isn't very hard. Unless it's a C++ type with a dozen modifiers.


What's easy for some might be truly hard for others.


Sure; but I'm not humblebragging at how talented at coding I am. I'm good at it because I have a lot of practice and experience, but I'm hardly the best.

It's the easiest part because the hard parts of the job are everything else -- you're a knowledge worker so people look to you to make decisions and figure it out. You figure it out and make it work for whatever "it" happens to be.


> It's the easiest part because the hard parts of the job are everything else -- you're a knowledge worker so people look to you to make decisions and figure it out. You figure it out and make it work for whatever "it" happens to be.

If you thought coding was easy, wait till you see the competition for knowledge workers. You're in a spot now where the part that made you valuable (implementing business rules in software) can now be done by virtually anyone.

Doing all the non-coding parts (or, as you put it, "the hard parts") can now be done by almost any white collar worker.


Sure, anyone with the knowledge and experience lol

"Knowledge worker" isn't a cutesy phrase, it means I don't get paid for my time, I get paid for what I know. Contrast that to, say, working retail where you are paid to staff the store from 8-6. It's not a value judgement (retail is hard work) it's a description.

We've already had years and years of predicting the death of software engineering to offshoring and that didn't happen for the same reason. India turns out plenty of fantastic engineers who can do my job. Those people also have better options than staffing some cut rate code factory, and you can't substitute the latter if you need the former. But nice try lol


> "Knowledge worker" isn't a cutesy phrase, it means I don't get paid for my time, I get paid for what I know.

What you appear to be missing is that (if AI coding is as good as we are told) there will be considerably more people with the business knowledge to drive an AI to create their solutions.

The bit that made developers valuable was the ability to actually implement those business rules in software. You will be competing with all those laid off devs as well as those non-developers who have all that business knowledge.

In simple terms, there are two groups of people:

1. Developers, who have some business knowledge, and

2. White collar workers who have no development knowledge.

Previously (or currently, say) the supply of solutions providers came only from group 1. Now they come from both group 1 and group 2.

The supply of solutions providers just exploded; you can expect the same sort of salary that the people in group 2 get, which is nowhere close to what the people in group 1 used to get.


Yeah, nah, that's just a complete misunderstanding of what SWEs actually do lol

I'm not a code factory who occasionally talks to the suits. That isn't the job lol


> I'm not a code factory who occasionally talks to the suits. That isn't the job lol

The problem you are facing is that "person who talks to business" is a huge pool of talent, and now you have to compete with them. Previously your only competition was "person who talks to business and can code".


Nope, those people are not competition lol. I get that you want that to be true for some reason, but it's not.

I do not get paid to write code. No software engineer does. The more senior the engineer, the less code they write.

I don't even really talk to the business folks, that's what a PM is for.

I already told you what I actually do, you're free to read it and learn. Or not, I ain't the boss of you


"I already told you what I actually do, you're free to read it and learn. Or not, I ain't the boss of you"

Nobody listens to someone who talks like this. Nobody learns from someone who talks like this. You're not a leader and you're not a very good software engineer and likely if you boss anyone around, they think you're a clown.


In my experience, the people for whom coding is the hard part are typically promoted out of the way or fired. They never entered a flow like those who considered it easy and addictive. The latter are the pillars of the eng team.


I'm curious which skills you think are much harder than programming.


It depends on whether you mean programming (typing your solution into your text editor) or programming (formalizing your solution to a problem in a way that can be programmed).


Honestly? Anything that requires a lot of manual dexterity because that takes a long time to master, like a trade or art.

People love to lionize it, but honestly I can teach the basics of coding to anyone in a weekend. I love teaching people to code. It can be frustrating to learn but it's just not that difficult. I've mostly written Python and Ruby and Node for my career. They're not super hardcore languages lol.

What is hard is learning the engineering side. I don't get paid for the code I write, I get paid because I get handed a business wishlist and it's my job to figure out how to make that business reality and execute it, from start to finish.

I tell my boss what I'm going to be working on, not the other way around, and that's true pretty early in your career. At my current level of seniority, I can personally cost the company millions of dollars. That's not even a flex, most software engineers can do that. Learning to make good decisions like that takes a long time and a lot of experience, and that's just not something you can hand off.


Totally. I would add that code that "works" is easy to do. Code that is efficient, easy to maintain and safe... that's another story.

But the sad truth is that most software can be, or is, done with shitty code that "kinda works" as long as the CPU is fast enough.


And if you're in one of those jobs, you don't get paid the big bucks.


Sure, but they're going to be stuck writing software for yesterday's problems. As our tools become more powerful, we're going to unlock new problems and expectations that would be impossible or impractical to solve with yesterday's tooling.


>Sure, but they're going to be stuck writing software for yesterday's problems

As long as they get paid for it (or have fun, if it's a personal project), they couldn't care less about that. Tomorrow's problems are overrated.


How feasible would it be to scale this up to several feet in diameter? Like if you wanted to scan furniture? The device itself looks built to hold much smaller items by default.


The dinosaur example lists an iPhone as the source and none of their scanner models. It also says it was recorded at a dinosaur theme park in Germany. This one might be meters long.


In that case I think you just take hundreds of photos by hand, probably with software which varies the focus as you take them so everything has a chance to be in focus.

The device is a way to automate taking those ~300 photos (number from the marigold example).


Scanning furniture is quite a challenge for photogrammetry. Your best option would be NeRF or Gaussian splatting and manually guiding the camera.


Can you please explain a bit more about why it's a difficult photogrammetry challenge, or point me in the direction of resources so I can learn more about it myself? This is an exact project on my projects list, so I'd love to have a better grounding in the topic when I get around to diving in to it.

Edit: I'm more focused on getting a dimensionally accurate/stable model vs an aesthetically pleasing one, if that matters. The hope is to be able to scan a broken chair and design a jig in CAD that I could then 3D print for holding a specific piece in place while everything goes back together.


Most recent Gaussian and NeRF to mesh algorithms are surprisingly good at getting reasonable results for objects that traditional photogrammetry would struggle with. The main challenges are reflective and uniform surfaces (e.g. leather or coated wood). See this overview of what you'd want for perfect photogrammetry: https://openscan-org.github.io/OpenScan-Doc/photogrammetry/b... and also the challenging surfaces lower on that site.


Same, which is why I asked. My naive intuition is that if you had an industrial grade turntable, like the one in the video below, you could hack together a hardware setup.

https://www.youtube.com/watch?v=YWaJEnKSM0w

