
Assuming the query language for the graph DB you have in mind is declarative like SQL, I recommend templated queries. I have found this technique scales well with query complexity, makes it relatively trivial to "get to the query" when something needs to be debugged in detail outside of the app, and makes performance-oriented optimization work far easier.

I've had my share of headaches with the various flavors of ORM and GraphQL and always come back to query templates, e.g. MyBatis in the JVM ecosystem or Go's template package. There is still value in abstracting this layer from the REST web service interface so the app<->database connection can change without disrupting REST clients. Parameter structs/classes can be reused between the REST layer and the DB+template layer to avoid a lot of rote translation. It seems simple and repetitive, but in my experience it actually saves time compared to GraphQL/ORM complexity as apps and query complexity scale.
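
To make that concrete, here's a minimal sketch of the templated-query idea using Go's text/template. The PersonQuery struct and the Cypher-flavored query are hypothetical stand-ins for whatever your graph DB speaks. The key design choice: only structural pieces (labels, ORDER BY columns) go through the template, while values stay as bound parameters so user input never touches the query text.

    package main

    import (
        "bytes"
        "fmt"
        "text/template"
    )

    // PersonQuery holds parameters for a hypothetical graph query. A struct
    // like this can be shared between the REST layer and the DB+template
    // layer to avoid rote translation.
    type PersonQuery struct {
        Label   string
        OrderBy string
    }

    // A Cypher-flavored template; the actual query language depends on your
    // graph DB. Only structural pieces are templated; $minAge remains a
    // bound parameter supplied at execution time.
    var personTmpl = template.Must(template.New("person").Parse(
        "MATCH (p:{{.Label}}) WHERE p.age >= $minAge RETURN p" +
            "{{if .OrderBy}} ORDER BY p.{{.OrderBy}}{{end}}"))

    func main() {
        var buf bytes.Buffer
        if err := personTmpl.Execute(&buf, PersonQuery{Label: "Person", OrderBy: "name"}); err != nil {
            panic(err)
        }
        // The rendered query is plain text, easy to copy out and debug
        // directly against the database.
        fmt.Println(buf.String())
    }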


Simplicity is a feature. Sure, how complicated can effective enums really be? But Go's general philosophy is to think hard (and sometimes for a long time) before adding every bell and whistle.

I have a far easier time delving into previously unknown Go code for the first time compared to something like Scala (or even Java). Go is a solid language for those who value that and want to enable the same experience for others.


Simplicity can be taken too far. Take this philosophy to its logical conclusion and you end up with a close cousin of Forth or BASIC.

Also, we have been doing computer language design for quite a while now. This isn't a new frontier. The deficiencies in Go aren't in areas of "oh, we never thought of that!", but in very well known areas with known solutions.

I find Go code is obscured with housekeeping code that isn't necessary in better languages.


That simplicity can lead to more complex code that is harder to read and support.

For example, to encode a JSON structure with a dynamic top-level key you need to write a custom marshaller OR marshal twice. That's... awful. Bonkers-level insane.
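
For anyone who hasn't hit this: a minimal sketch of the custom-marshaller route in Go. The Payload and Wrapper types are made up for illustration; the point is that struct tags alone can't express a key chosen at runtime.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Payload is a hypothetical inner structure.
    type Payload struct {
        Name string `json:"name"`
        Age  int    `json:"age"`
    }

    // Wrapper should serialize as {"<Key>": <Value>}, with Key chosen at
    // runtime; struct tags cannot express that.
    type Wrapper struct {
        Key   string
        Value Payload
    }

    // MarshalJSON is the "custom marshaller" route: build a one-entry map
    // under the dynamic key and marshal that.
    func (w Wrapper) MarshalJSON() ([]byte, error) {
        return json.Marshal(map[string]Payload{w.Key: w.Value})
    }

    func main() {
        b, err := json.Marshal(Wrapper{Key: "user-42", Value: Payload{Name: "Ada", Age: 36}})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // {"user-42":{"name":"Ada","age":36}}
    }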


About 10 years ago we set up a pretty slick integration from our office wifi controller to Google's account directory using WebDAV and RADIUS. It allowed us to have a WAP for insiders who could authenticate with their Google-issued one-time passwords (needed because a normal Google login requires MFA if configured) and access our internal network with their individual Google accounts (no sharing of wifi credentials; no separate wifi login to manage for offboarding).

It was certainly Less Secure by modern standards, but it saved a boatload of time for everyone to be right on our internal network as soon as they connected to wifi instead of having to VPN in.


Agree. The main benefit of environment variables is portability. They work pretty much the same way on every platform I have encountered. As soon as a system gets into files, it's possible to get bogged down in the complexity of Windows vs. Unix, file permissions, SELinux, etc. The simplicity is often worth more than the extra security theoretically possible with well-managed files.
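
As a tiny illustration of that portability, a Go sketch; DATABASE_URL is just a hypothetical variable name. The same code runs unchanged on Windows, Linux, and macOS, with no paths or permissions involved.

    package main

    import (
        "fmt"
        "os"
    )

    // getenvDefault reads a config value from the environment, falling back
    // to a default. No file paths, permissions, or OS-specific handling.
    func getenvDefault(key, fallback string) string {
        if v, ok := os.LookupEnv(key); ok {
            return v
        }
        return fallback
    }

    func main() {
        // DATABASE_URL is a hypothetical example variable.
        dsn := getenvDefault("DATABASE_URL", "postgres://localhost:5432/dev")
        fmt.Println("connecting to", dsn)
    }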


Advertisers not spending as much


I wondered about this exact risk as I learned about the "overemployed" phenomenon. It feels like a matter of time before credit bureaus offer this as a service to the many corps that already pay them for credit checks. It's hard to be legitimately employed without a data trail these days.


>It feels like a matter of time before credit bureaus offer this as a service to the many corps that already pay them for credit checks.

Not only does this exist, Equifax itself offers it as a service, called The Work Number.

You can access your record here - https://employees.theworknumber.com/?_ga=2.101852741.5983227...

From the article -

>For a company like Equifax, this trend represents a potential business opportunity. The company sells a product called Talent Report Employment Monitoring so other companies can keep tabs on potential moonlighting by their employees, too.


I've really enjoyed Golang for this reason. The powers that be take a cautious approach before adding major new features, even when the community complains loudly about what's missing (e.g. generics, for many years until recently). Go code I wrote years ago is easy to adapt to new versions and ways of doing things, even when I have to switch gears to other languages for months or years at a time. Go is predictable.


How estimation is marketed matters, too. Don't let your stakeholders think of it as an over-estimate. It's an estimate that considers the needs of the full software development lifecycle: automating tests, developing monitoring, refactoring to clean up old tech debt, etc.


Makes sense. It must also be nice to be able to convey the dev lifecycle factors that go into estimates.

Where I work, refactoring is a trigger word. We are advised never to say it when talking to management, and it must never show up in any planning material, meetings, etc.

Someone made the mistake of including refactoring efforts in their plans and they were asked to scrub it out.


This all boils down to how much you trust your management with the long-term health of the business. Not refactoring may actually be the right call. Maybe it will become a necessity later, maybe not.

If management just wants to flex their authority, you can inflate your initial estimates. It's only unfair when one side treats an estimate as a negotiation and the other side doesn't.


These are real problems, but there are also mitigations, particularly when it comes to scaling people. In many orgs, engineering teams are divided by feature mandate, and management calls it good enough. In the beginning, the teams are empowered and feel productive within their focused mandates; it feels good to focus on your own work and largely ignore other teams. Before long, a Tragedy of the Commons effect develops.

I've had better success when feature-focused teams have tech-domain-focused "guilds" overlaid. Guilds aren't teams per se, but they provide a level of coordination and, more importantly, permanence to communication among technical stakeholders. Teams don't make important decisions within their own bubble, and everything notable is written down. It's important for management to be bought in and to value participation in these non-team activities when it comes to career advancement (not just pushing features).

In the end, you pick your poison, but I have certainly felt more empowered and productive in an org with effective collaboration on a smaller set of shared applications than in the typical application soup that develops under full team ownership.


Once ancient content no longer produces enough ad revenue for the companies storing it, I suspect they will support it only in paid tiers for users who really want it. Something like this is already happening in Google's paid tiers: each user is valuable as part of Google's advertising audience, but the economics no longer make sense once those users keep many GB on Google's servers.


Ancient content can be moved away from edge servers and more to archival.


This already happens at YT. It's pretty common knowledge that after 300 views, the handling of a video changes a lot, including moving from a cheap storage medium to the medium used for popular content.


Already happening. Ever notice that YouTube videos with low view counts take a really long time to load?


Presages an interesting digital future in which nothing ever truly goes away; it just gets slower and slower. An endless inward migration that never quite hits the stopping point, and which could theoretically be reversed if only the slowed content garnered enough attention.


Endlessly falling towards an event horizon of obscurity, but never reaching it (from the perspective of an external observer).


I'd expect a significantly different scale. Specifically, I'd expect normal tiers ranging from "instant" to "several seconds", then a huge gap, then a rock bottom tape archive.

From a more zoomed-out look, you could simplify it to only two speeds: the fast speed takes 0-15 seconds, and the slow speed takes minutes to hours. Any content that's been accessed once or twice in the last week or month would be on the normal tiers. Extremely dead content could fall to the tape tier, but it has nowhere further to fall, and it would take only a tiny amount of activity to rescue it.

I don't really see a reason for there to be a continuous falloff in speed. There's not really anything between hard drives and tape for responsiveness, either existing or proposed, that I'm aware of. Nor is there anything slower than tapes.
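
A minimal sketch of that two-speed model in Go; the tier names, the one-month window, and the lastAccess bookkeeping are my own illustrative assumptions, not any real service's policy.

    package main

    import (
        "fmt"
        "time"
    )

    // Tier is the storage class a piece of content lives on.
    type Tier int

    const (
        Hot  Tier = iota // disk/SSD: serves in 0-15 seconds
        Tape             // offline tape: minutes to hours, nowhere further to fall
    )

    // tierFor implements the two-speed policy: anything touched within the
    // window stays on the normal tiers; everything else falls to tape. A
    // single access is enough to rescue it on the next pass.
    func tierFor(lastAccess, now time.Time) Tier {
        const window = 30 * 24 * time.Hour // roughly a month; an assumed threshold
        if now.Sub(lastAccess) <= window {
            return Hot
        }
        return Tape
    }

    func main() {
        now := time.Now()
        fmt.Println(tierFor(now.Add(-24*time.Hour), now) == Hot)         // accessed yesterday: hot
        fmt.Println(tierFor(now.Add(-2*365*24*time.Hour), now) == Tape)  // dead for two years: tape
    }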


I suppose that's what it would look like currently, but as content increases and the aggregate long tail of unpopular material grows ever larger, there could be shifts in desirable storage characteristics that fit different economic niches. An extremely dense, extremely cheap, and extremely slow WORM storage device could find a place somewhere in the future. Cheap as in orders of magnitude cheaper.

The next immediate step beyond today's online tapes could be offline tapes with online indexes and robotic retrieval systems; these exist today. The continuous falloff would be a matter of the priority ranking given to content requests (not merely FIFO), so ever less popular and less relevant content gets shoved further back in the robotic retrieval queue. A recently iced bit of content might be top priority for the tape loader, while something not touched in years might sit hours down the queue. The continuous decline isn't defined by the storage media but by the capacity of the retrieval systems. Speed would keep slowly declining as content grows and the low economic value of that content makes investing in increased capacity impractical.
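
A sketch of that non-FIFO retrieval queue in Go, using container/heap; the request fields and the recency-based ranking are illustrative assumptions about how such a system might prioritize.

    package main

    import (
        "container/heap"
        "fmt"
        "time"
    )

    // request is a queued tape-retrieval job for one piece of content.
    type request struct {
        contentID  string
        lastAccess time.Time // proxy for popularity: more recent means higher priority
    }

    // queue orders requests so recently iced content is fetched first and
    // content untouched for years sits further back in the robot's queue.
    type queue []request

    func (q queue) Len() int           { return len(q) }
    func (q queue) Less(i, j int) bool { return q[i].lastAccess.After(q[j].lastAccess) }
    func (q queue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }
    func (q *queue) Push(x any)        { *q = append(*q, x.(request)) }
    func (q *queue) Pop() any {
        old := *q
        r := old[len(old)-1]
        *q = old[:len(old)-1]
        return r
    }

    func main() {
        now := time.Now()
        q := &queue{}
        heap.Push(q, request{"wedding-video", now.Add(-70 * 365 * 24 * time.Hour)})
        heap.Push(q, request{"recently-iced-clip", now.Add(-48 * time.Hour)})
        // The recently iced clip jumps ahead despite arriving later.
        for q.Len() > 0 {
            fmt.Println(heap.Pop(q).(request).contentID)
        }
    }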

Eventually you get to a point in some far off future where the retrieval time for some obscure bucket of bits is measured in significant fractions of a human lifetime, where a dying grandfather requests a video of his wedding 70 years earlier only to have it arrive just in time for his own grandson's dying moments decades later.

I think I've gone too far imagining unlikely slow storage dystopian futures though, so I'm going to stop now before I start ranting about the Slow God who needs only enough access requests from the masses of his adherents to prioritize his retrieval from the depths of cold storage. But Dante Alighieri warned of what was stored in the coldest depths and it was no god... Oh God what hath this comment awakened?!?!

Okay now I'm really done.


I don't really see a reason to prioritize content that hasn't been accessed in 2 months over content that hasn't been accessed in 200 months, when picking what tape to get next. Either way there is only one person waiting.

And I'm already assuming the tapes are offline, because online tapes would just be a waste of money.

Another issue is finding enough content suitable for very high latency systems. Right now they seem to basically just be for backups.


Tape and cheap disk are about the same price per GB, but tape is more stable over long periods and doesn't have to be powered (although power for very large, slow and rarely accessed disk is low).


> There's not really anything between hard drives and tape for responsiveness, either existing or proposed, that I'm aware of. Nor is there anything slower than tapes.

It makes me wonder what a storage device would look like that is cheaper than HDD, has similar or better storage density, and allows random access, trading off slower speed.


Blu-ray disc?


Unfortunately, even putting eight double-sided discs in a cartridge only gives you 5.5 TB, which is a joke compared to 26 TB hard disks. It costs a fortune as well. https://pro.sony/ue_US/products/optical-disc-archive-cartrid...


They cost more than a hard drive and are less dense. And those "archival disc" cartridges Sony makes are even more expensive, with drives that cost more than tape drives.


Good point, this is what I would also expect.

It's also quite telling how, in most of these services, old content is very well hidden in the UI and sometimes nearly impossible to get to.

