
One pro tip, as I now somehow have a commercial bottling license these days: get pre-hydrated gum Arabic. Much easier to work with. Almost everybody who messes this up does so at the gum-hydration stage. Blend it with any dry ingredients, like sugar, before using.

If you can’t source it, I’m not going to tell you that you SHOULD pretend to be a bottling company and ask a gum provider to send you some free samples, but you could, and the amount they send you will last the rest of your life. TIC Gums is pretty awesome and, if you’re into frozen desserts, has some incredible gum mixtures for ice creams, sorbets, etc.

Also, consider just using water-soluble flavor concentrates and skipping emulsification altogether. That’s what most pros do, and it’s why Sprite isn’t cloudy like it would be if you used oils. My favorite suppliers that sell in consumer and prosumer quantities are Apex Flavors and Nature’s Flavors.

This probably won’t work for cola, as I think some of those ingredients keep all of their flavor molecules in the oils, but as a general rule: if you can buy it at the store and it is clear, it is made with water-soluble flavorings. If it is brown, it probably isn’t, hence the caramel color additive.


We’ve also learned this lesson the hard way. These are now the clauses we require in every project we do:

- Payment is due X days after receipt of invoice, or immediately after the consultant has addressed any quality issues, whichever is sooner

- Late payment shall incur interest at 8% above the BoE base rate and a late fee of 100 GBP as per the UK Late Payment Legislation. Partial payments on invoices shall apply to late fees, interest, and then principal, in that order.

- In the event of a late payment the invoice for the next deliverable shall immediately fall due.

- The consultant shall be entitled to shift deadlines on deliverables to account for any work disruption caused by a late payment, without incurring any liability.

- Payment shall be made in X currency, or an exchange rate at X date on Oanda.com shall apply.

- The client is responsible for any bank fees incurred by their bank or any intermediary bank. In the event of a SWIFT transaction it shall be made with the OUR payment code.

- The jurisdiction in the event of a conflict shall be England and Wales. Neither party shall be bound by arbitration.

- The client and consultant shall both indemnify the other up to the total value of the contract and shall not under any circumstance be liable beyond X GBP.
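As an illustration, the partial-payment allocation order in the late-payment clause (fees, then interest, then principal) can be sketched in a few lines; the amounts below are hypothetical:

```python
def allocate_payment(payment, fees_due, interest_due, principal_due):
    """Apply a (possibly partial) payment to late fees, then interest,
    then principal, in that order."""
    remaining = payment
    applied = {}
    for name, due in (("fees", fees_due),
                      ("interest", interest_due),
                      ("principal", principal_due)):
        paid = min(remaining, due)
        applied[name] = paid
        remaining -= paid
    applied["unapplied"] = remaining  # anything left over
    return applied

# A 500 GBP payment against 100 in fees, 40 in interest, 1000 principal:
print(allocate_payment(500, 100, 40, 1000))
# → {'fees': 100, 'interest': 40, 'principal': 360, 'unapplied': 0}
```

The ordering matters: paying fees and interest first means the principal keeps accruing interest until the debt is actually cleared, which is exactly the incentive the clause is designed to create.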

We also no longer share downloadable links of our deliverables until they are paid up. They get a view/comment only link for reports/data etc.

We’ve found that clients that aren’t willing to accept these terms won’t pay you either way.

We determine the net days on the invoice based on the credit rating of the client. Ironically, the good clients pay within 2-3 days normally, and the difficult ones are very “long tail”. About 1% of contracts tend to fully or partially default on their payments.

We’re in a particularly credit poor industry but our average delay due to late payment is 23 days. Those clients where we stop delivery pay on average 11 days sooner than those contracts where we don’t stop delivery.

This is based on around 2,000 invoices sent over the last 5 years.


Every time I get a new Mac, I run these commands to reduce the spacing between menu bar icons. Lets you fit at least 2x the number of items in the menu bar.

```
defaults -currentHost write -globalDomain NSStatusItemSpacing -int 2
defaults -currentHost write -globalDomain NSStatusItemSelectionPadding -int 2
```


There is a lot of documentation from Apple on how all of this works, but this is indeed expected behaviour. A way to make this smoother would have been:

  1. Doing the password reset
  2. Reboot straight back into recovery
  3. Update your new password back into your old password
  4. Boot into macOS, your default keychain will unlock but you'll still have to re-authenticate to iCloud since your machine-user identity combo will no longer match with what iCloud expects. (not sure if this is part of Octagon Trust, but there are various interesting layers to this)
Check the escalation path of key revocation for example where you don't just have longer time delays but also stricter environments where new attempts can be made (near the end): https://support.apple.com/en-gb/guide/security/sec20230a10d/...

There are a number of much more in-depth technical guides and specs, but just listing out random articles (or the Black Hat talk(s)) would probably rob someone of a nice excursion into platform security.


I haven't tested it, but this should be slightly simpler, and work better for subsequent review iterations (reviewing what changed once PR is updated):

    jj new main -m review
    jj new -m pr
    jj git fetch
    jj restore --from=big-change@origin .
Then keep squashing from `pr` to `review` as described in the article. When the PR gets a new version, rerun the last 2 commands.

Some things I've learned over the years:

1. do not show a slide full of code. The font will be too small to read. Nobody will read it

2. don't read your slides to the audience. The audience can read

3. don't talk with your back to the audience

4. make your font as big as practical

5. 3 bullet points is ideal

6. add a picture now and then

7. don't bother with a copyright notice on every slide. It gets really old. Besides, you want people to steal your presentation!

8. avoid typing in code as part of the presentation, most of the time it won't work and it's boring watching somebody type

9. render the presentation as a pdf file, so any device can display it

10. email a copy of your presentation to the conference coordinator beforehand, put a copy on your laptop, and phone, and on a usb stick in your pocket. Arriving at the show without your presentation can be very embarrassing!

11. the anxiety goes away

12. don't worry about it. You're not running for President! Just have some fun with it


Kratos is awesome, especially alongside Hydra, Oathkeeper, and Keto. Super powerful combo, if a little intimidating at first. There’s a LOT of configuration involved, but that’s to be expected if you want to host your own Auth0 replacement.

Their dynamic forms stuff is really cool too, always liked how they chose to go about that. Only complaint I really ever had is that while their docs were overall serviceable, I remember some areas were pretty lacking and I had to dig really far to find answers to some fairly common issues.


My general opinion, off the cuff, from having worked at both small (hundreds of events per hour) and large (trillions of events per hour) scales for these sorts of problems:

1. Do you really need a queue? (Alternative: periodic polling of a DB)

2. What's your event volume and can it fit on one node for the foreseeable future, or even serverless compute (if not too expensive)? (Alternative: lightweight single-process web service, or several instances, on one node.)

3. If it can't fit on one node, do you really need a distributed queue? (Alternative: good ol' load balancing and REST APIs, maybe with async and retry semantics)

4. If you really do need a distributed queue, then you may as well use a distributed queue, such as Kafka. Even if you take on the complexity of managing a Kafka cluster, the programming and performance semantics are simpler to reason about than trying to shoehorn a distributed queue onto a SQL DB.
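Alternative 1 above (periodic polling of a DB) is often dismissed as unfashionable, but it's a few dozen lines. A minimal sketch using sqlite3 with an illustrative table name and schema; a real multi-worker Postgres setup would use `SELECT ... FOR UPDATE SKIP LOCKED` instead of the status flag:

```python
import sqlite3

def claim_next_event(conn):
    """Claim the oldest pending event by flipping a status flag inside a
    transaction, then return it (or None if the queue is empty)."""
    with conn:  # sqlite3 connection as context manager = one transaction
        row = conn.execute(
            "SELECT id, payload FROM events WHERE status = 'pending' "
            "ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        conn.execute("UPDATE events SET status = 'claimed' WHERE id = ?",
                     (row[0],))
    return row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, "
             "payload TEXT, status TEXT DEFAULT 'pending')")
conn.execute("INSERT INTO events (payload) VALUES ('hello')")
print(claim_next_event(conn))  # → (1, 'hello')
```

A worker loop that calls this every few seconds, processes the row, then marks it done (or back to pending on failure) covers a surprising share of "we need a queue" requirements.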


CloudTrail events should be able to demonstrate WHAT created the EC2s. Off the top of my head, I think it's the RunInstances event.

And while you are being sarcastic, this is the Right Way to use queues.

Upload file to S3 -> S3 event triggers an SNS message (for fanout, if you need it) -> SNS -> SQS -> SQS triggers the ETL jobs.

The ETL job can then be hosted using Lambda (easiest) or ECS/Docker/Fargate (still easy and scales on demand) or even a set of EC2 instances that scale based on the items in a queue (don’t do this unless you have a legacy app that can’t be containerized).

If your client only supports SFTP, there is the SFTP Transfer Service on AWS that will allow them to send the file via SFTP and it is automatically copied to an S3 bucket.

Alternatively, there are products that treat S3 as a mountable directory and they can just use whatever copy commands on their end to copy the file to a “folder”
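At the end of the S3 -> SNS -> SQS chain above, the ETL job receives doubly wrapped JSON. A minimal sketch of unwrapping it, following the standard AWS event shapes (the sample event here is hand-built, so no AWS SDK is needed):

```python
import json
from urllib.parse import unquote_plus

def s3_objects_from_sqs_event(event):
    """Unwrap the SQS -> SNS -> S3 envelopes an ETL Lambda receives and
    yield (bucket, key) for each uploaded object. Keys arrive
    URL-encoded in real S3 events, hence the unquote_plus."""
    for sqs_record in event["Records"]:
        sns_envelope = json.loads(sqs_record["body"])    # SNS notification
        s3_event = json.loads(sns_envelope["Message"])   # original S3 event
        for s3_record in s3_event["Records"]:
            yield (s3_record["s3"]["bucket"]["name"],
                   unquote_plus(s3_record["s3"]["object"]["key"]))

# Hand-built sample event with the same nesting:
inner = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "data.csv"}}}]}
sns_body = json.dumps({"Type": "Notification",
                       "Message": json.dumps(inner)})
event = {"Records": [{"body": sns_body}]}
print(list(s3_objects_from_sqs_event(event)))  # → [('uploads', 'data.csv')]
```

If you enable SNS raw message delivery, the middle envelope disappears and the SQS body is the S3 event itself, which simplifies this to a single `json.loads`.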


Fun read, but you probably could have installed mitmproxy with brew and pointed your IntelliJ instance through the proxy (you can either set it in your settings or run it with the HTTP_PROXY or HTTPS_PROXY environment variables). This lets you intercept the requests, Wireshark-style, and diagnose. You could honestly just intercept the interface traffic using Wireshark directly, but the learning curve is steeper.

What people often don't realize is that in a big business system a user may have no permission to read the raw data of some table, yet may have permission to view a report that includes aggregated data from that same table, so report permissions cannot be deduced from base CRUD permissions.

If such a SIAAS:

    - Checks that the query is a SELECT query (can be tricky with CTEs; requires a proper SQL parser)
    - Allows editing said query by superuser only
    - Can be parametrized, including an implicit $current_user_id$ parameter
    - Has its own permissions, and users can run the query only if they have them
then it's safe enough. I've seen and applied such "edit raw SQL in an HTML form" systems many times. It's super flexible, especially combined with some CSV-to-HTML, CSV-to-PDF, or CSV-to-XLS rendering engine.
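To illustrate why the SELECT check is trickier than it looks, here is a deliberately naive sketch. As noted above, a proper SQL parser is needed in practice; this heuristic can still be fooled, e.g. Postgres allows writable CTEs like `WITH x AS (DELETE ... RETURNING *) SELECT ...`:

```python
import re

def looks_like_select(sql: str) -> bool:
    """Naive allow-list check: strip comments, reject multi-statement
    input, allow an optional leading WITH clause, then require SELECT.
    A sketch only -- use a real SQL parser in production."""
    s = re.sub(r"--[^\n]*|/\*.*?\*/", " ", sql, flags=re.S)  # drop comments
    s = s.strip().rstrip(";").strip()
    if ";" in s:  # "SELECT 1; DELETE ..." must not slip through
        return False
    return re.match(r"(?is)^(with\b.*?\)\s*)?select\b", s) is not None
```

Even this short version needs the comment-stripping and multi-statement guards; the number of edge cases is exactly why the superuser-only-editing and per-query-permission rules above pull most of the security weight.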

Former Head of Security GRC at Meta FinTech, and ex-CISO at Motorola. Now, Technical Founder at a compliance remediation engineering startup.

Some minor nits. One can't be SOC 2 "certified". You can only receive an attestation that the controls are designed (for the Type 1) and operating effectively (for the Type 2). So, the correct phrase would be that Excalidraw+ has received its "SOC 2 Type 1 attestation" for the x, y, z Trust Services Criteria (usually Security, Availability, and Confidentiality; companies rarely select the other two, Privacy and Processing Integrity, unless there's overlap with other compliance frameworks like HIPAA, etc.). The reason this is important is that phrasing matters, and the incorrect wording signals a lack of maturity.

Also, as others have said, no one "fails" a SOC 2 audit. You can only get one of four auditor opinions - Unmodified, Qualified, Adverse, and Disclaimer (you want to shoot for Unmodified).

FYI, the technical areas that auditors scrutinize most heavily are access management (human and service accounts), change management (supply chain security and artifact security), and threat and vulnerability management (including patch management, incident response, etc.). Hope this information helps someone as they get ready for their SOC 2 attestation :-)

Similarly, the report areas you want to be very careful about are Section 3: System Description (make sure you don't take on compliance jeopardy by signing up for an overly broad system scope), and Section 4: Testing Matrices (push back on controls that don't apply to you, or the audit test plan doesn't make sense - auditors are still stuck in the early 00's / "client server legacy data center" mode and don't really understand modern cloud environments).

Finally, if you're using Vanta/Drata or something similar - please take time to read the security policy templates and don't accept it blindly for your organization - because once you do, then it gets set in stone and that's what you are audited against (example - most modern operating systems have anti-malware built in, you don't need to waste money for purchasing a separate software, at least for year one - so make sure your policy doesn't say you have a separate end point protection solution running. Another one, if you have an office that you're using as a WeWork co-working space model only, most of the physical security controls like cameras, badge systems etc either don't apply or are the landlord's responsibility, so out of scope for you).

Hope this comment helps someone! SOC 2 is made out to be way more complicated (and expensive) than it actually needs to be.


In most scenarios, you are no longer running with multiple users on the same machine. Either this is a server, which has an admin team, or a client machine, which _usually_ has a single user.

That isn't 100% true, and local privilege escalation matters, but it is a far cry from remote code execution or remote privilege escalation.


Note about GitHub Windows Actions runners: I think I understand what is wrong with them, though it's somewhat conjecture since I don't actually know how it works internally.

It looks like the free CI runners have the C: drive pointing to a disk restored from a snapshot, but oftentimes the snapshot hasn't finished restoring by the time your workflow runs, so I/O can be very slow even if you don't need to read from the still-frozen parts of the disk. Software run inside workflows will often do heavy R/W on the C: drive, but it's better to move anything that will be written to disk, e.g. caches, to D: if possible. This often leads to much better I/O performance and more predictable runtimes, particularly when there isn't a lot of actual compute to do.


The fact that so few people blog these days makes blogging even more influential than it used to be.

You can establish yourself as something of a global expert on some topic just by writing about it a few times a month over the course of a year!

Don't expect people to come to your blog. Practice https://indieweb.org/POSSE - Publish (on your) Own Site, Syndicate Elsewhere - post things on your blog and then tweet/toot/linkedin/submit-to-hacker-news/share-in-discord etc.

Also, don't worry too much about whether you get traffic at the time you write something. A lot of the reputational value comes from having written something that you can link people to in the future. "Here are my notes about that topic from last year: LINK" - that kind of thing.

There's a lot to be said for writing for its own sake, too. Just writing about a topic forces you to double-check your understanding and do a little bit more research. It's a fantastic way of learning more about the world even if nobody else ever reads it.


Here's an off-the-cuff summary:

First you have to make space in your life for it. You need long blocks of time for deep work.

The first idea you pick is unlikely to work, so pick something and start moving. Many of the best products come out of working on something else.

When building, optimize for speed. Try to get something out in the world as quickly as possible and iterate from there.

Pick a tech stack you're familiar with, that you'll be fastest in.

Try to spend half your time on marketing/sales, even if you hate it.

The most important skill you can have is resilience. Not giving up is the best path to success. This is hard because there is so much uncertainty in this career path.

It's worth it! The autonomy and freedom are unmatched by any other career.


I’ve often said that it is the speed of deployment that matters. If it takes you 50 minutes to deploy, it takes you 50 minutes to fix a problem. If it takes you 50 seconds to deploy, it takes you 50 seconds to fix a problem.

Of course all kinds of things are rolled up in that speed to deploy, but almost all of them are good.


I've got you:

Hashtables are faster than linked lists in places where lookups are frequent and the item count is high.
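A quick, self-contained way to see this. Python's built-in list stands in for the linked list here (it's an array underneath, but membership testing is the same O(n) element-by-element traversal), while dict is the hashtable:

```python
from timeit import timeit

n = 50_000
items = list(range(n))        # stand-in for a linked list: O(n) scan
table = dict.fromkeys(items)  # hashtable: O(1) expected lookup
target = n - 1                # worst case for the linear scan

list_time = timeit(lambda: target in items, number=200)
dict_time = timeit(lambda: target in table, number=200)
print(f"list scan: {list_time:.4f}s   dict lookup: {dict_time:.4f}s")
```

On any machine the dict lookup should come out orders of magnitude faster; the gap widens linearly as n grows, which is the whole point of the claim.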


This is called prototyping, which is a valuable part of the design process; some people call it "pathfinding".

These are all inputs into the design. But a design is still needed, of the appropriate size; otherwise you're just making things up as you go. You need to define the problem you are solving and what the solution is. Sometimes that's a 1-page doc without a formal review; sometimes it's many more pages with weeks of reviews and iterations with feedback.

Don't forget: "weeks of coding can save hours of planning" ;)


The end paragraph:

> A Psychological Tip

> Whenever you're called on to make up your mind, and you're hampered by not having any, the best way to solve the dilemma, you'll find, is simply by spinning a penny.

> No—not so that chance shall decide the affair while you're passively standing there moping; but the moment the penny is up in the air, you suddenly know what you're hoping.


I’m already doing this, but:

- All of Wikipedia English

- Download as many LLM models as I can, plus the latest version of Ollama.app and all its dependencies.

- Make a list of my favorite music artists and torrent every album I can.

- Open my podcast app and download every starred episode (I have a ton of those that I listen to repeatedly).

- Torrent and libgen every tech book I value. Then, grab large collections of fiction EPUBs.

- Download every US Army field manual I can get my hands on, especially the Special Operations Medic manual, which is gold for civilian use in tough times.

- Download every radio frequency list I can for my area of the country.

- Download digital copies of The Encyclopedia of Country Living by Carla Emery, Where There Is No Doctor, and Where There Is No Dentist.

I already have paper versions of almost all of these but it’s handy to have easily-reproducible and far more portable digital copies.


It really depends on how mature your org and stacks are. This is generally how I would do it.

1-20 people - password manager (bitwarden, 1pass, etc.)

20-30+ people - SSO

50+ people - start assigning real roles to your SSO schema

1-5 services - secrets in CircleCI and password manager is good enough.

5+ instances - use a secrets manager like Vault.

10+ instances - start using a secrets manager locally as well for dev. Start to consider using well scoped IAM policies for each of your services and team members.

15+ instances - start to think about adding additional zero trust boundaries.

Of course, this is very rough. Depending on your regulatory/compliance requirements and how much revenue you’re bringing in and from who, you might have to do this stuff sooner. In general, it should go:

1. Centralize secrets even if you can’t easily revoke people (password manager).

2. Make things easily revocable and centralized (sso).

3. Make roles and access finer grain (RBAC).

4. ^ with automation between all of these steps where it makes sense.

Something I would warn anyone of is building your own auth/secrets core tooling. This stuff is incredibly complex because of the edge cases and it’s just not worth the risk you take on by saving money unless you have a really good core business reason to roll your own. It’s also dangerous to prematurely optimize and pay the SSO tax too early. You will find that a lot of engineers appeal to emotion when it comes to risk. Something extremely helpful is going through and actually assigning a security risk score for all your systems. This might be tedious, but it brings a lot of clarity to the conversation of “what do we want to build when? What risk can we take on at any given stage?”
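The "assign a security risk score to all your systems" exercise above can be as simple as a likelihood-times-impact table. A toy sketch, with made-up systems and ratings purely for illustration:

```python
# Systems and 1-5 ratings below are invented for illustration; the point
# is forcing an explicit, comparable number instead of gut feeling.
systems = {
    "prod database":  {"likelihood": 3, "impact": 5},
    "ci runner":      {"likelihood": 4, "impact": 3},
    "marketing site": {"likelihood": 4, "impact": 1},
}

def risk_score(ratings):
    """Classic likelihood x impact scoring (1-5 scales, max 25)."""
    return ratings["likelihood"] * ratings["impact"]

for name, r in sorted(systems.items(),
                      key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: {risk_score(r)}")
```

Even a crude table like this turns "what do we build when?" from an emotional argument into a ranked backlog you can revisit each quarter.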


We're using a similar approach in a PHP application by leveraging https://github.com/spaze/phpstan-disallowed-calls

In essence, within each domain we have defined: a) a Public folder with code that other domains can use, and b) domain folders (src and infra) with code that only the given domain can use. This way developers know not to change public contracts (or if they do change them, they understand they're changing public code), be it method signatures or interfaces, and are free to refactor (b), because those classes are not publicly accessible and can change at any time. Even extending classes defined this way is disallowed.

This becomes helpful when operating within the confines of a monolith application, but with different teams owning different parts of it. Trying to use a non-public part of another domain is prevented at commit level (developers will not be able to commit their work) rather than at run time, though.


I've been enjoying using Dokploy recently.

https://github.com/Dokploy/dokploy

It's similar to Dokku but has a nice web UI, makes it easier to deploy Docker/Compose solutions and auto LetsEncrypt functionality is built-in by design (not as a separate plugin).

I've also built a GitHub Actions workflow to trigger off a deploy to apps hosted on it (basic cURL command but works well). https://github.com/benbristow/dokploy-deploy-action

And put together some pre-configured Compose files you can deploy for various apps. https://github.com/benbristow/dokploy-compose-templates



I read this book called "How Big Things Get Done." I've seen my fair share of projects going haywire and I wanted to understand if we could do better.

The book identifies uniqueness bias as an important reason for why most big projects overrun. (*) In short, this bias leads planners to view their projects as unique, thereby disregarding valuable lessons from previous similar projects.

(*) The book compiles 16,000 big projects across different domains. 99.5% of those projects overrun their timeline or budget. Other reasons for slipping include optimism bias, not relying on the right anchor, and strategic misrepresentation.


For runtime cost analysis, you could try Steampipe [1] with its Powerpipe "thrifty" [2] mods. They run dozens of automatic checks across cloud providers for waste and cost-saving opportunities.

If you want to automatically make these changes (with optional approval in Slack) you can use the Flowpipe thrifty mods, e.g. AWS [3].

It's all open source and easy to update / extend (SQL, HCL).

1 - https://github.com/turbot/steampipe

2 - https://hub.powerpipe.io/?objectives=cost

3 - https://hub.flowpipe.io/mods/turbot/aws_thrifty


Does WezTerm support an equivalent of iTerm's "hotkey window"?

For those unfamiliar, that's a window tied to a show/hide keybinding which when shown floats above all other windows, making a terminal instantly available everywhere - a feature I could live without, but don't care to. I'd love to switch for all of WezTerm's other features, but without that it's simply a nonstarter for me.


We've been using Zalando's Patroni operator in k8s at scale for years (mainly OCP, but pure k8s as well). Features like in-place major version upgrades are unmatched by any of the alternatives we checked. Closest is CNPG (cloudnative-pg), which is second best and might take the crown within a year. (For companies, the best part is that CNPG has enterprise support available, named pg4k, a fork of CNPG.)

But above all, I would warmly recommend that anyone first do their best to use CockroachDB (or YugabyteDB if you like it more) instead. The benefits of a distributed/horizontally scaled DB usually outweigh the effort of moving to it (which should not be big, as it uses the same pg client/protocol). And it's free if you don't need enterprise features like partitions, etc.

