Hacker News | Randomdevops's comments

If you say interface, there needs to be an implementation, and Docker already does that with contexts:

  https://docs.docker.com/engine/context/working-with-contexts/
which allows deploying to AWS, Azure, and Kubernetes.
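In practice the flow looks roughly like this (a sketch; the ECS context type came from Docker's cloud integrations, which have since been deprecated, and the context name is my own):

  docker context create ecs myecs    # prompts for an AWS profile
  docker context use myecs
  docker compose up                  # now deploys to ECS instead of the local engine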

Of course each implementation is opinionated; the AWS one, for example, deploys to ECS with ALBs.

And it is all fine for a plain list of services, but once you add configuration it becomes a mess:

  secrets:
    foo:
      name: "arn:aws:secretsmanager:eu-west-3:1234:secret:foo-ABC123"
      x-aws-keys:
        - "bar"
So now you can no longer use your neat docker-compose file to deploy locally on your own Docker engine, as it refers to an external secret.

So to keep the interface clean, you would need to split it into a Service model and a Deployment model: the latter exposing the configuration needed for the target platform, plus the service configuration for the specific environment.
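A sketch of that split using plain Compose override files (the file names are my own invention; the -f merge behaviour and file-based vs. external secrets are standard Compose):

  # service.yml - the platform-neutral Service model
  services:
    app:
      image: myapp:1.2.3
      secrets:
        - foo

  # deploy.local.yml - Deployment model for local development
  secrets:
    foo:
      file: ./secrets/foo.txt

  # deploy.aws.yml - Deployment model for ECS
  secrets:
    foo:
      name: "arn:aws:secretsmanager:eu-west-3:1234:secret:foo-ABC123"
      external: true

  # docker compose -f service.yml -f deploy.local.yml up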

Or something...

Designing something generic that works in all cases is hard, complex, and inevitably opinionated; hence the many platforms, I think.

I posted this a while ago https://news.ycombinator.com/item?id=34225669


Just saw "Borg, Omega, and Kubernetes: Lessons learned over a decade" (2016) (acm.org) https://queue.acm.org/detail.cfm?id=2898444 pass by.

" Dependency Management

Standing up a service typically also means standing up a series of related services. If an application has dependencies on other applications, wouldn't it be nice if those dependencies (and any transitive dependencies they may have) were automatically instantiated by the cluster-management system? "


Yes!

I haven't quite wrapped my head around Nix yet, but https://hydra.nixos.org/build/203347995/download/2/manual/#e...

  ### Databases
  zipcodes = {
    name = "zipcodes"; 
    pkg = customPkgs.zipcodes; 
    dependsOn = {};
    type = "mysql-database";
  };
  ...

  ### Web services

  ZipcodeService = { 
    name = "ZipcodeService"; 
    pkg = customPkgs.ZipcodeService; 
    dependsOn = { 
      inherit zipcodes;
    };
    type = "tomcat-webapplication"; 
  };
  ...
And then in the ZipcodeService you can access your dependency attributes https://hydra.nixos.org/build/203347995/download/2/manual/#e...

  contextXML = ''
    <Context>
      <Resource name="jdbc/ZipcodeDB" auth="Container" type="javax.sql.DataSource"
                maxActive="100" maxIdle="30" maxWait="10000"
                username="${zipcodes.target.container.mysqlUsername}"
                password="${zipcodes.target.container.mysqlPassword}"
                driverClassName="com.mysql.jdbc.Driver"
                url="jdbc:mysql://${zipcodes.target.properties.hostname}:${toString (zipcodes.target.container.mysqlPort)}/${zipcodes.name}?autoReconnect=true" />
    </Context>
  '';
Still wondering if 'zipcodes.target.properties.hostname' follows a fixed schema that could be validated, or if it is just a free-form map...

So yeah, it looks like I'm searching for what they describe as a 'services model', whereas docker compose is what they describe as a 'deployment model'...

Their idea of always doing a new deployment and leaving the old versions running, so you can always do a rollback is interesting.


Not at all unpopular, but they just seem to tackle a single application and maybe a database... AWS Beanstalk, Heroku, Fly.io, ... have the CD, autoscaling, and IaaS down, but once you have multiple applications talking to each other, you're back to managing them manually, it seems.

Clicking together some apps in a GUI is nice for some random tests, but I want some kind of manifest that can be promoted between environments.


I use it extensively already :)

A few labels and magic!


We don't switch technologies. So it's compose all the way to production.

As for Kubernetes, developers currently don't run it locally; they just run the app(s) they're developing from their IDEs and connect to a 'build' environment for the other services/databases (far from ideal).


Being only able to connect to declared dependencies.

So say the application is compromised: it can't connect to the internet, and from there it could only reach the declared database and webservice. Those would need vulnerabilities of their own that could be exploited from that end, hence limiting the blast radius.

So not really worried about physical access, but more along the lines of an RCE (Spring4Shell) probing the rest of the network, or a supply-chain attack that tries to send out data...
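Plain Compose can already approximate that with internal networks (a sketch with made-up service names; internal: true is standard Compose and blocks external routing):

  services:
    app:
      networks: [backend]   # only on the internal network, so no internet egress
    db:
      networks: [backend]
  networks:
    backend:
      internal: true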


In that case, I would recommend something like Cilium (which can run standalone or as part of k8s), where you can set up firewalls per application/node and be alerted whenever something attempts an action against the rules.
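For reference, a policy along those lines might look like this in Cilium's k8s CRD form (a sketch; the labels and port are made up):

  apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    name: app-egress
  spec:
    endpointSelector:
      matchLabels:
        app: myapp
    egress:                  # anything not matched here is dropped and can be alerted on
      - toEndpoints:
          - matchLabels:
              app: mydb
        toPorts:
          - ports:
              - port: "5432"
                protocol: TCP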


Well they are examples :)

But imagine the complete deployment looks like 2 monoliths, a dozen supporting webservices, 3 databases, redis, an elasticsearch cluster, ActiveMQ and a mailserver.

Then for a development track you only want to run monolith A (cost/resources/startup time), with the mailserver being MailHog so you don't accidentally send something to real addresses.

But you can't just split the 2 monoliths in separate compose files, as they share services/dbs. If you did that you would have to wire them in manually.

For local development you might only want to run webservice B (which has a db and redis as dependencies)

And indeed, if you try to manage several files that need to be combined in a specific way, things get messy...

So then things seem to boil down to starting every service separately and relying on service discovery, but then the lifecycles are no longer tied together. If I want to tear them all down, how would I know which ones I spun up in this imaginary context?

Say I want to spin up an on demand integration environment for an application, what else do I start too?
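One answer within plain Compose is the project name: everything started under one -p flag is labelled with it, so the same name can tear it all down again (a sketch; the file and project names are my own):

  # spin up monolith A plus its shared deps as one named project
  docker compose -p feature-x -f shared.yml -f monolith-a.yml -f mailhog.yml up -d

  # later: tear down exactly what that project started
  docker compose -p feature-x down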


> But imagine the complete deployment looks like 2 monoliths, a dozen supporting webservices, 3 databases, redis, an elasticsearch cluster, ActiveMQ and a mailserver.

Makes sense; in that case you probably want to transition your production environment from Compose to a one-node Swarm cluster (Docker running in swarm mode).

Then you just define an ingress server (caddy/nginx/haproxy/traefik), probably some services you want to share (like a database), and use docker stacks (docker stack deploy), where you can keep the config for each stack separate.
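The shape of that setup, roughly (the stack and file names are my own):

  docker swarm init
  docker stack deploy -c ingress.yml ingress   # caddy/nginx/haproxy/traefik
  docker stack deploy -c shared.yml shared     # the shared database etc.
  docker stack deploy -c app-a.yml app-a
  docker stack rm app-a                        # each stack has its own lifecycle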

This approach is very modular yet still relatively simple to maintain once you understand concepts around docker swarm and how docker stacks work.

If you want to be more robust, you can describe your deployment in simple Ansible recipes, but I found that approach a bit cumbersome and try to stick with a simple README file for deployment as long as possible.

For more complex deployments I'm toying with the idea of diving into HashiCorp Nomad, but that class of tools requires a dedicated DevOps team, which makes sense for large multi-node deployments. Swarm lets you keep the stack very simple: most devs already know docker-compose, and the learning curve for maintaining Swarm clusters is still approachable compared to properly diving into the DevOps world.

