What are the downsides of using Terraform? We are currently in the process of redoing a lot of our infrastructure and are considering Terraform.
We had a bad experience in the past (probably 12-18 months ago) with AWS and Terraform, especially when it comes to manual changes to resources in environments where manual changes for testing purposes are common (think changing security group rules, for example). It left us with a broken state, unable to apply changes to our Terraform deployment without tracing the manual changes and undoing them, so I'm a bit cautious about moving forward with Terraform. Have you experienced this recently? I'm intrigued by your comment and would love it if you could expand on it.
Ideally, don't allow manual changes to happen. It's not that hard to set up for different environments and testing, so IME it hasn't been much of an issue.
However, if you really can't change your ways of working, which I understand, then try the `terraform refresh` command. I've been importing state recently, to move some of our own infrastructure over to TF, and have found it quite useful for things like manual security group changes. Basically, I'm building things up bit by bit, and when one of my states gets out of sync, I update the local config and run that command, which brings the state back in line.
In general, once you get your workflows sorted out and running for a while, you're unlikely to have any major issues with Terraform. Just make sure to use remote state and version it whenever you can (for example, turn on versioning on the S3 bucket if you use S3 as the remote backend).
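For reference, the remote state setup is tiny; a minimal sketch, assuming an S3 bucket with versioning enabled (the bucket name, key, and region below are made up):

```hcl
terraform {
  backend "s3" {
    # Hypothetical bucket with versioning turned on, so every state
    # write is kept and a bad apply can be rolled back to an older copy.
    bucket = "example-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

With that in place, running `terraform refresh` after a manual change pulls the real resource attributes back into the versioned state file.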
I never heard any excitement from people when we told them we would test some automation on their request, but otherwise I love being reminded of this. We tend to slack on this in our team, for all the reasons mentioned in this post.
What we have found helps quite a bit: after writing the initial runbook for a task, write a second script (usually pretty short, maybe two dozen lines) that asks you questions and then generates customised steps. We later reuse these functions in the actual automation.
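A minimal sketch of what such a helper can look like (the environment names, instance ID, and commands are made up for illustration):

```shell
# Hypothetical runbook helper: takes the answers to the runbook's
# questions and prints the concrete steps for this particular case.
gen_steps() {
  env="$1"       # e.g. "staging" or "prod"
  instance="$2"  # e.g. an EC2 instance ID
  echo "Runbook steps for $env:"
  echo "1. aws ssm start-session --target $instance"
  echo "2. journalctl -u myapp --since '1 hour ago'"
}

gen_steps staging i-0abc1234
```

Because the steps live in a function, the later automation can call `gen_steps` (or the pieces behind it) directly instead of re-reading the runbook.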
I'm a bit confused about this one... we have been using SSM Session Manager for quite some time now, and this looks like it does the same. We also export all logs during the session with SSM, and you can see which user initiated the session. What am I missing here?
For dev environments SSH is essential, but in production environments I 100% agree with using SSM Session Manager instead of SSH. Getting terminal access to a production server is sometimes necessary, but it ought to be temporary, fully logged, and treated as an exception rather than routine. SSM Session Manager provides all of that without requiring SSH keys and SSH firewall rules in production.
SSM Session Manager is basically an HTTP wrapper over a shell. You have to use the browser for SSM, which mostly works until it doesn't; I've sometimes had trouble copy-pasting into it.
This new service is basically managed SSH, so things like port forwarding will work. With SSM you can't do port forwarding, because it isn't SSH-aware.
I love that AWS is starting to care more about providing these services. For context, we have basically been building this for 2 years in our company internally to provide hundreds of “compliant by default” accounts. Every company seems to do it themselves.
What I personally find very frustrating is that you can't migrate an existing organisation into this. I'd love to get rid of some of our account provisioning, but this would basically mean starting over with a brand-new AWS Organization, which is impossible for us.
It is still quite a hassle to manage many accounts (and the resources you need in them), so I hope this service will sooner or later help us with that.
PS: if anyone is over at re:Inforce and wants to talk about anything AWS Orgs & accounts, feel free to mail me (profile)!
I also worked on a team that built something similar, and I've seen it done in other companies. With services like this, and others like Transit Gateway, it's getting a lot easier to manage multiple accounts and VPCs. I haven't tried AWS Control Tower yet, but I am hoping it gives easy visibility into all the accounts in one place. With AWS accounts, once you assume a role into an account, you can't see other accounts without switching into them again.
This is one area where I think GCP got it right. By using organizations and projects within one account instead of having parent and child accounts, it's quite a bit easier to see what's going on. And a parent account has a very different role from child accounts, so it makes sense to treat them as separate things.
I'm actively using[0] sr.ht with a paid membership. I really like the service, and the build system is super simple and easy to use while fulfilling everything I need for my private development. The owner has been super responsive to feedback/issues. I encourage other people to try it out as well, and I am looking forward to seeing how this service grows.
I feel you and I think it is more of a "React + TypeScript" problem than a TypeScript problem.
We started using Angular 2 (now... 6?) with TypeScript for a project last year, and I never had big issues with TypeScript.
Last month, one of our teams started a React project and, given our earlier success with TypeScript, opted to do the same with their project. I've walked them through a few things but found myself getting frustrated a lot with weird TypeScript errors. In particular, typing your Props was such a nightmare that we resorted to "any" a lot more than I'd like.
The project is not using TypeScript 3, so I am unsure whether that would get rid of some of the problems, but React + TypeScript was just a frustrating experience for us.
We're using TS3 with ours, and the problems are definitely still there.
I think the most immediate and obvious pain point for me is typing HOCs. It's basically a matter of rearranging how you apply them until TS can infer a type for you. Using `compose` instead of applying them individually avoids that shuffling, but it produces strange types (different from what TS infers if you apply the HOCs manually yourself in an order that makes it happy), which I suspect will cause type errors down the road when we convert the files that use it to TS as well.
Do I have the ability to somehow specify "use git+ssh for this dependency" with the new modules system?
Right now it seems nearly impossible to do that with Go, other than manually cloning the repositories into the correct path. We can't host our code publicly, and at my company we have to use SSH to clone the repositories.
It is especially frustrating in our CI/CD process, where we need to manually clone our packages to set everything up.
That's one thing that frustrated me about dep. go get is deficient in this area too. I understand the need to namespace packages, but requiring them to be hosted (or have metatags on a page) at the location the import path specifies in order to be able to pull them down is insanity.
To make matters worse, dep tried to stuff too much of a DSL into the package specification on the command line. `example.com/path/pkg@hashish` made it impossible to specify `git@example.com/path/pkg` as the location because the parser wasn't robust enough, and the package location parser wasn't/isn't smart enough to honor `ssh://git@example.com/path/path` as a way to be explicit about how you wanted this done.
dep did work for our use case, if you edited the toml file directly, once I made a two-character change to a regular expression in v0.3.0. We still use dep but stopped upgrading at that version; I'm hoping Go modules make non-public repos easier, but I'm not holding my breath.
> Go’s module system should not have to give consideration to the particular transport you want to use.
> Its references are canonical and it’s up to you to set up the relevant process for it to retrieve the source for those references.
Git has a proper notion of references; it cleanly separates the references of remotes from the transport. Go uses just `https://` links and expects the website to serve a single `<meta>` tag containing a VCS clone URL (i.e., a transport) to use. VCSes have had separation and multiplicity of transports for a long time, but Go will deal with only one URL (i.e., one transport), with no way for the user/distributor to specify a preference otherwise.
For example, everything from git.kernel.org to github.com lets the user choose which URL to clone with. The remote is not supposed to know or dictate a single transport.
I think `go get`'s limited method of transport discovery is just a hack that got released into production and stagnated in that form. It was a perfectly fine hack for public repos (on the internet or on Google's intranet), but it never got any features (let alone documentation) for repos that need an authenticated way to clone them. The fact that git has a nifty way to rewrite URLs is just luck.
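For reference, the rewrite feature in question is git's `url.<base>.insteadOf` setting (a sketch; example.com is a placeholder for your private host):

```shell
# Tell git to rewrite the https:// URLs that `go get` constructs into
# SSH URLs, so git authenticates with your SSH key instead of prompting.
git config --global url."ssh://git@example.com/".insteadOf "https://example.com/"

# git now reports the rewrite it will apply:
git config --global --get url."ssh://git@example.com/".insteadOf
# prints: https://example.com/
```

None of this is discoverable from `go get` itself, which is exactly the complaint here.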
My entire point is that VCSes have supported multiple transports since before Go came about. I never claimed Go should support HTTP and SSH only. In fact, I never claimed it should support either. I claimed it shouldn't force the VCS host to choose for the user which clone URL (and thus transport) to use.
It’s hardly magic. It’s a defined feature of git. If one couldn’t avail himself of the defined features of git, then there’d be no reason to build on top of it in the first place.
> If one couldn’t avail himself of the defined features of git, then there’d be no reason to build on top of it in the first place.
Sure build on top of it, but don't expect your users to implicitly know the defined, yet obscure, features of git you build on.
> It’s hardly magic.
I didn't claim this is a magic feature in git. I meant that `go get`-ing a private library by letting `go get` try to clone from https URLs, while git silently changes them to git:// URLs in a completely separate layer underneath, is magic.
How is an application developer to know this git (not Go) feature exists in the first place? The application developer is looking to use `go get`, and is frustrated that `go get` only downloads via https, asking them for a username and password combo every time. `go get` has no documentation on making it use git URLs.
Only when the application developer looks online for a solution to this do they find some StackOverflow post detailing this workaround. Or other workarounds, like the insecure option of telling git to save your https credentials.
Sure, a knowledgeable application developer can put whatever quirks their build system needs in their documentation; but a developer ignorant of these workarounds shouldn't have to go beyond `go get`'s documentation in the first place.
I currently use submodules inside vendor/ and set each dependency's remote to the SSH URL. You can then simply use `git clone --recursive` to get everything, including over SSH.
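Roughly, the setup looks like this; the demo below uses throwaway local repositories in place of the real remotes (in practice the submodule URL would be your private `ssh://git@...` clone URL):

```shell
tmp=$(mktemp -d)

# Stand-in for a private dependency (would normally live behind SSH):
git init -q "$tmp/libfoo"
git -C "$tmp/libfoo" -c user.email=me@example.com -c user.name=me \
  commit -q --allow-empty -m "initial"

# The consuming project, vendoring the dependency as a submodule:
git init -q "$tmp/app"
git -C "$tmp/app" -c protocol.file.allow=always \
  submodule --quiet add "$tmp/libfoo" vendor/libfoo
git -C "$tmp/app" -c user.email=me@example.com -c user.name=me \
  commit -q -m "vendor libfoo as a submodule"

# A fresh clone pulls everything, submodules included, in one go:
git -c protocol.file.allow=always clone -q --recursive "$tmp/app" "$tmp/app2"
```

(`protocol.file.allow=always` is only needed here because newer git blocks local-path submodules by default; with SSH URLs it isn't required.)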
That being said, I share your hope that this will remain smooth in the new Go modules implementation. $GOPATH is always a hurdle for newcomers to wrap their heads around, in my experience.