I’m always wary of abstractions like this. I understand the desire to simplify the process of creating new apps, but tools like this take an opinionated stance on how to couple and interact with all of these dependencies like Npgsql, meaning you might end up stuck with an older version of Npgsql until your Aspire package catches up.
Developers are also abstracted one more layer from their dependencies which isn’t always a good thing when debugging or for understanding how the dependencies work.
The manifest file seems like a step in the wrong direction. I see no reason to learn this .NET specific thing when I can learn to use docker-compose or the like.
The otel dashboard is nice but there are alternatives.
Having used Aspire during pre-release, it's so nice as a developer to be able to work on deployments in a real programming language where I can set breakpoints and right-click down into the code to see what's going on.
I'm definitely never going back to working in opaque yaml-variants that can't make up their mind about whether they want to be a programming language or not.
People were already doing that with PowerShell and as you'd expect, from a very complicated language which lets you do many things in very complicated ways, absolutely nothing good came out of it.
Then people started to migrate to YAML, because although it's tedious, it's actually very dumb and the declarative nature of it keeps things very very simple. Sure the domain it is applied to is complex but infra will always be complex.
It looks like we are just going in circles. Now people think that using a complex language to build complex infrastructure is a good thing again, until they use it in anger and get burnt, and then they'll arrive at the next YAML variant in 10 years.
If I understand correctly, the benefit of using this over something like Compose is the parity with production.
In my experience Compose is great for local development, but doesn't hold up well for complex architectures in production.
I work for a company [0] building something similar, but mostly agnostic to programming language. One thing I particularly like about the approach is the reduction of time debugging pipeline issues. I find increasingly that that is where my time goes -- most unfortunate.
Aspire has a code-based application model that is used to represent your application (or a subset of your application) and its dependencies. This can be made up of containers, executables, cloud resources and you can even build your own custom resources.
During local development, we submit this object model to the local orchestrator and launch the dashboard. This orchestrator is optimized for development scenarios and integrates with debuggers from various IDEs (e.g. VS, VS Code, Rider, etc.; it's an open protocol).
For deployment, we can take this application model and produce a manifest, which is basically a serialized version of the app model with references. Other tools can use this manifest to translate these Aspire-native assets into deployment-environment-specific assets. See https://learn.microsoft.com/en-us/dotnet/aspire/deployment/m...
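To make the idea concrete, here is an illustrative sketch of what a couple of manifest entries could look like. The resource names, image tag, and project path are made up for this example; the authoritative schema is in the documentation linked above.

```json
{
  "resources": {
    "cache": {
      "type": "container.v0",
      "image": "redis:7.2",
      "bindings": {
        "tcp": { "scheme": "tcp", "protocol": "tcp", "transport": "tcp", "containerPort": 6379 }
      }
    },
    "apiservice": {
      "type": "project.v0",
      "path": "ApiService/ApiService.csproj",
      "env": { "ConnectionStrings__cache": "{cache.connectionString}" }
    }
  }
}
```

A deployment tool reads entries like these and maps each resource type to its own native assets, e.g. a Kubernetes Deployment for the container and a build-and-push step for the project.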
This is how we support Kubernetes, Azure, and eventually AWS, etc. Tools translate this model into their native lingua franca.
Longer term, we will also expose an in-process model for transforming and emitting whatever manifest format you like.
Aspire doesn't provide parity with production. Locally it has its own container host and production deployment is outside of Aspire. Aspire has a bit more flexibility to run things that are not containers.
You can update dependencies on your own. The model is quite extensible and NuGet is great. You can override container image tags, or any settings we default to. That's the great part about using code!
All the discussions around header vs. path versioning remind me of this post by Troy Hunt of Have I Been Pwned.
> In the end, I decided the fairest, most balanced way was to piss everyone off equally. Of course I’m talking about API versioning and not since the great “tabs versus spaces” debate have I seen so many strong beliefs in entirely different camps.
I think what you're doing with Pulumi is the right answer and it's only a matter of time before this becomes the norm. The author's examples could easily be done with plain ol' JS/ES/TS, with far more extensibility and customization when the need arises.
I also feel this is where JSX got it right. Instead of creating yet-another-templating-language (looking at you Angular!), they used JavaScript and did a great job of outlining how interpolation works. Any new templating language is always going to be missing some key feature you expect out of a general programming language and your customers will continue to ask for more features.
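As an illustration of that point (plain JavaScript instead of a bespoke template DSL; the function and data below are invented for this sketch), ordinary language features like `filter`, `map`, and ternaries stand in for what a templating language would need dedicated directives for:

```javascript
// Plain-JS "templating": ordinary language features replace DSL directives.
// Where a template DSL needs special loop/conditional syntax, here it's
// just Array.prototype methods and a ternary expression.
function renderList(items) {
  const rows = items
    .filter((item) => item.visible)            // conditional rendering
    .map((item) => `<li>${item.name}</li>`)    // iteration
    .join("");
  return rows ? `<ul>${rows}</ul>` : "<p>No items</p>";
}

const html = renderList([
  { name: "alpha", visible: true },
  { name: "beta", visible: false },
  { name: "gamma", visible: true },
]);
console.log(html); // <ul><li>alpha</li><li>gamma</li></ul>
```

Anything else you might want (sorting, grouping, early returns, helper functions) is already in the language, which is exactly the feature-treadmill a template DSL can never escape.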
Paired with TypeScript, we would have the clarity of a declarative language with the power and flexibility of a real language that is also easy to extend and navigate.
Marketing Attribution | Senior Frontend Engineer | Evanston, IL (with remote days) | Full Time | On Site | http://marketingattribution.com
Marketing Attribution was founded and is run by Ross-boy Link, a seasoned statistician and entrepreneur who’s been doing data science since before the term was coined. Ross continues to actively participate in the development of the product given his background in analytics and you’ll see him sling some SAS or Python to experiment with a new way to crunch numbers.
We develop and support highly automated analytical software that uses cloud-based statistical analysis of large marketing datasets to measure the incremental sales that result from various media, allocate marketing spend to the most efficient media, and connect to media buying systems to execute media buys.
In short, we take the client’s sales and marketing data, run analytics on it (our secret sauce), and from those results, tell the client where they should start and/or stop spending on marketing (TV, Radio, Internet etc.).
You're coming in on the ground floor. This is an entirely greenfield project with no legacy code to maneuver around. The frontend was developed about 5 months ago and we're looking to accelerate the development of new features.
Marketing Attribution | Lead and Senior Data Engineer and Lead Frontend Engineer | Evanston, IL (with telecommute days) | Full Time | On Site | http://marketingattribution.com
Marketing Attribution was founded and is run by Ross-boy Link, a seasoned statistician and entrepreneur who’s been doing data science since before the term was coined. Ross continues to actively participate in the development of the product given his background in analytics and you’ll see him sling some Python to experiment with a new way to crunch numbers.
We develop and support highly automated analytical software that uses cloud-based statistical analysis of large marketing datasets to measure the incremental sales that result from various media, allocate marketing spend to the most efficient media, and connect to media buying systems to execute media buys.
In short, we take the client’s sales and marketing data, run analytics on it (our secret sauce), and from those results, tell the client where they should start and/or stop spending on marketing (TV, Radio, Internet etc.).
We're hiring our first engineers (#2, #3, and #4). Full description at:
You're coming in on the ground floor. This is an entirely greenfield project with no legacy code to maneuver around. You'll be responsible for building everything from the ground up.
Interesting is one word for that, given the main thesis that venture capital is universally to blame for the failure of centralized things we rely upon (with zero evidence to support this assertion), and that building another centralized service with less functionality is somehow the better answer. Speaking personally, I completely rely upon PDFs and images and things accompanying my notes because of the nature of the work I do (I have about 4,700 pages of research and almost a gigabyte of imagery all tied together in a Scrivener project for a single piece of work, for example), and the frozen feature set so proudly advertised smacks of "people need to take notes exactly the way I do," which is an immediate turn-off for me. That it's taken further and somewhat arrogantly called "standard" notes really chills me on liking this product at all, given that it completely and intentionally omits useful things to a lot of people -- including me. Then there are extensions, of course, pushing out all the useful features to extensions which will work even less over time.
Maybe some people will like this, but the motivations and decisions just seem ill thought out so far, particularly when it's "VC is going to kill Evernote, so you should rely upon a hostname I personally administer instead and you get exactly what I give you" as the main call to action when I go to the page.
I agree. I spent quite a few years working in Digital Preservation, and what we found is that as evil as you paint them, a big corporation with lots of money will outlast an individual with pure motivations. That's why we can still read DOC files today.
Speaking realistically, investing your personal data in this project is currently a much greater risk than using Evernote.
The app doesn't work without an active web connection -- it's just an empty shell. So if the website goes down, I've still lost my data unless I back it up in advance.
I don't agree with the author much. OK, the Windows Evernote client is a bit bloated, I agree, but I can still immediately enter a new note and immediately search notes via a shortcut. And there is a normal Windows list view where I can see a lot of notes at once.
On the contrary, this is like a web application optimized for tablets converted into a Windows app. Frankly, the list of notes looks horrible on desktop.
Anyway, I look forward to this project, and it could be successful in the end, in a similar way to Visual Studio Code. Good luck.
I had no idea that this is a sensitive topic. I personally have family members that are civil engineers and understand the difficulty with acquiring a PE. I went ahead and modified the title.
Sorry, just to be clear, I didn't mean to criticize your article; it's just one example of a general cultural tendency that I was somewhat unhappy about.
Your response is a bit over the top. I'm in no way criticizing anyone or placing myself above anyone. I've been a junior engineer and I've under-engineered. I've been a mid-level engineer and I've over-engineered. Only through feedback from senior engineers did I finally get to see code I wrote stand the test of time 2 years later with minimal to no change.
As I mentioned in the post, no matter how many years we've been doing this, we'll continue to over-engineer and under-engineer because software is hard. The article is meant to lay out the mistakes I've learned from, and for engineering managers, our responsibility is to help those we manage avoid the pitfalls we ran into.
Your points couldn't be more accurate. In any organization, the quality of the function that dictates the work to be done (be it Product Managers or Business Analysts) heavily impacts the success of an engineer's work. Some organizations choose to remove this level of management [1]; others ensure that only former engineers fill this role.
It's a very important but broad topic. I chose not to explore it in the post because it deserves its own discussion.