High latitude launch sites are better for polar or highly inclined orbits.
Consider a launch from the equator targeting a pure polar orbit (i.e. going over the poles). You not only have to reach orbit, but also cancel out the "equatorial direction" component of your initial velocity, which takes extra delta-v.
Polar orbits are good for a single satellite to see the entire Earth's surface (basically scanning over a different part each orbit), which is often desirable depending on the purpose. For example something that wants to measure the entire atmosphere or photograph the entire surface, etc.
This isn't true, because it's a vector addition of velocity vectors, not scalars.
Reaching the orbit you want takes a delta-v of about 9.0 km/s, in the polar direction,
↑ 9.0 km/s
The speed of the Earth's rotation, which you would want to cancel, is at most 0.46 km/s at the equator, in the azimuthal direction,
→ 0.46 km/s
So your delta-v vector is ↑ 9.0 km/s pointing north, plus ← 0.46 km/s pointing west (cancelling the Earth's rotation), for a vector sum of magnitude
sqrt( (9.0 km/s)^2 + (0.46 km/s)^2 )
= 9.01 km/s
Even though you're negating 460 m/s of speed, it costs only about 12 m/s of additional delta-v. That's a negligible difference of roughly 0.1%.
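The arithmetic above can be checked in a few lines of Python (the 9.0 and 0.46 km/s figures are the approximate values from this comment, not precise mission numbers):

```python
import math

orbital_dv = 9.0   # km/s, rough delta-v to reach low Earth orbit
rotation = 0.46    # km/s, Earth's rotation speed at the equator

# The two components are perpendicular, so they add as vectors
total = math.hypot(orbital_dv, rotation)
penalty_m_s = (total - orbital_dv) * 1000  # extra cost, in m/s

print(round(total, 2))     # → 9.01
print(round(penalty_m_s))  # → 12
```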
Launching into a polar orbit is equally easy, from any latitude!
⁂
The converse isn't true. If you're trying to reach an equatorial orbit, the delta-v into that orbit is collinear with the Earth's rotation,
(→ 9.0 km/s), (→ 0.46 km/s)
That's a scalar sum! An azimuthal launch is 460 m/s cheaper on the equator, since you inherit the full speed of the Earth's rotation. Latitude is a significant factor here: the closer you are to the equator, the easier the launch.
(More profoundly, the orbit you're trying to reach might not even pass through the point you're trying to launch from. This happens if the target orbit inclination is smaller than your launch site's latitude (in absolute value)).
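That geometric constraint is easy to encode. A toy check (function name is mine; the example figures are the well-known ISS inclination of about 51.6° and Cape Canaveral's latitude of about 28.6° N, and this ignores dog-leg maneuvers and later plane changes):

```python
def directly_reachable(inclination_deg: float, latitude_deg: float) -> bool:
    """An orbit's ground track only reaches latitudes up to its inclination,
    so a site can launch directly into the orbit only if its latitude
    (in absolute value) does not exceed the inclination."""
    return abs(latitude_deg) <= inclination_deg

print(directly_reachable(51.6, 28.6))  # ISS-like orbit from ~28.6° N → True
print(directly_reachable(0.0, 28.6))   # equatorial orbit from ~28.6° N → False
```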
I think it's just the best available location in Europe for it. Previously, the same company was doing launches from European territory (French Guiana), but not from continental Europe, which seems to be the new direction.
If you're wondering because Spain is southerly and hence closer to the Equator, consider this: the extreme southernmost point of continental Spain (latitude 36° N) is still farther north than most of North Carolina in the USA [0].
Only the Canary Islands have latitudes comparable to Cape Canaveral's. While on the one hand a Canarian launch site could perhaps be a huge economic and productive boon for the Islands, I think it might conflict with their unique ecosystems, tourism, and even (maybe?) with their current use as a base for telescopes, which is valuable thanks to their high mountains and low light pollution.
I wonder if it's because (1) there's no space left on the Med coast in Spain for this, or (2) a Spanish launch site does not offer enough angle for moderately inclined orbits, because (3) it would still be dropping hardware on central Europe for nearly all common orbit profiles.
F# was for me the best functional language when I looked at rewriting a Ruby on Rails app.
I wanted to go with a functional language, as it seems to better fit my thinking and reasoning, and I looked at Haskell, Ocaml, Scala, F#.
Being a stranger to Microsoft technologies, F# was the least likely to be chosen, but easily became the first choice.
Haskell's purity made it hard to adopt (for me), Ocaml's ecosystem is subpar (there wasn't even a clear choice for a library to interact with postgresql, and I couldn't install the latest version due to its reliance on an obscure tool whose name I forgot and didn't get help on the forum), and Scala seems complex...
F# was surprisingly easy to get started with. The community is mis-managed by a corporate-minded approach (requiring people to become a member of the F# Software Foundation to get access to the official Slack!), but its members are friendly, smart and ready to help. The ecosystem is great, with access to all the dotnet libraries (some complain there's a mismatch, as most of those are developed for use with C#, but I rarely got in trouble for using them).
There are also great libs and frameworks available. Like https://github.com/SchlenkR/FsHttp to easily interact with http servers, to the point that I find it easier to use than a dedicated library. Or https://github.com/CaptnCodr/Fli , to run commands. And last but not least, https://www.websharper.com/ is the best web framework I have encountered across all ecosystems. Their reactive approach to web ui really allows me to develop complex interfaces in a maintainable way.
This became a longer message than I thought, probably due to my enthusiasm for the language. For complete transparency, the situation is not perfect, and in my experience the tooling is not the best.
Curious, since you don't expand on it on the blog: in what way did Haskell's purity make it difficult for you?
Having used Haskell in production for a bit now, I don't even notice its purity. Most functions are in some kind of I/O context, making it similar to other languages, except with the option of running without I/O capabilities for functions that shouldn't need them.
For me, Haskell's image and ideal of purity are what made it difficult when I started out. I tried learning the language by reimplementing a program I'd previously done imperatively, which was (in hindsight) obviously hard to do in a plainly pure way. I ended up learning about zippers and knot-tying to do something in a less efficient and more confusing way than just using something like STArray, because I had this idea from reading about Haskell that this was not only a good way to do things, but would be magically fast because GHC. (It was not.)
These days I'd just do such a task more-or-less imperatively in Haskell, and I would be well guided by the types in doing so. But I also feel like you have to make a few such mistakes if you want to get a good intuition and taste for when it's good to do things purely and when imperatively.
You gotta remember people are often picking languages based on what they can easily find out about it and extrapolating/guessing about what problems they'll run into with their expected use.
A few years ago on here I had an interesting conversation with someone who wasn't going to use rescript for something because they didn't like how it handled object types. I can't remember ever using an object type in rescript; we all just convert js objects to record type in the extern binding. But that's not information easily available to someone who has never used the language.
Same thing here I think. If you don't already have familiarity with this paradigm, it's hard to imagine what using an IO monad for side effects is like. It's not easy to tell how hard it'll be to learn it, how much it may affect the rest of your code, etc. It's easy to imagine someone (shit even me a few years ago) going "eh I'll take the language with the big easy escape hatches just in case."
> You gotta remember people are often picking languages based on what they can easily find out about it and extrapolating/guessing about what problems they'll run into with their expected use.
This is a good observation.
As someone who writes a lot of Lisp, I'm inclined to agree as the amount of people that have never written any Lisp yet immediately reject it over syntax over fears that it somehow hampers development is a (to me) surprisingly large number of people.
If I recall correctly, one of the motivating factors for Rescript was to reduce the perceived/real distance between Reason and JS in order to attract more JS devs, as Reason was so heavily associated with OCaml.
I honestly don't remember, as it was +/- 6 years ago. I had started learning Haskell and came to that conclusion. Maybe now that I am more versed in FP I would arrive at another conclusion? I don't know.
Another thing that was hard for me to grasp was the special operators like =<<, ., $, etc. I was using Xmonad, but those operators create a barrier to understanding exactly what happens in the file.
In the end, F# was in my (personal) experience much more approachable, and it let me learn the functional concepts along the way.
There was a large group of folks that left Ruby on Rails for Elixir (even has a similar looking syntax), yet it wasn't on your list of languages to consider. Just curious, was there a particular reason?
I should have mentioned in the message, but I was looking for a strongly typed language.
I was an avid user of dynamically typed languages, but that particular Ruby on Rails app became unmaintainable, and part of the blame lies with the dynamic typing. I hoped that using a statically typed language would make it easier to maintain a complex app in the long term.
And I must say that it totally materialised, to the point that I don't want to develop in dynamically typed languages anymore.
Here's an example: as I said in my original message, I was a complete stranger to the dotnet ecosystem, and I learned the F# language at the same time. I decided to develop the app as a library project to be used by the web app. I completely missed the prevalence of the async approach in the dotnet ecosystem, and all my code was synchronous. One day, about half-way into the project, I realised I needed to switch to async code. Had this happened in a dynamically typed project, it would have been hell for me. Maybe it's me who can't grasp a project well enough, but I need the type guardrails to find my way in large refactorings. And with strong types, large refactorings can be done confidently. They don't replace tests, but they make the refactoring process much smoother.
The app is open source and its code is at: https://gitlab.com/myowndb/myowndb
It doesn't have a lot of users, not least due to a lack of marketing and user-experience polish. But I am satisfied with what I learned developing it!
This is a really minor point, but "strongly typed" and "statically typed" are not interchangeable terms. In the context of your comments here, you are exclusively interested in the static nature of the type system, rather than anything about the "strength" of it (which is something totally different and inconsistently defined).
Typically, "static typing" refers to types being checked at compile time rather than at runtime; in other words, the analysis can happen before the program is run, which gives you some degree of confidence in what the behavior will be when actually running it. The opposite of this is "dynamic typing", which means that the type-checking happens while the program is running, so you don't have the up-front guarantee that you won't end up having an error due to the wrong type being used somewhere.

In practice, this isn't a strict binary where a language has to be 100% static or 100% dynamic. For example, Java is mostly statically typed, but there are some cases where things are a bit more dynamic (e.g. the compiler allowing certain casts that might not end up being successful at runtime, at which point they throw an exception). On the other hand, Python traditionally has been a dynamically typed language, but in recent years there have been various efforts to allow type annotations so that some things can be checked in advance, which moves it a bit in the static direction (I'm not familiar enough with the current state of the ecosystem to have any insight into how much this has moved the needle).
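The Python situation mentioned above looks like this in practice: annotations are ignored at runtime, but a separate checker such as mypy can flag the mismatch before the program runs (a sketch; actually running a checker over it is assumed, not shown):

```python
def double(x: int) -> int:
    return x * 2

# At runtime Python ignores the annotation: passing a string "works"
# via sequence repetition, which is exactly the kind of surprise a
# static checker would flag before the program runs.
result = double("ab")
print(result)  # → "abab"

print(double(3))  # → 6, the intended use
```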
On the other hand, "strong typing" isn't quite as standardized in type systems terminology, but broadly speaking, it tends to be used to describe things like how "sound" a type system is (which is a well-defined concept in type systems theory), whether or not implicit type coercions can occur in the language, or other things that roughly translate to whether or not it's possible for things to get misused as the wrong type without an explicit error occurring. Two examples that are commonly cited are JavaScript[0], with its sometimes confusing implicit conversions that allow things like adding an empty object and an empty array and getting the number 0 as the result (but not if added in the other order!), and C, where it's possible to interpret a value as whatever the equivalent underlying bytes would represent in an arbitrary type, depending on the context it's used in.
[0]: I normally don't like to link to videos, but this famous comedic talk demonstrating a few of these JavaScript quirks is so thoroughly entertaining to watch again every few years that I feel like it's worth it so that those who haven't seen it before get a chance: https://www.destroyallsoftware.com/talks/wat
> static typing: 2 + "2" does not compile/parse (e.g. Python vs mypy, Typescript vs JS)
I think this example is not correct, because static typing doesn't affect how values of different types interact. And while I don't know of any statically typed language where specifically `2 + "2"` is a valid expression, statically typed languages definitely can be weakly typed: the most prominent example is C, where one can combine values of different types without explicitly converting them to the same type (`2 + 2.0`).
I believe strong/weak and static/dynamic are orthogonal. And my examples are:
- Strong: `2 + "2"` is an error,
- Weak: `2 + "2"` makes 4 (or something else; see the language spec),
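Python is a concrete example of the dynamic-but-strong quadrant: the code below parses fine (no static check stops it), but at runtime the mixed-type addition raises instead of coercing:

```python
def add(a, b):
    return a + b

try:
    add(2, "2")            # strong typing: no implicit int-to-str coercion
except TypeError as e:
    print("TypeError:", e)

print(add(2, 2.0))         # → 4.0; numeric types do interconvert
```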
Dynamic typing can forbid the latter (at runtime), but it's implementation dependent. There's a further distinction, Latent typing, which is where types are associated with values rather than variables.
But a dynamic language can have types associated with variables, and it can forbid changing those types after their types have been checked the first time.
> But a dynamic language can have types associated with variables, and it can forbid changing those types after their types have been checked the first time.
`auto` is still using static typing, and is a tool for type inference. A dynamically typed version might look equivalent but would behave differently, failing at runtime rather than compile time.
These days there's Gleam[0], as a strongly typed alternative for the BEAM virtual machine. Of all the languages I haven't used yet, it seems to hit the safe + minimalistic + productive sweet spot the best. (Yes the C-inspired syntax is slightly off-putting, but syntax is the least important aspect of a language.)
The appeal is the runtime model. I can’t readily verify if BEAM languages are meaningfully slower or really slower at all but let’s take the premise for the sake of argument.
Even if it is slower, the runtime model is incredibly resilient, and it's cheap to scale up and down, easy to hot update, and generally does asynchronous work extremely well across a lot of different processes.
F# has really good async ergonomics but it doesn’t have the same task/processing flexibility and Websockets are kind of a pain compared to elixir or even erlang
.NET's SignalR is actually quite good. Strongly typed message hubs on the server[0]. Wide client support. Azure SignalR[1] if you don't want to own the infrastructure to scale web sockets.
I am also keeping an eye on gleam! I also regret that they left the ml syntax behind, but as you say it shouldn't be a blocking factor.
If they adopt computation expressions and make OTP a priority, it would probably earn a place beside F# in my toolbox!
I'm sure it can be the better choice, but for me it was not.
It seems there was some incompatibility between me and Scala. I find it such a complex language and I never managed to wrap my head around it.
As I said F# was my last choice at the start of my evaluation, and Scala was high on the list due to the Java ecosystem. But in the end it didn't work out for me.
I agree with you. I tried Scala for weeks and found it far too complex. Every line I wrote, I felt there were 5 different ways of doing it and I didn't know if I was choosing the right one. Scala tries to be too many things at once imo.
It definitely was for me!
The syntax is simple, it is functional first but is not pure.
I started with zero experience with ml languages and got productive fast enough to enjoy it. Of course my early f# code could be improved, but it was working and while writing the code the language didn't feel like a barrier.
One caveat though: it seems FP matches my way of thinking. As an example, I always liked recursion, while some others saw it as complexifying things.
Try fsharp as fsx scripts to avoid boilerplate (see blog post linked in other comment) and you'll rapidly feel if you like it or not.
At least the tooling should be way nicer. F# is much more of an OCaml-style language than Scala is. Also, much like having to deal with the JVM ecosystem in Scala, you'd need to deal with the .NET ecosystem in F#. In my opinion, the latter can be an advantage. F# has a lot of depth, but you do not need to grasp it fully to be productive with it.
I have done a bit of both Scala and F#, I think F# is a good bit easier to learn. Scala I think mixes OOP concepts and mutability in a bit less gracefully.
I think Clojure is the better option if you want to do FP using the JVM ecosystem. The problem (for me, anyway) I've run into with Scala is that it supports both functional programming and object-oriented programming. Every code base I've worked on in Scala has ended up being a hodgepodge of both, which I find annoying.
However, the best functional programming language is, of course, Elixir. :D
> Every code base I've worked on in Scala has ended up being a hodgepodge of both
Is there something about that that has bothered you? Working in Scala codebases, I've found the best ones to work in are the ones that embrace Scala's multiparadigm nature. When programmers try to solve every problem with OO, they end up adding more and more layers to get the job done. When programmers try to solve every problem with FP, they end up resorting to sophisticated techniques that are unapproachable for other engineers. I think the simple parts of OO and the simple parts of FP go much, much further together than simple OO or simple FP can go by themselves. Have you seen something different?
I really think this is where Kotlin is going to excel; multi-paradigm, multi-platform. Scala's community went too hard into FP and type-golfing to make it approachable.
Gleam lacks lisp-style macros, and its implementations of BEAM and OTP are not exhaustive. For example, Gleam does not support:
- Hot updates.
- Full distributed system support.
- Low-level process manipulation.
- Named processes.
- Advanced supervision strategies.
- Behaviours other than GenServer.
- Type-safe distributed messaging.
- And several other things that I value in BEAM and OTP.
I can't justify trading the full power of BEAM and OTP for static typing. To be fair, though, I've written a lot of code in both statically and dynamically typed languages, and static typing isn't something I value much (to the point that you might say I don't care about it at all :D).
I knew otp was still suboptimal in gleam, but thanks for mentioning all these additional points!
Funny how preferences and priorities vary among devs, I need my static type system! :-)
But note that even among static type systems there are variations. I'm talking about a Hindley-Milner type system with its type inference, like the one in F#.
Evaluated F# vs Clojure. The speed of certain algorithms was just lacking for me. Value types, particularly in tail-recursive stacks, shine in F# compared to the JVM in general. As usual, YMMV.
i don't think it is. i would say it is functional + bridges to the jvm (which is why it has been ported to many other platforms... there is not that much stuff in the language itself).
it is functional (value) programming first. there are tools to hook in the object jvm stuff but this is not the natural grain of the language.
clojure is pretty much all values and functions (and some macros).
+ some concurrency stuff
there is no class, there is no inheritance, you don't even have information hiding (no private etc.). you have protocols and multimethods.
(well technically there is private because java but it is not obvious to use and not what you expect, you will very rarely see that in clojure codebases)
honestly it is a nice small yet powerful language, with not too many kludges. my personal coding is either clojure or rust (which has way more kludges, but better than the other stuff in the typed fast compiled world at least for me).
I mean to each their own, but a quick search of "clojure multiparadigm" comes up with a fair number of hits from people who would disagree with it not being so.
It is possible to start your project with the script possibilities offered by F# (as mentioned in the blog post).
It is absolutely a viable approach and I even blogged about it a couple of months ago: https://www.asfaload.com/blog/fsharp-fsx-starting-point/
As mentioned in another comment, I'm currently still happy with (single node) Docker Swarms (with the reverse proxy as described on https://dockerswarm.rocks/ ). I like that I can basically use the docker compose files published by a lot of project to deploy. How does Unraid compare in your experience?
And I like that I can deploy images which basically don't have any requirement to be deployable to Docker Swarm. Is that also the case with Unraid?
Exactly, my first reaction was "I should write a blog post about why I still use Docker Swarm". I deploy to single-node swarms, and it's a zero-boilerplate solution. I had to migrate services to another server recently, and it was really painless. Why oh why doesn't Docker Swarm get more love (from its owners/maintainers and users)?...
I'd be interested. Might be a strange question but I'll throw it out there, I seem to have a hard time finding a good way to define my self hosted infrastructure nodes and which containers can run on them, have you run into/have a solution for this? Like I want my database to run on my two beefier machines but some of the other services could run on the mini pcs.
I am running one-node swarms, so everything I deploy is running on the same node. But from my understanding you can apply labels to the nodes, and limit the placement of containers. See here for an example (I am not affiliated with this site): https://www.sweharris.org/post/2017-07-30-docker-placement/
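From my understanding it's roughly like this (an untested sketch; the `class` label and the service name are made up). You'd first label the beefy nodes with something like `docker node update --label-add class=beefy <node-name>`, then constrain the service in the stack file:

```yaml
services:
  db:
    image: postgres:16
    deploy:
      placement:
        constraints:
          # only schedule this service on nodes carrying the label
          - node.labels.class == beefy
```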
> I deploy to single node swarms, and it's a zero boiler plate solution.
Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.
If there's one thing I would like Docker Swarm to have, it's not having to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.
That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.
I moved us off docker swarm to GKE some years back. The multi node swarm was quite unstable, and none of the big cloud providers offered managed swarm in the same way they offer managed k8s.
It's a shame I agree because it was nicely integrated with dockers own tooling. Plus I wouldn't have had to learn about k8s :)
I am very interested. I tried to migrate to Swarm, got annoyed at incompatibility with tons of small Docker Compose things, and decided against that. I'd love to read about your setup.
F# was never really "OCaml on .NET", though, not really. ML on .NET, sure, but it is (and always has been) missing OCaml's most interesting features, such as functors and OO with inferred row-types.
They have kinda positioned themselves that way and had (have?) syntactic compatibility with a subset of OCaml. And OCaml is probably the least obscure of the ML language family, so it's not exactly surprising.
F# is such a nice language. Such a shame that it never seems to get the light it deserves.
Between the alternative light syntax, type providers, and the first-class "scripting" mode, it really is a great middle point between a scripting language with fast prototyping and full-blown projects.
They installed a root certificate on windows computers that could have been used to MITM all traffic.
I personally had issues with the project years before that when I tried to install their Linux .deb and they ran `pip install` as root in the pre install script inside the .deb. That caused so much havoc to clean up I was pissed at them for years. Now that idiocy is blocked by default in current versions of pip.
FSharpPacker works on scripts (with extension .fsx) written with F# and usually run with `dotnet fsi`. For the F# advent calendar [1] I recently blogged about how those scripts are a viable starting point for developing an application in F#. Actually, it's even possible to easily maintain a scripted version and a compiled version of the same F# app, with basically the same code. For those interested, it's at https://www.asfaload.com/blog/fsharp-fsx-starting-point/
Incredibly easy to set up and manage for typical small-scale deployments. I use it as one-node swarms, and I have set up:
- backups
- automatic https certificates setup and renewal
- automatic upgrades to new images
- easy setup of persistence of data on the server
I'm very surprised it is not more popular, with quite a few people trying to replicate swarm's features with docker compose, but often in harder-to-maintain setups.
That's indeed the opinion of the author.
Note however that at this time all elements used in the setup described on dockerswarm.rocks are maintained. I started using swarm in 2022 and I documented my decision [1], and my reasoning for my kind of needs has not changed. The investment is very low, as is the risk. Migrating away from swarm should not be very problematic for me, and in the meantime I enjoy an easy-to-maintain setup. I still think it's better than tweaking a maybe-working solution with compose.
I'm not expecting to convince anyone, but wanted to share an alternative approach (only applicable to certain setups).
I love Swarm and don't see the appeal of K8s when something as simple as Swarm exists. I do however run K8s in prod for work and would never run Swarm in prod due to Docker seeming to have its days numbered. Idk where that leaves us aside from ECS. But I also have no need to run something any more robust than ECS in AWS for my workload.
We are moving our EKS workload over to ECS over the next year. I expect to need to downsize my team because of it.
One thing K8s is not is cheap. That shit takes a well-oiled team or a couple of hot shots to do right. We probably did a lot of what makes it expensive to ourselves by not switching to managed add-ons sooner and never evolving the apps that run in the cluster. I've only been lead for about 5 months now, but I'm finally able to start making significant progress on the necessary evolution that I'd been trying to make happen for 2 years before my promotion. The enterprise is a big ship. Takes time to turn. Thanks for reading what turned into a rambling vent session.
I agree it depends on the situation, but there still are some situations (like small apps) where I see swarm as the way to go. Yours is probably different.