SEEKING WORK | Berlin, CET | Remote: Yes | Full-time freelance or 20h+/week
Experienced Go architect (15 years; Zalando, StreamElements).
I untangle fragile microservice architectures and make them boring again.
I’ve modernized fragmented microservices for hyper-growth companies, migrated Riak → Postgres under hard deadlines, and turned chaotic legacy systems into stable, scalable platforms.
Recent impact
Overhauled business-critical messaging systems for high-volume, low-latency workloads. Simplified/retired 11+ microservices, reduced latency, and improved observability — enabling a hyper-growth partner launch ahead of competitors.
Led discovery, design, and delivery of an auditable, multi-source, high-throughput inventory tracker — giving an e-commerce client immediate stability and a scalable foundation for years of growth.
Migrated an auth database from Riak → Postgres under a hard deadline (World Cup traffic). Zero downtime and smooth scaling through peak load.
Ideal clients
Berlin/EU scale-ups with distributed systems complexity — fragile microservices, data-intensive platforms, or legacy architecture that's become a liability.
€100+/hr retainer. Fixed-bid projects also considered.
Available now for 3–6 months.
As an industry we really need a better way to tell what’s going where than:
- someone finally reading the T&Cs
- legal drafting the T&Cs as broadly as possible
- the actual systems running at the time matching what’s in the T&Cs when legal last checked in
Maybe this is a point to make to the Persona CEO. If he wants to avoid a public issue like this, then some engineering effort and investment in this direction might be in his best interest.
But I’ve seen a lot of similar claims - just open LinkedIn for a second - and I always come back to the same questions:
- What value has been delivered?
- How much did you spend?*
- How long did it take _all told_?
I know you made a context management db. But if that's your evidence that AI is the future, it seems a somewhat self-referential proof.
What value has been delivered/products built outside of tooling to build products?
I’m aware you probably can’t be 100% open here - IP and all - but I feel it would go a long way to reinforcing your arguments the more concrete you can be.
* points for being up front about the 1000 per engineer minimum. But there’s still the human cost and actual token cost here
"I’m trying to break off of big tech as much as I can"
I wish I could check this more ...
I've had similar needs/desires/gripes with my calendars and the terrible state of calendaring apps for a while.
So thank you for scratching your own itch and sharing it with us.
I'm curious about your comment that "[CalDAV] is an area begging for disruption".
Can you enlighten us as to what your wishlist for a better protocol/system/ecosystem might be?
(A rant about your pain points would work too.)
I've seen polylith over the years and it's always piqued my interest.
I'm curious as to what has been built (by yourselves or others) in the 4 (?) years since its release. Have the experiences held up to the initial goals and designs?
Congrats on the launch and hitting HN's front page.
Do you mind if I ask how your example site (multiplayer.dev) scaled?
I'm super curious about realtime multiplayer solutions (and I don't think I'm alone). But I find a great lack of info on what running this kind of app would cost. I come from the old-school hold-no-state, request->response->gtfo mentality, and I always have the _feeling_ that it'd be expensive to scale.
Not just holding the websocket open, but how much effort do you expend parsing the WAL? How chatty is that kind of persistence mechanism? What other 'gotchas' are there etc.
I'd love nothing more than to dispel that vague feeling with data.
I know it wouldn't come close to a full performance analysis, but throwing a few datapoints on a chart would help get a ballpark idea and tune my hype->action conversion.
I could nail down a pile of questions, but I'm sure you know better than I how to measure your own systems. But roughly I'm wondering:
- how many users did you have?
- how much traffic did you get?
- how much would/did it cost to host on supabase?
- how much resources did the database/realtime components consume?
Congrats again on the launch and have a nice weekend.
But I always come back to the same question with services that provide auth and user management: you pay a lot of money for someone _else_ to own critical information about your customers. What happens if you want to move away and use a different service, your own, or your customers' own?
Your customer data (at least login) lives in WorkOS' database.
How do you get it out? How much does that cost? Are there contractual guarantees around that?
The same goes for your customers' integration points. If the customer has to do any setup to integrate WorkOS for your app, then moving away would involve them making changes. Not necessarily an easy thing to manage.
Not to be negative: I'd be happy to hear that WorkOS have great processes and guarantees around this.
WorkOS doesn't really own the user management database. It's more like an agnostic API to connect with multiple IdPs through protocols like SAML and OIDC. Identity providers such as Okta, OneLogin, and Azure AD are the ones responsible for storing that data.
It would probably take months to implement SSO with all of the flexibility and ease of use they offer, mainly just because of the built-in integrations with so many providers. The price is pretty steep though, so this would really only be used by the big bucks Enterprise Software™ guys.
It's not. Implementing the OIDC flow from scratch takes half a day to get working and maybe a week to polish. Using available libraries you can do it way faster of course.
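For context on what "from scratch" means here, this is a minimal sketch of the two request-building steps of the OIDC authorization code flow. All endpoint URLs, client IDs, and secrets below are placeholders; a real implementation also needs the callback handler, state/nonce verification, and ID-token signature validation against the provider's JWKS.

```python
import secrets
import urllib.parse


def build_auth_url(auth_endpoint, client_id, redirect_uri,
                   scopes=("openid", "email")):
    """Step 1: send the user's browser to the provider's
    authorization endpoint. Returns (url, state); the state value
    must be checked on the callback to prevent CSRF."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    }
    return auth_endpoint + "?" + urllib.parse.urlencode(params), state


def build_token_request(token_endpoint, client_id, client_secret,
                        code, redirect_uri):
    """Step 2: after the callback arrives with ?code=..., exchange
    the code for tokens via a form-encoded POST to the token
    endpoint. Returns (url, body) ready for an HTTP client."""
    body = {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }
    return token_endpoint, urllib.parse.urlencode(body)
```

The point being: the protocol surface itself is small — the week of polish goes into error handling, token refresh, and per-provider quirks, which is exactly where the built-in integrations earn their keep.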
The fact that we’re all talking about it, and not at all surprised, is a great example we can take when making the case for more 9’s of reliability.
* well, very technical power users.