I am with you. Been messing with tools my entire life...have tried ratcheting screwdrivers of several kinds...Hard no. If I do lots of screws, I use a drill or impact driver. Or, failing that, a nice 1/4 or 3/8 ratchet wrench with the appropriate adaptor + bit.
Front page of the .pdf shows a neat lumber gauge in action. It's designed to fit on your finger and allow rapid checking of common dimensions. You can still buy these...or, in this case, an excellent reproduction here: https://www.leevalley.com/en-ca/shop/tools/hand-tools/markin...
I recall fondly when this came out. I had a few racks of Sun servers at that time...and anyone I showed the video to was convinced it was bullshit. April Fools, anyone? However, Brendan and Bryan know their onions...and all the naysayers had to pause and think a bit. It was too crazy to believe...and too crazy not to believe. I miss this stuff.
Edit: This was done when everyone ran spinning media. I wonder how modern SSDs hold up?
I don't really doubt the numbers...in fact, I thought them a bit low. My feeling is that this will really change the automotive landscape over the overall lifecycle of a car. It seems to me that the dealership currently doesn't worry too much about immediate margin on a new car. They know that the car will come back to them for in-warranty service (which the dealer bills back to the manufacturer) and after-warranty service (loyal customers afraid to go to a less expensive garage). In a nutshell, all the real money is in the recurring revenue from service. EVs will require less service...so I'm thinking the dealer will just keep their margin higher up front to make sure they don't lose out after the sale. I would bet the dealers would rather sell some sort of EV that needs frequent/predictable/reasonable service...thus supporting the annuity model that has existed for decades. What to do? Build super-high-quality EVs that need almost no maintenance? Then you don't need that expensive dealer network. But consumers want that dealer to visit and complain to when things don't work...and that costs money. In the end, I think consumers will need to realize that their overall experience and expectations of car ownership are in for a big shake-up over the next few years.
Do you have data for the comparison of profits from sales to service? I never get my vehicle serviced at a dealer. I don't know what percentage of people do, but that would play a big factor in this analysis.
"National Automobile Dealers Association (NADA), the new-vehicle department of a car dealership accounts for about 58% of a dealership's total sales but less than 26% of a dealership's total gross profit. "
"So where does the majority of a dealership's profit come from? It's not from car sales, at least not directly. It's from the service and parts department, which accounts for the other 49.6% of the dealership's gross profits, according to NADA"
My comment was based on what I have been told by people in the business. They all want to get cars out the door...and raw counts are more important than margins on each. I would say that any new car under warranty will absolutely be serviced at the dealer. After that? Agreed...people will find alternatives. Mind you, in my experience, only "car people" tend to find non-dealer service...many feel "safer" at the dealer. I am told that the manufacturers of the cars make the most profit on a new vehicle sale...not the dealer. This is also why the lease/used market is so attractive to dealers.
I am surprised I had to scroll this far to find a dBase reference. I am the same vintage as the OP, and yeah...back in the day dBase was the key to riches...at least as a contract dev. I remember spending A Lot of Money on the big box of dBase IV when it was released...all the diskettes and manuals. Around that time there was much talk about migrating from dBase databases (those .dbf files) to the magic of "SQL", which at the time meant nothing to me. Needless to say, dBase IV cratered, and the industry went elsewhere. Good times!
I'm older than I was when people started telling me, as a tech founder, that we needed microservices to be cool. At 25 I felt it didn't deliver any real customer value and made my life harder; at 35 it seems like the people that do this are not people I want to hire anyway.
I started sorta the opposite and thought that nano(!) services (think one AWS Lambda per API call) were the best approach, and now I look at that younger wisdom with a parental smile..
I agree that we need to at least name nanoservices -- microservices that are "too small". Like surely we can all agree that if your current microservices were 100x smaller, so that each handled exactly one property of whatever they're meant to track, it'd be a nightmare. So there must be a lower limit, "we want to go this small and no smaller."
I think we also need to name something about coupling. "Coupling" is a really fluid term as used in the microservice world. "Our microservices are not strongly coupled." "Really, so could I take this one, roll it back by 6 months in response to an emergency, and that one would still work?" "err... no, that one is a frontend, it consumes an API provided by the former, if you roll back the API by 6 months the frontend will break." Well, I am sorry, my definition of "strong coupling" is "can I make a change over here without something over there breaking", for example rolling back something by 6 months. (Maybe we found out that this service's codebase had unauthorized entries from some developer 5 months ago and we want to step through every single damn thing that developer wrote, one by one, to make sure it's not leaking everyone's data. IDK. Make up your own scenario.)
Nano actually did (and does) make sense from an access control perspective: if a service has permission to do one thing and one thing only, it is much harder to escalate from. But I'm not sure those benefits outweigh the potential complexity.
Your definition of coupling seems a bit too strong to be useful. By that definition, just about nobody has an uncoupled API, because if anyone uses it (even outside your company) then you can't really just revert six months of changes without creating a lot of problems for your dependents. If those changes happen to be non-user-facing (e.g. an internal migration or optimizations) then you might be OK, but that is a characteristic of the changes, not of the coupling between a service and its dependents.
IMO it’s more valuable to have strong contracts that allow for changes and backwards compatible usage, so that services that take a dependency can incrementally adopt new features.
That definition of strong coupling is in fact standard. Like if you ask people why they don't want strong coupling they tell you exactly that when you change a strongly coupled thing you induce bugs far away from the thing you changed, and that sucks.
Now you might want this strong coupling between front-end and back-end, and that's OK: just version them together! (Always version strongly coupled things together. You should not be guessing about which versions are compatible with which other versions based on some sort of timestamp; instead, just have a hash and invest a half-week of DevOps work into detecting whether you need to deploy it or not. Indeed, the idea of versioning a front-end separately from a back-end is somewhat of an abdication of domain-driven design: you are splitting one bounded context into two parts over what programming language they are written in, literally an implementation detail rather than a domain concern.)
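Concretely, a minimal sketch of the hash idea (the artifact paths, the /version endpoint, and every name below are assumptions for illustration, not anything prescribed above): stamp both halves with one hash at build time and redeploy only when it changes.

    // Minimal sketch of "version coupled things together": one hash over both halves.
    // Paths, the /version endpoint, and the response shape are invented for illustration.
    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    function buildHash(artifacts: string[]): string {
      const h = createHash("sha256");
      for (const path of artifacts) h.update(readFileSync(path));
      return h.digest("hex").slice(0, 12);
    }

    async function needsDeploy(baseUrl: string, hash: string): Promise<boolean> {
      // Ask the running system which hash it was built from; redeploy only on mismatch.
      const res = await fetch(`${baseUrl}/version`);
      const { deployedHash } = (await res.json()) as { deployedHash: string };
      return deployedHash !== hash;
    }

    async function main(): Promise<void> {
      const hash = buildHash(["dist/frontend.js", "dist/backend.js"]);
      console.log((await needsDeploy("https://api.example.test", hash)) ? "deploy" : "skip");
    }
    main();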
Other patterns which give flexibility in this sense include:
- Subscription to events. An event is carefully defined as saying “This happened over here,” and receivers have to decide what that means to them. There's no reason the front-end can't send these to a back-end component, indeed that was the MVC definition of a controller.
- Netflix's “I’ll take anything you got” requests. The key here is saying, “I will display whatever I can display, but I'm not expecting I can display everything.”
- HATEOAS, which can function as a sort of dynamic capability discovery. “Tell me how to query you” and when the back-end downgrades the front-end automatically stops asking for the new functionality because it can see it no longer knows how.
- HTTP headers. I think people will probably think that I am being facetious here; what do HTTP headers have to do with anything? But actually the core of this protocol that we use, the true heart and soul of it, was always about content negotiation. Consumers are always supposed to state their capabilities up front, they are allowed a preflight OPTIONS request to interrogate the server's capabilities before they reveal their own, and servers always try to respond with something within those capabilities or else there are standard error codes to indicate that they can't. We literally live on top of a content negotiation protocol and most folks don't do content negotiation with it. But you can (a small sketch follows this list).
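To make the last two bullets concrete, here is a rough sketch of a client doing content negotiation (the media types, endpoint, and fallback behaviour are all invented for the example):

    // Hypothetical content negotiation: state what we can handle, let the server
    // pick, and degrade gracefully when it says it can't. Media types are made up.
    async function fetchReport(url: string): Promise<unknown> {
      // Optional preflight: ask what the server supports before revealing our needs.
      const options = await fetch(url, { method: "OPTIONS" });
      console.log("server allows:", options.headers.get("Allow"));

      const res = await fetch(url, {
        headers: { Accept: "application/vnd.report.v2+json, application/json;q=0.5" },
      });
      if (res.status === 406) {
        // Server can't satisfy any capability we stated: fall back to a minimal view.
        return { summaryOnly: true };
      }
      return res.json();
    }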
The key to most of these is encapsulation: the actual API request, whatever it is, does not change its form over that 6-month period. In 12 months we will still be requesting all of these messages from these pub/sub topics; in 12 months' time our HATEOAS entry point will still be such-and-so. Almost any agreed starting point can work as a basis; the problem is purely that folks want to be able to revise the protocol with each backend release, which is fine, but it forces coupling.
There's nothing wrong with strong coupling, if you are honest about it and version the things together so that you can test them together, and understand that if you are going to split the responsibilities between different teams then they will need to have regular meetings to communicate. That's fine, it's a valid choice. What I don't see is why people who are committing to microservices think that making these choices is okay as long as they lie about what they are. That's not me saying that the choices are not okay, it's me saying that the self-deception is not okay.
I think strong versioning in event-driven arch is a must, to avoid strong coupling. Otherwise, it becomes even worse than "normal" call-driven service arch, because it's already plenty hard to find all of the receivers, and if they don't use strong versioning then it's so easy to break them all with one change in the sender.
Yeah I would tend to agree! I think there is freedom to do something like semver where you have breaking.nonbreaking.bugfix, which might be less “strong”... But in a world with time travel you tend to get surprised by rollbacks which is why they are always my go-to example. “I only added a field, that's non-breaking” well maybe, but the time reversed thing is deleting a field, are you going to make sure that's safe too?
And I think there's a case to be made for cleaning up handlers for old versions after a certain time in prod of course.
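A tiny sketch of what explicitly versioned events could look like, including keeping the old handler around until it can be retired (the event name, fields, and handler are all made up for illustration):

    // Hypothetical versioned event envelope: receivers opt in per version, and the
    // old handler stays until every sender has moved on.
    type OrderPlacedV1 = { type: "order.placed"; version: 1; orderId: string };
    type OrderPlacedV2 = { type: "order.placed"; version: 2; orderId: string; currency: string };
    type OrderPlaced = OrderPlacedV1 | OrderPlacedV2;

    function record(orderId: string, currency: string): void {
      console.log(`recorded ${orderId} in ${currency}`);
    }

    function handleOrderPlaced(event: OrderPlaced): void {
      switch (event.version) {
        case 1:
          record(event.orderId, "USD"); // legacy assumption kept for old senders
          break;
        case 2:
          record(event.orderId, event.currency);
          break;
      }
    }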
As fuel for thought, consider that when Python was being upgraded from 2 to 3, shipping lots and lots and lots of new features and breaking several old ones, there were many libraries which supported both versions.
Some of them may have functioned by programming only in the mutually agreed-upon subset, but given that that subset did not really include strings, that was clearly the exception rather than the norm. Instead, people must have found a way to use the new features if they were available, but fall back to old ways of doing things if they weren't.
Some of those mechanisms were internal to the language: "from __future__ import with_statement". So how can my JSON-over-HTTP API, within that JSON, tell you what else is possible with the data you fetched?
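One HATEOAS-flavoured answer is to let the payload itself advertise capabilities; a made-up shape, purely for illustration (the field names and rels are invented):

    // Example of a response that says, inside the JSON itself, what else you may
    // do with the data you just fetched (HATEOAS-style links).
    interface FooResponse {
      id: string;
      status: "open" | "closed";
      // Each entry is an optional capability; an older backend simply omits it.
      links: { rel: "self" | "close" | "fix"; href: string; method: "GET" | "POST" }[];
    }

    function canFix(foo: FooResponse): boolean {
      // Feature detection: ask the payload, not a version number.
      return foo.links.some((l) => l.rel === "fix");
    }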
Ugh, sorry for the weird autocorrect errors, I had to abandon this for a mealtime haha... I was going to add that there are also mechanisms outside the language, for instance that you might just detect whether you can do something by catching the exception if it doesn't work... If it's syntax, like JS arrow functions, this might require eval() or so. The API version of this is just to expect the failure codes that would have been present before you added the new feature, usually 404, and handle them elegantly. If it's a follow-up request that RFC-"must" be done for consistency, maybe your app needs a big shared event bucket where those can always go, so that when "POST /foo-service/foos/:fooId/bars/:barId/fix" fails, you can fall back to posting something to the deferred request buffer instead.
Just catch the exception, record the problem for later resolution, use a very generic template for storage that doesn't have to change every month...
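Sketching that fallback (only the /fix path comes from the comment above; the /deferred-requests endpoint and the record shape are assumptions):

    // Try the new endpoint; if it doesn't exist in this backend version (404),
    // park a generic record in a deferred buffer for later resolution.
    async function fixBar(fooId: string, barId: string): Promise<void> {
      const res = await fetch(`/foo-service/foos/${fooId}/bars/${barId}/fix`, { method: "POST" });
      if (res.ok) return;
      if (res.status === 404) {
        await fetch("/deferred-requests", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            kind: "foo.bar.fix",             // generic template, stable over time
            subject: { fooId, barId },
            requestedAt: new Date().toISOString(),
          }),
        });
        return;
      }
      throw new Error(`unexpected status ${res.status}`);
    }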
Think creatively and you can come up with lots of ways to build robust software which has new features. Remember the time reversal of adding a feature is removing it, so you have to fix this anyway if you want to be able to clean up your codebase and remove old API endpoints.
I almost went down that road once, and got very good advice similar to the first couple of steps (plan, do your homework) in the original post here: "before you start making implementation choices to try to force certain things, try to really understand the architecture you want and why you want it, and ask yourself if it's a technological problem you're hitting or just a system design one"
I wish I could answer this. I have always had a penchant for any robust versioning system. Even the simple ones (CVS and the like) were amazeballs to me at the time. 20-odd years ago I spent a lot of time with ClearCase (initially Rational, now IBM Rational). I was amazed at what could be done. We had dozens of customers, maybe 3 major and 10-14 minor versions in the field, and probably dozens of "bug fix" or "special customer" branches. Oh, and likely 2 or 3 ongoing major-version dev branches. Somehow, it all worked. If you could describe (by whiteboard or hand-waving) what you wanted to see as your working branch, it could be done. "Give me a view that is exactly what customer x has, but for this directory, give me latest, except for this file...I want that one from customer y." And blam! The CC guys would make it happen. The magical config spec. I have had to dabble in many other SCMs...SVN, Git, etc...and they all seemed to be a compromise. Or they just ran out of gas when the going got tough. In my mind, I wish I could argue that ClearCase was where it was at, and that the patterns it supported would be wonderful to have today...especially over Git. But I don't know enough to defend the point. All I am saying is that even with enormously complex version scenarios, the damn thing didn't break and we all got our work done.
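From memory, the kind of config spec being described looked something like this (the branch and path names are invented, and the details may well be off, so treat it loosely):

    # First match wins, top to bottom: one file from customer y's branch, one
    # directory tracking latest on main, everything else exactly as customer x sees it.
    element * CHECKEDOUT
    element /vobs/product/libfoo/util.c   /main/customer_y/LATEST
    element /vobs/product/libfoo/...      /main/LATEST
    element *                             /main/customer_x/LATEST
    element *                             /main/LATEST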
One reason we moved off it was that it was a bandwidth hog! I believe they had two clients to try and help with this, a thin and a thick client. Even with the thin one, with the CC servers hosted in the States and some developers located in Europe, it would take half an hour for the client to refresh and pull down the latest changes.
Interesting. It was never an issue for us, as we were close to the CC servers. Some folks were not, so they ran their dev/build locally ("network close" to the CC servers) and VNC'd in from their actual location. In that case, only the stuff on the screen had to transit the "far away" network. Although I think today network issues would be less of a thing, as I tend to think that networks have scaled faster than codebase size. I could be horribly wrong about that. Any centralized repo is going to have this challenge, though. It also depends on whether you prefer snapshot or dynamic views. Snapshots were much easier on the network, at the expense of consistency. I also remember the CC team could work magic at optimizing things if you gave them time and a bit of flexibility. Crappy config specs were hard to read, and often slow to work off of. Any config spec that I had to scroll...I knew I was in for a shit week.
True, but I meant that fan runs when the compressor is running to blow air over the coils. That movement of air will allow more transfer of heat from the stuff in the freezer.
You've got a good point, which is that moving air greatly improves heat transfer, but your errors in terminology are confusing people. The fan that blows cold air is the "evaporator fan". The compressor fan (if there is one) blows hot air on the outside of the freezer. Here's a diagram: https://home.howstuffworks.com/freezer2.htm.