I assumed the primary feature of Flatpak was to make a “universal” package across all Linux platforms, with security as a secondary consideration. I assume the security aspect is now a much higher priority.
The XDG portal standards being developed to provide permissions to apps (and allow users to manage them), including those installed via Flatpak, will continue to be useful if and when the sandboxing security of Flatpaks is improved. (In fact, having the frontend management part in place is kind of a prerequisite to really enforcing a lot of restrictions on apps, lest they just stop working suddenly.)
Many apps require unnecessarily broad permissions with Flatpak. Unlike Android and iOS apps, they weren't designed for environments with limited permissions.
I'll second that Angular provides a great experience these days, but they have definitely had substantial API changes within the last few years: standalone components, swapping WebPack for esbuild, the new control-flow syntax, the new unit-test runner, etc...
Was going to say, I only vaguely look at Angular code from adjacent projects at work, and noticed all of a sudden that the entire structure changed with the NgModule deprecation thing. Glad I'm not knee-deep in that.
I'd like to ask these CEOs: for the people who are taking advantage of the system, why are they not let go? Could it be that management often has no clue how much value each employee brings to the team? Is RTO being mandated to avoid facing that uncomfortable truth?
My understanding is that modern techniques are similar, but the tradeoffs have changed as chip voltages become lower and transistors become smaller. (Admittedly, I don't know a lot about modern techniques.)
Yes, using two or three flip-flops in series to prevent metastability is still common practice today, as is using diodes in that configuration for ESD protection.
Shadow maps are a good example: if the final rendered image is 4K, you don't want to be rendering shadow maps for each light source at only 1080p, or your shadows will be chunky.
I understand where he's coming from, but on the other hand it sounds like he's trying to avoid saying "maybe that OOP thing isn't all it was cracked up to be".
I think that is more descriptive and less prescriptive.
I’ve never seen a codebase really use OOP in an enterprise setting. Just spaghetti-and-meatballs architecture: meaty god-object “services” with long strings of structs.
> I’ve never seen a codebase ever really use OOP in an enterprise setting
What!? Did you look at the source code of Chrome, Qt, even the Java JDK, SAP, Adobe, WebLogic, WebSphere... and so on, and so on? What kind of enterprise setting are we discussing here?
The code that uses those frameworks often turns into spaghetti and meatballs. Spring has real OOP, but code that uses Spring turns into DTOs and repositories and services.
That's because that happy family of business logic living together with state, envisioned as a savior in the Smalltalk days, turned out to be a hot mess. Information hiding does not magically make problems go away, in particular not when code isn't "write once and don't ever touch again" and state outlives code iterations. OOP has been a great contribution to how we organize code, but its principles are best applied in moderation. It's quite mind-boggling to see how many people still believe they are followers of the OOP ideals when in fact they have long moved on.
I think it depends on what you mean when you say OOP. I agree with the statement that OOP as it's taught in textbooks, with heavy emphasis on modelling the domain via deep inheritance taxonomies and polymorphism, is largely a methodological cryptid.
These are tools that are relatively rarely used in Java. Not that they aren't ever used (anyone using almost any of Java's own APIs is knee-deep in this), but the emphasis in application code is typically on state-light services and immutable models. Not that I quite understand what the problem with that is.
> I’ve never seen a codebase ever really use OOP in an enterprise setting.
That’s a very strong hint OOP does not solve any problems better than alternatives.
I think this is something programmers understood for a very long time but either didn’t care about or didn’t have the tools to do better. Then Go and TypeScript came along and showed the world that structural typing is indeed, in practice, better at almost everything, except perhaps GUI libraries. Maybe.
> That’s a very strong hint OOP does not solve any problems better than alternatives.
It also requires more conceptually from developers. I cannot count the number of times I was on a team with developers who, when implementing an interface, never created a method that wasn’t one of the publicly required ones (i.e., no helper methods).
OOP failed to deliver on its promise. So people just use what's reasonable: split state into one structure and handlers into another namespace, the so-called anemic model. Which is the only sane way to build software.
Especially if you count in 10-20 years of maintenance, bug fixes and small and bigger changes happening by various folks of various skillset and approaches.
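A minimal Python sketch of that split, assuming a hypothetical `Order` type (the names are illustrative, not from any real codebase): state lives in a plain, behavior-free structure, and the handlers live in a separate namespace as module-level functions.

```python
from dataclasses import dataclass, replace

# State: a plain, behavior-free structure (the "anemic model").
@dataclass(frozen=True)
class Order:
    subtotal: float
    discount: float = 0.0

# Handlers: module-level functions in their own namespace,
# taking the state as an ordinary parameter.
def apply_discount(order: Order, rate: float) -> Order:
    return replace(order, discount=order.subtotal * rate)

def total(order: Order) -> float:
    return order.subtotal - order.discount
```

Because the state is immutable and the handlers are free functions, a maintainer a decade later can add a new handler without touching, or even understanding, the existing ones.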
Or it's more about boundaries: fewer monoliths, more microservices and container deployments, including cloud functions like AWS Lambda and Azure Functions. And "coupled by less strongly typed schemas" is more a statement of fact, but is it really a good thing?
Software Engineering will not progress into real Engineering until it starts building on the past instead of throwing away past lessons. OO was about many things, but particularly about code reuse. Is that also a bad thing?
>OO was about many things, but particularly about code reuse. Is that also a bad thing?
No - but OO wasn't as successful at delivering code reuse as it was promised to be, especially polymorphic OO.
>Software Engineering will not progress into real Engineering, until it starts building on the past instead of throwing away past lessons.
SE won't be real Engineering until we start being able to do things like measuring the robustness of a system, or projecting its maintenance costs. I think we are as far away from this as we've ever been.
Unfortunately, while OOP promises code reuse, it usually makes it worse by introducing boundaries as static architecture.
OOP's core tenet of "speciating" processing via inheritance in the hope of sharing subprocesses does precisely the opposite: defining "is-a" relationships, by definition, excludes sharing similar processing in a different context, and subclassing only makes it worse by further increasing specialisation. So we have adapters, factories, dependency injection, and so on to cope with the coupling of data and code. A big enough OOP system inevitably converges towards "God objects" where all potential states are superimposed.
On top of this, OOP requires you to carefully consider ontological categories to group your processing in the guise of "organising" your solution. Sometimes this is harder than actually solving the problem, as this static architecture has to somehow be both flexible yet predict potential future requirements without being overengineered. That's necessary because the cost to change OOP architectures is proportional to the amount of it you have.
Of course, these days most people say not to use deep inheritance stacks. So, what is OOP left with? Organising code in classes? Sounds good in theory, but again this is another artificial constraint that bakes present and future assumptions into the code. A simple parsing rule like UFCS does the job better IMHO without imposing structural assumptions.
Data wants to be pure, and code should be able to act on this free-form data independently, not architecturally chained to it.
Separating code and data lets you take advantage of compositional patterns much more easily, whilst also reducing structural coupling and thus allowing design flexibility going forward.
That's not to say we should throw out typing - quite the opposite, typing is important for data integrity. You can have strong typing without coupled relationships.
Personally, I think that grouping code and data types together as a "thing" is the issue.
> Data wants to be pure, and code should be able to act on this freeform data independently, not architecturally chained to it.
If behaviors are decoupled from the data they operate on, you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...
Behaviours don't have to be decoupled from the data they operate on. If I write a procedure that takes a particular data type as a parameter, that's a form of coupling.
However, there's no need to fuse data and code together as a single "unit" conceptually as OOP does, where you must have particular data structures to use particular behaviours.
For example, let's say I have a "movement" process that adds a velocity type to a position type. This process is one line of code. I can also use the same position type independently for, say, UI.
To do this in an OOP style, you end up with an "Entity" superclass that subclasses to "Positional" with X and Y, and another subclass for "Moves" with velocity data. These data types are now strongly coupled and everything that uses them must know about this hierarchy.
UI in this case would likely have a "UIElement" superclass and different subclass structures with different couplings. Now UI needs a separate type to represent the same position data. If you want a UI element to track your entity, you'd need adapter code to "convert" the position data to the right container to be used for UI. More code, more complexity, less code sharing.
Alternatively, maybe I could add position data to "Entity" and base UI from the "Positional" type.
Now throw in a "Render" class. Does that have its own position data? Does it inherit from "Entity", or "Positional"? So how do we share the code for rendering a graphic with "Entity" and "UIElement"?
Thus begins the inevitable march to God objects. You want a banana, you get a gorilla holding a banana and the entire jungle.
Meanwhile, I could have just written a render procedure that takes a position type and graphic type, used it in both scenarios, and moved on.
What do I gain by doing this? I've increased the complexity and made everything worse. Are you thinking about better hierarchies that could solve this particular issue? How can you future proof this for unexpected changes? This thinking process becomes a huge burden to make brittle code.
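The decoupled version described above can be sketched in a few lines of Python; `Position`, `Velocity`, and the two procedures are hypothetical names for the example, not from any library.

```python
from dataclasses import dataclass

# Plain data types, not tied to any class hierarchy.
@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

# The "movement" process: one line of logic.
def move(p: Position, v: Velocity) -> Position:
    return Position(p.x + v.dx, p.y + v.dy)

# A render procedure takes a position and a graphic; a game entity
# and a UI element can both reuse it with the same Position type.
def render(p: Position, graphic: str) -> str:
    return f"draw {graphic} at ({p.x}, {p.y})"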
> you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...
You can use data encapsulation fine without taking on the mantle of OOP. I'm not sure why you think this would introduce data corruption or affect integrity.
There are plenty of compositional and/or functional patterns besides OOP and procedural programming, but I'd hardly consider using procedural programming a "risk". Badly written code is bad regardless of the pattern you use.
That's not to say procedural programming is all you need, but at the end of the day, the computer only sees procedural code. Wrapping things in objects doesn't make the code better, just more baroque.
OOP and especially post-OOP languages don't encourage the "Dog is-a Animal" type of inheritance that you describe. Sadly, education has not caught up to industry and so it is still often taught the wrong way. Composition-over-inheritance has been the dominant methodology of practical OOP for a long time, so much so that most post-OOP languages (Swift, Rust, Go, ...) have dropped inheritance entirely, while still preserving the other aspects of OOP, like encapsulation, polymorphism, and limited visibility.
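A rough Python sketch of that composition-over-inheritance style (all names here are illustrative): polymorphism comes from a structural interface, and behavior is injected as a component rather than inherited from a base class.

```python
from typing import Protocol

# Polymorphism via a structural interface, not a superclass.
class Speaker(Protocol):
    def speak(self) -> str: ...

# A behavior packaged as a component.
class Barking:
    def speak(self) -> str:
        return "woof"

# Composition: Dog has-a voice; nothing is inherited.
class Dog:
    def __init__(self, voice: Speaker):
        self._voice = voice

    def speak(self) -> str:
        return self._voice.speak()
```

Encapsulation and limited visibility survive (the voice is a private detail of `Dog`), but there is no "Dog is-a Animal" hierarchy anywhere.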
Yeah, I cannot count how many times I've seen it claimed that the virtual DOM is the secret to why React (or another framework) is fast, completely missing the point of the virtual DOM.
The virtual DOM is not faster than performing direct mutations of the actual DOM, the virtual DOM is a tool that allows the normally slow approach of "blow away and rebuild the world" to be fast enough to put into use.
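A toy sketch of that idea, assuming a node is just a `(tag, children)` tuple; a real reconciler (like React's) is far more involved, but the shape is the same: rebuild the whole tree cheaply in memory, then emit only the mutations needed to patch the real one.

```python
# Toy virtual-DOM diff: compare the old and new in-memory trees and
# return the minimal list of operations to apply to the real DOM.
def diff(old, new, path="root"):
    ops = []
    if old is None:
        ops.append(("create", path, new))
    elif new is None:
        ops.append(("remove", path))
    elif old[0] != new[0]:          # different tag: replace subtree
        ops.append(("replace", path, new))
    else:                           # same tag: recurse into children
        old_kids, new_kids = old[1], new[1]
        for i in range(max(len(old_kids), len(new_kids))):
            o = old_kids[i] if i < len(old_kids) else None
            n = new_kids[i] if i < len(new_kids) else None
            ops.extend(diff(o, n, f"{path}/{i}"))
    return ops
```

Note that identical subtrees produce no operations at all: "rebuild the world" in memory costs something, but the expensive real-DOM mutations are limited to what actually changed.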
Virtual DOM used to be faster than DOM. DOM operations were really slow back when React came out, and the usual componentization approach of replacing entire blocks of code with new HTML strings used to lock up the browser. React made it possible to effortlessly reuse previous state DOM nodes.
Your second statement that the virtual DOM lets you skip unnecessary DOM mutations is true, but the first statement is inaccurate because "DOM" is not equivalent to "regenerate and replace all the DOM nodes with updated ones".
Prior to React, performant apps that weren't using frameworks that regenerated the entire DOM tree would simply mutate the DOM nodes manually based on what they were doing. If your app was loading a new comment, it would find the list of comment nodes and append a new one to it, same as a virtual DOM would.
The problem is that this could be error-prone in large apps: are you even sure that the comment list is still on the page? If it's not, do you need to add it to the page, or is this a completely new view where you're reacting to a network request that finished after the user navigated away? That's the kind of finicky manual work that React helped simplify, but the performance was the same as an app manually mutating the DOM, plus the extra virtual DOM comparisons.
The usual approach used to be to simply replace an entire "component" with an updated HTML string returned by the function representing that component. Doing what you're saying was not feasible for large apps; it led to unmaintainable spaghetti code and never really worked correctly. Imagine you need to add a feature somewhere and then have to go into all the other components that rely on that internal structure. I am talking about CRMs and other business apps like that, not some light JS to load blog comments.
None of what you’re saying is globally true. It might have been true of the apps you personally worked on but it’s not the case that everyone did HTML string replacement or that using the DOM directly forced you to forget basic software engineering concepts.
It's what was commonly done across more than 10 companies in Europe over a decade I worked in the field. Looking back at the source code of projects back then, it was the normal method.
The thing which was slow is changing more than you need to change. When the React marketing push included performance, that’s almost always what they were comparing it to - someone would have slow code making the same updates multiple times or triggering reflows by forcing the browser to do a partial update to measure something, and then updating again.
There’s an argument that the guaranteed overhead of using React is better because it helps the average developer who doesn’t measure performance much do a better job than they might have otherwise, and that can be true for many people but it also tended to get oversimplified to “React is fast” despite widespread evidence to the contrary.
We measured performance, but we also measured things like how hard it is to implement a feature or how many bugs we have to fix, and how quickly we can do so. When we jumped on React in 2013, it improved all of these metrics.
You're right that it was difficult; that's part of why React caught on. But lots of apps did it anyway. It's not as impossible as you make it seem if you take the time to write some abstractions around checking what needs to be done, or centralize all DOM operations to reduce the chance of other code modifying them. I worked on multiple complex web apps that did things like this. It's just tedious and error-prone.
That's my whole point, though: it was tedious and error-prone, so it wasn't usually done. Then React offered an easier and more performant way; that's why everybody jumped on it.
If your actual point is that "The virtual DOM is faster than a slow-but-common pattern of DOM mutation" then, sure. We have different experiences in how common wiping out the entire DOM really was in apps not using frameworks but differing experiences is normal.
But in a comment thread of people saying the virtual DOM is not faster than equivalent manual DOM mutations you can understand why "The virtual DOM was faster than the DOM" got understood differently since it lacked that clarification.
Well that's what React originally compared against. Sure, you could do it the manual, tedious and error-prone way to achieve similar performance, but that's not really interesting to React users.
Fair point. Cypress cannot. The initiator of this comment thread asked what Selenium can’t do that Playwright can, but my brain misread this as “Cypress”. Sorry about that.