- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird not to be. But what's really crazy is that it does work. I'm using it right now. It doesn't just work; it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (my go-to example is the color management protocol, which is probably one of the most solid color management APIs so far), but it really does take forever (that same color management protocol took about five years from the MR opening to merging).
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken a stance of not just implementing raw tools that could be used to build various UI features, but instead implementing protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that fact, and the compositor takes over from there and initiates the drag.
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
But that's definitely where things are stuck. Some applications have UI features that they can't implement in Wayland. xdg-session-management, for saving and restoring window positions, is still not merged, so there is no standard way to do this in Wayland. ext-zones, for positioning a multi-window application's windows relative to each other, is still not merged either. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), whereas xdg-foreign is supported by many compositors (GNOME, KDE, and Sway, but missing in Mir, Hyprland, and Weston. Fragmentation!) but doesn't support everything you could do in X11 (like passing an xid to mpv to embed it in your application, for example).
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
> Literally no user cares what language a project is implemented in
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and common practices.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; sqlite is famously C after all, so it's clearly possible, it just requires a fairly large amount of discipline and technical effort.
Usually, though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively few transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would've been a lot fewer avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely the result of many things, like the behaviors of clap and anyhow-style Rust error handling leaking through to the interface. Same with Go. Even in languages that don't have as much of a monoculture, like Python or C, I think you can still find clusters of stereotypes that can predict program behavior/error handling/interfaces surprisingly well, and that likely line up with specific libraries/frameworks. It's totally possible, for example, to make a web page with zero directly visible artifacts of the frameworks or libraries used to build it. Yet when people just naturally use those frameworks, there are little "tells" you can pick up on a lot of the time. Ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all).
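Those "tells" are sometimes literal, for what it's worth: frameworks tend to leave fingerprints in the HTML they serve even when the visible UI looks framework-agnostic. A toy sketch of the idea (the marker strings are real conventions those frameworks emit, but this is stereotyping-as-code, not a reliable detector):

```python
# Toy framework "sniffer": ng-version is emitted by Angular, __NEXT_DATA__
# by Next.js/React apps, and data-v- prefixes by Vue's scoped styles.
FINGERPRINTS = {
    "React/Next": ["data-reactroot", "__NEXT_DATA__"],
    "Angular": ["ng-version"],
    "Vue": ["data-v-"],
}

def guess_frameworks(html: str) -> list[str]:
    """Return the frameworks whose markers appear in the served HTML."""
    return [name for name, marks in FINGERPRINTS.items()
            if any(mark in html for mark in marks)]

print(guess_frameworks('<app-root ng-version="17.0.1"></app-root>'))
# ['Angular']
```

Humans do a fuzzier version of exactly this, picking up on interaction quirks rather than attribute names.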
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set the RUST_BACKTRACE environment variable to get Rust programs to output full backtraces, for example. And while I don't think it's part of Rust itself, the RUST_LOG environment variable is used by multiple libraries in the ecosystem.
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust, definitely can be a bit harder. Changes that might be possible to shoehorn in in other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
About half of the amino acids used in proteins, i.e. ten of them, form easily in abiotic conditions, and they are widespread on some celestial bodies.
They are easily distinguished from terrestrial contaminants because they are a racemic mixture of left-handed and right-handed isomers, whereas terrestrial life uses almost exclusively the left-handed forms.
When the genetic code is analyzed to determine which amino acids were used in its earlier versions and which were added more recently, the same simple amino acids that are easy to synthesize even in the absence of life turn out to be the ones that were used earliest.
The article contains the phrase "Given the fact that the current scenario is that life on Earth started with RNA".
This claim is repeated far too often as if it were true, when in reality one of the few things that can be said with certainty about the origin of life is that it did not start with RNA.
What must be true is only that RNA existed a very long time before DNA, and that DNA was an innovation resulting from a long evolution of already existing life forms, long before the last common ancestor of all living beings that still exist on Earth.
On the other hand, proteins, or more precisely peptides, must have existed before any RNA. Moreover, ATP must have existed long before any RNA.
RNA has two main functions based on its information-storage property: the replication of RNA using a template of RNA (which was the single form of nucleic acid replication before the existence of DNA) and the synthesis of proteins using RNA as a template.
Both processes require complex molecular machines, so it is impossible for both of them to have appeared simultaneously. One process must have appeared before the other and there can be no doubt that the replication of RNA must have appeared before the synthesis of proteins.
Had the synthesis of proteins appeared first, it would have been instantly lost at the death of the host organism, because the RNA used as a template for proteins could not have been replicated and therefore could not have been transmitted to descendants.
So in the beginning RNA must have been only a molecule with the ability to self-replicate. All its other functions evolved later, in living beings where abundant RNA already existed, produced by self-replication.
The RNA replication process requires energy and monomers, in the form of ATP together with the other 3 phosphorylated nucleotides. Therefore all 4 nucleotides and their phosphorylated forms like ATP must have existed before RNA.
ATP must have been used long before RNA, as it is today, as a means of extracting water from organic molecules, driving the condensation of monomers like amino acids into polymers like peptides.
The chemical reactions in the early living forms were certainly regulated much less well than in the present living beings, so many secondary undesirable reactions must have happened concurrently with the useful chemical reactions.
So the existence of abundant ATP and other phosphorylated nucleotides must have had as a consequence the initially undesirable polymerization and co-polymerization of the nucleotides, forming random RNA molecules, until by chance a self-replicating RNA molecule was produced.
Because the first self-replicating RNA molecule did not perform any useful function for the host life form, but diverted useful nucleotides from its metabolism, it must be considered the first virus. Only much later, after these early viruses had evolved the ability to synthesize proteins, did some of them become integrated with their hosts, becoming their genomes.
The catalytic functions that are now performed mostly by proteins, i.e. amino acid polymers that are synthesized using an RNA template, must have been performed earlier by peptides, i.e. typically shorter amino acid polymers that are synthesized without the use of RNA templates.
Even today, all living beings contain many non-ribosomal peptides, which are made without RNA, using processes that are much less understood than those that involve nucleic acids.
The difference between a living being that would be able to make only non-ribosomal peptides and one that makes proteins using RNA templates is pretty much the same difference as between a CPU with hard-wired control and a CPU with micro-programmed control, with the same advantages and disadvantages.
Life forms able to reproduce themselves must have existed before the appearance of nucleic acids, but they must have been incapable of significant evolution, because any random change in the structure of the molecules that composed them would very likely have resulted in a defective organism that died without descendants. This is similar to hard-wired control, where small random changes in the circuits are unlikely to result in a functional device.
On the other hand, once the structure of the enzymes was written in molecules of nucleic acids, random copying errors could produce structures very different from the originals, structures that could not have been reached by gradual change without passing through non-functional intermediates that could not have been inherited.
So the use of molecules that can store the structural information of a living being enabled evolution towards much more complex life forms, but it cannot have played any role in the appearance of the first life forms, because the replication of any such molecule requires energy that can be provided only by an already existing life form.
This is maybe a dumb question, but why is it so hard to buy Nvidia GPUs?
I can understand lack of supply, but why can't I go on nvidia.com and buy something the same way I go on apple.com and buy hardware?
I'm looking for GPUs and navigating all these different resellers with wildly different prices and confusing names (on top of the already confusing set of available cards).
ChatGPT has one trade that is guaranteed to be bad. I'm not saying unprofitable, just bad. GBTC is the bitcoin ETF with the biggest expense ratio, at 1.5%. If you want to bet on bitcoin, a better choice would be BITB (0.20%) or BTC (0.15%).
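The gap sounds small in percentage terms, but it's a 10x difference in carry cost. Quick arithmetic on a hypothetical $10,000 position (using the expense ratios above, ignoring compounding and price movement):

```python
# Annual fee drag on a hypothetical $10,000 position, using the expense
# ratios quoted above. Ignores compounding and price movement.
position = 10_000
expense_ratios = {"GBTC": 0.0150, "BITB": 0.0020, "BTC": 0.0015}

for ticker, ratio in expense_ratios.items():
    print(f"{ticker}: ${position * ratio:.2f}/year")
# GBTC: $150.00/year
# BITB: $20.00/year
# BTC: $15.00/year
```

Paying $150/year instead of $15/year for exposure to the same underlying asset is the "guaranteed to be bad" part.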
Also, the reasoning is partially a hallucination - "The holding period of 9 months aligns with the expected completion of Grayscale's pivotal Phase 3 Bitcoin ETF trial, a major catalyst for unlocking investor demand and driving trust value realization."
There is no such thing as a "holding period", nor are they doing a "Phase 3 Bitcoin ETF trial". It's possible the "Phase 3" thing is picked up from news about a drug company.
In the US, especially post-WW2, we implemented a zoning system that mostly groups the same type of building (residential, commercial, industrial) together.
In a place with residential zoning you can't just build a pub without getting an exception to that zoning approved.
So many Americans living in residential suburban houses have to get in their car and drive 5-20 minutes to get to some sort of commercial center, strip mall, shopping area, etc. that has stores, bars, and restaurants.
There is starting to be major pushback on this as people realize that having nice towns (not even necessarily cities) with dense mixed-use centers of walkable apartments, townhouses, shops, restaurants, bars, etc. is actually both pleasant and good for business. In the strict separate-zoning model you tend to get more chain establishments and fewer interesting local options, because things are spread out and there isn't enough foot traffic in any given place to sustain new businesses.
People who are new to programming have a long way to go before even the concept of "managing dependencies" could possibly be made coherent for them. And the "unsoundness" described (i.e. not having lockfile-driven workflows by default) really just doesn't matter a huge percentage of the time. I've been writing Python for 20 years and what I write nowadays will still just work on multiple Python versions across a wide range of versions for my dependencies - if it even has any dependencies at all.
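(And for the minority of cases where pinning does matter, the "lockfile-driven workflow" can start as small as a pin file; generating it is one `pip freeze > requirements.txt` away. Version numbers here are hypothetical:)

```
# requirements.txt -- exact pins acting as a minimal lockfile
requests==2.31.0
numpy==1.26.4
```

That's roughly the level of ceremony a beginner can absorb, and only once they actually have dependencies.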
But nowadays people seem to put the cart before the horse, and try to teach about programming language ecosystems before they've properly taught about programming. People new to programming need to worry about programming first. If there are any concepts they need to learn before syntax and debugging, it's how to use a command line (because it'll be harder to drive tools otherwise; IDEs introduce greater complexity) and how to use version control (so they can make mistakes fearlessly).
Educators, my plea: if you teach required basic skills to programmers before you actually teach programming, then those skills are infinitely more important than modern "dependency management". And for heaven's sake, you can absolutely think of a few months' worth of satisfying lesson plans that don't require wrapping one's head around full-scale data-science APIs, or heaven forbid machine-learning libraries.
If you need any more evidence of the proper priorities, just look at Stack Overflow. It gets flooded with zero-effort questions dumping some arcane error message from the bowels of Tensorflow, ultimately caused by some Numpy 2D arrays used as matrices having the wrong shape - and it'll be posted by someone who has no concept of debugging, no idea of any of the underlying ML theory, and very possibly no idea what matrix multiplication is or why it's useful. What good is it to teach "dependency management" to a student who's miles away from understanding the actual dependencies being managed?
For that matter, sometimes they'll take a screenshot of the terminal instead of copying and pasting an error message (never mind proper formatting). Sometimes they even use a cell phone to take a picture of the computer monitor. You're just not going to teach "dependency management" successfully to someone who isn't properly comfortable with using a computer.
I live out in the country. Not quite the "wilds", but it's about a 45-minute drive to the nearest Walmart and we drink water out of a well. It's remote enough that only a few cars drive by a month.
I've also found that the animals behave differently out here, or appear to anyhow. Maybe it's just there's more sensory room to notice the differences. There's a family of small furry rodents that greet me a few feet away from the porch every morning. Birdsong also has a load of hidden complexity to it I've never noticed. Go outside every day and listen to the songs. There's persistence, modification proposals and consensus reaching among birds over days and weeks. I don't know a thing about birds, but it's clear there's a lot of fascinating stuff happening among them.
We have an "armadillo buddy" that lives under the cabin. Clouds of bats swarm between the trees at night and coyotes howl at the moon. There's got to be dozens of rabbits. They'll let you walk right up to them before they run off. Once we had to wait for a family of 10 to cross the gravel driveway on our way home. Another time there was a large cougar just chilling in the yard.
Having never lived in a rural area until my 30's, it's wild how much activity there is and how close it is to us.
How much of this is because nature doesn't have to work as hard to survive around our cabin, and how much is just being able to notice it? It's a mix for sure.
I am 67, and equestria is mostly correct. I still get great satisfaction from my tech career, but sure, friends and family matter more. This story involves some work I did that did not bring me satisfaction.
I worked at my first consumer-oriented tech company, right after the dotcom crash. It was a really unexciting interlude in my career. I was given the job of writing the database and Java representation of credit/debit cards, and the related business logic. As often happens, the code grew over time, as requirements and card types were added. And it was finally time for a rewrite, and this code became a poster child for technical debt.
Startup activity resumed, and I left for a far more interesting startup.
Then, maybe 15 years later, I was retired and doing consulting, and ran into a friend from that company, who told me that a new company was doing something very similar and was looking for help. I went in and talked to them, and discovered that they had actually licensed the software from my former company. Including my long-in-the-tooth credit/debit/xyz-card software. The code was still completely recognizable, disturbingly so. It lived on far past the point that it should have.
I decided to not take the consulting job. I really did not relish the idea of going back to this very forgettable and uninteresting code. But most importantly, I had just retired, and wanted to spend my summer on a lake, not keeping this code alive a bit longer.
My great-great-grandfather was one of those who went down on the Hawke.
The news was censored, and despite rumours of what had happened, official confirmation didn’t come until after the war. His widow was told that he had deserted and she wouldn’t be receiving a pension.
Fast forward to 1942: my great-grandfather is on HMS Curacoa on a foggy night, escorting the Queen Mary, which then rams the Curacoa, sinking it with almost all hands.
My friends and I made a long series of Nomic games in our forums back in the early 2000s [1]. It was weird and very non-traditional (if you could call Nomic games traditional).
We were just teenagers when one of the members of our strange little online friend group discovered Nomic and introduced it to the rest of us. We thought it was interesting, and after playing with the basic rules a bit, we modified it substantially. We turned the base Nomic rule making into this weird improvisational fantasy ARG that we played over AIM and IRC. It was set up to have us conduct "real life" missions to stop imagined alien invaders disguised as corporate overlords. (This very much resembled the plot of the game "Perfect Dark".) The objectives, limitations on what we could explore, what imagined dangers we could face -- all mutable rules we dreamed up and voted on together.
On some weeks we'd put players up for certain tasks that defined our game world, like finding some place or object that became part of the ongoing mythology. It was essentially an append-only log of rules, lore, and journaling. I don't think any of this content exists anymore, not that it was any good. Before long that slowly morphed into a more permanent and recorded game on EzBoard (which also no longer exists) before we started hosting our own websites and forums (bits and pieces of this still exist on archive.org). We were weird kids, but it was all entertaining to us.
At some point we decided to transition the Nomic rules into the bylaws for a fake company that made products and had ambitions of "taking over the world for fun". We made games, short films, websites [2]. Lots of stuff. Some of our websites are still around today [3].
We got featured in Nintendo's E3 press conference one year, beat Anil Dash [4] in some SEO competition (as teenagers that didn't care a thing about SEO -- we just wanted to win prizes), and even made it on Slashdot a few times. All of this was conducted with Nomic rules that we had collectively voted on at some point or another.
There was even one time where the game led to us creating a sandboxed instance of phpBB where people could set rules, ban each other, wipe the entire forums, etc. It was a bit of performance art for a few months. When someone (never identified) SQL injected the website and deleted all the tables, we cheered.
I miss the early web and my younger years, but I have to imagine kids these days are exploring technology and doing the same crazy things in Discord, Minecraft, Roblox, VRChat, etc.
Property and construction are among the most distorted markets in half of the world, and this article is yet another proof of that. Our society as a whole would actually benefit if state intervention made the property market unattractive to investors through oversupply: government could invest 50-100B€ in creating non-profit construction verticals that oversaturate the market. The next logical step would be to allow some of the constructed apartments to be privatized by tenants at low cost: maybe not by everyone, but only by those who have a socially important job (police, firefighters, healthcare and social workers, teachers), and a majority of the difficult problems that government has to deal with would go away. Yes, a lot of people who bought expensive properties may forget about any returns, there could be some margin calls, and the banking system won't be happy. But this would be a necessary sacrifice for a healthier economy in which money goes elsewhere.
Yeah, this feels a lot like saying "My god, people, stop using telephones! Facetime has video! And the bitrate! It's a million times higher! It's 2023, why would anyone use a POTS call?!" Sure, nicer technical specs are...nice, but they're a far, far second to solving a problem.
In an SMS world:
Problem: I need to contact a new acquaintance.
Solution: Exchange phone numbers because everyone has them. Text. Or call.
In a post-SMS world:
Problem: I need to contact a new acquaintance.
Solution? Talk about messaging apps. Find the intersection of the ones you both use. Rank by favorite if it's not an empty set, then decide on the relative merits of choosing one in the middle of both your lists, otherwise attempt evangelism. Following evangelism, regress to "Yeah, admittedly the UI has some quirks -- here's how you do <thing you want to do>."
Unfortunately, tech bro progress-at-all-costs featurejerking very often leaves a very large number of people out of the party. The elderly. The impoverished. etc. Despite the corporate mantra of all features all the time, we can actually elect to take old things and make them better instead of inventing the new hot thing all the time.
Fully agree. I still regularly organize LAN parties with the same crew as back then. 24 years and counting... Family has made finding a suitable date much more difficult. But the biggest obstacle is finding LAN-capable games.
10 people joining Dota 2 from the same IP results in an instant ban for everyone. StarCraft 2 is horribly laggy when 10 PCs compete for UDP traffic to the internet server. GTA 5 keeps load-balancing us into different lobbies. Most new games just can't handle a LAN party anymore. And yeah, I remember the time when I paid for a WoW account despite not playing, because the WoW guild chat was the quickest way to reach all of my real-life friends.
Warcraft 3 fun-maps, Left 4 Dead 2, Flatout 2 are the games that reliably work well.
In 2015 I was working at a "fintech" company and a leap second was announced. It was scheduled for a Wednesday, unlike all the ones before it, which had happened on the weekend, when markets were closed.
When the previous leap second was applied, a bunch of our Linux servers had kernel panics for some reason, so needless to say everyone was really concerned about a leap second happening during trading hours.
So I was assigned to make sure nothing bad would happen. I spent a month in the lab, simulating the leap second by fast forwarding clocks for all our different applications, testing different NTP implementations (I like chrony, for what it's worth). I had heaps of meetings with our partners trying to figure out what their plans were (they had none), and test what would happen if their clocks went backwards. I had to learn about how to install the leap seconds file into a bunch of software I never even knew existed, write various recovery scripts, and at one point was knee-deep in ntpd and Solaris kernel code.
After all that, the day before it was scheduled, the whole trading world agreed to halt the markets for 15 minutes before/after the leap second, so all my work was for nothing. I'm not sure what the moral is here, if there is one.
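(If there's a technical takeaway from all that lab time, it's that interval math should use a monotonic clock, so a stepped wall clock can't wreck your timeouts. A minimal sketch of the pattern in Python, not what we actually ran back then:)

```python
import time

def wait_for(condition, timeout, clock=time.monotonic):
    """Poll `condition` until it returns True or `timeout` seconds pass.

    Using time.monotonic() instead of time.time() means an NTP step or a
    leap-second adjustment to the wall clock can't make this timeout fire
    early, hang forever, or compute a negative elapsed time.
    """
    start = clock()
    while not condition():
        if clock() - start >= timeout:
            return False
        time.sleep(0.001)
    return True

print(wait_for(lambda: True, 1.0))    # condition already met -> True
print(wait_for(lambda: False, 0.05))  # times out after 50 ms -> False
```

Code that did its deadline math on the wall clock was exactly the kind of thing that panicked during the previous leap second.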
Did you ever watch 12 Angry Men? There's a scene in that movie where an unabashedly racist man is making his point as loudly and angrily as he can. One by one, all the others in the room turn their backs on him. When only one man is left, that man has a short message for the racist: "Sit down, and don't open your mouth again while I'm here."
Nobody puts him in jail. Nobody takes away his right to make a living or his children or his home. They just send a clear message: Don't bring that sort of thing around here. Don't bring it around us. There's no point in engaging in a dialogue with that person. Sure, you allow them to speak, in that you don't respond to that speech with violence or prosecution. But you don't have to make room for it.
We can't and shouldn't have the government stepping in to say what speech is or is not allowed. And in the United States, we don't. Russia is another matter, but as the post says, the idea that that means every Russian person or company is OK with things this awful is not true.
What that means, though, is that if there are sentiments so odious (and in some cases, literally dangerous, but not illegal) that our free society doesn't think it's appropriate to support a venue for their discussion, it's up to that society writ large, not government, to limit that discussion. There is nothing wrong with fostering a societal sense that there is no room in our world for the kind of shit that Kiwi Farms spewed, even if it is legal. There is nothing wrong with expecting large companies to live up to that standard.
So I don't see this as scary, at all. I see it as a relief. I see it as a free society working the way it's supposed to, with some caveats. (I still don't love how much power large corporate entities have, but not for this reason, exactly.) And I think there are a lot of people out there whose lives are safer because of it.
I feel really sorry for these executives who have to decide whether to earn $300,000 or $1M. :))
Joking aside, a person in the US agreeing to earn $300,000 rather than $350,000 – to a very large extent off the value created by volunteer labour, and off the public's donations – is not what I would call "charity" or a "sacrifice".
I would remind you that some of this money is raised in India, South Africa and Latin America – by telling people there money is needed to "keep Wikipedia online" for them.
Now, a real sacrifice by an American is when a senior with $18 to his name promises to donate to Wikimedia as soon as his social security check clears, as this fellow did:
I wish he hadn't been pressured into making that promise!
Many of the small donations funding these salaries do actually come from people for whom donating that small amount really is a sacrifice. These phrases, about how it's "awkward" to ask, how Wikipedia has "no choice but to turn to you", they speak to them most of all, because they can relate to this situation.
This is from an unpaid Wikipedia volunteer on the Volunteer Response Team: "I can't go into the specifics, but as a VRT agent I've received numerous emails from people on limited incomes who are donating money they need because they believe that Wikipedia is in trouble and that they need to give money to keep it online. I'm absolutely disgusted by this, and I think it will catch up to us in the long-run, as people won't want to give once they realize how deceptive these campaigns are."
So, please, if it's a question of brownie points for "charity" and "sacrifice", these executives are hardly at the front of the queue.
There is not even any good reason for many of these staff positions to go to people in the US. Most of these are remote jobs. They could just as easily be done by someone in India, Poland or Singapore, and at a fraction of the cost.
Let the people who are after that $1M job go to a for-profit company. Wikipedia doesn't need them.
Say what you want about Twitter, but as someone who's been on it since 2011 and has made it my primary platform for social media since 2016, I'm grateful that fundamentally it's still the same old Twitter.
Yes, they've fallen into the copy-cat temptation machine in some regards, e.g. "fleets" (stories), but unlike Instagram, they will at least admit when something isn't working and revert it (fleets are no longer a feature on the platform). Of course, their failure to innovate, namely the mismanagement of both Vine and Periscope, definitely cost them TikTok-level success, but at least their stubborn resistance to change has kept the core experience intact.
Obviously there are some larger issues around Twitter's direction ("who is going to be in control of the company?") and the ever-present critiques of misinformation/harassment/"moderation versus censorship" that come with the territory of being a public forum, but overall it's still the Twitter I know.
I wonder if Instagram's downfall will be a boon for Twitter in this regard: at least I know what it is.
It's interesting that they don't care; it's a Marie Antoinette-esque attitude toward their lifeblood, and ultimately what will end their dominance.
The truth is they converted a free service into a pay service (they gaslight creators and businesses into paying for visibility). The visibility, even when paid for, is not very fruitful for many as well...
If you run a small business like a restaurant in the US but get 40,000 views directed to your profile from people who live in Russia, that is unlikely to turn into a pipeline of sales that can sustain your business... This is just one way modern social apps gaslight people who are working very hard to build businesses... It undermines the very model of business success, keeping people working very hard... for no reward. These sites let memes and non-business-related things flow all day on feeds, provided they are not promoting anything, which makes the sites appear "full of life".
People seem to be catching on that it was all one big gaslight. After spending tons of money on ads, they are slowly realizing that it's a rigged carnival game, but social apps like Instagram have pitted themselves against the rest of the Internet, and they constantly play a dictatorial role. The carpet can get pulled at any time if they don't take these complaints seriously, though, and I look forward to seeing how it all plays out.
I really don't think TikTok is much better now either; these apps all use the technique of phasing in ads and slowly reducing (external) visibility for creators, and just judging from comments I see on Twitter, the creator economy is imploding as a result.
With the economy as bad as it is, this current creator economy is not sustainable, and not fruitful. We'll all see the quality on these platforms decline as creators quit and protest, while platforms burn up cash reserves and disappoint investors as their overhead for hosting and operations increases.
We've praised the arrogance of social platform leadership for too long and now it's coming back to bite all of us. The biggest worry is that there will not be much to gravitate back to if social media implodes... But it's also a great opportunity for IRC and websites to re-emerge, and for someone to hopefully invent something better than these giant platforms that really aren't "social" at all any more.
School is great. Not all of it obviously, and more for some people than others, but overall the pain and stresses of going to school were hugely outweighed by the fun and adventure of navigating through that system and interacting with all of those different peers and teachers. I wouldn't want to deny any kid those adventures and relationships and that exposure to our culture (for better and for worse). To be fair, I also grew up in a pretty great school system.
I feel the same way about college. I understand the arguments that college is not often worth the excessive cost if you look at it in terms of financial outcomes. But I didn't go to college for financial outcomes. I went because it was going to be (and was) a uniquely amazing adventure that I could never replicate at any other time in my life. I think going to grade school is the same thing.
Nothing can replace acting awkward around your crush in the hall, learning to navigate around the bully, bonding after class with a favorite teacher, enduring the horrors of PE or the excitement of a bomb threat, to name a few nostalgic examples.
> We used type IIac conic synthetic diamonds (supplied by Almax Easy-Lab) with ~30 micron diameter culet flats. About 5 microns were etched off of the diamond culets using the technique of reactive ion etching, to remove defects from the surface. The diamonds were then vacuum annealed at high temperature to remove residual stress. Alumina is known to act as a diffusion barrier against hydrogen. The diamonds, with the mounted rhenium gasket, were coated with a 50 nm thick layer of amorphous alumina by the process of atomic layer deposition.
Incredible technology!
> The pressure was initially determined to ~88 GPa by ruby fluorescence using the scale of Chijioke et al (20); the exciting laser power was limited to a few mW. At higher pressures we measured the IR vibron absorption peaks of hydrogen with a Fourier transform infrared spectrometer with a thermal IR source, using the known pressure dependence of the IR vibron peaks for pressure determination (see SM).
Just incredible!
> Photos were taken with a smartphone camera at the ocular of a modified stereo microscope
It's pretty obvious that Verilog and VHDL, modeled after C and Ada respectively, both imperative languages, follow a drastically mismatched paradigm for hardware design, where circuits are combined and "everything happens in parallel". It becomes even more obvious when you have tried a functional alternative, for example Clash (which is essentially a Haskell subset that compiles to Verilog/VHDL: https://clash-lang.org).
The problem is, it is hard, if not downright impossible, to get the industry to change. I have heard many times, in close to literally these words: "Why would I use any language that is not the industry standard". And that's a valid point given the current world. But even for people that are interested, it might just be hard to switch to something like Clash and not give up pretty quickly.
Unlike imperative languages, functional languages with a rich modern type system like Haskell are hard to wrap your head around. It's no news that Haskell can be very hard to get into, even for experienced software engineers. In 2005, after already having more than a decade of programming experience in C, C++, Java, various assemblers, Python (obviously not all of these at the same time) and many other languages, I thought learning any new language would mostly be "picking up new syntax". Yet Haskell proved me very wrong on that, so much so that it was almost like re-learning programming. The reward is immense, but you have to really want to learn it.
And to my surprise at the time, when I got heavily into FPGAs, the advantage proved to be even stronger when building sequential logic, because that paradigm just fits so much better. My Clash code is much smaller, but also much more readable and easier to understand than Verilog/VHDL code. And it's made up of reusable components, e.g. my AXI4 interfacing is not bespoke individual lines interspersed throughout the entire rest of the code. That's mainly because functional languages allow for abstraction that Verilog/VHDL don't, where often the only recourse is very awkward "generated" code (so much so that there is an actual "generate" statement that is an important part of Verilog, for example).
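To make the abstraction point concrete in a language-neutral way: Clash itself is a Haskell subset, but the core idea, circuits as composable stream functions, can be modeled as a hypothetical toy even in plain Python. The name `mealy` mirrors Clash's `mealy` combinator; everything else here is made up for illustration, and none of it is real hardware description:

```python
def mealy(step, state, inputs):
    """Toy model of a Mealy machine: fold a transfer function over an
    input stream, yielding one output per clock tick."""
    for x in inputs:
        state, out = step(state, x)
        yield out

# An up-counter as a pure transfer function: the state is the count,
# and the registered (old) value appears on the output each tick.
def counter_step(count, enable):
    return (count + 1 if enable else count), count

def counter(enables):
    return mealy(counter_step, 0, enables)

# Circuits compose like ordinary functions:
print(list(counter([True, True, False, True, True])))  # [0, 1, 2, 2, 3]
```

In Clash the same shape is roughly `mealy counterStep 0`, and the compiler turns the state into registers; the point is that the whole circuit is an ordinary higher-order function you can pass around, parameterize, and reuse, instead of a block of wires you have to `generate`.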
So by now, I have fully switched to using Clash for my projects, and only use Verilog and VHDL for simple glue logic (where the logic is trivial and the extra compilation step in the Verilog/VHDL-centric IDE would be awkward) or for modifying existing logic. But try to get hardware engineers who probably don't have any interest in learning a functional programming language to approach such an entirely different paradigm with an open mind. I've gotten so many bogus replies that just show that the engineer has no idea, on any level, what higher-order functional programming with an advanced type system is, and I don't blame them, but it makes discussions extremely tiring.
So that basically leaves the intersection of people that are both enthusiastic software engineers with an affection for e.g. Haskell, and also enthusiastic in building hardware. But outside of my own projects, it just leaves me longing for the world that could exist.
> I'm definitely awaiting for traveling salesman solver or similar, guided by NN, solving things faster and reaching optimality more frequently that optimized heuristic algos.
Just in case we are not being clear, let's be clear. Bluntly in nearly every practical sense, the traveling salesman problem (TSP) is NOT very difficult. Instead we have had good approaches for decades.
I got into the TSP writing software to schedule the fleet for FedEx. A famous, highly accomplished mathematician asked me what I was doing at FedEx, and as soon as I mentioned scheduling the fleet he waved his hand and concluded I was only wasting time, that the TSP was too hard. He was wrong, badly wrong.
Once I was talking with some people in a startup to design the backbone of the Internet. They were convinced that the TSP was really difficult. In one word, WRONG. Big mistake. Expensive mistake. Hype over reality.
I mentioned that my most recent encounter with combinatorial optimization was solving a problem with 600,000 0-1 variables and 40,000 constraints. They immediately, about 15 of them, concluded I was lying. I was telling the full, exact truth.
So, what is difficult about the TSP? Okay, we would like an algorithm for some software that would solve TSP problems (1) to exact optimality, (2) in worst cases, (3) in time that grows no faster than some polynomial in the size of the input data to the problem. So, for (1) being provably within 0.025% of exact optimality is not enough. And for (2) exact optimality in polynomial time for 99 44/100% of real problems is not enough.
In the problem I attacked with 600,000 0-1 variables and 40,000 constraints, a real world case of allocation of marketing resources, I came within the 0.025% of optimality. I know I was this close due to some bounding from some nonlinear duality -- easy math.
So, in your
> reaching optimality more frequently that optimized heuristic algos.
heuristics may not be, in nearly all of reality probably are not, reaching "optimality" in the sense of (2).
The hype around the TSP has been to claim that the TSP is really difficult. Soooo, the implication goes, given some project that is to cost $100 million, where an optimal solution might save $15 million, some software based on what has long been known (e.g., from G. Nemhauser) that can save all but $1500 of that is not of interest. Bummer. Wasted nearly all of $15 million.
For this, see the cartoon early in Garey and Johnson where they confess they can't solve the problem (optimal network design at Bell Labs) but neither can a long line of other people. WRONG. SCAM. The stockholders of AT&T didn't care about the last $1500 and would be thoroughly pleased by the $15 million without the $1500. Still that book wanted to say the network design problem could not yet be solved -- that statement was true only in the sense of exact optimality in polynomial time on worst case problems, a goal of essentially no interest to the stockholders of AT&T.
For neural networks (NNs), I don't expect (A) much progress in any sense over what has been known (e.g., Nemhauser et al.) for decades. And (B) whatever progress NNs might make promises to be in performance aspects other than getting to exact optimality.
Yes, there are some reasons for taking the TSP and the issue of P versus NP seriously, but optimality on real world optimization problems is not one of the main reasons.
Here my goal is to get us back to reality and set aside some of the hype about how difficult the real world TSP is.
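To make "good approaches for decades" concrete, here's a minimal, hypothetical sketch in plain Python (not any production scheduler): nearest-neighbor construction followed by 2-opt improvement, about the cheapest classic heuristic pair there is, and already enough to get respectable tours on random instances:

```python
import math
import random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts):
    # Greedy construction: always hop to the closest unvisited city.
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    # Local improvement: uncross pairs of edges until no swap helps.
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
greedy = nearest_neighbor(pts)
opt_tour = two_opt(pts, greedy[:])
print(tour_length(pts, greedy), tour_length(pts, opt_tour))
```

2-opt carries no worst-case guarantee, which is exactly the distinction drawn above: a guarantee-free heuristic that is nearly always close enough is a different, and far easier, problem than provable optimality on every instance in polynomial time.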
It's a bait and switch. They give the search engines full text to crawl, and then nothing when a user clicks that link from their search. They got the tangible monetary benefits of having a high Google search ranking, without giving access to the actual content, which is how the web is supposed to work.
It's not my fault that news websites have continually debased their users with horrible advertising and tracking, to the point where ad-supported news is pretty much no longer viable. There are plenty of examples of smaller content offerings like podcasts which do quite well being ad-supported, because they have the trust of their audience, and they only accept quality advertisers so they can charge a premium for their ad space.
Books are different; they are not the web. I find things like Google Books very helpful, even for looking up which page of a physical book I read something on, even though I can't read the full book through it.
I don't buy it. I remember real meetings. They were far more exhausting than any zoom call. Completely draining.
I guess the novelty of shuffling rooms every 45 minutes is enough to keep some people feeling "active". Or perhaps some people find it "exciting" to frantically have everyone check if there are any other nearby conference rooms with space since we're getting kicked out of this one because, unlike with a video call, the fact that everyone was "running 5 minutes late" cut into an actually scarce physical resource.
But here's my hot take: that feeling you have in video calls? It's you realizing for the first time what a meeting actually is: a waste of time. Without the chit chat and running across the hall, or the sky-high concentrated CO2 clouding your judgement, the meeting is distilled to its purest form. Since you're at home, "going to the meeting" isn't an excuse to escape your current surroundings. And since you're not walking there, the calories being burned aren't there to make you feel artificially productive when nothing meaningful took place. You're just finally seeing those 45 minutes slotted haphazardly into the middle of your day for what they really are. A waste of time.
But remember to reduce your personal carbon footprint! /s
We're expected to shave off a few kilos of CO2 here and there, usually at significant personal expense, while nothing is being done about these releases that a) could be avoided relatively easily b) have a much bigger impact.
The abstract mentions ~8 million metric tons of methane per year. Using a GWP of 28 (which seems to be the value from the IPCC's AR5) that's equivalent to around 224 million tons of CO2.
That's 224 billion kg CO2e, or around 28 kg CO2e for every human on the world, just from the large leaks (not from the ongoing smaller leaks which are >10x as big according to the article). So yeah, go ahead and tell me (and everyone else) to go vegan for a month [1] or make similarly impactful changes just so companies can continue venting methane.
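The arithmetic above is easy to check (population rounded to 8 billion):

```python
methane_t = 8e6        # metric tons of CH4 per year from large leaks (abstract)
gwp = 28               # 100-year GWP for methane, IPCC AR5
population = 8e9       # world population, roughly

co2e_t = methane_t * gwp                    # tons of CO2-equivalent
per_capita_kg = co2e_t * 1000 / population
print(co2e_t, per_capita_kg)  # 224000000.0 28.0
```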
To avoid being only negative: The obvious solution would be to require companies to buy emission certificates for those emissions, with a penalty factor that accounts for the probability of detection if they didn't accurately self-report and had to get caught. This would create an immediate incentive to reduce those easily avoidable emissions, and should be relatively easy to implement if we can get accurate enough data from satellites or aircraft. It would also ensure these emissions are accounted for in the emissions trading framework, so we don't emit the allocated amount elsewhere and are then surprised by the unaccounted-for emissions.
[1] Assuming I replace 250 kcal from pork per day with Tofu (around 100g of pork which seems to be considered "one serving"), this would be savings of ~1 kg CO2e per day: https://ourworldindata.org/grapher/ghg-kcal-poore
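As a toy expected-value check of the penalty-factor idea above (all numbers hypothetical): if a hidden leak is only detected with probability p, the fine per ton has to be at least the certificate price divided by p before honest reporting becomes the cheaper option in expectation.

```python
price_per_ton = 80.0   # hypothetical certificate price, $/t CO2e
p_detect = 0.25        # hypothetical chance a hidden leak is caught

break_even_fine = price_per_ton / p_detect   # fine that makes cheating pointless
cost_honest = price_per_ton                  # just buy the certificate
cost_cheat = p_detect * break_even_fine      # expected fine when hiding the leak
print(break_even_fine, cost_honest, cost_cheat)  # 320.0 80.0 80.0
```

Any fine above price/p makes honesty strictly cheaper in expectation, which is why a flat fine equal to the certificate price would not work: it would make hiding the leak the rational choice.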
> It's almost like some tiny extremist faction has gained control of Windows
This has been the case for a while. I worked on the Windows Desktop Experience Team from Win7 to Win10. Starting around Win8, the designers had full control, and most crucially, essentially none of the designers use Windows.
I spent far too many years of my career sitting in conference rooms explaining to the newest designer (because they seem to rotate every 6-18 months) with a shiny Macbook why various ideas had been tried and failed in usability studies because our users want X, Y, and Z.
Sometimes, the "well, if you really want this it will take N dev-years" approach staved things off for a while, but just as often we were explicitly overruled. I fought passionately against things like the all-white title bars that made it impossible to tell active and inactive windows apart (was that Win10 or Win8? Either way user feedback was so strong that it got reverted in the very next update), the Edge title bar having no empty space on top so if your window hung off the right side and you opened too many tabs you could not move it, and so on. Others on my team fought battles against removing the Start button in Win8, trying to get section labels added to the Win8 Start Screen so it was obvious that you could scroll between them, and so on. In the end, the designers get what they want, the engineers who say "yes we can do that" get promoted, and those of us who argued most strongly for the users burnt out, retired, or left the team.
I probably still know a number of people on that team, I consider them friends and smart people, but after trying out Win11 in a VM I really have an urge to sit down with some of them and ask what the heck happened. For now, this is the first consumer Windows release since ME that I haven't switched to right at release, and until they give me back my side taskbar I'm not switching.
This is great advice, even if it sounds cliché and obvious. I like to say that you have to set yourself up for success. That is, make it as easy as possible for your task to actually be easier and more straightforward (e.g. get the other stuff out of the way the day before, clean up, mute your phone, empty your agenda, etc.).

It's very common for us to do the exact opposite, and then we feel bad later because we failed. Of course! We made it even harder for ourselves than it already is. I think that many people (especially coders) got used, earlier in life, to doing things naturally, to having natural motivation to work, and that was enough. If at some point that's no longer the case, because you're older, have more responsibility, are tired, or a combination of all of these, you lose your "method" and don't know what to do.

So: recognize the difficulty of what you are going to do, and set yourself up for success. And also learn to expect less from yourself and be happy when small goals are achieved (you'll learn this the hard way anyway; after not doing anything, even a small thing will feel like success).
Compiz was probably the single most impactful event for desktop linux of the mid-2000s. Not that it itself made much of a difference, but the ripples affect many areas we take for granted today. A few reasons:
- it put linux ahead of windows and mac in terms of appearance;
- it brought many new users, most of them a good mix of technical users and enthusiasts;
- it showed the advantages of modular software;
- many plugins were useful and these useful plugins influenced desktops to this day;
- it was fast, stable and cool enough;
- it brought many new developers;
- it was an incentive for vendors to improve 3d linux drivers;
- it made X.Org developers improve redirection;
- it came by default on the most popular distro from 2006 to 2012.
Yes, most of the effects were useless but even they helped developers and designers to decide what not to include or do in the future. It pioneered useful things like selecting an area of the screen and saving it directly to a file, useful zoom and quick visualization of non-visible windows. It also showed how important compositing was on the desktop. Although probably not a direct influence, there is a reason android, wayland and whatever comes with ChromeOS all have compositing features.
At the time, there were some interesting developments and experiments: Métisse, Sun's Looking Glass, BumpTop, DeskGallery... none of them was as successful as Compiz. I'm proud I was part of it (https://www.youtube.com/watch?v=-X9bcrJ3TjY) and have my name written in some of its source files to this day, even if almost nobody uses it anymore.
Around 20 years ago, I disassembled various electronics and scanned them on a flatbed scanner. The scans were 300dpi. Gameboy Color and Gameboy Pocket were among the items I scanned. You can see the scans at https://waltersgameboy.tripod.com/
I don't remember why I didn't scan my DMGB. Either I had lost it, or it was too precious to me to risk breaking. As a kid, I bought it with my own money. I was probably 11 years old.
edit: I did find my DMGB eventually when I moved out of my family home.
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird not to be. But what's really crazy is that it does work. I'm using it right now. It doesn't just work; it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good, but it really does take forever. My go-to example for both is the color management protocol: it's probably one of the most solid color management APIs so far, and it took about 5 years from the MR opening to merging.
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken the stance of not just implementing raw tools that can be used to build various UI features, but instead implementing protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks and drags on a draggable client area, all the client does is signal that this happened (the xdg_toplevel::move request, which passes along the triggering input event's serial), and the compositor takes over from there and initiates the drag.
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
But that's definitely where things are stuck. Some applications have UI features that they can't implement in Wayland. xdg-session-management, for saving and restoring window positions, is still not merged, so there is no standard way to do this in Wayland. ext-zones, for positioning a multi-window application's windows relative to each other, is still not merged either. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into the application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), and there is xdg-foreign, which is supported by many compositors (GNOME, KDE, Sway, but missing in Mir, Hyprland and Weston. Fragmentation!) but doesn't support everything you could do in X11 (like passing an xid to mpv to embed it in your application, for example.)
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.