Between the genuine weirdos and the autistic and/or neurodivergent, is there anyone left, really? Do the "normies" genuinely exist? Happy-go-lucky, knows a bit about everything but doesn't nerd out on anything, picks up any conversation topic, listens, and holds their own in a way that is just right? I am genuinely curious about the existence of these "superhumans".
There are many, many of these socially skilled normies. But, by virtue of being socially skilled, most have already pretty much filled up their social capacity and don't tend to show up at the kinds of venues dedicated to helping under-socialized people meet up.
The "normie" doesn't really exist. Everyone is kind of weird in some aspect, which might not be obvious on a surface level.
But having gone to a bunch of programming meetups, the majority of people are perfectly pleasant and good to socialise with. The weirdos are usually non-tech people who have an app or crypto idea they want help with, or total crazy people who showed up to the first event they could find regardless of topic.
While there is often a "normal" (bell-curve fitting) distribution for individual factors, putting them together can be counter-intuitive.
> Even when considering just three dimensions, fewer than 5% of pilots were “average” in all. [1]
I would guess many/most people probably think they fall into either (1) the normal bucket or (変) the weird/fringe bucket. Either "I am pretty normal" or "I am an outsider". How many think "We're all fairly different once you cluster in any 3 interesting dimensions!"?
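If you want a rough sense of why that happens: call the "average" band the middle ~30% of each dimension and, as a simplifying assumption (not a figure from the study), treat the dimensions as independent, and the share of people who are average on everything collapses quickly. A throwaway sketch:

```typescript
// Back-of-the-envelope: probability of landing in the "average" band on every
// dimension, assuming a ~30% band per dimension and independence between
// dimensions (both are assumptions for illustration, not from the pilot study).
const bandShare = 0.3;

for (const dims of [1, 2, 3, 5, 10]) {
  const allAverage = Math.pow(bandShare, dims);
  console.log(`${dims} dimension(s): ~${(allAverage * 100).toFixed(1)}% "average" on all`);
}
// 3 dimensions -> ~2.7%, the same ballpark as the "<5%" figure quoted above.
```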
But people feel that dichotomy, which makes me think it is largely about perception relative to a dominant culture: the in-group versus out-group feeling. For example, atheists might feel like outsiders in many parts of the U.S., but less so in big cities and in other countries. In dense urban walkable cities (like NYC), people see diversity more directly and more often. Seeing a bunch of people is different than seeing a bunch of cars.
I think it should be fairly easy to determine whether atheists really are outsiders in parts of the US or whether it's just perception: just look at voting results and church attendance for any given area. I don't think it's merely perception at all; visit any rural area and you'll likely see a surprising number of churches relative to the population.
Also, seeing people walking around in public doesn't tell you anything about their religious beliefs unless they're in some sect where they make it obvious with their clothing or hairstyle.
It's a Japanese word for "weird". I'm guessing that OP is a bit of an otaku (aka "obsessed with Japan") -- which is either ironic or completely appropriate.
My first thought was that they were an LLM, but then checking their profile it seems they've been around since 2012 and have a comment expressing that they seem to get accused of being an LLM a lot, and suggesting people don't do that.
> Quite soon these accusations will nearly always be accurate.
/headscratching They don't have to be, do they? It is possible that some people will build identity systems with norms that e.g. humans type with their own hands. These could become popular, at least conceivably, in certain areas. Hard to enforce for sure. And getting harder and harder to distinguish reliably.
The hero image on the linked page, which consists of a muted teal background with the words "Introducing Muse Spark", weighs in at 3.5 MB. I don't even...
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
Maybe they did get their models to test their pages, but they didn't tell their models to pretend that they're browsing on mobile using a 3G connection.
Good catch -- looks like it's a PNG image, with an alpha channel for the rounded corners and a subtle gradient in the background. The gradient is rendered with dithering, to prevent colour banding. The dither pattern is random, which introduces lots of noise. Since random noise is essentially incompressible, the PNG comes out at an enormous 6.2 bits per pixel.
While working on a web-based graphics editor, I've noticed that users upload a lot of PNG assets with this problem. I've never tracked down the cause... is there a popular raster image editor which recently switched to dithered rendering of gradients?
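If anyone wants to convince themselves that the random dither is what kills the file size: PNG compression is DEFLATE under the hood, and DEFLATE handles a smooth gradient almost for free but chokes on per-pixel noise. A rough sketch (Node/TypeScript, using raw deflate as a stand-in for a PNG encoder; the sizes are illustrative, not from the actual asset):

```typescript
import { deflateSync } from "node:zlib";

const W = 1024, H = 512;

// One 8-bit channel of a smooth horizontal gradient...
const smooth = Buffer.alloc(W * H);
// ...and the same gradient with +/-1 random dither added per pixel.
const dithered = Buffer.alloc(W * H);

for (let y = 0; y < H; y++) {
  for (let x = 0; x < W; x++) {
    const v = Math.round((x / (W - 1)) * 255);
    smooth[y * W + x] = v;
    const noise = Math.floor(Math.random() * 3) - 1; // -1, 0 or +1
    dithered[y * W + x] = Math.min(255, Math.max(0, v + noise));
  }
}

console.log("smooth gradient:  ", deflateSync(smooth).length, "bytes");
console.log("dithered gradient:", deflateSync(dithered).length, "bytes");
// The dithered buffer typically compresses an order of magnitude worse,
// because the random component has to be stored bit for bit.
```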
My reasoning is that, once upon a time, I was using Macromedia Fireworks, and PNGs gave far, far better results than JPGs did at the time, at least in terms of output quality. Almost certainly because I didn't understand JPG compression, but for web work in the mid 2000s PNGs became my favourite. Not to mention proper alpha channels!
I am simply offended -- by Meta's lack of sensibility (or ability) regarding the use of images on the Web, while touting their new flavour of artificial intelligence as a product.
> But I suspect that only creatures that have hopes and dreams and fears similar to our own would actually have much impact on human loneliness. And finding that sort of creature may be a long shot indeed.
I've been saying this for many years, and have had the same suspicion for longer. First of all, for most people beyond (justifiably) myopic zoologists or biologists, it's not about "alien" life -- there's plenty of that, arguably, in the Mariana Trench etc.; no one's noticing, because it's an answer to the wrong question. Extrapolating this, it's not even about _extra-terrestrial_ life necessarily -- not for some of us. Finding living bacteria of non-terrestrial origin on Mars is going to be amazing and an epic discovery, but it's still an answer to the wrong question. Deep down, our search is motivated by the desire to communicate, to ask, as if of our own mirror: "what is going on?", "why do we exist?", "have you guys figured it out?", and last but not least, "we are excited to meet you; for all our numbers, we've been feeling lonely with so much space, thinking we were alone". Many a sci-fi author expresses the question much better, because that's the one that matters.
But to placate the level-headed empiricists -- yes, discovering bacteria or alien jellyfish in the interstellar void is of course scientifically a big thing. But I suspect we are just being cautious, not wanting to utter that in discovering these we just want to get _more_ excited about the possibilities the former allows -- that we _will_ meet sentient, intelligent beings who will at the very least understand us (with due effort), sort of like the extended family we suspect we have and have always wanted to meet, but whose meeting is always postponed.
For all his infamy, Jobs held Apple together in large part through his uncompromising perfectionism and attention to the kind of details that have since been demoted to "we'll fix it in the next version" or the equivalent of "# temporary". Every company is a bit of an ant-farm, but this one either has no single queen to lay down the law, or the queen is "trying things out" :P
Jobs used to laugh at Microsoft for all manner of inconsistencies in behaviour and user experience with Windows, but now Apple is contending with the same problem -- in part due to exposure, as macOS has never been so popular and prevalent, and there is an ever-growing number of eyes calling them out for the inconsistencies that have been appearing more and more frequently without Jobs' leadership style.
I see your point, but I don't think it was Jobs per se who held Apple together. Tim Cook has been doing that as well, and arguably on a far larger scale.
The one thing that has distinguished Jobs from everyone since is the fact that he was Apple's greatest fanboy. If you have a look at the iTunes introduction, Jobs sits there and showcases every feature and function for around two hours. He was so into the product that this keynote is, for me, the nerdiest he ever conducted.
The other keynotes likewise always show him as the company's No. 1 fan, presenting every feature there is.
Imagine having a boss like this. He set the standard for product development in every regard.
And this is what has slipped. Consistency is lacking, and according to biographies about Cook, he has a huge focus on himself as a person. That is always wrong. It is about the product, nothing else.
There will never be a Jobs again. And it is getting worse from here: the old guard is mostly gone. Even the myth of Steve Jobs isn't something Gen Z cares about.
We live in the post-Jobs phase, and Cook seems to be overshadowing Jobs, as sad as that is. All the innovations except the headphones date back to Jobs; all the scale that Apple has reached is down to Cook.
I bet Jobs would rather have had a far smaller scale with great products. This luxury lifestyle is not something Jobs liked.
You have to realise every single UI up to that point was solid white or grey and unable to access alpha channels. And the fact that they expanded upon this design “language” with the transparent iMac cases made it all cohesive and Y2K hip.
Having used `jq` and `yq` (which followed from the former, in spirit), I have never had reason to complain about the performance of the _latter_, which is an order of magnitude (or several) _slower_ than the former. So if there's something faster than `jq`, it's laudable that the author of the faster tool accomplished such a goal, but in the broader context I'd say the performance benefit is needed only by a niche slice of the userbase. People who analyse JSON-formatted logs, perhaps? Then again, newline-delimited JSON reigns supreme in that particular kind of scenario, making the point of a faster `jq` moot again.
However, as someone who has always loved faster software and is a bit of an optimisation nerd: hats off!
When integrating with server software, the performance is nice to have, as you can have, say, 100 kRPS of requests coming in that need some jq-like logic. For a CLI tool, like you said, the performance of any of them is OK for most cases.
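To be concrete about what I mean: at that rate you really don't want to shell out to a `jq` process per request or per log line; you want the jq-like projection in-process. A toy sketch of the NDJSON case (the field names are invented for the example):

```typescript
import { createInterface } from "node:readline";

// Toy stand-in for a jq filter like: select(.level == "error") | {ts, msg}
// The field names (level, ts, msg) are made up for illustration.
function project(line: string): string | null {
  const rec = JSON.parse(line);
  if (rec.level !== "error") return null;
  return JSON.stringify({ ts: rec.ts, msg: rec.msg });
}

// Stream newline-delimited JSON from stdin, one record per line.
const rl = createInterface({ input: process.stdin });
rl.on("line", (line) => {
  const out = project(line);
  if (out !== null) console.log(out);
});
```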
Indeed, thanks for spotting that; I myself remember discovering there are at least two. Thing is, I learned and started with Mike Farah's `yq`, not the pass-through-to-`jq` variant written in Python that's often more easily (read: via the system package manager) available. Both the semantics and the syntax differ a bit between the two.
A bit of a fun fact: there's a quote by Farah where he said that the language and semantics of the tool he was writing didn't really "click" until he was well into writing it :-) I myself have on occasion been pulling my hair out trying to wield `yq`'s language; there are some inconsistencies here and there which I think are related to the novel nature of the language (not novel to everyone, but uncommon even for those well versed in e.g. SQL). `jq` suffers from similar woes, but to a lesser degree.
When you say "unit of work", unit of _which_ work are you referring to? The problem with rebasing is that it takes one set of snapshots and replays them on top of another set, so you end up with two "equivalent" units of work. In fact they're _the same_ indeed -- the tree objects are shared, except that if by "work" you mean changes, Git is going to tell you two different histories, obviously.
This is in contrast with [Pijul](https://pijul.org) where changes are patches and are commutative -- you can apply an entire set and the result is supposed to be equivalent regardless of the order the patches are applied in. Now _that_ is unit of work" I understand can be applied and undone in "isolation".
Everything else is messy, in my eyes, but perhaps it's orderly to other people. I mean it would be nice if a software system defined with code could be expressed with a set of independent patches where each patch is "atomic" and a feature or a fix etc, to the degree it is possible. With Git, that's a near-impossibility _in the graph_ -- sure you can cherry-pick or rebase a set of commits that belong to a feature (normally on a feature branch), but _why_?
By "unit of work", I mean the atomic delta which can, on its own, become part of the deployable state of the software. The thing which has a Change-Id in Gerrit.
The delta is the important thing. Git is deficient in this respect; it doesn't model a delta. Git hashes identify the tip of a tree.
When you rebase, you ought to be rebasing the change, the unit of work, a thing with an identity separate and independent of where it is based from.
And this is something that the jujutsu / Gerrit model fixes.
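A toy way to see why the commit hash can't be that identity: the hash covers the parent along with the tree and message, so rebasing the same delta onto a new base yields a new id, while a Gerrit-style Change-Id trailer rides along unchanged. (A deliberately simplified sketch, not the real Git object format; all the ids and names are made up.)

```typescript
import { createHash } from "node:crypto";

// Simplified model of a Git commit id: it hashes the tree, the parent and the
// message together, so changing the parent alone changes the id.
function commitId(tree: string, parent: string, message: string): string {
  return createHash("sha1")
    .update(`${tree}\n${parent}\n${message}`)
    .digest("hex")
    .slice(0, 12);
}

// Hypothetical change with a Gerrit-style Change-Id trailer in the message.
const message = "Fix login redirect\n\nChange-Id: I0123456789abcdef";
const tree = "tree-after-the-fix"; // same content either way

const original = commitId(tree, "parent-main-on-monday", message);
const rebased = commitId(tree, "parent-main-on-friday", message);

console.log(original === rebased);           // false: two ids for the "same" unit of work
console.log(message.includes("Change-Id:")); // true: the identity lives in the trailer
```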
Just want to raise my hand and say I too have been using em dashes for considerably longer than LLMs have been on every hacker's lips. It's obviously not great being accused of being an AI just because one has a particular style of writing...
Every time Web Components are put front and centre, one has to duly inform the reader that Apple _rightfully_ refuses to implement what is, in my humble opinion, at least one broken piece of the specification -- one that, if implemented (and it is implemented faithfully by Chrome and Firefox), in principle breaks the Liskov Substitution Principle:
* https://lists.w3.org/Archives/Public/public-webapps/2013OctD...
* https://lists.w3.org/Archives/Public/public-webapps/2016JanM...
To be fair, this only concerns so-called "custom elements" that need to inherit existing HTML element functionality, but the refusal is well explained, IMO. Meanwhile everyone else is just chugging along, as tends to happen on the Web (e.g. the History API giving way to the Navigation API, which in large part was designed to supersede the former).
To all of the above I might add that without such "custom elements", Web Components are severely crippled as a feature. If I want to subclass existing functionality, say a `table` or `details`, composition is the only means to do it, which, in the best style of the Web, produces a lot of extra code no one wants to read. I suppose minification is supposed to eliminate the need to read JavaScript code, and 99% of websites out there feature an absolutely unreadable slop of spaghetti code that wouldn't pass paid review in hell. With Web Components that don't support these "custom elements" (e.g. in Safari), it's essentially an OOP science professor's toy or totem. And since professors like their OOP theory, they should indeed take Liskov's principle to heart -- meaning the spec is botched in part.
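For anyone who hasn't run into it: the part Safari refuses is the "customized built-in" form, where you extend the built-in element's class and keep all of its behaviour; without it you're left wrapping the real element inside an autonomous custom element. A rough sketch (the element names and the sorting behaviour are invented):

```typescript
// Customized built-in: works in Chrome and Firefox, not implemented in Safari.
class SortableTable extends HTMLTableElement {
  connectedCallback() {
    // Inherits all of <table>'s behaviour; adds sorting on top.
    this.addEventListener("click", () => {
      /* sort rows here */
    });
  }
}
customElements.define("sortable-table", SortableTable, { extends: "table" });
// Used as: <table is="sortable-table">...</table>

// Composition fallback that works everywhere: wrap a real <table> inside an
// autonomous custom element and forward whatever API you need.
class SortableTableWrapper extends HTMLElement {
  connectedCallback() {
    const table = this.querySelector("table");
    table?.addEventListener("click", () => {
      /* sort rows here */
    });
  }
}
customElements.define("sortable-table-wrapper", SortableTableWrapper);
// Used as: <sortable-table-wrapper><table>...</table></sortable-table-wrapper>
```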
I am reading opinions here from agent users, but I haven't adopted the "agentic workflow" myself, because I believe I am (for now) getting more than my trouble's worth using Gemini (3 Pro) in the traditional conversational manner. It is adequate at suggesting solutions in the form of code, or reasoning in general. My problems are software engineering but also everything that is not; since I have a subscription, it's my go-to problem-solving partner. I see no reason to switch to another product for now either; I am constantly in the loop, getting samples of chats with Grok and ChatGPT, and it seems a very close race. If Claude is that one race horse that's built different -- and I can absolutely believe it is so, because they have rightfully tuned it -- I am not convinced I am missing out on much.

But maybe that's because I am more of a traditionalist compared to everyone who has embraced the idea of having an agent run a loop on their workstation(s) and trusting it to deliver. Perhaps if I were on a tighter time frame I'd be pressed to do so myself, but for now I am already benefiting from the extra speed of "rubberducking" with Gemini on all manner of software engineering problems I need to solve, so I simply have no reason to abandon it.

I think this is also Google's strength -- they have the data, they've already integrated Gemini (or a variant of it) into google.com, which is one of their prized cash cows, and it's everywhere else too. Like others here have said, Google may not have the absolute best in class at all times, but they're fairly good, and they still have the brains that gave us DeepMind and the transformer architecture GPT is built on; unless there's some sort of stagnation going on in their ranks, I expect they're not resting on their laurels. With their capital they're still at the head of the race. Anthropic and OpenAI have the benefit of being nimble, though, and it shows too. Anyway, competition is good; the cat's out of the bag and on the greener side of the river :-)
I think your wish is self-contradictory. `#fff` is a so-called _device_ colour -- a device like an LED-based display uses it directly to drive the LEDs, where `#fff` means that the red, the green and the blue channel are already "cranked to 11" (the `f` here is the 11). HDR uses a different colour format, I think -- precisely because `#fff` is otherwise either ambiguous, or has to map to a different colour gamut -- where, for instance, `#fff` actually means the whitest white cranked up to 11, at however many nits (say 1500) the monitor may emit, which would put your "standard" or "SDR" white (per sRGB, say), which usually has an emitted strength of around 100 nits, somewhere around `#888` (I haven't taken into account the _curve_ implied here; at any rate I don't think the relationship between nits and the device's N-bit primary values is going to be linear).
Also, `#fff` is ambiguous -- if you mean device colour, then there's no brightness (nits) specified at all; it may be 200 or 2,000 or 10,000. If sRGB is implied, as in "`#fff` in the sRGB colour space", then the standard specifies 80 nits, so if you say you don't want anything brighter than that, you can't have much of an HDR, since sRGB precludes HDR by definition (you can't go brighter than 80 nits for the "white point", aka white).
I think if you want HDR you need a different colour space entirely, which either has a different peak brightness, or one where the brightness is specified in addition to e.g. the R, G and B primaries. But here my HDR knowledge is weak -- perhaps someone else may chime in. I just find colour science fascinating, sorry to go on a tangent here.
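To put very rough numbers on my own hand-waving above: assuming a 1500-nit peak and an 80-nit SDR reference white, and pretending the transfer curve is a plain gamma 2.2 (a real HDR pipeline would use PQ or HLG, so treat this purely as illustration):

```typescript
// Where an 80-nit "SDR white" would land on a 1500-nit display's 8-bit scale,
// under an assumed gamma-2.2 transfer curve (illustration only, not PQ/HLG).
const peakNits = 1500;
const sdrWhiteNits = 80; // sRGB reference white per the spec

const linear = sdrWhiteNits / peakNits; // fraction of peak in linear light
const code = Math.round(255 * Math.pow(linear, 1 / 2.2));

console.log(linear.toFixed(3)); // ~0.053
console.log(code.toString(16)); // ~0x43 per channel -- i.e. once the curve is
                                // accounted for, SDR white sits well below #888
```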
It wasn't obvious to me -- I misread "blue" as "white".
`#fff` is a device colour; it's short for `#ffffff`, which is 24-bit RGB that predates sRGB, as does true-colour device support. I was sending 24-bit RGB to VESA-compliant graphics cards before sRGB became a thing. `#fff` was supported by Photoshop and Macromedia products as a straightforward device colour format before sRGB was adopted by at least the latter, mind you. The use in CSS is coincidental; that's not where the format was introduced.
I meant #fff as the nominal white point, which is not the brightest white the device can produce. It's how #fff is displayed in Firefox on an HDR screen. Assuming the screen is not set to max brightness and is HDR capable, that means #fff is not the max brightness an individual pixel/subpixel can produce, so desaturating the blue colour just to make it brighter can be an unnecessary compromise, dictated only by colour representation in the software stack.