Why does documentation require hosting it on a server? My assumption is that it's a static site, and as such, even GitHub Pages would be sufficient.
I know... all content has to be served via a "server", but in the case of OVH it's a full-blown hosting solution, isn't it?
Besides, I'm sure GitHub wouldn't mind supporting Pandas documentation. They do it for a million other projects for free (even though they're not popular among the HN crowd these days)
I decided to get back into reading two years ago and I picked this as one of the first ones to get started with, given it was a small book. I absolutely love Arthur C. Clarke's style of helping you visualize the grand scenes.
His books are more plot driven and the characters are pretty flat, but it's so damn fun to read through!
Morgan Freeman has been trying to get the movie adaptation made since the early 2000s and wants to play Commander Norton. I had read that Denis Villeneuve (the director of the new Dune movies) was attached to direct the adaptation, but it seems like his schedule is really busy. He recently finished filming Dune Messiah, and then he's got the next James Bond movie to deliver.
I seriously believe that it's not GitHub running on AI-generated code that's responsible for this slew of recent outages. I think it's crumbling under the load of a significantly larger amount of AI-enabled coding, with users raising PRs and pushing content a lot more than before.
Obviously, if this is true, the team at GitHub is failing to scale their infra to meet the workload demands.
I place considerable doubt on claims of LLMs improving the user's thought process.
Especially since everyone harps on about it but never provides concrete evidence. If your thinking has sharpened, surely you can find a way to demonstrate how.
I suspect it's one of those things where the user thinks they have improved but the reality is different.
There's a research paper from the University of Liverpool, published in 2006, where researchers asked people to draw bicycles from memory to show how people overestimate their understanding of basic things. It was a very fun and short read.
It's called "The science of cycology: Failures to understand how everyday objects work" by Rebecca Lawson.
There’s also a great art/design project about exactly this. Gianluca Gimini asked hundreds of people to draw a bicycle from memory, and most of them got the frame, proportions, or mechanics wrong.
https://www.gianlucagimini.it/portfolio-item/velocipedia/
A place I worked at used it as part of an interview question (it wasn't some pass/fail thing to get it 100% correct, and was partly a jumping-off point to a different question). This was in a city where nearly everyone uses bicycles as everyday transportation. It was surprising how many supposedly mechanically-focused people who rode a bike every day, who had even ridden a bike to the interview, would draw a bike that would not work.
I wish I had interviewed there. When I first read that people have a hard time with this I immediately sat down without looking at a reference and drew a bicycle. I could ace your interview.
This is why at my company in interviews we ask people to draw a CPU diagram. You'd be surprised how many supposedly-senior computer programmers would draw a processor that would not work.
If I was asked that question in an interview to be a programmer I'd walk out. How many abstraction layers either side of your knowledge domain do you need to be an expert in? Further, being a good technologist of any kind is not about having arcane details at the tip of your frontal lobe, and a company worth working for would know that.
A fundamental part of the job is being able to break down problems from large to small, reason about them, and talk about how you do it, usually with minimal context or without deep knowledge in all aspects of what we do. We're abstraction artists.
That question wouldn't be fundamentally different than any other architecture question. Start by drawing big, home in on smaller parts, think about edge cases, use existing knowledge. Bread-and-butter stuff.
I much more question your reaction to the joke than using it as a hypothetical interview question. I actually think it's good. And if it filters out people that have that kind of reaction then it's excellent. No one wants to work with the incurious.
If it was framed as "show us how you would break down this problem and think about it" then sure. If it's the gotcha quiz (much more common in my experience) then no.
But if that's what they were going for it should be something on a completely different and more abstract topic like "develop a method for emptying your swimming pool without electricity in under four hours"
It has nothing to do with “incurious”. Being asked to draw the architecture for something that is abstracted away from your actual job is a dickhead move because it’s just a test for “do you have the same interests as me?”
It’s no different than asking for the architecture of the power supply or the architecture of the network switch that serves the building. Brilliant software engineers are going to have gaps on non-software things.
That's reasonable in many cases, but I've had situations like this for senior UI and frontend positions where they don't ask UI or frontend questions at all, and instead ask their pet low-level questions. Some even snort that asking UI questions is softball, or shrug that "they use whatever". It's like, yeah, no wonder your UI is shit and now you're hiring to clean it up.
> Without a clear indicator of the author's intent, any parodic or sarcastic expression of extreme views can be mistaken by some readers for a sincere expression of those views.
If that's the goal, the technology for how these agents "learn" would be the most interesting one, even more than the demos in the link.
LLMs can barely remember the coding style I keep asking them to stick to, despite numerous prompts and stuffing that guideline into my (whatever the newest flavour of product-specific markdown file is). They keep expanding the context window to work around that problem.
If they have something for long-term learning and growth that can help AI agents, they should be leveraging it for competitive advantage.
You only notice this stuff if you use the shell very often and practically live in the command line. Since I started my career I've been using omz, and a fresh install is always snappy, but over time it starts getting slow.
Debugging/profiling why it's gotten slow has mostly been an uphill battle for me. I tried using zprof, which pointed to compdef and compinit as the culprits. I tried changing my config to run compinit's full scan only once a day, since most people reported that working, but it never did for me. This kind of stuff pokes and stabs at you endlessly.
OMZ being shell, and a maze of a codebase, I couldn't track down whether and where compinit was being called from even after the config change above, because all the profilers pointed to the possibility of compinit being called twice.
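For reference, the once-a-day compinit trick I tried is roughly the widely circulated glob-qualifier version below (the dump path is an assumption; adjust for your setup, and note the `(#q...)` form may require EXTENDED_GLOB depending on your zsh version):

```sh
# ~/.zshrc fragment: only rebuild the completion dump if it's >24h old.
autoload -Uz compinit
if [[ -n ${ZDOTDIR:-$HOME}/.zcompdump(#qN.mh+24) ]]; then
  compinit        # dump is stale (older than 24h): do the full, slow scan
else
  compinit -C     # dump is fresh: reuse it and skip the security check
fi
```

The catch, as described above, is that this only helps if nothing else (such as a plugin inside omz) calls compinit a second time behind your back.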
I gave up and started using barebones zsh + starship because I do need a good prompt. Yet the issues persisted.
I recently started using fish + starship on my local machine so that I could evaluate it before committing to it at work. It's the fastest shell so far (maybe because it's new; I intend to find out).
My only pain point now is that I have a bunch of utility functions I've maintained in bash that I need to port to fish because of the POSIX incompatibility.
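To illustrate the kind of porting involved, here's a hypothetical example (the function is made up, not one of mine): a small bash helper and its fish equivalent, which would live in `~/.config/fish/functions/mkcd.fish`:

```fish
# bash version:  mkcd() { mkdir -p "$1" && cd "$1"; }
# fish version — note $argv instead of $1, `; and` instead of `&&`,
# and function/end blocks instead of braces:
function mkcd --description "mkdir -p a directory and cd into it"
    mkdir -p $argv[1]; and cd $argv[1]
end
```

It's mostly mechanical, but anything relying on bashisms like arrays or `[[ ]]` needs a genuine rewrite rather than a syntax swap.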
We keep a precomputed cityHash64 value for a few columns we know are going to be used for aggregations. Rather than relying on ClickHouse to compute it internally, I've found this explicit approach to be faster.
Especially in a multi-tenant architecture, it helps to have the cityHash64 calculated as a combination of the tenant ID and another column, so the overall amount of data scanned is lowered too.
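A minimal sketch of what this can look like (table and column names are made up for illustration), using a MATERIALIZED column so the hash is computed once at insert time rather than per query:

```sql
-- Precompute the tenant-scoped hash at insert time.
ALTER TABLE events
    ADD COLUMN agg_key UInt64
    MATERIALIZED cityHash64(tenant_id, user_id);

-- Aggregations then group on the stored value directly:
-- SELECT agg_key, count() FROM events GROUP BY agg_key;
```

Whether this beats letting ClickHouse hash on the fly depends on your workload; it traded a little storage for query-time CPU in our case.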
I find this to be a very amusing critique. In my experience, Notion (when I stopped using it 3 years ago) was slow as molasses. Slow to load, slow to update. In comparison, at work, I almost exclusively favor Confluence Cloud. It's very responsive for me.
We have tons of Confluence wikis, updated frequently.
I think it might be the same issue as with WordPress and Jira: terrible plugins. Each company uses its own special mix and hits issues that only occur in that one specific configuration, and it's the base platform that takes the blame.
In particular, a place I used to work had a plugin for threaded comments in Jira. The specific one we were using slowed things down noticeably even with the DB on the same server, but not so much that it stopped being a net improvement in usefulness.
Then we tried to make our Jira more reliable by splitting the DB out into a separate clustered DB system in the same data center. The latency of going through a couple of switches to another system really added up across those extra 1,600 or so DB calls per page load.
We ended up doing an emergency reversion to an on-host DB. Later, we figured out what was causing that many queries.
You're referring to the on-prem Jira. That might suck, sure. My experience has been purely using Jira Cloud and Confluence Cloud, both of which I've found to be snappy and responsive.
Amusingly, I've had exactly the opposite experience. That said, our on-prem setup is Jira and Confluence integrated, with the DB on the same machine and Apache in front doing additional caching. I imagine, like so many things, it comes down to how you set it up...
If you read my previous comment, I said it was largely the specific poor plugin that caused most of the performance issue with the database queries. I never complained about the overall speed of on-prem Jira. That was the assertion of the person who’s only ever used the cloud version.
My last company switched several teams to Jira Cloud. My current company started with Cloud when we moved over from other tools.
Cloud does not give you the flexibility of your own plugins, your own redundancy design, or your own server upgrades. On top of that, the performance is pretty variable and is far worse than a self-hosted Jira on fast hardware.
It’s interesting to me that your lack of experience to make a comparison qualifies you in some way to criticize the experience I actually have.