That chart is comparing points in present time, not points in generation-relative time, i.e. zoomers at ~25, millennials at ~40. If you were to approximately age-sync the red and yellow lines on that chart by moving their start dates to the same point, the red line is higher.
There are several charts; the second is "Gen Zers, Millennials Less Likely to Own Homes Than Their Parents at the Same Age", which does a direct "at same age" comparison and shows that Gen Z started off slightly stronger than millennials but fell behind.
I do wonder about how they're calculating some of this. It looks like the chart is saying 16% of the cohort born between 1981 and 1996 (aka millennials) owned a home in 2000. I wouldn't even expect 16% of that group to have been over 18 in 2000.
Thanks for sharing! Seems to show that both are doing poorly relative to earlier generations, and it doesn’t seem Gen Z is greatly (or much at all) outpacing millennials.
As a millennial, I apologize for the blame and hate the boomer generation gets. But I think it's important to understand why the hate exists.
Many boomers grew up in an era where even if you dropped out of high school and waited tables full time for a few years, you'd be able to afford to buy a house and start a family by age 25. Sure, interest rates were 20%, but the price of a house was often just 2-3x someone's annual salary (single earner). Now the price of a house is often 4-5x a household's annual salary.
Boomers also had access to stuff like pensions.
I think boomers wouldn't get hate if it weren't a trope for them to say that the millennial generation is lazy, entitled, etc., when millennials have to be extraordinary in order to live what used to be an ordinary life (3-bedroom house, 2 kids).
It's not clear whether you're ramping up the virtue signalling, being polite, or trying to be the argumentative strongman. If you care about the way you look to me, the former and latter would probably be my perception of you. Out of good faith I shall follow the middle option and execute the jump instruction myself.
I have watched enough of Fox News to know they are not reporting on a number of subjects. And so people who only consume right-wing news are in a bubble, and their arguments do follow similar patterns based on that ignorance.
My response was what it was: just seeing the pattern presenting itself early. Why waste more time engaging with BS? Arguing with True Believers using facts and citations goes nowhere.
I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would.
Here are the things I look at in order:
* last commit date. Newer is better
* age. Old is best if still updating. New is not great, but tolerable if commits aren't rapid
* issues. Not the count, mind you, just looking at them. How are they handled, what kind of issues are lingering open.
* some of the code. No one is evaluating all of the code of libraries they use. You can certainly check some!
What do stars tell me? They are an indirect variable caused by the above things (driving real engagement and third-party interest), or otherwise fraud. The only way to tell is to look at the things I listed anyway.
I always treated stars like a bookmark "I'll come back to this project" and never thought of it as a quality metric. Years ago when this problem first surfaced I was surprised (but should not have been in retrospect) they had become a substitute for quality.
I hope the FTC comes down hard on this.
Edit:
* commit history: just browse the history to see what's there. What kind of changes are made and at what cadence.
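Those heuristics could be sketched as a quick scoring function. The `pushed_at`, `created_at`, and `open_issues_count` field names match what the GitHub REST API returns for a repository; the thresholds and weights here are purely my own illustrative assumptions, not anything from the comment above:

```python
from datetime import datetime, timezone

def repo_health(meta, now=None):
    """Score a repo from GitHub-API-style metadata using the heuristics
    above: recent last commit, old-but-still-maintained age, and open
    issue load. Thresholds are illustrative assumptions only."""
    now = now or datetime.now(timezone.utc)
    pushed = datetime.fromisoformat(meta["pushed_at"])
    created = datetime.fromisoformat(meta["created_at"])
    score = 0
    # Last commit date: newer is better.
    days_since_push = (now - pushed).days
    if days_since_push <= 90:
        score += 2
    elif days_since_push <= 365:
        score += 1
    # Age: old is best if still updating.
    if (now - created).days >= 3 * 365 and days_since_push <= 365:
        score += 2
    # Issues: a huge open backlog suggests neglect. (Count alone is
    # crude; actually reading how issues are handled matters more.)
    if meta.get("open_issues_count", 0) < 200:
        score += 1
    return score
```

This deliberately ignores stars entirely; feeding it the JSON from `GET /repos/{owner}/{repo}` would be the obvious way to use it in practice.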
> If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.
There are clearly inflection points where stars become useful, with "nobody has ever used this package" and "Meta/Alphabet pays to develop/maintain this package" on the two extremes.
I'm less sure what the signal says in-between those extremes. We have 2 packages, one has 5,000 stars, the other has 10,000 stars - what does this actually tell me, apart from how many times each has gone viral on HN?
If the goals relate to maintenance and viability, we’re looking for a minimum threshold of implementation. Amazon and Microsoft have a lot of stars, both have ‘more than enough’ to care.
If the goals are marketing or targeting or mass-market appeal or hiring pools then those stars say something else.
And that's fine if you're just writing a toy program for personal use. But it's deeply problematic if you have to rely on that library for anything important. This type of lazy approach to the software bill-of-materials has gotten a lot of organizations into trouble with exploitable security flaws.
<10 stars is a strong signal that a repo is not relevant to anyone except maybe the maintainer. This fact does not change even if other repos have bought 10,000 stars.
I don't know how often this happens, but what about repositories that would "naturally" have <10 stars if not for buying them? Is the signal you're referring to still useful if it can be artificially silenced by spending some money?
More stars = more followers = more people interested and contributing. Even with fakes, there will be more people joining the project because they are duped. Still, it is going to get more attention.
I will. I have other heuristics too, like there are plenty of starred libraries that are just hard to use or don't actually fit my use case. But if I choose the 1000 star one and it works easily, then cool. If it doesn't, I'll try the 15 star one. If it works, cool. If not, then I'll probably end up vibe coding my own thing.
Why not? Buying stars is also a positive signal on commitment.
I.e., if the maintainer is serious enough to buy stars, aren't they in theory also likely to spend time/money maintaining and improving the project?
Presumably he wouldn't just want fake users but also real users, which is a stronger signal than a purely hobby project that is vibe-coded on a whim over a weekend and abandoned?
> I.e., if the maintainer is serious enough to buy stars, aren't they in theory also likely to spend time/money maintaining and improving the project?
I mean, if maintainers clearly spend much more time and effort on fraud than on actually improving the project, why should I believe they would maintain it at all, let alone trust their judgement with regard to other things, such as technical choices?
Just because you make a decision quicker doesn't mean you saved any time. It is good to save time, but not at the expense of quality. You spend more buying cheap boots, and they don't even keep your feet dry.
I listed many other useful heuristics. Do you not find value in them? Do you find stars more valuable than them?
Take a moment to consider that stars may only be a useful metric for packages created prior to ~2015, before they became such a strong vanity metric, and that are already very well established. This is preconditioning you to think "stars can still sometimes be useful, because I took a look at Facebook's React GH and it has a quarter million stars".
Sure, it's useful for that. But you aren't going to evaluate if the "React" package is safe. You already trivially know it is.
You'll be evaluating packages like "left-pad". Or any number of packages involved in the latest round of supply chain attacks.
For that matter, VCs are the ones stars are being targeted at, along with potential employers (something this article doesn't cover, but some potential hires do hope to leverage stars on their resume).
If you are a VC, or an employer, it is a negative metric. If you are a dev evaluating packages, it is a vacuous metric that either tells you what you already know, or would be better answered looking at literally anything else within that repo.
The article also called out how download count can be faked trivially. I admit I have relied upon this in the past by mistake. Release frequency I do use as one metric.
When I care about making decisions for a system that will ingest 50k-250k TPS or need to respond in sub-second timings (systems I have worked on multiple times), you can bet "looking at stars" is a useless metric.
For personal projects, it is equally useless.
I care about how many tutorials are online. And today, I care more about whether there were enough textual artifacts for the LLMs to usefully build it into their memory and to search on. I care if their docs are good so I spend fewer tokens burning through their codebase for APIs. I care if they resolve issues in a timely manner. I care if they have meaningful releases and not just garbage nothings every week.
I didn't mean for this to sound like a rant. But seriously, I just can't imagine any scenario where "I look at stars" is a useful metric. You want to add it to the list? Sure. That is fine. But it should not be a deciding factor. I have chosen libraries with fewer stars because they had better metrics on things I cared about, and it was the correct choice (I ended up needing to evaluate them both anyhow, but I had my preference from the start).
Choosing the wrong package will waste so much more of your time. Spend the 5 minutes evaluating the stuff that is important to your project.
Having stars isn't a positive metric; it's more that not having stars is a disqualifier, unless I want to use someone's brand new toy.
My first scan of a GitHub repository is typically: check age of latest commit, check star count, check age of project. All of these things can be gamed, but this weeds out the majority of the noise when looking for a package to serve my needs. If the use case is serious, proper due diligence follows.
This behavior is similar to what I saw back when I played a very popular MMORPG: when people selected others for their groups, their criteria deferred to the candidate's analyzed gameplay records (their "logs") on a website, which boiled down to a number showing their damage dealt and the color of its text.
There was nothing about going into the logs to see if they could handle the game's mechanical challenges or minimize their damage taken. It made for a worse environment, yet the players couldn't stop themselves from using such criteria.
In short, humans are lazy and default to numbers and colors when given the chance. When others question them on it, they can have a default easy answer of being part of the herd of zebras to get out of trouble.
You're arguing with people who are fundamentally unempathetic. It's a lot of words spent vamping about stars instead of contributing to stuff everyone actually uses, which hardly anyone does, and that should tell you everything you need to know about the value of having open source users: most of them only care that something is free as in beer.
To be honest, these days I have more faith in an application or library with a moderate development pace, where the last commit wasn't 2 seconds ago and co-authored by Claude (in the most blatant examples).
The same is true for the number of commits, the type of commits, the release cadence, and the number of fixes and hotfixes in releases. I don't feel like being a glorified alpha tester, so I look for maturity in a project.
Which more often than not means that, yes, there needs to be activity. But it is also fine if the last commit was two days ago and there is a clear sign of the same pattern over a longer period, combined with a stable release cycle, sane versioning, and clear changelogs that aren't just a list of the last 10 commit messages.
On your point of stars, I think they used to be a valid metric in a similar category, namely the community behind the software. But it has been a while since that was true; ever since I saw those star-tracking graphs pop up on repos, I knew there was no sense in paying attention to them anymore.
I use stars to try and protect myself from dependency confusion attacks.
For example, let’s say I want to run some piece of software that I’ve heard about, and let’s say I trust that the software isn’t malware because of its reputation.
Most of the time, I’d be installing the software from somewhere that’s not GitHub. A lot of package managers will let anyone upload malware with a name that’s very similar to the software I’m looking for, designed to fool people like me. I need to defend against that. If I can find a GitHub repo that has a ton of stars, I can generally assume that it’s the software I’m looking for, and not a fake imitator, and I can therefore trust the installation instructions in its readme.
Except this is also not 100% safe, because as mentioned in TFA, stars can be bought.
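A cheap offline complement to the star check is comparing the name you're about to install against the canonical one, since typosquats usually sit within an edit or two of the real name. A minimal sketch; the function names, the distance threshold, and the misspelled package name in the usage note are all my own made-up examples:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate, canonical, max_dist=2):
    """Flag names suspiciously close to, but not exactly, the real one."""
    return candidate != canonical and edit_distance(candidate, canonical) <= max_dist
```

For example, `looks_like_typosquat("reqeusts", "requests")` returns True, while the exact canonical name passes. It won't catch everything (homoglyphs, scoped-name tricks), but it's a guard that doesn't depend on stars at all.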
Sure, I suppose that is one solution, but given that buying stars has been around for at least 5 years, and I have been aware of people faking stars for longer than that, I am not sure why you would rely on stars as a primary metric.
There are many other far more useful metrics to look at first, and to focus on first, and to think about. Every time you think about stars, you'll forget the other stuff, or discount it in favor of stars.
Forget stars. They no longer mean anything. Even if they did before, they don't anymore.
In it they explicitly call it out as a ranking metric
> Many of GitHub's repository rankings depend on the number of stars a repository has. In addition, Explore GitHub shows popular repositories based on the number of stars they have.
Yet another case of metric -> target -> useless metric
That’s not something I wanted to imply. It can also stand for "the fine article". Is there a better shorthand for "the article linked at top of the page" / "the original article"?
Totally agree with you. I think GitHub "stars" are a relic of the past. They should be renamed to "bookmarks" and exist as a tool for users to just mark interesting repositories. By no means should a repository keep a count of how many people bookmarked it. It makes no practical sense. Active maintainers and commit dates are much better metrics.
As someone else pointed out... When commits are this cheap, if that's the metric to be gamed, it will be gamed.
You just create 5 GitHub accounts, and spread your Claude Code commits to 5 separate accounts to make it look like there's 5 active contributors.
If anything, we're better off with fake stars being the main thing most people are trying to game, because then the signal survives elsewhere: at least so far, it seems pretty easy to tell how many REAL active contributors there are.
Though, I should note, 2 heads are not always better than 1.
I'm more interested in a repository that has commits only from two geniuses than a repository that has 100s of morons contributing to it.
> Active maintainers and commit dates are much better metric.
But in an age of bots/agents, that's just kicking the can down the road by making it easier to fudge regular activity of practically zero importance. Even worse for the ecosystem than paid like counts.
You call these baubles, well, it is with baubles that men are led... Do you think that you would be able to make men fight by reasoning? Never. That is only good for the scholar in his study. The soldier needs glory, distinctions, and rewards.
> I always treated stars like a bookmark "I'll come back to this project" and never thought of it as a quality metric. Years ago when this problem first surfaced I was surprised (but should not have been in retrospect) they had become a substitute for quality.
Same here. I've starred over 1500 projects on Github over the years, and only because I wanted to save them for later use or as a reference for something I was working on. These days I'll occasionally use the star metric as a signal to avoid certain projects as overhyped (especially if the project has a stars-per-day meter).
> The FTC's 2024 rule banning fake social influence metrics carries penalties of $53,088 per violation - and the SEC has already charged startup founders for inflating traction metrics during fundraising
Six million fake stars is just what this small crew found, likely in a matter of hours.
A fine of $53,088 times six million is $318.528 billion.
Just going hard after a small portion of that should both put an end to it and make a slight dent in the deficit.
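The arithmetic checks out; a quick sanity check, nothing more (both figures are from the article above):

```python
per_violation = 53_088        # FTC penalty per violation, per the article
violations = 6_000_000        # fake stars the researchers found
total = per_violation * violations
print(f"${total:,}")          # prints $318,528,000,000, i.e. ~$318.5 billion
```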
This kind of fraud is rampant because everyone concludes the way to win is not to make a real advance, but to simply game the system. Seems they are not wrong because the lack of enforcement makes the rules meaningless.
I also never in my career have consciously looked at the GH star counter on a repo, let alone used it to make decisions.
Instead I look at (in addition to the above):
1. Who is the author? Is it just some person chasing Internet clout by making tons of 'cool' libraries across different domains? Or are they someone senior working in an industry sector from whose expertise the project might actually benefit?
2. Is the author working alone? Are there regular contributors? Is there an established governance structure? Is the project going to survive one person getting bored / burning out / signing an NDA / dying?
3. Is the project style over substance? Did it introduce logos, discord channels, mascots too early? Is it trying too hard to become The New Hot Thing?
4. What are the project's dependencies? Is its dependency set conservative or is it going to cause supply chain problems down the line?
5. What's the project's development cadence? Is it shipping features and breaking APIs too fast? Has it ever done a patch release or backported fixes, or does it always live at the bleeding edge?
6. NEW ARRIVAL 2026! Is the project actually carefully crafted and well designed, or is it just LLM slop? Am I about to discover that even though it's a bunch of code it doesn't actually work?
7. If the project is security critical (handles auth, public facing protocol parsing, etc.): do a deeper dive into the code.
Agree! My longstanding metric uses just two values:
* Most recent commit
* Total number of commits
This might have to die in the era of AI, but it's served me well for a long time. Rather than how many people are paying attention, it tries to measure the effort put in.
At the very least I'd add release cadence to it, and the quality of releases. Mature, good software will have hotfixes and patch releases every now and then, but not in every release, and certainly not 50% of the changes. In the same sense, I will often look at the effort put into changelogs. If they took the trouble to put things into categories, write about possible breaking changes, etc., it is a possible indicator of some level of quality. At the very least I will have a lot more faith in software with good changelogs compared to something that is just a list of the last N commit messages.
Honestly, anyone using GitHub as a basis for hiring is approaching hiring with flawed thinking. GitHub isn't the only host for git, and git isn't the only standard for version control. Further, GitHub has been pushing companies AWAY from the platform thanks to high costs and other nonsense. I've seen more than one company run either a local git server or something like a local GitLab instance. Using GitHub as a metric just ensures that you eliminate anyone not using GitHub. That includes many amazing open source devs, for example.
But to someone else, it is a meaningful metric that you bookmarked something. It doesn't matter that the star isn't you saying you liked something. It's already telling enough merely that you wanted to bookmark it.
It's only not meaningful because of how other people can game it and fabricate it. For everything you just said, if it were only people like you, that would be a very meaningful number.
It doesn't even matter why you bookmarked it, and it doesn't matter that, whatever the reason was, it doesn't prove the project as a whole is good or useful. Maybe you bookmarked it because you hate it and want to keep track of it as a reference in your TED talk about all the worst stuff you hate. But when you add up everyone's bookmarks, the more likely explanation is that you found something interesting. It doesn't even matter what was interesting or why. The entire project could be worthless and the thing you're bookmarking nothing more than some markdown trick in the readme. That's fine. That counts. Or it's all terrible, not a single thing of value, and the only reason to bookmark it is because it's the only thing that turned up in a search. Even that counts, because it still shows they tried to work on something no one else even tried to work on.
It's like, it doesn't matter how little a given star means; it still means something, and the aggregation does actually mean something, except for the fakes.
Yes...which is why I said it is an indirect variable, as caused by the other things I pointed out above. Age, quality, code, utility, whether issues are addressed, interest, etc. Or fraud. Pretty cut and dry.
FWIW, I almost never star repos. Even ones I use or like. I don't see the utility for myself.
Aim for a more concise post and don't couch your statements in doubt next time if you want a productive conversation, because I don't know what you are trying to say.
They're doing it for VCs because VCs consider it proof of traction. They're not doing it to impress you and they don't care whether you check stars or not.
Yea, you mean Israel's use of human shields, right?....wait, who are we talking about using human shields, again?
"Dressed in army fatigues with a camera fixed to his forehead, Ayman Abu Hamadan was forced into houses in the Gaza Strip to make sure they were clear of bombs and gunmen, he said. When one unit finished with him, he was passed to the next.
“They beat me and told me: ‘You have no other option; do this or we’ll kill you,’” the 36-year-old told The Associated Press, describing the 2 1/2 weeks he was held last summer by the Israeli military in northern Gaza."
Germany: 7.7 to 7.9 beds per 1,000
Japan: 12.6 to 13.0 beds per 1,000
Israel: 2.9 to 3.1 beds per 1,000 people
USA: 2.3 to 2.4 beds per 1,000 people
Egypt: 1.1 to 1.4 beds per 1,000 people
OECD avg per capita: 4.2 per 1,000 people
Lebanon: 1.0–1.2 beds per 1,000 people
Palestine: 1.3 beds per 1,000 people
Palestine did have way more before, in 2022, at 13 per 1000. That would indeed be high.
Turns out that was purposely done because of the extremely high prevalence of chronic disease and crises caused by the blockade.
Now I am curious as to why you would leave out such important details like this, or not just plainly state your opinion.
I think we can guess why, and I think you also know it's because you are wrong.
Currently, 4.7 is suspicious of literally every line of code. It may be a bug, but it shows you how much they care about end-users when something like this, with such a massive impact, ships and no one caught it before release.
Good luck trying to do anything about securing your own codebase with 4.7.
Worse, I have had it be suspicious of my own codebase when I tasked it with writing mundane code. Apparently if you include certain trigger words it goes nuts. I'm still trying to narrow down which ones in particular.
Here is some example output:
"The health-check.py file I just read is clearly benign...continuing with the task" wtf.
"is the existing benign in-process...clearly not malware"
Like, what the actual fuck. They way overcompensated for the sensitivity around "people might do bad stuff with the AI".
Let people do work.
Edit: I followed up with a plan it created, after it made sure I wasn't doing anything nefarious with my own plain Python service, and it still included multiple output lines about "benign this" and "safe that".
Am I paying money to have Anthropic decide whether or not my project is malware? I think I'll be canceling my subscription today. Barely three prompts in.
You mean like this article that has been making the rounds recently? The huge famous one that Altman himself called out in his post talking about the attack?
Because Redfin shows that is very clearly not true:
https://www.redfin.com/news/homeownership-rate-by-generation...