I wonder if the kind of personality that gets you on 30U30 correlates with being willing to engage in massive fraud, and being able to get away with it for a minute.
Holmes, SBF, Shkreli, Charlie Javice, Ishan Wahi...
When ambitious competitors who can't accept loss or normalcy enter a field that's saturated with skilled, rule-abiding players, they'll cheat.
Hypercompetitive fields will always surface cheaters given enough time. Then regulations pile on to fight the cheating, which makes it harder for honest people to do good work.
We do not punish cheaters like these as much as we should.
You know, after all this time Lucas Duplan doesn't seem so bad. His hubristic sin was posing for a photo burning fake hundred dollar bills. That just seems like a random Tuesday now.
If I remember correctly, you need to be nominated by someone to be considered for the 30U30 list. Some of the people on those lists literally run their own campaigns to get on: they pay people to nominate them, and pay PR firms to run stories and campaigns. Others do seemingly nothing and just get nominated by legit people who admire them.
So, I'm fairly certain lists like that will attract some amount of unscrupulous narcissists.
I'd focus less on the U30 part, and more on the 30U, if that makes sense — the problem is with people who seek that sort of attention (and that 79 year old certainly qualifies as wanting that sort of attention). For those people, their businesses are a means to an end in the most cynical way possible.
Speak for yourself. I'm O18 and I don't want him in there like you claim to. Most of his base claimed to be anti-pedo until they saw the evidence in the unredacted subset of the Epstein files that Congress legally forced him to release, and now suddenly they're pro-pedo (and pro-war and pro-bombing-schoolchildren). But you be you, and make baseless evidence-free false equivalence accusations against other people to justify the rapes and legally adjudicated sexual assault and pussy grabbing by the guy you as an "O18" claim you want in there.
Yes, Gitea (and originally Gogs) is released under a permissive license, so forking it is legal.
But forking a complete, working project with years of work behind it, rebranding it with a "good guys" attitude, and progressively erasing the name/history (the mention that it's a Gitea fork has moved down the FAQ now) is not fair.
Edit: even worse, the word "fork" is no longer in the FAQ. The section is called "Comparison with Gitea" now (the fork is mentioned on that page).
> Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software…
This is already a crazy take on its own: why would a fork have to describe its relation to the parent project front and center? Both the Readme and the comparison page link to the original blog post [1], which describes the lineage clearly.
But even if there were some "ethical reason" to do this, I don't think Gitea is the right project to play up as a victim. Its homepage [2] doesn't mention that Gitea itself is a fork either. Its Readme does, but is that so much better?
Strange that they apparently raised $169M (really?) and the website looks like this. Don't get me wrong: plain HTML would do if it were done perfectly, or you'd expect something heavily designed. But script-kiddie vibe-coded seems off.
Strange that they raised money at all with an idea like this.
It's a bad idea that can't work well. Not while the field is advancing the way it is.
Manufacturing silicon is a long pipeline, and in the world of AI, a one-year capability gap isn't something you can afford. You build a SOTA model into your chips, and by the time you get those chips, it's outperformed at its tasks by open-weight models half its size.
Now, if AI advances somehow ground to a screeching halt, with model upgrades coming out every 4 years, not every 4 months? Maybe it'll be viable. As is, it's a waste of silicon.
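A rough back-of-envelope sketch of the staleness argument above, using the figures from the thread (~12-month silicon pipeline, model upgrades every ~4 months) as assumptions rather than vendor data:

```python
# How many model generations ship while one chip is in the fab?
# Assumed numbers, taken from the discussion above, not from any vendor.

TAPEOUT_MONTHS = 12          # design freeze -> chips in hand (~1 year)
MODEL_CADENCE_MONTHS = 4     # new notable model generation every ~4 months

generations_behind = TAPEOUT_MONTHS // MODEL_CADENCE_MONTHS
print(f"Etched model is ~{generations_behind} generations stale on arrival")
# -> Etched model is ~3 generations stale on arrival
```

If the cadence stretched to the hypothetical 4 years mentioned above, the same arithmetic gives zero full generations of staleness, which is the scenario where the approach becomes viable.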
The prototype is: silicon with a Llama 3.1 8B etched into it. Today's 4B models already outperform it.
A token rate in the five digits is a major technical flex, but does anyone really need to run a very dumb model at this speed?
The only things that come to mind that could reap a benefit are asymmetric exotics like VLA action policies and voice stages for V2V models. Both of those are "small fast low-latency model backed by a large smart model" setups, and both depend on model-to-model comms, which this doesn't demonstrate.
In a way, it's an I/O accelerator rather than an inference engine. At best.
Even if this first generation isn't useful, the learning and architecture decisions made in it will be. Can you really not think of any value in a chip that runs LLMs at high speed, locally, for 1/10 of the energy budget and (presumably) significantly lower cost than a GPU?
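To make the energy claim concrete, here is a minimal sketch of what "1/10 of the energy budget" means in cost-per-token terms. The GPU energy figure and electricity price are hypothetical placeholders; only the 10x ratio comes from the comment above:

```python
# Energy cost per million tokens, GPU vs. hypothetical ASIC.
# All absolute numbers are illustrative assumptions; the only input
# taken from the thread is the claimed 1/10 energy ratio.

GPU_JOULES_PER_TOKEN = 0.5                          # assumed GPU figure
ASIC_JOULES_PER_TOKEN = GPU_JOULES_PER_TOKEN / 10   # the 1/10 claim
USD_PER_KWH = 0.10                                  # assumed power price
JOULES_PER_KWH = 3.6e6

def energy_cost_per_million_tokens(joules_per_token: float) -> float:
    """Electricity cost in USD to generate one million tokens."""
    return 1e6 * joules_per_token / JOULES_PER_KWH * USD_PER_KWH

gpu_cost = energy_cost_per_million_tokens(GPU_JOULES_PER_TOKEN)
asic_cost = energy_cost_per_million_tokens(ASIC_JOULES_PER_TOKEN)
print(f"GPU:  ${gpu_cost:.4f} per 1M tokens")
print(f"ASIC: ${asic_cost:.4f} per 1M tokens")
```

Whether that saving matters obviously depends on whether anyone wants the frozen model's output at all, which is the point of contention above.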
If you look at any development in computing, ASICs are the next step. It seems almost inevitable. Yes, it will always trail behind state of the art. But value will come quickly in a few generations.
Maybe they're betting on improvement in models plateauing, and that having a fairly stabilized, capable model that is orders of magnitude faster than running on GPUs can be valuable in the future?