1. Use a proper Markdown parser. The grammar is easy to define in EBNF style; most implementations I see nowadays use a recursive-descent parser or similar. The original author's parser was regex-based back when Markdown first became popular.
2. You can resolve ambiguities and define more consistent symbols that make sense. Most Markdown implementations are decent and follow common-sense, best-practice syntax.
3. The beauty is its simplicity. You can pick it up in a few minutes and use it in damn near any text box you see.
4. Parsing to HTML isn't the only option! I mostly use TUI Markdown viewers that render the document beautifully via ANSI escape codes; check out the glow project. But once you parse and have an AST, you can easily walk it and render it in other ways as well. Again, though: everyone can read a text document and an HTML document. You can render it to a PDF if need be.
5. Do we really need a whole new text2<format of some kind> markup? Markdown is simple, fast, and widely supported. So I have to say: I prefer it over most things, and that includes RST.
If you need real beauty and power you can move to LaTeX or something… My two cents anyway.
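To make point 1 concrete, here is a minimal toy sketch of a recursive-descent-style parser for a tiny Markdown subset (ATX headings and paragraphs only), producing a small AST that a renderer can walk. This is a hypothetical illustration of the approach, nowhere near CommonMark-conformant:

```python
# Toy recursive-descent-style parser for a tiny Markdown subset.
# Only ATX headings and paragraphs; real Markdown has many more rules.

def parse(text):
    """Parse a Markdown subset into a list-of-tuples AST."""
    ast = []
    lines = text.splitlines()
    i = 0
    while i < len(lines):
        line = lines[i]
        if not line.strip():            # blank line: skip
            i += 1
        elif line.startswith("#"):      # ATX heading: level = count of leading '#'
            level = len(line) - len(line.lstrip("#"))
            ast.append(("heading", level, line[level:].strip()))
            i += 1
        else:                           # paragraph: a run of non-blank lines
            para = []
            while i < len(lines) and lines[i].strip() and not lines[i].startswith("#"):
                para.append(lines[i].strip())
                i += 1
            ast.append(("paragraph", " ".join(para)))
    return ast

def to_html(ast):
    """Walk the AST and emit HTML; swap this out for an ANSI or PDF renderer."""
    out = []
    for node in ast:
        if node[0] == "heading":
            out.append(f"<h{node[1]}>{node[2]}</h{node[1]}>")
        else:
            out.append(f"<p>{node[1]}</p>")
    return "\n".join(out)

print(to_html(parse("# Title\n\nHello\nworld\n")))
```

The point of the AST layer (point 4 above) is that `to_html` is just one walker; a TUI renderer would walk the same tree and emit escape codes instead.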
I recently tried to create a Markdown-like parser, and I had to refer to the CommonMark spec a lot. What I saw was madness. I never knew from casual use that Markdown is so complex. There is literally zero thought given to parsing; it forces natural-looking text into a format that can be structured into actual markup, but it has so many pitfalls and edge cases that it just feels wrong.
Every time I've looked at another Markdown parser, I've been stunned by how overengineered they are. If you need an AST to parse something seemingly simple like Markdown, then something went wrong. Probably every coreutil is smaller than the average Markdown parser. Why?
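One reason the parsers end up so big: naive, regex-style emphasis handling (as in early implementations) mangles ordinary text, and CommonMark's delimiter-run rules exist precisely to prevent that. A small illustrative sketch of the failure mode:

```python
import re

# Naive regex "emphasis", roughly what early Markdown implementations did.
# CommonMark instead forbids an opening '*' that is followed by whitespace,
# which is why "2 * 3 * 4" stays literal under a conformant parser.
def naive_emphasis(text):
    return re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)

print(naive_emphasis("*hi*"))        # intended case works
print(naive_emphasis("2 * 3 * 4"))   # plain arithmetic gets mangled
```

Multiply that by nesting, links, code spans, and lazy list continuations, and the "overengineering" starts to look unavoidable.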
Sort of hard to do, because AI is shoved down your throat in one form or another virtually everywhere you go. I also think a lot of us hackers are mourning the fact that we spent many years mastering machines and programming just to have the skill devalued (at least from the public's perspective) nearly overnight. I personally think it is more important now than ever to understand technology: to be able to write code, understand how a CPU works, etc. Tech literacy will help prevent doom scenarios. A future where virtually everyone depends on AI and computers but lacks people who actually understand them at a low level seems bleak. I know thinking itself seems to have gone out of fashion, and that's given rise to misinformation and/or political nonsense like the rise of fascism, etc... I think a lot of us just feel "empty" and are trying to express it.
I get it. I've been doing this for 11 years. I use agents every day at work now and deal with all the benefits and problems of that. The craft is certainly changing, and it will take years for everything to shake out and settle. I understand the desire to publicly wax poetic, but nobody actually knows shit about where we will land, so it gets a bit tiresome to see over and over.
I agree that humans should continue to value various forms of literacy even in the face of AIs that can do everything better than us. I too will continue to dig deeper into tech literacy. There was a Terence Tao paper recently that mentioned we are in a shift similar to the end of geocentrism: it became clear that Earth is not the center of the universe, but Earth is still deeply valuable and important to humans. In much the same way, AI may supersede our understanding and intellect and make our limitations more apparent, but our human intellect is still important to humans. Plus, what are you going to do when the price of LLM tokens is through the roof, or you get messages like "burn an extra 1,000,000 tokens for a better implementation!"?
I have some amount of hope that local open models with sufficient quantization are the future as hardware becomes more powerful and models become more optimized. I don’t think we will be living in thin client land forever. Human expertise and intelligence will continue to be important and anyone who says otherwise is being disingenuous.
Over reliance on LLMs is going to become such a disaster in a way no one would have thought possible. Not sure exactly what, who, when, or where.. Just that having your entire product or repo dependent on a single entity is going to lead to some bad times…
Contrary to the popular opinion here, there are other services beyond Claude Code. These usage limits might even prompt (har har) people to notice that Gemini is cheaper and often better.
On-premise LLMs are also getting better and likely won't stop; as costs go up alongside the technical improvements, I would imagine cost-saving methods will also improve.
There are just so many compelling reasons to be on-prem instead of dependent on a 3rd party hoovering up all your data and prompts and selling you overpriced tokens (which eventually they MUST be, because these companies have to make a profit at some point).
If the only counterbalance is "well the api is cheaper than buying my own hardware"...
That's a short term problem. Hardware costs are going to drop over time, and capabilities are going to continue improving. It's already pretty insane how good of a model I can run on two old RTX-3090s locally.
Is it as good as modern claude? No. Is it as good as claude was 18 months ago? Yes.
Give it a decade to see companies really push into the "diminishing returns" of scaling and new models... combined with new hardware built with these workloads in mind... and I think on-prem is the pretty clear winner.
These big players don’t have as big of a moat as they like to advertise, but as long as VC wants to subsidize my agents, I’ll keep paying for the $20 plan until they inevitably cut it off
gemini-cli has not been useable for weeks. The API endpoint it uses for subscription users is so heavily rate-limited that the CLI is non-functional. There are many reports of this issue on Github. [1]
I use Gemini-CLI at work, and haven't noticed anything. I use Google Jules (free tier) on a toy project much more heavily and can't complain. I think sometimes the prompts take longer than they used to, but I couldn't care less. I'm not in a hurry.
Last time I used Gemini I watched it burn tokens at three times the rate of any other models arguing with itself and it rarely produced a result. This was around Christmas or shortly after.
It's still not uncommon for it to accidentally escape its thinking block and be unable to end its response, or for it to call the same tool repeatedly. I've watched it burn 50 million tokens in a loop before killing the chat.
No. It's still shit. It can do some well-contained tasks, but it is far less usable on production codebases than GPT or Claude models, mainly because of the usage limits and the lack of good environments for us to use it in. Anthropic gets away with this because Claude Code, as bad as it is, is still quite functional. Gemini CLI and Antigravity are utter trash in comparison.
Exactly my experience. I remember thinking to myself that if this is what people get exposed to when they try to use a coding agent, no wonder there's so much bad-mouthing going on about LLMs. You use CC and you get usable output without much hassle, and in the end it costs way less because you aren't fighting with a substandard model.
Frankly, Gemini seems like Codex was two years ago. Lots of back and forth and nothing of value in the end.
For a second I hoped you were gonna comment on how LLMs are going to rot out our skillset and our brains. Like some people already complaining they "have to think" when ChatGPT or Claude or Grok is down.
The other day I was doing some programming without an LSP, and I felt lost without it. I was very familiar with the APIs I was using, but I couldn't remember the method names off the top of my head, so I had to reference docs extensively. I am reliant on LSP-powered tab completions to be productive, and my "memorizing API methods" skill has atrophied. But I'm not worried about this having some kind of impact on my brain health because not having to memorize API methods leaves more room for other things.
It's possible some people offload too much to LLMs but personally, my brain is still doing a lot of work even when I'm "vibecoding".
Ironically this is one of my main use cases for LLMs
“Can you give me an example of how to read a video file using the Win32 API like it’s 2004?” - me trying to diagnose a windows game crashing under wine
Exactly. I feel this is the strongest use case. I can get personalized digests of documentation for exactly what I'm building.
On the other hand, there's people that generate tokens to feed into a token generator that generates tokens which feeds its tokens to two other token generators which both use the tokens to generate two different categories of tokens for different tasks so that their tokens can be used by a "manager" token generator which generates tokens to...
I don't get this pov, maybe b/c I'm not a heavy Claude Code user, just a dabbler. Any LLM tool that can selectively use part of a code base as part of the input prompt will be useful as an augmentation tool.
Note the word "any." Like cloud services, each tool has unique aspects, but just like cloud services there is a shared basic value proposition that allows for migration from one to another and competition among them. If Gemini or OpenAI or Ollama running locally becomes a better choice, I'll switch without a care.
Subscription sprawl is likely the more pressing issue (just remembered I should stop my GH CoPilot subscription since switching to Claude).
There are so many different models, from hosted to local, and there's almost no switching cost, as most of them are API-compatible or supported by one of the gateways (Bifrost, LiteLLM, ...).
There are many things to worry about, but which LLM provider you choose doesn't really lock you in right now.
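The low switching cost comes from most providers (and local servers) speaking the same OpenAI-style chat-completions wire format, so "migrating" is largely a base-URL and model-name change. A sketch; the endpoints and model names below are illustrative assumptions, and the request is built but not actually sent:

```python
import json

def chat_request(base_url, model, prompt):
    """Build (url, body) for an OpenAI-compatible chat-completions call."""
    url = f"{base_url}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

# The same function targets three hypothetical backends unchanged:
for base, model in [
    ("https://api.openai.com/v1", "gpt-4o-mini"),
    ("https://example-gateway.local/v1", "gemini-flash"),   # e.g. a LiteLLM-style gateway
    ("http://localhost:11434/v1", "llama3"),                # e.g. a local Ollama server
]:
    url, _ = chat_request(base, model, "hello")
    print(url)
```

Gateways like LiteLLM lean on exactly this: they expose one compatible endpoint and route to whichever provider you configure behind it.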
It should be abundantly clear that depending on a single entity will screw you royally, but obviously we don't learn from the mistakes of others. We are condemned to repeat history because we don't know it.
That’s what really gets me. These folks who are “so rich from said technology” always need you to buy their course for $5,000… Like, buddy, if you were bringing in so much money you probably wouldn’t be pestering people to take your “course”, and you certainly aren’t going to give away info that has value only because it is obscure or hard to come by… They are also almost ALWAYS self-proclaimed experts. Overnight, everyone became an AI expert. Before ChatGPT they probably had zero exposure; AI was a large field, and machine learning is one small part of it.
That’s all it came down to for me… FreeBSD doing WiFi circa 2002 was a remote dream. Shit, even on Linux you had to use ndiswrapper and it still probably wouldn’t work.
I’m 37 and have coded my entire life. I even got to do the drop-out-of-college, start-a-startup-and-make-money type thing before I took my current position… I have to say AI has sucked the heart and soul out of coding… Like, it’s the most boring thing having to sit and prompt… Not to mention the slop, the nonsense hype, etc… Never attach your identity to your job or a skill; many of us do that just to be humbled when a new advancement occurs… I know that when I look at programming now, at open source code to contribute to, all of it… it just feels lifeless. Literally and figuratively. Sorry for the long rant; I needed to vent.
I see open source projects entirely run by clueless LLM-using idiots, and existing projects overrun by them, and there is none of the quality or passion you would normally see.
Even if I were to apply my skill/energy to a project of my own, my code would just get stolen by these LLM companies to train their models, and regurgitated with my license removed. What's the point?
I am not a graduate but Apple has reached out to me twice in the past month. Many others too so I wouldn’t say it’s absolutely dead but it’s tightened a bit.
Vulnerability researcher here… Unless your target has a security bounty process or reward, leave them alone. You don’t pentest a company without a contract that specifies what you can and can’t test. Although I would personally appreciate and thank a well-meaning security researcher’s efforts, most companies don’t. I have reported 0-days to companies that HAVE bounties and they still tried to put me in hot water over disclosure… Not worth the risk these days.
We had a situation in Sweden where a person found that if you removed part of the URL (/.../something -> /.../) for an online medical help line service, you got back an open directory listing that included files with medical data of other patients. This finding was then sent to a journalist, who contacted the company and made a news article out of it. The company accused the tipster and the journalist of unlawful hacking, and the police opened a case.
But was it? Is it pen testing to remove part of a URL? People debated this question a bit in articles, but then the case was dropped. The line between pen testing and normal use of the internet is not a clear one, but it seems we all agree that there is a line somewhere and that common sense should guide us.
You walk past a ministry office and notice that there is nobody at the door checking people entering, you walk in, you find an office door open, many binders on the shelves, nobody present. You read through the binders, pull out the drawers and see private info etc. You then walk out and send a mail about this. What do you think is going to happen?
This dive instructor was using this insurance company for his clients, and thus had a responsibility to prevent any known risk (data privacy loss in this case).
So he had two options: take his clients and his business to another insurer (and still inform all his current and previous clients about their outstanding risk), or try to help the insurer resolve the risk.
Good guideline advice but it seems you didn't read the article. Their personal data was at risk here. Leaving them alone would very likely result in a breach of this person's data. Both he and you have an ethical responsibility to at minimum notify the business of this problem and follow up with it.
> And the real irony? The legal threats are the reputation damage. Not the vulnerability itself - vulnerabilities happen to everyone. It's the response that tells you everything about an organization's security culture.
See, the moral of the story is that the entity cares more about saving face than about its responsibility to fix the bug; that's the biggest issue.
He also pointed out that bugs do happen and are understandable, and he agreed to disclose them in an ethical manner -- but goodwill, no matter how well or ill intentioned, may not be met with the same tolerance, especially when it comes to "national"-level stuff, where the bureaucrats know nothing about tech but do know it has political consequences: a "defacement", if it were exposed.
Also, I happened to work with them before and know exactly why they have a lot of legal documents and proceedings: bureaucracy, the bad kind, the corrupt kind, such that every wrong move can bring huge, if not capital, punishment. So in order to protect their interests, they'd rather do nothing, as that is unfortunately the safest option. The risk associated with fixing that bug is so high that they'd rather not take it and let it rot.
There are a lot of systems in Hong Kong that are exactly like that, and the code just stays rotten until the next batch of money comes in and opens up a new theatre of corruption. Rinse and repeat.