Easiest people to understand: someone hurt you (in this case disrupted your workflow, especially if pointlessly, as this user thinks), you express the dissatisfaction to the person who did it.
> Easiest people to understand: someone hurt you (in this case disrupted your workflow, especially if pointlessly, as this user thinks), you express the dissatisfaction to the person who did it.
Are you aware you are talking about a FLOSS project that was gifted to you, and you are advocating attacking and abusing the creator of said project because you can't even be bothered to contribute anything back?
Too little (preserves weird whitespace semantics requiring extra newlines, non-obvious _under_ and /italics/ markers, a confusing multitude of link syntaxes, too many and too few list markers (why isn't • allowed?)), too late? But it would've been a great start instead of the aborted Markdown attempt back then...
This also misses the point: having multiple syntax markers for the same thing is the opposite of simplicity (though that's not the reason Markdown will fail).
You have the wrong end of the stick. The point is simplicity for the user, not for the developer of the parser. And for users, having two ways to get the same thing adds no complexity: they just pick one and use it.
> It should turn bold but keep the asterisk displayed so you can still edit as normal.
This is just terrible UI: why do you need garbage marks when you already have bold? And you can edit "as normal" if you like, but that only requires displaying asterisks during the tiny % of time you're editing that word, not all the time while you read it or edit something else.
So you can still see the actual text that you're editing. And to reduce ambiguity: if you don't leave them, then you can't distinguish between adding more bold text to currently bold text and adding non-bold text immediately after it.
> So you can still see the actual text that you're editing
But you're not editing that text! You're editing some other text and seeing a bunch of asterisks all over the place. And this is especially bad with nested styles - try a colored bold word in a table cell - without hiding the markup you'll lose most of your visibility into the text/table layout.
> to reduce ambiguity
It does the opposite: you can't easily distinguish between an asterisk (markup) and an asterisk (literal text), which is... ambiguity.
> can't distinguish between adding more bold text to currently bold text or adding non-bold text immediately
Sure you can. In a well-designed editor you'll see the style indicator right near your caret, so it's always obvious whether and how your typed text is styled.
In a not-so-well-designed editor you'll get that indicator far away from your caret or just get asterisks appearing when you need them.
In a not-designed editor you'll see them all the time even when they don't serve any purpose.
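To make the caret-boundary ambiguity both sides are arguing about concrete, here's a toy sketch - not any real editor's code, and the string, markers, and indices are all invented for illustration:

```python
# Hypothetical source text with asterisk markers delimiting a bold span.
MARKED = "*bold*after"

def visible(text: str) -> str:
    """Strip the markers, the way a WYSIWYG view that hides markup would."""
    return text.replace("*", "")

# The reader sees "boldafter". A caret sitting after the visible 'd'
# corresponds to TWO different positions in the underlying source:
# index 5 (inside the bold span, before the closing '*') or
# index 6 (just past it). Typing 'X' at each gives different styling:
inside  = MARKED[:5] + "X" + MARKED[5:]   # "*boldX*after" -> X is bold
outside = MARKED[:6] + "X" + MARKED[6:]   # "*bold*Xafter" -> X is plain

# ...yet both render identically once the markers are hidden,
# which is exactly why some editors fall back to showing asterisks,
# and others show a style indicator at the caret instead.
assert visible(inside) == visible(outside) == "boldXafter"
```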
Ha, I remember this religious debate all the way back in the days of text-mode word processing in the 80s on CP/M and PC. I was indoctrinated in the WordStar camp where style controls were visible in the editor between actual text characters, so you could move the cursor between them and easily decide to insert text inside or outside the styled region. This will forever seem a more coherent editing UI to me.
This might be why I also liked LaTeX. The markup itself is semantic and meant to help me understand what I am editing. It isn't just some keyboard-shortcut to inject a styling command. It is part of the document structure.
And... I preferred WordPerfect's separate "reveal codes" pane, which reduced the opportunity for ambiguity. WP 5.1 has never been equalled as a general-purpose word processor.
Heh, I'm not even sure WordStar supported other styles at that level. Changing the color back then would mean having the print job pause and the screen prompt you to change ink ribbon and press a key to continue. I can't remember if it could also prompt to change the daisy wheel, or whether font was a global property of the document. The daisy wheels did have a slant/italic set, so it could select those alternate glyphs on the fly from the same wheel. Bold and underline were done by composition, using overstrike, rather than separate glyphs.
But yeah, this tension you are describing is also where other concepts like "paragraph styles" bothered me in later editors. I think I want/expect "span styles" so it is always a container of characters with a semantic label, which I could then adjust later in the definitions.
Decades later, it still repulses me how the paragraph styles devolve into a bunch of undisciplined characters with custom styling when I have to work on shared documents. At some point, the only sane recourse is to strip all custom styling and then go back and selectively apply things like emphasis again, hoping you didn't miss any.
Though this doesn't make much sense on its surface - a bug means something is already broken, and he tells of millions of crashes per month, so it was visibly broken. 100% chance of being broken (bug) > some chance of breakage from fixing it
(sure, the value of current and potential bug isn't accounted for here, but then neither is it in "afraid to break something, do nothing")
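The "certain bug vs. possible regression" trade-off above can be put in rough expected-cost terms. A back-of-envelope sketch - every number here is invented for illustration, not taken from the story:

```python
# Option A: do nothing. The bug is certainly firing, so the cost recurs.
known_crashes_per_month = 1_000_000   # invented figure
cost_per_crash = 0.01                 # arbitrary unit cost per crash

# Option B: apply the fix. Some guessed chance it breaks something once.
p_fix_breaks_something = 0.10         # pessimistic regression risk (guess)
cost_if_fix_breaks = 200_000          # one-off cleanup cost (guess)

cost_do_nothing_per_month = known_crashes_per_month * cost_per_crash
expected_cost_of_fixing = p_fix_breaks_something * cost_if_fix_breaks

# Even with a pessimistic regression risk, a certain *recurring* cost
# overtakes a one-off probabilistic cost within a couple of months:
months_to_break_even = expected_cost_of_fixing / cost_do_nothing_per_month
```

The point isn't the specific numbers; it's that "100% broken now" is a cost you pay every month, while "might break from fixing" is (usually) paid once.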
I've experienced a nearly identical scenario where a large fleet of identical servers (Citrix session hosts) were crashing at a "rate" high enough that I had to "scale up" my crash dump collection scripts with automated analysis, distribution into about a hundred buckets, and then per-bucket statistical analysis of the variables. I had to compress, archive, and then simply throw away crash dumps because I had too many.
It was pure insanity, the crashes were variously caused by things like network drivers so old and vulnerable that "drive by" network scans by malware would BSOD the servers. Alternatively, successful virus infections would BSOD the servers because the viruses were written for desktop editions of Windows and couldn't handle the differences in the server edition, so they'd just crash the system. On and on. It was a shambling zombie horde, not a server farm.
I was made to jump through flaming hoops backwards to prove beyond a shadow of a doubt that every single individual critical Microsoft security patch a) definitely fixed one of the crash bugs and b) didn't break any apps.
I did so! I demonstrated a 3x improvement in overall performance -- which by itself is staggering -- and that BSODs dropped by a factor of hundreds. I had pages written up on each and every patch, specifically calling out how they precisely matched a bucket of BSODs exactly. I tested the apps. I showed that some of them that were broken before suddenly started working. I did extensive UAT, etc.
"No." was the firm answer from management.
"Too dangerous! Something could break! You don't know what these patches could do!" etc, etc. The arguments were pure insanity, totally illogical, counter to all available evidence, and motivated only by animal fear. These people had been burned before, and they're never touching the stove again, or even going into the kitchen.
You cannot fix an organisation like this "from below" as an IC, or even a mid-level manager. CEOs would have a hard time turning a ship like this around. Heads would have to roll, all the way up to CIO, before anything could possibly be fixed.
The better analogy is that they ran out of the kitchen in a panic, and left the pots on the burners. Some time later there is smoke curling up from under the kitchen door, but they’re used to the burning smell by now so it’s “not that big a deal”.
How is this a factor for the very few users going to use it? (Besides, for such primitive needs familiarity is of questionable use to begin with; almost any GUI email client would do.)