The title is a little misleading. The article itself acknowledges that there was a tulip fever:
> That’s not to say that everything about the story is wrong; merchants really did engage in a frantic tulip trade, and they paid incredibly high prices for some bulbs. And when a number of buyers announced they couldn’t pay the high price previously agreed upon, the market did fall apart and cause a small crisis—but only because it undermined social expectations.
But it implies that it's not all that notable because it didn't collapse the entire economy:
> But the trade didn’t affect all levels of society, and it didn’t cause the collapse of industry in Amsterdam and elsewhere.
Personally, I don't think I'd ever made the assumption that it had. Likewise, if Bitcoin were to implode, it would be similar — some people would suffer, but it wouldn't be all that widespread of an effect.
Tulipmania still makes a great historical anecdote on speculation, and I, for one, will keep using it as one.
The title is now changed but it reminds me of a tangentially related pet peeve.
I really dislike titles like this. Why do they presume to know what I think? An example from yesterday was a headline claiming that consciousness goes deeper than I thought. On reading the article, it did no such thing. Again, why would they presume to know what I think?
I'm not sure why it rubs me the wrong way. It certainly biases my perception of the article before I've even read it. I know it's prejudicing my reading, but I can't seem to get over it.
To me, they seem like click-bait. No, the answer may not surprise me and no, the author actually had no idea what I was thinking. It really does bias me against the article. It's just a small thing but I can see my own biases when I see headlines that include similar statements.
My problem is that it's a cliché: the first time it was fine; the Nth time it's utterly, irredeemably trite. If they can't be bothered to come up with a less tired headline, it's probably not good enough for me to bother reading.
Same with "X considered harmful", just give me a break.
Maybe it's akin to Nigerian scams: they don't want to waste their bandwidth on someone who's not going to click on their numerous articles with similar titles and generate more ad revenue.
However, if you are gullible enough to click on "7 awesome things Miley Cyrus did, number 4 will shock you!", you are ripe to click on all the clickbait they will show you.
The problem arises when good quality content uses clickbait. It's the same as naming your bank "The Nigerian Prince Bank of America".
I think it may bother me because I see it as insulting our/my intelligence.
'We looked at X and you won't believe what we found!'
Why yes, yes I will believe what you found, provided you give clear evidence, explain your methodology, and provide credible citations where needed.
I was worried it was only me who felt like this, as I'd not seen anyone else mention it. It really does bias my reading of the article afterwards. I should probably figure out a way to work past that bias, but I guess I should be grateful that I recognize it in myself.
Again, nothing major - it just irks me a bit. I'm not going to rage-quit or anything.
Slight nuance, but I'd say it's assuming the existence of a collective intelligence and homogeneity in the population. It's a slight backhanded insult, prodding readership from anyone who doesn't already know the subject. It pushes that button of the fear of being left out of a trend.
It's clickbait in another form. Playing psychological games with readers.
It's disrespecting its readers by assuming they are too shallow to show interest in substance. So, agreed: it insults readers' intelligence.
I always have an automatic reaction of appending to lines like those: "of course you wouldn't; that's because you are stupid." And then I'm instantly biased against the author just because of a stupid title.
It's not a rational thought, but the saddest part is that the real world fits that bias way too well.
You're definitely not the only one. But in my eyes it's major. As you pointed out -- it's insulting to the reader's intelligence.
That would be fine, by the way... the real problem is when valuable material is hidden behind clickbaity titles. Now _THAT_ is what irks me personally, and it is sadly happening more and more lately.
I don't think they are making a point about any single person. Saying "...than you thought." is just another way of saying "...than is generally believed."
It's very modern of them to assume that a speculative bubble would cause the collapse of industry, and that it's remarkable that it didn't. The whole relationship between "some people default on loans" and "some other people lose jobs" is a relatively modern type of financial crisis, starting in the late 19th century. In a way, it's a curious introduction to macroeconomics: the apparent puzzle that when one person is unable to repay a debt, another person somewhere else loses their job. I think they missed an opportunity to write a little more about it; it would explain why people should still care about bubbles.
Sure, but this is the most interesting part, I think:
> So if tulipmania wasn’t actually a calamity, why was it made out to be one? We have tetchy Christian moralists to blame for that. With great wealth comes great social anxiety, or as historian Simon Schama writes in The Embarrassment of Riches: An Interpretation of Dutch Culture in the Golden Age, “The prodigious quality of their success went to their heads, but it also made them a bit queasy.” All the outlandish stories of economic ruin, of an innocent sailor thrown in prison for eating a tulip bulb, of chimney sweeps wading into the market in hopes of striking it rich—those come from propaganda pamphlets published by Dutch Calvinists worried that the tulip-propelled consumerism boom would lead to societal decay. Their insistence that such great wealth was ungodly has even stayed with us to this day.
> “Some of the stuff hasn’t lasted, like the idea that God punishes people who are overreaching by causing them to have the plague. That’s one of the things people said in the 1630s,” Goldgar says. “But the idea that you get punished if you overreach? You still hear that. It’s all, ‘pride goes before the fall.’”
You may not have made that assumption, but it's a _very_ common characterization of what happens in free-market-based economies. Or, more directly, tulipmania is frequently equated with business cycles. I never saw it either, but I've definitely seen it presented as a valid argument many times.
Are business cycles in the modern sense (unemployment that lasts for longer than six months) known to happen in the presence of free banking and absent a central bank? Actual question. It matters because we know how this happens when a central bank controls the currency supply and allows NGDP to drop. It's not obvious to me that this happens with free banking.
The US only got a central bank - and a pretty limited one at that - in 1913, exactly because the many panics that hit the US money markets were too severe without a lender of last resort to act as a moderator.
The question would then be: after the US got a central bank, have recessions become less severe? The Great Depression was arguably greater than any of those prior to 1913.
Not sure how you weigh the Great Depression, but I believe the period after the Great Depression through the 2008 financial crisis featured less severe business cycles than the period before the central bank. The repeated crashes of the late 19th century really were very severe.
In the 19th century, a panic in the US was limited to the US; it was still a peripheral economy compared to Europe. After WWI, the US held most of the gold reserves in the world, while Europe was riddled with debt (to the US) and very slow to recover. The Great Depression was greater because it affected the whole world, given the centrality the US had assumed in world trade.
But, more importantly, between the Great Depression and the last crisis of 2008, the Western world never saw any panic as big as those of the 19th century: the Fed had learned much better how to be the world's banker, and the US had learned that, after winning a war, rebuilding international trade was much more important than collecting war debts.
Central banks don't 'control' the currency supply... Money creation by private bank lending is where most money in modern economies comes from, and adjusting interest rates is such a blunt instrument that they are really powerless to meaningfully control it. Central banks are important for a currency that is stable and usable long-term (if managed properly), but really should be working with the Government to control the money supply through the Government's fiscal measures (taxing and spending) and by prudential (lending) regulation, instead of the current ineffective Monetarist fantasy...
Seems like it's completely unavoidable for there to be cycles; central banks and regulations intended to smooth the economy just seem to expand the cycles (longer booms and longer busts) and concentrate malinvestment in different areas (real estate debt in the last crash, for example).
> Why are banking systems unstable in so many countries—but not in others? The United States has had twelve systemic banking crises since 1840, while Canada has had none. The banking systems of Mexico and Brazil have not only been crisis prone but have provided miniscule amounts of credit to business enterprises and households.
> Analyzing the political and banking history of the United Kingdom, the United States, Canada, Mexico, and Brazil through several centuries, Fragile by Design demonstrates that chronic banking crises and scarce credit are not accidents. Calomiris and Haber combine political history and economics to examine how coalitions of politicians, bankers, and other interest groups form, why they endure, and how they generate policies that determine who gets to be a banker, who has access to credit, and who pays for bank bailouts and rescues.
> Canadian banks historically had balance sheets like other banks, and participated in complex global interbank networks since the early 19th century. Yet Canadian banks, throughout their history, avoided systemic banking crises – with the exception of two short-lived suspensions of convertibility in 1837 and 1839 in response to crises originating in the United States. Moreover, prudential regulation was absent for much of Canada’s history, as was a central bank (until 1935). According to the structural theory of crises, the exposure of Canadian banks to liquidity risk should have been higher than in many other countries, given that the Bank of Canada did not come into existence until 1935. According to the externalities theory and the myopia theory, the absence of activist prudential regulation in Canada during most of its history should have been associated with a higher frequency of banking crises, but it was not.
Note that you write "central banks and regulations intended to smooth the economy", but as that last quote points out, "prudential regulation was absent for much of Canada’s history", so something is incomplete in your understanding.
Another way to look at this is that for most of the period under consideration, the Canadian economy was an order of magnitude smaller than both Britain and America. For the bulk of the period under consideration, Canadian currency was fixed to the pound Sterling, which had the Bank of England. Canada gained independence in 1867 and in 1871 passed the Bank Act (and for the bulk of the period under consideration was subject to English common law, and Canadian Acts could be voided by the House of Commons.)
That being said, I admit to not having read the paper. The authors' ultimate conclusion may stand, but that their exemplar is a British Dominion adds some confounding details.
Their "happy six", defined as "free of systemic crises since 1970", are Australia, Canada, Hong Kong, Malta, New Zealand, and Singapore.
They do point out the connections to British rule. They give some other reasons as well.
A test for the importance of being fixed to the pound Sterling, etc., is to look to other regions under British rule, like Jamaica, Mauritius, Kenya, Gambia, and Cyprus, and evaluate their banking crises.
I know nothing about that history; I just wanted to point out that it is, in principle, a testable prediction.
The major gap in your understanding: Bank insolvency != business cycle. Canada does indeed experience business cycles like every other significant economy. There are some examples in this article, if you don't believe me:
Does that mean that your comment had little or nothing to do with the "panic and a crash every few years" from the comment of nerdponx that you were replying to?
Because I thought they were connected, and was following that same topic of "panic and crash".
I do not know why you think "panic and crash" is specific to a run on the bank / forced bank restructuring. By the way, the original comment was about business cycles in general (that nerdponx replied to):
> Are business cycles in the modern sense (unemployment that lasts for longer than six months) known to happen in the presence of free banking and absent a central bank? Actual question. It matters because we know how this happens when a central bank controls the currency supply and allows NGDP to drop. It's not obvious to me that this happens with free banking.
Hope that clears everything up. I don't like arguing semantics for virtually no reason, but in this case it appears you legitimately misunderstood that the terms business cycle, market panic, and crash are not specific to banks.
Banks just settle business credit; that's their job. So business credit is free banking. Everybody can create money; the trick is getting somebody else to accept it.
I was going to make the same comment. I never heard that the Dutch economy collapsed from the tulip trade. But everything else was confirmed. And just like when the bitcoin bubble bursts, I don't think there will be a collapse in the economy either, but it will definitely be a phenomenon that will be talked about decades later.
This is probably not the answer you're looking for, but MongoDB no longer has much technical justification.
Compared to MySQL and Postgres, it does have an out-of-the-box partitioning scheme, which is something, and a reason to use it (although I'm not sure it outweighs the downsides).
Compared to newer cloud databases like Spanner, Citus, or Aurora, which scale well but also provide you with ACID guarantees reminiscent of traditional RDBMSes... there's really not much there.
The top post on HN right now is about PostgreSQL 10 features, of which the very first feature listed is "Native Partitioning" (and it's not about sharding). Relational databases have been using the term "partitioning" for a very long time so if you're going to compare features across comparable systems, you need to use the correct terminology.
Let's also keep in mind we're discussing MongoDB which has a very rich history of using misleading benchmarks to showcase their product (e.g. benchmarking the number of requests/sec the server can select() versus the fsync() rate of a RDBMS), so I think it's useful to be extremely clear what we're talking about.
Yeah, a colleague referred to MongoDB as the best-marketed database, and he was 100% correct. They'll make some misleading benchmarks, throw a few conferences, and voilà, you have something to replace your ol' RDBMS.
My favorite MongoDB moment was when Stripe wrote MoSQL so they could sync Mongo into PostgreSQL so data scientists can do joins and otherwise interpret the data. Now you have a “SQL replacement” and SQL instead of just using SQL!
I can see it for truly self-contained records at massive scale, but IMO it falls flat as an RDBMS replacement, which is how most people probably use it still.
Fair enough. When I first read your original comment, I thought that you were being unnecessarily pedantic.
I imagined that by using "partitioning" most people would probably know what I was talking about, but I've been working with MongoDB for a long time, so I probably have some cognitive bias here.
"Sharding" is the more specific and better term, but I'd still say that, although more general and possibly ambiguous, "partitioning" is roughly correct.
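For what it's worth, the distinction can be sketched in a few lines of toy Python (the table names, node names, and functions here are all hypothetical, not any real database's API): range partitioning splits one table into segments within a server by key range, while hash sharding routes rows to different servers.

```python
import hashlib

# Toy illustration of "partitioning" vs "sharding" (hypothetical names only).

# Range partitioning: segments of one table on one server, split by key range.
RANGE_PARTITIONS = [("sales_2017_q1", "2017-04-01"), ("sales_2017_q2", "2017-07-01")]

# Hash sharding: separate servers, each holding a slice of the data.
SHARDS = ["db-node-0", "db-node-1", "db-node-2"]

def partition_for(date):
    """Return the first partition whose upper bound exceeds the key."""
    for name, upper_bound in RANGE_PARTITIONS:
        if date < upper_bound:  # ISO dates compare correctly as strings
            return name
    raise ValueError("no partition covers this date")

def shard_for(user_id):
    """Pick a server via a stable hash of the key."""
    digest = hashlib.sha1(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(partition_for("2017-02-14"))  # sales_2017_q1
```

Both route a row by its key; the difference is whether the destination is a local segment or another machine, which is why the terms get conflated.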
Can you clarify what you mean by this? You're Matt Kroll and you're confirming that you still work for MongoDB? (I'm afraid that nothing is obvious from anyone's usernames.)
I'm Matt Kroll.
I can confirm I was never Head of Sales at MongoDB.
And I can confirm (as anyone can see on LinkedIn.com) that I departed MongoDB for Google to continue to work as an Enterprise sales rep.
Just do a bit of digging. Both names dropped appear to be accurate from usernames/profiles. Matt did indeed leave MongoDB for Google, that much is accurate.
Neat project. It looks like this was largely written by one person, and I'm fairly in awe at anyone who can take a big project like a compiler this far alone.
Isn't there a bit of cognitive dissonance in believing that Rust as a language is an important idea (i.e., for the additional code safety and maintainability that it conveys), but then simultaneously making the effort to rewrite the current Rust-implemented compiler in C++?
C++ is fast, but aside from a shared value around performance, it has fairly little in common with the ideas that Rust is built on.
Multiple implementations of a compiler lets you implement the "Diverse Double-Compiling"[0] countermeasure to the famous "Reflections on Trusting Trust"[1] attack. You wouldn't necessarily use the C++ implementation in production, but it still improves the security of the Rust language just by existing.
DDC is irrelevant here; DDC is an argument not to write the second compiler in C++ but to write it in Rust too.
Having a Rust compiler in C++ is a mitigation to the trusting trust attack, period. You don't need DDC for this.
DDC is necessary when you have two self-hosted compilers (e.g. GCC and Clang). Here we have one self-hosted compiler (rustc) and one in another language (C++). To mitigate trusting trust in rustc, use mrustc to compile rustc, then use that rustc to compile itself, and now you have a trusted binary (provided you trust your C++ compiler; you can fix that by DDCing the C++ compilers).
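The bootstrap chain above can be modeled as a toy trust calculation (illustrative Python only; `build`, the source records, and the trust flags are all hypothetical): a binary is only as trustworthy as its source plus the compiler that built it, so the final rustc's trust reduces to the audited sources and the C++ toolchain.

```python
# Toy model of the mrustc bootstrap chain (hypothetical names and flags;
# this sketches the trust argument, not a real build process).

def build(source, compiler_binary):
    """A binary inherits trust from its source AND the compiler that built it."""
    return {
        "program": source["name"],
        "trusted": source["trusted"] and compiler_binary["trusted"],
    }

# Assume we audit the C++ toolchain and the mrustc/rustc sources by hand.
gxx = {"program": "g++", "trusted": True}
mrustc_src = {"name": "mrustc", "trusted": True}
rustc_src = {"name": "rustc", "trusted": True}

mrustc = build(mrustc_src, gxx)                 # stage 0: C++ compiler builds mrustc
rustc_stage1 = build(rustc_src, mrustc)         # stage 1: mrustc builds rustc
rustc_stage2 = build(rustc_src, rustc_stage1)   # stage 2: rustc rebuilds itself

# Trust in the final binary no longer depends on any pre-existing rustc binary.
print(rustc_stage2["trusted"])  # True
```

The point of the model: no pre-existing rustc binary appears anywhere in the chain, which is exactly what removes the self-hosted compiler from the trusted base.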
As far as I can tell the goal of the project isn't to target more platforms (Rust targets quite a few by way of LLVM), so I don't think I'd choose any other language, including C++.
Having a compiler and standard library written in the language that it compiles has some huge benefits for increasing the pool of possible contributors.
Interesting. Along with your other comment about the borrow checker, I guess you could develop using (or occasionally check against) the rustc compiler for borrow-checker correctness, and deploy using mrustc. That's pretty cool.
> Having a compiler and standard library written in the language that it compiles has some huge benefits for increasing the pool of possible contributors.
But this isn't the official compiler, this is someone's personal project?
> But this isn't the official compiler, this is someone's personal project?
True, but compilers are complicated machines and Rust is still changing at a fairly frantic rate.
The author seems to be doing quite a good job of development today, but if the project has any hope of staying current, it probably needs to think about how to increase its bus factor (something happens, like the author changing jobs, starting a family, or just becoming interested in something else, and a single person suddenly has less time to contribute).
Rust is changing, but in a backwards-compatible way. That said, the standard library aggressively makes use of new features, so the challenge isn't the language but compiling libstd.
Could rustc have a way to output desugared code, or code targeting a specific epoch, with new features like generators expanded to a backwards-compatible form? This might allow for preprocessed source that could be compiled by something like mrustc even if it doesn't implement every single RFC.
Yes, but I mean outputting actual Rust source, with generators or async/await expanded into calls to Futures. Similar to how Go is now bootstrapped from a down-level compiler.
Yeah, I get it. But that's what I mean; MIR is the common sub-language that's the same across epochs.
I don't think there's any real plans for a source-based approach. But epochs can only change a limited amount of things for exactly this reason; they minimize the compiler burden of supporting them.
Any that are both open-source and C++11 compliant? Guess you can still build g++-4.6 with just C, then a newer g++ from that, but it's a bit of a pain.
If the goal is to break the dependency cycle, a higher level language like Python would make development much easier. C++ is powerful, but not as rapid to develop in.
It's clearly easier to write a compiler in a higher-level language (Python is just an example, but Lisps are suited to this kind of thing). For example, text parsing is easier in Perl/Ruby/Python/Swift/etc. As someone who knows C++, I find more thought is required to do the same thing than in a higher-level language, although the result runs much faster. If you just wanted to bootstrap the compiler, you'd choose the easiest route to that. It could also be easier to read and understand than a C++ compiler.
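To make the "text parsing is easier" point concrete, here is the kind of tokenizer that takes a handful of lines in Python (a throwaway sketch, not from any real compiler); the equivalent C++ would need manual buffer and state handling.

```python
import re

# A tiny expression tokenizer: integer literals and single-character operators.
TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")

def tokenize(text):
    """Split an expression into ints and operator/punctuation characters."""
    tokens = []
    for number, op in TOKEN.findall(text):
        tokens.append(int(number) if number else op)
    return tokens

print(tokenize("12 + 34*5"))  # [12, '+', 34, '*', 5]
```

A regex plus a list comprehension-style loop does here what a hand-rolled C++ lexer would spread over a class and a switch statement, which is the productivity gap being described.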
Until then, those comments only annoy those of us that happen to like Rust, but don't find it mature enough to replace C++ on the use cases we happen to care about.
I didn't say you can't write useful software in C++. I said that C++ software is not, in general, memory safe. Sometimes the benefits of a particular piece of software (for example, having an excellent production-grade optimizer) outweigh its drawbacks (for example, not being memory-safe).
I don't think you'd find a single LLVM developer who would claim that LLVM is memory safe. Giving invalid IR to LLVM and not running the verifier frequently segfaults it, for example...
... but the details are very, very different. Unless I've missed significant revisions, data races and concurrency are a non-goal of the Core Guidelines, but are central to Rust.
That said, I always welcome tooling to make C++ safer; the end game is making programs better, not language partisanship!
I will say this for C++: post C++11, it’s one of the few languages in widespread use to have an explicitly defined memory model. The people working on C++ definitely do care about concurrency and parallelism. I’d still choose Rust over C++ for that kind of program any time I was given the choice, though. =)
Absolutely, I'm not saying they don't care; it's that solving that problem is an explicit non-goal of the GSL work. The C++ committee is clearly working on concurrency related things, I was reading the various coroutines TSes recently in fact, as we're working on similar things in Rust.
SaferCPlusPlus[1] is the library that's probably closer to Rust "in spirit and intent". And more importantly, safety effectiveness. And it does address the data race issue.
I'm in awe too at how, single-handedly, some folks have the drive to keep hitting the keyboard and pour all their brain's power into a model defined by electricity and binary. Truly amazing! Bravo.
> I know this will never actually happen, but I sincerely wish the Social Security Administration would publish a complete official database of real name to SSN mappings.
A full list might not even be necessary — a form that would allow you to trade some basic details for an SSN might be enough to scare various agencies straight. At least that way isn't quite as iterable.
Honestly, this doesn't even sound all that implausible to me as long as some sort of sufficient warning was built into the announcement to give organizations a way to build alternatives. Say two years. After that, levy major fines against anyone who's not compliant with certain very basic security standards.
The biggest hurdle here isn't going to be backlash as much as it is comprehension. Just like with net neutrality and encryption, most lawmakers are going to have a hard time understanding why SSNs as secrets aren't a good thing, and they'll have to be convinced.
This isn't accurate. Shaw has recently jumped on the same bandwagon and started enforcing hard caps with overage fees. Just go here if you want to check that:
The trouble is that to run more complex operations, it's nice to have a high-level language to write your tasks in. Rake uses Ruby for this, and Nemesis uses Haskell.
Either app can be used for other purposes; for example, I use Rake to compile and run tests on my C# projects as an alternative to NAnt.