Your numbers are wrong: you can get "developer support" for $29/mo or 3% of AWS cost (whichever is higher), and "business support" at $100/mo or 10% of AWS cost. In my experience, the support reps are qualified engineers who take your issues seriously, and it's something we gladly pay for (particularly since it's opt-in, and you can change your mind at any time).
Do 3) and 4) not contradict each other? 3) states all the distribution power is in the hands of a few companies, and 4) states that no entity can control the flow of information on these newly created distribution channels. Seems like they can't both be true?
The point is the tension. It’s very easy to acknowledge (1) and (4), or (2) and (3). It’s very rare to acknowledge all four.
That said, there’s a misunderstanding here:
(3) says much, not all, of the power is increasingly centralized, and that that’s concerning in light of (2).
(4) isn’t saying the companies lack power. It’s saying that their platforms are being abused. It’s understood (from (3)) that they can (indeed sometimes do) use their power to attempt to control that abuse.
I wouldn't say this is a straw man. Every billionaire being exploitative/criminal is a real political position among the American Left, see e.g. this tweet [0] by Robert Reich. He was secretary of labor in the Clinton administration, so not some extremist weirdo.
When leftists say "exploit," it's because they regard paying workers less than the marginal value their labor produces as exploitative. (Cf. boss makes a dollar, I make a dime.) They claim that it's unfair that that product of labor goes to the company's owners; hence, to fix this, leftists want to "seize" or "democratize" the means of production. (Sidenote: this is a core tenet of capitalism: value going to owners.)
Others would disagree that this is exploitative because they believe that wages should be set by the market.
It's clear that workers are not compensated for all of the value they produce, and that the excess value goes to owners. The disagreement is whether this is exploitative. Leftists argue that it's not fair for owners to capture value without contributing any work. Rightists argue that capital is a scarce (i.e. limited) commodity, so owners deserve to be compensated for providing capital. Who you agree with is up to you to decide.
Pg's blog post strawmans "exploit" because he uses a different definition of exploit than leftists use. This might be because he misunderstands leftism, but the cynical believe that he is deliberately strawmanning them for his own gain (since pg benefits from capitalism). A more honest blog post would examine whether value going to owners or founders is exploitative or not.
Not sure if you're being serious, but assuming you are...
Nowhere did I state that all leftists are communists. In fact, seizing the means of production is also a socialist idea. One key difference between socialism and communism is that communism additionally abolishes all private property.
In leftist circles in the US, leftism colloquially refers to people who advocate for some form of social democracy, with most leftists supporting at least socialism. Those who support more center-left policies (like Keynesian economics and the welfare state, without advocating for social-democratic programs) are called "liberals," not leftists, despite the fact that they are left-leaning (in the US's sense of left-leaning).
> Moving left doesn’t necessarily make one “more liberal.” At a certain point, the traveler leaves the province of liberalism for one that is more correctly identified as socialism, radicalism or leftism.
> Leftist: They believe the free market system is inherently flawed to favor the rich and powerful and believe that the government should either work within the framework of the market system but heavily regulate it and pay for more social services and welfare through higher taxation on the rich and corporations (social democracy) or abandon the free market all together in favor of socialism.
> Liberals are socially liberal and typically believe in democratic capitalism and the welfare state.
> Leftists typically believe in socialism and are against the ideas of capitalism.
I think Robert Reich has been further left of center than his party has been since the 90s, in an increasingly polarized nation. Here's the quote:
There are basically 5 ways to accumulate a billion dollars in America:
1) Profiting from a monopoly
2) Insider-trading
3) Political payoffs
4) Fraud
5) Inheritance
To disprove this, one would simply enumerate the 2,000 or so billionaires and determine how each made their money, then categorize them and see whether a significant percentage (say, 5% or 10%) do not fit these five criteria.
This is a needlessly divisive interpretation of that tweet. The tweet lists 5 concrete ways of becoming a billionaire, and you're putting words in the author's mouth by labeling them all "exploitative/criminal", which were not used at all.
The enterprise plan is a very custom plan - if you only need access to one or two features and/or only have a few million requests a month, the price can be pretty cheap (much less than the 5k/mo price advertised on the CF dashboard), but if you want mission-critical features like bot management[0], access to China datacenters[1], etc. it definitely can get into the 6-figure range - and they do have over 550 customers paying 6 figures or more [2].
But just getting one to remove the cookie is probably not worth it, since it will end up costing more than a business plan ($200/mo) regardless.
Generally, because all the sales & marketing cost (which makes up a big share of most large SaaS companies' expenses) is front-loaded, while the revenue is spread out over time.
When you spend, say, $10k to acquire a customer, and they pay you $25k for a perpetual license, you're cash-flow positive in year 1. When, instead, the customer pays you $10k a year and stays on average 4 years, the model is a lot more profitable over the long term, but will cost money in year 1. Combine that with large growth rates, and the need for cash investment grows accordingly (even if the business model as a whole is perfectly sound).
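Back-of-the-envelope, using the illustrative numbers above (churn, discounting, and ongoing costs ignored, which is my simplification):

    # Cumulative per-customer cash flow: perpetual license vs. subscription.
    CAC = 10_000           # up-front sales & marketing cost to acquire the customer
    PERPETUAL = 25_000     # one-time license fee, all collected in year 1
    SUBSCRIPTION = 10_000  # annual subscription fee; customer stays ~4 years

    for year in range(1, 5):
        perpetual_cum = PERPETUAL - CAC               # all revenue lands in year 1
        subscription_cum = SUBSCRIPTION * year - CAC  # revenue accrues over time
        print(f"year {year}: perpetual {perpetual_cum:+,}, subscription {subscription_cum:+,}")

    # year 1: perpetual +15,000, subscription +0
    # year 4: perpetual +15,000, subscription +30,000

Same unit economics, very different year-1 cash position, hence the need for outside cash while growing fast.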
Came here to say this. He started with 0 shares of class A stock, and ended with 0 shares of class A stock.
What he did do is convert $23m worth of class B stock into class A stock, and sell it.
Adding up all the class B stock & options he holds at the end of the transaction (Table II, column 9), he still holds ~50M shares (worth ~$165 a pop as of this writing).
The current headline, "Zoom CEO sold all of his common shares", is technically correct but highly misleading. Still, it's interesting to see what a founder on a rocketship is doing to diversify a little, even in the particulars of the conversion/disclosure and how it's (mis-)interpreted by peanut galleries everywhere.
I guess. I don't know how much of that $10b is readily liquid. If we really want to look at it in percentage terms it seems like that number is the one to compare to.
Frankly it seems reasonable to me to sell off some shares to "lock in" some of the gains; I'm not sure that shows a lack of confidence going forward.
Besides, there's a lot of reasons to sell shares. Maybe he's buying a house.
I see an X against "10% Owner", which means that, at the market cap of $46 billion, Eric Yuan should own at least $4.6 billion worth of shares. The current sale is noise (~0.5% of his shares).
That's some artifact of the form or the reporting requirements.
Per their Mar 20, 2020 10-K [1], Eric holds:
> As of January 31, 2020, our founder, President and Chief Executive Officer, Eric S. Yuan, together with his affiliates, held approximately 16.5% of our outstanding capital stock
The “30% cut” only applies to digital goods like apps, software or music. Physical goods like a book ordered from Amazon (or an Uber ride, for example) are exempt. Amazon-owned properties that sell digital goods are subject to the rule though. That’s for example why you can’t buy Audible audiobooks from within the app.
> Yes, Google has a huge index, but most queries aren't in the long tail.
I'm not quite sure about that. 15% of Google searches per day are unique, as in, Google has never seen them before [1]. That's quite an insane number.
Sharing for anyone who didn't know: there is a very good dataset you can use now. If you don't have an NVMe SSD in your computer, I highly recommend getting one for fast I/O.
[edit]
In my experience YaCy works really well. You have it crawl the sites you frequently visit and their external links, and it quickly accumulates into something more accurate than Google.
Wow, 15% unique searches is indeed quite an interesting figure. That said, what OP said is definitely not disproved. Just because 15% of searches are unique doesn't mean the most relevant result is buried in the tail end. I can think of loads of my own searches that are probably unique or rare, but lead to the same popular results because of typos, improper wording, etc.
Without some clear numbers on that from a major search engine, I think this might be very difficult to infer.
Heh, yes, they do. Which is a reminder that devs are not "typical" users.
As a developer, I search using keywords; for example, if I was looking for property for sale in Inverness, I might search for "property Inverness", whereas I've seen and heard "typical" users use something like "find me a 2 bedroom house with a garden for sale in the North of Inverness" - much more verbose, and containing stop words and phrases unlikely to help (I think!).
I do the same as you, but was just thinking that if most users search using full sentences then Google will spend most effort optimizing for that, so maybe we're the ones getting the worse results?
No, the optimization they do for the low-quality query is more than balanced out by the higher clarity and relevance of a well-phrased query. There are often extraneous words that aren't simple stop words, and they're not 100% successful at removing these extraneous ones.
I almost always search keywords while my girlfriend uses sentences and we often get quite different results. If I'm having trouble finding a good result there's a pretty good chance she will find something quickly. Surprisingly this holds true even for programming questions on topics that I know well and she's never heard of before.
What does it matter whether it came from an assistant or not?
Natural language is likely the preferred search input method for kids under a certain age, who cannot yet type fluently. My kids formulate very long, complex queries verbally. The other day my son asked Alexa why the machine gun is such a deadly weapon. She replied with a snippet from Wikipedia that was surprisingly relevant.
I search full sentences (questions) from the keyboard. I figure I'm not the only one to have had the question before, so I ask. Also, I find that blog posts, etc. tend to match well for full sentences.
Does that actually work? I must be old school; I always delete such IDs before searching, but then again I used Google back when it actually did what you told it instead of misinterpreting everything for you.
It doesn't seem to have any particular effect on the results that come up. I always used to delete them, and still do sometimes but Google seems to pretty much ignore them in practice.
Could this be explained by supposing that people are just searching for current events, sometimes national, sometimes international, sometimes very local? If so, you really wouldn't need much indexed to handle those queries. I imagine many queries are also just overly verbose and sentence-length, which artificially inflates the number of unique queries which are actually seeking roughly the same pages.
Good point, and 15% is indeed a lot, but the question is what "unique" means. If it means that the exact same character sequence appeared for the first time, it doesn't mean that the user searched for a term that has never been searched for.
I mean, with newer advances like machine learning it's more and more possible to _semantically_ link queries. If that's the case, those 15% could become 5% truly unique searches, or even less.
"how dumb is trump" and "how dumb is donald trump" are two different searches, but they semantically belong together because they mean the same thing.
Probably quite a few. New things happen. Politics, wars, famous folks, movies, music, diseases, scientific studies, products, brands, model numbers for products, fads and slang. I'm guessing there are other things as well.
Some of the new things are probably variation as well - as others have mentioned, sentences and voice commands can give lots of new stuff.
I would think it's pretty common. For a lot of people, Google is the internet. Or at least the reference point. If Google isn't working, it's almost certainly a problem on your end. I don't think anyone else has that reputation for availability amongst the general public.
> 15% of Google searches per day are unique, as in, Google has never seen them before.
That is impossible, and therefore wrong (I'm wrong, please see the edit below). To know if a search is unique, as in Google has never seen it before, Google must be able to decide whether a query it receives was seen before or not. Even if we assume Google needed only one bit for each message it has ever seen, and assuming it saw 15% new messages each day since its creation more than 20 years ago, it would need to store more than 2^1471 bits.
What could be true is that each day 15% of all searches are unique on that day.
Edit: I'm wrong. The 15% of completely unique messages per day is relative to the messages per day, not to all messages it has ever seen, so exponential growth doesn't apply. To see that, assume Google received just one search query each day for 20 years, but each was unique random gibberish; then Google could easily store all of them even though 100% of all messages per day are unique.
This is a somewhat faulty analysis. One could easily use a high-accuracy Bloom filter: it tells you with certainty when a search has definitely not been seen before, so counting those gives a lower bound on the number of unique queries, with the error margin set by the false-positive rate.
It is roughly 1.15^(365*20). That it is wrong was clear from its size. I wanted to use its falseness to show that the assumptions are incorrect. Which they are, just not in the way I initially understood.
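For the curious, here's where that figure comes from, under the (flawed) compounding model:

    import math

    # if the set of seen queries grew 15% per day for 20 years: 1.15^(365*20)
    exponent = 365 * 20 * math.log2(1.15)
    print(f"1.15^7300 ≈ 2^{exponent:.0f}")  # ≈ 2^1472, matching the claim above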
How are you computing that number? It's definitely wrong.
Assume Google receives 1 trillion queries per year, and has been around for 20 years. Using a bloom filter you can achieve a 1% error rate with ~10 bits per item. So a ~25 terabyte (200 terabit) bloom filter would be more than sufficient to estimate the number of unique queries.
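Spelled out with the standard Bloom filter sizing formula (the query volume is the assumption above):

    import math

    n = 20 * 10**12  # assumed: ~1 trillion queries/year for 20 years
    p = 0.01         # target false-positive rate

    # optimal size: m = -n * ln(p) / (ln 2)^2 bits
    m_bits = -n * math.log(p) / math.log(2) ** 2
    print(f"{m_bits / 8 / 1e12:.0f} TB")  # ~24 TB, i.e. ~10 bits per item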
If you have a list of 20 trillion query strings, and each query string is on average < 100 bytes, you're looking at a three-line MapReduce and < 1 PiB of disk to create a table with the frequency of every query ever issued. Add a counter to your final reduce to count how often the number of times seen is 1.
Actually, the bloom filter was just an approachable example. There are much more clever and space-efficient solutions to this problem, such as HyperLogLog [1] (speculating purely based on the numbers in that article, it looks like a few megabytes of space would be far more than sufficient). See the Wikipedia page on the "Count-distinct problem" [2].
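Rough numbers from HyperLogLog's textbook error bound (~1.04/√m for m registers; the ~6 bits per register is my assumption, based on typical implementations):

    import math

    # relative error ≈ 1.04 / sqrt(m) for m registers, ~6 bits each
    for p in (14, 16, 21):
        m = 2 ** p
        error = 1.04 / math.sqrt(m)
        kib = m * 6 / 8 / 1024
        print(f"m=2^{p}: ~{error:.2%} error in ~{kib:,.0f} KiB")

    # m=2^14: ~0.81% error in ~12 KiB
    # m=2^21: ~0.07% error in ~1,536 KiB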
My initial approach was also technically wrong; it tells you the fraction of queries which happen once.
To find the fraction of queries each day which are new, you would want to add a second field to your aggregation (or just change the count): the first date the query was seen. After you get the first date each query was seen, sum up the total number of queries first seen on each date, and compare it to the traffic for each date.
You could still hand the problem to a new hire (with the appropriate logs access), expect them to code up the MapReduce before lunch (or after if they need to read all the documentation), farm out the job to a few thousand workers, and expect to have the answer when you come back from lunch.
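A toy single-machine version of that aggregation (the log format here is made up, standing in for the real logs and MapReduce):

    from collections import Counter

    # logs: iterable of (date, query) pairs in chronological order
    def fraction_new_per_day(logs):
        first_seen = {}    # query -> date it first appeared
        total = Counter()  # date -> total queries that day
        new = Counter()    # date -> queries first seen that day
        for date, query in logs:
            total[date] += 1
            if query not in first_seen:
                first_seen[query] = date
                new[date] += 1
        return {d: new[d] / total[d] for d in total}

    logs = [("d1", "foo"), ("d1", "bar"), ("d2", "foo"), ("d2", "baz")]
    print(fraction_new_per_day(logs))  # {'d1': 1.0, 'd2': 0.5}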
I don't think it's necessarily impossible to calculate. Using probabilistic data structures arranged in a clever way, it's likely possible to calculate with some degree of accuracy.
I haven't thought this through, but: take all the queries as they're made and create a bloom filter for every hour of searches. Depending on when this process was started, an analytics group could then take a day of unique searches, run them against this probabilistic history, and get a reasonable estimation with low error. Although the people who work on this sort of thing probably know it far better than I do.
The real question, assuming the 15% is right, might be: do we care about those 15%? Are they typos that don't merge, are they semantically different, are they bots searching for dates or hashes, etc.?
I believe that they're unique in a sense that nobody has typed in that exact query previously.
Of course, Google knows better than to treat every search query literally. Slight deviations and synonyms work for the majority of people, even if us techies strongly oppose them and look for alternative solutions (like DDG) that still treat our searches quite literally.
Source: https://aws.amazon.com/de/premiumsupport/pricing/