The more targeted the advertising, the more effective and therefore valuable it is.
But it's still hard to know what people want or need to buy right now.
They can (and do) personalize it somewhat, but they just aren't as well placed as Facebook, because they aren't as close to the user. People are ultimately where the money comes from; that's how advertisers get it.
So whose information is more valuable? Facebook's, because they know more about you overall (your friends, your habits, etc.), or Google's, because they know what you want right now (because you just typed it right fucking there in the search box)?
I'm leaning toward Google. If someone has a problem, they go to Google to search for ways to solve it, and as you point out, that's precisely when they're most amenable to advertising. People go to Facebook to connect with other people. Advertising will always be noise there. (Stupid mindless games notwithstanding.)
"Some of the best ideas may initially look like they're serving the movie and TV industries. Microsoft seemed like a technology supplier to IBM before eating their lunch, and Google did the same thing to Yahoo."
I agree, and this would be a very typical long-term move by Amazon: they systematically partner with existing incumbents, and then develop a business model that renders them redundant. Book publishing was the first example.
I agree that just because they work with the studios doesn't mean they don't fit the bill.
I think they don't fit because it's boring. It's just a scaled-up, updated version of the old "I'll help you sell your screenplay" newspaper ads: open to everyone, but with predatory terms.
Another middleman to the studios.
It's not innovative and it's not killing anything.
If the good costs more to make than the buyer is willing to pay, then there is no price, because there is no sale -- so clearly price depends on cost. Functions can have more than one input.
I just about spat my coffee out when I saw that NIO ByteBuffers use 32-bit ints for everything. (I'm not normally a Java guy.) I thought, "Oh hey, a direct byte buffer will be a great way to keep all this data from blowing up the heap... AW WTF!? ints?!"
Does anyone know the rationale for this? If they had used 64-bit long values, like the underlying OS calls, his whole matrix could have been mapped into a single buffer, making all this list-of-mappings stuff unnecessary. That extra level of indirection normally wouldn't matter much, but in this case he's paying the cost 1e12 times over.
The 32-bit ints can be worked around at the user-library level. But it's much worse than that.
Even if you only need to access 2GB (or you had fixed the Java memory-mapping code), you still have a .getDouble() or .putDouble() call for every access, and that's actually a virtual call (as far as I can tell, even though I only ever used one kind of memory channel, the JVM wouldn't inline it -- though I can't tell for sure, because the JVM also sucks at introspection).
I had real computational code in C that needed to be translated to Java.
First attempt (no memory mapping, converting C structs to Java objects) failed miserably: my structs were 32 bytes each, and the per-object overhead was 24 or 32 bytes (I don't remember which), which took me beyond physical memory (falling back to virtual memory caused a slowdown of ~1000x).
Second attempt: I switched to memory-mapped arrays -- much better, only ~15 times slower. But I also had to write my own sort, because Arrays.sort() (or whatever it was called) was allocating 48 bytes for each 4-byte int to sort (wtf?), blowing memory usage up again.
That's a cost people using Hadoop pay all the time -- which makes it kind of surprising how popular it is. You need 10 times less CPU if you do things right, and at that scale, maintenance and hardware cost as much as salaries...
It was copying more, or for some reason expanding ints to Integers -- it multiplied the required memory by 12.
I don't have access to that source code anymore, and I don't remember what exactly I used, but -- given that I had to implement my own data structure over mmap -- it was an array of int, which needed sorting through a comparator class I supplied. That comparator looked up the structs corresponding to ints, and compared them. Perhaps it was just crazily instantiating the comparator class or something.
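One plausible source of that blowup (a guess, since the original code is gone): Arrays.sort over primitives takes no Comparator, so sorting indices by a custom key forces every int to be boxed into an Integer object, plus an array of references on top. A minimal sketch of that pattern:

```java
import java.util.Arrays;
import java.util.Comparator;

public class BoxedSort {
    public static void main(String[] args) {
        int[] keys = {5, 1, 4, 2, 3};      // the structs the indices point at

        // Arrays.sort(int[]) has no Comparator overload, so sorting
        // indices by a looked-up key means boxing every int into an
        // Integer (~16 bytes each on a typical JVM) plus the reference
        // array -- the per-element memory blowup described above.
        Integer[] idx = {0, 1, 2, 3, 4};
        Arrays.sort(idx, Comparator.comparingInt(i -> keys[i]));

        System.out.println(Arrays.toString(idx)); // [1, 3, 4, 2, 0]
    }
}
```

Writing your own sort that swaps raw ints in place avoids all of that allocation, which would explain why a hand-rolled sort fixed it.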
You can use sun.misc.Unsafe [1] and get 64-bit (long) addressing. It's JVM-dependent, of course, but in such use cases you typically have pretty tight control over the stack. Unsafe pretty much covers the gap between C/JNI and NIO.
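For the curious, a minimal sketch of what that looks like (the class and method names here are made up for illustration; note that application code can't call Unsafe.getUnsafe() directly, so the usual trick is reflection on the theUnsafe field, and newer JDKs may warn about or restrict this):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeDemo {
    // Grab the singleton via reflection; Unsafe.getUnsafe() throws
    // SecurityException when called from application code.
    static Unsafe unsafe() throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    // Write and read a double at a raw, 64-bit (long) off-heap address --
    // no 2GB ByteBuffer limit involved.
    static double roundTrip(double value) throws Exception {
        Unsafe u = unsafe();
        long addr = u.allocateMemory(8);   // long-addressed, off-heap
        try {
            u.putDouble(addr, value);
            return u.getDouble(addr);
        } finally {
            u.freeMemory(addr);            // no GC here; free it yourself
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(3.14));
    }
}
```

The trade-off is exactly what the name says: you get C-style addressing and C-style responsibility for freeing memory and staying in bounds.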
Yeah, it's stupid. I think the underlying rationale was that Java arrays are indexed with ints, and that decision was made in 1995, when 32-bit machines were universal.
You can hack around it by having multiple memory mappings over a file, each starting at a different offset. But honestly, if you're doing something math-heavy that needs really big memory-mapped files, just use C; it's better at both of those anyway.
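A rough sketch of that multiple-mappings workaround, assuming fixed-size 8-byte elements (the BigMappedFile name and 1 GiB CHUNK size are my own choices; since CHUNK is a multiple of 8, no double ever straddles a chunk boundary):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Address a file larger than 2 GB by keeping an array of
// MappedByteBuffers, each covering one CHUNK-sized window.
public class BigMappedFile {
    static final long CHUNK = 1L << 30; // 1 GiB per mapping (assumption)

    final MappedByteBuffer[] maps;

    BigMappedFile(String path, long length) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            raf.setLength(length);
            FileChannel ch = raf.getChannel();
            int n = (int) ((length + CHUNK - 1) / CHUNK);
            maps = new MappedByteBuffer[n];
            for (int i = 0; i < n; i++) {
                long pos = i * CHUNK;
                long size = Math.min(CHUNK, length - pos);
                // Mappings stay valid after the channel is closed.
                maps[i] = ch.map(FileChannel.MapMode.READ_WRITE, pos, size);
            }
        }
    }

    // The extra indirection the parent comments complain about: every
    // access pays a divide, a modulo, and a virtual getDouble call.
    double getDouble(long index) {
        long byteOff = index * 8;
        return maps[(int) (byteOff / CHUNK)].getDouble((int) (byteOff % CHUNK));
    }

    void putDouble(long index, double v) {
        long byteOff = index * 8;
        maps[(int) (byteOff / CHUNK)].putDouble((int) (byteOff % CHUNK), v);
    }
}
```

It works, but every element access goes through the chunk lookup, which is exactly the per-access overhead being multiplied 1e12 times in the case above.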
The presence of "octonions" in the Boost library kept me from seriously considering it for like 2 years. I understand the rationale -- that by making it monolithic they get more libraries installed on more developers' machines -- but come on. If I need threads, filesystem, date_time, foreach, etc., don't make me install wave, spirit, proto, phoenix, fusion, fucking octonions.
Because Google has a monopoly on non-shitty free email with GMail, in the same way that Microsoft has (had?) a monopoly on hardware-independent operating systems with DOS/Windows. (I wish I was really being sarcastic, you know because there are actually a billion web-based e-mail alternatives besides GMail that actually don't suck... right? Anybody know of any? At this point I'm earnestly inquiring.)
Gmail might be "the best", but not by enough to get everyone to switch. So IMO it's more a question of where your suck threshold is than anything specifically wrong with Hotmail/Yahoo.
> what happens in HPC tends to filter down to servers
Is this conventional wisdom? How does a petaflop race affect app servers and databases? It seems like most traditional server workloads could get by without a single FPU. The only thing they have in common is IO. Are there many data centers using InfiniBand? (Maybe there are; I don't know.)
The Cell architecture is an evolutionary dead end. SPARC is no more of a threat to x86 now than before. GPUs may be the next big thing for HPC, but they've got a long way to go to get out of their niche in the server market. (That niche being... face detection for photo-sharing sites? Black-Scholes? Help me out here.)
I mean, I agree with your overall point, but I think it's more likely that ARM will steal all the data center work before anything from the HPC world does. The HPC world is too focused on LINPACK.
Are there many data centers using IB? Yes. SMP was common in HPC before it came down-market, likewise NUMA. Commodity processors have many features - vector instructions, speculative execution, SMT - first found in HPC. Power and cooling design at places like Google and Facebook is heavily HPC-influenced as well. Certainly some things go the other way - e.g. Linux - but usually today's server design looks like last year's HPC design.
I'm not quite sure it's valid to write off SPARC as an architectural dead end when the current fastest computer in the world uses it, and the next crop of US competitors for that crown are all based on the Cell/BlueGene lineage. GPUs are also more broadly applicable than you might think. Besides video and audio processing, they can be used for many crypto-related tasks (witness their popularity for Bitcoin mining), various kinds of math relevant to data storage (e.g. erasure codes or hashes for dedup), and so on. Many of their architectural features are also being copied by more general-purpose processors as core counts increase.
Yes, high-end HPC is too obsessed with LINPACK. Nonetheless, it remains a good place to look when trying to predict the future of commodity servers. Even if ARM does displace x86 instead, many features besides the ISA are likely to come from HPC. Perhaps more relevantly, either outcome is still very bad for Intel.
What exactly does "viable" mean in this context? Because the husband and wife team behind NameVoyager have had the baby name trend visualization market cornered for years now.
They aren't just using it to drive ad traffic and promote the book, but are now (since I last looked) actually charging for the visualizations themselves. And presumably people are paying, but who knows...
On the plus side, this means there's a market, but you've got a long way to go to catch them, both in regards to the visualizations and the breadth of the name database. It's a good start, though.
Yeah, viable in this context means that you can graph names and there are ads on the page. All the times we've tried to look at name popularity when naming our kids, we've never had a simple visualization tool like this, so I thought I'd build one.
Then after I started, I saw those guys, but saw that you have to pay to visualize it, so I thought "hey, I'll disrupt them" :)
Not to be too harsh, but I agree, this is hardly a "viable" product. It's pretty much a college-level assignment where you just graph data from a database. It needs a lot more work before it becomes minimally "viable".
Why make people guess the names? Every name I picked seems to drop off dramatically, but if I want to find the most popular names, I have to guess them? Why not show the top 10 most popular names?
What about a logarithmic graph? If you graph "Isabella" or "David" next to any name, it renders the graph useless.
Space leaks, stack overflows (foldl vs. foldl'), failed pattern matches (head []), etc. There are tons of bugs lurking in well-formed Haskell programs.
Tons being a vast overstatement. I mean, you could exploit these things to cause bugs, but really do you encounter them on a day to day basis? I know I rarely if ever have.