I think the ribbon is okay. I had just as hard a time finding things in Word/Excel 2003 as in Word/Excel 2007, but after a while the design started to make sense. I still had to Google things, but I had that problem with the older versions too.
Word 2007 worked much better for mathematical documents than 2003, and once you learned the shortcuts (it even allows some TeX-like input), short reports came together much faster than in Emacs+AUCTeX or LyX while still looking generally good.
You can use a Vim emulation mode in Sublime, so all your hard work wasn't for nothing, but I believe some of the more advanced Vim features don't work in Sublime.
The advantage of Sublime is that it works well out of the box, and with Package Control it is very easy to install any package you may need.
If you decide to give it a try, try Sublime Text 2 with Package Control (I believe the evaluation is still free for Sublime Text 2).
It's not just the advanced features; even basic features are completely broken. Pressing undo several times can occasionally break out of undoing and start editing your document on your behalf, spewing a slew of "uuuuuu" into it, which (guess what) ruins any hope of reverting to the original document state.
GVim always used to work fine for me when I was developing a C# system; the awkward thing was that the naming conventions were broken for everything, so I had to go on a renaming binge.
You're confused about how the GPLv2 works. Genesis gives you the source code, and you are free to distribute or alter it if you'd like, but they don't have to distribute it for free.
Very cool; this could be what Svbtle should have been.
If you implement things like trending and subdomain user blogs (with things like domain hosting add-ons in the future), I could see it being a very useful blogging alternative.
Go with WordPress; even going with wordpress.com would be a good idea. Just pay the cheap fee to host your domain there (and maybe pay to go ad-free; the total would be about $40/year).
It is ridiculous to me that people view public web pages as something that shouldn't be archived; if anything, archiving provides illuminating snapshots of the state of the web at certain dates.
The archive.org team does follow robots.txt, and I believe they remove content retroactively, meaning that if you add a robots.txt to your site, it will delete the old content (which I think sucks).
> The archive.org team does follow robots.txt, and I believe they remove content retroactively, meaning that if you add a robots.txt to your site, it will delete the old content (which I think sucks).
Indeed, especially since most domain parking garbage sites seem to have robots.txt files for some crazy reason.
> Indeed, especially since most domain parking garbage sites seem to have robots.txt files for some crazy reason.
Presumably to avoid being plagued (in terms of load and bandwidth costs) by the numerous crawling bots looking to update their caches of pages that no longer exist on those domains.
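For what it's worth, the blanket robots.txt on a parked domain is usually just a couple of lines (a generic sketch, not taken from any particular parking service):

```
# Tell every compliant crawler to skip the entire site
User-agent: *
Disallow: /
```

A wildcard `Disallow: /` shuts out all well-behaved bots at once, which is also exactly the kind of directive that triggers archive.org's retroactive hiding of old snapshots.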
I've seen a CMS brought almost to its knees because the previous owner of that IP had a site with lots of distinct pages. Since every page in the CMS was stored in a DB, it took a DB lookup to find out whether an incoming URL existed or not. Caching/Varnish wouldn't help, because there were hundreds of thousands of different incoming URLs and none would ever be in the cache, since they don't exist.
About 20% of the hits to one site I look after are 404s because they're for the previous site hosted on that IP address. Luckily the vast majority of those URLs have a specific prefix, so a simple rule in the Apache config 404s them without having to go to disk to check for the existence of any files. It still counts against my bandwidth utilisation, though (both the incoming request and the outgoing 404).
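That kind of prefix rule can be a one-liner; a minimal sketch assuming Apache's mod_alias and a hypothetical `/oldsite/` prefix for the dead URLs:

```
# Return 404 immediately for anything under the old site's
# URL prefix, without touching the filesystem or the DB.
RedirectMatch 404 "^/oldsite/"
```

The same effect is possible with mod_rewrite (`RewriteRule ... - [R=404]`), but mod_alias keeps it to a single cheap regex check per request.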
>The archive.org team does follow robots.txt, and I believe they remove content retroactively, meaning that if you add a robots.txt to your site, it will delete the old content (which I think sucks).
Every time the "Change Facebook back to the way it was!" brigade came out, I would link to the Wayback Machine's copy of facebook.com from 2005 and say "Is this what you want??". Now I can't do that anymore because of a stupid robots.txt.
I hope they keep a backup of this old content. This robots.txt policy is crap; robots.txt should not be applied retroactively when the site's owner has changed.
They are kind of different beasts. Linode can spin servers up and down rather quickly, and you can deploy different OSes entirely through the online menu.
With Prgmr, you buy a machine and send an SSH key; they set it up and send you the bill. Then you get a login console at xxxxxx.prgmr.com, where you log in and set up a user, or deploy your own OS using CentOS recovery. But you can't create and destroy servers like on Linode, AWS, or Rackspace.
I've had a couple of Prgmr VPSes for about a year, and I'll say this: I've never had any interruptions, they are always fast when SSHed into, and they make good web servers. The only problem, and it's kind of a big one, is that support sucks. Resetting your SSH key (which I've stupidly needed to do a couple of times) takes about three days: you send them an email and they respond... eventually. But you get what you pay for, and it's hard to ask for much service when you're paying those low prices. For the price, the servers themselves are awesome.
I'm switching to Azure because their prices are reasonable and you get the full management experience.