It's interesting that Lumia sales are up by more than 10%, but revenue through phone sales decreased by 38%. That was primarily caused by Microsoft's refusal to sell any flagship Windows Phones after they purchased Nokia. I was personally affected by that decision — after my Lumia 1520 broke, I had to reluctantly upgrade to an LG G4 because there are no flagship Windows Phones for sale anymore.
I believe they covered this, and it's somewhat related to your flagship comment: they sold more of the cheaper Lumia variants (targeted at markets other than the US/your part of the world).
It's an unfortunate state of affairs when a modern browser has to spoof another browser just to get the right content. The user agent in Microsoft's new browser does this by default [1][2]. As a user, I'd love to see a day when browsers don't reveal their user agents and web developers rely on feature detection instead. I admit, though, that I haven't fully considered the collateral damage that might come with that.
Almost daily I have to spoof my User Agent to claim to be an iPad due to sites confidently proclaiming that Flash is """required""" to get at some audio/video content. What do you know, spoof User Agent, and their Flash """requirement""" instantly evaporates. Pet peeve.
Also, UA detection can happen before feature detection, which is very handy in some cases, such as redirection based on device. Feature detection also implies that there is some sort of client that runs the response, which is not always the case.
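For instance, a server-side routing decision based on the User-Agent header necessarily happens before any client-side feature detection could run. A minimal sketch of that idea (the path names and UA patterns here are assumptions for illustration, not any particular site's rules):

```javascript
// Decide where to route a request based on the User-Agent header alone.
// This runs on the server, before any client-side code executes.
function deviceRedirect(userAgent) {
  if (/Mobile|Android|iPhone|Windows Phone/.test(userAgent)) {
    return '/m/'; // send handheld devices to the mobile site
  }
  return '/'; // everyone else gets the desktop site
}
```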
Sniffing the User-Agent has been poor practice since 2011, in favour of feature detection using a library like Modernizr.
I used to advocate using something like Modernizr and then writing your code against the results, but now I think an even more straightforward approach is to do the feature detection directly in your code. There's no sense in loading a library and testing for things you don't need, only to have those tests totally decoupled from the parts of your codebase that depend on them.
It makes more sense to do only the feature detection you need right in your codebase adjacent to the code which relies on the results.
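As a sketch of what that looks like in practice: a single feature test written right next to the code that relies on it. The `saveDraft` function and the globals-as-parameters style are assumptions for illustration (passing `win` in keeps the sketch testable outside a browser):

```javascript
// Feature-detect localStorage right where it's used, instead of
// consulting a separate Modernizr-style results object elsewhere.
function supportsLocalStorage(win) {
  try {
    var key = '__feature_test__';
    win.localStorage.setItem(key, key);
    win.localStorage.removeItem(key);
    return true;
  } catch (e) {
    return false; // throws when storage is disabled or in some private modes
  }
}

// The code that depends on the result sits immediately adjacent.
function saveDraft(win, text) {
  if (supportsLocalStorage(win)) {
    win.localStorage.setItem('draft', text);
  }
  // otherwise fall back to, e.g., keeping the draft in memory
}
```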
> Sniffing User-Agent has been poor practice since 2011
I'm old enough to remember it was much earlier than that. In the early '00s there were already calls to do feature detection; jQuery was released in 2006 and it basically did it for you. In 2015, there is no excuse to do UA sniffing -- if anything, because we now have 20 years of case-history showing people will trivially spoof it.
Unfortunately, there are cases where it is still necessary. For example, IE 10 reports that it supports the CSS pointer-events property, but it only works on SVG elements, not HTML elements.
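The IE 10 case shows why a naive feature test isn't enough on its own. A hedged sketch (the style object and UA string are passed in to keep it testable; the `MSIE` pattern relies on IE 10 and earlier still announcing themselves as MSIE):

```javascript
// The naive test reports support in IE 10, even though pointer-events
// only takes effect on SVG elements there.
function naivePointerEventsTest(style) {
  return 'pointerEvents' in style; // true in IE 10, misleadingly
}

// So a UA check is still needed to rule out old IE for HTML elements.
function htmlPointerEventsUsable(style, userAgent) {
  var isOldIE = /MSIE /.test(userAgent); // IE 10 and earlier say "MSIE"
  return naivePointerEventsTest(style) && !isOldIE;
}
```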
When I went professional with my web development (2009), I can remember the in-the-know people actively advocating for feature detection, but it was still common practice even in 2010 to add IE-specific support using conditional comments to load stylesheets with fixes for a known IE version.
On the way to 2012, once Microsoft said they never intended to let future versions of IE announce themselves as IE, it became clear that the game was finally over for UA sniffing. The funniest part is that I still remember the MS fanboy blog post saying how wonderful it would be that future versions of IE would identify themselves as not-IE, and how this would make the world a better place. :P
You don't have to include the whole library. Modernizr allows you to customize the package to only detect for features you need.
While writing your own feature detection is probably still more lightweight (although not by much if you customize Modernizr correctly), you're not going to catch all the edge cases Modernizr does unless you spend A LOT of time on it.
I get the tradeoff. The problem is when you have 90 projects that customize the library for 90 different needs, and then you want to update those libraries. At that point, you've basically made 90 Modernizr forks. Does it make more sense to maintain 90 forks of a library whose codebase you aren't developing and try to keep them current, or to bake the parts of Modernizr you need (edge cases included) right into your codebase for each project, and then only have to maintain your own codebase instead of your codebase + a library fork?
If you're using the majority of Modernizr's tests, it makes sense to use the full library. But for the most part, the other devs I've talked to and I usually drop the whole library in for ease of maintenance, yet mainly use it for one feature alone: touchscreen detection.
Lately I've tried putting some of my HTML on a diet by replacing Modernizr, for the purpose of touchscreen detection, with this snippet:
if (('ontouchstart' in window) || (navigator.msMaxTouchPoints > 0)) {
  // code for touchscreens here…
}
This keeps the test and the result together in the code, and eliminates the need for a library, which can also be an additional single point of failure outside of your control, especially if hosted on a CDN.
It's worth noting that each custom build includes a commented line with a URL to download the latest version with the same custom set of detects. A great timesaver!
If you're "baking in" parts of Modernizr into your codebase, it seems to me you're still maintaining 90 forks, but it's more work and the provenance of the forked code is less clear.
Suppose you're baking a cake, but it's a custom cake and you're not 100% sure you have the best recipe for frosting.
Suppose Modernizr were a book, and each feature Modernizr tests for was a recipe. What I'm suggesting is this: instead of going to the library and checking out the entire recipe book just to refer to the frosting recipe, just copy out the list of ingredients and leave the book on the shelf. You're customizing the recipe anyway, so adding their list of ingredients to the steps you're writing for your own customized frosting will be easier in the future than writing your recipe steps with a little note that says "refer to ingredients on page 24 of the Modernizr Cookbook". I also think buying a personal copy of the book for the same purpose is overkill. It makes more sense to list the specific ingredients you want directly inside the new recipe you are writing and keep it all on the same page, and not have to worry about where that recipe book is every time you want to whip up a new batch of frosting.
How is generating a customized Modernizr build with just the tests you need equivalent to checking out an entire recipe book for one recipe? Either way, you are just getting the tests you need.
Testing for features over platforms was known to be a good idea since the early days of autoconf (1991-1992, according to the history in its info file). Why did the web folk take so long to figure this out?
For the most part it didn't matter until recently.
Netscape versus IE (late 90's) - features didn't matter, they just rendered HTML and CSS differently
Firefox versus IE (early 00's) - Firefox added a bunch of great CSS support and things like rounded corners and PNG transparency, so once we could use those we could just supply a polyfill for the specific IE version that needed it. Opera could handle it already, Netscape was Firefox rebranded. Only IE (which announced itself as IE) needed extra help
Mobile vs Desktop (late 00's) iPhone! Android! Tablets! Now is where things start to get a little crazy, IE will be IE but a bigger concern is the separation between tiny little touchscreen devices browsing a website, and a massive desktop computer
Mobile versus Mobile (early 10's) - IE never says it's IE, and we have smart watches, phones, phablets, tablets, netbooks, notebooks, and still desktops. There are Firefox and Chrome, which run on Mac, Windows, Linux, iOS, and Android; there's Safari, which runs on OS X, Windows (an old version), and iOS; there's IE, of which versions 8, 9, 10, 11 and the new ones are in circulation; and a handful of other browsers like the Android browser that kind of gave up a long time ago but are still used.
I'm sure backend software was rife with feature-detection for the OS's it ran on (Redhat versus CentOS, special support for IIS, etc) but until things exploded after the iPhone was released in 2007 the web had very predictable deficiencies that could be addressed more directly than feature detection.
I can remember, as a Linux user, that there were plenty of websites that would only let 'approved' User-Agents in, because they would rather you NOT see their site than see it in a browser they didn't support. When using Linux I often didn't have access to IE or Netscape, so I would use Firefox or Konqueror to spoof a different User-Agent. Nintendo.com used to be like this, among others.
1) It didn't take "web folks" long. Almost immediately after the browser became available as an application platform, the advice took hold among those willing to heed it.
2) The existence of autoconf et al. doesn't in any way prevent people from performing platform tests all over the place, so it's not a panacea.
Sorry if you find this a bit abrasively worded, I aimed to avoid that, but you made me bristly with your implication that "web folk" are somehow lesser.
Autotools don't only check for features; they also check for bugs. For example, I know that Firefox's SVG support is buggy and I need to use the Canvas version of OpenLayers for that browser. How am I supposed to do that without UA sniffing?
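A hedged sketch of that UA branch: choose the Canvas renderer for Firefox to work around its buggy SVG support. The renderer names mirror OpenLayers conventions, but the actual API wiring is left out; only the UA sniff itself is shown:

```javascript
// Pick a map renderer based on the UA string, since the SVG bug
// can't be distinguished by a feature test (the feature "exists").
function pickRenderer(userAgent) {
  if (/Firefox\//.test(userAgent)) {
    return 'Canvas'; // avoid the buggy SVG path in Firefox
  }
  return 'SVG'; // default for other browsers
}
```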
Unfortunately, for various reasons it's often impossible to use feature detection.
2 examples:
1) There's currently no way to reliably check whether a browser supports device orientation, perhaps because of bugs or incomplete implementations.
2) It's impossible to tell when iOS 8.x adds the browser chrome around the page in landscape, removing nearly 1/3 of the entire viewable area on an iPhone 5S.
In Chrome, event.pageX/pageY refer to the position of the top left corner of the drag-preview-image, relative to the top left corner of the page.
In Safari, event.pageX/pageY refer to the position of the mouse cursor relative to (0, window.innerHeight * 2 - window.outerHeight), a point slightly above the bottom left corner of the screen.
(Neither of these is spec, which as far as I know says that event.pageX/pageY should be the position of the mouse cursor, relative to the top left corner of the page.)
I eventually got around it by storing values from the `drop` event, but anyone who needed these values from the `dragend` event would be screwed.
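A hedged sketch of that workaround: record pageX/pageY during `drop` (where the browsers agree with each other) and ignore the inconsistent coordinates that `dragend` reports in Chrome and Safari. The handler names are illustrative:

```javascript
// Capture the coordinates in `drop`, where they're reliable...
var lastDropPosition = null;

function onDrop(event) {
  lastDropPosition = { x: event.pageX, y: event.pageY };
}

// ...and read the stored values in `dragend`, where they're not.
function onDragEnd() {
  // Don't trust event.pageX/pageY here; use the saved drop coordinates.
  return lastDropPosition;
}
```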
My next most recent problem unsolvable by feature detection has to do with HTML5 Notification, whose API was massively changed recently. Attempting to use the new API on old versions of Chrome would cause the render process to crash (not just throw an error you could catch in a try-block, but actually crash, like http://i.stack.imgur.com/DjdCX.png ). Of course, using the old API on new versions of Chrome would fail silently, so there was zero way to reliably deliver a notification to the latest version of Chrome without crashing older versions or using user agent detection.
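Since a try-block can't catch a renderer crash, the only safe branch is on the UA string. A hedged sketch; the cutoff version (22) is an assumption for illustration, not a verified boundary:

```javascript
// Parse the Chrome major version out of the UA string.
function chromeMajorVersion(userAgent) {
  var m = /Chrome\/(\d+)/.exec(userAgent);
  return m ? parseInt(m[1], 10) : null;
}

// Only use the new Notification API on versions known to support it,
// since probing it on older builds crashed the render process.
function useNewNotificationAPI(userAgent) {
  var v = chromeMajorVersion(userAgent);
  return v !== null && v >= 22; // assumed cutoff, for illustration
}
```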
Agreed re feature detection. This could lead to an uncomfortable world for devs where the UA becomes completely unreliable, which may force a migration to feature detection.
I believe that's referring to Xamarin and Cordova, which are now built into VS2015. Xamarin still requires a Xamarin subscription to compile and deploy the apps, though.
On the other hand, they might also be talking about the new Universal Windows Platform bridges [1], which let you port your Android, iOS, Win32 and Web apps to native Windows apps.
Personally, I switched over to Windows Phone from a Nexus 4 after I bought a generic Windows 8 tablet and started using the live tiles. In my opinion, compared to the iOS and Android "grid of icons", live tiles are just superior in every way. I'm honestly surprised that neither Google nor Apple have copied Microsoft here.
I'm using a brand new LG G4 as my daily driver, and I own two Nexus 7s. You're going to have a hard time convincing me that the Android widgets I can set up on my G4 [1] compare in any way to the live tiles on a Windows Phone [2]. I recognize that this is all subjective, though.
Again, this is all subjective, but I'll try to explain why I prefer live tiles over Android's widgets.
1. Android widgets are huge; I can fit maybe 4-5 widgets on one page of the Android desktop. Beyond that, they become so scrunched that you can't get any information out of them. This is compounded by the fact that some widgets only allow certain sizes unless you download a 3rd-party launcher.
2. Widget design is largely up to the app developer, and a lot of widgets are ugly (subjective) or at least don't match the other widgets you have them grouped with.
3. Widgets try to be interactive, and I can't tell you how frustrated it makes me when I'm scrolling across the desktops to look at widgets and I accidentally cross an item off of a todo list, or scroll through my list of emails instead of continuing on to the next desktop.
To me, the Android (and iPhone) home screens feel lifeless and dead when compared to a Windows Phone. On WP, my live tiles are always flipping around and displaying the latest emails, tweets, facebook posts, news articles, pictures from my camera roll, etc. The Windows Phone start screen displays all of that information to me at a glance, and I can pick and choose what to do next based on the summary that I see on my start screen. On Android, though, I have to actively search for that information by going from app to app.
I think there are ways to solve all of your issues on android, including using a 3rd party home screen; but at the end of the day - these are not products that come down to measured specs against measured specs, these are products we use as an intimate part of our daily lives and as a result are subject to purely subjective factors being a core part of the decision making process.
I appreciate your opinion, it has helped me further understand why using subjective reasoning as a driving factor for technology purchases is no longer a sign of the uninformed/uninitiated, and is now instead (rightfully) simply a personal preference.
For those that are curious about this, sudioStudio64 is talking about "Project Westminster", which is one of the four "Universal Windows Platform bridges". The other three let you essentially port Android, iOS and Win32 apps to the new UWP platform and app store.
> MS publishes new builds like biweekly through MSDN.
While this is supposed to be true, there hasn't been a new public build in 50-something days. Most of the new information is coming from leaked internal builds.
... as in, posted by some Microsoft employee under the direction of Microsoft management, in a "we don't want to write up a change doc or support info, nor answer questions about this build, and/or we want to test the waters for a possibly controversial feature without officially claiming the build" move.
"Leaked", in the same sense as anonymous Government Officials "leak" important news to the press to test waters before the administration officially backs an issue, or big manufacturers "leak" pictures of new products to the press to create buzz before they do an unveiling... one would think the world was made of holes.
Why do we call all these things "leaks" when they so definitely are not? They are "clickbait", plain and simple.
They have proper release notes of course. Maybe talking about a feature violates some NDA, I'm not sure, but they are not leaked in the sense that the build itself leaked from Microsoft.
You mean public previews, which are meant for end users and press, but the development builds are being released much more often. Latest is less than a week old.
I had asked the founder about a Windows Phone client a couple of weeks ago and at the time he had said a WP client was 'very likely'. Not sure which is more recent, but I would also like to see an Authy client for Windows Phone.