"Diffing the commits" isn't really available in a GitHub-style Pull Request web UI, which is where 99% of our code review happens. I'm definitely optimizing for that view of the merge over everything else.
I love how GitHub fosters discovery and remote collaboration, though one of its liabilities is that great git command-line features are effectively lost unless GitHub re-implements or exposes them, because prevailing conventions incentivize doing only what GitHub itself can do.
Bluetooth used to be common until vendors started implementing their own file-sharing or just focused on sharing files in the "cloud", which is really inconvenient when there's no internet connection. Even when there is a connection, it's really inefficient, bandwidth-wise, to send a single file over the internet to someone right beside you.
Maybe we should push a standard for local file-sharing so that all vendors will implement it in their mobile operating systems.
We need to focus more on mesh Wi-Fi networking. If our phones could connect directly (and they can) using Wi-Fi in a mesh, it would be quite easy to write network clients to share data between them.
FireChat (http://www.opengarden.com/firechat.html) tried to do mesh network messaging, but security is hard to get right in mesh networks, since data passes through other clients along the way. Tor, for example, is great for anonymizing end users, but it's not great for confidentiality - I wouldn't use it to send confidential information such as credit card numbers.
What about a rings-of-trust system? Tap your phone against one belonging to someone you know, and your devices know to trust each other. Add levels of trust on top of that, so apps can act accordingly. Also, make it so messages can be stored until a peer-to-peer handoff or a relay over the internet is possible.
Yeah, and it's good when it works, but I was talking more about a true mesh, where the message is passed between phones directly until it finds a path to the net or its destination.
The problem is that there are lots of standards for local file sharing, and vendors all choose different standards. The answer to this problem is unlikely to be creating a new standard (insert obligatory XKCD link here).
Most of the time, I just clean up my desktop and reorganize files. Also, at least once a month, I simply navigate in the finder and delete old, barely used and unneeded files and apps.
If I find my Mac to be significantly slow, I back up only the files and apps I need, and do a clean install of macOS.
Developers can now just upload one set of screenshots for an app. Optionally, they can also upload one app preview per device family. This will definitely make the release process of apps a lot faster.
None of the effort in adding C# and Xamarin support distracted anyone from getting the core product to 1.0. We are an entirely separate team hired specifically for this project. If you follow the Java, Swift or Objective-C products you will have seen a steady set of releases in the last year. I can't comment on how close we are to shipping 1.0.
I'd like to think that the C# team has also contributed by setting a high standard for API usability and ease of use, but we're standing on the shoulders of the giants who wrote LINQ and Fody (thanks, Simon Cropp).
Concurrency is done with the libdispatch library, which is also open source. Right now, it's only available on OS X, but in their post, they said that "For Linux, Swift 3 will also be the first release to contain the Swift Core Libraries."
Their online tech, bundled apps, and the Aqua GUI style don't need to be opened up for macOS* itself + the Kits + Finder to be open sourced.
If people could reliably and legally install it on any PC they want it could still cut into Windows' share a lot more than it currently can.
It's not hard to imagine that before long, enterprising people will release custom "distros" of it, say with an up-to-date OpenGL, or even a Wine/DirectX emulation layer baked in so we can just double-click on any .exe and have it run natively.
* As I'm assuming/hoping it's going to be called starting June 13. They could open source "OS X" while keeping the "macOS" brandname for themselves.
My point is that Apple has no interest in doing any of that.
> If people could reliably and legally install it on any PC they want it could still cut into Windows' share a lot more than it currently can.
But Apple would lose a huge amount of money on hardware sales, which is where they make their money. Apple even tried an approved-clones program in the '90s; it was a miserable failure, and one of the first things Jobs did on his return was kill it.
> It's not hard to imagine that before long, enterprising people will release custom "distros" of it
Which Apple really wouldn't want. One of the selling points of OS X is the lack of variation in both software and hardware.
You may be right, I'm not sure. Personally, I would only want it for my custom desktop. As for laptops, I've looked at other vendors, and as nice as the new XPS 15 looks - and it has a Type-C connector that could be used for an external GPU - I can't help but remember the XPS 15 I used as my main machine for a year, during which the Wi-Fi became hosed and the battery needed replacing. Forget that. The MBP I bought after that laptop is the only laptop I've had any real confidence in.
As such, I'm waiting for a new 15" MBP (my current is a 13") with Type-C connector(s), and then I'll upgrade.
OS X being available on non-Apple hardware would just mean that my desktop and laptop could play nicely together.
I completely agree with all your points, but to be fair, iOS dominates the desktop Mac in terms of revenue to begin with ($51B iPhone + $7B iPad vs. $7B Mac). That's reflected in the fact that Apple continues to open source XNU on the desktop (granted, in a manner that's nowhere near complete or open-process) but not on mobile. Apple has historically been more willing to open source things they don't make money on (e.g. LLVM and Swift). Seven billion dollars is a far cry from zero revenue, but it's an interesting trend nonetheless.
Many people who like Macs would continue to buy Macs even if OS X were freely available on other PCs, and OS X would continue to indirectly generate revenue for Apple even after being open-sourced:
For one, many more people would have access to the Mac App Store and the iBooks Store, leading to increased sales for apps like Final Cut and Logic. There'd be many more potential customers for iCloud Drive storage plans. Last but not least, it would drastically lower the barriers to iOS/tvOS/watchOS development, as people would be able to develop on any PC they want, not to mention it would increase the pool of people making native Mac apps as well.
Apple could still differentiate Macs through their hardware - things like pressure-sensitive trackpads and form factor - and by keeping bundled apps (like Photos) closed-source and exclusive to Macs.
But it's not open source in the way Swift is. Apple doesn't have a public git repo or anything for Darwin - in essence, you only get the "tags". They don't take pull requests, you need a Dev account to post bug reports, etc.
It feels a bit dangerous to claim that the only True Open Source projects are the ones hosted on Github and Github alone. Just because the source code isn't released using your favourite version control system doesn't mean it's not "open-source." The code is available and yours to use.
I agree that Apple's darwin release doesn't have much of an open-source community around it.
IMHO ++ is trivially readable, but that's only because I happen to be familiar with C/C++'s pre/post-increment idiom - an idiom which is well known for causing confusion and bugs, especially, as you say, for devs new to the language. For me the compelling argument for removing them is that they're a C-ism and don't fit the semantics of any of the other operators. Chris Lattner strongly agrees with you and explains his reasons here: https://github.com/apple/swift-evolution/blob/master/proposa...
A beginner should be made to learn something only if they get an advantage out of it in readability or type safety or performance or something else. I don't see that being the case for pre- and post-increment. If anything, I find:
i += 1;
String name = names[i];
more readable than
String name = names[++i];
where I have to pause and remind myself of pre- and post- increment and which one is being used here and mentally translate the code into the above version.
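That pause is justified: pre- and post-increment evaluate to different values, so the two spellings index different elements. A minimal sketch in JavaScript, which shares C's increment semantics (the `names` array is hypothetical):

```javascript
// Hypothetical data, just for illustration.
const names = ["alice", "bob", "carol"];

// Post-increment evaluates to the OLD value, then bumps i.
let i = 0;
const post = names[i++]; // reads names[0] → "alice"; i is now 1

// Pre-increment bumps j first, then evaluates to the NEW value.
let j = 0;
const pre = names[++j]; // reads names[1] → "bob"; j is now 1

// The split form reads the same regardless of which you meant:
let k = 0;
k += 1;
const split = names[k]; // reads names[1] → "bob"
```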
Imperative languages already have a way of specifying execution order: it's the order of statements in your file. Let's reuse that rather than making things more complex.
I don't see a difference in functionality — both code snippets I gave do exactly the same thing.
As for brevity, I think clarity is more important. We should optimise for the time it takes to read and understand the code (clarity), not just to read it (brevity).
The best abstractions and programming language features are both brief and clear, like Python's list comprehensions. I find
[name.upper() for name in names if name.startswith("a")]
to be clearer than Java's
List<String> uppercaseNames = new ArrayList<>();
for (String name: names) {
if (name.startsWith("a")) uppercaseNames.add(name.toUpperCase());
}
So, the best programming language features enhance both brevity and clarity. When that's not possible, I'll take slightly longer but clearer code over short but confusing one.
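For comparison, chained `filter`/`map` in JavaScript gives a similar brief-but-clear shape (the `names` values here are made up):

```javascript
const names = ["alice", "bob", "anna"];

// Same shape as the comprehension: filter first, then transform.
const uppercaseNames = names
  .filter(name => name.startsWith("a"))
  .map(name => name.toUpperCase());
// → ["ALICE", "ANNA"]
```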
I've seen ++i vs i++ cause so many bugs that whenever I see it, I stop to ponder whether the author got it right. It's like in JavaScript, where I stop to check whether the author intended `if (x)` to take the false path when x is "" or 0.
It's death by a thousand papercuts for my mental load.
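For concreteness, here is that JavaScript papercut sketched out: `if (x)` takes the false branch for every falsy value, not just `null`/`undefined` (the values below are illustrative):

```javascript
// Every value here is falsy in JavaScript, so `if (x)` skips its body.
const falsy = [false, 0, -0, "", null, undefined, NaN];
const allFalsy = falsy.every(x => !x); // true

// "" and 0 are often legitimate values, which is why a bare `if (x)`
// guard can take the false path when the author only meant "missing".
const x = "";
const isMissing = x === null || x === undefined; // false: "" is present, just empty
```

An explicit null/undefined check (or `x == null`, which covers both) avoids treating empty strings and zero as missing.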