Hacker News | jumperjake's comments

I completely support IPFS (especially with them keeping content hosting explicitly out of scope). The problem I faced when trying to leverage IPFS for a project is that the reference implementation is very difficult to use from languages other than Go, and to embed completely in applications (as in statically linking sqlite).


Yeah, I like Go, but the reference implementation should have been a C library (or similar). There are splinter projects trying to implement it for other languages, but they're either already dead or moving at a snail's pace.

I love the idea, but that decision leaves me with little confidence that it'll catch on.


I guess it just needs time to mature: more languages and better UX. If IPFS finally piggybacks on the Ethereum ecosystem, that will accelerate this considerably...


There already are efforts to address this with projects like IPFS. However, without mass adoption, it seems very unlikely that an alternative internet would emerge before the increasingly-likely collapse of the open internet.


Somewhat agreed. However, there is enough repression to provide incentive for semi-shady people to work on these things. Unfortunately piracy seems to be the biggest incentive for secret communications, and while that is indeed productive, it incentivizes high-bandwidth torrents for large files, which is not exactly in line with the most crucial needs (although of course there are important uses for distribution of videos and photos). An interplanetary low-bandwidth Usenet might be the most crucial necessity for the upcoming dystopian future. So I suggest we start encouraging the spread of subversive poetry and politically uncomfortable discussions, to incentivize such a grassroots network.


You're best off figuring out other avenues to do this - HN is preaching to the choir.


You need a choir to make a chorus.


> However, without mass adoption, it seems very unlikely that an alternative internet would emerge before the increasingly-likely collapse of the open internet.

I am not sure what you mean by mass adoption. Sure, there has to be a critical mass for it to work. But I don't think that everyone and their dog should use it.

Part of what made "the good old days" of the web so awesome was that you needed to put in some hours to actually use it. Thanks to this, it wasn't that interesting for people who didn't want to contribute (somewhat) meaningfully.


It doesn't necessarily apply to IPFS, but the increasing amount of "mobile only" users significantly limits what sort of input people can provide.


You should tell that to Kimble; he's currently determined to launch MegaNet, using mobiles to store data in a p2p network while they're idle.


Right, and it's not a bad use of mobile devices, but my point is that those users can't contribute anywhere near as much input. If everyone were to go in that direction, the written word, not to mention programming, would take a huge blow.


Am I the only one who prefers the current internet over IPFS? IPFS exposes my IP to anyone who wants to know which websites I'm visiting. As seen with BitTorrent, that power gets abused significantly. I would prefer an IPFS+Usenet solution, separating the nodes from the clients. Even better would be the ability to connect exclusively to nodes in [any country here], so I could feel comfortable knowing my traffic stays within borders whose privacy laws I trust.


I believe that the underlying transport mechanism is not part of IPFS; you can use IPFS over Tor to ensure privacy.

Also, you can run nodes on other systems. This is effectively what `https://ipfs.io` is.


This might be good news for open source: no competent government will trust classified information to software it can't audit.


As a workaround, I use the Google cache or the Wayback Machine when viewing sites that have issues with my strict HTTPS-only policy. I expect it's also a viable workaround for Tor users.

DuckDuckGo nicely lists these options if you search for the full URL, too.


The same parties that hosted the old usenets and mailing lists; the users themselves.


You're confusing redistribution with scraping. Scraping publicly-accessible content is legal; redistributing it is not (at least not in the US).

Google scrapes webpages as a core competency.


Google also adheres to the robots.txt standard. Most of the scrapers I block don't.


Not correct. Google will completely ignore the rules in robots.txt if it deems that acceptable. I think there's a link about this somewhere in this comment page.


Correct, they do not index the content, but they might still add the URL. If you have a meta noindex present, they won't index even the URL.


IANAL, but copyright governs redistribution of content, not consumption (that's what pirates get busted for). I also recall a ruling that footer TOSs aren't enforceable unless the user actively and explicitly agrees to them.

I agree with the GP in that public content is fair game. How do you think Google works?


Google technically respects robots.txt and noindex meta tags. OP is arguing the ethics of scraping, not whether people are ignoring bot meta tags.

Copyright governs how the content is used, including distribution. The reason people who download videos are not liable is that you have to download the complete content before you can see the copyright notice. File sharers have already downloaded the content and are therefore subject to copyright. Bots that scrape can interpret the meta tags in the document head, which is why scraping in violation of those tags is unethical.


I don't understand. If I produce a map image with all the details I'm interested in, publish it, then use OpenCV to extract the data from that image into a database, would I be free to license the resulting database as I wish?


I don't know. Ask a lawyer or judge. You can read the licence http://opendatacommons.org/licenses/odbl/1.0/ and look at the definitions of "Produced work" (a map) or "Derivative Database".

In practice, no-one's really done that, or is likely to. Either you'd do something silly like make the map an SVG with all data encoded as textual attributes, so your "Computer Vision Algorithm" is basically grep (in which case it would probably be seen as a Derivative Database), or you'd do real CV on a real image, which is very hard and will give poor results. It's sufficiently hard that no-one's worried about it.

If you really don't like the OSM licence, you are free to go to another map data provider, pay them what they charge and agree to whatever they want, and get something else. If you want OSM, agree to OSM's terms.


To me, the biggest improvement Rust brings to the table is its sane defaults. This, coupled with its type system, makes handling error outcomes something you have to opt out of, not into.

As I continue to use Rust, I keep finding myself avoiding `if` statements in favor of `match` statements for this reason alone.
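A minimal sketch (not from the thread; `parse_port` is a hypothetical helper) of what those sane defaults look like in practice: `match` on a `Result` forces both outcomes to be handled, so ignoring the error case is a compile error rather than a silent default.

```rust
use std::num::ParseIntError;

// Parsing returns a Result; the caller cannot quietly discard the error.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // `match` makes both arms explicit; omitting the Err arm
    // would be rejected by the compiler.
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port),
        Err(e) => eprintln!("bad port: {}", e),
    }
}
```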


And then you start pipelining all the match statements... and Rust almost starts to look functional.
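A sketch of that "pipelined" style (the function and its error strings are invented for illustration): instead of nesting `match` expressions, you chain `Result` combinators like `map_err` and `and_then`, which reads much like a functional pipeline.

```rust
// Parse a number, doubling it, with each failure mode mapped
// into a String error as the value flows through the chain.
fn doubled(s: &str) -> Result<u32, String> {
    s.trim()
        .parse::<u32>()
        .map_err(|e| format!("parse failed: {}", e))
        .and_then(|n| n.checked_mul(2).ok_or_else(|| "overflow".to_string()))
}

fn main() {
    match doubled(" 21 ") {
        Ok(n) => println!("result: {}", n),
        Err(e) => eprintln!("error: {}", e),
    }
}
```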


Any chance of providing an API that is friendlier to use from other languages? Java is not the easiest thing to work with.

Also, does this library work on systems other than Android? I've noticed `android` appearing in multiple places, along with Google-specific APIs (`GoogleCloudMessaging`).

