
A more interesting definition is that a server is something asymmetric to a client. Peer-to-peer nodes, which are symmetric to each other, are something special.


Not really. Each peer is running a client and server and the only difference is the role of each node in a given transaction.
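
To make that concrete: a single peer process typically runs both roles at once. A minimal sketch in Go (the address and port are placeholders):

    // One "peer" process playing both roles: it listens like a server
    // and dials out like a client.
    package main

    import (
        "bufio"
        "fmt"
        "net"
    )

    func main() {
        // Server role: accept inbound connections from other peers.
        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            panic(err)
        }
        go func() {
            for {
                conn, err := ln.Accept()
                if err != nil {
                    return
                }
                go func(c net.Conn) {
                    defer c.Close()
                    msg, _ := bufio.NewReader(c).ReadString('\n')
                    fmt.Print("received: ", msg)
                }(conn)
            }
        }()

        // Client role: initiate a transaction with another peer.
        if conn, err := net.Dial("tcp", "other-peer.example:9000"); err == nil {
            fmt.Fprintln(conn, "hello from a peer")
            conn.Close()
        }

        select {} // keep serving
    }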


While you can frame peer-to-peer interactions as a series of server-client interactions with servers and clients swapping roles... you shouldn't. It's a different paradigm with different implications.

Individual transactions might be usefully described as server-client, but the overall system is not server based. So...serverless.


In all cases, one node must send the first packet. The node sending the packet is the client, and the destination of the packet is the server.

Also, the internet is a peer-to-peer system. But "peer-to-peer" is an abstract paradigm, because ultimately, (a) two peers need to know about each other, and (b) one peer needs to initiate every transaction.

I realize we're just arguing semantics here, and I'm not sure what point I'm trying to make, but it's an interesting discussion nonetheless...


> In all cases, one node must send the first packet. The node sending the packet is the client, and the destination of the packet is the server.

I'm pretty sure you are just making up new definitions. It's kinda like saying that whoever says the first word in class is the teacher and the rest are students.

The fact is that both peers send packets and the first packet isn't particularly significant, except from a stateful connection or firewall perspective.


Well, let's break out Wikipedia [0]

> Clients therefore initiate communication sessions with servers which await incoming requests.

> Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.

[0] https://en.m.wikipedia.org/wiki/Client–server_model


> Clients therefore initiate communication sessions with servers which await incoming requests.

Dogs walk on four feet, therefore anything that walks on four feet is a dog?

> Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.

Dogs are canids, that doesn't mean all canids are dogs.

The client-server model really entails more than having one party that sends a packet before the other. Some illuminating details and a shallow comparison to the peer-to-peer architecture can be found in the article you linked.



Can you expand on "I still cannot get out of the package management mess"?

I've used the community dep project for a while, which was okay. I switched to the GO111MODULE experiment and it's more than okay.
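
For anyone who hasn't tried it: the experiment mostly boils down to setting GO111MODULE=on and running go mod init; after that, go build and go get maintain the pinned versions for you. A minimal sketch of the resulting go.mod (the module path and version below are placeholders):

    // go.mod -- created by `GO111MODULE=on go mod init example.com/myproject`
    // and then updated automatically by `go build` / `go get`.
    // (module path and version are placeholders)

    module example.com/myproject

    require k8s.io/client-go v8.0.0+incompatible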


Try adding a dependency on a big project like client-go or the Docker client; it always gets messy for me.



Headquarter location is kind of an arbitrary line to draw, especially with poor data. Many of these companies are fully remote or have multiple large offices.

Just off the top of my head: PagerDuty has a large office in Toronto. GitLab and Zapier are very remote-first focused, and I'm sure there are a dozen more on that list.

The problem is that almost all of these companies started out in the Bay Area when they did YC there. It's hard to find up-to-date information about how SF-centric they remain now that they've matured, but it's easy to assume that they are by default.

This feels as useful as talking about incorporation state (so much Delaware!). I'd love to see more data about how distributed companies are these days vs being monolithic offices in SF.


It's not completely arbitrary. Where the HQ is usually shows where the company got started and experienced its initial growth. Also, while a company may have multiple remote offices, most execs will be in HQ and that can have a huge impact on the culture of the company. A Google office in Japan may feel more like a Silicon Valley company than a Japanese company's office in Silicon Valley.


I do believe that, but this particular data set will demonstrate very little evidence of it, since the most likely reason Y Combinator companies seem to be centered around SV is that YC itself is there; the data is biased beyond usability in this manner.


Does YC incorporate the companies in CA or Delaware?


> They may keep up to date with patches, but you have very little knowledge or control of what they collect from you and what they do with it.

It's not an ominous mystery. Google is extremely explicit about what they collect from you and what they do with it.

https://myaccount.google.com/privacy

https://policies.google.com/privacy

I have not seen any evidence that they violate their own policies, even when I worked there a while ago and had internal knowledge.


> I have not seen any evidence that they violate their own policies, even when I worked there a while ago and had internal knowledge.

Whether Google violates their policies today is the wrong question to ask. Nothing about these policies is long-term legally binding for Google and they can be changed on a whim.

While Google includes this language:

> We will not reduce your rights under this Privacy Policy without your explicit consent.

I'm not sure that covers them increasing their own rights to collect, share, and sell data.

Remember - nothing lasts forever. One day Google will be in a financially desperate situation and their investors will demand that they do anything they can to stop the losses. Meanwhile they will have a valuable trove of data on millions of people.

This is not just hypothetical. When Google decided that Google+ was a priority and only real names should be allowed, many were forced to de-anonymize formerly anonymous YouTube and Gmail profiles or be removed from the service.

The only real way to assure the security and privacy of data is to not collect it. The only way to ensure that the likes of Google/Apple/Facebook won't collect the data is through legislation that gives privacy policies real teeth when they're violated and gives users the power to reject changes to these policies in whole or in part.


Okay, so that's the outward intent. The practice is a little different. If they get hacked or an employee does misuse data, the public will probably not find out. Most of the company probably doesn't even know.

E.g., last week: https://www.cnbc.com/2018/10/08/google-reportedly-exposed-pr...


If you were an engineer working at Google on one of the services that handles, say, location data from phones, how difficult would it be for you to go into the environment and find a specific person's location history? Also, what logging or other audit trail is there for that access?


Google has amazing controls and audit capabilities around access to customer data. When I worked on the security team there the number of people who could access a specific person's data without an audit record and an alert being triggered was zero.


+1 to what arkem said.

If I had to trust a company with private data, there is no other company I would trust more to keep it safe from rogue employees and accidental leaks/hacks.


Is that for someone going through the user interface, or is it a fundamental feature of the database (or whatever)? In other words, is there no case where someone could log into a server and see some PII in a debugger or a direct query without being detected?


This is both for people using tools and people accessing servers and databases directly.

Logging into production servers is audited and triggers alarms. There's basically no one who has "root" level access to a large number of boxes (when I left in 2013 there were only a handful of people who could log in to arbitrary boxes, and systems were being built so that their access would no longer be necessary). Logging into a server that holds live data would be investigated, and so would running a custom query against a production database. The goal was to make it basically impossible for an engineer or admin to directly access data on boxes, to force people to use the tools.

The tools themselves had a great permission system, as well as a way for users to elevate their permissions in an emergency (triggering an investigation). It worked well because it was also easy to create dummy databases to develop on (for example, by requesting a database extract of your own location data).
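
To illustrate the pattern (purely hypothetical code, nothing like Google's real API): every read path goes through a wrapper that records who accessed whose data and why before anything is returned, and break-glass elevation succeeds but flags itself for review.

    // Hypothetical sketch of an audited-access wrapper, illustrating
    // the pattern described above -- none of this is Google's real code.
    package audit

    import (
        "errors"
        "log"
        "time"
    )

    type Request struct {
        Employee  string
        UserID    string
        Reason    string // e.g. a ticket number; required
        Emergency bool   // break-glass permission elevation
    }

    func FetchUserData(req Request, fetch func(userID string) ([]byte, error)) ([]byte, error) {
        if req.Reason == "" {
            return nil, errors.New("access denied: a documented reason is required")
        }
        // Every access is recorded before any data is returned.
        log.Printf("AUDIT %s employee=%s user=%s reason=%q emergency=%v",
            time.Now().UTC().Format(time.RFC3339),
            req.Employee, req.UserID, req.Reason, req.Emergency)
        if req.Emergency {
            // Elevation works, but automatically opens an investigation.
            log.Printf("ALERT: emergency elevation by %s, investigation triggered", req.Employee)
        }
        return fetch(req.UserID)
    }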

In my career to date I have yet to see a more privacy conscious / secure approach to handling customer data.


From the Google people I've talked to, the easiest way to get fired at Google is to inappropriately gain access to user data.

And yes, there is an audit trail on accessing that stuff.


FWIW, the ex-googlers I've asked about this claim it would be very hard for an individual to do this surreptitiously. One even claimed that Larry Page would probably get caught if he tried.


I wonder about Facebook also, in terms of access/audit trails.


We know from the Snowden leaks that there were direct data links between Google and the NSA. Despite their vehement denials and public outrage, I still find it hard to believe that it was possible for the NSA to install such massive surveillance without some complicity from Google.

Technically this might have been possible without any Google involvement, I agree with that, but given the past involvement of other companies (e.g. AT&T) with the NSA, this seems kind of unlikely to me. It just seems more credible to assume that some people in the higher ranks of Google willfully complied, and I wouldn't be surprised if something similar still occurred.


Nothing in Snowden's leaked documents contradicted Google's statements. The New York Times reported the setup correctly from the start, and the rest of the news media picked up on it shortly thereafter. https://www.cnet.com/news/no-evidence-of-nsas-direct-access-...

Your understanding of PRISM matches Greenwald's incorrect reporting, which was based on a high school dropout's misreading of some slides he found on the SharePoint system he administered. Greenwald could have gotten the story correct if he had bothered to run the documents by an expert first, but instead he made ridiculous errors like thinking that DITU is a government system running inside the companies' networks instead of the FBI's Data Intercept Technology Unit, whose court-ordered wiretaps PRISM actually accesses.


Then again, Google's public statements might not be the best information source to check whether Google did something nefarious or not.


This is super cool, great work! It's the second (mostly-)language-native implementation[1] outside of a browser that I'm aware of; both started this year, despite the WebRTC spec having been around for about 5 years.

It's really important for libraries like this to exist so more people can build apps without binding to massive hard-to-compile browser codebases. Thank you for working on it!

Also kudos for going asyncio and Python 3 right from the gate.

--

[1] At least semi-functioning, with DataChannel support. Lots of attempts have been started that never got this far. The other implementation is in Go: https://github.com/pions/webrtc
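
For a taste of what "language-native" buys you, here's roughly what a DataChannel looks like in the Go implementation. (A sketch based on a later pion/webrtc release's API; names may differ in the version linked above, and signaling is omitted entirely.)

    package main

    import (
        "fmt"

        "github.com/pion/webrtc/v3"
    )

    func main() {
        // A full WebRTC peer without any browser codebase behind it.
        pc, err := webrtc.NewPeerConnection(webrtc.Configuration{})
        if err != nil {
            panic(err)
        }
        defer pc.Close()

        dc, err := pc.CreateDataChannel("chat", nil)
        if err != nil {
            panic(err)
        }
        dc.OnOpen(func() {
            _ = dc.SendText("hello from native Go")
        })
        dc.OnMessage(func(msg webrtc.DataChannelMessage) {
            fmt.Printf("got: %s\n", string(msg.Data))
        })

        // Real code would exchange an SDP offer/answer with the remote
        // peer here -- the signaling step the spec leaves up to you.
        select {}
    }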


Is this one of the first examples of an IPFS-based API? This is very cool.

I like that you can take something that is effectively on your disk, ping some endpoint, and have it get "pulled" and processed by the service as a peer.

Next step could be to make the "ping" step an IPFS pubsub thing.


No, I don't think it's the first example. https://www.eternum.io/ was one that came before, but I'm sure there are others.

We keep a list of some applications and other things over here too: https://awesome.ipfs.io/


"One of" :)

I'd argue Eternum is not quite at the same level, since it just takes an IPFS object as an input (but produces no IPFS output). The OP made a thing that takes an IPFS object as an input and produces a different IPFS object as an output.

There are certainly many IPFS-based apps.
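
The basic shape of such a service is tiny: cat the input object from the network, transform it, add the result back, return the new hash. A sketch using the go-ipfs-api client against a local daemon (the CID and the transformation are placeholders):

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "strings"

        shell "github.com/ipfs/go-ipfs-api"
    )

    func main() {
        sh := shell.NewShell("localhost:5001")

        // Pull the input object from the network by hash.
        rc, err := sh.Cat("QmInputHashGoesHere")
        if err != nil {
            panic(err)
        }
        defer rc.Close()
        in, _ := ioutil.ReadAll(rc)

        // "Process" it; uppercasing stands in for real work.
        out := strings.ToUpper(string(in))

        // Publish the result as a new object and hand back its hash.
        newHash, err := sh.Add(bytes.NewBufferString(out))
        if err != nil {
            panic(err)
        }
        fmt.Println("output object:", newHash)
    }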


I would agree with this; Eternum isn't so much an IPFS-based API as an API for helping with IPFS pinning.

I'd say IPFessay (https://gitlab.com/stavros/IPFessay) is much closer to an IPFS-based API, since it can run entirely on IPFS with no outside servers.


Also free reliable email forwarding.


From the post:

> One of the most exciting opportunities the Librem Key opens up to us is in integrating with our tamper-evident Heads BIOS to provide cutting-edge tamper-evident security but in a convenient package that doesn’t exist anywhere else.

...

> We have worked with Nitrokey to add a custom feature to our Librem Key firmware specifically for Heads. This custom firmware along with a userspace application allows us to store the shared secret from the TPM on the Librem Key instead of on a phone app. Then when Heads boots, if the BIOS hasn’t been tampered with the TPM will unlock its copy of the shared secret, and Heads will send the 6-digit code over to the Librem Key. If the code matches what the Librem Key itself generated, it flashes a green light. If the codes don’t match, it flashes a red light.
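
The described flow is essentially TOTP with the verifier moved onto the key: both sides derive a 6-digit code from a shared secret and the current time, and the key does the comparison, so a tampered BIOS can't fake a green light. A sketch of the standard code derivation (RFC 6238 over RFC 4226's HOTP; not Purism's actual firmware):

    package main

    import (
        "crypto/hmac"
        "crypto/sha1"
        "encoding/binary"
        "fmt"
        "time"
    )

    // totp derives the standard 6-digit time-based code both sides
    // would compute from their copy of the shared secret.
    func totp(secret []byte, t time.Time) string {
        counter := uint64(t.Unix() / 30) // 30-second time step
        var msg [8]byte
        binary.BigEndian.PutUint64(msg[:], counter)

        mac := hmac.New(sha1.New, secret)
        mac.Write(msg[:])
        sum := mac.Sum(nil)

        // Dynamic truncation per RFC 4226.
        offset := sum[len(sum)-1] & 0x0f
        code := binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff
        return fmt.Sprintf("%06d", code%1000000)
    }

    func main() {
        secret := []byte("shared-secret-from-tpm") // placeholder
        now := time.Now()
        // Heads sends its code; the key compares it to its own.
        fmt.Println("match:", totp(secret, now) == totp(secret, now))
    }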

