Hacker News | stepstep's comments

I tried to address this in the article:

"If a generated password is ever compromised, you don’t need to memorize a whole new secret key and update all of your passwords. For that service only, just add an incrementing index to your secret key. For example, if your key was bananas, just use bananas2. If you can’t remember which iteration of your secret key you used for a particular service, simply try them all in order."

In particular, you don't have to use the same secret key for all websites. It's okay to slightly modify one if that password is compromised.


And that then starts to negate the point of this password manager: having to remember just one password.


I use a similar tool (pwdhash) and the benefit is not that it's a single password but a single root password. I need only a few changes and very little to remember for most sites and still get a unique password per site.

My bank requires me to change the password every 3 months or so, and I only need to change one digit in what I remember and they see a whole new password.

It's a great benefit to me and to overall security.

How secure is it overall? Somewhat more secure than just using a single password for all sites, and better on trust compared to cloud-based password storage.


You don't have to memorize them, though. If your master key is "bananas" and it didn't work, try "bananas2", "bananas3", etc. But you don't need to remember them all—because they're all essentially the same.

Hopefully this happens infrequently enough that it's a non-issue anyway.


Really? I've never seen any synchronization code (e.g., locks) in JavaScript. If multiple threads can execute async tasks in parallel, doesn't that mean JavaScript needs synchronization primitives? Most JS code I've seen in the wild relies implicitly on the single-threaded assumption—that only one callback (for example) will run at a time.


Say you wanna make an HTTP request.

What node will do is kick off the process (dns, grab a socket, connect tcp, write the header, wait for response etc.)... but it won't wait for any of this to finish, it just starts it on another thread. That thread doesn't need any JS-level synchronization because it has nothing to do with JS. Furthermore, you can kick off multiple requests like this in parallel.

When a request is done and has data to pass back, the external thread will queue up a V8 event. When V8 is free and is ready for the next event, it will see the finished request and trigger your callback with the data. When you're done handling that data (in JavaScript), V8 will wait for the next event, and so on.

So you see, it's parallel but doesn't need any JS synchronization.
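A sketch of that flow, with timers standing in for the network I/O so it runs without a real HTTP request (fakeRequest is a made-up placeholder for https.get and friends):

```javascript
// The I/O is started immediately, but completion comes back as a
// queued event that runs later, one at a time, on the single JS thread.
const order = [];

function fakeRequest(name, cb) {
  order.push('started ' + name);   // kick off the work right away
  setTimeout(() => cb(name), 0);   // completion arrives as a queued event
}

// Two "requests" in flight in parallel:
fakeRequest('A', (name) => order.push('done ' + name));
fakeRequest('B', (name) => order.push('done ' + name));

// Reached before either callback runs: starting them didn't block.
order.push('both started');
```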


> the external thread will queue up a V8 event

> So you see, it's parallel but doesn't need any JS synchronization.

I don't know much about V8/Node internals, but can't we regard the queue as a synchronization primitive? Granted, from what you say it's not in JS, but it's there, so at least some JS implementations do need synchronization.


This is not done in parallel. Only one block of JavaScript is being executed at any given time. There is also no concept of preemption: if your function runs forever, no other events or callbacks will be handled.
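A minimal sketch of the no-preemption point: a callback that is already due still cannot run while synchronous JS holds the thread.

```javascript
// A timer due "now" cannot fire while synchronous JS runs: there is
// no preemption, so the busy loop below delays the callback.
let fired = false;
setTimeout(() => { fired = true; }, 0);

const start = Date.now();
while (Date.now() - start < 50) { /* hog the only JS thread */ }

// Even though 50ms > 0ms, the callback still has not run; it fires
// only after this synchronous code returns to the event loop.
console.log(fired); // false
```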


How much time in a typical Node application is spent executing JS, vs. how much is in the C++ networking code?


JavaScript does not share memory between threads. No shared memory means no need for synchronization code. Instead it uses message passing and an event loop to do the synchronization for you.

The way it's implemented is similar to the Windows 3.x era: cooperative multitasking. When there is no code to execute in your thread, a background event loop runs, listens for messages from other threads, and calls the appropriate callback function in your thread.

This design has the benefit of making simple stuff simple to do. And it makes more complex multithreaded algorithms that require shared memory nearly impossible.


SICP isn't a research publication—it's a pedagogical work. It's been very valuable as a textbook for CS students, but I don't think it contributed any significant ideas to the field of PL.


Most of the papers listed on the page are PL theory papers, PL is a much bigger discipline than just theory. Some of the papers at the end are more design and/or implementation papers (e.g. the Self paper).


All of these are valid points except 1. Node's async IO is one of its strengths. Contrast with Rails, for example, where the standard practice for concurrency is to spawn multiple processes (or, less commonly, threads). How many Rails processes can you fit on one machine? 5-10? Node can handle thousands of concurrent connections, all on a single thread. And when you hit those limits, you can always continue scaling with multiple processes like you would with Rails.

Doing CPU-bound computation in your application server is an anti-pattern. IO-bound computation, however, is where Node excels.

The thing to realize is that pre-emptive multitasking is costly. It is convenient for the programmer (the programmer doesn't need to worry about blocking and locking up the rest of the program), but it comes at a cost. Lightweight "green" threads, or equivalently, Node's evented dispatch mechanism, are a much more efficient use of the CPU. For applications composed of short-lived computations (e.g., < 1ms), it doesn't make sense to interrupt them and context switch. It's more performant to let them complete and then switch. You just have to make sure you aren't doing any CPU-intensive computations in the app server—which you probably shouldn't be doing anyway.

The downside of Node's approach is callback hell. And that's why we have Go.


Cooperative vs. preemptive multitasking is not the same distinction as lightweight threads vs. OS threads/processes. Erlang (and also Haskell, for example) does preemptive multitasking with lightweight threads. And that's what I'm talking about.

Yeah, preemptive multitasking may be costly. But as you say, when you have lots of users, most of their tasks are sleeping or waiting on timers. That's where preemptive multitasking excels: you can have millions of sleeping tasks and only a handful doing real work at any given time. That's why Erlang is "scalable" and node.js is not, especially if you need to keep state in your workers and have them live longer.

Saying that node's cooperative multitasking excels in IO-bound computation is a clear sign that you haven't tried Erlang.


> All of these are valid points except 1.

As a Rails developer tinkering with Go recently, I have another counterpoint to this. I think Go got this right in many ways: lightweight goroutines that scale well, and good IO that transparently does epoll/libuv-style waiting in the background when you read or write. It's an easy-to-understand concurrent language in general. I have no idea why it's not taking off in web development, though.


Go is not bad, but if you're used to Python it feels very verbose. But Go performs much, much better...


I personally use generics enough in web development that Go is sufficiently annoying for many parts.


> How many Rails processes can you fit on one machine? 5-10?

Wat?!

By the end of the '90s, people took on the "C10K" problem, aimed at tuning Linux and servers like Apache so that a high-end machine could support 10,000 simultaneous connections, all doing IO at the same time. And they were successful.

That was more than 14 years ago.


loxs argument (and as an Elixir guy I fully agree with him) is that those options are all terrible. The Erlang VM (BEAM) is extraordinary in how it solves that problem. It's both preemptive and green threads. It supports SMP and clusters across nodes out of the box. I highly recommend taking a look at BEAM and how different it is from everything else out there.


>The downside of Node's approach is callback hell.

Have you used Async.JS? In my experience, it has always solved my deep callback problems.
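The flattening Async.JS does can be sketched with a tiny hand-rolled waterfall helper (a stand-in for async.waterfall, so this runs without the library; the steps are placeholders): each step gets the previous result plus a callback, and the nesting stays one level deep.

```javascript
// Minimal waterfall: run async steps in sequence, threading each
// result into the next step, without pyramid-shaped nesting.
function waterfall(steps, done) {
  const next = (err, value) => {
    if (err || steps.length === 0) return done(err, value);
    steps.shift()(value, next);
  };
  next(null, undefined);
}

waterfall([
  (_, cb) => cb(null, 1),        // e.g. fetch a user id
  (n, cb) => cb(null, n + 1),    // e.g. load the user
  (n, cb) => cb(null, n * 10),   // e.g. load their posts
], (err, result) => {
  console.log(result); // 20
});
```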


That's right. Typical routing regexes will not use backreferences, so that's not really an issue here. However, most routes do have parameters implemented as capture groups (which, I believe, is also not technically a feature of regular expressions). One simple solution would be to use a big regex (the union of all the routes) to determine which route it is (in O(n) time), and then once you know the route, use another regex to parse the parameters in the URL. So each route lookup requires 2 regex matches rather than N.


Just re-read this and realized it's unclear: when I said O(n) time, I meant linear in the length of the URL to be parsed. The point is that with this technique, it doesn't matter how many routes there are.
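A sketch of the two-pass idea with hypothetical routes (kept simple by giving each route exactly one capture group):

```javascript
// Two made-up routes, each with exactly one capture group
// (that keeps the group bookkeeping below trivial).
const routes = [
  { name: 'user',     pattern: /^\/users\/(\d+)$/ },
  { name: 'editPost', pattern: /^\/posts\/(\d+)\/edit$/ },
];

// Pass 1: one union regex; whichever wrapper group matched tells us
// which route it was, in a single scan of the URL.
const union = new RegExp(routes.map((r) => `(${r.pattern.source})`).join('|'));

function match(url) {
  const m = union.exec(url);
  if (!m) return null;
  // Wrapper groups sit at indices 1, 3, 5, ... because each route
  // contributes its wrapper plus one inner capture group.
  const i = routes.findIndex((_, idx) => m[idx * 2 + 1] !== undefined);
  // Pass 2: re-run just the winning route's regex to pull the params.
  return { name: routes[i].name, params: routes[i].pattern.exec(url).slice(1) };
}

console.log(match('/posts/7/edit')); // -> name 'editPost', params ['7']
```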


I've had better experience with Angular, though some of the complaints apply to Angular too (e.g., silent errors in templates). Recently I started using Facebook's React framework, and so far it's worked well for me. You'll probably end up writing more code than you would in Angular/Ember, but that's because React is only a view framework (so it frees you up to use vanilla JavaScript for other things, like ajax, routing, etc.). React is also much faster than Angular because of its virtual DOM patching technique.

But my favorite part of React so far is that view logic is grouped together with the views. This goes against the "best practice" of separating views and logic, but I think it makes navigating the codebase easier. You want to learn everything about how some custom view widget works? It's all in one file, rather than spread across app/assets/javascripts/widget.js and app/views/widgets/widget.html.erb. No directory hopping needed to learn how the widget works.


> All corporations must have a board of directors

Is this a legal requirement, or is it just convention? Has anyone ever experimented with alternative company structures?


I'm not a lawyer, but I did just read a book† written by a lawyer†† about this topic. I think it's possible that it can vary state to state, but in Delaware (a very popular place to form corporations) the law††† says:

The business and affairs of every corporation organized under this chapter shall be managed by or under the direction of a board of directors, except as may be otherwise provided in this chapter or in its certificate of incorporation.

† http://www.amazon.com/The-Shareholder-Value-Myth-Shareholder...

†† http://www.lawschool.cornell.edu/faculty/bio_lynn_stout.cfm

††† http://delcode.delaware.gov/title8/c001/sc04/index.shtml


What happens if a sidechain network is insecure, and someone creates coins out of nowhere and integrates them back into the main bitcoin blockchain? Do sidechains increase the surface area for bitcoin vulnerabilities?


My understanding is that the worst two things that can happen are: 1) an attacker prevents a holder of bitcoins on a sidechain from reclaiming them on the bitcoin network (e.g. by preventing the relevant transaction getting into a block on the sidechain side)... this would be a net-plus for other bitcoin holders, I guess... since they would then own relatively more of them. 2) an attacker finds a way to release the coins on the bitcoin side. That would be bad for the rightful owner, of course, but it has no impact on anybody else on the bitcoin side.


If this happened, then the sidechain would effectively become "insolvent" (because the original bitcoin blockchain will never allow more coins to be returned than have been taken out of it).

There would then be a bank run from the sidechain, and some people would be left with unredeemable sidechain tokens.


I love it. :) Small suggestion: make the cursor slide against walls rather than getting stuck (at least it gets stuck on walls for me in Chrome).


Be sure to click on the "Generate proto disk" button for instant action. :)

> Particle radius is log of mass.

Wouldn't it make more sense for the radius to be the cube root of mass (assuming uniform density)?
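The suggestion, sketched (radiusLog here is just a stand-in for whatever the demo actually does): with uniform density, mass scales with radius cubed, so radius should scale with the cube root of mass.

```javascript
// Cube-root scaling: mass ∝ radius³, so radius ∝ mass^(1/3).
// Doubling the mass grows the radius by 2^(1/3) ≈ 1.26x.
const radiusCbrt = (mass) => Math.cbrt(mass);

// Logarithmic scaling, as the demo reportedly uses (+1 keeps a mass
// near zero from producing a negative radius).
const radiusLog = (mass) => Math.log(mass + 1);

console.log(radiusCbrt(8), radiusCbrt(27)); // ≈ 2 and 3
```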

