Hacker News | treo's comments

Just in case anybody cares: there are more or less simple ways to use CGI with nginx. See here: http://wiki.nginx.org/Fcgiwrap or http://wiki.nginx.org/SimpleCGI
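For what it's worth, a minimal sketch of what an fcgiwrap setup tends to look like (the socket path and script directory here are assumptions; check your distribution's defaults and the wiki page above):

```nginx
# Hand /cgi-bin/ requests to fcgiwrap over its unix socket
location /cgi-bin/ {
    gzip off;
    root /usr/lib;                               # scripts live in /usr/lib/cgi-bin (assumed)
    fastcgi_pass unix:/var/run/fcgiwrap.socket;  # fcgiwrap's default socket on many distros
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```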


Specifically for Fossil, you can use the "server" command: http://www.mail-archive.com/fossil-users@lists.fossil-scm.or...


This happens every time they declare an app as fraudulent.


I didn't know what Pixel-Perfect is, and your README didn't tell me much about what Perfect-Pixels does. I found a short description in the project description, but you should definitely add it to your README and to this post.


Added. Thanks.


I wonder why you didn't just submit it as a URL?


That would take forever, at least on my system, as I don't have a lot of entropy available. Just for kicks, I started this when I began writing this reply, and by now I have the whopping amount of 200 bytes in the test file. So that's about 10 bytes per second.

But if you use /dev/urandom you can get quite a bit more. On this system it's about 7 MB/s.
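A rough sketch of how such a throughput measurement might look (Linux-specific, and the numbers vary by machine; /dev/random would block here once the entropy pool drains):

```python
import time

def urandom_throughput(seconds=0.5, chunk=65536):
    """Read /dev/urandom for a short interval; return bytes per second."""
    total = 0
    deadline = time.time() + seconds
    with open("/dev/urandom", "rb") as src:
        while time.time() < deadline:
            total += len(src.read(chunk))
    return total / seconds

rate = urandom_throughput()
print("%.1f MB/s" % (rate / 1e6))
```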


What's the difference between /dev/random and /dev/urandom? I've only ever used urandom and it spits out data as fast as I can consume it.


/dev/random uses environmental noise for entropy, and can be depleted rather quickly.

/dev/urandom supplements its entropy with a PRNG so that it never blocks.


/dev/random will block while waiting to collect entropy from the system. /dev/urandom will be satisfied with pseudorandom numbers. That's fine for many applications (e.g. a file filled with garbage) but not acceptable for things like cryptography.


That entropy, by the way, is derived from the keyboard and mouse devices. If you want /dev/random to go faster, move your mouse and type more.


Can it be configured to use other sources?

Does this mean that if I've got a server with no mouse/keyboard attached, /dev/random will block forever?

Logging into my Slicehost server and running cat /dev/random | hexdump -C seems to support this, more or less: only a few lines get output unless I start typing into the terminal, and then it goes marginally faster.



It will also use other interrupt timings when creating entropy, most notably from hard drives, since the time they take to reply is fairly random (the platter has to rotate to the right position). I'm not sure how this works on an SSD. I also believe that if you've got some kind of hardware RNG, it will use that too.


Interesting. I don't see /dev/random blocking on Mac OS X (13" MacBook Pro). I wonder what the source is?


On OS X, /dev/random and /dev/urandom are the same thing (both acting like the traditional /dev/urandom).


My Thinkpad T400s draws about 8 W when tuned with powertop2 (3G disabled, WLAN disabled). But as the battery only has a capacity of 43 Wh, that only comes to a bit more than 5 hours.

I could, however, replace the disk drive with another battery and get about 9 hours of runtime.
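The arithmetic behind those numbers, as a quick sanity check (the second pack's capacity is just inferred from the quoted runtimes, not a spec):

```python
capacity_wh = 43.0   # main battery capacity
draw_w = 8.0         # measured draw with powertop tuning

runtime_h = capacity_wh / draw_w
print(round(runtime_h, 1))   # 5.4 -- "a bit more than 5 hours"

# ~9 h total with a second battery in place of the drive implies
# the extra pack adds roughly (9 - 5.4) * 8 Wh.
extra_wh = (9 - runtime_h) * draw_w
print(round(extra_wh))       # 29
```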


I found it quite easy to read. Usually I use my readability bookmarklet on almost everything, but this time I didn't need it.


> We're very sorry, but while we would love to let you in and rock out with us, we need to currently restrict turntable access to only the United States due to licensing constraints

It looks like they have killed everything but USA access.


That 3x speed up is about the same that I have seen with my code. I'm currently writing a database cache simulator to try different algorithms with it, and if I want to have anywhere near realistic results I have to use realistic access traces.

I tried it today with a TPC-C trace that has about 500 million accesses. The result: CPython would have run for about 90 minutes (I stopped it after 30 minutes and went looking for a speedier option); PyPy took only 22 minutes.
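Not the simulator itself, but a minimal sketch of the idea (all names here are made up): replay a trace of page accesses against a fixed-size cache and count hits and misses, with LRU as the replacement policy:

```python
from collections import OrderedDict

def simulate_lru(trace, cache_size):
    """Replay an access trace against an LRU cache; return (hits, misses)."""
    cache = OrderedDict()  # page id -> None, ordered by recency
    hits = misses = 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # mark as most recently used
        else:
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = None
    return hits, misses

print(simulate_lru([1, 2, 1, 3, 2, 1, 4], cache_size=2))  # (1, 6)
```

Swapping in a different replacement policy then only means changing the eviction step, which is exactly what makes a standalone simulator faster to iterate on than patching a real database.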


I've gotten about a 10x speed up on numerics code where there's so much branching involved in the calculations that I can't afford to use NumPy.

As for me, the main reason I haven't moved to PyPy yet is the lack of database and messaging support.


Which databases? At the moment we have SQLite, Oracle (haven't tested it myself), and Postgresql. Plus whatever you can find a pure python driver for. Also, what do you mean by messaging?


Oh wow, I didn't realize that PostgreSQL was working on PyPy. I had heard that Django was only tested with SQLite, so I'd been making my assumptions based on that.

By messaging, I mean something like RabbitMQ, so that I can do batch scheduling at a slightly more sophisticated level than "run a cronjob".


psycopg2 is implemented in a fork of mine: http://bitbucket.org/alex_gaynor/pypy-postgresql/ It requires compiling it yourself, but it works nicely (someone told me it brought their script's run time from 2 minutes down to 8 seconds). As of the last test it passes all the Django tests. What's the current standard RabbitMQ lib? I didn't realize it was a C extension (hell, I've used it myself and never noticed).


Well the most used one is Celery. It depends on multiprocessing which blew up on me the last time I tried it in PyPy.

But... I just tried "import multiprocessing" in PyPy 1.5 and it worked! Is this all part of the C-API compatibility layer? Does that mean Cython code may soon work in PyPy too (that's my pony feature)?

RabbitMQ should work under PyPy currently then; all of its dependencies purport to be pure Python.

Another RabbitMQ lib is Rabbitmq-c, which is a direct wrapper around librabbitmq-c. It ekes out extra performance versus pure-Python RabbitMQ libs, but mostly it isn't needed.


Nope: multiprocessing was added to the Python standard library in 2.6, and our previous releases implemented Python 2.5. PyPy 1.5 implements 2.7, so it now includes multiprocessing.
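An easy way to check is just to exercise the module; a minimal sketch (written for a modern Python, but the idea is the same on PyPy):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    # If this runs, the multiprocessing module (stdlib since Python 2.6,
    # hence present in PyPy 1.5's 2.7 support) is there and functional.
    with Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3, 4]))
```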


That's good news! Guess it's time to remove the mechanism in Celery that disables the multiprocessing pool when running under PyPy.


There's also MySQL via PyMySQL.


MySQLdb also works: https://bitbucket.org/pypy/compatibility/wiki/mysql-python

I just compiled it today and it works.



No, TPC-UVa is a bit more than what I need right now. I might use it later, once I've settled on a single algorithm that I want to test in a more realistic environment. Changing the caching algorithm that Postgres uses isn't as easy as changing it in a standalone Python simulator, so I'd have to be sure I want to do that. I've tried it before, and it's a lot harder and takes a lot more time.


It's a shame that nobody makes a chassis like the Sun Fire 4500 anymore. The closest I could find is a 5U Thinkmate STX XA48-2510, which works out to about 32 cents/GB.

