Hacker News | tfussell's comments

These are called trophic levels https://en.m.wikipedia.org/wiki/Trophic_level


Hmm, if you had a species of vultures that only fed on apex predators, would that give them a higher trophic level? Or what about bacteria that only eat human flesh? It's wild to think of a bacterium at the top of the chain. Does it count if you only partially eat the victim and don't kill them?


D'oh! Thanks, I completely forgot that this is literally what makes a food chain a food chain.


We've improved fuel efficiency by a factor of 8 since 1970: https://commons.m.wikimedia.org/wiki/File:Aviation_Efficienc...


My back-of-the-envelope calculation is that 1 GW for 1 year is about 30 PJ. Jet fuel has 42 MJ/kg with a density of 0.8 kg/L, for a total of around 900 million liters of fuel at 100% efficiency. An A321 holds around 30,000 L, which comes out to about 30,000 flights' worth of fuel from one reactor. GVA had around 200,000 flights in 2018, meaning roughly 6-7 1 GW reactors' equivalent of fuel used (obviously not all flights would be fully loaded, but I don't know what a normal load is).
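The estimate above can be checked mechanically. All figures here are the comment's own rough assumptions (42 MJ/kg jet fuel, 0.8 kg/L, a 30,000 L A321 tank, 200,000 GVA flights), not authoritative data:

```python
# Back-of-the-envelope check of the reactor-vs-jet-fuel comparison above.
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~3.15e7 s
reactor_output_j = 1e9 * SECONDS_PER_YEAR     # 1 GW for 1 year ~ 31.5 PJ

JET_FUEL_J_PER_KG = 42e6                      # 42 MJ/kg
DENSITY_KG_PER_L = 0.8
energy_per_liter_j = JET_FUEL_J_PER_KG * DENSITY_KG_PER_L  # ~33.6 MJ/L

liters = reactor_output_j / energy_per_liter_j  # ~9e8 L at 100% efficiency
flights = liters / 30_000                       # A321 tank ~ 30,000 L
reactors_for_gva = 200_000 / flights            # GVA flights in 2018

print(f"{liters:.2e} L, {flights:.0f} flights, {reactors_for_gva:.1f} reactors")
```

This lands at roughly 900 million liters, about 31,000 full-tank flights per reactor-year, and around 6-7 reactors to cover GVA's 2018 traffic.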


My understanding is that these aren't for intimidation but rather to reduce theft by giving people an outside view of themselves. A mirror can also serve this purpose, but may be more difficult to fit in certain spaces. If this is correct, then these cameras aren't recording, just displaying the image directly on the screen. Here's an article about it https://economictimes.indiatimes.com/blogs/et-commentary/mir...


All of the ones I've seen specifically say "RECORDING IN PROGRESS" on the screen. Could they be lying? Sure, but I doubt they are.


What if your test includes a transaction already?


We make tests robust in the face of existing data. For example, each name is "GUID-ified," and we only compare specific subsets of rows that we know will be isolated from the rest, etc.

We still use transaction rollback for most of our tests, but all tests are written with the assumption of existing data, to accommodate the few tests that must commit their transactions (such as concurrency tests).
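The "GUID-ified" trick is simple to sketch. The helper name below is invented for illustration, not the commenter's actual code:

```python
import uuid

def guidify(name: str) -> str:
    """Append a unique suffix so test fixtures never collide with
    rows that already exist in the shared test database."""
    return f"{name}-{uuid.uuid4().hex}"

# Each test creates its own uniquely named rows and asserts only on them,
# so pre-existing (or leftover) data can't break it.
username = guidify("alice")
rows = [{"name": username}, {"name": "pre-existing-user"}]
mine = [r for r in rows if r["name"] == username]
assert len(mine) == 1
```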


This is the way to go imo!

It also enables parallelization of test runs :)

And it also makes testing databases with limited transaction APIs (like DynamoDB) much easier.


One approach we used was to wrap the connections in a proxy (easy in Python), and flag if the code called the commit() method at any point. You do need to trust that the code doesn't call COMMIT explicitly via the execute() method, and flag tests running subprocesses that might dirty the database. If the test is dirty, rebuild the test database from the template instead of relying on rolling back the transactions.

However, if you add this behavior to an existing test suite, you will likely have to fix a lot of tests that assume sequences will return predictable numbers, such as automatically generated primary keys. Technically, those tests are already broken, even if it is unlikely you'll see them fail.
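The proxy idea described above might look like the following sketch (the class name and `dirty` flag are invented for illustration; it wraps a DB-API connection and records whether anything committed, so the harness knows to rebuild the database instead of relying on rollback):

```python
import sqlite3

class DirtyTrackingConnection:
    """Wrap a DB-API connection and flag the test as dirty on commit()."""
    def __init__(self, real_conn):
        self._conn = real_conn
        self.dirty = False

    def commit(self):
        self.dirty = True        # a real commit means rollback can't clean up
        self._conn.commit()

    def __getattr__(self, name):
        # Delegate everything else (execute, cursor, rollback, ...) untouched.
        return getattr(self._conn, name)

conn = DirtyTrackingConnection(sqlite3.connect(":memory:"))
conn.execute("CREATE TABLE t (x)")
assert not conn.dirty
conn.commit()
assert conn.dirty               # harness would now rebuild from the template
```

As the comment notes, this still trusts that nothing issues a literal `COMMIT` through `execute()` or from a subprocess; those cases need separate flagging.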


Typically the approach I take is to monkey-patch out the transaction methods (begin, commit, rollback) in the test harness and wrap each test in a real transaction. Some test runners have this built in.

This is easy in dynamic languages, really hard in static languages.
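A minimal sketch of that pattern, using sqlite3 as a stand-in database (the subclass and `real_commit` helper are my own names, not any particular test runner's API):

```python
import sqlite3

class NoCommitConnection(sqlite3.Connection):
    """Connection whose commit() is a no-op, so everything a test does
    can be undone with a single rollback at the end."""
    def commit(self):
        pass                             # patched out for tests

    def real_commit(self):
        sqlite3.Connection.commit(self)  # for fixture setup only

conn = sqlite3.connect(":memory:", factory=NoCommitConnection)
conn.execute("CREATE TABLE users (name TEXT)")
conn.real_commit()

# The code under test commits freely, but nothing sticks:
conn.execute("INSERT INTO users VALUES ('bob')")
conn.commit()                            # no-op
conn.rollback()                          # harness cleans up after the test
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```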


So you're testing, against a database, some process that you expect to fail, and you would like to test whether the rollback is done properly.

And because you abstracted all your transaction logic away for the tests, your test result is not worth anything.

IMHO, there are only two ways to achieve proper database testing: 1) a new db for each test (very slow), or deleting all data from all tables before each test (OK for smaller projects); 2) tests that have absolutely no impact on other tests with the data they added/modified/dropped (very hard to achieve).

Edit: typo


I actually usually do something compatible with that for rollbacks, or I special case those tests.

It doesn't have to work in all corner cases, just the ones that matter to me.

The advantage of doing it this way is the tests run very fast - which is critical if you want developers to use them often.


Couldn't you just patch the transaction logic to use savepoints? Then even tests that build on transactional behaviour would work correctly.


I seem to recall savepoints differ in some ways that can make this difficult - but without remembering the specific problem I hesitate to say anything concrete about that.


You could use savepoints instead to wrap the transaction.
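Concretely, the code under test's "transaction" becomes a savepoint inside the harness's outer transaction. A sketch with sqlite3 (the savepoint name is invented; the SQL is standard):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")                       # outer transaction = test wrapper
conn.execute("INSERT INTO t VALUES (1)")

conn.execute("SAVEPOINT sp1")               # code under test "begins" here
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO SAVEPOINT sp1")   # its "rollback" only undoes (2)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
conn.execute("ROLLBACK")                    # harness wipes everything at the end
print(count)  # 1: the outer insert survived the inner rollback
```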


Some ORMs do that automatically as they detect nesting transaction attempts.


Not sure exactly how Django handles it, but it works transparently and really well. The only issue is when the code you are testing rolls back a nested transaction; I seem to remember that caused issues, but there is a workaround at least.


What I've been curious to learn and haven't heard discussed is how this data will become available. Will I be able to call up Comcast and pay $X for a particular user's browsing history after this passes?


You (J. Random Person) may not be able to, but advertising companies probably will be able to. For example, from https://www.eff.org/deeplinks/2017/03/five-creepy-things-you... :

> According to Ad Age, SAP sells a service called Consumer Insights 365, which “ingests regularly updated data representing as many as 300 cellphone events per day for each of the 20 million to 25 million mobile subscribers.” What type of data does Consumer Insights 365 “ingest?” Again, according to Ad Age, “The service also combines data from telcos with other information, telling businesses whether shoppers are checking out competitor prices… It can tell them the age ranges and genders of people who visited a store location between 10 a.m. and noon, and link location and demographic data with shoppers' web browsing history.”


That's what I figured. I wonder if a request from a sole proprietorship would be sufficient to get an ISP to respond.


My guess is they'd just use price as a gatekeeper, rather than spending time to do a background check of their customers. I doubt any of these data feeds are cheap.


I'd love to see this process get hacked - What's the CEO of {your_favorite_ISP} been perusing on the web? Sounds like a good opportunity for a crowdfund.


There are firms that specialize in advertising data. If you're wondering where your information goes when you sign up for your grocery store rewards program, it's those guys. You usually can't buy a particular user's history; it's usually sold in bulk. This is just standard practice; no government organization is placing rules on it, so we'll see what kind of inevitable abuse stems from it.


You misunderstand.

Your data is available now.

It was made available by Obama starting in 2015 when he removed FTC authority over them. That tap wouldn't shut off until December at the earliest if the current administration did nothing.

The EFF is oddly silent about why they've been so oddly silent about this exposure for close to 3 years now.


We're definitely not fine in the Southeast right now. Large portions of Georgia in particular are in extreme drought. See: http://droughtmonitor.unl.edu/Home/StateDroughtMonitor.aspx?...


But why don't other counties do this if they're so smart?


We are in our cornbread!


Maybe. It looks like the buttons are a bit smaller with slightly more space between them.

