Hacker News | jbermudes's comments

If he says they were anonymous people who set it up, what guarantees are there that the money would actually go to him?


Hopefully "anonymous" just means that they don't want their identity publicly known, not that he doesn't know who they are. It would be good to get more clarification, though.


Wouldn't it still be susceptible to replay attacks?


Paper-based PUFs, anyone?


> No, not just decline. The rails were purposefully ripped up, mostly at the behest of the tire/auto/oil lobby.

This is a popular explanation of the fate of the Pacific Electric system, but like anything in history, it seems there were a number of factors that led to it happening.

The original system was designed as a loss-leader by Henry Huntington for his business venture of real estate sales and land development (e.g. electrification). Access to downtown LA from these new suburban developments in a pre-automobile world made them attractive to new buyers. As automobiles became common, three things happened:

1. The need for such a system began to decline as more people could afford their own personal transportation. This led to a decline in ridership, and hence revenue.

2. Because much of the system was at-grade (street-level), the increase in automobile traffic directly impacted service of the streetcars, as they had to share the road with the new influx of cars. This made the system less attractive, further decreasing revenue.

3. This declining revenue stream led to the system being sold to the local government, who inherited the problems above. At the same time, the freeway system was being planned (which at one point envisioned train tracks running down the middle of the freeways!), and this proposed system plus promises of more flexible (and cheaper) service via automobile busses made the system even less attractive to maintain.

So yes, while the system was eventually sold to a group with ties to the oil/auto/tire industries, the local authorities were all too happy to get rid of it. While the debate today continues over busses (cheaper) vs. trains, the real issue IMO is control of the rights of way. IIRC, MTA held on to a few ROWs and was able to use them in the new system's Expo line, among others.

EDIT: LA's MTA still faces this issue of at-grade tracks being impacted by traffic. Because it is cheaper, it is used substantially in both the Blue and Expo lines, and I've heard some arguments that the Expo line isn't much faster than driving because it has to deal with traffic+lights. From a purely financial point of view, some have argued that you get the most bang for your buck with lines like the Orange line: busses on private roadways, so you get the fleet cost of busses but the time benefits of traditional train tracks separated from traffic.


Should I parse it as tossing the same coin 10 times, or choosing from the jar 10 times?


Good question, and one that sometimes comes up when I ask it in interviews. You are tossing the same coin 10 times.


0.753?

At the beginning, there's a 1/1000 chance that you pick the double-headed coin, and a 999/1000 chance you pick a fair coin.

A fair coin would act the way you've observed 1 time in 1024. A double-headed coin would act that way 100% of the time.

(This is where I get fuzzy): Given what you've observed, there is a (1000+1024)/1024 = 0.506 chance that the coin is double-headed. There is a 0.494 chance that it's fair.

A double-headed coin would come up heads next 100% of the time. A fair coin would come up heads 50% of the time. So, 0.506 x 1 + 0.494 x 0.5 = 0.753.

How far off am I?


Your .506 is right but the arithmetic problem that you set equal to it is wrong. I think you meant to type 1024/(1024+1000). (BTW, it should be 1024/(1024+999) ).
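
For anyone who wants to check the arithmetic end to end, here's a short Python sketch of the full Bayesian update (my own illustration, not part of the thread):

```python
# Jar of 999 fair coins + 1 double-headed coin; we observed 10 heads in a row.

# Priors
p_fair = 999 / 1000
p_double = 1 / 1000

# Likelihood of 10 straight heads under each hypothesis
like_fair = 0.5 ** 10      # 1/1024
like_double = 1.0

# Posterior via Bayes' theorem
evidence = p_fair * like_fair + p_double * like_double
post_double = p_double * like_double / evidence   # = 1024/(1024+999)
post_fair = 1 - post_double

# Probability the next flip comes up heads
p_heads_next = post_double * 1.0 + post_fair * 0.5
print(round(post_double, 3), round(p_heads_next, 3))  # 0.506 0.753
```

The posterior simplifies to exactly 1024/2023 once you multiply numerator and denominator through by 1024000, which matches the corrected fraction above.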


Close enough for beta. We'll refine after user testing. (Guess my specialty!)


This is a fun question. Can I look at the coin's two sides? If not... I assume you now have to start applying statistical tests (given that a fair coin will only do this once out of 1024 times, what are the chances I've got one of those 999 fair coins vs. the 1/1000 chance that I picked the double-headed coin?), or is there some simplifying assumption I'm missing?

Anyway--assume I think all that aloud in an interview. What does that tell you about the candidate?


>Can I look at the coin's two sides?

I would give points for just asking that question, because many people bound by conventional thinking wouldn't dare to ask it, accepting the default assumption that you can't. I'm not saying this says anything about your ability to solve the problem, but asking the question is a good sign of a supple mind.


You can only see the result of the flips, you can't examine the coin. Yes, it comes down to estimating the probability of having a biased coin given that you have seen it come up heads ten times in a row.


0.5005?


That's what I got.

There's a .999 chance you have a fair coin and a .001 chance you have the rigged coin.

(0.999 * 0.5) + (0.001 * 1) = 0.5005.

Seems too simple, but a coin is a coin, right?


Ask yourself a question: can you use the data (the 10 coin tosses) to update the probability of the current coin being two-headed?


I wouldn't use the data. The coin hasn't changed since I picked it out of the jar. If I flip it 1, 10, or 10e100 times, the coin would still be the same coin.

So figure the p(heads) for the coin and ignore the previous history. Overthinking it is why this makes a good FizzBuzz problem.


An example that should show this approach is wrong:

Suppose that the jar contains 500 double-head coins and 500 double-tail coins. You pull a coin from the jar, flip it 10 times, and get 10 heads. What is the probability it will come up heads next time?


That seems like a completely different problem to me, since all randomness is out of the system the moment you see the first flip.


OK, so now imagine that there are 1000000 double-headed coins, 1000000 double-tailed coins, and one fair coin. Now (1) there's still (potentially) randomness present, so it's not "completely different" from the original problem, but (2) the ignore-the-data approach gives an obviously wrong answer whereas using the data gives a believable answer.


Let's consider that it might be a fair coin, or it might be a double-headed coin.

Let's also say that every time you flip the coin and it comes up tails, you win $5. And every time you flip the coin and it comes up heads, you lose $1.

Clearly, this would be a great game to have the opportunity to play, if the coin is fair. Every time you flip you either win $5 or lose $1, so your profit, on average, is $2 per flip.

You've flipped it 10 times so far, and it's come up heads every time, and you've lost $10.

After you're $10, $100, $1000, or $10e100 in the red, without ever seeing a win, when do you change your mind about playing this game?
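
Under the original jar's odds (999 fair coins, 1 double-headed), the break-even point can be sketched in a few lines of Python (my own illustration, not the commenter's numbers):

```python
# When does this game stop being worth playing?
# Jar: 999 fair coins, 1 double-headed coin (prior from the original puzzle).
# Payoff: +$5 on tails, -$1 on heads.

def expected_profit_after(n_heads):
    """Expected profit of the NEXT flip, given n_heads straight heads so far."""
    # Posterior probability the coin is double-headed (Bayes' theorem)
    post_double = 0.001 / (0.001 + 0.999 * 0.5 ** n_heads)
    p_heads = post_double + (1 - post_double) * 0.5
    p_tails = 1 - p_heads
    return 5 * p_tails - 1 * p_heads

for n in (0, 10, 11):
    print(n, round(expected_profit_after(n), 3))
# The expected profit starts near +$2, shrinks as the heads pile up,
# and first goes negative after 11 straight heads -- the point where a
# rational player should quit.
```

So the data answers the question directly: a Bayesian player changes their mind after the eleventh consecutive head.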


You are getting downvotes because you didn't follow Bayesian reasoning, but there is some justification for your instincts here http://www.stat.columbia.edu/~gelman/research/unpublished/ph...


Yes, it's still the same coin, but you don't know which coin you got. You do know that you got 10 consecutive heads, though. How improbable is this if you got a fair coin? How probable is it with the double-headed coin? This is the data you can use to update the probability.


So if you flipped the coin twice, once it was tails, and once it was heads... you'd ignore that info? Or look at it another way, if you flipped it a million times and it always came up heads...


I always wondered, would it be possible to design a custom cartridge with its own CPU/memory and only use the Atari for drawing on the screen and therefore have more complex screens? In other words, use the special hardware to calculate the frame and then use the Atari to control the beam to color the screen the way you want. So the classical interface between the cartridge and the Atari is only handling turning the beam on and off and with what color.

I know that the Atari has a certain pixel size due to how fast it can turn the beam on and off, but working within that constraint, it seems the above method could still produce much more complicated games, ones that rival the interactivity of, say, the SNES.


The Atari can't control the color of the beam with much precision. The CPU doesn't directly control the beam. It writes to the TIA chip's graphics registers, which are things like the 8 monochrome bits of a sprite or 8 bits worth of playfield background. The TIA then produces the beam, still limited to its natural resources of a one-dimensional single-scanline world containing the 40 monochrome playfield pixels and two hardware sprites.

All this method could do is continually feed the 6507 a stream of instructions to write those registers. It would be like if you had enough ROM space to unroll every possible loop and inline every lookup. The cartridge still couldn't write to the TIA any faster than the 6507 CPU can. It couldn't hit a register any more often than every 18 pixels, which is how long it takes for a basic load/store pair of instructions.
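
To put rough numbers on that limit (my own back-of-the-envelope sketch; exact cycle counts depend on the addressing modes used):

```python
# Back-of-the-envelope: how often can the 6507 update a TIA register?
COLOR_CLOCKS_PER_CPU_CYCLE = 3   # the TIA pixel clock runs at 3x the CPU clock

# A minimal zero-page load/store pair on the 6502/6507
# (TIA registers live in zero page, so STA is a zero-page store):
LDA_ZP_CYCLES = 3
STA_ZP_CYCLES = 3
cycles_per_write = LDA_ZP_CYCLES + STA_ZP_CYCLES

pixels_per_write = cycles_per_write * COLOR_CLOCKS_PER_CPU_CYCLE
print(pixels_per_write)  # 18 color clocks (pixels) between register writes
# (An immediate-mode load would shave this to 15 pixels, but then the data
# has to be baked into the instruction stream itself.)
```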

The SNES Starfox chip works by drawing polygons into its own RAM then using DMA to copy them to the console's video memory. The Atari 2600 has no video memory beyond the TIA's registers, and no DMA to hit them any faster.

Unless... I don't know enough about the electrical side, but is it possible for a cartridge to disable or halt the 6507 CPU and directly drive the bus to the TIA? Then you could at least write to the TIA every cycle, yielding resolution of every three pixels. The Atari cartridge slot pinout is pretty plain though, just the 13 address and 8 data lines. Nothing fancy like a CPU halt or reset line or even a clock, and actually not even a read/write line that would be necessary to write to the TIA. It would require some motherboard hardware hacking, and by that point you're pretty much implementing an entirely new game console with an arbitrary CPU coupled to the 2600's TIA graphics chip.

(TIA is Television Interface Adaptor: http://en.wikipedia.org/wiki/Television_Interface_Adaptor )


Pitfall II had a custom cartridge with additional memory and chips to do multi-voice sound (there's music playing continuously in Pitfall II), so it is possible. I just don't know how much the custom hardware can take over the machine.


I was thinking the same thing, and it's funny you mentioned the SNES since some of the more advanced games for that platform used a second processor on the cartridge to assist.


While we're on the topic of K-12 CS Education, does it strike anyone else as insane that the state with Silicon Valley does not have a proper credential process for CS?

Last time I checked, there were a few ways to be authorized to teach CS/programming-related classes in a school, all with problems. The three credentials that authorize you to teach computer-sciencey things don't test any CS at all:

- Supposedly, the business credential test has a question about flowcharts

- The industrial tech test asks about the parts of a computer

- The math test might have some set theory, but good luck trying to teach computer science with a math credential. The administration at your school would prefer you teach math as there's always a need for another math teacher.

CS is a delicate and fledgling field in K-12, and right now it's a bleak road ahead. The small number of schools that have programming classes (let alone a CS track) is shrinking, and the credential problem isn't helping.

One demonstration of the state of CS education is the College Board's discontinuation of its Computer Science AB exam in 2009, due to a mixture of low turnout and a lack of qualified teachers. (Of the few CS educators out there, an even smaller subset could teach AB; most stuck to A.)

With all of the opportunity generated by the industry pioneers here in CA, our CS education should be at the forefront nationally. We don't even have to embrace CS for its career potential: CS is a liberal art of the 21st century, and I believe its concepts would be worth teaching for their application and transfer to other subjects alone, especially math.


I know; I was stunned to see the ways that different states have handled the ACM's recommendations: http://www.acm.org/runningonempty/roemap.html (One of my friends from Idaho gleefully pointed out that they "don't suck at everything.")

If you're passionate about helping CA stay the course ahead, you really should check out some of the resources for talking to administrators at http://www.csedweek.org/ or even get in touch with the CSEdWeek organizers.


Here's a program trying to address the issue by helping people from the industry get involved in teaching: http://www.nytimes.com/2012/10/01/technology/microsoft-sends.... I'm teaching an APCS class through it this year.


It amazes me how easily this whole thing could have been prevented if they had just made the Amazon results show up in a separate shopping lens instead of the default lens.

Then even if it's pre-installed, there'd be some reasonable expectation that a program designed to show you shopping selections would have to connect to a 3rd-party server and send your query.


Literally no one would opt in.

The obvious solution that was missed is 2 versions.

You have a paid version with no Amazon integration, and a free version with Amazon enabled by default.

This works 3 ways:

(1) The people who don't want Amazon but don't want to pay can just modify the free version. No big deal; it's a small amount of effort to get ads out of your free software.

(2) Lots of people end up paying for the software, not because they don't want to be bothered modifying the free version, but because they actually want to support the product.

(3) They still get tons of revenue from all of the people who download the free version but never turn Amazon off.

Almost nobody complains, and everyone wins.

The solution that they chose is probably the worst possible option, and has put the entire operating system's future at risk.


What amazes me is that people make it sound as if they did not consider this. Of course they considered doing that, but they also realized it would raise very little revenue, since 99.99% of folks would never click on the Shopping lens.

This way, people are forced to see Amazon results, and the few who click on irrelevant results and buy something will generate much more revenue comparatively, given that it's almost certainly an affiliate deal.


It's just a total misfire to take Linux users, many of whom are in the Venn diagram intersection of "computer savvy" and "libre software advocate", and then try to force them to watch advertisements on their own desktop.

Even people who don't know computers would probably prefer to use Windows or OS X, since that's going to be a nicer experience.


Until everyone stops using Ubuntu because for some mysterious reason they suddenly hate it.


We already did. S. mutans has been genetically modified to excrete water instead of the acid that normally wears away enamel.

http://abcnews.go.com/Health/Dental/story?id=98080&page=...


Recently they tweaked the bacterium again so it would only survive if fed a particular nutritional supplement. That ensures the bacterium won't spread from one person to another while kissing or sharing utensils.

AKA vendor lock-in.


It's still rather experimental. At this stage, I'd rather have the bacterium require supplements than risk more widespread problems. Once proven safe, then hell yeah, open the floodgates.


While not a specific claim that X will happen in year Y, it is a bit unsettling how, since its original publication in 1997, RMS's "Right to Read" [1] has almost served as a playbook for the intellectual property lobby (1998's DMCA, UEFI Secure Boot, trusted computing, etc.), especially in light of the author's notes [1], which demonstrate that the laws required to create that dystopia are either already available in some form, have been proposed in the past, or are actively being worked on (SOPA, PROTECT IP).

Ultimately it's not one particular statement that is eye-opening, but rather the holistic view of the legal landscape and of what general-purpose computers (and programmers) can do, and how that will continue to step on more and more industries' toes, as pointed out by Cory Doctorow [2].

It's not so much that he's some sort of prophet as it is the unfortunate fact that his paranoia becomes more justifiable with every new law creating IP restrictions or curtailing civil liberties (PATRIOT, NDAA, etc.). It's a self-perpetuating feedback loop.

[1] http://www.gnu.org/philosophy/right-to-read.html [2] http://lwn.net/Articles/473794/


It's pretty much impossible to completely stop it; you can only choose how you will attempt to delay or discourage it. Consider the war on drugs: at least there you have the advantage of a tangible object whose availability you're trying to limit, which is at least physically possible, and look how successful that has been. But trying to stop the spread of information, which has no tangible limit and is trivial to copy?

As Corey Doctorow pointed out in his speech at a recent conference, copying will only get easier. In this world of increasingly computerized things, corporations will continue to struggle with this core problem: Computers (of all types) are general purpose (at the architecture level) and execute instructions given to it. How can we make it so it doesn't execute code that we don't like?

As others have pointed out, if your profitability relies chiefly on the secret order of 1s and 0s and restricting access to that voodoo then you're in for a long uphill battle because all of computing can be summed up as the art and science of copying bits from one location to another as cheaply and as efficiently as possible.

In the case of video game developers, we've seen a few models that take copyright infringement into consideration. Some games have demonstrated profitable virtual economies with the company taking a cut, and even a company like Nintendo sells its hardware at a profit, so as long as people have a reason to purchase Nintendo hardware there is room for a company like Nintendo to exist. This isn't to say these are the only or the best solutions, but rather to point out that there is no reason to believe outdated business models will continue to thrive.


There are two ways current hardware makers are fighting piracy: increasing hardware complexity and value-added services. A lot of people are complaining about the Vita's proprietary memory cards, but I wouldn't be surprised if that move was made solely to prevent third-party code execution. I'd imagine more anti-consumer features are coming in the next generation of console hardware, too.

The flip-side is rewarding good customers. By not having a hacked console, you can use online services (such as Xbox LIVE).

Although, amusingly, I think the answer may actually be in free-to-play games. 2011, in my opinion, was the year F2P changed from evil Zynga social games and a last-ditch MMO fallback into a serious triple-digit-growth area, with games like League of Legends and Team Fortress 2. Expect to see F2P on Sony consoles soon. (Sony is more liberal with its online policy, which is why I'm predicting the first popular F2P title on their systems. Microsoft is very domineering about Xbox LIVE and how online play works, and Nintendo's online strategy is somewhere in the pre-fire Stone Age.)


Quick correction: It's 'Cory' not 'Corey'.

