Hacker News | Amadiro's comments

> Chrome is not immune to bad JS and will also slow down your system until the single tab is killed/crashes.

That's true, but with 8-or-so cores I don't really care or even notice that much (if I notice some tab is eating a lot of CPU, I just go into the task manager and kill it until I need it again, at which point I reload it). No disruption to my browsing experience happens. In Firefox, I'd have to close the tab, possibly restart the browser, et cetera.

> The browser restarts, all tabs are still there and only load if you click them

I haven't really used Firefox for a while now, but it used to lose tabs occasionally for me. I hope that's fixed nowadays.


Developing USB drivers is actually fantastically easy nowadays with libusb(x). You can more-or-less just pretend your USB device is a server you talk to through a socket. No need to write a single line of kernel code; you can do it all in Python in userland, if you so desire.

For many USB devices this is also totally feasible, because most USB devices that don't provide some standard interface the operating system takes care of (keyboards, mice, controllers, tablets, mass storage, ...) just provide some sort of service that typically only one application would interact with, so having the driver inside the application that interacts with it works well.
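To make that concrete, here is a minimal sketch of what a userspace "driver" looks like with pyusb (the Python wrapper around libusb). The vendor/product IDs and request number below are hypothetical; only the bmRequestType packing, which comes straight from the USB 2.0 spec, runs without hardware, so the actual device calls are left commented out.

```python
# Userspace USB "driver" sketch using pyusb (which wraps libusb).
# The device IDs and bRequest value are made up for illustration.

def bm_request_type(direction, req_type, recipient):
    """Pack the bmRequestType byte per the USB 2.0 spec:
    bit 7    = direction (0 = host-to-device, 1 = device-to-host)
    bits 6-5 = type      (0 = standard, 1 = class, 2 = vendor)
    bits 4-0 = recipient (0 = device, 1 = interface, 2 = endpoint)
    """
    return (direction << 7) | (req_type << 5) | recipient

# A vendor-specific, device-to-host request addressed to the device:
VENDOR_IN = bm_request_type(1, 2, 0)  # 0xC0

# With actual hardware attached, the rest is just:
#   import usb.core
#   dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # hypothetical IDs
#   dev.set_configuration()
#   data = dev.ctrl_transfer(VENDOR_IN, 0x01, 0, 0, 64)     # read up to 64 bytes
```

From there, bulk endpoints behave much like the socket analogy above: `dev.write(endpoint, payload)` and `dev.read(endpoint, length)`.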


And just what makes you think that making a purely functional language 'lambdacious' will not, a few years later, result in a book 'Industrial strength lambdacious'?

You're falling into the trap of thinking that pursuing a theoretically pure discipline will result in a language with fewer flaws; in reality, even theoretically pure concepts have issues and can be superior or inferior to other theoretical concepts/constructs [which may or may not be invented later]. Pursuing one theoretical framework may make some issues either disappear or become invisible (like garbage collection makes the issue of memory management appear to disappear -- but it still has to be done at some level, and this causes new issues, as evidenced by the many talks given and books written about how to deal with $LANGUAGE's garbage collector), but that doesn't mean that you won't have issues remaining, or that you won't create new ones.

You call "Aversion to Extremes" a cognitive bias that doesn't work well in practice, but you should realize that the opposite is true; the more extremely a language pursues a theoretical concept, the less used it typically ends up being in practice. For every success story you can give me about Agda (pursues dependent typing), Haskell (pursues functional purity) or Smalltalk (pursues object-orientedness), I can give you a thousand success stories of people using C++, Java or C#.

I can't imagine that you will be able to come up with any measure of "how well something works in practice" that is remotely sensible and makes these "purity-oriented" languages appear to "work better in practice". The reality of programming is just that there are many different requirements one might have of a language and its implementation, and properties that are very clear advantages in a theoretical framework seldom translate to practical benefits in a nontrivial way.

A nice example (IMO) of a language that pursues functional programming without attempting to be needlessly pure about it is Erlang; each "process" is itself written in a functional language that promotes separation of side-effects and communication, but from a "further up" perspective, each process is like an object, holding onto state. That way it gains the [for Erlang] important advantages of having functional traits, without sacrificing flexibility by needlessly strictly adhering to the functional paradigm.
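That "process as an object holding onto state" view can be sketched even in plain Python, with a thread and queues standing in for an Erlang process and its mailbox (the `counter_actor` here is a made-up example, not any real API): the state is mutable, but only the actor that owns it ever touches it, and everything else goes through messages.

```python
# Actor-model sketch: a "process" owning private state, reachable only via messages.
import queue
import threading

def counter_actor(inbox, outbox):
    state = 0  # private mutable state; no other thread ever touches it directly
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        elif msg == "incr":
            state += 1
        elif msg == "get":
            outbox.put(state)

inbox, outbox = queue.Queue(), queue.Queue()
actor = threading.Thread(target=counter_actor, args=(inbox, outbox))
actor.start()

inbox.put("incr")
inbox.put("incr")
inbox.put("get")
result = outbox.get()   # 2
inbox.put("stop")
actor.join()
```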


The problem with Erlang is that it solves the wrong problem, so to speak. Reasoning about state locally (inside a procedure/function) isn't all that hard -- which is why intra-procedure/function immutability doesn't actually get you very far. The trick in, e.g. Haskell, is that you can enforce inter-function immutability. In the end all actor-based systems end up being a huge mess of distributed/shared mutable state -- which is what we're trying to get away from. (I'm well aware that there are formalisms that can help you deal with some of this complexity, but they are a) not part of the language, and b) not practiced very widely in my experience.)


Haskell doesn't address problems of distributed computing at the language level. For distributed computing you need message passing and you need to handle failures. If you do distributed computing in Haskell, you also need to build and use actor-based abstractions. It is not possible to hide distributed computation behind an immutable function call abstraction (RPC systems tried to do it and failed).


Indeed not, but that's not quite the point I was trying to make. My point was more that preemptively programming as if every single little piece of state is potentially distributed state is actually detrimental. Distributed mutable state is hard and there's no way around that other than changing the model to e.g. a reactive one -- local mutable state shouldn't be hard.


Quote: "You call 'Aversion to Extremes' a cognitive bias [... but] the more extreme a language pursues a theoretical concepts, the less used it typically ends up being in practice."

Yes, people use languages with solid theoretical foundations less because they perceive them to be extreme. That was my point.

Quote: "I can't imagine [... any measure that] makes these 'purity-oriented' languages appear to 'work better in practice'."

How about SLOC required to implement a task? Take a look at this: http://web.cecs.pdx.edu/~apt/cs457_2005/hudak-jones.pdf It's an old paper, but the findings are still perfectly valid.

Quote: "each process is like an object, holding onto state".

This is the actor model. Erlang is an uncompromisingly pure implementation of the actor model, which is why it is so effective. Again, I think you are making my case for me.

Your use of the term "sacrificing flexibility" sets up a false dichotomy between purity and flexibility, as though there were programs that are difficult to write in Haskell.


They use them less because they are less practical and, due to their purity, unable to fulfill people's (real, rather than theoretical) requirements.

Measuring LOC required to implement a task is meaningless. If you implement a task in Python in fewer LOC than what I need in C, that doesn't necessarily mean anything, because my code might fit on an 8-bit microcontroller, run faster, ... or meet any number of other imaginary or real additional requirements I might think of (and people always have lots of those!) "Have 10% fewer lines of code" is pretty much never a given requirement, though.

Sure, Erlang is an "uncompromisingly pure implementation of the actor model", but then Python is an uncompromisingly pure implementation of the Python model, and C is an uncompromisingly pure implementation of the C model, so that's hardly an interesting argument. Erlang is pretty much what popularized the actor model as we consider it nowadays in mainstream programming, so the claim is tautological.

The point (which you have not actually argued against) is that Erlang is also a functional language, but not a pure one; it allows side-effects, it allows sending messages, it allows storing data in your (destructively updatable) process dictionary, et cetera.

That means Erlang benefits from the features it inherits from functional languages, even though it doesn't go "full functional" in the sense some other languages do.

If you don't think there are programs that are difficult to write in Haskell, Agda, Miranda, ML, ..., I have to doubt that you have written a substantial number of them, and/or that you have applied any of them to an actual real-world task.


>Yes, people use languages with solid theoretical foundations less because they perceive them to be extreme. That was my point.

At some point, doesn't it stop being the people's problem and become a problem of the language?

Or do we know (by divine intuition?) that those languages are perfect, and we don't need any stinking reality to verify it?


> Erlang is an uncompromisingly pure implementation of the actor model

It's not. To pick just the first issue I remember, PIDs are forgeable. (I spent a day once trying to survey what you'd have to do to capability-tame Erlang, and it looked like a lot of work. The Erlang developers did not wear a hair-shirt.)


Check if your iMac defaults to software rendering for some reason.


Absolutely not, with where the SDK is currently at.

I've tried quite a few games and demos with the current SDK (the one that is publicly available), and several of them included text in menus et cetera. It was always an absolute eyestrain and a pain to read anything. Maybe it'll get better once the new version is released, which supposedly has a much higher resolution.

But I suspect even then it'll be impractical: not being able to see your keyboard/mouse in your peripheral vision, having to move your head around to bring different parts of the document into focus, not being able to see a book/document sitting next to your keyboard for reference, the added strain over time... as well as the added complication that most people who code probably have to look at other people's screens occasionally, talk to other people, et cetera, which will involve a lot of switching back and forth, which can be quite disruptive.

See also this video for some experience from the Owlchemy Labs guys, who have been using the Rift extensively during development for testing their game: https://www.youtube.com/watch?v=DqZZKi4UHuo


It still hurts everybody, because it bloats your files with unnecessary opcodes and forces the JIT to sift through it and optimize it away, instead of spending its time to do useful optimizations.

It might only seem like a little turd on the sidewalk that you can easily step over, but quite evidently in the case of Flash little things like these have added up, and its performance is now -- in the metaphorical sense -- up to its neck in shit.


> It still hurts everybody, because it bloats your files with unnecessary opcodes

After compression, what do you figure the cost is? I mean, that looks like a lot of text, but as opcodes it's a few bytes at most before compression. Improving video codecs provides way larger gains than fixing things like this.

> forces the JIT to sift through it and optimize it away

That code is already going to be there, isn't going to burn up a measurable amount of CPU, and only imposes a cost once per load. There are better places to work on improving the product.

> It might only seem like a little turd on the sidewalk that you can easily step over, but quite evidently in the case of Flash little things like these have added up, and its performance is now -- in the metaphorical sense -- up to its neck in shit.

That's actually far from clear. I think another way to look at it is that Flash has so many other problems that focusing on this BS clearly isn't worth their (or anyone else's) time. Heck, for all we know some of this has been caused by fixes to other performance problems they've been working on.


If the hardware isn't the bottleneck, you can simply make your compression/encoding algorithms more complex, because 10% saved traffic is probably much more valuable than 10% saved CPU cycles (remember, on phones traffic means a lot of energy drain for sending/receiving). Besides that, though, the FFT is old, so whatever codec you use to view HD videos on the iPhone probably already uses the FFT or some comparable transform, like wavelets.
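For what it's worth, the transform underneath those codecs is easy to demonstrate: a naive DFT (which the FFT merely computes faster) concentrates a signal's energy into a few coefficients, and near-zero coefficients are what compress well. A minimal textbook-definition sketch (real codecs typically use DCT variants rather than the raw complex DFT, but the principle is the same):

```python
# Naive O(N^2) DFT from the textbook definition;
# the FFT computes the same result in O(N log N).
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure cosine at frequency bin 2, sampled over 8 points:
x = [math.cos(2 * math.pi * 2 * n / 8) for n in range(8)]
X = dft(x)

# All the energy lands in bins 2 and 6 (magnitude N/2 = 4); every other bin is ~0.
mags = [abs(v) for v in X]
```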


You need to write lots and lots of code to write a successful application in any language, really...


Well, WebSockets are somewhat more efficient in terms of overhead, the latency will probably be a little better, they can be handled more efficiently/easily on the server side, and they don't have the other drawbacks Comet has (AFAIR in some browsers, say, Opera, if you use Comet you've basically used up your quota of allowed connections, so making additional XHRs might be problematic).
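To put rough numbers on the overhead point: per RFC 6455, a client-to-server WebSocket frame carries only a small fixed header (2 base bytes, an extended length field for larger payloads, and a 4-byte masking key), versus hundreds of bytes of HTTP headers on every long-poll request. A small sketch of the arithmetic:

```python
# Per-message framing overhead of a client-to-server WebSocket frame (RFC 6455).
def ws_client_header_len(payload_len):
    if payload_len < 126:            # 7-bit length fits in the base header
        ext = 0
    elif payload_len <= 0xFFFF:      # 16-bit extended length field
        ext = 2
    else:                            # 64-bit extended length field
        ext = 8
    return 2 + ext + 4               # 2 base bytes + extension + 4-byte mask

# A 50-byte chat message costs 6 bytes of framing; a typical HTTP long-poll
# request spends on the order of several hundred bytes of headers for the
# same payload, and pays it again on every poll.
```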


> try websockets, if fails, use comet

I've seen this statement, or similar, a few times. But Comet servers use WebSockets. So WebSockets are not an alternative to Comet. They are a transport technology that is better suited than HTTP (long-polling, streaming) to the Comet paradigm and to building interactive web apps. Here's a diagram I created a while back which hopefully explains my point of view: http://i.stack.imgur.com/BOFC2.png

Spot-on about the benefits of WebSockets over HTTP-based solutions when it comes to efficiency. The HTTP connection limit was a problem a while back but I don't believe it is any more.


Boards for high-end FPGAs like that are not cheap to make yourself either (you need like 6 layers or more, because they have SO MANY densely packed pins, plus one or two layers to feed their gratuitous hunger for power), so you'll probably end up having to buy some ready-to-use solution that quadruples that price...

