
But the entire premise of decentralized finance is to have no central authority, is it not?


Just because there is no central authority in control of crypto transactions like traditional banks does not mean laws and courts no longer exist. You can steal someone’s bitcoins by hacking their wallet but you can still be ordered by a court to pay it back and/or face jail time.


Thank God for real world centralized authority.


I think what the "centralised authority" steals from the people every year would put all the crypto theft to date, combined, to shame.


If judges have the last word on who owns how many bitcoins, what is all that electricity being wasted for? I thought it was to avoid relying on an authority...


My understanding is that decentralization helps avoid things like runaway inflation or banks deciding at will how much money they want to siphon off each transaction. If you are stealing someone else’s money, a court is not going to dismiss charges just because the transaction is not reversible according to the protocol of whatever cryptocurrency you used.


Decentralisation doesn't help with inflation; it only makes everything more expensive due to the extravagant cost of the consensus mechanism. Banks are already decentralised (should be obvious since there are many banks, not just one), and competition among banks is what prevents them from charging high fees.


> does not mean laws and courts no longer exist. You can steal someone’s bitcoins by hacking their wallet but you can still be ordered by a court to pay it back and/or face jail time.

I'm not aware of any court that would, for example, jail a Chinese citizen who stole the virtual money of a US citizen. Are you?


Physical matter is decentralized but that doesn't give you a right to steal someone's gold without legal action.

Decentralized systems have different benefits for different people. For instance, some people like to trade 24 hours without corporate middlemen stopping you from trading at night. That's not the same as avoiding legal authority.


No central authority within the system. Treating the hacked crypto as stolen property to report to a central authority who can use the threat of physical violence is not incongruent with the current defi system. It's a false dichotomy presented by those naive to the pros and cons of all this.

Ideally governance would also be done in a decentralized manner, but we’re not there yet.


It's completely incongruent. The whole point of "decentralized" is to avoid relying on a central authority, and the price to pay is inefficiency. But if you're relying on a central authority anyway, why be inefficient?


Everyone wants no central authorities until they're being enslaved by their regional warlord.


Everyone wants central authorities until their money is incessantly devalued and squandered through cronyism and waste.


Can you give an example?


I can't think of a good one. Graph-like data structures are a common example of something that's a bit difficult in Rust - see https://stackoverflow.com/questions/34747464/implement-graph.... But the answers there seem reasonable, and I'm not a Rust expert. I guess my point is that nobody would even need to ask this question if the implementation were in C.


Whoa. Do you have any links I can read about this? That sounds incredible


Check out this video starting at the 4:06 mark.

https://youtu.be/LF77qpbvkxo?t=4m6s

They talk about crows passing knowledge from one generation to the next, and they show an experiment that they performed on some crows.

While the young crows are still in their eggs, the scientists wear masks and capture the adult crows, teaching the adults that this mask means danger. They then observe whether the adult crows pass the knowledge of the dangerous mask on to their young.

However, in this video the young are learning about the dangerous mask because they get to see the adults react directly to it.

I don't know of anything similar where the adults communicate this kind of thing without the masked person being present.

It is possible that the young ones in turn taught the next generation without the masked individual being present. But I don’t remember and am not going to watch the whole video again right now.


The WD response to the original bug report (from https://www.wizcase.com/blog/hack-2018/):

> The vulnerability report CVE-2018-18472 affects My Book Live devices originally introduced to the market between 2010 and 2012. These products have been discontinued since 2014 and are no longer covered under our device software support lifecycle.

So support can be discontinued just two years after introducing a product??? That's ridiculous. It's not even like this was a one-off product and they've left that line of business. WD still sells My Books.


Support can be ended at any time, leaving you hanging, when you rely on proprietary software.

The fallout from this will be worse for Western Digital's brand than just patching the cloud service, or smartly shutting it off if they weren't going to support it, would have been.


If they sold it in the EU, I think they are legally on the hook for at least 2 years of support, assuming the software not working directly affects the sold device's capabilities.


I don't think so. The seller (not the manufacturer) has to make sure that the device is free of faults for up to two years. For the first 6 months it's up to the seller to prove that the device had no issues in the first place; afterwards it is up to the buyer to prove that the device was faulty to begin with.

The manufacturer can only provide a warranty and exclude a lot from it. Like Motorola does with the batteries of its smartphones, which only have 3 months of warranty. The seller has to cover the 2 years.

There is no obligation to support a device, like to provide firmware to fix bugs. If the hardware/firmware has a bug, it is a defect, which entitles you to a fault-free replacement within 2 years.


I’m not sure it’s so limited:

https://europa.eu/youreurope/business/dealing-with-customers...

In particular there is mention of purpose:

> is not fit for purpose - either its standard purpose or a specific purpose ordered by the customer which you accepted


The page you link to does not refer to the manufacturer, but to the seller.

The seller will tell you to send the item to the manufacturer to check if it is a "warranty" case. You, on the other hand, will tell your seller that you won't do that, that you will send the product to him, and that he will have to check if it is covered by the "legal warranty" (notice the difference between the mere "warranty" of the manufacturer and the "legal warranty" of the seller). It is then up to the seller to forward the product to the manufacturer, directly provide you a replacement, or repair it himself.

If you send it to the manufacturer, like the seller wanted you to, the manufacturer can send it back to you untouched, saying it is not covered by his "warranty". You will have to pay the shipping. Then you will have to send it to the seller, and also let him know that you had to pay for shipping to the manufacturer and that you'd like that money back as well, which the seller can reject (but probably won't).

The seller is in a worse position than the manufacturer, because only he is bound to the things mentioned on the page you linked to.


There is a difference between the seller and the maker. But as the customer is supposed to be covered one way or another for 2 years, does it really matter in the end?

For instance, Apple had to extend its warranty to 2 years in Europe, while leaving it at one year everywhere else for a while. The same way, most sellers won't deal with makers that will put them on the spot for repeated repairs down the line.


It does matter, because in Europe the manufacturer usually has a reduced warranty (or like you say, 1 year in other parts of the world).

Apple had to extend it because they are not only the manufacturer, but also a seller (same goes for Google and Microsoft).

If you buy your MacBook on Amazon then it can very well be the case that you are not getting the same warranty you get when you buy it directly from Apple. In that case Amazon is responsible for covering the difference in warranty, and you'll have to deal with Amazon if you have problems. But Amazon will tell you to send it to Apple, and if they don't cover it, send it to Amazon, and they'll either repair or replace the device for you.

> The same way, most sellers won’t deal with makers that will put them on the spot for repeated repairs down the line.

This is why the manufacturers will usually be accommodating towards the seller. You're in a better position if you let them both take care of it between themselves, instead of doing the seller's work. Of course this will take more time, but you're more likely to get a better result.

If you live in Europe, just keep in mind that you usually have more rights if you contact the seller instead of the manufacturer, as there's no small print you'd need to read on a warranty card. Either the device works or it doesn't.


In Bernal Heights, someone smashed my window and stole my camera out of the back seat of my car while I was in the car. It was quite traumatizing, and I don't really like going to San Francisco anymore. (They knew we were in the car, but went for it anyway. Pretty brazen, and they got away with it.)


And this won't be prosecuted as the armed robbery it was, even if they bother to catch the perpetrator.


Sure, but the point is, people who experience these things are going to talk about them and make it seem like this happens all the time. Everyone who has nothing to report stays quiet, so it can seem like nothing good ever happens.


You are making it sound like it's normal to have your car broken into, and that the only difference is the rate.

This is not the case. In many places, you can leave your car unlocked for years, and no one is going to steal stuff from it.

Under those circumstances, even one case is reasonable evidence that the crime situation is bad. And here we have evidence of much more than one case.


I mean, stuff gets broken into all the time, unless you're talking small-town suburbs and rural areas, which don't seem fair to compare to a city. You're all making it sound like if you park in SF it's guaranteed your windows will be broken. It might feel like that if you only look at all the negative things people say.


This is really interesting. Prior to this, Firefox's isolation model was much weaker than Chrome's due to only having a pool of 8 content processes. If I'm reading the technical blog correctly [1], this will move to a process-per-site model without also doing process-per-tab as Chrome does, i.e. if you have several tabs open on the same site, they'd be in the same process. This seems much less resource intensive than Chrome's model while still delivering similar security properties.

[1] https://hacks.mozilla.org/2021/05/introducing-firefox-new-si...


> process-per-tab as Chrome does

This is a common misconception. Chrome doesn't technically do process-per-tab.

Chrome's model can most succinctly be described as process-per-domain, although even then, there are rare instances where two tabs opened on different domains will actually share the same process.


It’s a misconception that Google fostered right from the start.

They did advertise Chrome as process-per-tab: https://www.google.com/googlebooks/chrome/big_04.html; pages 6 and 7 also definitely agree. (I haven't read all through it again now, but I should also note that the process in the very centre of https://www.google.com/googlebooks/chrome/big_38.html shows what appears to be two tabs under it, which supports it not necessarily being process-per-tab.)

But either it never actually was, or they abandoned it as impractical even before release (https://stackoverflow.com/q/42804 has answers agreeing it isn’t process-per-tab on the day after the first release). So either Google lied, or they released the comic including a glaring and rather significant factual error (even if it had been true when Scott first drew the pages).

It’s frustrating when parties pull these shenanigans, making big claims around things like security and performance predicated on points that are simply not true, but never retracting those points properly or repudiating them, so that the misconception persists.


It's "scheme + eTLD + 1", with a flag to set it to per origin.


Sadly, process-per-site also means memory usage will skyrocket, which the linked post doesn't mention.

It's ridiculous to think that a budget laptop with 4 GB of RAM suddenly isn't enough to browse the Web comfortably. All thanks to Meltdown and Spectre.


If browsers are careful not to use much CPU and memory, web developers will just bloat their sites even more because there is room for it. Let browsers bloat; it will slow down website bloat.


Let's steal everything we can grab, it will slow others from stealing.

Let's buy all the toilet paper, it will slow others from buying toilet paper.


Complementary to this, one can use the Temporary Containers addon to get isolation of e.g. cookies. I've set it up to run one container per domain, and it works really well. I hope they merge this into Firefox at some point.


I ended up having to disable my containers plugin due to syncing issues and, later, CPU usage. It wasn't terrible on a better processor (like 1-2 cores at 50-100% consistent usage), but on my old Core 2 Duo ThinkPad it was basically useless. And on any laptop that was unacceptable.

I like the idea of containers, and will probably revisit periodically to see if whatever was fubar on my account is resolved (there's little if any logging, so it's hard to really dig in).


How did you set it up to use one container per domain?

I'm using Temporary Containers, but if I visit `somedomain.com`, close it, and come back later, I get a new temporary container.


First Party Isolation is the native version of this (AKA Total Cookie Protection). Set Enhanced Tracking Protection to Strict to enable it.


Actually, the class you linked was an experimental class, and we ended up going with Python instead. (source: I was the head TA for that class)


Why a dynamically typed language to learn with?


I think there were two main reasons: choosing a popular language (JS and Python happen to be among the most popular, accessible languages) and pedagogical simplicity (it's nice to avoid burdening beginners with extra syntax when they're already struggling to learn other things). Some unscientific evaluation showed that students picked up static types just fine in our second programming class (which is in C++).

Personally, I think we should be teaching static types and I pushed for TypeScript, but it didn't happen.


> I think we should be teaching static types and I pushed for TypeScript

Another option would be to introduce Python types:

    def square(x: int) -> int:
      return x*x
http://mypy-lang.org/examples.html
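
For instance, running mypy over the file catches a mistyped call before the program ever runs (a small sketch; the error text is approximate):

    def square(x: int) -> int:
        return x * x

    square(3)    # fine: returns 9 at runtime
    square("3")  # mypy error: Argument 1 to "square" has incompatible type "str"; expected "int"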


Is there something like DefinitelyTyped for Python? TypeScript only became useful once the ecosystem reached a critical mass of type definitions available for popular libraries.

At the moment the Python community seems pretty anti-static-types, and the core scientific libraries (numpy, matplotlib, pytorch) don't export type signatures. Language server / intellisense support for type inference also has a long way to go.


Not that I am aware of. You can create your own types, like what has happened with boto3 [1].

I would assume that as types become more widely used, a DefinitelyTyped-style project will appear on PyPI.

[1] https://pypi.org/project/boto3-type-annotations/
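
The usual mechanism for shipping such types separately in Python is a PEP 484 stub file (.pyi): type information for an untyped library lives in its own file that only the type checker reads. A hand-written sketch with hypothetical names:

    # mylib.pyi -- hypothetical stub distributed alongside an untyped
    # "mylib" package; the type checker reads this, the runtime never does.
    from typing import Sequence

    def mean(values: Sequence[float]) -> float: ...
    def clip(value: float, lo: float, hi: float) -> float: ...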


Always an option to push for something like mypy. That'd at least give students exposure to types (even if they don't have to write them)


I started programming in Lua, which is dynamically typed. It allowed me to experience why a type system would be nice. I'm sure if I had started with a typed language, I would have written static typing off as antiquated and unnecessary.


> it's nice to avoid burdening beginners with extra syntax when they're already struggling to learn other things

As someone who teaches college students to program (not in a CS department), I'd say that statement is correct but doesn't provide the full picture. It's good to get rid of the extra syntax if and only if you don't need it - and that means you're intentionally limiting what you're teaching them.


> you're intentionally limiting what you're teaching them

Yes, that’s the point. It’s an introductory class, choices have to be made about what to teach when.


Well, of course, but the point is that avoiding syntax is not a great criterion for defining the fundamentals of programming. A couple orders of magnitude more important is to decide what concepts are useful for beginning programmers to know.


I took it as "removing syntax left more space for other stuff"


I am sure I’m absolutely nowhere near your knowledge and expertise, but please bear with me sharing my opinion :)

I think Java fits better as an introductory language — it is similarly popular to JS and Python, and while I'm sure static typing can be picked up easily, I think fighting the compiler to end up with a likely-correct program is a better alternative than puzzling out why the program failed with one given input and not with another — so my not-backed-up-by-teaching opinion is that dynamic languages are not necessarily a great fit. It also features the basic OOP concepts, and modern Java can showcase plenty of FP ones as well.

On the other hand, going lower level is similarly detrimental, I’ve taken a class where C was used, and that way you get cryptic runtime errors as well, while memory management is not that hard to pick up later on.


Intro classes don't need to overburden the student. The biggest hurdle for students is thinking like a computer. You start introducing all these extra things just to get going and it becomes even more difficult.

Explaining to students what public static void main means is pretty annoying and seeing cryptic syntax littered everywhere does not help students when they’re first learning.

Dynamic languages make much more sense to beginners because the idea of what a variable represents is still abstract to them, not tangible the way it is to you. They don't see the value of types because they're not going to be building large programs where that matters. They know what their functions return and take in, because they probably only have one or two. Performance and compilation are also not as much of a concern, etc.


> Explaining to students what public static void main means is pretty annoying and seeing cryptic syntax littered everywhere does not help students when they’re first learning.

C# has solved that with "top-level statements" [1]. If Java added that, then problem solved, right? It's a simple addition.

[1] https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csh...


C# would be a better choice. In addition to top-level statements, the standard API seems much less "dogmatically" OOP. Reading a file is just one static method away. Unfortunately, it's still not possible to have top-level functions, right?


But C#, on the other hand, is a very, very complex language, with almost double the keywords compared to Java.


Yes, but I think the learning curve matters more, especially for students who are going to be mostly writing code, not reading existing projects.


I agree with you on the overburden part, but not on the dynamic lang part. My very minimal experience is that the hardest thing to get right as a beginner (or even as an experienced developer) is to be able to follow the flow of your code (maybe introductory courses should employ the debugger?) and the exact state it has at a given point, e.g. at an exception. Restricting the valid states IMO helps understanding, as does not having to debug at all at first because the compiler points out what is wrong immediately.


I think that you're right in that it's hard to compose all those parts of your program together without accidentally mistaking a `Query` for a `string`, or god knows what. I'm helping a friend learn web development, and most of the issues they get stuck on for a long time are solved by making sure the functions they're calling return what they think they do.

On the other hand, I feel that most of the lessons I've learnt that have really stuck are the ones where I first do a thing wrong, and then find how to do it right. In a similar fashion, letting the student create a steaming pile of ... and then giving them a solution to their troubles in a later course feels like a good idea.

I personally have come full circle: starting with things like Pascal and C++, then going to C# and finding type inference interesting, then getting a HUGE boost to my knowledge and skills when I found Python and JS. A few years later, my personal preference is having static type checking, and TypeScript does that pretty well, in my opinion. However, every time I remember how much I managed to learn and understand about programming when I switched to Python, I think that if I had to teach someone, I'd pick something expressive and intuitive, and who cares about types when you really want to maximize exposure.


> I think fighting the compiler for a likely-correct program in the end is a better alternative than understanding why the program failed with this given input and not with another

In my experience (having TAed a CS1 course) I think it is better for introductory students to be able to figure out what they are doing wrong, rather than having a compiler point it out to them.

In the first class, we want to focus on computational thinking and being able to then express their ideas into programs. So we intentionally use very little of the language (JS in my case), because the language is not the point of the course.

OOP and all these models of abstraction and code organisation come later, once they have a good grasp of the fundamentals.

This particular course I taught is only taken by CS freshmen, so that other commenter's remark (that we should teach a popular language) doesn't apply here.


> I think it is better for introductory students to be able to figure out what they are doing wrong, rather than having a compiler point it out to them.

That assumes they actually will figure it out, instead of ending up with a finished project that can randomly crash on slightly different input.

Also, the compiler is basically just a shorter feedback loop.

But I agree with you that OOP should only come later (though using objects is inevitable in most languages)


I think wrapping all your code in OOP boilerplate is probably too much for an intro course. One of the nice things about Python is that hello world is `print("Hello world")`. No `public static void main(String[] args)`, no nothing. And then on top of that, you can add your functions and classes and whatnot, but it's easier to wrap your head around a zero-fluff entry point.
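
A sketch of that progression: the first program is one line, and structure gets layered on only as the concepts are introduced:

    print("Hello world")      # lesson one: this is the entire program

    def greet(name):          # later: functions, once variables make sense
        return f"Hello {name}"

    print(greet("world"))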


print "Hello world" - python 2 was/is even nicer in that regard.


> I think Java fits better as an introductory language

Oh no, for a beginner just setting up the environment is a huge hurdle. Sun's Java downloads page is utterly confusing, and the IDEs are monstrous and complex too. Past that, they have to learn about the JVM and compiling to JARs before they can even run their first program. Leave all that housekeeping to us seasoned, soul-crushed developers ;)

If we're starting to teach compiled, boilerplate-ridden languages to beginners, why not jump straight to C++ or Rust?


Sun's download page?? Also, vim is more than enough for beginners, and there is hardly anything hard in `javac Main.java && java Main`. With a modern JDK, `java Main.java` will even compile and run it in one go. JARs are absolutely not required.


Oracle! I guess I showed my age, and that the last time I developed on Java I had to use Eclipse and I still have PTSD from it.


Having gone the other way (starting with JS and moving into languages like Objective-C), I actually disagree. Having a solid foundation in software writing helped me when I started to learn about the somewhat obtuse abstract concepts that tend to be coupled with learning strict typing.

Then, when learning Flow and Typescript, where I'm expected to have a meta understanding of typing itself, I was ready to go.


As someone who began learning with a statically typed language then moved to a dynamically typed language, I can say that I found learning a lot simpler with a dynamic language.

I see static typing as a nice tool to help stop you making errors; but when I was trying to wrap my head around how to program I found it easier to not have to fight a compiler and allow myself to make mistakes.


Why did you guys switch to Python instead of JavaScript?


Python is much more limited as an introductory language than JavaScript. JavaScript is very versatile. Need to write server-side logic? Use JavaScript and Node. Need to write a native iOS app? Use JavaScript and React Native. Need to write a command-line utility? Use JavaScript and Node again. Need to make your web page interactive? Use JavaScript in the browser. Python is versatile as well, but not to the extent of JavaScript. JavaScript, when type-checked using TypeScript, also scales better to larger projects and development teams. The one area where Python shines is in machine learning, where Python is a de-facto standard. But here too, Python is facing stiff competition from Julia.


It's an introductory course. The goal isn't to train students to be able to do all those things. I'd wager most professional programmers aren't able to do more than a couple of them well. There's more to front-end/back-end/CLI/mobile than just the language.

> But here too, Python is facing stiff competition from Julia

You severely overestimate the significance of Julia in this field. It's a great language, and I wish it were more popular, but Python still dwarfs it.


Out of the use cases you mention only server-side and browser are a good fit for JavaScript, and Python does quite well on the backend. Writing mobile apps with JS is a frustrating experience at best, and requiring Node for a CLI app is not great UX, unless you bundle Node in an executable.

Students should learn early to use the right tool for the job, not reuse the same one just because it's already familiar.

IMO Python is a better first language because it's more approachable, and doesn't have the prototypal inheritance and other quirks unique to JS (not that it doesn't have quirks of its own, but most aren't important early on).


You seem to have limited visibility into Python's versatility. It's much, much more than you describe, and I'm saying that as someone who hates Python with a passion (and has been gainfully employed writing Python for most of my career).


None of these are requirements for an intro class.

Python excels as a teaching language.


Can confirm. As someone who never formally learned programming, the moment I learned Python I never willingly touched C# or JavaScript again.

The syntax of Python is just way, way more straightforward. I do agree it may be a disadvantage for serious software engineering, but as a hobby? I will pick Python over others all day every day.


The introductory course is not taken by just CS students. In most colleges, introductory programming courses are open to all majors. For many students this will be the only programming course they will ever take. Why wouldn't you teach a language that is not only well designed, but also versatile, instead of just being a good "teaching language"?


You are describing Python.


> not only well designed

JS, and its multitude of foot-guns, is not something I would call 'well designed'.


I've been shooting for 10 years and didn't know this at all!! I just saw a photo of a lens mounted in reverse and my mind is blown. Time to go down a rabbit hole trying to understand why this works...


lenses pass light in both directions; if you flip one around, it bends the light the opposite way ;)

http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/image.html...

the lens is designed to take rays traveling from optical infinity and converge all those rays at a focal plane that's (e.g.) 30 millimeters away, right? well, if you flip the lens around backwards, then it will send out rays towards optical infinity from a focal plane that's 30mm away... which is what a macro lens is.

in fact in the old days people used to use their camera as an enlarger to expose their negative onto prints. The theory was that using the same lens that took the image in reverse would help to correct the aberrations of the lens... I am not sure that's actually supportable compared to using a real enlarger lens that's optimized for high-magnification work, but it's a neat theory, and possibly better than just using a random crap enlarger lens ;)

now if you really want to have fun... ever wonder why diffraction makes your image softer at smaller apertures? ;)

https://www.cambridgeincolour.com/tutorials/diffraction-phot...
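
A back-of-the-envelope sketch of where that's going, assuming green light at 550 nm and an ideal diffraction-limited lens (the linked tutorial has the full story):

    wavelength_mm = 550e-6  # 550 nm green light, in millimetres (assumed)
    for f_number in (2.8, 8, 16):
        airy_mm = 2.44 * wavelength_mm * f_number  # Airy disk diameter
        print(f"f/{f_number}: blur disk ~{airy_mm * 1000:.1f} um")
    # f/2.8 ~3.8 um, f/8 ~10.7 um, f/16 ~21.5 um: at small apertures the
    # diffraction blur dwarfs a typical ~4 um pixel, so the image softens.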


Are you also familiar with extension tubes? I think in both the reversed-lens and extension-tube cases the lens sits further from the film or sensor (the focal plane). Somewhat counter-intuitive, at least for me, is that when the lens is focused at infinity, it's at its closest point to the focal plane. Moving it further from the focal plane focuses closer.
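
A thin-lens sketch of that effect, with idealized numbers (assuming a simple 50 mm thin lens and a 25 mm tube; real camera lenses are thick compound designs):

    def subject_distance(f_mm, image_dist_mm):
        # Thin lens equation: 1/f = 1/s_o + 1/s_i, solved for s_o.
        return 1 / (1 / f_mm - 1 / image_dist_mm)

    f = 50.0                        # focal length in mm
    s_i = f + 25.0                  # a 25 mm tube pushes the lens out to 75 mm
    s_o = subject_distance(f, s_i)  # -> 150 mm: focus is now quite close
    print(f"subject at {s_o:.0f} mm, magnification {s_i / s_o:.2f}x")  # 0.50x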


Flutter feels so strange to me. In many ways, it's reimplementing major parts of a browser, compiled to (web)assembly, running inside a browser compiled to assembly... Watch: five years from now, someone will add a low-level programming framework to Flutter, and then someone else will reimplement the browser using that.


Flutter is a UI framework on top of a low level programming framework, Dart. Sort of like UIKit and Cocoa.


I think you're thinking of Skia. Dart is a language.


Turtles all the way down! It would be more humorous if we used Skia, since that arguably fulfills OP's proposition that someone would eventually rewrite the browser in said framework.


That was really short and sweet. Thanks for the writeup!

