A question: even if the source is open source, what prevents the vendor that sends it to the fab from inserting back doors? Is there a way to verify that the source on GitHub was indeed what was used?
Yes and no. Sending a chip to one foundry for the first few layers and then taking it to a second or third foundry has been done at Stanford as a proof of concept to mitigate this attack. Other attacks are still possible, but the likelihood of success starts dropping off a cliff.
Foundries will generally use different “standard cells” (ratio of dopant in silicon to make the basic building blocks of your “P’s” and your “N’s”) so this is actually a big ask and not trivially supported out of the gate.
This in turn can be worked around by double-welling one's designs, but it becomes a yak shave real fast.
With OpenTitan compiling on open synthesis tools we’re about 3 of 5 steps towards an open silicon root of trust.
I think the parent is asking about a different problem: how to verify when you are not the one sending it to the fab. (Just like how to verify binaries when you are not the one doing the build.) In both cases the fab and the compiler must be trusted.
There's a way to verify it statistically but it will be very expensive. Perhaps the verification can be automated down to commodity price if it is done often.
Get your batch of 1000+ devices. Uncap, repeatedly etch and scan a random subset to check the circuit is what you expected. Make sure it's you that chooses the subset.
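To give a sense of the math behind sampling: the detection probability is hypergeometric (sampling without replacement). A minimal sketch, where the function name and the example numbers (5% of a batch tampered, 60 chips destructively inspected) are my own illustration, not anything proposed above:

```python
from math import comb

def detection_probability(batch_size, tampered, sampled):
    """Probability that destructively inspecting `sampled` randomly
    chosen chips catches at least one of `tampered` bad chips in a
    batch of `batch_size` (hypergeometric, without replacement)."""
    clean = batch_size - tampered
    if sampled > clean:
        return 1.0  # can't draw the sample from clean chips alone
    return 1 - comb(clean, sampled) / comb(batch_size, sampled)

# 1000 chips, 50 tampered: inspecting 60 at random catches the
# tampering with probability above 95%.
p = detection_probability(1000, 50, 60)
```

The catch this makes explicit: sampling only works against an attacker who tampers with a noticeable fraction of the batch. An adversary who back-doors a handful of chips destined for one specific customer stays well under the detection threshold.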
ps. I would love to implement this if someone thinks it's a serious proposition!
It's not like it's easy to verify that binaries from Linux distributions correspond to sources without back doors, although https://reproducible-builds.org/ is working on that problem. I am not aware of any similar effort for open hardware.
I personally use Rawtherapee, which seems more advanced (but a little more difficult for beginners to use) than Darktable. Is there any reason to use Darktable apart from ease of use?
I haven't used Rawtherapee much. I liked the creative opportunities it offered.
Darktable's workflow better met my needs at the time I was picking between them. In particular, Darktable was better for processing a few hundred images in a couple of hours right after shooting; its integrated image management was the key difference. Darktable's ability to offload processing onto GPUs was also a factor: GPUs make it much faster than CPU-only processing.
Rawtherapee is what I use too. I like tools that I can just point at a random directory, and get to work on. There are a lot of photography/image-management tools which insist on importing images into a "library", which ruins my filing-system.
I've got a photoshoot booked for tomorrow, so I'll try this out and see how it compares. (An average shoot for me results in 200-400 CR2 images to examine/reject/process.)
Just based on looking at the sites, Rawtherapee appears to focus on RAW "developing", while Darktable appears to also do photo management and Lightroom-style non-destructive editing.
RawTherapee does photo management and is non-destructive as well.
Darktable supports masks and parametric masks for almost all operations. RawTherapee on the other hand has a wider range of tools and support for profiles and raw formats.
RawPedia's "Features" page (http://rawpedia.rawtherapee.com/Features) doesn't mention photo management, and I didn't find anything by searching for "catalog" (Lightroom's name for a collection of managed photos), etc. It'd be great if the site had more info about that aspect of the app.
The equivalent functionality is the File Browser tab when you open RawTherapee. You can filter images, assign ranks/color labels, create queues, do batch edits, assign a dark frame, etc.
It doesn't have "collections" or "catalogs" per se, if that's what you're specifically looking for.
As much as saying this is probably going to get me a lot of hate from web developers, the world needs more browser engines. Simpler ones, maybe HTML+CSS only with no scripting. The idea of the Web as a flexible hyperlinked document system and not an application platform needs to gain more support. IMHO if your site is information-centric, and it's not readable in these "document-only" browsers, you're doing it wrong.
I don't understand this hatred of Javascript. The only websites I've felt were actually bloated are news sites with a lot of ads, but that's not a problem with Javascript as much as it is a problem with excessive ads.
What counts as information-centric? A lot of basic things (commenting, searching, liking a post) require Javascript. If you want to use pretty animations, there's a high probability you need Javascript.
Making information-centric sites only use HTML/CSS would significantly decrease the capabilities and attractiveness of the sites.
> The only websites I've felt were actually bloated are news sites with a lot of ads, but that's not a problem with Javascript as much as it is a problem with excessive ads.
The problem is JS has too much power in the browser, and too little consideration for security. It can effectively take control away from the user, there's virtually no way to know what it's doing without source code audits, which are prohibitive, and the security vulnerabilities are legion.
> What counts as information-centric? A lot of basic things (commenting, searching, liking a post) require Javascript.
None of these actually require JS.
Can you imagine Facebook doing a page reload/refresh every time you click to like a post? Or not loading more content on demand every time you scroll the page?
> Can you imagine a facebook doing a page reload/refresh every time you click to like a post?
You're stuck thinking about Facebook as if it still had long lists of posts with infinite scroll. The UX would be completely different when the design constraints are different.
For instance, instead of infinite scrolling, you might show one post at a time with clickable previews of the last and next posts. A like doing a full postback isn't a big deal with this approach, particularly with judicious use of anchors. Certainly not as slick, but perfectly usable.
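To make the "full postback" pattern concrete, here is a minimal sketch of a JavaScript-free like button: a plain HTML form POSTs to the server, which bumps a counter and answers with a 303 redirect back to the post's anchor (Post/Redirect/Get). The route name and in-memory store are hypothetical; it's written as a bare WSGI app so it needs nothing beyond the standard library.

```python
likes = {}  # hypothetical in-memory store: post_id -> like count

def app(environ, start_response):
    """WSGI app handling POST /like/<id> with no JavaScript involved."""
    path = environ["PATH_INFO"]  # e.g. "/like/7"
    if environ["REQUEST_METHOD"] == "POST" and path.startswith("/like/"):
        post_id = int(path.rsplit("/", 1)[1])
        likes[post_id] = likes.get(post_id, 0) + 1
        # Full page reload, but the fragment anchor drops the user
        # right back on the post they just liked.
        start_response("303 See Other",
                       [("Location", f"/feed#post-{post_id}")])
        return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

The markup side is just `<form method="post" action="/like/7"><button>Like</button></form>`. Not as slick as an XHR, but the 303-plus-anchor keeps the reload from feeling disorienting.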
Put me in the group of users who despise infinite scrolling --- I would much prefer a paged interface (like the way it was before IS became popular) because it gives you a sense of where you are, and more importantly, an O(1) way to resume where you left off.
(I suppose the companies like IS because it has an addictive property, but I suspect me and others who see through that don't like it at all. Relatedly, the other popular concept of a "feed" also conjures up images of farm animals munching away at a trough; perhaps that is the real intent...)
A slightly worse user experience on Facebook, for a significantly better and more secure user experience on the web overall. I'm not sure that's such a terrible tradeoff.
Agreed on built-in browser behaviour though. Chrome pushing more input types a few years ago was a great thing.
> Making information-centric sites only use HTML/CSS would significantly decrease the capabilities and attractiveness of the sites.
For me, and probably many others, the "attractiveness" of sites that don't use JS is far higher. Searching the Web for information with JS enabled is like visiting a library full of books that will randomly turn their pages, jump around, and scream at you like those in the Harry Potter world.
Intrusive ads are a problem, but not the only problem.
My basic issue with JavaScript is when it's used to move stuff around after the page is done loading, or when it significantly delays when loading finishes.
Voting on posts and well-done search autocomplete are nice uses of JavaScript; commenting itself works fine without it (I don't think there's any JavaScript involved where I'm writing this?)
Some days it feels like I spend as much time waiting for pages to load in 2018 as I did in 2000, and pages certainly look prettier, but they don't impart any more information.
I started a weblog a couple of days ago, and the joy of publishing generated markdown documents on a fileserver is as big as any of my JavaScript ventures.
You didn’t answer the question. All you did is try to misdirect the discussion.
I think he has a valid question. I’m in the market for a replacement for my Onion Omega2’s, so I clicked the link. The fact that the product page is exclusively in Chinese does not make me confident.
Western bias? Maybe. But that doesn’t mean the concern isn’t valid. I’d worry if it was in Russian, too.
I think your concern should be proportional to your value as a target of a hack. There's a reason billionaires have private bodyguards and we don't, and there's a reason high-value systems are built from scratch in tightly controlled environments and use things like hardware-based security.