Hacker News | alquemist's comments

In a WASM + seccomp implementation, the whole WASM runtime runs inside seccomp. Breaking out of WASM leaves one running arbitrary asm inside seccomp, which has exactly the same attack surface as directly running untrusted binaries inside seccomp. The WASM layer in WASM + seccomp simply requires an additional exploit.


You’re referring to seccomp-bpf, not seccomp. Seccomp-bpf + wasm has a dramatically larger attack surface than just seccomp. Please check references if you do not know the difference between seccomp-bpf and seccomp: https://en.wikipedia.org/wiki/Seccomp


That's a bit of apples and oranges. JS/WASM are runtimes executing hostile code, whereas Go apps are trusted code.


* Predictable performance.

* A wide ecosystem of mature language toolchains.

* Simplicity: JS implementations contain sophisticated JITs, which are harder to prove correct than a simple ASM translator.

* Portability: not tied to a specific HW architecture.


? Those are reasons to use JS, not WASM.

"A wide ecosystem of mature language toolchains." - yes, for Javascript, not for WASM, which isn't deployed really anywhere in production and there aren't even best practices for it.

What language are devs going to even write these scripts in? That's not clear.

"Simplicity" - nothing is more 'simple' than JS, which is why it's used the world over.

"Portability" - again, nothing more portable than JS.

The reasons to use WASM are 'performance' along with 'black box' - but in most cases performance is not necessary and the black box for all intents and purposes exists with v8.


I am not following WASM closely, but it appears to be deployed in all modern browsers: Chrome, Firefox, Safari, Edge. That counts as 'large scale production deployment', even if there aren't that many websites that take advantage of this capability (yet?). https://caniuse.com/wasm


> "Simplicity" - nothing is more 'simple' than JS, which is why it's used the world over.

Not in terms of implementing the language.


Not in terms of using the language either. And the reason it is used the world over is because it was dictated to developers by browser makers. That's the one and only reason.


I agree, but I'd still rank JavaScript as one of the more approachable major languages.


How often does portability ever appear as a tie-breaker between two different languages?


Rarely. Over a long enough time period, though, the probability of a portability event nears 1, and the cost of such an event is enormous. Right now there is a credible challenge to x86's domination of the server, laptop, and desktop markets, raised by ARM via AWS A1, Apple M1, and many others. It would be foolish to bootstrap an ecosystem locked in to the loser, and we don't know the winner / loser over a 5-10 year timeframe.


For clarity, do you work at Shopify? These all sound like valid reasons, I'm just curious if they're the ones that motivated the Scripts team at Shopify.


It's not an either/or. Most likely Shopify runs WASM inside a seccomp enclosure. Possibly inside a VM as well. Defense in depth.


Running untrusted code in a wasm vm doesn’t add any extra defense over just using seccomp. It just adds unnecessary overhead and increases attack surface.


Assuming that Intel / ARM microarchitecture implementations are bug-free, that is correct. In the real world there are no bug-free implementations.

Edit. This is the strategy Chrome sandboxing uses: a hardened runtime (JS/WASM) inside a seccomp enclosure. https://chromium.googlesource.com/chromiumos/docs/+/master/s...


Running code in a wasm vm doesn’t magically prevent user code from exploiting uarch bugs. Lucet specifically does not mitigate spectre variant 2.


seccomp escapes are a thing, and if you're inside a restrictive environment such as WASM, they are harder to achieve.


I think you’re referring to seccomp-bpf. seccomp has never been escaped and it is unlikely such a bug could happen due to its simplicity. If you do not know the difference between seccomp and seccomp-bpf, please check references: https://en.wikipedia.org/wiki/Seccomp


Unlikely? Sounds like you think it could happen.


Nothing is 100%. It’s effectively impossible.


This disproves your point. How many "effectively impossible" exploits have come out recently?


The semantic gymnastics here are pretty interesting. It’s about as secure / impossible to penetrate as anyone could reasonably guarantee in the face of future uncertainty. If you don’t understand why, please check references on how seccomp works: https://en.wikipedia.org/wiki/Seccomp


Don't mock. Test production code, maybe with faked storage.


You don't need me to get on that bandwagon. I almost despise mocks.

Mocks usually over-specify implementations by setting up expectations of a specific implementation conversation rather than an outcome. They're painful to debug after refactoring implementation - they end up write-only - and they generally inhibit refactoring of the mocked API - often mocked instances of APIs outnumber production invocations.

I try to encourage people to replace control flow with data flow where possible; messages, command objects rather than method calls; iterators, streams and consumers composed together, rather than loops. Data flow can normally be trivially redirected into a container, and if the data objects are simple inert immutable tuples, they're trivial to construct as inputs or assert against on outputs.
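A minimal Python sketch of the data-flow idea above (the LineItem/total names are hypothetical): the consumer takes a stream of inert, immutable tuples, so tests just redirect the flow into a plain list, with nothing to mock.

```python
from typing import Iterable, NamedTuple

class LineItem(NamedTuple):
    """Inert, immutable data object: trivial to construct as test input."""
    sku: str
    cents: int

def total(items: Iterable[LineItem]) -> int:
    """Pure consumer of a data stream; no collaborators, no expectations."""
    return sum(item.cents for item in items)

# The data flow is trivially redirected into a container for testing:
items = [LineItem("a", 150), LineItem("b", 250)]
print(total(items))  # 400
```

The same function works unchanged against an iterator, a generator pipeline, or a message queue consumer, because only data crosses the boundary.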

Fakes are good too, better than mocks in most situations, since they are easier to refactor.

Although, IME, integration tests, while slow and often brittle (especially if you have any async components and define failure conditions in terms of timeouts), have a significant upside in permitting large refactorings while still being able to test a substantial amount of end-result functionality (i.e. the stuff that matters, not implementation details).


Often enough software has dependencies on external services (think of stuff like a CRM database, payment and shipping providers, integrations with CDNs, external identity verification services) where one has to go with mocks for testing if stuff like error handling etc. works.


Fake external storage / rpc dependencies.


That's called mocking...


Mock: ad hoc guessing at what methods called on a dependency (sometimes even between classes in the same module) might return. The guess is repeated over and over as new tests are added, sometimes tens of times or more. For example, https://site.mockito.org/#how: "when(mockedList.get(0)).thenReturn("first"); System.out.println(mockedList.get(0)); // prints "first""

Fake: a replacement module that behaves like the production module, but with certain simplifications, for example running in-process instead of over RPC, or simply cleaning up the filesystem after use. For example, https://github.com/tk0miya/testing.postgresql: "automatically setups a postgresql instance in a temporary directory, and destroys it after testing".
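To make the contrast concrete, here is a minimal Python sketch (names like FakeKeyValueStore are hypothetical, and unittest.mock stands in for Mockito) showing the same lookup done mock-style and fake-style:

```python
# Mock style: scripting expected return values call by call (cf. the Mockito
# example above); this setup gets re-guessed in every test that needs it.
from unittest import mock

mocked_store = mock.Mock()
mocked_store.get.return_value = "first"
assert mocked_store.get(0) == "first"

# Fake style: a small in-process module that behaves like production storage,
# written once and shared by all tests.
class FakeKeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

store = FakeKeyValueStore()
store.put(0, "first")
print(store.get(0))  # prints "first"
```

The fake keeps real semantics (a put is later visible to a get), while the mock only echoes whatever the test author guessed.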


So... mock the storage? This very quickly becomes a slippery slope.


Fake, don't mock. Write, or ask the team that provides the external dependency to write, a small piece of code that behaves like your external dependency, but in-process. You'll thank me after about the 47th time you're guessing (inconsistently, possibly incorrectly, and definitely too verbosely) how the external dependency actually works. Ban mocking libraries.
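As a hedged sketch of what such a fake might look like for one of the external services mentioned upthread (a payment provider; FakePaymentProvider and checkout are hypothetical names, not any real vendor API):

```python
class FakePaymentProvider:
    """In-process stand-in for an external payment service; written once,
    shared by every test, instead of re-mocked 47 times."""

    def __init__(self, failing_cards=()):
        self.failing_cards = set(failing_cards)
        self.charges = []  # record of successful charges, for assertions

    def charge(self, card, cents):
        if card in self.failing_cards:
            raise RuntimeError("card declined")
        self.charges.append((card, cents))
        return {"status": "ok", "cents": cents}

def checkout(provider, card, cents):
    """Production code under test: what matters is its error handling."""
    try:
        provider.charge(card, cents)
        return "paid"
    except RuntimeError:
        return "payment_failed"

provider = FakePaymentProvider(failing_cards={"4000-0000"})
print(checkout(provider, "4242-4242", 999))  # paid
print(checkout(provider, "4000-0000", 999))  # payment_failed
```

Because the fake has real behavior (it remembers charges, it declines configured cards), both the happy path and the error path are exercised against the same piece of code.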


Isn't that effectively the same thing? The fake implementation needs to mimic the API your software expects to run against, and must give valid-seeming results and also keep up with development of that external program for the variety of tests you'll run against it (which will grow, increasing the complexity of the fake). You'll of course have to do this yourself, because your vendor isn't going to do it for you. So now you have two problems.


Fakes are not mocks. The fake is just another module. Assuming no updates, it is written once. Mocks are written 47 times, inconsistently, while focusing primarily on other tasks. If there are updates needed, better to fix them in one place than chasing 47 test code locations with inconsistent usages.

Furthermore, these are external dependencies that can't be run in-process. If a dependency can be run in-process (aka library), there is no justification to ever mock it. I've even seen codebases that mock their own class B in order to test class A. Run the production code already. Ban mocking libraries.


Testing against external dependencies is only one form of testing.

Unit testing with mocks is another form. One does not replace the other.


Agreed. Using mocks while testing vanilla in-process code is never justified.


"In the beginning was the word". Language shapes reality. As software engineers, the second we accept that 'product owner' is a legitimate title, that second we lost agency to push back on poorly conceived features. Say it loud and clear: you also have a stake in the product.


https://leanprover.github.io

While it is difficult to design a secure procurement chain all the way down to the SiO2, we could at least design hardware/software systems simple enough that formal verification is an economical option. And then force government entities to use formally verified systems instead of the bug-ridden crap most shops, especially the software ones, have to ship under intense deadline pressure. The market has led us into a broken local optimum, with no way to get out short of state-level action.


> force government entities to use formally verified systems instead of the [current commercial options]

When do we complain about the even more expensive defense budget in this story?


Can't resist. The tension between 'basic feature set' and an admittedly superficial reading of the docs is very funny. The manifesto links to https://github.com/commercialhaskell/rio#readme and urges us to use the rio library to get started. Upon opening the rio link and scanning for a list of the 'basic feature set', I stumble upon the first block of quoted code. After removing 39 end-of-line characters out of respect for the HN audience, it reads:

"Our recommended [language extensions] defaults are: AutoDeriveTypeable BangPatterns BinaryLiterals ConstraintKinds DataKinds DefaultSignatures DeriveDataTypeable DeriveFoldable DeriveFunctor DeriveGeneric DeriveTraversable DoAndIfThenElse EmptyDataDecls ExistentialQuantification FlexibleContexts FlexibleInstances FunctionalDependencies GADTs GeneralizedNewtypeDeriving InstanceSigs KindSignatures LambdaCase MonadFailDesugaring MultiParamTypeClasses MultiWayIf NamedFieldPuns NoImplicitPrelude OverloadedStrings PartialTypeSignatures PatternGuards PolyKinds RankNTypes RecordWildCards ScopedTypeVariables StandaloneDeriving TupleSections TypeFamilies TypeSynonymInstances ViewPatterns"

39 language extensions just to get started. This screams 'incredibly complicated', even if reality is perhaps more mundane. Consider a hypothetical 40th extension, GradualTyping, so that those who would rather write code about data than about types, using a half-baked and evolving type language (which, taken to its logical conclusion, will have to become a full-fledged theorem prover in the Coq / Idris / Agda / Lean lineage anyway), could get their jobs done.

Wish you guys all the best!


This is a common response to Haskell language extensions. It comes from a misunderstanding of what a language extension is. A Haskell language extension is not "something that radically changes the language"; it is "a small, self-contained, well-tested piece of functionality that for whatever reason wasn't part of the Haskell 2010 standard". In any other language a "language extension" would just be "a feature of the language".


99% of the time a loop works just fine, because there are no measurable gains to be had from parallelism. For the 1% where performance matters, it's usually a bit more involved than simply using a map or fold, and hopefully already packaged as an off-the-shelf library. To have measurable gains from parallelism one has to be very intentional in balancing communication vs computation. Think carefully designed libraries like cuDNN.
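The communication-vs-computation point can be sketched in a few lines of Python (a toy workload, chosen only to illustrate the trade-off; the worker count is arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = list(range(1000))

# The 99% case: a plain loop, no coordination overhead.
loop_result = [square(x) for x in data]

# A parallel map is just as correct, but for cheap per-item work the
# scheduling and communication overhead swamps any gain (in CPython the
# GIL removes it entirely); real wins need chunky compute per task,
# cf. carefully balanced libraries like cuDNN.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_result = list(pool.map(square, data))

print(parallel_result == loop_result)  # True
```

Both versions compute the same thing; only a profiler, not correctness, tells you whether the parallel one was worth its overhead.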


'FP' covers 2 meanings:

A. Pure functions + immutable data structures.

B. Expressive type systems, all the way to compile-time metaprogramming and dependently typed proofs.

Writing programs in style A is tremendously valuable. Expending too much effort on the fine points of the type system, which is invariably both under-expressive and over-expressive at the same time, is a complete waste of time. Some critical projects require high confidence of being defect-free, and for those it's legitimate to go all in on formal proofs and take the 10x-100x productivity slowdown. For mere mortals, documenting the structure of the data (json) manipulated by the respective functions suffices.
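A minimal sketch of style A, assuming Python (the Order type and its field names are made up for illustration): pure functions over immutable data, with the json-like structure documented rather than encoded in an elaborate type language.

```python
from typing import NamedTuple

class Order(NamedTuple):
    """Documented data shape: {"sku": str, "qty": int, "unit_cents": int}."""
    sku: str
    qty: int
    unit_cents: int

def with_qty(order: Order, qty: int) -> Order:
    """Pure update: returns a new value, never mutates the input."""
    return order._replace(qty=qty)

def total_cents(order: Order) -> int:
    """Pure function of its input; trivially testable, no setup needed."""
    return order.qty * order.unit_cents

o = Order("sku-1", qty=2, unit_cents=500)
o2 = with_qty(o, 3)
print(total_cents(o), total_cents(o2))  # 1000 1500
```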

