Hacker News | herohamp's comments

This comment is nonsensical


A simple, no frills, web game where you try to cover the US by guessing the names of cities.


Portability. Say I wanted to make language X run on all platforms, but I didn't actually care about compiling it on all platforms. I can just write a relatively simple VM for each platform. This is one of the reasons Java was, and still kind of is, so ubiquitous.


Wouldn't writing an interpreter for each platform be less work and achieve the same goal as writing a VM for each platform?

Edit: ^Aside from being able to execute the bytecode on any platform


Why would it be less work? The interpreter will need to implement whatever operations a VM can perform, so a priori it's at least as much work. Bonus: if you can bootstrap the source-to-bytecode compilation, then you only need to write (and compile) that once to get a full-fledged interpreter on every host with a VM.


As others mentioned, source code should be distributed that way, and I think creating a simple VM is easier than a simple language parser. But of course, an optimizing one can be really quite complex in both cases.
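To make the "a simple VM is easy" point concrete, here is a toy stack-based bytecode VM. The instruction set, opcode values, and encoding are invented purely for illustration; a real VM would of course be far richer, but the dispatch loop really is this small:

```python
# Toy stack-based bytecode VM. Each opcode is one byte; PUSH is
# followed by a one-byte immediate operand. Opcodes are invented here.
PUSH, ADD, MUL, HALT = range(4)

def run(bytecode):
    """Execute the bytecode and return the final operand stack."""
    stack, pc = [], 0
    while True:
        op = bytecode[pc]; pc += 1
        if op == PUSH:
            stack.append(bytecode[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack

# Bytecode for (2 + 3) * 4
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
```

Porting a loop like this to a new platform is mechanical, which is the portability argument in a nutshell: compile to bytecode once, re-implement only the loop per host.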


Can somebody explain what this is actually doing? I am quite confused


This is a Rust library for reading and writing the LEB128 integer compression format. LEB128 is a representation of arbitrary-size integers: https://en.wikipedia.org/wiki/LEB128

As I understand it, this library is optimized to avoid branching (which incurs overhead) and to take advantage of SIMD instructions (which process data in parallel).
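For reference, unsigned LEB128 itself is simple: 7 payload bits per byte, with the high bit set on every byte except the last. A straightforward (branchy, unoptimized) Python sketch of the format, not of this library's implementation:

```python
def uleb128_encode(n: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128."""
    out = bytearray()
    while True:
        byte = n & 0x7F          # low 7 bits
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)         # high bit clear: last byte
            return bytes(out)

def uleb128_decode(data: bytes) -> int:
    """Decode an unsigned LEB128 byte sequence back to an integer."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):    # last byte reached
            break
    return result

# 624485 is the worked example from the Wikipedia article
assert uleb128_encode(624485) == bytes([0xE5, 0x8E, 0x26])
```

The data-dependent loop exit is exactly the branching a SIMD-oriented implementation tries to eliminate.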


Hey, I am the OP. Thank you so much! I will go through and amend what I got wrong. Is there any way you'd like me to credit you?


If you want to credit me, just tag my twitter :)

(@theFerdi265)


If you want to share screenshots, I'll happily put them up on the site. My email is me (at) hampton {dot} pw


Sure, here you go:

https://imgur.com/a/Sqjh2TZ


I cannot believe I did not notice that. I will rerun all of my testing with a valid UTF-8 byte sequence :)


After 5 weeks of seeing cryptic y-cruncher screenshots on Discord, it's finally done!


Yes, it is sub-optimal, but that is not because building it takes so much time; it is because of the node_modules. I am looking into migrating to Hugo, as MANY people have suggested.


Yeah, that is a mistake. I just did not fully rethink my code when I moved the build layers. I will be removing that tonight, along with a few changes suggested here.


On second thought, I also don't understand why you run npm in two different images. Why not just copy the webpack bundles from the builder image into the nginx image?

For me, the cause of the big image size was

    COPY --from=npmpackages /app /app

From your third Dockerfile, it seems replacing the above with the following would have done the trick without adding an extra stage:

    COPY --from=npmpackages /app/_site/ /usr/share/nginx/html/


I run npm in two different images so that the node_modules can be cached between builds. This massively speeds up my build; the npmpackages layer only installs the npm modules.
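A minimal sketch of what that layer-caching pattern typically looks like. The stage names, base images, and paths here are assumptions for illustration, not the OP's actual Dockerfile:

```dockerfile
# Dependency stage: this layer is rebuilt only when
# package.json / package-lock.json change.
FROM node:20 AS npmpackages
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Build stage: reuses the cached node_modules, then builds the site.
FROM npmpackages AS builder
COPY . .
RUN npm run build

# Final image: only the built output, no node_modules at all.
FROM nginx:alpine
COPY --from=builder /app/_site/ /usr/share/nginx/html/
```

Because `COPY package*.json` comes before `COPY . .`, editing site content doesn't invalidate the `npm ci` layer, while the final nginx stage stays small by copying only the built files.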


I see, so this speeds things up when you change a dependency, because at least then your whole node_modules is not thrown away. Is that right?


