dminik's comments (Hacker News)

An idea can also reduce value. Or prevent you from producing value in the future. Knowing when an idea is bad or not worth doing is a skill in itself.

Exactly. "Doing it this way would prevent us from also doing X in the future; are we sure we want to permanently cross X off the roadmap, and are stakeholders aware?" is a valid and valuable objection. EDIT: and it can be trivially shot down by somebody with enough authority to define a roadmap saying, "we don't care about doing X".

Some feedback:

- Nice idea, though after playing Turing Complete, I would like to skip the beginning and move to the stuff that makes GPUs different from CPUs. But it's understandable.

- I'm not smart enough to intuit NAND from transistors. I also doubt I'll be alone in that. It's such a weird difficulty wall.

- Speaking of, the difficulty is all over the place. Though easy mode is appreciated.

- Even with an n-key rollover keyboard, I couldn't complete the capacitor refresh level. It seems like it speeds up, and certain capacitors already start empty.

- The routing for wires is atrocious. Any level with more than 8 components will end up impossible to read.

- It doesn't help that you can't color code or even path wires manually.

- Might be Firefox only, but I had a hard time selecting the connection points.

- Dragging the mouse along the edge should pan. Otherwise you have to drop the connection and zoom.

- I appreciate the added "show solution". But it's not really giving you a solution. It's just a better hint.

- An option to show all tests, or at least more tests, would be great.


> I'm not smart enough to intuit NAND from transistors. I'm also not sure I will be alone in that. It's such a weird difficulty wall.

I agree with you, because I feel like I only got that one because I happened to get curious about CMOS (PMOS + NMOS) logic earlier this year, and remembered the general idea from before. Otherwise, I don't think I would have figured that out either. A Google image search for "CMOS NAND" basically shows the solution, but the game doesn't tell you that's what it is until after you beat the level. I think seeing the answer, then immediately trying to reproduce it from memory, is a good way to learn. Then if you try again the next day/week/month and are still able to remember it, you've learned it.
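For anyone else stuck on that level, the CMOS idea can be sketched as a toy switch-level model (my own illustration, not the game's internals): PMOS transistors conduct when their gate is 0 and form a parallel pull-up to VDD; NMOS transistors conduct when their gate is 1 and form a series pull-down to ground.

```python
def cmos_nand(a: int, b: int) -> int:
    """Switch-level sketch of a 4-transistor CMOS NAND gate.

    Pull-up network: two PMOS in parallel between VDD and the output
    (a PMOS conducts when its gate input is 0).
    Pull-down network: two NMOS in series between the output and ground
    (an NMOS conducts when its gate input is 1).
    """
    pull_up = (a == 0) or (b == 0)     # any PMOS conducting -> output driven high
    pull_down = (a == 1) and (b == 1)  # both NMOS conducting -> output driven low
    assert pull_up != pull_down        # exactly one network drives the output
    return 1 if pull_up else 0
```

The complementary structure is the whole trick: the pull-up and pull-down networks are logical opposites, so the output is always driven and never shorted.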

I also looked up a solution for the full adder since I couldn't quite remember how it worked.
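The full adder is also easy to forget; here's the classic nine-NAND construction as a quick refresher (a standard textbook circuit, not necessarily the game's intended solution). Two NAND-based XORs produce the sum, and the intermediate NAND outputs get reused for the carry.

```python
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Nine-NAND full adder. Returns (sum, carry_out)."""
    # XOR #1: a ^ b, built from four NANDs
    t1 = nand(a, b)
    s1 = nand(nand(a, t1), nand(b, t1))
    # XOR #2: (a ^ b) ^ cin, another four NANDs
    t2 = nand(s1, cin)
    total = nand(nand(s1, t2), nand(cin, t2))
    # carry = (a & b) | ((a ^ b) & cin); t1 and t2 are already
    # the inverted forms of those two terms, so one more NAND suffices
    carry = nand(t1, t2)
    return total, carry
```

Reusing t1 and t2 for the carry is what gets the count down to nine gates instead of eleven.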

Tangentially, I've gone through similar material over time repeatedly in the games nandgame and Turing Complete, going through the Nand 2 Tetris course (on Coursera), building Ben Eater's breadboard 8-bit computer, reading "Code" by Charles Petzold and "The Pattern on the Stone" by Danny Hillis and "Digital Computer Electronics" by Malvino since that was what Ben Eater partly based his computer design on, and going over digital logic in CS-related EE courses up through how a CPU is made. But most of those barely cover anything below the logic gate level and I don't think any of them covered CMOS/NMOS/PMOS specifically which is why I got curious about them this year.

It's pretty fun though (my type of fun anyway), and I'm really curious to see how the rest of it goes since it's building a GPU instead of a CPU for a change.


Fixed some of these in the update I just pushed, will fix the rest in the next update. Any thoughts on how to scale the difficulty a bit better?

It depends on the hardware, backend and options. I've recently tried running some local AIs (Qwen3.5 9B for the numbers here) on an older AMD 8GB VRAM GPU (so vulkan) and found that:

llama.cpp is about 10% faster than LM studio with the same options.

LM Studio is about 3x faster than ollama with the same options (~38 t/s vs ~13 t/s), but messes up tool calls.

Ollama ended up slowest on the 9B, the Qwen3.5 35B, and some random other 8B model.

Note that this isn't some rigorous study or performance benchmarking. I just found ollama unacceptably slow and wanted to try out the other options.
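For what it's worth, the t/s numbers above were eyeballed; a crude harness like this (a hypothetical `generate` callable standing in for whatever backend you're testing) is about the level of rigor involved:

```python
import time

def measure_tps(generate, prompt: str) -> float:
    """Rough tokens/sec for any backend exposing a generate() callable
    that returns a list of tokens. Illustrative only: a real benchmark
    should separate prompt processing from generation, and average
    several runs with a fixed seed and identical sampling options.
    """
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed
```

The "same options" caveat matters a lot here: context size, GPU layer offload, and quantization each move throughput far more than the choice of frontend.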


Sometimes the impression I get from commenters on HN is that they would sell their own grandmother for money.

It goes beyond simply not considering morals/ethics; considering them at all is seen as a weakness here.


Too true. I won't say who it was but a prominent partner in my batch referred to, essentially, a lack of morality as a "competitive advantage". I went back to the east coast after lol

Well, I wasn't that worried for the astronauts before, but now that I know they're running windows, I'm not so sure.

I can't believe as developers we were worried about AI training on licensed code. It turns out it didn't matter at all. You can just point an LLM at some source code and you're off scot-free.

Almost all apps are just CRUD. The code is not that interesting. The valuable IP is the product.

In what way? Does it matter that there's an Adobe logo on the top if I can just point an LLM at some leaked code and get PictureStore?

There are a lot of comments here saying something to the effect of "spaceflight is inherently unsafe" or "you can't always guarantee safety." I find this rather concerning.

Surely there is a difference between "our engineers did the best they could and the mission has a X% chance of failure" and "management overrode the engineers so they can get a launch in before the program is shuttered."


There is no evidence that "management overrode the engineers".

Have you bothered to ask the gambler if they want to risk it?

No offense to the astronauts of course, but asking people that have dreamed of this opportunity their whole life doesn't actually tell you all that much about the actual safety of the mission as a whole.


The giant advantage regular games have is that I've yet to smash my hand into a wall playing them.

I think that the relatively low living space area for most of the world is a huge strain on VR adoption.


I think that will change when VR is like Striking Vipers.

If you're not sure what something is saying, how can you be sure that the AI had picked the correct interpretation?

By asking it to cite its sources. Whenever I use AI, I have it pull direct quotes from the text to justify its interpretation. Sometimes it's spot on, sometimes it's wrong. But skimming a paper to fact-check a few specific quotes is still vastly faster than reading a dense paper completely blind.
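The quote-checking step can even be partly mechanized. A sketch (my own helper, not any particular tool's API) that verifies each cited quote actually appears verbatim in the source:

```python
def verify_quotes(source_text: str, quotes: list[str]) -> dict[str, bool]:
    """Check whether each quote an AI 'cited' appears verbatim in the
    source document. Whitespace is normalized, since PDF extraction and
    model output often disagree on line breaks and spacing.
    """
    def norm(s: str) -> str:
        return " ".join(s.split())
    haystack = norm(source_text)
    return {q: norm(q) in haystack for q in quotes}
```

A quote passing this check only proves it exists, not that it supports the interpretation, so the skim-and-judge step is still on you.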

Right question to ask; however, good readers/professionals do have some sense for this and the ability to dig further as needed. On the other hand, books and articles are often over-detailed, with the key stuff buried deep or even left tacit.

For me, LLMs have often pointed me to answers or given food for thought that even subject matter experts could not. I do not take those answers at face value, but the net result is still better than the search remaining open-ended.


Critical thinking.

In good faith, how do you tell whether you yourself have good critical thinking?

I believe you're talking to an LLM, just look at the comment history

Well how do you know anything in that case? You could be dreaming right now sound asleep, or you could be locked inside a mental institution but living in a complete delusion. You might have been in a coma for the past 7 years and AI was just something you dreamed up.
