Hacker News | dmos62's favorites

You might want to check out Dune3D. It advertises itself as combining the constraint solver from SolveSpace with an OpenCASCADE geometry kernel supporting fillets and chamfers. :)

Haven't used it much apart from some minor tests (I tend to prefer MoI3D, but that's in a different category in several ways...), but as far as FOSS solid modelers go, it seems the most promising to me. I do remember some small UI quirks, but overall it felt very approachable and streamlined, and looking at the GitHub repo, development is active. FreeCAD IMHO is just too sprawling and complex, with seemingly little thought paid to UI/UX.


Interesting idea for a 1.0. Using https://context7.com/ might be the right next move here.

Also look into https://cht.sh/

Remember: incorrect (misleading) documentation is worse than no documentation.

What this might be better for is use-cases that don't require extreme precision. Imagine it for learning language or reading sophisticated academic literature. For example, https://archive.org/details/pdfy-TJ7HxrAly-MtUP4B/page/n111/...

Stuff like that is hard, and I'd embrace every tool that makes the complicated more legible.


Talking of cheap and powerful devices, one can also look at the Chinese UZ801 4G LTE (Qualcomm MSM8916) dongles. They cost only around $4-5 and pack quite impressive HW: 4GB eMMC, 512MB RAM, an actual 4G modem, sometimes with dual-SIM switching support. Since it's actually an old Android SoC, there is even a GPU and GPS in there. And a lot of work has already been done on supporting them:

https://wiki.postmarketos.org/wiki/Zhihe_series_LTE_dongles_...

https://github.com/OpenStick/OpenStick

So yeah, if you're looking for a hardware platform for weird homelab projects, this can be it.


Anyone who does on-call should look into aviation disasters. Crew resource management, the aviate-navigate-communicate loop, it's all very applicable. ('WalterBright is an excellent source of commentary on applying lessons from the airline industry to software.)

But I did burn out on Mentour Pilot after a while, I just had my fill of tragedy.


I think it is about adopting the right framework. A mix of Clayton Christensen and the 5 whys works for me (similar issues).

I start by writing down the big things: In this year I worked at/for/with ....

WHY was I there: I think about the major projects I did.

WHY was I there: I think (and try to verify) the impact #s/%s those projects had.

WHY was I there: I think about the technical and soft skills needed to make that happen.

WHY do I care: I consider if there is any configuration of things that would make me consider doing the project again.

Having to track past performance as a business made this much clearer -- I adopted the above approach to expand our CV, then developed a similar approach for business development templates to explain recent past work.

I sometimes will drop these templates into an LLM and have it work with me to define or identify ways to better communicate.


If you're a fan of Playwright check out Crawlee [0]. I've used it for a few small projects and it's been faster for me to get what I've needed done.

[0] https://crawlee.dev/


If you want to check out the motherlode of other visual programming languages, platforms, etc., check out Ivan Reese's Visual Programming Codex.

https://github.com/ivanreese/visual-programming-codex

https://github.com/ivanreese/visual-programming-codex/blob/m...

Although I'm going to have to create a pull request because he doesn't have Flowgorithm on there, which is an excellent tool for teaching the very very first steps of learning to program...

http://flowgorithm.org/

Terrible name, wonderful tool.


I've got a similar approach from a Unix philosophy.

Look at the savebrace screenshot here

https://github.com/kristopolous/Streamdown?tab=readme-ov-fil...

There's a markdown renderer which can extract code samples, a code sample viewer, and a tool to do the tmux handling and this all uses things like fzf and simple tools like simonw's llm. It's all I/O so it's all swappable.

It sits adjacent and you can go back and forth, using the chat when you need to but not doing everything through it.

You can also make it go away and then when it comes back it's the same context so you're not starting over.

Since I offload the actual llm loop, you can use whatever you want. The hooks are at the interface and parsing level.

When rendering the markdown, streamdown saves the code blocks as null-delimited chunks in the configurable /tmp/sd/savebrace. This allows things like xargs, fzf, or a suite of unix tools to manipulate it in sophisticated chains.

Again, it's not a package, it's an open architecture.

I know I don't have a slick pitch site but it's intentionally dispersive like Unix is supposed to be.

It's ready to go, just ask me. Everyone I've shown in person has followed up with things like "This has changed my life".

I'm trying to make llm workflow components. The WIMP of the LLM era. Things that are flexible, primitive in a good way, and also very easy to use.

Bug reports, contributions, and even opinionated designers are highly encouraged!


Looks good.

For a more database-specific, type-safe data-querying solution, I like to use https://kysely.dev


I haven’t fully checked this site out, been saving it for when I run out of runway with my 3D printer, drill press, saw and SendCutSend. Hope it’s useful for you.

https://lcamtuf.coredump.cx/gcnc/


> The code editing experience is totally structured, where you select choices from menus rather than typing

Very cool! I wish there were something like this for working adult programmers: a general-purpose structured editor for programming languages. When I program, the intentions I form are in the first instance semantic (define a function here, check a condition there). Why shouldn't I just directly communicate my semantic intentions to the editor? Why do I have to first serialize my intentions into a string for the editor to then parse, in an error-prone process which introduces the possibility of silly syntactic errors like typos, mismatched brackets, and operator precedence errors, rather than just semantic incorrectness?

Such an editor wouldn't necessarily have to be keyboard unfriendly any more than Excel is keyboard unfriendly. I guess it could also work something like input method editors for East Asian languages where you type into a "composition window" and make selections from a "candidate window" for insertion.


I remember debating with a guy I know about Net Neutrality leading up to this.

He was a sales director at a company that ran network backbone and fiber lines. He was insistent that the Net Neutrality debate was dumb, because it was trying to negate the peering agreements that make the industry run. The way he told it, anyone with a big backbone chooses to invest in peering arrangements with other companies who also make similar arrangements. If you are pushing an undue amount of traffic relative to the other party, you invest more. His take was that Netflix was supporting Net Neutrality so they wouldn't have to invest more in their peering arrangements.

Now - this is a guy who historically is almost always on the wrong side of history. I thought it an interesting talking point, but even now, having had some exposure to the peering arrangements between (say) Microsoft and Akamai, I still don't know enough to say whether it's even valid. I'm very sure it's just something he was told and repeated ad nauseam.

This seems like the right audience (since I wasn't posting here back then) to ask: is there any legitimacy to this talking point?


I'm a huge fan of NestedText, especially as there is no escaping needed ever.

If you ever want to use it as a pre-format to generate either TOML, JSON, or YAML, I used the official reference implementation to make a CLI converter between them and NestedText.

When generating one of these formats, you can use yamlpath queries to concisely but explicitly apply supported types to data elements.

- My CLI converter: https://github.com/AndydeCleyre/nestedtextto

- yamlpath info: https://github.com/wwkimball/yamlpath/wiki/Search-Expression...


https://github.com/jesseduffield/lazygit (My favourite git interface. Can be used in neovim as well)

https://github.com/nvim-telescope/telescope.nvim (Fuzzy finder plugin that uses ripgrep and fzf)

https://github.com/ajeetdsouza/zoxide Smart replacement for `cd`

https://github.com/be5invis/Iosevka My favourite font for coding; its width lets me fit more text on the screen


If people are looking for an actual modern take on text based browsing, I would refer them to the work of Igor Chubin.

    curl https://v2.wttr.in/London

    curl https://cheat.sh/rsync
With a supported terminal:

    curl https://v3.wttr.in
And also:

    https://github.com/chubin/awesome-console-services

Yes, Jean-Martin (JM) Fortier (Quebec) is a good example, though there are better / more comprehensive ones, and at larger scale too; see below. And JM is not even doing full permaculture, e.g. he is not using multi-height crops (i.e. ground covers through plants of several different heights up to top-canopy trees, 100 or more feet tall), so as to use more of the available sunlight and underground nutrients (shrub roots can go deeper than herbs, and tree roots deeper still, and what is brought up from deeper can be shared with shallow-rooting plants via compost, chop-and-drop, etc.). And he still makes fairly high profits per acre / person / year, i.e. overall. He has many videos about his work, both the technicals and the commercials, on YouTube; search by his name as well as for a great series titled Les Fermiers. Some examples higher on the axes of land area as well as permaculture and / or regenerative agriculture, vs. "just" organic farming, include:

- Gabe Brown, 20+ years at it, 4 or 5000? acres, mixed grasses (grains) and broadleaf crops, cover crops, livestock (beef, pigs, etc.), land getting better each year, saves hugely on synthetic fertilizer and pesticide (see a chart in his Treating The Farm as an Ecosystem video), profits better too, and higher than his synthetic-using and tilling neighbors and state averages (ND, USA), and going up the value chain, so getting more of the final consumer dollar vs. middlemen. The average take for "conventional" US farmers is quite low, he says. Oh, and he does not need or take govt. farm subsidies, as many others do.

Has many videos too, search for his name. Check the comparative stats and photos.

- Richard Perkins, Sweden. Not sure of area, but above a few acres at least, maybe 25. Mixed stuff again. High profits again. Videos again. Many years' good results again.

- Last but one of the best, Geoff Lawton (NSW, AU), long time permaculture expert (learned from Bill Mollison), doer, 66 acre Zaytuna Farm, is real mixed farm plus demonstration site, yearly trains many interns, consultant (to small orgs through to countries). Ditto for many of above points like axes, diversity, profits, improving over time, cost savings, etc. etc.

Edited for typos.


This comment was a joy to read right after finishing "Out of the Tar Pit" [1]. This bit is one of the critical design principles that article was advocating for:

> You can relatively easily try different search strategies while keeping the model the same.

[1] https://github.com/papers-we-love/papers-we-love/blob/master...


You can export your Hangouts chat history on takeout.google.com. The resulting Hangouts.json file is human-readable but not fun to read. I have a one-liner to convert it into something nicer, but it doesn't resolve user names, and you need to figure out the conversation ID you want to export yourself.

    jq -c '.conversations[] | select(.conversation.conversation_id.id == "YOUNEEDTOFILLTHISINYOURSELF") | .events[] | [(.timestamp | tonumber / 1000000 + (9*3600)| strftime("%Y-%m-%d %H:%M:%S (%a)")), .sender_id.gaia_id, [.chat_message.message_content.segment[]?.text], .chat_message.message_content.attachment[]?.embed_item?.plus_photo?.url?]' Hangouts.json | sort > foo.log
When you open Hangouts.json, you'll see that every conversation has something like this at the beginning:

    "conversation_id": { "id": "BASE64-LIKE_STRING" }
This BASE64-LIKE_STRING belongs into the YOUNEEDTOFILLTHISINYOURSELF placeholder.

For those of you who enjoy using state machines but wish they did even more and/or could be nested inside each other (nested state machines!), check out this thing called statecharts!

Here is the initial paper from David Harel: STATECHARTS: A VISUAL FORMALISM FOR COMPLEX SYSTEMS (1987) - https://www.inf.ed.ac.uk/teaching/courses/seoc/2005_2006/res...

Website with lots of info and resources: https://statecharts.github.io/

And finally, a very well-made JS library by David Khourshid that gives you lots of power by leveraging statecharts: https://github.com/davidkpiano

While we're at it, here are some links to previous submissions on HN regarding statecharts with lots of useful and interesting information/experiences:

- https://news.ycombinator.com/item?id=18483704

- https://news.ycombinator.com/item?id=15835005

- https://news.ycombinator.com/item?id=21867990

- https://news.ycombinator.com/item?id=16606379

- https://news.ycombinator.com/item?id=22093176

My own interest in statecharts comes from wanting/trying to use them for UI development on the web; I think there is lots of value to be had and time to be saved by leveraging them.
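A minimal hand-rolled sketch of the core statechart idea (not taken from Harel's paper or any of the libraries above; all names are my own illustration): a nested child state falls back to its parent for events it doesn't handle itself, so shared transitions are written once on the parent.

```python
# Minimal hierarchical ("nested") state machine: a child state defers to
# its parent for events it doesn't handle itself -- the key statechart idea.

class State:
    def __init__(self, name, parent=None, transitions=None):
        self.name = name
        self.parent = parent
        self.transitions = transitions or {}  # event -> target state name

class Statechart:
    def __init__(self, states, initial):
        self.states = {s.name: s for s in states}
        self.current = self.states[initial]

    def send(self, event):
        # Walk up the hierarchy until some ancestor handles the event.
        state = self.current
        while state is not None:
            if event in state.transitions:
                self.current = self.states[state.transitions[event]]
                return self.current.name
            state = state.parent
        return self.current.name  # unhandled events are ignored

# Example: a media player. "playing" and "paused" are nested inside
# "powered_on", so both inherit the POWER_OFF transition from their parent.
powered_on = State("powered_on", transitions={"POWER_OFF": "off"})
playing = State("playing", parent=powered_on, transitions={"PAUSE": "paused"})
paused = State("paused", parent=powered_on, transitions={"PLAY": "playing"})
off = State("off", transitions={"POWER_ON": "playing"})

chart = Statechart([powered_on, playing, paused, off], initial="off")
chart.send("POWER_ON")   # -> "playing"
chart.send("PAUSE")      # -> "paused"
chart.send("POWER_OFF")  # -> "off", inherited from powered_on, no duplication
```

In a flat state machine, both "playing" and "paused" would each need their own POWER_OFF transition; the nesting removes that duplication, which is exactly the kind of saving that matters in UI code.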


If you're interested in a free game that has had this core functionality (online multiplayer Asteroids) since 1995 - visit https://store.steampowered.com/app/352700/Subspace_Continuum... (edit: replaced with Steam link)

There has been a dedicated community of thousands of players, many of whom have been playing for well over a decade. Plenty of videos on YouTube under either "Subspace" or, more recently, "Continuum".

https://en.wikipedia.org/wiki/SubSpace_(video_game)

Also - a plug for my favorite Zone: HZ. www.rshl.org.


Setting aside GC, Nailgun (JDK <= 8?) and Drip already solve(d) the short-running-VM problem. This is often how CLI tools like JRuby, ant, mvn, sbt, etc. are sped up.

Also these help reduce load times:

- Class Data Sharing (CDS; JDK 5+) https://docs.oracle.com/en/java/javase/11/vm/class-data-shar...

- Application Class Data Sharing (AppCDS; JDK 10+) https://openjdk.java.net/jeps/310

- Ahead-Of-Time compilation (AOT; jaotc; JDK 9+ Linux-x86_64 only): http://openjdk.java.net/jeps/295

- JVM runtime trimmer (jlink; JDK 9+): http://openjdk.java.net/jeps/282

---

Drip: https://github.com/ninjudd/drip

Nailgun:

https://github.com/facebook/nailgun

http://www.martiansoftware.com/nailgun


The report also contains "domain scores" for "ongoing domestic and international conflict", "social safety and security" as well as "militarisation" in appendix C: http://visionofhumanity.org/app/uploads/2018/06/Global-Peace...

Of course those are still just single numbers aggregated from many different factors. However, the sources for those factors are named and are in many cases available online (although sometimes paywalled, or offering no convenient download functionality).

Uppsala Conflict Data Program https://ucdp.uu.se/downloads/

IISS Armed Conflict Database (paywalled) https://www.iiss.org/publications/armed-conflict-database/

UNHCR Refugee Population Statistics https://data.humdata.org/dataset/unhcr-refugee-pop-stats

Global Internal Displacement Database http://www.internal-displacement.org/database

UNODC Crime Statistics https://dataunodc.un.org/crime

The Economist Intelligence Unit (paywalled) https://data.eiu.com/

World Prison Brief http://prisonstudies.org/highest-to-lowest/prison-population...

Stockholm International Peace Institute https://sipri.org/databases

United Nations Register of Conventional Arms https://www.unroca.org/


Ex-professional gambler checking in.

It's no secret that it's possible to figure out which horse is going to win a race with reasonable statistical confidence.

The trick - and always has been - is making sure you can make a profit from it.

If a horse has a 30% chance of winning a race, and you're being offered odds as if it only has a 10% chance (i.e. 9/1), you have identified "value" or what Kelly Criterion would describe as "edge". You can mathematically in the long term make a profit, and Kelly can even tell you what %age of your bank to wager to optimise your expected returns: https://en.wikipedia.org/wiki/Kelly_criterion
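As a quick sketch (variable names are mine, using the 9/1 odds and 30% true probability example from above), the Kelly fraction for a simple b-to-1 bet is f = (bp - q) / b, i.e. your edge divided by the net odds:

```python
def kelly_fraction(b, p):
    """Fraction of bankroll to stake on a bet paying b-to-1
    when your estimated win probability is p (Kelly criterion)."""
    q = 1 - p                # probability of losing
    f = (b * p - q) / b      # edge divided by net odds
    return max(f, 0.0)       # never bet when there is no edge

# The example from above: true chance 30%, offered odds of 9/1.
stake = kelly_fraction(b=9, p=0.30)
print(f"Bet {stake:.1%} of your bankroll")   # ~22.2%
```

In practice most people bet "fractional Kelly" (e.g. half the computed stake), because the formula assumes your probability estimate is exact, and overestimating your edge is far more costly than underestimating it.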

Identifying the "true" odds of an outcome and then identifying what odds you're prepared to take (adding an over-round) is a key piece of what bookmakers do when creating what they call "the tissue": the opening odds they offer, which are then adjusted to manage liabilities in line with the incoming weight of money (WoM) from bettors.

Kelly was a colleague of Claude Shannon. His papers have stood up to the rigour of mathematical analysis and also make intuitive sense to those who spend enough time thinking about it.

It has successfully been applied in casinos (blackjack system pioneer Edward Thorp is/was a fan), bookmakers, hedge funds (Thorp again, but many other funds apply Kelly too), and everything in between.

But here's the rub: you won't get to keep your edge for very long. Bookmakers, betting exchanges and parimutuel systems all adapt to deal with long-term winners. Your window of opportunity is limited. Already bookmakers have good systems for identifying true odds for English Premier League games and make a tidy sum on over-rounds (profit margins) as thin as 3% because they can eliminate value.

That said, as any fan of Sabremetrics will tell you, statistical analysis of a sport you love can be very rewarding and I would not put anybody off this if they already love the horses (as I still do).

Just tread carefully, and to make a long-term killing you're going to have to treat your setup like a hedge fund and always be moving around finding new edges.

I'm @p7r on Twitter if anybody wants to talk about this - I could talk for hours and hours and hours... :-)


>> Our work has connections to existing work not only in deep learning, but also to various other fields

Perhaps I'm, er, biased, but the most obvious similarity that immediately jumped out to me when I started reading this article is with Inductive Programming, the broader field of machine learning algorithms that learn programs, that includes Inductive Functional Programming and Inductive Logic Programming (which I study; hence, "biased").

In fact, I'd go as far as to say that this is a new kind of Inductive Programming. Perhaps it should be called Inductive Continuous Programming or Inductive Neural Programming or some such. But what's exceedingly clear to me is that what the article describes is a search for a program composed of continuous functions, guided by a strong structural bias.

The hallmarks of Inductive Programming include an explicit encoding of strong structural biases, including the use of a library of primitives from which the learned model is composed; a strong Occamist bias; and learned models that are programs. This work ticks all the boxes. The authors might benefit from looking up the relevant bibliography.
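As a toy caricature of that framing (emphatically not the paper's method, and the primitives here are invented for illustration), an enumerative inductive-programming search looks like this: try compositions of a primitive library shortest-first, which is exactly the Occamist bias, until one fits all the input/output examples.

```python
from itertools import product

# Toy inductive programming: find the shortest composition of primitives
# consistent with all input/output examples. Shortest-first enumeration
# is the "Occamist bias"; the primitive library is the structural bias.

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=4):
    names = list(PRIMITIVES)
    for depth in range(1, max_depth + 1):
        for seq in product(names, repeat=depth):
            def run(x, seq=seq):
                for name in seq:          # apply primitives left to right
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return seq                # first hit = shortest program
    return None                           # no program within the depth bound

# f(x) = 2 * (x + 1), specified only by examples:
print(synthesize([(1, 4), (2, 6), (10, 22)]))   # ('inc', 'double')
```

The learned model is literally a program (a sequence of primitive names), which is the point of contact with ILP and IFP; the deep-learning version replaces the discrete enumeration with gradient-guided search over continuous functions.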

A couple of starting pointers:

Magic Haskeller, an Inductive Functional Programming system.

http://nautilus.cs.miyazaki-u.ac.jp/%7Eskata/MagicHaskeller....

Metagol, an Inductive Logic Programming system:

https://github.com/metagol/metagol

ILASP, a system for the Inductive Learning of Answer Set Programs:

http://ilasp.com/


Isn't this what Common Crawl[1] is? From their FAQ:

> What is Common Crawl?

> Common Crawl is a 501(c)(3) non-profit organization dedicated to providing a copy of the internet to internet researchers, companies and individuals at no cost for the purpose of research and analysis.

> What can you do with a copy of the web?

> The possibilities are endless, but people have used the data to improve language translation software, predict trends, track the disease propagation and much more.

> Can’t Google or Microsoft just do that?

> Our goal is to democratize the data so everyone, not just big companies, can do high quality research and analysis.

Also DuckDuckGo founder Gabriel Weinberg expressed the sentiment that the index should be separate from the search engine many years ago:

> Our approach was to treat the “copy the Internet” part as the commodity. You could get it from multiple places. When I started, Google, Yahoo, Yandex and Microsoft were all building indexes. We focused on doing things the other guys couldn’t do. [2]

From what I remember reading once DuckDuckGo doesn't use Common Crawl though.

[1] https://commoncrawl.org/

[2] https://www.japantimes.co.jp/news/2013/07/28/business/duckdu...


HN tip of the month ;)

When Canon released the EOS M as its first mirrorless camera, it didn't go down well in many reviews, as it was a slow performer[0]. This was actually addressed in a later firmware update, but by then the damage had already been done. After that, Canon basically dumped this camera on the market at very low prices.

The good thing for us is that this little camera can now be bought very cheaply second hand (got mine last year for around $100). Once you load up ML, you get a fantastic, fun little camera! I'm not very experienced with shooting video, but for photography it's a wonderful experience. Focus peaking, magic zoom, interval photography, all for free.

Many kudos to the ML devs from this side as well!

[0] https://kenrockwell.com/canon/eos-m/m.htm


You can read more about sr.ht at https://meta.sr.ht/ as well as on Drew’s blog https://drewdevault.com/2018/11/15/sr.ht-general-availabilit...

I recommend Lektor [0], because it ships with an administration UI. This means you can give/sell it to people who would otherwise want a WordPress site. Unfortunately that feature doesn't pop out on the staticgen list.

[0]https://www.getlektor.com/


Yes and no. IMHO Lean has generated a lot of startups that had too narrow a focus and optimized away all the potentially interesting parts of their products, because they were focusing on an MVP that turned out to be lacking in the V part because they were focusing on the M part.

There's this great presentation by Don Reinertsen "Second Generation Lean Product Development Flow": https://www.youtube.com/watch?v=L6v6W7jkwok&index=55&list=LL...

Which is a perfectly valid way to fix some of the worst offenses of the original Lean movement, which were to throw out the baby with the bathwater by focusing on only doing things that had low option value and low risk. Sort of the opposite of what you need to do in a startup, which is to do something genuinely new and creating value while taking calculated risks.

Don does a great job of providing some nice economic rationalizations of how to deal with things that are risky but valuable, how to plan and estimate them, etc. He also makes a great argument for shipping value fast: maximize the economic life span of your product and prioritize getting revenue earlier over getting revenue later with a potentially better product. Many tech startups get sucked into building the perfect thing and miss their window of opportunity for both raising funding and monetizing their product.

Unless of course your startup is not about the tech, in which case definitely avoid taking risks on the tech front and focus on shipping a product fast. If you are building a market place, the last thing you want to do is reinventing how those work. You instead want to focus on whatever the hell it is you are selling. Lean was always popular with those type of startups. I tend to think of them as no-tech startups. You get a bunch of MBAs, marketing and other non techies and then bring in the full stack hipsters to copy paste a bunch of crappy js files together to fake it until you can afford to hire a proper team.


Exactly. Judea Pearl's The Book of Why opened my eyes to the fact that most of what happens in machine learning is really just curve fitting.

It connected with what I've heard Chomsky say about trying to develop the laws of physics by filming what's happening outside the window. We need to do experiments and interventions to learn the dynamics of a system.

"What do you think the role is, if any, of other uses of so-called big data? [...]

NOAM CHOMSKY: It’s more complicated than that. Let’s go back to the early days of modern physics: Galileo, Newton, and so on. They did not organize data. If they had, they could never have reached the laws of nature. You couldn’t establish the law of falling bodies, what we all learn in high school, by simply accumulating data from videotapes of what’s happening outside the window. What they did was study highly idealized situations, such as balls rolling down frictionless planes. Much of what they did were actually thought experiments.

Now let’s go to linguistics. Among the interesting questions that we ask are, for example, what’s the nature of ECP violations? You can look at 10 billion articles from the Wall Street Journal, and you won’t find any examples of ECP violations. It’s an interesting theory-determined question that tells you something about the nature of language, just as rolling a ball down an inclined plane is something that tells you about the laws of nature. Scientists use data, of course. But theory-driven experimental investigation has been the nature of the sciences for the last 500 years.

In linguistics we all know that the kind of phenomena that we inquire about are often exotic. They are phenomena that almost never occur. In fact, those are the most interesting phenomena, because they lead you directly to fundamental principles. You could look at data forever, and you’d never figure out the laws, the rules, that are structure dependent. Let alone figure out why. And somehow that’s missed by the Silicon Valley approach of just studying masses of data and hoping something will come out. It doesn’t work in the sciences, and it doesn’t work here."

- https://www.rochester.edu/newscenter/conversations-on-lingui...

It is actually a really interesting subject; marketing people doing A/B tests for ads/features seem at least a little closer to the experimental ideal, not just fitting curves to data.

For further reading, I'd recommend the epilogue of Causality (Pearl, 2000); it's from a 1996 lecture at UCLA:

- http://bayes.cs.ucla.edu/BOOK-2K/causality2-epilogue.pdf

