Hacker News | tomhrr's comments

Thanks, and yep, the postfix notation allows for chaining commands together without parentheses. For example:

  1 2 +; 3 *
(The semicolon indicates that the preceding string (token) is a function and should be executed. In some cases this is implicit: when the last token of a larger command (like the one above) maps to a function name, it is treated as a function call even without a trailing semicolon.)
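The stack mechanics can be sketched with a toy evaluator (this is Python, not cosh itself, and the token handling is greatly simplified):

```python
# A toy postfix evaluator (not cosh itself). Values are pushed onto a
# stack; a function token pops its arguments and pushes the result,
# so commands chain without parentheses.
def eval_postfix(tokens):
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    stack = []
    for tok in tokens:
        name = tok.rstrip(";")        # ';' marks the token as a call
        if name in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[name](a, b))
        else:
            stack.append(int(tok))
    return stack

print(eval_postfix("1 2 +; 3 *".split()))   # [9]
```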


> Is explicitly disregarding several Unix philosophies, but not really discussing why the author believes those philosophies are wrong.

I'm not sure I'd characterise these parts of the Unix philosophy as 'wrong'. For example, if you are writing a shell program for use by somebody else, then having that program work with text streams makes sense. The focus here is interactive use and short functions/programs for local use, though, which means that the text stream part of the philosophy becomes a less pressing consideration.

> I would welcome a discussion of how the Unix philosophies break down, or what they prevent. But I didn’t find that here.

At least as far as text streams go, the readme talks about awkward considerations like '-0' and '-print0', but more generally, when the command doesn't output a known format like XML or JSON, parsing the output can be fun (e.g. per https://stackoverflow.com/a/15643939). Oftentimes the response is "the command has flags for getting the data that you want", but it's simpler (IMHO) not to build this sort of functionality into every command, and instead just have the shell support it more nicely, whether by providing a function that produces a first-class value (e.g. 'ls', 'ps' here) or by providing more generic parsing functions (e.g. 'split', 'splitr' here).
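To make the parsing pain concrete, here's a small illustration (Python, with a made-up 'ls -l' line) of why whitespace-splitting text output is fragile:

```python
# Naively splitting an 'ls -l'-style line on whitespace silently
# truncates a filename that contains spaces:
line = "-rw-r--r--  1 tom  staff  120 Mar  1 09:30 my report.txt"
print(line.split()[-1])          # 'report.txt' -- wrong

# The "fix" needs out-of-band knowledge of the column count,
# and still breaks on filenames containing newlines:
print(line.split(None, 8)[-1])   # 'my report.txt'
```

A shell that hands you first-class values never forces this re-parsing step in the first place.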


The problem is, IMO, the lack of a coherent flag for such output. If each CLI program had a consistent flag, it would make the command line and scripting a tad easier. But with a decent command-line history (or a tool to search it, such as `history | fzf`), it's reasonably easy and quick to dig up past commands.


Yep, at least as far as working with structured data rather than streams of text is concerned. Apart from the syntax differences, though, this is generally much lighter weight, in that it doesn't have classes, in-depth system integration, or the like (not that those features are bad in PowerShell).


The 'sh' command that it corresponds to is above it:

  find . -iname '*test*' -print0 | xargs -0 grep data
but possibly a bad assumption on my part that the mapping was clear. In any event, 'm' is for regex string matching, and 'f<' is for reading a file into a generator (basically an iterator over the lines in the file). It's a good point that more, simpler examples would help.


The examples in the README are really not the best.

Anyway, going by the examples, this reads to me like a language for writing hacky one-liners that are probably easy to write once but hard to read any time after. One moment... we had such a language in the past: Perl! :-D

Perl 5 one-liners were infamous for their expressive power, but sometimes crazy to understand even if you thought you were fluent in Perl ;-)


> Anyway, going by the examples, this reads to me like a language for writing hacky one-liners that are probably easy to write once but hard to read any time after.

There is definitely a 'write-only' angle to concatenative (postfix) languages that rely on a stack. I think this type of language/approach is uniquely well-suited to this use case, though, where the focus is interactive use, plus short programs that are not generally intended for distribution, since you get the most out of the advantages around conciseness, without incurring the costs that come with larger programs.


We have two now: Perl and jq.


Even after perusing jq's manual multiple times and having written several complex incantations, I still have no idea how to properly combine `|` and `.[]` except by trial and error, or why `select()` needs to be used inside `map(select(...))`.

Recently I needed to extract some data, and after fighting with jq and its manual for half an hour, I solved the problem in 30 seconds with Node.js.

I appreciate the idea behind jq, but its language is horrible. Even XPath was easier and cleaner.


Some nice alternatives for querying JSON via CLI include jello, yamlpath, and dasel.


Which one do you think is best? And, if applicable, which one do you love but it’s not quite first place material yet?


Don't forget `gron`.


Hmm. I also think jq is more awkward than it needs to be, but I don’t think the points you mention are a problem. Maybe explaining them would help?

(Note: the following explains jq’s operation using the smallest possible subset of the language, it doesn’t aim to use the most natural programs possible.)

So jq’s data model (much like XPath’s, actually) is that everything is a (possibly empty) stream of (JSON) values. On input (unless you use -s), it accepts any number of concatenated JSON values (usually separated by newlines or ASCII RS, but as JSON is self-delimiting that isn’t strictly required) and makes a stream out of them.

That is then fed into the program, a pipeline of |-separated transforms, each of which can generate zero or more output elements from each input element. For example, .foo is a one-to-one transform that, when it accepts an object, emits the value of its foo property (and fails otherwise):

  $ echo '{"foo": null}{"bar": 1, "foo": 42}' | jq .foo
  null
  42
And .[] is a one-to-zero-or-more transform that, on accepting an array, emits each array element separately (and fails otherwise):

  $ echo '[false,1][][2]' | jq '.[]'
  false
  1
  2
While select(F) is a one-to-zero-or-one transform that, on accepting an element, feeds it into F and lets it pass through if it got a truthy value, or rejects it if it got a falsy one:

  $ echo '{"foo": null}{"bar": 1, "foo": 42}' | jq 'select(.foo)'
  {"bar": 1, "foo": 42}
OK, that last one was a bit of a lie. Because we don’t want to introduce functions into the language as a separate kind of thing to transforms, F is also a transform, so it might possibly emit more than one value in response to whatever select fed it. The full truth is that select(F) is a one-to-zero-or-more transform that emits each input value as many times as there are truthy values in F’s response to it:

  $ echo 'false 42' | jq 'select([true, "also truthy"] | .[])'
  false
  false
  42
  42
That might not have been terribly useful, but it illustrates two points. First, a JSON literal is a valid transform: one that emits itself every time it gets something. (That’s why you need to write .[] for flatten: plain [] is the empty array literal.) Second, while jq cannot do many-to-one transforms, on pain of losing its streaming nature, it can do something like nested contexts, where it launches a subordinate pipeline and does something with the results.

And it is willing to collect those results instead of streaming them: if you have a pipeline P, then [P] is a one-to-one transform that, for each input element, runs P on it, collects all the results, puts them into an array, and emits that. For example:

  $ echo '[[0,1],[2]] [[]] [] [[3]]' | jq '[.[] | .[]]'
  [0,1,2]
  []
  []
  [3]
Or:

  $ echo '[false,1][][2]' | jq '[.[] | select(.)]'
  [1]
  []
  [2]
(here . is the one-to-one identity transform). Instead of [.[] | P] you can write map(P).

What this boils down to is: select(C) will go through the input stream and pare its elements down to those that satisfy C, while map(select(C)) will go through an input stream of arrays and pare each array’s elements down to those that satisfy C.
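By rough analogy in Python (a paraphrase for intuition, not jq's implementation):

```python
# select(C) filters a stream of values, like: select(. != 0)
stream = [1, 0, 2]
selected = [x for x in stream if x != 0]
print(selected)        # [1, 2]

# map(select(C)) filters inside each array of a stream of arrays
arrays = [[1, 0, 2], [0], [3]]
mapped = [[x for x in arr if x != 0] for arr in arrays]
print(mapped)          # [[1, 2], [], [3]]
```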

Final point: if you want to give up streaming, the -s / --slurp flag will slurp the input stream into an array, then feed it to your program as a single element. That is, jq -s '.[] | P' is a worse synonym of jq P.
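The whole model can be sketched in a few lines of Python (again, a loose paraphrase for intuition, not how jq actually works): a transform takes one value and returns a stream of values, and `|` is flat-map:

```python
# A transform maps one input value to zero or more output values;
# the jq pipe '|' is flat-map: each output feeds the next transform.
def pipe(stream, *transforms):
    for t in transforms:
        stream = [out for value in stream for out in t(value)]
    return stream

dot_foo = lambda v: [v["foo"]]      # .foo      -- one-to-one
iterate = lambda v: list(v)         # .[]       -- one-to-many
def select(f):                      # select(F) -- one-to-zero-or-more
    return lambda v: [v for t in f(v)
                      if t is not False and t is not None]

docs = [{"foo": None}, {"bar": 1, "foo": 42}]
print(pipe(docs, select(dot_foo)))   # [{'bar': 1, 'foo': 42}]

arrays = [[False, 1], [], [2]]
print(pipe(arrays, iterate))         # [False, 1, 2]
```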


Don't listen to most of the people here. Perl? What are they talking about? It is a concatenative shell. Think Forth.

That said, the examples could be better explained, just as you did here. The examples are not bad, but things like the `f<` iterator probably need some explaining.

I really like the idea. A simpler shell language is something I've wanted. Either Lisp or Forth would work, but the basic issue is that bash and the rest are so complicated, with so many special things you have to know. Doing anything beyond the most basic shell scripts is horrible.


> That said, the examples could be better explained, just as you did here. The examples are not bad, but things like the `f<` iterator probably need some explaining.

Thanks, the readme has been expanded now so that it has better examples, and the full documentation (https://github.com/tomhrr/cosh/blob/main/doc/all.md) is more clearly marked for those who are looking for more detail.


I love the premise. I've got to be your target audience, but the examples are too hard for me: both sh & cosh.

I don't use `find` and I would have to look up `-0`

Why does cosh use `;`? The syntax kinda looks like it's not needed?


> but the examples are too hard for me: both sh & cosh

Thanks, I'll look at adding some simpler examples.

> I would have to look up `-0`

The thing about '-0' is that it's not required in cosh, because you're dealing with proper values instead of text streams. The problem that '-0' (and '-print0') is addressing doesn't arise.
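A quick illustration (in Python) of the underlying problem that '-print0' and '-0' solve: a NUL byte can never occur in a filename, so it is a safe separator where a newline is not.

```python
names = ["a.txt", "weird\nname.txt"]   # the second name is legal on POSIX

# Newline-separated output cannot represent it:
print("\n".join(names).split("\n"))    # ['a.txt', 'weird', 'name.txt'] -- 3 "files"

# NUL-separated output can, which is what -print0/-0 provide:
print("\0".join(names).split("\0"))    # ['a.txt', 'weird\nname.txt'] -- correct
```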

> Why does cosh use `;`? The syntax kinda looks like it's not needed?

';' needs to be used to denote the previous string (token) as a function where that can't be determined from context. For example, if you type '1 2 +' and press enter, the shell will assume that because there's a function called '+', the intention is to run that function, but you could also enter '1 2 +;' (or '1 2 + ;') to get the same result. Whereas '1 2 + 1 2 + +' doesn't work, because the shell doesn't know whether the first two '+' symbols are meant to be interpreted as function calls or as plain strings. The other place where a function call is assumed is at the end of an anonymous function, so `[1 2 +]` and `[1 2 +;]` have the same effect.


It may not be helpful in your particular case depending on how your Slack is administered and similar, but my setup is like so:

- https://github.com/wee-slack/wee-slack with WeeChat (weechat.org) for IM only (i.e. configured so that it only notifies when an IM (one-on-one or group) is received); and

- https://github.com/tomhrr/paws for retrieving messages from Slack as email, and sending responses to those emails to Slack.

https://github.com/nicm/fdm is used for all filtering of email, including Slack messages. This allows for rules like e.g. marking everything from Slack as read, unless it's from channel X and matches your username, or it was sent after 6pm, or similar.

With this setup, IMs still come through as IMs, but everything else goes to email and is treated like email. Retrieving email and Slack messages happens based on local configuration, so it can e.g. be set up to fetch once per hour, and then all of those messages can be dealt with in one go. As the filters are refined, the number of useless messages that have to be reviewed decreases. With this configuration, at least in my experience, Slack is much less of a nuisance.

