
  • cosmos0072 3 hours ago
    First stable release appeared on HN one year ago: https://news.ycombinator.com/item?id=43061183 Thanks for all the feedback!

    Today, version 1.0.0 adds structured pipelines: a mechanism to exchange (almost) arbitrary objects via POSIX pipes, and transform them via external programs, shell builtins or Scheme code.

    Example:

      dir /proc | where name -starts k | sort-by modified
    
    possible output:

      ┌───────────┬────┬───────────────┬────────┐
      │   name    │type│     size      │modified│
      ├───────────┼────┼───────────────┼────────┤
      │kcore      │file│140737471590400│09:32:44│
      │kmsg       │file│              0│09:32:49│
      │kallsyms   │file│              0│09:32:50│
      │kpageflags │file│              0│10:42:53│
      │keys       │file│              0│10:42:53│
      │kpagecount │file│              0│10:42:53│
      │key-users  │file│              0│10:42:53│
      │kpagecgroup│file│              0│10:42:53│
      └───────────┴────┴───────────────┴────────┘
    
    Another example:

      ip -j route | select dst dev prefsrc | to json1
    
    possible output:

      [{"dst":"default","dev":"eth0"},
      {"dst":"192.168.0.0/24","dev":"eth0","prefsrc":"192.168.0.2"}]
    
    Internally, objects are serialized before being written to a pipe - by default as NDJSON, but the format can be configured - and deserialized when read back from the pipe.
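
    The mechanism can be sketched generically like this (a minimal Python sketch of NDJSON-over-a-POSIX-pipe, not this shell's actual implementation; the helper names are made up for illustration):

    ```python
    import json
    import os

    def write_objects(fd, objects):
        """Serialize each object as one JSON line and write it to the pipe."""
        with os.fdopen(fd, "w") as w:
            for obj in objects:
                w.write(json.dumps(obj) + "\n")

    def read_objects(fd):
        """Read NDJSON lines from the pipe and deserialize them back into objects."""
        with os.fdopen(fd, "r") as r:
            return [json.loads(line) for line in r if line.strip()]

    # A POSIX pipe: one end is written by the producing step,
    # the other is read by the consuming step.
    r, w = os.pipe()
    write_objects(w, [{"name": "kcore", "type": "file"},
                      {"name": "kmsg", "type": "file"}])
    objs = read_objects(r)
    ```

    Because the wire format is plain NDJSON, either end of the pipe can just as well be an ordinary external program that knows nothing about the shell.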

    This allows arbitrary transformations at each pipeline step: filtering, choosing a subset of the fields, sorting with user-specified criteria, etc. And each step can be an executable program, a shell builtin or Scheme code.
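
    A single step in such a pipeline might look like this (a hypothetical sketch in Python, not this shell's code; it mirrors the `where name -starts k | select ... | sort-by modified` example above by filtering, projecting fields, and sorting NDJSON records):

    ```python
    import json

    def step(ndjson_lines):
        """One pipeline step: deserialize, filter, project, sort, re-serialize."""
        records = [json.loads(line) for line in ndjson_lines if line.strip()]
        # Filter: keep records whose "name" starts with "k".
        kept = [r for r in records if r.get("name", "").startswith("k")]
        # Project: choose a subset of the fields.
        projected = [{"name": r["name"], "modified": r["modified"]} for r in kept]
        # Sort: by the user-specified criterion.
        projected.sort(key=lambda r: r["modified"])
        # Re-emit NDJSON for the next step in the pipeline.
        return [json.dumps(r) for r in projected]

    sample = [
        '{"name": "kmsg", "type": "file", "modified": "09:32:49"}',
        '{"name": "uptime", "type": "file", "modified": "09:00:00"}',
        '{"name": "kcore", "type": "file", "modified": "09:32:44"}',
    ]
    out = step(sample)
    # out[0] → {"name": "kcore", "modified": "09:32:44"}
    ```

    Wrapped in a stdin/stdout loop, the same function would work as a standalone executable step; inside the shell, the equivalent could be a builtin or Scheme code operating on the deserialized objects directly.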

    If you know nushell, structured pipelines will feel familiar, as they are inspired by it - the implementation is fully independent, though.