3 comments

  • nneonneo 17 hours ago
    In my setup, I have one host that can take multiple actions on a second host, with a restricted set of file paths.

    I created a shell script on the second host called "from_host_1" which parses the first host's request and any file paths it supplies, validates them, translates the paths as needed, and then executes the corresponding program.

    This way, I can just use a single SSH key which can perform multiple functions. On the first host, I have a bunch of tiny scripts like `~/bin/func`, each of which basically runs `ssh secondhost func "$@"`.
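
    A rough sketch of what that can look like, assuming the key is pinned to the dispatcher with a forced command in authorized_keys (the command names and paths below are made up for illustration):

      # ~/.ssh/authorized_keys on secondhost: every use of this key runs the
      # dispatcher, no matter what command the client asked for.
      restrict,command="/home/user/bin/from_host_1" ssh-ed25519 AAAA... host1-automation

      # /home/user/bin/from_host_1 on secondhost (dispatcher sketch)
      #!/bin/sh
      set -- $SSH_ORIGINAL_COMMAND          # split the client's request into words
      case "$1" in
        fetch-log)
          # validate the supplied path, then translate it to a local one
          case "$2" in
            *..*|/*) echo "bad path" >&2; exit 1 ;;
          esac
          exec cat "/var/log/app/$2"
          ;;
        restart-app)
          exec systemctl restart myapp
          ;;
        *)
          echo "unknown command: $1" >&2
          exit 1
          ;;
      esac

      # ~/bin/fetch-log on the first host: one of the tiny wrapper scripts
      #!/bin/sh
      exec ssh secondhost fetch-log "$@"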

    In the OP's case, they seem to have two different hosts which can run two different commands. Two separate SSH keys seem like a reasonable thing to do, because sharing the same key across two systems increases your risk if one machine is compromised.

  • pickle-wizard 18 hours ago
    Handy stuff. This would be good for restricting service accounts.

    There is a whole lot that SSH can do that most people don't know about.
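
    For example, a service account's key can be pinned to a single program with authorized_keys options (the account and script names here are hypothetical):

      # authorized_keys for a hypothetical "deploy" service account:
      # "restrict" disables PTY allocation and all forwarding, and
      # command= pins the key to one program.
      restrict,command="/usr/local/bin/run-deploy" ssh-ed25519 AAAA... deploy-bot

    The caller still runs plain ssh, but only that one program ever executes on the server.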

    • m463 17 hours ago
      > There is a whole lot that SSH can do that most people don't know about.

      I had to port ssh to embedded hardware decades ago, and after pulling back the curtains I came to the opinion that everything was a mess.

      For example, I needed to be able to upload/download firmware, and was surprised to find that scp isn't a pure file transfer protocol. It is more like "log into the remote system via a shell and run a file transfer program".
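
      Roughly, and from memory, `scp firmware.bin host:/tmp/` just asks the remote login shell to start the copy program in "sink" mode, then streams an ad-hoc header and the file bytes over that channel:

        # what the client effectively runs on the remote side, via your shell:
        ssh host "scp -t /tmp/"
        # ...and then it speaks the old rcp-style stream over stdin/stdout:
        #   C0644 <size> firmware.bin    (mode, length, name)
        #   <raw file bytes>
        #   \0                           (end-of-file marker)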

      There are lots of other things I didn't like, such as the wholesale transfer of environment variables back and forth, weird shell interactions, and more.

      It is very useful, but it is an organically grown program, not a designed protocol.

      • woooooo 15 hours ago
        Scp not needing its own protocol is a feature and not a bug in my book.
        • m463 14 hours ago
          Thing is, there IS a transfer protocol; there are just no controls on the files. If you can log in, that's the only security there is.

          Just take a step back and think about what you could do if it were a protocol:

          - limit visible files

          - limit access to files by user

          - make access strictly read-only

          - allow upload-only (sort of a dropbox)

          - clear separation between login access and file access

          - remove login user from the whole mess

          - trivially tie in as a filesystem.

          etc...

          • rad_gruchalski 13 hours ago
            But why? It can be done with ssh and some mix of Linux permissions. It's simple. There's always room for more complexity.
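
            For instance, OpenSSH's built-in SFTP subsystem plus a Match block already covers a fair bit of that list; a sketch, with the group name and paths invented:

              # sshd_config sketch (group name and paths are made up)
              Match Group filedrop
                  ChrootDirectory /srv/files/%u      # limit what's visible
                  ForceCommand internal-sftp -R      # file transfer only, read-only
                  AllowTcpForwarding no
                  PermitTTY no                       # no interactive login

            Drop the -R for read-write access, and sshfs will mount the same thing as a filesystem.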
            • m463 8 hours ago
              I like the simplicity of controlling everything with a hypothetical scp.conf:

                default
                  access none /dev /sys /proc
                user foo
                  access ro /var/scp/firmware
                  access rw /var/scp/user-foo
                user anonymous
                  access w /var/scp/dropbox
                user joe
                  access rw /home/joe
                user fred
                  access rw /
                user backup
                  access ro /
  • 3np 14 hours ago
    For Linux hosts on ZFS, this, coupled with an explicit sudoers entry, is useful for remote zfs send/receive, which requires root.
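
    A sketch of that combination, with the user, dataset, and path names made up:

      # /etc/sudoers.d/zfs-recv on the receiving host
      backup ALL=(root) NOPASSWD: /usr/sbin/zfs receive -F tank/backups/web01

      # authorized_keys for the backup user, pinned to exactly that command
      restrict,command="sudo /usr/sbin/zfs receive -F tank/backups/web01" ssh-ed25519 AAAA... web01-backup

      # on the sending host
      zfs send -i tank/data@prev tank/data@now | ssh backup@backuphost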