We have these at work and there was a big outcry when IT tried to get rid of them. I use them from time to time to do things like: Keep git and other backups. Convenient place to scp files to/from. Upload files that coworkers can grab. I don't use it, but others used them as permanent IRC endpoints (using screen or tmux presumably).
Notice a cloud VM or container probably doesn't work here. You need something with a permanent presence, and shared between users (with separate Unix accounts).
Using IRC for the company IM is true old-school. I've worked at places which do the same, and also use SIP for A/V calls. Not being dependent on Big Tech, or the Internet for that matter, for services which exist entirely inside the corporate LAN is a great way to work.
Uh, how?.. It’s a conversation window with a contact list to the left, with GNOME-standard (libadwaita) look and feel. If you want separate conversation windows (like ICQ or Pidgin), then I understand the desire, but it’s pretty much orthogonal to web-view-ness (also very rare these days regardless of protocol).
Maybe Adwaita looks like that, I don't know. But there is a ton of whitespace, lots of clickable UI elements that look just like regular text, that kind of thing.
We moved off IRC to Slack ages ago. Then they decided Slack was too expensive, so we were forced onto Teams, which is bundled with the inescapable O365 license. We now use IRC again (UnrealIRCd), which runs on a Debian VM on someone's workstation in the office.
irssi is surprisingly decent; it would even make you think the IRC protocol was designed around irssi (and TUIs), although the protocol is actually much older.
Mostly annoying network configs and token expirations etc. Not saying it can't be set up well, but in my experience some security guru gets a hardon for making my life miserable.
That depends... If your AWS account is already wired up the internal network then the ec2 is basically just another VM just like the onprem VMware servers or physical machines.
Now if all your AWS accounts are only public facing, then yes, it can get a bit more complicated.
Perhaps it is worth noting that all the supercomputers I know of (like the Dutch Snellius and the Finnish LUMI) are Slurm clusters with login nodes.
Bioinformaticians (among others) in (for example) University Medical Centers won’t get much more bang for the buck than on a well-managed Slurm cluster (i.e. with GPU and fat nodes etc. to distinguish between compute loads). You buy the machines, and they are utilized close to 100% over their lifetime.
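Distinguishing compute loads on a Slurm cluster usually comes down to submitting to the right partition. A minimal batch-script sketch; the partition names (`gpu`, `himem`), script name, and program are hypothetical and vary per site:

```shell
#!/bin/bash
#SBATCH --job-name=align        # job name shown in squeue
#SBATCH --partition=gpu         # hypothetical GPU partition; a fat node might be "himem"
#SBATCH --gres=gpu:1            # request one GPU on that node
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=04:00:00

srun ./align_reads input.fastq
```

Submitted with `sbatch align.sh`; `sinfo` lists the partitions a given site actually defines.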
Yes, I spend the majority of my professional life on similar systems, writing code in vim and running massive jobs via Slurm. Required for processing TBs of data in secured environments with seamless command-line access. I hate web-based connections or VS Code-type systems. Although I'm open to any improvements, this works best for me. It’s like a world inside one’s head with a text-based interface.
Graphical data exploration and stats with R, python, etc is a beautiful challenge at that scale.
Aside from how slow and user-hostile it is compared to a text editor, my biggest complaint about VS Code is the load it puts on the login node. You get 40 people each running multiple VS Code servers and it brings the poor computer to its knees.
One of the more prominent uses of Slurm to hit the headlines recently is by the data access centres for the LSST data from the Vera Rubin Observatory, such as the U.S. facility at Stanford and the U.K. facility at the University of Edinburgh's Somerville.
In this context, it is a Unix machine configured for general purpose use (so no minimal web-server-centric OS, for example) where everybody in the organisation gets a login (probably automatically and tied to whatever SSO they use). There may also be some niceties installed and/or preconfigured - like software used frequently by this and that research group or auto-mounting the user's file share from the central storage cluster.
Running a remote VS Code backend on those is technically using the traditional shared server, but it feels like that's not for the "shared" part - just because it is free and available?
Free and available is always good (especially in a research environment). ;) The author also says this in the P.S.:
"I believe the reason people run IDE backends on our login servers is because they have their code on our fileservers, in their (NFS-mounted) home directories. And in turn I suspect people put the code there partly because they're going to run the code on either or both of our SLURM cluster or the general compute servers."
I use VSCode's remote/ssh functionality all the time, particularly when I need to develop code on an environment that's more capable than my local machine (or when my internet is weak). Still use Git, no reason why you'd change that when working on a remote machine.
I mean, rather than working off a remote machine for the convenience of having files you can deploy to a compute cluster, use git (or maybe scp) to work locally and then deploy when needed - for a lower-latency editing experience.
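That workflow is just an ordinary git round-trip; a sketch under assumed names (the remote `cluster` and host `login.example.edu` are stand-ins):

```shell
# work in a throwaway directory; edit and commit locally
cd "$(mktemp -d)"
git init -q work && cd work
echo 'print("hello from the cluster")' > job.py
git add job.py
git -c user.name=dev -c user.email=dev@example.com commit -qm "edit locally"

# deploying is then one push plus a remote run (hypothetical host/paths):
#   git remote add cluster login.example.edu:~/work.git
#   git push cluster main
#   ssh login.example.edu 'cd ~/work && sbatch run.sh'

git log --oneline   # one local commit so far
```

The editor stays local (no round-trip latency per keystroke); only the deploy step touches the network.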
Open up other machines to the internet and recommend people upload their code to github instead of the school's file server. People who want to read email can use their web browser to load gmail or outlook depending on what the school goes with. For the cron jobs I would want to know what is being scheduled before providing a recommendation on how you can get rid of the login server for it.
It’s a University: students are there to learn. Multiuser UNIX machines are a great place to learn; I probably wouldn’t be where I am now if I hadn’t cut my teeth on AIX, with ircii, pine for mail, X Windows, and a public_html directory making exploration fun and easy. Sure, you can give a student Outlook365 access and Wordpress, but will he learn anything from them?
The servers are for professors and graduate students, so this IT guy wants to block PhD students from doing AI research, demanding that they explain their cron jobs to him, because he thinks he can optimize them and save a few bucks.
And when he does, the blast radius is a single unix box/VM or even a single account. You don't put institutionally-critical stuff on things like this. If it gets hacked, you shut it down, have everyone who logged in during the window change their password, and restore from backups or rebuild from scratch.
Going around a CS research department and asking the population there to justify their cron jobs and other stuff they run sounds like an excellent use of a (rare-ish) university admin / IT staff's time... /s
And deal with the ire of Professor Foo (and grad students Bar and Baz), who want/need to use obscure software XYZ and have done so for years (or decades) without any fuss. And build some interface between your HPC clusters and GitHub. And keep in line with regulations and agreements on privacy and security. And so on and so forth. All that for the low, low cost of... Well, certainly not less than keeping and maintaining like one to three unix machines that don't need no fancy hardware or other special attention in the data center you are maintaining anyway?! Why?
edit: By the way, from their documentation, the department mentioned by the author runs its own e-mail servers (as many universities do - fortunately, in this world, there often still is a bit more choice than 'use Gmail/Outlook in the browser').
One of the things about Research Universities in particular is that there's always weird custom stuff everywhere. Thus "But why isn't it standard?" is a very stupid question. Doing the same things over and over but expecting different results isn't research, it's insanity. So they're doing non-standard things because that's the whole point. There are systems that are "just" ordinary corporate stuff: Finance are not researching whether to pay suppliers, HR are not researching whether work contracts exist. But a lot of the organisation at any time is engaged in research; otherwise you're just a teaching university, and to some extent that can actually be standardized, and is the worse for it.
Because this is something to do and solving real problems is way more annoying.
If you work on a smaller cluster in a research institution where in silico work represents only a small portion of the research output, the management of the cluster will sometimes be subcontracted out to the general IT support shop. There an administrator - usually with not nearly enough experience - will start receiving support requests from users with decades of unix experience, requests which take hours of research to solve. Unable or unwilling to do so (and because inactivity will look bad at the next meeting of department heads), the technician will start working on some "security" matter (so it sounds urgent and important). And this is how the elimination of login nodes, cutting internet access to compute nodes, the elimination of switches in the offices because they pose security risks (one might be able to connect additional devices to the network), and the implementation of 2FA on pubkey-only login servers come into existence.
Most of the cluster operators are wonderful. But a bad one can make a cluster significantly less useful.
We still use IRC in some upstream communities although it has been replaced by Matrix in some (which is also terrible).
Clients are all irssi on WSL2 or Macs.
* https://developer.lsst.io/usdf/batch.html
* https://epcc.ed.ac.uk/hpc-services/somerville
But they're all over the place, from the James Hutton Institute to Imperial College London.
* https://www.cropdiversity.ac.uk
* https://www.imperial.ac.uk/computing/people/csg/guides/hpcom...
One time X forwarding Matlab saved me a hefty sum of money (for a student) as I could complete an assignment remotely.
Our admins urged people to nice their processes, but my OverTheWire password-cracking sessions were always killed no matter how nice they were.
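For reference, `nice` only lowers a process's CPU scheduling priority; it does nothing about memory use or an admin's kill scripts. A minimal example:

```shell
# run a command at the lowest CPU priority (niceness 19);
# lowering priority never requires root
nice -n 19 sh -c 'echo "crunching"'

# show the niceness of the current shell (the NI value)
ps -o ni= -p $$
```

An admin can also lower the priority of an already-running process with `renice -n 19 -p <pid>`.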
https://www.public.outband.net
Probably one of the stupider things I have thrown together, but I had fun making it.
The author proposed the thought experiment. Ask him, not me.