I came across this website when I was looking for IBM PC OEM fonts for a little HTML + Canvas-based invaders-like game I was developing a few years ago. It is impressive how much effort VileR has poured into recovering each OEM font and their countless variants, from a wide range of ROMs. The site not only archives them all with incredible attention to detail, but also offers live previews, aspect ratio correction and other thoughtful features that make exploring it a joy. I've spent numerous hours there comparing different OEM fonts and hunting down the best ones to use in my own work!
I've been using the Px437 Verite 9x16 font from this pack as my main terminal font for years now, and couldn't be happier with it. VileR's font pack is great both for retro use cases, like displaying ANSI art, and for modern ones.
And recode(1) has full support for ISO-8859-*. As do iconv and the Python 3 codecs module. I'm pretty sure browsers can render pages in them, too. Firefox keeps rendering UTF-8 pages as if they were ISO-8859-1 encoded when I screw up setting the charset parameter on their Content-Type.
It seems incompatible with the idea that it's "Gone. Forever." Thinking again doesn't change that for me. The only thing that's gone is the exclusivity to a single proprietary-software vendor.
Hopefully the people after us will spend some time enjoying the things we have left to them; if they dedicate all their time to creating things that will outlast them, all our efforts will have been wasted.
I was personally looking for a bitmap font that resembled old fantasy games for use in a kernel. I was able to write a compile-time constant parser for the .hex file format used here.
Sixel support unfortunately came to terminals in 01988, as that page explains. I saw it myself in 01992. Sending uncompressed color raster data over a 9600-baud serial link again every time you wanted to look at it was a terrible idea, made worse by the stupid Sixel encoding inflating it by an additional 33%.
Today, when we're sending it to terminal emulators running on teraflops supercomputers over gigabit-per-second links, it's only a waste of CPU and software complexity instead of user time and precious bandwidth. But it's still a waste.
Why couldn't we have FTP and Gopher support in web browsers instead?
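To make that 33% figure concrete, here's a rough sketch of my own (illustrative only, not DEC's reference encoder or anything from the thread): a minimal single-color Sixel emitter in Rust. Each data byte is just '?' (0x3F) plus a 6-bit column of pixels, so every 6 bits of raster data cost a full byte on the wire before you even count the palette and control characters.

    // Illustrative sketch only: encode a monochrome bitmap as a one-color Sixel image.
    fn to_sixel(pixels: &[Vec<bool>]) -> String {
        let mut out = String::from("\x1bPq");   // DCS ... q: enter Sixel mode
        out.push_str("#0;2;0;100;0#0");         // define color 0 (RGB percentages) and select it
        for band in pixels.chunks(6) {          // Sixel emits bands of 6 pixel rows
            for x in 0..band[0].len() {
                let mut six = 0u8;
                for (bit, row) in band.iter().enumerate() {
                    if row[x] {
                        six |= 1 << bit;        // bit 0 = top row of the band
                    }
                }
                out.push((0x3F + six) as char); // one output byte per 6-pixel column
            }
            out.push('-');                      // graphics newline: move to the next band
        }
        out.push_str("\x1b\\");                 // ST: leave Sixel mode
        out
    }

    fn main() {
        // A 12x6 solid block: every column is all ones, i.e. the byte '~'.
        let block = vec![vec![true; 12]; 6];
        println!("{}", to_sixel(&block));
    }

The palette definitions and the RLE escape help a bit in practice, but the 6-bits-per-byte floor is baked into the format.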
Site isn't loading, but I have a neat side project that works with any monospace font that includes the right Unicode glyphs. It converts raw binary to Unicode and back: 7-bit ASCII characters pass through, control characters become related symbol representations, and it sticks to actually-monospace glyphs (a surprising number of glyphs break the width rule across various "monospace" fonts), while ALSO being denser and more directly legible than hex encoding: https://github.com/pmarreck/printable-binary
Each UTF-8 character (1 to 3 bytes) corresponds to 1 byte of input data. The average increase in data size is about 70%, but you gain binary independence in any medium that understands UTF-8 (email, the terminal, unit tests, etc.).
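For anyone curious what a one-byte-to-one-character scheme can look like, here's a tiny hypothetical sketch in Rust; it is not printable-binary's actual alphabet (the U+2400 offset is made up for illustration), just the general shape of the idea: printable ASCII passes through unchanged, and every other byte value shifts into a range that costs 3 bytes in UTF-8, keeping the transform reversible.

    // Hypothetical mapping for illustration only; the real alphabet is chosen
    // for legibility and monospace width, which this one is not.
    fn encode(data: &[u8]) -> String {
        data.iter()
            .map(|&b| match b {
                0x21..=0x7E => b as char, // printable ASCII passes through (1 byte in UTF-8)
                // everything else lands in U+2400.. (3 bytes each in UTF-8)
                _ => char::from_u32(0x2400 + b as u32).unwrap(),
            })
            .collect()
    }

    fn decode(text: &str) -> Vec<u8> {
        text.chars()
            .map(|c| match c as u32 {
                0x21..=0x7E => c as u8,    // ASCII maps back to itself
                cp => (cp - 0x2400) as u8, // assumes the text came from encode()
            })
            .collect()
    }

    fn main() {
        let data = [0x00, 0x41, 0x0A, 0xFF];
        let text = encode(&data);
        assert_eq!(decode(&text), data); // round-trips
        println!("{}", text);
    }

Note that with this made-up offset the average overhead on random bytes would be well above 70%; the real project picks its range more carefully.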
The favicon is either exactly or a really close copy of The Grate Book of Moo's logo. Hopefully that's not too obscure for Hacker News, but you never know.
> Unscii is a set of bitmapped Unicode fonts based on classic system fonts. Unscii attempts to support character cell art well while also being suitable for terminal and programming use.
It took several seconds to load for me, so here's the first paragraph. It's a good first paragraph, though!
I like the look of this a lot! Especially how condensed it is, similar to my favorite monospace TrueType font Iosevka Term. The ANSI color rendering looks phenomenal.
I'll definitely give this a try in my Linux TTY. Thanks for sharing!
A great deficiency of Unifont mentioned several times in the other thread was its lack of combining-character support, and the absence of alternative glyphs for the code points in scripts like Arabic (well, and Engsvanyáli) whose form is affected by joiner or non-joiner context. Does anyone know if Unscii does better at this?
From opening it in Fontforge, Unscii seems to have pretty broad coverage, including things like Bengali, Ethiopic, and even runic, plus pretty full CJK(V) coverage. It seems to have some of the CSUR https://www.evertype.com/standards/csur/ assignments, such as the Tengwar of Feanor in the range U+E000 to U+E07F, but has conflicting assignments for some other ranges, like the Cirth range U+E080 to U+E0FF (present in Unifont but arguably duplicative with the runic block), which is assigned to Teletext/Videotex block mosaics. I note that my system has different conflicting assignments for this range, with Tux at U+E000 followed by a bunch of dingbats, while the Cirth range is a bunch of math symbols.
Given that astral-plane support is virtually universal in Unicode implementations these days (thanks largely to emoji) it might be better for future such efforts to use SPUA and SPUB to reduce the frequency of such codepoint clashes. SPUA and SPUB are each the size of the entire BMP: https://en.wikipedia.org/wiki/Private_Use_Areas
For day-to-day use of semigraphic characters, I ran into the problem two hours ago in https://news.ycombinator.com/item?id=46277275 that the "BOX DRAWING" vertical lines don't connect, consequently failing to draw proper boxes. I had the same problem in Dercuano, where I fixed it by reducing the line-height for <pre> elements. The reason seems to be that Firefox defaults line-height to "normal", which is apparently equivalent to "1.41em", which doesn't sound very normal to me (isn't an "em" defined as the normal line height?), and, although the line-drawing characters in my font (which seems to be Noto Sans Mono) are taller than 1em, they still don't reliably join up if the line-height is taller than 1.21em.
Chromium does the same thing, except its abnormal definition of "normal" is evidently more like 1.35em.
It's probably too late to make a change to the standard HN stylesheet so major as
pre { line-height: 1.2em }
since it would change the rendering of the previous decades of comments. It would be a significant improvement for things like what I was doing there, and I don't think it would be worse for normal code samples. However, given the lengths to which the HN codebase goes to limit formatting (replacing characters like U+2009 THIN SPACE with regular spaces, stripping out not just emojis but most non-alphanumeric Unicode such as U+263A WHITE SMILING FACE, etc.) maybe discouraging the use of these semigraphics is intentional?
If not, though, perhaps the fact that the line-height is already different between Chromium and Firefox represents a certain amount of possible flexibility...
Obviously the line-height would be a much more serious problem for the kinds of diagonal semigraphic characters that viznut is largely focusing on here; those would strictly require a line-height of exactly 1em, which I think would substantially impair the readability of code samples.
I ended up writing a Rust parser for the .hex file format for use in my kernel[1], so I can now display the fantasy kernel on bare metal :)
[1]: https://github.com/LevitatingBusinessMan/runix/blob/limine/s...
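For anyone who hasn't met it, the .hex format (GNU Unifont's, which unscii also ships) is pleasantly simple: one glyph per line, a hex codepoint, a colon, and a hex bitmap, 32 digits for an 8x16 glyph or 64 for 16x16. A quick illustrative parser, not the runix one linked above, with a made-up 'A' glyph for the demo:

    // Illustrative only: parse one .hex line into a codepoint plus one row
    // of pixel bits per entry.
    struct Glyph {
        codepoint: u32,
        rows: Vec<u16>,
    }

    fn parse_hex_line(line: &str) -> Option<Glyph> {
        let (cp, bitmap) = line.trim().split_once(':')?;
        let codepoint = u32::from_str_radix(cp, 16).ok()?;
        let digits_per_row = match bitmap.len() {
            32 => 2, // 8x16 glyph: one byte per row
            64 => 4, // 16x16 glyph: two bytes per row
            _ => return None,
        };
        let rows = bitmap
            .as_bytes()
            .chunks(digits_per_row)
            .map(|c| u16::from_str_radix(std::str::from_utf8(c).ok()?, 16).ok())
            .collect::<Option<Vec<u16>>>()?;
        Some(Glyph { codepoint, rows })
    }

    fn main() {
        // Made-up 8x16 'A' in .hex syntax, just for the demo.
        let glyph = parse_hex_line("0041:0000000018242442427E424242420000").unwrap();
        assert_eq!(glyph.codepoint, 0x41);
        for row in &glyph.rows {
            println!("{:08b}", row); // the low 8 bits are the pixels of an 8-wide glyph
        }
    }

Making something like this const-evaluable for a kernel mostly means swapping the Vec for fixed-size arrays and the str helpers for hand-rolled hex-digit decoding.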
See also: The Ultimate Oldschool PC Font Pack from VileR at <https://int10h.org/oldschool-pc-fonts/fontlist/>.
http://viznut.fi/ibniz/
I'm envious of the level of nerdiness and genius on display, and hope some of it rubbed off on me while watching that demo.
[1] https://www.nerdfonts.com
Out of curiosity I checked with lsof; apparently other fonts are used as fallbacks:
/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf
/usr/share/fonts/truetype/droid/DroidSansFallbackFull.ttf
/usr/local/share/fonts/MS/segmdl2.ttf
/usr/local/share/fonts/MS/seguisym.ttf
/usr/local/share/fonts/nerd/Iosevka/IosevkaNerdFont-Regular.ttf
/usr/local/share/fonts/nerd/JetBrainsMono/JetBrainsMonoNerdFontMono-Regular.ttf
At least the result is perfect!
So is Webdings: https://www.dafontfree.io/webdings-font/
Webdings even got integrated into Unicode 7.0, so all the Noto fonts support it: https://en.wikipedia.org/wiki/Webdings
> And recode(1) has full support for ISO-8859-*.
That's the point. Think again.
Do you have a link to the MUD you're working on?
https://en.wikipedia.org/wiki/Sixel
We've come full circle, 40 years later.
> Why couldn't we have FTP and Gopher support in web browsers instead?
I mean, not really: they are ancient and horribly insecure protocols without enough users to justify improving them.
Also, you may not have noticed this, but you're commenting on a thread that's largely about PETSCII and Videotex.
Fortunately, AFAIK, there isn't any significant body of existing Sixel art we need to preserve access to.
> The average increase in data size is about 70%
Nice work! But if you want something like this in production, base64 only increases the size by 33% (every 3 bytes become 4 characters).
CNXT = Constantine's Nine x Twenty
https://github.com/cbytensky/cnxt
> It took several seconds to load for me
Also:
https://farside.link
https://lite.cnn.com
https://text.npr.org
I won't have to wait seconds (!!!) to read it
I come to the comments to find out what these "clickbait title" articles (meaningless words with no context) really are before clicking.
Secondly, the site appears to be "hug of death"'d at the moment. I presume it was still accessible but struggling when OP posted.
https://news.ycombinator.com/item?id=41370020