Today I wanted to write a script that I could execute from both *nix (using Bash or a similar shell) and on Windows (from a command prompt, i.e. a batch file). I found Max Norin’s solution which works, but has a few limitations, e.g. when run it outputs either the word “off” when run in *nix or the word “goto” when run on Windows. There’s got to be a sneaky solution, right?
Here’s my improved version:
@goto(){
  # Linux code here
  uname -o
}

@goto $@
exit

:(){
@echo off
rem Windows script here
echo %OS%
Mine exploits the fact that batch files can prefix commands with @ to suppress outputting them as they execute. So @goto can be a valid function name in bash/zsh etc. but is interpreted as a literal goto command in a Windows Command Prompt. This allows me to move the echo off command – which only has meaning to Windows – into the Windows section of the script and suppress it with @.
The file above can be saved as e.g. myfile.cmd and will execute in a Windows Command Prompt (or in MS-DOS) or your favourite *nix OS. Works in MacOS/BSD too, although obviously any more-sophisticated script is going to have to start working around the differences between GNU and non-GNU versions of core utilities, which is always a bit of a pain! Won’t work in sh because you can’t define functions like that.
But the short of it is you can run this on a stock *nix OS and get:

GNU/Linux
And you can run it on Windows and get:

Windows_NT
You can’t put a shebang at the top because Windows hates it, but there might be a solution using PowerShell scripts (which use hashes for comments: that’s worth thinking about!). In any case: if the interpreter strictly matters you’ll probably want to shell to it on line 3 with e.g. bash -c.
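Here's a sketch of that variant (my own illustration, not from the original post): the *nix branch shells out with bash -c, so its body always runs under bash even if the file was invoked with zsh or another shell. The printed strings are made up for demonstration.

```shell
@goto(){
  # *nix branch: delegate explicitly to bash so the interpreter is predictable
  bash -c 'printf "hello from bash\n"'
}

@goto "$@"
exit

:(){
@echo off
rem Windows branch: never parsed by the shell, because we exited above
echo hello from %OS%
```

As before, everything after `exit` is only ever seen by the Windows Command Prompt.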
Why would you want such a thing? I’m not sure. But there it is, if you do.
As you might know if you were paying close attention in Summer 2019, I run a “URL shortener” for my personal use. You may be familiar with public URL shorteners like TinyURL and Bit.ly: my personal URL shortener is basically the same thing, except that only I am able to make short-links with it. Compared to public ones, this means I’ve got a larger corpus of especially-short (e.g. 2/3 letter) codes available for my personal use. It also means that I’m not dependent on the goodwill of a free siloed service and I can add exactly the features I want to it.
For the last nine years my link shortener has been S.2, a tool I threw together in Ruby. It stores URLs in a sequentially-numbered database table and then uses the Base62-encoding of the primary key as the “code” part of the short URL. Aside from the fact that when I create a short link it shows me a QR code so I can easily “push” a page to my phone, it doesn’t really have any “special” features. It replaced S.1, from which it primarily differed by putting the code at the end of the URL rather than as part of the domain name, e.g. s.danq.me/a0 rather than a0.s.danq.me: I made the switch because S.1 made HTTPS a real pain as well as only supporting Base36 (owing to the case-insensitivity of domain names).
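The encoding step can be sketched in a few lines of shell. This is my illustration, not S.2's actual (Ruby) code, and the digit ordering 0-9, A-Z, a-z is an assumption:

```shell
# Sketch of how an S.2-style shortener turns a sequential primary key
# into a short code. Hypothetical reconstruction; digit order is a guess.
base62() {
  n=$1
  chars='0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
  out=''
  while [ "$n" -gt 0 ]; do
    # take digit (n mod 62); cut -c is 1-indexed, hence the +1
    out="$(printf '%s' "$chars" | cut -c "$(( n % 62 + 1 ))")$out"
    n=$(( n / 62 ))
  done
  printf '%s\n' "${out:-0}"
}

base62 12345   # a row id like 12345 becomes a three-character code
```

Because each extra character multiplies the keyspace by 62, even two- and three-letter codes cover thousands of links.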
But S.2’s gotten a little long in the tooth and as I’ve gotten busier/lazier, I’ve leant into using or adapting open source tools more-often than writing my own from scratch. So this week I switched my URL shortener from S.2 to YOURLS.
One of the things that attracted me to YOURLS was that it had a ready-to-go Docker image. I’m not the biggest fan of Docker in general, but I do love the convenience of being able to deploy applications super-quickly to my household NAS. This makes installing and maintaining my personal URL shortener much easier than it used to be (and it was pretty easy before!).
Another thing I liked about YOURLS is that it, like S.2, uses Base62 encoding. This meant that migrating my links from S.2 into YOURLS could be done with a simple cross-database INSERT... SELECT statement:
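The statement isn't reproduced here, but it would have looked something along these lines. The yourls_url columns are real (they appear in the query later in this post); the s2 database name and its links table/column names are purely my guesses, since S.2's schema isn't shown:

```sql
-- Hypothetical sketch only: the S.2-side names are invented, and it
-- assumes the short code is available as a column on the S.2 side.
INSERT INTO yourls.yourls_url (keyword, url, title, timestamp, ip, clicks)
SELECT code, url, title, created_at, '127.0.0.1', 0
FROM s2.links;
```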
One of S.1/S.2’s features was that it exposed an RSS feed at a secret URL for my RSS reader to ingest. This was great, because it meant I could “push” something to my RSS reader to read or repost to my blog later. YOURLS doesn’t have such a feature, and I couldn’t find anything in the (extensive) list of plugins that would do it for me. I needed to write my own.
I wrote a script like this and put it in my crontab:
mysql --xml yourls -e \
  "SELECT keyword, url, title, DATE_FORMAT(timestamp, '%a, %d %b %Y %T') AS pubdate FROM yourls_url ORDER BY timestamp DESC LIMIT 30" \
  | xsltproc template.xslt - \
  | xmllint --format - \
  > output.rss.xml
The first part of that command connects to the yourls database, sets the output format to XML, and executes an SQL statement to extract the most-recent 30 shortlinks. The DATE_FORMAT function is used to mould the datetime into something approximating the RFC-822 standard for datetimes as required by RSS. The output produced looks something like this:
<?xml version="1.0"?>
<resultset statement="SELECT keyword, url, title, DATE_FORMAT(timestamp, '%a, %d %b %Y %T') AS pubdate FROM yourls_url ORDER BY timestamp DESC LIMIT 30" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <row>
    <field name="keyword">VV</field>
    <field name="url">https://webdevbev.co.uk/blog/06-2021/perfect-is-the-enemy-of-good.html</field>
    <field name="title">Perfect is the enemy of good || Web Dev Bev</field>
    <field name="pubdate">Sun, 26 Sep 2021 17:38:32</field>
  </row>
  <row>
    <field name="keyword">VU</field>
    <field name="url">https://webdevlaw.uk/2021/01/30/why-generation-x-will-save-the-web/</field>
    <field name="title">Why Generation X will save the web – Hi, I’m Heather Burns</field>
    <field name="pubdate">Sun, 26 Sep 2021 17:38:26</field>
  </row>
  <!-- ... etc. ... -->
</resultset>
We don’t see this, though. It’s piped directly into the second part of the command, which uses xsltproc to apply an XSLT to it. I was concerned that my XSLT experience would be super rusty as I haven’t actually written any since working for my former employer SmartData back in around 2005! Back then, my coworker Alex and I spent many hours doing XML backflips to implement a system that converted complex data outputs into PDF files via an XSL-FO intermediary.
I needn’t have worried, though. Firstly: it turns out I remember a lot more than I thought from that project a decade and a half ago! But secondly, this conversion from MySQL/MariaDB XML output to RSS turned out to be pretty painless. Here’s the template.xslt I ended up making:
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="resultset">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
      <channel>
        <title>Dan's Short Links</title>
        <description>Links shortened by Dan using danq.link</description>
        <link>[ MY RSS FEED URL ]</link>
        <atom:link href="[ MY RSS FEED URL ]" rel="self" type="application/rss+xml" />
        <lastBuildDate><xsl:value-of select="row/field[@name='pubdate']" /> UTC</lastBuildDate>
        <pubDate><xsl:value-of select="row/field[@name='pubdate']" /> UTC</pubDate>
        <ttl>1800</ttl>
        <xsl:for-each select="row">
          <item>
            <title><xsl:value-of select="field[@name='title']" /></title>
            <link><xsl:value-of select="field[@name='url']" /></link>
            <guid>https://danq.link/<xsl:value-of select="field[@name='keyword']" /></guid>
            <pubDate><xsl:value-of select="field[@name='pubdate']" /> UTC</pubDate>
          </item>
        </xsl:for-each>
      </channel>
    </rss>
  </xsl:template>
</xsl:stylesheet>
That uses the first (i.e. most-recent) shortlink’s timestamp as the feed’s pubDate, which makes sense: unless you’re going back and modifying links, there are no changes more recent than the creation date of the most-recent shortlink. Then it loops through the returned rows and creates an <item> for each; simple!
The final step in my command runs the output through xmllint to prettify it. That’s not strictly necessary, but it was useful while debugging and as the whole command takes milliseconds to run once every quarter hour or so I’m not concerned about the overhead. Using these native binaries (plus a little configuration), chained together with pipes, had already resulted in way faster performance (with less code) than if I’d implemented something using a scripting language, and the result is a reasonably elegant “scratch your own itch”-type solution to the only outstanding barrier that was keeping me on S.2.
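For reference, the crontab entry for a quarter-hourly run might look like this (the script path and filename are my inventions, not from the original setup):

```
# Regenerate the shortlink RSS feed every 15 minutes (hypothetical path)
*/15 * * * * /home/dan/bin/shortlinks-to-rss.sh
```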
All that remained for me to do was set up a symlink so that the resulting output.rss.xml was accessible, over the web, to my RSS reader. I hope that next time I’m tempted to write a script to solve a problem like this I’ll remember that sometimes a chain of piped *nix utilities can provide me a slicker, cleaner, and faster solution.
Update: Right as I finished writing this blog post I discovered that somebody had already solved this problem using PHP code added to YOURLS; it’s just not packaged as a plugin so I didn’t see it earlier! Whether or not I use this alternate approach or stick to what I’ve got, the process of implementing this YOURLS-database ➡ XML ➡ XSLT ➡ RSS chain was fun and informative.
This is a basic Python shell (really, it’s a fancy wrapper over the system shell) that takes a task and asks OpenAI for what Linux bash command to run based on your description. For safety reasons, you can look at the command and cancel before actually running it.
Of all the stupid uses of OpenAI’s GPT-3, this might be the most-amusing. It’s really interesting to see how close – sometimes spot-on – the algorithm comes to writing the right command when you “say what you mean”. Also, how terribly, terribly ill-advised it would be to actually use this for real.
I am the original author of GNU grep. I am also a FreeBSD user,
although I live on -stable (and older) and rarely pay attention to -current.
However, while searching the -current mailing list for an unrelated
reason, I stumbled across some flamage regarding BSD grep vs GNU grep
performance. You may have noticed that discussion too...
This post starts very geeky, but becomes about computer games later on. Feel free to scroll down three paragraphs if you like computer games but don’t like computer hardware hacking.
My M3 Perfect and some related hardware arrived today. Basically, it’s an SD card reader that plugs into a Game Boy Advance slot (which is found not only on the Game Boy Advance series but also on the Nintendo DS). By itself, it allows a Nintendo DS (or a DS Lite, as my new toy is) to play music, videos, etc. But combined with a Passcard (which also arrived this morning), it allows backup games and homebrew software to be easily loaded onto the device.
Within minutes, I had DSLinux, a Linux distribution for the Nintendo DS, working. It felt immensely cool to be typing at a Bash shell using my DS stylus. I couldn’t get the wireless internet connection working, though – the drivers kept failing to load, which is probably a result of either (a) the DS Lite possibly having different firmware for interfacing with the network subsystem or (b) the M3 Perfect I got being the SD card edition, rather than the CF edition, which is better supported by DSLinux. I chose the SD card edition despite it being a few pounds more expensive because it’s slightly smaller (and therefore doesn’t stick out of the side of my handheld in such an unsightly way as the CF one would have) and because I can potentially fit more onto an SD card (although the only SD card I own is 1GB, the same size as the largest CF card the M3 can take). In any case, both possibilities sound equally unlikely: further investigation will ensue.
The ultimate aim of this little project is to get a graphical VNC client for the DS (take a look at that screenshot!) running, or some other remote control, so I can take full control of my desktop PC, wirelessly, from, like, my bed. Or from the couch. Or from any wireless internet hotspot that somebody hasn’t secured properly. Toy.
But the other benefit of this little purchase is the ability to, how shall we say, “try before I buy” Nintendo DS games. I’ve spent quite some time today playing the stunning Trauma Center: Under The Knife. Not since Half-Life 2 has a computer game genuinely made me jump with fright.
This isn’t Theme Hospital. This is Life and Death (for those of you too young to remember, this was a stunning late-80s “Sim Surgeon”). Starting as a junior surgeon, you’ll remove benign tumours, treat laceration injuries, and laser off polyps. The whole thing starts with a very “hold your hand” approach, but the learning curve is steep. Within 25 minutes of play you’ll be performing surgery within the chest cavity of car crash victims when something goes wrong (their heart stops, or their symptoms severely exacerbate, or it turns out there’s something more seriously wrong with them) and you’ve got nobody there to help you: you have to work alone.
It’s dark and cold and hard. Very hard. I struggled to keep up with the pace and had to re-attempt some of the levels (such as the brutal one early in the second chapter in which I had to remove aneurysms from the arteries of the intestines, and they just kept exploding on me, showering blood everywhere and destabilising the patient’s condition) several times. Nonetheless, I had great fun watching Claire replay those levels, on the edge of my seat whenever I knew something was about to go terribly wrong. Contrary to the image Nintendo sometimes convey: this is not a game for kids.
Another game I’ve enjoyed trying out is Mario & Luigi: Partners In Time, which plays a lot like Paper Mario: The Thousand-Year Door, but with semi-independent simultaneous control over up to four (Mario, Luigi, Mario’s younger self, Luigi’s younger self) different characters. Yes, at the same time. Yes, that fucks with your head. Quite quickly.
Then there’s Super Princess Peach, a platform game in which Peach uses the power of mood swings (I kid you not – she fluctuates between singing, crying, and breathing fire, just like a real woman) in order to get her way. And Super Monkey Ball Touch & Roll, more stupid puzzle game fun…
It’s not all piracy of stuff I could have bought at my local Game (and it’s at least a little bit ethical – we’ll buy legitimate copies of the good stuff, almost certainly including Trauma Center): I’ve also had a great deal of fun with Electroplankton, for which a release outside of Japan is still promised, but sadly absent. Electroplankton is a software toy in the truest sense of the word. The player manipulates the movement of musical plankton in order to generate what can just about be described as music. I came home and hooked it up to the stereo, and Claire and I had great fun for some time, playing with the different plankton and trying to discover how they all “worked”. I’m also looking forward to giving some of the Naruto games (which’ll probably never be released outside of Japan) a go.
Spent the last four days in Lancashire and elsewhere in the North of England, visiting my folks (among other things). Details follow…
Thursday 26th June 2003

Linux Expo 2003, Birmingham
Sorted out Claire’s bank, packed bags, and set off for Birmingham to the last day of Linux Expo 2003 at the National Exhibition Centre, to meet up with Gareth and some other geeks to talk about a project on which my input could be valuable. Gareth is going to come over to Aberystwyth next weekend and we’ll knock together a prototype of the system we’ve suggested.
Claire got scared by the vast numbers of stereotypical geeks (and the distinct overdose of testosterone in the air – she was one of only three women in the whole place), and by the fact that, unlike normal, she couldn’t understand one in three words spoken. I smiled. She’s got a little way to go to earn her geek stripes, yet.
Bon Jovi, Manchester
Arrived late at Old Trafford – missed the support act, but in time to find standing room before Bon Jovi came on-stage. All-in-all, a good gig: Claire was a little too short for standing on the pitch to have been a good idea, the sound quality was a little below-par owing to a lack of adequate repeater speakers, and the only beer available was Budweiser and Boddingtons – but it was still pretty good. Went to a Manchester pub afterwards before catching a really, really late train home. Got to bed sometime after 3am.