Maintaining a blog can be a lot of work. A single article can take weeks of research, drafting and editing, collecting and producing included materials, etc. It’s not unusual to
seek some form of compensation for it, and those rewards require initiative. With a good monetization strategy, it can become a fairly
lucrative venture.
So let’s talk about monetizing a blog, starting with the most obvious and perhaps easiest avenue: display advertising.
A content creator with an established audience can leverage that audience and sell ad space on their blog. Here’s an example:
…
I’m not sure I have words for how awesome this blog post is. If you’ve ever wanted to monetise your blog and are considering an ad-driven model, this should absolutely be the first (and
perhaps last) thing you read on the subject.
If you’re not convinced that Tyler is an appropriate authority to speak on this subject, I highly suggest you visit their other site that’s got a wealth of useful tips, PutAToothpickInTheChargingPortDoctorsHateThatShit.christmas. Yes, really.
Perhaps inspired by my resharing of Thomas’s thoughts about the biggest problem in
AI (tl;dr: he thinks it’s nomenclature; I agree that’s a problem but I don’t know if it’s the biggest issue), Ruth posted some thoughts to LinkedIn that I think are quite well-put:
I was going to write about something else but since LinkedIn suggested I should get AI to do it for me, here’s where I currently stand on GenAI.
As a person working in computing, I view it as a tool that is being treated as a silver bullet and is probably self-limiting in its current form. By design, it produces average
code. Most companies prior to having access to cheap average code would have said they wanted good code. Since the average code produced by the tools is being fed back into those
tools, mathematically this can’t lead anywhere good in terms of quality.
However, as a manager in tech I’m really alarmed by it. If we have tools to write code that is ok but needs a lot of double checking, we might be tempted to stop hiring
people at that level. There already aren’t enough jobs for entry level programmers to feed the talent pipeline, and this is likely to make it worse. I’m not sure where the next
generation of great programmers are supposed to come from if we move to an ecosystem where the junior roles are replaced by Copilot.
I think there’s a lot of potential for targeted tools to speed up productivity. I just don’t think GenAI is where they should come from.
This is an excellent explanation of no fewer than four of the big problems with “AI” as we’re seeing it marketed today:
It produces mediocre output (more on that below!),
It’s a snake that eats its own tail,
It’s treated as a silver bullet, and
By pricing out certain types of low-tier knowledge work, it damages the pipeline for training higher-tiers of those knowledge workers (e.g. if we outsource all first-level tech
support to chatbots, where will the next generation of third-level tech support come from, if they can’t work their way up the ranks, learning as they go?)
Let’s stop and take a deeper look at the “mediocre output” claim. Ruth’s right, but if you don’t already understand why generative AI does this, it’s worth spending a
moment on the reason for it… and the consequences of it:
Mathematically-speaking, that’s exactly what you would expect for something that is literally statistically averaging content, but that still comes as a surprise to people.
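If it’s not obvious why that feedback loop is mathematically doomed, here’s a deliberately crude toy sketch of the idea. It bears no resemblance to how any real LLM is trained – the “only keep the fairly typical output” rule below is just my stand-in for the averaging effect – but it shows how quickly diversity collapses when each generation learns only from the previous generation’s output:

```python
# Toy sketch of the "snake eating its own tail" problem (nothing like a real
# training pipeline): each generation of "model" learns the distribution of the
# previous generation's output, but mostly emits typical ("average") content,
# and that output becomes the next training set. The unusual, the excellent
# and the weird get trimmed away, generation after generation.
import random
import statistics

random.seed(0)
corpus = [random.gauss(0, 1) for _ in range(20_000)]  # generation 0: human-made content, lots of variety

for generation in range(1, 6):
    mu, sigma = statistics.mean(corpus), statistics.stdev(corpus)
    # Emit lots of output, but keep only the "typical" stuff (within one
    # standard deviation of the mean) -- my crude stand-in for "average code".
    typical = [x for x in (random.gauss(mu, sigma) for _ in range(60_000))
               if abs(x - mu) <= sigma]
    corpus = typical[:20_000]  # ...and train the next generation on it
    print(f"generation {generation}: diversity (std dev) = {statistics.stdev(corpus):.3f}")
```

Run it and the measured diversity roughly halves with every generation: mathematically, it really can’t lead anywhere good.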
Bear in mind, of course, that there are plenty of topics in which the average person is less-knowledgeable than the average of the content that was made available to the model.
For example, I know next to nothing about fertiliser application in large-scale agriculture. ChatGPT has doubtless ingested a lot of literature about it, and if I ask it what
fertiliser I should use for a field of black beans in silty soil in the UK, it delivers me a confident-sounding answer:
When LLMs produce exceptional output (I use the term exceptional in the sense of unusual and not-average, not to mean “good”), it appears more-creative and interesting but is even
more-likely to be riddled with fanciful hallucinations.
There’s a fine line in getting the creativity dial set just right, and even when you do there’s no guarantee of accuracy, but the way in which many chatbots are told to talk makes them
sound authoritative on basically every subject. When you already know the subject well enough to spot the lies, that’s fine. But people don’t always use LLMs for subjects they’re knowledgeable about!
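That “creativity dial”, in most of these systems, is essentially the sampling temperature. Here’s a minimal sketch of temperature-scaled sampling – the candidate words and their scores are entirely made up for illustration, not taken from any real model – showing that turned down low, the most-statistically-likely (i.e. most average) continuation wins almost every time, while turned up high the weird options start getting picked, with no regard whatsoever for accuracy:

```python
# Minimal sketch of temperature sampling with made-up next-token scores.
# Low temperature -> the most likely ("average") token nearly always wins;
# high temperature -> rarer, weirder tokens start getting chosen.
import math
import random

# Hypothetical raw scores (logits) for the next word after
# "For black beans in silty soil, apply ___":
logits = {"nitrogen": 4.0, "compost": 3.2, "gravel": 1.0, "glitter": -2.0}

def sample(logits, temperature):
    scaled = {token: score / temperature for token, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probabilities = {token: math.exp(s) / total for token, s in scaled.items()}
    return random.choices(list(probabilities), weights=probabilities.values())[0]

random.seed(1)
for temperature in (0.2, 1.0, 2.5):
    picks = [sample(logits, temperature) for _ in range(1_000)]
    print(f"T={temperature}:", {token: picks.count(token) for token in logits})
```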
In my example above, a more-useful robot would have stated that it didn’t know the answer to the question rather than, y’know, lying. But the nature
of the statistical models used by LLMs means that they can’t know what they don’t know: they don’t have a “known unknowns” space.
Regarding the “damages the training pipeline” point: I’m undecided on whether or not I agree with Ruth. She might be on to something there, but it needs more
thought before I commit to an opinion on that one.
Oh, and an addendum to this – as a human, I find the proliferation of AI tools in spaces that are all about creating connections with other humans deeply concerning. I saw a lot of
job applications through Otta at my previous role, and they were all kind of the same – I had no sense of the person behind the averaged-out CV I was looking at. We already have a
huge problem with people presenting inauthentic versions of themselves on social media, which makes it harder to have genuine interactions; smoothing off the rough edges of real people
to get something glossy and processed is only going to make this worse.
AI posts on social media are the chicken nuggets of human interaction and I’d rather have something real every time.
Emphasis mine… because that’s a fantastic metaphor. Content generated by a generative AI that’s trying to “look human” is so-often bland, flat, and unexciting: a mass-produced,
most-basic form of social sustenance. So yeah: chicken nuggets.
The biggest problem with “AI” is probably that it’s used as label for two completely different things:
1. Specialized neural networks trained to do highly specific tasks (e.g. cancer screening) which often work reasonably well as a tool to support human experts
2. Generative AI which thoroughly produces the most mid bullshit
It doesn’t help that neither are intelligent in any way, they’re both statistical pattern matching.
Fundamentally, Thomas seems to be arguing that the biggest problem with AI is how it is marketed, or rather how things-that-are-called-AI are marketed as AI. Also that LLMs, by producing
a statistical average of their input data, produce output that’s pretty-average (which is, of course, statistically what you’d expect)1.
I’m not sure he’s right: the energy footprint and the copyright issues of generative AI might be the biggest problems. But maybe.
Footnotes
1 That’s not entirely true, of course: sometimes they produce output that’s wild and
random, but we describe those as “hallucinations” and for many purposes they’re even worse. At least “mid bullshit” can be useful if you’re specifically looking to summarise existing
content (and don’t mind fact-checking it later if it’s important): y’know, the thing people use Wikipedia for.
Runners will talk about how much they enjoy the feeling of wind in their hair. Boxers won’t shut up about the grace and art of their profession. Even soccer players can be moved to
wax poetical about how enjoyable it is to be part of a truly great game.
But all golfers ever talk about is how little golf they hope to play. A typical pre-match interview will go something like this:
Some guy in a blazer: Great to have you here with us, what are your goals for the first round this morning.
Golfer: Well today I hope to play as little golf as possible. Mathematically speaking the course could be done in 18 shots but that is probably physically
impossible. But ideally as close to 18 as I can get. Any additional golf is bad.
Blazer: What is your strategy for avoiding the golf.
Golfer: I have a guy who follows me around to help share the burden of all this damn golf. He is going to help me out by suggesting ways to avoid playing any more
golf than we have to. Of course, I pay him but his real motivation is to bring this sorry excuse for a pastime to the speediest conclusion.
Blazer: Better you than me, but good luck out there.
This is what the endgame should be IMO. Some things are better represented as text. Some are best understood visually. We should mix and match what works best on a case-by-case
basis. Don’t try to visualize simple code. Don’t try to write code where a diagram is better.
One of the attempts was Luna. They tried dual representation: everything is code and diagram at the same time, and you can switch between the two:
But this way, you are not only getting benefits of both ways, you are also constrained by both text and visual media at the same time. You can’t do stuff that’s hard to
visualize (loops, recursions, abstractions) AND you can’t do stuff that’s hard to code.
…
Interesting thoughts from Niki (and from Sebastian Bensusan) on how diagrams and code might someday be
intertwined as first-class citizens (but not in the gross ways you might have come across in the past when people have tried to sell you on “visual programming”).
As Niki wrote about what he calls levels 2 and 3 of the concept – in which diagrams and code are intrinsically linked – I found myself thinking about Twine, a programming language (or framework? or tool?… not sure how best to describe or define it!) intended for making interactive “choose your own
adventure”-style hypertext fiction.
Twine’s sort-of a level 2 implementation of visual programming: the code (scene descriptions) is mostly what’s responsible for feeding the diagram. But that’s not entirely
true: it’s possible to create new nodes in your story graph in a completely visual way, and then dip into them to edit their contents and imply how they link to others.
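To illustrate what I mean by the code “feeding” the diagram, here’s a toy approximation – emphatically not Twine’s actual parser, just the shape of the idea: each passage’s prose contains [[links]], and the story map is essentially those links extracted as the edges of a graph:

```python
# Toy approximation of how a Twine-like tool derives its story map from the
# passage text itself (NOT Twine's real parser, just the idea): every [[link]]
# in a passage's prose becomes an edge in the diagram.
import re

passages = {
    "Start": "You wake in a cell. Try the [[door]] or shout for the [[guard]].",
    "door": "Locked. Go [[back|Start]], or search the cell for a [[key]].",
    "guard": "No answer. Better go [[back|Start]].",
    "key": "Under the mattress! Now to unlock that [[door]]... somehow.",
}

LINK = re.compile(r"\[\[(?:[^\]|]*\|)?([^\]|]+)\]\]")  # matches [[target]] and [[label|target]]

edges = [(name, target) for name, text in passages.items()
         for target in LINK.findall(text)]

for source, destination in edges:  # these edges are what the diagram draws
    print(f"{source} --> {destination}")
```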
It’s possible that the IF engine community – who are working to lower the barriers to programming in order to improve accessibility
to people who are fiction authors first, developers second – are ahead of the curve in the area of visual programming. Consider for example how Inform’s automated test framework graphs
the permutations you (or your human testers) try, and allows you to “bless” (turn into assertions) the results so that regression testing becomes a visually-automated affair:
If I could offer you only one tip for the future, coding would be it. The long term benefits of coding websites remains unproved by scientists, however the rest of my advice has a
basis in the joy of the indie web community’s experiences. I will dispense this advice now:
Enjoy the power and beauty of PHP; or never mind. You will not understand the power and beauty of PHP until your stack is completely jammed. But trust me, in 20 years you’ll look
back at your old sites and recall in a way you can’t grasp now, how much possibility lay before you and how simple and fast they were. JS is not as blazingly fast as you imagine.
Don’t worry about the scaling; or worry, but know that premature scalability is as useful as chewing bubble gum if your project starts cosy and small. The real troubles on the web
are apt to be things that never crossed your worried mind; if your project grows, scale it up on some idle Tuesday.
I can’t say I loved Baz Luhrmann’s Everybody’s Free To Wear Sunscreen. I’m not sure it’s possible to love it for anybody who lived through it being played to death in the late
1990s: a period of history when a popular song was basically inescapable. Also, it got parodied a lot. I must’ve seen a couple of dozen different parodies of varying quality in
the early 2000s.
But it’s been long enough that I was, I guess, ready for one. And I couldn’t conceive of a better topic.
Y’see: the very message of the value of personal websites is, like Sunscreen, a nostalgic one. When I try to sell
people on the benefits of a personal digital garden or blog, I tend to begin by pointing out that the best time to set up your own website is… like 20+ years ago.
But… the second-best time to start a personal website is right now. With cheap and free static
hosting all over the place (and more-dynamic options not much-more expensive) and domain names still as variably-priced as they ever were, the biggest impediment is the learning
curve… which is also the fun part! Siloed social media is either eating its own tail or else fighting to adapt to once again be part of a more-open Web, and there’s nothing that says
“I’m part of the open Web” like owning your own online identity, carving out your own space, and expressing yourself there however you damn well like.
As always, this is a drum I’ll probably beat until I die, so feel free to get in touch if you want some help getting set up on the Web.
There are a whole bunch of things that could be the source for the name, e.g. where we found most of their work (the Dipylon Master), the potter with whom they worked (the
Amasis Painter), a favourite theme (the Athena Painter), the museum that ended up with the most famous thing they did (the Berlin Painter), or a notable aspect of their style.
Like, say, The Eyebrow Painter.
Guess what kind of pottery the Eyebrow Painter made?
Part of the joy of the collaborative Internet is that people can share their passion. Today’s example comes from this YouTuber who’s made an hour-long video demonstrating and
ranking the 35 elevators in the first five games in the Myst series.
Starting with a discussion of what defines an elevator, the video goes on to show off some of the worst of the lifts in the series of games (mostly those that are uninspired,
pointless, or which have confusing interfaces) before moving on to the well-liked majority.
I only ever played the first two Myst games (and certainly haven’t played the first since, what, the mid-1990s?) and I don’t think I finished either. But that didn’t stop me
watching the entirety of this video and revelling in the sheer level of dedication and focus it’ll have taken on the part of the creator. When I made my
(mere 15-minute!) video describing my favourite video game Easter Egg I spent tens of hours over the prior weeks researching the quirk and its background, configuring a copy of the
(elderly) game so that it’d play and record in the way I wanted, and of course playing through the game far enough to be able to fully demonstrate the Easter Egg. Dustin’s
video, which doubtless involved replaying (possibly multiple times) five different games released over a 12-year window is mindblowing by comparison.
I don’t really care about the Myst series. I care even less about its elevators. But I really enjoyed this video, if only for its creator’s enthusiasm.
Dogs are being offered boat and rail season tickets to ease their path to walkies in the Lake District.
Ullswater Steamers and the Ravenglass and Eskdale Railway have introduced £20 annual “Rover” season tickets, which include a 10% donation to animal charities.
The cost for a standard doggie day ticket is £2.50 for the railway and £1 for a boat trip.
…
In a world where the news is dominated by war, cyber attacks, or imminent elections, it’s nice to be distracted by a nonsense bit of news. And this one’s just delightful.
For a fee of £1 – £2.50, dogs can travel on the boats and railways of Ravenglass and Eskdale Railway and Ullswater Steamers. So far, so good.
And now they’ve introduced a season pass for people who take their canines on the ferries or railways more often. Also good.
And they’ve called the season pass for dogs… a “Rover” ticket.
I clearly nerdsniped Terence at least a little when I asked whether a blog necessarily had to be HTML, because he went on to implement a WordPress theme that delivers content entirely in plain text.
A mysterious Roman artefact found during an amateur archaeological dig is going on public display in Lincolnshire for the first
time.
The object is one of only 33 dodecahedrons found in Britain, and the first to have been discovered in the Midlands.
…
I learned about these… things… from this BBC News story and I’m just gobsmacked. Seriously: what is this thing?
This isn’t a unique example. Thirty-three have been found in Britain, but these strange Roman artefacts turn up all over Europe: we’ve found hundreds of them.
It doesn’t look like they were something that you’d find in any Roman-era household, but they seem to have been common enough that if you wandered around third-century Northern Europe with one
for a week or so you’d surely be able to find somebody who could explain them to you. And yet, today, nobody can.
We have absolutely no idea why the Romans made these
things. They’re finely and carefully created from bronze, and we find them buried in coin stashes, which suggests that they were valuable and important. But for what?
Frustrated archaeologists have come up with all kinds of terrible ideas:
Maybe they were a weapon, like the ball of a mace or something to be flung from a sling? Nope; they’re not really heavy enough.
At least one was discovered near a bone staff, so it might have been a decorative sceptre? But that doesn’t really go any distance to explaining the unusual shape,
even if true (nor does it rule out the possibility of it being some kind of handled tool).
Perhaps they were a rangefinding tool, where a pair of opposing holes line up only when you’re a particular distance from the tool? If a target of a known size
exactly fills the opposite hole in your vision, its distance must be a specific multiple of your distance to the tool (I’ve sketched the geometry just after this list). But that seems unlikely because we’ve never found any markings on these
that would show which side you were using; also the devices aren’t consistently-sized.
Roleplayers might notice the similarity to polyhedral dice: maybe they were a game? But the differently-sized holes make them pretty crap dice (researchers have
tried), and Romans seemed to favour cubic dice anyway. They’re somewhat too intricate and complex to be good candidates for children’s toys.
They could be some kind of magical or divination tool, which would apparently fit with the kinds of fortune-telling mysticism believed to be common to the cultures
at the sites where they’re found. Do the sides and holes correspond to the zodiac or have some other astrological significance?
Perhaps it was entirely decorative? Gold beads of a surprisingly-similar design have
been found as far away as Cambodia, well outside the reach of the Roman Empire, which might suggest a continuing tradition of an earlier precursor dodecahedron!
This author thinks they might have acted as a kind of calendar, used for measuring the height of the
midday sun by observing the way its beam is cast through a pair of holes when the tool is placed on a surface, in order to determine when winter grains should be planted.
Using replicas, some folks online have demonstrated how they could have been used as a knitting tool for
making the fingers of gloves using a technique called “spool knitting”. But this knitting technique isn’t believed to have been invented until a millennium later than the youngest of
these devices.
Others have proposed that they were a proof of qualification: something a master metalsmith would construct in order to show that they were capable of casting a
complex and intricate object.
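For what it’s worth, the geometry behind that rangefinding speculation is just similar triangles. A rough sketch, under the big simplifying assumption that your eye sits right at the centre of the nearer hole, and with completely made-up dimensions:

```python
# Rough sketch of the rangefinding speculation, using similar triangles.
# Big simplification: the eye is treated as a point at the centre of the near
# hole. A target of known width S exactly "fills" the far hole (diameter d,
# a distance L away on the opposite face) when S / D = d / L, so D = S * L / d.
# All the dimensions below are invented purely for illustration.

def range_to_target(target_width_m: float, far_hole_diameter_m: float, face_separation_m: float) -> float:
    """Distance at which a target of the given width exactly fills the far hole."""
    return target_width_m * face_separation_m / far_hole_diameter_m

# e.g. a 1.8 m-tall person sighted through a 10 mm far hole on a device whose
# opposite faces are 60 mm apart:
print(f"{range_to_target(1.8, 0.010, 0.060):.1f} m")  # 10.8 m
```

Note that the multiple depends on the hole diameter and the depth of the device, so every dodecahedron – indeed, every pair of holes on the same dodecahedron – would give a different reading. You’d expect calibration markings to make that usable and, as noted above, none have ever been found.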
I love a good archaeological mystery. We might never know why the Romans made these things, but reading clever people’s speculations about them is great.
Molly White writes, more-eloquently than I would’ve, almost-exactly my experience of LLMs and similar modern generative
AIs:
…
I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI
companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from
scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern.
…
Very much this.
I’ve experimented with a handful of generative AIs, such as:
GPT-3.5 / ChatGPT, for proofreading, summarisation, experimental rephrasing when writing, and idea generation. I’ve found it to be moderately good at summarisation
and proofreading and pretty terrible at producing anything novel without sounding completely artificial and/or getting lost in a hallucination.
Bing for coalescing information. I like that it cites its sources. I dislike that it somehow still hallucinates. I might use it, I suppose, to help me
re-phrase a search query where I can’t remember the word I’m looking for.
Stable Diffusion for image generation. I’ve found it most-useful in image-to-image mode, for making low-effort concept art in bulk. For example, when running online
roleplaying games for friends I’ve fed it an image of, say, a skeleton warrior and asked it to make me a few dozen more in a similar style, so as to provide a diverse selection of
distinct tokens1.
Its completely-original2 work lands squarely in
the uncanny valley, though.
Github Copilot for code assistance. I’ve not tried its “chat”-powered functionality but I quite enjoy its “autocomplete” tool. When I’m coding and I forget the syntax
of the command I’m typing, or need to stop and think for a moment about “what comes next”, it’s often there with the answer. I’ve even made use of the “write the comment describing
what the code will do, let Copilot suggest the code for you” paradigm – there’s an example of what I mean just after this list – though I’ve been pretty disappointed with the opposite approach: it doesn’t write great comments! I find
Copilot to be a lot like having an enthusiastic, eager-to-please, very well-read but somewhat naive junior programmer sitting beside me. If I ask them for some pairing assistance,
they’re great, but I can’t trust them to do anything that I couldn’t do for myself!
Plus, surely, others that I’ve since forgotten.
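To show what I mean by that comment-first paradigm, here’s the kind of exchange I’m describing. To be clear, the completion is one I’ve written by hand for illustration; it’s not a transcript of real Copilot output:

```python
from datetime import date

# The "comment-first" paradigm: you write a descriptive comment and the
# assistant proposes the body. The function below is hand-written to show the
# sort of completion such a tool typically offers.

# Return the ISO 8601 week number (1-53) for a date string like "2024-04-23".
def week_number(date_string: str) -> int:
    year, month, day = (int(part) for part in date_string.split("-"))
    return date(year, month, day).isocalendar()[1]

print(week_number("2024-04-23"))  # 17
```

It’s exactly the sort of small, well-trodden problem where that junior-programmer-beside-you analogy holds up: handy, but you still check its work.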
Most-recently, I’ve played with music-making AI Suno and… it’s not
great… but like all these others it’s really interesting to experiment with and think about. Here: let me just ask it to write some “vocal trance europop about a woman called
Molly; Molly has a robot friend who is pretty good at doing many tasks, but the one thing she’ll never trust the robot to do is write in her blog” –
So yes, like Molly:
I’m absolutely a believer that these kinds of AIs have some value,
I’ve been reluctant and slow to say so because they seem to be such a polarising issue that it’s hard to say that you belong to neither “camp”,
I’m not entirely convinced that for the value they provide they’ve yet proven to be worth their cost, and I’m not certain that for general-purpose generation they will be any time
soon, and
I’ve never used AI to write content for my blog, and I can’t see that ever changing.
It’s still an interesting field to follow along with. Stuff like Sora from OpenAI and VASA-1 from Microsoft is just scary (the latter seems to have little purpose other than for
misinformation-generation3!),
but the genie’s out of the bottle now.
Footnotes
1 Visually-distinct tokens add depth to the world and help players communicate with one
another: “You distract the skinny cultist, and I’ll try to creep up on the ugly one!”
2 I’m going to gloss right over the question of whether or not these tools are capable of
creating anything truly original. You know what I mean.
Anyway, here’s the best printer for 2024: a Brother laser printer. You can just pick any one you like; I have one with a sheet feeder and one without a sheet feeder. Both of them have
reliably printed return labels and random forms and pictures for my kid to color for years now, and I have never purchased replacement toner for either one. Neither has fallen off the
WiFi or insisted I sign up for an ink-related hostage situation or required me to consider the ongoing schemes of HP executives who seem determined to make people hate a legendary
brand with straightforward cash grabs and weird DRM ideas.
…
It’s sort-of alarming that Brother are the only big player in the printer space who subscribe to a philosophy of “don’t treat the customers like
livestock”. Presumably all it’d take is a board-level decision to flip the switch from “not evil” to “evil” and we’d lose something valuable. Thankfully, for now at least, they still
clearly see the value of the positive marketing the world gives them. Positive marketing like this article.
The article is excellent, by the way. I know that I’m “supposed” to stir up hatred about the fact that its conclusion is written by an AI but… well, just read it for yourself and you’ll see why I don’t mind even one bit. Top notch reporting. Consider following the links within it to
stories about how other printer manufacturers continue to show exactly how shitty they can be.
I recommended a Brother printer to the Vagina Museum the other month. I assume it’s still working out fine for them (and not ripping them off, spying on them, and/or contributing to the
destruction of the planet).
If you lack the imagination to understand how a game like this could have dozens of possible endings, you desperately need to play it. My favourite path so far through the game was to
add a teabag, then hot water, then remove the teabag, then add some milk, then add a second teabag, then drink it.
Genuinely can’t stop laughing at this masterpiece.
This promotional video for Aberystwyth University has been kindly archived onto YouTube by one of the undergraduate students who features in it. It was produced in 1984; approximately
the same time I first visited Aberystwyth, although it would take until fifteen years later in 1999 for me to become a student there
myself.
But the thing is… this video, shot on VHS in 1984, could absolutely be mistaken at-a-glance for one shot on an
early digital video camera a decade and a half later. The pace of change in Aberystwyth was and is glacial; somehow even the fashion and music seen in Pier Pressure in the video could pass for late-90s!
Anyway: I found the entire video amazingly nostalgic in spite of how far it predates my attendance of the University! Amazing.