Evolving Computer Words: “Virus”

This is part of a series of posts on computer terminology whose popular meaning – determined by surveying my friends – has significantly diverged from its original/technical one. Read more evolving words…

A few hundred years ago, the words “awesome” and “awful” were synonyms. From their roots, you can see why: they mean “tending to or causing awe” and “full of or characterised by awe”, respectively. Nowadays, though, they’re opposites, and it’s pretty awesome to see how our language continues to evolve. You know what’s awful, though? Computer viruses. Right?

Man staring intently at laptop. Image courtesy Oladimeji Ajegbile, via Pexels.
“Oh no! A virus has stolen all my selfies and uploaded them to a stock photos site!”

You know what I mean by a virus, right? A malicious computer program bent on causing destruction, spying on your online activity, encrypting your files and ransoming them back to you, showing you unwanted ads, etc… but hang on: that’s not right at all…

Virus

What people think it means

Malicious or unwanted computer software designed to cause trouble/commit crimes.

What it originally meant

Computer software that hides its code inside programs and, when they’re run, copies itself into other programs.

The Past

Only a hundred and thirty years ago it was still widely believed that “bad air” was the principal cause of disease. The idea that tiny germs could be the cause of infection was only just beginning to take hold. It was in this environment that the excellent scientist Ernest Hankin travelled around India studying outbreaks of disease and promoting germ theory by demonstrating that boiling water prevented cholera by killing the (newly-discovered) Vibrio cholerae bacterium. But his most-important discovery was that water from a certain part of the Ganges seemed to be naturally inviable as a home for Vibrio cholerae… and that boiling this water removed that superpower, allowing the special water to once again culture the bacterium.

Hankin correctly theorised that there was something in that water that preyed upon Vibrio cholerae; something too small to see with a microscope. In doing so, he was probably the first person to identify what we now call a bacteriophage: the most common kind of virus. Bacteriophages were briefly seen as exciting for their medical potential. But then in the 1940s antibiotics, which were seen as far more-convenient, began to be manufactured in bulk, and we stopped seriously looking at “phage therapy” (interestingly, phages are seeing a bit of a resurgence as antibiotic resistance becomes increasingly problematic).

Electron microscope image of a bacteriophage alongside an illustration of the same.
It took until the development of the electron microscope in the 1930s before we’d actually “see” a virus.

But the important discovery kicked-off by the early observations of Hankin and others was that viruses exist. Later, researchers would discover how these viruses work1: they inject their genetic material into cells, and this injected “code” supplants the unfortunate cell’s usual processes. The cell is “reprogrammed” – sometimes after a dormant period – to churn out more of the virus, becoming a “virus factory”.

Let’s switch to computer science. Legendary mathematician John von Neumann, fresh from showing off his expertise in calculating how shaped charges should be used to build the first atomic bombs, invented the new field of cellular automata. Cellular automata are computationally-logical, independent entities that exhibit complex behaviour through their interactions, but if you’ve come across them before now it’s probably because you played Conway’s Game of Life, which made the concept popular decades after their invention. Von Neumann was very interested in how ideas from biology could be applied to computer science, and is credited with being the first person to come up with the idea of a self-replicating computer program which would write-out its own instructions to other parts of memory to be executed later: the concept of the first computer virus.

Glider factory breeder in Conway's Game of Life
This is a glider factory… factory. I remember the first time I saw this pattern, in the 1980s, and it sank in for me that cellular automata must logically be capable of any arbitrary level of complexity. I never built a factory-factory-factory, but I’ll bet that others have.
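If you’ve never played with cellular automata, the whole field really does boil down to a grid of cells plus a local update rule. Here’s a minimal sketch of Conway’s Game of Life – my own illustrative code, with an arbitrary wrapping grid and a glider as the seed:

```python
# A minimal sketch of Conway's Game of Life: each cell is 0 (dead) or
# 1 (alive), and the next generation depends only on a cell's eight
# neighbours. Grid size and seed are arbitrary, for illustration only.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Standard rules: a live cell survives with 2 or 3 live
            # neighbours; a dead cell comes alive with exactly 3.
            new[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return new

# A glider drifting across a 6x6 wrapping grid:
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), end="\n\n")
```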

Retroactively-written lists of early computer viruses often identify 1971’s Creeper as the first computer virus: it was a program which, when run, moved (later copied) itself to another computer on the network and showed the message “I’m the creeper: catch me if you can”. It was swiftly followed by a similar program, Reaper, which replicated in a similar way but instead of displaying a message attempted to delete any copies of Creeper that it found. However, Creeper and Reaper weren’t described as viruses at the time and would be more-accurately termed worms nowadays: self-replicating network programs that don’t inject their code into other programs. An interesting thing to note about them, though, is that – contrary to popular conception of a “virus” – neither intended to cause any harm: Creeper‘s entire payload was a relatively-harmless message, and Reaper actually tried to do good by removing presumed-unwanted software.

Another early example that appears in so-called “virus timelines” came in 1975. ANIMAL presented as a twenty questions-style guessing game. But while the user played it would try to copy itself into another user’s directory, spreading itself (we didn’t really do directory permissions back then). Again, this wasn’t really a “virus” but would be better termed a trojan: a program which pretends to be something that it’s not.

Replica Trojan horse.
“Malware? Me? No siree… nothing here but this big executable horse.”

It took until 1983 before Fred Cohen gave us a modern definition of a computer virus, one which – ignoring usage by laypeople – stands to this day:

A program which can ‘infect’ other programs by modifying them to include a possibly evolved copy of itself… every program that gets infected may also act as a virus and thus the infection grows.

This definition helps distinguish between merely self-replicating programs like those seen before and a new, theoretical class of programs that would modify host programs such that – typically in addition to the host programs’ normal behaviour – further programs would be similarly modified. Not content to leave this as a theoretical exercise, Cohen wrote the first “true” computer virus to demonstrate his work (it was never released into the wild): he also managed to prove that there can be no such thing as perfect virus detection.
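That impossibility result has the same flavour as the halting problem: a program can simply do the opposite of whatever the detector predicts. Here’s a minimal sketch of the contradiction – the names are mine, the “detector” is a hypothetical stand-in, and spread() just prints:

```python
# Sketch of why a perfect virus detector can't exist: a program can
# behave in exactly the opposite way to whatever the detector says.
# Everything here is illustrative; spread() is a harmless stand-in.

def spread():
    print("(this is where a real virus would copy itself into other programs)")

def contradiction(detector):
    """Does precisely what `detector` says it won't."""
    if detector(contradiction):
        return          # flagged as a virus, so do nothing at all
    else:
        spread()        # declared clean, so go ahead and "spread"

# Whatever verdict a candidate detector gives, it's wrong about this program:
contradiction(lambda program: True)   # false positive: flagged, yet harmless
contradiction(lambda program: False)  # false negative: cleared, yet it spreads
```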

(Quick side-note: I’m sure we’re all on the same page about the evolution of language here, but for the love of god don’t say viri. Certainly don’t say virii. The correct plural is clearly viruses. The Latin root virus is a mass noun and so has no plural, unlike e.g. fungus/fungi, and so its adoption into a count-noun in English represents the creation of a new word which should therefore, without a precedent to the contrary, favour English pluralisation rules. A parallel would be bonus, which shares virus‘s linguistic path, word ending, and countability-in-Latin: you wouldn’t say “there were end-of-year boni for everybody in my department”, would you? No. So don’t say viri either.)

(Inaccurate) slide describing viruses as programs that damage computers or files.
No, no, no, no, no. The only wholly-accurate part of this definition is the word “program”.

Viruses came into their own as computers became standardised and commonplace and as communication between them (whether by removable media or network/dial-up connections) became routine: Cohen’s theoretical concepts became very much real. In 1986, the Virdem method brought infectious viruses to the DOS platform, opening up virus writers’ access to much of the rapidly-growing business and home computer markets.

The Virdem method has two parts: (a) appending the viral code to the end of the program to be infected, and (b) injecting, early in the program, a call to that appended code. This exploits the typical layout of DOS executable files and ensures that the viral code runs first whenever an infected program loads, so the virus can spread through a system. The appearance of this method at a time when hard drives were uncommon and many programs were run from floppy disks (which could easily be passed around between users) enabled this kind of virus to spread rapidly.
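As a deliberately inert illustration of that layout – placeholder strings stand in for machine code, and nothing here touches a real executable:

```python
# Inert sketch of the append-and-redirect layout described above.
# Placeholder strings stand in for machine code; nothing is read,
# written, or executed.

original_program = b"[original program instructions]"
appended_code    = b"[code that would copy itself onward when run]"
redirect         = b"[early jump/call into the appended code]"

# (a) the extra code is appended to the end of the program image, and
# (b) an early redirect ensures it runs first, before control returns
#     to the original entry point.
infected_layout = redirect + original_program + appended_code

print(infected_layout.decode())
```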

For the most part, early viruses were not malicious. They usually only caused harm as a side-effect (as we’ve already seen, some – like Reaper – were intended to be not just benign but benevolent). For example, programs might run slower if they’re also busy adding viral code to other programs, or a badly-implemented virus might even cause software to crash. But it didn’t take long before viruses started to be used for malicious purposes – pranks, adware, spyware, data ransom, etc. – as well as to carry political messages or to conduct cyberwarfare.

XKCD 1180: Virus Venn Diagram
XKCD already explained all of this in far fewer words and a diagram.

The Future

Nowadays, though, viruses are becoming less-common. Wait, what?

Yup, you heard me right: new viruses aren’t being produced at remotely the same kind of rate as they were even in the 1990s. And it’s not that they’re easier for security software to catch and quarantine; if anything, they’re less-detectable as more and more different types of file are nominally “executable” on a typical computer, and widespread access to powerful cryptography has made it easier than ever for a virus to hide itself in the increasingly-sprawling binaries that litter modern computers.

"Security" button
Soo… I click this and all the viruses go away, right? Why didn’t we do this sooner?

The single biggest reason that virus writing is on the decline is, in my opinion, that writing something as complex as a virus is no longer a necessary step to illicitly getting your program onto other people’s computers2! Nowadays, it’s far easier to write a trojan (e.g. a fake Flash update, dodgy spam attachment, browser toolbar, or a viral free game) and trick people into running it… or else to write a worm that exploits some weakness in an open network interface. Or, in a recent twist, to just add your code to a popular library and let overworked software engineers include it in their projects for you. Modern operating systems make it easy to have your malware run every time they boot, and it’ll quickly get lost amongst the noise of all the other (hopefully-legitimate) programs running alongside it.

In short: there’s simply no need to have your code hide itself inside somebody else’s compiled program any more. Users will run your software anyway, and you often don’t even have to work very hard to trick them into doing so.

Verdict: Let’s promote use of the word “malware” instead of “virus” for popular use. It’s more technically-accurate in the vast majority of cases, and it’s actually a more-useful term too.

Footnotes

1 Actually, not all viruses work this way. (Biological) viruses are, it turns out, really really complicated and we’re only just beginning to understand them. Computer viruses, though, we’ve got a solid understanding of.

2 There are other reasons, such as the increase in use of cryptographically-signed binaries, protected memory space/”execute bits”, and so on, but the trend away from traditional viruses and towards trojans for delivery of malicious payloads began long before these features became commonplace.


Note #15043

My hardware engineering is a little rusty, @ComputerHistory, but wouldn’t signals propagate along this copper cable at “only” somewhere between 0.64c and 0.95c, not 1c as you claim?

Exhibit of early transatlantic telegraph cable with message implying that it enabled "speed of light" communications.
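As a rough back-of-the-envelope check (assuming a cable length of about 3,000 km – my figure, not the museum’s), the difference matters by a few milliseconds:

```python
# One-way delay over ~3,000 km of cable at different velocity factors.
# The cable length is an assumption for illustration only.
C = 299_792_458             # speed of light in a vacuum, m/s
CABLE_LENGTH_M = 3_000_000  # ~3,000 km, assumed

for velocity_factor in (1.0, 0.95, 0.64):
    delay_ms = CABLE_LENGTH_M / (velocity_factor * C) * 1000
    print(f"{velocity_factor:.2f}c -> {delay_ms:.1f} ms one-way")
```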


Note #14657

Annabel repairing her mother's computer.

Instructed a 5-year-old in diagnosing and replacing a blown PSU in her mother’s computer.

Additional Processors

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Computerphile at its best, here tackling the topic of additional (supplementary) processors, like FPUs, GPUs, sound processors, etc., to which CPUs outsource some of their work under specific circumstances. Even speaking as somebody who’s upgraded a 386/SX to a 386/DX through the addition of a “math co-processor” (an FPU) and seen the benefit in applications for which floating point arithmetic was a major part (e.g. some early 3D games), I didn’t really think about what was really happening until I saw this video. There’s always more to learn, fellow geeks!

The Case Against Quantum Computing

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

by Mikhail Dyakonov

Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.

We’ve been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex manmade systems, and artificial intelligence.” We’ve been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape.” We’ve even been told that “the encryption that protects the world’s most sensitive data may soon be broken” by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.

Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.

It’s become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world’s top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.

In light of all this, it’s natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future.” Having spent decades conducting research in quantum and condensed-matter physics, I’ve developed my very pessimistic view. It’s based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.

Great article undermining all the most-widespread popular arguments about how quantum computing will revolutionise absolutely everything, any day now. Let’s stay realistic, here: despite all the hype, it might well be the case that it’s impossible to build a quantum computer of sufficient complexity to have any meaningful impact on the world beyond the most highly-experimental and theoretical applications. And even if it is possible, its applications might well be limited: the “great potential” they carry is highly hypothetical.

Don’t get me wrong, I’m super excited about the possibility of quantum computing, too. But as Mikhail points out, we must temper our excitement with a little realism and not give in to the hype.

Reply to Hardware Issue – when did hard drive space get like this?

This is a reply to a post published elsewhere. Its content might be duplicated as a traditional comment at the original source.

Hardware Issue (Aquarionics)

So, I am a professional system administrator. It says it on my business cards and everything. Every couple of months, when I have to explain to the receptionist at the London office that yes, I do work here, and so

Nicholas Avenell (Aquarionics) wrote:

(My first hard drive for the Amiga 600 was second hand from my dad’s old laptop. It was SIXTY MEGABYTES. It held DOZENS of games. I would need over EIGHT HUNDRED of those drives to hold a 50Gb World of Warcraft install).

I remember my first hard drive. It was 40Mb, and that felt flipping MASSIVE because I’d previously, like most people, been using floppy disks of no larger than 1.44Mb. My second hard drive was 105Mb and it felt like a huge step-up; I ripped my first MP3s onto that drive, and didn’t care for a moment that they each consumed 2%-3% of the available space (and took about 15 minutes each to encode).

Nowadays I look at my general-purpose home desktop’s 12TB RAID array and I think to myself… yeah, but it’s over half full… probably time to plan for the next upgrade. What happened‽ Somewhere along the line, hard drive space became, like mobile phone battery level before it, something where you start to worry if you have less than half left. I don’t know how we got here and I’m not sure I’m happy about it, but suffice to say: technology today is nuts.
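The arithmetic in that quote holds up, too – a quick sanity check, treating the units as loosely as drive marketing always has:

```python
# Quick sanity check of the numbers above, treating GB/MB loosely
# (as drive marketing always has).
drives_needed = 50_000 / 60      # a 50 GB install spread across 60 MB drives
mp3_share     = 3 / 105 * 100    # a ~3 MB MP3 on a 105 MB drive

print(f"~{drives_needed:.0f} sixty-megabyte drives for a 50 GB install")
print(f"one MP3 took ~{mp3_share:.1f}% of a 105 MB drive")
```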

What Happens When You Mix Java with a 1960 IBM Mainframe

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

IBM Mainframe

As an engineer for the U.S. Digital Service, Marianne Bellotti has encountered vintage mainframes that are still being used in production — sometimes even powering web apps. Last month she entertained a San Francisco audience with tales about some of them, in a talk called “7074 says Hello World,” at Joyent’s “Systems We Love” conference.

Created under the Obama administration, the U.S. Digital Service was designed as a start-up-styled consultancy to help government agencies modernize their IT operations, drawing engineering talent from Google, Facebook and other web-scale companies.

Or, as President Obama put it last March, it’s “a SWAT team — a world-class technology office.”

So it was fascinating to hear Bellotti tell stories about some of the older gear still running, and the sometimes unusual ways it was paired with more contemporary technology…

The Lost Civilization of Dial-Up Bulletin Board Systems

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I have a vivid, recurring dream. I climb the stairs in my parents’ house to see my old bedroom. In the back corner, I hear a faint humming.

It’s my old computer, still running my 1990s-era bulletin board system (BBS, for short), “The Cave.” I thought I had shut it down ages ago, but it’s been chugging away this whole time without me realizing it—people continued calling my BBS to play games, post messages, and upload files. To my astonishment, it never shut down after all…

The author’s computer connecting to BBS in 1996 (Benj Edwards)

The Golden Age of x86 Gaming

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I’ve been happy with my 2016 HTPC, but the situation has changed, largely because of something I mentioned in passing back in November: The Xbox One and PS4 are effectively plain old PCs, built on: an Intel Atom class (aka slow) AMD 8-core x86 CPU, 8 GB RAM, AMD Radeon 77xx / 78xx GPUs, cheap commodity…

The 2016 HTPC Build

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I’ve loved many computers in my life, but the HTPC has always had a special place in my heart. It’s the only always-on workhorse computer in our house, it is utterly silent, totally reliable, sips power, and it’s at the center of our home entertainment, networking, storage, and gaming. This handy box does it all,…

Post-It Minesweeper

Remember Minesweeper? It’s probably been forever since you played, so go have a game online now. And there went your afternoon.

A game of Microsoft Minesweeper in progress.
This is actually a pretty tough move.

My geek-crush Ben Foxall posted on Twitter on Monday morning to share that he’d had a moment of fun nostalgia when he’d come into the office to discover that somebody in his team had covered his monitor with two layers of Post-It notes. The bottom layer contained numbers – and bombs! – to represent the result of a Minesweeper board, and the upper layer ‘covered’ them so that individual Post-Its could be removed to reveal what lay beneath. Awesome.

Ben Foxall discovers Post-It Minesweeper
Unlike most computerised implementations of Minesweeper, the first move isn’t guaranteed to be safe. Tread carefully…

Not to be outdone, I hunted around my office and found some mini-Post-Its. Being smaller meant that I could fit more of them onto a monitor and thus make a more-sophisticated (and more-challenging!) play space. But how to generate the board? Sure: I could do it by hand, but that doesn’t seem very elegant at all – plus, humans make really bad random number generators! I didn’t need quantum-tunnelling-seeded Minesweeper (yes, that’s a thing) levels of entropy, sure, but it’d still be nice to outsource the heavy lifting to a computer, right?

Screenshot of my Post-It Minesweeper board generator.
Yes, I’m quite aware of the irony of using a computer to generate a paper-based version of a computer game, why do you ask?

So naturally, I wrote a program to do it for me. Want to see? It’s at danq.me/minesweeper. Just line up some Post-Its on a co-worker’s monitor to work out how many you can fit across it in each dimension (I found that I could get 6 × 4 standard-sized Post-Its but 7 × 5 or even 8 × 5 mini-sized Post-Its very comfortably onto one of the typical widescreen monitors in my office), decide how many mines you want, and click Generate. Don’t like the board you get? Click it again!
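If you’re wondering what the heavy lifting actually involves: a board generator just scatters mines at random and counts neighbours for every other cell. Here’s a minimal sketch of the idea – not the actual code behind danq.me/minesweeper, just an illustration:

```python
import random

def generate_board(width, height, mines):
    """Return a grid in which '*' marks a mine and every other cell
    holds its count of adjacent mines, ready to copy onto Post-Its."""
    positions = random.sample(range(width * height), mines)
    mine_set = {(i % width, i // width) for i in positions}
    board = []
    for y in range(height):
        row = []
        for x in range(width):
            if (x, y) in mine_set:
                row.append("*")
            else:
                # Count mines in the (up to) eight surrounding cells.
                count = sum(
                    (x + dx, y + dy) in mine_set
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)
                )
                row.append(str(count))
        board.append(row)
    return board

# e.g. a board for 8 × 5 mini Post-Its with 7 mines:
for row in generate_board(8, 5, 7):
    print(" ".join(row))
```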

Liz McCarthy tweets about her experience of being given a Post-It Minesweeper game to play.
I set up the first game on my colleague Liz’s computer, before she came in this morning.

And because I was looking for a fresh excuse to play with Periscope, I broadcast the first game I set up live to the Internet. In the end, 66 people ended up watching some or all of a paper-based game of Minesweeper played by my colleague Liz, including moments of cheering her on and, in one weird moment, despair at the revelation that she was married. The internet’s strange, yo.

Anyway: in case you missed the Periscope broadcast, I’ve put it on YouTube. Sorry about the portrait-orientation filming: I think it’s awful, too, but it’s a Periscope thing and I haven’t installed the new update that fixes it yet.

Now go set up a game of Post-It Minesweeper for a friend or co-worker.
