Hello everyone, and Happy (finally) New Year 2026!
Well, here it finally is: the 13th issue.
It's been quite a while since the last issue came out, and a lot has changed in that time: for me, for you, and for everyone else on this small planet.
But as long as we're alive, the planet still spins, the internet works, and we're still online, which means it's not all bad, not all is lost, and there's hope that everything will be fine, maybe even wonderful ;-)
There's been a lot of sad, terrible, and scary stuff: life tells you to go to hell, politics goes viral online, and the news feeds us every grim scenario imaginable. Wars, revolutions, annexations, regime changes, and other idiotic decisions by the leaders we elected are all around us.
And then the whole planet waited for aliens to invade us. But they flew right past; apparently, they didn't like us. You know what? Looking at many of the world's leaders, I'd have flown right past too.
That big space metal sausage made the right choice!
Just think about it: they flew across dozens of galaxies to get here, and for what? To see Trump? Just the thought (I briefly imagined what it would look like) makes me laugh out loud. It reminds me of that famous scene from Home Alone 2, where Kevin McCallister meets him in the hotel lobby. Perhaps we would finally have seen answers to our questions and problems, but the old man probably scared them off.
I'm just trying to be funny, you know?
Let's put the bad news aside (I won't write about it here for obvious reasons), and let's remember the good news, so:
web1.0hosting.net is 4 years old! Congratulations! As of this momentous day, we have 1,234 users ;=) How cool! If you're not yet with us, please join our community; we'd love to see you.
Welcome to the thirteenth issue!
Happy reading, everyone! ;=)
Elpis`2026
Table of Contents
1. Fonts That Are Gone and Those That Remain
A Chronicle of the Great Typographic Stagnation
2. Building the Web by Hand
3. About HTML
4. The Web as a Home Printing Press
5. In the grip of lag, grand illusions, and slow speeds
what were the first online games like?
6. Epilogue
Fonts That Are Gone and Those That Remain
A Chronicle of the Great Typographic Stagnation
Once upon a time, in those glorious days when web pages took longer to load than it now takes a snail to run a marathon, something sacred existed. It wasn't code or design. It wasn't even HTML, which back then looked downright ascetic. It was fonts. Simple, boring, systemic, predictable, like wallpaper in an accounting office—but reliable, like a push-button telephone in the era of touchscreen facepalms.
Yes, that same Arial, the ever-cheerful Verdana, the ever-present Times New Roman, the ever-suspiciously businesslike Tahoma, and its slightly more relaxed brother, Trebuchet MS. A small band of typographic veterans that has survived the ages, outliving Flash animations, tables within tables, CSS before CSS, IE6, and even Google Fonts. Everyone else is gone, but they remain. And no one really knows why.
How did it all begin?
In the wild nineties, when the internet wasn't so much a network as an adventure with an unpredictable ending ("Will the page load? We'll find out after the ads!"), designers lived in fear.
This fear had a name: incompatibility.
While there were more encodings than common sense, and browsers communicated exclusively through insults, fonts were the last bastion of stability.
Microsoft, Apple, and even Sun Microsystems (may they rest in peace) decided to agree on something. Thus was born the idea of "web-safe fonts"—fonts guaranteed to look the same on all machines. Translated into plain English: whether a visitor arrived on Windows 95 or Mac OS 8, browsing with Netscape Navigator, the text shouldn't turn into surreal gibberish.
And then a miracle happened. Arial, Times New Roman, Courier New, Georgia, Verdana, and a couple of others became a sort of font UN. They were embedded in every system, every computer, as if ingrained into the web's very DNA. Arial was a universal soldier: not particularly smart, but always there. Times New Roman was a learned professor, boring but respected. Verdana was a cheerful fellow with rounded letters and a generous soul. Georgia was a noble lady with a touch of old-fashioned elegance. And Courier New... well, it's the eternal programmer who still believes that an 80-character interface is the pinnacle of aesthetics.
And yet, oddly enough, more than 30 years have passed—and this entire family is still alive and kicking. Why? Because everything else either died or turned out to be too beautiful for this life.
In the early 2000s, designers began to rebel. They wanted expressiveness, individuality, fonts that would say, "Look, I'm not just a website—I'm a brand!" They started embedding images with text, writing JPEG headlines, and later, Flash animations where letters jumped, shimmered, disappeared, and reappeared accompanied by MIDI. Naive children of optimism! They thought freedom meant a font that didn't look like Arial.
But reality quickly put everyone back in their place. JPEGs with text weren't indexed, Flash wouldn't open for everyone, and users on slow connections saw only empty squares instead of headlines. Then CSS entered the arena and said, "Relax, you can now include your own fonts via @font-face." And everyone cheered.
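In practice, that looked roughly like the rule below: a custom font pulled in with @font-face, with the old system fonts still listed as a fallback. (The font name and file paths here are invented for illustration, not taken from any real site.)

```html
<!-- A minimal sketch of @font-face with a safe fallback stack.
     "FancyFont" and its file paths are hypothetical. -->
<style>
  @font-face {
    font-family: "FancyFont";
    src: url("fonts/fancyfont.woff2") format("woff2"),
         url("fonts/fancyfont.woff") format("woff");
  }
  body {
    /* If the download fails or stalls, the old guard steps in. */
    font-family: "FancyFont", Georgia, "Times New Roman", serif;
  }
</style>
```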
However, the joy was short-lived. Because yes, you can include a font, but how will the browser load it? How much will it weigh down the page? And how will it look in Safari 3.2 under Mac OS X Leopard? (Spoiler: badly.) And at that moment, the old hands from the "Big Six" font companies looked at this disgrace and merely smirked condescendingly. They knew they had no equal.
Arial is like vodka. Not everyone likes it, but it's always at hand. Times New Roman is like a literature teacher: annoying, but you always come back to it. Verdana is comfort. Georgia is confidence. And Courier New is honesty, albeit with a hint of despair.
Many people ask: why are there so few standard fonts? Hasn't anyone come up with anything new in thirty years? Oh, they have! But they couldn't implement it. After all, a font isn't just a bunch of letters. It involves licensing, rendering, multilingual support, file size, hinting (yes, that mysterious word designers use to scare students). And every OS manufacturer protects its fonts like a knight protects his queen.
For example, Apple spent a long time promoting Lucida Grande and Geneva—they looked cute until Retina displays arrived and revealed that their pixelated beauty was only partially true. Microsoft, for its part, churned out fonts with the same enthusiasm with which it churns out Windows updates: lots of them, for no apparent reason, and half of them incompatible with the rest of the world. And Linux? Well, Linux has always been that student who sews his own clothes. It had DejaVu Sans, Liberation Serif, and other home-grown creations that never quite managed to look modern.
And out of this chaos emerged a simple truth: fewer means more stability. That's why those few "immortal" fonts remain.
But this stagnation isn't just a technical oddity. It has shaped the very aesthetics of the internet.
Designers are accustomed to thinking in terms of Arial and Times New Roman. Their proportions, letter spacing, and visual rhythm—all of this has become the foundation on which an entire web generation grew up. Even modern websites, seemingly shiny and trendy, subconsciously structure their compositions around the good old sizes of standard fonts.
After all, you'll agree, there are no coincidences: if you open an old website from 1998, where everything is in tables, the header is a GIF, and the navigation is via a <map>, the font is still legible. It hasn't fallen apart. It hasn't become outdated. Unlike everything else. Arial doesn't know the word "yesterday."
The irony is that the more typography progresses, the more designers return to the same system fonts. Not out of a love for retro, but out of simple considerations: loading speed, accessibility, readability. Even Apple and Google now openly recommend using "system-ui," the user's system font. We're back where we started. The great circle has closed.
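In practice, that recommendation boils down to a stack like the one below (a typical example, not a quote from any vendor's documentation): the browser simply uses whatever font the operating system already draws its own interface with.

```html
<!-- A typical "system font" stack: system-ui first, then the usual
     platform names, with plain sans-serif as the last resort. -->
<style>
  body {
    font-family: system-ui, -apple-system, "Segoe UI", Roboto,
                 "Helvetica Neue", Arial, sans-serif;
  }
</style>
```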
Today, you can use a thousand fonts from Google Fonts, and each one will look beautiful on your monitor. But open the same page on Android 6 or Windows XP (and those dinosaurs still exist somewhere), and your neat design turns into a visual carnival. Letters dance, margins shift, and buttons start acting like they've taken an acting class.
However, if you dig deeper, this isn't a bug, but the very nature of the web. The internet has always been a compromise between dream and reality. If you want beauty, prepare for chaos. If you want reliability, return to Arial. And designers, tired of fighting incompatibility, sooner or later give up. They sit quietly, open their CSS, and type: `font-family: Arial, Helvetica, sans-serif;` And they feel a gentle calm. That's it. No more searching.
Helvetica is a whole other story. A font that has become legendary, though most users don't even realize they're reading a variation of it. On Mac, it was the system font; on Windows, it was replaced by Arial. And for thirty years, these two fonts have been vying for the throne of neutrality. Helvetica is an aristocrat with perfect posture, Arial is its clone, born in the provinces. But both are beautiful in their own way.
Sometimes it seems like 21st-century designers are living in a state of typographic schizophrenia. On the one hand, you crave expressiveness—handwritten fonts, exotic serifs, geometric experiments. On the other, you know the user won't see your font the way you intended. They'll show up on a Xiaomi running Android 10, and your exquisite typography will turn into something between a circus poster and a government document.
So why isn't anyone breaking these rules? Why aren't we making a revolution, marching with signs saying, "Down with Arial! Free Typography!"? Because over the years, it's become clear: standards aren't enemies, they're saviors. They're the backbone of the web, its visual skeleton. You can joke, you can grumble, but without them, the entire internet would have long ago devolved into typographic chaos, where every email would look like a ransom note.
And now for the main thing: which fonts should you use? The answer, of course, is simple and cynical: those that won't break your website. If you want to be practical, use system fonts. Arial, Helvetica, Verdana, and Georgia are good old friends, time-tested. If you need neutrality, go with Arial. If you need a little more warmth, go with Verdana. Academic rigor – Times New Roman. Cozy elegance – Georgia. Everything else is just toys. Beautiful, but dangerous.
You could, of course, add trendy Inter, Open Sans, or Roboto. They look modern, but deep down, they still gravitate toward the same proportions, the same principles. Because in typography, as in life, evolution doesn't make leaps – it simply polishes the good old.
So the next time someone tells you, "Arial is boring," just smile. Arial isn't boring, it's timeless. Verdana isn't outdated – it simply knows its worth. And Times New Roman… well, yes, it's a bit of a bureaucrat, but it's the one that has helped billions of students defend their diplomas. These aren't just fonts. They're the foundation of the visual language of the web.
You could say these fonts have survived everything: design trends, paradigm shifts, screen revolutions. And they will remain even when all the current fashionable fonts are forgotten. When the metaverse finally turns into a three-dimensional Excel spreadsheet, Arial will still stand guard over readability.
The world changes, but typography doesn't. Because at some point, humanity realized a simple truth: you don't have to invent something new for everything to work. Sometimes, it's enough for letters to simply be visible.
And that's the great irony. We can change HTML standards, invent flexbox and grid, rename JavaScript every three years (lol, hello Kotlin), but fonts are like old panel houses: ugly, but eternal. We can laugh at them, but we still live inside them.
So there you are, opening yet another website, and seeing Arial. Somewhere deep down, you even rejoice – stability. You know this font won't betray you. It won't slip, it won't blow up the layout, it won't start a riot.
Someday, maybe new standards will emerge. Maybe browsers will agree on a single font format, maybe neural networks will start dynamically rendering fonts based on the user's mood. But until that happens, the old-timers continue to hold the line. And frankly, let them. Because they're the ones who hold together this entire decrepit but beautiful construct called the web.
Arial, Verdana, Times New Roman, Georgia, Tahoma, Trebuchet—these aren't just names from the past. They are the visual chords of the internet. They ring quietly but confidently, like an old modem connecting to the world. They remind us that progress is wonderful, but reliability is more important.
And perhaps that's the whole point: there have always been only a few truly reliable options, and there always will be.
Building the Web by Hand
In a quiet corner of the Internet, there is a website that seems to have escaped the passage of time. It is simple, fast, free of ads and unnecessary scripts. Created in October 1999, it still stands alive today. Its author, Professor William T. Verts, has kept it almost unchanged since then. More than a relic, this site is a testament to a way of teaching—and inhabiting—the web that now feels almost subversive.
Teaching by Building the Web by Hand
Before platforms like Wix promised to build websites “without writing a single line of code”, there was another way to teach and understand the web: building it by hand. Not with “construction” metaphors or drag-and-drop systems; rather, by typing, tag by tag, in a plain text editor.
Professor William T. Verts embraced this practice and made it the core of his university teaching for years.
“I very much wanted the students to get their hands dirty, so to speak,” he explained to me in an email.
Verts taught courses aimed at students without a background in computer science, and his approach was deliberately minimalist: writing HTML by hand using only Windows Notepad or Mac TextEdit, then uploading the files to a Linux server reserved for his class, using traditional tools like telnet (SSH), FTP, and Emacs.
There were no templates or WYSIWYG editors. Students had to understand the document structure, carefully write each closing tag, and face the unfiltered reality of what it means to publish on the web.
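To get a sense of the scale involved, a first exercise in such a course could be nothing more than the page below, typed out in Notepad and uploaded over FTP (a generic reconstruction for illustration, not one of Professor Verts' actual assignments):

```html
<!-- A complete web page, written by hand, one tag at a time. -->
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <h1>Hello, web!</h1>
    <p>This page was typed by hand in a plain text editor.</p>
    <p>Back to the <a href="index.html">course index</a>.</p>
  </body>
</html>
```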
“Anyone equipped with nothing more than an encrypted ftp and a text editor could maintain their pages adequately,” says the professor.
One of the courses even included basic JavaScript and a server-side Python script that simulated an online ordering system. The goal was never to compete with modern tools; rather, it was to show that they were unnecessary to understand the fundamentals.
By intentionally giving up the comfort of graphical tools, students discovered that the web was readable, malleable, accessible—and that its principles were within anyone’s reach.
Students learned to upload files, deal with typos, test locally, and debug code manually. This approach contrasts sharply with many current courses that start directly from the abstraction of frameworks like React or from platforms where the code isn’t even visible—such as in “vibe coding” environments.
This type of training also carries a cultural resistance, reclaiming the idea that the Internet should not be an opaque space managed by magical tools or third-party services. It (also) should be an environment that anyone can inhabit and build for themselves.
The Site That Never Needed Modernization
That same teaching philosophy—based on direct understanding, manual coding, and full control over what is published—not only shaped the classroom experience; it also defined the way Professor Verts built and maintained his own website.
His personal webpage (https://people.cs.umass.edu/~verts/), maintained since 1999 using the same tools and principles he taught, is a reflection of that idea.
It has been online since at least 1999, according to its own source code, and it is still alive today.
Visiting William T. Verts’ homepage — a light background, default typography, and a GIF image map with buttons — you might think it’s “old-fashioned.” But this quick judgment misses an essential nuance: it’s not a site frozen by neglect; it simply never needed to submit to the logic of “perpetual modernization.”
“The effort expended in correcting the errors [from design tools] was not worth the original gains in ease-of-design,” Verts points out.
Enough from Day One
When the professor published the first version in 1999, HTML 4.0 had just become established, and the web still mainly loaded static documents. That technology still works exactly the same today in any modern browser. Therefore, the promise of “responsive redesign,” “JAM stack,” or “Single-Page App” offers no real advantage for the type of content Verts publishes, which includes teaching notes, utilities, and personal software.
Absolute Control, Minimal Weight
By writing every line of code by hand, Verts maintains almost surgical control over every byte of his site. According to an analysis with GTmetrix, the homepage weighs around 231 KB, mostly due to static images, without unnecessary code. In contrast, many modern JavaScript frameworks easily exceed that size—even before displaying any content. As a result, the site loads almost instantly, even on slow connections.
GTmetrix request-by-request visualization of Professor Verts’ homepage. Minimal static files, no external scripts—fully loaded in 1.3 seconds, even on slow connections.
The Only External Tool
The biggest concession to automation is a Delphi program he wrote himself to produce the button panel as a client-side image map. The application generates:
the resulting GIF file;
the <MAP> block with <AREA> coordinates, ready to paste into the HTML (see the sketch below).
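The markup that the program spits out looks roughly like the fragment below; the file names and coordinates here are invented for illustration, not copied from the actual site:

```html
<!-- A client-side image map: one GIF of buttons, with the clickable
     regions described in HTML. Names and coordinates are hypothetical. -->
<img src="buttons.gif" width="300" height="60" usemap="#navmap" alt="Navigation">
<map name="navmap">
  <area shape="rect" coords="0,0,99,59"    href="index.html"    alt="Home">
  <area shape="rect" coords="100,0,199,59" href="courses.html"  alt="Courses">
  <area shape="rect" coords="200,0,299,59" href="software.html" alt="Software">
</map>
```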
By avoiding databases, CMSs, or third-party libraries, maintenance is reduced to the essentials. There’s no “update WordPress”, no “migrate from Angular 11 to 17”, no CDN bills. Paradoxically, technological inertia becomes a preservation strategy: the less you add, the less can fail.
The Value of Essentials
Having the opportunity to correspond with Professor Verts by email allowed me to understand that the design of his personal website is not only a technical choice; it is an extension of his teaching practice, one that prioritizes deep understanding over superficial appearance, and direct work over unnecessary abstraction.
Perhaps true modernity is not found in adopting every novelty; it lies in pursuing solutions that remain valid twenty years from now. In this way, the hand-crafted web reveals itself as a lucid commitment to permanence.
About HTML
In this article, I want to discuss the importance of a markup language like HTML. Why HTML? Because, despite its widespread use, this language, in my opinion, is critically underrated. I'm not talking about its popularity, but about one important function it performs.
It's clear that we encounter HTML on every website on the internet. This has been the case since the dawn of the web. It evolved alongside the web, preserving its most important feature at a time when computers weren't widely available, and, consequently, each of us had completely different computer configurations, along with a different set of software, from operating systems to document viewing and editing programs.
Some people kept up with the times and had up-to-date hardware that could easily handle modern operating systems supporting all available technologies, while others had computers with significantly weaker configurations, and some had no hardware at all.
When the HTTP protocol and, with it, the first version of HTML, were first introduced, the language had a handful of tags that were easy to remember. Even without a web browser, you could open an HTML document, read its contents, and even roughly imagine how it would look in a browser.
As the World Wide Web grew in popularity, HTML also adapted to new web page design requirements. It became possible to embed images, animations, and other objects, to customize how text flows around them, and to introduce styles. Together with the ability to interact with a page dynamically using JavaScript, this gave rise to the concept of web layout, a term previously used in typography.
But one of HTML's most important properties remains: backward compatibility. While some websites may no longer look exactly as the author intended when designing them for a specific browser, you can still get the information you were looking for.
HTML is the ink of the internet, and web pages are the paper. Your websites are booklets, brochures, books, and sometimes even entire encyclopedias. Unlike their paper cousins, you can insert pages into your books at any time. Tearing out pages, however, is not a good idea. Just as in paper books, we can mark text or pages in pencil with a comment, such as "outdated, no longer relevant," the same should be done with e-books. This is no less important with websites, as many other sites may have a link to this specific page. When accessing it, readers will only see the spine of the torn page in the form of a 404 error, which at least indicates that the web server is still working.
We don't typically see references to other books in books, telling us which page, paragraph, or line we can access for additional information. But on web pages, we can do this using hyperlinks. Not just links, not superlinks, but hyperlinks, because our document is hypertext, meaning we can navigate to pages on other sites, and from other sites, we can navigate to ours. This is what's called a "web."
To allow others to more precisely reference the information on your website, you need to place "anchors" in the text of your pages. Then other web users, or you yourself on other pages, can link to a specific paragraph in the text, making life even easier.
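In plain HTML this takes just two pieces: an anchor placed in the target page, and a link that points at it. The names and address below are only an example:

```html
<!-- On your page: mark the paragraph (or heading) with an id. -->
<h2 id="chapter-3">Chapter 3</h2>

<!-- Anywhere else on the web: link straight to that spot. -->
<a href="https://example.com/book.html#chapter-3">See Chapter 3 of the book</a>
```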
Time has passed, and now websites are no longer books with extra pages "for notes." We're now surrounded by web applications, and if we draw an analogy with the real world, websites are now devices for various purposes. They use the same protocol for transmitting hypertext documents, even for transmitting video. It's as if we were sending a video stream through an Enigma-encrypted telegraph.
And yet, we can still visit old websites, or even create one, turn on our old computer, the one we spent such pleasant childhoods on, and remember that the true value of information lies not in the clutter of technology, but in accessibility. This is Web 1.0—our little world that still flickers on the internet, uniting enthusiasts from all over the world and reminding modern users of the internet's once-big bang, the traces of which we can still observe in this relic of its origins.
This article was written on a computer with a Pentium MMX processor.
The Web as a Home Printing Press
The small web is not something recent. It is not a late reaction against platforms, nor a nostalgic trend driven by social media saturation. It never went away. It was simply covered by layers of centralized services, polished interfaces, and promises of scale, so compressed by that weight that today we feel the need to call it “small,” even though it remains an active, productive, and living space. Long before the terms IndieWeb or small web existed, there were already people publishing on the Internet with a clear logic: doing it on their own, understanding the medium, controlling the infrastructure, and communicating directly with whoever was on the other side. That logic did not originate on the web. It came from earlier practices.
Before the Internet
For decades, small press was the natural territory for authors who preferred full creative control over mass visibility. It was not limited to fanzines, but also included comix, mini-comics, and self-published works that circulated outside the major publishing houses. Small press was more than just a format; it was also an ethic. It meant writing, drawing, printing, distributing, and sustaining a work without intermediaries. It involved accepting small print runs, limited audiences, and slow circulation in exchange for autonomy.
From Paper to HTML
By the late 1990s, that same ethic found a new medium in the early web, not merely as a replacement for paper. The personal website functioned as a kind of digital home printing press, with rudimentary tools (from our perspective 30 years later) and fully manual processes. John MacLeod (https://www.sentex.net/~sardine/)—a Canadian cartoonist who emerged from the small press circuit of the 1980s and creator of the character Dishman—whom I had the pleasure of corresponding with, describes this transition with retrospective clarity: “I’m not sure I consciously realized it at the time, but yes it was an extension of small press, the whole DIY mentality.”
The web did not appear as a professionalized channel or a stable medium; rather, it was an experimental space. Publishing meant learning by doing, solving basic technical problems, and accepting very concrete material limitations. As MacLeod recalls: “I guess it felt more experimental, since at those early stages it wasn’t always clear to me how I could do something as basic as upload a graphic.”
Making the Web by Hand
Building a personal website in the 1990s did not require platforms with artificial intelligence, but simple text editors. HTML was written line by line, without layers of abstraction or “magic” frameworks: “I started off with MS Notepad… and in all cases I was working direct hands-on with the raw text HTML code,” MacLeod explains in his email. That direct contact with the code was like a craftsperson working with their materials. As in small press, the author understood and controlled the entire process. The HTML editor played the same role as the photocopier, the stapler, or the layout table: simple tools in the service of a DIY logic.
Publishing Without Permission
One of the clearest points of contact between small press and the small web is the absence of intermediaries. There were no algorithms, metrics, or optimization. There was also no need to ask for permission. Publishing simply meant uploading files to a server and linking them. From that perspective, the continuity with the present becomes clear. MacLeod sums it up directly: “Everyone today who bothers to post their own blog via WordPress or whichever is basically publishing a zine.” The medium changes, not the logic. A personal blog remains a form of self-publishing.
Small Infrastructure, Long Time
Another shared trait is the scale of the infrastructure. Many of these sites were hosted on modest services, often provided by small local ISPs. In MacLeod’s case, his site is still online because his Internet provider has remained the same for decades. “Limited web page hosting was included with their package,” he recalls. The question of preservation then emerges naturally. “I feel like everything deserves preservation,” MacLeod says, emphasizing the value of marginal and less popular works, precisely because they have fewer chances to survive. Preserving personal websites, hand-built pages, and early digital publications does not mean freezing them as fossils, but recognizing them as part of a living continuity. In the same way that small press comix find new readers decades later, the small web continues to offer a space to publish without permission, at a human scale and with its own sense of purpose.
It is not a return. It is a line that was never broken.
In the grip of lag, grand illusions, and slow speeds
what were the first online games like?
A story about how humanity tried to play online when the internet was just beginning to crawl, and computers accounted for half the audience.
Today, approximately three billion people play online games (out of roughly six billion internet users), meaning every second person online picks up a gamepad or mouse. We fight, build, meet people, argue, make peace, and all of this without leaving the internet. It seems like online multiplayer has always been with us, but in reality, it all started when students simply didn't want to do their coursework.
1970s: Empire, Written for a Credit
In the early 1970s, humanity didn't have Steam, Wi-Fi, or even decent computer screens—but there was John Daleske, a student who wrote the game "Empire" for his 1973 term paper. Initially, it was a space strategy game in which up to eight players competed for galactic dominance, with each planet having its own economy, fleet, and politics.
A few months later, Daleske apparently got tired of calculating taxes and decided to remake the game into a small space shooter. The strategy disappeared, players boarded starships and started firing phasers—the very same ones from Star Trek. Empire now allowed up to fifty players, divided into teams, and all of this ran on the PLATO network—a computer-based learning system designed for education but, as usual, becoming a platform for procrastination.
The graphics were austere: a large circle for a planet, a few rectangles for ships. Everything else was text. Commands like "turn 45 degrees" or "shoot 180 degrees." And the most important thing was patience. After all, network speeds back then were 180-1200 baud, which was like a snail's pace. Simply landing troops on a planet sometimes required a ten-minute wait.
Players would turn on their terminals in advance—a couple of hours before the game—to accumulate computing time. Otherwise, the system simply couldn't cope. Sessions could last for hours, sometimes days, but no one cared. For the first time, you could play "together."
Spasim, Maze, and other PLATO children
Empire had many successors. For example, in 1974, Spasim, with as many as 32 players and 3D wireframe graphics. Its creator later argued with the world for a long time that he was the inventor of the first first-person shooter, but he was a bit late. That same year, three high school students from a NASA research center created "Maze"—a game where giant eyeballs darted through mazes and shot at each other. It was played over a cable, directly connecting computers, making it essentially the first LAN shooter.
Later, one of the students enrolled at MIT and, together with a professor, created "Maze War" for ARPANET, the ancestor of the modern internet. Up to eight players played, and its popularity was so great that at one point "Maze War" was even banned: the game consumed half the traffic between Stanford and MIT. Thus, humanity experienced its first DDoS attack from fans ;-)
Maze War gameplay
In 1977, "Maze War" was ported to the Xerox Alto computer, using the PUP protocol—the same one that inspired the creation of TCP/IP. "Maze War" was, arguably, the first game to truly utilize the client-server model. It was also the first to feature cheating: players could see their opponents' positions on the map, even though the rules required them to be blind. The first "wallhack wizards" were already here.
D&D at the Monitor
At the same time, "Oubliette"—the first online role-playing game—was released on the same PLATO in 1977. Inspired by "Dungeons & Dragons," it featured players creating characters, choosing races and classes, gathering in taverns, and delving into dungeons. One player would direct, while the others would obey and press a few buttons. And it wasn't just "online play" anymore—it was a shared world!
The problem remained the same—speed and hardware. But it was precisely these limitations that gave birth to a different genre. In 1975, Colossal Cave Adventure was released, which had no graphics at all: the player saw only text and descriptions of actions. Thus began the era of "text adventures," which still shows signs of life today.
MUDs: Text, the Web, and the Beginning of Cyberspace
Inspired by "The Cave," Roy Trubshaw and Richard Bartle created MUD—Multi-User Dungeon—in 1978. Players wandered through rooms, typing commands like "go north" or "kill rat," and saw on the screen:
> You enter the forest. A rat appears. It looks hungry.
The first real chat, the first co-op, the first internet dramas. The second version of MUD already had twenty rooms and a dozen commands. In one mission, you had to keep one person in each room, otherwise the maze would collapse. They played at night—because during the day, the "JANET" network was busy with science.
By the early 1980s, MUDs had become numerous. People created them at universities, adapting them to their own worlds and rules. Speeds were still the same, but text didn't need more.
The most famous wave of MUDs began with "Scepter of Goth" (1983). It featured up to 16 players simultaneously. It even had an economy: arenas, duels, bars, monsters, and administrators who kept track of everything—almost like D&D game masters.
In 1984, the French created "MAD," the first international MUD, accessible via the BITNET network. When traffic suddenly surged, network administrators assumed the network was under attack and blocked the game. Thus, the first "anti-cheat" was born, created by a system administrator who simply pulled the plug ;-)
This is how Big K magazine described MUD1 in 1984.
MUDs became a cultural phenomenon. Nicknames, clans, digital duels, even the nascent "roleplay"—all of it was born there. They laid the foundations of cyberspace even before Gibson wrote "Neuromancer."
1980s: From Symbols to Pixels
In 1985, "Island of Kesmai," the first commercial MUD with pseudographics, was released. Visually, monsters were still just letters and symbols, but now you paid for the privilege: a CompuServe connection cost $6 an hour, while an advanced 4800 bps connection cost $12. Some fans later showed bills of two thousand dollars a month.
In 1986, Lucasfilm Games (the future LucasArts) launched a beta test of Habitat—the first MMO with real graphics. The word "MMO" didn't even exist back then, so it was called a "graphical MUD."
Habitat was more of a social sandbox: avatars, houses, communication, robberies, weddings, and divorces. There's a famous story about a couple of players who had a virtual wedding, then divorced, and had to divide their property—the first digital drama.
And when a bug started causing characters' heads to disappear, a cult of "headhunters" emerged, trying to infect everyone else with the bug. The internet didn't yet know memes, but it knew chaos.
Habitat operated through the CompuServe network, and access to the servers was only available in the evenings and on weekends. Every minute cost 8 cents. Online life literally had a price.
1990s: Modems, Swords, Megabytes
In 1989, "Kingdom of Drakkar" was released—a fantasy RPG with isometric graphics and mouse controls. It already looked like a modern game, but it still cost 8 cents an hour through CompuServe. Up to 200 players could play it—a figure that seemed incredible at the time.
At the same time, Sega was trying to establish itself online on consoles: in 1990, "Sega Meganet" appeared in Japan. Games were downloaded via modem, with a subscription costing a hundred dollars a month (plus the phone bill). The library contained only 25 games, and online play was limited to tennis and mahjong. The project quickly foundered for obvious reasons.
In 1991, "Neverwinter Nights" was released—not the famous BioWare game, but the first one from Stormfront Studios. It ran on the Gold Box engine, featured chat, and turn-based battles with 20-second turns. It was played through AOL, which was already delivering 9600 bps. In the evening, the servers were completely full: 2,000 players at a time, 115,000 registered. Online gaming was starting to become widespread.
At the same time, the first dial-up cards went on sale, allowing internet access over the phone. For $4 an hour, you could play at home—without a university network. By 1996, modems had accelerated to 33.6 kbps, and AOL launched unlimited internet for $14.95 per month.
Doom, Quake, and the Birth of Esports
While some were building dungeons, others were shooting. In 1995, millions of people paid $9.95 a month to play Doom online with the DWANGO service. Quake followed, but online play initially didn't work—lag turned matches into turn-based meditations. Everything changed on December 17, 1996, when QuakeWorld, a version with full online multiplayer, was released. Remember that date; it's still called the birthday of esports.
To connect, you had to download QuakeSpy, which searched for servers. It later evolved into the famous GameSpy, and by the late 1990s, no online game could function without it.
The First MMORPGs
1996 was a fruitful year for MMORPGs. Among them were the boring Dark Sun Online and the quickly forgotten The Realm Online. But "Meridian 59" looked fresh: 3D graphics, a living world, and an attempt to coin the term "MMPRPG" (Massively Multiplayer RPG). But another term, "MMORPG," prevailed, championed by Richard Garriott, creator of "Ultima Online."
When "Ultima Online" appeared before everyone's eyes in 1997, players saw what a real online world could look like. Not just battles and chat, but life: you could sew clothes, cook soup, grow crops, steal wallets, build houses, and even beg. Players organized clan wars, weddings, RP scenes, and fairs. It was an entire universe. And in 1999, "EverQuest" appeared—full 3D, fantasy, and thousands of players. The interface was cumbersome, but the world was vast. Online was finally no longer an experiment.
"EverQuest" gameplay
While the US was mastering Ultima, Korea was experiencing its own explosion. By 1996, there were already 200,000 MUD players, and a couple of years later, "Nexus: The Kingdom of the Winds," the first Korean graphical MMORPG, was released.
But it was "Lineage" (1998) that made a real splash. At its peak, Ultima Online had 240,000 players, EverQuest 460,000, and Lineage 3.4 million. Online gaming became not just entertainment, but a national sport.
"Lineage" gameplay
In 1999, Sega released the "Dreamcast," the first console with a built-in modem. For $21.95 per month, players received a then-respectable 56 kbps connection and the ability to play dozens of games online, including the first console MMORPG, "Phantasy Star Online."
And although all this truly blossomed in the new millennium, the 90s marked the era when online games ceased to be a novelty. They moved from university basements into apartments, from cables into modems, from coursework into industry.
Epilogue
And so, as the pages of this issue slowly draw to a close, it feels right to say something simple and honest. Elpis has never been about trends. It's always been about stopping for a moment, looking around, and remembering why we loved the internet in the first place—when it was more human.
If you've read this far, thank you. Seriously. In an age where attention is the scarcest resource, the fact that you've chosen to spend some of it here means a lot.
This issue, like the previous ones, isn't trying to change the world or explain how everything works. It's simply a reminder that curiosity is still important, that creating things with your own hands still has value, and that there's nothing wrong with enjoying old technologies, old ideas, and habits if they still bring you joy. Progress doesn't always mean moving forward; sometimes it means carefully preserving something authentic.
Elpis exists thanks to people like you, who are still here—reading, experimenting, reminiscing, and feeling nostalgic.
So let's close this issue without any grand conclusions or dramatic promises. The internet is still here. We're still here. And for now, that's enough. Until then, take care of yourselves and your retro computers, and keep creating.
See you there!
P.S. Thanks to all the authors who contributed to this issue.