- Everything was text-based;
- There was (almost) no spam;
- There was no such thing as a search engine;
- You did everything using these crappy UNIX text-based command-line tools.
I first started using the Internet back in high school, around 1990, on an IBM RT UNIX system that we reached over a dialup connection from school. Back then, there were three main uses of the Internet: email, USENET, and FTP. Email was not very common, but some universities had it, and by the time I started college in 1992, you automatically got an email address as an undergraduate.
USENET
USENET was a huge distributed newsgroup system. It was based on peer-to-peer file exchange well before we called it that. There were thousands of newsgroups on topics ranging from the C programming language to the rock band Rush. Yes, it still exists; I think it's largely overrun with spam and warez these days, and I haven't looked at it in years. Back then, though, spam was almost unheard of, so newsgroup discussions tended to stay on-topic. I was the moderator for comp.os.linux.announce for a while, which meant that every announcement to the Linux community (like a new kernel release or software package port) came to my inbox for approval before I posted it to the world.
At some point I want to write a book about USENET culture circa 1992. It was a very interesting place. Groups like talk.bizarre were frequented by the likes of Roger David Carrasso (who pulled off some of the best trolls imaginable) and Kibo (founder of alt.religion.kibology); and who can forget the utterly brilliant and bizarre stories by RICHH?
I was a member of the "inner circle" for a group called alt.fan.warlord, which was centered on making fun of ridiculous signatures at the end of USENET posts, like this:
Paul Tomblin, Head _ _ ____
Automated Test Tools Team | | | | | __| ___._`.*.'_._
_______________ ______ ____| |________ | |_| |__ + * .o u.* `
/ ________ _ \ | __ | / ________ _ \ | ______| . ' ' |\^/| `.
| | | | / / | | | | | | | | | | / / | | | | | | \V/
| |__| | / /__| |_| | | | | |_| | / /__| |_| | | | /_\
\ _____/ \__________| |_| \___|_| \__________| |_| === _/ \_ ===
//
\\____ Phone: (613) 723-6500x8018 Mail: Gandalf Data Limited
/ _ \ Fax: Don't know it yet 130 Colonnade Road South
| |_| | Email: ptomblin@gandalf.ca Nepean, Ontario
\_____/ or ab401@freenet.carleton.ca K2E 7J5 CANADA
There was also alt.hackers, a moderated group with no moderator. In order to post, you needed to figure out how to circumvent the moderation mechanism (which was simply a matter of adding an extra header line to your post).
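The usual explanation of that trick is the standard NNTP "Approved:" header: a post that already carries one is treated by news servers as if a moderator had approved it. Here's a rough sketch of what that amounts to in modern terms, using Python's old nntplib module -- purely illustrative, with made-up server names and addresses, and assuming the Approved: header really was the extra line in question (nntplib itself has been removed from recent versions of Python):

# Posting to a moderated newsgroup by supplying the approval header yourself.
# Everything here is a placeholder: the server, the addresses, and the post
# are illustrative, not a record of the actual alt.hackers setup.
import nntplib
from io import BytesIO

article = BytesIO(
    b"From: student@cs.example.edu\r\n"
    b"Newsgroups: alt.hackers\r\n"
    b"Subject: Posting without a moderator\r\n"
    b"Approved: student@cs.example.edu\r\n"   # the "extra header line"
    b"\r\n"
    b"Body of the post goes here.\r\n"
)

with nntplib.NNTP("news.example.edu") as news:   # hypothetical NNTP server
    news.post(article)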
FTP
USENET was all about discussions, although there were newsgroups where you could post binary files -- typically encoded in an ASCII format like UUENCODE, and broken up into a dozen or more individual posts that you would have to manually stitch back together and decode. This became a popular way to post low-resolution porn GIFs, but was pretty much useless for anything larger than a few megabytes.

A better way to download files was to use FTP, which let you download files from (and upload files to) a remote FTP server. Of course, FTP had this totally unusable command-line interface which required you to type a bunch of commands just to get one file. This site at Colorado State helpfully explains how to use FTP, like anybody still needs to know how. A typical FTP session looked like this:
% ftp cs.colorado.edu
Connected to cs.colorado.edu.
220 bruno FTP server (SunOS 4.1) ready.
Name (cs.colorado.edu:yourlogin): anonymous
331 Guest login ok, send ident as password.
Password:
230-This server is courtesy of Sun Microsystems, Inc.
230-
230-The data on this FTP server can be searched and accessed via WAIS, using
230-our Essence semantic indexing system. Users can pick up a copy of the
230-WAIS ".src" file for accessing this service by anonymous FTP from
230-ftp.cs.colorado.edu, in pub/cs/distribs/essence/aftp-cs-colorado-edu.src
230-This file also describes where to get the prototype source code and a
230-paper about this system.
230-
230-
230 Guest login ok, access restrictions apply.
ftp> cd /pub/HPSC
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 ASCII data connection for /bin/ls (128.138.242.10,3133) (0 bytes).
ElementsofAVS.ps.Z
. . .
execsumm_tr.ps.Z
viShortRef.ps.Z
226 ASCII Transfer complete.
418 bytes received in 0.043 seconds (9.5 Kbytes/s)
ftp> get README
200 PORT command successful.
150 ASCII data connection for README (128.138.242.10,3134) (2881 bytes).
226 ASCII Transfer complete.
local: README remote: README
2939 bytes received in 0.066 seconds (43 Kbytes/s)
ftp> bye
221 Goodbye.
All this just to download a single README file from the site.
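For comparison, here is roughly the same retrieval scripted with Python's standard ftplib module -- just a sketch; the host name and path are taken from the transcript above, and that server is surely long gone by now:

# Fetch a single README over anonymous FTP: the same cd / ls / get / bye
# steps as the interactive session above, minus the typing.
from ftplib import FTP

with FTP("cs.colorado.edu") as ftp:            # host from the transcript (long gone)
    ftp.login()                                # anonymous login
    ftp.cwd("/pub/HPSC")                       # cd /pub/HPSC
    print(ftp.nlst())                          # ls
    with open("README", "wb") as f:            # get README
        ftp.retrbinary("RETR README", f.write)

# The connection is closed automatically when the with-block exits ("bye").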
In order to use FTP, you needed to know what FTP sites were out there and manually poke around each one of them to see what files they seemed to host, using "cd" and "ls" commands in the crappy command-line client. There was no such thing as a search engine. So, people in the FTP community made this giant "master list" of every FTP site out there and a short (one-line) summary of what the site had, e.g., "Linux, UNIX utils, GIFs." This master list was mirrored on a bunch of sites, but of course that was a manual process, and in practice the mirrors were often conflicting and out-of-date. These days you can find a giant list of FTP sites on the Web, of course.
The Birth of the Web
Back in 1994 or so I was doing research as an undergraduate at Cornell. At the time I was a huge USENET junkie, and from time to time I would see these funny things with "http://" in people's signatures. I had no idea what they were, but at some point I saw a USENET post explaining that if you wanted to browse those funny "URL" things you needed to download something called Mosaic from an FTP site at NCSA. When you launched the Mosaic Web browser it brought up this page, which was the home page for the entire World Wide Web. Eventually I managed to get the original NCSA Web server running and put Cornell's CS department on the Web. Here's an archive of the Cornell Robotics and Vision Lab web page that I made back around 1995.
Comments
In 1995 I had bought a big yellowpages-sized book that had most of the web pages in the world listed in it, in subject ordering. *facepalm*
Aaah good old times, gawd I am getting old.
mmm iscabbs
Hey, you forgot to mention archie and gopher!
i fondly remember the days of always asking people to reply to an email so i could have a copy for myself.
Matt, are you trying to tell us by omission that you are the only pre-Web Internet user who didn't waste countless hours in college playing MUDs/MUSHes/MOOs?
Uhh, not that I'm admitting anything... :)
I still have a copy of "The Whole Internet, User Guide & Catalog", O'Reilly, 376 pages $24.95. Unbelievable.
Matt, thanks for the walk down memory lane. I also started college in 1992, but we had to walk down to IT and fill out a paper form to get assigned an e-mail address, a process that took a couple of weeks (!!!) for IT to complete.
Once online, I remember tying up our dorm room phone line (pre-cat5) for hours on end, poking around the text-based internet. Good times.
Memories come flowing: I remember an FTP-accessible MIDI file archive at cs.ruu.nl, maintained by a guy called Guido van Rossum.
You forgot to mention Perth.
-- MV
Matt, it is hard to imagine that the pre-http Internet was less than 20 years ago. How about Internet search before "Google", less than 10 years ago? BG = Before Google. It seems much longer than that :-)
I think lynx is still my favorite web browser. You know, Amazon.com used to explicitly support text-based browsers in the code. Those were the good old days.