December 1974 - TCP, the protocol that gives computers a reliable, persistent connection. Also the first mention of "internet", as in "internetwork". And it's wrapped in the Internet Protocol, then v1, dating back to 1973. More or less where it all started for what we have today. And we're still stuck with both of them!
[…]
DNS was created in 1983, and became more common about two years later, I think?
Internet Protocol (IP) standardization was March 1982, with the full move of ARPANET being "done" in January 1983.
You can more or less consider this the start of the "global internet", or rather its larval but now stable-enough form. Most of what we consider the internet today is built on IPv4, TCP, BGP and HTTP. All just a decade or so away from this point.
Except that's standardization. The actual thing was first written in 1973, and IPv4 was made in 1981 to replace RFC 760 (IP for DoD) of 1980. Said RFC already has the 32-bit addresses we all love. So we already have a draft of the plague that will haunt everyone for the next half-century at least, hah...
[…]
BGP, the core protocol of modern internet backbone address allocation (and also a security nightmare), was sketched out in 1989 by engineers on the back of "three ketchup-stained napkins", but only started to be used in 1994. It presupposes IPv4.
Quick someone develop an IPv4 alternative before it's too late!
... Wouldn't it be funny if we made communist internet and web before it was cool
TBH I can see someone taking a heavy read through the manifesto, going all in on "automate everything", and then seeing the 32-bit address stuff (4 billion devices total possible, less in reality) and going "hey, that's not enough for everyone". Like, I imagine even just one automated factory probably has a shitton of devices connected to its own network? Hm
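For the "less in reality" part, here's a quick back-of-envelope sketch in Python. The reserved blocks listed are the big ones from today's RFC 1918/RFC 6890 allocations, which is obviously anachronistic for the setting and not an exhaustive list; it's just to show how much of the 32-bit space never gets to be a public host address.

```python
# Rough estimate: how much of the 32-bit IPv4 space is usable for
# public hosts? Reserved blocks below are the major ones from modern
# RFCs (RFC 1918, RFC 6890) -- a simplification, not a full inventory.
import ipaddress

total = 2 ** 32  # 4,294,967,296 addresses in theory

reserved_blocks = [
    "0.0.0.0/8",       # "this network"
    "10.0.0.0/8",      # private (RFC 1918)
    "127.0.0.0/8",     # loopback
    "172.16.0.0/12",   # private (RFC 1918)
    "192.168.0.0/16",  # private (RFC 1918)
    "224.0.0.0/4",     # multicast
    "240.0.0.0/4",     # "future use", effectively unusable
]
reserved = sum(ipaddress.ip_network(b).num_addresses for b in reserved_blocks)

print(f"total:            {total:>13,}")
print(f"reserved (major): {reserved:>13,}")
print(f"usable-ish:       {total - reserved:>13,}")
```

Even before fragmentation from hierarchical allocation, over half a billion addresses are off the table, and a single automated factory full of networked devices eats into the remainder fast.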
Hm... Actually does make me wonder how exactly the telecom network works in Guangchou? Like in terms of protocols, etc. Is it custom, is it just the goddamn TCP/IP, etc. Is it all still in the stage of "all user inputs are through a dumb terminal connected to a powerful mainframe" or are there computers that are single-user, even if they are big, etc?
I finally have time to write this, so here's my response:
All of this refers purely to the various versions and parts of the Internet Protocol suite (TCP/IP), but TCP/IP only became the universal, permanent networking protocol it now is in the mid '90s, and many of the services you describe (SMTP, HTTP, etc.) are TCP/IP specific, with other protocol suites having their own equivalents. Before that, there were dozens of the things, such as IBM's Systems Network Architecture, DECnet, PARC Universal Packet, and Xerox Network Systems (all proprietary), plus X.25, an open standard mostly used in Europe (Guangchou's computers almost certainly use one of these, depending on who we bought them from; locally developed ones probably use either a locally developed protocol or X.25). More or less all of these were more popular than TCP/IP, which was used purely by the US military and parts of American, Canadian, and British academia (as the ARPANET, and then the Internet). The whole reason the OSI model is so ill-suited to TCP/IP is that it was designed both to account for all of these and to express the best practices of the time. These were intended to be implemented in the OSI protocol suite, which would be based on X.25 and developed jointly by the International Organization for Standardization (ISO) and the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), a specialized agency of the UN.
The reason the US military/academia protocol suite won is a bit complicated, but it boils down to a few things. First, the OSI protocols were somewhat overcomplicated and got stuck in development hell/committee, and computer manufacturers rarely implemented them, even if more or less everyone was committed to eventually moving over. Second, as the '80s went on, increasing numbers and types of organizations were connecting to the ARPANET, and the US military broke off its own portion of it into the MILNET. This resulted in the ARPANET getting transferred to the National Science Foundation, which soon made it publicly accessible and simultaneously began privatizing it. As the Internet was readily available and gave the best access to American computers, the existing X.25 networks began adding compatibility layers to it, and in the early '90s began offering "secondary" TCP/IP networks before switching entirely. The release of the World Wide Web (HTTP) on TCP/IP only would kill the OSI suite, and the last version would be released in 1996 (a few remnants survive in random places like airline computer systems, fiber optic cable configuration, and similar niches, but they're generally being slowly phased out, as has already happened in finance).
Even if we could make sure OSI didn't fail, it's not entirely clear we should, though. X.25 had a couple of different address formats (I said it was overcomplicated!), but the primary one was literally phone numbers (specifically E.164 phone numbers for NSAP addresses). It was an extremely telephone-brained protocol, and while it was at least fully packet switched, the old telecom insistence that networking protocols should really be based on circuit switching lingers all over it; e.g. the name of a connection is a "virtual circuit". The standards committee is more or less run by capitalist telecom companies, whereas TCP/IP is mostly controlled by a bunch of random nerds at American universities who call themselves the "Network Working Group" and accept public submissions (and still do to this day! [well, mostly; the IETF stopped saying its RFCs were part of the Network Working Group a decade or so ago, but is otherwise the same]). Sure, there's address exhaustion, but there's also the problem of telephone number exhaustion, which is also a concern nowadays.
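To be fair to the phone numbers, the raw ceilings aren't even close. A quick comparison, using the 15-digit maximum from the E.164 standard (the catch, as with phone numbers in real life, is that hierarchical allocation by country code and national plan wastes most of that theoretical space, which is how exhaustion happens anyway):

```python
# Raw address-space sizes: IPv4 vs E.164 telephone numbers.
# E.164 caps numbers at 15 digits, so the theoretical ceiling is 10**15;
# hierarchical allocation (country codes, fixed-length national plans)
# means the practically usable space is far smaller.
ipv4_space = 2 ** 32
e164_ceiling = 10 ** 15  # 15-digit maximum per E.164

print(f"IPv4:          {ipv4_space:,}")
print(f"E.164 ceiling: {e164_ceiling:,}")
print(f"ratio:         ~{e164_ceiling // ipv4_space:,}x")
```

So the phone-number scheme has over five orders of magnitude more headroom on paper; the exhaustion problems in both systems come from how the space is carved up, not just its size.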
As a final note on some of your other comments:
x86 is sadly already here (1978), or at least the 16-bit version is. The 32-bit x86 was made in 1985, and still haunts us to this day (well, kinda, in part). It's honestly not ungodly for a CISC processor that's been around as long as it has, and since the IBM PC came out in 1981, I expect the world will be stuck with it for compatibility reasons, absent emulators good enough for an ISA switch à la Apple's transitions to PowerPC/ARM. RISC isn't really much of a thing yet anyway, and the best RISC design (the Alpha AXP, fight me) hasn't been made yet. Actually, the only RISC processor out right now is the IBM 801, and IMO none of the CISC processors currently available are substantially better than x86. So yeah, it sucks, but what can you do.
Also cryptography and security... that's gonna be a fun one. Just please avoid the temptation to follow the USA and barf out the "wonderful" idea of "export-grade cryptography". The fucking stuff is still soaked through everything and it's annoying to tear it out of old things.
This, on the other hand, we can stop. All we need for that is to define a public encrypted connection protocol (and throw in an encrypted datagram protocol for good measure) on top of our preferred protocol suite, supply some full-strength, publicly available encryption algorithms to go with it, and the Americans can just import our code to use in their software. Unfortunately, I doubt the US will change its laws until the TTL equivalent of Bernstein v. United States happens, though.
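The export-grade complaint above can be put in numbers. Export rules historically capped symmetric keys at around 40 bits; here's a rough brute-force estimate, assuming a hypothetical attacker testing 10^9 keys per second (an illustrative round number, not a real benchmark):

```python
# Brute-force cost scales as 2**keybits. At an assumed (illustrative)
# 10**9 key trials per second, export-grade keys fall in minutes while
# full-strength keys remain astronomically out of reach.
keys_per_second = 10 ** 9  # hypothetical attacker speed

crack_seconds = {bits: 2 ** bits / keys_per_second for bits in (40, 56, 128)}

for bits, seconds in crack_seconds.items():
    years = seconds / (365.25 * 24 * 3600)
    print(f"{bits:3d}-bit key: ~{seconds:.3e} s (~{years:.3e} years)")
```

At that assumed rate a 40-bit key falls in about 18 minutes, while 128 bits is on the order of 10^29 years, which is the whole point of shipping full-strength algorithms from day one.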