Imagine a world without the internet. How would technology have fared? Only 36 years after its launch, the internet has revolutionized nearly every aspect of human life, from science and technology to research. But the internet itself wasn't born perfect. Since its debut in 1983, it has been steadily evolving into the network we use today.
The internet was born flawed. But if it hadn’t been, it might not have grown into the worldwide phenomenon it’s become.
That’s the take of Vint Cerf, and if anyone would know, it’s him. He’s widely considered to be one of the fathers of the international network and helped officially launch it in 1983.
Cerf’s initial internet design basically didn’t set aside enough room to handle all the devices that would eventually be connected to it. Perhaps even more troubling, he and his collaborators didn’t build into the network a way of securing data that was transmitted over it.
Cerf, who is now a vice president at Google and its chief internet evangelist, said he was only helping to set up the internet as a simple experiment, and he had never imagined it getting as large as it is today.
“I had been working on this for five years,” he said. “I wanted to get it built and tested to see if we could make it work.”
The first internet design had quite a few flaws.
Cerf’s initial internet design had a space problem
The lack of room on the internet has to do with the addressing system Cerf created for it. Every device connected directly to the network must have a unique numerical address. When Cerf launched it, the internet had a 32-bit addressing system, meaning that it could support up to 4.3 billion (2 to the 32nd power) devices. And that seemed plenty when he was designing the system in the 1970s.
That number “was larger than the population of the planet at the time, the human population of the planet,” he said.
But after the internet took off in the 1990s and early 2000s, and more and more computers and other devices were connecting to the network, it became clear that 4.3 billion addresses weren’t going to be nearly enough. Cerf and other internet experts realized relatively early that they needed to update the internet protocols to make room for the flood of new devices connecting to the network.
So, in the mid-1990s, the Internet Engineering Task Force started to develop Internet Protocol version 6, or IPv6, as an update to the software underlying the network. A key feature of IPv6 is its 128-bit addressing system, which provides room for 2 to the 128th power unique addresses. But it’s taken years for companies and other organizations to buy into, test, and roll out IPv6. The standard wasn’t officially launched until 2012. And even today, Google estimates that only a little over a quarter of users globally have an IPv6 address. Even the United States only has about a 35% adoption rate, says Google.
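The address-space figures above are easy to verify with Python's standard `ipaddress` module. This is just an illustrative check; the two addresses used are the standard documentation examples, not real hosts.

```python
import ipaddress

# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4 = ipaddress.ip_address("192.0.2.1")    # IPv4 documentation address
ipv6 = ipaddress.ip_address("2001:db8::1")  # IPv6 documentation address

print(ipv4.max_prefixlen)  # 32
print(ipv6.max_prefixlen)  # 128
print(f"{2 ** 32:,}")      # 4,294,967,296 — the ~4.3 billion IPv4 ceiling
print(f"{2 ** 128:,}")     # roughly 3.4 x 10^38 possible IPv6 addresses
```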
“Now that we see the need for 128-bit addresses in IPv6, I wish I had understood that earlier, if only to avoid the slow pain of getting IPv6 implemented,” Cerf said.
But hindsight is 20-20, and he acknowledges that it’s highly unlikely that he could have pushed through a 128-bit addressing system at the time, because it would have seemed like overkill.
“I don’t think … it would have passed the red-face test,” Cerf said. He continued: “To assert that you need 2 to the 128th [power] addresses in order to do this network experiment would have been laughable.”
Internet security was skipped in the initial experiment
Security was also something Cerf skipped for his experiment. Transmissions were generally sent “in the clear,” meaning they could potentially be read by anyone who intercepted them. And the network didn’t have built-in ways of verifying that a user or device was who or what it attested to be.
Even today, some data is still transmitted in the clear, a vulnerability that has been exploited by hackers. And authentication of users remains a big problem. The passwords that consumers use to log into various web sites and services have been widely compromised, giving malicious actors access to plenty of sensitive data.
One of the most widely used security methods on the internet was actually developed around the time that Cerf was putting together the protocols underlying the network. The concept for what’s called public-key encryption technology was first publicly described in a paper published in 1976. The RSA algorithm, one of the first public-key cryptographic systems, was developed the following year.
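To give a sense of the public-key idea behind RSA, here is a textbook-sized round trip in Python. The tiny primes and the message value are purely illustrative; real RSA keys use primes hundreds of digits long.

```python
# Toy RSA demonstration — numbers this small are illustrative only, never secure.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(recovered)  # 42 — decryption recovers the original message
```

The key point is that anyone can encrypt with the public pair (e, n), but only the holder of d can decrypt.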
But at the time, Cerf was deep in the effort to finalize the internet protocols so that, after years of development, he could launch the system. He needed to get them ported to multiple operating systems and to set a deadline for operators of the internet’s predecessor networks to switch over to the new protocols.
“It would not have aided my sense of urgency to have to … have to stop for a minute and integrate the public-key crypto into the system,” he said. “And so we didn’t.”
Internet usage grew fast partly because of the lack of security
Even with the benefit of hindsight, Cerf doesn’t think it would have been a good idea to build security into the internet when it launched. Most of the early users of the network were college students, and they weren’t likely to be very “disciplined” when it came to remembering and maintaining their password keys, he said. Many could easily have found themselves locked out of it.
“Looking back on it, I don’t know whether it would have worked out to try to incorporate … this key-distribution system,” he said, continuing: “We might not have been able to get much traction to adopt and use the network, because it would have been too difficult.”
The security situation on the internet ended up being somewhat easier to address than its lack of space, Cerf said. It was relatively easy to add public-key cryptography to the internet later through various services and features, and several are now widely used. For example, the protocol that web sites use to secure the transmission of web pages — HyperText Transfer Protocol Secure, or HTTPS — relies on a public-key cryptographic system.
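That layering is visible in practice in Python's standard `ssl` module: a default client context requires the server to present a public-key certificate and verifies it before any application data flows. This is a minimal sketch of the client-side defaults, not of TLS itself.

```python
import ssl

# The default client-side context enables certificate verification and
# hostname checking, both of which rest on the server's public-key certificate.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```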
Other types of security features have also been bolted on after the fact, he noted, such as two-factor authentication systems, which typically require users to enter a randomly generated code in addition to their password when logging into certain sites.
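The randomly generated codes those systems produce typically follow the TOTP scheme (RFC 6238): an HMAC computed over a counter derived from the clock, truncated to a few digits. A minimal sketch in Python follows; the secret is the RFC's published test key, not a real credential.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, period=30, digits=6, now=None):
    """RFC 6238-style one-time code: HMAC-SHA1 over the current time step."""
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# At t=59s with the RFC 6238 test secret, the 6-digit code is 287082.
print(totp(b"12345678901234567890", now=59))  # 287082
```

Because the server and the user's device share the secret and the clock, both can compute the same short-lived code independently.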