
The birth of the internet …

No-one knows exactly but it’s a pretty fair guess that at any moment several hundred million people are communicating with each other, working, sharing, living big chunks of their lives via internet conferencing platforms.  Doctors diagnose, businesses operate, concerts and theatre happen, parliaments sit – all in virtual space.  But where and how did this technology emerge?

To answer that we have to go back quite a way.  Communication has always been important and plenty of inventors have worked on the problem, from the early days of semaphore and heliographs, which could outpace even the fastest horses, through to telegraph, telephone, radio and the other wonders of the 20th century.  Some big names were involved as well – Edison, Tesla, Marconi, Bell, all chasing down the elusive possibility of widespread fast communication over a distance.

One group of people with a particular interest has always been the military – knowing what’s going on on the battlefield and in the political circles around it is as crucial as swords or guns.  But in the 1960s during the Cold War there was an urgent concern – nuclear weapons raised the possibility of whole communication systems being wiped out.  What was needed was some form of decentralized communication network – and so funds began to move in the direction of finding out how.

At the same time the burgeoning world of computing was drawing in a variety of people and swirling them together in what would become an important soup of ideas.  One example was Bob Taylor, a restless psychology student who dabbled in psychoacoustics but who had a passion for what the new technology might do – a passion he shared with Joseph Licklider, whom he met in 1962 after reading Licklider’s essay ‘Man-Computer Symbiosis’, in which he talked about new ways of working with computers.  They began talking and developing a vision, one which in 1965 began to take more tangible shape as Licklider persuaded Taylor to join him at IPTO.

IPTO?  The Information Processing Techniques Office was part of the huge Defense Advanced Research Projects Agency (DARPA) which was looking at the decentralized communication problem, specifically ways of linking the major defence computers of the Pentagon, the SAC HQ and the secretive complex buried under Cheyenne Mountain housing NORAD.  IPTO gave the two men the opportunity (and the funding, diverted from a ballistic missile project) to explore the idea of an ARPANet linking different computers at different locations.  (Their nickname for the loose community of researchers engaged around the project was an indicator of their underlying ambition – the Intergalactic Computer Network!)

Two key ideas emerged during the early stages of the project; in 1966 one of their team, Wesley Clark, suggested that they use a dedicated computer – an Interface Message Processor, to give it its grand title – at each node of the network instead of one large centralised controller.  And then in 1967 they went to a conference on new computer techniques…

The postman always rings twice – or more

The problem of decentralised communications isn’t just one of making sure enough computers survive an attack and can link up with each other.  There’s also the challenge of making sure whatever messages they send arrive safely.  One idea being explored simultaneously on both sides of the Atlantic was to break the messages themselves down into small chunks, transmit them along different routes across a computer network and then reassemble them at their destination.  And it was this idea that so excited the ARPANet team, drawing in the ideas of Paul Baran (working for the RAND Corporation in the US) and Donald Davies of the UK National Physical Laboratory (who had developed a local area network based on what he called ‘packet switching’).  A third player, Leonard Kleinrock, contributed the underlying mathematical models which enabled the theory to be put into practice.

The basic idea is like a postal network – one in which the postman doesn’t always take the same route and often rings many times.  The message you want to send is broken down into small chunks – packets – each of which is given a destination address and some other identifying information and then sent via different routes before being reassembled at that destination address.  A message goes back the other way confirming receipt; if that doesn’t happen the sender repeats the transmission.
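To make the postal analogy a little more concrete, here is a toy sketch in Python – not any real protocol, and all the names, sizes and loss rates are invented for illustration.  The message is chopped into addressed, numbered packets; the simulated network loses and reorders them; and the sender keeps resending whatever the receiver reports as missing.

```python
import random

PACKET_SIZE = 8   # bytes of payload per packet - absurdly small, just to show the mechanics

def split_into_packets(message: bytes, destination: str):
    """Break a message into numbered packets, each carrying the destination address."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"dest": destination, "seq": n, "data": chunk} for n, chunk in enumerate(chunks)]

def send_over_network(packets):
    """Simulate the network: packets take different routes, arrive out of order, some get lost."""
    arrived = [p for p in packets if random.random() > 0.1]   # roughly 10% go missing
    random.shuffle(arrived)                                   # different routes, different arrival order
    return arrived

def reassemble(packets, expected_count):
    """Put whatever has arrived back in order and report which sequence numbers are still missing."""
    received = {p["seq"]: p["data"] for p in packets}
    missing = [n for n in range(expected_count) if n not in received]
    return b"".join(received[n] for n in sorted(received)), missing

# The sender keeps retransmitting until the receiver confirms nothing is missing.
message = b"Doctors diagnose, businesses operate, concerts happen."
packets = split_into_packets(message, destination="host-B")
delivered = []
missing = list(range(len(packets)))
while missing:
    delivered += send_over_network([p for p in packets if p["seq"] in missing])
    _, missing = reassemble(delivered, len(packets))

reassembled, _ = reassemble(delivered, len(packets))
assert reassembled == message
```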

The birth of the Internet

On October 29, 1969, ARPANet delivered its first message: a “node-to-node” communication from one computer to another.  (The first computer was located in a research lab at UCLA and the second at the Stanford Research Institute; each one was the size of a small house.)  The message – “LOGIN” – was short and simple; Leonard Kleinrock described in a later interview what happened next.

‘We typed the L and we asked on the phone,

“Do you see the L?”

“Yes, we see the L,” came the response.

We typed the O, and we asked, “Do you see the O?”

“Yes, we see the O.”

Then we typed the G, and the system crashed …’

Some things never change – just when you get to the important bit your system goes down!

But the demonstration proved the point; within three weeks there was a permanent computer link between the two sites, and a month after that a four-node network linking three sites in California with a fourth in Utah.

If October 1969 was the birthday of the internet it had a late christening, one which came five years later buried in a technical document called an RFC – Request for Comments.  It described for the first time an internetworking (shortened to internet) of computers linked by a common protocol.  Sounds pretty dry and technical, but it was pretty important, and not just because RFC 675 (dated December 1974) contributed a catchy name.  It also highlighted a growing problem of traffic control.

Not surprisingly once the ARPANet team had proved the system could work they began using it extensively.  And pretty soon there was so much data flowing that there was a need to organize it.  Packet switching is fine until the mail system starts to struggle with the sheer volume of packages and the many different addresses to which and through which they are to be sent; without some form of traffic control the whole thing risks seizing up.  Enter TCP/IP – initials you’ve almost certainly seen but probably haven’t a clue about. 

They are the basis for a universally accepted addressing system developed by Bob Kahn and Vint Cerf and described in the RFC.  It’s an elegant idea – basically the IP (Internet Protocol) part handles getting packets to the right destination address, while TCP (Transmission Control Protocol) guarantees that the data actually arrives there, complete and in order.  It took a couple more years to work out the details but on November 22nd, 1977 an important event took place in the back of a repurposed delivery van driving round the streets of San Francisco.  The van had been refitted with some expensive radio communications equipment which enabled it to send a message from California to Boston, on to Norway and Great Britain, and then back to California by way of a small town in West Virginia.  Importantly it was sent via three different networks – ARPANet, a packet radio network and a satellite network.
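That division of labour is still visible every time a program opens a network connection.  Here is a minimal sketch using Python’s standard socket module (the host name and port are just placeholders, nothing from the story above): IP worries about which machine the bytes should reach, TCP worries about getting them all there, in order.

```python
import socket

# IP's job: identify the destination machine.  AF_INET means "use IPv4 addresses",
# and getaddrinfo turns a human-readable host name into one.
host, port = "example.org", 80    # placeholder destination, purely for illustration
addr = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)[0][4]

# TCP's job: turn individually routed, possibly reordered packets into a reliable
# byte stream.  Asking for SOCK_STREAM means the operating system handles the
# connection set-up, acknowledgements and retransmissions for us.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(addr)                                            # handshake with the destination
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")  # send a simple request
    reply = s.recv(4096)                                       # bytes arrive complete and in order
    print(reply.decode(errors="replace")[:120])
```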

The global Internet had arrived.

From the how to the what and who

At this point attention shifts from the how of communication to the what – and in particular to the communities who needed to share information.  There was an explosion of interest amongst universities and research centres, sharing on an international scale.  Their students were not to be left out; in addition to exploiting the growing internet possibilities for their studies they also developed Usenet, linking a huge range of communities of shared interest.  And the business market began to see the significant possibilities, not least through using the X.25 networks which had emerged as much of the DARPA work was declassified.

And in 1989 one particular research network, CERN – which drew together international scientists working on nuclear physics under a mountain in Switzerland – began experimenting with connecting internal and external networks.  Tim Berners-Lee developed ideas which brought Licklider’s global library concept to life, using emerging principles of hypertext links to create an information system accessible from any node on the network.  He developed the first web server, and the first web browser, called WorldWideWeb and later renamed Nexus.
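The core idea – documents anywhere on the network pointing at each other simply by address – can be sketched in a few lines of Python.  This is only an illustration: the page contents and the second address are invented (though info.cern.ch really was the first web server).

```python
import re

# A toy 'web': documents live on different notional hosts, and each can link to
# any other simply by naming its address - no central index required.
pages = {
    "info.cern.ch/hypertext/WWW": "An overview of the project; data lives at "
                                  "<a href='nodeA.example/results'>the results page</a>.",
    "nodeA.example/results":      "Experimental results; background at "
                                  "<a href='info.cern.ch/hypertext/WWW'>the overview</a>.",
}

def links_from(address: str):
    """Return the addresses a page links out to, wherever on the network they point."""
    return re.findall(r"href='([^']+)'", pages[address])

print(links_from("info.cern.ch/hypertext/WWW"))   # -> ['nodeA.example/results']
```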

There’s decentralisation – and there’s decentralisation…

There’s one difficulty with decentralizing things – you tend to lose control.  When ARPANet was a defence project it was fairly straightforward keeping the lid on it.  But with the explosion of networks which became the Internet and the proliferation of resources and traffic flowing across it in a world-wide-web, it pretty soon took on a life of its own.  And one place where that happened big-time was the slightly shadowy world of ‘warez’.

This community had a long-standing interest in sharing resources of various kinds, not always fully respecting legal frameworks like copyright and IP law.  For them the emergence of the internet opened the floodgates of innovation, with opportunities and challenges for hacking. 

The make-up of this community was also changing – with the advent of the mp3 the possibilities in sharing music became apparent to millions of people.  Pretty soon file sharing platforms like Napster were springing up everywhere to enable this – with the inevitable response on the part of the mainstream music industry to try and close them down again.  But the more they tried to put the lid on the pot the more innovation spilled out in new directions.

In particular the idea of peer-to-peer (P2P) networking was born, where people shared resources such as processing power, disk storage or network bandwidth without the need for central coordination by servers or stable hosts.  Around the turn of the century successive generations of peer-to-peer players offered variations on the core idea, in a pattern reminiscent of a passage from the Old Testament where ‘Napster begat Gnutella begat LimeWire begat Kazaa begat Morpheus begat ……’

The pace of innovation came to resemble a high-speed car chase, with ingenious approaches to staying one step ahead of the law.  And by this time it wasn’t just music; there was a growing movement towards video which in turn drove innovation in another direction.

Video files are big – where mp3 had managed to get sound files down to manageable proportions, video represented a huge challenge which couldn’t easily be solved through compression.  Instead in 2001 Bram Cohen revisited an old idea.  He’d been working on a project called MojoNation, designed to help people exchange confidential files securely by breaking them up into encrypted chunks and distributing those pieces across computers also running the software.  (Not a million miles from the original packet switching concept.)  If someone wanted to download a copy of this encrypted file, they would have to download it simultaneously from many computers.  This had a big advantage over other file sharing programs like Kazaa, which took a long time to download large files because they would typically come from only one source – a single peer in the P2P network.

The idea was refined into the BitTorrent protocol, which was able to download files from many different sources, massively speeding up the download time.  The real advantage came because it had a built-in accelerator – the more popular a file was, the faster users could download it, since there were more computers involved from which other users could also download.  It was a runaway success even by the fast-diffusion standards of the P2P world; within a year it represented close to 70% of total internet traffic and within ten years it accounted for close to 200 million users with 30 million concurrently active at any one time.  The genie was well and truly out of the bottle.
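A toy illustration of the chunking idea, in Python – nothing like BitTorrent’s actual wire protocol, and the peers and file here are made up.  Each chunk is fetched from whichever peer happens to hold it and checked against an expected hash, so the pieces can come from many sources at once and still be trusted.

```python
import hashlib

CHUNK_SIZE = 4   # absurdly small, just to show the mechanics

def make_chunks(data: bytes):
    """Split a file into chunks and record a hash of each, rather like a torrent's piece list."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return chunks, [hashlib.sha1(c).hexdigest() for c in chunks]

file_data = b"a very small stand-in for a very large video file"
chunks, piece_hashes = make_chunks(file_data)

# Hypothetical swarm: most peers hold only some of the chunks; one 'seed' has them all.
peers = {
    "peer-a": {0: chunks[0], 3: chunks[3], 5: chunks[5]},
    "peer-b": {1: chunks[1], 2: chunks[2], 4: chunks[4]},
    "peer-c": dict(enumerate(chunks)),
}

def download(piece_hashes, peers):
    """Fetch each chunk from whichever peer has it, verifying it against the expected hash."""
    assembled = {}
    for index, expected in enumerate(piece_hashes):
        for holdings in peers.values():
            piece = holdings.get(index)
            if piece is not None and hashlib.sha1(piece).hexdigest() == expected:
                assembled[index] = piece
                break
    return b"".join(assembled[i] for i in range(len(piece_hashes)))

assert download(piece_hashes, peers) == file_data
```

The ‘accelerator’ effect falls out naturally from this arrangement: the more peers that hold copies of a chunk, the more places a downloader can fetch it from at the same time.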

Pirates to P2P pioneers

Gradually the music and media industries began adopting an alternative approach, based on finding new business models which allowed legal use of media across the internet.  Apple pioneered much of this with their iTunes platform, bringing in the complementary assets around the music publishing and recording industries; Spotify took the model further by moving to a rental rather than ownership approach.  But in both cases the strategy was based on improving quality and accessibility; Netflix and others followed a similar pathway for films with their proprietary alternatives to BitTorrent.  But importantly all of these drew heavily on innovations originally developed by the pirate communities.

Of course pirates face a choice – they can continue to raid at the edges of an increasingly precarious ocean.  Or they can put the proceeds of their adventures into a new venture – something which Niklas Zennström, from Sweden, and Janus Friis, from Denmark, did after they had sold their stakes in the P2P site Kazaa.  Together with Estonian developers Ahti Heinla, Priit Kasesalu and Jaan Tallinn, they took their learning about distributed P2P as a mechanism for carrying high volumes of data and began exploring the new possibilities in voice transmission – effectively internet telephony.  They founded Sky Peer to Peer in 2003, giving it the catchier name of Skype and quickly adding video calling to the range of capabilities in the network.  By the time they sold it to eBay it was worth $2.6bn, and Microsoft later bought it for $8.5bn.  But of course its biggest impact was in creating the template for what has become such a hugely important industry in these pandemic times – online conferencing.

We’ve seen this pattern before in the world of communications.  At the tail end of the 19th century there was an explosion of technological change and market expansion around the railways, one that also involved a mixture of the formal and informal, legal and not-so-legal.  And one which was driven by pioneering entrepreneurs.  The context might change but the innovation game remains remarkably stable.
