Transcript: Louis Pouzin

Computer Communications: The First 50 Years, and After!
Louis Pouzin, Open-Root Project
ENOG 11 / RIPE NCC Regional Meeting: 7-8 June 2016

Good afternoon everybody. Who was born in 1980 or later? Raise your hand. Not that young after all. Because if you were born after 1980 it would mean you have missed the first twenty-one years of networking. And of course the longer we live, the more history we lose. But we shouldn’t forget.

I was just very lucky to be born at about the beginning of the computing era in general and networking in particular. I started networking in 1971 when I was asked by the French Government to build a network like ARPANET. You all know ARPANET, do you?

ARPANET was the first American packet-switching network designed as a research project, but it was not actually the first packet network. Its development started in 1968 and produced a lot of publications, but that didn’t make it the first one. At the same time there were three other networks being developed, all using packet switching.

One of them was TYMNET, a subsidiary of TYMSHARE, a time-sharing company. TYMSHARE was smart enough to sell their network service to their competitors. The competitors had the advantage of not having to build their own network. On the other hand, TYMNET could make money out of each competitor and enlarge its network. And finally, after five or ten years, it became the major data communication system used commercially all over the world, before the telecom companies had time to put up their own.

There was also the network of SITA (Société Internationale de Télécommunications Aéronautiques), a company for aeronautic communications, which was not used in the US, because they had their own, but which was used by practically all other airline companies around the world. At the beginning it was used mainly for online passenger reservation systems.

And if I don’t want to be unjust, the first packet communication system I heard of was earlier than that. It was built by the Compañía Telefónica Nacional de España, the national telecom company in Spain. They developed that network for carrying banking transactions between the Spanish banks. This one is mostly ignored by everybody. Furthermore the Spanish are usually very discreet about what they are doing, so there were very few publications on it.

And after that of course you had more networks. One was CYCLADES in France, which started, as I said, in 1971. ARPANET had not been demonstrated yet. I was happy enough to be in Washington in 1972 when they gave their first demonstration. It was quite impressive. They had about fifty terminals connected to about twenty different data centers. We called them “hosts” at that time. They had set that up just the night before, and it was working. I mean the components had been prepared earlier, but it was the first time they were hooked together in a place where everybody could use them. You had to know a lot of specific commands to make it work. But it was impressive.

After that everybody was aware that this was a new era. Making computers talk together was really innovative. Before that a computer was a single unit, sort of a fortress, accessible through terminals and a variety of communication systems including the telephone system. There were also high-speed networks for companies like newspapers, and there were local systems. Ethernet had not been invented yet. So there were very different kinds of communications, but everything was incompatible. You could not connect to another system using your terminal or a language that you knew. Everything was specific, proprietary.

The idea of building a computer network in ARPANET was to make computers compatible, not in every detail but enough to allow people to use their usual terminal to connect to any other computer that happened to be available in an open communication system. The idea was to save money, though that was probably merely the official reason; what they really wanted to do was create technology. They wanted to make technology progress because telephone lines were quite expensive and the bandwidth was not satisfactory. It was not digital yet, there was of course noise, systems were incompatible with each other, and furthermore lines were only used when there was something to transmit. There were a lot of idle minutes or seconds or milliseconds on the lines. By multiplexing packets you could considerably raise the efficiency of the communication lines. Of course, to convince the people who provide the budget you have to demonstrate some kind of economy. There were a lot of big computers in many US universities. That cost money, but everybody sleeps something like six to eight hours a day. It meant that when the East Coast was using computers the West Coast was sleeping and vice versa. And the rest of the world is also sleeping at different times. So by making computers available from everywhere at all times they could increase the amount of usable computer time.

Besides, a similar rationale was also used in France to convince the people who put up the money: the military, government administrations and occasionally some private companies. They wanted to share databases, not merge them, but allow administrations to use other administrations’ databases, because it was fashionable at that time to build databases. Even though there was no well-defined usage, it seemed a good idea to build them. They thought that by sharing databases they would of course save a lot of information processing and also make the data more consistent. It didn’t happen. All administrations are extremely jealous of keeping their own data. And even now it’s not really possible to share databases routinely in the French administration. They need a legal authorization, and the interpretation of the data is tricky, as their meaning may be defined differently depending on various laws and administrative responsibilities.

So, what did I do? First, I recruited a top-notch team of people. That was the first, let’s say, condition. The network we had to build initially connected Paris, because it’s the biggest place, and Grenoble, as you see, in the South East, Toulouse in the South, and Rennes in Brittany in the West, using telephone lines at 9.6 kbit/s and most typically 4.8 kbit/s, really really slow.

Then, what equipment for the communication infrastructure? Mini-computers, like in ARPANET. As we had a French company called CII (Compagnie Internationale pour l’Informatique), strongly promoted by the French government, it was politically wise to use their product. So we selected the MITRA 15 mini-computer for terminal concentrators and routers (aka packet switches). The host computers were already part of the existing equipment located on the users’ premises. They were definitely heterogeneous: we had CII computers, IBM, Philips, Siemens, Telemecanique, and perhaps some more storage equipment. Of course everything was incompatible, everything was proprietary, so we had to develop some way to make them communicate.

Again the idea was to have something similar to ARPANET. The first thing I did when I was put in charge of the project, besides recruiting a team, was to go and visit a number of places in the US where ARPANET and other networks were being developed. I had been lucky because ten years earlier I had spent two and a half years at MIT (Massachusetts Institute of Technology) developing CTSS, the first large time-sharing system. I knew a number of people in the US who were very happy that I visited their places. I collected a lot of ideas, good or bad. It was easy to see what was good and also easy to see what was not that good. So the principle was to have on every host a sort of interfacing software that we dubbed the “Transport Station” (as you know, that is now called layers 2 and 3 in an OSI system). Communication was to be operated by a packet net, called CIGALE, which would route datagrams between transport stations.

Then we started developing that architecture. It took us roughly the whole of 1972 to define the protocols, and in the Fall of 1973 we were able to stage a demo, a very limited demo. We had a CII computer in the Paris area (actually at IRIA, Institut de Recherche en Informatique et Automatique) sending batch jobs to an IBM computer in Grenoble and getting the results back on a printer at IRIA. It was quite impressive for the ministries attending the show. Then we continued developing software for another year.

Of course the magic of it was not only using computers attached to the CIGALE net: we could also attach local nets of various kinds, whether they were just terminal multiplexers, rings or Ethernet. Ethernet was not yet available, but there were already similar prototypes available for experiment. Of course our end-to-end protocol (TS) was not TCP/IP, which appeared only in 1983. In 1973 the ARPANET protocol was NCP. It sent one message at a time and waited for an acknowledgement. I thought it was inadequate. We had already developed our own protocol based on datagrams.

For handling I/O (input/output) between their packet net and host computers, ARPANET had adopted the option of imposing a specific hardware interface, which user sites were supposed to build. This idea of introducing specific hardware in every computer couldn’t work in France. Everybody, and especially computer manufacturers, would have considered that interfering with their computer hardware was much too risky to be acceptable. So we used essentially the standard products of the various manufacturers. They all had some kind of I/O package for handling telephone lines. We used that, just adding a layer of software to make the I/O interface look the same on every computer. There was no need to re-invent the wheel for that purpose.
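
As an illustration of that software layer, here is a minimal sketch in C. It is not the actual CYCLADES code; the vendor driver and all names are invented, but it shows the shape of one uniform I/O interface laid over each manufacturer’s own line package.

    /* All names here are invented; this is not the actual CYCLADES code. */
    #include <stddef.h>
    #include <stdio.h>

    /* The uniform line-I/O interface that the host software sees everywhere. */
    struct line_io {
        int (*send)(const unsigned char *buf, size_t len);
        int (*recv)(unsigned char *buf, size_t maxlen);
    };

    /* One hypothetical manufacturer back-end, wrapping that manufacturer's
       existing telephone-line I/O package (represented here by stubs). */
    static int vendor_a_send(const unsigned char *buf, size_t len) {
        (void)buf;                              /* the real driver call would go here */
        printf("vendor A driver: sending %zu bytes\n", len);
        return 0;
    }

    static int vendor_a_recv(unsigned char *buf, size_t maxlen) {
        (void)buf; (void)maxlen;                /* nothing received in this stub */
        return 0;
    }

    static const struct line_io vendor_a_io = { vendor_a_send, vendor_a_recv };

    int main(void) {
        const unsigned char packet[] = "hello";
        /* host code only ever uses the uniform interface */
        vendor_a_io.send(packet, sizeof packet);
        return 0;
    }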

Our protocol was apparently the first of its kind at that time. Some elements were inspired by HDLC, an ISO standard already approved for point-to-point and multi-point lines, but its design needed complete remodeling for the network environment: longer transit delays, full duplex, larger address space, non-sequential transmission, flow control, error control, multiple I/O channels.

By the way, this protocol was later on adopted by ARPA within INTERNET, but with slight modifications which didn’t bring any measurable improvement.

The packet switching.

When I visited the US in 1971 they had already built ARPANET and they were always talking about their marvelous system that had never lost a packet. Thereafter, when observing what they were doing, I decided that we were not going to take their software. They were very unhappy about that. They thought they could sell their machines to everybody. A second thing which couldn’t be acceptable: their design was based on a single network which everybody had to connect to. So imagine the whole world, including Russia, connecting to an American system. That was unacceptable. So the concept we adopted was very simple: something that can carry packets, period. It could be networks attached to each other, any number of networks hooked together, carrying a single format of packets, quite simplistic, no error control. Well, there was of course error control on every hop due to the line transmission procedures, but there was no error control between the entrance and the exit of the network. A lot of people were absolutely upset about that. They thought it could not work. We had a little bit of congestion control, as you will see. But we said: we will not fragment packets, we will not try to do any sequencing. Packets arrive whenever they want. There are no virtual circuits at all. Simplistic, like postcards. Everything else which will be needed will be done at the host level. You can look at it as just a black box carrying packets. End-to-end control is done at the host level, or at the terminal level if the terminal is sufficiently intelligent, or at the level of a terminal concentrator.
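
To make the “postcard” idea concrete, here is a minimal sketch in C of what such a datagram might look like. The field layout is purely illustrative, not the real CIGALE packet format; the point is what is absent: no sequence number, no connection identifier, no end-to-end error control inside the net.

    /* Field layout is purely illustrative, not the real CIGALE format. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_DATA 128                /* illustrative payload limit */

    struct datagram {
        uint16_t dst;                   /* destination address     */
        uint16_t src;                   /* source address          */
        uint8_t  len;                   /* payload length in bytes */
        uint8_t  data[MAX_DATA];        /* payload, carried as-is  */
        /* Deliberately absent: sequence number, connection identifier,
           end-to-end checksum. If ordering or retransmission is wanted,
           the hosts provide it, not the net. */
    };

    int main(void) {
        struct datagram d = { .dst = 42, .src = 7 };
        const char *msg = "like a postcard";
        d.len = (uint8_t)strlen(msg);
        memcpy(d.data, msg, d.len);
        printf("datagram %u -> %u, %u bytes\n",
               (unsigned)d.src, (unsigned)d.dst, (unsigned)d.len);
        return 0;
    }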

We did not cook up the word “datagram” for this communication scheme. It was proposed later, in 1976, by a telecom engineer from the Norwegian PTT, Halvor Bothner-By. In ITU parlance the official term is “connectionless communication”.

The addressing system in CIGALE was rather flexible. Of course we had normal addresses for packets going out of the net, but they could be any other network’s addresses. We did not make our system proprietary. We wanted it to be able to talk to any existing system, including ARPANET, if they wanted to. And then we had CIGALE, our packet net. Within it there could be general addresses, or regional addresses, more efficient for routing. Plus the various functions within the net software were also addressable in the same way, with packets. So we could send packets to our internal machinery. No specific software, no specific addressing system to reach the internal guts of the packet net.
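
Here is a small illustrative sketch in C of that single addressing idea. The 16-bit format and the address ranges are invented for the example; the point is that one address format can designate a host on another network, a region inside the packet net, or an internal function of the net software.

    /* The 16-bit format and the ranges below are invented for the example. */
    #include <stdint.h>
    #include <stdio.h>

    #define INTERNAL_BASE 0xFF00u       /* top addresses: the net's own internal services */

    static void dispatch(uint16_t addr) {
        if (addr >= INTERNAL_BASE) {
            /* packet addressed to the net's internal machinery
               (statistics, echo, supervision, ...) */
            printf("addr %04x: deliver to an internal net function\n", (unsigned)addr);
        } else if ((addr >> 8) == 0) {
            /* low block reserved here for hosts reachable through another network */
            printf("addr %04x: hand over to a gateway toward another network\n", (unsigned)addr);
        } else {
            /* otherwise route on the regional part (high byte) */
            printf("addr %04x: route toward region %u\n", (unsigned)addr, (unsigned)(addr >> 8));
        }
    }

    int main(void) {
        dispatch(0x1207);   /* ordinary host in region 0x12       */
        dispatch(0x0005);   /* host on a foreign network          */
        dispatch(0xFF02);   /* internal service of the packet net */
        return 0;
    }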

We also invented a kind of network congestion control. That’s always a tricky thing. In ARPANET they had virtual channels carrying one message at a time. In the INTERNET they relied on end-to-end control at the TCP level. It works, but it’s somehow unpredictable at times. So we thought that a simpler and more pragmatic way would be adequate: just putting a packet length counter, in other words a throughput limit, on every line, and as soon as we reach a certain percentage which seems to be critical, like 70 to 75 percent, we send an alarm message to the sender of the packet, which means practically: if you insist you will lose packets. It worked pretty well. We made simulations of the net. At the beginning, when you do not send much traffic, the output is totally linear [on slides]. Anything that gets in gets out. But then you reach the point where some lines are beginning to get overloaded, I mean too much traffic, so we start losing packets. It means the output is getting slightly lower than the input. And at some point it stops carrying more packets. In other words, the network is totally saturated by the input lines, and it cannot carry anything more than that. It stabilizes in a fully loaded situation which is not blocking traffic. It keeps working, but it keeps losing packets as well, if senders insist.
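
A rough sketch in C of that mechanism, as described above: each line keeps a throughput counter, and once the load crosses a critical fraction (70 to 75 percent in the talk) an alarm goes back to the packet’s sender. The numbers and data structures are illustrative only, not the actual CIGALE implementation.

    /* All names and numbers below are illustrative. */
    #include <stdio.h>

    #define LINE_CAPACITY   4800    /* bits per accounting interval, e.g. a 4.8 kbit/s line */
    #define ALARM_THRESHOLD 0.75    /* critical fraction of line capacity */

    struct line {
        long bits_this_interval;    /* traffic counted so far; reset periodically by a clock */
    };

    /* Account for one packet queued on the line; return 1 if a congestion
       alarm ("if you insist you will lose packets") should go back to the
       sender of the packet. */
    static int account_packet(struct line *l, int packet_bits) {
        l->bits_this_interval += packet_bits;
        return (double)l->bits_this_interval / LINE_CAPACITY >= ALARM_THRESHOLD;
    }

    int main(void) {
        struct line l = { 0 };
        for (int i = 1; i <= 6; i++) {
            int alarm = account_packet(&l, 800);     /* 800-bit packets */
            printf("packet %d: %ld bits on the line%s\n", i, l.bits_this_interval,
                   alarm ? " -> send congestion alarm to the sender" : "");
        }
        return 0;
    }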

So the whole project stopped later on, when France changed President. The new government changed the whole policy on computing. They stopped many things which had been started. For example, a consortium had been set up between various European manufacturers, CII, Plessey and Philips, and Olivetti was supposed to become part of it. They abandoned that project. They also abandoned CYCLADES. And finally the Americans took the lead. X.25 was adopted in Europe by the telephone operators, and therefore there was no more way to keep going with datagrams, because the PTTs had won the decision to make an international standard, called X.25, based on virtual circuits. So all development on packet nets was stopped in France. And that’s why the Americans were very happy to be left alone developing the INTERNET and all kinds of utilities between 1974 and 1990, when we got the Web. The Web was the second European contribution to the INTERNET, invented at CERN (Conseil Européen pour la Recherche Nucléaire), and it completely changed the network, because before that you had teletypes or IBM 3270 terminals, and a very primitive command line interface, not ergonomic at all.

So we were in some way left in the dark, in a desert. Then we did other things with my team. The team was also dismantled; we had to move to a different place or to change activity. Somehow I maintained different activities as various pilot projects. At some point, in 1989, I was hired by another agency of the French PTTs to teach courses in a business school in the south of France. A few years later I retired, and I became involved in the politics of the United Nations Summits about networking, about the essential effects on society of developing networking systems. That was called WSIS in English (World Summit on the Information Society). So I had a lot of conferences and meetings with diplomats and so on around the world. They knew nothing about the INTERNET, but they were pretty smart people. We discussed with them and they understood quite quickly that the INTERNET as it was pushed by the Americans was not really satisfactory for them. First, it was in pure ASCII English, and that’s no good for the eighty percent of the world using scripts that are not strictly ASCII. Second, all the power was in the United States, since the Europeans had given up and the other countries had not yet started. So that was not very pleasant for them. And finally, there were disputes about who is in charge of spam, who is in charge of content control and so on. There was no agreement on that at all. So it ended up with a sort of diplomatic agreement whereby every country could develop things on their own, but the United States kept the ability to determine what the future of the INTERNET should be. And that was ICANN (Internet Corporation for Assigned Names and Numbers). ICANN, as you know, was created in 1998. The UN Summits preparation started in 2001 and they were held in 2003 and 2005. From that time on there has been a continuous guerrilla war between the US government, represented by ICANN, which stands for all kinds of lobbies, and on the other hand the rest of the world, which was not necessarily very unified but which was not satisfied with the situation.

Then what I did after that was to create another company called Savoir-Faire. Actually that is the legal name; the commercial name is Open-Root. What we do there is diversify. In other words, the domain name system created by ICANN has a lot of restrictions, as you know. You are in the business, so I won’t tell you what’s good and what’s bad; you know better than I do. But my personal idea was that it should be more diversified. It should be open to other sources of power, other scripts, other company policies and so on, and create competition. Because at the moment there is no competition. Practically, competition only happens at the registrar level, not at the operator level. And there should be more variety in the ways of handling customers, handling tariffs and so on.

So that’s what we did. We introduced a new business model for marketing domain names. It does not require any new software or any new regulation. We use a variety of roots: we have our own root, we use the ICANN root, the Arabic root, the Chinese root, and some private roots.

Once again we can introduce customization. If some companies don’t want to be accessible by some kind of packets or some kind of information they may decide that it can be filtered. That’s their choice. This is not supposed to be a public service. And each company offering some kind of root service like we do can have its own marketing policy.

We do sell TLDs (Top Level Domains) selected by clients. As opposed to the rule in the ICANN system, we do not collect money every year or every month. Clients have full ownership of the TLDs they buy from Open-Root. They can create at no cost any sub-domains of their choice, and resell their TLD if they want to.

Now, we don’t do any hosting or any maintenance. We have excellent companies which are competent to do that as contractors.

Now what else? Well, just an example. But that’s nothing new. You have dot pirates in Germany, in Sweden, and so on. So technically it’s nothing really new, it’s just technology which fits within the present domain name system, except that we have made it open and usable by any kind of company which wants to run its own business or which is not satisfied with what is offered.

So typically the companies or groups of people who could be interested are those who are not willing to wait for years to get a top level domain, or do not agree with the constraints put on domain names, or want to use specific names for their own internal needs.

Large companies! All of them of course have access to the internet, but a lot of their documents should remain confidential. Unfortunately the INTERNET is not very secure, not at all secure actually. Some of their applications may be secured, but that is much more difficult to implement and cumbersome for users. So what they can have is an EXTRANET addressing only their own customers, their own people, their own business relationships, which is of course invisible to the rest of the world.

It can also be useful in places like Africa, with a lot of languages, or the Middle East, where there are Arabic languages and also non-Arabic languages using Arabic script. It’s quite complicated, as local interpretations are needed.

Furthermore, that is off limits for the US Patriot Act. As you certainly know, the US government considers itself as having the privilege to request or require access to any information that is handled, stored or transmitted by American companies, including in foreign countries, as long as the company is American. Routinely the FBI blocks web sites or domain names which appear undesirable to its censors. Obviously this is not only objectionable but also intolerable to many countries. So what some countries do is develop their own system, like China did, by the way. In 2005 China had already set up a national INTERNET in Chinese script, completely independent from ICANN. Now (in 2016) there are about eight hundred million users, many more than in any other country.

So now there are options for companies and countries in an open and competitive domain name market. This model is much more tolerant and affordable for users than the present de facto monopoly under US control.

Thank you, that’s it.

___________________________________________________________________________________

Note: The text above is the result of transcribing an audio recording into text, slightly revised by the speaker, Louis Pouzin.