ALOHAnet was a pioneering computer networking system developed at the University of Hawaii. It was first deployed in 1970, and although the network itself is no longer in use, one of its core concepts became the basis for the nearly universal Ethernet.

Table of contents
1 Overview
2 The ALOHA protocol
3 History
4 Description

Overview

One of the early computer networking designs, the ALOHA network was created at the University of Hawaii in 1970 under the leadership of Norman Abramson. Like the ARPANET, it was built with DARPA funding and was intended to let people in different locations access the main computer systems. But while the ARPANET used leased phone lines, the ALOHA network used packet radio.

ALOHA was important because it used a shared medium for transmission. This revealed the need for contention management schemes, which later matured into mechanisms such as the CSMA/CD used by Ethernet. Unlike the ARPANET, where each node could only talk to the single node at the other end of its line, in ALOHA every station used the same frequency, so some mechanism was needed to control who could transmit at what time. ALOHA's situation was similar to the issues faced by modern non-switched Ethernet and Wi-Fi networks.

This shared-medium approach generated interest among other researchers. ALOHA's scheme was very simple: because data was sent from teletypes, the data rate usually did not exceed 80 characters per second. When two stations tried to talk at the same time, both transmissions were garbled and the data had to be resent. ALOHA did not eliminate this problem, but it sparked the interest of others, most significantly Bob Metcalfe and other researchers at Xerox PARC, who went on to create the Ethernet protocol.

The ALOHA protocol

The first version of the protocol was basic: if a station had data to send, it sent it immediately; if the transmission collided with another, the station tried resending it "later".

The protocol has been studied extensively, and the critical question is what "later" means. The choice of backoff scheme, the rule for deciding how long to wait before resending, determines much of the protocol's total efficiency and how deterministic its behavior will be (that is, how predictably its performance changes as the load changes). Modern Ethernet uses CSMA/CD with a binary exponential backoff.
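To make "later" concrete, the heart of the design is the rule that picks the retransmission delay. The sketch below shows two possible backoff rules in Python; the function names and parameters are illustrative assumptions, not part of the original ALOHAnet.

    import random

    def uniform_backoff(slot_time, window=16):
        """Fixed-window rule: wait a random number of slots, drawn
        uniformly from a fixed window, before resending."""
        return random.randint(1, window) * slot_time

    def exponential_backoff(slot_time, attempt, max_exponent=10):
        """Ethernet-style rule: double the window after each failed
        attempt, spreading retries out further as congestion grows."""
        window = 2 ** min(attempt, max_exponent)
        return random.randint(0, window - 1) * slot_time

A fixed window is simple but behaves well only at light load; widening the window after each collision, as Ethernet later did, trades extra delay for stability as the load grows.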

ALOHA had a maximum throughput of about 18.4%, meaning that roughly 81.6% of the available bandwidth was wasted by stations trying to talk at the same time (like a grade-school classroom at recess). Slotted ALOHA was an improvement: it divided time into discrete slots in which stations were allowed to transmit, which raised the efficiency to about 36.8%.
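For reference, the two figures follow from the standard throughput analysis of a random-access channel: if G is the offered load (the average number of transmission attempts per packet time), the expected throughput S is

    \[ S_{\mathrm{pure}} = G\,e^{-2G}, \qquad \max_G S_{\mathrm{pure}} = \tfrac{1}{2e} \approx 18.4\% \ \text{at } G = \tfrac{1}{2} \]
    \[ S_{\mathrm{slotted}} = G\,e^{-G}, \qquad \max_G S_{\mathrm{slotted}} = \tfrac{1}{e} \approx 36.8\% \ \text{at } G = 1 \]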

ALOHA's characteristics are still not much different from those experienced today by hubbed Ethernet, Wi-Fi and similar contention-based systems, which carry a certain amount of inherent inefficiency. For instance, 802.11b sees real throughput of about 2-4 Mbps with a few stations talking, versus its theoretical maximum of 11 Mbps, and the throughput of such networks typically breaks down significantly as the number of users and the burstiness of their traffic increase. For these reasons, applications that need highly deterministic load behavior often use token-passing schemes (such as Token Ring) instead of contention systems; ARCNET, for example, is very popular in embedded applications. Nonetheless, contention-based systems also have significant advantages, including ease of management and speed in initial communication.

History

Norm Abramson was a professor of engineering at Stanford and an avid surfer. After visiting Hawaii in 1969, he inquired whether the University of Hawaii was interested in hiring a professor of engineering. He joined the staff in 1970 and started working on a radio-based data communications system to connect the Hawaiian islands, with funding from Larry Roberts.

By late 1970 the system was already in use, making it the world's first wireless packet-switched network. Abramson then managed to obtain an IMP from Roberts and connected ALOHAnet to the ARPANET on the mainland in 1972. It was the first time another network was connected to the ARPANET, although others would soon follow.

Description

Prior to ALOHAnet, most computer communications shared similar features. The data to be sent was turned into an analog signal by a device similar to a modem, and that signal was carried over a known connection such as a telephone line. The connection was point-to-point and typically set up by manual control.

In contrast, ALOHAnet was a true network. All of the computers "connected" to ALOHAnet could send data at any time without operator intervention, and any number of computers could be involved. Since the medium was radio, there was no cost to keeping the channel open, so it was "left open" and could be used at any time.

Using a shared signal in this way leads to an important problem: if two systems on the network, known as nodes, send at the same time, both signals will be ruined. Some sort of system needs to be in place to avoid this, and there are a number of ways to do it.

One would be to use a different radio frequency for every node, a system known as frequency multiplexing. However, this would require every node added to the network to be able to be "tuned in" by all of the other machines. Soon there would be hundreds of such frequencies, and radios capable of listening to that many frequencies at once are very expensive.

Another solution is to define "time slots" in which each node is allowed to send, known as time division multiplexing. This is easier to implement because the nodes can continue to share a single radio frequency. On the downside, if a particular node has nothing to send, its slot is wasted. This leads to situations where the available time is largely empty and the one node with data to send must do so very slowly, just in case one of the other hundred nodes decides to send something.

ALOHAnet instead introduced a new solution to the problem, random access, an approach that has since evolved into the standard one, carrier sense multiple access. In this system there is no fixed multiplexing at all. In ALOHAnet's original form a node simply started talking whenever it had data to send; the later "carrier sense" refinement has each node first listen for other traffic and transmit only when the channel sounds idle.

Normally this would mean that the first node to start using the radio could hold it for as long as it wanted, so the other nodes "couldn't get a word in edgewise". To avoid this problem, ALOHAnet had the nodes break their messages into small packets and send them one at a time, with gaps between them. This allowed other nodes to send their own packets in between, so everyone could share the medium at the same time.

There is one last problem to consider: if two nodes start their broadcasts at the same time, their transmissions garble each other just as in any other shared system. Here ALOHAnet used a clever solution: after sending a packet, a node listened to hear whether its own message was repeated back to it by a central hub. If it heard its message come back, it moved on to its next packet.

If a node never heard its packet come back, something had prevented it from arriving at the hub, such as a collision with another node's packet. In that case the node simply waited a random time and tried again. Since each node picked its waiting time at random, the colliding nodes were unlikely to retry at the same moment, so under most circumstances the retransmissions got through.
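A node's overall behaviour under this scheme can be sketched roughly as follows. This is a minimal illustration in Python, not the original ALOHAnet code; transmit_packet, heard_own_echo and the timing values are hypothetical stand-ins for the radio hardware and the hub's rebroadcast.

    import random
    import time

    def send_packet(transmit_packet, heard_own_echo,
                    echo_timeout=0.2, max_backoff=1.0):
        """ALOHA-style sender: transmit, listen for the hub to repeat
        the packet back, and if no echo arrives (for example because a
        collision destroyed the packet), wait a random time and retry."""
        while True:
            transmit_packet()
            if heard_own_echo(timeout=echo_timeout):
                return                     # hub repeated our packet: success
            # No echo: assume a collision and retry after a random delay,
            # so two colliding nodes are unlikely to collide again.
            time.sleep(random.uniform(0, max_backoff))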

This sort of contention system has the advantage of allowing any one node to use the entire network's capacity if no one else is using it. It also requires no "setup": anyone can be hooked up and start talking without needing any additional information such as which frequency or time slot to use.

On the downside, if the network gets busy the number of collisions can rise dramatically, to the point where almost every packet collides. For ALOHAnet the maximum channel utilisation was around 18%; any attempt to drive the network past this simply increased collisions, so the overall data throughput actually decreased, a phenomenon known as congestion collapse.
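The collapse can be seen directly from the throughput model quoted earlier. The short Python calculation below tabulates pure-ALOHA throughput against offered load, purely as an illustration: throughput peaks near 18% at moderate load and then falls away as extra attempts only create more collisions.

    import math

    # Pure ALOHA: throughput S = G * exp(-2G), where G is the offered
    # load in transmission attempts per packet time.
    for g in [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]:
        s = g * math.exp(-2 * g)
        print(f"offered load G = {g:4.2f}  ->  throughput S = {s:.3f}")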

Slotted ALOHA was a modification of the ALOHA protocol which raised the channel utilisation to around 36.8%. With this method, a centralised clock sent small clock-tick packets to the outlying stations, which were only allowed to begin sending immediately after receiving a tick. If only one station has a packet to send, this guarantees that the packet will never suffer a collision. If two stations both have packets to send, a collision is guaranteed and the whole of the slot up to the next clock tick is wasted. Even so, a little mathematics shows that the scheme improves overall channel utilisation, because it halves the period during which any given packet is vulnerable to collision.
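In code terms the only change from the pure scheme is that a transmission is deferred to the next slot boundary. A minimal Python sketch follows, with the caveat that it derives the boundary from the local clock rather than from a received clock-tick packet, and that the slot length and function names are assumptions.

    import time

    SLOT_TIME = 0.05   # assumed slot length in seconds

    def wait_for_next_slot(slot_time=SLOT_TIME):
        """Block until the start of the next slot (here computed from
        the wall clock; the real system used clock-tick packets)."""
        time.sleep(slot_time - (time.time() % slot_time))

    def slotted_send(transmit_packet):
        """Slotted-ALOHA rule: a packet may only begin at a slot
        boundary, which halves the window in which two packets can
        overlap and so roughly doubles the achievable throughput."""
        wait_for_next_slot()
        transmit_packet()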

The relatively low utilisation turns out to be a small price to pay given the advantages. A modification of this system for wired networks by Bob Metcalfe improved collision avoidance on busy networks, and it became the standard for Ethernet. Today the technique is known as CSMA/CD: carrier sense multiple access with collision detection.

Collision detection is much more difficult to implement in wireless systems than in wired ones. In a wired system the senders can detect a collision as it happens and abandon the transmission of the colliding packets; this is generally not feasible over radio, and ALOHA did not attempt it.

The ALOHAnet itself ran over 9600 baud modems across Hawaii. The system used two 100 kHz "channels" (slices of frequency): the broadcast channel at 413.475 MHz and the random access channel at 407.350 MHz. The network was a star, with a single central computer (an HP 2100) at the university receiving all messages on the random access channel and re-broadcasting them to all of the nodes on the broadcast frequency. This setup reduced the number of possible collisions; on the broadcast frequency, where only the hub transmitted, there could be no collisions at all. Later upgrades added repeaters that also acted as hubs, greatly increasing the area and total capacity of the network.

Send and receive packets were identical. The packet had a 32-bit header with a 16-bit parity check, followed by up to 80 bytes of data and another 16-bit parity check.
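For illustration, that layout might be expressed as follows in Python. The split of fields inside the 32-bit header is not described above and is left opaque here, and parity16 is a placeholder rather than ALOHAnet's actual check.

    import struct

    def parity16(data: bytes) -> int:
        """Placeholder 16-bit check: XOR the data together as 16-bit
        words (a stand-in, not ALOHAnet's actual parity code)."""
        result = 0
        for i in range(0, len(data), 2):
            result ^= int.from_bytes(data[i:i + 2].ljust(2, b"\x00"), "big")
        return result

    def build_packet(header: int, payload: bytes) -> bytes:
        """Pack a frame as described above: a 32-bit header protected
        by a 16-bit parity word, then up to 80 bytes of data protected
        by its own 16-bit parity word."""
        assert len(payload) <= 80
        head = struct.pack(">I", header)
        return (head + struct.pack(">H", parity16(head))
                + payload + struct.pack(">H", parity16(payload)))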

Historical details of the original wireless network are now rather difficult to come by.
