Serial Transmission Standards
Serial transmission is the basis of most data communication between computing devices, whether peer-to-peer or between a computing device and a peripheral such as a printer. Several serial communication standards are available for use in modern computers, including RS-232, USB, and IEEE 1394 (FireWire).
The RS-232 protocol is currently the most commonly used serial standard for modem communication. Standardized by the Electronic Industries Association (EIA), RS-232 is currently in its third release (RS-232-C). The prevalence of RS-232 in the PC marketplace is so great that the term "serial port" has come to mean an RS-232 serial connection.
RS-232 is officially limited to 20 Kbps over a maximum distance of 50 feet. In reality, depending on the type of media used and the amount of external interference present, RS-232 can be transmitted at higher speeds and/or over greater distances. Modern hardware typically supports speeds up to 115.2 Kbps using 16550-family UARTs.
RS-232 uses electrical signals to transmit the ones and zeros of the digital data stream. The RS-232 standard defines voltages between +5 and +15 volts DC on a given pin to represent a logical zero, otherwise known as a space, and voltages between -5 and -15 volts DC to represent a logical one, otherwise known as a mark.
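The mark/space mapping above can be sketched as a small decoder. This is an illustrative helper only (the function name and the one-sample-per-bit assumption are mine, not part of the standard):

```python
# Illustrative sketch of RS-232 signaling levels: positive voltages
# (+5 V to +15 V) encode a logical 0 ("space"), negative voltages
# (-5 V to -15 V) encode a logical 1 ("mark").

def decode_rs232_level(volts: float) -> int:
    """Map a measured pin voltage to a logical bit value."""
    if 5.0 <= volts <= 15.0:
        return 0   # space
    if -15.0 <= volts <= -5.0:
        return 1   # mark
    raise ValueError(f"{volts} V is outside the defined RS-232 ranges")

# Decode a sampled waveform into bits (assumes one sample per bit time).
samples = [-12.0, 12.0, -12.0, -12.0, 12.0]
bits = [decode_rs232_level(v) for v in samples]
print(bits)  # [1, 0, 1, 1, 0]
```

Note the inversion relative to TTL logic: on an RS-232 line, a *negative* voltage is the 1 bit.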
Physical Interfaces vs. Transmission Protocols
It is important to distinguish between the standards that describe the connectors, or physical interfaces, used to connect cables to a computer's physical ports, and the standards that describe the electrical characteristics, or transmission protocols, used; the latter is also referred to as signaling.
To connect devices via RS-232, a multi-wire cable must be used. The cable has several small, insulated wires within an outer jacket. Each signal to be carried, or RS-232 pin to be supported, requires its own individual inner wire.
The number of signals that must be transmitted across an RS-232 connection between two devices will depend on the software used. The number of signals required can vary from as few as two for one-way communication to twelve for a full-duplex modem connection.
DCE vs. DTE
The original application of RS-232 was to connect computing devices to modems. To make this "standard" solution easier, two classifications of RS-232 devices were created. The pin-outs for these classifications were established so that the cable connecting the two devices could be "straight through," with each pin connected to the same pin on the other end of the cable: pin 1 connects to pin 1, pin 2 to pin 2, and so on. The PC and the modem in our scenario are examples of these two classifications: the PC represents Data Terminal Equipment (DTE) and the modem represents Data Communications Equipment (DCE). Note: DCE is also referred to as Data Circuit-terminating Equipment. To connect these two devices, simply use a straight-through cable. However, to connect two DTEs or two DCEs to each other, you must use a crossover cable, also known as a null modem cable. In this cable, the pins are reversed on one end, allowing each signal to arrive at the proper pin on the other interface.
Universal Serial Bus (USB)
Historically, RS-232 has been the most widely implemented serial standard. However, due to limitations such as its speed (only up to 115 Kbps), its support for only one device per port, and the significant configuration required to attach a device, RS-232 has been virtually replaced by USB in all but the most basic applications.
USB 1.1 operates at either 1.5 Mbps or 12 Mbps. USB 2.0 can operate at speeds up to 480 Mbps. USB is capable of supporting up to 127 devices on each bus, and devices can be daisy-chained from one to another. A better solution for larger implementations is to use a USB hub, as shown in figure 3.
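To put these signaling rates in perspective, here is a back-of-the-envelope calculation of the raw transfer time for a 10 MB file at each nominal rate. The file size is an arbitrary example, and real throughput is lower because of framing and protocol overhead:

```python
# Compare nominal serial signaling rates by computing the raw time to
# move a 10 MB file. Rates are the figures quoted in the text above.

RATES_BPS = {
    "RS-232 (115.2 Kbps)": 115_200,
    "USB 1.1 (12 Mbps)":   12_000_000,
    "USB 2.0 (480 Mbps)":  480_000_000,
}

FILE_BITS = 10 * 1_000_000 * 8  # 10 MB expressed in bits

for name, rate in RATES_BPS.items():
    seconds = FILE_BITS / rate
    print(f"{name}: {seconds:,.2f} s")
```

The same file that ties up an RS-232 link for over eleven minutes moves in a fraction of a second over USB 2.0, which is why RS-232 survives mainly in low-bandwidth roles such as device consoles.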
Another high-speed serial standard is IEEE-1394. Originally developed by Texas Instruments and implemented by Apple Computer Inc. as the proprietary FireWire, the interface was standardized by the IEEE in 1995. Sony has trademarked the name i.Link for its implementation of IEEE-1394, and that moniker can now be found on numerous Japanese consumer electronics devices such as digital video recorders and cameras.
IEEE-1394 is a multipoint, serial bus-based solution. Devices can be added to or removed from the live bus, and they can be daisy-chained or connected to IEEE-1394 hubs. The original IEEE-1394 specification supported data transfer rates up to 400 Mbps, and 1394b supports data transfer rates up to 800 Mbps. In addition to standard asynchronous communications, it also supports isochronous communication, which guarantees data delivery at a constant, predetermined rate. This allows IEEE-1394 to be used in time-critical multimedia solutions. IEEE-1394 supports transmitting data over media runs of up to 100 meters. IEEE-1394 uses two different connector types, 4-wire and 6-wire; the 6-wire connector can also provide power to devices.
Tandem Offices and End Offices
A tandem office is a telephone switching center (central office) that does not connect directly to the customer. It connects offices in the same network or between networks, but always deals with trunks rather than customer lines. After Divestiture, most Class 4 tandem offices moved to AT&T, while Class 5 local tandem offices stayed with the RBOCs, where they remain today. RBOCs installed new tandem offices to handle intraLATA toll and provide access to the interLATA toll carriers. Tandem offices and end offices are generally located in the same facility, and may even be serviced by the same switch.
The PSTN is made up of local exchange carriers (LECs) and interexchange carriers (IXCs) that are governed by LATA boundaries. Note that a tandem office carries traffic between end offices and does not link to the customers themselves. Also note that IXC 2 does not have POPs in LATAs 1 and 2 and must have a reselling agreement with IXC 1 in order to gain access to those areas.
Dial-Up vs Dedicated vs Roaming
It is not as common anymore, but in the not-too-distant past, many people used a dial-up connection, via a computer modem, to access the Internet or their place of work. That isn't quite as bad as having to use tin cans for phones, but it is close. Pause and think about the fact that right now a whole generation of humans is being born who will never know the headaches of having to use a dial-up connection to access the Internet.
The type of connection you have is not always an economical choice. At present, the first question that needs to be answered is "what service is available in my area?" Then you ask "how much does it cost?" and "can I afford it?" You can find answers to these questions by calling your local service provider(s), which you may have to do if you don't already have an Internet connection available. (But then, I have to ask, how are you reading this screen?) If you do have access to the Internet, it is quite easy to search and find what services are available in your area and how much they will cost.
Dial-Up & the Central Office
Dial-up remains problematic and is slowly but surely being replaced with higher-speed, "always on," dedicated connections. If the phone or cable company is digging up the streets in your neighborhood, more than likely they are installing fiber cabling that reaches from their nearest switching facility (called a Central Office, or CO) to your neighborhood. The limiting factor that has always been the bottleneck keeping more households from getting high-speed Internet is that the phone lines were all made of copper, just like an American penny. Ma Bell and others have been stringing copper wire from one CO to the next, and from each CO to each household, for over a century. That's a lot of copper. The problem is, the farther your node is from the CO, the worse the quality of the signal on the wire. With voice you hardly ever notice a problem, but with high-speed data, the fact that the signal "attenuates" the farther it travels means a lesser amount of data can be reliably transmitted back and forth.
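The distance effect can be made concrete with a little decibel arithmetic. The loss figure used here (about 3 dB per kilometer) is an illustrative assumption, not a value from any cable specification; real copper-pair loss varies with gauge and frequency:

```python
# Rough illustration of why copper-loop length matters: signal power
# falls off exponentially with distance. The 3 dB/km loss figure is an
# assumed, illustrative value, not a cable-spec number.

def received_power(p_transmit_mw: float, km: float,
                   loss_db_per_km: float = 3.0) -> float:
    """Power remaining after `km` kilometers of cable, in milliwatts."""
    total_loss_db = loss_db_per_km * km
    return p_transmit_mw * 10 ** (-total_loss_db / 10)

# A household 5 km from the CO sees far less signal than one 1 km away.
for distance_km in (1, 3, 5):
    print(f"{distance_km} km: {received_power(100.0, distance_km):.2f} mW")
```

Under this assumed loss rate, roughly half the signal power is gone after the first kilometer, which is why DSL offerings have historically been limited to households within a few kilometers of the CO.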
Mobile with WiFi
Roaming is a fairly new phenomenon, and like anything having to do with electronics, there are competing standards. WiFi came first and continues to be popular, with open networks being provided by city and other government agencies. Many university and college campuses offer their students and employees free WiFi access to the Internet as well. WiFi refers to the IEEE 802.11 standard for wireless data communications. Almost all computing devices created in the last 5 years, including laptops, handhelds, Pocket PCs, and Tablet PCs, have built-in support for 802.11 wireless technology. So currently you'll find that many coffee shops, bookstores, shopping malls, city plazas, and the like are providing WiFi "hot spots" for accessing the Internet. People taking advantage of these hot spots are considered "mobile" users, gaining access to the Internet wherever a hot spot is accessible. The downside is that there are wireless hackers out there looking for accessible hot spots as well, including at YOUR house; the practice is called "war driving." War driving is where someone cruises around in their car from neighborhood to neighborhood looking for unsecured wireless access points. So that wireless connection you are using to view this Web page, if it's not secured, could be being shared with somebody parked in front of your house right now.
Roaming with Cellular
Then along came the cellular phone companies, who began offering Internet access through their communications networks. Many of the big ones, like Verizon, opted to implement EV-DO wireless technology instead of 802.11. So while most of us consumers who have already invested in wireless technology use devices based on 802.11, we now need to add another transceiver that is EV-DO compatible if we want the Internet via our cell phone provider's network. Cellular companies can offer their customers the ability to roam from one location to another without losing their connection to the Internet, as long as they don't lose cell service, so these users are "roaming" users of the Internet.
A computer network is defined as "two or more computing devices equipped with transceivers attached to a shared media." Similar to what you see in figure 1, computer networks that are physically confined to a geographic location like a room, a building, a campus, or a hotspot are called Local Area Networks (LANs). The computing devices that are connected (including wireless devices) to the local area network are referred to as nodes or hosts. In order to communicate on a network, each node must have a properly configured device capable of transmitting and receiving messages; this device is known as a transceiver or a network interface card (NIC). Each transceiver sends and receives messages over some type of media, usually copper wire, fiber optic cable, or, in the case of wireless transceivers, airwaves. As was just illustrated, the media used by the transceivers can vary depending on the transceiver type and the needs of the network. Today, most LAN nodes are interconnected using unshielded twisted-pair (UTP) cabling. If the node is wireless, more than likely it will be using an 802.11g wireless transceiver, which uses the 2.4 GHz band of radio frequencies (RF) to carry the data.
Large companies that have computers in many different locations around a city can form their own Metropolitan Area Networks (MANs), as shown in figure 2, and if they are spread throughout a single country, or worldwide across many countries, they can create their own Wide Area Network (WAN), as shown in figure 3. In both the case of the MAN and the WAN, a telecommunications provider (AT&T, Verizon, etc.) is usually needed to provide leased phone lines that connect one location's LAN to another location's LAN. Privately leased lines are expensive, but the advantage to businesses and universities is privacy and security. Since the organizations leasing the lines are the only ones with access to them, they are considered safe and secure, mainly because they are not using a public network like the Internet to transfer data from one location to another.
Simply stated, the Internet is the largest WAN in the world (see LAN and WAN for context). A more verbose definition would be: the Internet is a publicly accessible infrastructure of interconnected routers, switches, and access points which process and forward packets of information from one host to another using protocols defined by the Internet Engineering Task Force (IETF). Some of these protocols are commonly known, like HTTP, for instance. The Hypertext Transfer Protocol (HTTP) is how Web browsers communicate back and forth with Web servers; the World Wide Web (WWW) is a service that operates over the Internet infrastructure using HTTP. The Simple Mail Transfer Protocol (SMTP), or email, operates this way as well. Email servers are simply computer software utilizing the SMTP protocol to service send and receive requests. These servers process and forward emails over the Internet's infrastructure using SMTP. Email clients generally use the Post Office Protocol (POP) to communicate with email servers over the Internet's infrastructure to send and retrieve electronic messages.
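A minimal sketch of what an email client hands to an SMTP server, using Python's standard library. The addresses and server name are placeholders of my own; the actual send is commented out because it requires a reachable mail server:

```python
# Compose a message for SMTP delivery with Python's standard library.
# "alice@example.com", "bob@example.com", and "mail.example.com" are
# placeholder names, not real endpoints.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"      # placeholder sender
msg["To"] = "bob@example.com"          # placeholder recipient
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message would travel the Internet via SMTP.")

# Handing it to a server would look like this (assumed hostname):
# with smtplib.SMTP("mail.example.com") as server:
#     server.send_message(msg)

print(msg["Subject"])  # Hello over SMTP
```

The recipient's client would then use POP (or the newer IMAP) to pull the stored message down from its mail server.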
A Brief History of The Internet
So how did the Internet get started? Some say the journey began in 1946 when a science-fiction writer using the pen name Murray Leinster (real name William F. Jenkins) wrote a story called A Logic Named Joe, which described, to some degree, concepts that would be used by the Internet decades later.
The Advanced Research Projects Agency
In 1958, for defense purposes spurred on by the Cold War and Russia's launch of Sputnik, a branch of the United States Department of Defense (DOD) was established: the Advanced Research Projects Agency (ARPA). It was later renamed DARPA, the "D" representing Defense, and has switched between the two names more than once since; today it is again DARPA. At about that same time, during the late 1950s, a gentleman by the name of J. C. R. Licklider, known as JCR or "Lick" to his friends, colleagues, and acquaintances, worked as a vice-president at a company called Bolt, Beranek, and Newman (BBN). Based on the projects that Lick and his co-workers pursued at BBN, in 1960 he began writing a series of three papers known as the Galactic Network memos, which described his vision of a galactic network: a network of computers that allows users to gather data and access computer programs anywhere in the world. The first paper he wrote, Man-Computer Symbiosis, described Licklider's view of man's interaction with computers and the need for computer time sharing.
A year later, another Internet visionary, Massachusetts Institute of Technology (MIT) professor Leonard Kleinrock, wrote a paper on packet-switching networks. In 1962, Licklider wrote his second paper, titled On-Line Man-Computer Communication, describing the concept of social interaction through the networking of computers; later that same year, Lick was appointed the first director of the Information Processing Techniques Office (IPTO) at ARPA.
First Packet On The Internet
In 1965, MIT researcher Lawrence G. Roberts and Thomas Merrill made the first interstate connection of computers by telephone line, demonstrating that computer network communication would have to exchange data in "packets" rather than over dedicated circuits. In 1966, Roberts joined ARPA, and in 1967 he completed a plan for realizing Lick's dream: the ARPAnet. In 1968, Licklider and Robert W. Taylor wrote the third and final paper about Lick's Galactic Network, titled The Computer as a Communication Device. Further work by Robert Kahn, Kleinrock, and others brought the ARPAnet closer to fruition, until in 1969 it became a reality. Kleinrock, by then a UCLA computer science professor, and U.S. DOD contractor BBN were instrumental in establishing communication between the first two nodes of the ARPAnet, UCLA and the Stanford Research Institute (SRI), on October 29, 1969, with an Interface Message Processor (IMP) installed at each site. Professor Kleinrock was supervising his student programmer Charley Kline (CSK), and they set up a message transmission from the UCLA SDS Sigma 7 host computer to another programmer, Bill Duvall, at the SRI SDS 940 host computer. The transmission itself was simply to "login" to SRI from UCLA. They succeeded in transmitting the "l" and the "o" - and then the system crashed! Hence, the first message on the Internet was "lo," as in "lo and behold!" They were able to complete the full login about an hour later.
From NCP to TCP/IP
The first communication protocol designed for use on the ARPAnet was called the Network Control Program (NCP). During 1970, the ARPAnet grew at the rate of one node per month. In 1972, Ray Tomlinson of BBN wrote the first email program to send messages across the ARPAnet; he used the @ sign and tested his program by sending an email to himself. In 1973, Vint Cerf and Robert Kahn began writing a paper, published in May 1974, describing the Transmission Control Protocol (TCP). In December of 1974, the term "Internet" began to be used to describe a global TCP/IP-based network made up of the ARPAnet along with several other, mainly X.25-based, networks that were cropping up in the U.S. as well as Canada and Europe (Britain and France). However, it wasn't until the "flag day" of January 1, 1983 that the ARPAnet officially adopted TCP/IP as its core networking protocol.
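The connection-oriented, client/server model that TCP introduced can be demonstrated in a few lines with Python's standard socket module. This sketch runs entirely over the loopback interface; the echoed payload is a nod to the ARPAnet's famous first message:

```python
# A minimal TCP exchange over loopback: a server thread accepts one
# connection and echoes back whatever bytes it receives.

import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the bytes back

# Bind to an ephemeral port on localhost and start the server thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The client side: connect, send, and read the reply.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"lo")  # the first two characters ever sent on ARPAnet
    reply = client.recv(1024)

print(reply.decode())  # lo
```

Unlike NCP, TCP gives both ends reliable, ordered delivery over an unreliable packet network, which is the property that let the "network of networks" scale.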
Reluctance of Early Adopters
In the beginning, there was a lot of resistance from large companies, research facilities, and places of higher learning to joining the ARPAnet. All of them had large, powerful computer networks of their own and didn't really want to share them with others, so joining their networks together via a public, unsecured network was not a vision embraced by many network administrators of the day. The Internet then got a little more prompting from the U.S. government. In 1988, then-Senator Al Gore heard a report titled Toward a National Research Network by UCLA's Kleinrock, which prompted Gore to introduce a bill into Congress, the High Performance Computing and Communication Act of 1991. This led to the creation of a document called the National Information Infrastructure (NII), defining four Network Access Points (NAPs), located in Washington, D.C.; Pennsauken, NJ; Chicago, IL; and San Francisco, CA, to provide regional access to what was then emerging as the mother of all WANs, the Internet. The National Science Foundation (NSF) awarded contracts for operations and maintenance of the original four NAPs to MFS Datanet, Sprint, Ameritech, and Pacific Bell, respectively.
The Internet Spawns The World Wide Web
As more and more people who had influence over the huge and previously private data networks realized the benefits of the Internet, one by one they began joining. The Internet began growing rapidly, and by 1990 the ARPAnet was phased out. In 1991, the Gopher protocol was released by the University of Minnesota, and the World Wide Web (WWW), developed by Tim Berners-Lee, was released by the European Organization for Nuclear Research (CERN) - and the rest, as they say, "is history."
Internet Exchange Points (IXPs)
Several telecommunications mergers have occurred since the formation of the original NAPs, resulting in many of the NAPs being owned by Verizon, which subsequently bought MCI. MCI owned a trademark on Metropolitan Area Exchanges (MAE), so you'll hear many of the Internet's NAPs also referred to as MAEs, e.g., MAE-West or MAE-East. But not all NAPs were MAEs! Today, NAP is a legacy term found only in the history books, and the huge infrastructure of access points to the Internet are called Internet Exchange Points (IXPs).
Telecommunications providers (Verizon, AT&T, Sprint, and the like) are big players on the Internet because they provide the connectivity between all those LANs, MANs, and WANs. Follow the three links below to learn more about the Internet backbone providers. The first link leads you to a page of more links, Russ Haynal's Internet Map Collection. With all of the mergers between telecommunications companies lately, many of Haynal's links are now broken. The next two links give you an example of how large these telecommunication networks are.
Internet Service Providers (ISPs)
Internet Service Providers (ISPs) are companies that have paid the fees to be attached to Internet Exchange Points (IXPs). You, like most people, connect to the Internet through an ISP. Many times the ISP is also your local telephone company, but not always; in fact, due to the Telecommunications Act of 1996, competition for your communications dollars is still quite robust, although it's hard to tell how long that will last as huge telecoms continue to gobble up the littler ones.
An ISP provides you with "Internet service"; in other words, it gives you connectivity to the rest of the Internet through its network, which is ultimately attached to an IXP. In a home environment, ISPs pretty much do all of the work for you. They will install your Internet router, attach your home network to it, and configure all of the necessary settings to get the homeowner up and running with access to the Internet, including the all-important IP address.
In a business environment, ISPs will provide any wiring necessary up to the Minimum Point of Entry (MPOE) or the company's Main Distribution Facility (MDF). It is up to the company's IT personnel to get their LAN configured to work with the ISP's Internet connection.