CSIS 111B: Fundamentals of Computer Programming: Lessons

Lesson 1

Introduction to Computer Programming

Objectives

By the end of this lesson you will be able to:

  • Summarize some of the key milestones which have occurred in computer programming and some of the possibilities for the future of computer programming.
  • Describe the components of the von Neumann architecture.
  • Explain the differences between first generation (1GL) and high-level programming languages.
  • Categorize elementary computer programming concepts like digital computers and ready-made programs.
  • Explain the basics of how a computer program works.
  • Describe the difference between a compiler and an interpreter as well as the advantages and disadvantages of each.
  • Describe the difference between procedural programming and object-oriented programming.

Overview

As a computer programmer you will need to create computer programs that solve problems using:

  • Digital tools (compilers, languages, editors)
  • Design patterns appropriate for the environment in which your solution will be implemented
  • Frameworks (APIs, code libraries, services)

Your programming decisions will also include selecting the appropriate data types and control structures needed to solve the task at hand.

This lesson begins with the history of computer programming and then introduces the reader to elementary computer programming concepts, including compilers and interpreters, high-level versus low-level programming languages, and the difference between procedural and object-oriented programming languages.


The History and Future of Software

Grady Booch is Chief Scientist for Software Engineering at IBM Research and offers a vast, experience-based perspective on software development, including where it has been and where it is headed. You will not be tested on the contents of this video, but you are highly encouraged to take the time to sit back and absorb as much of this speaker's knowledge as you can. Having a well-informed understanding of the field you will soon be working in will be invaluable to the progress of your career.


History of Computer Programming

The term computer programming refers to the process required to configure a computer to perform a particular computational task. During the inaugural days of computers, the early 1940s, computers were programmed by rearranging the electrical wiring within the machine. Every time you wanted the computer to perform a different computational task, you first had to re-wire it to do so. You might say that these computer programs were "hard-wired" instructions.

The computing process consisted of electromechanical relays being turned on and off based on the "program" the system was configured to run. When a relay was open, electricity could not flow on that circuit; when the relay was closed, electricity could flow, much like a light switch. Many of the computer concepts used in the first part of the twentieth century were based on work done by mathematician and computer scientist Alan Turing, which led to the introduction of the terms Turing Machine and Turing Completeness.

Staffers working on reconfiguring the ENIAC computer.
Computer technicians "programming" the ENIAC computer located in the BRL building at the University of Pennsylvania's Moore School of Electrical Engineering, circa 1946. (Photo courtesy of the Computer History Museum)

von Neumann Architecture

diagram of von Neumann's computer architecture.
The von Neumann Architecture

A major advance in computer design occurred in the late 1940s, when John von Neumann (pronounced noy-man) had the idea that a computer should be permanently hardwired with a small set of general-purpose operations [Schneider and Gersting, 2010]. The operator could then input into the computer a series of binary codes that would organize the basic hardware operations to solve more specific problems. Instead of turning off the computer to reconfigure its circuits, the operator could flip switches to enter these codes, expressed in machine language, into computer memory. At this point, computer operators became the first true programmers who developed software/machine code to solve problems using computers. This process developed by von Neumann is known as input-process-output (IPO).

Photo of John von Neumann next to the IAS computer.

However, the earliest computers were not capable of storing a computer program for re-use. It wasn't until the IAS machine, introduced in 1952, that a computer could store a program written by a programmer. The IAS machine was a vacuum-tube-based computer built at the Institute for Advanced Study (IAS) in Princeton, New Jersey. It is sometimes called the von Neumann machine, since the paper describing its design was edited by John von Neumann, who at the time was a mathematics professor at both Princeton University and the IAS. The computer was built from late 1945 until 1951 under his direction. The general architectural design of the IAS is called the von Neumann architecture, even though it was both conceived and implemented by others. The computer architecture of input, process, output, and memory can be found in all of today's modern computing devices.
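To connect this architecture to something concrete, here is a minimal input-process-output sketch written in Python (one of the languages listed later in this lesson). The temperature-conversion task and the variable names are illustrative assumptions, not something from the lesson itself; the point is simply that nearly every program follows the same input, process, output pattern, with values held in memory along the way.

```python
# A minimal input-process-output (IPO) sketch.
# The temperature-conversion task is just an illustrative example.

def main():
    # Input: read a value from the user (held in memory as a variable)
    fahrenheit = float(input("Enter a temperature in Fahrenheit: "))

    # Process: apply a computation to the stored value
    celsius = (fahrenheit - 32) * 5 / 9

    # Output: report the result
    print(f"{fahrenheit} F is {celsius:.1f} C")

if __name__ == "__main__":
    main()
```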

Computers continued to operate mainly using vacuum tubes until the early 1960s, when transistors became more reliable and were mass produced. Invented in 1947 by John Bardeen, Walter Brattain, and William Shockley at Bell Labs, the earliest transistors were not very reliable and were difficult to produce. Transistor-based design advanced tremendously with the invention of the integrated circuit (IC), developed by Robert Noyce and others at Fairchild Semiconductor. Noyce later joined with Gordon Moore, of Moore's Law fame, to start the company Intel in 1968. Intel released the world's first microprocessor, the 4-bit Intel 4004, in 1971. Most modern central processing units (CPUs) are microprocessors, meaning they are contained on a single IC chip.

The one thing all these early computers had in common is that they used binary notation both for programming the computer and for its internal computational processes; this notation is also known as machine language.

1GL

In order to load instructions into computer memory for processing on the IAS and other similar mainframe computers of the era, the instructions needed to be represented as zeros and ones using, as previously mentioned, a machine language. The term 1GL, or first-generation language, refers to languages that consist of machine code instructions.

Originally, no translator was used to compile or assemble first-generation language programs. At first, a program's instructions were entered through the front-panel switches of the earliest computers; on later machines, up through the minicomputers, they were stored as a collection of punch cards used to load the machine code instructions into the computer's memory. The instructions in a 1GL are formed from varying combinations of binary digits, zeros (0) and ones (1). This makes the language easy for computing devices to execute, but far more difficult for a human programmer to read and learn.
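To give a rough feel for what "varying combinations of zeros and ones" means, the sketch below decodes a made-up 8-bit instruction word in Python. The 3-bit opcode and 5-bit operand layout, and the opcode names, are invented purely for illustration and do not correspond to any real machine.

```python
# Decode a made-up 8-bit instruction word: the top 3 bits name the
# operation, the bottom 5 bits hold an operand. This layout is purely
# illustrative and does not match any real CPU.

OPCODES = {0b000: "LOAD", 0b001: "ADD", 0b010: "STORE", 0b011: "JUMP"}

def decode(word):
    opcode = (word >> 5) & 0b111   # top 3 bits select the operation
    operand = word & 0b11111       # bottom 5 bits hold the operand
    return OPCODES.get(opcode, "???"), operand

# 00101101 -> opcode 001 (ADD), operand 01101 (13)
print(decode(0b00101101))          # ('ADD', 13)
```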

Advantage of programming in 1GL: Code can run very fast and very efficiently, precisely because the instructions are executed directly by the central processing unit (CPU).

Disadvantage of programming in a low-level language: When an error occurs, the code is not as easy to fix. Also, first-generation languages are tied to a specific computer and CPU; therefore, code portability is significantly reduced in comparison to higher-level languages.

Modern-day programmers still occasionally use machine-level code, especially when programming lower-level functions of the system such as device drivers and interfaces with firmware and hardware. Modern tools such as native-code compilers are used to produce machine-level code from a higher-level language.

High-level Programming Languages

The first high-level programming language was Plankalkül, created by Konrad Zuse between 1942 and 1945. The first high-level language to have an associated compiler was created by Corrado Böhm in 1951 for his PhD thesis. The first commercially available language was FORTRAN (FORmula TRANslation), developed beginning in 1954 by a team led by John Backus at IBM; the first manual appeared in 1956.

When FORTRAN was first introduced it was treated with suspicion because of the belief that programs compiled from a high-level language would be less efficient than those written directly in machine code. FORTRAN became popular because it provided a means of porting existing code to new computers in a hardware market that was rapidly evolving, and it eventually became known for its efficiency. Over the years FORTRAN has been updated, with standards released for FORTRAN 66, FORTRAN 77, and Fortran 90.

High-Level vs. Low-Level Languages, Compilers and Interpreters

Computers speak machine language and only machine language, while human programmers prefer to write source code in high-level languages. To satisfy the needs of both the computer and the programmer, two types of "translators," the interpreter and the compiler, were designed to take human-written source code and convert it into the machine language the microprocessor needs in order to actually perform the requested instructions.

This video explains the differences between low-level and high-level languages.

In this video you will learn the "translation" process of both the interpreter and the compiler and what the differences are between the two.

Modern-day high-level programming languages, arranged by their compilation method:

Compiled directly to machine code:
  • C++
  • Fortran
  • COBOL
  • LISP

Compiled to an intermediate form:
  • C#: runs on the Common Language Runtime (CLR), which provides one or more Just-In-Time (JIT) compilers; intermediate form: MSIL
  • Java: runs on the Java Virtual Machine (JVM); intermediate form: bytecode

Interpreted:
  • Python: Python interpreter
  • ActionScript: Shockwave and Flash Player
  • JavaScript: built into Web browsers, some operating systems, and runtime environments such as Node.js
  • Perl: server-side applications
  • Ruby: server-side applications
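A side note on the intermediate forms listed above: CPython, the standard Python implementation, actually compiles source code to an internal bytecode before its interpreter executes it, much as Java compiles to bytecode for the JVM. The standard-library dis module lets you peek at that translation; the tiny function below is just an illustrative example.

```python
# Inspect the bytecode that CPython generates for a small function.
import dis

def add(a, b):
    return a + b

# Prints the bytecode instructions (e.g. LOAD_FAST, BINARY_ADD or
# BINARY_OP, RETURN_VALUE) that the CPython interpreter executes.
dis.dis(add)
```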

A Multitude of Programming Languages to Choose From

Do you really need to know the names of all of the programming languages that have ever existed? No, not at all, but you will encounter many of their names throughout your quest for programming knowledge. The likelihood is that you will start with one language and then learn additional languages as determined by the needs of the organization you are working for. In this course you will be exposed to several of the most popular programming languages. In the next video, speaker Mark Rendle goes through an exhaustive list of programming languages that have come and gone, with some well-placed humor interspersed throughout. It is probably one of the more entertaining videos you will ever see about computer programming.

Video: The History of Computer Programming

Elementary Computer Programming Concepts

The following series of videos comes from a TV show named Bits and Bytes that aired in the early 1980s. They may seem pretty "campy" by today's standards; however, they do a good job of illustrating and demonstrating basic computer concepts that are still in practice today. Pay close attention and see if you can identify which of the concepts and procedures shown in these videos are still in use today and which have been supplanted by newer concepts or procedures. Also, the computers you will be seeing were among the very first microcomputers: computers based on the newly developed microprocessors from Intel, Motorola, Texas Instruments, and others.

Getting Started: The Digital Electronic Computer

Ready-Made Programs

As we stated previously, in the early days of computing there was no way to save a computer program so that it could be reused. When hard-wired instructions were used, they were written or typed on paper by a human, and in order to reuse them, a human had to read those instructions again in order to place the plugs and flip the switches that re-programmed the computer.

In those days, downloading an app from a mobile store or an Internet Web site hadn't even been conceived of yet. Development of what later became the Internet didn't begin until the early 1960s, and the World Wide Web didn't exist prior to the 1990s. The stored-program concept had existed only as a theoretical idea, embodied in the universal Turing machine. It wasn't until 1948 that a computer existed which could store its instructions in some kind of computer memory (storage technologies varied widely at the time).

How Computer Programs Work

In this video Billy Van learns how to write his first computer program. How programming instructions are written varies by the language and system you're coding for, but there are also many concepts common to the languages and tools used to write source code. Typing quotes around text you want to print to the screen or elsewhere is still used in most programming languages. However, typing numbers before each line of code is no longer used, because the technique led programmers to write what was known as "spaghetti code": code that could not be easily followed or debugged.
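For comparison with what Billy Van types in the video, here is roughly what a comparable first program looks like in a modern language such as Python; the greeting text is invented for illustration. The text to display is still wrapped in quotation marks, but no line numbers are required.

```python
# Text to be displayed is still placed inside quotation marks,
# but modern languages no longer require numbered lines.
print("Hello, world!")
print("My name is Billy.")   # illustrative second line, no line number needed
```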

Procedural vs. Object-Oriented Programming

Many different paradigms have been developed for writing computer program source code. This video explains the two most commonly used programming paradigms, procedural and object-oriented, and the differences between the two. Object-oriented programming is the most commonly used paradigm, but a lot of software development still uses modular procedural programming techniques.
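As a minimal sketch of the difference, the same task is shown below in Python, first in procedural style (functions that operate on separate data) and then in object-oriented style (a class that bundles the data with its operations). The bank-account example is an invented illustration, not something taken from the video.

```python
# Procedural style: data and the functions that act on it are separate.
def deposit(balance, amount):
    return balance + amount

balance = 100
balance = deposit(balance, 50)
print(balance)                 # 150

# Object-oriented style: data (the balance) and behavior (deposit)
# are bundled together in a class.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

acct = Account(100)
acct.deposit(50)
print(acct.balance)            # 150
```

Both versions produce the same result; the object-oriented version simply keeps the balance and the operations that change it in one place.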


Summary

In this lesson you learned about both the history and the future of computer programming, what a computer program is, what a programming language is and how languages differ, how all computer programs are translated by compilers or interpreters into machine language in order to run on digital computing devices, and the major programming paradigms, procedural and object-oriented.

In the next lesson you will learn about the binary numbering system, ASCII encoding, and the systems development lifecycle, a methodology for planning, designing, developing, implementing, and evaluating software systems and software testing.
