Mt. San Jacinto College Computer Information Systems

CSIS 111B: Fundamentals of Computer Programming: Lessons

Lesson 1

Introduction to Computer Programming


By the end of this lesson you will be able to:

  • Summarize some of the key milestones in the history of computer programming and some possibilities for its future.
  • Describe the components of the von Neumann architecture.
  • Explain the differences between first generation (1GL) and high-level programming languages.
  • Categorize elementary computer programming concepts like digital computers and ready-made programs.
  • Explain the basics of how a computer program works.
  • Describe the difference between a compiler and an interpreter as well as the advantages and disadvantages of each.
  • Describe the difference between procedural programming and object-oriented programming.

Introduction to Computer Programming

As a computer programmer you will need to create computer programs that solve problems using:

  • Digital tools (compilers, languages, editors)
  • Design patterns appropriate for the environment in which your solution will be implemented
  • Frameworks (APIs, code libraries, services)

Your programming decisions will also include selecting the appropriate data types and control structures needed to solve the task at hand.

This lesson begins with the history of computer programming and then introduces the reader to elementary computer programming concepts, including compilers and interpreters, high-level versus low-level programming languages, and the difference between procedural and object-oriented programming languages.


The History and Future of Software


History of Computer Programming

The term computer programming refers to the process of configuring a computer to perform a particular computational task. In the inaugural days of computing, the early 1940s, computers were programmed by rearranging their electrical wiring. Every time you wanted the computer to perform a different computational task, you first had to re-wire it. You might say that these computer programs were "hard-wired" instructions.

The computing process consisted of electromechanical relays being turned on and off based on the "program" the system was configured to run. When a relay was open, electricity would not flow on that circuit; when the relay was closed, electricity could flow, much like a light switch. Many of the computing concepts used in the first half of the twentieth century were based on work by the mathematician and computer scientist Alan Turing, whose work led to the introduction of the terms Turing machine and Turing completeness.

[Photo] Computer technicians "programming" the ENIAC computer in the BRL building at the University of Pennsylvania's Moore School of Electrical Engineering, circa 1946.

A major advance in computer design occurred in the late 1940s, when John von Neumann (pronounced noy-man) had the idea that a computer should be permanently hardwired with a small set of general-purpose operations [Schneider and Gersting, 2010]. The operator could then input into the computer a series of binary codes that would organize the basic hardware operations to solve more specific problems. Instead of turning off the computer to reconfigure its circuits, the operator could flip switches to enter these codes, expressed in machine language, into computer memory. At this point, computer operators became the first true programmers who developed software/machine code to solve problems using computers. This process developed by von Neumann is known as input-process-output (IPO).

[Diagram] The von Neumann architecture.
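The input-process-output pattern von Neumann described still shapes programs today. As a minimal sketch (the function name and the Celsius-to-Fahrenheit task are illustrative choices, not from the original hardware), the same IPO pattern in a modern high-level language looks like this:

```python
# A minimal sketch of the input-process-output (IPO) model, written in
# Python rather than the binary machine codes of von Neumann's era.
# The temperature-conversion task is an illustrative example.

def to_fahrenheit(celsius):
    """Input a Celsius temperature, process it, output Fahrenheit."""
    fahrenheit = celsius * 9 / 5 + 32   # process: apply the formula
    return fahrenheit                   # output: the result

print(to_fahrenheit(100))   # prints 212.0
```

Every element of the IPO cycle is present: the argument is the input, the arithmetic is the processing, and the return value is the output.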

However, the earliest computers were not capable of storing a computer program for re-use. It wasn't until the IAS machine, introduced in 1952, that a computer could store a program written by a programmer. The IAS machine was a vacuum-tube computer built at the Institute for Advanced Study (IAS) in Princeton, New Jersey. It is sometimes called the von Neumann machine, since the paper describing its design was edited by John von Neumann, who at the time was a mathematics professor at both Princeton University and IAS. The computer was built from late 1945 until 1951 under his direction. The general architectural design of the IAS is called the von Neumann architecture, even though it was both conceived and implemented by others. The architecture of input, process, output, and memory can be found in all of today's modern computing devices.

Computers continued to operate using mainly vacuum tubes until the early 1960s, when transistors became more reliable and were mass-produced. Invented in 1947 at Bell Labs by John Bardeen, Walter Brattain, and William Shockley, the earliest transistors were not very reliable and were difficult to produce. Transistor production advanced tremendously with the invention of the integrated circuit (IC), developed by Robert Noyce and others at Fairchild Semiconductor. Noyce later joined with Gordon Moore, of Moore's Law fame, to found Intel in 1968. Intel released the world's first microprocessor, the 4-bit Intel 4004, in 1971. Most modern central processing units (CPUs) are microprocessors, meaning they are contained on a single IC chip.

The one thing that all these early computers had in common is that they used binary notation for both the programming of the computer and its internal computational processes, a notation known as machine language.


In order to load instructions into computer memory for processing on the IAS and other similar mainframe computers of the era, the instructions had to be represented as zeros and ones using, as previously mentioned, a machine language. The term 1GL, or first-generation language, refers to languages that use machine-code instructions.

Originally, no translator was used to compile or assemble first-generation language programs. Initially, a first-generation program's instructions were entered through the front-panel switches of the earliest computers; on later machines, up through the minicomputers, they were stored on punched cards, which were used to load the machine-code instructions into the computer's memory. Instructions in a 1GL are formed from varying combinations of binary digits, meaning zeros (0) and ones (1). This makes the language easy for computing devices to process, but far more difficult for the human programmer to interpret and learn.
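To get a feel for why 1GL programming was so difficult for humans, here is a small sketch of how an instruction might be packed into raw bits. The 8-bit instruction format and the opcode table below are invented for illustration; real machines each used their own, incompatible encodings, which is exactly why 1GL code was not portable.

```python
# Hypothetical 8-bit instruction word: a 3-bit opcode followed by a
# 5-bit operand. This format is invented for illustration only.
OPCODES = {"LOAD": 0b001, "ADD": 0b010, "STORE": 0b011}

def encode(mnemonic, operand):
    """Pack an instruction into the zeros and ones of machine language."""
    word = (OPCODES[mnemonic] << 5) | (operand & 0b11111)
    return format(word, "08b")   # an 8-character string of 0s and 1s

print(encode("LOAD", 7))    # prints 00100111
print(encode("ADD", 3))     # prints 01000011
```

A 1GL programmer worked directly with strings of bits like `00100111`, with no mnemonics at all; one flipped bit silently changes the instruction.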

Advantage of programming in 1GL: Code can run very fast and very efficiently, precisely because the instructions are executed directly by the central processing unit (CPU).

Disadvantage of programming in a low-level language: When an error occurs, the code is not as easy to fix. Also, first-generation languages are very much adapted to a specific computer and CPU; therefore code portability is significantly reduced in comparison to higher-level languages.

Modern-day programmers still occasionally use machine-level code, especially when programming lower-level functions of the system, such as device drivers and interfaces with firmware and hardware. Modern tools such as native-code compilers are used to produce machine-level code from a higher-level language.

High-level Programming Languages

The first high-level programming language was Plankalkül, created by Konrad Zuse between 1942 and 1945. The first high-level language to have an associated compiler was created by Corrado Böhm in 1951 for his PhD thesis. The first commercially available language was FORTRAN (FORmula TRANslation), first developed in 1954 by the team of John Backus at IBM; its first manual appeared in 1956.

When FORTRAN was first introduced, it was treated with suspicion because of the belief that programs compiled from a high-level language would be less efficient than those written directly in machine code. FORTRAN became popular because it provided a means of porting existing code to new computers in a hardware market that was rapidly evolving. FORTRAN eventually became known for its efficiency. Over the years, FORTRAN has been updated, with standards released for FORTRAN 66, FORTRAN 77, and Fortran 90.

Video: The History of Computer Programming

Elementary Computer Programming Concepts

Getting Started: The Digital Electronic Computer
Ready-Made Programs

As we stated previously, in the early days of computing there was no way to save a computer program so that it could be reused. When hard-wired instructions were used, they were written or typed on paper by a human; to reuse them, someone had to read those paper instructions and then place the plugs and flip the switches to re-program the computer.

In those days, downloading an app from a mobile store or an Internet Web site hadn't even been conceived of. Development of what later became the Internet didn't begin until the early 1960s, and the World Wide Web didn't exist prior to the 1990s. The stored-program concept had been a theoretical feature of the universal Turing machine. It wasn't until 1948 that a computer existed which could store its instructions in some kind of computer memory (storage technologies varied widely at the time).

How Computer Programs Work
Compilers and Interpreters

This table displays modern-day high-level programming languages grouped by how they are executed: compiled to an intermediate form, or interpreted.

| Language     | Execution model                  | Runtime / Interpreter                                                              | Intermediate Form |
|--------------|----------------------------------|------------------------------------------------------------------------------------|-------------------|
| C#           | Compiled to an intermediate form | Common Language Runtime (CLR), which provides one or more Just-In-Time (JIT) compilers | MSIL              |
| Java         | Compiled to an intermediate form | Java Virtual Machine (JVM)                                                          | Bytecode          |
| Python       | Interpreted                      | Python interpreter                                                                  | —                 |
| ActionScript | Interpreted                      | Shockwave and Flash Player                                                          | —                 |
| JavaScript   | Interpreted                      | Built into Web browsers, some operating systems, and runtime environments such as node.js | —           |
| Perl         | Interpreted                      | Server-side application                                                             | —                 |
| Ruby         | Interpreted                      | Server-side application                                                             | —                 |
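Python itself is a handy illustration of the "intermediate form" idea: before running a function, the interpreter compiles its source to bytecode, which the standard-library `dis` module can display. A minimal sketch, using only standard-library calls:

```python
# Python compiles a function's source to bytecode (an intermediate
# form); the interpreter then executes that bytecode. The standard
# library's dis module shows the compiled instructions.
import dis

def add(a, b):
    return a + b

print(type(add.__code__.co_code))   # the raw bytecode is a bytes object
dis.dis(add)                        # human-readable disassembly of the bytecode
```

The disassembly lists instructions such as `LOAD_FAST`, showing that even an "interpreted" language does not execute your source text character by character.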
High-Level vs. Low-Level Languages
Procedural vs. Object-Oriented Programming
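As a minimal sketch of the distinction (the rectangle example and all names here are chosen purely for illustration), the same task can be written in both paradigms:

```python
# Procedural style: the data (width, height) and the function that
# acts on it are kept separate.
def rectangle_area(width, height):
    return width * height

# Object-oriented style: the data and the behavior are bundled
# together in a class, and area() operates on the object's own state.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

print(rectangle_area(3, 4))        # prints 12
print(Rectangle(3, 4).area())      # prints 12
```

Both versions compute the same result; the difference lies in how the code is organized, which matters more and more as programs grow.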


In this lesson you learned about both the history and the future of computer programming; what a computer program is; what a programming language is and how the two differ; how computer programs are either compiled to machine code or interpreted in order to run on digital computing devices; and the major programming paradigms, procedural and object-oriented.

In the next lesson you will learn about the systems development life cycle, a methodology for planning, designing, developing, implementing, and evaluating software systems, and about software testing.
