What is Computer Architecture? Types, Functions & More

What is Computer Architecture?

The term computer architecture refers to the set of rules, methods, and procedures that describe the functionality and organization of a computer system and the way it executes programs.

In other words, computer architecture means the design of the computer system using well-matched technologies.

This means that it is a specification detailing the set of hardware and software technology standards used to build a computer system whose components interact with each other to produce optimal results.


  • Computer architecture signifies the structure of the computer: the components it contains and their relationships, organization, implementation, and the functionality of the whole system.
  • It also covers other aspects of the computer such as its logic design, Instruction Set Architecture design, and micro-architecture design.
  • There are different types of computer architecture, such as the Von Neumann architecture, the Harvard architecture, systems design, micro-architecture, and more.
  • The primary job of the architecture is to ensure that the system functions in perfect harmony and that all the components process data and instructions just as they should.

Understanding Computer Architecture


In computer engineering, computer architecture is the method that describes the organization, implementation, and functionality of a computer system.

The architecture refers to the structure of the computer system: its different components, each specified separately, along with the interrelationships among them.

However, not all definitions of computer architecture that you may find on the internet define it in the same way.

A few describe it as the programming model and capabilities of the computer while excluding the implementation aspect.

On the other hand, you may also find a few other definitions of computer architecture that involve further specific aspects as well.

The best way to clear up any doubts and confusion is to start with the history and the structural aspects of computer architecture.

According to the computer literature, the term architecture traces back to the efforts of Frederick P. Brooks Jr. and Lyle R. Johnson, who were members of IBM's Machine Organization department.

In 1959, Johnson described instruction types, formats, hardware limitations, and speed improvements at the system architecture level.

He found the term more useful than machine organization. Outside the literature, computer users have since applied the term in different, less precise ways.

In the early days, computer architectures were designed by computer architects on paper and then built directly in the final hardware form.

Later on, computer scientists built architecture prototypes physically, in the form of Transistor-Transistor Logic (TTL) computers, before committing a design to its final form.

By the 1990s, new computer architectures were typically built, tested, and tweaked inside some other computer architecture, in a computer architecture simulator, or inside an FPGA, before being committed to the final hardware form.

As mentioned earlier, CISC and RISC are the two main approaches to processor architecture.

Here, the CISC processors come with a single processing unit, a small set of registers, and external memory, along with a large set of complex instructions.

Since a single CISC instruction can carry out a multi-step task, the programmer needs to write fewer lines of code.

The code therefore uses less memory, but each instruction takes longer to complete.

The RISC architecture, on the other hand, is simple and fast and performs complex tasks by combining simple instructions.

These microprocessors or digital systems can read and carry out machine language instructions, which are usually written in assembly language or another symbolic format. Each of these processors is implemented on a single integrated circuit.

Some of the common microprocessors of today are:

  • Intel Pentium series
  • The Sun SPARC
  • IBM PowerPC and more.

Almost all recent processors are microprocessors, and these are usually organized as conventional von Neumann machines.

The microprocessors of today belong to the fifth generation. The different generations of these processors are:

  • First generation – From 1971 to 1972 when Intel 4004, Rockwell International PPS-4, Intel 8008 and others hit the market
  • Second generation – From 1973 to 1978 when 8-bit microprocessors such as Intel 8085, Motorola 6800, and 6801 and others came to the market
  • Third generation – From 1979 to 1980 when 16-bit microprocessors using HMOS technology such as Intel 8086/80186/80286, Motorola 68000/68010 and others came into existence
  • Fourth generation – From 1981 to 1995 when 32-bit processors with HMOS technology such as Intel 80386, Motorola 68020, and others were launched and
  • Fifth generation – From 1995 till date, when processors such as the Intel Pentium and Celeron and, later, 64-bit dual-core, quad-core, and octa-core processors were developed and released.

There are different types of microprocessors available now, and these processors can be improved as per the need. The different advantages of using a microprocessor include:

  • Compact size
  • High processing speed
  • Low and easy maintenance
  • More flexibility and
  • Ability to handle complex mathematics.

However, as for the demerits of these processors, they can overheat when overused, many do not support floating-point operations in hardware, and their performance depends on the size of the data.

Types of Computer Architecture


Now, take a look at the different types of architecture that a computer system may have.

Von-Neumann Architecture

Von Neumann architecture may be quite an old concept of architecture but it is still used by most of the computers even today. This architecture was proposed in 1945 by the mathematician John von Neumann.

The architecture defines the design of an electronic computer along with its CPU. This includes:

  • The Arithmetic Logic Unit or ALU
  • The Control Unit
  • The memory for storing data and instructions
  • The external storage functions
  • The registers and
  • An I/O interface.

The computers that you find today still use the architecture coined and developed by John von Neumann because they follow the same concepts.

Also known as the Princeton architecture, the design underlies nearly every electronic digital system of today that has the components mentioned above.

This is primarily because, although computers have changed dramatically as physical objects in the years since von Neumann proposed his architecture, from supercomputers that occupied an entire room to the modern-day laptop that fits in a bag, the basic functionality is still the same.

Though performance levels have increased enormously, at their core there is very little difference between the computers created then and now. Therefore, they still can and do run on the same old von Neumann architecture.


That is why the design of von Neumann is considered to be the foundation of modern computers.

Though the Harvard architecture is quite similar and uses dedicated address and data buses to read from and write to memory, the von Neumann model won out largely because it is easier to implement in real hardware.

The primary computation concept of the von Neumann architecture is that both data and instructions are loaded in the same main memory unit of the computer. It is made up of a series of addressable locations.

It allows the processor to access the data and instructions needed to execute any task or computer program by using dedicated connections called buses.

These are the address bus, which is used to identify the memory location, and the data bus, which is used to move contents to or from that location.
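The shared-memory idea above can be sketched in a few lines of code. This is a minimal, hypothetical machine invented for illustration: the three opcodes (LOAD, ADD, HALT) and the memory layout are assumptions, not any real instruction set.

```python
# A minimal sketch of a von Neumann machine: instructions and data share
# one addressable memory, and a program counter walks through it.
# The opcodes LOAD, ADD, and HALT are hypothetical, for illustration only.

def run(memory):
    """Fetch-decode-execute loop over a single shared memory."""
    pc, acc = 0, 0                     # program counter and accumulator
    while True:
        opcode, operand = memory[pc]   # fetch: address bus selects, data bus delivers
        pc += 1
        if opcode == "LOAD":           # decode and execute
            acc = memory[operand]      # operand is an address into the SAME memory
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "HALT":
            return acc

# Program and data live side by side in the one memory:
memory = [
    ("LOAD", 4),   # address 0: acc = memory[4]
    ("ADD", 5),    # address 1: acc += memory[5]
    ("HALT", 0),   # address 2: stop
    None,          # address 3: unused
    20,            # address 4: data
    22,            # address 5: data
]
print(run(memory))  # 42
```

Note how the program could, in principle, overwrite its own instructions, since code and data occupy the same addressable locations; that is the defining property of the stored-program design.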

However, the von Neumann architecture comes with some pros and cons that are good to know.

There are a number of good reasons for the von Neumann computer architecture to be so successful.

One, it is comparatively easy to put into practice in hardware. Two, the von Neumann machines can be introspective and deterministic.

This means that these can be expressed mathematically and every step of the computing process can be clearly understood.

Their behavior is consistent, and therefore one can rely on them to produce the same output over and over again for any given set of inputs.

However, one of the most significant challenges with von Neumann machines is that programming them directly is very difficult.

This eventually resulted in the development of computer programming languages, which take real-world problems and express them in terms the von Neumann machine can handle.

When a software program is created, this means reducing an algorithm to the formal instructions that a von Neumann machine can follow.

However, there is one significant issue here as well: not all problems reduce easily to algorithms, which leaves a number of problems unsolved.

Harvard Architecture

The Harvard architecture, on the other hand, keeps data and code in separate memory sections.

This means that it needs a distinct memory block for the instructions and data.

Typically, the data storage is contained entirely within the CPU.

In the Harvard architecture, data in one memory is usually accessed through a single memory location, and only a single set of clock cycles is required.

Apart from that, modern computers often combine both approaches: the CPU uses Harvard-style separate paths internally while presenting a single unified memory to software.

In a standard computer that follows the von Neumann architecture, both the data and the instructions are stored in the same memory.

This means that the same buses can be used to retrieve them as and when required.

This means that the CPU cannot fetch an instruction and read or write data at the same time; the shared bus can carry out only one of these tasks at a time.

In comparison, in the case of the Harvard Architecture, there are separate buses and separate storage.

This means that there are separate signal paths for fetching instructions and for transferring data.

The key benefit of having separate buses for data and instruction is that it allows the CPU to access instructions and read or write data at the same time.

The main idea of the Harvard architecture was to do away with the bottleneck of Von Neumann Architecture.

In the Harvard architecture there are different types of buses or separate signal pathways used. These are:

  • Data buses that carry data between the processor, main memory system, and I/O devices
  • Data address buses that carry the data address to the main memory system from the processor
  • Instruction buses that carry instructions among the processor, main memory system, and I/O devices and
  • Instruction addresses buses that carry the address of instructions to the main memory system from the processor.
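The separation described by these buses can be sketched as a small simulation. As before, the machine is entirely hypothetical: the opcodes and the idea of counting one cycle per step are assumptions made to show that an instruction fetch and a data access can proceed over independent paths.

```python
# A minimal Harvard-style sketch: instructions and data sit in SEPARATE
# memories with separate buses, so an instruction fetch and a data access
# can overlap in the same cycle. Opcode names are invented for illustration.

def run_harvard(instr_mem, data_mem):
    pc, acc, cycles = 0, 0, 0
    while True:
        op, addr = instr_mem[pc]       # instruction bus / instruction address bus
        pc += 1
        cycles += 1                    # fetch and data access overlap: one cycle
        if op == "LOAD":
            acc = data_mem[addr]       # data bus / data address bus
        elif op == "ADD":
            acc += data_mem[addr]
        elif op == "STORE":
            data_mem[addr] = acc
        elif op == "HALT":
            return acc, cycles

instr_mem = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
data_mem = [20, 22, 0]
result, cycles = run_harvard(instr_mem, data_mem)
print(result, cycles)   # 42 4
```

Because `instr_mem` is never written by the program, code cannot overwrite itself here, which is exactly the property the separate instruction memory provides.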

There are also different types of operational registers involved in the Harvard architecture that are mainly used to store different types of instruction addresses.

Two such operational registers are the Memory Data Register and Memory Address Register.

There is also a program counter that contains the location of the subsequent instruction that is to be executed. This counter passes this address to the Memory Address Register.

The architecture also contains other components such as:

  • The Arithmetic and Logic Unit as a part of the CPU that does all additions, subtractions, bit shifting operations, comparisons, logical operations, and several other arithmetic operations
  • The Control Unit as a part of the CPU that manages all control signals of the processors along with the input and output devices and the movement of data and instructions within the system and
  • I/O systems that are used to read data into the main memory under CPU input instructions, with the result given as output through the output devices.

The Harvard architecture also comes with some significant advantages, the chief one being the separate buses for data and instructions, which allow simultaneous access to both.

In practice, a modified Harvard architecture, with two distinct data and instruction caches, is used in processors such as ARM and x86 designs.

Instruction Set Architecture

Commonly known as ISA, this is another important digital computer architecture which comprises a set of instructions.

The processor interprets and executes these instruction sets, which are typically of two distinct types, namely CISC (Complex Instruction Set Computer) and RISC (Reduced Instruction Set Computer).

CISC includes several specialized instructions that are useful for specific programs but are not universal. Such programs characteristically use fewer instructions but each of these instructions will usually take extra cycles.

RISC, on the other hand, has a more optimized, smaller, and generalized set of simple instructions.

However, memory is accessed only through separate load and store instructions, never as part of another instruction.

There is a larger number of instructions but each takes a single clock cycle. There are also greater numbers of registers and concurrent execution of parts through pipelining in RISC processors.
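The trade-off between the two styles can be made concrete with a toy example. Everything here is invented for the sketch: the instruction names (`ADDM`, `LOAD`, `ADD`, `STORE`) and the cycle counts are illustrative assumptions, not measurements of any real processor.

```python
# Illustrative only: the same memory-to-memory addition expressed in a
# CISC style (one complex instruction taking several cycles) and a RISC
# style (several simple instructions, one cycle each).

cisc_program = [
    ("ADDM", "a", "b", "c"),   # mem[c] = mem[a] + mem[b] in one instruction
]

risc_program = [
    ("LOAD",  "r1", "a"),      # register r1 = mem[a]
    ("LOAD",  "r2", "b"),      # register r2 = mem[b]
    ("ADD",   "r3", "r1", "r2"),
    ("STORE", "r3", "c"),      # mem[c] = r3
]

# Assumed cycle costs, chosen only to illustrate the trade-off:
CYCLES = {"ADDM": 4, "LOAD": 1, "ADD": 1, "STORE": 1}

def total_cycles(program):
    return sum(CYCLES[instr[0]] for instr in program)

print(len(cisc_program), total_cycles(cisc_program))  # 1 instruction, 4 cycles
print(len(risc_program), total_cycles(risc_program))  # 4 instructions, 4 cycles
```

The CISC version is shorter (less program memory), while the RISC version uses more, simpler instructions; it is the single-cycle instructions that make RISC pipelines easy to keep full.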

Though implementations of an Instruction Set Architecture are quite versatile, they usually vary in features such as:

  • Physical size
  • Performance and
  • The price.

However, a stable ISA allows new, high-performing micro-architectures to be developed that can still run software written for previous generations of the processor.

Ideally, an ISA describes the set of essential operations that must be supported by a computer.

This includes the practical explanation of operations and the particular descriptions of how to access and invoke them.

An ISA is distinct from its implementation in a processor; it does not depend on the micro-architecture.

In fact, an ISA itself can have several micro-architecture implementations.

Ideally, an ISA includes instructions for different tasks such as:

  • Memory operations
  • Data handling
  • Arithmetic and logic operations
  • Coprocessor instructions and
  • Control flow operations.

Apart from that, an Instruction Set Architecture also defines the maximum bit length of instructions as well as how each instruction is encoded.

With the definition of the ISA, hardware and software development can be decoupled from each other.

This means that a particular company can develop the hardware and several other companies can work on the software development separately knowing that the software program will run on that particular hardware.


Micro-architecture

Micro-architecture refers to the specific structural blueprint of a microprocessor: the particular way in which a given ISA is implemented in the processor.

The ISA is implemented by the hardware scientists and engineers with different micro-architectures that may vary due to changing technology.

It includes the methods, the resources and the technologies used as well. In doing this, the processors are actually devised to manage a specific instruction set.

In simple words, this is the digital logic design of all the data paths and electronic elements present in the microprocessor, arranged in a particular manner.

This arrangement allows the instructions that need to be executed to complete in the best possible way.

Ideally, it is all about the combined implementation of different elements such as:

  • The registers
  • The memory
  • The Arithmetic Logic Units
  • The multiplexers and
  • All other digital logic blocks.

All of these elements together form the processor.

When a micro-architecture is combined with an ISA it makes up the computer architecture of the system on the whole.

The same ISA can be implemented by different micro-architectures, with trade-offs in execution speed and power efficiency.

The fundamental processor consists of elements like these, and together they allow the processor to make decisions according to the instruction that is being executed.

This specific type of computer architecture involves data pathways between the components and buses and how exactly these are laid out to determine the shortest paths and establish proper connections.

Modern microprocessors typically have multiple layers that help handle complexity, but the primary idea, the layout of a circuit that executes the operations and commands defined in the instruction set, remains the same.

The pipelined data path is a technique currently used in micro-architecture that provides a form of parallelism.

Applied to data processing, it overlaps the execution of several instructions.

This is achieved by running several execution pipelines in parallel or close to parallel.
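The benefit of this overlap can be counted directly. The sketch below assumes an idealized four-stage pipeline with no stalls or hazards (real designs must handle both); the stage names are the textbook ones, chosen for illustration.

```python
# Why pipelining helps: with S stages and N instructions, a non-pipelined
# processor needs S*N cycles, while an ideal stall-free pipeline needs
# only S + N - 1, because a new instruction enters every cycle.

STAGES = ["fetch", "decode", "execute", "writeback"]

def sequential_cycles(n_instructions, n_stages=len(STAGES)):
    """Each instruction runs all stages before the next one starts."""
    return n_stages * n_instructions

def pipelined_cycles(n_instructions, n_stages=len(STAGES)):
    """Stages overlap: fill the pipeline once, then one result per cycle."""
    return n_stages + n_instructions - 1

for n in (1, 4, 100):
    print(n, sequential_cycles(n), pipelined_cycles(n))
# 1 -> 4 vs 4;  4 -> 16 vs 7;  100 -> 400 vs 103
```

For long instruction streams the pipelined machine approaches one completed instruction per cycle, which is why the technique is near-universal in modern micro-architectures.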

The execution units are a very important aspect of micro-architecture since these perform the calculations and operations of the processor.

Therefore, the most significant consideration for micro-architecture design is the selection of the number of execution units along with their throughput and latency.

On the other hand, the other important micro-architectural decisions are:

  • The size
  • The throughput
  • The latency and
  • The connectivity of memories in the system.

The system-level design is another vital aspect of micro-architecture, and it includes assessments of performance levels and the connectivity of input and output (I/O) devices.

Typically, micro-architectural design is shaped as much by restrictions as by capabilities, and each design decision directly affects everything that is built on it.

Therefore, the specific areas that should be focused on while selecting the design include and are not limited to:

  • The performance
  • The chip area
  • The cost
  • The logic complexity
  • The testability
  • The ease of debugging
  • The ease of connectivity
  • The manufacturability and
  • The power consumption.

Ideally, a micro-architecture is considered to be good only when it caters to all of the above criteria.

Systems Design

A good systems design will be able to meet all the requirements of the user such as data management in the system, its architecture, and the computer modules that have different interfaces.

Therefore, considering all these facts, a systems design can be viewed as an application of systems theory to product design and development.

Apart from that, in systems design there is also some overlap with other systems disciplines such as:

  • Systems architecture
  • Systems analysis and
  • Systems engineering.

Ideally, systems design refers to the act of using the marketing information and designing a product to be manufactured based on the broader aspect of product development.

This combines the viewpoint of design, marketing, and manufacturing in a single approach to develop a product.

Therefore, it can be said that systems design is the procedure of defining and developing systems that will satisfy the specific needs of the users.

This means that the fundamental aspect of systems design is to study the different components and their interaction with each other.

Physical design is one important aspect of systems design which is related to the actual input and output procedures of the system. This means that it involves different aspects such as:

  • How data and instructions are given to the system
  • How these are authenticated or verified
  • How the input is processed and
  • How the output of the processed data is displayed.

The different requirements that are determined in a physical design are:

  • The input requirement
  • The output requirement
  • The storage requirement
  • The processing requirement
  • The system control requirement and
  • The backup or recovery requirement.

In other words, physical design of a system can be divided into three specific sub-tasks such as:

  • User interface design
  • Process design and
  • Data design.

When you are done designing the micro-architecture and an instruction set, the next thing to do is develop a practical machine.

This is called implementation, which is normally considered hardware design engineering rather than architectural design.

This implementation process can be divided further into different steps which include:

  • Logic implementation where the required circuits are designed at a logic-gate level and
  • Circuit implementation where basic elements such as multiplexers, gates and latches are designed at transistor-level.
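The logic-gate level mentioned in the first step can be illustrated in software. The sketch below builds a 2-to-1 multiplexer, one of the basic elements named above, purely out of NOT/AND/OR functions; modelling gates as Python functions is of course an assumption made for demonstration, not how circuits are actually designed.

```python
# Logic implementation sketch: a 2-to-1 multiplexer built only from
# NOT, AND, and OR gates, each modelled as a function on bits (0 or 1).

def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def mux2(sel, a, b):
    """Output a when sel == 0, output b when sel == 1."""
    return OR(AND(NOT(sel), a), AND(sel, b))

# Exhaustively check the truth table, as a logic emulator would:
for sel in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert mux2(sel, a, b) == (b if sel else a)
print("mux2 matches its truth table")
```

Exhaustive truth-table checking like this is exactly what logic-level validation does, and it is feasible here only because the circuit is tiny; real designs rely on emulators and FPGAs, as described below.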

Physical implementation is a further step in which the physical circuits are drawn: the different components are placed on a board or a chip floorplan, and the wires connecting them are routed.

Design validation tests must then be done to see how things work in real time and in different situations. When this process starts, logic emulators test the design at the logic level.

However, this is a slow process, so once the necessary corrections have been made, prototypes are built using Field Programmable Gate Arrays (FPGAs) to speed things up.

The last step is to test the prototype Integrated Circuits that may need you to make several redesigns.

Practically, for the CPUs, this entire process is referred to as systems design.

Other Technologies

There are some other technologies as well in computer architecture. These technologies are usually used in larger companies such as Intel. These are:

  • Assembly instruction set architecture – Here a smart assembler converts an abstract assembly language, common to a family of machines, into slightly different machine language for the various implementations.
  • Programmer-visible macroarchitecture – Here compilers and other high-level language tools define a reliable interface or contract for the programmers who use them, abstracting away differences between the underlying micro-architecture, ISA, and UISA.
  • Microcode – This software translates instructions and acts as a wrapper around the hardware to run a chip. It presents an idealized version of the hardware's instruction set interface, which gives chip developers flexible options.
  • UISA – Short for User Instruction Set Architecture, this is one of three subsets of the RISC CPU instructions that are useful to application developers. The other two subsets are the OEA or Operating Environment Architecture, used by operating system developers, and the VEA or Virtual Environment Architecture, whose instructions are useful to developers of virtualization systems.
  • Pin architecture – This is more flexible than ISA functions, since external hardware can adapt to new encodings or change from a pin-based to a message-based interface.

Now that you have seen the different computer architectures, take a look at how a computer architecture actually works.

How Does Computer Architecture Work?

Now, take a look at the working process of each of the components of the computer architecture.

Input Unit – This provides data from outside to the computer system.

This means that the input unit connects the outer environment with the computer by taking data from the input devices.

It then converts the data into machine language and loads the same in the computer system.

Output Unit – This provides the results of the processed data by the computer on the output devices in the form of text, images, audio, and video. This means that it connects the computer with the outer environment.

Storage Unit – This stores the data and information in the traditionally divided primary and secondary storage.

The data in primary storage is made available to the CPU directly, while secondary storage, which stores huge amounts of data permanently, is not; its data is first transferred to primary memory, from where the CPU can access it.

Arithmetic Logic Unit – This performs all the calculations of the computer system and is part of the Central Processing Unit.

Control Unit – This is another part of the CPU; it transfers data from the storage unit to the Arithmetic Logic Unit for calculation. The control unit is often called the central nervous system of the computer, since it controls all the other units.

Apart from transferring data across the computer, it also decides how the input output devices, memory, Arithmetic Logic Unit and others should behave.
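The cooperation between the units described above can be sketched as a chain of functions. The function names and the tiny "add two numbers" task are illustrative assumptions; the point is only the flow: input unit to storage, control unit to ALU, result to output unit.

```python
# A sketch of how the units cooperate: the input unit converts external
# data and loads it into storage, the control unit moves it to the ALU,
# and the output unit presents the result. Names are illustrative.

def input_unit(raw_text):
    """Convert external data into the machine's internal form."""
    return [int(token) for token in raw_text.split()]

def alu(operation, a, b):
    """Arithmetic Logic Unit: performs the actual calculation."""
    return {"add": a + b, "sub": a - b}[operation]

def control_unit(storage):
    """Fetches operands from storage, dispatches them to the ALU."""
    a, b = storage
    return alu("add", a, b)

def output_unit(result):
    """Present the processed result to the outside world."""
    return f"result = {result}"

storage = input_unit("20 22")              # input unit fills the storage unit
print(output_unit(control_unit(storage)))  # result = 42
```

Each stage only talks to its neighbours, mirroring how the control unit mediates between storage, the ALU, and the I/O units rather than letting them interact directly.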

What are the Elements of Computer Architecture?

Usually, the computer architecture comprises three specific elements:

  • The processor
  • The memory and
  • The peripherals.

All these components are linked with one another with the help of a system bus. This system bus, in turn, consists of three different components:

  • The data bus
  • The address bus and
  • The control bus.

What is the Importance of Computer Architecture?

Now, take a look at the importance of computer architecture.

Typically, the primary role of the computer architecture is to ensure that there is a perfect harmony when the computer operates or processes any given set of data or instructions.

Therefore, the computer architecture needs to maintain a balance in the efficiency and performance of the computer system on the whole as well as keep its cost low and ensure a high level of reliability.

For example, the Instruction Set Architecture typically represents the programmer's view of the system, apart from acting as the bridge between the hardware and the software of the computer.

As you may know already, computers can understand only binary language, which includes only 0s and 1s.

On the other hand, the users of the computers understand languages of much higher level and other conditions such as while, if else, and more.

Therefore, without a proper common language there can be no proper communication between the computer and the user, which creates a lot of confusion during operation.

That is why an efficient computer architecture is required for the two to communicate with each other.

In that case, the Instruction Set Architecture plays a vital role by bridging this gap: it translates the high-level language of the users into the binary language the computer can understand.
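One small step of that bridge, turning human-readable mnemonics into 0s and 1s, can be sketched as a toy assembler. The 4-bit opcodes and the 8-bit instruction word below are invented for illustration and do not correspond to any real ISA.

```python
# A sketch of the ISA as a bridge: human-readable mnemonics on one side,
# the binary the hardware understands on the other. The opcode table and
# the 8-bit encoding (opcode in the high 4 bits, operand in the low 4)
# are made up for this example.

OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

def assemble(line):
    """Translate one assembly line into an 8-bit binary string."""
    parts = line.split()
    mnemonic = parts[0]
    operand = int(parts[1]) if len(parts) > 1 else 0
    word = (OPCODES[mnemonic] << 4) | operand   # pack opcode and operand
    return format(word, "08b")

program = ["LOAD 4", "ADD 5", "HALT"]
for line in program:
    print(line, "->", assemble(line))
# LOAD 4 -> 00010100
# ADD 5  -> 00100101
# HALT   -> 11110000
```

The encoding table plays the role the ISA specification plays in a real system: as long as hardware and software agree on it, each side can be developed independently.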

What are the Examples of Computer Architecture?

Here are some of the most common examples of computer architectures:

  • The SPARC – These are typically made by Sun Microsystems and others
  • The x86 – This is made by AMD and Intel
  • The PowerPC – This is made by Apple, Motorola, and IBM.
  • ARM – This is a computer architecture started as a 32-bit design by the Acorn Computer company that became dominant in embedded and mobile devices. It later added 64-bit support with the AArch64 mode in version 8 (ARMv8).
  • MIPS – This was used in Silicon Graphics workstations and other network infrastructures such as firewall systems and backbone routers.

Typically, there are three specific groups of computer architecture namely:

System Design

This group of computer architecture includes the hardware components used in the computer apart from the CPU or Central Processing Unit, such as:

  • The data processors
  • The GPU or Graphics Processing Unit and
  • Direct memory access (DMA).

Apart from that, it also includes data paths, memory controllers, and diverse things such as virtualization and multiprocessing.

Instruction Set Architecture

Commonly referred to as the ISA, this specific group of computer architecture includes the programming language embedded in the Central Processing Unit. In addition to that, it also includes other vital things such as:

  • The word size
  • The memory addressing mode
  • The types of processor register
  • The instruction set used by the programmers and
  • The data formats.

The primary job of the ISA is to define the capabilities and functions of the CPU depending on the particular type of programming it can process or perform.


Micro-architecture

The micro-architecture is also known as computer organization. This particular type of computer architecture describes several important things such as:

  • The data paths
  • The storage elements and
  • The data processing elements.

In addition, it also describes how exactly these elements should be integrated into the Instruction Set Architecture.


Conclusion

Computer architecture is a complex subject that comprises the computer system as well as the operations that determine its functionality.

As this article points out, it defines and meets the needs of the users as well as of the system as a whole.

About Dominic Cooper

Dominic Cooper, a TTU graduate, is a computer hardware expert. He has been passionate about finding out the nitty-gritty of computers since childhood and has over 12 years of experience in writing, computer testing, and research. He is not very fond of social media. Follow him on LinkedIn.
