Direct Memory Access vs Memory Mapped I/O: 13 Differences


Direct Memory Access, or DMA, is a process in which data is transferred from memory to I/O and from I/O to memory without the help of the Central Processing Unit, or CPU, of the computer.

On the other hand, Memory Mapped I/O refers to mapping the registers of a device into the memory address space of the system.

When the CPU reads or writes these specific addresses, the data is actually written to or read from the device and not from real memory.

In addition to that, there are several other differences between Direct Memory Access and Memory Mapped I/O that you need to know.

KEY TAKEAWAYS

  • Direct Memory Access allows hardware to read and write data in memory directly.
  • In Memory Mapped I/O, the processor reads the data from the device registers before writing it to the buffer.
  • The DMA controller is mainly responsible for transferring data from the memory to the I/O devices and vice versa.

Direct Memory Access vs Memory Mapped I/O – The 13 Differences


1. Basic Difference

Basically, Direct Memory Access allows the hardware of the computer to read from and write to the memory directly.

On the other hand, Memory Mapped I/O enables the Central Processing Unit of the computer to manage hardware by reading and writing particular memory addresses.

2. Uses

Direct Memory Access is typically used for high bandwidth operations such as camera video input and disk I/O.

On the other hand, Memory Mapped I/O is typically used for low bandwidth tasks such as changing control bits.
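
As a rough illustration of such a low bandwidth control operation, the C sketch below sets a single enable bit through a memory-mapped register. The register address, register name, and bit position are hypothetical placeholders; the real values come from the device's datasheet and the platform's memory map.

```c
#include <stdint.h>

/* Hypothetical control register address -- the real address comes from the
 * device's datasheet or the platform's memory map. */
#define UART_CTRL_REG   ((volatile uint32_t *)0x40001000u)
#define UART_ENABLE_BIT (1u << 0)

/* Enable the peripheral by setting a single control bit through a
 * memory-mapped register: a typical low-bandwidth MMIO operation. */
static void uart_enable(void)
{
    *UART_CTRL_REG |= UART_ENABLE_BIT; /* read-modify-write of the mapped register */
}
```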

3. Data Transfer Perspective

In a DMA transfer, the data is transferred to and from the actual memory buffer directly. The role of the CPU here is to inform the device of the location of the particular buffer so that the device can access it directly.

The CPU is then free to perform other necessary tasks.

On the other hand, in Memory Mapped I/O, in order to transfer data from the device to the actual memory buffer, the processor reads the data from the registers of the memory-mapped device and then writes it to the buffer.

The reverse happens when transferring data to the device.
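
The C sketch below shows what this memory-mapped, processor-driven copy might look like: the CPU polls a status register and moves the data word by word into the buffer. The register addresses and names are hypothetical, and a real driver would also handle errors and timeouts.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped data and status registers of a receiving device. */
#define DEV_DATA_REG   ((volatile uint32_t *)0x40002000u)
#define DEV_STATUS_REG ((volatile uint32_t *)0x40002004u)
#define DEV_RX_READY   (1u << 0)

/* Programmed (memory-mapped) I/O: the CPU itself reads each word from the
 * device register and writes it into the destination buffer. */
static void mmio_read_block(uint32_t *buffer, size_t words)
{
    for (size_t i = 0; i < words; i++) {
        while ((*DEV_STATUS_REG & DEV_RX_READY) == 0)
            ;                      /* the CPU is busy polling, not doing other work */
        buffer[i] = *DEV_DATA_REG; /* one register read and one memory write per word */
    }
}
```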

4. Hardware Control

It is the DMA controller specifically that allows the hardware to read and write memory directly.

On the other hand, it is the Memory Mapped I/O that allows the Central Processing Unit to read and write a particular memory address to control the hardware.

5. Large Data Transfer

In Direct Memory Access, large amounts of data can be transferred efficiently because system concurrency is increased: the CPU is free to perform other relevant tasks while the DMA controller transfers data over the memory and system buses.

On the other hand, Memory Mapped I/O is not as efficient as Direct Memory Access for transferring large amounts of data, because the processor has to move the data word by word between the memory and the I/O module.

6. Hardware Design

In the case of Direct Memory Access, the hardware design is more complex because a DMA controller has to be integrated into the system, and the system, in turn, has to allow that controller to become the bus master.

On the other hand, in the case of Memory Mapped I/O, the hardware design is much simpler since it does not need any additional address lines. However, decoding the addresses may get a bit complicated in this case.

7. Source and Destinations


In Direct Memory Access, the DMA controller transfers data from the memory to the I/O and the other way around.

However, in the case of the Memory Mapped I/O, since the control signals are produced by the processor, data can be transferred from the memory to the processor and from the processor to the I/O.

8. CPU Functions

In DMA, the processor informs the device of the location of the buffer and then continues with its other operations, apart from accessing the memory, which is now done directly by the DMA controller.

On the other hand, in the case of the Memory Mapped I/O, the CPU needs to read the data from the registers of the Memory Mapped device and write them to the buffer during the process of transferring data to the actual memory buffer from the device.

The reverse happens when data is transferred to the device.
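
To make the DMA side of this concrete, here is a minimal C sketch of the CPU's part of the job, assuming a hypothetical DMA-capable device that exposes a destination address register, a length register, and a start bit. Once the start bit is set, the CPU can move on to other work.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical register block of a simple DMA-capable device; real engines
 * differ, but most expose a buffer address, a length, and a "go" bit. */
#define DMA_DST_ADDR  ((volatile uint32_t *)0x40003000u)
#define DMA_LENGTH    ((volatile uint32_t *)0x40003004u)
#define DMA_CONTROL   ((volatile uint32_t *)0x40003008u)
#define DMA_START_BIT (1u << 0)

/* The CPU's whole job: tell the device where the buffer is and how big it is,
 * start the transfer, and then go do something else. */
static void dma_start_receive(uint32_t *buffer, size_t bytes)
{
    *DMA_DST_ADDR = (uint32_t)(uintptr_t)buffer; /* a physical address on a real system */
    *DMA_LENGTH   = (uint32_t)bytes;
    *DMA_CONTROL |= DMA_START_BIT;
    /* From here on the device moves the data itself; the CPU is free
     * until a completion interrupt arrives. */
}
```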

9. Bus Master

In Direct Memory Access, the DMA controller eventually becomes the bus master after sending a request to the processor, which allows it by relinquishing its own control over the system bus for a few cycles and staying idle.

On the other hand, in the case of the Memory Mapped I/O, it is the processor that acts as the bus master all throughout.

10. Control Signals

All control signals necessary during the data transfer process are generated by the Direct Memory Access controller.

On the other hand, in the case of the Memory Mapped I/O, it is the CPU that produces all such control signals.

11. Instruction Fetching or Decoding Requirements

In the case of Direct Memory Access, there is no need to fetch and decode instructions during the data transfer, because the DMA controller only moves data and needs no instructions to do so.

On the other hand, in the case of Memory Mapped I/O, every piece of data transferred requires instructions to be fetched and decoded by the processor.

12. Speed of Data Transfer

Data transfer is done very quickly by the DMA controller. On the other hand, in the case of Memory Mapped I/O the process is quite slow in comparison.

13. Efficiency

In Direct Memory Access, the DMA controller becomes more efficient as the size of the data transfer grows.

On the other hand, Memory Mapped I/O becomes increasingly inefficient as the size of the data transfer grows.

Which is Needed More – Direct Memory Access or Memory Mapped I/O?


Your needs and preferences will primarily determine whether Direct Memory Access or Memory Mapped I/O is better or more necessary.

However, in general, the two are related in purpose and are not direct opposites.

Still, you can compare the two directly to arrive at a conclusion.

As you may know, under virtual memory the main memory of the system acts as a cache for the secondary memory or hard disk. Data is fetched from the hard disk into the main memory in advance.

This ensures that the required data is always available in the main memory to be accessed readily.

This helps in running more applications on the computer system than the physical memory alone could otherwise support.

Memory Mapped I/O is one of the main types of input/output in microcontrollers and microprocessors, as well as in other types of computers.

With it, a peripheral device appears to the processor like memory, and its registers share the same address space as the data memory.

With Direct Memory Access, a few specific types of peripherals are allowed to access the main memory of the system, or the Random Access Memory, directly. It can also be used to transfer data from one place in the RAM to another.


In the absence of DMA, the processor has to move every datum to or from the peripheral itself, which is extremely processor-intensive.

When data is to be streamed from a hard drive, DMA makes the job much easier.

This is because a pointer can be set up at the beginning of the data block in memory, which may correspond to a sector on the disk itself.

The transfer is then initiated and remains with the disk controller until it is finished, at which point an interrupt signals that the data transfer is complete.

Once again, all of this is done with minimal intervention from the processor, which keeps the system from becoming awfully sluggish or unresponsive.
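
A sketch of that final step might look like the C fragment below: a completion interrupt handler that acknowledges the controller and flags the main code. The register address and the write-1-to-clear behavior are assumptions for illustration; how the handler gets registered is platform specific and omitted here.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical interrupt-status register and a flag shared with the main loop. */
#define DMA_IRQ_STATUS ((volatile uint32_t *)0x4000300Cu)
#define DMA_IRQ_DONE   (1u << 0)

static volatile bool transfer_done = false;

/* Called when the disk/DMA controller signals that the whole block has landed
 * in memory; the CPU's only work is to acknowledge it and note completion. */
void dma_complete_isr(void)
{
    *DMA_IRQ_STATUS = DMA_IRQ_DONE; /* write-1-to-clear is a common convention */
    transfer_done = true;           /* main code can now consume the buffer */
}
```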

Now, you may wonder what the relationship between Direct Memory Access and Memory Mapped I/O is, then.

Well, it is very simple. You can use DMA with Memory Mapped I/O or even with port-based I/O; it all depends on the design of the chip.

When the source address is fixed, every word of data comes from the same peripheral address while the destination address changes after each transfer, so the DMA fills the buffer in sequence automatically.
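
The C sketch below captures those addressing modes in a hypothetical channel descriptor: the source address stays fixed on the peripheral's data register while the destination increments through the buffer. The structure, field names, and addresses are illustrative only; every DMA engine defines its own register layout.

```c
#include <stdint.h>
#include <stdbool.h>

/* A hypothetical descriptor showing the addressing modes described above:
 * the source stays fixed on the peripheral's data register while the
 * destination walks through the memory buffer. */
struct dma_channel_config {
    uint32_t src_addr;      /* peripheral data register (memory-mapped) */
    bool     src_increment; /* false: every word comes from the same address */
    uint32_t dst_addr;      /* start of the buffer in RAM */
    bool     dst_increment; /* true: advance after each transfer */
    uint32_t word_count;    /* number of words to move */
};

static const struct dma_channel_config rx_cfg = {
    .src_addr      = 0x40002000u, /* hypothetical device data register */
    .src_increment = false,
    .dst_addr      = 0x20008000u, /* hypothetical buffer location in RAM */
    .dst_increment = true,
    .word_count    = 512u,
};
```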

However, there are a few specific advantages of using Memory Mapped I/O as well. Some of these are:

  • It gives a single address space along with a common set of instructions for both I/O and data operations
  • Memory ordering rules and memory barriers can be defined that apply to normal memory and device accesses alike
  • It allows reusing regular memory access instructions, so an entire separate set of I/O instructions and opcodes is not needed
  • Pointers can be used in languages such as C and C++ to access devices, instead of inline assembly or other platform-specific intrinsics
  • The same memory mapping mechanisms used for other memory can be used to control access to devices, and
  • Low latency can be beneficial, since a request routing mechanism will already be in place that allows optimizing normal data accesses.

However, there are a few caveats to Memory Mapped I/O as well, such as:

  • Tagging the pointers as volatile may still be required (see the sketch after this list)
  • Inline assembly or intrinsics may still be required to incorporate memory barriers
  • It potentially complicates the cache controller, because device accesses behave differently from regular memory accesses
  • Instruction scheduling and speculation become more complicated, since the CPU is not aware that a given store or load goes to device memory
  • Other structures within or near the memory system, such as the MMU or Memory Management Unit, need to be informed of such accesses once the request is received and the address is decoded
  • Corner conditions and limitations are introduced, such as the need for specific access widths (for example, 32-bit writes only, with no 8-bit, 16-bit, or 64-bit writes), which may surprise compilers
  • Lower throughput and higher access latency may creep in when requests step off the fast, low-latency path intended for data and onto an I/O subsystem with simpler and slower buses, and
  • Compilers may use instructions that are not compatible with the peripherals unless specific instructions are used for the Memory Mapped I/O accesses.
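
The first two caveats can be seen in the C sketch below: the registers are accessed through volatile pointers, and a fence is placed between the data write and the "go" write. The register names and addresses are hypothetical, and on real hardware a platform-specific barrier instruction may be needed instead of, or in addition to, the standard C11 fence.

```c
#include <stdint.h>
#include <stdatomic.h>

/* Hypothetical data and "go" registers of a memory-mapped device. */
#define DEV_BUF_REG ((volatile uint32_t *)0x40004000u)
#define DEV_GO_REG  ((volatile uint32_t *)0x40004004u)

static void kick_device(uint32_t value)
{
    *DEV_BUF_REG = value;                      /* volatile keeps the compiler from
                                                  caching or eliding this store */
    atomic_thread_fence(memory_order_seq_cst); /* barrier: make the data write
                                                  visible before the "go" write */
    *DEV_GO_REG = 1u;
}
```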

Still, Memory Mapped I/O uses the same address space and bus to address both the I/O devices and the memory. Its features allow:

  • Mapping I/O devices into memory space
  • Allotting memory addresses to I/O devices
  • Treating I/O devices as memory devices from the processor's point of view
  • Making the I/O addresses as big as the memory addresses
  • Increasing the number of I/O devices
  • Transferring data to and from I/O devices with any instruction, such as MOV, and any register of the processor, and
  • Needing only two control signals in the system, Read and Write.

However, memory address decoding will be more complex, slower, and more costly because the addresses are large.

The DMA mechanism, on the other hand, is fast and allows direct communication between the peripheral and the memory with no intervention by the processor.

In this process, however, the processor plays a significant role:

  • It acts as the default bus master
  • It sets up and initializes the transfer parameters
  • It relinquishes control over the bus when a request is made
  • It asserts HLDA (Hold Acknowledge) to inform the DMA controller that it is now the bus master, and
  • It programs the two registers inside the DMA controller, known as the CAR or Current Address Register and the CWCR or Current Word Count Register, to give the starting address and the number of bytes to transfer (a sketch of this step follows this list).
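
As a rough sketch of that last step, the C fragment below programs an ISA-style DMA channel in the flavor of the classic Intel 8237, loading the address and count that end up in the CAR and CWCR. The port_write8() helper is a stand-in for whatever port-output primitive the platform provides, and the exact port numbers and mode value apply to this traditional PC layout only.

```c
#include <stdint.h>

/* Stand-in for the platform's 8-bit port-output primitive (for example an
 * outb wrapper); assumed here, not defined. */
extern void port_write8(uint16_t port, uint8_t value);

/* Program ISA DMA channel 2 (classic Intel 8237 style) to move `count` bytes
 * from a device into memory at physical address `addr`.  The values written
 * here end up in the Current Address Register (CAR) and Current Word Count
 * Register (CWCR) mentioned above. */
static void isa_dma_setup_read(uint32_t addr, uint16_t count)
{
    port_write8(0x0A, 0x06);                      /* mask channel 2 while reprogramming */
    port_write8(0x0C, 0x00);                      /* clear the byte flip-flop */
    port_write8(0x04, addr & 0xFF);               /* address, low byte  -> CAR */
    port_write8(0x04, (addr >> 8) & 0xFF);        /* address, high byte -> CAR */
    port_write8(0x81, (addr >> 16) & 0xFF);       /* page register for channel 2 */
    port_write8(0x0C, 0x00);                      /* clear the flip-flop again */
    port_write8(0x05, (count - 1) & 0xFF);        /* count - 1, low byte  -> CWCR */
    port_write8(0x05, ((count - 1) >> 8) & 0xFF); /* count - 1, high byte -> CWCR */
    port_write8(0x0B, 0x46);                      /* mode: single transfer, write to memory, channel 2 */
    port_write8(0x0A, 0x02);                      /* unmask channel 2: transfer can begin */
}
```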

The DMA controller then takes over and checks the DREQ signal to ensure that the I/O device is ready to make the data transfer.

If DREQ is 1, the controller sends a HOLD signal to the processor to gain control over the system bus and issues DACK to the I/O device to signal that the data transfer is about to start.

The DMA controller transfers one byte in each cycle, and after each byte the Count Register is decremented by 1 and the Address Register is incremented (or decremented, depending on the mode).

This process is repeated until the Terminal Count is reached, which is when the count reaches 0.

Since the data transfer is now complete, the Direct Memory Access controller gives up control of the system bus by deasserting HOLD (HOLD = 0), at which point the CPU becomes the bus master once again.
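
Purely as an illustration of that loop, the C model below mimics the controller's behavior in software: one byte per cycle, the address stepping forward, and the count counting down to the terminal count. It is not how a real controller is driven, only a way to visualize the sequence just described.

```c
#include <stdint.h>

/* A purely illustrative software model of the controller's inner loop: one
 * byte per DMA cycle, ending at terminal count (count exhausted). */
static void dma_model_transfer(const uint8_t *device_data,
                               uint8_t *memory, uint16_t count)
{
    uint32_t address = 0;            /* models the Current Address Register */
    while (count != 0) {             /* terminal count reached when this hits 0 */
        memory[address] = device_data[address]; /* one byte moved per cycle */
        address++;                   /* CAR incremented (or decremented) after each byte */
        count--;                     /* CWCR decremented after each byte */
    }
    /* At this point the controller would drop HOLD and the CPU would become
     * the bus master again. */
}
```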

Therefore, with the dual action of the DMA controller and the CPU, the overall performance of the system is improved and the data transfer becomes very fast, even though the processor is in the 'Hold' state and cannot perform any operations while the DMA controller is using the bus.

In that sense, Direct Memory Access has a significant edge over the Memory Mapped I/O or MMIO.

Conclusion

So, now you surely know the differences between Memory Mapped I/O and Direct Memory Access and which one of them is more suitable for your type of computing needs.

Make your choice accordingly and confidently, which should be very easy by now.
