What is 14 nm Processor? Uses, Benefits & Challenges

What is 14 nm Processor?

The term 14 nm denotes the technology, or lithography, on which a processor is built. Nominally, this signifies that the pitch, or distance, between two transistors in the CPU is 14 nanometers.

KEY TAKEAWAYS

  • The 14 nm technology is the commercial name of a particular lithography or manufacturing process, which indicates the feature size of the semiconductor devices.
  • This particular technology uses second-generation 3D tri-gate transistors to deliver higher density and performance, better switching speeds, and lower capacitance, power leakage, and cost.
  • A wide range of devices use the 14 nm technology, from personal computers to servers and even Internet of Things devices.
  • The major challenges for this technology are managing and minimizing power dissipation, testing, and scaling wire length.

Understanding 14 nm Processor

Advancements in semiconductor manufacturing over the past few years have made the 14 nm process a commercial reality.

Though Intel disputes whether every node branded 14 nm is truly equivalent, most chip manufacturing companies are now in volume production of chips with 14 nm technology.

The 14 nm process is the successor to the 22 nm or 20 nm process node.

This particular MOSFET technology node was named by the International Technology Roadmap for Semiconductors or ITRS.

The 14 nm nodes typically use the fin field-effect transistor, or FinFET, technology, which is a form of multi-gate MOSFET technology.

This specific technology is considered the non-planar evolution of planar silicon CMOS technology.

Samsung Electronics announced its 14 nm process in 2014, around the time Intel began shipping its first 14 nm devices to consumers.

In a 14 nm processor, the composition and design of the Integrated Circuit or IC can vary from one manufacturer to another, since each pursues different goals.

For example, Apple's 14 nm A9, manufactured by Samsung, devotes about a third of its die area to cache.

On the other hand, the cache in Intel's Broadwell accounts for just 10% of the die.

Similarly, Intel's Broadwell and Skylake implement a large number of high-speed cells that are inherently sparse, aiming at high performance.

Tall cells account for around 30% of Skylake's composition, compared with less than 1% in Apple's A8 or A9.

Tall logic cells are usually intended for high-performance, high-frequency, fast-switching circuitry in the processor.

On the other hand, the short cells are best for high density.

Intel's 14 nm process had a bumpy start, but things began to improve with Skylake, and 14 nm became home to Intel's second-generation FinFET transistors.

Intel uses 193 nm immersion lithography for patterning the layers, along with Self-Aligned Double Patterning, or SADP.

For the work-function metals, it uses TiN for pMOS and TiAlN for nMOS.

Compared to all other 14 nm nodes, Intel's is the densest, with more than 1.5x the raw logic density.

The 14 nm process has undergone several refinements over the years to optimize clock speed, lower power consumption, and raise drive current.

Intel used the original 14 nm process for its Broadwell and Skylake processors, then improved on it considerably in its second process, 14 nm+.

This offered 12% more drive current while consuming less power.

This process is used in both Kaby Lake and the server/HEDT Skylake-SP/X processors.

There is also a third, further improved process called 14 nm++, which allows more than 23% higher drive current and consumes 52% less power compared to the original 14 nm process.
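
As a rough, back-of-the-envelope illustration of how those quoted figures stack up, here is a minimal sketch in Python. It uses only the percentages cited above, normalized to the original 14 nm process; the absolute baselines are not public at this level of detail, so treat the outputs as relative figures only.

```python
# Relative comparison of Intel's 14 nm process variants, normalized to
# the original 14 nm process (baseline = 1.0). Figures are the relative
# improvements quoted above; absolute values are not public.
variants = {
    "14 nm":   {"drive_current": 1.00, "power": 1.00},  # baseline
    "14 nm+":  {"drive_current": 1.12, "power": None},  # 12% more current; power cut unquantified
    "14 nm++": {"drive_current": 1.23, "power": 0.48},  # >23% more current, 52% less power
}

for name, v in variants.items():
    power = f"{v['power']:.2f}x" if v["power"] is not None else "lower (unquantified)"
    print(f"{name:>8}: drive current {v['drive_current']:.2f}x, power {power}")
```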

However, the impact on density is not clear, since 14 nm++ relaxes the poly pitch from 70 nm to a more comfortable 84 nm.

The tri-gate fins are thinner, taller, and spaced closer together, which delivers a better computing experience while consuming less active power and, as a result, extending battery life.

In addition, the 14 nm technology offers good dimensional scaling, lower capacitance, and improved density, and at the same time cuts the area of the SRAM cell almost in half thanks to the thinner fins.

Uses

The 14 nm technology is used to produce many different products, ranging from low-power to high-performance parts.

These products include everything from personal computing devices to servers and Internet of Things devices.

These products make the best use of the second-generation 3D tri-gate transistors.

These powerful transistors help processors deliver remarkable power efficiency, density, and performance at a lower cost.

The 14 nm transistors also reduce power leakage.

In terms of applications, 14 nm chips are mainly used in application processors, high-end consumer electronics, automotive electronics, and AI chips.

Benefits

With the 14 nm process, the capabilities of chips have increased manifold in terms of three of the most crucial measurements: gate pitch, fin pitch, and interconnect pitch.
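
To make that scaling concrete, here is a minimal sketch comparing pitch figures across Intel's two nodes. The specific numbers (60/90/80 nm at 22 nm; 42/70/52 nm at 14 nm) are assumptions drawn from Intel's public disclosures, not from this article, so treat the result as a rough estimate.

```python
# Rough logic-area scaling estimate from published pitch figures (assumed).
# Logic cell area scales roughly with gate pitch x minimum metal pitch.
pitch_22nm = {"fin": 60, "gate": 90, "metal": 80}  # nm (Intel 22 nm, assumed)
pitch_14nm = {"fin": 42, "gate": 70, "metal": 52}  # nm (Intel 14 nm, assumed)

area_scale = (pitch_14nm["gate"] * pitch_14nm["metal"]) / (
    pitch_22nm["gate"] * pitch_22nm["metal"]
)
print(f"Approximate logic area scaling: {area_scale:.2f}x")  # ~0.51x, i.e. ~2x density
```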

The changes in size and design made possible by the 14 nm process have in turn improved FinFET performance as well.

The size reduction has resulted in tighter density, with taller and thinner fins that generate more drive current and increase the chip's overall performance.

Also, with fewer fins in each transistor, density improves further, and the transistor's capacitance is reduced as well.

The process helps chips maintain, if not improve, the switching speeds of their transistors.

As a result, power leakage drops while switching speeds hold steady, which improves the performance curve.

The 14 nm process is considered very important for chip makers with low-power goals, as with Intel's Core M processors.

The 14 nm process has enabled chips, especially Intel's, to double their performance per watt.
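
Performance per watt is simply useful work divided by power drawn, so a doubling can come from holding performance steady at half the power, doubling performance at the same power, or any mix in between. A tiny sketch with purely hypothetical numbers:

```python
# Performance per watt = performance / power. The numbers below are
# hypothetical, chosen only to show two ways of doubling the ratio.
def perf_per_watt(performance: float, watts: float) -> float:
    return performance / watts

baseline = perf_per_watt(100.0, 20.0)     # 5.0 units/W
half_power = perf_per_watt(100.0, 10.0)   # 10.0 units/W: same work, half the power
double_perf = perf_per_watt(200.0, 20.0)  # 10.0 units/W: double the work, same power
print(baseline, half_power, double_perf)
```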

On the cost front, the superior double patterning has kept costs in check while achieving better-than-normal area scaling, together with a marked improvement in per-core power consumption.

This has allowed the chips to offer more than single-digit performance uplifts, with better CPU behavior even under load.

Challenges

One of the most significant challenges to overcome is yield, considering the heavy investment required in 14 nm research and development.

This cost factor has no doubt played a significant role in holding companies back, even majors like Intel, from pressing ahead with 14 nm technology and production for a long time.

It is quite difficult to achieve 14 nm resolution, especially in a polymeric resist, even when electron beam lithography is used.

Apart from that, resolution is limited and made unreliable by the chemical effects of ionizing radiation.

With existing state-of-the-art immersion lithography, resolution down to around 30 nm is achievable, but anything beyond that requires multiple patterning and hardmask materials.
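
That limit follows from the Rayleigh criterion, R = k1 x wavelength / NA. Here is a minimal sketch, assuming typical 193 nm water-immersion values (NA of about 1.35, practical k1 of roughly 0.25 to 0.30); these are standard textbook figures rather than numbers taken from this article.

```python
# Rayleigh criterion: minimum resolvable half-pitch R = k1 * wavelength / NA.
wavelength_nm = 193.0  # ArF excimer laser used in immersion lithography
na = 1.35              # typical NA for water-immersion 193 nm scanners (assumed)

for k1 in (0.25, 0.28, 0.30):  # practical single-exposure k1 range (assumed)
    half_pitch = k1 * wavelength_nm / na
    print(f"k1 = {k1:.2f}: half-pitch ~ {half_pitch:.1f} nm")
# Prints roughly 36-43 nm, which is why single-exposure 193 nm immersion
# cannot print 14 nm-class features directly and double patterning is needed.
```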

Another significant limitation comes in the form of plasma damage to the low-k materials.

Typically, this damage extends about 20 nm deep, but under unfavorable conditions it can reach around 100 nm.

And as the low-k materials become more porous, both this damage and the sensitivity to it are expected to get even worse.

Technically speaking, unconstrained silicon has an atomic radius of about 0.11 nm, which means only around 90 Si atoms span the length of the channel. At that scale, considerable leakage results.
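
As a sanity check on that atom count, here is the arithmetic. The roughly 20 nm effective channel length is an assumption (actual gate lengths on a "14 nm" node are larger than the node name suggests) and is not stated in the article.

```python
# Rough estimate of how many silicon atoms span the transistor channel.
atomic_radius_nm = 0.11                    # unconstrained Si atomic radius (from the text)
atomic_diameter_nm = 2 * atomic_radius_nm  # ~0.22 nm per atom

channel_length_nm = 20.0  # assumed effective channel length on a "14 nm" node
atoms_across = channel_length_nm / atomic_diameter_nm
print(f"~{atoms_across:.0f} Si atoms span the channel")  # ~91 atoms
```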

The EDA, or Electronic Design Automation, companies face the challenge of meeting requirements for low power dissipation, smaller-geometry design, and low cost.

Companies face the challenge of shrinking the technology nodes, and the transistors, further and further in order to keep the overall chip small.

This is hard to accomplish without compromising the performance and quality of the chips, since manufacturers also have to meet consumer demand for Internet of Things products.

Therefore, the chip manufacturing companies have to look for IoT components that are very small in size yet good in terms of both affordability and RF performance.

Then there is the problem of power dissipation.

Since IoT is expanding continuously and becoming the next frontier of technology, a new variety of applications will demand power minimization.

As of now, this is nearly impossible to achieve with such small transistors without affecting the overall performance of IoT devices and applications.

Analysis and management of power dissipation is critical, but it has become more difficult for chip makers in every category, whether dynamic power, leakage, or short-circuit dissipation.

The chip manufacturing companies are also worried about security and privacy once IoT takes off and becomes mainstream.

There is a high possibility of information being stolen, and there is also significant concern over the security of the chips' implementation.

Though a few companies have resolved this specific implementation concern, the main concern, preventing information theft, still remains.

Adequate security measures need to be implemented both in the applications and in the network connections.

Another significant issue is testing the small transistors in order to ensure performance.

The reduced size of the transistors is giving rise to small-delay defects, or SDDs.

Companies cannot detect these defects reliably because they cannot identify a suitable DFT, or Design for Test, technique that would boost chip performance while reducing cost and time to market.

Also, as the transistors shrink, chip manufacturers and their engineers face a significant challenge in scaling wire length.

This results in wire interconnect delay, and it adds to the difficulty of fine-pitching the wires to improve density at lower design geometries while limiting complexity.

Localizing the entire supply chain and 14 nm chip production is another significant challenge for the chip manufacturing companies to overcome.

A lot of resources are required, beyond machinery capable of handling production lines for large-scale chip manufacturing.

This includes, but is not limited to, CMP, etching, CVD, heat treatment, cleaning machines, and much more.

Self-sufficiency in 14 nm chips will therefore mean a lot for transforming the semiconductor industry in an era when Moore's Law seems to be drawing to a close.

Voltage scaling is another major issue for high-performance designs, due to several factors such as leakage, steeper sub-threshold gradients, variability, gate oxide thickness, high-k materials, and RDF, or Random Dopant Fluctuation.

Then there are reliability challenges in the wires and devices. Companies need to worry about many sources of variability: fin width and fin height, channel length, gate Line Edge Roughness (LER), exacerbation by 3D effects, quantization of device widths, device parasitics, and varying contact resistance, all of which can have random, correlated effects.

Wire scaling is another significant challenge for high-performance designs, because these designs cannot tolerate large RC increases.

The need for fine-pitched wires to improve density, along with more interconnect layers and short-run local connections, is another significant challenge to overcome.

Optimizing wire-plane usage will limit the complexity of the technology, and better DA tools will ensure the design rules are negotiated for the best levels of optimization at the driver end.

However, achieving all of this is neither an easy nor a complication-free task.

Since enhancing the performance of the transistors and the processor is tricky and needs special treatment of the wires, pushing wire widths and adding buffers will not be easy either.

Apart from the issues created by double-patterning lithography and the fat wires that cause local disruption, another challenge is understanding the mask colors, the assignment of features between the two masks, for proper analysis.

This is because the coloring rules are quite complicated and are not purely local. Coloring is subject to external factors, so adequate color awareness and analysis are required to ensure maximum accuracy.

Correlated capacitance shifts, correct DPL solutions, and interconnect reliability appear to be some of the other challenges the 14 nm process poses for chip manufacturing.

In short, the challenges around scaling and wire interconnects restrict the connectivity of devices in the IoT ecosystem and lower their performance on the whole.

Therefore, leakage, feature size, and power dissipation are a few specific challenges that the chip manufacturing companies need to look into immediately, along with other smaller yet significant ones.

The good news, however, is that improvements are in the pipeline that will help these process nodes perform better in the coming years.

Chip manufacturing companies are constantly on the lookout for ways to put existing and future challenges to rest.

Conclusion

So, as you can see, the 14 nm process is a good one, but it has not been without challenges and delays.

Though several companies have moved on to even better processes, Intel seems to be a bit behind the curve.

However, there are a lot of good reasons to be optimistic about the 14 nm process.

About Dominic Chooper

Dominic Chooper, an alumnus of Texas Tech University (TTU), possesses a profound expertise in the realm of computer hardware. Since his early childhood, Dominic has been singularly passionate about delving deep into the intricate details and inner workings of various computer systems. His journey in this field is marked by over 12 years of dedicated experience, which includes specialized skills in writing comprehensive reviews, conducting thorough testing of computer components, and engaging in extensive research related to computer technology. Despite his professional engagement with technology, Dominic maintains a distinctive disinterest in social media platforms, preferring to focus his energies on his primary passion of understanding and exploring the complexities of computer hardware.
