Removing Test Barriers to Moore's Law
(2/2/2002) Future Fab Intl. Issue 12
By Mike Mayberry, Intel Corporation


Introduction

Moore’s law and its derivatives have been driving our industry for decades, serving not only as an overall goal but also often as a detailed roadmap for scaling. Some simplistic transistor scaling trends per generation:

Transistor density: 2.0x (compaction product)
Gate delay: 0.7x (transistor speedup)
Gate voltage: 0.7x (ideal E-field scaling)

From these we can surmise for a product migrated to a new process generation:

Die area: 0.5x
Frequency: 1.4x (inverse of gate delay)
Power: 0.5x (scaled CV²f)
Power density: 1.0x (power/area)

Therefore, a compaction product runs faster, consumes less power, and costs less. Everybody wins!
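
To make the arithmetic behind these numbers explicit, here is a small sketch of the ideal, constant-E-field scaling case. It assumes per-transistor capacitance tracks the 0.7x linear shrink; the other factors are those listed above.

# Ideal per-generation scaling for a compaction product (constant-E-field scaling).
# Assumes per-transistor capacitance tracks the 0.7x linear shrink.
linear  = 0.7              # feature size scale factor
cap     = linear           # per-transistor capacitance ~0.7x
voltage = 0.7              # ideal E-field scaling
freq    = 1 / linear       # inverse of gate delay ~1.4x

die_area      = linear ** 2                # ~0.5x (same transistor count)
power         = cap * voltage**2 * freq    # C*V^2*f ~0.5x
power_density = power / die_area           # ~1.0x

print(f"die area {die_area:.2f}x, power {power:.2f}x, power density {power_density:.2f}x")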

Reality is not quite as simple. New circuit and architectural techniques continuously increase frequencies at a faster rate, which in turn translates into more power consumption (Figure 1). From 1993 to 2001, CPU core frequencies grew from 66MHz to 2000MHz, a gain of 30x, significantly faster than expected from transistor scaling alone. Over time, higher yields enable larger die sizes and therefore more transistors to test. And recently, voltage scaling has started to slow, which accentuates the power issue.

Revised scaling might look like:

Transistor density: 2.0x
Frequency: 2.0x (transistors + circuits + ...)
Voltage: 0.85x (recent trend)
Power density: 1.7x (compaction); 2.0x (new architecture)
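
The power density figures follow from the same C·V²·f arithmetic applied to these trend numbers; the split of the frequency gain between a compaction and a new architecture is an illustrative assumption here, not a figure from the text.

# Revised scaling sketch: 2.0x transistor density, 0.85x voltage, 0.7x per-transistor
# capacitance, and a frequency gain that depends on how much circuit and architecture
# work goes into the part. The frequency split below is an illustrative assumption.
def power_density(freq_gain, trans_density=2.0, voltage=0.85, cap=0.7):
    per_transistor_power = cap * voltage**2 * freq_gain   # C*V^2*f per transistor
    return per_transistor_power * trans_density           # times transistors per area

print(f"compaction (assumed ~1.7x frequency gain): {power_density(1.7):.1f}x")
print(f"new architecture (2.0x frequency gain):    {power_density(2.0):.1f}x")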

All these have a major impact on test capability and costs.

Power issues are becoming a critical challenge for test. Getting power into the device requires high currents at lower voltages and the ability to respond to very fast current transients. Removing power is equally challenging, especially given the requirement for a temporary contact to the device. These can drive up the cost of testers and handling equipment and can also limit the degree of test parallelism, which in turn increases effective test times.
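
As a rough illustration of what power delivery through a temporary contact implies, the following arithmetic uses assumed device numbers; the power, voltage, transient and droop budget are illustrative, not from the article.

# Illustrative power-delivery arithmetic for a temporary test contact.
# All device numbers below are assumptions for illustration.
power_w   = 75.0                         # assumed device power under test (W)
voltage_v = 1.5                          # assumed core supply (V)
current_a = power_w / voltage_v          # 50 A of supply current
print(f"supply current: {current_a:.0f} A")

# A fast load step demands very low supply-path inductance to stay within the
# droop budget: V = L * di/dt  ->  L_max = V_droop / (di/dt)
di = 0.5 * current_a                     # assumed 25 A load step
dt = 20e-9                               # assumed 20 ns transient
v_droop = 0.05 * voltage_v               # assumed 5% droop budget
l_max = v_droop / (di / dt)
print(f"maximum tolerable supply loop inductance: {l_max * 1e12:.0f} pH")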

A unique power problem for test is the need to burn-in devices at elevated voltages. While many ICs do not require burn-in, the most complex designs and those on leading edge fab processes typically require this step to find and screen latent defects that would fail in the customer’s hands. However, the higher voltage significantly elevates transistor and gate leakage. Leakage current has been increasing at greater than 3x per generation as transistor threshold voltages are dropped and as oxide layers are thinned.
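
The exponential dependence of subthreshold leakage on threshold voltage is what drives that growth; the sketch below uses assumed values (ideality factor, per-generation Vt reduction) purely for illustration.

import math

# Subthreshold leakage grows exponentially as threshold voltage drops:
#   I_off ~ exp(-Vt / (n * kT/q))
# The ideality factor and the per-generation Vt reduction below are assumptions.
kT_q = 0.026                             # thermal voltage at room temperature (V)
n    = 1.5                               # assumed subthreshold ideality factor

def leakage_growth(delta_vt):
    """Leakage multiplier for a threshold-voltage reduction of delta_vt volts."""
    return math.exp(delta_vt / (n * kT_q))

# An assumed ~45 mV Vt reduction per generation already gives roughly 3x more leakage,
# before counting thinner oxides or the elevated burn-in voltage and temperature.
print(f"leakage growth for a 45 mV lower Vt: {leakage_growth(0.045):.1f}x")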

More transistors require more interconnects and I/Os to efficiently move information (Rent’s rule) and higher total bandwidth on and off die to keep up with the core. For a tester that talks to the device in its native mode, this drives a requirement for faster pin electronics and/or more total channels, which in turn drives up tester costs.

Complexity driven by higher transistor count is another critical challenge and is discussed in more detail in the next section. As the transistor count doubles, the volume of required test content also increases, since in the end each transistor needs to be verified. For example, many common memory test patterns have lengths proportional to the number of bits N, or worse. While it is impossible to generalize, a transistor growth of 2x might translate to sqrt(2) more I/Os and also sqrt(2) more total test time.
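
A back-of-the-envelope version of that argument, assuming a Rent exponent of about 0.5 to match the sqrt(2) figure above (real designs vary):

# Growth of I/O count and test time when transistor count doubles.
# Rent's rule: I/O ~ k * N^p, with an assumed Rent exponent p = 0.5.
p = 0.5
transistor_growth = 2.0

io_growth = transistor_growth ** p                  # ~1.41x more I/Os
# If test data volume scales with transistor count but the interface only widens
# by the I/O growth, the time to deliver that data grows by the ratio:
test_time_growth = transistor_growth / io_growth    # ~1.41x longer test time

print(f"I/O growth: {io_growth:.2f}x, test time growth: {test_time_growth:.2f}x")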

Figure 1. CPU clock trend for recent microprocessors.

Figure 2. 1997 SIA test capital trend.

Figure 3. Defect breakdown for microprocessor manufactured on 0.13µm process.

Figure 4. Pattern frequency sensitivity for microprocessor manufactured on 0.13µm process.

Combining this test time growth with the increasing cost of testers translates to a disturbing trend, as captured in the 1997 SIA capital test cost trend[1] (Figure 2). Test costs per transistor are roughly flat in this chart, which means that over time they will consume a larger fraction of the product cost relative to silicon costs. If the trends were to continue, eventually it could cost more to test than to manufacture the device. There are already reports of ASIC devices that cost more to test than to fabricate the die. The challenge for test development is to anticipate Moore’s law scaling and innovate fast enough to stay ahead.
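
The arithmetic behind that concern is simple: a flat test cost per transistor set against a falling silicon cost per transistor must eventually cross. The starting ratio and the silicon cost trend below are illustrative assumptions, not figures from the SIA chart.

# Flat test cost per transistor vs. falling silicon cost per transistor.
# Starting ratio and the 0.7x-per-generation silicon trend are assumptions.
si_cost   = 1.0      # silicon cost per transistor, arbitrary units
test_cost = 0.1      # assumed test cost per transistor, held flat

for generation in range(8):
    print(f"generation {generation}: test/silicon cost ratio = {test_cost / si_cost:.2f}")
    si_cost *= 0.7   # assumed silicon cost scaling per generation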

Test Content

Leading edge designs are so complicated that it is difficult to write tests that cover all of the design. At the same time, shrinking features are exposing new types of defects, which need to be screened. Another trend is to integrate multiple functional blocks together as system-on-chip products. This creates new testing challenges that can be significantly worse than those of the standalone blocks.

Test content generation and delivery thus requires a high degree of automation, which in turn requires simplification to make it tractable to compute. For many decades, the industry has used stuck-at faults as a measure of overall coverage even though many defects do not electrically behave as stuck-at faults. The content, regardless of the fault model used in generation, must have high coverage for the defects observed in manufacturing. It also needs to execute efficiently on the available tester architecture.

Figure 3 illustrates the spectrum of defects that need to be detected by test. This particular snapshot is based on a microprocessor manufactured on a 0.13µm fab process and excludes gross shorts/opens. For this data set, roughly 90% of the random defects were ‘hard defects’: they cause device failures under all conditions, regardless of voltage, temperature or frequency. These defects are relatively easy to catch because nearly any type of test will detect them, but because they are the most numerous, uniform coverage across the entire device is required. Stuck-at coverage correlates well to coverage for hard defects.

For this data set, about 10% of the random defects are ‘soft defects’: they only fail under particular combinations of test conditions and test application. They can be further divided by their impact on device speed. Most have a large impact on speed, typically greater than 10-20%; some have a very small impact on speed and may not be detectable for most paths within the device. The mix of defects often shifts over time as manufacturing learning takes place.

Beyond random defects are other manufacturing deviations that must be detected, and these are highly dependent on device design and manufacturing process. Figure 4 shows pattern sensitivity information for speed testing[2]. The peak at the left corresponds to systematic limiting paths in the design that are sensitive to small changes in the manufacturing process. Testing for these paths must be especially precise. The tail to the right corresponds to speed-sensitive soft defects. Here there are many different failure modes requiring broad coverage but somewhat less precision. Test content must cover both.

The defect distribution that causes failures is a function not only of the fabrication process but also of the product design. Large devices and large volume manufacturing will effectively sample more kinds of defects than a limited run on less complicated devices. At the same time, a circuit with large margin may be insensitive to certain defects while another with less margin will fail. Figure 5 shows failure analysis trends across several process generations from SRAM development vehicles [Intel 2001]. The observed number of opens has increased over time and opens typically manifest as soft defects. As voltage scaling continues, the available circuit margin decreases and designs become more sensitive to these defects. Test content must evolve.

Figure 5. Failure analysis results across fab process.

Testing Strategies

Testing can be classified into two general types: functional and structural. At its simplest, functional testing attempts to reproduce how the product will be used in its end environment. Inputs receive signals while outputs are monitored and the result is compared against a previously calculated behavioral model. [A variation involves checking against a golden unit, which eliminates the need to calculate and store the response.] Because this type of testing mimics the customer usage, it has the highest correlation to customer observed fallout.

A general-purpose functional tester must reproduce the system electrical environment for the device. Each pin must have a contact, each input a driver, and each output a receiver. The channel must be able to run in whatever data format and frequency the device requires. Since it is not desirable to rebuild a tester each time the device configuration changes, this drives each channel to be a superset of the requirements. Each must run up to maximum frequency and run all required data formats.

Not surprisingly, most of the expense of a functional tester is in the channels. Each new CPU generation typically requires double the overall data rate, and advances in channel electronics just keep up. This in turn results in roughly constant channel costs. Figure 6 shows data for testers used for Intel® microprocessors. Although the trend is noisy, the price per pin for high performance functional testers has hovered around $9K for the last decade. Since the needed pin count has been growing over time, this results in more expensive testers. For example, the tester used for the Intel® Pentium® II processor cost 3.6x more than that used for the Intel386™ processor, somewhat faster growth than predicted by the SIA model discussed in the introduction.

Figure 6. Cost per pin trend for high performance functional testers. Time span is roughly a decade.

Separate from the problem of tester capability and cost is the problem of generating test content. Functional testing is relatively easy for devices with a very regular architecture and simple access, like memories. It is easy to understand the range of possible combinations and target tests to cover all expected defects (although in this case the amount of testing for all possible combinations might be prohibitive).

For a CPU or for a product made from integrated functional blocks, the combination space becomes much larger and internal states complicate the problem. Determining how to sensitize and observe all portions of the device from the outside can require huge test writing efforts. Writing manual functional tests for a complex processor can take many tens of engineering-years.

Structural testing by contrast breaks a design into smaller blocks and then targets defect testing within that block. The test content is typically delivered to the device in a manner independent of the application within the device. In this way, the tester architecture can be decoupled from the test application. Another key advantage of structural testing is that the blocks become small enough to automatically compute the test content (automatic test pattern generation or ATPG), saving considerable design effort. Structural testing requires complementary design for test (DFT) to make it work.

Scan and its variations do this by adding extra capability to the latches that stimulate and observe logic within the design. These latches are then chained together for scan access outside the device. This is very good for getting uniform coverage across the design for hard defects. The number of scan latches grows linearly with the complexity of the design, so as designs evolve, the data volume and test time can become a problem[3]. Scan also suffers in coverage for soft defects: because the device is being exercised differently than it would be in native mode, differences in power and timing between blocks can lead to correlation issues.
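
A minimal sketch of the scan idea, assuming a simplified, hypothetical chain rather than any particular implementation: in test mode the design's state latches form one long shift register, so internal state can be loaded and unloaded through a single scan-in/scan-out pin pair.

# Minimal scan-chain sketch: latches are chained into a shift register for test access.
class ScanChain:
    def __init__(self, num_latches):
        self.latches = [0] * num_latches

    def shift_in(self, pattern):
        """Serially shift a stimulus pattern into the chained latches (scan-in)."""
        for bit in pattern:
            self.latches = [bit] + self.latches[:-1]

    def shift_out(self):
        """Serially unload the captured state (scan-out)."""
        captured, self.latches = list(self.latches), [0] * len(self.latches)
        return captured

chain = ScanChain(num_latches=8)
chain.shift_in([1, 0, 1, 1, 0, 0, 1, 0])   # load stimulus into internal latches
# ... a functional capture clock would sample the logic response here ...
print(chain.shift_out())                    # observe the captured state

Because every latch sits in the chain, the shift length, and with it the data volume and shift time, grows with the design, which is the scaling problem noted above.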

Built In Self Test (BIST) consists of an on-die stimulator and observer that can be triggered from the outside. For example, one section of the device could have a Programmable Built In Self Test engine (PBIST) for large memory arrays. The tester sends algorithm instructions to the PBIST engine through special test modes and then the PBIST engine executes the actual test. In this example, the external tester can be very simple as it merely instructs the built-in tester. The PBIST engine itself handles the complexity and can be tailored to the test needs for that array. While this requires more design effort and transistors, as those transistors become cheaper, these BIST engines become cheaper to implement. BIST can address the data volume and test time issues but can still suffer from correlation issues just like scan.
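
For a flavor of what such an engine executes, here is a sketch of a march-style memory test of the kind a PBIST could run against an embedded array; this is a generic March C- illustration, not a description of Intel's PBIST instruction set.

# Generic March C- style memory test: linear in the number of bits N.
def march_c_minus(memory):
    n = len(memory)
    errors = []

    def read_write(addr, expect, write):
        if memory[addr] != expect:
            errors.append(addr)
        memory[addr] = write

    for a in range(n):                # ascending (w0): initialize every cell to 0
        memory[a] = 0
    for a in range(n):                # ascending (r0, w1)
        read_write(a, 0, 1)
    for a in range(n):                # ascending (r1, w0)
        read_write(a, 1, 0)
    for a in reversed(range(n)):      # descending (r0, w1)
        read_write(a, 0, 1)
    for a in reversed(range(n)):      # descending (r1, w0)
        read_write(a, 1, 0)
    for a in range(n):                # ascending (r0): final read
        if memory[a] != 0:
            errors.append(a)
    return errors

print(march_c_minus([0] * 16))        # a fault-free array returns no failing addresses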

Referring back to Figure 3, all of the hard defects can be caught with slow speed scan and most of the soft defects can be caught with multi-cycle scan or BIST. However, that still leaves a few percent of random defects and many systematic/speed defects that require special functional tests. Small designs with larger margins may not require these special tests. For large designs, no single type of testing is sufficient to cover the range of possible failures.

Distributed Test

Given that multiple test schemes are required to catch the expected range of defects, it is tempting to design a tester that does everything. That flexibility is costly, however, especially when it means replacing existing capability. Instead, we chose to develop a hierarchy of capabilities, each with optimized hardware for a particular kind of testing. We then segment tests across these multiple platforms in order to optimize the economic costs. Figure 7 shows the hierarchy of capability. This strategy of splitting content across multiple sockets is known as Distributed Test.

Figure 7. Hierarchy of test content and optimal test platform.

The medium bus capability referred to in the figure, or structural tester, embodies a defined testability interface that is independent of the product characteristics. The driver complexity is roughly equivalent to the previous generation of functional tester, but the bus is intentionally defined to be narrower than the device native mode (2-5x fewer pins). This results in a lower channel cost (Figure 8) and a still lower system cost. The narrower bus and an appropriate choice of system architecture enable parallel test sites. Other benefits were improved supply, since leading edge electronics were not required, and even cheaper subsequent iterations. The latter cost decrease shows Moore’s law at work, this time for the benefit of test! Overall, the 2nd generation structural tester achieves a 10x per-site cost reduction versus the functional tester.
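
A rough per-site cost model shows how a narrower bus and parallel sites combine; every dollar figure and ratio below is an illustrative assumption, not an actual tester cost.

# Per-site cost model for functional vs. structural test (all numbers assumed).
def cost_per_site(channels_per_site, cost_per_channel, base_cost, parallel_sites):
    system_cost = base_cost + parallel_sites * channels_per_site * cost_per_channel
    return system_cost / parallel_sites

functional = cost_per_site(channels_per_site=500, cost_per_channel=9_000,
                           base_cost=500_000, parallel_sites=1)
structural = cost_per_site(channels_per_site=150, cost_per_channel=2_500,
                           base_cost=500_000, parallel_sites=4)

print(f"functional tester, per site: ${functional:,.0f}")
print(f"structural tester, per site: ${structural:,.0f} "
      f"(~{functional / structural:.0f}x cheaper)")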

Figure 8. Channel cost for structural compared with functional testers. MP1 indicates massively parallel structural tester.

An example of the narrow bus capability is the massively parallel structural tester embodied by the Test During Burn In (TDBI) implementation. Here the testability bus is reduced to only four channels and one set of shared resources. The net result is an extremely low tester cost per device, but a tester that is only optimal for tests that require little data transfer. The PBIST example above is a perfect example of a test that runs very efficiently on the TDBI system. Other tests, which require much more data transfer, won’t run as well on TDBI and will remain on the structural class socket. Migration of content between the structural and massively parallel structural sockets is opportunistic and device dependent.

A particular product might differ in the relative mix among the steps, but over time we expect more and more content to migrate to the cheaper platforms. For example, functional tests can be ported to run as internal tests using DFT to make them compatible with the structural tester. Because we have moved the bulk of the content off the expensive functional tester, the remaining functional testers can in turn support higher volumes, and we can use special modes on the tester to extract more capability. With this strategy in place, our cost curve looks more like Figure 9 and we now track the Si curve.

Figure 9. Relative cost curves incorporating benefit of Distributed Test.

Future Trends

Moore’s law scaling will continue and transistors will continue to become cheaper and faster. This allows the addition of functions beyond pure computation, as in system-on-chip products. To efficiently test these devices, the problem of moving test data around must be solved, since in many cases there is no efficient direct access for the tester. For difficult-to-access blocks, BIST is a key solution[4] and cheaper transistors enable it. Over time, a larger and larger fraction of the device becomes self-tested and, in some special cases, self-healing. This in turn takes more of the burden off the external tester and enables further cost scaling.

I/O speeds will continue to increase beyond the realm of general-purpose testers. It is likely that a device specific interface along with appropriate internal DFT will be needed for multi-GHz data rates.

Scaling will drive new defect mechanisms. Processes in development are exhibiting new failure modes and the need to develop new methods for detection. For example, fluctuations in threshold voltage due to the small number of dopant atoms in the junction are starting to become detectable. For some of these ‘soft defects’, a strategy to test and detect multiple times (N-detect) is required, and this is another motivation for programmable BIST[5,6].
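
The value of N-detect can be seen with a simple escape-rate calculation; the 30% single-detection probability below is an illustrative assumption for a marginal resistive defect.

# Why repeated detection (N-detect) helps with marginal 'soft' defects: if one
# detection attempt only catches a defect with some probability, independent
# repeated detections drive the escape rate down. p_single is assumed.
p_single = 0.30

for n in (1, 3, 5, 10):
    escape = (1 - p_single) ** n
    print(f"N = {n:2d}: escape probability = {escape:.3f}")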

Conclusions

Physical scaling drives many challenges, particularly for functional test. Power delivery and removal, and especially burn-in power, dominate the physical challenges today. On the content side, migration to structural test and distributed test allows us to keep up with ever more complex products by breaking the design into more manageable blocks. Further optimization between the design and the test application is required to take full advantage of this migration, and it requires extensive knowledge of expected defect mechanisms. The rising gap between data rates internal and external to the device, and the need to N-detect defects, push in the direction of more on-die testing.

Acknowledgments and References

[1] The National Technology Roadmap for Semiconductors, 1997 Edition. Semiconductor Industry Association.

[2] M. Rodgers, ‘Defect Screening Challenges in the Gigahertz/Nanometer Age: Keeping Up with the Tails of Defect Behavior’, Proceedings of International Test Conference, 2000.

[3] Y. Sato, T. Ikeya, M. Nakao, T. Nagumo, ‘A BIST Approach for Very Deep Sub-Micron (VDSM) Defects’, Proceedings of International Test Conference, 2000.

[4] G. Hetherington, T. Fryars, N. Tamarapalli, M. Kassab, A. Hassan and J. Rajski, ‘Logic BIST for Large Industrial Designs: Real Issues and Case Studies’, Proceedings of International Test Conference, 1999.

[5] S. Chakravarty, ‘On the Capability of Delay Tests to Detect Bridges and Opens’, Proceedings of the 6th Asian Test Symposium, 1997.

[6] K. Baker, G. Gronthoud, M. Lousberg, I. Schanstra, C. Hawkins, ‘Defect-Based Delay Testing of Resistive Vias-Contacts: A Critical Evaluation’, Proceedings of International Test Conference, 1999.

® and ™ indicate trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Biography

Mike Mayberry

Mike Mayberry is the Director of Sort Test Technology Development at Intel Corporation, based in Hillsboro, Oregon. Dr. Mayberry received a Ph.D. in Chemistry from the University of California, Berkeley, in 1984, and joined Intel that same year. While at Intel, he has worked on process development for a variety of magnetic memory, nonvolatile memory and logic fabrication processes. Since 1994, he has worked on test process development for Intel microprocessors.

 
 