Equipment Process Time Variability: Cycle Time Impacts
(6/29/2001) Future Fab Intl. Issue 11
By Peter Gaboury, STMicroelectronics


The success of any fab rests on its ability to measure, control, and systematically reduce cycle time. The difficulty with keeping cycle time under control is finding the best compromise between fab capacity (hence capital spending), equipment utilization, WIP management, 'hot lot' cycle time, and overall fab cycle time. One often-ignored cycle time lever is variability – an important factor impacting cycle time, and one that is difficult to measure and even more difficult to improve. The sources of variability come from all aspects of the fab: equipment, process, product, and manufacturing. Variability lengthens queue time, causing congestion, and increases the uncertainty of arrival rates and of processing time duration[1]. Like rabbits, variability propagates, impacting arrival and departure rates of material from one machine to the next. Intel has presented, during several SEMATECH Manufacturing Methods Symposia, an internal industrial metric called A80: a measurement of the variability of equipment availability[2]. A80 is the value of availability where, 80% of the time, the equipment is up and ready for processing. Intel tracks this parameter very closely, and also tracks the spread of availability by examining the difference between A20 and A80.

Measuring the variability associated with availability, and with many of the other maintenance parameters, is an easy task for any company using a modern Manufacturing Execution System (MES). Processing time variability, however, is often ignored – or roughly approximated. The reason process time variability is seldom discussed is simple: measuring it is not an easy job. We could measure process time variability with stopwatch studies – but stopwatch studies are time- and resource-consuming, and inescapably static: a one-shot deal, with no visibility if things later change. This paper proposes an alternative method – using the machine automation interface to collect throughput data and to continuously monitor and measure process time variability.

At STMicroelectronics, I have used the iPLUS system – Improved Productivity through Learning, Understanding and Solving – documented in previous publications[3], to monitor and measure process time variability. The iPLUS system collects equipment events through the machine SECS/GEM automation interface, merges this information with our MES system and measures Overall Equipment Effectiveness (OEE) in real time. The iPLUS system was primarily created to measure equipment productivity losses – however, with the parameters necessary to measure OEE you can measure processing time.

Finally, in this paper I will show how process time variability differs across machine types, as well as some of the key factors impacting variability and some possible ways to reduce it.

World-class cycle time remains an important factor in the success of any semiconductor fab. By optimizing cycle time, we can reduce fab inventory, improve on-time delivery, maximize output, and gain a competitive advantage by delivering products to the marketplace faster.

The relationship between WIP, fab output and cycle time is given by Little's Law: TH = WIP/CT. We can achieve the same output with large WIP and long cycle time, or with small WIP and short cycle time. How often have you been able to look at theoretical machine performance but been unable to predict real life performance? We appear to have control of all the input variables, yet when we try to balance the equations our output does not match the theoretical value. The major difference? Variability[4]. Variability impacts every aspect of our production, yet we seldom measure it or its impact on overall production. Which is better: short but frequent machine interruptions, or long but rare ones? Does the variation in arrival rate impact us more than the variation in machine availability?
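The trade-off Little's Law describes can be made concrete in a few lines (the numbers below are illustrative, not fab data):

```python
def throughput(wip_lots, cycle_time_days):
    """Little's Law: TH = WIP / CT."""
    return wip_lots / cycle_time_days

# The same 100 lots/day can be sustained two very different ways:
assert throughput(wip_lots=3000, cycle_time_days=30) == 100  # large WIP, long CT
assert throughput(wip_lots=500, cycle_time_days=5) == 100    # small WIP, short CT
```

The second operating point ties up one sixth of the inventory for the same output, which is why cycle time reduction is worth the effort.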

Variability is anything that causes our production system to vary from regular predictable behavior. Sources of variability are setups, machine failures, inventory shortages, operator non-availability, machine flexibility, operator skills, material handling, non-value added steps, engineering changes, etc[5]. The list goes on and on and on.

Machine failure variability is easy to measure (provided you have a good Manufacturing Execution System (MES)). Intel has presented a paper during the SEMATECH Manufacturing Methods Symposium about its industrial metric called A80: the value of availability achieved 80% of the time. The difference between A20 and A80 measures the spread of the availability distribution – closely approximating a measure of availability variability. Effective downtime variability can also be measured by calculating the coefficient of variability[6] (see Figure 1, where t0 represents the base processing time, c0 is the base process time coefficient of variability, A is the machine availability, mf is the mean time to failure, mr is the mean time to repair, σr is the standard deviation of repair times, and cr is the coefficient of variability of repair times).

Figure 1. Formula for Coefficient of Failure Variability.
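Using the variables defined above, the Figure 1 relationship can be sketched directly – assuming it takes the standard Factory Physics form for preemptive outages; the numeric inputs below are illustrative assumptions, not measured fab values:

```python
def effective_cv_squared(c0, A, mr, t0, cr):
    """Squared coefficient of variability of effective process time under
    machine failures (Factory Physics form for preemptive outages):
        ce^2 = c0^2 + (1 + cr^2) * A * (1 - A) * mr / t0
    c0: base process time CV, A: availability (A = mf / (mf + mr), so mf
    enters through A), mr: mean time to repair, t0: base process time
    (mr and t0 in the same time units), cr: CV of repair times."""
    return c0**2 + (1 + cr**2) * A * (1 - A) * mr / t0

# Illustrative: a 90%-available tool with 8-hour repairs (cr = 1)
# and a 1-hour base process time:
print(round(effective_cv_squared(c0=0.5, A=0.9, mr=8.0, t0=1.0, cr=1.0), 2))  # 1.69
```

Note how long repairs (large mr relative to t0) inflate the effective variability even when the base process itself is quite regular.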

Measuring all of the above remains an exercise in statistics – except the process time coefficient of variability. How can we measure this? While we can continuously monitor repair and failure times, how can we continuously monitor process time variability?

We could approximate process time variability – either using static measurements or rough approximations – however, how can we guarantee that we do not deviate from this approximation? Busing in 1998[7] gives a summary of the four possible solutions for equipment data collection practices. For this paper, I use a variant of the third solution proposed by Busing – using the standardized equipment communications interface (SECS) through the automation system, and merging with information from the CAM system – to intercept machine events and measure in real time the processing time for each wafer.

Thomas Vonderstrass and I have previously reported on the STMicroelectronics system for measuring and improving productivity, called iPLUS[8]. The iPLUS system – improved Productivity through Learning, Understanding and Solving – captures automation events coming from the production machines in real time, merges this information with the manufacturing execution system (CAM system), and measures productivity and productivity losses. By capturing equipment events such as Wafer Start and Wafer End, we can apply a machine model and measure the throughput of each wafer, per recipe, per machine, versus time. In addition, we can calculate the steady state throughput of each recipe by eliminating production anomalies such as assists or first wafers, and we can measure a reference throughput by keeping statistics on the best processing time for a given recipe on a machine.
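The wafer-level measurement described above can be sketched as follows; the event names and record layout here are hypothetical stand-ins for the SECS/GEM events iPLUS actually captures:

```python
# Hypothetical event stream: (timestamp_sec, event, wafer_id, recipe)
events = [
    (0.0,  "WaferStart", "W1", "RCP_A"),
    (43.0, "WaferEnd",   "W1", "RCP_A"),
    (43.0, "WaferStart", "W2", "RCP_A"),
    (84.0, "WaferEnd",   "W2", "RCP_A"),
]

def wafer_process_times(events):
    """Pair WaferStart/WaferEnd events into per-wafer process times,
    keyed by (wafer, recipe) so statistics can be kept per recipe."""
    starts, times = {}, {}
    for ts, ev, wafer, recipe in events:
        if ev == "WaferStart":
            starts[wafer] = ts
        elif ev == "WaferEnd":
            times[(wafer, recipe)] = ts - starts.pop(wafer)
    return times

print(wafer_process_times(events))
# {('W1', 'RCP_A'): 43.0, ('W2', 'RCP_A'): 41.0}
```

Accumulating these per-wafer times per recipe and per machine is what turns an automation event log into a continuous process time monitor.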

In this paper I measure the process time variability for several different machine types, discuss the key factors impacting variability, and draw some conclusions about how real time measurement of process time variability can improve overall fab cycle time.

Variability and the Impact on Cycle Time

Hopp and Spearman[4] show the impact that process time variability has on cycle time – increasing queuing time and creating traffic jams. Queuing time can be calculated per Figure 2. In this formula, ca is the coefficient of arrival time variability, ce is the coefficient of process time variability, u is the utilization of the station, te is the mean process time of a job, and m is the number of machines. As you can see, ce impacts the queuing time as its square.

Figure 2. Formula for Calculation of Queuing Time.
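A sketch of the Figure 2 formula with illustrative numbers, assuming it is the standard Factory Physics multi-machine queuing approximation built from the variables defined above:

```python
import math

def queue_time(ca, ce, u, te, m):
    """Approximate queue time at a station of m parallel machines:
        CTq = ((ca^2 + ce^2) / 2) * (u^(sqrt(2*(m+1)) - 1) / (m * (1 - u))) * te
    ca: arrival CV, ce: process time CV, u: utilization (0 < u < 1),
    te: mean process time."""
    variability = (ca**2 + ce**2) / 2.0
    congestion = u**(math.sqrt(2 * (m + 1)) - 1) / (m * (1 - u))
    return variability * congestion * te

# Halving ce at a single machine running 85% utilization:
high = queue_time(ca=1.0, ce=1.0, u=0.85, te=60.0, m=1)
low = queue_time(ca=1.0, ce=0.5, u=0.85, te=60.0, m=1)
print(round(high / low, 2))  # 1.6 — ce enters through its square
```

Because ce appears squared, the variability term falls from (1 + 1)/2 to (1 + 0.25)/2 when ce is halved, all else being equal.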

In addition to impacting the queuing time, Hopp and Spearman[4] show how variability in the process time impacts the rate of arrival and departure in a machine-to-machine production sequence. When the rate of departure changes due to process time variation, this propagates to the next machine and creates WIP blocking – traffic jams – at the next machine. If the next machine has the capacity to handle the added arrivals you have no problem – however, if the next machine has insufficient capacity you will create a traffic jam.

Data Collection and Results

Process time variability was measured, trended and statistics were calculated over a seven-day period of production. Four types of machines were measured – steppers from two different manufacturers, single wafer implanters, metal etchers and epitaxy machines.

Average process time was calculated for each lot, and the results were normalized against each lot average. Variability was then calculated for each lot, and the contributors to variability were broken down into within lot variability, lot-to-lot variability, and day-to-day variability. Assists and first wafer effects were included in the variability: the first wafer affects within lot variability, while assists affect within lot variability as well as lot-to-lot or day-to-day variability.

Figure 3. Coefficient of Process Time Variability.

The coefficient of process variability is calculated using the equation in Figure 3. The standard deviation was measured for:

  • Within Wafer: the standard deviation of Process Time for a given wafer number for a given program/job over time.
  • Wafer to Wafer: Wafer to wafer process time standard deviation.
  • Job-to-Job: Standard Deviation of Process Time between all jobs/programs on a machine.
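A hedged sketch of that breakdown – the record layout and grouping keys are hypothetical, and only the within wafer component and the overall coefficient of variation are computed in full here:

```python
from statistics import mean, pstdev

# Hypothetical records: (job, wafer_number, process_time_sec)
records = [
    ("JOB1", 1, 175.0), ("JOB1", 2, 44.0), ("JOB1", 3, 43.0),
    ("JOB1", 1, 180.0), ("JOB1", 2, 42.0), ("JOB1", 3, 44.0),
    ("JOB2", 1, 160.0), ("JOB2", 2, 41.0), ("JOB2", 3, 42.0),
]

def within_wafer_sigma(records):
    """Std dev of process time for each (job, wafer number) over time,
    averaged across all groups with repeated observations."""
    groups = {}
    for job, wafer, t in records:
        groups.setdefault((job, wafer), []).append(t)
    sigmas = [pstdev(ts) for ts in groups.values() if len(ts) > 1]
    return mean(sigmas) if sigmas else 0.0

def coefficient_of_variation(records):
    """Ce = sigma / mean over all measured process times."""
    times = [t for _, _, t in records]
    return pstdev(times) / mean(times)

print(round(within_wafer_sigma(records), 2))  # 1.33
```

The wafer-to-wafer and job-to-job components follow the same pattern, grouping by job (within a run) and across jobs respectively.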

Data Handling and Data Integrity

Data integrity is the fundamental weak point of measuring data using the automation system. Busing (1998)[9] has classified different data integrity problems and analyzed some of the root causes of data integrity problems from automation systems.

We have implemented modeling countermeasures in the iPLUS system to prevent these problems, and we have used many of the suggestions presented by Busing to fill in the blanks, such as resolving logical conflicts using previous events or replacing missing data with 'virtual events'. We eliminate a large number of data integrity problems by ensuring that the clocks of our CAM system, automation system and iPLUS system are routinely synchronized to a 'standard' reference – preventing time stamp confusion in the data chain.

Results: Stepper A

Stepper A is an advanced I-line stepper capable of producing 0.35-micron products at a very high throughput. Table 1 contains the results.

Table 1. Results for Stepper A.
Average Processing Time 43 sec
σ – Standard Deviation of Processing Time 87 sec
σ – within wafer 72 sec
σ – wafer to wafer 46 sec
σ – job to job 9 sec
Other 13 sec
Ce – Coefficient of Variation 2.02
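As a consistency check on Table 1, the four component sigmas combine in quadrature to the overall sigma (assuming the components are independent, so variances add), and Ce is that sigma divided by the mean:

```python
import math

mean_time = 43.0                        # average processing time, sec
components = [72.0, 46.0, 9.0, 13.0]    # within wafer, wafer to wafer, job to job, other
sigma = math.sqrt(sum(s**2 for s in components))
print(round(sigma))                 # 87 — the overall sigma in the table
print(round(sigma / mean_time, 2))  # 2.02 — the reported Ce
```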

The largest source of variation is the within wafer variation – that is, the repeatability of processing time for a given wafer number. Figure 4 shows that the primary impact is caused by the process time variation of wafer 1. Figure 5 shows that there is some relationship between the first wafer standard deviation and the recipe in use – revealing recipe dependencies such as the quality of the wafer alignment marks. Figure 6 shows the wafer to wafer behavior, and again you can see the major impact of wafer 1 on overall process time variation.

Figure 4. Standard Deviation of within Wafer Processing Time versus Wafer Number for Stepper A.
Figure 5. Standard Deviation of Wafer 1 Processing Time versus Stepper Recipe/Job.
Figure 6. Average Processing Time versus Wafer Number for Stepper A.

Results: Stepper B

Stepper B is an advanced DUV stepper with 0.25-micron capability. See Table 2.

Table 2. Results for Stepper B.
Average Processing Time 79 sec
σ – Standard Deviation of Processing Time 128 sec
σ – within wafer 68 sec
σ – wafer to wafer 98 sec
σ – job to job 10 sec
Other 45 sec
Ce – Coefficient of Variation 1.62

For this stepper the major contributors to variation are similar to Stepper A; however, there is a bigger impact from wafer to wafer – see Figure 7. As you can see from the graph, the first wafer effect is much more significant on Stepper B than on Stepper A. I was surprised that the coefficient of variation for Stepper B was better than that of Stepper A: Stepper A was equipped with batch streaming software to ensure the best cascading of recipes, and I expected this software to improve overall variation. From the results, the first wafer processing time for Stepper B is longer than that of Stepper A (360 sec versus 175 sec), as expected; however, relative to the average processing time, the within wafer standard deviation on Stepper A is much more significant. In other words, the batch streaming software does not reduce the first wafer processing time enough to meaningfully reduce variability.

Figure 7. Average Processing Time versus Wafer Number for Stepper B.

Results: Metal Etcher

The metal etcher is a standalone metal etching system with an integrated resist strip and wafer cleaning station. See Table 3.

Table 3. Results for Metal Etcher.
Average Processing Time 311 sec
σ – Standard Deviation of Processing Time 434 sec
σ – within wafer 362 sec
σ – wafer to wafer 137 sec
σ – job to job 105 sec
Other 165 sec
Ce – Coefficient of Variation 1.39

Again, the major contribution to variability looks similar to the steppers, with higher variability from job to job. From Figure 8, we can see the distribution of process time versus program: three major programs contribute most of the job-to-job sigma. Once again, we see a very high contribution from the first wafer to the within wafer sigma.

Figure 8. Processing Time versus Program for a Metal Etcher.
Figure 9. Sigma of Within Wafer Processing Time versus Wafer Number.

Results: Single Wafer Medium Current Implanter

This medium current implanter is a single wafer implant machine. See Table 4.

Table 4. Results for Single Wafer Medium Current Implanter.
Average Processing Time 35 sec
σ – Standard Deviation of Processing Time 92 sec
σ – within wafer 76 sec
σ – wafer to wafer 32 sec
σ – job to job 8.46 sec
Other 39 sec
Ce – Coefficient of Variation 2.62

The major contributor to variation is the within wafer sigma; however, the signature differs from the other machines. For this machine, the within wafer variability is distributed randomly and is not highly dependent on wafer number. This is probably due to instability in the auto-tuning system.

Results: Single Wafer Epitaxial Deposition

For these machines the within wafer standard deviation has a large impact. Figure 10 shows the distribution of within wafer sigma from wafer to wafer. As you can see, the within wafer sigma is randomly distributed versus wafer number, with a very high sigma for wafer 11.

Figure 10. Within Wafer Process Time Sigma versus Wafer Number.

Discussion

Firstly, you can see from the above data that there are some common themes for improvement in process time variability:

  • First Wafer Impact. Not only is the first wafer duration longer than the other wafers, but in almost all cases the variation seen within the first wafer processing time is more significant than other wafers. This variation can be reduced by making robust machine recipes, improving machine setup, and also by manufacturing management improvement such as increasing the lot batching or improving the recipe cascading.
  • Job to job sensitivities to standard deviation. Some jobs are more likely to be sensitive to within wafer variation. By managing these sensitivities, you can improve the overall job performance.
  • Machine-specific signatures, like the implanter's, or the differences between Stepper A and Stepper B. This reinforces how important it is to measure process time variability for each machine model.

Next, the automation system provides a very simple way to examine and measure large volumes of data over a regular time period. The next logical step is to implement automatic measurement routines for process time variability and to implement manufacturing systems for reducing process time variation.

From the above data, the machine with the least process time variation was the metal etching tool. The next step is to calculate its failure time variability – for these types of tools, failure time and its variability are often very high due to the difficulty of re-qualifying the tool after a major preventive maintenance action.


This paper demonstrates a way to measure process time variability by using the automation information from the machine to measure wafer-by-wafer process times. Our iPLUS system stores these process times in a database, where they can be retrieved and analyzed. I have demonstrated how to use this data to measure variability across several types of equipment, and I have identified several major factors that can be addressed to improve process time variability. Improving process time variability will improve overall cycle time.

If we are to obtain world-class cycle time values in the future, I am convinced that we need to implement these automatic measurement systems. Systems such as iPLUS permit analysis of large volumes of data over very long periods of time, enabling improvement of difficult-to-measure metrics such as process time variability.

Lastly, I am sure that Busing[7] was right when he asserted that the semiconductor industry has largely ignored the knowledge and experience of the industrial engineering profession. Semiconductor companies that want to succeed in this growing and demanding industry have to invest in developing the correct industrial engineering experience and capability in order to take every possible improvement opportunity.


Special thanks need to go to the Carrollton Industrial Engineering Team, especially Mike Earhart for finding and opening the book on variability. I would like to also thank Ed Dobson and Philippe Vialletelle from Crolles, and Paul Patruno from SEMATECH, for their information about the Intel work on A80. Final thanks to all of the iPLUS ‘champions’ at STMicroelectronics and IPC: at ST, Denis Paccard (an iPLUS ‘co-founder’), Della Killeen, Alain Astier, Eddy Reynaud, Jean-Francois Delbes, Paul Adam, Herve Bozec, Yves Roubinet. At IPC, Gerhard Rupp, Thomas Vonderstrass, Andreas Kilger, Axel Pustet and Guenter Snifnatsch.


[1] Donald Reinertsen, “Managing the Design Factory”, 1997, p51.

[2] Tom Gibbons, “A80 – 80% Confidence”, Sematech Presentation, 1998.

[3] Peter Gaboury, Thomas Vonderstrass, “E79: Standards, signals, and the Internet. Project iPLUS: Using the E79 standard to design a real-time, company wide, dynamic management tool”, Future Fab, Jan 2001.

[4] Wallace J Hopp and Mark L Spearman, “Factory Physics: Foundations of Manufacturing Management”, 2000.

[5] Idem 4.

[6] Idem 5.

[7] David Busing, “Automated Procedures for Characterizing Specific Equipment Productivity Losses with Applications in the Semiconductor Manufacturing Industry”, Competitive Semiconductor Manufacturing Program (CSM) document 46, 1998.

[8] Idem 3.

[9] Idem 7.

