The International Technology Roadmap for Semiconductors defines requirements
that drive semiconductor supplier industries (e.g., electronic design
automation (EDA), lithography, and packaging) toward cost-effective scaling
of CMOS process technology. In ITRS parlance, a “red brick” is a technology
requirement for which no known solution exists. Solving any red brick
requires large R&D investments. It is neither technologically feasible
nor economically viable for any single supplier industry to solve red bricks
on its own in the attempt to continue Moore's Law scaling. This leads
to the concept of “shared red bricks” – supplier industries must cooperate
to achieve a more globally optimized solution to these technological hurdles.
For an example of a potential shared red brick, consider the multibillion-dollar
question of whether lithography and front-end processes must continue to
push for 10 percent tolerances in critical dimensions. As an alternative,
some of the CD control red brick burden may be borne by designer and EDA
constituencies, which can provide smarter design for variability techniques
to enable robust, high-yielding designs without impossibly tight control
of CD. Another example can be found in the design-to-mask handoff and
the cost of developing faster, more accurate mask writers. Should mask
equipment suppliers develop increasingly expensive and complex mask writers
to handle the ever-growing volume of fractured design data produced by
reticle enhancement technology (RET) processing? As an alternative, a bidirectional design-to-mask
data flow, founded on new cross-disciplinary EDA technology, can dramatically
reduce data volume. Examples such as these point to the need to partition
R&D investment carefully so as to maximize overall industry ROI.
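As a rough illustration of this kind of R&D partitioning question, the sketch below (all numbers are assumptions chosen purely for illustration, not ITRS data) estimates how large a 3-sigma CD excursion a design-side timing guardband could absorb, shifting part of the CD-control burden away from lithography:

```python
# Back-of-the-envelope budget (all numbers assumed for illustration):
# how much CD variation could a design-side timing margin absorb, relaxing
# the demand for ~10 percent CD control on the lithography side?

nominal_cd_nm = 65.0         # hypothetical drawn gate length
delay_sens_pct_per_nm = 0.8  # assumed: % delay change per nm of CD change
timing_margin_pct = 5.0      # assumed design-side timing guardband

# 3-sigma CD excursion absorbable by the margin, in nm and as % of nominal
absorbable_3sigma_nm = timing_margin_pct / delay_sens_pct_per_nm
absorbable_pct = 100.0 * absorbable_3sigma_nm / nominal_cd_nm

print(f"Design margin absorbs a 3-sigma CD excursion of "
      f"{absorbable_3sigma_nm:.1f} nm ({absorbable_pct:.1f}% of nominal)")
```

With these assumed values, a 5 percent timing margin absorbs a CD excursion of nearly 10 percent of nominal, suggesting how designer and EDA constituencies might shoulder part of a lithography red brick.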
Invoking ROI brings up the issue of cost, both as a normalizer across
supplier industries and as a critical indicator of future semiconductor
industry health. The semiconductor industry must extract incremental value
from incremental manufacturing capability. When designers cannot predictably
fill the fab with high-value, high-margin parts, there is less return on
a foundry’s capital investments, leading to workarounds (reprogrammability,
platform-based design, software-based product differentiation, and so
on) that provide value through means other than silicon differentiation.
Such workarounds sacrifice quality and value in designs, meaning fewer
design projects are worth attempting. Thus, design and process technologists
are in the same boat, trying to maintain the retooling cycles that are
the heartbeat of the semiconductor roadmap. Cost is indeed the semiconductor
industry’s greatest challenge.
Today, design NRE costs routinely reach tens of millions of dollars,
in part due to the lack of manufacturing closure in today’s design flows,
which causes longer turnaround times and manufacturing respins. Future
design for manufacturing (DFM) technology must reduce design NRE cost
and directly address manufacturing NRE – the cost of a mask set and probe
card – which is well over $1 million at the 90 nm technology node and
significantly dampens semiconductor-based innovation. As the
semiconductor industry deploys optical proximity corrections, phase-shifting
masks, and area fill to better control subwavelength lithography, it would
be a mistake to simply concede the multimillion-dollar mask set as a fact
of life. The challenges for DFM are to reduce these NRE costs and to maximize
ROI and productivity.
As the industry works to solve the challenges of DFM and manufacturing
yield, it will have to overcome structural barriers by fostering the creation
of “vertical” designers and tools. The current era of independent silos or
communities – design, process, IP libraries, EDA, etc. – must come to
an end. People as well as tools must gain increased understanding of the
entire design-to-die flow, not just small isolated segments. Today’s attempts
at DFM strongly reflect technology DNA or genealogy – physical verification,
custom layout, optical lithography simulation, reticle enhancement, place-and-route,
performance analysis, etc. Traditional separations such as FEOL vs. BEOL,
design synthesis and optimization vs. design analysis and verification,
and design engineer vs. process engineer unfortunately persist in the
DFM arena. At the same time, closer interaction between traditionally
independent analysis and optimization technologies will improve the chances
of reaching faster and more economical solutions. In particular, the EDA
industry must fill gaps that today prevent close alignment of design and
manufacturing flows. EDA tools targeted primarily at post-tapeout enhancements
such as RET and CMP-fill need to move up the food chain to interact more
closely with synthesis, timing optimization, and place and route flows.
Particularly as process variability increases in future technology nodes,
DFM will be responsible for avoiding excessive over-design as well as under-design.
Future Vertical Tool Integrations
First, a unified and standardized data model can enable design-through-manufacturing
integrations. One candidate, the universal data model (UDM), is under
development (see www.si2.org/udm/); its goal is to allow both the design
and manufacturing sides to access a fuller set of design information and
manufacturing characterizations. Such a model must be sufficiently restrictive
so that foundries, mask shops, and design houses can allow data to cross
boundaries; yet it must be extensible enough to permit closer sharing
of data as it becomes available. The present conception of UDM encompasses design
geometry data, parasitics, connectivity information, timing constraints,
and mask features, among other attributes. As a result, it potentially
forms an infrastructure for lithography-aware design technology as well
as design-aware lithography.
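As a concrete sketch, the fragment below shows the kinds of entities such a unified model might expose to both design and manufacturing tools; the class and field names are illustrative assumptions, not the actual UDM schema under development at si2.org:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Polygon:
    """Design geometry on a named layer."""
    layer: str
    points: List[Tuple[float, float]]

@dataclass
class Net:
    """Connectivity plus parasitic data in one place."""
    name: str
    pins: List[str]
    coupling_cap_ff: float = 0.0

@dataclass
class TimingConstraint:
    path: str
    required_ps: float

@dataclass
class MaskFeature:
    """Manufacturing-side view: geometry after RET processing."""
    polygon: Polygon
    ret_applied: str = "none"   # e.g., "opc", "psm"

@dataclass
class UnifiedDesign:
    """One container that design and manufacturing flows can both query."""
    geometry: List[Polygon] = field(default_factory=list)
    nets: List[Net] = field(default_factory=list)
    constraints: List[TimingConstraint] = field(default_factory=list)
    mask_features: List[MaskFeature] = field(default_factory=list)
```

The point of such a container is that a lithography tool can reach timing constraints, and a router can reach mask features, without crossing a data-format boundary.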
Second, the absence of design-for-yield and DFM methodologies is often
blamed on the absence of usable (statistical) process data. Such data is
not only highly proprietary to the foundries; it is also difficult to obtain.
Extracting quantified process characteristics that upstream design tools
can use is highly involved. It may require
extensive collaboration between EDA vendors, manufacturing equipment makers,
foundries, mask makers, and metrology equipment makers, among others.
It may further require fabrication and analysis of several test chips.
Moreover, such process characterization can differ from tool to tool,
material to material, and location to location, even for the same foundry.
A more practical vision is for design optimization and analysis tools
to instead make use of qualitative universal truths about the manufacturing
process. Such tools would essentially try to exploit knowledge of trends
to avoid the need for extensive process data.
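A minimal sketch of what exploiting such qualitative trends could look like follows; it scores layout features using only directional knowledge (minimum-width features are most CD-sensitive, isolated features print less reliably), with all thresholds and weights assumed for illustration:

```python
# Trend-based DFM check: no foundry-proprietary numbers, only directional
# knowledge of the process. Thresholds and weights below are assumptions.

def dfm_trend_score(wire_width_nm: float, nearest_neighbor_nm: float,
                    min_width_nm: float = 65.0) -> float:
    """Return a relative risk score in [0, 1]; higher = more variability-prone."""
    score = 0.0
    if wire_width_nm <= 1.1 * min_width_nm:
        score += 0.5   # trend: minimum-width features vary the most
    if nearest_neighbor_nm > 4 * wire_width_nm:
        score += 0.5   # trend: isolated features print less reliably
    return score

# A router or optimizer could widen or re-space only the risky wires:
for width, spacing in [(65, 520), (90, 130), (65, 130)]:
    print(f"width={width} spacing={spacing} "
          f"risk={dfm_trend_score(width, spacing)}")
```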
Third, post-tapeout processing tools such as RET and fill-insertion tools
must be made aware of design quality metrics (i.e., chip metrics) such
as timing and power. Such awareness not only aligns the tools to the overall
product goals, but also contributes to parametric yield enhancements and
enables much-needed reductions in manufacturing and mask data preparation costs.
Because such manufacturability-driven data processing steps can affect design
performance, post-RET performance simulation will be desirable in the future.
This would require a more direct communication channel between mask data
prep and design, which would initially be more tractable for IDMs than for
fabless design houses. Two noteworthy illustrations of this channel
can be given for optical proximity correction (OPC) and area fill.
OPC changes mask apertures to correct for line-end shortening, corner
rounding, and other systematic optical process effects. The resulting
complicated mask shapes affect data volumes and design rule checking,
and even more critically impact mask writing and inspection costs. For
example, future design technology must allow function-aware OPC that is
applied only as needed (e.g., to features in critical paths that require
better CD control). This entails passing performance analysis and functional
intentions from logic-layout synthesis to physical verification. Required
flow integrations must span library creation, detailed routing, and physical
verification (e.g., so that an optical correction is not made independently
by several tools, leading to an incorrect result). Passing designer intent
to OPC tools in this way can yield a functionally better OPC result along
with substantial mask cost savings.
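A minimal sketch of such function-aware OPC dispatch appears below, assuming a per-net slack table handed down from performance analysis; the net names, threshold, and correction levels are hypothetical:

```python
# Function-aware OPC dispatch: apply aggressive (expensive) correction only
# where timing demands it. Slack table and threshold are assumptions.

def select_opc_level(feature_net: str, slack_ps: dict,
                     critical_threshold_ps: float = 50.0) -> str:
    """Map a feature's net to an OPC aggressiveness level."""
    slack = slack_ps.get(feature_net, float("inf"))
    if slack < 0:
        return "aggressive"   # failing paths: best achievable CD control
    if slack < critical_threshold_ps:
        return "moderate"     # near-critical: some correction
    return "rules-based"      # non-critical: cheap, low-data-volume OPC

slack_table = {"alu_carry": -3.0, "clk_net": 5.0, "scan_en": 400.0}
for net in slack_table:
    print(net, "->", select_opc_level(net, slack_table))
```

Only the mask data for critical features carries the cost of aggressive correction; everything else stays cheap to write and inspect.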
“Dummy” area fill insertion must not only be driven by the best possible
models of the manufacturing process, but also address many flow issues.
Examples include: a) dummy metal fill will change RC extraction results
and must be accounted for in the upstream timing verification before the
layout goes to physical verification; b) master cell and macro characterizations
(performance models) must be a priori compatible with later insertion
of dummy fill; c) data volumes and design hierarchies must be maintained
or at least strongly managed; and d) interlayer dependencies, such as filling/cheesing
of active/poly layers or the need to maximize planarity of the sum of all
layer thicknesses, must be handled. Just as with OPC and PSM, dummy fill requires a completely
integrated, front-to-back solution. The industry urgently needs true performance-impact-limited
fill insertion techniques, driven by competent modeling.
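One possible shape for performance-impact-limited fill insertion is sketched below, using a deliberately toy coupling model and assumed slack budgets; a real implementation would rely on calibrated extraction and timing models:

```python
# Performance-impact-limited fill: insert a fill candidate only if the
# coupling it adds stays within the neighboring net's timing slack budget.
# The capacitance model and sensitivities below are deliberate toys.

def added_coupling_ff(fill_area_um2: float, spacing_um: float) -> float:
    # Toy model (assumption): coupling grows with area, decays with spacing.
    return 0.05 * fill_area_um2 / max(spacing_um, 0.1)

def can_insert(fill_area_um2: float, spacing_um: float,
               neighbor_slack_ps: float, ps_per_ff: float = 2.0) -> bool:
    delay_penalty_ps = ps_per_ff * added_coupling_ff(fill_area_um2, spacing_um)
    return delay_penalty_ps < neighbor_slack_ps

# Dense fill next to a critical net is rejected; the same fill next to a
# slack-rich net is accepted, so density targets are met without surprises.
print(can_insert(4.0, 0.2, neighbor_slack_ps=1.0))   # False: too risky
print(can_insert(4.0, 0.2, neighbor_slack_ps=50.0))  # True
```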
Finally, we close with an important word of caution about the statistical
timing tools and statistical optimization methods that are actively being
researched. Such tools must model and understand actual variations rather
than crude and inaccurate abstractions. This means that the “composition”
of variations during statistical analysis and optimizations should match
the methods used for the initial “decomposition” via statistical metrology
techniques during process characterization. A mismatch between the two
will introduce unnecessary errors and cast doubt on results produced
by compute-intensive statistical analyses.
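The sketch below illustrates the risk with assumed numbers: if metrology decomposes gate-CD variation into a correlated die-to-die component and an independent within-die component, a timing tool that recomposes them as if all variation were independent badly underestimates the spread of long-path delays:

```python
# Assumed decomposition from statistical metrology (illustrative numbers),
# with an assumed 1 ps of stage delay change per 1 nm of gate CD change.
sigma_d2d = 2.0  # nm, die-to-die: fully correlated across gates on a die
sigma_wid = 1.5  # nm, within-die random: independent per gate

def path_sigma_matched(n_gates: int) -> float:
    """Composition matching the decomposition: the correlated part adds
    linearly along the path; the independent part adds in quadrature."""
    return ((n_gates * sigma_d2d) ** 2 + n_gates * sigma_wid ** 2) ** 0.5

def path_sigma_mismatched(n_gates: int) -> float:
    """Crude composition: lump all variation together as independent."""
    per_gate = (sigma_d2d ** 2 + sigma_wid ** 2) ** 0.5
    return (n_gates ** 0.5) * per_gate

for n in (1, 10, 40):
    print(f"{n:2d} stages: matched {path_sigma_matched(n):5.1f} ps, "
          f"mismatched {path_sigma_mismatched(n):5.1f} ps")
```

Under these assumptions the two models agree for a single stage, but the mismatched composition underestimates the 40-stage path sigma by roughly a factor of five, which is exactly the kind of error that undermines trust in statistical analysis results.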