
The Journal of the Reliability Analysis Center

Second Quarter - 2000

Reliability-Centered Maintenance: A Success Story for the Engine Repair Process

By George T. Babbitt, General, USAF (Retired); Richard J. Jones, Captain, USAF; and Gary L. Jackson, Captain, USAF

Editor's Note: To serve the interests of our readers, we try to include articles that reflect the perspectives of all levels of management and technical expertise. Many of our articles deal with very specific technical issues, provide tutorials on statistical methods, or deal with advances in the reliability, maintainability, and supportability disciplines. The following article illustrates that top-level leaders view reliability not as an end in itself but as the means of achieving some bottom-line goal, such as safety or mission success. In this case, the bottom line is force readiness. It is important for all of us who consider ourselves reliability practitioners never to forget that reliability is important only insofar as it serves the bottom line. The RAC thanks General Babbitt and his staff for providing us with this article, written as General Babbitt was busily preparing for his retirement.

INSIDE

The Problem with Aviation COTS .......... 4
Tutorial: The Central Limit Theorem .......... 7
ASQ Certification Program .......... 10
Supportability and the RAC .......... 11
Industry News .......... 14
PRISM in Ten Easy Steps .......... 15
From the Editor .......... 19
Calendar .......... 21
Help from RAC .......... 22

General George T. Babbitt

Introduction

Military logistics is the profession of ensuring readiness. Today's logistics priorities will determine future success or failure. Our approach to solving the challenge of readiness is largely determined by the political and economic environment. The 1990s clearly demonstrated that predicting where the next conflict will erupt is guesswork at best. When that decade began, the world was recovering from the end of the Cold War. East Germany had announced the end of border restrictions, and the Berlin Wall - the Cold War's most enduring icon - soon collapsed. Nobody suspected that a major theater war in the Persian Gulf was less than one year away. The message is clear: be ready for anything. As professional logisticians, we cannot afford to be caught with our guard down. In times of peace and prosperity, we must assume that another Gulf War or Kosovo is just around the corner.

Today's Air Force must get to the fight quickly and sustain operations with few people and limited resources in locations where little or no support is available. To prepare ourselves for the events that may lie just around the corner, we can and must take steps today that will ensure readiness tomorrow. The first step is to focus on the logistics processes that maintain overall readiness. One of the most recent adaptations of this process-first approach resides in the world of aircraft engines. Engine availability - once cited as a reason for low readiness - has become a success story that every echelon of logistics can emulate. Additionally, these process improvements have been beneficial in maximizing engine system reliability and have led to reduced operations and support costs.


RAC is a DoD Information Analysis Center Sponsored by the Defense Technical Information Center and Operated by IIT Research Institute


Engine Readiness Challenges

Engine maintenance is a complex and challenging business. It is the cornerstone of our ability to deploy aircraft on a moment's notice anywhere in the world. The key to success is maintaining enough engines to sustain combat sorties and compensate for scheduled maintenance - a quantity commonly known as the war readiness engine (WRE) level. Recently, however, the Air Force faced several challenges in maintaining WRE levels.

First, the 1990s witnessed a spare parts shortfall that limited our ability to perform some scheduled maintenance. As a result, the Air Force temporarily expanded the use limits of many engine components to ensure availability for flying operations. Although this was not our preferred long-term solution to engine readiness, it did maintain readiness. Second, the operations tempo increased sharply, placing an even heavier burden on a smaller, less experienced engine maintenance workforce.

To maintain an acceptable level of engine readiness using limited available resources, the Air Force migrated to performing On-Condition Maintenance (OCM). This meant that only acute maintenance issues could be resolved while engines were in the shop. Although OCM provided engines, it did not deliver a level of system performance reliable enough to maintain our time on wing (TOW) standards. Although it solved the immediate budget issue, it inevitably led to a larger bill later on. The solution to realizing the engines' inherent reliability and operating at the lowest real cost was found in something that had been a part of Air Force policy since 1985: Reliability-Centered Maintenance (RCM). Unfortunately, the circumstances of the day had forced its abandonment.

Maximizing Engine System Reliability

Using the RCM principles of repairing what is broken on an engine during unscheduled maintenance, as well as what will likely fail before a desired TOW is reached, is essential to maximizing engine system reliability. With these principles, the Air Force realized several engine-related success stories (see Figures 1 through 3).

In 1998, RAF Lakenheath's F100-PW-229 engine availability was unusually low. Forty-four broken engines were sitting in the shop awaiting maintenance or parts, creating 22 engine holes for the flying squadrons. That represented over a third of the 126 engines assigned to Lakenheath. The wing's ability to support a major theater war was in jeopardy. A team made up of experts from Lakenheath, the engine depot maintenance facility at San Antonio Air Logistics Center, and Pratt & Whitney (the F100 series engine manufacturer) looked to RCM for a solution. Once maintenance efforts were focused on repairing or replacing components that would likely fail before the next scheduled removal, reliability numbers improved. For example, unscheduled engine removals (UER) at Lakenheath decreased from over 3 per 1,000 engine flight hours (EFH) to just over 1 per 1,000 EFH (Figure 1). At Cannon Air Force Base, another test site, reliability of the F110-GE-100 engine also increased significantly. In a demonstration involving 44 engines, UERs were reduced from more than 3 per 1,000 EFH to 1.5 (Figure 2). These test cases showed that performing predictable, scheduled maintenance maximizes engine reliability and reduces operating cost.

Figure 1: Unscheduled Engine Removals at RAF Lakenheath (F100-229 removals per 1,000 EFH, FY98 to date, comparing non-RCM and RCM rates against the goal)


Figure 2: Unscheduled Engine Removals at Cannon AFB (F110-100 removals per 1,000 EFH, to date, comparing non-RCM and RCM rates against the goal)

Additionally, an RCM demonstration involving 39 F100-PW-220 engines at Luke Air Force Base produced similar improvements in reliability. UERs at Luke fell from nearly 3 per 1,000 EFH to about 1.4 (Figure 3).

Financial Benefits of RCM

Simply put, reliable engines are cheaper to maintain. As the mean time between removals (MTBR) of engines increases, the cost per engine flight hour decreases. An MTBR of 1,000 EFH equates to about $600 per engine flight hour for a fighter engine. However, if the MTBR drops to 200 hours, the cost per flight hour increases to nearly $1,500.
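The per-hour difference compounds quickly at fleet scale. The following minimal sketch uses only the two (MTBR, cost) points quoted above; the annual fleet flying hours are a hypothetical value chosen for illustration, not Air Force data.

# Illustrative only: the article quotes just two (MTBR, cost-per-EFH) points
# for a fighter engine; the fleet-hour figure below is hypothetical.

QUOTED_POINTS = {1000: 600, 200: 1500}  # MTBR (EFH) -> cost per EFH ($)

def annual_engine_cost(fleet_efh_per_year: float, cost_per_efh: float) -> float:
    """Annual engine O&S cost for a fleet flying the given hours."""
    return fleet_efh_per_year * cost_per_efh

fleet_hours = 10_000  # assumed annual engine flight hours for one wing
for mtbr, cost in sorted(QUOTED_POINTS.items()):
    print(f"MTBR {mtbr:>5} EFH -> ${cost}/EFH -> "
          f"${annual_engine_cost(fleet_hours, cost):,.0f} per year")
# MTBR   200 EFH -> $1500/EFH -> $15,000,000 per year
# MTBR  1000 EFH -> $600/EFH -> $6,000,000 per year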

The Journal of the Reliability Analysis Center

Figure 3: Unscheduled Engine Removals at Luke AFB (F100-220 removals per 1,000 EFH, to date, comparing the fleet and RCM rates against the goal)

Additionally, an average unscheduled engine shop visit costs about $30,000 for material, labor, and support at Luke Air Force Base (excluding the costs of module overhaul, if required). For each unscheduled engine removal prevented, a level of cost avoidance is achieved. Following the Luke Air Force Base UER data to its logical conclusion, the Air Force avoided about $480,000 in UER costs. As reliability is maximized in engines, the Air Force should experience a significant reduction in engine operating costs. This lesson is pertinent to all echelons of logistics, not just engines. The Air Force is now developing plans to implement these improved engine-build standards throughout the major commands.
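The $480,000 figure is consistent with the quoted rate improvement applied over roughly 10,000 engine flight hours; a back-of-the-envelope check, with the exposure hours inferred rather than stated in the article:

# Sanity check on the Luke AFB cost-avoidance figure. The UER rates and the
# $30,000 per shop visit come from the article; the 10,000 EFH exposure is
# an inference that makes the arithmetic close, not a stated value.

uer_before = 3.0 / 1000   # unscheduled removals per EFH, pre-RCM (approx.)
uer_after = 1.4 / 1000    # unscheduled removals per EFH, with RCM
cost_per_uer = 30_000     # $ per unscheduled engine shop visit
fleet_efh = 10_000        # assumed engine flight hours accumulated (inferred)

avoided_removals = (uer_before - uer_after) * fleet_efh
cost_avoidance = avoided_removals * cost_per_uer
print(f"{avoided_removals:.0f} removals avoided -> ${cost_avoidance:,.0f}")
# 16 removals avoided -> $480,000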

Conclusion

This RCM example tells a story of successful process improvement. The Air Force approached engine reliability from a long-range perspective and clearly demonstrated why RCM was the best solution. The RCM team's ability to succeed was a direct result of its ability to show an impact on readiness and to do it on a nearly cost-neutral basis. In the future, up-front investments of any kind must target readiness if they're going to receive adequate support. The success of the Air Force's return to RCM-based engine maintenance, despite its initial costs, highlights a theme that's relevant throughout the acquisition and sustainment process: appropriate proactive maintenance can both reduce operating costs today and improve readiness tomorrow.

About the Authors:

General Babbitt recently retired from the Air Force after over 35 years of service. He is a career maintainer who served as officer in charge of fighter flight lines in the United States, the Pacific, and Europe. He twice commanded aircraft maintenance squadrons and was deputy commander for maintenance of a European F-15 wing. General Babbitt culminated his career as the commander of the Air Force Materiel Command (AFMC). During his command of AFMC, General Babbitt focused on reducing the cost of developing and sustaining our nation's Air Force.

Captain Jones was General Babbitt's speechwriter. He has a broad background in retail and wholesale supply operations.

Captain Jackson is the logistics action officer in the Commander's Action Group. He has held positions in base-level supply and aircraft maintenance for F-15Es.

Error Noted in 1st Quarter 2000 RAC Journal

The RAC Journal - First Quarter 2000 has an item titled "Tutorial: Testing for MTBF" on pages 7 and 8. The equations presented there are completely invalid and cannot be used even for special cases (β = 1) without serious modification. Everything is incorrect after Equations 1 and 2. The author [Anthony Coppola] has confused three different things: component-level testing without replacement, repairable-system NHPP models, and repairable-system HPP models.

The discussion of the test plan and the note that follows Equation 3 pertain to component-level testing without replacement. The number of failures in this case is not Poisson distributed - it is binomial. Equation 3 itself pertains to a Weibull nonhomogeneous Poisson process (NHPP) model for repairable systems. A Weibull NHPP model and a Weibull time to failure (TTF) distribution are two completely different things. Also, if you use a Weibull NHPP model, it makes no sense to test for MTBF anyway, because the process is not stationary - it is changing with time. Then, the discussion ends with fixed-length test plans for repairable systems with replacement (a homogeneous Poisson process - HPP).

To illustrate, consider this case: a Weibull TTF component is put on test with m = 10, θ = 20, β = 4, t = 15. Testing without replacement, you would expect close to 20 failures because t > θ. Testing with replacement, you would expect between 20 and 30 failures. Equation 3 says that the expected number of failures is over 100!

David Coit
Rutgers University

(Mr. Coppola's reply on page 10)


The Problem with Aviation COTS

Lieutenant Colonel L. D. Alford

[From time to time, we find and reprint articles in other journals that we believe would be of interest to our readers. The following article was originally published in the Winter 1999 issue (Volume XXIII, Number 4) of the Air Force Journal of Logistics and is reproduced in its entirety with permission. - The Editor]

Commercial-Off-the-Shelf - or COTS - has become a byword for acquisition reform, but there are significant risks associated with the use of COTS products in military systems. These risks are especially acute for aviation systems. To take advantage of the fast pace of technological advances in industry, the Department of Defense (DoD) is acquiring commercial products and components for use in military systems. COTS items provide the Department of Defense with numerous potential benefits. Primarily, they allow incorporation of new technology into military systems more quickly than typical developmental programs. COTS can also reduce research and development costs. Even more important, the Department of Defense has looked to COTS purchases to help reduce operations and support costs for military systems. Figure 1 shows why this is highly desired: the cost of operations and support is almost three-quarters the overall cost of a typical system.

With this in mind, what could be the worst misfortune to befall an item procured as COTS? Could it be that the item changed and the original was no longer available commercially? What if the commercial replacement would no longer work in the military system for which it was procured? The very worst misfortune, which incorporates both of these problems, would be if the item were to suddenly become government unique - no replacement available commercially. Becoming government unique would not entirely defeat the purpose of a COTS acquisition, but it would significantly affect support - the longest tail and, as shown in Figure 1, the greatest cost in the acquisition life cycle.


Figure 1. Typical Cost Distribution [1] - life-cycle cost by acquisition milestone: Operation & Support 72%, Production 20%, Research & Development 8%.

This misfortune could never affect our COTS procurement - or could it? In any COTS acquisition, the acquirer needs to have already planned for this eventuality.

Government unique is the conceptual opposite of COTS. An item is government unique when the only source or user of the item is the government. An item is a discrete unit that can be individually acquired for the logistical support of a system. A system, in this definition, is the higher-level mission component for which the item is procured. For example, an aircraft and its support equipment are a system, but a radio installed in the aircraft is an item.

Whenever a manufacturer discontinues or makes a change to a COTS item, the item can become government unique. When the manufacturer changes the item, if the government does not either acquire the variant or reflect the change in the systems incorporating the item and in the systems' documentation, the original becomes government unique. After a manufacturer makes a change to an item, the government might be able to purchase and use the new variant without any negative effect on the system. In this case, though the original item is now government unique, the change would not affect the form, fit, interface, or mission characteristics of the device. Unfortunately, manufacturers' changes routinely affect these characteristics, and the effects of these COTS item changes on the systems incorporating them are significant.

The problems of changing form, fit, and interface should be obvious; if the variant item is to be installed and operate correctly, these characteristics generally cannot change. To accommodate form, fit, and interface changes, the acquirer must usually make modifications to the system. Modifications are costly and usually result in the original item becoming obsolete. Changes to mission characteristics do not necessarily result in system modifications. However, if they affect the overall performance or capability of the system, they can cause significant problems. For example, if the new item has an operating temperature range less than that of the original, the system could fail when used in an environment where temperatures exceed operating limits.

Although configuration changes can cause create [sic] in a logistics program, the most devastating cause of government uniqueness occurs when a manufacturer discontinues an item. Figure 2 shows that, for a large number of COTS acquisitions, this is inevitable.


The life of a typical military acquisition exceeds 20 years, yet the life of a typical civil product, especially in electronics, is much less. From our own experience, we know it is almost impossible to purchase an ancient Z80-based computer, but right now, the Z80 lives on in the Air Force's AP-102 computer. This problem is not isolated to the electronics industry. For example, electronic gauges are replacing aviation steam gauges, the mechanical gauges on instrument panels. As a result, sources for mechanical components are becoming scarce, and the components are difficult to obtain.

The concepts outlined provide the definitive framework under which COTS must be understood. Without notice, the manufacturer is free to make changes to or discontinue production of the COTS item. As long as the manufacturer's item changes do not affect characteristics or logistics supply, the acquirer has no problem. When changes do affect form, fit, interface, mission characteristics, or logistics supply, these changes become a significant problem for any COTS acquisition. This is especially true for aviation COTS. Two specific difficulties, airworthiness and forced modifications, result from manufacturers' changes to aviation COTS.

Airworthiness is the primary safety characteristic of any aircraft. It is the primary element proven in the testing of the aircraft. The Federal Aviation Administration (FAA) certifies the airworthiness of most COTS items for aircraft, and these items must be certified in the system as well as individually. Military system certification, except for FAA-certified aircraft, is done wholly by the aircraft's configuration management (CM) authority. In the Air Force, this authority is the single manager. This means that a simple change in mission characteristics, including improved functionality, will always drive a recertification of the aircraft. This recertification can range from a paper review to a full flight test. The rate of change in COTS items is significant, especially for aviation COTS, and considering that rate of change, frequent recertification is a daunting prospect for the CM authority. In addition, COTS item changes can also drive changes to the specifications and technical data of any system on which these items are installed.

The other difficulty for aviation COTS, which also affects any system, is forced modifications. A forced modification is a system modification caused by a change of form, fit, interface, function, mission characteristic, or logistics supply. When logistics supply is affected, the acquirer must support the discontinued item or find a replacement. The latter may force a modification.

More common in aviation COTS is an FAA-directed (airworthiness directive [AD]) [3] change to an item. These directives are FAA regulation-based orders that mandate a change to an aviation item or system. Airworthiness directives are regulatory in nature, and no person may operate a product to which an airworthiness directive applies except in accordance with the requirements of that airworthiness directive. [4] The manufacturer has two choices in implementing the AD: discontinue the product or make the required change. The user of the item also has two choices: get a replacement product, if available, or make the changes required by the directive. When the change affects the form, fit, or interface of the item, an AD forces a modification to the system. For FAA-certified aircraft, the system must also receive FAA flight certification. For government-certified aircraft, the CM authority must modify the system and certify airworthiness.

However, the government is under no obligation to change its COTS items to accommodate an AD. If the government does not change a COTS item to comply with an AD, the item becomes government unique. Because the government self-certifies, non-FAA-certified government aircraft commonly do not make AD-directed changes. Further, because in many cases the government does not subscribe to technical changes from manufacturers, the CM authority may not be aware of ADs that pertain to a system's components. This problem is exacerbated when the CM has established a depot for a COTS acquisition and is, in that case, supporting the component without knowledge of, or real commonality with, the original item. Usually, ADs affecting well-established air vehicles are issued more than once a year, and thousands of ADs may affect a single aircraft model.

All this boils down to the fact that, for aviation, a COTS item will become government unique in a very short period of time - from a few months to a year - after the acquisition of the item. (continued on page 8)

Figure 2. COTS Obsolescence [2]

Tutorial: The Central Limit Theorem

By: Anthony Coppola, Reliability Analysis Center

An interesting and useful statistical phenomenon is the following. First, assume that a distribution for a random variable (time to failure, diameter of a ball bearing, percent of voters who want more gun control laws, etc.) has a finite mean (µ) and finite variance (σ²). Then, we measure a parameter of interest by taking the mean of measurements on a sample of size N (N times to failure, ball bearings, voters polled, etc.) and repeat this for many samples, each of size N. We would then find that the sample means tend to be distributed normally, regardless of the shape of the distribution of the random variable, and the fit to a normal becomes better as the sample size N (the number of measurements in one sample, not the number of samples taken) increases. Further, the mean of this normal distribution of sample means will be equal to the mean of the distribution of the random variable, and the variance of the normal distribution of sample means will be equal to σ²/N, the variance of the distribution of the random variable divided by the sample size (again, the number of measurements in one sample). This phenomenon is called the central limit theorem, and it permits us to estimate parameters for a distribution of interest using sample data.

Analysis of normal distributions is facilitated because any normal distribution can be converted to a standard normal distribution, in which the mean µ = 0 and the standard deviation σ (the square root of the variance) = 1. Conversion is accomplished using Equation 1:

z = (x - µ)/σ    (1)

where:
z = a point on the horizontal axis of the standard normal distribution
x = the corresponding data point on the normal distribution being converted
µ = the mean of the distribution being converted
σ = the standard deviation of the distribution being converted

For any value (z) of the standard normal distribution, we can obtain the percent of the area of the curve from -∞ to z, from z to ∞, from 0 to z, or from -z to z. These are listed in widely available tables. Table 1 contains excerpted data from a table of the standard normal distribution.

Table 1: Standard Normal Data

Value of z    Area between -z and z    Area from z to ∞
1.28          0.80                     0.10
1.645         0.90                     0.05
1.96          0.95                     0.025
2.33          0.98                     0.01
2.58          0.99                     0.005

The standard normal tables give us the means to determine confidence in quality measured from samples (or in opinion polling; the same methods apply). For example, suppose we take a sample from a lot of parts and find a certain proportion defective. We know the size of the sample (N) and the proportion defective of the sample (p'). What we want to determine is the proportion defective in the entire lot (p), or more specifically, a range of values that will contain the true value of p with a known probability (the predetermined confidence). Since a part is either defective or not, the number of defectives follows a binomial distribution. However, p' is a sample mean and so, by the central limit theorem, is distributed normally, with the mean of the distribution = p. To convert our measured value of p' to a standard normal value, z, we note:

z = (p' - p)/σ    (2)

where σ is the standard deviation of the distribution of sample means. σ is the square root of the variance of the sample distribution, which, per the central limit theorem, is equal to the variance of the parent binomial distribution divided by the sample size. Since the variance of a binomial distribution is p(1 - p), the variance of the distribution of the sample means is p(1 - p)/N, and σ is the square root of p(1 - p)/N. Hence:

z = (p' - p)/√[p(1 - p)/N]    (3)

Since we do not know p, we cannot determine the standard deviation (the denominator of Equation 3). However, we can estimate it by using the measured value p' for p. Doing so yields an expression called the standard error, and Equation 3 becomes:

z = (p' - p)/√[p'(1 - p')/N]    (4)

Rearranging the terms results in Equation 5:

p = p' - z√[p'(1 - p')/N]    (5)

From Table 1, we find that 90% of the area under the standard normal curve is between the values z = -1.645 and z = 1.645.


This means there is a 90% probability that a randomly selected value of z would be between -1.645 and 1.645. This, in turn, implies that it is 90% probable that p would be in a range defined by Equations 6 and 7:

LL = lower limit of range = p' - 1.645√[p'(1 - p')/N]    (6)
UL = upper limit of range = p' + 1.645√[p'(1 - p')/N]    (7)

For example, if we took a sample of 1,000 parts and found 500 defective, p' = .5, and we would be 90% sure that p, the proportion defective in the parent population, was actually in the range:

LL = (.5 - 1.645√[.5(1 - .5)/1000]) to UL = (.5 + 1.645√[.5(1 - .5)/1000])
   = (.5 - 1.645√.00025) to (.5 + 1.645√.00025)
   = (.5 - 1.645[.0158]) to (.5 + 1.645[.0158])
   = (.5 - .026) to (.5 + .026)
   = .474 to .526    (8)

Thus, 1,000 samples allow us to determine the value of p to be .5 plus or minus .026, which is less than a 3% error. Note that this is 3% of the possible range of values for p, not 3% of the estimated value of p. If, instead of counting defectives, we were polling voters and found 500 out of 1,000 polled in favor of some government action, we would report the results as 50% in favor with a margin of error of 3%. While the measured value p' will affect the margin of error, the biggest influence on it is the sample size. A sample size of 100 with p' measured at .5 will result in a margin of error, obtained by using Equations 6 and 7, of .08225, or just over 8%. Similarly, a sample size of 10,000 will reduce the margin of error to .00825, or less than 1%, for a 90% confidence. The user can trade off between confidence, margin of error, and sample size, based on a normal distribution of sample means, no matter what the underlying distribution of the parameter of interest, thanks to the central limit theorem.

For a user-friendly introduction to statistics and its application to reliability engineering, try Practical Statistical Tools for the Reliability Engineer, RAC Product code STAT.
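For readers who want to experiment with the trade-off among confidence, margin of error, and sample size, the following minimal Python sketch reproduces the worked example using only the standard library and adds a quick Monte Carlo check of the 90% coverage claim. The function name and simulation parameters are ours, not part of the tutorial.

import math
import random

def confidence_interval(p_hat: float, n: int, z: float = 1.645):
    """90% interval from Equations 6 and 7: p' +/- z*sqrt(p'(1-p')/N)."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Reproduce the worked example: 500 defectives found in 1,000 parts.
lo, hi = confidence_interval(0.5, 1000)
print(f"90% interval: {lo:.3f} to {hi:.3f}")   # 0.474 to 0.526

# Monte Carlo check: draw many samples of N = 1,000 Bernoulli(p) parts
# and count how often the interval traps the true p.
p_true, n, trials, hits = 0.5, 1000, 2000, 0
for _ in range(trials):
    p_hat = sum(random.random() < p_true for _ in range(n)) / n
    lo, hi = confidence_interval(p_hat, n)
    hits += lo <= p_true <= hi
print(f"observed coverage: {hits / trials:.2%}")  # close to 90%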

The Problem with Aviation COTS (continued from page 5)

Government uniqueness means forced review, modification, support changes, and recertification when the change is recognized - or blissful ignorance and risk if the change is not recognized.

COTS Support Strategies

What can be done to prevent these problems for aviation systems specifically and all systems generally? One solution has been mentioned, and it has been applied with varying degrees of success since the first acquisition of COTS items.

Depot. This approach is the acknowledgment of an item's potential government uniqueness before the manufacturer makes any changes. In this strategy, the acquirer purchases spares and builds a government depot activity to support the item. This solution does take advantage of the COTS item's commercial development, but the overall cost savings may not be significant because the longest tail, the support tail, is at least as long as that of any normal government item development. In fact, the support tail may be costlier because the government has not been involved in the item's development. Many programs use this strategy; the C-130 improved auxiliary power unit program is one example.

Lifetime Spares. Another similar solution is to purchase enough spares for the total life of the system and item. The AP-102 computer program used this strategy to ensure sufficient Z80 chips to support the life of the system.

Again, this is not an optimum solution because it usually increases the item's logistics tail. In this case, if the item's life expectancy is less than predicted or the item's life is extended, the government has no recourse other than to entirely replace the item or to develop a support capability.

These two solutions, government depot and lifetime spares buy, prevent forced modifications and subsequent airworthiness certification requirements. They can also introduce risk. In addition, they defeat two major potential advantages of COTS: the ability to reduce the support tail and the ability to take advantage of future commercial developments in the item. There are four other solutions to these problems that do take full advantage of the possibilities of COTS acquisition, but each is fraught with its own risk. Each of these solutions is a variant of what is commonly known as contractor logistic support (CLS).

Purchase Technical Information. In the first alternative, the acquirer can purchase the servicing information support of the manufacturer. This allows the CM authority to make decisions based on changes to the item. If the CM authority knows of a manufacturer's changes to an item, the CM can choose to acquire a replacement or modify the system as required to allow continued use of the variant item. The CM has three options. First, when an item changes and the decision is made to replace the item, the CM must acquire and certify the new item.


Second, if the item is retained with changes, the CM must certify and possibly change the system. And third, if a decision is made not to make any changes to the item, the CM must set up government-unique support. The advantages of retention or replacement (options 1 and 2) are the continued COTS logistics tail and guaranteed item certification. The CM must still recertify the system. If the item is retained in its original configuration (option 3), the decision to support a government-unique item leads to a typical high-cost government logistics tail. This pick-and-choose method of systems support probably has not been used intentionally. However, after a manufacturer has made unexpected changes to a COTS component, many programs have found themselves in this situation.

Purchase Manufacturer Support. In the second alternative, the acquirer can purchase manufacturer support for the item. The risks here are similar to those of purchasing servicing information support; however, the manufacturer has more incentive to keep the item within form, fit, and interface configuration for the system. When changes in the system are required to support changes in the item, the manufacturer can aid the CM authority. This is a very common method used to support COTS.

Purchase Manufacturer Modification Support. In the third alternative, the acquirer can purchase the full, integrated support of the manufacturer. This allows the manufacturer to make changes to the system, along with changes to the item. The contractor may have some Total System Performance Responsibility (TSPR), but the CM authority must still recertify the system. The AC-130U is using this method to manage COTS in its new Integrated Weapon System Support program. This is the most common method used today to support COTS items and systems through CLS.

Purchase Full Manufacturer Support. Fourth, the acquirer can purchase full system support that would allow an integrator to automatically make changes to the system necessary to accommodate any item changes.

In this scenario, the contractor would have TSPR and would certify the weapon system. This fourth option is used primarily to support FAA-certified government aircraft. It could potentially be used to support any government aircraft or system incorporating COTS items.

The message should be plain. COTS acquisitions lead the acquirer down one of two support paths: government-unique, high-cost logistics or COTS manufacturer support. Both of these paths involve risk and guarantee future costs for any system incorporating COTS items. The potential of COTS acquisitions is embodied in lower development, initial acquisition, and support costs. That potential must be balanced against the knowledge that COTS acquisitions will either force modifications and recertifications or lead to a typical government-unique logistics tail. COTS for aviation is a viable method of aircraft and aviation acquisition, but it is not a simple solution. It requires careful planning and forethought that must be incorporated into any program contemplating a COTS acquisition.

Notes

1. John F. Phillips, DUSD (L), September 1996, and John W. Jones, Ed., Integrated Logistics Support Handbook, 2nd Ed., 1995.
2. Joint STARS AFPEO/C3 briefing.
3. Federal Aviation Administration, Part 39, Airworthiness Directives, Federal Aviation Regulations. Washington, DC: Government Printing Office, February 1996.
4. Ibid.

Lieutenant Colonel Alford is an aeronautical test policy manager at the Air Force Materiel Command.

Editor's Note: Another article on COTS appeared in the March 2000 issue of National Defense. The article, "Commercial-Off-the-Shelf Military Systems: Myth vs. Reality," discusses the risks associated with over-reliance on COTS technology. The "myth" is that users expect that systems will be fielded much more quickly when COTS is used. Philip E. Coyle, the Pentagon's director of operational test and evaluation, cautions that military equipment "is never completely off-the-shelf." Many of the issues raised by Lieutenant Colonel Alford and Mr. Coyle are addressed in a forthcoming handbook from the RAC, titled Supporting Commercial Products in Military Applications. Now available from the RAC is SELECT, a software-based tool to assist in evaluating COTS for military applications.


ASQ Certification Program

The American Society for Quality has a comprehensive certification program for those professionals working in the fields of quality and reliability. The following certifications can be earned:

- Certified Quality Engineer (CQE)
- Certified Software Quality Engineer (CSQE)
- Certified Quality Auditor (CQA)
- Certified Mechanical Inspector (CMI)
- Certified Quality Technician (CQT)
- Certified Reliability Engineer (CRE)
- Certified Quality Manager (CQM)

Of the total of 85,857 ASQ certifications, the smallest number, 3,118 or 3.6%, have been earned in Reliability Engineering. The RAC recommends that all engineers working in reliability consider earning their CRE. Certification is viewed by many managers and customers as a mark of excellence. It indicates that the certified individual has the knowledge and background essential to his or her field of endeavor. Certification is not only an investment in an individual's career but can also enhance the future sales of employers. As stated on the ASQ web site, the requirements for a CRE are ". . . eight years of on-the-job experience in one or more of the areas of the Certified Reliability Engineer Body of Knowledge . . ." and "proof of professionalism," and "each certification candidate must pass a written examination that consists of multiple choice questions that measure comprehension of the Body of Knowledge." For a complete discussion of the certification requirements and to learn more about the certification program, visit the ASQ web site: go to www.asq.org, click on Standards and Certification, and then click on ASQ Certification. The Reliability Analysis Center is proud that five staff members - Patrick Hetherington, Ned Criscimagna, Dave Dylis, Bill Wessels, and Dave Russell - are CREs.

Error Noted in 1st Quarter 2000 RAC Journal

(continued from page 3)

Mr. Coppola's reply: I plead guilty to a lesser charge. My critic is correct that the binomial is the distribution to use in analyzing tests where failures are distributed according to the Weibull distribution (I use it in my statistics book, Practical Statistical Tools for the Reliability Engineer, and in a tutorial, "Test Risks, Confidence and OC Curves," to be published in a future issue of the RAC Journal). However, using Equation 3 and the Poisson summation approximates the summation of probabilities based on the binomial and works satisfactorily when the expected number of failures is small. I did not state this, and I compounded the problem with an example that has a relatively large expected number of failures (5). To illustrate the difference, my method calculates the probability of having two failures or less at 0.124, while a binomial summation calculates it at 0.148. If I had used an example with the expected number of failures equal to 0.5 (by assuming θ = 316 instead of 100), the Poisson summation gives the probability of two failures or less as 0.985 vs. 0.987 for the binomial summation, which I think is reasonably close. My apologies for being unclear on this point.

I used the Poisson summation rather than the binomial as a way of leading in to the discussion of the test of MTBF for a constant failure rate, where the Poisson summation applies. I probably should not have taken this approach, since it does, as my critic states, confuse assumptions about repair, non-repair, HPP, and NHPP. This may concern statisticians more than the average reader, but even the latter should have been told clearly that the term MTBF is properly used only with reference to a repairable system with a constant failure rate. I thank Dave Coit for pointing out the discrepancy.

However, I deny that everything after Equations 1 and 2 is incorrect. I maintain that Equation 4 and the discussion on testing for a constant failure rate are correct, and even one who disputes the derivation of Equation 4 must agree with the answer.
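The Poisson summations quoted in the reply are easy to verify. The sketch below computes P(X ≤ 2) for Poisson-distributed failure counts with means 5 and 0.5; the binomial figures Mr. Coppola quotes depend on the sample size and per-item failure probability of the original example, which are not restated here, so only the Poisson side is computed.

import math

def poisson_cdf(k: int, mean: float) -> float:
    """P(X <= k) for a Poisson-distributed failure count."""
    return sum(math.exp(-mean) * mean**i / math.factorial(i)
               for i in range(k + 1))

# Probability of two failures or fewer:
print(f"{poisson_cdf(2, 5.0):.4f}")   # 0.1247 (quoted as 0.124)
print(f"{poisson_cdf(2, 0.5):.4f}")   # 0.9856 (quoted as 0.985)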

The appearance of advertising in this publication does not constitute endorsement by the Department of Defense or RAC of the products or services advertised.


Supportability and the RAC

For nearly two years, the Supportability Information and Decision Analysis Center (SIDAC) functioned as a part of the Reliability Analysis Center (RAC), operated by the IIT Research Institute (IITRI). In August 1998, the Air Force Materiel Command contracted with IITRI to operate SIDAC because the technical scopes of the two centers are so closely related. Previously operated under a separate five-year contract, SIDAC retained its identity but benefited from the expertise and information holdings of RAC, along with a proven Information Analysis Center infrastructure. In May 2000, SIDAC operations were totally integrated into the RAC, and the mission of the RAC was correspondingly expanded. The RAC now addresses all aspects of reliability, maintainability, quality, and supportability.

At first glance, the advantage of integrating supportability into the RAC mission may not be apparent. Historically, RAC's focus has been on reliability, maintainability, and quality (RM&Q). Even before assuming specific responsibility for supportability, however, RAC recognized the growing importance of affordability and the effect of a shrinking acquisition budget on re-capitalization of the defense force structure. For many years, the operating and support (O&S) costs associated with current weapon systems have been estimated to consume 60-70% of the defense budget, so finding adequate funds to develop and acquire new systems has always been a challenge - a shrinking budget has simply made the challenge more daunting. If the costs required to operate and maintain old systems can be reduced as a percentage of the total budget, additional funds could be made available for new systems.

As a result of the decrease in total funding and the fact that O&S costs increase as systems age, the military services are taking three separate courses of action. The first is to increase the focus on planned life extension. Under planned life extension, well-defined processes and associated decision tools are used to continually evaluate the O&S costs of a system, together with the system's effectiveness, to determine when and if life extension is a viable option. (Life extension is discussed in the RAC publication Service Life Extension Assessment, SLEA in the RAC catalog.) The second course of action is to reduce O&S costs by improving the processes of O&S. Basically, the services are doing this by streamlining the support infrastructures, introducing new technology, and implementing more efficient management systems. Many of the initiatives associated with this course of action are discussed in the RAC/SIDAC product Survey of Military Logistics Initiatives (LOGI in the RAC catalog). Finally, for new system development, and even in early Science and Technology efforts, affordability has been made an important criterion for success.

Reliability, maintainability, and quality all affect system effectiveness and life cycle costs. Support concepts, the logistics infrastructure, and other supportability considerations also affect effectiveness and costs. Finally, RM&Q influence the inherent supportability of a system. So the marriage of RAC and SIDAC is extremely appropriate and helps the RAC continue to provide the technical support needed by the military services. The unchanging and ultimate objective of this support is to ensure that American warfighters have the right systems in the right place at the right time.
For information on how RAC can help solve your supportability problems, contact Patrick Hetherington at (315) 339-7084 ([email protected]) or Ned Criscimagna at (301) 918-1526 ([email protected]).

Best Manufacturing Practices (BMP) Program

Since 1985, the Office of Naval Research has sponsored the BMP Program. Under this program, the Navy works to identify, document, and publicize best practices in design, test, production, facilities, logistics, and management. These best practices are identified using in-depth, on-site surveys, conducted on a voluntary basis, of companies, military organizations, and government facilities. The stated purpose of identifying these best practices is to increase the quality, reliability, and maintainability of goods produced by the United States. As of October 1999, 117 surveys had been conducted and published. Many of the reports of these surveys are available from the BMP web site at http://www.bmpcoe.org. For reports not available from the web site, or for additional information on the Best Manufacturing Practices Program, contact:

Best Manufacturing Practices Program
4321 Hartwick Road, Suite 400, College Park, MD 20740
Attention: Mr. Ernie Renner, Director
Tel: (800) 789-8180
Fax: (301) 403-8180
Email: [email protected]

In 1998, a similar program was begun by the Society of Automotive Engineers (SAE), concentrating specifically on the automotive manufacturing industry. For information on the SAE program, contact Roy Trent at (248) 652-8461 ([email protected]).


Industry News

CAESAR is Alive and Well

No, we don't mean Julius Caesar. CAESAR stands for Civilian American and European Surface Anthropometry Resource. Begun in 1997, the purpose of the CAESAR project is to develop a database of human physical dimensions. Eventually, 8,000 male and female subjects (4,000 from the United States and 4,000 from Europe) between the ages of 18 and 65 and having various heights and weights will be scanned using the US Air Force's whole-body laser scanner. The resulting digital information will be stored in a database. Scanning of American subjects began in 1998; scanning of European subjects began in the Netherlands in 1999. The project will provide the detailed data needed to complement guidance found in documents such as MIL-HDBK-759C(2), Human Engineering Design Guidelines, 31 March 1998, and MIL-STD-1472F, Human Engineering, 23 August 1999.

CAESAR is a Cooperative Research Project of the Society of Automotive Engineers. Currently, 31 industry members are partners in the project. Each partner will have access to $6 million worth of research for an investment of $40,000. For more information or to join CAESAR, contact Gary Pollak at 724-772-7196 (email: [email protected]). You can also get more information by visiting http://www.sae.org/technicalcommittees/caesar.htm.

IEEE's Human Performance Technology Committee

The Human Performance Technology Committee, formerly the Human Performance Reliability Committee, has the principal objective of increasing the sensitivity of system designers and program managers to the important impact of humans on system reliability. The Committee achieves this objective through the following activities: standards and guidance document development, presentations and tutorials, and publications in a variety of journals and magazines. The Committee members represent a broad range of scientific expertise, engineering expertise, and program management. This mix of perspectives ensures that Committee products are applicable to many types of customers. The most recent Committee product is the IEEE videotape tutorial entitled Designing Systems for Reliable Human Performance. The Committee plans the following projects for 2000:

- Update the committee description, if necessary, to reflect the new Technical Operations organization
- Update and augment the PAR package for the proposed human reliability standard
- Provide annual technology report input
- Evaluate the feasibility of Web-based training

For more information on the Committee, contact the Committee Chair, Dr. Kenneth P. LaSala, at:

703 Cannon Road
Silver Spring, MD 20904-3323
301-713-3352 (Work phone)
301-713-4149 (Fax)
[email protected]

International Aerospace Quality Systems Standard Published

AS9100, Quality Systems - Aerospace - Model for Quality Assurance in Design, Development, Production, Installation, and Servicing, has been published by the Society of Automotive Engineers (SAE). The new SAE standard was developed from ISO 9001, AS9000, and EN9000-1 and is intended to be a globally harmonized document for use by aerospace companies worldwide. Technically, AS9100 is equivalent to prEN9100, published by the European Association of Aerospace Industries (AECMA). It supersedes AS9000, Aerospace Basic Quality System Standard, published by SAE in 1997. To order a copy of AS9100, contact SAE Customer Service at 724-776-4970 (email: [email protected]).

Chair of SAE G-11 Steps Down

Jerrell Stracener has stepped down as Chair of the SAE Reliability, Maintainability, Supportability and Logistics (RMSL) Division (G-11), a position he held for over seven years. At the March meeting of G-11, Dr. Suren Singhal formally assumed the responsibilities and duties of Chairman of G-11. He became the fourth chairman of G-11 since its founding in 1985.

Jerrell retired from Northrop Grumman on 1 February 2000 and joined the School of Engineering & Applied Science at Southern Methodist University (SMU). There he is developing and teaching courses in Probability & Statistics, Reliability Engineering, Quality Engineering & Control, Systems Analysis & Optimization, and Concurrent Engineering, as well as directing student research. In addition, he is assisting with development of the Systems Engineering Program, a Reliability Engineering track, and a Logistics Engineering track. Despite what promises to be a demanding new career, Jerrell plans to continue in an active role in G-11 as chair of its Executive Committee, with a focus on strategic planning and executive-level promotion of G-11. The new G-11 Chair, Suren Singhal, has been a major contributor to G-11 through his tireless efforts and leadership of the G-11 Probabilistic Methods Committee and its Leadership Council.


SAE Publishes Revised R&M Guideline

SAE M-110 is the Reliability and Maintainability Guideline for Manufacturing Machinery and Equipment. Originally published in 1993, the guideline's 1999 revision was jointly managed by the Society of Automotive Engineers (SAE) and the National Center for Manufacturing Sciences. The guideline is listed as a QS-9000 related manual, so all suppliers who are QS-9000 certified, or desire to be certified, should review the revision. M-110 is available from SAE. To order a copy, contact SAE at 724-776-4970 (email: [email protected]).

PRISM© in Ten Easy Steps

Using PRISM©: Making Maximum Use of RACRates© Models and Default Values

By: Norman Fuqua, Reliability Analysis Center

RAC's PRISM© Reliability Assessment Tool is designed to accommodate various levels of sophistication in performing reliability assessments, from the parts level to the system level. Both the novice and the journeyman practitioner can use PRISM© effectively through its built-in tailoring capability. At the most elemental level of assessment, the RACRates© Models and PRISM© system-level built-in default values are used. These default values represent typical parts and environmental use conditions, derived from RAC data. This approach to part-level prediction is roughly equivalent to the old MIL-HDBK-217 Parts Count method. Such an assessment is the focus of this article.

A PRISM© reliability assessment has three basic parts. First, you must define your system, as explained in Steps 1 to 3. Then, you must define your parts, as discussed in Steps 4 to 7. Finally, you can customize your assessment, as outlined in Steps 8 to 10. Once complete, a variety of reporting options are available to output PRISM© analysis results.

Define Your System

The RAC developed its unique RACRates© Models by apportioning observed component failure rates, from RAC databases of over 20 trillion part hours, among the four primary failure cause categories: a) operational, b) non-operational, c) power/temperature cycling, and d) electrical overstress. By addressing the failure rate contributions separately, the contribution of discrete factors, such as temperature change and vibration level, may be assessed individually. Since the operating environment and the use profile have a major impact upon system reliability, the predominant environmental conditions are direct inputs to the RACRates© Models. Thus, to properly address each of these failure cause categories, one must begin by appropriately defining both the system operating environment and the operating profile.

Step 1 - Select the appropriate Operating Environment. The 40 different choices for environment are:

- Airborne (11 choices, from Fixed Wing, Inhabited to Missile Launch)
- Ground (27 choices, from Ground Stationary to Ground, Mobile, Tracked)
- Naval (2 choices, Shipboard or Submarine)

The environment you select sets default values for five critical parameters of your system: Operating Temperature, Dormant Temperature, Relative Humidity, Vibration, and Year of Manufacture.

Step 2 - Select the Operational Profile. This selection establishes two critical parameters for your system: the duty cycle and the cycling rate. The eight operational profile choices are Automotive, Industrial, Computer, Consumer, Commercial Aircraft, Military Aircraft, Military Ground, and Telecommunications. Selecting the operational profile determines the default values for Duty Cycle, i.e., operating ON time, and Cycling Rate.

After defining the environment and the operating profile, we can build the system tree structure and populate it with components.

Step 3 - Define and insert all of the applicable assemblies and sub-assemblies into your system. Each assembly and sub-assembly will become a major or minor branch of the system tree. PRISM© has no inherent limitations on the number and location of the branches on this tree, as long as they are all in a basic hierarchical pattern.

Step 4 - Define and insert all of the applicable components into your system. Continuing with our tree analogy, these individual components, or piece parts, are the leaves spreading from the various branches.

Define Your Parts

Unique RACRates© Models are used for each different kind of part, so we need to properly categorize each of our parts to ensure that the most appropriate model is used for a given part. This categorization is a two-step process: first, the general part category, e.g., capacitor, is selected; then, the specific part type, e.g., ceramic, is identified. (continued on page 17)
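To make the system-tree idea of Steps 3 and 4 concrete, here is a minimal Python sketch of a hierarchical assembly structure whose leaf parts carry failure rates, rolled up by simple summation in the spirit of a parts-count prediction. The class names and failure rates are invented for illustration; this is emphatically not the RACRates© mathematics, which apportions rates among operational, non-operational, cycling, and overstress contributions.

from dataclasses import dataclass, field

# A toy system tree: assemblies are branches, parts are leaves. The failure
# rates and the simple summation are placeholders for illustration only;
# they are NOT the RACRates models.

@dataclass
class Part:
    name: str
    failure_rate: float  # failures per million hours (hypothetical values)

@dataclass
class Assembly:
    name: str
    parts: list = field(default_factory=list)
    subs: list = field(default_factory=list)

    def failure_rate(self) -> float:
        """Parts-count style roll-up: sum the leaves across all branches."""
        return (sum(p.failure_rate for p in self.parts)
                + sum(a.failure_rate() for a in self.subs))

power = Assembly("power supply", parts=[Part("ceramic capacitor C1", 0.02),
                                        Part("diode CR1", 0.05)])
system = Assembly("system", parts=[Part("IC U1", 0.10)], subs=[power])
print(f"{system.failure_rate():.2f} failures per 10^6 hours")  # 0.17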


Reliability Training

ReliaSoft's Reliability Seminars

Each of ReliaSoft's Seminars is composed of three Course Sections. Course Section 1 provides the foundation, theory, and software instruction to successfully apply Reliability and Life Data Analysis. Course Sections 2 and 3 extend the foundation into the advanced subjects of Accelerated Life Testing and System Reliability, respectively. Depending on your background and needs, you can choose to attend all three sections or any combination of the three.

Master the Subject

Life Data Analysis: Course Section 1 provides a comprehensive treatment of the subject of Life Data Analysis as it applies to Reliability Engineering. This section also provides an in-depth overview of the underlying statistical theory and methods, as well as a complete overview and step-by-step guidance on the use of Weibull++.

Accelerated Life Testing: Course Section 2 is devoted to one of the hottest subjects in Reliability Engineering, Accelerated Life Testing. This section provides the theory required to understand, evaluate, and plan accelerated life tests, and instruction in the use of ReliaSoft's ALTA.

System Reliability: Course Section 3 builds upon previously learned concepts and examines System Reliability, Maintainability, and Availability theory and applications. BlockSim, ReliaSoft's latest software for ascertaining and optimizing System Reliability and Maintainability, is utilized.

Seminar 2000 Dates & Locations

August 21-25: Virginia Beach, VA
October 16-20: Tucson, AZ
December 4-8: Reno, NV

Master the Tools

Weibull++: Designed for Reliability Life Data Analysis utilizing multiple distributions, including all forms of the Weibull distribution, with a clear and concise interface geared toward Reliability Engineering.

ALTA: Designed for Accelerated Life Test Analysis utilizing multiple distributions and life-stress relationships. Allows you to correctly make use-life reliability predictions from accelerated life tests and to correctly compute acceleration factors.

BlockSim: Designed for Complex System Reliability Analysis utilizing a Reliability Block Diagram approach. Get complete results, graphs, and reports, including algebraic solutions and predictions for complex systems.

Measure the Benefits

Reduce Life Cycle Costs - Decrease Defect Rates - Empower Informed Decisions - Improve Product Design - Quickly Identify Trends - Reduce Warranty Costs - Reduce Maintenance Costs - Shorten Test Time

For more information or to register for any of these seminars, call 888-722-7522 or 952-953-3292, or fax 952-953-2929. E-mail: [email protected]. Web site: www.ReliaSoft.com/seminars


Step 5 - Select the basic Part Category. The eight part categories are Capacitor, Diode, Integrated Circuit (IC), Resistor, Software, Thyristor, Transistor and Other. One of two approaches is used at this juncture:

Approach 1: The part category selection establishes the applicable RACRates™ Model default values for these input parameters: Capacitance Value (capacitors only), Stress Level, Temperature Rise and Rated Power (resistors only).

Approach 2: If no RACRates™ Model is available (i.e., you have chosen the part category Other), you must use either RAC data (EPRD/NPRD) or insert a user-defined failure rate. Electronic Parts Reliability Data (EPRD) is a failure rate database of commercial and military electronic components. EPRD contains failure data on thousands of integrated circuits, discrete semiconductors (diodes, transistors, thyristors, optoelectronic devices), resistors, capacitors, inductors and transformers, all obtained from field usage data. Nonelectronic Parts Reliability Data (NPRD) is a database with failure rates for a wide variety of component types, including mechanical, electromechanical, and discrete electronic parts and assemblies, and is widely recognized as an industry de facto standard for mechanical part failure rates. NPRD provides summary-level failure rates for numerous detailed part categories by environment and quality level. This database reflects field experience in military and commercial applications, concentrating on items not covered by other failure rate sources or the RACRates™ Models. The database has data for more than 25,000 parts.

Step 6 - Select your Part Type. Part types are the sub-categories of the selected Part Category. After establishing the basic part category, we must now determine a specific part type to ensure that the correct RACRates™ Model is used. The part type selection determines which of the fifty-eight RACRates™ Models is used for a specific part. The currently available types and number of models are: Capacitor (14), Diode (15), IC (3), Resistor (13), Software (1), Thyristor (3) and Transistor (9).

The RACRates™ Software model is based upon the Software Engineering Institute's Capability Maturity Model (CMM) assessment. Default values are provided for the following parameters: Time to Stabilization, Fault Activation, Fault Latency and Average % Severity. A minimum number of additional inputs, e.g., the number of lines of source code, are required to use this model.

Step 7 - Enhance your Part Level Data. These data would typically include items such as:

1) Reference Designator, e.g., C1 (to identify a specific circuit location).
2) Industry P/N, Original Equipment Manufacturer (OEM) P/N, Specification Number, or Federal Stock Number.
3) Part Description. (This field is essentially a repeat of the Part Type but may contain additional information, especially when using data sources other than the RACRates™ Models.)
4) Quantity. (Required if more than one part is used.)
5) Manufacturer. (Additional data in this field may be needed for any future enhancements to the built-in Manufacturers Library.)

Note: The data in at least one of the four part number fields in item (2) and the data in item (3) are required inputs to build a Component Library for future use.
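For readers who prefer code, the source-selection logic of Steps 5 through 7 can be pictured as follows. This is an illustrative sketch with hypothetical names and rates; it is not an interface to, or an implementation of, the PRISM software.

```python
# Hedged sketch of the Step 5-7 decision logic: use a RACRates-style
# built-in model when the part category has one, otherwise fall back to
# a database (EPRD/NPRD) rate or a user-defined rate. All names and
# rates here are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

RACRATES_CATEGORIES = {"Capacitor", "Diode", "IC", "Resistor",
                       "Software", "Thyristor", "Transistor"}

@dataclass
class Component:
    ref_designator: str          # e.g., "C1"
    part_number: str             # industry or OEM P/N
    description: str
    category: str                # one of the eight part categories
    quantity: int = 1
    database_rate: Optional[float] = None   # EPRD/NPRD lookup, failures/1e6 h
    user_rate: Optional[float] = None       # user-defined failure rate

def failure_rate(c: Component, model_default: float) -> float:
    """Approach 1: category has a built-in model; Approach 2: 'Other'."""
    if c.category in RACRATES_CATEGORIES:
        return model_default * c.quantity       # built-in model default
    if c.database_rate is not None:
        return c.database_rate * c.quantity     # EPRD/NPRD field data
    if c.user_rate is not None:
        return c.user_rate * c.quantity
    raise ValueError("'Other' parts need a database or user-defined rate")
```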

Customize Your Assessment

This feature allows you to more uniquely characterize and identify your system and its storage, operation, and maintenance, and thus tailor the assessment to your unique situation. Tailoring addresses: a) operating environment and profile, b) calendar baseline and change control, and c) process grade factors.

Step 8 - Modify your Environment and Operating Profile data. Here the user can change the original default values, thereby customizing the assessment to more accurately reflect the specific system and its actual operation. Default values may be individually changed for Operating Temperature, Dormant Temperature, Duty Cycle and Cycling Rate. Any environment and operating profile changes made at this point will affect the values of the defaults used in all of the applicable RACRates™ Models.

Step 9 - Establish the Calendar Baseline, Change Control, and Year of Manufacture. Establishing the calendar baseline ensures that your assessment properly reflects the ongoing part reliability improvements that are generally true throughout the electronics industry. Change control allows you to customize your assessment and incorporate different environments for different branches of your system tree, if necessary. An example might be a system in which one assembly is operating in an aircraft and another assembly (or group of assemblies) is operating in a ground stationary environment. The default value for Change Control is Allow; Don't Allow means that the environment and operating profile are locked, i.e., a component or an assembly will not accept trickle-down edit changes to its environment and operating profile parameters. The Year of Manufacture defaults to the current year, and the range of available choices is 1993 to 2005. This date establishes a calendar baseline to ensure that your assessment properly reflects the ongoing part reliability improvements that generally apply throughout the electronics industry. It is anticipated that this range will be extended in future releases of PRISM™. If any year other than 2000 is selected, it will modify the RACRates™ Model part-growth Pi-factor.
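Steps 8 and 9 amount to overriding a set of defaults before any failure-rate models are evaluated. A minimal Python sketch of that tailoring idea follows; the parameter names mirror the article, but the default values and the function are hypothetical, not PRISM's actual defaults or API.

```python
# Sketch of the Step 8/9 idea: user edits override environment and
# operating profile defaults before any models run. Values hypothetical.

ENV_PROFILE_DEFAULTS = {
    "operating_temp_c": 30.0,    # Operating Temperature
    "dormant_temp_c": 25.0,      # Dormant Temperature
    "duty_cycle": 0.3,           # fraction of calendar time operating
    "cycling_rate_per_yr": 300,  # on/off cycles per year
}

def tailored_profile(overrides: dict) -> dict:
    """Return the profile the failure-rate models would actually see."""
    unknown = set(overrides) - set(ENV_PROFILE_DEFAULTS)
    if unknown:
        raise KeyError(f"not a profile parameter: {sorted(unknown)}")
    return {**ENV_PROFILE_DEFAULTS, **overrides}

# Example: an avionics bay runs hotter and cycles more often than default.
profile = tailored_profile({"operating_temp_c": 55.0,
                            "cycling_rate_per_yr": 900})
```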




Step 10 - Establish applicable Process Grade Factors, a unique Process Grade Set, or both for your system. Critics of reliability prediction tools such as MIL-HDBK-217 have pointed out that many factors that impact system reliability are not captured by a simple part quality failure rate multiplier. The PRISM™ System Model explicitly accounts for those factors that contribute to variability, factors that traditional reliability prediction approaches ignore, by grading the process for each of the failure cause categories. The nine process grade factors in the PRISM™ System Model (the System Model Pi-factors) are: Design, Manufacturing, Parts Quality, System Management, Can-Not-Duplicate, Induced, Wearout, Growth and Infant Mortality. Each of these factors can be uniquely defined. The resulting grade for each failure cause corresponds to the level to which an organization has taken the action necessary to mitigate the occurrence of failures due to that cause. This grading is accomplished by assessing the processes in a self-audit-like fashion. Any or all failure causes can be individually assessed and graded. If individual Process Grade Factors are not addressed, or a unique Process Grade Set (a grouping of process grade factors) is not selected, then default values are assumed for each of these Process Grade Factors. The process grade factor default values reflect average or typical design, manufacturing, use and maintenance conditions.
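Conceptually, each grade acts on the failure contribution of its cause category. The sketch below shows grades applied as simple multipliers on a base rate; the actual PRISM™ System Model is more elaborate, so treat the structure and all numeric values here as hypothetical. The factor names come from the article.

```python
# Hedged sketch of Step 10: process grades acting as multipliers over
# the nine failure-cause categories. Illustrative only; values invented.

PI_FACTOR_DEFAULTS = {
    "Design": 1.0, "Manufacturing": 1.0, "Parts Quality": 1.0,
    "System Management": 1.0, "Can-Not-Duplicate": 1.0, "Induced": 1.0,
    "Wearout": 1.0, "Growth": 1.0, "Infant Mortality": 1.0,
}

def graded_rate(base_rate: float, grades: dict[str, float]) -> float:
    """Scale a base failure rate by self-audit process grades.

    A grade below 1.0 means the process mitigates that failure cause
    better than the average organization; above 1.0, worse.
    """
    factors = {**PI_FACTOR_DEFAULTS, **grades}
    rate = base_rate
    for pi in factors.values():
        rate *= pi
    return rate

# Example: strong design process, weak manufacturing controls.
print(graded_rate(12.0, {"Design": 0.8, "Manufacturing": 1.3}))  # 12.48
```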

Output Your Assessment

The PRISM™ program offers a variety of output reports that provide summarized visibility into the details of the reliability prediction. Table 1 lists and describes the eight system reports and the three general reports currently available.

Table 1. PRISM™ System Reports

System Level Summary - Displays the current system level summary information, including a breakdown of System Level Model Parameters, Predecessor System Analysis and Observed Data.
Tree View - Displays the current system tree with failure rates for each tree item. System Level Multipliers have been applied to the System and Assembly level failure rates.
Assembly Breakdown Summary - Displays the current system by Assembly. Each Assembly is decomposed to all of its child Assemblies. Component failure rates are not provided.
Assembly Breakdown Detail - Displays the current system by Assembly. Each Assembly is decomposed to all of its child Assemblies and/or Components.
Component Detail - Displays failure rates for all components used in the active System. Where applicable, RACRates™ Model failure rate parameters are provided.
Assembly Pareto - Displays the failure rates of Assemblies and Components existing directly below the active System or selected Branch, rank ordered from highest failure rate to lowest.
Component Pareto - Displays the failure rates of all Components existing in the active System, rank ordered from highest failure rate to lowest.
Process Grade Set - Displays the selected Branch's system level model parameters based on the Process Grade Set indicated.
Component Library - Displays all the current entries listed in the Component Library.
Process Grade - Displays the questions and answers for the specific Process Grade requested.
Process Grade Set Definition - Displays the member grades for a specific Process Grade Set. The parameter values are NOT listed in this report.
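The Tree View and Pareto reports rest on a simple idea: assembly failure rates roll up from their children (adding, under a series assumption), and Pareto reports sort items from highest failure rate to lowest. The sketch below illustrates that rollup with a hypothetical system tree; it is not PRISM™ code.

```python
# Sketch of the rollup behind Tree View and the Pareto reports: assembly
# failure rates sum over their children, and Pareto output ranks items
# from highest failure rate to lowest. Structure and rates hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    rate: float = 0.0                      # component failure rate, if a leaf
    children: list["Node"] = field(default_factory=list)

    def rollup(self) -> float:
        """Failure rate of this tree item (series assumption: rates add)."""
        return self.rate + sum(child.rollup() for child in self.children)

system = Node("Radio", children=[
    Node("Power Supply", children=[Node("C1", 0.9), Node("U1", 2.1)]),
    Node("Receiver", children=[Node("Q1", 1.4), Node("CR1", 0.3)]),
])

# Assembly Pareto: rank items directly below the system, highest first.
for asm in sorted(system.children, key=lambda n: n.rollup(), reverse=True):
    print(f"{asm.name}: {asm.rollup():.1f} failures per million hours")
```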

Summary

In summary, this brief introductory tutorial has shown that PRISM™ is a versatile reliability assessment tool that can be used effectively by both the novice and the journeyman practitioner. A PRISM™ reliability assessment, making maximum use of RACRates™ Models and default values, can be performed straightforwardly using these ten easy steps.


From the Editor

Who Can You Trust?

For over 100 years, the UL mark has been an indication of product safety, first in the United States and now worldwide. The extent to which the mark is used is, according to the Underwriters Laboratories web site, http://www.ul.com/, indicated by the fact that "Each year, more than 16 billion UL marks are applied to products worldwide." It was with more than a little distress, then, that I read an article in the November 29, 1999 issue of the Washington Post titled "UL: Still Safety's Symbol?" In the article, the reporter, Caroline E. Mayer, cites several instances in which critics are questioning not only the testing performed by UL but the independence of the organization from the very manufacturers whose products it evaluates. Although some feel that the examples cited as evidence of UL's shortcomings are anecdotal and not indicative of UL's performance, the article does raise some serious issues.

The examples described in the article include halogen torchiere lamps, pop-up toasters, the Omega fire prevention sprinkler, ionization smoke alarms, and carbon monoxide alarms. Accusations against UL concerning these products range from ineffective tests to misleading the public regarding the extent of product problems. In the case of the carbon monoxide alarm, the problem was one of false alarms. According to UL, the initial threshold at which the UL standard said the alarm must sound was too low. So UL revised the standard to require a higher threshold. Critics first point out that now the sensors are not sensitive enough. Furthermore, they say that the underlying and continuing problem is that the UL standard does not require any type of reliability testing for the gas sensor, so consumers cannot be sure that the sensor is working properly.

An indication of the seriousness of the problem with the CO alarm is that in one day in 1994, the Chicago fire department was deluged with 1,800 calls to 911. Almost all were false alarms caused by an unusual weather condition in which ambient CO levels were higher than usual. By the end of the year, more than 8,600 such calls had been made, only a small percentage of which were due to harmful levels of CO. In 1997, after the UL standard (UL 2034) was revised, the Gas Research Institute (GRI), a private laboratory funded by the gas industry, tested 96 CO alarms. The results: after three months of operation, nearly 50% failed to sound when exposed to harmful levels of CO. After six months of operation, a third of the alarms failed. In 1999, the GRI released the results of a survey that showed 20 of 80 alarms tested were defective, and 12 of these 20 were non-functional at the time of purchase. The Post reported that earlier in 1999, UL announced a voluntary recall of 1 million of the alarms because they failed to sound in time or did not sound at all. The Post went on to say that UL was conducting, at its own expense, field tests of the alarms. In these tests, alarms were purchased and placed in the homes of UL employees, where the performance of the alarms was monitored over time. In one such test, 33% failed at least one of the UL-required response tests.

A real problem is that no long-term reliability testing is required of the gas sensor. A test button on the alarm only indicates whether the circuitry is working, not whether the sensor can accurately measure levels of CO. UL associate managing engineer Paul Patty, who is in charge of the CO alarm standard, states that there is no way of testing whether the sensor is working properly. The article did not indicate how Patty feels about the need for, or the ability to conduct, reliability testing. In view of the seriousness of the CO alarm problem, as well as problems with other products, the Consumer Product Safety Commission (CPSC) is doing testing and conducting investigations of its own of products carrying the UL mark.

I cannot judge the validity of the criticisms leveled at UL. But I think this very troubling situation serves to emphasize three basics of testing. First, the function of an item to be tested must be thoroughly understood. Second, failure must be clearly and precisely defined. Finally, the test conditions must be such that the test will reveal the weaknesses in the design that cause it to fail. These basics cannot be ignored, no matter who is doing the testing.

Ned Criscimagna


Calendar - Upcoming Events in Reliability

NAVSEA COTS Steering Board Workshop 2000
July 25-26, 2000 - Laurel, MD
Contact: Commander, NSWC Crane Division, Code 602, Bldg. 2940 W, Attn: COTS Workshop, 300 Highway 361, Crane, IN 47522-5001
Tel: (812) 854-4248; Email: [email protected]

35th Annual International Logistics Symposium
August 6-10, 2000 - New Orleans, LA
Contact: SOLE, 8100 Professional Place, Hyattsville, MD 20785
Tel: (301) 459-8446; Fax: (301) 459-1522; Email: [email protected]; Web: www.sole.org

Military & Aerospace/Avionics (COTS) Conference, Exhibition & Seminar
August 22-25, 2000 - Fort Collins, CO
Contact: Edward B. Hakim, The C3I, 2412 Emerson Ave., Spring Lake, NJ 07762
Tel: (732) 449-4729; Fax: (732) 449-4729; Email: [email protected]

SEI Software Engineering Conference
September 18-21, 2000 - Washington, DC
Contact: Symposium Conference Coordinator, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3890
Tel: (412) 268-3007; Fax: (412) 268-5556; Email: [email protected]; Web: www.sei.cmu.edu/products/events/symp/

ACerS Electronics Division Fall Meeting
October 8-11, 2000 - Clemson, SC
Contact: Customer Service Dept., The American Ceramic Society, Westerville, OH 43086-6136
Tel: (614) 794-5890; Fax: (614) 899-6109; Email: [email protected]; Web: www.acers.org

Durability and Damage Tolerance of Aging Aircraft Structure
October 11-13, 2000
Contact: Aerospace Short Courses, The University of Kansas Continuing Education, 12600 Quivira Road, Overland Park, KS 66213-2402
Tel: (913) 897-8500; Fax: (913) 897-8540; Email: [email protected]; Web: www.kuce.org/aero/

3rd Annual Systems Engineering & Supportability Conference
October 23-26, 2000 - San Diego, CA
Contact: Bob Rassa, Raytheon Electronic Systems
Tel: (310) 334-0764; Email: [email protected]

4th DoD Maintenance Symposium & Exhibition
October 30-November 2, 2000 - Charleston, SC
Contact: NDIA
Tel: (703) 522-1820; Fax: (703) 522-1885; Email: [email protected]; Web: register.ndia.org/interview/register.ndia

ISTFA 2000: 26th International Symposium for Testing and Failure Analysis
November 12-16, 2000 - Bellevue, WA
Contact: ISTFA Conference Administrator, ASM International, Materials Park, OH 44073-0002
Email: [email protected]; Web: www.edfas.org

HASE 2000: 5th IEEE International Symposium on High Assurance Systems Engineering
November 15-17, 2000 - Albuquerque, NM
Contact: Victor Winter, Sandia National Laboratories, PO Box 5800, Dept. 2662, Albuquerque, NM 87185-0535
Email: [email protected]; Web: www.high-assurance.org/

COMADEM 2000: Condition Monitoring & Diagnostic Engineering Management Congress & Exhibition
December 3-8, 2000 - Houston, TX
Contact: Henry C. Pusey, MFPT Society, 4193 Sudley Road, Haymarket, VA 20169-2420
Tel: (703) 754-2234; Fax: (703) 754-9743; Email: [email protected]; Web: www.mfpt.org

Year 2001 International Symposium on Product Quality and Integrity - Reliability and Maintainability Symposium (RAMS)
January 22-25, 2001 - Philadelphia, PA
Contact (paper submissions): RAMS Database Coordinator, 804 Vickers Avenue, Durham, NC 27701-3143
Email (for papers): [email protected]; Web: www.rams.org/

Also visit our Calendar web page at http://rac.iitri.org/cgirac/Areas?0


Help from RAC

Technical Support

The RAC has a technical staff of over 50 engineers with the background and expertise to help in all areas of RMS&Q. RAC can provide long- and short-term help and quick-reaction, independent analyses, and is a cost-effective source of expertise for military or commercial systems development, production, or operation and maintenance applications. These services are readily available to government and commercial organizations. The RAC provides consulting in the specialty areas listed below, spanning the three basic phases of a product's life cycle: System/Product Development, Production, and Operation and Maintenance.

· Program Tailoring and Management
· Parts Control Programs and Part Qualification
· Failure Mode, Effects & Criticality Analysis (FMECA)
· Testability and Maintainability Analysis
· Environmental Characterization
· Reliability/Maintainability Test Planning and Control
· Electrostatic Discharge (ESD) Susceptibility Analysis
· Reliability Data Collection and Analysis
· Reliability Problem Solving, Failure Reporting and Corrective Action System (FRACAS), and Reverse Engineering
· Reliability-Centered Maintenance (RCM)
· Component Obsolescence
· Environmental Stress Screening (ESS) Planning
· Application and Use of COTS/NDI
· Reliability Modeling and Numerical Assessment
· Systems/Equipment Lifetime Extension
· Fault Tree Analysis (FTA)
· Finite Element Analysis (FEA)
· Worst Case Circuit Analysis (WCCA)
· System Modernization
· Mechanical Reliability and Maintainability
· Total Quality Management
· Quantitative Analysis

Training

For over 25 years the Reliability Analysis Center (RAC) has been a world leader in reliability and quality training. RAC courses stress proven approaches and techniques for the designer, analyst, and manager. They include numerous real-world examples and encourage class involvement. In addition to open registration courses offered at several locations around the United States every few months, the Reliability Analysis Center can cost-effectively bring these or custom courses to your organization's location. Our open registration courses include the following (instructors' names in parentheses):

· Design Reliability (Norm Fuqua)
· Mechanical Reliability (Ned Criscimagna)
· System Software Reliability (Ann Marie [Leone] Neufelder)
· Accelerated Testing (Pantelis Vassiliou)
· Weibull Analysis (Dr. Robert Abernethy and Mr. Wes Fulton)
· Reliability Growth and Repairable System Analysis (Dr. Larry Crow)

These courses are regularly offered in October in Denver, CO; in December in Orlando, FL; in March in Anaheim, CA; and in June in Virginia Beach, VA. The RAC has recently added two new courses: Weibull Analysis and Reliability Growth and Repairable System Analysis. Dr. Robert Abernethy and Mr. Wes Fulton will teach the Weibull Analysis course. Dr. Larry Crow will teach the Reliability Growth and Repairable System Analysis course. Any of these courses can be taught at your site for up to 30 students at a cost equal to the registration cost for six students attending the open course. When taught on-site, these courses can be customized to the specific industry or company. Additional courses can be developed to meet specific needs. For more information on course content or schedules for open courses, contact Nan Pfrimmer by email at [email protected], by telephone at 800-526-4803, or visit our web site: http://rac.iitri.org/PRODUCTS/course_summaries.html.

RAC Products


RAC offers an extensive line of reliability, maintainability, quality, and supportability related products, including R&M publications, R&M software tools, and electronic reliability databases. See the product list on our web page at http://rac.iitri.org/PRODUCTS/Products.html. You can mail or fax the order form that follows, order by calling the RAC, or place an order from the web. The newest product available from the RAC is the recently completed Maintainability Toolkit, a companion document to the highly applauded Reliability Toolkit: Commercial Practices Edition. The price of the Maintainability Toolkit is $50 (US) and $60 (non-US); the Order Code is M-KIT.


Order Form

Call: (800) 526-4802 or (315) 339-7047
Fax: (315) 337-9932
Mail: Reliability Analysis Center, 201 Mill Street, Rome, NY 13440-6916
E-mail: [email protected]

Title - US Price Each - Non-US Price Each
Maintainability Toolkit - $50.00 - $60.00
PRISM - $1995.00 - $2195.00

Shipping and Handling: US orders add $4.00 per book for first class shipments ($2.00 for RAC Blueprints). Non-US orders add $10.00 per book for surface mail (8-10 weeks) or $15.00 per book for air mail ($25.00 for NPRD and VZAP, $40.00 for EPRD, $4.00 for RAC Blueprints).

Ordering information requested: Name, Company, Division, Address, City, State, Zip, Country, Phone (with Ext), Fax, and E-mail.

Method of Payment: Personal Check Enclosed; Company Check Enclosed (make checks payable to IITRI/RAC); Credit Card (circle type: American Express, VISA, Mastercard; provide card number, expiration date, name on card, signature, and billing address; a minimum of $25.00 is required for credit card orders); DD1155 (Government personnel); or Company Purchase Order. Check "Send Product Catalog" to request our latest catalog.


IIT Research Institute / Reliability Analysis Center
201 Mill Street
Rome, NY 13440-6916


(315) 337-0900 - General Inquiries
(888) RAC-USER - General Inquiries
(315) 337-9932 - Facsimile
(315) 337-9933 - Technical Inquiries
(800) 526-4802 - Publication Orders
(800) 526-4803 - Training Course Info
[email protected] - Product Info
[email protected] - Technical Inquiries
Visit RAC on the Web: http://rac.iitri.org

Contact Gina Nash at (800) 526-4802 or the above address for our latest Product Catalog or a free copy of the RAC User's Guide.

