


Every process is subject to classification based on capability and control. The process then continues to move around the Process Improvement Cycle. Care should be given not to apply multiple criteria except in those cases where it makes sense. The process itself must be investigated, and, typically, management action must be taken to improve the system. If any points are substantially higher or lower than the others, confirm that the calculations and plots are correct and log any pertinent observations. Regression charts are used to monitor the relationship between two correlated variables in order to determine if and when deviation from the known predictable relationship occurs.

A scale which yields a "narrow" control chart does not enable analysis and control of the process. Specification limits should not be used in place of valid control limits for process analysis and control.

B Centerline: The control chart requires a centerline based on the sampling distribution in order to allow the determination of non-random patterns which signal special causes.

C Subgroup sequence / timeline: Maintaining the sequence in which the data are collected provides indications of "when" a special cause occurs and whether that special cause is time-oriented.

D Identification of out-of-control plotted values: Plotted points which are out of statistical control should be identified on the control chart. For process control, the analysis for special causes and their identification should occur as each sample is plotted, as well as during periodic reviews of the control chart as a whole for non-random patterns.

E Event Log: Besides the collection, charting, and analysis of data, additional supporting information should be collected. This information can be recorded on the control chart or on a separate Event Log.

Elements of Control Charts

During the initial analysis of the process, knowledge of what would constitute a potential special cause for this specific process may be incomplete.

Consequently, the initial information collection activities may include events which will prove not to be special causes. Such events need not be identified in subsequent information collection activities.

If initial information collection activities are not sufficiently comprehensive, then time may be wasted in identifying specific events which cause out-of-control signals. An example event log might read: S2 -- material lot; S2 -- n/c; S2 -- material lot; S2 -- bad material -- stopped production, red-tagged material, sequestered production from lot change. Sx indicates the shift; markers on the chart indicate changes in the process.

Establish an environment suitable for action.

J Define the process. Determine the features or characteristics to be charted based on:
- The customer's needs.
- Current and potential problem areas.
- Correlation between characteristics.
Correlation between variables does not imply a causal relationship. In the absence of process knowledge, a designed experiment may be needed to verify such relationships and their significance.

J Define the characteristic.

The characteristic must be operationally defined so that results can be communicated to all concerned in ways that have the same meaning today as yesterday.

This involves specifying what information is to be gathered, where, how, and under what conditions. Attributes control charts would be used to monitor and evaluate discrete variables whereas variables control charts would be used to monitor and evaluate continuous variables.

J Define the measurement system. Total process variability consists of part-to-part variability and measurement system variability. It is very important to evaluate the effect of the measurement system's variability on the overall process variability and determine whether it is acceptable. The measurement performance must be predictable in terms of accuracy, precision and stability. Periodic calibration is not enough to validate the measurement system's capability for its intended use.

In addition to being calibrated, the measurement system must be evaluated in terms of its suitability for the intended use.

The definition of the measurement system will determine what type of chart, variables or attributes, is appropriate. Unnecessary external causes of variation should be reduced before the study begins. This could simply mean watching that the process is being operated as intended.

The purpose is to avoid obvious problems that could and should be corrected without use of control charts. This includes process adjustment or over control. In all cases, a process event log may be kept noting all relevant events such as tool changes, new raw material lots, measurement system changes, etc. This will aid in subsequent process analysis. J Assure selection scheme is appropriate for detecting expected special causes.

Even though convenience sampling and/or haphazard sampling is often thought of as being random sampling, it is not. If one assumes that it is, and in reality it is not, one carries an unnecessary risk that may lead to erroneous and/or biased conclusions.

For more details see Chapter I, Section H.

1. Data Collection
2. Establish Control Limits
3. Interpret for Statistical Control
4. Extend Control Limits for ongoing control (see Figure )

These measurements are combined into a control statistic (e.g., the subgroup average or range). The measurement data are collected from individual samples from a process stream. The samples are collected in subgroups and may consist of one or more pieces. In general, a larger subgroup size makes it easier to detect small process shifts.

Create a Sampling Plan For control charts to be effective the sampling plan should define rational subgroups.

A rational subgroup is one in which the samples are selected so that the chance for variation due to special causes occurring within a subgroup is minimized, while the chance for special cause variation between subgroups is maximized. The key item to remember when developing a sampling plan is that the variation between subgroups is going to be compared to the variation within subgroups. Taking consecutive samples for the subgroups minimizes the opportunity for the process to change and should minimize the within-subgroup variation.
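The within-versus-between comparison described above can be sketched numerically. This is an illustrative Python fragment, not part of the manual; the data are hypothetical and the subgroup layout (consecutive pieces per subgroup) is assumed:

```python
# Hypothetical subgroups of consecutive pieces: each subgroup captures
# short-term (within-subgroup) variation; shifts between subgroups are
# what the control chart is designed to detect.
subgroups = [
    [10.1, 10.2, 9.9],   # consecutive pieces: short-term variation
    [10.8, 10.9, 10.7],  # later subgroup: process may have shifted since
    [9.6, 9.5, 9.7],
]

within_ranges = [max(s) - min(s) for s in subgroups]
means = [sum(s) / len(s) for s in subgroups]
between_spread = max(means) - min(means)

print(within_ranges)   # small values: common cause, piece-to-piece variation
print(between_spread)  # larger value: variation between subgroups
```

With a rational sampling plan, the within-subgroup ranges stay small while subgroup-to-subgroup differences carry the signal the chart compares against them.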

The sampling frequency will determine the opportunity the process has to change between subgroups. The variation within a subgroup represents the piece-to-piece variation over a short period of time. Subgroup Size - The type of process under investigation dictates how the subgroup size is defined.

As stated earlier, a larger subgroup size makes it easier to detect small process shifts. The team responsible has to determine the appropriate subgroup size.

If the expected shift is relatively small, then a larger subgroup size would be needed compared to that required if the anticipated shift is large.

The calculation of the control limits depends on the subgroup size, and if one varies the subgroup size the control limits will change for that subgroup. There are other techniques that deal with variable subgroup sizes; for example, see Montgomery, and Grant and Leavenworth.

Subgroup Frequency - The subgroups are taken sequentially in time (e.g., at set intervals during production). The goal is to detect changes in the process over time. Subgroups should be collected often enough, and at appropriate times, so that they can reflect the potential opportunities for change.
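As a rough illustration of how the limits depend on subgroup size, the following sketch computes X-bar/R chart limits from the standard tabulated constants (values shown for n = 2 to 5; see Appendix E). The function name and inputs are illustrative, not from the manual:

```python
# Standard tabulated control chart constants, keyed by subgroup size n.
CONSTANTS = {
    # n: (A2, D3, D4)
    2: (1.880, 0.0, 3.267),
    3: (1.023, 0.0, 2.574),
    4: (0.729, 0.0, 2.282),
    5: (0.577, 0.0, 2.114),
}

def xbar_r_limits(xbar_bar, r_bar, n):
    """Return (UCL_xbar, LCL_xbar, UCL_r, LCL_r) for subgroup size n."""
    a2, d3, d4 = CONSTANTS[n]
    return (xbar_bar + a2 * r_bar,   # UCL for averages
            xbar_bar - a2 * r_bar,   # LCL for averages
            d4 * r_bar,              # UCL for ranges
            d3 * r_bar)              # LCL for ranges (0 for n <= 6)

# The same process summary charted with n = 2 vs n = 5 yields different limits:
print(xbar_r_limits(10.0, 2.0, 2))  # wider X-bar limits
print(xbar_r_limits(10.0, 2.0, 5))  # tighter X-bar limits
```

This makes concrete why a subgroup whose size changes must have its limits recalculated: the constants themselves are functions of n.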

The potential causes of change could be due to work-shift differences, relief operators, warm-up trends, material lots, etc.

Number of Subgroups - The number of subgroups needed to establish control limits should satisfy the following criterion: generally, 25 or more subgroups containing about 100 or more individual readings give a good test for stability and, if stable, good estimates of the process location and spread.

This number of subgroups ensures that the effect of any extreme values in the range or standard deviation will be minimized. In some cases, existing data may be available which could accelerate this first stage of the study. However, they should be used only if they are recent and if the basis for establishing subgroups is clearly understood.

Before continuing, a rational sampling plan must be developed and documented. Sampling Scheme - If the special causes affecting the process can occur unpredictably, the appropriate sampling scheme is a random or probability sample.

A random sample is one in which every sample point (rational subgroup) has the same chance (probability) of being selected. A random sample is systematic and planned; that is, all sample points are determined before any data are collected. For special causes that are known to occur at specific times or events, the sampling scheme should utilize this knowledge.

Haphazard sampling or convenience sampling not based on the expected occurrence of a specific special cause should be avoided since this type of sampling provides a false sense of security; it can lead to a biased result and consequently a possible erroneous decision. Whichever sampling scheme is used, all sample points should be determined before any data are collected (see Deming and Gruska). For a discussion about rational subgrouping and the effect of subgrouping on control chart interpretation, see Appendix A.

J Header information including the description of the process and sampling plan.
J Recording/displaying the actual data values collected.
J For interim data calculations (optional for automated charts). This should also include a space for the calculations based on the readings and the calculated control statistic(s).
The value for the control statistic is usually plotted on the vertical scale and the horizontal scale is the sequence in time.

The data values and the plot points for the control statistic should be aligned vertically. The scale should be broad enough to contain all the variation in the control statistic. A guideline is that the initial scale could be set to twice the difference between the expected maximum and minimum values.
J To log observations. This section should include details such as process adjustments, tooling changes, material changes, or other events which may affect the variability of the process.

Record Raw Data: Enter the individual values and the identification for each subgroup.
J Log any pertinent observation(s).

Calculate Sample Control Statistic(s) for Each Subgroup: The control statistics to be plotted are calculated from the subgroup measurement data. These statistics may be the sample mean, median, range, standard deviation, etc.

Calculate the statistics according to the formulae for the type of chart that is being used. Make sure that the plot points for the corresponding control statistics are aligned vertically.
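A minimal sketch of this step, assuming an X-bar and R chart: for each subgroup, the plotted statistics are simply the subgroup mean and range. The data and helper name are hypothetical, not from the manual:

```python
# Hypothetical subgroup data, subgroup size n = 4.
subgroups = [
    [10.2, 9.8, 10.1, 9.9],
    [10.0, 10.4, 9.7, 10.3],
]

def subgroup_stats(values):
    """Return (mean, range) -- the statistics plotted on X-bar and R charts."""
    return (sum(values) / len(values), max(values) - min(values))

for sg in subgroups:
    xbar, r = subgroup_stats(sg)
    print(f"mean={xbar:.3f}  range={r:.3f}")
```

For a median or s chart the same loop would instead compute the subgroup median or sample standard deviation.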

Connect the points with lines to help visualize patterns and trends. The data should be reviewed while they are being collected in order to identify potential problems. If any points are substantially higher or lower than the others, confirm that the calculations and plots are correct and log any pertinent observations. Control limits define a range of values that the control statistic could randomly fall within, given that only common cause variation is present.

If the average of two different subgroups from the same process is calculated, it is reasonable to expect that they will be about the same. But since they were calculated using different parts, the two averages are not expected to be identical.

Even though the two averages are different, there is a limit to how different they are expected to be, due to random chance. This defines the location of the control limits. This is the basis for all control chart techniques. If the process is stable (i.e., only common cause variation is present), the control statistic will randomly fall within the control limits. If the control statistic exceeds the control limits, then this indicates that a special cause of variation may be present. There are two phases in statistical process control studies.

The first is identifying and eliminating the special causes of variation in the process. The objective is to stabilize the process. A stable, predictable process is said to be in statistical control. The second phase is concerned with predicting future measurements thus verifying ongoing process stability. During this phase, data analysis and reaction to special causes is done in real time.

Once stable, the process can be analyzed to determine if it is capable of producing what the customer desires.

Identify the centerline and control limits of the control chart. To assist in the graphical analysis of the plotted control statistics, draw lines to indicate the location estimate (centerline) and control limits of the control statistic on the chart.

In general, to set up a control chart, calculate the centerline and control limits (see Chapter II, Section C, for the formulas). Special causes can affect either the process location (e.g., the average) or the process variation (e.g., the range), or both. The objective of control chart analysis is to identify any evidence that the process variability or the process location is not operating at a constant level - that one or both are out of statistical control - and to take appropriate action. In the subsequent discussion, the Average will be used for the location control statistic and the Range for the variation control statistic.

The conclusions stated for these control statistics also apply equally to the other possible control statistics. Since the control limits of the location statistic are dependent on the variation statistic, the variation control statistic should be analyzed first for stability. The variation and location statistics are analyzed separately, but comparison of patterns between the two charts may sometimes give added insight into special causes affecting the process. A process cannot be said to be stable (in statistical control) unless both charts have no out-of-control conditions (indications of special causes).

Analyze the Data: Since the ability to interpret either the subgroup ranges or subgroup averages depends on the estimate of piece-to-piece variability, the R chart is analyzed first. The data points are compared with the control limits, for points out of control or for unusual patterns or trends (see Chapter II, Section D).

Find and Address Special Causes (Range Chart): For each indication of a special cause in the range chart data, conduct an analysis of the process operation to determine the cause and improve process understanding; correct that condition, and prevent it from recurring.

The control chart itself should be a useful guide in problem analysis, suggesting when the condition may have begun and how long it continued. However, recognize that not all special causes are negative; some special causes can result in positive process improvement in terms of decreased variation in the range - those special causes should be assessed for possible institutionalization within the process, where appropriate.

Timeliness is important in problem analysis, both in terms of minimizing the production of inconsistent output, and in terms of having fresh evidence for diagnosis. It should be emphasized that problem solving is often the most difficult and time-consuming step. Statistical input from the control chart can be an appropriate starting point, but other methods such as Pareto charts, cause-and-effect diagrams, or other graphical analysis can be helpful (see Ishikawa). Ultimately, however, the explanations for behavior lie within the process and the people who are involved with it.

Thoroughness, patience, insight and understanding will be required to develop actions that will measurably improve performance.

CHAPTER II - Section A: Control Charting Process

Recalculate Control Limits (Range Chart): When conducting an initial process study or a reassessment of process capability, the control limits should be recalculated to exclude the effects of out-of-control periods for which process causes have been clearly identified and removed or institutionalized.

Exclude all subgroups affected by the special causes that have been identified and removed or institutionalized, then recalculate and plot the new average range and control limits.

Confirm that all range points show control when compared to the new limits; if not, repeat the identification, correction, recalculation sequence. If any subgroups were dropped from the R chart because of identified special causes, they should also be excluded from the averages chart. The exclusion of subgroups representing unstable conditions is not just "throwing away bad data"; rather, excluding the points affected by known special causes gives a better estimate of the background level of variation due to common causes. This, in turn, gives the most appropriate basis for the control limits to detect future occurrences of special causes of variation.
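The identify-exclude-recalculate sequence for the range chart can be sketched as a loop. Note the important caveat from the text: subgroups may be excluded only when their special causes have actually been identified and removed or institutionalized; the purely mechanical loop below (illustrative names, assumed constants D3 = 0, D4 = 2.114 for subgroup size 5) shows only the arithmetic:

```python
# Assumed range chart constants for subgroup size n = 5.
D3, D4 = 0.0, 2.114

def recalc_until_stable(ranges):
    """Iteratively drop ranges beyond the recomputed limits.

    Returns (r_bar, ucl, kept) once all remaining points show control.
    """
    kept = list(ranges)
    while True:
        r_bar = sum(kept) / len(kept)
        ucl = D4 * r_bar
        in_control = [r for r in kept if D3 * r_bar <= r <= ucl]
        if len(in_control) == len(kept):
            return r_bar, ucl, kept
        kept = in_control  # exclude only after the special cause is addressed

# 6.5 stands in for a subgroup whose special cause was found and removed.
ranges = [1.8, 2.1, 1.9, 6.5, 2.0, 1.7, 2.2]
r_bar, ucl, kept = recalc_until_stable(ranges)
```

After the affected subgroup is excluded, the recomputed average range and upper limit reflect only common cause variation, which is the basis the text calls for.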

Be reminded, however, that the process must be changed so the special cause will not recur (if undesirable) as part of the process.

Find and Address Special Causes (Average Chart): Once the special causes which affect the variation (Range Chart) have been identified and their effects have been removed, the Average Chart can be evaluated for special causes.

For each indication of an out-of-control condition in the average chart data (see Figure ), conduct an analysis of the process operation to determine the reason for the special cause; correct that condition, and prevent it from recurring. Use the chart data as a guide to when such conditions began and how long they continued. Timeliness in analysis is important, both for diagnosis and to minimize inconsistent output. Problem solving techniques such as Pareto analysis and cause-and-effect analysis can help (see Ishikawa).

Recalculate Control Limits (Average Chart): When conducting an initial process study or a reassessment of process capability, exclude any out-of-control points for which special causes have been found and removed; recalculate and plot the process average and control limits. The preceding discussions were intended to give a functional introduction to control chart analysis.

Even though these discussions used the Average and Range Charts, the concepts apply to all control chart approaches. Furthermore, there are other considerations that can be useful to the analyst. One of the most important is the reminder that, even with processes that are in statistical control, the probability of getting a false signal of a special cause on any individual subgroup increases as more data are reviewed.

While it is wise to investigate all signals as possible evidence of special causes, it should be recognized that they may have been caused by the system and that there may be no underlying local process problem. If no clear evidence of a special cause is found, any "corrective" action will probably serve to increase, rather than decrease, the total variability in the process output. It might be desirable here to adjust the process to the target if the process center is off target.

These limits would be used for ongoing monitoring of the process, with the operator and local supervision responding to signs of out-of-control conditions on either the location (X) or variation (R) chart with prompt action (see Figure ). A change in the subgroup sample size would affect the expected average range and the control limits for both ranges and averages.

This situation could occur, for instance, if it were decided to take smaller samples more frequently, so as to detect large process shifts more quickly without increasing the total number of pieces sampled per day. To adjust central lines and control limits for a new subgroup sample size, the following steps should be taken: using the table factors based on the new subgroup size, calculate the new average range and control limits; then plot these new control limits on the chart as the basis for ongoing process control.

As long as the process remains in control for both averages and ranges, the ongoing limits can be extended for additional periods. If, however, there is evidence that the process average or range has changed (in either direction), the cause should be determined and, if the change is justifiable, control limits should be recalculated based on current performance.

The goal of the process control charts is not perfection, but a reasonable and economical state of control. For practical purposes, therefore, a controlled process is not one where the chart never goes out of control. Obviously, there are different levels or degrees of statistical control. The definition of control used can range from mere outliers (beyond the control limits), through runs, trends and stratification, to full zone analysis.

As the definition of control used advances to full zone analysis, the likelihood of finding lack of control increases; for example, a process with no outliers may demonstrate lack of control through an obvious run still within the control limits. For this reason, the definition of control used should be consistent with your ability to detect this at the point of control, and should remain the same within one time period, within one process.

Some suppliers may not be able to apply the fuller definitions of control on the floor on a real-time basis due to immature stages of operator training or lack of sophistication in the operator's ability. The ability to detect lack of control at the point of control on a real-time basis is an advantage of the control chart.

Over-interpretation of the data can be a danger in maintaining a true state of economical control. The presence of one or more points beyond either control limit is primary evidence of special cause variation at that point. This special cause could have occurred prior to this point.

Since points beyond the control limits would be rare if only variation from common causes were present, the presumption is that a special cause has accounted for the extreme value. Therefore, any point beyond a control limit is a signal for analysis of the operation for the special cause. Mark any data points that are beyond the control limits for investigation and corrective action based on when that special cause actually started.
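This first criterion is trivially mechanical, which is why it is the one rule every implementation applies. A minimal sketch (illustrative names and data, not from the manual):

```python
# Flag plotted points beyond either control limit for investigation --
# criterion 1, primary evidence of special cause variation at that point.
def points_beyond_limits(points, lcl, ucl):
    """Return (index, value) pairs that fall outside [lcl, ucl]."""
    return [(i, x) for i, x in enumerate(points) if x < lcl or x > ucl]

stats = [10.1, 9.8, 10.3, 11.9, 10.0]
print(points_beyond_limits(stats, lcl=8.8, ucl=11.2))  # -> [(3, 11.9)]
```

The returned indices preserve the time sequence, so the flagged point can be traced back to the event log entries around that subgroup.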

A point outside a control limit is generally a sign of one or more of the following:
- The control limit or plot point has been miscalculated or misplotted.
- The piece-to-piece variability or the spread of the distribution has increased (i.e., become worse).
- The measurement system has changed (e.g., a different inspector or gage).
For charts dealing with the spread, a point below the lower control limit is generally a sign of one or more of the following:
- The control limit or plot point is in error.

- The spread of the distribution has decreased (i.e., become better).
A point beyond either control limit is generally a sign that the process has shifted either at that one point or as part of a trend (see Figure ). When the ranges are in statistical control, the process spread - the within-subgroup variation - is considered to be stable. The averages can then be analyzed to see if the process location is changing over time. If the averages are not in control, some special causes of variation are making the process location unstable.

This could give the first warning of an unfavorable condition which should be corrected. Conversely, certain patterns or trends could be favorable and should be studied for possible permanent improvement of the process.

Comparison of patterns between the range and average charts may give added insight. There are situations where an "out-of-control pattern" may be a bad event for one process and a good event for another process. An example of this is that in an X and R chart a series of 7 or more points on one side of the centerline may indicate an out-of-control situation.

If this happened in a p chart, the process may actually be improving if the series is below the average line (fewer nonconformances are being produced). So in this case the series is a good thing - if we identify and retain the cause.
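The runs criterion itself is the same computation for an X-bar chart and a p chart; only the interpretation differs, as the text notes. A sketch of the 7-in-a-row rule (illustrative function name; points exactly on the centerline reset the count, which is one common convention, stated here as an assumption):

```python
# Detect a run of `run_length` consecutive points on one side of the
# centerline -- the pattern discussed above as a possible shift signal.
def run_signal(points, centerline, run_length=7):
    """Return the index where a qualifying run completes, or None."""
    count, side = 0, 0
    for i, x in enumerate(points):
        s = 1 if x > centerline else (-1 if x < centerline else 0)
        if s != 0 and s == side:
            count += 1        # run continues on the same side
        elif s != 0:
            count = 1         # run restarts on the other side
        else:
            count = 0         # a point on the centerline resets the count
        side = s
        if count >= run_length:
            return i
    return None
```

On a p chart, a signal from a run below the centerline would be investigated as a possible improvement to retain rather than a fault to remove.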

Mark the point that prompts the decision; it may be helpful to extend a reference line back to the beginning of the run. Analysis should consider the approximate time at which it appears that the trend or shift first began.

J Greater spread in the output values, which could be from an irregular cause (such as equipment malfunction or loose fixturing) or from a shift in one of the process elements (e.g., a new raw material lot).
J A change in the measurement system (e.g., a new inspector or gage).
J Smaller spread in output values, which is usually a good condition that should be studied for wider application and process improvement.
J A change in the measurement system, which could mask real performance changes.

As the subgroup size (n) becomes smaller (5 or less), the likelihood of runs below the average range increases, so a run length of 8 or more could be necessary to signal a decrease in process variability. A run relative to the process average is generally a sign of one or both of the following:
J The process average has changed - and may still be changing.

J The measurement system has changed (drift, bias, sensitivity, etc.).
Care should be taken not to over-interpret the data, since even random (i.e., common cause) variation can sometimes give the illusion of non-randomness. Examples of non-random patterns could be obvious trends (even though they did not satisfy the runs tests), cycles, the overall spread of data points within the control limits, or even relationships among values within subgroups (e.g., the first reading in each subgroup consistently being the highest).

One test for the overall spread of subgroup data points is described below. Generally, about 2/3 of the plotted points should lie within the middle third of the region between the control limits; about 1/3 of the points should be in the outer two-thirds of the region.
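This check can be sketched directly: the middle third of the band between 3-sigma limits is roughly the +/-1-sigma band, which for a normal distribution should hold about 68% of the points. A minimal illustration (hypothetical function name and data):

```python
# Fraction of plotted points falling in the middle third of the band
# between the control limits. Roughly 2/3 is expected under common
# cause variation alone; markedly more or less suggests a problem
# (e.g., stratification or mixed process streams).
def middle_third_fraction(points, lcl, ucl):
    width = ucl - lcl
    lo, hi = lcl + width / 3.0, ucl - width / 3.0
    inside = sum(1 for x in points if lo <= x <= hi)
    return inside / len(points)
```

A fraction far above 2/3 can indicate stratification (limits computed from inflated variation); far below can indicate mixture of streams.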

If several process streams are present, they should be identified and tracked separately (see also Appendix A, Figure ). The most commonly used criteria are discussed above. Except for the first criterion, the numbers associated with the criteria do not establish an order or priority of use. Determination of which of the additional criteria to use depends on the specific process characteristics and special causes which are dominant within the process.

Note 2: Care should be given not to apply multiple criteria except in those cases where it makes sense. The application of each additional criterion increases the sensitivity of finding a special cause but also increases the chance of a Type I error.

In reviewing the above, it should be noted that not all these considerations for interpretation of control can be applied on the production floor.

There is simply too much for the appraiser to remember, and utilizing the advantages of a computer is often not feasible on the production floor. So, much of this more detailed analysis may need to be done offline rather than in real time. This supports the need for the process event log and for appropriate thoughtful analysis to be done after the fact. Another consideration is in the training of operators. Application of the additional control criteria should be used on the production floor when applicable, but not until the operator is ready for it, both with the appropriate training and tools.

With time and experience the operator will recognize these patterns in real time. The Average Run Length is the number of sample subgroups expected between out-of-control signals.

The in-control Average Run Length, ARL0, is the expected number of subgroup samples between false alarms. The ARL is dependent on how out-of-control signals are defined, the true target value's deviation from the estimate, and the true variation relative to the estimate.

This table indicates the expected number of subgroups before a given shift in the mean is signaled; a shift of 4 standard deviations, for example, would be identified within 2 subgroups. Larger subgroups reduce the standard error of the subgroup average and tighten the control limits around X. Alternatively, the ARLs can be reduced by adding more out-of-control criteria. Other signals such as runs tests and patterns analysis, along with the control limits, will reduce the size of the ARLs.
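The ARL values in such tables can be reproduced directly for the "point beyond the limits" rule alone. For a Shewhart chart with 3-sigma limits, a mean shift of d process standard deviations and subgroup size n give a per-subgroup signal probability p, and ARL = 1/p. This sketch (illustrative, using the standard normal CDF built from math.erf) assumes known, normal, in-control variation:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def arl(d, n=1):
    """ARL for 3-sigma limits, mean shift d (in process sigmas), subgroup size n."""
    p = (1.0 - phi(3.0 - d * sqrt(n))) + phi(-3.0 - d * sqrt(n))
    return 1.0 / p

print(round(arl(0.0)))  # in-control ARL: about 370 subgroups between false alarms
print(round(arl(4.0)))  # a 4-sigma shift is caught almost immediately
```

Raising n shrinks the standard error by sqrt(n), which is exactly the "larger subgroups tighten the limits" effect described above.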

The following table gives approximate ARLs for the same chart after adding the runs test of 7 points in a row on one side of the centerline. As can be seen, adding the one extra out-of-control criterion significantly reduces the ARLs for small shifts in the mean, a decrease in the risk of a Type II error.

Note that the zero-shift (in-control) ARL is also reduced significantly. This is an increase in the risk of a Type I error or false alarm. This balance between wanting a long ARL when the process is in control versus a short ARL when there is a process change has led to the development of other charting methods.

Some of those methods are briefly described in Chapter . Control chart constants for all control charts discussed in this section are listed in Appendix E. [The chart-features formulas appear here as equations in the original: centerline, control limits, the estimate of the standard deviation of X, the sample median (the middle value when the data are arranged in ascending order), and the average range.] There are other approaches in the literature which do not use averages.

Therefore, valid signals occur only in the form of points beyond the control limits. Other rules used to evaluate the data for non-random patterns (see Chapter II, Section B) are not reliable indicators of out-of-control conditions.

Attributes charts are part of the probability-based charts discussed in Chapter . These control charts use categorical data and the probabilities related to the categories to identify the presence of special causes. The analysis of categorical data by these charts generally utilizes the binomial or Poisson distribution approximated by the normal form. Traditionally, attributes charts are used to track unacceptable parts by identifying nonconforming items and nonconformities within an item.

There is nothing intrinsic in attributes charts that restricts them to be used solely for charting nonconforming items. They can also be used for tracking positive events. However, we will follow tradition and refer to these as nonconformances and nonconformities. Since the control limits are based on a normal approximation, the sample size used should be such that np >= 5.
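The np >= 5 condition and the normal-approximation limits can be sketched for a p chart. This is the standard 3-sigma binomial approximation, shown as an illustration (function name and data hypothetical; the chart-specific formulas are listed in Section C):

```python
from math import sqrt

def p_chart_limits(p_bar, n):
    """3-sigma p chart limits via the normal approximation to the binomial.

    Raises if n*p_bar < 5, where the approximation is considered weak.
    """
    if n * p_bar < 5:
        raise ValueError("normal approximation weak: choose n so that n*p_bar >= 5")
    se = sqrt(p_bar * (1.0 - p_bar) / n)
    # The lower limit is floored at 0, since a proportion cannot be negative.
    return max(0.0, p_bar - 3.0 * se), p_bar + 3.0 * se

lcl, ucl = p_chart_limits(p_bar=0.10, n=100)  # np = 10, approximation adequate
```

The same skeleton serves for tracking positive events: only the label on the proportion changes, not the arithmetic.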

[The centerline and the conditions for using constant control limits when the sample size varies appear here as equations in the original.] Most of these charts were developed to address specific process situations or conditions which can affect the optimal use of the standard control charts. A brief description of the more common charts will follow below. This description will define the charts, discuss when they should be used, and list the formulas associated with the chart, as appropriate.

If more information is desired regarding these charts or others, please consult a reference text that deals specifically with these types of control charts. Probability-based charts belong to a class of control charts that uses categorical data and the probabilities related to the categories.

The analysis of categorical data generally uses the binomial, multinomial or Poisson distribution. Examples of these charts are the attributes charts discussed in Chapter II, Section C. However, there is nothing inherent in any of these forms (or any other forms) that requires one or more categories to be "bad." This is as much the fault of professionals and teachers as it is the student's. There is a tendency to take the easy way out, using traditional and stereotypical examples.

This leads to a failure to realize that quality practitioners once had, or were constrained to, the tolerance philosophy, i.e., judging output solely against specification limits.

With stoplight control charts, the process location and variation are controlled using one chart. The chart tracks the number of data points in the sample in each of the designated categories.

The decision criteria are based on the expected probabilities for these categories. A typical scenario will divide the process variation into three regions. One simple but effective control procedure of this type is stoplight control, which is a semi-variables (more than two categories) technique using double sampling.

In this approach the target area is designated green, the warning areas yellow, and the stop zones red.

The use of these colors gives rise to the "stoplight" designation. Of course, this allows process control only if the process distribution is known. The quantification and analysis of the process requires variables data. The focus of this tool is to detect changes (special causes of variation) in the process; that is, it is an appropriate tool for stage 2 activities only. At its basic implementation, stoplight control requires no computations and no plotting, thereby making it easier to implement than control charts.

Since it uses double sampling, the total sample is split into an initial sample and, when needed, an additional sample. Although the development of this technique is thoroughly founded in statistical theory, it can be implemented and taught at the operator level without involving mathematics.

Stoplight control assumes that the process performance, including measurement variability, is acceptable and that the process is on target. Once these assumptions have been verified by a process performance study using variables data techniques, the process distribution can be divided such that the area within ±1.5 standard deviations of the average is labeled green.

The areas between ±1.5 and ±3 standard deviations are labeled yellow, and any area outside the process distribution (beyond ±3 standard deviations) is labeled red. If the process distribution follows the normal form, approximately 86.6% of the distribution falls in the green area, approximately 13.1% in the yellow areas, and approximately 0.3% in the red areas. Similar conditions can be established if the distribution is found to be non-normal.
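For the normal case, the zone coverages can be checked directly from the standard normal distribution, assuming green is the average ±1.5 sigma and yellow extends from 1.5 to 3 sigma (a standard-library sketch):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (standard library only)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Zone coverage assuming green = average +/- 1.5 sigma,
# yellow = between 1.5 and 3 sigma, red = beyond 3 sigma.
green = normal_cdf(1.5) - normal_cdf(-1.5)             # about 86.6%
yellow = (normal_cdf(3.0) - normal_cdf(-3.0)) - green  # about 13.1%
red = 1.0 - green - yellow                             # about 0.3%
```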

1. Check 2 pieces; if both pieces are in the green area, continue to run.
2. If one or both are in the red zone, stop the process, notify the designated person for corrective action and sort material. When setup or other corrections are made, repeat step 1.

3. If one or both are in a yellow zone, check three more pieces. If any pieces fall in a red zone, stop the process, notify the designated person for corrective action and sort material. When setup or other corrections are made, repeat step 1. If no pieces fall in a red zone, but three or more out of the 5 pieces are in a yellow zone, stop the process and notify the designated person for corrective action.

When setup or other corrections are made, repeat step 1. If three pieces fall in the green zone and the rest are yellow, continue to run. Measurements can be made with variables as well as attributes gaging. Certain variables gaging, such as dial indicators or air-electronic columns, is better suited for this type of program, since the indicator background can be marked with the zones. Although no charts or graphs are required, charting is recommended, especially if subtle trends (shifts over a relatively long period of time) are possible in the process.
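The decision steps above can be sketched as a small rule function; the zone names and return values here are illustrative, not part of any standard:

```python
def stoplight_decision(first_two, next_three=None):
    """Stoplight double-sampling decision following the steps above.
    Zone values are 'green', 'yellow' or 'red'."""
    if "red" in first_two:
        return "stop"                      # any red stops the process
    if all(z == "green" for z in first_two):
        return "run"                       # both green: continue to run
    if next_three is None:
        return "sample 3 more"             # yellow present: enlarge sample
    combined = list(first_two) + list(next_three)
    if "red" in combined:
        return "stop"                      # any red among the five pieces
    if combined.count("yellow") >= 3:
        return "stop"                      # three or more yellows of five
    return "run"                           # three greens, the rest yellow
```

For example, `stoplight_decision(['green', 'yellow'])` returns `'sample 3 more'`, triggering the second sample of three pieces.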

In any decision-making situation there is a risk of making a wrong decision. With sampling, the two types of errors are: the probability of calling the process bad when it is actually good (false alarm rate), and the probability of calling the process good when it is actually bad (miss rate). Sensitivity refers to the ability of the sampling plan to detect out-of-control conditions due to increased variation or shifts from the process average.

The disadvantage of stoplight control is that it has a higher false alarm rate than an X and R chart of the same total sample size. The advantage of stoplight control is that it is as sensitive as an X and R chart of the same total sample size.

Users tend to accept control mechanisms based on these types of data due to the ease of data collection and analysis. The focus is on the target, not the specification limits - thus it is compatible with the target philosophy and continuous improvement. An application of the stoplight control approach for the purpose of nonconformance control instead of process control is called pre-control. It is based on the specifications, not the process variation.

The first assumption means that all special sources of variation in the process are being controlled. The second assumption relates the process spread to the specifications: the area outside the specifications is labeled red. For a process that is normal with Cp, Cpk equal to 1.00, approximately 86.6% of the parts will fall in the green area (the middle half of the specifications), 13.1% in the yellow areas, and 0.3% in the red areas. Similar calculations could be done if the distribution were found to be non-normal or highly capable.

The pre-control sampling uses a sample size of two. However, before the sampling can start, the process must produce 5 consecutive parts in the green zone. Each of the two data points is plotted on the chart and reviewed against a set of rules. Every time the process is adjusted, the process must again produce 5 consecutive parts in the green zone before sampling can resume.
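A minimal sketch of the qualification and two-piece sampling logic. The text does not enumerate the two-piece decision rules, so the rule set shown here is one commonly published version and should be treated as illustrative:

```python
def qualified(zones):
    """Setup qualification: five consecutive green pieces are required
    before pre-control sampling can begin (and after every adjustment)."""
    return len(zones) >= 5 and all(z == "green" for z in zones[-5:])

def precontrol_pair(a, b):
    """One commonly published rule set for the two-piece sample
    (illustrative): any red, or two yellows, stops the process;
    otherwise continue to run."""
    if "red" in (a, b):
        return "stop"
    if a == "yellow" and b == "yellow":
        return "stop"
    return "run"
```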

Pre-control is not a process control chart but a nonconformance control chart, so great care must be taken as to how this chart is used and interpreted. Pre-control charts should not be used when you have a Cp, Cpk greater than one or a loss function that is not flat within the specifications (see Chapter IV).

The disadvantage of pre-control is that potential diagnostics that are available with normal process control methods are not available. Further, pre-control does not evaluate or monitor process stability. Pre-control is a compliance-based tool, not a process control tool.

However, there are processes that only produce a small number of products during a single run. Further, the increasing focus on just-in-time (JIT) inventory and lean manufacturing methods is driving production runs to become shorter. From a business perspective, producing large batches of product several times per month and holding it in inventory for later distribution can lead to avoidable, unnecessary costs.

Manufacturers now are moving toward JIT - producing much smaller quantities on a more frequent basis to avoid the costs of holding "work in process" and inventory.

For example, in the past it may have been satisfactory to make 10,000 parts per month in batches of 2,000 per week. Now, customer demand, flexible manufacturing methods and JIT requirements might lead to making and shipping only a few hundred parts per day. To realize the efficiencies of short-run processes, it is essential that SPC methods be able to verify that the process is truly in statistical control, i.e., predictable.

The process must be operated in a stable and consistent manner. The process aim must be set and maintained at the proper level. The Natural Process Limits must fall within the specification limits.

Short-run oriented charts allow a single chart to be used for the control of multiple products, and there are a number of variations on this theme. Among the more widely described short-run charts are the Deviation from Nominal (DNOM) and standardized charts. Production processes for short runs of different products can be characterized easily on a single chart by plotting the differences between the product measurement and its target value.

These charts can be applied both to individual measurements and to grouped data. The DNOM approach assumes a common, constant variance among the products being tracked on a single chart.

When there are substantial differences in the variances of these products, using the deviation from the process target becomes problematic. In such cases the data may be standardized to compensate for the different product means and variability, using a transformation of the form z = (measurement - target) / (product standard deviation). This class of charts is sometimes referred to as Z or Zed charts.
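A minimal sketch of this standardization, assuming the common z = (x - target)/s form; the part data below are hypothetical:

```python
def standardize(x, target, sigma):
    """Deviation from target scaled by the product's own sigma, so parts
    with different nominals and spreads can share one Z chart."""
    return (x - target) / sigma

# Hypothetical short-run data: two different part numbers on one chart.
parts = [
    ("A", 10.02, 10.00, 0.01),   # (part, measurement, target, sigma)
    ("B", 25.10, 25.00, 0.05),
]
z_values = [standardize(x, t, s) for _, x, t, s in parts]
# Both points plot on a common chart with centerline 0 and limits of +/- 3.
```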

In some short-run processes, the total production volume may be too small to utilize subgrouping effectively.

In these cases subgrouping measurements may work counter to the concept of controlling the process and reduce the control chart to a report card function.

But when subgrouping is possible, the measurements can be standardized to accommodate this case.

Standardized Attributes Control Charts

Attributes data samples, including those of variable size, can be standardized so that multiple part types can be plotted on a single chart.

The standardized statistic has the form z_i = (p̂_i - p̄) / √(p̄(1 - p̄)/n_i), where n_i is the sample size of subgroup i. There are situations where small changes in the process mean can cause problems, and Shewhart control charts may not be sensitive enough to detect these changes efficiently. The two alternative charts discussed here were developed to improve sensitivity for detecting small excursions in the process mean. See Montgomery, Wheeler, and Grant and Leavenworth for in-depth discussions of these methods and comparisons with the supplemental detection rules for enhancing the sensitivity of the Shewhart chart to small process shifts. A CUSUM chart plots the cumulative sum of deviations of successive sample means from a target specification, so that even minor permanent shifts (on the order of 0.5 standard deviations) in the process mean will eventually signal.
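The basic CUSUM statistic, the running sum of deviations from target, can be sketched as follows; the data are hypothetical, and a practical implementation would add the V-mask described below:

```python
def cusum(samples, target):
    """Cumulative sum of deviations of successive sample values from target."""
    total, path = 0.0, []
    for x in samples:
        total += x - target
        path.append(total)
    return path

# A small sustained shift (about +0.5 here) accumulates steadily,
# even though each individual deviation looks unremarkable.
data = [10.4, 10.6, 10.5, 10.7, 10.4, 10.6]
path = cusum(data, target=10.0)
```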

For larger shifts, Shewhart control charts are just as effective and take less effort. CUSUM charts are most often used to monitor continuous processes, such as in the chemical industry, where small shifts can have significant effects. A graphical tool (the V-mask) is laid over the chart, with a vertical reference line, offset from the origin of the V, passing through the last plotted point (see the figure). The offset and angle of the arms are functions of the desired level of sensitivity to process shifts.

An out-of-control condition (e.g., a significant process shift) is indicated when previously plotted points fall outside of the V-mask arms. These arms take the place of the upper and lower control limits. The figure shows a V-mask chart for coating thickness. When the V-mask was positioned on prior data points, all samples fell within the control limits, so there was no indication of an out-of-control situation.

In comparison, an Individuals and Moving Range (X, MR) plot of the same data signals differently; see Montgomery for a discussion of this procedure. The EWMA is defined by the recursion z_t = λx_t + (1 - λ)z_(t-1), where x_t is the current sample value and λ is a weighting constant (0 < λ ≤ 1). An initial value z_0 must be estimated to start the process with the first sample. Through recursive substitution, successive values of z_t can be determined from the equation. Some authors also consider control limit widths other than three-sigma when designing an EWMA chart.
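A sketch of the EWMA recursion as given above; the value of λ (`lam`) and the starting value z0 are chosen arbitrarily here:

```python
def ewma(samples, lam, z0):
    """EWMA recursion z_t = lam * x_t + (1 - lam) * z_(t-1).
    lam is the weighting constant (0 < lam <= 1); z0 is the starting
    estimate, e.g. the process target or historical mean."""
    z, path = z0, []
    for x in samples:
        z = lam * x + (1 - lam) * z
        path.append(z)
    return path

# Hypothetical data with a sustained upward shift after the third point:
smoothed = ewma([10.2, 9.8, 10.1, 10.6, 10.7, 10.8], lam=0.2, z0=10.0)
# The EWMA drifts toward the shift while damping single-sample noise.
```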

But current literature indicates that this approach may not be necessary; see Montgomery and Wheeler for detailed discussions. The advantage of this chart is its ability to efficiently detect small process mean shifts, typically less than 1.5 standard deviations.

Its disadvantage is its inability to efficiently detect large changes in the process mean. In situations where large process mean shifts are expected, the Shewhart control chart is recommended.

A common use of the EWMA is in the chemical industry, where large day-to-day fluctuations are common but may not be indicative of a lack of process predictability. A related approach is based on a simple, unweighted moving average (see Montgomery). However, the EWMA can also be used to forecast a "new" process mean for the next time period.

These charts can be useful to signal a need for process adjustment. Because moving averages are involved, the points being plotted are correlated (dependent), and therefore detection of special causes using pattern analysis is not appropriate, since those rules assume independence among the points. But they are not appropriate as tools for process improvement (see Wheeler). See Lowery et al.

If the underlying distribution of a process is known to be non-normal, there are several approaches that can be used: use the standard Shewhart control charts with an appropriate sample size; use a transformation to convert the data into a near normal form and use the standard charts; or use control limits based on the native non-normal form. The approach which is used depends on the amount the process distribution deviates from normality and on specific conditions related to the process.

Shewhart's goal was to develop a tool useful for the economic control of quality. Shewhart control charts can be used for all processes. However, as the process distribution deviates from normality, the sensitivity to change decreases, and the risk associated with the Type I error increases.

For many non-normal process distributions, the Central Limit Theorem can be used to mitigate the effect of non-normality. That is, if a sufficiently large subgroup size is used, the Shewhart control chart can be used with near normal sensitivity and degree of risk.

The distribution of the subgroup average x̄ approaches the normal distribution N(μ_X, σ_X/√n) as the subgroup size n increases. The rule of thumb is that the range chart should be used with subgroups of size fifteen or less; the standard deviation chart can be used for all subgroup sizes. When a large subgroup size is not possible, the control limits of the Shewhart control charts can be modified using adjustment factors to compensate for the effect of the non-normality. Since non-normal distributions are either asymmetric, have heavier tails than the normal distribution, or both, use of the standard ±3 sigma control limits can increase the risk of false alarms, especially if pattern analysis for special causes is used.

In this approach the non-normal distributional form is characterized by its skewness or kurtosis or both. Tabled or algorithmic correction factors (see Burr) are then applied to the normal control limits. Any significant change in the distribution is an indicator that the process is being affected by special causes.

An alternative to the adjustment factors is to convert the data instead of the control limits. In this approach, a transformation is determined which transforms the non-normal process distribution into a near normal distribution. The selected transformation is then used to transform each data point, and the standard Shewhart control chart methodologies are used on the converted data. For this approach to be effective, the transformation must be valid.

This typically requires a capability study with a sample size sufficiently large to effectively capture the non-normal form. Also, because the transformations tend to be mathematically complex, this approach is only effective and efficient when implemented using a computer program. There are situations when the above approaches are not easily handled; examples occur when the process distribution is highly non-normal and the sample size cannot be large.

In these situations a control chart can be developed using the non-normal form directly to calculate the chart control limits. For example, the control limits can be based on the exponential distribution with parameter θ equal to the mean time between failures (MTBF). In general, control limits for this approach are selected to be the 0.135 and 99.865 percentiles of the distribution.
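Assuming the limits are placed at the 0.135 and 99.865 percentiles (the tail areas of ±3-sigma limits on a normal chart), they follow directly from the exponential percentile function:

```python
import math

def exponential_limits(mtbf):
    """Control limits taken directly from the exponential form, assuming
    the 0.135 and 99.865 percentiles. mtbf is the distribution mean
    (the parameter theta)."""
    lcl = -mtbf * math.log(1 - 0.00135)   # 0.135th percentile
    ucl = -mtbf * math.log(1 - 0.99865)   # 99.865th percentile
    return lcl, ucl

lcl, ucl = exponential_limits(mtbf=200.0)
# Note the asymmetry: the exponential is strongly right-skewed, so the
# limits are not centered on the mean the way +/- 3 sigma limits are.
```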

Like the other approaches above, for this approach to be effective, it typically requires a capability study with a sample size sufficiently large to capture the non-normal form. Advantages of this approach are that the data can be plotted without complex calculations and it provides more exact control limits than adjustment factors.

Multivariate charts are appropriate when it is desired to simultaneously control two or more related characteristics that influence the performance of a process or product. Their advantage is that the combined effect of all variables can be monitored using a single statistic. For instance, the combined effects of pH and temperature of a part washing fluid may be linked to part cleanliness measured by particle count.

A multivariate chart provides a means to detect shifts in the mean and changes in the relationships between the parameters. A correlation matrix of variables can be used to test whether a multivariate control chart could be useful. A multivariate chart reduces Type I error, i.e., false alarms are reduced compared with maintaining a separate chart for each variable. The simplicity of this approach is also its disadvantage.

Additional analysis using other statistical tools may be required to isolate the special cause (see Kourti and MacGregor). Multivariate charts are mathematically complex, and computerized implementation of these methods is essential for practical application. It is important, however, that the use of appropriate techniques for estimating dispersion statistics be verified.

See Wheeler, Montgomery, and current literature such as Mason and Young for detailed discussions of multivariate control charts.

In Chapter I, Section E, a Case 3 process was defined as one not in statistical control but acceptable to tolerance.

Special causes of variation are present; the source of variation is known and predictable but may not be eliminated for economic reasons. However, this predictability of the special cause may require monitoring and control.

One method to determine deviations in the predictability of special cause variation is the regression chart. Regression charts are used to monitor the relationship between two correlated variables in order to determine if and when deviation from the known predictable relationship occurs. These charts were originally applied to administrative processes, but they have also been used to analyze the correlation between many types of variables. Regression charts track the linear correlation between two variables, for example: throughput versus machine cycle time (line speed).

Dimensional change relative to tooling cycles. For example, if a tool has constant wear relative to each cycle of the process, a dimensional feature such as a diameter (Y) could be predicted based on the cycles (X) performed. Using data collected over time, this linear relationship can be modeled as Ŷ = b0 + b1X. When X equals zero cycles, the predicted Y is equal to b0.

So b0 is the predicted dimension from a tool never used. The predictive limits computed are curved lines, with the tightest point at the average of X. Often they are replaced with limits of the form Ŷ ± 3s in order to tighten the control limits at each extreme of X. Points that exceed the control limits indicate tooling which has a tool life significantly different from the base tool life. This can be advantageous or detrimental depending on the specific situation.

Care should be taken in making predictions (extrapolating) outside of the range of the original observations. The accuracy of the regression model for use outside of this range should be viewed as highly suspect. Both the prediction interval for future values and the confidence interval for the regression equation become increasingly wide. Additional data may be needed for model validation.
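The tool-wear regression can be sketched with an ordinary least-squares fit; the cycle counts and diameters below are hypothetical, and the residuals (Y minus the fitted value) are the values that would be charted like an Individuals chart centered on zero:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of Y = b0 + b1 * X."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Hypothetical tool-wear data: X = cycles performed, Y = measured diameter.
cycles = [0, 100, 200, 300, 400]
diameter = [5.000, 5.002, 5.004, 5.006, 5.008]
b0, b1 = fit_line(cycles, diameter)  # b0: predicted dimension of an unused tool

# Residuals can be charted in the same manner as an Individuals chart
# with a centerline of zero.
residuals = [y - (b0 + b1 * x) for x, y in zip(cycles, diameter)]
```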

Discussion of confidence intervals can be found in Hines and Montgomery. An alternative approach to the regression chart is to chart the residual values: from the regression equation, the residual value e is Y - Ŷ. A chart of the residual values can be treated in the same manner as an Individuals chart with the centerline equal to zero. This approach would be more useful and intuitive when the variable relationships are more complex.

Control chart methods generally assume that the data output from a process are independent and identically distributed.

For many processes this assumption is not correct. These types of processes have output that is autocorrelated, and analysis with standard charting methods may result in erroneous conclusions. One common approach to contend with serial dependency is to take samples far enough apart in time that the dependency is not apparent. This often works in practice, but its effectiveness relies on postponing sample collection and may extend the sampling interval longer than is appropriate.

Also, this approach discards information which may be useful or even necessary for accurate prediction, in order to utilize techniques which were not designed for that type of data. Processes which drift, walk or cycle through time are good candidates for time series analysis, and an ARMA method may be appropriate.

The autoregressive (AR) model is defined by x_t = ξ + φ1·x_(t-1) + ... + φp·x_(t-p) + ε_t: the current value observed is equal to a constant, a weighted combination of prior observations, and a random component. There are similar restrictions on the φ's in the higher order models. Differencing removes the serial dependence between an observation and another lagged observation: the differenced observation is equal to the current observation minus the observation made k samples prior.

The data should only be differenced if the model is not stationary.

Most data from manufacturing processes will not need differencing; the processes do not diverge to infinity. The next step is to determine the number of autoregressive and moving average parameters to include in the model. Typically the number of φ's or θ's needed will not be more than two. To estimate the parameters, use non-linear estimation. Once the model is determined and stationary, and the parameters are estimated, then the next observation can be predicted from past observations.
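As an illustrative sketch only (not the non-linear estimation the text recommends), an AR(1) model can be fit by least squares on lagged pairs, and differencing is a one-line transform:

```python
def fit_ar1(series):
    """Least-squares estimate of x_t = c + phi * x_(t-1) + e_t
    from the lagged pairs of a series."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
           / sum((a - mx) ** 2 for a in xs))
    c = my - phi * mx
    return c, phi

def difference(series, k=1):
    """x_t minus the observation made k samples prior (use only if the
    series is not stationary)."""
    return [series[i] - series[i - k] for i in range(k, len(series))]

# On a noise-free AR(1) sequence the fit recovers the parameters:
series = [0.0]
for _ in range(10):
    series.append(1.0 + 0.5 * series[-1])
c, phi = fit_ar1(series)   # phi close to 0.5, c close to 1.0
```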

For a more complete discussion see Box, Jenkins and Reinsel.

The first four rules can be easily implemented with manual control charts, but the latter rules do not lend themselves to rapid visual identification, since they require the determination of the number of standard deviations a plotted point is from the centerline.

This can be aided by dividing the control chart into "zones" at 1, 2, and 3 standard deviations from the centerline. The zones assist in the visual determination of whether a special cause exists using one or more of the tabled criteria.

See Montgomery and Wheeler. The run sums control chart analysis was introduced by Roberts and studied further by Reynolds. This approach assigns a score to each zone; a typical set of scores is 0, 2, 4, and 8 for the zones moving outward from the centerline. The analysis uses a cumulative score based on the zones: the cumulative score is the absolute value of the sum of the scores of the zones in which the points are plotted.

Every time the centerline is crossed, the cumulative score is reset to zero. Thus, the analyst does not need to recognize the patterns associated with non-random behavior as on a Shewhart chart. With the scoring of 0, 2, 4, 8 this method is equivalent to the standard criteria 1, 5, and 6 for special causes in an X or Individuals chart and is more stringent than criterion 8. With the scoring of 1, 2, 4, 8 this method is equivalent to the standard criteria 1, 2, 5, and 6 for special causes in an X or Individuals chart and is more stringent than criteria 7 and 8.
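The run sums scoring described above can be sketched as follows, using the 0, 2, 4, 8 weights; the signal threshold of 8 is illustrative:

```python
def zone_score(dev_in_sigma, weights=(0, 2, 4, 8)):
    """Score for a point by its distance from the centerline in sigmas."""
    d = abs(dev_in_sigma)
    if d < 1:
        return weights[0]
    if d < 2:
        return weights[1]
    if d < 3:
        return weights[2]
    return weights[3]

def zone_chart_signals(devs, threshold=8, weights=(0, 2, 4, 8)):
    """Running zone score, reset to zero whenever the centerline is
    crossed; a point signals when the score reaches the threshold."""
    total, last_side, signals = 0, 0, []
    for dev in devs:
        side = 1 if dev > 0 else -1
        if side != last_side:
            total = 0              # centerline crossed: reset the score
        last_side = side
        total += zone_score(dev, weights)
        signals.append(total >= threshold)
    return signals

# A single point beyond 3 sigma scores 8 and signals immediately;
# repeated same-side points between 1 and 2 sigma accumulate 2 each.
```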

As shown in the figure above, trends (criterion 3) can also be detected, depending on the start and stop of the trend.

Zone control charts can be modified to eliminate the point-plotting process; the points are plotted in the zone, not to a scale. Thus, one standard zone control chart can fit most needs; when to act on a process is determined by the charting procedure. For example, one set of weights (scores) can be used during the initial phase for detecting special causes.

Then the weights could be changed when the process is in control and it is more important to detect drift. The efficiency of the zone control chart is demonstrated by comparing its average run lengths with those of standard control tests.

Data should always be presented in a way that preserves the evidence in the data for all the predictions that might be made from these data. Whenever an average, range, or histogram is used to summarize data, the summary should not mislead the user into taking any action that the user would not take if the data were presented in a time series. Since the normal distribution is described by its process location (mean) and process width (range or standard deviation), this question becomes: has the process location or process width changed?

Consider only the location. What approach can be used to determine if the process location has changed? One possibility would be to look at every part produced by the process, but that is usually not economical. The alternative is to use a sample of the process and calculate the mean of the sample. (Special causes are handled by using the process information to identify and eliminate them, or by detecting them and removing their effect when they do occur. Note that the exact level of belief in prediction of future actions cannot be determined by statistical measures alone; subject-matter expertise is required.) If the process has not changed, will the sample mean be equal to the distribution mean? The answer is that this very rarely happens. But how is this possible?

After all, the process has not changed. Doesn't that imply that the process mean remains the same? The reason for this is that the sample mean is only an estimation of the process mean. To make this a little clearer, consider taking a sample of size one. The mean of the sample is the individual sample itself. With such random samples from the distribution, the readings will eventually cover the entire process range.

Using the formula for the standard deviation of sample means, σ_x̄ = σ/√n, the expected variation of the sample means can be calculated. Then, compare the sample mean to this sampling distribution using the ±3 standard deviation limits. These are called control limits. If the sample falls outside these limits then there is reason to believe that a special cause is present.
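The control limits for subgroup means follow from the formula above; a minimal sketch with arbitrary example values:

```python
import math

def xbar_limits(mean, sigma, n):
    """3-sigma limits for subgroup means: the spread of the means is
    sigma / sqrt(n), so larger subgroups give tighter limits."""
    se = sigma / math.sqrt(n)
    return mean - 3 * se, mean + 3 * se

lcl, ucl = xbar_limits(mean=50.0, sigma=2.0, n=4)
# Here se = 2 / 2 = 1, so the limits are 47.0 and 53.0.
```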

Further, it is expected that all the random samples will exhibit a random ordering within these limits. If a group of samples shows a pattern there is reason to believe that a special cause is present.

Shewhart selected the ±3 standard deviation limits as useful limits in achieving the economic control of processes. Since control charts provide the operational definition of "in statistical control," they are useful tools at every stage of the Improvement Cycle (see Chapter I, Section F). Before using the charts, several questions should be asked:
• Is the metric appropriate?
• Are the data consistent?
• Are the data reliable?
• Is the measurement system appropriate and acceptable?

Plot the data. Compare to the centerline and determine if there are any non-random patterns clearly discernible. Analyze the data and take appropriate action. The data are compared with the control limits to see whether the variation is stable and appears to come from only common causes. If special causes of variation are evident, the process is studied to further determine what is affecting it.

After actions (see Chapter I, Section D) have been taken, further data are collected, control limits are recalculated if necessary, and any additional special causes are acted upon. After all special causes have been addressed and the process is running in statistical control, the control chart continues as a monitoring tool. Process capability can also be calculated.

If the variation from common causes is excessive, the process cannot produce output that consistently meets customer requirements. The process itself must be investigated, and, typically, management action must be taken to improve the system.

• Is the metric appropriate?
• Will the data be consistent?
• Will the data be reliable?
• Is the measurement system appropriate and acceptable?
Plot each point as it is determined:
• Compare to the control limits and determine if there are any points outside the control limits
• Compare to the centerline and determine if there are any non-random patterns clearly discernible
Analyze the data and take appropriate action:
• Continue to run with no action taken; or
• Identify the source of the special cause and remove it (if an unacceptable response) or reinforce it (if an acceptable response); or
• Continue to run with no action taken and reduce the sample size or frequency; or
• Initiate a continual improvement action
Often it is found that although the process was aimed at the target value during initial setup, the actual process location (μ) may not match this value. (The Greek letter μ is used to indicate the actual process mean, which is estimated by the sample mean.) For those processes where the actual location deviates from the target and the ability to relocate the process is economical, consideration should be given to adjusting the process so that it is aligned with the target (see Chapter IV, Section C). This assumes that this adjustment does not affect the process variation. This may not always hold true, but the causes for any possible increase in process variation after re-targeting the process should be understood and assessed against both customer satisfaction and economics.

The long-term performance of the process should continue to be analyzed. This can be accomplished by a periodic and systematic review of the ongoing control charts. New evidence of special causes might be revealed. Some special causes, when understood, will be beneficial and useful for process improvement. Others will be detrimental, and will need to be corrected or removed.

The purpose of the Improvement Cycle is to gain an understanding of the process and its variability in order to improve its performance. As this understanding matures, the need for continual monitoring of product variables may become less, especially in processes where documented analysis shows that the dominant sources of variation are more efficiently and effectively controlled by other approaches.

For example, for a process that is in statistical control, improvement efforts will often focus on reducing the common cause variation in the process. Reducing this variation will have the effect of "shrinking" the control limits on the control chart (i.e., the limits, upon recalculation, will be closer together). Many people not familiar with control charts feel this is "penalizing" the process for improving.

They do not realize that if a process is stable and the control limits are calculated correctly, the chance that the process will erroneously yield an out-of-control point is the same regardless of the distance between the control limits see Chapter I, Section E. One area deserving mention is the question of recalculation of control chart limits. Once properly computed, and if no changes to the common cause variation of the process occur, then the control limits remain legitimate.

Signals of special causes of variation do not require the recalculation of control limits. For long-term analysis of control charts, it is best to recalculate control limits as infrequently as possible, only as dictated by changes in the process. For continual process improvement, repeat the three stages of the Improvement Cycle. Control charts can:
• Be used by operators for ongoing control of a process
• Help the process perform consistently and predictably
• Allow the process to achieve higher quality, lower unit cost, and higher effective capability
• Provide a common language for discussing the performance of the process
• Distinguish special from common causes of variation, as a guide to local action or action on the system

The gains and benefits from the control charts are directly related to the following: Management Philosophy: How the company is managed can directly impact the effectiveness of SPC. The following are examples of what needs to be present: Focus the organization on variation reduction. Establish an open environment that minimizes internal competition and supports cross-functional teamwork.

Support and fund management and employee training in the proper use and application of SPC. Show support and interest in the application and resulting benefits of properly applied SPC; make regular visits and ask questions in those areas. Apply SPC to promote the understanding of variation in engineering processes. Apply SPC to management data and use the information in day-to-day decision making. The above items support the requirements contained in ISO 9000.

Engineering Philosophy: How engineering uses data to develop designs can and will have an influence on the level and type of variation in the finished product.

The following are some ways that engineering can show effective use of SPC: Focus the engineering organization on variation reduction throughout the design process. Establish an open engineering environment that minimizes internal competition and supports cross-functional teamwork. Support and fund engineering management and employee training in the proper use and application of SPC.

Require an understanding of variation and stability in relation to measurement and the data that are used for design development.

Manufacturing: How manufacturing develops and operates machines and transfer systems can impact the level and type of variation in the finished product: Focus the manufacturing organization on variation reduction. Support and fund manufacturing management and employee training in the proper use and application of SPC.

Apply SPC in the understanding of variation in the manufacturing processes. Require an understanding of variation and stability in relation to measurement and the data that are used for process design development. Use the analysis of SPC information to support process changes for the reduction of variation.

Do not release control charts to operators until the process is stable. The transfer of responsibility for the process to production should occur after the process is stable. Assure proper placement of SPC data for optimum use by the employees. Quality Control: The Quality function is a critical component in providing support for an effective SPC process: Support SPC training for management, engineering, and employees in the organization.

Mentor key people in the organization in the proper application of SPC. Assist in the identification and reduction of the sources of variation. Ensure optimum use of SPC data and information. Production personnel are directly related to the process and can affect process variation.

They should: Be properly trained in the application of SPC and problem solving. Have an understanding of variation and stability in relation to measurement and the data that are used for process control and improvement. Then the Plan-Do-Study-Act process can be used to further improve the process. At a minimum, the use of SPC for process monitoring will result in the process being maintained at its current performance level. However, real improvements can be achieved when SPC is used to direct the way processes are analyzed.

Proper use of SPC can result in an organization focused on improving the quality of the product and process. There are basically two types of control charts: those for variables data and those for attributes data.

The process itself will dictate which type of control chart to use. If the data derived from the process are of a discrete nature (e.g., go/no-go, acceptable/not acceptable), then an attributes chart would be used. If the data derived from the process are of a continuous nature (e.g., a measured dimension), then a variables chart would be used. Within each chart type there are several chart combinations that can be used to further evaluate the process.

Charts based on count or percent data (e.g., p, np, c, u charts) are attributes charts. When introducing control charts into an organization, it is important to prioritize problem areas and use charts where they are most needed. Problem signals can come from the cost control system, user complaints, internal bottlenecks, etc. The use of attributes control charts on key overall quality measures often points the way to the specific process areas that would need more detailed examination, including the possible use of control charts for variables.

If available, variables data are always preferred, as they contain more useful information than attributes data for the same amount of effort. For example, a larger sample size is needed for attributes data than for variables data to have the same amount of confidence in the results.

If the use of variables measurement systems is infeasible, the application of attributes analysis should not be overlooked. (Note: some current metrology literature defines accuracy as the lack of bias.)

CHAPTER II: Control Charts

Variables control charts represent the typical application of statistical process control, where the processes and their outputs can be characterized by variable measurements (see Figure). A quantitative value is measured and recorded for each sample. Variables control charts offer several advantages:

This can lead to lower total measurement costs due to increased efficiency. Because fewer parts need to be checked before making reliable decisions, the time delay between an "out-of-control" signal and corrective action is usually shorter. And with variables data, performance of a process can be analyzed, and improvement can be quantified, even if all individual values are within the specification limits. This is important in seeking continual improvement. A variables chart can explain process data in terms of its process variation, piece-to-piece variation, and its process average.

Because of this, control charts for variables are usually prepared and analyzed in pairs, one chart for the process average and another for the process variation. X-bar, the arithmetic average of the values in each small subgroup, is a measure of the process average; R, the range of values within each subgroup (highest minus lowest), is a measure of the process variation.
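As a sketch of these two statistics, the subgroup average and range can be computed directly from the raw readings; the data values below are illustrative, not taken from the manual:

```python
# Sketch: compute the subgroup average (X-bar) and range (R) for each
# small subgroup of measurements. Data values are illustrative.
subgroups = [
    [10.2, 10.4, 10.1, 10.3, 10.2],
    [10.5, 10.3, 10.4, 10.6, 10.2],
    [10.1, 10.2, 10.3, 10.1, 10.4],
]

xbars = [sum(s) / len(s) for s in subgroups]   # process average per subgroup
ranges = [max(s) - min(s) for s in subgroups]  # process variation per subgroup

for i, (xb, r) in enumerate(zip(xbars, ranges), 1):
    print(f"subgroup {i}: X-bar = {xb:.2f}, R = {r:.2f}")
```

These per-subgroup values are what get plotted, in time order, on the paired charts.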

However, there are a number of other control charts that may be more useful under certain circumstances. The X-bar and R charts may be the most common charts, but they may not be the most appropriate for all situations. (Figure: examples of requirements that lack operational definitions, e.g. "Surface should conform to master standard" -- conform to what degree? -- and a note on material applied to the mirror back -- visible to whom?) Attributes data have discrete values and they can be counted for recording and analysis. Examples include the presence of a required label, the continuity of an electrical circuit, visual analysis of a painted surface, or errors in a typed document.

Other examples are of characteristics that are measurable, but where the results are recorded in a simple yes/no fashion, such as the conformance of a shaft diameter when measured on a go/no-go gage, the acceptability of door margins to a visual or gage check, or on-time delivery performance.

Control charts for attributes are important for several reasons: Attributes data situations exist in any technical or administrative process, so attributes analysis techniques are useful in many applications. The most significant difficulty is to develop precise operational definitions of what is conforming. Attributes data are already available in many situations, wherever there are existing inspections, repair logs, sorts of rejected material, etc.

In these cases, no additional effort is required for data collection. The only expense involved is for the effort of converting the data to control chart form. Where new data must be collected, attributes information is generally quick and inexpensive to obtain.

With simple gaging (e.g., a go/no-go gage or visual standards), specialized measurement skills are often not required. (There are many occasions where specialized measurement skills are required, especially when the part measured falls in the "gray" area.)

Much data gathered for management summary reporting are often in attributes form and can benefit from control chart analysis. Examples include scrap rates, quality audits and material rejections. Because of the ability to distinguish between special and common cause variation, control chart analysis can be valuable in interpreting these management reports.

This manual will use conforming/nonconforming throughout attributes discussions simply because: these categories are "traditionally" used; organizations just starting on the path to continual improvement usually begin with these categories; and many of the examples available in literature use these categories. It should not be inferred that these are the only "acceptable" categories or that attributes charts cannot be used with Case 1 processes; see Chapter I, Section E.

(See Montgomery and Wheeler.) However, the reasons for the use of control charts (see Chapter I, Section E) must be kept in mind. Any format is acceptable as long as it contains the following (see Figure). (A) Scale: A scale which yields a "narrow" control chart does not enable analysis and control of the process.

Specification limits should not be used in place of valid control limits for process analysis and control. (B) Centerline: The control chart requires a centerline based on the sampling distribution in order to allow the determination of non-random patterns which signal special causes. (C) Subgroup sequence / timeline: Maintaining the sequence in which the data are collected provides indications of "when" a special cause occurs and whether that special cause is time-oriented. (D) Identification of out-of-control plotted values: Plotted points which are out of statistical control should be identified on the control chart.

For process control, the analysis for special causes and their identification should occur as each sample is plotted, as well as in periodic reviews of the control chart as a whole for non-random patterns.

(E) Event Log: Besides the collection, charting, and analysis of data, additional supporting information should be collected. This information can be recorded on the control chart or on a separate Event Log.

Elements of Control Charts

During the initial analysis of the process, knowledge of what would constitute a potential special cause for this specific process may be incomplete. Consequently, the initial information collection activities may include events which will prove not to be special causes. Such events need not be identified in subsequent information collection activities. If initial information collection activities are not sufficiently comprehensive, then time may be wasted in identifying specific events which cause out-of-control signals.

(Example event log entries: S2 -- material lot change; S2 -- n/c; S2 -- bad material -- stopped production; red-tagged material and sequestered production from the lot change. Sx indicates the shift; the log indicates changes in the process.)

Establish an environment suitable for action. ✓ Define the process. Determine the features or characteristics to be charted based on: The customer's needs.

Current and potential problem areas. Correlation between characteristics. Caution: correlation between variables does not imply a causal relationship. In the absence of process knowledge, a designed experiment may be needed to verify such relationships and their significance. ✓ Define the characteristic. The characteristic must be operationally defined so that results can be communicated to all concerned in ways that have the same meaning today as yesterday.

This involves specifying what information is to be gathered, where, how, and under what conditions. Attributes control charts are used to monitor and evaluate discrete variables, whereas variables control charts are used to monitor and evaluate continuous variables. ✓ Define the measurement system.

Total process variability consists of part-to-part variability and measurement system variability. It is very important to evaluate the effect of the measurement system's variability on the overall process variability and determine whether it is acceptable. The measurement performance must be predictable in terms of accuracy, precision and stability.

Periodic calibration is not enough to validate the measurement system's capability for its intended use. In addition to being calibrated, the measurement system must be evaluated in terms of its suitability for the intended use. The definition of the measurement system will determine what type of chart, variables or attributes, is appropriate. Unnecessary external causes of variation should be reduced before the study begins. This could simply mean watching that the process is being operated as intended.

The purpose is to avoid obvious problems that could and should be corrected without use of control charts. This includes process adjustment or over control. In all cases, a process event log may be kept noting all relevant events such as tool changes, new raw material lots, measurement system changes, etc.

This will aid in subsequent process analysis. ✓ Assure the selection scheme is appropriate for detecting expected special causes.

Even though convenience sampling and/or haphazard sampling is often thought of as being random sampling, it is not. If one assumes that it is, and in reality it is not, one carries an unnecessary risk that may lead to erroneous and/or biased conclusions.

For more details see Chapter I, Section H.

1. Data Collection
2. Establish Control Limits
3. Interpret for Statistical Control
4. Extend Control Limits for ongoing control (see Figure)

These measurements are combined into a control statistic (e.g., the subgroup average or range).

The measurement data are collected from individual samples from a process stream. The samples are collected in subgroups and may consist of one or more pieces. In general, a larger subgroup size makes it easier to detect small process shifts. Create a Sampling Plan For control charts to be effective the sampling plan should define rational subgroups. A rational subgroup is one in which the samples are selected so that the chance for variation due to special causes occurring within a subgroup is minimized, while the chance for special cause variation between subgroups is maximized.

The key item to remember when developing a sampling plan is that the variation between subgroups is going to be compared to the variation within subgroups. Taking consecutive samples for the subgroups minimizes the opportunity for the process to change and should minimize the within-subgroup variation. The sampling frequency will determine the opportunity the process has to change between subgroups.

The variation within a subgroup represents the piece-to-piece variation over a short period of time. Subgroup Size - The type of process under investigation dictates how the subgroup size is defined. As stated earlier, a larger subgroup size makes it easier to detect small process shifts. The team responsible has to determine the appropriate subgroup size. If the expected shift is relatively small, then a larger subgroup size would be needed compared to that required if the anticipated shift is large.

The calculation of the control limits depends on the subgroup size, and if one varies the subgroup size the control limits will change for that subgroup. There are other techniques that deal with variable subgroup sizes; for example, see Montgomery, and Grant and Leavenworth. Subgroup Frequency: The subgroups are taken sequentially in time. The goal is to detect changes in the process over time.

Subgroups should be collected often enough, and at appropriate times, so that they can reflect the potential opportunities for change. The potential causes of change could be due to work-shift differences, relief operators, warm-up trends, material lots, etc. Number of Subgroups: The number of subgroups needed to establish control limits should satisfy the following criterion: generally, 25 or more subgroups containing about 100 or more individual readings give a good test for stability and, if stable, good estimates of the process location and spread.

This number of subgroups ensures that the effect of any extreme values in the range or standard deviation will be minimized. In some cases, existing data may be available which could accelerate this first stage of the study. However, they should be used only if they are recent and if the basis for establishing subgroups is clearly understood. Before continuing, a rational sampling plan must be developed and documented. Sampling Scheme - If the special causes affecting the process can occur unpredictably, the appropriate sampling scheme is a random or probability sample.

A random sample is one in which every sample point (rational subgroup) has the same chance (probability) of being selected. A random sample is systematic and planned; that is, all sample points are determined before any data are collected. For special causes that are known to occur at specific times or events, the sampling scheme should utilize this knowledge. Haphazard sampling or convenience sampling not based on the expected occurrence of a specific special cause should be avoided, since this type of sampling provides a false sense of security; it can lead to a biased result and consequently a possible erroneous decision.

Whichever sampling scheme is used, all sample points should be determined before any data are collected (see Deming and Gruska). For a discussion about rational subgrouping and the effect of subgrouping on control chart interpretation, see Appendix A. ✓ Header information including the description of the process and sampling plan. ✓ Recording/displaying the actual data values collected.

✓ For interim data calculations (optional for automated charts). This should also include a space for the calculations based on the readings and the calculated control statistic(s). The value for the control statistic is usually plotted on the vertical scale, and the horizontal scale is the sequence in time.

The data values and the plot points for the control statistic should be aligned vertically. The scale should be broad enough to contain all the variation in the control statistic. A guideline is that the initial scale could be set to twice the difference between the expected maximum and minimum values.

✓ To log observations. This section should include details such as process adjustments, tooling changes, material changes, or other events which may affect the variability of the process. Record Raw Data: Enter the individual values and the identification for each subgroup.

✓ Log any pertinent observation(s). Calculate Sample Control Statistic(s) for Each Subgroup: The control statistics to be plotted are calculated from the subgroup measurement data. These statistics may be the sample mean, median, range, standard deviation, etc.

Calculate the statistics according to the formulae for the type of chart that is being used. Make sure that the plot points for the corresponding control statistics are aligned vertically. Connect the points with lines to help visualize patterns and trends.

The data should be reviewed while they are being collected in order to identify potential problems. If any points are substantially higher or lower than the others, confirm that the calculations and plots are correct and log any pertinent observations.

They define a range of values that the control statistic could randomly fall within when only common cause variation is present. If the averages of two different subgroups from the same process are calculated, it is reasonable to expect that they will be about the same.

But since they were calculated using different parts, the two averages are not expected to be identical. Even though the two averages are different, there is a limit to how different they are expected to be, due to random chance. This defines the location of the control limits. This is the basis for all control chart techniques. If the process is stable i. If the control statistic exceeds the control limits then this indicates that a special cause variation may be present.

There are two phases in statistical process control studies. The first is identifying and eliminating the special causes of variation in the process. The objective is to stabilize the process. A stable, predictable process is said to be in statistical control.

The second phase is concerned with predicting future measurements, thus verifying ongoing process stability. During this phase, data analysis and reaction to special causes is done in real time. Once stable, the process can be analyzed to determine if it is capable of producing what the customer desires. Identify the Centerline and Control Limits of the Control Chart: To assist in the graphical analysis of the plotted control statistics, draw lines to indicate the location estimate (centerline) and control limits of the control statistic on the chart.

In general, to set up a control chart, calculate the centerline and the upper and lower control limits; see Chapter II, Section C, for the formulas.
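As one hedged illustration of those formulas, the limits for an X-bar and R chart can be computed from the grand average and average range using the published Shewhart constants for subgroup size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the subgroup statistics below are illustrative:

```python
# Sketch: set up X-bar and R chart centerlines and limits from subgroup
# statistics, using the standard Shewhart constants for n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

xbars  = [10.24, 10.40, 10.22, 10.31, 10.28]   # subgroup averages (illustrative)
ranges = [0.30, 0.40, 0.30, 0.35, 0.25]        # subgroup ranges (illustrative)

xbarbar = sum(xbars) / len(xbars)    # grand average: centerline of X-bar chart
rbar    = sum(ranges) / len(ranges)  # average range: centerline of R chart

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar  # X-bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                      # R chart limits

# Flag any subgroup whose statistic falls beyond its control limits
out_x = [i for i, x in enumerate(xbars) if not lcl_x <= x <= ucl_x]
out_r = [i for i, r in enumerate(ranges) if not lcl_r <= r <= ucl_r]
```

In practice the constants come from a published factors table indexed by subgroup size; a real application would look them up rather than hard-code them.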

Special causes can affect either the process location (e.g., the average) or the process variation. The objective of control chart analysis is to identify any evidence that the process variability or the process location is not operating at a constant level, that one or both are out of statistical control, and to take appropriate action. In the subsequent discussion, the Average will be used for the location control statistic and the Range for the variation control statistic. The conclusions stated for these control statistics also apply equally to the other possible control statistics.

Since the control limits of the location statistic are dependent on the variation statistic, the variation control statistic should be analyzed first for stability. The variation and location statistics are analyzed separately, but comparison of patterns between the two charts may sometimes give added insight into special causes affecting the process. A process cannot be said to be stable (in statistical control) unless both charts have no out-of-control conditions (indications of special causes).

Analyze the Data: Since the ability to interpret either the subgroup ranges or subgroup averages depends on the estimate of piece-to-piece variability, the R chart is analyzed first. The data points are compared with the control limits for points out of control or for unusual patterns or trends (see Chapter II, Section D). Find and Address Special Causes (Range Chart): For each indication of a special cause in the range chart data, conduct an analysis of the process operation to determine the cause and improve process understanding; correct that condition, and prevent it from recurring.

The control chart itself should be a useful guide in problem analysis, suggesting when the condition may have begun and how long it continued. However, recognize that not all special causes are negative; some special causes can result in positive process improvement in terms of decreased variation in the range. Those special causes should be assessed for possible institutionalization within the process, where appropriate.

Timeliness is important in problem analysis, both in terms of minimizing the production of inconsistent output, and in terms of having fresh evidence for diagnosis.

It should be emphasized that problem solving is often the most difficult and time-consuming step. Statistical input from the control chart can be an appropriate starting point, but other methods such as Pareto charts, cause-and-effect diagrams, or other graphical analysis can be helpful (see Ishikawa). Ultimately, however, the explanations for behavior lie within the process and the people who are involved with it. Thoroughness, patience, insight and understanding will be required to develop actions that will measurably improve performance.

Recalculate Control Limits (Range Chart): When conducting an initial process study or a reassessment of process capability, the control limits should be recalculated to exclude the effects of out-of-control periods for which process causes have been clearly identified and removed or institutionalized. Exclude all subgroups affected by the special causes that have been identified and removed or institutionalized, then recalculate and plot the new average range and control limits.

Confirm that all range points show control when compared to the new limits; if not, repeat the identification, correction, recalculation sequence. If any subgroups were dropped from the R chart because of identified special causes, they should also be excluded from the X-bar chart.
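A minimal sketch of this recalculation step, assuming one subgroup (index 2 here, a hypothetical choice) was traced to an identified and removed special cause; the data and the n = 5 constants are illustrative:

```python
# Sketch: recalculate R-chart limits after excluding subgroups whose
# out-of-control condition was traced to an identified special cause.
D3, D4 = 0.0, 2.114                             # Shewhart constants for n = 5
ranges = [0.30, 0.40, 0.95, 0.35, 0.25, 0.30]   # subgroup 2 had a known special cause
special_cause = {2}                              # indices excluded after correction

kept = [r for i, r in enumerate(ranges) if i not in special_cause]
rbar = sum(kept) / len(kept)                     # new average range
ucl_r, lcl_r = D4 * rbar, D3 * rbar              # new R-chart limits

# Confirm all remaining ranges show control against the new limits;
# if not, repeat the identify-correct-recalculate sequence.
assert all(lcl_r <= r <= ucl_r for r in kept)
```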

The exclusion of subgroups representing unstable conditions is not just "throwing away bad data." Rather, excluding the points affected by known special causes gives a better estimate of the background level of variation due to common causes. This, in turn, gives the most appropriate basis for the control limits to detect future occurrences of special causes of variation. Be reminded, however, that the process must be changed so the special cause will not recur (if undesirable) as part of the process. Find and Address Special Causes (Average Chart): Once the special causes which affect the variation (Range Chart) have been identified and their effects have been removed, the Average Chart can be evaluated for special causes.

For each indication of an out-of-control condition in the average chart data (see Figure), conduct an analysis of the process operation to determine the reason for the special cause; correct that condition, and prevent it from recurring. Use the chart data as a guide to when such conditions began and how long they continued. Timeliness in analysis is important, both for diagnosis and to minimize inconsistent output.

Problem solving techniques such as Pareto analysis and cause-and-effect analysis can help (see Ishikawa). Recalculate Control Limits (Average Chart): When conducting an initial process study or a reassessment of process capability, exclude any out-of-control points for which special causes have been found and removed; recalculate and plot the process average and control limits.

The preceding discussions were intended to give a functional introduction to control chart analysis. Even though these discussions used the Average and Range Charts, the concepts apply to all control chart approaches. Furthermore, there are other considerations that can be useful to the analyst. One of the most important is the reminder that, even with processes that are in statistical control, the probability of getting a false signal of a special cause on any individual subgroup increases as more data are reviewed.

While it is wise to investigate all signals as possible evidence of special causes, it should be recognized that they may have been caused by the system and that there may be no underlying local process problem. If no clear evidence of a special cause is found, any "corrective" action will probably serve to increase, rather than decrease, the total variability in the process output.

It might be desirable here to adjust the process to the target if the process center is off target. These limits would be used for ongoing monitoring of the process, with the operator and local supervision responding to signs of out-of-control conditions on either the location (X-bar) or variation (R) chart with prompt action (see Figure). A change in the subgroup sample size would affect the expected average range and the control limits for both ranges and averages.

This situation could occur, for instance, if it were decided to take smaller samples more frequently, so as to detect large process shifts more quickly without increasing the total number of pieces sampled per day. To adjust centerlines and control limits for a new subgroup sample size, take the following steps: using the table factors based on the new subgroup size, calculate the new range and control limits, and plot these new control limits on the chart as the basis for ongoing process control.
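The adjustment for a new subgroup size can be sketched as follows: recover an estimate of the process standard deviation from R-bar via d2 for the old size, then apply the table factors for the new size. The constants are standard Shewhart table values; the chart data are illustrative:

```python
# Sketch: adjust control limits when the subgroup size changes from
# n = 5 to n = 3, using the standard Shewhart factors table.
FACTORS = {3: dict(d2=1.693, A2=1.023, D3=0.0, D4=2.574),
           5: dict(d2=2.326, A2=0.577, D3=0.0, D4=2.114)}

old_n, new_n = 5, 3
xbarbar, rbar_old = 10.29, 0.32                 # from the existing n = 5 chart

sigma_hat = rbar_old / FACTORS[old_n]["d2"]     # estimate of process sigma
rbar_new  = sigma_hat * FACTORS[new_n]["d2"]    # expected average range at n = 3

f = FACTORS[new_n]
ucl_x, lcl_x = xbarbar + f["A2"] * rbar_new, xbarbar - f["A2"] * rbar_new
ucl_r, lcl_r = f["D4"] * rbar_new, f["D3"] * rbar_new
```

Note that the smaller subgroup size widens the X-bar limits (larger A2), which is the trade-off for sampling fewer pieces more frequently.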

As long as the process remains in control for both averages and ranges, the ongoing limits can be extended for additional periods.

If, however, there is evidence that the process average or range has changed (in either direction), the cause should be determined and, if the change is justifiable, control limits should be recalculated based on current performance. The goal of the process control charts is not perfection, but a reasonable and economical state of control. For practical purposes, therefore, a controlled process is not one where the chart never goes out of control.

Obviously, there are different levels or degrees of statistical control. The definition of control used can range from mere outliers (beyond the control limits), through runs, trends and stratification, to full zone analysis. As the definition of control used advances to full zone analysis, the likelihood of finding lack of control increases (for example, a process with no outliers may demonstrate lack of control through an obvious run still within the control limits).

For this reason, the definition of control used should be consistent with your ability to detect it at the point of control and should remain the same within one time period, within one process. Some suppliers may not be able to apply the fuller definitions of control on the floor on a real-time basis due to immature stages of operator training or lack of sophistication in the operator's ability.

The ability to detect lack of control at the point of control on a real-time basis is an advantage of the control chart. Over-interpretation of the data can be a danger in maintaining a true state of economical control. The presence of one or more points beyond either control limit is primary evidence of special cause variation at that point. This special cause could have occurred prior to this point. Since points beyond the control limits would be rare if only variation from common causes were present, the presumption is that a special cause has accounted for the extreme value.

Therefore, any point beyond a control limit is a signal for analysis of the operation for the special cause. Mark any data points that are beyond the control limits for investigation and corrective action based on when that special cause actually started. A point outside a control limit is generally a sign of one or more of the following: The control limit or plot point has been miscalculated or misplotted.

The piece-to-piece variability or the spread of the distribution has increased (i.e., worsened). The measurement system has changed (e.g., a different appraiser or instrument). For charts dealing with the spread, a point below the lower control limit is generally a sign of one or more of the following: The control limit or plot point is in error. The spread of the distribution has decreased (i.e., become better). A point beyond either control limit is generally a sign that the process has shifted either at that one point or as part of a trend (see Figure). When the ranges are in statistical control, the process spread (the within-subgroup variation) is considered to be stable.

The averages can then be analyzed to see if the process location is changing over time. If the averages are not in control, some special causes of variation are making the process location unstable. This could give the first warning of an unfavorable condition which should be corrected. Conversely, certain patterns or trends could be favorable and should be studied for possible permanent improvement of the process.

Comparison of patterns between the range and average charts may give added insight. There are situations where an "out-of-control pattern" may be a bad event for one process and a good event for another process. An example of this is that in an X-bar and R chart, a series of 7 or more points on one side of the centerline may indicate an out-of-control situation. If this happened in a p chart, the process may actually be improving if the series is below the average line (fewer nonconformances are being produced).

So in this case the series is a good thing, if we identify and retain the cause. Mark the point that prompts the decision; it may be helpful to extend a reference line back to the beginning of the run. Analysis should consider the approximate time at which it appears that the trend or shift first began. A run above the average range, or a run up, generally signifies one or both of the following: ✓ Greater spread in the output values, which could be from an irregular cause (such as equipment malfunction or loose fixturing) or from a shift in one of the process elements.

✓ A change in the measurement system. A run below the average range, or a run down, generally signifies one or both of the following: ✓ Smaller spread in output values, which is usually a good condition that should be studied for wider application and process improvement. ✓ A change in the measurement system, which could mask real performance changes. As the subgroup size (n) becomes smaller (5 or less), the likelihood of runs below R-bar increases, so a run length of 8 or more could be necessary to signal a decrease in process variability. A run relative to the process average is generally a sign of one or both of the following: ✓ The process average has changed, and may still be changing.

✓ The measurement system has changed (drift, bias, sensitivity, etc.). Care should be taken not to over-interpret the data, since even random (i.e., common cause) data can sometimes give the illusion of nonrandomness. Examples of non-random patterns could be obvious trends (even though they did not satisfy the runs tests), cycles, the overall spread of data points within the control limits, or even relationships among values within subgroups.
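The run criterion discussed above (7 or more consecutive points on one side of the centerline) can be sketched as a simple check; the length-7 threshold and the plotted values are illustrative, and some situations, as noted, call for 8:

```python
# Sketch: flag a run of 7 or more consecutive points on one side of the
# centerline, one common signal of a shift in process level.
def longest_run_one_side(points, centerline):
    """Length of the longest run strictly above or strictly below the centerline."""
    best = run = 0
    prev_side = 0
    for p in points:
        side = (p > centerline) - (p < centerline)  # +1 above, -1 below, 0 on line
        run = run + 1 if side == prev_side and side != 0 else (1 if side else 0)
        prev_side = side
        best = max(best, run)
    return best

xbars = [10.1, 10.3, 10.4, 10.35, 10.32, 10.38, 10.41, 10.36, 10.33]
centerline = 10.29
if longest_run_one_side(xbars, centerline) >= 7:
    print("run detected: possible shift in process average")
```

As the text notes, whether such a run is bad news or good news depends on the chart: on a p chart a long run below the centerline means fewer nonconformances.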

One test for the overall spread of subgroup data points is described below. Generally, about 2/3 of the plotted points should lie within the middle third of the region between the control limits; about 1/3 of the points should be in the outer two-thirds of the region.
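The middle-third check just described can be sketched as follows; the control limits and plotted values are illustrative. A fraction far from roughly 2/3 in either direction suggests non-random behavior such as stratification or a mixture of process streams:

```python
# Sketch: fraction of plotted points falling in the middle third of the
# band between the control limits (expect roughly 2/3 for a stable process).
def middle_third_fraction(points, lcl, ucl):
    width = (ucl - lcl) / 3.0
    lo, hi = lcl + width, ucl - width            # boundaries of the middle third
    inside = sum(1 for p in points if lo <= p <= hi)
    return inside / len(points)

xbars = [10.30, 10.25, 10.35, 10.45, 10.20, 10.31, 10.28, 10.15, 10.33, 10.38]
frac = middle_third_fraction(xbars, lcl=10.10, ucl=10.50)
print(f"{frac:.0%} of points in the middle third")
```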

If several process streams are present, they should be identified and tracked separately (see also Appendix A). The most commonly used criteria are discussed above. Except for the first criterion, the numbers associated with the criteria do not establish an order or priority of use. Determination of which of the additional criteria to use depends on the specific process characteristics and special causes which are dominant within the process.

Note 2: Care should be taken not to apply multiple criteria except in those cases where it makes sense. The application of each additional criterion increases the sensitivity of finding a special cause, but also increases the chance of a Type I error.

In reviewing the above, it should be noted that not all these considerations for interpretation of control can be applied on the production floor. There is simply too much for the appraiser to remember and utilizing the advantages of a computer is often not feasible on the production floor. So, much of this more detailed analysis may need to be done offline rather than in real time.

This supports the need for the process event log and for appropriate, thoughtful analysis to be done after the fact. Another consideration is the training of operators. The additional control criteria should be applied on the production floor when applicable, but not until the operator is ready for it, with both the appropriate training and tools.

With time and experience the operator will recognize these patterns in real time. The Average Run Length (ARL) is the number of sample subgroups expected between out-of-control signals. The in-control Average Run Length, ARL₀, is the expected number of subgroup samples between false alarms. The ARL is dependent on how out-of-control signals are defined, the true target value's deviation from the estimate, and the true variation relative to the estimate.

This table indicates that a mean shift of 1.5 standard deviations would be signaled within about 15 subgroups, on average. A shift of 4 standard deviations would be identified within 2 subgroups. Larger subgroups reduce the standard error of the subgroup average and tighten the control limits around the grand average. Alternatively, the ARLs can be reduced by adding more out-of-control criteria. Other signals, such as runs tests and pattern analysis, along with the control limits will reduce the size of the ARLs. The following table gives approximate ARLs for the same chart after adding the runs test of 7 points in a row on one side of the grand average.

As can be seen, adding the one extra out-of-control criterion significantly reduces the ARLs for small shifts in the mean, a decrease in the risk of a Type II error. Note that the zero-shift (in-control) ARL is also reduced significantly. This is an increase in the risk of a Type I error or false alarm.
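The ARL figures discussed above can be reproduced from the normal distribution. The sketch below assumes 3-sigma Shewhart limits, only the "point beyond a limit" rule, and shifts expressed in multiples of the standard error of the subgroup average (an illustration, not the manual's table):

```python
# Approximate ARL of an X-bar chart with 3-sigma limits, using only the
# "point beyond a control limit" rule. ARL = 1 / P(signal per subgroup).
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def arl(shift):
    # Probability that one subgroup average falls outside the +/-3 sigma
    # limits when the mean has shifted by `shift` standard errors.
    p_signal = (1.0 - norm_cdf(3.0 - shift)) + norm_cdf(-3.0 - shift)
    return 1.0 / p_signal
```

For example, `arl(0.0)` gives roughly 370 subgroups between false alarms, `arl(1.5)` roughly 15, and `arl(4.0)` roughly 1.2, matching the behavior described in the text.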

This balance between wanting a long ARL when the process is in control and a short ARL when there is a process change has led to the development of other charting methods. Some of those methods are briefly described in a later chapter. Control chart constants for all control charts discussed in this section are listed in Appendix E. For the average (X-bar) chart, the centerline is the grand average, the control limits are the grand average plus or minus A2 times the average range, and the process standard deviation is estimated as the average range divided by d2.
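As an illustration of these computations, the following sketch uses the published constants for a subgroup size of five (A2 = 0.577, D3 = 0, D4 = 2.114, d2 = 2.326); constants for other subgroup sizes are tabulated in Appendix E. The function name and return structure are illustrative, not from the manual:

```python
# X-bar and R chart computations for subgroups of size 5.
A2, D3, D4, d2 = 0.577, 0.0, 2.114, 2.326

def xbar_r_limits(subgroups):
    xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages
    ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)               # grand average (X-bar chart CL)
    rbar = sum(ranges) / len(ranges)                # average range (R chart CL)
    return {
        "xbar_cl": xbarbar,
        "xbar_ucl": xbarbar + A2 * rbar,
        "xbar_lcl": xbarbar - A2 * rbar,
        "r_cl": rbar,
        "r_ucl": D4 * rbar,
        "r_lcl": D3 * rbar,
        "sigma_hat": rbar / d2,   # estimated process standard deviation
    }

lim = xbar_r_limits([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6]])
```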

Estimate of the standard deviation of X: the average range divided by d2, using the d2 appropriate to the subgroup size. For the median chart, the sample value is the median: the value of the middle element in the sample when the data are arranged in ascending order (the ((n+1)/2)th value for odd n).
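The median selection described above can be sketched as a one-line helper (illustrative; assumes odd subgroup sizes, which keep the middle element unambiguous):

```python
# The median chart's sample value is the middle element of the sorted
# subgroup: the ((n+1)/2)th value (1-indexed) for odd subgroup sizes.
def subgroup_median(sample):
    ordered = sorted(sample)
    return ordered[len(ordered) // 2]
```

Odd subgroup sizes (3 or 5) are typical for median charts precisely because no averaging is then required of the operator.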

Average range: the average of the subgroup ranges, used to set the control limits. There are other approaches in the literature which do not use averages. With charts of this type, valid signals occur only in the form of points beyond the control limits. Other rules used to evaluate the data for non-random patterns (see Chapter II, Section B) are not reliable indicators of out-of-control conditions. Attributes charts are part of the probability based charts discussed later. These control charts use categorical data and the probabilities related to the categories to identify the presence of special causes.

The analysis of categorical data by these charts generally utilizes the binomial or Poisson distribution, approximated by the normal form. Traditionally, attributes charts are used to track unacceptable parts by identifying nonconforming items and nonconformities within an item. There is nothing intrinsic in attributes charts that restricts them to being used solely for charting nonconforming items.

They can also be used for tracking positive events. However, we will follow tradition and refer to these as nonconformances and nonconformities. Since the control limits are based on a normal approximation, the sample size used should be such that np ≥ 5. The centerline is the average proportion nonconforming; constant control limits may be used when the individual sample sizes do not vary greatly from the average sample size. Most of these charts were developed to address specific process situations or conditions which can affect the optimal use of the standard control charts.
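The binomial-based limits for a p (proportion nonconforming) chart can be sketched as follows. This is an illustrative helper, not the manual's notation; rather than using an average sample size, it simply recomputes the limits for each individual sample size:

```python
# p chart limits via the normal approximation to the binomial.
# `counts` holds the nonconforming count per sample, `sizes` the sample sizes.
from math import sqrt

def p_chart_limits(counts, sizes):
    pbar = sum(counts) / sum(sizes)   # centerline: overall proportion nonconforming
    limits = []
    for n in sizes:
        half = 3.0 * sqrt(pbar * (1.0 - pbar) / n)
        limits.append((max(0.0, pbar - half), pbar + half))   # (LCL, UCL) per sample
    return pbar, limits

pbar, limits = p_chart_limits([5, 7, 6, 4, 8], [100] * 5)
```

The n * pbar ≥ 5 guideline from the text applies here: with pbar near 0.06, samples of 100 give n * pbar = 6, so the normal approximation is reasonable.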

A brief description of the more common charts will follow below. This description will define the charts, discuss when they should be used and list the formulas associated with the chart, as appropriate. If more information is desired regarding these charts or others, please consult a reference text that deals specifically with these types of control charts. Probability based charts belong to a class of control charts that uses categorical data and the probabilities related to the categories.

The analysis of categorical data generally uses the binomial, multinomial or Poisson distribution. Examples of these charts are the attributes charts discussed in Chapter II, Section C. However, there is nothing inherent in any of these forms (or any other forms) that requires one or more categories to be "bad." This is as much the fault of professionals and teachers as it is the student's.

There is a tendency to take the easy way out, using traditional and stereotypical examples. This leads to a failure to realize that quality practitioners once had, or were constrained to, the tolerance philosophy, i.e., categorizing parts as simply good or bad.

With stoplight control charts, the process location and variation are controlled using one chart. The chart tracks the number of data points in the sample in each of the designated categories.

The decision criteria are based on the expected probabilities for these categories. A typical scenario will divide the process variation into three parts: warning low, target, and warning high. One simple but effective control procedure of this type is stoplight control, which is a semi-variables (more than two categories) technique using double sampling. In this approach the target area is designated green, the warning areas yellow, and the stop zones red.

The use of these colors gives rise to the "stoplight" designation. Of course, this allows process control only if the process distribution is known. The quantification and analysis of the process requires variables data. The focus of this tool is to detect changes (special causes of variation) in the process. That is, this is an appropriate tool for Stage 2 activities only.

At its basic implementation, stoplight control requires no computations and no plotting, thereby making it easier to implement than control charts.

Since it splits the total sample into an initial check (e.g., 2 pieces) and a conditional check (e.g., 3 more pieces), the measurement effort is often reduced. Although the development of this technique is thoroughly founded in statistical theory, it can be implemented and taught at the operator level without involving mathematics. Stoplight control assumes that process performance (including measurement variability) is acceptable and that the process is on target.

Once the assumptions have been verified by a process performance study using variables data techniques, the process distribution can be divided such that the average ±1.5 standard deviations is labeled the green area, and the rest of the area within the process distribution is yellow. Any area outside the process distribution is the red area. If the process distribution follows the normal form, approximately 86.6% of the distribution is in the green area, 13.2% is in the yellow area, and 0.3% is in the red area. Similar conditions can be established if the distribution is found to be non-normal. Check 2 pieces; if both pieces are in the green area, continue to run. If one or both are in the red zone, stop the process, notify the designated person for corrective action and sort material.
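These zone percentages can be checked directly from the normal distribution. The sketch below assumes the usual stoplight split (green within ±1.5 standard deviations of the average, yellow out to ±3, red beyond); it is an illustration, not the manual's computation:

```python
# Zone probabilities for a normal, on-target process under the
# green = +/-1.5 sigma, yellow = 1.5..3 sigma, red = beyond 3 sigma split.
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

green = phi(1.5) - phi(-1.5)               # probability of the green zone
yellow = (phi(3.0) - phi(-3.0)) - green    # probability of the yellow zones
red = 1.0 - (phi(3.0) - phi(-3.0))         # probability of the red zones
```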

When setup or other corrections are made, repeat step 1. If one or both are in a yellow zone, check three more pieces. If any pieces fall in a red zone, stop the process, notify the designated person for corrective action and sort material.

When setup or other corrections are made, repeat step 1. If no pieces fall in a red zone, but three or more (out of the 5 pieces) are in a yellow zone, stop the process and notify the designated person for corrective action. When setup or other corrections are made, repeat step 1. If three pieces fall in the green zone and the rest are yellow, continue to run. Measurements can be made with variables as well as attributes gaging.
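The double-sampling decision rules above can be sketched as a single function. This is an illustrative sketch: the function name and the returned action strings are assumptions, and each piece is assumed to be already classified into its zone:

```python
# Stoplight double-sampling decision. `first_two` are the zone labels of
# the initial 2 pieces; `next_three` the conditional 3 pieces, if checked.
def stoplight_decision(first_two, next_three=None):
    if "red" in first_two:
        return "stop"                  # any red in the first sample: stop and sort
    if all(z == "green" for z in first_two):
        return "run"                   # both green: continue to run
    # One or both yellow: a conditional sample of three more pieces is needed.
    if next_three is None:
        return "check 3 more"
    all_five = list(first_two) + list(next_three)
    if "red" in all_five:
        return "stop"                  # any red among the 5: stop and sort
    if all_five.count("yellow") >= 3:
        return "stop"                  # 3+ yellow out of 5: stop, corrective action
    return "run"                       # 3 green and the rest yellow: continue
```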

Certain variables gaging, such as dial indicators or air-electronic columns, are better suited for this type of program since the indicator background can be color coded. Although no charts or graphs are required, charting is recommended, especially if subtle trends (shifts over a relatively long period of time) are possible in the process. In any decision-making situation there is a risk of making a wrong decision. With sampling, the two types of errors are:

• the probability of calling the process bad when it is actually good (the false alarm rate);
• the probability of calling the process good when it is actually bad (the miss rate).
Sensitivity refers to the ability of the sampling plan to detect out-of-control conditions due to increased variation or shifts from the process average. The disadvantage of stoplight control is that it has a higher false alarm rate than an X-bar and R chart of the same total sample size. The advantage of stoplight control is that it is as sensitive as an X-bar and R chart of the same total sample size.
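The higher false alarm rate can be illustrated by exact enumeration. The sketch below assumes the green/yellow/red probabilities of a normal, on-target process (about 0.8664, 0.1309 and 0.0027) and the double-sampling rules described earlier; it is an illustration under those assumptions, not a result stated in the manual:

```python
# Exact false alarm probability per stoplight sampling cycle, by
# enumerating every zone outcome of the 2-piece and conditional 3-piece
# samples for an in-control, on-target normal process.
from itertools import product

probs = {"green": 0.8664, "yellow": 0.1309, "red": 0.0027}

def p_stop():
    total = 0.0
    for first in product(probs, repeat=2):
        p2 = probs[first[0]] * probs[first[1]]
        if "red" in first:
            total += p2                    # stop on red in the first 2 pieces
        elif all(z == "green" for z in first):
            continue                       # both green: run, no stop possible
        else:
            for nxt in product(probs, repeat=3):
                p5 = p2 * probs[nxt[0]] * probs[nxt[1]] * probs[nxt[2]]
                allz = list(first) + list(nxt)
                if "red" in allz or allz.count("yellow") >= 3:
                    total += p5            # stop on any red, or 3+ yellow of 5
    return total
```

The result is roughly a 2 to 3 percent chance of a stop signal per cycle even when nothing has changed, compared with about 0.27 percent for a single point beyond 3-sigma limits, consistent with the false alarm comparison above.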

Users tend to accept control mechanisms based on these types of data due to the ease of data collection and analysis. Focus is on the target, not the specification limits; thus it is compatible with the target philosophy and continuous improvement. An application of the stoplight control approach for the purpose of nonconformance control, instead of process control, is called pre-control. It is based on the specifications, not the process variation.

The first assumption means that all special sources of variation in the process are being controlled. The second assumption states that the process is capable of meeting the specifications. Under pre-control, the area outside the specifications is labeled red. For a process that is normal, with known Cp and Cpk, the proportion of parts falling in each zone can be calculated. Similar calculations could be done if the distribution was found to be non-normal or highly capable.

The pre-control sampling uses a sample size of two. However, before the sampling can start, the process must produce 5 consecutive parts in the green zone.

Each of the two data points are plotted on the chart and reviewed against a set of rules. Every time the process is adjusted, before the sampling can start the process must produce 5 consecutive parts in the green zone. Pre-control is not a process control chart but a lionconformance control chart so great care must be taken as to how this chart is used and interpreted. Pre-control charts should be not used when you have a C,, Cpk greater than one or a loss function that is not flat within the specifications see Chapter IV.

The disadvantage of pre-control is that potential diagnostics that are available with normal process control methods are not available. Further, pre-control does not evaluate or monitor process stability. Pre-control is a compliance based tool, not a process control tool. However, there are processes that only produce a small number of products during a single run. Further, the increasing focus on just-in-time (JIT) inventory and lean manufacturing methods is driving production runs to become shorter.

From a business perspective, producing large batches of product several times per month and holding it in inventory for later distribution can lead to avoidable, unnecessary costs.

Manufacturers now are moving toward JIT, producing much smaller quantities on a more frequent basis to avoid the costs of holding "work in process" and inventory. For example, in the past it may have been satisfactory to make 10,000 parts per month in batches of 2,500 per week.

Now, customer demand, flexible manufacturing methods and JIT requirements might lead to making and shipping only a few hundred parts per day. To realize the efficiencies of short-run processes, it is essential that SPC methods be able to verify that the process is truly in statistical control, i.e., predictable.

The process must be operated in a stable and consistent manner. The process aim must be set and maintained at the proper level. The Natural Process Limits must fall within the specification limits.