Discrete probability methods have several advantages that should be retained in constructing a probabilistic model. First, most engineering data are in discrete form, so a discrete probability method is a natural choice for incorporating such data into an analysis. Second, discrete probability methods are invariant with respect to the weighting scheme: regardless of the weighting applied to the input variable distributions, no new coding is required to implement it. Other weighting methods, for example Monte Carlo importance sampling, can require significant recoding before low-probability results can be estimated.

The most significant drawback of discrete probability methods is that their application is limited by computational cost. These methods require many calculations and a large amount of computer storage. The number of storage locations equals the number of discrete points ND raised to the power of the number of variables Nv. Thus, for ten discrete points and nine input variables, the response variable is characterized by one billion data points! While some computers may have sufficient storage to handle this number of data points, not all of these points are statistically significant.

A new method is described that samples randomly from the discrete probability space and condenses the result after a statistically significant number of calculations has been performed. The accuracy of a Monte Carlo calculation can be approximated, while importance sampling can be directed without any recoding of the computer algorithm.
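The following sketch is not the paper's algorithm; it is a minimal illustration, under assumed inputs, of the two points above: the ND**Nv growth of a full discrete combination, and how random sampling from the discrete marginals followed by condensation into a small number of bins keeps the response representation compact. The response function, sample size, and equal-width binning rule are all illustrative assumptions.

```python
# Illustrative sketch only: full enumeration vs. random sampling + condensation
# for a discrete probability model with Nv variables of ND points each.
import numpy as np

rng = np.random.default_rng(0)

ND, NV = 10, 9  # discrete points per variable, number of input variables
print(f"Full combination would require ND**Nv = {ND**NV:,} response points")

# Assumed discrete marginals: ND (value, probability) pairs for each variable.
values = rng.uniform(0.5, 1.5, size=(NV, ND))
probs = rng.random((NV, ND))
probs /= probs.sum(axis=1, keepdims=True)

def response(x):
    """Placeholder response function; a real analysis would call its model here."""
    return np.prod(x)

# Sample randomly from the discrete probability space instead of enumerating it.
NSAMP = 10_000
idx = np.array([rng.choice(ND, size=NSAMP, p=probs[v]) for v in range(NV)])
samples = np.array([values[v, idx[v]] for v in range(NV)]).T  # shape (NSAMP, NV)
resp = np.array([response(x) for x in samples])

# Condense the sampled responses into a small discrete distribution
# (equal-width bins here; the condensation rule is an assumption of this sketch).
NBINS = 20
edges = np.linspace(resp.min(), resp.max(), NBINS + 1)
which = np.clip(np.digitize(resp, edges) - 1, 0, NBINS - 1)
cond_prob = np.bincount(which, minlength=NBINS) / NSAMP
cond_val = 0.5 * (edges[:-1] + edges[1:])

print("Condensed response distribution (value, probability):")
for v, p in zip(cond_val, cond_prob):
    print(f"  {v:8.3f}  {p:6.4f}")
```

Because the marginals stay in discrete form, reweighting an input distribution only changes the `probs` array passed to the sampler; no part of the algorithm itself needs to be recoded.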