Monte Carlo method
[[image:Monte_carlo_method.svg|thumb|right|The Monte Carlo method can be illustrated as a game of [[Battleship (game)|battleship]]. First, a player makes some random shots. Next, the player applies algorithms (for example, a battleship occupies four dots in a vertical or horizontal line). Finally, based on the outcome of the random sampling and the algorithms, the player can determine the likely locations of the other player's ships.]]
'''Monte Carlo methods''' are a class of [[computation]]al [[algorithm]]s that rely on repeated [[random]] sampling to compute their results. Monte Carlo methods are often used when [[computer simulation|simulating]] [[physics|physical]] and [[mathematics|mathematical]] systems. Because of their reliance on repeated computation and [[random number|random]] or [[pseudorandomness|pseudo-random]] numbers, Monte Carlo methods are most suited to calculation by a [[computer]]. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a [[deterministic algorithm]].<ref>Douglas Hubbard "How to Measure Anything: Finding the Value of Intangibles in Business" pg. 46, John Wiley & Sons, 2007</ref>
The term '''Monte Carlo method''' was coined in the 1940s by physicists working on nuclear weapon projects at the [[Los Alamos National Laboratory]].<ref>[http://library.lanl.gov/la-pubs/00326866.pdf The beginning of the Monte Carlo method]</ref>
== Overview ==
There is no single Monte Carlo method; instead, the term describes a large and widely used class of approaches. However, these approaches tend to follow a particular pattern:
# Define a domain of possible inputs.
# Generate inputs randomly from the domain, and perform a deterministic computation on them.
# Aggregate the results of the individual computations into the final result.
For example, the value of [[pi|π]] can be approximated using a Monte Carlo method. Draw a square of unit area on the ground, then [[inscribed figure|inscribe]] a circle within it. Now, scatter some small objects (for example, grains of rice or sand) throughout the square. If the objects are scattered [[uniform distribution (continuous)|uniformly]], then the ratio of the number of objects inside the circle to the number inside the whole square should be approximately π/4, which is the ratio of the circle's area to the square's area. Thus, if we count the objects inside the circle, multiply by four, and divide by the total number of objects in the square, we get an approximation to π.
Notice how the π approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it's the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π. Note, also, two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped.
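The procedure above can be sketched directly in code. The following Python fragment is a minimal, illustrative implementation (the function name <code>estimate_pi</code> and the sample count are arbitrary choices, not part of any standard library):

<source lang="python">
import random

def estimate_pi(num_samples=1000000):
    """Estimate pi by scattering random points in a unit square."""
    inside = 0
    for _ in range(num_samples):
        # a random point in the unit square centred at the origin
        x = random.uniform(-0.5, 0.5)
        y = random.uniform(-0.5, 0.5)
        # does it fall inside the inscribed circle of radius 0.5?
        if x * x + y * y <= 0.25:
            inside += 1
    # the fraction of points inside the circle approximates pi/4
    return 4.0 * inside / num_samples

print(estimate_pi())  # tends toward 3.14159... as the sample count grows
</source>

The estimate improves slowly: as discussed in the integration section below, the error typically shrinks in proportion to the square root of the number of samples.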
== History ==
The name "Monte Carlo" was popularized by physics researchers [[Stanislaw Ulam]], [[Enrico Fermi]], [[John von Neumann]], and [[Nicholas Metropolis]], among others; the name is a reference to a famous [[casino]] in [[Monaco]] where Ulam's uncle would borrow money to gamble.<ref>Douglas Hubbard "How to Measure Anything: Finding the Value of Intangibles in Business" pg. 46, John Wiley & Sons, 2007</ref> The use of [[randomness]] and the repetitive nature of the process are analogous to the activities conducted at a casino.
Random methods of computation and experimentation (generally considered forms of [[stochastic simulation]]) can arguably be traced back to the earliest pioneers of probability theory (see, e.g., [[Buffon's needle]], and the work on small samples by [[William Gosset]]), but are more specifically traced to the pre-electronic computing era. What usually distinguishes a Monte Carlo simulation is that it systematically "inverts" the typical mode of simulation, treating a deterministic problem by ''first'' finding a probabilistic analog. Previous methods of simulation and statistical sampling generally did the opposite: they used simulation to test a previously understood deterministic problem. Though examples of this "inverted" approach do exist historically, it was not considered a general method until the popularity of the Monte Carlo method spread.
Perhaps the most famous early use was by Enrico Fermi in 1930, when he used a random method to calculate the properties of the newly discovered [[neutron]]. Monte Carlo methods were central to the [[simulation]]s required for the [[Manhattan Project]], though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at [[Los Alamos National Laboratory|Los Alamos]] for early work relating to the development of the [[hydrogen bomb]], and became popularized in the fields of [[physics]], [[physical chemistry]], and [[operations research]]. The [[Rand Corporation]] and the [[U.S. Air Force]] were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and the methods began to find wide application in many different fields.
Uses of Monte Carlo methods require large quantities of random numbers, and it was their use that spurred the development of [[pseudorandom number generator]]s, which were far quicker to use than the tables of random numbers that had previously been used for statistical sampling.
==Applications==
Monte Carlo simulation methods are especially useful in studying systems with a large number of [[coupling (physics)|coupled]] degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and cellular structures (see [[cellular Potts model]]). More broadly, Monte Carlo methods are useful for modeling phenomena with significant [[uncertainty]] in inputs, such as the calculation of [[risk]] in business (for its use in the insurance industry, see [[stochastic modelling]]). A classic use is for the evaluation of [[definite integral]]s, particularly multidimensional integrals with complicated boundary conditions.
[[Monte Carlo methods in finance]] are often used to calculate the value of companies, to evaluate investments in projects at the corporate level, or to evaluate financial derivatives. They allow financial analysts to construct stochastic or probabilistic financial models, as opposed to the traditional static and deterministic models.
Monte Carlo methods are very important in [[computational physics]], [[physical chemistry]], and related applied fields, and have diverse applications from complicated [[quantum chromodynamics]] calculations to designing [[heat shield]]s and [[aerodynamics|aerodynamic]] forms.
Monte Carlo methods have also proven efficient in solving coupled integro-differential equations of radiation fields and energy transport, and thus these methods have been used in [[global illumination]] computations which produce photorealistic images of virtual 3D models, with applications in [[video games]], [[architecture]], [[design]], computer-generated [[film]]s, special effects in cinema, business, economics, and other fields.
Monte Carlo methods are useful in many areas of computational mathematics, where a ''lucky choice'' can find the correct result. A classic example is [[Miller-Rabin primality test|Rabin's algorithm]] for primality testing: for any ''n'' which is not prime, a random ''x'' has at least a 75% chance of proving that ''n'' is not prime. Hence, if ''n'' is not prime but ''x'' says that it might be, we have observed at most a 1-in-4 event. If 10 different random ''x'' say that "''n'' is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer that comes with a guarantee (''n'' is composite, and ''x'' proves it so) and another answer that does not, but whose probability of being wrong is bounded (in this case, at most 25% per random ''x''). See also [[Las Vegas algorithm]] for a related, but different, idea.
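A minimal Python sketch of a test in this spirit (an illustrative implementation of the Miller-Rabin idea; <code>is_probably_prime</code> is not taken from any particular library) is:

<source lang="python">
import random

def is_probably_prime(n, rounds=10):
    """Monte Carlo primality test: a composite n is wrongly accepted with
    probability at most (1/4)**rounds per call."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 as 2**r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        x = random.randrange(2, n - 1)  # a random potential witness
        y = pow(x, d, n)
        if y == 1 or y == n - 1:
            continue                    # x gives no evidence either way
        for _ in range(r - 1):
            y = pow(y, 2, n)
            if y == n - 1:
                break
        else:
            return False                # x proves n composite (always correct)
    return True                         # "probably prime"

print(is_probably_prime(561))     # almost certainly False: 561 = 3 * 11 * 17
print(is_probably_prime(104729))  # True: 104729 is prime
</source>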
===Application areas===
Areas of application include:
* Graphics, particularly for [[ray tracing]]; a version of the [[Metropolis-Hastings algorithm]] is also used for ray tracing, where it is known as [[Metropolis light transport]]
* [[Monte Carlo method for photon transport|Modeling light transport in biological tissue]]
* [[Monte Carlo methods in finance]]
* [[Reliability engineering]]
* In simulated annealing for protein structure prediction
* In semiconductor device research, to model the transport of current carriers
* Environmental science, dealing with contaminant behavior
* [[Monte Carlo method in statistical physics|Monte Carlo method]] in [[statistical physics]]; in particular, [[Monte Carlo molecular modeling]] as an alternative for computational [[molecular dynamics]].
* Search and rescue and counter-pollution, e.g. models used to predict the drift of a life raft or the movement of an oil slick at sea
* In [[Probabilistic design]] for simulating and understanding the effects of variability
* In [[Physical chemistry]], particularly for simulations involving atomic clusters
* In computer science
** [[Las Vegas algorithm]]
** [[LURCH]]
** [[Computer Go]]
** [[General Game Playing]]
* Modeling the movement of impurity atoms (or ions) in plasmas in existing tokamaks (e.g. DIVIMP)
* In experimental [[particle physics]], for designing [[particle detector|detectors]], understanding their behavior and comparing experimental data to theory
* Nuclear and particle physics codes using the Monte Carlo method:
** [[GEANT (program)|GEANT]] - [[European Organization for Nuclear Research|CERN]]'s simulation of high energy particles interacting with a detector.
** [[CompHEP]], [[PYTHIA]] - Monte-Carlo generators of particle collisions
** [[Monte Carlo N-Particle Transport Code|MCNP(X)]] - LANL's radiation transport codes
** [[Monte Carlo Universal|MCU]] - universal computer code for simulation of particle transport (neutrons, photons, electrons) in three-dimensional systems by means of the Monte Carlo method
** [[EGS (program)|EGS]] - [[SLAC|Stanford]]'s simulation code for coupled transport of electrons and photons
** [[PEREGRINE]] - LLNL's Monte Carlo tool for radiation therapy dose calculations
** [[BEAMnrc]] - Monte Carlo code system for modeling radiotherapy sources ([[Linear particle accelerator|linac]]s)
** [[PENELOPE]] - Monte Carlo for coupled transport of photons and electrons, with applications in radiotherapy
** [[MONK]] - Serco Assurance's code for the calculation of [[Neutron multiplication factor|k-effective]] of nuclear systems
* Modelling of [[foam]] and cellular structures
* Modeling of [[biological tissue|tissue]] [[morphogenesis]]
=== Other methods employing Monte Carlo===
* Assorted random models, e.g. [[self-organised criticality]]
* [[Direct simulation Monte Carlo]]
* [[Dynamic Monte Carlo method]]
* [[Kinetic Monte Carlo]]
* [[Quantum Monte Carlo]]
* [[Quasi-Monte Carlo method]] using [[low-discrepancy sequence]]s and self-avoiding walks
* Semiconductor charge transport and the like
* [[Electron microscopy]] beam-sample interactions
* [[Stochastic optimization]]
* [[Cellular Potts model]]
* [[Markov chain Monte Carlo]]
* [[Cross-Entropy Method]]
* [[Applied information economics]]
* [[Monte Carlo localization]]
==Use in mathematics==
In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers and observing the fraction of those numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems which are too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
=== Integration ===
{{main|Monte Carlo integration}}
Deterministic methods of [[numerical integration]] operate by taking a number of evenly spaced samples from a function. In general, this works very well for functions of one variable. However, for functions of [[vector space|vector]]s, deterministic quadrature methods can be very inefficient. To numerically integrate a function of a two-dimensional vector, equally spaced grid points over a two-dimensional surface are required. For instance, a 10×10 grid requires 100 points. If the vector has 100 dimensions, the same spacing on the grid would require [[googol|10<sup>100</sup>]] points—far too many to be computed. 100 [[dimension]]s is by no means unreasonable, since in many physical problems, a "dimension" is equivalent to a [[degrees of freedom (physics and chemistry)|degree of freedom]]. (See [[Curse of dimensionality]].)
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably [[well-behaved]], it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of average of the function values at these points. By the [[law of large numbers]] this average converges to the true value, and the error shrinks as <math>1/\sqrt{N}</math>: quadrupling the number of sampled points will halve the error, regardless of the number of dimensions.
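As a rough illustration, the following Python sketch estimates an integral over a high-dimensional unit hypercube by averaging the integrand at uniformly random points (the test integrand and sample count are arbitrary choices made for the example):

<source lang="python">
import random

def mc_integrate(f, dim, num_samples=100000):
    """Crude Monte Carlo estimate of the integral of f over the unit hypercube
    [0,1]^dim; the hypercube has volume 1, so the average of f is the integral."""
    total = 0.0
    for _ in range(num_samples):
        point = [random.random() for _ in range(dim)]
        total += f(point)
    return total / num_samples

# Example: integrate (x_1 + ... + x_100)^2 over the 100-dimensional unit cube.
# The exact value is 100/12 + 50**2, about 2508.3; the error of the estimate
# shrinks roughly like 1/sqrt(num_samples), independently of the dimension.
f = lambda x: sum(x) ** 2
print(mc_integrate(f, dim=100))
</source>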
A refinement of this method is to somehow make the points random, but more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Understandably, doing this precisely is just as difficult as solving the integral in the first place, but there are approximate methods available: from simply making up an integrable function thought to be similar, to one of the adaptive routines discussed in the topics listed below.
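A small illustration of this idea in Python: to estimate <math>\int_0^1 e^x\,dx</math> (exact value e − 1), points are drawn from the density g(x) = (1 + x)/1.5, which roughly follows the shape of the integrand, and each sample is weighted by f(x)/g(x). The particular choice of g here is arbitrary, made only for the sake of the example:

<source lang="python">
import math, random

def importance_sample(num_samples=100000):
    """Importance-sampling estimate of the integral of exp(x) on [0, 1]."""
    total = 0.0
    for _ in range(num_samples):
        u = random.random()
        # inverse CDF of g(x) = (1 + x)/1.5 on [0, 1]
        x = -1.0 + math.sqrt(1.0 + 3.0 * u)
        f = math.exp(x)
        g = (1.0 + x) / 1.5
        total += f / g          # weight each sample by f(x)/g(x)
    return total / num_samples

print(importance_sample(), "vs exact value", math.e - 1)
</source>

Because g is larger where the integrand is larger, the weights f(x)/g(x) vary less than the raw function values do, which reduces the variance of the estimate.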
A similar approach involves using [[low-discrepancy sequence]]s instead—the [[quasi-Monte Carlo method]]. Quasi-Monte Carlo methods can often be more efficient at numerical integration because the sequence "fills" the area better in a sense and samples more of the most important points that can make the simulation converge to the desired solution more quickly.
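As an illustration, one common low-discrepancy construction is the Halton sequence; a short Python sketch (using bases 2 and 3 for the two coordinates, an arbitrary but conventional choice) is:

<source lang="python">
def halton(index, base):
    """Return element `index` (starting from 1) of the Halton sequence in `base`."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# The first few points of a 2-D quasi-random sequence: they fill the unit
# square more evenly than independent uniform random points would.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
print(points)
</source>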
==== Integration methods ====
* Direct sampling methods
** [[Importance sampling]]
** [[Stratified sampling]]
** [[Recursive stratified sampling]]
** [[VEGAS algorithm]]
* [[Random walk Monte Carlo]] including [[Markov chain]]s
** [[Metropolis-Hastings algorithm]]
* [[Gibbs sampling]]
=== Optimization ===
Another powerful and very popular application of random numbers in numerical simulation is [[optimization (mathematics)|numerical optimization]]. Here the problem is to minimize (or maximize) a function of some vector, which often has many dimensions. Many problems can be phrased in this way: for example, a [[computer chess]] program could be seen as trying to find the set of, say, 10 moves which produces the best evaluation at the end. The [[traveling salesman problem]] is another optimization problem. There are also applications to engineering design, such as [[multidisciplinary design optimization]].
Most Monte Carlo optimization methods are based on [[random walk]]s. Essentially, the program moves a marker around in multi-dimensional space, tending to move in directions which lead to a lower function value, but sometimes moving against the [[gradient]].
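A minimal Python sketch of such a random-walk minimiser (with a simulated-annealing-style acceptance rule; the step size, temperature schedule, and test function are arbitrary choices for illustration):

<source lang="python">
import math, random

def random_walk_minimise(f, x0, steps=10000, step_size=0.1, temperature=1.0):
    """Move a marker around, mostly downhill, occasionally accepting uphill
    moves so the walk can escape shallow local minima."""
    x, fx = list(x0), f(x0)
    for _ in range(steps):
        candidate = [xi + random.gauss(0.0, step_size) for xi in x]
        fc = f(candidate)
        # always accept a downhill move; accept uphill with small probability
        if fc < fx or random.random() < math.exp((fx - fc) / temperature):
            x, fx = candidate, fc
        temperature *= 0.999   # gradually make uphill moves less likely
    return x, fx

# Example: minimise a simple bowl-shaped function of a 5-dimensional vector.
best_x, best_f = random_walk_minimise(lambda v: sum(xi * xi for xi in v), [5.0] * 5)
print(best_f)   # should end up close to 0
</source>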
==== Optimization methods ====
* [[Evolution strategy]]
* [[Genetic algorithm]]s
* [[Parallel tempering]]
* [[Simulated annealing]]
* [[Stochastic optimization]]
* [[Stochastic tunneling]]
=== Inverse problems===
Probabilistic formulation of [[inverse problem]]s leads to the definition of a [[probability distribution]] in the model space. This probability distribution combines [[A priori (math modeling)|a priori]] information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the viewer. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. For details, see Mosegaard and Tarantola (1995) [http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/Papers_PDF/MonteCarlo_latex.pdf], or Tarantola (2005) [http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html].
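A bare-bones Python sketch of a Metropolis random walk over a model space (the log-posterior here is a stand-in; in a real inverse problem it would combine the prior information with the data misfit):

<source lang="python">
import math, random

def metropolis(log_posterior, m0, steps=50000, step_size=0.1):
    """Sample models from an un-normalised posterior via a Metropolis walk."""
    samples = []
    m, logp = list(m0), log_posterior(m0)
    for _ in range(steps):
        proposal = [mi + random.gauss(0.0, step_size) for mi in m]
        logp_new = log_posterior(proposal)
        delta = logp_new - logp
        # Metropolis rule: accept with probability min(1, exp(delta))
        if delta >= 0 or random.random() < math.exp(delta):
            m, logp = proposal, logp_new
        samples.append(list(m))
    return samples   # histogram these to study marginals and resolution

# Toy example: a two-parameter "posterior" proportional to a standard normal.
chain = metropolis(lambda m: -0.5 * sum(mi * mi for mi in m), [0.0, 0.0])
print(len(chain), chain[-1])
</source>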
== Monte Carlo and random numbers==
Interestingly, Monte Carlo simulation methods do not generally require truly [[random number]]s to be useful, although for some other applications, such as [[primality testing]], unpredictability is vital (see Davenport (1995)).<ref>{{cite web|last=Davenport |first=J. H. |authorlink= |coauthors= |title=Primality testing revisited |work= |publisher= |date= |url=http://doi.acm.org/10.1145/143242.143290 |format= |doi=http://doi.acm.org/10.1145/143242.143290 |accessdate=2007-08-19 |quote = }}</ref> Many of the most useful techniques use deterministic, [[pseudo-random]] sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good [[simulation]]s is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically the sequence should pass a series of statistical tests. One of the simplest and most common is to test that the numbers are [[uniform distribution|uniformly distributed]], or follow another desired distribution, when a large enough number of elements of the sequence are considered.
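One of the simplest such checks can itself be written in a few lines of Python; the bin count and sample size below are arbitrary choices for illustration:

<source lang="python">
import random

def chi_square_uniformity(num_samples=100000, bins=10):
    """Bin a pseudo-random sequence and compare the counts with the uniform
    expectation using a chi-square statistic."""
    counts = [0] * bins
    for _ in range(num_samples):
        counts[int(random.random() * bins)] += 1
    expected = num_samples / float(bins)
    return sum((c - expected) ** 2 / expected for c in counts)

# For a good uniform generator, this statistic approximately follows a
# chi-square distribution with bins - 1 = 9 degrees of freedom, so values
# far above roughly 20-25 would be suspicious.
print(chi_square_uniformity())
</source>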
==An alternative to the basic Monte Carlo method==
[[Applied information economics]] (AIE) is a decision analysis method used in business and government that addresses some of the shortcomings of the Monte Carlo method, at least as it is usually employed in practical situations. The most important components AIE adds to the Monte Carlo method are:
# Accounting for the systemic overconfidence of human estimators with [[calibrated probability assessment]]
# Computing the economic value of information to guide additional empirical measurements
# Using the results of Monte Carlo simulations as input to portfolio analysis
When Monte Carlo simulations are used in most decision analysis settings, human experts are used to estimate the probabilities and ranges in the model. However, decision psychology research in the field of calibrated probability assessment shows that humans, especially experts in various fields, tend to be statistically overconfident. That is, they assign too high a probability to a forecasted outcome occurring, and they tend to use ranges that are too narrow to reflect their uncertainty. AIE involves training human estimators so that the probabilities and ranges they provide realistically reflect uncertainty (e.g., a subjective 90% confidence interval has a 90% chance of containing the true value). Without such training, Monte Carlo models will invariably underestimate the uncertainty of a decision and therefore the risk.
Another shortcoming is that, in practice, most users of Monte Carlo simulations rely entirely on the initial subjective estimates and almost never follow up with empirical observation. This may be due to the overwhelming number of variables in many models and the inability of analysts to choose economically justified variables to measure further. AIE addresses this by using methods from decision theory to compute the economic value of additional information. This usually eliminates the need to measure most variables and puts pragmatic constraints on the methods used to measure those variables that have a significant information value.
The final shortcoming addressed by AIE is that the output of a Monte Carlo simulation, at least for the analysis of business decisions, is simply the histogram of the resulting returns. No criterion is presented to determine whether a particular distribution of results is acceptable. AIE uses [[Modern Portfolio Theory]] to determine which investments are desirable and what their relative priorities should be.
== See also ==
* [[Bootstrapping (statistics)]]
* [[Las Vegas algorithm]]
* [[Markov chain]]
* [[Auxiliary field Monte Carlo]]
* [[Molecular dynamics]]
* [[Quasi-Monte Carlo method]]
* [[Random number generator]]
* [[Randomness]]
* [[Resampling (statistics)]]
==Notes==
<!--See http://en.wikipedia.org/wiki/Wikipedia:Footnotes for an explanation of how to generate footnotes using the <ref(erences/)> tags-->
<!--to cite a web resource, use this template
<ref>{{cite web|last= |first= |authorlink= |coauthors= |title= |work= |publisher= |date= |url= |format= |doi= |accessdate= |quote = }}</ref>
-->
{{reflist}}
==References==
* Bernd A. Berg, ''Markov Chain Monte Carlo Simulations and Their Statistical Analysis (With Web-Based Fortran Code)'', World Scientific [[2004]], ISBN 981-238-935-0.
* Arnaud Doucet, Nando de Freitas and Neil Gordon, ''Sequential Monte Carlo methods in practice'', [[2001]], ISBN 0-387-95146-6.
* P. Kevin MacKeown, ''Stochastic Simulation in Physics'', [[1997]], ISBN 981-3083-26-3
* Harvey Gould & Jan Tobochnik, ''An Introduction to Computer Simulation Methods, Part 2, Applications to Physical Systems'', [[1988]], ISBN 0-201-16504-X
* C.P. Robert and G. Casella. "Monte Carlo Statistical Methods" (second edition). New York: Springer-Verlag, [[2004]], ISBN 0-387-21239-6
* R.Y. Rubinstein and D.P. Kroese (2007). "Simulation and the Monte Carlo Method" (second edition). New York: John Wiley & Sons, ISBN 978-0-470-17793-8.
* Mosegaard, Klaus, and Tarantola, Albert, 1995. Monte Carlo sampling of solutions to inverse problems. J. Geophys. Res., 100, B7, 12431–12447.
* Tarantola, Albert, ''Inverse Problem Theory'' ([http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html free PDF version]), Society for Industrial and Applied Mathematics, 2005. ISBN 0-89871-572-5
* Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller and Edward Teller, "[[Equation of State Calculations by Fast Computing Machines]]", Journal of Chemical Physics, volume 21, p. 1087 (1953) ({{DOI|10.1063/1.1699114}})
* N. Metropolis and S. Ulam, "The Monte Carlo Method", Journal of the American Statistical Association, volume 44, number 247, pp. 335–341 (1949) ({{DOI|10.2307/2280232}})
* Fishman, G.S., (1995) ''Monte Carlo: Concepts, Algorithms, and Applications'', Springer Verlag, New York.
*Judgement under Uncertainty: Heuristics and Biases, ed. D. Kahneman and A. Tversky,(Cambridge University Press, 1982)
* R. E. Caflisch, ''Monte Carlo and quasi-Monte Carlo methods'', Acta Numerica vol. 7, Cambridge University Press, 1998, pp. 1-49. [http://books.google.com/books?id=g-j-Pz1zjkwC]
==External links==
{{external links}}
*[http://mathworld.wolfram.com/MonteCarloMethod.html Overview and reference list], Mathworld
*[http://www.ipp.mpg.de/de/for/bereiche/stellarator/Comp_sci/CompScience/csep/csep1.phy.ornl.gov/mc/mc.html Introduction to Monte Carlo Methods], Computational Science Education Project
*[http://www.sitmo.com/eqcat/15 Overview of formulas used in Monte Carlo simulation], the Quant Equation Archive, at sitmo.com
*[http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html The Basics of Monte Carlo Simulations], [[University of Nebraska-Lincoln]]
*[http://office.microsoft.com/en-us/assistance/HA011118931033.aspx Introduction to Monte Carlo simulation] (for [[Microsoft Excel|Excel]]), Wayne L. Winston
*[http://www.bus.lsu.edu/academics/finance/faculty/dchance/Instructional/TN96-03.pdf Monte Carlo Simulation], Prof. Don M. Chance
*[http://www.brighton-webs.co.uk/montecarlo/concept.asp Monte Carlo Methods - Overview and Concept], brighton-webs.co.uk
*[http://www.solver.com/simulation/monte-carlo-simulation/index.html Monte Carlo Simulation - Introduction], solver.com
*[http://www.cooper.edu/engineering/chemechem/monte.html Molecular Monte Carlo Intro], [[Cooper Union]]
*[http://homepages.nyu.edu/~sl1544/articles.html Monte Carlo techniques applied to finance], Simon Leger
*[http://homepages.ed.ac.uk/s0095122/Applet1-page.htm Monte Carlo techniques applied in physics]
*[http://www.global-derivatives.com/maths/k-o.php MonteCarlo Simulation in Finance], global-derivatives.com
*[http://twt.mpei.ac.ru/MAS/Worksheets/approxpi.mcd Approximation of π with the Monte Carlo Method]
*[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=265905 Risk Analysis in Investment Appraisal], The Application of Monte Carlo Methodology in Project Appraisal, Savvakis C. Savvides
*[http://doi.acm.org/10.1145/143242.143290 Primality Testing Revisited], Proc. ISSAC 1992, James H. Davenport
*[http://www.datastructures.info/the-monte-carlo-algorithmmethod/ Example of Calculating Pi using the Monte Carlo method, C++]
*[http://en.wikiversity.org/wiki/Probabilistic_Assessment_of_Structures Probabilistic Assessment of Structures using the Monte Carlo method], Wikiversity resource for students of Structural Engineering
* [http://www.fz-juelich.de/nic-series/volume10/janke2.pdf Statistical Analysis of Simulations], by Professor Wolfhard Janke, Universität Leipzig, Germany
* [http://www.puc-rio.br/marco.ind/quasi_mc.html A very intuitive and comprehensive introduction to Quasi-Monte Carlo methods]
===Software===
* [http://simularsoft.com.ar SimulAr] - Free Monte Carlo Simulation Excel Add-In
* [http://www.mrc-bsu.cam.ac.uk/bugs/ The BUGS project] (including WinBUGS and OpenBUGS)
* [http://www.crystalball.com/ Monte Carlo Simulation Tool for Excel] by Oracle (formerly Decisioneering)
* [http://www.palisade.com Monte Carlo Simulation Tool for Excel] by Palisade
* [http://www.lumenaut.com/montecarlo.htm Monte Carlo Simulation Tool for Excel] by Lumenaut
* [http://www.goldsim.com Monte Carlo Simulation for Business, Engineered, and Environmental Systems] by GoldSim
* [http://www.statistics101.net Monte Carlo Simulation, Resampling, Bootstrap tool]
* [http://yasai.rutgers.edu/ YASAI: Yet Another Simulation Add-In] - Free Monte Carlo Simulation Add-In for Excel created by [[Rutgers University]]
[[Category:Monte Carlo methods| ]]
[[Category:Randomness]]
[[Category:Numerical analysis]]
[[Category:Statistical mechanics]]
[[Category:Computational physics]]
[[Category:Sampling techniques]]
[[ar:طريقة مونت كارلو]]
[[cs:Metoda Monte Carlo]]
[[da:Monte Carlo-metoder]]
[[de:Monte-Carlo-Simulation]]
[[es:Método de Monte Carlo]]
[[fr:Méthode de Monte-Carlo]]
[[ko:몬테카를로 방법]]
[[hr:Monte Carlo simulacija]]
[[id:Metode Monte Carlo]]
[[it:Metodo Monte Carlo]]
[[he:שיטת מונטה קרלו]]
[[nl:Monte-Carlosimulatie]]
[[ja:モンテカルロ法]]
[[no:Monte Carlo-metoden]]
[[pl:Metoda Monte Carlo]]
[[pt:Método de Monte Carlo]]
[[ru:Метод Монте-Карло]]
[[simple:Monte Carlo algorithm]]
[[su:Metoda Monte Carlo]]
[[fi:Monte Carlo -simulaatio]]
[[sv:Monte Carlometoden]]
[[vi:Phương pháp Monte Carlo]]
[[tr:Monte Carlo benzetimi]]
[[zh:蒙特·卡罗方法]]
[[ur:مونٹے کارلو تشبیہ]]