
Microsimulation. A Tool for Economic Analysis

  1. Anders Klevmarken (corresponding author)
     Department of Economics, Uppsala University, Sweden
Research article
Cite this article as: Klevmarken A (2022). Microsimulation. A Tool for Economic Analysis. International Journal of Microsimulation 15(1): 6–14. doi: 10.34196/ijm.00246

Abstract

Microsimulation involves modeling the behavior of individuals and other decision units, taking into account the effects of policy parameters such as tax rates, eligibility rules for benefits and subsidies, and compensation rates in the social security system. The model is simulated to analyze the impact of policy changes not only on mean behavior but also on the entire distribution of target variables. Microsimulation models have thus, for instance, been used to analyze how changes in income taxes influence the tails of the income distribution (the incidence of poverty). Microsimulation complements a more traditional economic analysis. It is demanding in terms of modeling effort, data requirements and computer capacity. The issues of statistical inference related to microsimulation are in principle no different from those in econometric modeling generally. In practice the large scale and complex structure of a typical microsimulation model and the shortage of good micro data raise inference issues of particular relevance for microsimulation, such as the choice of estimation criteria, calibration to benchmarks and model validation. Much in current practice does not meet high scientific standards, but there is a potential for a promising research program.

1. The ceteris paribus assumption in economic analysis

In the empirical verification of economic models economists almost always have to rely on observational studies, because controlled experiments are difficult to implement, sometimes even considered unethical, and thus very rare. This has the important implication that confounding factors must be controlled by careful modeling and the application of sound econometric methods.

In their theoretical work economists usually abstract from these confounding factors and concentrate on the mechanism of key interest – the so-called ceteris paribus assumption. To convey a new idea and to demonstrate its key implication, this is an efficient and useful approach. But in empirical testing, and in using the new theory for policy recommendations, this mental substitute for a controlled experiment is in general not applicable. Only under rather special circumstances is there an econometric correspondence to the ceteris paribus assumption, i.e., no confounding factors other than random noise.

Even a first-year student of econometrics knows that in regression analysis omitted variables will in general result in biased and inconsistent estimates of the partial effects of the included variables. In fact, this result generalizes to almost all misspecified models. Policy recommendations based on models using an erroneous ceteris paribus assumption will thus in general simply be wrong.

When is it then possible to model potentially confounding factors as residual white noise, and when is it necessary to include them explicitly as part of the economic model? What is important enough not to be treated as a random residual? There is no straight answer to these questions. This is the art of model building and of using “rhetoric” to convince fellow economists and policy makers of the benefits of a particular model.1 The only advice the econometrician can give is to put the model to the test of data using our battery of diagnostic tests. There is, however, no foolproof procedure that leads to the “true” model. For a given finite sample many models will pass the diagnostic tests, and with very large samples almost every model would be rejected. The existence of a true data generating process is a useful assumption made by the econometrician to be able to discuss properties of econometric methods, but it is not really needed in economic analysis. What is important is to find a model that is useful for a particular purpose. In the search for such a model one might be willing to neglect less important deviations between model and data and accept certain properties of the model even if diagnostic tests would reject them. The economist must, however, be able to show that these properties are of no consequence, or at least “less important”, for his purpose. Some analysts take this kind of argument as an excuse to fit very simple structures to data and to neglect diagnostic testing. This is not well advised. Diagnostic tests that signal likely specification errors are always warnings to be taken seriously, and we usually prefer an economic model that fits the data better to one that does not fit as well.

A modern economy is a complex interaction of many economic agents, and one would think that a realistic model would in general have to be almost as complex as the reality it is meant to mirror. Our experiences of large-scale modeling are, however, not entirely good. The large macro models of the 1960s and 1970s did not keep what they promised. The computable general equilibrium models have their advocates, but this line of research has also been heavily criticized, see for instance Hansen and Heckman (1996) and Sims (1996). It is hard to convince fellow economists and policy makers about the merits of a large-scale model whose functioning they have difficulties understanding. They don’t trust a black box!

Macroeconomics is about understanding the relations between the aggregates of the national accounts, and macroeconomic modeling involves attempts to formalize these relations. In doing so, microeconomic arguments are frequently used, but dressed in the language of the average economic man. The link between micro and macro is, however, weak. Usually, we cannot derive the macro relations from micro entities. In the 1950s and 1960s this was a concern for economists. There was a literature on the “aggregation problem”, see for instance Theil (1954), Fisher (1969) and Lütjohann (1974), which now appears almost forgotten. Under the influence of time-series modeling, macro modeling has become a more or less independent branch of economics. In their critique of research using computable general equilibrium models, Hansen and Heckman (1996, p. 100) note:

“It is simply not true that there is a large shelf of micro estimates already constructed for different economic environments that can be plugged without modification into a new macro model. In many cases estimators that are valid in one economic environment are not well suited for another. Given the less-than idyllic state of affairs, it seems foolish to look to micro data as the primary source for many macro parameters required to do simulation analysis.”

In researching a problem area it is frequently fruitful to use different approaches, but still, will we ever be able to understand the movements in the macro aggregates unless we are able to derive them from micro entities? We do recognize that there is heterogeneity in the behavior of micro units, and we know from the early aggregation literature that no stable macro relations can be derived by simply adding up micro relations. Aggregation is thus no simple adding up but involves the interaction of micro units on markets and under institutional constraints. To set up a research program which permits not only random but also nonrandom heterogeneity in the behavior of economic agents, that allows them to interact in various ways, and that also allows the explicit introduction of institutional constraints and policy parameters, a new framework is needed. It is possible that the microsimulation technique could provide it.

2. What is microsimulation?

Microsimulation is a technique that uses the capacity of modern computers to make micro units act and interact in such a way that it is possible to aggregate to the level of interest. A microsimulation model can be seen as a set of rules which operates on a sample of micro units such as individuals, households, and firms. Each micro unit is defined and characterized by a set of properties (variables), and as the model is simulated these properties are updated for each and every micro unit. The model might simply be a set of deterministic rules, such as the income tax rules of a country, operating on a sample of taxpayers and used to compute the distribution of after-tax income, the aggregate income tax revenue or other fiscal entities of interest. But the model could also include behavioral assumptions, usually formulated as stochastic models. Examples are fertility models, models for household formation and dissolution, labor supply and mobility.
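As a minimal sketch of this structure only (the unit, its variables and the transition rules below are all hypothetical and not taken from any of the models discussed), a microsimulation model can be written as a loop that updates each unit's state variables period by period:

```python
import random

# A hypothetical micro unit: an individual described by a few state variables.
def make_individual(age, employed, earnings):
    return {"age": age, "employed": employed, "earnings": earnings}

def update(ind, rng):
    """Apply one period of (made-up) deterministic and stochastic rules."""
    ind["age"] += 1
    if ind["employed"]:
        ind["employed"] = rng.random() > 0.05      # illustrative 5% job-loss risk
        ind["earnings"] *= 1.02 if ind["employed"] else 0.0
    else:
        ind["employed"] = rng.random() < 0.30      # illustrative 30% re-employment chance
        ind["earnings"] = 25000.0 if ind["employed"] else 0.0
    return ind

def simulate(population, periods, seed=1):
    rng = random.Random(seed)
    for _ in range(periods):
        population = [update(ind, rng) for ind in population]
    return population

# A toy starting sample; in practice this would be a survey or register sample.
sample = [make_individual(age=20 + i % 40, employed=i % 2 == 0, earnings=30000.0)
          for i in range(1000)]
final = simulate(sample, periods=10)
employment_rate = sum(ind["employed"] for ind in final) / len(final)
print(f"Simulated employment rate after 10 periods: {employment_rate:.2f}")
```

Because every unit carries its own state, any statistic of interest (a mean, a tax aggregate, or an entire distribution) can be computed from the simulated sample.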

In microsimulation modeling there is no need to make assumptions about the average economic man. Although impractical, we can in principle model every man. It is no simple task to model the behavior of single consumers and firms, but it is an advantage to model the decisions of those who actually make them and not the make-believe decisions of some aggregate. It stimulates the researcher to pay attention to the institutional circumstances that constrain the behavior of consumers and firms. It also suggests in a straightforward way what data should be collected and from whom. Similarly, in a microsimulation model it is possible to include the true policy parameters and the rules which govern their use, such as tax rate scales, eligibility rules, tax thresholds, etc. One is not confined to using average tax rates applied to everyone. This makes microsimulation especially useful for policy analysis.

The development of microsimulation can be traced to two different sources. One is Guy Orcutt’s idea about mimicking natural experiments also in economics and his development of the behavioral dynamic microsimulation model DYNASIM2, which was later developed further by Steven Caldwell into the CORSIM model (Caldwell, 1993; Caldwell, 1996). Another source is the increased interest among policy makers in distributional studies. Changes in the tax systems of many Western economies have created a need for a tool to analyze who will win and who will lose from changes in the tax and benefit systems. As a result, many governments now have so-called tax-benefit models. Examples are the Danish LAW model, the UK model POLIMOD3, STINMOD4 in Australia, SWITCH5 in Ireland, and FASIT6 in Sweden. At the European level EUROMOD is an ambitious attempt to build a tax-benefit model for all of the EU.7

These models usually do not include behavioral relations but only all the details of the tax and benefit rules. These rules are applied to a sample of individuals for which one knows all gross incomes and everything else needed to compute taxes and benefits. For every individual in the sample one is thus able to compute the sum of all (income) taxes due and the disposable income for each household. The output becomes, for instance, the distribution of disposable income. It is then possible to change the tax rates or anything else in the tax code, run the model once again and compare with the previous result. In this way one can analyze who will gain and who will lose from a tax change and estimate the aggregate budget effects of tax and benefit changes. The simulation model will, however, only give the first-order effect of a tax change, because household composition, work hours and incomes are assumed unchanged and not influenced by the taxes. This is both a strength and a weakness of the tax-benefit models. It is a strength because it is easy to understand what the model does, and no controversial assumptions are needed. There is also no difficult inference problem: all the analyst needs to do is to draw an inference from the random sample of taxpayers to the population of taxpayers, which is something we know from sampling theory. The weakness is of course that we do not know the relative size of any behavioral adjustments to the tax and benefit changes. Many tax reforms aim at changing the behavior of taxpayers, and the first-order effect might then become a bad approximation. As a result, attempts have been made to enlarge the tax-benefit models with behavioral models to capture these adjustments. Duncan and Weeks (1997) give an example of a tax-benefit model amended with a labor supply model; additional examples can be found in Table 3 of Klevmarken (1997). In this way the tax-benefit models approach Orcutt’s DYNASIM and its successors.
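A static tax-benefit calculation of the first-order kind described above can be sketched as follows; the bracket schedules and the sample of gross incomes are purely hypothetical stand-ins for the detailed national rules used in models such as POLIMOD or FASIT:

```python
# Hypothetical piecewise-linear income tax schedules: lists of (threshold, marginal rate).
BASELINE = [(0.0, 0.30), (40000.0, 0.50)]
REFORM = [(0.0, 0.30), (50000.0, 0.55)]   # e.g. a higher top threshold and top rate

def tax(gross, schedule):
    """Income tax due under a simple bracket schedule."""
    due = 0.0
    for i, (threshold, rate) in enumerate(schedule):
        upper = schedule[i + 1][0] if i + 1 < len(schedule) else float("inf")
        due += rate * max(0.0, min(gross, upper) - threshold)
    return due

def disposable(gross, schedule):
    return gross - tax(gross, schedule)

# First-order effect: gross incomes are held fixed, only the rules change.
sample = [22000.0, 38000.0, 45000.0, 60000.0, 120000.0]
for gross in sample:
    print(gross, round(disposable(gross, BASELINE)), round(disposable(gross, REFORM)))
print("Aggregate revenue change:",
      round(sum(tax(g, REFORM) - tax(g, BASELINE) for g in sample)))
```

Running the same sample through both rule sets shows per-individual gains and losses as well as the aggregate budget effect, exactly because behavior is held fixed.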

Large-scale dynamic microsimulation models with behavioral adjustments typically include demographic models that move the population forward, models that simulate earnings and labor supply, and sometimes also models for geographical mobility, demand for housing, etc. Most models of this kind have been developed in academic environments and have rarely been used to advise governments on policy issues. Examples are the Swedish MICROHUS8, the Dutch NEDYMAS9 and the German Sfb3-MSM10. Among the few models of this kind that have been used for policy purposes are CORSIM, the Swedish Ministry of Finance model SESIM11 and the microsimulation model used by the Swedish National Insurance Board (RFV) to simulate the future of the Swedish public pension system. The latter model is probably one of the oldest policy-driven microsimulation models still in operation. It was developed in the beginning of the 1970s (Eriksen, 1973; Klevmarken, 1973).

A number of conference volumes give good surveys of the microsimulation territory, for instance Orcutt et al. (1986), Harding (1996) and Mitton et al. (2000).

3. Pros and cons of microsimulation

Most of these models are designed to focus on distributional issues, in particular on the income distribution. This is a characteristic feature of microsimulation models: they are useful tools for the analysis of distributions, not only of mean relations. This is something we should bear in mind when we discuss statistical inference in microsimulation models.

Tax rules and rules that determine who is eligible for various benefits are usually highly nonlinear and sometimes have discontinuous jumps. Microsimulation models have the advantage of relatively easily accommodating such functional forms. One is thus not confined to functions with nice properties. In such cases approaches based on aggregate group data might be impossible and microsimulation the only adequate approach. For instance, in the Swedish ATP pension system pensions were based on the 15 best years of earnings, and studies of the properties of this pension system required simulations of individual earnings profiles. Another example is a study of the cost of old-age care in the United Kingdom (Hancock, 2000).
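The “15 best years” rule is easy to express at the individual level but awkward to handle with aggregate data. A stylized sketch (with an invented earnings profile, an assumed accrual rate, and ignoring the actual ATP base amounts and pension-point rules) could look like this:

```python
import random

def best_years_pension(earnings_history, best_years=15, accrual=0.60):
    """Stylized pension: an accrual rate times the mean of the 15 best annual earnings.
    This is only an illustration of the rule's structure, not the real ATP formula."""
    best = sorted(earnings_history, reverse=True)[:best_years]
    return accrual * sum(best) / len(best)

# An invented 40-year individual earnings profile with some random variation.
rng = random.Random(0)
profile = [20000 * (1.02 ** t) * rng.uniform(0.8, 1.2) for t in range(40)]
print(f"Stylized pension: {best_years_pension(profile):,.0f}")
```

Because the rule selects years from each individual history, simulating the whole earnings profile for every person is the natural way to study such a system.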

There are also disadvantages and problems with microsimulation models. One is that the size and complexity of a typical model make it hard to understand its properties intuitively. This is one reason why microsimulation has not become fully accepted by the Economics profession. Given the main tradition of working with small, stylized models and the relative failure of the large macro models of the 1960s and 1970s, many economists are now skeptical about the usefulness of large models. In order to change this, microsimulation modeling has to rely on good economic theory and use sound econometric inference methods, but economists also have to learn what scientists in other disciplines already know, namely how to examine the properties of large simulation models.

Contributing to the skepticism of the Economics profession is also the view that the science of Economics has not yet produced knowledge that would make it meaningful to build large microsimulation models for policy analysis and policy advice. For instance, in their assessment of the needs for data, research and models, the Panel on Retirement Income Modeling of the U.S. National Research Council concluded (Citro and Hanushek, 1997):

“To respond to immediate policy needs, agencies should use limited, special-purpose models with the best available data and research findings to answer specific policy questions. Although such models may not provide very accurate estimates, the alternative of developing complex new individual-level microsimulation or employer models in advance of needed improvements in data and research knowledge has little prospect of producing better results and will likely represent, in the immediate future, a misuse of scarce resources.”

This was a recommendation to government agencies as policy makers concerned with retirement behavior. It should not be interpreted as a general recommendation against microsimulation. On the contrary, the panel also suggested (p. 153):

“The relevant federal agencies should consider the development of a new integrated individual-level microsimulation model for retirement-income-related policy analysis as an important long-term goal, but construction of such a model would be premature until advances are made in data, research knowledge, and computational methods.”

As pointed out by the panel, one of the major problems in microsimulation work is the shortage of good micro data. Although the supply of micro data has increased very much in the last 20–30 years, it is still hardly possible to find one data source or one sample which will contribute all the information needed for a typical microsimulation model. In fact, many model builders have found it necessary to use guesstimates of model parameters and then try to calibrate the model against known benchmarks. Calibration is nothing but an attempt to tune the unknown parameters such that the model is able to simulate reasonably well the distributions of key variables. In this respect there is a similarity between microsimulation modeling and general equilibrium modeling: both rely too often on the calibration technique. Hansen and Heckman (1996) criticized this approach because they found too little emphasis on assessing the quality of the resulting estimates. In fact, the properties of the estimates are usually unknown and, mutatis mutandis, the same is true for the simulated entities. The calibration technique also tends to hide a more serious problem, namely that calibration typically involves only one year’s data or a single average or total. Because of this reliance on a single benchmark or just a few benchmark data points, calibration does not always identify a unique set of values for the model parameters.

Even if the parameter estimates are not calibrated guesstimates but true estimates, the absence of comprehensive data has typically forced model builders into a piecewise estimation procedure: each submodel is estimated from its own data set and there is no model-wide estimation criterion. If the model has a hierarchical or recursive structure, and if the stochastic structure imposes independence or lack of correlation between model blocks or submodels, then a piecewise approach can be justified, but in general it cannot.

By way of an example consider the following simple two-equation model:

(1) $y_{1t} = \beta_1 x_t + \epsilon_{1t}, \qquad y_{2t} = \beta_2 y_{1t} + \epsilon_{2t}, \qquad E(\epsilon_{it}\epsilon_{jt}) = \begin{cases} \sigma_1^2 & \text{if } i = j = 1, \\ \sigma_2^2 & \text{if } i = j = 2, \\ 0 & \text{if } i \neq j. \end{cases}$

This is a recursive model, and it is well known that OLS applied to each equation separately gives consistent estimates of $\beta_1$ and $\beta_2$. The estimate of $\beta_1$ gives the BLUP $\hat{y}_{1t} = \hat{\beta}_1 x_t$, while predictions of $y_{2t}$ outside the sample range are $\hat{\beta}_2 \hat{y}_{1t}$. This suggests the following model-wide criterion,

(2) $\dfrac{1}{\sigma_1^2}\sum_t \left(y_{1t} - \hat{y}_{1t}\right)^2 + \dfrac{1}{\sigma_2^2}\sum_t \left(y_{2t} - \hat{\beta}_2 \hat{y}_{1t}\right)^2.$

Minimizing this criterion with respect to $\hat{\beta}_1$ and $\hat{\beta}_2$ yields the OLS estimator for $\hat{\beta}_1$ but the following estimator for $\hat{\beta}_2$,

(3) $\hat{\beta}_2 = \dfrac{\sum_t y_{2t} x_t}{\sum_t y_{1t} x_t}.$

In this case both the “piecewise” OLS estimator of $\beta_2$ and the “system-wide” instrumental variable estimator (3) are consistent, but the OLS estimator does not minimize the prediction errors as defined by (2). In fact, under the additional assumption of normal errors the estimator (3) is a maximum likelihood estimator and thus asymptotically efficient.12

If we add the assumption that $\epsilon_1$ and $\epsilon_2$ are correlated, the recursive property of the model is lost and OLS is no longer a consistent estimator of $\beta_2$. The estimator (3) is, however, still consistent and, under the assumption of normality, an ML estimator. In this example we would thus prefer the “system-wide” estimator (3) whether the model is recursive or not.
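A small numerical check of this example can be written directly from equations (1)–(3); the sample size, parameter values and degree of error correlation below are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta1, beta2 = 5000, 1.5, 0.8

x = rng.normal(size=n)
e1 = rng.normal(size=n)
e2 = 0.7 * e1 + rng.normal(size=n)   # correlated errors: the recursive property is lost
y1 = beta1 * x + e1
y2 = beta2 * y1 + e2

# Piecewise OLS of y2 on y1 (inconsistent when e1 and e2 are correlated).
b2_ols = (y1 @ y2) / (y1 @ y1)

# System-wide estimator (3): an instrumental-variable estimator with x as instrument.
b2_iv = (x @ y2) / (x @ y1)

print(f"OLS estimate of beta2: {b2_ols:.3f}")
print(f"Estimator (3):         {b2_iv:.3f}   (true value {beta2})")
```

With correlated errors the OLS estimate drifts away from the true value, while estimator (3) remains close to it, in line with the argument above.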

This was only a small illustrative example, but it demonstrates both that a piecewise approach might give inconsistent and biased estimates and that the choice of an estimation criterion in line with the objective of microsimulation matters.

4. Choice of estimation criterion and estimator

Let us now turn to the choice of a model-wide estimation criterion. The least-squares criteria commonly used assume that we seek parameter estimates such that the mean predictions give the smallest possible prediction errors; eq. (2) is an example. However, in microsimulation we are not only interested in mean predictions; we want to simulate the whole distribution of the target variables well. This difference in focus between microsimulation and a more conventional econometric analysis might suggest a different estimation criterion.

If the stochastic properties of the microsimulation model were fully specified, including families of distribution functions, then the maximum likelihood method would use all the information in model and data to obtain efficient estimates. In practice, however, ML estimation will in general not be feasible. There are several reasons for this: (a) for many submodels economic theory does not suggest any parametric family of distributions, and we are usually unwilling to make strong assumptions which are not firmly based in theory or in previous research; instead, we might prefer to get a representation of the distributions of “residuals” by resampling from the empirical residual distributions; (b) in some cases, for instance when the distributions of income and wealth are simulated, we have to work with strongly skewed and highly non-normal distributions; (c) a typical microsimulation model includes a rather complex mixture of different submodels, functional forms and dependence assumptions, such that it might become difficult to set up a likelihood function.
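Resampling from the empirical residual distribution, as in point (a), can be sketched as follows; the fitted submodel, its heavy-tailed errors and the data are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical estimation sample and a fitted linear earnings submodel.
x_obs = rng.normal(size=500)
y_obs = 2.0 + 1.3 * x_obs + rng.standard_t(df=3, size=500)   # heavy-tailed errors

coefs = np.polyfit(x_obs, y_obs, deg=1)            # [slope, intercept]
residuals = y_obs - np.polyval(coefs, x_obs)       # empirical residual distribution

def simulate_y(x_new, rng):
    """Simulate outcomes by adding residuals drawn from the empirical distribution
    instead of assuming a parametric (e.g. normal) error term."""
    draws = rng.choice(residuals, size=len(x_new), replace=True)
    return np.polyval(coefs, x_new) + draws

print(np.round(simulate_y(np.linspace(-2, 2, 10), rng), 2))
```

The simulated outcomes then inherit whatever skewness and kurtosis the estimation residuals display, without a distributional assumption.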

Given the general purpose of microsimulation, the estimation criterion should not only penalize deviations from the mean but also deviations in terms of higher-order moments. A natural candidate estimation principle then becomes the Generalized Method of Moments (GMM). The complexity and nonlinearity of a microsimulation model, however, cause difficulties in evaluating the moment conditions. A potential solution to this problem is to use the fact that the model is built to simulate, and thus replace GMM by the Simulated Method of Moments.
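A stylized version of the simulated method of moments, in which the model itself produces the moments that are matched to the data, might look as follows; the one-parameter “model”, the chosen moments and the identity weighting matrix are illustrative assumptions, not a recipe for a full microsimulation model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# "Observed" data, generated here for illustration with true parameter 0.7.
data = rng.lognormal(mean=0.7, sigma=1.0, size=2000)

def moments(sample):
    # Mean and variance only; the argument in the text suggests also including
    # higher-order or distributional moments.
    return np.array([sample.mean(), sample.var()])

def simulate_model(theta, n_sim=20000, seed=123):
    sim_rng = np.random.default_rng(seed)     # fixed seed: common random numbers
    return sim_rng.lognormal(mean=theta, sigma=1.0, size=n_sim)

target = moments(data)

def smm_objective(theta):
    diff = moments(simulate_model(theta)) - target
    return float(diff @ diff)                 # identity weighting matrix

result = minimize_scalar(smm_objective, bounds=(-2.0, 2.0), method="bounded")
print(f"SMM estimate of theta: {result.x:.3f}")
```

The key point is that the moment conditions never need to be evaluated analytically: they are computed by simulating the model at each candidate parameter value.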

5. Model validation

An important part of any model-building effort is testing and validation. Validation involves two major issues: first, the choice of criterion and validation measure, and second, the derivation of the stochastic properties of this measure taking all sources of uncertainty into account. The choice of criterion for validation is of course closely related to that for estimation. As already mentioned, we are not only interested in good mean predictions, but also in good representations of cross-sectional distributions and of transitions between states. When an event occurs also becomes important in any dynamic microsimulation exercise. A microsimulation model is likely to include a number of simplifying assumptions about lack of correlation and independence, both between individuals and over time. For this reason, one might expect too much random noise in the simulations and too quickly decaying correlations compared to real data. In addition to model-wide criteria, one might thus be interested in criteria that focus on these particular properties. Work is needed to develop such measures with known properties.

For a model that is not too big and complex in structure it might be feasible to derive an analytic expression for the variance-covariance matrix of the simulations which takes all sources of uncertainty into account: random sampling, estimation and simulation errors (Pudney and Sutherland, 1996). In general, however, microsimulation models are so complex that analytical solutions are unlikely. Given the parameter estimates, the simulation uncertainty can be evaluated if simulations are replicated with a new random number generator seed for each replication. There is a trade-off between the number of replications needed and the sample size: the bigger the sample, the fewer replications are needed.

To evaluate the uncertainty which arises through the parameter estimates, one approach is to approximate the distribution of the estimates with a multivariate normal distribution with mean vector and covariance matrix equal to those of the estimated parameters. By repeated draws from this normal distribution, and new model simulations for each draw of parameter values, an estimate of the variability in the simulations due to uncertainty about the true parameter values can be obtained.
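A sketch of this procedure, combining draws from a normal approximation of the parameter distribution with a fresh random seed for each run, could look like this; the point estimates, covariance matrix and the stand-in simulation function are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical point estimates and covariance matrix from the estimation stage.
theta_hat = np.array([0.5, 1.2])
cov_hat = np.array([[0.010, 0.002],
                    [0.002, 0.020]])

def simulate_outcome(theta, seed):
    """Stand-in for a full microsimulation run: returns one scalar of interest,
    here the mean of a simulated target variable."""
    sim_rng = np.random.default_rng(seed)
    y = theta[0] + theta[1] * sim_rng.normal(size=10000)
    return y.mean()

replications = []
for r in range(200):
    theta_r = rng.multivariate_normal(theta_hat, cov_hat)   # parameter uncertainty
    replications.append(simulate_outcome(theta_r, seed=r))  # new seed: simulation noise

replications = np.array(replications)
print(f"Mean simulated outcome: {replications.mean():.3f}")
print(f"Std. error reflecting parameter and simulation uncertainty: "
      f"{replications.std(ddof=1):.3f}")
```

The spread of the replicated outcomes then reflects both sources of variability at once.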

To avoid the normal approximation one might use sample re-use methods. For instance, by bootstrapping one can obtain a set of replicated estimates of the model parameters. Each replication can be used in one or more simulation runs, and the variance of these simulations will capture both the variability in parameter estimates and the variability due to simulation (model) errors. If the bootstrap samples are used not only to estimate the parameters but also as replicated bases (initial conditions) for the simulations, then one would also be able to capture the random sampling errors.
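A bootstrap variant of the same idea re-estimates the parameters on a resampled data set before each simulation run; the estimation step below is just an illustrative OLS fit of a single hypothetical submodel:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical estimation sample for a single submodel.
x = rng.normal(size=1000)
y = 0.5 + 1.2 * x + rng.normal(size=1000)

def simulate_outcome(intercept, slope, seed):
    sim_rng = np.random.default_rng(seed)
    return (intercept + slope * sim_rng.normal(size=5000)).mean()

outcomes = []
for b in range(200):
    idx = rng.integers(0, len(x), size=len(x))         # bootstrap resample of the data
    slope, intercept = np.polyfit(x[idx], y[idx], 1)   # re-estimate on the resample
    outcomes.append(simulate_outcome(intercept, slope, seed=b))

print(f"Bootstrap std. error of the simulated outcome: {np.std(outcomes, ddof=1):.3f}")
```

Using the bootstrap samples themselves as the starting populations of the simulation runs would, as noted above, additionally capture the random sampling error.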

Much of the total error in simulated values will come from the choice of a particular model structure. Sensitivity analysis is an approach to assessing the importance of this source of error. As pointed out in Citro and Hanushek (1997, p. 155), “sensitivity analysis is a diagnostic tool for ascertaining which parts of an overall model could have the largest impact on results and therefore are the most important to scrutinize for potential errors that could be reduced or eliminated”. If simple measures of the impact on key variables from marginal changes in parameters and exogenous entities could be computed, they would potentially become very useful.

Most models will almost always show deviations between simulated and observed values. If these deviations are within the bounds suggested by the stochastic properties of the simulation exercise, then one might like to constrain the model to simulate these known benchmarks with certainty. The model is aligned or calibrated to the benchmarks. If they were not used when the parameters of the model were estimated, this is a way to include new information. Alignment can thus be seen as a form of constrained estimation. Policy analysts also have another reason to force the model to simulate benchmarks. They think that the whole simulation exercise becomes more credible if the model reproduces what most people recognize as statistical facts.13 The idea seems to be that if the model is aligned to known benchmarks it will also do a better job in simulating other variables. This may be true, but it will not be true in general! If the benchmarks are tested against the model and rejected, then the model should be revised rather than aligned.
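Alignment is often implemented in practice as a simple scaling of simulated probabilities or amounts so that the simulated aggregate hits the benchmark. The sketch below shows proportional scaling only, which is one of several possible alignment methods and not a procedure described in the text; the simulated probabilities and the benchmark are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated individual employment probabilities from a hypothetical behavioral model.
p_sim = rng.uniform(0.4, 0.9, size=10000)

benchmark_rate = 0.75                      # externally known aggregate benchmark
scale = benchmark_rate / p_sim.mean()      # proportional alignment factor
p_aligned = np.clip(p_sim * scale, 0.0, 1.0)

employed = rng.random(10000) < p_aligned
print(f"Unaligned mean probability:        {p_sim.mean():.3f}")
print(f"Aligned simulated employment rate: {employed.mean():.3f}")
```

Forcing the aggregate in this way is exactly the kind of constraint discussed above: it adds outside information, but it does not by itself make the underlying behavioral model more correct.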

6. Conclusions

Microsimulation has the potential of linking micro and macro and enhancing our understanding of fundamental macro relations. It also has the potential to answer questions related to heterogeneity in behavior and differences in outcome of economic and social policy. Microsimulation is particularly well suited for analysis of the distribution of well-being.

In principle inference in microsimulation models is no different from inference in other applications of Economics, but there are practical difficulties due to the large scale and complex structure of a typical simulation model. This, and a general caution against too strong assumptions about the stochastic properties of a model, suggests that simulation-based estimation and sample re-use methods might be a good approach to the inference problems. These methods are, however, very demanding in terms of computations, and it remains to be seen whether we currently have the computing power needed.

Future development of micro simulation will depend on ambitious research programs involving new collections of micro data that will permit studies of how micro units interact in markets and make decisions. We also need to develop our inference methods and computational skills.

Footnotes

1. McCloskey (1983) and Sims (1996).

2. See Orcutt (1957), Orcutt et al. (1961) and Orcutt et al. (1976).

3. Redmond et al. (1996).

4. Lambert et al. (1994) and Schofield and Polette (1996).

5. Callan et al. (1996).

6. FASIT Användarhandledning (user manual), see the web page of Statistics Sweden, www.scb.se.

7. Sutherland (1996) and Sutherland (2001).

8. Klevmarken and Olovsson (1996), Klevmarken (2001) Appendix.

9. Nelissen (1994).

10. Helberger (1982), Hain and Helberger (1986), Galler and Wagner (1986), Galler (1989) and Galler (1994).

11. Eriksson and Hussénius (1999) and the web address http://www.sesim.org/.

12. The estimator (3) is a ML estimator because there is no additional x-regressor in the second relation. The reduced form becomes a SURE system with the same explanatory variable in both equations. In general the ML estimator will depend on the structure of the covariance matrix of the errors.

13. For similar reasons policy analysts sometimes want to align to “official” demographic projections and macroeconomic forecasts of labor force participation rates, unemployment rates, etc.

References

  1. Caldwell S (1993). Content, Validation and Uses of CORSIM 2.0, a Dynamic Microanalytic Model of the United States. Paper presented at the IARIW Conference on Micro-Simulation and Public Policy, Canberra, Australia.
  2. Caldwell S (1996). Health, Wealth, Pensions and Life Paths: The CORSIM Dynamic Microsimulation Model. Ch. 22 in Harding A (ed), Microsimulation and Public Policy. Amsterdam: North-Holland.
  3. Callan T, O’Donoghue C, Wilson M (1996). Simulating Welfare and Income Tax Changes: The ESRI Tax-Benefit Model. Dublin: ESRI.
  4. Citro CF, Hanushek EA (eds) (1997). Assessing Policies for Retirement Income. Needs for Data, Research, and Models. National Research Council. Washington, D.C.: National Academy Press.
  5. Duncan A, Weeks M (1997). Behavioral tax microsimulation with finite hours choices. European Economic Review 41: 619–626. https://doi.org/10.1016/S0014-2921(97)00005-6
  6. Eriksen T (1973). En prognosmodell för den allmänna tilläggspensioneringen. Stockholm: Riksförsäkringsverket.
  7. Eriksson P, Hussénius J (1999). SESIM – A Short Documentation. Stockholm: Ministry of Finance.
  8. Fisher FM (1969). The Existence of Aggregate Production Functions. Econometrica 37: 553. https://doi.org/10.2307/1910434
  9. Galler H (1989). Policy Evaluation by Microsimulation – the Frankfurt Model. Paper presented at the 21st General Conference of the International Association for Research in Income and Wealth, Lahnstein.
  10. Galler H (1994). Mikrosimulationsmodelle in der Forschungsstrategie des Sonderforschungsbereich 3. In: Mikroanalytische Grundlagen der Gesellschaftspolitik, Vol. 2, pp. 369–379. Berlin: Akademie Verlag.
  11. Galler HP, Wagner G (1986). The microsimulation model of the Sfb 3 for the analysis of economic and social policies. In: Microanalytic Simulation Models to Support Social and Financial Policy, pp. 227–247. Amsterdam: North-Holland.
  12. Hain W, Helberger C (1986). Longitudinal microsimulation of life income. In: Microanalytic Simulation Models to Support Social and Financial Policy. Amsterdam: North-Holland.
  13. Hancock R (2000). Charging for care in later life: an exercise in dynamic microsimulation. Ch. 10 in Microsimulation Modelling for Policy Analysis. Cambridge: Cambridge University Press.
  14. Hansen LP, Heckman JJ (1996). The Empirical Foundations of Calibration. Journal of Economic Perspectives 10: 87–104. https://doi.org/10.1257/jep.10.1.87
  15. Harding A (ed) (1996). Microsimulation and Public Policy. Amsterdam: North-Holland, Elsevier Science B.V.
  16. Helberger C (1982). Auswirkungen öffentlicher Bildungsausgaben in der BRD auf die Einkommensverteilung der Ausbildungsgeneration. Gutachten im Auftrag der Transfer-Enquete-Kommission. Stuttgart: Kohlhammer.
  17. Klevmarken NA (1973). En ny modell för ATP-systemet. Statistical Review 1973: 403–443.
  18. Klevmarken NA (1997). Behavioral Modeling in Micro Simulation Models. A Survey. Working Paper 1997:31, Department of Economics, Uppsala University.
  19. Klevmarken NA (2001). Microsimulation – A Tool for Economic Analysis. Working Paper 2001:13, Department of Economics, Uppsala University.
  20. Klevmarken NA, Olovsson P (1996). Direct and behavioral effects of income tax changes – simulations with the Swedish model MICROHUS. In: Microsimulation and Public Policy. Amsterdam: Elsevier Science Publishers.
  21. Lambert SRP, Percival D, Schofield D, Paul S (1994). An Introduction to STINMOD: A Static Microsimulation Model. NATSEM Technical Paper No. 1, University of Canberra, Australia.
  22. Lütjohann H (1974). Linear Aggregation in Linear Regression. Stockholm: Stockholm University.
  23. McCloskey D (1983). The Rhetoric of Economics. Journal of Economic Literature 21: 481–517.
  24. Mitton L, Sutherland H, Weeks M (eds) (2000). Microsimulation Modelling for Policy Analysis. Cambridge: Cambridge University Press.
  25. Nelissen JHM (1994). Towards a Payable Pension System. Costs and Redistributive Impact of the Current Dutch Pension System and Three Alternatives. The Netherlands: TISSER, Tilburg Institute for Social Security Research, Department of Social Security Studies, Tilburg University.
  26. Orcutt GH (1957). A new type of socio-economic system. The Review of Economics and Statistics 39: 116–123. https://doi.org/10.2307/1928528
  27. Orcutt GH, Caldwell S, Wertheimer R (1976). Policy Explorations Through Microanalytic Simulation. Washington, D.C.: The Urban Institute.
  28. Orcutt GH, Greenberger M, Korbel J, Rivlin A (1961). Microanalysis of Socioeconomic Systems: A Simulation Study. New York: Harper and Row.
  29. Orcutt GH, Merz J, Quinke H (eds) (1986). Microanalytic Simulation Models to Support Social and Financial Policy. Amsterdam: North-Holland, Elsevier Science Publishers B.V.
  30. Pudney S, Sutherland H (1996). Statistical Reliability in Microsimulation Models with Econometrically-Estimated Behavioural Responses. Ch. 21 in Microsimulation and Public Policy. Amsterdam: North-Holland, Elsevier.
  31. Redmond G, Sutherland H, Wilson M (1996). POLIMOD: An Outline. Microsimulation Unit MU/RN/19, 2nd edition. Cambridge: DAE, University of Cambridge.
  32. Schofield D, Polette J (1996). A Comparison of Data Merging Methodologies for Extending a Microsimulation Model. NATSEM Technical Paper No. 11, University of Canberra, Australia.
  33. Sims CA (1996). Macroeconomics and Methodology. Journal of Economic Perspectives 10: 105–120. https://doi.org/10.1257/jep.10.1.105
  34. Sutherland H (1996). EUROMOD: A European Benefit-tax Model. Microsimulation Unit MU/RN/20. Cambridge: DAE, University of Cambridge.
  35. Sutherland H (2001). EUROMOD: An Integrated European Benefit-Tax Model. Final Report. EUROMOD Working Paper No. EM9/01. Cambridge: DAE, University of Cambridge.
  36. Theil H (1954). Linear Aggregation of Economic Relations. Amsterdam: North-Holland.

Article and author information

Author details

  1. Anders Klevmarken

    Department of Economics, Uppsala, Sweden
    For correspondence
    anders@klevmarken.nu
    Competing interests
    No competing interests reported

Funding

No specific funding for this article is reported.

Acknowledgements

This article is part of a lecture presented at the International School on Mathematical and Statistical Applications in Economics, January 15-19 2001, Västerås, Sweden. It was previously published as Working Paper 2001:13, Department of Economics, Uppsala University.

Publication history

  1. Version of Record published: April 30, 2022 (version 1)

Copyright

© 2022, Anders Klevmarken

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
