
Should We Invest in Microsimulation Models?

Anders Klevmarken (corresponding author)
Department of Economics, Uppsala, Sweden
Research article
Cite this article as: A. Klevmarken; 2022; Should We Invest in Microsimulation Models?; International Journal of Microsimulation; 15(1); 186-193. doi: 10.34196/ijm.00259

The Guy Orcutt keynote lecture at the 8th World Congress (2021) of the International Microsimulation Association

Much of the activity in building and using microsimulation models has taken place outside academic institutions, in ministries and government agencies. We have found it difficult to convince our fellow economists and social scientists of the advantages of microsimulation. Are our colleagues in ministries and government agencies better equipped to see the benefits of microsimulation than people in academia? They certainly sit closer to the relevant policy issues, but do they provide reliable instruments for policy analysis?

Let’s first try to remember what Guy Orcutt had in mind when, some 70 years ago, he suggested his micro analytic approach to policy analysis and macroeconomics (Orcutt, 1957; Orcutt, 1980; Orcutt et al., 1961; Orcutt et al., 1976; Bergmann et al., 1980; Orcutt and Glaser, 1980). To understand his motivation it helps to return to the econometrics of the 1950s, 60s and 70s, which was dominated by attempts to estimate and test Keynesian macro models and Leontief-type input-output models using aggregate data. Such data evened out a multitude of micro behaviors and created multicollinearity, autoregression and interdependencies. As Orcutt saw it, the relatively few macro data points did not contain enough information for powerful tests of economic behavior compared to much richer micro data. Although a micro analytic model would have more parameters to estimate than a conventional macro model, what matters for successful estimation and testing is the ratio between the number of observations and the number of parameters, which he thought would be much higher in the micro analytic approach. He also believed that model estimation would become easier, because assumptions about independence and recursiveness are more realistic at the micro level than at the macro level. His micro analytic approach would also solve the problem of aggregating non-linear micro relations into macro relations, open up studies of the heterogeneity of micro behavior and of the interaction of micro units in markets, and permit studies of the distributional consequences of policy changes. In an economy-wide micro analytic model no macro relations were needed. However, it is interesting to note that Orcutt’s own micro analytic model MAM, presented at the first international microsimulation conference in Stockholm in 1977 (Orcutt and Glaser, 1980), included an auxiliary macro model which handled the macroeconomic feedback. How much of Guy Orcutt’s visions have now been realized?

1. Static tax-benefit models

Let’s first turn to the case of a static tax-benefit model. It only gives first-order effects of changes in taxes and benefits, because neither behavioral changes nor any macroeconomic feedback are modeled. The statistical inference is rather straightforward as it relies on standard sampling theory (assuming that the data set used is a proper sample)1, but the sampling frame is likely to be a few years older than the target population, because it takes time to produce the data needed. On the other hand, the quality of the data is usually good, as they come as byproducts of the taxation procedures.
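To make the notion of a first-order effect concrete, here is a minimal sketch (in Python, with a made-up flat tax rule and hypothetical variable names, not any particular country's tax code) of how a static model applies old and new tax rules to the same weighted microdata:

```python
import numpy as np

# Hypothetical microdata: taxable income and survey weights for n sampled individuals.
rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.0, sigma=0.8, size=10_000)   # taxable income
weight = np.full(income.size, 500.0)                        # design weights (1 / sampling prob.)

def tax(income, rate, allowance=5_000.0):
    """A stand-in flat tax with a basic allowance; a real model would encode the full tax code."""
    return rate * np.maximum(income - allowance, 0.0)

# First-order effect (no behavioral response): apply old and new rules to the same incomes.
revenue_old = np.sum(weight * tax(income, rate=0.20))
revenue_new = np.sum(weight * tax(income, rate=0.25))
print(f"first-order revenue change: {100 * (revenue_new / revenue_old - 1):.1f}%")
```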

Many government agencies, unions and political organizations already have a tax-benefit model. This suggests that these models are found useful in spite of their deficiencies. The EUROMOD project has made remarkable progress in harmonizing tax-benefit models across European countries and facilitating comparisons across borders. Attempts have also been made to introduce behavioral responses into these models. Creedy and Duncan (2001) suggested an interesting approach to estimate the relative importance of behavioral adjustments in labor supply and wage rates following an increase in the income tax. Using a microsimulation model for Australia they simulated a tax increase from 20 to 25%. The first-order effect on tax revenues was an increase of 9.1%, while the total “third-round” effect was 11.7–13.4%, depending on the assumption about the wage rate elasticity of demand. This example suggests that behavioral adjustments might be important. However, timing is an issue: a static tax-benefit model does not tell us when these effects will materialize. Do the tax-benefit models open up new exciting prospects for research? I am not so sure.

2. The Guy Orcutt type of dynamic microsimulation models

Let’s now turn to the Guy Orcutt type of dynamic microsimulation models (dynamic in the sense that the model ages the population studied). There is a surprisingly long list of applications of this kind of microsimulation model (see, for instance, the survey article by Li and O’Donoghue, 2012). A large share still focuses on distributional issues such as income inequality, poverty, inequality of wealth accumulation and the effects of changes in taxes and benefits. But there are also many other applications, for instance related to health policy, transportation issues and geographical mobility. But how many of these studies live up to the scientific standards which would permit us to give good policy advice and forecasts?

These models typically consist of a sequence of estimated conditional distributions, which simulate outcomes of policy changes or of general changes in the economy. The Orcutt kind of dynamic microsimulation model does not explain how consumer, investment and financial markets function. In general, the conditional distributions do not estimate the supply side or the demand side, and they are best interpreted as attempts to estimate the outcomes from these markets. This raises the issue of their stability, and in particular: do these distributions change if policy changes?

A dynamic microsimulation model requires more data than a tax-benefit model. Usually, the main data source is a large survey or a set of register data, which defines the individual units of simulation, serves as the main source for estimation, and provides starting values for the simulations. But it might not include all the data needed to estimate every relation of the microsimulation model. In such cases additional, complementary data sets have been used, which might not have properties comparable to the main data source. If no other alternative was available, more or less arbitrary “guesstimates” have usually been plugged in.

In practice, the size and complexity of a microsimulation model, with its mixture of model types and functional forms and its need for different data sources, has made it common to use a piecewise estimation strategy, i.e., each relation is estimated separately by standard methods. Depending on the model structure, this can be an acceptable procedure. If the microsimulation model is hierarchical, with stochastic properties such that it is recursive or block recursive, then a piecewise estimation strategy should give consistent parameter estimates. (Compare the introductory passage about Guy Orcutt’s visions!)

However, frequently relations have not been estimated at all, but “calibrated”. In this context calibration is an attempt to tune the parameter values such that the model simulates as closely as possible an observed distribution or a few given benchmark values of key variables. If there are just one or a few benchmarks, this procedure might not identify the parameters uniquely, i.e., more than one set of calibrated values could give the same good fit. The properties of the calibrated values will also depend on how closeness is defined. Usually, no attempts are made to analyze the properties of the calibrated values, and consequently one does not know the properties of the simulations either.
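A small illustration of the identification problem, using a deliberately trivial two-parameter "model" of my own in place of a real microsimulation: with a single benchmark, calibration runs started from different values reach equally good fits at different parameter values.

```python
import numpy as np
from scipy.optimize import minimize

# A toy 'model' with two parameters whose simulated benchmark is their product:
# any pair (a, b) with a * b = target fits the single benchmark perfectly.
target = 0.6   # one observed benchmark value (hypothetical)

def loss(theta):
    a, b = theta
    simulated_benchmark = a * b          # stand-in for running the microsimulation
    return (simulated_benchmark - target) ** 2

# Calibrate from two different starting points.
for start in ([0.1, 0.1], [2.0, 0.1]):
    res = minimize(loss, start, method="Nelder-Mead")
    print(f"start={start} -> calibrated theta={np.round(res.x, 3)}, fit={res.fun:.2e}")

# Both runs reach (near-)zero loss but at different parameter values:
# the single benchmark does not identify the parameters uniquely.
```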

The terms “calibration” and “alignment” have been used in several different contexts. For instance, Stephensen (2015) notes that “alignment can be said to do one of two things: mean-correction and variance-elimination”. The objective of variance-elimination (variance-reduction is a better term) is not problematic. As shown in Klevmarken (2002), alignment can be seen as a method to incorporate external information in the estimation process. But the objective of mean-correction is problematic. If the simulated distributions deviate more from observed data than the stochastic properties of the model would motivate, this suggests that the data reject the parameter estimates, the model as such, or both. The model should then not be aligned but re-estimated or reformulated. We should thus test whether the model accepts the alignment benchmarks before we go ahead with an alignment. I have previously suggested that one might use a χ2-test (Klevmarken, 2008), but this is not the only alternative.
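One possible sketch of such a pre-alignment test (a simplified variant of my own, not necessarily the test in Klevmarken, 2008): simulate the model repeatedly and compare the simulated means of the benchmark variables with the external benchmarks through a chi-square statistic based on the simulation covariance.

```python
import numpy as np
from scipy import stats

def chi2_benchmark_test(simulations, benchmarks):
    """
    simulations: (R, k) array of R repeated model simulations of k benchmark variables.
    benchmarks:  (k,) array of external alignment targets.
    Returns the chi-square statistic and p-value for H0: the model reproduces the benchmarks.
    Only simulation noise is accounted for here; parameter and sampling uncertainty would
    enlarge the covariance and make the test less likely to reject.
    """
    R, k = simulations.shape
    mean = simulations.mean(axis=0)
    cov_of_mean = np.cov(simulations, rowvar=False) / R   # covariance of the simulated mean
    diff = mean - benchmarks
    chi2 = float(diff @ np.linalg.solve(cov_of_mean, diff))
    return chi2, 1.0 - stats.chi2.cdf(chi2, df=k)

# If the p-value is small, the benchmarks reject the model or its estimates:
# re-estimate or reformulate rather than align.
```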

In econometric work generally, more attention is paid to modelling the structure of the “conditional mean” than to modelling the distribution around the mean. In the microsimulation context the focus is often on distributional issues, and then we cannot easily pass over the stochastic properties of the model. We know that distributions of economic variables like income and wealth typically show skewness and kurtosis, and it is not uncommon to find extreme outliers. In general, we will find it difficult to make assumptions about specific distributions. We are then led to choose estimation methods which do not rely on specific distributional assumptions and which are insensitive to outliers. One should also avoid estimation criteria which only punish deviations from the mean, and instead use a criterion function which also punishes deviations in higher moments. In previous work I have suggested that a Generalized Method of Moments (GMM) approach might be useful (Klevmarken, 2002). However, the complexity of a typical microsimulation model, with nonlinear relations and discrete jumps, might make it difficult to evaluate the moment criteria. A potential solution is then to take advantage of the fact that we are working with a simulation model and obtain the parameter estimates by the simulated method of moments.
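The following sketch illustrates the simulated-method-of-moments idea with a toy lognormal "model" standing in for a real microsimulation; the moment set, weighting matrix and optimizer are illustrative choices of mine, not a recommended specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
observed = rng.lognormal(mean=0.5, sigma=1.2, size=5_000)   # stand-in for observed income data

def moments(x):
    # Mean, variance and skewness: the criterion also punishes deviations in higher moments.
    m, s = x.mean(), x.std()
    return np.array([m, s**2, ((x - m)**3).mean() / s**3])

def simulate(theta, n=5_000, seed=2):
    # Toy 'microsimulation model': lognormal incomes with parameters theta.
    # A fixed seed keeps the objective smooth enough to optimize over theta.
    mu, sigma = theta
    return np.random.default_rng(seed).lognormal(mean=mu, sigma=abs(sigma), size=n)

target = moments(observed)
W = np.diag(1.0 / target**2)            # simple weighting matrix (an arbitrary choice here)

def smm_objective(theta):
    g = moments(simulate(theta)) - target
    return float(g @ W @ g)

result = minimize(smm_objective, x0=[0.0, 1.0], method="Nelder-Mead")
print("SMM estimates (mu, sigma):", np.round(result.x, 3))
```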

3. Model validation

In order to evaluate the properties of the simulations one has to take all sources of stochastic uncertainty into account. They are: 1) randomness built into the simulation model, 2) random errors in the estimated parameters, 3) random errors in the start values, and 4) sampling variability, because the model works on a sample of micro units. Relatively few studies have addressed, evaluated and documented the uncertainty of the simulations. Only the first kind of uncertainty is commonly accounted for when the model is simulated, while the uncertainty that arises from the other sources is ignored.

How should one reproduce the properties of these four sources of uncertainty in the simulations? In the absence of specific assumptions about families of distributions, the stochastic properties of the “residuals” built into the model structure can be estimated using repeated drawings from the empirical residuals. Similarly, for the stochastic properties of the parameter estimates: if it is not possible to deduce the large-sample properties of the estimates, their properties can be estimated using resampling techniques, i.e., random subsamples are drawn from the original data set and the model is re-estimated once for each subsample. This gives a sample of parameter estimates, the empirical distribution of which can be used in the simulations. Estimating the uncertainty that arises from measurement errors in start values, however, is more problematic. The only information we are likely to get about this kind of uncertainty comes from the very sample of start values, and possibly from similar samples from adjacent years (time points). For a given variable we would like to estimate the distribution of start values (or at least its variance) for “similar” micro units. The problem is how one should define “similar”. Perhaps it would be possible to stratify the sample of start values according to reasonable explanatory variables, or to estimate a regression model based on these variables and estimate its residual variance.
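Schematically, sources 1) and 2) can be propagated as follows; the `estimate` and `simulate_model` functions below are trivial placeholders of mine standing in for the model's actual estimation and simulation steps.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate(data):
    """Placeholder for the model's estimation step; here simply a mean plus residuals."""
    beta = data.mean()
    residuals = data - beta
    return beta, residuals

def simulate_model(beta, residuals, n_periods=10):
    """Placeholder simulation: redraw shocks from the empirical residuals (source 1)."""
    shocks = rng.choice(residuals, size=n_periods, replace=True)
    return beta + shocks

data = rng.normal(loc=2.0, scale=1.0, size=1_000)           # hypothetical estimation sample

# Source 2: parameter uncertainty via resampling the estimation data and re-estimating.
simulated_paths = []
for _ in range(200):
    boot = rng.choice(data, size=data.size, replace=True)
    beta_b, resid_b = estimate(boot)
    simulated_paths.append(simulate_model(beta_b, resid_b))

simulated_paths = np.array(simulated_paths)
print("simulation mean and 90% band at t=0:",
      simulated_paths[:, 0].mean().round(3),
      np.percentile(simulated_paths[:, 0], [5, 95]).round(3))
```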

Finally, the consequences of sampling variability in connection with a dynamic microsimulation model, in which new micro units are born and old ones die, are not yet well analyzed. If the basic sample of micro units is large, this source of uncertainty might not be very important. It depends, however, on what one wishes to simulate. If the target is an entity in a relatively small subgroup of micro units, then even a large basic sample might become small. The issue is how the sampling probabilities of the original micro units should be carried forward to the remaining and new micro units of an aged sample to give a correct inference to the aged finite population. The original sampling probabilities are only designed for an inference to the finite population from which the sample of start values was drawn. They are not necessarily good for an inference to a future population. On the other hand, a sample drawn with very unequal sampling probabilities is not a good representation of the population if we ignore the sampling probabilities. One approach which has been suggested is to replicate the original micro units in numbers proportional to the inverse of the sampling probabilities. More specifically, to control the sample size, one should multiply the inverse of the sampling probability for each micro unit by the ratio of the original sample size to the size of the population (n/N). If this number is not an integer, it should be rounded to the nearest integer; a sketch of this rule is given below. If a microsimulation model is applied to a sample adjusted in such a way, and the resulting simulations lead to a population whose size and composition differ significantly from the demographic predictions produced by the national statistical bureau, then one has to decide either to believe the statistical bureau and change the microsimulation model, or to say that the demographers at the statistical bureau have not done their job!
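The replication rule just described might be coded roughly as follows (the variable names and the minimum of one copy per unit are my own choices):

```python
import numpy as np

def replication_counts(sampling_prob, n, N):
    """
    Replicate each sampled micro unit r_i times, where
        r_i = round( (1 / p_i) * (n / N) ),
    i.e. the design weight scaled so the replicated sample stays close to size n.
    """
    r = np.rint((1.0 / np.asarray(sampling_prob)) * (n / N)).astype(int)
    return np.maximum(r, 1)   # assumption: keep at least one copy of every sampled unit

# Hypothetical example: units sampled with unequal probabilities from N = 1,000,000.
p = np.array([1e-3, 2.5e-3, 5e-4])
counts = replication_counts(p, n=5_000, N=1_000_000)
print(counts)   # units with low sampling probabilities get more copies (here 5, 2 and 10)
```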

In a large microsimulation model random errors are not the only kind of error. Systematic errors in the model specification might be even more important. For this reason, it is essential to compute repeated simulations and analyze their properties. More generally, it is important that the analyst gets to know the properties of the model by simulating it, before it is used for policy analysis or forecasting. Which variables and parameters have a large influence on the outcome? Does the model produce strange results in certain regions defined by variable and/or parameter values? For this purpose, we need tools designed for exploratory analysis. For instance, Salonen et al. (2018) estimated mixtures of conditional distributions of output variables and were able to identify homogeneous subgroups of micro units and detect strange simulation results in one or more subgroups. In another study, Gualdi et al. (2015) used a so-called “phase diagram” to identify regions in the parameter space where the model was stable, erratic or explosive, or had other desirable or undesirable properties. (If a model in certain regions becomes unstable and goes into some kind of crisis, this is not necessarily the result of misspecification; it could also be a property of true economic interest, see below.)
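As a stripped-down illustration of this kind of exploratory sweep, the sketch below replaces the microsimulation model with a one-equation toy process of my own, simulates it over a small parameter grid, and classifies each cell as stable, volatile or explosive, in the spirit of a phase diagram:

```python
import numpy as np

def toy_model(a, shock_sd, T=200, seed=4):
    """A one-line stand-in for a microsimulation: y_t = a * y_{t-1} + shock."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a * y[t - 1] + rng.normal(scale=shock_sd)
    return y

def classify(y):
    if not np.all(np.isfinite(y)) or np.abs(y[-20:]).mean() > 1e3:
        return "explosive"
    return "volatile" if y.std() > 5.0 else "stable"

# Sweep a two-dimensional parameter grid and print the resulting 'phase diagram'.
for a in (0.5, 0.9, 1.05):
    row = [classify(toy_model(a, shock_sd)) for shock_sd in (0.5, 2.0, 8.0)]
    print(f"a={a}: {row}")
```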

4. Micro to macro

Most models of the Orcutt type of dynamic microsimulation ignore feedback from the macro economy. In the last few decades, attempts have been made to compensate for this shortcoming by aligning microsimulation models to computable general equilibrium (CGE) models (see, for instance, Peichl, 2015). This can be done in three ways: in the first case the microsimulation model is aligned to the outcome of the macro model,2 in the second case the macro model is aligned to the outcome of the micro model, and in the third case the models are aligned iteratively in both directions. Is this a good idea? If the parameter estimates of a microsimulation model are strongly influenced by the alignment procedure, does this really imply that there are strong feedbacks from the macro side? Isn’t it more likely that this is the result of misspecification errors in the macro model, in the micro model, or in both? To defend an alignment procedure, one should require that the two models are congruent; for instance, the micro model should at least in the long run produce macro variables that follow a general equilibrium. In general, our microsimulation models do not satisfy this requirement.

If we return to the old literature on the so-called aggregation problem, we might remember that a general result from this work was that only under very special and simple specifications of the micro relations does there exist a corresponding stable macro function.3 In our case we have the same problem. One would probably have to impose unrealistically strong restrictions on the microsimulation model to achieve congruence between a microsimulation model and a CGE model, such that the two models in expectation or in the long run would give similar macro simulations. See also the brief discussion of these issues in Hansen and Heckman (1996), in which they suggest that there are no micro parameters that can inform a CGE model.

Furthermore, a general equilibrium model assumes that the macro economy tends to some kind of equilibrium. Is this necessarily a good assumption? Do we want to impose this assumption on our micro simulation models?

A characteristic feature of microsimulation models is that they recognize the heterogeneity of consumers, investors and firms, and that aggregation to the macro level will depend on this heterogeneity. Furthermore, there is not only heterogeneity between micro units, but also heterogeneity over time. Consumers adjust their behavior to changes in consumer markets and in institutions which come about through technical change (the introduction of new goods). They also adjust to policy changes and to changes in relative prices. Firms adapt for similar reasons. Some grow, some merge, some go into bankruptcy and some change their lines of production and trade. Does this dynamic heterogeneity necessarily lead to macroeconomic equilibrium? It is true that the economy seems to have checks and balances which bring it back to some kind of stability after a crisis, but is it an equilibrium, and if so, is it necessarily the same equilibrium as before the crisis? Is it necessary to assume the existence of an equilibrium to get a model to mimic the stability properties we think the real economy possesses? Isn’t it likely that these properties can be traced to the heterogeneity of the micro units and their behavior? Smart consumers and smart firms will find ways to minimize the consequences of a crisis, while those who are not as smart will go out of business, into unemployment and onto welfare, and when the rest of us see that, we adjust towards a new balance and out of the crisis.

If we think of a microsimulation model as an instrument for policy analysis and forecasting, it might be useful to have a model which can also simulate regions outside those we find beneficial. If the model in its simulations approaches regions with much volatility, or a region in which the economy explodes, or one with high unemployment and many firms in bankruptcy, this could give a warning to policy makers, and the model could also help suggest a policy that leads away from these undesirable regions. Examples can be found in Eliasson (1991). Using his microsimulation model MOSES, he finds that the model under certain circumstances simulates a collapse of the economy. “If firms are forced to have the same expectations as the sector average and competition assures that there are narrow limits between productivity performances and profit margins, then after 25 years the sector economy collapses.” “The same thing happened in a similar experiment …. where all firms were made to follow the ‘leader’ of each market, in this case the largest firm. If the entire group of firms in one sector happens to come into a position where the average firm would go bankrupt and/or choose to exit, all firms make identical decisions. This ‘follow John expectational design’ hence removes the robustness of the economy guaranteed by the diversity of structure.” (p.169)

My conclusion is thus that we should not align our microsimulation models to CGE models or try to build them into our models. Instead, we should take advantage of the heterogeneity of micro units and try to build into our microsimulation models the properties of stability we find realistic.

5. Agent-based models

A micro-to-macro analysis might in principle be based on an Orcutt-type dynamic microsimulation model, but agent-based models (ABMs) are perhaps more promising. In theory they are well suited to model the heterogeneity of micro units, their interaction in markets and the outcome of this interaction. One does not have to rely on the assumption of rational expectations or assume a priori the existence of an equilibrium. For these reasons, the ABM approach is promising for integrating micro and macro. Richiardi (2016) and Eliasson (2017; 2018) have strongly argued along these lines. Another illustration of the importance of modelling agent heterogeneity is found in Recchioni et al. (2015). An agent-based model for the analysis of price dynamics in the stock market is calibrated to market data using a least-squares criterion. The authors noted that “Some insights into traders’ strategies and their impact on aggregate variables have been provided by agent-based models. These models have shown that the interactions at the micro-level are crucial in comprehending macro-economic dynamics. The agent-based model approach has highlighted the interplay between the micro and macro levels, revealing the similarities and differences between the overall system and its parts.” (p. 2). They also observed that “Brock and Hommes, in several papers ….. , have studied an asset pricing model where traders can switch among different forecasting strategies.4 The switching mechanism is driven by a fitness measure, which is a function of past realized profits. The price dynamics driven by heterogeneous strategies is capable of explaining a range of complex financial behaviors. While collective behavior, ….. when agents imitate each other, …… can lead to large price fluctuation and volatility clustering.” (p. 2)
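To make the switching mechanism concrete, here is a highly stylized sketch in the spirit of the models quoted above: two forecasting rules, fundamentalists and trend followers, whose market shares are updated by a logit rule on decayed past profits. All functional forms and parameter values are illustrative simplifications of mine, not the Recchioni et al. or Brock-Hommes specifications.

```python
import numpy as np

rng = np.random.default_rng(5)
T, p_fund, beta = 500, 100.0, 2.0        # horizon, fundamental value, switching intensity
prices = [p_fund, p_fund * 1.01]
share_fund = 0.5                          # initial market share of fundamentalists
profit_fund = profit_trend = 0.0

for t in range(2, T):
    p1, p2 = prices[-1], prices[-2]
    # Two forecasting rules for the next price.
    exp_fund = p1 + 0.2 * (p_fund - p1)           # fundamentalists: revert to fundamental value
    exp_trend = p1 + 1.1 * (p1 - p2)              # trend followers: extrapolate the last change
    # Price moves towards the share-weighted average expectation, plus noise.
    avg_exp = share_fund * exp_fund + (1 - share_fund) * exp_trend
    p_new = p1 + 0.5 * (avg_exp - p1) + rng.normal(scale=0.5)
    # Update each rule's decayed realized profit from betting on its own forecast.
    profit_fund = 0.9 * profit_fund + np.sign(exp_fund - p1) * (p_new - p1)
    profit_trend = 0.9 * profit_trend + np.sign(exp_trend - p1) * (p_new - p1)
    # Logit switching on relative fitness: the better-performing rule gains market share.
    share_fund = 1.0 / (1.0 + np.exp(-beta * (profit_fund - profit_trend)))
    prices.append(p_new)

prices = np.array(prices)
print("price volatility:", prices[1:].std().round(2),
      "| final fundamentalist share:", round(share_fund, 2))
```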

The history of agent-based modelling can be traced to Guy Orcutt’s ideas of microsimulation, but he never built any model that we today would call an ABM. Early contributions were Barbara Bergman’s model “Transaction” (Bergman, 1974) and Gunnar Eliasson’s model MOSES (Eliasson, 1977; Eliasson, 1978; 1984). Since then, a literature on ABMs has developed. There are models which address the micro to macro issues without the a priori assumption of market equilibrium and models which look into the details of agents’ interaction in various markets. For a review see Ballot et al. (2014).

However, there are also disadvantages with ABMs. One needs large sets of parameters to model micro behavior, and consequently also large and rich data sets to estimate and simulate these models. Estimation is a problem in itself, which many researchers in the field have neglected; they have been satisfied if the models have been able to reproduce stylized facts. ABMs are driven by competition among micro units in the markets, which sorts out the fittest to survive while those who cannot stand up to the competition go bankrupt or into poverty. The mathematical structure of such models will include discrete jumps and discontinuities, which might be difficult to handle with standard estimation methods. Nevertheless, the quality standard of the inference used for ABMs should be no lower than that used for any other stochastic model. We must be able to tell what properties our simulations have. Otherwise, our models are not suitable instruments for policy advice and forecasting. There is a need for further development of estimation, validation and simulation methods in this context.

Computational capacity has long been a bottleneck. For instance, in Fabretti (2012) a general method of moments approach was used to estimate a relatively small model of financial markets. Even with only 12 parameters, very long execution times were required; it was also difficult to find a unique optimum of the objective function, and the results changed significantly with the time span and objective function used. We will probably do better today, as our computing capacity has increased, and it will most likely continue to do so.

It has been suggested that many ABMs can be represented by a Markov chain and that this representation might facilitate the estimation of ABMs; see Fabretti (2014) and Izquierdo et al. (2009). A Markov chain is a process with a short memory. Do we want microsimulation models in which transition probabilities only depend on the state in the nearest previous time period? There is no general answer to this question; it depends on the context and becomes an empirical issue. It has also been suggested that the Markov process should be ergodic, and perhaps even stationary, because this facilitates estimation. But are these good properties in a microsimulation model? Ergodicity implies that the model tends towards an equilibrium which does not depend on the start values. Experience from ABM modeling suggests that start values are important, and if they are not chosen carefully the models might show strong volatility, chaotic behavior and even collapse in the long run. One should therefore prefer not to assume a priori that an ABM has the ergodic property. Similarly, it might be difficult to believe that stationarity is a realistic property of an economy-wide ABM.
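A minimal sketch of what ergodicity means in practice: for a small, hypothetical transition matrix, an ergodic chain forgets its start value, so the long-run distribution is the same whichever state the simulation starts from.

```python
import numpy as np

# A small, hypothetical 3-state transition matrix (rows sum to one).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def long_run(start, P, steps=200):
    dist = np.asarray(start, dtype=float)
    for _ in range(steps):
        dist = dist @ P
    return dist

# Ergodicity: the long-run distribution is the same for any start value.
for start in ([1, 0, 0], [0, 0, 1]):
    print(start, "->", np.round(long_run(start, P), 4))

# If, in contrast, start values matter (as ABM experience suggests they often do),
# the chain is not ergodic and this property should not be imposed a priori.
```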

6. Conclusions

Microsimulation can answer questions related to heterogeneity in behavior and differences in the outcomes of economic and social policy. Microsimulation is particularly well suited for analysis of the distribution of well-being. In this respect one can say that we have fulfilled Guy Orcutt’s vision. Microsimulation also has the potential of linking micro and macro and enhancing our understanding of fundamental macro relations. Good work has been done in this direction, but we still have some distance to go before there are economy-wide models properly estimated and tested such that they become good instruments for policy advice and forecasting. New and inventive ideas will not lead to good policy advice and good forecasting if they are combined with bad data, calibration rather than estimation, and insufficiently validated models.

Stochastic general equilibrium models have been criticized for their assumption of rational expectations, leading to the a priori assumption of a stochastic equilibrium. These models do not recognize that micro units do not all behave alike and that they change behavior. They cannot explain how economic crises develop except through exogenous or stochastic disturbances (see the discussion in Richiardi, 2016; Eliasson, 2017; Eliasson, 2018). It is also very unlikely that we will find congruence between a realistic microsimulation model and a general equilibrium model.

My conclusion is thus that we should not try to incorporate general equilibrium macro models into our microsimulation models or link them to such models. Instead, our efforts should be directed to the study of the heterogeneity and adaptive behavior of micro units, so that markets and market changes can be included in our microsimulation models. Macro entities will then, following Orcutt, come out as a simple summation over micro units. (I disregard the fact that statistical bureaus usually massage their primary data before forming macro aggregates. It is therefore not certain that a simple summation across micro units will give exactly the same macro estimates as those produced by a statistical bureau; some adjustments of the simulated micro data may become necessary.) When a microsimulation model produces both micro and macro simulations, we should use both micro data and macro data to estimate the model. Macro data alone will almost certainly not identify the model. The parameter estimates will then hopefully give good, quality-assured, simulated estimates of both micro and macro behavior.

So yes, to fulfill the visions of Guy Orcutt we should invest in long-term research programs which attract experts from different fields. Such a program should:

  • Systematically analyze the heterogeneity in micro behavior,

  • Model the interaction of people, firms and institutions in various markets,

  • Study and model the detailed influence of true policy parameters on micro agents and markets,

  • Develop a new micro analytical basis for macro analysis,

  • Collect adequate data,

  • Develop and use sound inference methods for estimation, testing and simulation such that the properties of simulation results become known.

Footnotes

1. The sampling variability can be handled through conventional sampling theory or by resampling techniques. One example is McClelland et al. (2020), which computes confidence bands for a tax-benefit model using both a bootstrap approach and a more conventional large-sample normal approximation. The article also briefly reviews similar studies of other models.

2. In Maitino et al. (2020) an interesting alternative to a conventional top-down alignment is used. A macro model predicts whether there is excess demand for labor, excess supply of labor, or balance in the market. In the case of excess demand or supply, a number of workers in the microsimulation model move between employment and unemployment in proportion to the size of the imbalance.

3. See for instance Theil (1954), Fisher (1969) and Lütjohann (1974).

4. For instance, fundamentalists and trend followers.

References

  1. Ballot G, Mandel A, Vignes A (2014). Agent-based modeling and economic theory: Where do we stand? Journal of Economic Interaction and Coordination 10:199–220. https://doi.org/10.1007/s11403-014-0132-6
  2. Bergman B (1974). A Microsimulation of the Macroeconomy with Explicitly Represented Money Flows. Annals of Economic and Social Measurement 3:475–479.
  3. Bergmann B, et al. (eds.) (1980). Micro Simulation – Models, Methods and Applications. Proceedings of a symposium in Stockholm, Sept. 19–22, 1977. IUI Conference Reports 1980:1.
  4. Creedy J, Duncan A (2001). Aggregating Labour Supply and Feedback Effects in Microsimulation. The Institute for Fiscal Studies, WP01/24:1–24.
  5. Eliasson G (1977). Competition and market processes in a simulation model of the Swedish economy. The American Economic Review 67(1):277–281.
  6. Eliasson G (1978). A Micro-to-Macro Model of the Swedish Economy. Papers on the Swedish Model from the Symposium on Micro Simulation Methods in Stockholm, Sept. 19–22, 1977. The Industrial Institute for Economic and Social Research (IUI) Conference Reports 1978:1. Stockholm: Almqvist & Wiksell International.
  7. Eliasson G (1984). Micro-heterogeneity of firms and the stability of industrial growth. Journal of Economic Behavior & Organization 5:249–274. https://doi.org/10.1016/0167-2681(84)90002-7
  8. Eliasson G (1991). Modeling the experimentally organized economy: Complex dynamics in an empirical micro-macro model of endogenous economic growth. Journal of Economic Behavior and Organization 16:153–182. https://doi.org/10.1016/0167-2681(91)90047-2
  9. Eliasson G (2017). Why Complex, Data Demanding and Difficult to Estimate Agent Based Models? Lessons from a Decades Long Research Program. International Journal of Microsimulation 11:4–60. https://doi.org/10.34196/ijm.00173
  10. Eliasson G (2018). Micro to Macro Evolutionary Modeling: On the Economics of Self Organization of Dynamic Markets by Ignorant Actors. In: Foundation of Economic Change (Economic Complexity and Evolution). Springer International Publishing. https://doi.org/10.1007/978-3-319-62009-1
  11. Fabretti A (2012). On the problem of calibrating an agent based model for financial markets. Journal of Economic Interaction and Coordination 8:277–293. https://doi.org/10.1007/s11403-012-0096-3
  12. Fabretti A (2014). A Markov Chain approach to ABM calibration. In: Advances in Computational Social Science and Social Simulations. Barcelona: Autonomous University of Barcelona.
  13. Fisher FM (1969). The Existence of Aggregate Production Functions. Econometrica 37:553–577. https://doi.org/10.2307/1910434
  14. Gualdi S, Tarzia M, Zamponi F, Bouchaud J-P (2015). Tipping points in macroeconomic agent-based models. Journal of Economic Dynamics & Control 50:29–61.
  15. Hansen LP, Heckman JJ (1996). The empirical foundation of calibration. Journal of Economic Perspectives 10:87–104. https://doi.org/10.1257/jep.10.1.87
  16. Izquierdo LS, Izquierdo SS, Galan JM, Santos JI (2009). Techniques to Understand Computer Simulations: Markov Chain Analysis. Journal of Artificial Societies and Social Simulation 12:6.
  17. Klevmarken NA (2002). Statistical inference in micro-simulation models: incorporating external information. Mathematics and Computers in Simulation 59:255–265. https://doi.org/10.1016/S0378-4754(01)00413-X
  18. Klevmarken NA (2008). Dynamic Microsimulation for Policy Analysis: Problems and Solutions. In: Klevmarken NA, Lindgren B (eds.), Simulating an Ageing Population: A Microsimulation Approach Applied to Sweden, Chapter 2, pp. 31–53. Contributions to Economic Analysis 285. Bingley, UK: Emerald Group Publishing. https://doi.org/10.1016/S0573-8555(07)00002-8
  19. Li J, O’Donoghue C (2012). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation 6:3–55. https://doi.org/10.34196/ijm.00082
  20. Lütjohann H (1974). Linear Aggregation in Linear Regression. Stockholm: Stockholm University.
  21. Maitino ML, Ravagli L, Sciclone N (2020). IrpetDin: A Dynamic Microsimulation Model for Italy and the Region of Tuscany. International Journal of Microsimulation 13:27–53. https://doi.org/10.34196/IJM.00224
  22. McClelland R, Khitatrakun S, Lu C (2020). Estimating Confidence Intervals in a Tax Microsimulation Model. International Journal of Microsimulation 13:2–20. https://doi.org/10.34196/IJM.00216
  23. Orcutt G (1957). A new type of socio-economic system. Review of Economics and Statistics 58:773–797. https://doi.org/10.2307/1928528
  24. Orcutt G (1980). Hypothesis Formation, Testing and Estimation for Microanalytical Modeling. Section III:1 in Bergmann et al. (1980).
  25. Orcutt G, Glaser A (1980). Microanalytical Modelling and Simulation. Section III:2 in Bergmann et al. (1980).
  26. Orcutt G, Greenberger M, Korbel J, Rivlin A (1961). Microanalysis of Socioeconomic Systems: A Simulation Study. New York: Harper and Row.
  27. Orcutt GH, Caldwell S, Wertheimer R (1976). Policy Explorations Through Microanalytic Simulations. Washington, D.C.: The Urban Institute.
  28. Peichl A (2015). Linking microsimulation and CGE models. International Journal of Microsimulation 9:167–174. https://doi.org/10.34196/ijm.00132
  29. Recchioni MC, Tedeschi G, Gallegati M (2015). A calibration procedure for analyzing stock price dynamics in an agent-based framework. Journal of Economic Dynamics and Control 60:1–25. https://doi.org/10.1016/j.jedc.2015.08.003
  30. Richiardi MG (2016). The future of agent-based modelling. Eastern Economic Journal 43:271–287. https://doi.org/10.1057/s41302-016-0075-9
  31. Salonen J, Tikanmäki H, Nummi T (2018). Using trajectory analysis to test and illustrate microsimulation outcomes. International Journal of Microsimulation 12:3–17. https://doi.org/10.34196/ijm.00198
  32. Stephensen P (2015). Logit Scaling: A General Method for Alignment in Microsimulation Models. International Journal of Microsimulation 9:89–102. https://doi.org/10.34196/ijm.00144
  33. Theil H (1954). Linear Aggregation of Economic Relations. Amsterdam: North-Holland.

Article and author information

Author details

  1. Anders Klevmarken

    Department of Economics, Uppsala, Sweden
    For correspondence
    anders@klevmarken.nu
    Competing interests
    No competing interests reported

Acknowledgements

I have had the benefit of discussing previous drafts of this paper with Gunnar Eliasson, and I appreciate his many good comments. Any remaining errors are obviously mine.

Publication history

  1. Version of Record published: April 30, 2022 (version 1)

Copyright

© 2022, Anders Klevmarken

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
