The paper deals with the statistical modeling of convergence and cohesion over time using kurtosis, skewness and L-moments. Changes in the shape of the distribution related to the spatial allocation of socio-economic phenomena are considered as evidence of global shift, divergence or convergence. Cross-sectional time-series statistical modeling of the variables of interest is intended to overcome the shortcomings of theoretical econometric models of convergence and cohesion determinants. L-moments prove far more stable and interpretable than classical measures. Empirical evidence from panel data shows that a single pure pattern (global shift, polarization or cohesion) rarely exists, so joint analysis is required.
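The L-moment comparison can be illustrated with a standard computation of sample L-moments (Hosking's probability-weighted-moment estimators, which the abstract's measures are presumably based on); this is a generic sketch, not code from the paper:

```python
from math import comb

def sample_l_moments(data):
    """First two sample L-moments plus L-skewness and L-kurtosis,
    via unbiased probability-weighted moments b_0..b_3."""
    x = sorted(data)
    n = len(x)
    # b_r = (1/n) * sum_{i=r+1..n} [C(i-1, r) / C(n-1, r)] * x_(i)  (1-based i)
    b = [
        sum(comb(i, r) * x[i] for i in range(r, n)) / (n * comb(n - 1, r))
        for r in range(4)
    ]
    l1 = b[0]                                   # location
    l2 = 2 * b[1] - b[0]                        # scale
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2             # (l1, l2, tau_3, tau_4)

# A symmetric sample has L-skewness tau_3 = 0.
l1, l2, tau3, tau4 = sample_l_moments([1, 2, 3, 4, 5])
```

Because the ratios tau_3 and tau_4 are bounded and linear in the order statistics, they are far less sensitive to outliers than classical skewness and kurtosis, which is the stability property the abstract refers to.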

This paper presents a Bayesian model averaging regression framework for forecasting US inflation, in which the set of predictors included in the model is automatically selected from a large pool of potential predictors and the set of regressors is allowed to change over time. Using real-time data on the 1960–2011 period, this model is applied to forecast personal consumption expenditures and gross domestic product deflator inflation. The results of this forecasting exercise show that, although it is not able to beat a simple random-walk model in terms of point forecasts, it does produce superior density forecasts compared with a range of alternative forecasting models. Moreover, a sensitivity analysis shows that the forecasting results are relatively insensitive to prior choices and the forecasting performance is not affected by the inclusion of a very large set of potential predictors.
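The paper's framework (time-varying predictor sets, real-time data) is elaborate; the core idea of averaging forecasts across candidate predictor sets can be sketched with BIC-approximated posterior model weights over single-predictor regressions. The data and predictor names below are hypothetical, not the paper's:

```python
import math

def ols_1var(x, y):
    """Simple regression y = a + b*x; returns (a, b, MLE residual variance)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, sse / n

def bma_forecast(predictors, y, x_new):
    """Weight each one-predictor model by exp(-BIC/2), normalize,
    and return the weight-averaged point forecast."""
    n, k = len(y), 2  # k = intercept + slope
    fits = []
    for name, x in predictors.items():
        a, b, s2 = ols_1var(x, y)
        bic = n * math.log(s2) + k * math.log(n)
        fits.append((name, a, b, bic))
    best = min(f[3] for f in fits)                 # stabilize the exponentials
    raw = [math.exp(-(f[3] - best) / 2) for f in fits]
    tot = sum(raw)
    weights = {f[0]: w / tot for f, w in zip(fits, raw)}
    forecast = sum(weights[name] * (a + b * x_new[name])
                   for name, a, b, _ in fits)
    return weights, forecast

y = [1.0, 2.1, 2.9, 4.2, 5.0, 6.1]                 # hypothetical target series
predictors = {"x1": [1, 2, 3, 4, 5, 6],            # informative predictor
              "x2": [3, 1, 4, 1, 5, 9]}            # uninformative predictor
weights, forecast = bma_forecast(predictors, y, {"x1": 7.0, "x2": 2.0})
```

The BIC weight is a large-sample approximation to the posterior model probability; the paper's actual scheme uses explicit priors over models and parameters.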

Permutation tests for serial independence using three different statistics based on empirical distributions are proposed. These tests are shown to be consistent under the alternative of *m*-dependence and are all simple to perform in practice. A small simulation study demonstrates that the proposed tests have good power in small samples. The tests are then applied to Canadian gross domestic product (GDP) data, corroborating the random-walk hypothesis of GDP growth.
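The permutation logic is the same whatever statistic is used; for brevity this sketch substitutes the absolute lag-1 autocorrelation for the paper's empirical-distribution statistics:

```python
import random

def lag1_autocorr(x):
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

def permutation_test(x, n_perm=199, seed=0):
    """p-value for serial independence: under the null, every ordering of
    the observations is equally likely, so shuffle, recompute the statistic,
    and count permutations at least as extreme as the observed value."""
    rng = random.Random(seed)
    observed = abs(lag1_autocorr(x))
    count = sum(
        abs(lag1_autocorr(rng.sample(x, len(x)))) >= observed
        for _ in range(n_perm)
    )
    return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# A strongly trending series is clearly serially dependent.
p = permutation_test([float(t) for t in range(30)])
```

The test is exact in the sense that its size is controlled for any sample length, which is why such tests perform well in the small samples the abstract mentions.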


Official statistics production based on a combination of data sources, including sample surveys, censuses and administrative registers, is becoming more and more common. Reduced response burden, gains in production cost efficiency, and the potential for detailed spatial-demographic and longitudinal statistics are some of the major advantages associated with the use of integrated statistical data. Data integration has always been an essential feature of the use of administrative register data. But survey and census data should also be integrated, so as to widen their scope and improve their quality. There are many new and difficult challenges here that go beyond the traditional topics of survey sampling and data integration. In this article, we consider statistical theory for data integration at a conceptual level. In particular, we present a two-phase life cycle model for integrated statistical microdata, which provides a framework for the various potential error sources, and we outline some concepts and topics for quality assessment beyond the ideal of error-free data. A shared understanding of these issues will hopefully help us to align and coordinate efforts in future research and development.

Hedonic methods are a prominent approach in the construction of quality-adjusted price indexes. This paper shows that the process of computing such indexes is substantially simplified if arithmetic (geometric) price indexes are computed based on exponential (log-linear) hedonic functions estimated by the Poisson pseudo-maximum likelihood (ordinary least squares) method. A Monte Carlo simulation study based on housing data illustrates the convenience of the links identified and the very attractive properties of the Poisson estimator in the hedonic framework.
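The log-linear/OLS side of the link can be sketched with a time-dummy hedonic regression: the geometric quality-adjusted index is simply exp of the time-dummy coefficient. The housing data below are made up (generated to satisfy the model exactly), and the Poisson-PML/arithmetic side is omitted:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Time-dummy hedonic: log(price) = b0 + b1*size + delta*period_dummy.
rows = [(50, 0), (80, 0), (120, 0), (60, 1), (100, 1)]   # (size, period)
logp = [1.0 + 0.01 * s + 0.10 * d for s, d in rows]      # exact model data

X = [[1.0, float(s), float(d)] for s, d in rows]
# OLS via the normal equations X'X beta = X'y.
XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(3)]
       for a in range(3)]
Xty = [sum(X[i][a] * logp[i] for i in range(len(X))) for a in range(3)]
beta = solve3(XtX, Xty)

index = math.exp(beta[2])   # geometric quality-adjusted price index, period 1 vs 0
```

Because the OLS estimate of the time dummy is a mean of log price relatives after quality adjustment, exponentiating it yields a geometric index; the paper's point is that the exponential hedonic function estimated by Poisson PML plays the analogous role for arithmetic indexes.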

In the context where one main regressor is measured with error and at least one instrumental variable is available for the correction of measurement error, this paper provides, to the best of our knowledge, a first point-identification result on the variance of the measurement error, the variance of the latent variable, and their covariance. We show that the parameters are identified if the regression model is not *de facto* linear. We illustrate the method in an application to identify mean-reverting measurement error, a typical issue in reported income, where the measurement error of income is negatively correlated with the true income.

In this paper, we consider balanced hierarchical data designs for both one-sample and two-sample (two-treatment) location problems. The variances of the relevant estimates and the powers of the tests strongly depend on the data structure through the variance components at each hierarchical level. Also, the costs of a design may depend on the number of units at different hierarchy levels, and these costs may be different for the two treatments. Finally, the number of units at different levels may be restricted by several constraints. Knowledge of the variance components, the costs at each level, and the constraints allows us to find the optimal design. Solving such problems often requires advanced optimization tools and techniques, which we briefly explain in the paper. We develop new analytical tools for sample size calculations and cost optimization and apply our method to a data set on Baltic herring.
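The cost-optimization idea can be illustrated with the textbook two-stage (cluster) special case, not the paper's new tools: minimize Var(mean) = s2_b/m + s2_w/(m*n) over m clusters of n units each, subject to a budget with per-cluster cost c1 and per-unit cost c2. All numbers below are illustrative:

```python
import math

def optimal_two_stage(sigma2_between, sigma2_within, c_cluster, c_unit, budget):
    """Closed-form optimum for a balanced two-stage design:
    n* = sqrt(c_cluster * s2_w / (c_unit * s2_b)),
    then spend the budget: m* = budget / (c_cluster + c_unit * n*)."""
    n = math.sqrt((c_cluster * sigma2_within) / (c_unit * sigma2_between))
    m = budget / (c_cluster + c_unit * n)
    var = sigma2_between / m + sigma2_within / (m * n)
    return n, m, var

# Illustrative variance components and costs (continuous solution;
# in practice n and m are rounded to feasible integers).
n, m, var = optimal_two_stage(sigma2_between=4.0, sigma2_within=16.0,
                              c_cluster=100.0, c_unit=4.0, budget=2800.0)
```

The closed form shows the abstract's point that the optimum is driven by the ratio of variance components weighted by the cost ratio; with additional hierarchy levels and constraints, such closed forms break down and numerical optimization is needed.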
