The smoothness prediction models (in terms of IRI) currently incorporated into the Mechanistic-Empirical Pavement Design Guide (M-E PDG) of the United States were developed by means of Ordinary Least Squares (OLS). However, some of the variables used to predict future IRI were themselves estimated by separate performance models, which can introduce bias because the previously estimated distress types are correlated with the unobserved components of the IRI model. This bias can be corrected by adding variables that are correlated with the distress types causing the bias, thereby eliminating their correlation with the unobserved terms in the model. Bias can also arise from unobserved factors that are not included in the IRI model; if these factors are section-specific, the bias can be removed by taking into account performance time-history data from several pavement sections. The author used updated LTPP data that are consistent with the dataset originally used to fit the Mechanistic-Empirical IRI model for flexible pavements over thick granular bases. These data were then used to model IRI by means of Ordinary Least Squares and Instrumental Variable regressions, treating the data both as a pooled dataset and as a panel dataset (using random-effects, fixed-effects, and joint fixed-effects approaches), in order to check for possible bias in the model.
It was found that the current IRI model, as estimated by OLS, exhibits several types of bias arising from heterogeneity and from incorrect assumptions in the modeling process. The joint fixed-effects approach was identified as the preferred IRI model, and the model parameters were therefore re-estimated with corrections for omitted-variable bias and simultaneous-equation bias. Accounting for possible bias in the data suggested considerable changes in the estimated effects of the parameters that influence IRI over time, mainly rutting of the pavement structure.
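To make the estimation strategies described above concrete, the following is a minimal, self-contained sketch on synthetic panel data, not the study's actual dataset or variable set: it contrasts pooled OLS, a within (fixed-effects) estimator, and a simple two-stage least squares instrumental-variable step. The variable names (section, age, rut, traffic, iri) and the instrument choice are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n_sections, n_years = 60, 10
section = np.repeat(np.arange(n_sections), n_years)
age = np.tile(np.arange(n_years, dtype=float), n_sections)

alpha = rng.normal(0.0, 0.5, n_sections)[section]      # unobserved section-specific effect
traffic = rng.normal(0.0, 1.0, section.size)           # exogenous driver of rutting (instrument)
rut = 0.4 * age + 0.3 * traffic + alpha + rng.normal(0.0, 0.2, section.size)
iri = 1.0 + 0.8 * rut + 0.05 * age + alpha + rng.normal(0.0, 0.1, section.size)
df = pd.DataFrame({"section": section, "age": age, "traffic": traffic,
                   "rut": rut, "iri": iri})

def ols(y, X):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# 1) Pooled OLS: biased, since rutting is correlated with the omitted section effect.
b_pooled = ols(df["iri"].to_numpy(), df[["rut", "age"]].to_numpy())

# 2) Fixed effects (within transformation): demeaning by section sweeps out alpha.
dm = df.groupby("section")[["iri", "rut", "age"]].transform(lambda s: s - s.mean())
b_fe = ols(dm["iri"].to_numpy(), dm[["rut", "age"]].to_numpy())

# 3) Two-stage least squares: replace rutting by its projection on the instruments.
stage1 = ols(df["rut"].to_numpy(), df[["traffic", "age"]].to_numpy())
rut_hat = stage1[0] + df[["traffic", "age"]].to_numpy() @ stage1[1:]
b_2sls = ols(df["iri"].to_numpy(), np.column_stack([rut_hat, df["age"].to_numpy()]))

print("pooled OLS    (rut, age):", b_pooled[1:])
print("fixed effects (rut, age):", b_fe[1:])
print("2SLS          (rut, age):", b_2sls[1:])
```

On data generated this way, pooled OLS overstates the rutting effect because rutting is correlated with the unobserved section effect, while the within and 2SLS estimates recover coefficients close to the true values; this mirrors, in simplified form, the kind of bias the study examines in the M-E PDG IRI model.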