A post-processing method for calibrating probabilistic forecasts of continuous weather variables is presented. The method takes an existing probability distribution and adjusts it so that it becomes calibrated in the long run. The original probability distributions can be, for example, those generated from a numerical weather prediction (NWP) ensemble combined with a description of how uncertainty is represented by that ensemble. The method uses a calibration function to relabel raw cumulative probabilities as calibrated cumulative probabilities, based on where past observations verified on past raw probability forecasts. Applying the calibration method to existing probabilistic forecasts is beneficial when the assumptions used to construct the probabilistic forecast are inconsistent with the process by which nature generates the ensemble and the corresponding observation. The method was tested on a forecast data set with five forecast variables and verified against the corresponding analyses. The calibration method reduced the calibration deficiency of the forecasts to the level expected of perfectly calibrated forecasts. When the raw forecasts exhibited calibration deficiencies, the calibration method also significantly improved the ignorance score. The ensemble-uncertainty model used to create the original probability distribution was likewise found to affect the ignorance score.
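The relabeling described above can be sketched in code. A natural realization, and an assumption here rather than the paper's exact formulation, is to take the calibration function to be the empirical cumulative distribution of past probability integral transform (PIT) values, i.e. the raw cumulative probabilities at which past observations verified. All names and the synthetic PIT sample below are illustrative.

```python
import numpy as np

def fit_calibration_function(past_pit_values):
    """Build a calibration function from past PIT values.

    past_pit_values: raw cumulative probabilities F_raw(y_obs) at which
    past observations verified. Returns a function that relabels a raw
    cumulative probability as a calibrated one, using the empirical CDF
    of the PIT sample (linearly interpolated between sample points).
    """
    pit = np.sort(np.asarray(past_pit_values, dtype=float))
    n = len(pit)
    # Empirical CDF levels; the n + 1 denominator keeps outputs in (0, 1).
    levels = np.arange(1, n + 1) / (n + 1)

    def calibrate(p_raw):
        # Fraction of past observations that verified at or below p_raw.
        return np.interp(p_raw, pit, levels)

    return calibrate

# Illustration: an overconfident raw forecast concentrates observations
# in the tails, giving a U-shaped PIT histogram (Beta(0.5, 0.5) here).
rng = np.random.default_rng(0)
pit_sample = rng.beta(0.5, 0.5, size=1000)
calibrate = fit_calibration_function(pit_sample)
# After relabeling, calibrated cumulative probabilities are uniform over
# the training sample by construction, i.e. calibrated in the long run.
```

The same idea extends to smoothed or parametric calibration functions; the empirical-CDF version is simply the most direct estimator of where past observations fell.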