The aim of removing camera shake is to estimate a sharp image x from a shaken image y when the blur kernel k is unknown. Recent research on this topic has evolved through two paradigms, MAP_k and MAP_{x,k}. MAP_k solves only for k by marginalizing over the latent image, while MAP_{x,k} recovers both x and k by selecting the mode of the posterior distribution. This paper first systematically analyses the latent limitations of these two estimators through Bayesian analysis. We explain why it is so difficult for natural image statistics alone to resolve the previously reported failure of MAP_{x,k}. We then show that the leading MAP_{x,k} methods, which depend on efficient prediction of large step edges, are not robust to natural images because of the diversity of edges. MAP_k, although much more robust to diverse edges, is constrained by two factors: the variation of the prior over different images, and the ratio between image size and kernel size. To overcome these limitations, we introduce an inter-scale prior prediction scheme and a principled mechanism for integrating a sharpening filter into MAP_k. Both qualitative results and extensive quantitative comparisons demonstrate that our algorithm outperforms state-of-the-art methods.
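The forward model underlying the abstract can be illustrated with a minimal sketch (not the paper's algorithm): a shaken image y arises from convolving the sharp image x with an unknown kernel k, i.e. y = k * x (plus noise, omitted here). All names and the toy step-edge image below are our own illustrative assumptions.

```python
# Illustrative sketch of the camera-shake forward model y = k * x.
# Blind deblurring must recover x (and k) given only y.

def convolve2d(x, k):
    """'Valid' 2-D convolution of image x with kernel k (plain nested lists).
    The toy kernel below is symmetric, so kernel flipping is omitted."""
    xh, xw = len(x), len(x[0])
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(xh - kh + 1):
        row = []
        for j in range(xw - kw + 1):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += k[a][b] * x[i + a][j + b]
            row.append(s)
        out.append(row)
    return out

# A step-edge image and a 2-tap horizontal motion kernel (sums to 1).
x = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
k = [[0.5, 0.5]]

y = convolve2d(x, k)  # the sharp step edge becomes a gradual ramp
```

Note how the sharp 0-to-1 transition in x is smeared into an intermediate value of 0.5 in y; this blurring of step edges is exactly what the edge-prediction methods discussed in the abstract try to invert.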