Omnidirectional video (cylindrical or spherical) is a new medium that is becoming increasingly popular thanks to its interactivity, both in online multimedia applications such as Google Street View and in video surveillance and robotics. Interactivity in this context means that the user can explore and navigate audio-visual scenes by freely choosing the viewpoint and viewing direction. To provide this key feature, omnidirectional video is typically represented as a classical two-dimensional (2D) rectangular panorama video that is mapped onto a spherical or cylindrical mesh and then rendered on the client's screen. Early transmission schemes for this full panorama and mesh content simply treat the panorama as a high-resolution video to be encoded at uniform quality. However, the user generally views only a restricted field of view of the content, interacting through pan-tilt-zoom commands, so a significant share of the bandwidth is wasted on transmitting high-quality video in regions that are not being visualized. In this paper we evaluate the relevance and optimality of a personalized transmission in which quality is modulated across spherical or cylindrical regions according to their likelihood of being viewed during a live user interaction. We show, under interaction-delay as well as bandwidth constraints, how tiling and predictive methods can improve on existing methods. © 2012 Alcatel-Lucent.
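The likelihood-weighted quality modulation described above can be illustrated with a minimal sketch. The function below is not from the paper; it merely shows one plausible allocation rule, assumed for illustration: split a total bitrate budget across panorama tiles in proportion to each tile's estimated view probability, with a per-tile floor so that regions outside the predicted viewport remain decodable if the user suddenly pans toward them.

```python
# Illustrative sketch (assumed, not the paper's method): distribute a bitrate
# budget over panorama tiles proportionally to their view likelihood.

def allocate_bitrate(view_probs, total_kbps, floor_kbps=100.0):
    """Split total_kbps across tiles proportionally to view_probs,
    guaranteeing every tile at least floor_kbps."""
    n = len(view_probs)
    assert total_kbps >= n * floor_kbps, "budget too small for the floor"
    spare = total_kbps - n * floor_kbps          # budget left after floors
    total_p = sum(view_probs) or 1.0             # avoid division by zero
    return [floor_kbps + spare * p / total_p for p in view_probs]

# Example: four tiles; the viewer is most likely looking at tile 1.
rates = allocate_bitrate([0.1, 0.6, 0.2, 0.1], total_kbps=2000.0)
```

Under this rule the tile with the highest view probability receives the largest share of the spare budget, while the floor bounds the quality penalty paid when a viewport prediction turns out to be wrong.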