Predictions of water and energy budgets at the land surface are central to climate simulation and numerical weather prediction, as well as to water resources planning and management. Macroscale hydrological models provide a new tool for simulating surface water and energy balances at the scale of large continental river basins. However, these models are limited by the scarcity of in situ meteorological forcing data. Remote sensing data provide an alternative to in situ data, with observations that are, in some cases, at higher spatial and temporal resolution than those available from traditional surface sources. Nonetheless, important questions remain as to whether the accuracy of remotely sensed surface variables is sufficient for them to serve as forcings for surface hydrological models. This question is addressed through comparison of hydrologic simulations for the Ohio River basin with the Variable Infiltration Capacity (VIC) macroscale hydrology model, using in situ and remotely sensed data. In situ data consist of gridded (at ½ degree latitude-longitude spatial resolution) precipitation, temperature, and wind, with downward solar and longwave radiation inferred from the diurnal temperature range. Remotely sensed observations include incident solar radiation, air temperature, and vapor pressure deficit inferred from the Geostationary Operational Environmental Satellite (GOES), the Advanced Very High Resolution Radiometer (AVHRR), and the TIROS Operational Vertical Sounder (TOVS), respectively. Precipitation, in all cases, is from gridded station data. The modeled streamflows and evapotranspiration rates are quite similar for the two cases. The largest differences in predicted surface hydrology are associated with differences in modeled snow cover accumulation and snowmelt, and result from a warm bias in the remotely sensed temperature data.
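The inference of downward solar radiation from the diurnal temperature range mentioned above is commonly done with an empirical transmittance relation such as Bristow and Campbell (1984); the abstract does not state which formulation the authors use, so the sketch below is illustrative only, with hypothetical coefficient values (`a`, `b`, `c` are site-specific and must be calibrated):

```python
import math

def solar_from_dtr(delta_t, r_extraterrestrial, a=0.75, b=0.004, c=2.4):
    """Estimate daily incident shortwave radiation (same units as
    r_extraterrestrial) from the diurnal temperature range delta_t (deg C)
    using a Bristow-Campbell-type transmittance:
        tau = a * (1 - exp(-b * delta_t**c))
    A larger day-night temperature swing implies clearer skies and hence
    higher atmospheric transmittance. Coefficients here are illustrative,
    not the values used in the study."""
    tau = a * (1.0 - math.exp(-b * delta_t ** c))
    return tau * r_extraterrestrial

# Example: a 12 deg C diurnal range on a day with 30 MJ m-2 of
# extraterrestrial radiation yields a clear-sky-like estimate.
rs = solar_from_dtr(12.0, 30.0)
```

The estimate is bounded above by `a * r_extraterrestrial` and increases monotonically with the diurnal range, which is the qualitative behavior any such transmittance model must reproduce.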