The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio-angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.
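The depth-assisted rendering idea sketched above can be illustrated with a toy model. In a focused plenoptic camera, adjacent microimages overlap, and the per-pixel patch size that makes neighboring microimages agree encodes scene depth; rendering then tiles patches of that size. The sketch below is not the paper's algorithm but a minimal, assumption-laden illustration: `make_lightfield` simulates microimages as shifted windows of a flat synthetic scene, `estimate_patch_size` recovers the shift (a single global "depth") by minimizing disagreement between horizontal neighbors, and `render` tiles the central patches. All function names and the single-depth assumption are hypothetical simplifications.

```python
import numpy as np

def make_lightfield(scene, n, s, shift):
    """Simulate focused-plenoptic capture of a flat scene: each microlens
    records an s-by-s window, and adjacent windows shift by `shift` pixels
    (the patch size corresponding to the scene's single depth)."""
    lf = np.empty((n, n, s, s))
    for i in range(n):
        for j in range(n):
            lf[i, j] = scene[i * shift:i * shift + s,
                             j * shift:j * shift + s]
    return lf

def estimate_patch_size(lf, candidates):
    """Pick the patch size whose shift best aligns horizontally adjacent
    microimages (minimum summed squared difference over all pairs)."""
    n, _, s, _ = lf.shape
    best, best_err = candidates[0], np.inf
    for m in candidates:
        # Overlapping region of microimage (i, j) and its right neighbor,
        # assuming a horizontal disparity of m pixels between them.
        err = sum(np.sum((lf[i, j, :, m:] - lf[i, j + 1, :, :s - m]) ** 2)
                  for i in range(n) for j in range(n - 1))
        if err < best_err:
            best, best_err = m, err
    return best

def render(lf, m):
    """Render by tiling the central m-by-m patch of every microimage;
    with the correct m, patches abut seamlessly."""
    n, _, s, _ = lf.shape
    lo = (s - m) // 2
    out = np.empty((n * m, n * m))
    for i in range(n):
        for j in range(n):
            out[i * m:(i + 1) * m, j * m:(j + 1) * m] = \
                lf[i, j, lo:lo + m, lo:lo + m]
    return out
```

For example, simulating an 8-by-8 microlens array with 16-pixel microimages and a true shift of 6 pixels, `estimate_patch_size` recovers 6, and the rendered mosaic reproduces the corresponding region of the synthetic scene exactly. A real implementation would estimate depth per region rather than globally, which is what lets the full algorithm suppress artifacts at depth discontinuities.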