Integrated visualizations for assisted navigation that support both wayfinding and spatial learning were investigated. Participants navigated a predefined route with assistance through a virtual environment, visiting five target locations, and wayfinding accuracy was assessed. After wayfinding, self-to-object knowledge was measured with pointing tasks, and object-to-object knowledge was measured with an allocentric configurational task. Self-to-object knowledge was supported by self-to-targets visualizations, which provided information about the egocentric straight-line directions between the navigator and the target locations. The acquisition of both object-to-object and self-to-object knowledge was supported by comprehensive map visualizations. Alignment (rotating the visualization according to changes of heading in the environment) appeared to support the acquisition of self-to-object knowledge with self-to-targets visualizations but was not effective with comprehensive maps. Wayfinding was impeded when visualizations were not aligned with the current heading, and individual differences in perspective-taking ability played a strong role in wayfinding accuracy with misaligned visualizations. It is concluded that, in the context of assisted navigation, visualizations are encoded egocentrically for wayfinding purposes and that the acquisition of self-to-object spatial knowledge can accordingly be supported through appropriate visualizations.