An important yet little-studied problem in network analysis is the effect of errors introduced when networks are constructed. Errors can arise both from the limitations of data collection techniques and from implicit bias in modeling the network. In both cases, they alter the network through additional or missing edges, collectively termed noise. Given that network analysis underpins many critical applications, from criminal identification to targeted drug discovery, it is important to evaluate how much noise affects the analysis results. In this paper, we present an empirical study of how different types of noise affect real-world networks. Specifically, we apply four noise models to a suite of nine networks at different levels of perturbation, and test how the ranking of the top-k centrality vertices changes. Our results show that deleting edges affects centrality rankings less than adding edges. Nevertheless, the stability of the ranking depends on all three factors: the structure of the network, the noise model used, and the centrality metric computed. To the best of our knowledge, this is one of the first extensive studies to conduct both longitudinal (across different networks) and horizontal (across different noise models and centrality metrics) experiments to understand the effect of noise on network analysis.
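The experimental procedure described above — perturb a network under a noise model, recompute centrality, and compare the top-k vertex rankings — can be sketched as follows. This is a minimal illustration, not the paper's actual code: the uniform add/delete noise model, the choice of degree centrality, and the overlap measure are assumptions made for the example.

```python
import random

def degree_centrality(adj):
    """Degree centrality: fraction of other vertices each vertex touches."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def top_k(cent, k):
    """Set of the k vertices with the highest centrality scores."""
    return set(sorted(cent, key=cent.get, reverse=True)[:k])

def perturb(adj, p_add, p_del, rng):
    """Uniform noise model (illustrative): delete each existing edge with
    probability p_del and add each absent edge with probability p_add."""
    nodes = list(adj)
    noisy = {v: set() for v in nodes}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            present = v in adj[u]
            keep = present and rng.random() >= p_del
            added = (not present) and rng.random() < p_add
            if keep or added:
                noisy[u].add(v)
                noisy[v].add(u)
    return noisy

rng = random.Random(0)

# Build a small random graph (stand-in for a real-world network).
n = 30
adj = {v: set() for v in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.2:
            adj[i].add(j)
            adj[j].add(i)

# Compare the top-k ranking before and after perturbation.
k = 5
base = top_k(degree_centrality(adj), k)
noisy = perturb(adj, 0.02, 0.02, rng)
pert = top_k(degree_centrality(noisy), k)
overlap = len(base & pert) / k
print(f"top-{k} overlap after perturbation: {overlap:.2f}")
```

In the full study, this comparison would be repeated over many noise realizations, perturbation levels, noise models, and centrality metrics, reporting how stable the top-k set remains in each configuration.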