Recently, deep artificial neural networks (DANNs) have been successfully applied to various pattern recognition tasks with high industrial impact. Their results are so convincing that neural networks are already being tested in heavily regulated fields such as medicine and finance. However, these autonomous systems are often deployed without evaluating the reasoning behind their decisions. Thus, recent research has shifted towards methods that increase the interpretability of DANNs. The goal of this paper is to explain the influence of input variables on the decision of a DANN. More precisely, we aim to improve the linear weighting scheme for the contribution of input variables (LICON), a previously introduced method that estimates the contributions of inputs in a local neighborhood, by combining it with the global sensitivity approach (GSA), which uses sampling to examine multiple values of an input. This allows the local influence estimates of LICON to be assessed in relation to estimates obtained from sampled input values. The effectiveness of the proposed approach is assessed via a comparative study of the explanation methods involved. Despite its computational complexity, which remains to be addressed in future work, the proposed approach is shown to generate reasonable estimates of input contributions.