Underwater object recognition is a key technology for underwater exploration. However, underwater imaging often suffers severe quality degradation combined with large variation in object scale. This paper proposes MENet, a salient object detection network for scale-variant objects in underwater scenes that fuses information from multi-scale effective receptive fields (ERFs). Specifically, we design an ERFServer module that dynamically adjusts the ERF during the fusion phase, enhancing multi-scale features through interactions across multi-resolution semantics. With refined ERFs and resolution-specific features, the network exploits both global and local information more comprehensively and adapts better to the scale variation of underwater objects. Experiments on the underwater image datasets UFO-120 and MAS3K demonstrate that the proposed method outperforms other state-of-the-art underwater image methods.
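To illustrate the general idea of fusing features from multiple effective receptive fields with dynamic re-weighting, the sketch below uses parallel dilated convolutions (one branch per ERF size) gated by a globally pooled attention vector. This is a minimal PyTorch illustration under our own assumptions, not the paper's actual ERFServer; the class name, branch dilations, and gating scheme are hypothetical.

```python
import torch
import torch.nn as nn


class MultiScaleERFFusion(nn.Module):
    """Hypothetical sketch of multi-scale ERF fusion (not the paper's
    ERFServer): parallel dilated 3x3 convolutions produce branches with
    different effective receptive fields, and a gate computed from
    globally pooled context re-weights the branches before fusion."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per dilation; padding=d keeps spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # Global context -> one softmax weight per branch
        # (a simple stand-in for "dynamically adjusting" the ERF).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)                        # (B, K, 1, 1) branch weights
        feats = [b(x) for b in self.branches]   # K tensors of (B, C, H, W)
        out = sum(w[:, k:k + 1] * f for k, f in enumerate(feats))
        return self.fuse(out)                   # same shape as input
```

A module like this could sit at each stage of a fusion decoder, letting small-dilation branches dominate for small objects and large-dilation branches for large ones.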