Analog-digital converters suffer from several distortions caused by nonlinearity, such as integral and differential nonlinearity, and in practice the distortion due to the finite resolution of the converter can also be regarded as such a nonlinearity. An approach to analog-digital conversion is introduced to reduce these errors. With conventional conversion, the converter output tells us only that the analog signal lay within a given interval at the sampling instant, but not where within that interval. Although this uncertainty cannot be eliminated entirely, it can be reduced appreciably. By heavily oversampling the input of the converter, the instant at which the analog signal crosses a threshold level can be determined with relatively high accuracy. The code transition levels of the converter can be determined, and once these data are available, the input value at the instant of a crossing can be measured more accurately than by conventional means. This also means that the quantization error of the quantizer can be reduced. To ensure an adequate number of level crossings for low-amplitude or slowly varying signals, a dither is added. According to simulations, after digital signal processing the reconstructed signal follows the analog input more accurately than signals digitized conventionally. Theoretical computations show that the nonlinearity of the converter can be reduced considerably, so even currently used A/D converters could be linearized with this method.
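The idea of recovering extra information from code transitions can be sketched numerically. The following is a minimal illustration, not the paper's actual algorithm: it assumes an ideal 4-bit uniform quantizer with known transition levels, a heavily oversampled sine input, and a small uniform dither (its amplitude is an arbitrary choice for the sketch). Whenever the output code changes, the input is taken to equal the corresponding transition level, and the waveform is reconstructed by interpolating between these crossing instants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4-bit uniform quantizer on [-1, 1); all parameter values
# here are assumptions for the sketch, not values from the paper.
BITS = 4
LEVELS = 2 ** BITS
LSB = 2.0 / LEVELS

def quantize(x):
    """Return the integer output codes of the uniform quantizer."""
    return np.clip(np.floor((x + 1.0) / LSB).astype(int), 0, LEVELS - 1)

# Heavily oversampled, slowly varying input plus a small dither that
# forces extra code transitions (the role dither plays in the method).
n = 4096
t = np.arange(n) / n
signal = 0.8 * np.sin(2 * np.pi * 2 * t)
dither = (rng.random(n) - 0.5) * 0.1 * LSB
codes = quantize(signal + dither)

# Whenever the output code changes, the input is known to sit at a code
# transition level of the converter (up to dither and timing error).
transitions = np.nonzero(np.diff(codes))[0] + 1
crossing_levels = -1.0 + LSB * np.maximum(codes[transitions],
                                          codes[transitions - 1])

# Reconstruct between crossing instants and compare the RMS error with
# conventional quantization (mid-code reconstruction).
recon = np.interp(np.arange(n), transitions, crossing_levels)
conventional = -1.0 + LSB * (codes + 0.5)
err_new = np.sqrt(np.mean((recon - signal) ** 2))
err_old = np.sqrt(np.mean((conventional - signal) ** 2))
print(err_new, err_old)
```

For this smooth, heavily oversampled input the crossing-based reconstruction has a noticeably lower RMS error than the conventional mid-code output, which is the qualitative behavior the abstract describes; the actual method additionally calibrates the real (nonideal) transition levels, which this ideal-quantizer sketch does not model.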