If we want to implement an algorithm on real hardware, we have to face the fact that numbers can only be stored with finite precision, so we must decide which computer number format to use. This raises our first question: how can we decide whether a computer number format is sufficiently precise? Increasing the precision of a floating-point number means increasing the number of its fraction bits (and possibly its exponent bits as well), but performing the computation this way requires more resources. This leads to our second question: which computer number format is precise enough while requiring the fewest resources to compute with? In my thesis I am going to examine these questions and, hopefully, answer them. During my work I am going to test the MATLAB toolbox specially designed for testing arbitrary-precision floating-point calculations, and then illustrate how the toolbox works on a simple example: a FIR filter that uses a floating-point module written in Verilog. This design will be synthesized with different number formats, and the results will be examined.
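To make the precision trade-off concrete, the sketch below (an illustration of the general idea, not the thesis toolbox itself; the filter taps and input signal are arbitrary assumptions) runs the same direct-form FIR filter in several IEEE floating-point formats and measures how far each result drifts from a double-precision reference:

```python
import numpy as np

def fir(x, h, dtype):
    """Direct-form FIR filter, with every operation rounded to `dtype`."""
    x = x.astype(dtype)
    h = h.astype(dtype)
    y = np.zeros(len(x), dtype=dtype)
    for n in range(len(x)):
        for k in range(len(h)):
            if n - k >= 0:
                # Cast after each multiply-accumulate so the accumulator
                # is also limited to the chosen precision.
                y[n] = dtype(y[n] + h[k] * x[n - k])
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                # illustrative test signal
h = np.array([0.05, 0.2, 0.5, 0.2, 0.05])   # illustrative low-pass taps

ref = fir(x, h, np.float64)                 # double-precision reference
for dt in (np.float16, np.float32):
    err = np.max(np.abs(fir(x, h, dt) - ref))
    print(dt.__name__, "max abs error:", err)
```

Running this shows the half-precision result deviating from the reference by several orders of magnitude more than the single-precision one, which is exactly the kind of accuracy-versus-cost comparison the thesis sets out to automate.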