In my thesis I compared the most commonly used neural network architectures in terms of their operating principles and capabilities. I compared the learning speed and memory usage of these networks on a set of benchmark problems: the Mackey-Glass, Hénon map, and Ikeda map chaotic time series, and the sunspot time series. I forecast each time series and, based on the results, compared the overfitting and underfitting tendencies of the networks; I also measured the training times. First I presented several static neural network types, then I described their dynamic extensions: FIR-MLP, FIR-RBF, FIR-CMAC, and NOE-MLP. I also presented the most important dynamic model classes, namely NFIR, NARX, and NOE, and included NFIR- and NOE-class networks in my measurements. I presented the BPTT and RTRL algorithms, which can train networks containing feedback, and I introduced my own FIR-RBF implementation. Finally, I summarized the training times and mean square errors of the networks.
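To make the benchmark problems concrete, two of the named series can be generated with a few lines of code. This is a minimal sketch assuming the classic chaotic parameter settings (Hénon: a = 1.4, b = 0.3; Mackey-Glass: β = 0.2, γ = 0.1, p = 10, τ = 17) and a simple Euler discretisation with unit step for Mackey-Glass; the thesis may have used different parameters or a finer integration scheme.

```python
def henon_map(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Iterate the Hénon map: x_{k+1} = 1 - a*x_k^2 + y_k, y_{k+1} = b*x_k.

    Returns the x-coordinate series, which is what is typically forecast.
    Parameter values are the classic chaotic ones, assumed here.
    """
    xs = []
    x, y = x0, y0
    for _ in range(n):
        # Simultaneous update: the right-hand side uses the old (x, y).
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return xs


def mackey_glass(n, beta=0.2, gamma=0.1, p=10, tau=17, x0=1.2):
    """Euler discretisation (dt = 1, a common benchmark simplification) of
    dx/dt = beta*x(t-tau) / (1 + x(t-tau)^p) - gamma*x(t).

    The delay buffer is initialised with the constant value x0 (assumption).
    """
    xs = [x0] * (tau + 1)  # history needed for the delayed term
    for _ in range(n):
        x_tau = xs[-tau - 1]  # delayed state x(t - tau)
        xs.append(xs[-1] + beta * x_tau / (1.0 + x_tau ** p) - gamma * xs[-1])
    return xs[tau + 1:]  # drop the artificial initial history
```

Both generators return plain Python lists of length `n`, which can then be split into input windows and targets for one-step-ahead prediction experiments.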