Servers have always been an essential part of information and communication technology (ICT) systems. Over the past decades the explosive growth of technology has followed Moore's law: both the traffic generated by user requests and the performance of the hardware serving it have increased exponentially. The resulting power consumption has caused serious problems: operating costs have risen sharply, and the generated heat has harmful effects on the environment.
Usually the greater part of a computer's power consumption is due to the processor. Fortunately, manufacturers have developed methods to reduce the CPU voltage and frequency dynamically during operation when traffic is low. Examples of such technologies are Enhanced Intel SpeedStep Technology (EIST) by Intel and the Cool'n'Quiet technology by AMD.
In this thesis we studied mechanisms similar to the ones mentioned above by modeling two-server systems as vacation queues. First we examined plain $M/M/2$ queues to establish their energy consumption, then we repeated the analysis with simple vacations and with working vacations. In the working-vacation case we also compared synchronous and asynchronous vacations.
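As a baseline for such a comparison, the mean power of a plain $M/M/2$ system can be computed in closed form. The sketch below assumes a simple two-level power model (a busy server draws $P_{busy}$, an idle one $P_{idle}$); the power values are illustrative assumptions, not figures from the thesis. By Little's law applied to the servers, the expected number of busy servers equals the offered load $\lambda/\mu$.

```python
# Hypothetical sketch: mean power of a plain M/M/2 system under an
# assumed two-level power model (p_busy / p_idle are illustrative values).

def mm2_mean_power(lam, mu, p_busy, p_idle):
    """Mean power draw of a two-server M/M/2 system.

    lam    : arrival rate (jobs/s); stability requires lam < 2*mu
    mu     : service rate of one server (jobs/s)
    p_busy : power draw of a serving server (W)
    p_idle : power draw of an idle server (W)
    """
    if lam >= 2 * mu:
        raise ValueError("unstable system: lam must be < 2*mu")
    # Little's law on the servers: each job in service occupies one
    # server for 1/mu on average, so E[busy servers] = lam/mu.
    busy = lam / mu
    return busy * p_busy + (2 - busy) * p_idle

# Under light traffic both servers sit mostly idle, so without
# vacations the mean power stays close to 2 * p_idle:
print(mm2_mean_power(lam=0.5, mu=1.0, p_busy=100.0, p_idle=60.0))  # 140.0 W
```

This baseline makes the motivation for vacations concrete: even an idle server draws power, so the only way to cut the idle term is to put servers into a lower-power (vacation) state.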
Ultimately we found that under low traffic the energy consumption can be greatly reduced by applying vacations. We also found that the lower the latency of the state changes, the better the performance. Comparing synchronous and asynchronous vacations, we saw that the asynchronous case performs slightly better, but the difference is small.