The backend of the iTrack GPS tracking system has a monolithic structure. The system handles many simultaneous connections with clients and targets, and it must also perform resource-intensive batch jobs in the background. The number of connections is increasing rapidly, so the load on the backend keeps growing. Because of its monolithic architecture, the iTrack backend cannot scale as well as the increasing load demands; although the system can still handle the current load, it needs to be upgraded to become more scalable in preparation for the future. During the upgrade, I also have to keep the maintainability of the system's code in mind.
First, I will describe the architecture that the iTrack backend had at the beginning of my work, along with the reasons behind this major upgrade. Next, I will explain the plan that started this work, followed by the steps of the refactoring, the problems I faced along the way, the solutions to these problems, and how they altered the original plan.
I will describe the necessary preparations, followed by the integration of the distributed database cache. I will then cover the automation of the system's infrastructure, the creation and results of a cluster of two, and later three, instances, and the cluster's deployment to the live (stable) environment. I will also explain the implementation of the communication between the members of the cluster.
Lastly, I will summarize the whole process and its results, followed by the possible next steps to take.