Machine Learning (ML) has been one of the most actively researched domains of artificial intelligence over the past decades. The underlying idea in ML is to replace strictly defined, problem-specific algorithms with experience-based, continuously improving, machine-generated logic. The application field of ML systems is already wide, yet there is an ever-increasing need for further application possibilities and extended features, such as adaptivity or autonomous deployability. In my thesis work, I propose an automatic machine learning framework that targets the main challenges of future systems. I design a model applicable to a reinforcement learning task modelled as a Markov Decision Process (MDP), even in the case of a complex state space or a multi-actor environment. The system is capable of adapting to changes in the goal criteria during normal operation, exploits a community learning situation in which communicating learning agents share the same problem to solve, and performs automatic self-optimization by transforming experience data into a more descriptive, yet still compact form. I present an experimental implementation of the system and a theoretical showcase in which these abilities are demonstrated. Specifications, design choices, implementation details, and evaluation results are also presented, while future challenges and development ideas are discussed at the end.
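For reference, the MDP formalism mentioned above is conventionally the following tuple; this is the standard textbook definition, and the thesis's exact notation may differ:

```latex
% A Markov Decision Process is the tuple (S, A, P, R, \gamma):
% state set S, action set A, transition kernel P, reward R,
% and discount factor 0 <= \gamma < 1.
\[
  \mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R, \gamma),
  \qquad P(s' \mid s, a), \quad R(s, a).
\]
% The learning agent seeks a policy \pi that maximizes the
% expected discounted return:
\[
  \pi^{*} = \arg\max_{\pi} \;
  \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t) \right].
\]
```

In a multi-actor setting as described in the abstract, each communicating agent faces this same objective, which is what makes sharing experience across agents meaningful.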