Self-organizing systems (SOS) can adapt to perform complex tasks in unforeseen situations. Previous work has introduced field-based approaches and rule-based social structuring that allow individual agents both to comprehend task situations and to exploit rule-based social relations among agents, so that tasks are accomplished without a centralized controller. Although task fields and social rules can be predefined for relatively simple task situations, acquiring a priori knowledge of these fields and rules may not be feasible as task complexity increases and the task environment changes. In this paper, a multiagent reinforcement learning (RL) based model is proposed as a design approach to the rule-generation problem for complex SOS tasks. A deep multiagent RL algorithm was devised to train SOS agents to acquire knowledge of the task field and social rules. The learning stability, functional differentiation, and robustness of this approach were investigated with respect to changing team sizes and task variations. Computer simulation studies of a box-pushing problem show that there is an optimal range of team sizes that achieves good learning stability; that agents in a team learn to differentiate from one another as team sizes and box dimensions change; and that the learned knowledge is more robust to external noise than to changes in task constraints.
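To make the multiagent learning setup concrete, the sketch below shows independent learners acquiring a shared box-pushing behavior without a centralized controller. This is not the paper's deep RL algorithm: it is a minimal tabular Q-learning toy on a one-dimensional track, and every detail (track length, reward, hyperparameters, two agents) is an illustrative assumption. Each agent keeps its own Q-table; the box only advances when the agents' net push is nonzero, so the cooperative "rule" (push in the same direction) must be learned rather than predefined.

```python
import random

random.seed(0)

# Toy 1-D box-pushing task (illustrative assumption, not the paper's setup):
# the box sits at a position 0..TRACK_LEN-1; the goal is the rightmost cell.
TRACK_LEN = 5
GOAL = TRACK_LEN - 1
ACTIONS = (-1, +1)            # push left, push right
N_AGENTS = 2
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

# Independent learners: one Q-table per agent, indexed [state][action].
Q = [[[0.0, 0.0] for _ in range(TRACK_LEN)] for _ in range(N_AGENTS)]

def choose(agent, pos):
    """Epsilon-greedy action index, breaking ties randomly."""
    q = Q[agent][pos]
    if random.random() < EPS or q[0] == q[1]:
        return random.randrange(2)
    return 0 if q[0] > q[1] else 1

for episode in range(2000):
    pos = 0
    for _ in range(20):                        # episode step limit
        idx = [choose(i, pos) for i in range(N_AGENTS)]
        net = sum(ACTIONS[a] for a in idx)     # box moves with the net push
        new_pos = min(GOAL, max(0, pos + (1 if net > 0 else -1 if net < 0 else 0)))
        done = new_pos == GOAL
        reward = 1.0 if done else 0.0          # shared team reward at the goal
        for i in range(N_AGENTS):              # each agent updates its own table
            target = reward + (0.0 if done else GAMMA * max(Q[i][new_pos]))
            Q[i][pos][idx[i]] += ALPHA * (target - Q[i][pos][idx[i]])
        pos = new_pos
        if done:
            break

# After training, both agents prefer pushing right from the start state.
print([Q[i][0][1] > Q[i][0][0] for i in range(N_AGENTS)])
```

Because the shared reward arrives only when the box reaches the goal, each agent's greedy policy converges to "push right" in every state, i.e., a coordination rule emerges from independent learning. The paper's contribution is a deep multiagent RL version of this idea, with function approximation in place of the tables and with the learned behavior analyzed for stability, differentiation, and robustness.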