Today, organizations are beginning to realize the importance of using as much data as possible to support strategic decision-making. Finding relevant patterns in such enormous amounts of data requires automatic machine learning algorithms; among them, a popular option is the mixture-of-experts model, which represents the data using a set of local experts. The main obstacle to applying typical learning algorithms to Big Data is handling such large datasets in primary memory. In this paper, we propose a methodology to learn a mixture-of-experts in a distributed fashion using the PETUUM platform. In particular, we propose to learn the parameters of the mixture-of-experts by adapting standard stochastic gradient descent to a distributed setting. This methodology is applied to people detection on standard real-world datasets, evaluated with accuracy and precision metrics, among others. The results show consistent performance of the mixture-of-experts models, where the best number of experts varies according to the particular dataset. We also demonstrate the advantages of the distributed approach by showing a near-linear decrease in average training time as the number of processors grows. In future work, we expect to apply this methodology to mixtures-of-experts with embedded variable selection.
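As an illustration of the learning rule the abstract refers to, the following is a minimal single-machine sketch of stochastic gradient descent for a mixture of linear experts with a softmax gate, on a synthetic piecewise-linear task. It is only a sketch under stated assumptions: the distributed PETUUM execution, the people-detection datasets, and the paper's exact model are not reproduced, and all names, dimensions, and learning rates here are illustrative.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D array of gate scores
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class MixtureOfExperts:
    """Mixture of linear experts with a softmax gating network.

    A toy single-machine sketch trained by per-sample SGD; the paper
    distributes these updates across workers, which is not shown here.
    """

    def __init__(self, n_experts, dim, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_experts, dim))  # expert weights
        self.V = rng.normal(scale=0.1, size=(n_experts, dim))  # gate weights
        self.lr = lr

    def predict(self, x):
        g = softmax(self.V @ x)   # gating probabilities, one per expert
        f = self.W @ x            # per-expert predictions
        return g @ f, g, f        # gated combination, gate, expert outputs

    def sgd_step(self, x, y):
        y_hat, g, f = self.predict(x)
        err = y_hat - y
        # gradients of the squared error 0.5*err**2:
        #   d/dW_i = err * g_i * x
        #   d/dV_i = err * g_i * (f_i - y_hat) * x   (softmax chain rule)
        self.W -= self.lr * err * (g[:, None] * x[None, :])
        self.V -= self.lr * err * ((g * (f - y_hat))[:, None] * x[None, :])
        return 0.5 * err ** 2

# synthetic data with two linear regimes selected by the sign of x[0]
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = np.where(X[:, 0] > 0, 3.0 * X[:, 1], 1.0 * X[:, 1])

moe = MixtureOfExperts(n_experts=2, dim=2)
for epoch in range(20):
    for xi, yi in zip(X, y):
        moe.sgd_step(xi, yi)

mse = np.mean([(moe.predict(xi)[0] - yi) ** 2 for xi, yi in zip(X, y)])
print(f"final training MSE: {mse:.4f}")
```

In a data-parallel setting of the kind the abstract describes, each worker would run `sgd_step` on its own shard of the data and periodically synchronize `W` and `V` through a parameter server, which is what makes the training time decrease with the number of processors.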