The construction of a meaningful graph plays a crucial role in the emerging field of signal processing on graphs. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. We adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these graph signals.
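To make the modelling assumption concrete, the following is a minimal sketch of a factor analysis model with a Gaussian prior on the latent variables; the symbols ($x$, $\chi$, $h$, $\Lambda$, $\varepsilon$) and the particular choice of the representation matrix as Laplacian eigenvectors are illustrative assumptions, not necessarily the paper's exact notation.

```latex
% Generic factor analysis model with a Gaussian prior on the latent
% variables h (symbols are illustrative):
\[
  x = \chi h + \varepsilon, \qquad
  h \sim \mathcal{N}\!\left(0, \Lambda^{\dagger}\right), \qquad
  \varepsilon \sim \mathcal{N}\!\left(0, \sigma_{\varepsilon}^{2} I\right).
\]
% If \chi is taken to be the eigenvector matrix of a graph Laplacian L
% and \Lambda its eigenvalue matrix, the prior on h penalises the
% quadratic form x^\top L x, which measures how much the signal x
% varies across the edges of the graph:
\[
  x^{\top} L x = \tfrac{1}{2} \sum_{i,j} W_{ij}\,(x_i - x_j)^2 .
\]
```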
We show that the Gaussian prior leads to an efficient representation that favours the smoothness property of the graph signals, and propose an algorithm for learning graphs that enforces such a property. Experiments demonstrate that the proposed framework can efficiently infer meaningful graph topologies from the signal observations alone.
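As an illustration only, here is a minimal sketch of the kind of smoothness-based Laplacian learning the abstract describes, not the paper's own algorithm: it alternates between denoising the observations and fitting a valid Laplacian that makes the denoised signals smooth. The objective (data fidelity plus a trace smoothness term plus a Frobenius penalty), the function name `learn_laplacian`, and the parameters `alpha`, `beta`, and `n_iter` are assumptions made for this sketch; the constrained step is handled with CVXPY.

```python
import cvxpy as cp
import numpy as np

def learn_laplacian(X, alpha=1.0, beta=1.0, n_iter=20):
    """Sketch: alternately estimate denoised signals Y and a valid graph
    Laplacian L so that Y stays close to X and varies smoothly on the graph."""
    n, m = X.shape            # n nodes, m observed graph signals (columns of X)
    Y = X.copy()
    L_val = np.eye(n)
    for _ in range(n_iter):
        # L-step: convex program over the set of valid combinatorial Laplacians.
        L = cp.Variable((n, n), symmetric=True)
        smoothness = cp.trace(Y.T @ L @ Y)          # sum of y^T L y over signals
        objective = cp.Minimize(alpha * smoothness + beta * cp.sum_squares(L))
        constraints = [
            cp.trace(L) == n,                       # fix the scale (avoid L = 0)
            L @ np.ones(n) == 0,                    # rows sum to zero
            L - cp.diag(cp.diag(L)) <= 0,           # non-positive off-diagonals
        ]
        cp.Problem(objective, constraints).solve()
        L_val = L.value
        # Y-step: closed-form denoising given the current Laplacian,
        # minimising ||X - Y||_F^2 + alpha * tr(Y^T L Y) over Y.
        Y = np.linalg.solve(np.eye(n) + alpha * L_val, X)
    return L_val, Y
```

The data-fidelity term is constant in the L-step, so it is omitted there; the Y-step has the closed form shown because its objective is quadratic in Y, which keeps each iteration cheap while the L-step remains a small convex problem.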