DeepPM: A Deep Learning-based Profit Maximization Approach in Social Networks
The problem of Profit Maximization asks to choose a limited number of influential users from a given social network such that the initial activation of these users maximizes the profit earned at the end of the diffusion process. This problem has a direct impact on viral marketing in social networks. Over the past decade, several traditional (i.e., non-learning-based) methodologies, including approximation algorithms and heuristics, have been developed, and many of them produce promising results. All these methods require an information diffusion model as input. However, assuming any particular diffusion model may be unrealistic, as real-world diffusion is far more complex and need not follow the rules of any single model. In this paper, we propose a deep learning-based framework to solve the profit maximization problem. Our model builds a latent representation of seed sets and is able to learn diverse information diffusion patterns. We also design a novel objective function that can be optimized effectively using the proposed learning-based approach. The proposed model has been evaluated on real-world datasets, and the results are reported. We compare the effectiveness of the proposed approach with many existing methods and observe that the seed set chosen by the proposed learning-based approach leads to more profit than those chosen by existing methods. The full implementation and simulation code are available at: https://github.com/PoonamSharma-PY/DeepPM.
💡 Research Summary
The paper addresses the profit maximization problem in social networks, where a marketer must select a limited set of seed users under a budget constraint so that the net profit—benefits earned from influenced users minus the cost of seeding—is maximized after the diffusion process completes. Traditional approaches rely on a predefined diffusion model (e.g., Independent Cascade or Linear Threshold) and employ approximation algorithms, greedy heuristics, or combinatorial optimization techniques. Such methods suffer when real-world diffusion deviates from the assumed model, limiting their practical applicability.
To overcome this limitation, the authors propose DeepPM, a deep learning framework that does not require an explicit diffusion model. DeepPM consists of three tightly coupled components: a teacher model, a student model, and an auto‑encoder. The teacher model is a high‑fidelity diffusion engine that runs many Monte‑Carlo simulations of the Independent Cascade process for a given seed mask x. For each simulation it records a binary activation vector y, which serves as a ground‑truth label. The student model is a lightweight, fully differentiable Graph Convolutional Network (GCN) that takes the normalized adjacency matrix Â and the seed mask x as inputs and predicts, for every node i, an activation probability \hat p_i(x). The GCN performs two rounds of neighborhood aggregation (h_1 = ReLU(W_1 Â h_0), \hat p = σ(W_2 Â h_1)), thereby capturing two‑hop influence while keeping computational cost linear in the number of edges. Training minimizes the average binary cross‑entropy between the teacher's binary outcomes and the student's probabilistic predictions, ensuring that \hat p approximates the true expected activation probability p(x).
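The teacher/student pairing above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the uniform edge probability `p`, the hidden width, and the random weight initialization are all illustrative assumptions, and a real student would be trained by backpropagating the cross‑entropy loss rather than using fixed weights.

```python
import numpy as np

def ic_simulate(adj, seeds, p=0.1, rng=None):
    """One Monte-Carlo run of the Independent Cascade process (the teacher).
    adj: boolean directed adjacency matrix (n x n); seeds: boolean seed mask.
    Returns a binary activation vector y, the teacher's ground-truth label."""
    rng = rng or np.random.default_rng()
    n = adj.shape[0]
    active = seeds.copy()
    frontier = seeds.copy()
    while frontier.any():
        newly = np.zeros(n, dtype=bool)
        for u in np.flatnonzero(frontier):
            for v in np.flatnonzero(adj[u]):
                # each newly active node gets one chance to activate a neighbor
                if not active[v] and rng.random() < p:
                    newly[v] = True
        active |= newly
        frontier = newly
    return active.astype(float)

def normalize_adj(adj):
    """Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = adj.astype(float) + np.eye(adj.shape[0])
    d = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d[:, None] * d[None, :]

def gcn_student(a_hat, x, w1, w2):
    """Two-layer GCN forward pass (the student): two rounds of neighborhood
    aggregation give a two-hop activation-probability estimate p_hat."""
    h0 = x[:, None]                       # seed mask as the node feature
    h1 = np.maximum(a_hat @ h0 @ w1, 0)   # ReLU aggregation layer
    logits = a_hat @ h1 @ w2
    return 1.0 / (1.0 + np.exp(-logits[:, 0]))  # sigmoid -> (0, 1)
```

In training, many `(x, ic_simulate(adj, x))` pairs would supply the labels against which the student's `gcn_student` output is scored with binary cross‑entropy.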
The auto‑encoder learns a low‑dimensional latent representation z of seed masks. The encoder maps a binary seed vector x to z, and the decoder reconstructs a soft seed vector \tilde x = σ(dec(z)) ∈ (0,1)^|V|. Reconstruction loss is also binary cross‑entropy, encouraging \tilde x to resemble realistic seed patterns observed during training. By jointly optimizing the GCN (student) and the auto‑encoder, the decoder becomes a differentiable generator of plausible seed configurations, enabling gradient‑based profit optimization.
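The auto‑encoder described above can be sketched in a few lines. This is a simplified, linear single‑layer version under stated assumptions: the latent dimension `d`, the Gaussian weight initialization, and the absence of deeper layers are illustrative choices, and the weights shown here are untrained (the paper optimizes them jointly with the GCN).

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

class SeedAutoencoder:
    """Linear encoder/decoder pair: a binary seed mask x in {0,1}^n is
    compressed to a latent code z, and the decoder reconstructs a soft
    seed vector x_tilde = sigmoid(dec(z)) in (0,1)^n."""
    def __init__(self, n, d, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w_enc = rng.normal(0.0, 0.1, size=(d, n))
        self.w_dec = rng.normal(0.0, 0.1, size=(n, d))

    def encode(self, x):
        return self.w_enc @ x

    def decode(self, z):
        return sigmoid(self.w_dec @ z)

def bce(x, x_tilde, eps=1e-9):
    """Binary cross-entropy reconstruction loss between the original
    seed mask and its soft reconstruction."""
    return -np.mean(x * np.log(x_tilde + eps)
                    + (1.0 - x) * np.log(1.0 - x_tilde + eps))
```

Because `decode` is differentiable in `z`, a trained decoder can serve as the generator of plausible seed configurations through which profit gradients flow, as the summary notes.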
The profit objective is defined as Φ̂(x) = bᵀ \hat p(x) − cᵀ x − μ·(…), where b is the vector of per-node benefits, c the vector of per-node seeding costs, and μ the weight of a regularization term.
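The first two terms of the objective, the ones fully specified above, can be evaluated directly. The sketch below is a hedged illustration: the μ-weighted regularizer is omitted because its definition is cut off in the summary, and the benefit and cost vectors are made-up example values.

```python
import numpy as np

def surrogate_profit(p_hat, x, b, c):
    """Evaluate b^T p_hat - c^T x: expected benefit over the student's
    predicted activation probabilities minus the cost of the chosen seeds.
    (The mu-weighted regularization term is omitted here.)"""
    return float(b @ p_hat - c @ x)
```

Because `p_hat` comes from a differentiable student network and `x` from a differentiable decoder, this surrogate can be maximized by gradient ascent in the auto‑encoder's latent space.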