Dynamic Weight Optimization for Double Linear Policy: A Stochastic Model Predictive Control Approach


The Double Linear Policy (DLP) framework guarantees the Robust Positive Expectation (RPE) property under optimized constant-weight designs or admissible prespecified time-varying policies. However, the sequential optimization of these time-varying weights remains an open challenge. To address this gap, we propose a Stochastic Model Predictive Control (SMPC) framework. We formulate weight selection as a receding-horizon optimal control problem that explicitly maximizes risk-adjusted returns while enforcing survivability and predicted positive expectation constraints. Notably, an analytical gradient is derived for the non-convex objective function, enabling efficient optimization via the L-BFGS-B algorithm. Empirical results demonstrate that this dynamic, closed-loop approach improves risk-adjusted performance and drawdown control relative to constant-weight and prescribed time-varying DLP baselines.
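To make the receding-horizon step concrete, the sketch below assumes a simple mean-variance surrogate objective (predicted expected gain over the horizon minus a quadratic risk penalty) with box bounds on the weights standing in for the survivability constraint; the predicted-positive-expectation constraint is omitted for brevity. The function name `smpc_step`, the penalty coefficient `lambda_risk`, and the bound `w_max` are illustrative assumptions, not the paper's exact formulation. The analytical gradient of this surrogate is supplied to SciPy's L-BFGS-B routine, mirroring the gradient-based optimization described in the abstract.

```python
# Hedged sketch of one receding-horizon SMPC step for the symmetric DLP
# (w_L = w_S = w, alpha = 1/2). The mean-variance surrogate objective,
# lambda_risk, and w_max are illustrative assumptions, not the paper's
# exact formulation.
import numpy as np
from scipy.optimize import minimize

def smpc_step(v_long, v_short, mu, sigma2, w_max=0.5, lambda_risk=1.0):
    """Return the weight to apply now, given horizon forecasts mu, sigma2."""
    mu = np.asarray(mu, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    H = mu.size

    def objective(w):
        # Predicted expected terminal values of the long and short accounts,
        # assuming |w * mu| < 1 so the growth factors stay positive.
        fl = 1.0 + w * mu          # per-period growth factors, long side
        fs = 1.0 - w * mu          # per-period growth factors, short side
        pl, ps = np.prod(fl), np.prod(fs)
        exp_gain = v_long * pl + v_short * ps - (v_long + v_short)
        risk = lambda_risk * np.sum(w**2 * sigma2)   # variance proxy
        J = -exp_gain + risk
        # Analytical gradient of J with respect to each w_j.
        d_gain = v_long * mu * pl / fl - v_short * mu * ps / fs
        grad = -d_gain + 2.0 * lambda_risk * w * sigma2
        return J, grad

    res = minimize(objective, x0=np.full(H, 0.1), jac=True,
                   method="L-BFGS-B",
                   bounds=[(0.0, w_max)] * H)   # box constraint as a survivability proxy
    return res.x[0]   # receding horizon: apply only the first weight
```

In a closed loop, this step would be repeated each period with refreshed rolling-window estimates of the return moments, which is what gives the approach its adaptive, feedback character.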


💡 Research Summary

The paper tackles a fundamental open problem in the Double Linear Policy (DLP) literature: how to generate a sequence of trading weights that adapts in real time while preserving the policy’s Robust Positive Expectation (RPE) and survivability guarantees. Existing works either fix the weight at a constant optimal value or prescribe a deterministic time‑varying schedule, both of which lack a principled optimization backbone and may fail under changing market conditions.

To fill this gap, the authors introduce a Stochastic Model Predictive Control (SMPC) framework. The market is modeled by a risky asset price S(k) and per‑period return X(k) bounded in a known interval. Future returns are assumed conditionally independent given the information filtration, with conditional means µ_i(k) and variances σ²_i(k) that are estimated from a rolling window. The DLP is symmetrized by setting the long and short weights equal (w_L = w_S = w) and splitting the initial capital equally between the long and short accounts (α = ½). The state vector z_k =
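Under this model, the per-period dynamics of the two accounts and the rolling-window moment estimates are straightforward to state. The sketch below is a minimal illustration under the stated assumptions; the function names, the return definition X(k) = S(k+1)/S(k) − 1, and the window length are assumptions for exposition, not the paper's exact specification.

```python
# Hedged sketch of the symmetric DLP account dynamics (w_L = w_S = w,
# alpha = 1/2) together with rolling-window estimates of the conditional
# mean and variance of the per-period return X(k). Names and the window
# length are illustrative assumptions.
import numpy as np

def dlp_step(v_long, v_short, w, x):
    """One period of the double linear policy with common weight w and return x."""
    return v_long * (1.0 + w * x), v_short * (1.0 - w * x)

def rolling_estimates(prices, window=20):
    """Per-period returns X(k) = S(k+1)/S(k) - 1 and their rolling mean/variance."""
    x = np.diff(prices) / prices[:-1]
    mu_hat = np.array([x[max(0, k - window):k].mean() for k in range(1, len(x) + 1)])
    var_hat = np.array([x[max(0, k - window):k].var(ddof=1) if k > 1 else 0.0
                        for k in range(1, len(x) + 1)])
    return x, mu_hat, var_hat
```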

