Cooperative Proxy Servers Architecture for VoD to Achieve High QoS with Reduced Transmission Time and Cost

  • This paper proposes a novel Video on Demand (VoD) architecture and an efficient load-sharing algorithm to achieve Quality of Service (QoS). The scheme reduces the transmission cost from the Centralized Multimedia Server (CMS) to the Proxy Servers (PS) by sharing videos among the proxy servers of a Local Proxy Servers Group (LPSG) and among neighboring LPSGs, which are interconnected in a ring fashion. The result is a very low request rejection ratio, reduced transmission time and cost, a lighter load on the CMS, and high QoS for users. Simulation results show acceptable initial startup latency, reduced transmission cost and time, and effective load sharing among the proxy servers, among the LPSGs, and between the CMS and the PSs.

💡 Research Summary

The paper proposes a novel architecture for Video‑on‑Demand (VoD) systems that aims to improve Quality of Service (QoS) while reducing transmission cost and latency. The core concept is to organize proxy servers (PS) into Local Proxy Server Groups (LPSG). Each LPSG is managed by a Tracker (TR) that maintains a database of which video blocks are cached on each PS and how much free space remains. LPSGs are interconnected in a ring topology through their trackers, allowing neighboring groups to cooperate when a requested video is not locally available.

Video popularity is modeled using a Zipf‑like distribution. Based on popularity, the system performs partial replication: more popular videos have a larger number of blocks stored across multiple PSs, while less popular videos are cached only minimally. This “partial replication” strategy maximizes cache hit ratio without requiring each PS to store the entire video library.
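The popularity-driven allocation can be sketched as follows. This is a minimal illustration, not the paper's exact formula: the skew parameter, block counts, and allocation rule (cache blocks proportional to Zipf probability, capped at the full video length) are assumptions for the example.

```python
def zipf_popularity(n_videos: int, skew: float = 0.8) -> list[float]:
    """Zipf-like access probabilities for videos ranked by popularity (rank 1 = most popular)."""
    weights = [1.0 / (rank ** skew) for rank in range(1, n_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blocks_to_cache(popularity: list[float], blocks_per_video: int,
                    cache_blocks: int) -> list[int]:
    """Partial replication: give each video cache blocks in proportion to its
    popularity, capped at the video's full length in blocks."""
    return [min(blocks_per_video, round(p * cache_blocks)) for p in popularity]

probs = zipf_popularity(10)
alloc = blocks_to_cache(probs, blocks_per_video=100, cache_blocks=400)
print(alloc)  # most popular videos receive more cached blocks
```

With this rule the top-ranked video is fully cached while tail videos keep only a few blocks, which is the behavior the summary describes.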

When a client request arrives at a PS, the following hierarchical lookup is performed:

  1. If the requested block is present on the local PS, it streams immediately (minimal startup delay).
  2. If not, the local tracker checks other PSs within the same LPSG. If a neighbor PS (NBR) holds the block, the tracker initiates streaming from that neighbor to the requesting PS, incurring a small additional delay.
  3. If the block resides on a non‑neighbor PS within the same LPSG, the tracker computes an optimal path and streams the block accordingly; delay and cost are higher but still acceptable.
  4. If the block is absent from the entire LPSG, the request is forwarded to the tracker of the adjacent LPSG (ring neighbor). The neighboring tracker searches its own group; if found, the block is streamed across the two groups via the optimal path.
  5. Only when the block cannot be found in any LPSG does the system retrieve it from the Central Multimedia Server (CMS). This final step incurs the highest latency and transmission cost but occurs rarely.
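The five-step lookup above can be sketched as a cascade. The class names, per-hop costs, and single-ring-neighbor search below are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProxyServer:
    name: str
    blocks: set = field(default_factory=set)

@dataclass
class Tracker:
    """Per-LPSG metadata manager: knows which PS caches which block."""
    servers: list
    neighbor: Optional["Tracker"] = None  # next tracker on the ring

    def locate(self, block):
        for ps in self.servers:
            if block in ps.blocks:
                return ps
        return None

# Illustrative relative costs per source (not the paper's calibrated values).
COST = {"local": 1, "lpsg": 3, "neighbor_lpsg": 6, "cms": 10}

def serve(block, local_ps: ProxyServer, tracker: Tracker):
    """Return (source, cost) following the hierarchical lookup order."""
    if block in local_ps.blocks:                  # step 1: local hit
        return local_ps.name, COST["local"]
    hit = tracker.locate(block)                   # steps 2-3: same LPSG
    if hit is not None:
        return hit.name, COST["lpsg"]
    if tracker.neighbor is not None:              # step 4: ring-neighbor LPSG
        hit = tracker.neighbor.locate(block)
        if hit is not None:
            return hit.name, COST["neighbor_lpsg"]
    return "CMS", COST["cms"]                     # step 5: central server

ps_a = ProxyServer("PS-A", {"v1:b0"})
ps_b = ProxyServer("PS-B", {"v2:b0"})
tracker = Tracker(servers=[ps_a, ps_b])
print(serve("v1:b0", ps_a, tracker))  # ('PS-A', 1) local hit
print(serve("v2:b0", ps_a, tracker))  # ('PS-B', 3) same-LPSG hit
print(serve("v9:b0", ps_a, tracker))  # ('CMS', 10) fallback to the CMS
```

The cost table encodes the ordering the paper relies on: each fallback level is strictly more expensive, so serving as high in the cascade as possible minimizes both delay and cost.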

Mathematically, the authors define the length \(S_i\) of video \(i\), its cached portion \(W_i\), and the bandwidth \(b_i\) between the CMS and the PS. The transmission cost \(TCOST(S_i - W_i, CMS)\) of fetching the uncached remainder from the CMS and the user waiting time \(W_{t,i}(PS_q)\) at proxy \(PS_q\) are expressed as non-linear functions of these parameters. The optimization problem seeks to minimize both cost and waiting time subject to constraints on the total cache capacity \(M \cdot B\) and positivity of the cached portions. The tracker implements "perfect hashing" to locate cached blocks quickly and uses a shortest-path algorithm to select the optimal streaming route.
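As a back-of-envelope illustration of these quantities (assuming, for simplicity, that fetch time is linear in the uncached remainder, which the paper's non-linear cost functions refine):

```python
def remainder_fetch_time(S_i, W_i, b_i):
    """Time to pull the uncached portion (S_i - W_i) of video i from the
    CMS over a link of bandwidth b_i (sizes and rates in consistent units)."""
    return max(S_i - W_i, 0.0) / b_i

def total_cache_ok(W, M, B):
    """Capacity constraint: non-negative cached portions fitting in
    M proxies of B blocks each."""
    return all(w >= 0 for w in W) and sum(W) <= M * B

# Example: a 1120 MB video with 40% cached, fetched at 8 MB/s from the CMS.
print(remainder_fetch_time(S_i=1120, W_i=448, b_i=8))  # 84.0 seconds
```

Increasing \(W_i\) for popular videos directly shrinks this fetch time, which is the trade-off the optimization balances against the shared cache budget.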

The simulation environment consists of one CMS, six trackers, and six PSs per tracker, all linked in a ring. Transmission delays are set to realistic values (e.g., 100 ms PS‑client, 1.2 s CMS‑PS). Video sizes range from 280 MB to 1.12 GB, proportionally cached according to popularity. Results show that 60 %–80 % of video blocks are served from LPSG or neighboring LPSG, while only 20 %–40 % require CMS access. Consequently, average transmission cost drops by 30 %–40 % compared with a single‑proxy baseline, and startup latency is significantly reduced. The hit ratio improves markedly, confirming that cooperative caching effectively offloads the central server.
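The latency benefit of the reported hit-ratio split can be estimated with a simple weighted average. The 100 ms PS-client and 1.2 s CMS-PS delays come from the summary above; the 0.4 s intra-/inter-LPSG delay and the example probabilities are assumptions for illustration:

```python
def expected_startup_delay(p_local, p_lpsg, p_cms,
                           d_client=0.1, d_lpsg=0.4, d_cms=1.2):
    """Expected startup delay in seconds, weighting each serving tier
    by its hit probability (probabilities must sum to 1)."""
    assert abs(p_local + p_lpsg + p_cms - 1.0) < 1e-9
    return (p_local * d_client              # served from the local PS
            + p_lpsg * (d_client + d_lpsg)  # served within/between LPSGs
            + p_cms * (d_client + d_cms))   # fetched from the CMS

print(expected_startup_delay(0.4, 0.3, 0.3))  # roughly 0.58 s
```

Even under these rough assumptions, shifting 70% of requests away from the CMS roughly halves the expected startup delay relative to always fetching from the central server (about 1.3 s), consistent with the reduction the paper reports.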

Despite these promising outcomes, the paper has several limitations. The tracker acts as a centralized metadata manager; its scalability and fault tolerance are not addressed, raising concerns about a potential bottleneck. The reliance on a static Zipf distribution may not capture the dynamic popularity shifts common in real‑world VoD traffic, which could increase cache‑replacement overhead. Network conditions are modeled with fixed delays, ignoring the variability in bandwidth and latency seen in operational ISP networks. Moreover, the computational complexity of the optimal‑path and hashing mechanisms is not quantified, making it difficult to assess deployment costs.

In summary, the work introduces a hierarchical, cooperative proxy architecture that combines partial replication, tracker‑based cache awareness, and ring‑topology inter‑group communication to achieve higher QoS and lower transmission cost for VoD services. Future research should explore distributed tracker designs, adaptive popularity prediction, real‑world prototype implementations, and robust fault‑recovery mechanisms to validate the approach under realistic operating conditions.

