Active Transfer Bagging: A New Approach for Accelerated Active Learning Acquisition of Data by Combined Transfer Learning and Bagging Based Models
Modern machine learning has achieved remarkable success on many problems, but this success often depends on the existence of large, labeled datasets. While active learning can dramatically reduce labeling cost when annotations are expensive, early performance is frequently dominated by the initial seed set, typically chosen at random. In many applications, however, related or approximate datasets are readily available and can be leveraged to construct a better seed set. We introduce a new method for selecting the seed dataset for active learning, Active-Transfer Bagging (ATBagging). ATBagging estimates the informativeness of each candidate data point from a Bayesian interpretation of bagged ensemble models by comparing in-bag and out-of-bag predictive distributions from the labeled dataset, yielding an information-gain proxy. To avoid redundant selections, we impose feature-space diversity by sampling from a determinantal point process (DPP) whose kernel uses Random Fourier Features and a quality-diversity factorization that incorporates the informativeness scores. This same blended method is used to select new data points to collect during the active learning phase. We evaluate ATBagging on four real-world datasets covering both target-transfer and feature-shift scenarios (QM9, ERA5, Forbes 2000, and Beijing PM2.5). Across seed sizes n_seed = 10–100, ATBagging improves or ties early active-learning performance and increases area under the learning curve relative to alternative seed subset selection methodologies in almost all cases, with the strongest benefits in low-data regimes. Thus, ATBagging provides a low-cost, high-reward means of initiating active learning-based data collection.
💡 Research Summary
The paper tackles a fundamental bottleneck in modern machine learning: the need for large, fully labeled datasets. While active learning (AL) can dramatically reduce labeling costs, its early performance is heavily dependent on the initial seed set, which is traditionally chosen at random. Random seeds provide no guarantees about informativeness or representativeness, leading to slow model improvement in the first few AL iterations. The authors observe that in many real‑world scenarios a related “proxy” dataset—often already labeled for a different but correlated target—exists and can be leveraged to construct a more informative seed.
To exploit this opportunity, they propose Active‑Transfer Bagging (ATBagging), a two‑stage method that simultaneously maximizes (i) informativeness (the expected change in the model’s predictive distribution when a candidate point is added) and (ii) heterogeneity (coverage of the feature space).
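A minimal sketch of the heterogeneity stage under stated assumptions: the quality-diversity DPP kernel is factored as L = diag(q) ΦΦᵀ diag(q), where q holds informativeness scores (assumed precomputed) and Φ are Random Fourier Features approximating an RBF kernel. The function names, the greedy MAP approximation used in place of exact DPP sampling, and the bandwidth `gamma` are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def rff_features(X, n_features=128, gamma=1.0, seed=0):
    # Random Fourier Features approximating an RBF kernel exp(-gamma * ||x - x'||^2):
    # frequencies drawn from N(0, 2*gamma*I), random phase offsets b.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def greedy_dpp_select(X, quality, k, n_rff=128, gamma=1.0):
    # Quality-diversity DPP kernel: L = diag(q) @ (phi phi^T) @ diag(q).
    # Greedy MAP approximation: repeatedly add the candidate that maximizes
    # log det of the selected principal submatrix (a stand-in for DPP sampling).
    phi = quality[:, None] * rff_features(X, n_rff, gamma)
    L = phi @ phi.T
    selected, remaining = [], list(range(len(X)))
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            # Small jitter keeps the submatrix numerically well-conditioned.
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx)))
            if logdet > best_val:
                best, best_val = i, logdet
        selected.append(best)
        remaining.remove(best)
    return selected
```

High-quality points enlarge diagonal entries of L while similar points (nearly parallel feature vectors) shrink the determinant, so the greedy objective trades informativeness against redundancy exactly as the quality-diversity factorization intends.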
Informativeness scoring is derived from a Bayesian interpretation of bagged ensembles. A bagged model consists of M weak learners trained on bootstrap samples. For a given data point (x, y), the sub‑models whose bootstrap sample contains the point form the in‑bag set, while those that do not form the out‑of‑bag set. The predictions of each set on a representative test pool X* provide empirical estimates of two predictive distributions: p(Y*|X*, x, y) (in‑bag) and p(Y*|X*) (out‑of‑bag). The Kullback‑Leibler divergence between these distributions is an approximation of the information gain, IG(x, y) = KL(p(Y*|X*, x, y) ‖ p(Y*|X*)).
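The in-bag/out-of-bag scoring above can be sketched as follows. This is a sketch under assumptions not fixed by the summary: each empirical predictive distribution is approximated as a Gaussian per test point (so the KL has a closed form), decision trees stand in for the weak learners, and the function name `atbagging_informativeness` is illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def atbagging_informativeness(X, y, X_star, n_models=50, seed=0):
    """Score each labeled point (x, y) by the KL divergence between the
    predictive distributions of its in-bag and out-of-bag sub-models on X*."""
    rng = np.random.default_rng(seed)
    n = len(X)
    preds = np.empty((n_models, len(X_star)))
    inbag = np.zeros((n_models, n), dtype=bool)
    for m in range(n_models):
        idx = rng.integers(0, n, size=n)                # bootstrap sample
        inbag[m, np.unique(idx)] = True                 # which points each model saw
        model = DecisionTreeRegressor(random_state=m).fit(X[idx], y[idx])
        preds[m] = model.predict(X_star)
    scores = np.zeros(n)
    for i in range(n):
        p_in, p_out = preds[inbag[:, i]], preds[~inbag[:, i]]
        # Gaussian approximation of each predictive distribution at every x* in X*.
        mu1, s1 = p_in.mean(0), p_in.std(0) + 1e-6
        mu0, s0 = p_out.mean(0), p_out.std(0) + 1e-6
        # Closed-form KL( N(mu1, s1^2) || N(mu0, s0^2) ), i.e. in-bag vs out-of-bag.
        kl = np.log(s0 / s1) + (s1**2 + (mu1 - mu0) ** 2) / (2 * s0**2) - 0.5
        scores[i] = kl.mean()                           # information-gain proxy over X*
    return scores
```

Points whose presence in the training bags noticeably shifts the ensemble's predictions on X* receive large scores; points the model already explains well contribute little divergence and score near zero.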