GaborNet: Gabor filters with learnable parameters in deep convolutional neural networks


The article describes a system for image recognition using deep convolutional neural networks. A modified network architecture is proposed that focuses on improving convergence and reducing training complexity. The filters in the first layer of the network are constrained to fit the Gabor function, whose parameters are learnable and are updated by standard backpropagation. The system was implemented in Python, tested on several datasets, and outperformed common convolutional networks.


💡 Research Summary

The paper introduces GaborNet, a convolutional neural network architecture that replaces the conventional first convolutional layer with a “Gabor Layer” whose filters are constrained to the analytical form of Gabor functions. Unlike prior works that either use fixed Gabor filter banks as preprocessing or freeze them during training, GaborNet treats the four Gabor parameters—orientation (θ), wavelength (λ), phase offset (ψ), and Gaussian envelope scale (σ)—as learnable variables. These parameters are initialized from a standard Gabor filter bank (as described in reference
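To make the parameterization concrete, here is a minimal NumPy sketch of how a 2-D Gabor kernel can be generated from the four parameters named above (θ, λ, ψ, σ). This is an illustration of the standard real-valued Gabor function with the aspect ratio γ fixed at 1, not the paper's actual implementation, which wraps these parameters in a learnable convolutional layer updated by backpropagation.

```python
import numpy as np

def gabor_filter(size, theta, lam, psi, sigma):
    """Real part of a Gabor filter on a size x size grid.

    theta: orientation, lam: wavelength, psi: phase offset,
    sigma: Gaussian envelope scale (aspect ratio gamma fixed at 1).
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate the coordinate frame by theta
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + y_r**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / lam + psi)
    return envelope * carrier

# Example: a 9x9 filter tuned to horizontal stripes of wavelength 4 px
kernel = gabor_filter(9, theta=0.0, lam=4.0, psi=0.0, sigma=2.0)
```

In GaborNet, each first-layer filter is a kernel of this form; gradients of the loss flow through the analytical expression into θ, λ, ψ, and σ, so the filter bank adapts during training instead of staying fixed.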

