Negative sampling (Deep Learning)
noun phrase
Definition: A training technique that reduces the computational cost of learning over very large output spaces, such as vocabularies or label sets, by updating the model with the true target and a small number of sampled non-target items rather than all possible classes. In many deep learning contexts, negative sampling is treated as a simplified variant of, or a method closely related to, noise-contrastive estimation (NCE). It is especially well known from word embedding models such as word2vec [Mikolov et al. 2013].
Example in context: “In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics.” [Adhya et al. 2022]
Related terms: candidate sampling, sampled softmax, noise-contrastive estimation (NCE)
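The update rule the definition describes can be illustrated with a minimal NumPy sketch of skip-gram with negative sampling (the word2vec variant): for each (center, context) pair, only the true context vector and k sampled negative vectors are updated, instead of computing a softmax over the whole vocabulary. All names and sizes here are hypothetical, and negatives are drawn uniformly for simplicity (word2vec samples from a smoothed unigram distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: a vocabulary of 1000 items, 16-dim embeddings,
# and k = 5 negatives per positive pair.
vocab_size, dim, k = 1000, 16, 5
in_emb = rng.normal(scale=0.1, size=(vocab_size, dim))   # input (center) vectors
out_emb = rng.normal(scale=0.1, size=(vocab_size, dim))  # output (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center, context, lr=0.05):
    """One gradient step for a (center, context) pair with k sampled negatives.

    Instead of a softmax over all vocab_size classes, only 1 + k output
    vectors are touched: the true context (label 1) and k negatives (label 0).
    """
    negatives = rng.integers(0, vocab_size, size=k)  # uniform sampling for simplicity
    targets = np.concatenate(([context], negatives))
    labels = np.concatenate(([1.0], np.zeros(k)))

    v = in_emb[center]            # (dim,)
    u = out_emb[targets]          # (1 + k, dim)
    scores = sigmoid(u @ v)       # (1 + k,)
    err = scores - labels         # gradient of binary cross-entropy wrt logits

    # Update only the center vector and the 1 + k sampled output rows,
    # never the full output matrix.
    in_emb[center] -= lr * (err @ u)
    out_emb[targets] -= lr * np.outer(err, v)

    # Negative-sampling loss for this pair (small epsilon for stability).
    return -np.log(scores[0] + 1e-9) - np.sum(np.log(1.0 - scores[1:] + 1e-9))

loss = sgns_step(center=3, context=7)
```

Each step costs O(k · dim) rather than O(vocab_size · dim), which is the source of the efficiency gain described above.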