# Review of OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations

Perera, P., Nallapati, R., & Xiang, B. (2019). OCGAN: One-class novelty detection using GANs with constrained latent representations. CVPR 2019.

## Introduction

Problem: given a set of examples from a particular class, the goal is to determine if a query example is from the same class.

Solution: Based on learning latent representations of in-class examples with a denoising auto-encoder network, the key contribution is to explicitly constrain the latent space to exclusively represent the given class. Specifically:

1. force the latent space to have bounded support using tanh activation in the encoder's output layer.
2. use a discriminator in the latent space for adversarial training such that the encoded feature representations of in-class examples resemble uniform random samples drawn from the same bounded space.
3. use another discriminator in the input space to ensure all randomly drawn latent samples generate examples that look real.
4. introduce a gradient-descent based sampling method to explore points in the latent space that generate potential out-of-class examples, which are fed back to the network to further train it to generate in-class examples from those special points.

## Method

OCGAN consists of four components:

- a denoising auto-encoder
- a latent discriminator
- a visual discriminator
- a classifier

### Denoising auto-encoder

In a denoising auto-encoder (AE), noise is added to the input image and the network is expected to reconstruct the denoised version of the image. The literature shows that denoising auto-encoders reduce over-fitting and improve the generalizability of the network compared to regular auto-encoders.

In order to densely sample the latent space, the latent space is given bounded support: a tanh activation is used at the output of the encoder. The support of the latent space is therefore $(-1, 1)^d$, where $d$ is the feature dimension of the latent space.

Specifically, the loss function of the denoising AE is

$$l_\text{MSE} = \lVert x - \text{De}(\text{En}(x+n))\rVert_2^2$$

where $x$ is the input image, $n$ is zero-mean Gaussian white noise with variance 0.2, and De and En denote the decoder and encoder respectively.
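A minimal PyTorch sketch of this objective; the toy MLP encoder/decoder, image size, and latent dimension are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

d = 16  # latent dimension (assumed for illustration)

# tanh bounds the latent support to (-1, 1)^d, as described above
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, d), nn.Tanh())
decoder = nn.Sequential(nn.Linear(d, 28 * 28), nn.Sigmoid())

def denoising_ae_loss(x):
    """l_MSE = ||x - De(En(x + n))||_2^2 with zero-mean Gaussian noise n."""
    n = (0.2 ** 0.5) * torch.randn_like(x)      # variance 0.2 -> std sqrt(0.2)
    x_hat = decoder(encoder(x + n)).view_as(x)  # reconstruct the clean image
    return ((x - x_hat) ** 2).sum(dim=(1, 2, 3)).mean()

x = torch.rand(8, 1, 28, 28)  # a batch standing in for in-class images
loss = denoising_ae_loss(x)
```

Note that the noise is added only at the encoder input, while the reconstruction target is the clean image $x$.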

### Latent Discriminator

To obtain a latent space in which every sampled instance represents the given class, latent representations of in-class samples are explicitly forced to be uniformly distributed across the latent space. A latent discriminator $D_l$ is trained to differentiate between latent representations of real images of the given class and samples drawn from a $\mathbf{U}(-1, 1)^d$ distribution. The loss function and training objective are

$$\max_\text{En} \min_{D_l} l_\text{latent} = \max_\text{En} \min_{D_l} -\mathbf{E}_{s \sim \mathbf{U}(-1, 1)}[\log D_l(s)] - \mathbf{E}_{x \sim p_x}[\log (1-D_l(\text{En}(x+n)))]$$

where $p_x$ is the distribution of in-class examples.
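A sketch of this latent adversarial loss in PyTorch; the small MLP discriminator and latent size are illustrative assumptions:

```python
import torch
import torch.nn as nn

d = 16  # latent dimension (assumed)

# D_l: distinguishes uniform latent samples (label 1) from encoded
# in-class representations (label 0)
D_l = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
bce = nn.BCELoss()

def latent_adversarial_loss(codes):
    """l_latent = -E[log D_l(s)] - E[log(1 - D_l(En(x + n)))]."""
    s = 2 * torch.rand_like(codes) - 1                       # s ~ U(-1, 1)^d
    real_term = bce(D_l(s), torch.ones(len(s), 1))           # -E[log D_l(s)]
    fake_term = bce(D_l(codes), torch.zeros(len(codes), 1))  # -E[log(1 - D_l(codes))]
    return real_term + fake_term  # D_l minimizes this; En maximizes it

codes = torch.tanh(torch.randn(8, d))  # stand-in for En(x + n)
loss = latent_adversarial_loss(codes)
```

In training, the two players alternate: a gradient step minimizing this loss for $D_l$, then a step maximizing it for the encoder.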

### Visual Discriminator

To make sure the generated images do not resemble out-of-class objects, the latent space is sampled exhaustively and the network is trained to generate images within the given class.

A visual discriminator $D_v$ is trained to differentiate between images of the given class and images $\text{De}(s)$ generated from random latent samples $s$ by the decoder. This is in fact a regular GAN. As a result,

$$\max_\text{De} \min_{D_v} l_\text{visual} = -\mathbf{E}_{s \sim \mathbf{U}(-1, 1)}[\log D_v(\text{De}(s))] - \mathbf{E}_{x \sim p_x}[\log (1-D_v(x))]$$
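The same pattern for the visual discriminator, sketched in PyTorch with toy networks (sizes assumed). The label assignment follows the equation above, with real images on the label-0 side:

```python
import torch
import torch.nn as nn

d = 16  # latent dimension (assumed)

decoder = nn.Sequential(nn.Linear(d, 28 * 28), nn.Sigmoid())
D_v = nn.Sequential(nn.Linear(28 * 28, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
bce = nn.BCELoss()

def visual_adversarial_loss(x_flat):
    """l_visual = -E[log D_v(De(s))] - E[log(1 - D_v(x))]."""
    s = 2 * torch.rand(len(x_flat), d) - 1                       # s ~ U(-1, 1)^d
    gen_term = bce(D_v(decoder(s)), torch.ones(len(x_flat), 1))  # -E[log D_v(De(s))]
    real_term = bce(D_v(x_flat), torch.zeros(len(x_flat), 1))    # -E[log(1 - D_v(x))]
    return gen_term + real_term  # D_v minimizes this; De maximizes it

x_flat = torch.rand(8, 28 * 28)  # flattened stand-in for in-class images
loss = visual_adversarial_loss(x_flat)
```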

#### Informative negative mining

This additional step addresses the fact that it is impossible to exhaustively sample all possible instances from $\mathbf{U}(-1, 1)^d$ in practice, especially when $d$ is large. One solution is to reduce the latent dimension $d$, but this would hurt performance, since a low-dimensional representation can capture only part of the distribution of the given class.

Alternatively, regions of the latent space that generate poor-quality images are actively sought. The loss of the image classifier (real or fake) informs which latent samples are likely to be out-of-class. Specifically: compute the gradient of the classifier loss w.r.t. the latent sample, then take a step further in the direction of the gradient and use that new point, at which the classifier is more confident that the generated image is out-of-class.

P.S., a step further in the direction of the gradient means the corresponding loss at the new point is larger, i.e., this is gradient ascent on the classifier loss.
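A sketch of this mining step under the same toy-network assumptions: starting from a random latent sample, ascend the classifier's "in-class" loss so that the decoded image looks less in-class to the classifier.

```python
import torch
import torch.nn as nn

d = 16  # latent dimension (assumed)
decoder = nn.Sequential(nn.Linear(d, 28 * 28), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(28 * 28, 32), nn.ReLU(),
                           nn.Linear(32, 1), nn.Sigmoid())
bce = nn.BCELoss()

def mine_informative_negatives(s, steps=5, lr=0.1):
    """Gradient ascent in latent space toward points whose decodings the
    classifier considers more likely out-of-class."""
    s = s.clone().detach().requires_grad_(True)
    ones = torch.ones(len(s), 1)
    for _ in range(steps):
        # loss of classifying De(s) as in-class; larger loss = more "fake"
        loss = bce(classifier(decoder(s)), ones)
        (grad,) = torch.autograd.grad(loss, s)
        # step *up* the loss and stay inside the bounded support (-1, 1)^d
        s = (s.detach() + lr * grad).clamp(-1, 1).requires_grad_(True)
    return s.detach()

s0 = 2 * torch.rand(8, d) - 1  # initial random latent samples
s_neg = mine_informative_negatives(s0)
```

The mined points `s_neg` are then fed back as the random latent samples in the visual adversarial loss, so the decoder learns to produce in-class images even from these hard regions.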

### Classifier

- Images from the given class: real, label 1
- Images generated from random samples of $\mathbf{U}(-1, 1)^d$: fake, label 0

Loss: binary cross entropy, $l_\text{classifier}$

Note that the classifier does not participate in the training of the discriminators and the generator.
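A sketch of the classifier's loss under the same toy assumptions; the generated images are detached so that this update does not propagate into the generator:

```python
import torch
import torch.nn as nn

d = 16  # latent dimension (assumed)
decoder = nn.Sequential(nn.Linear(d, 28 * 28), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(28 * 28, 32), nn.ReLU(),
                           nn.Linear(32, 1), nn.Sigmoid())
bce = nn.BCELoss()

def classifier_loss(x_flat):
    """BCE: in-class images -> label 1, images decoded from random latents -> label 0."""
    s = 2 * torch.rand(len(x_flat), d) - 1  # s ~ U(-1, 1)^d
    fake = decoder(s).detach()              # keep the generator out of this update
    real_term = bce(classifier(x_flat), torch.ones(len(x_flat), 1))
    fake_term = bce(classifier(fake), torch.zeros(len(fake), 1))
    return real_term + fake_term

x_flat = torch.rand(8, 28 * 28)  # flattened stand-in for in-class images
loss = classifier_loss(x_flat)
```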

NOTE: why does updating the generator (De and En) involve $D_l(l_2, 0)$ and $D_v(x, 0)$, i.e., why are the labels of $l_2$ and $x$ set to 0? One possible explanation: the paper's adversarial objectives simply flip the conventional label assignment (real images get 0, uniform/generated samples get 1); since the discriminator minimizes and the generator maximizes the same loss, the two-class cross-entropy is symmetric under this relabeling and the min-max game is equivalent.