Beyond autoregressive text generation

[Update 2022 Dec. 4] Added contrastive learning / decoding methods. [Update 2023 Mar. 10] Refactored and added RLHF and diffusion methods. [Update 2023 Dec. 22] Added DPO, RAG and EMNLP 2023 papers. Intro The task of generating content with deep learning models differs from other common tasks in the sense that: 1) the model is often trained to replicate the training data for continuous data, or to predict the next element for discrete data; 2) at test time there is no single exact expected result; and as a consequence 3) evaluation is often tricky....
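
To make the "predict the next element" part of point 1) concrete, here is a minimal sketch, not taken from the post, of greedy autoregressive decoding over a discrete vocabulary; the toy bigram scores and the names `next_token_logits` and `generate_greedy` are hypothetical stand-ins for a trained model.

```python
import math

# Toy vocabulary and hand-made bigram scores standing in for a trained model.
VOCAB = ["<bos>", "the", "cat", "sat", "down", "<eos>"]
BIGRAM = {
    "<bos>": {"the": 2.0, "cat": 0.5},
    "the":   {"cat": 2.0, "down": 0.2},
    "cat":   {"sat": 2.0, "the": 0.1},
    "sat":   {"down": 2.0, "<eos>": 0.5},
    "down":  {"<eos>": 2.0},
}

def next_token_logits(prefix: list) -> dict:
    """Return a score for every candidate next token given the prefix."""
    scores = BIGRAM.get(prefix[-1], {})
    return {tok: scores.get(tok, -math.inf) for tok in VOCAB}

def generate_greedy(max_len: int = 10) -> list:
    """Autoregressive generation: repeatedly append the highest-scoring token."""
    seq = ["<bos>"]
    for _ in range(max_len):
        logits = next_token_logits(seq)
        token = max(logits, key=logits.get)
        seq.append(token)
        if token == "<eos>":
            break
    return seq

print(generate_greedy())  # ['<bos>', 'the', 'cat', 'sat', 'down', '<eos>']
```

Replacing the `max` with sampling, beam search, or contrastive decoding is exactly where the methods surveyed in the post come in.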

October 18, 2022 · 51 min · Nathan Fradet

Text to image generation

[Update 2022 Oct. 30] Added the text-to-video models recently introduced: Imagen Video and Phenaki. Notation:
$g_\theta$: generator network with parameters $\theta$
$\mathbf{c}$: a caption, represented as a sequence of tokens
$x$: an input image, optionally fed to $g_\theta$ to perform modifications on it
$y$: the output image, sampled from $g_\theta(\mathbf{c})$ or $g_\theta(\mathbf{c}, x)$
$\mathbf{z}$: a latent vector
$\mathbf{h}$: hidden states, the intermediate representation of the input data
Intro and problem formulation We refer to text-to-image generation as the task of generating visual content conditioned on a text description....
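
The notation implies a simple call signature: $y$ sampled from $g_\theta(\mathbf{c})$ for pure text-to-image generation, or $g_\theta(\mathbf{c}, x)$ when an input image is edited. Below is a minimal sketch, assuming nothing about the post's actual models: `g_theta` is a hypothetical stand-in (caption-seeded noise) meant only to show the shapes and the two ways it can be called.

```python
from typing import List, Optional

import numpy as np

def g_theta(c: List[int], x: Optional[np.ndarray] = None) -> np.ndarray:
    """Stand-in generator: caption tokens c (and optionally an input image x)
    condition the sampling of an output image y. The 'model' here is just
    caption-seeded noise; only the interface mirrors the notation."""
    rng = np.random.default_rng(seed=sum(c))
    y = rng.random((64, 64, 3))      # text-to-image: y sampled from g_theta(c)
    if x is not None:
        y = 0.5 * x + 0.5 * y        # image modification: y from g_theta(c, x)
    return y

caption = [12, 7, 42]                # hypothetical token ids for a caption
y = g_theta(caption)                 # generate from text only
y_edit = g_theta(caption, x=y)       # modify an existing image
print(y.shape, y_edit.shape)         # (64, 64, 3) (64, 64, 3)
```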

September 29, 2022 · 30 min · Nathan Fradet