Breaking Down Reddit's Imposter

A view of the r/Imposter UI

Reddit's new April Fools social experiment, r/Imposter, went live today. The concept is this: users are presented with a list of five answers to the question “What makes you human?”, exactly one of which was written by the Imposter. The challenge is to identify the Imposter's answer. Users can also post their own answer to the question, and change it at any time.

What makes you human?
Can you recognize it in others?
Are you sure?
Reddit 2020

As alluded to by the question and the traditional cryptic message, the Imposter is most likely a bot trained to mimic the human responses.[1] This piqued my interest, as text generation is right up my alley. So I spent a while toying around with the site, looking at the generated samples, and the weaknesses are very similar to what one would expect from, say, GPT-2.[2] In fact, I'd hazard a guess that the model used probably is GPT-2, if for no other reason than its recent name recognition. There's still a chance it's some other Transformer- or RNN-based model, too. It's highly unlikely to be a rule-based system, given the need to constantly retrain on new user answers, or a Markov chain, given the surprising coherence (see r/SubredditSimulator for a taste of Markov bots, or the sketch below).[3]
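For intuition on why coherence rules out a Markov chain, here's a minimal sketch (entirely my own; the toy corpus and order-1 word model are illustrative assumptions, not anything from Reddit) of the kind of bigram chain behind bots like those in r/SubredditSimulator:

```python
import random
from collections import defaultdict

# Toy corpus standing in for scraped user answers (purely illustrative).
corpus = [
    "i can feel empathy for other people",
    "i make mistakes and learn from them",
    "i can feel pain and joy and boredom",
]

# Order-1 (bigram) Markov chain: map each word to the words that follow it.
chain = defaultdict(list)
for answer in corpus:
    words = answer.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)

def generate(start="i", max_words=12):
    """Walk the chain, picking each next word based only on the current one."""
    word, out = start, [start]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "i can feel pain and learn from them"
```

Because each word is chosen by looking only at the previous one, locally plausible fragments get stitched together with no global plan, which is exactly the tell that the Imposter's better rounds lack.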

Some interesting features of the Imposter, many of which have already been discovered by Redditors:

  • An inability to do math. This is consistent with GPT-2, which (when trained on WebText) is unable to handle basic arithmetic questions, a consequence of GPT-2 “creating a model that includes math and somehow doing the math in the model”. This doesn't mean all Transformers are unable to do math, though. When trained directly to do so, Transformers are quite adept at it: this ICLR 2019 paper shows roughly 90% accuracy when extrapolating addition/subtraction problems to numbers larger than those seen during training, and this ICLR 2020 paper shows that it's possible to train a Transformer to do integration and solve ODEs. Overall, this provides evidence for GPT-2 (or something similar) powering the Imposter. You can watch GPT-2-large attempt math in Write With Transformer, or try the first sketch after this list.

  • Sentences that are sometimes strikingly coherent, sometimes blatantly incoherent. After a few dozen rounds, it becomes obvious that the Imposter is much better in some rounds than in others. Incidentally, GPT-2 is also known for massive variability in output quality, even with tricks like nucleus sampling (see the second sketch after this list). There is a confounding factor here, though: some users intentionally post with broken grammar, so this observation should be taken with a grain of salt.
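To reproduce the math failure yourself, here's a minimal sketch using the Hugging Face transformers library; the prompt format and generation settings are my own choices, and nothing here is confirmed to match whatever powers the Imposter:

```python
# pip install torch transformers
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

prompt = "Q: What is 17 + 25?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the output deterministic for this demo.
output = model.generate(
    **inputs,
    max_new_tokens=8,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# GPT-2 will usually continue with *a* number, just not reliably the right one.
```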
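And to see the output-quality variance, this second rough sketch draws several nucleus-sampled completions from one prompt; again, the model size, top_p value, and seed prompt are assumptions for illustration:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "What makes you human? I"
inputs = tokenizer(prompt, return_tensors="pt")
torch.manual_seed(42)  # arbitrary seed, for reproducibility

for i in range(5):
    out = model.generate(
        **inputs,
        max_new_tokens=25,
        do_sample=True,
        # Nucleus sampling: sample only from the smallest set of tokens
        # whose cumulative probability reaches 90%.
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(f"[{i}]", tokenizer.decode(out[0], skip_special_tokens=True))
```

Even with the tail of the distribution cut off, some draws read naturally while others wander mid-sentence, mirroring the round-to-round variance on the subreddit.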

Unfortunately, there is no way to confirm for sure whether the model is GPT-2, a different RNN- or Transformer-based model, or something else altogether before the end of the experiment (although both its output quality and the variance therein, not to mention the hype factor[4], suggest that it's GPT-2). Many of these points are also moot in practice, as a major component of the challenge is the adversarial element of humans pretending to be bots. Still, it's been interesting picking apart the clues around this first-ever AI-based Reddit April Fools.


  1. Some amusing examples of the bot picking up on Reddit idiosyncrasies have already emerged. ↩︎

  2. An interesting aspect of this experiment is that one can also intentionally sabotage other users by pretending to be the Imposter, using similar broken sentences. ↩︎

  3. One particular chatbot that has been mentioned many times on the subreddit is Cleverbot, which, according to this article, works by storing human responses and using some (undisclosed) algorithm to choose a response from that database. In other words, Cleverbot does not actually generate sentences. This probably wouldn't work very well for r/Imposter: the prompt is fixed, so a bot that simply copied user answers verbatim would be indistinguishable by construction (see the sketch after these notes). ↩︎

  4. If it does turn out to be GPT-2 (or, even better, a new and improved GPT-3/GPT-2-Episode-1), there will likely be another wave of media hype. Brace yourselves. ↩︎
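For completeness, here is a toy sketch of the retrieval approach described in note 3 (purely illustrative; Cleverbot's actual selection algorithm is undisclosed, so the "pick one at random" step is my stand-in):

```python
import random

# Toy retrieval bot in the style described for Cleverbot: store human
# responses, then answer by picking one back out of the database.
database = []

def record_human_answer(answer: str) -> None:
    """Store a real user's answer to the fixed prompt."""
    database.append(answer)

def respond() -> str:
    # With a fixed prompt, "choosing a stored response" degenerates into
    # parroting a real user's answer verbatim.
    return random.choice(database)

record_human_answer("I can love and be loved.")
record_human_answer("I question my own existence.")
print(respond())
```

With a fixed prompt, choosing a stored response collapses into echoing a real user's answer verbatim, which would make the Imposter both undetectable and uninteresting.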

...