Overview of Self-Questioning Language Models (SQLM). The only input to the system is a single prompt, given to the proposer. The proposer generates a question on the given topic, and the solver attempts to answer it. The solver's reward uses the majority-vote answer as a proxy for the ground truth. The proposer's reward is based on how many of the solver's answers match the majority answer, encouraging problems that are neither too easy nor too difficult.
Can large language models improve without external data -- by generating their own questions and answers? We hypothesize that a pre-trained language model can improve its reasoning skills given only a single prompt specifying the topic (e.g., algebra word problems) and asking the model to generate its own questions. To do this, we propose Self-Questioning Language Models (SQLM): an asymmetric self-play framework where a proposer is given the topic and generates a question for a solver, who tries to answer it. Both the proposer and solver are trained via reinforcement learning. The proposer receives a reward if the problem is not too easy or too difficult, and the solver receives a reward based on majority voting, a proxy for correctness in the absence of ground-truth answers. For coding, the proposer can instead generate unit tests which are used for verification. We study this asymmetric self-play framework on three benchmarks: three-digit multiplication, algebra problems from the OMEGA benchmark, and programming problems from Codeforces. By continually generating more interesting problems and attempting to solve them, language models can improve on downstream benchmarks without access to any curated training datasets.
We train two policies in a self-play setup: a proposer policy \(\pi_{P_t}(x)\) that generates problems and a solver policy \(\pi_S(y_{\text{pred}} \mid x)\) that attempts to solve them. Both are optimized via reinforcement learning to maximize their expected rewards:
\[ \text{Solver: } \mathbb{E}_{x \sim \pi_{P_t},\, y_{\text{pred}} \sim \pi_S}[ \mathcal{R}_S(x, y_{\text{pred}}) ], \quad \text{Proposer: } \mathbb{E}_{x \sim \pi_{P_t},\, y_{\text{pred}} \sim \pi_S}[ \mathcal{R}_P(x, y_{\text{pred}}) ] \]
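A minimal sketch of one self-play round under these objectives, assuming the two policies are exposed as simple callables; the names `propose`, `solve`, and `reward_fn` are illustrative, and the RL update itself (the paper only states both policies are trained with reinforcement learning) is left abstract:

```python
from typing import Callable, List, Tuple

# Illustrative type aliases; how the actual policies are queried is an assumption.
ProposerPolicy = Callable[[str], str]   # topic prompt -> problem x
SolverPolicy = Callable[[str], str]     # problem x -> one sampled answer y_pred
RewardFn = Callable[[str, List[str]], Tuple[List[float], float]]  # (x, answers) -> (per-answer R_S, R_P)

def self_play_round(
    propose: ProposerPolicy,
    solve: SolverPolicy,
    reward_fn: RewardFn,
    topic_prompt: str,
    num_solutions: int = 8,
) -> Tuple[str, List[str], List[float], float]:
    """One self-play round: the proposer writes a problem from the single topic
    prompt, the solver samples N attempts, and self-supervised rewards are
    computed without any ground-truth answer."""
    x = propose(topic_prompt)                           # x ~ pi_P(topic)
    answers = [solve(x) for _ in range(num_solutions)]  # y_i ~ pi_S(. | x), i = 1..N
    solver_rewards, proposer_reward = reward_fn(x, answers)
    # The tuples (x, answers, rewards) would then drive an RL update
    # (e.g. a policy-gradient step) for the solver and the proposer; omitted here.
    return x, answers, solver_rewards, proposer_reward
```

The domain-specific rewards described below (majority voting for arithmetic, unit-test pass rates for coding) are the concrete choices of `reward_fn` in this sketch.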
The proposer's problems condition the solver, and the solver's performance provides rewards that in turn refine the proposer. Since there are no ground-truth answers, we design self-supervised reward functions based on the generator-verifier gap.
Small generator-verifier gap (e.g., arithmetic): verification is about as difficult as generation. The solver samples \(N\) candidate answers \(y_1, \dots, y_N\) for each problem; with \(y_{\text{maj}}\) denoting the majority answer, we use agreement with the majority vote as a proxy reward:
\[ \mathcal{R}_S(x, y_i) = \begin{cases} 1 & \text{if } y_i = y_{\text{maj}}, \\ 0 & \text{otherwise} \end{cases}, \quad \mathcal{R}_P(x) = \begin{cases} 1 & \text{if } 0 < |\{i : y_i = y_{\text{maj}}\}| < N, \\ 0 & \text{otherwise} \end{cases} \]
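Concretely, this reward can be computed from the sampled answers alone, as in the sketch below (the function name and the string representation of answers are assumptions, and ties in the vote are broken arbitrarily):

```python
from collections import Counter
from typing import List, Tuple

def majority_vote_rewards(x: str, answers: List[str]) -> Tuple[List[float], float]:
    """Self-supervised rewards for one problem x given N sampled answers."""
    n = len(answers)
    y_maj, maj_count = Counter(answers).most_common(1)[0]  # majority answer and its count

    # R_S(x, y_i) = 1 if y_i equals the majority answer, else 0.
    solver_rewards = [1.0 if y == y_maj else 0.0 for y in answers]

    # R_P(x) = 1 iff 0 < |{i : y_i = y_maj}| < N: the solver neither agrees
    # unanimously (too easy) nor completely fails to form a consensus (too hard).
    proposer_reward = 1.0 if 0 < maj_count < n else 0.0
    return solver_rewards, proposer_reward

# Example: 3 of 4 sampled answers agree, so the proposer is also rewarded.
print(majority_vote_rewards("17 * 23 = ?", ["391", "391", "381", "391"]))
# -> ([1.0, 1.0, 0.0, 1.0], 1.0)
```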
Large generator-verifier gap (e.g., coding): verification is easier than generation. The proposer also generates test cases for each problem, and rewards are based on \(\text{Pass}(y_{\text{pred}}, \text{Tests}(x))\), the fraction of tests passed:
\[ \mathcal{R}_S(x, y_{\text{pred}}) = \text{Pass}(y_{\text{pred}}, \text{Tests}(x)), \quad \mathcal{R}_P(x, y_{\text{pred}}) = \begin{cases} 1 & \text{if } 0 < \text{Pass}(y_{\text{pred}}, \text{Tests}(x)) < 1, \\ 0 & \text{otherwise} \end{cases} \]
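A sketch of this reward under the simplifying assumption that the solver's program is already available as a Python callable and the proposer's tests are (inputs, expected output) pairs; a real implementation would have to execute untrusted generated code in a sandbox:

```python
from typing import Callable, List, Tuple

Test = Tuple[tuple, object]  # (positional args, expected return value); an assumed test format

def pass_fraction(candidate: Callable, tests: List[Test]) -> float:
    """Pass(y_pred, Tests(x)): fraction of proposer-written tests the candidate passes."""
    if not tests:
        return 0.0
    passed = 0
    for args, expected in tests:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate simply fails that test
    return passed / len(tests)

def coding_rewards(candidate: Callable, tests: List[Test]) -> Tuple[float, float]:
    frac = pass_fraction(candidate, tests)
    solver_reward = frac                                  # R_S = Pass(y_pred, Tests(x))
    proposer_reward = 1.0 if 0.0 < frac < 1.0 else 0.0    # reward problems of intermediate difficulty
    return solver_reward, proposer_reward

# Example: a buggy absolute-value solution fails on negative inputs, passing 2 of 3 tests.
def buggy_abs(v):
    return v if v > 0 else 0  # returns 0 instead of -v for negative inputs

tests = [((3,), 3), ((0,), 0), ((-2,), 2)]
print(coding_rewards(buggy_abs, tests))  # -> (0.666..., 1.0)
```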
This asymmetric self-play formulation enables stable training without ground-truth supervision while adapting the reward design to each problem domain.