Publications


2022
Emergent abilities of large language models.
J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus.
Least-to-most prompting enables complex reasoning in large language models.
D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, O. Bousquet, Q. Le, and E. Chi.
PaLM: Scaling language modeling with Pathways.
{A. Chowdhery, S. Narang, J. Devlin} and 64 additional authors.
Artificial stream of thought has non-trivial connections to consciousness.
J. Wei.
Self-consistency improves chain of thought reasoning in language models.
X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou.
Chain of thought prompting elicits reasoning in large language models.
J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou.
Sundar explains chain of thought prompting at Google I/O 2022 / Google AI blog
ACL '22 A recipe for arbitrary text style transfer with large language models.
{E. Reif, D. Ippolito}, A. Yuan, A. Coenen, C. Callison-Burch, and J. Wei.
ICLR '22 Finetuned language models are zero-shot learners.
{J. Wei, M. Bosma, V. Zhao, K. Guu}, A. Yu, B. Lester, N. Du, A. Dai, and Q. Le.
Google AI blog / oral
ICLR '22 The MultiBERTs: BERT reproductions for robustness analysis.
{T. Sellam, S. Yadlowsky}, I. Tenney, J. Wei, N. Saphra, A. D'Amour, T. Linzen, J. Bastings, I. Turc, J. Eisenstein, D. Das, and E. Pavlick.
2021
EMNLP '21 Frequency effects on syntactic rule learning in transformers.
J. Wei, D. Garrette, T. Linzen, and E. Pavlick.
Google AI blog / oral
EMNLP '21 Good-enough example extrapolation.
J. Wei.
ACL '21 A cognitive regularizer for language modeling.
J. Wei, C. Meister, and R. Cotterell.
ACL '21 Language model augmented relevance score.
R. Liu, J. Wei, and S. Vosoughi.
ACL '21 (Findings) A survey of data augmentation approaches for NLP.
{S. Feng, V. Gangal}, J. Wei, S. Chandar, S. Vosoughi, T. Mitamura, and E. Hovy.
ACL '21 (Findings) Modulating language models with emotions.
R. Liu, J. Wei, C. Jia, and S. Vosoughi.
NAACL '21 Linguistic complexity loss in text-based therapy.
J. Wei, K. Finn, E. Templeton, T. Wheatley, and S. Vosoughi.
NAACL '21 Few-shot text classification with triplet networks, data augmentation, and curriculum learning.
J. Wei, C. Huang, S. Vosoughi, Y. Cheng, and S. Xu.
EACL '21 Text augmentation in a multi-task view.
J. Wei, C. Huang, S. Xu, and S. Vosoughi.
AAAI '21 Mitigating political bias in language models through reinforced calibration (outstanding paper).
R. Liu, C. Jia, J. Wei, G. Xu, L. Wang, and S. Vosoughi.
2019
EMNLP '19 Easy data augmentation techniques for boosting performance on text classification tasks.
J. Wei and K. Zou.