Bibliography

References

[AYQ+21]

Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. VATT: transformers for multimodal self-supervised learning from raw video, audio and text. Advances in Neural Information Processing Systems, 34:24206–24221, 2021.

[BBD16]

Scott R Baker, Nicholas Bloom, and Steven J Davis. Measuring economic policy uncertainty. The Quarterly Journal of Economics, 131(4):1593–1636, 2016.

[BDVJ03]

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003. URL: https://jmlr.org/papers/volume3/bengio03a/bengio03a.pdf.

[BWT21]

Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.

[BHK+21]

Nicholas Bloom, Tarek Alexander Hassan, Aakash Kalyani, Josh Lerner, and Ahmed Tahoun. The diffusion of disruptive technologies. Technical Report, National Bureau of Economic Research, 2021.

[BGJM16]

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606, 2016. URL: https://arxiv.org/pdf/1607.04606.pdf.

[BGKS+16]

Matthew Burgess, Eugenia Giraudy, Julian Katz-Samuels, Joe Walsh, Derek Willis, Lauren Haynes, and Rayid Ghani. The legislative influence detector: finding text reuse in state legislation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 57–66, 2016.

[Cat18]

Amy Catalinac. From pork to policy: the rise of programmatic campaigning in Japanese elections. In Critical Readings on the Liberal Democratic Party in Japan, pages 882–917. Brill, 2018.

[CGZE19]

Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Everybody dance now. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5933–5942, 2019.

[DlRPV+22]

Javier De la Rosa, Eduardo G Ponferrada, Paulo Villegas, Pablo Gonzalez de Prado Salas, Manu Romero, and María Grandury. BERTIN: efficient pre-training of a Spanish language model using perplexity sampling. arXiv preprint arXiv:2207.06814, 2022.

[DCLT18]

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. URL: https://arxiv.org/abs/1810.04805.

[DCL21]

Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: pure attention loses rank doubly exponentially with depth. In International Conference on Machine Learning, pages 2793–2803. PMLR, 2021.

[DBK+20]

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, and others. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

[FCDZ04]

Haodi Feng, Kang Chen, Xiaotie Deng, and Weimin Zheng. Accessor variety criteria for Chinese word extraction. Computational Linguistics, 30(1):75–93, 2004. URL: https://aclanthology.org/J04-1004.pdf.

[Fir57]

John Firth. A synopsis of linguistic theory, 1930–1955. Studies in Linguistic Analysis, pages 10–32, 1957.

[GZZ+22]

Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3D human motions from text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5152–5161, 2022.

[HXW+21]

Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. Advances in Neural Information Processing Systems, 34:15908–15919, 2021.

[HMP18]

Stephen Hansen, Michael McMahon, and Andrea Prat. Transparency and deliberation within the FOMC: a computational linguistics approach. The Quarterly Journal of Economics, 133(2):801–870, 2018.

[Har54]

Zellig S Harris. Distributional structure. Word, 10(2-3):146–162, 1954.

[Har70]

Zellig S Harris. From phoneme to morpheme. In Papers in Structural and Transformational Linguistics, pages 32–67. Springer, 1970.

[HHVLT19]

Tarek A Hassan, Stephan Hollander, Laurence Van Lent, and Ahmed Tahoun. Firm-level political risk: measurement and effects. The Quarterly Journal of Economics, 134(4):2135–2202, 2019.

[HSST21]

Tarek Alexander Hassan, Jesse Schreger, Markus Schwedeler, and Ahmed Tahoun. Sources and transmission of country risk. Technical Report, National Bureau of Economic Research, 2021.

[HP16]

Gerard Hoberg and Gordon Phillips. Text-based network industries and endogenous product differentiation. Journal of Political Economy, 124(5):1423–1465, 2016.

[HKS17]

Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG), 36(4):1–13, 2017.

[JTI06]

Zhihui Jin and Kumiko Tanaka-Ishii. Unsupervised segmentation of Chinese text by use of branching entropy. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 428–435, Sydney, Australia, July 2006. Association for Computational Linguistics. URL: https://aclanthology.org/P06-2056.

[KLS+20]

Gary A Kane, Gonçalo Lopes, Jonny L Saunders, Alexander Mathis, and Mackenzie W Mathis. Real-time, low-latency closed-loop feedback using markerless posture tracking. eLife, 9:e61909, 2020.

[KKC22]

Jihoon Kim, Jiseob Kim, and Sungjoon Choi. FLAME: free-form language-based motion synthesis & editing. arXiv preprint arXiv:2209.00349, 2022.

[Kud18]

Taku Kudo. Subword regularization: improving neural network translation models with multiple subword candidates. arXiv preprint arXiv:1804.10959, 2018.

[Lee22]

Minchul Lee. bab2min/tomotopy: 0.12.3. Zenodo, July 2022. URL: https://doi.org/10.5281/zenodo.6868418, doi:10.5281/zenodo.6868418.

[LKP19]

Young Joon Lee, Soohyon Kim, and Ki Young Park. Deciphering monetary policy board minutes with text mining: the case of South Korea. Korean Economic Review, 35:471–511, 2019.

[MMC+18]

Alexander Mathis, Pranav Mamidanna, Kevin M Cury, Taiga Abe, Venkatesh N Murthy, Mackenzie Weygandt Mathis, and Matthias Bethge. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21(9):1281–1289, 2018.

[MSC+13]

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 2013.

[MLK+23]

Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D Manning, and Chelsea Finn. DetectGPT: zero-shot machine-generated text detection using probability curvature. arXiv preprint arXiv:2301.11305, 2023. URL: https://arxiv.org/pdf/2301.11305.pdf.

[NLH19]

Oded Netzer, Alain Lemaire, and Michal Herzenstein. When words sweat: identifying signals for loan default in the text of loan applications. Journal of Marketing Research, 56(6):960–980, 2019.

[PKM+18]

Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, and Sergey Levine. SFV: reinforcement learning of physical skills from videos. ACM Transactions on Graphics (TOG), 37(6):1–14, 2018.

[PSM14]

Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics. URL: https://aclanthology.org/D14-1162, doi:10.3115/v1/D14-1162.

[RSR+20]

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, and others. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.

[RZP+22]

Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, and others. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.

[RSN20]

Margaret E Roberts, Brandon M Stewart, and Richard A Nielsen. Adjusting for confounding with text matching. American Journal of Political Science, 64(4):887–903, 2020.

[RoderBH15]

Michael Röder, Andreas Both, and Alexander Hinneburg. Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 399–408, 2015.

[SHG+15]

David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. Hidden technical debt in machine learning systems. Advances in Neural Information Processing Systems, 2015.

[SHB16]

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany, August 2016. Association for Computational Linguistics. URL: https://aclanthology.org/P16-1162, doi:10.18653/v1/P16-1162.

[SDSKs18]

Eli Shlizerman, Lucio Dery, Hayden Schoen, and Ira Kemelmacher-Shlizerman. Audio to body dynamics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7574–7583, 2018.

[TRG+22]

Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H Bermano. Human motion diffusion model. arXiv preprint arXiv:2209.14916, 2022.

[VSP+17]

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 2017.

[WTB+22]

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, and others. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022. URL: https://openreview.net/pdf?id=yzkSU5zdwD.

[XWI+21]

Kevin Xie, Tingwu Wang, Umar Iqbal, Yunrong Guo, Sanja Fidler, and Florian Shkurti. Physics-based human motion estimation and synthesis from videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11532–11541, 2021.

[XBC+22]

Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291–306, 2022. URL: https://arxiv.org/pdf/2105.13626v1.pdf.

[XCR+20]

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. mT5: a massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934, 2020.

[YDY+19]

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. XLNet: generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 2019. URL: https://arxiv.org/pdf/1906.08237.pdf.

[ZCP+22]

Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. MotionDiffuse: text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022.