Webinar #1 "Harnessing Large Language Models in Asset Management"

September 19, 2024


ESSEC - Amundi Chair on Asset & Risk Management



Program:


16.30 - 17.15: "Are Green Innovations Priced? Evidence Beyond Patents"
by Tingyu Yu, University of Zurich



17.15 - 18.00: "Expected Returns and Large Language Models"
by Dacheng Xiu, University of Chicago, Booth School of Business

Until recently, the finance literature relied mostly on numerical data for predictions. Yet the last two decades have seen a rising use of information from text sources (press news, social media, corporate documents, etc.) to gain a better understanding of financial markets. Recent developments in AI and machine learning provide new possibilities to dig deeper into textual data and extract information that may not be captured by standard numerical data. New language models can capture complex relationships within a text and thereby improve our ability to understand and predict financial markets.

 

In the first paper, presented by Tingyu Yu (University of Zurich), the authors develop new text-based green innovation measures at the firm level to quantify how financial markets integrate green innovations. The authors fine-tune ClimateBERT, a deep neural network language model pre-trained on climate-related texts, to identify and categorize green innovation sentences in earnings calls. This approach allows them to expand the scope and improve the precision of the assessment of companies' green innovation activities. In doing so, the authors show that the value of green innovation lies not only in its invention but also in its adoption and application within corporate strategies, as captured by their textual measures.
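As a rough illustration of this kind of pipeline (not the authors' code), the sketch below fine-tunes a publicly available ClimateBERT checkpoint for sentence classification with the Hugging Face transformers library. The model id, label set and toy earnings-call sentences are assumptions made for the example.

# Minimal sketch, assuming a ClimateBERT checkpoint on the Hugging Face Hub
# and a hypothetical binary "green innovation" label set.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

MODEL_ID = "climatebert/distilroberta-base-climate-f"  # assumed checkpoint
LABELS = ["not_green_innovation", "green_innovation"]   # illustrative labels

# Toy earnings-call sentences with hypothetical labels
train = Dataset.from_dict({
    "text": [
        "We launched a new battery recycling process this quarter.",
        "Revenue guidance for next year remains unchanged.",
    ],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=len(LABELS))

def tokenize(batch):
    # Turn raw sentences into token ids and attention masks
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="climatebert-green",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()

Once fine-tuned, such a classifier can be run over every sentence of an earnings-call transcript, and the share of sentences flagged as green innovation provides a firm-level textual measure of the sort described above.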

 

The second paper, presented by Dacheng Xiu (University of Chicago, Booth School of Business), aims at better exploiting text data sources by constructing refined news text representations derived from large language models (LLMs) and then using them to improve models of expected stock returns. The authors study three large-scale pre-trained LLMs: BERT, RoBERTa and OPT. They compare these with SESTM, a sentiment analyzer based on a bag-of-words representation, as well as with machine learning models commonly used in finance. Although the main analysis is conducted on the US equity market using news articles in English, they extend it to 16 international stock markets using news articles written in 12 other languages. The primary contribution of the paper is to highlight the advantages of LLM representations for effectively modeling stock returns.
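The general idea can be sketched as follows (this is not the authors' implementation): embed each news article with a pre-trained LLM and regress subsequent stock returns on the resulting representation. The model choice, mean pooling, linear head and toy data below are illustrative assumptions.

# Minimal sketch: LLM news embeddings as return predictors.
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

MODEL_ID = "roberta-base"  # one of the model families mentioned (BERT/RoBERTa/OPT)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID).eval()

def embed(texts):
    # Mean-pooled last-hidden-state embeddings for a list of news texts
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=256, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state        # (batch, seq, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)      # ignore padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

# Toy example: two news snippets and hypothetical next-day returns
news = ["Company beats earnings expectations and raises guidance.",
        "Regulator opens investigation into accounting practices."]
next_day_returns = np.array([0.012, -0.020])

X = embed(news)
reg = Ridge(alpha=1.0).fit(X, next_day_returns)     # simple linear head
print(reg.predict(embed(["Firm announces record quarterly profit."])))

In the paper the prediction step is far richer than this ridge regression, but the sketch shows the core contrast with bag-of-words approaches such as SESTM: the predictor works on dense contextual representations rather than word counts.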

PAPERS

Leippold, M., & Yu, T. (2023). Are Green Innovations Priced? Evidence Beyond Patents.
Chen, Y., Kelly, B. T., & Xiu, D. (2022). Expected Returns and Large Language Models.