September 17 · Yinglun Zhu: Efficient Sequential Decision Making with Large Language Models
Topic: Efficient Sequential Decision Making with Large Language Models
Speaker: Yinglun Zhu (朱英倫)
Start time: 2024-09-17 10:00:00
Venue: Room 1514, Science Building A, Putuo Campus
Organizers: School of Statistics; Academy of Statistics and Interdisciplinary Sciences
About the Speaker

Yinglun Zhu is an assistant professor in the ECE department at the University of California, Riverside; he is also affiliated with the CSE department, the Riverside Artificial Intelligence Research Institute, and the Center for Robotics and Intelligent Systems. His research focuses on machine learning, particularly on developing efficient and reliable learning algorithms and systems for large-scale, multimodal problems. His work not only establishes the foundations of various learning paradigms but also applies them in practical settings to address real-world challenges. His research has been integrated into leading machine learning libraries such as Vowpal Wabbit and into commercial products such as the Microsoft Azure Personalizer Service. More information can be found on Yinglun's personal website at https://yinglunz.com/.


Abstract

This presentation focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or fine-tune LLMs for decision making, or (ii) design prompts for pretrained LLMs. The former approach suffers from the computational burden of gradient updates, and the latter has not shown promising results. In this presentation, I'll talk about a new approach that leverages online model selection algorithms to efficiently incorporate LLM agents into sequential decision making. Statistically, our approach significantly outperforms both traditional decision making algorithms and vanilla LLM agents. Computationally, our approach avoids the need for expensive gradient updates of LLMs and, throughout the decision making process, requires only a small number of LLM calls. We conduct extensive experiments to verify the effectiveness of our proposed approach. As an example, on a large-scale Amazon dataset, our approach achieves more than a 6x performance gain over baselines while calling LLMs in only 1.5% of the time steps.
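The abstract describes the approach only at a high level. Below is a minimal, self-contained Python sketch of the general idea of online model selection over candidate policies: a cheap epsilon-greedy baseline and a stub LLM agent on a toy two-armed bandit. The environment, the UCB-style meta-selector, and all names here are illustrative assumptions, not the algorithm presented in the talk; the sketch only shows the interface, in which the selector decides at each step which single policy to query, so the frozen LLM agent is called on only a subset of steps and never receives gradient updates.

import math
import random

# Hypothetical two-armed Bernoulli bandit (illustrative environment only).
def pull(arm: int) -> float:
    return 1.0 if random.random() < (0.7 if arm == 1 else 0.3) else 0.0

class EpsilonGreedy:
    # Cheap traditional baseline policy.
    def __init__(self, n_arms: int = 2, eps: float = 0.1):
        self.eps = eps
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms
    def act(self) -> int:
        if random.random() < self.eps or 0 in self.counts:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)),
                   key=lambda a: self.sums[a] / self.counts[a])
    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.sums[arm] += reward

class LLMAgent:
    # Stand-in for an expensive pretrained LLM agent; a real agent would
    # prompt an LLM with the interaction history (hypothetical stub).
    def __init__(self):
        self.calls = 0
    def act(self) -> int:
        self.calls += 1      # each call is costly, so the selector rations them
        return 1             # pretend the LLM tends to pick the better arm
    def update(self, arm: int, reward: float) -> None:
        pass                 # no gradient updates: the LLM stays frozen

def run(T: int = 2000) -> None:
    policies = [EpsilonGreedy(), LLMAgent()]
    n = [0] * len(policies)       # times each policy was selected
    s = [0.0] * len(policies)     # reward accumulated under each policy
    for t in range(1, T + 1):
        # UCB-style model selection: try each policy once, then pick the
        # one with the highest optimistic reward estimate.
        ucb = [float("inf") if n[i] == 0
               else s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i])
               for i in range(len(policies))]
        i = max(range(len(policies)), key=lambda j: ucb[j])
        arm = policies[i].act()   # only the selected policy is queried,
                                  # so LLM calls happen on a subset of steps
        reward = pull(arm)
        n[i] += 1
        s[i] += reward
        for p in policies:        # all policies observe the logged feedback
            p.update(arm, reward)
    print(f"avg reward: {sum(s) / T:.3f}, "
          f"LLM calls: {policies[1].calls} of {T} steps")

if __name__ == "__main__":
    random.seed(0)
    run()

In the work described above, the selector is designed so that LLM calls shrink to a small fraction of the time steps (1.5% in the Amazon example); the toy UCB selector here merely illustrates how a meta-algorithm can ration queries between a frozen LLM and a traditional bandit algorithm.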

