Asset Details
PaLM 2 Technical Report
by Clark, Jonathan H.; Robinson, Kevin; Tokumine, Simon; Dave, Shachi; Lin, Hanzhao; Catasta, Michele; Dehghani, Mostafa; Yu, Jiahui; Botha, Jan; Zheng, Ce; Dai, Andrew M.; Wu, Yonghui; Hernandez Abrego, Gustavo; Chowdhery, Aakanksha; Hui, Jeffrey; Liu, Zhongtao; Johnson, Melvin; Nham, John; Slone, Ambrose; Zhang, Qiao; Meier-Hellstern, Kathy; Brahma, Siddhartha; Crepy, Clément; Rajkumar, Samuel; El Shafey, Laurent; Qiao, Siyuan; Chu, Eric; Vodrahalli, Kiran; Xue, Linting; Zhang, Yujing; Devlin, Jacob; Nado, Zachary; Austin, Jacob; Richter, Bryan; Cheng, Yong; Xiao, Kefan; Zhou, Weikang; Wieting, John; Garcia, Xavier; Li, Jian; Hashemi, Hadi; Zheng, Steven; Ruder, Sebastian; Moussalem, Maysam; Gur-Ari, Guy; Krikun, Maxim; Li, YaGuang; Lee, Benjamin; Jia, Wenhao; Ahn, Junwhan; Bradbury, James; Freitag, Markus; Choquette-Choo, Christopher A.; Moreira, Erica; Chen, Zhifeng; Maynez, Joshua; Reif, Emily; Wang, Zirui; Isard, Michael; Lee, Katherine; Xu, Kelvin; Dev, Sunipa; Hand, Steven; Hu, Andrea; Shelby, Renee; Saeta, Brennan; Wang, Tao; Roy, Aurko; Parrish, Alicia; Barham, Paul; Brooks, Kevin; Pellat, Marie; Polozov, Alex; Hou, Le; Feinberg, Vlad; et al.
in Inference / Multilingualism / Reasoning / State of the art / Toxicity
2023
Paper
Overview
We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language tasks and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities, exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 also exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities.

When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time, so the performance of user-facing products should not be expected to exactly match the results reported here.
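The report attributes the "inference-time control over toxicity" to control tokens: a small fraction of pre-training text is tagged with special tokens indicating its toxicity level, so at inference time the model can be steered simply by prepending the desired token to the prompt. The sketch below illustrates that prompting pattern only; the token strings and function names are hypothetical placeholders, as the report does not publish an API for this mechanism.

```python
# Minimal sketch of inference-time toxicity control via control tokens,
# under the assumption (per the report) that the model saw such tokens
# on a tagged slice of its pre-training data. All identifiers below
# (token strings, controlled_prompt) are hypothetical.

from enum import Enum


class ToxicityControl(Enum):
    """Hypothetical control-token strings, one per toxicity bucket."""
    LOW = "<toxicity:low>"
    MEDIUM = "<toxicity:medium>"
    HIGH = "<toxicity:high>"


def controlled_prompt(prompt: str,
                      level: ToxicityControl = ToxicityControl.LOW) -> str:
    """Prepend the chosen control token so generation conditions on it.

    Because the token was learned during pre-training, steering needs no
    fine-tuning and no extra inference pass (e.g. a separate classifier),
    which is why the report describes the control as overhead-free.
    """
    return f"{level.value} {prompt}"


if __name__ == "__main__":
    # Steer a completion toward the low-toxicity bucket.
    print(controlled_prompt("Write a reply to this angry customer email."))
```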
Publisher
Cornell University Library, arXiv.org
Subject
Inference / Multilingualism / Reasoning / State of the art / Toxicity