Asset Details
Examination of Code generated by Large Language Models
by Feix, Alexander; Müller, Vincent; Rauscher, Maurice; Löwe, Welf; Schäffler, Florian; Beer, Robin; Muras, Tamara; Guttzeit, Tim
in Algorithms / Chatbots / Large language models / Python / Rapid prototyping / Software development
2024
Paper
Overview
Large language models (LLMs), such as ChatGPT and Copilot, are transforming software development by automating code generation and, arguably, enabling rapid prototyping, supporting education, and boosting productivity. The correctness and quality of the generated code should therefore be on par with manually written code. To assess the current state of LLMs in generating correct, high-quality code, we conducted controlled experiments with ChatGPT and Copilot: we let the LLMs generate simple algorithms in Java and Python, along with the corresponding unit tests, and assessed the correctness and quality (coverage) of the generated (test) code. We observed significant differences between the LLMs, between the languages, between algorithm and test code, and over time. The present paper reports these results together with the experimental methods, allowing repeated and comparable assessments for more algorithms, languages, and LLMs over time.
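The assessment loop the abstract describes (run the generated unit tests, record pass/fail as a correctness proxy, and measure coverage) can be automated. Below is a minimal, hypothetical Python sketch, not taken from the paper: it assumes pytest and pytest-cov are installed, and the file name test_generated.py and module name generated_algorithm are illustrative placeholders, not artifacts of the study.

# Minimal sketch (not from the paper): score one LLM-generated solution by
# running its generated unit tests and measuring statement coverage.
# Assumes pytest and pytest-cov are available; file/module names below
# are hypothetical placeholders for the generated artifacts.
import json
import subprocess

def assess_generated_code(test_file="test_generated.py",
                          module="generated_algorithm"):
    """Run the generated tests and summarize correctness and coverage."""
    # Execute the generated tests; --cov restricts coverage measurement
    # to the module under test, and the JSON report is written to disk.
    result = subprocess.run(
        ["pytest", test_file, f"--cov={module}",
         "--cov-report=json:coverage.json", "-q"],
        capture_output=True, text=True,
    )
    passed = result.returncode == 0  # exit code 0 => all tests passed
    # Read the statement-coverage percentage produced by pytest-cov.
    with open("coverage.json") as fh:
        coverage = json.load(fh)["totals"]["percent_covered"]
    return {"tests_passed": passed, "coverage_percent": coverage}

if __name__ == "__main__":
    print(assess_generated_code())

Repeating such a harness per algorithm, language, and LLM, as the authors do across Java and Python, is what makes the reported comparisons over time reproducible.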
Publisher
Cornell University Library, arXiv.org