Result: PyScribe – Learning to describe python code

Title:
PyScribe – Learning to describe python code
Contributors:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Other
Source:
Software: Practice and Experience, 1-27 (2023-12)
Publisher Information:
John Wiley and Sons Ltd
Publication Year:
2023
Collection:
University of Luxembourg: ORBilu - Open Repository and Bibliography
Document Type:
Journal article (article in journal/newspaper)
Language:
English
ISSN:
0038-0644
1097-024X
Relation:
urn:issn:0038-0644; urn:issn:1097-024X; https://orbilu.uni.lu/handle/10993/59633; info:hdl:10993/59633; wos:001117942500001
DOI:
10.1002/spe.3291
Rights:
open access ; http://purl.org/coar/access_right/c_abf2 ; info:eu-repo/semantics/openAccess
Accession Number:
edsbas.5202D864
Database:
BASE

Further Information

Peer reviewed. Code comment generation, which attempts to summarize the functionality of source code in textual descriptions, plays an important role in automatic software development research. Several structural neural networks have been proposed to preserve the syntactic structure of source code based on abstract syntax trees (ASTs). However, they cannot capture both long-distance and local relations between nodes well while retaining the overall structural information of the AST. To mitigate this problem, we present a prototype tool named PyScribe, which extends the Transformer model into a new encoder-decoder-based framework. In particular, a triplet position is designed and integrated into the node-level and edge-level structural features of the AST to produce Python code comments automatically. To the best of our knowledge, this paper makes the first effort to model the edges of the AST as an explicit component for improved code representation. By specifying triplet positions for each node and edge, the overall structural information is preserved during learning. Moreover, the captured node and edge features go through a two-stage decoding process to yield higher-quality comments. To evaluate the effectiveness of PyScribe, we build a large dataset of code-comment pairs by mining Jupyter Notebooks from GitHub, which we have made publicly available to support further studies. The experimental results show that PyScribe is indeed effective, outperforming the state of the art by achieving an average BLEU score (i.e., av-BLEU) of about 0.28.
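
The abstract describes node-level and edge-level structural features of the AST indexed by triplet positions. The sketch below is a minimal, hypothetical illustration of how such nodes and edges could be enumerated from Python source with the standard `ast` module; the triplet layout (parent id, child index, depth) and the function `ast_triplets` are assumptions chosen for illustration and do not reproduce PyScribe's actual encoding or learning pipeline.

```python
import ast

def ast_triplets(source):
    """Walk a Python AST, assigning each node a (parent_id, child_index, depth)
    triplet and recording each edge as (parent_id, child_id, child_index).
    This triplet layout is an illustrative assumption, not PyScribe's scheme."""
    tree = ast.parse(source)
    nodes, edges = [], []

    def visit(node, parent_id, child_index, depth):
        node_id = len(nodes)
        nodes.append((type(node).__name__, (parent_id, child_index, depth)))
        if parent_id is not None:
            edges.append((parent_id, node_id, child_index))
        # Recurse over direct children, numbering them left to right.
        for i, child in enumerate(ast.iter_child_nodes(node)):
            visit(child, node_id, i, depth + 1)

    visit(tree, None, 0, 0)
    return nodes, edges

if __name__ == "__main__":
    code = "def add(a, b):\n    return a + b\n"
    nodes, edges = ast_triplets(code)
    for name, pos in nodes:
        print(name, pos)   # e.g. ('FunctionDef', (0, 0, 1))
```

Enumerating edges explicitly, as sketched here, is what would allow an encoder to attend over edge features separately from node features, which is the representational idea the abstract attributes to PyScribe.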