Please use this identifier to cite or link to this item: https://elibrary.khec.edu.np:8080/handle/123456789/673
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: Er. Dinesh Gothe
dc.contributor.author: Aakash Pradhan (750301)
dc.contributor.author: Ashish Lawaju (750307)
dc.contributor.author: Deepesh Kayastha (750311)
dc.contributor.author: Sangam Thapa (750339)
dc.contributor.author: Pratham Dahal (740327)
dc.date.accessioned: 2023-09-20T12:08:08Z
dc.date.available: 2023-09-20T12:08:08Z
dc.date.issued: 2023-08
dc.identifier.uri: https://elibrary.khec.edu.np/handle/123456789/673
dc.description.abstract (en_US): Prasta Nepali is a web application that provides grammar-checking services for the Nepali language: it analyzes text to identify grammatical errors and suggests corrections. Traditionally, grammar checking required the manual creation and upkeep of predefined grammar rules, which demands substantial effort. Recent advances in artificial intelligence, notably the emergence of transformer models, offer new ways to automate this task. Transformers are a deep learning architecture applicable to a variety of tasks, including text generation and analysis. They comprise two key elements: an encoder that processes the input and a decoder that generates the output. Unlike conventional methods, transformers capture contextual relationships between the words in a sentence, enabling a more nuanced understanding of grammar. The model is trained on a dataset containing both correct and erroneous sentences; once trained, it can generate corrections for new sentences, improving their grammatical accuracy. This approach reduces manual intervention while increasing the efficiency and accuracy of grammar error detection and correction. Three architectures were evaluated as grammar checkers: a stacked LSTM, a Bi-LSTM with attention, and a transformer, of which the transformer performed best. The stacked LSTM model obtained a training accuracy, validation accuracy, training loss, and validation loss of 73.12%, 65.00%, 54.02%, and 64.13%, respectively. The Bi-LSTM with attention model obtained 88.62%, 80.43%, 25.15%, and 43.47% for the same metrics, and the transformer model obtained 90.45%, 92.15%, 28.12%, and 51.23%. Bilingual evaluation understudy (BLEU) scores for 1-gram, 2-gram, 3-gram, and 4-gram matches between the candidate and reference translations were 0.9037, 0.8170, 0.7838, and 0.7694, respectively. (Illustrative code sketches of the transformer setup and the BLEU computation appear after this metadata record.)
dc.language.iso: en (en_US)
dc.subject (en_US): Prasta Nepali, Grammar checking, Transformer models, Artificial intelligence, Contextual relationships, Accuracy, Loss, BLEU score, Candidate translation, Reference translation
dc.title (en_US): Prasta Nepali
dc.type (en_US): Technical Report
local.college.name: Khwopa Engineering College
local.degree.department: Department of Computer
local.degree.name: BE Computer
local.degree.level: Bachelor's Degree
local.item.accessionnumber: D.1361
Appears in Collections: PU Computer Report
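
The abstract above describes an encoder-decoder transformer trained on pairs of erroneous and corrected Nepali sentences. The following is a minimal sketch, not the authors' implementation, of how such a sequence-to-sequence corrector could be wired up in PyTorch; the vocabulary size, model dimensions, and the assumption that sentences are already converted to token IDs by some Nepali tokenizer are all illustrative placeholders.

    import torch
    import torch.nn as nn

    class GrammarCorrector(nn.Module):
        # Toy encoder-decoder transformer for grammar error correction.
        def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=nhead,
                num_encoder_layers=num_layers, num_decoder_layers=num_layers,
                batch_first=True)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, src_ids, tgt_ids):
            # The encoder reads the (possibly erroneous) source sentence; the
            # decoder predicts the corrected sentence token by token, masked so
            # it cannot look ahead at tokens it has not generated yet.
            src = self.embed(src_ids)
            tgt = self.embed(tgt_ids)
            tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
            hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
            return self.out(hidden)  # logits over the vocabulary

    # Hypothetical usage with random token IDs standing in for tokenized sentences.
    model = GrammarCorrector(vocab_size=8000)
    src = torch.randint(0, 8000, (2, 12))   # erroneous input sentences
    tgt = torch.randint(0, 8000, (2, 12))   # shifted corrected sentences
    logits = model(src, tgt)                # shape: (2, 12, 8000)

The abstract also reports BLEU scores separately for 1-gram through 4-gram matches. One way such per-n-gram scores can be computed is with NLTK's sentence_bleu, selecting a single n-gram order through the weights argument; the reference and candidate sentences below are hypothetical placeholders, not the project's evaluation data.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = ["म", "भोलि", "घर", "जान्छु"]   # corrected reference, tokenized
    candidate = ["म", "भोलि", "घर", "गएँ"]      # model output, tokenized
    smooth = SmoothingFunction().method1

    for n in range(1, 5):
        # weights pick out one n-gram order: (1, 0, 0, 0) scores 1-grams only, etc.
        weights = tuple(1.0 if i == n - 1 else 0.0 for i in range(4))
        score = sentence_bleu([reference], candidate, weights=weights,
                              smoothing_function=smooth)
        print(f"{n}-gram BLEU: {score:.4f}")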

Files in This Item:
File: prasta nepali_final_printed.pdf (Restricted Access)
Size: 1.58 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.