In recent years, generative models, in particular large language models built on recent advances in AI, have emerged as promising tools for numerous applications. This work illustrates how one such model, CodeT5, can enhance secure text-to-code generation. CodeT5 is a unified pre-trained encoder–decoder transformer model that leverages the semantic hints carried by developer-assigned identifier names, improving code understanding and supporting trustworthy text-to-code generation. It addresses gaps in prior work by incorporating an identifier-aware pre-training task and by exploiting user-written code comments to align natural language with programming-language abstractions. To improve the security of generated code, CodeT5 is further trained on a large dataset of CVE records, using pairs of code snippets taken before and after security patches. This hybrid paradigm advances both secure coding practice and AI-enhanced software engineering.
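To make the text-to-code setting concrete, the following is a minimal sketch of generating code from a natural-language prompt with the publicly released CodeT5 checkpoint on the Hugging Face hub (Salesforce/codet5-base). The prompt string and the decoding parameters are illustrative assumptions, not part of the original work; in practice the base checkpoint would first be fine-tuned on a text-to-code dataset (here, the pre-/post-patch CVE pairs described above) before its output becomes useful.

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

# Load the released CodeT5 base checkpoint from the Hugging Face hub.
tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Hypothetical natural-language prompt; meaningful code generation would
# require fine-tuning on NL->code pairs (e.g., CVE security-patch snippets).
prompt = "Write a Python function that validates a user-supplied file path."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Beam-search decoding; max_length and num_beams are illustrative settings.
generated_ids = model.generate(input_ids, max_length=128, num_beams=4)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

Fine-tuning on before/after patch pairs would follow the same encoder–decoder interface, with the natural-language description (or vulnerable snippet) as the source sequence and the patched code as the target.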