Bibliography

[1]
Ada Developers, “Ada Reference Manual, 2022 Edition,” 2022. Available: https://www.adaic.org/resources/add_content/standards/22rm/html/RM-TTL.html
[2]
M. Zalewski, “American fuzzy lop.” Available: https://lcamtuf.coredump.cx/afl/
[3]
M. Heuse, H. Eißfeldt, A. Fioraldi, and D. Maier, “AFL++.” Jan. 2022. Available: https://github.com/AFLplusplus/AFLplusplus
[4]
Astral, “Astral-sh/uv.” Astral, Jul. 18, 2025. Available: https://github.com/astral-sh/uv
[5]
Astral, “Astral-sh/ruff.” Astral, Jul. 18, 2025. Available: https://github.com/astral-sh/ruff
[6]
Google, “Google/atheris.” Google, Apr. 09, 2025. Available: https://github.com/google/atheris
[7]
T. Avgerinos et al., “The Mayhem cyber reasoning system,” IEEE Security & Privacy, vol. 16, no. 2, pp. 52–60, 2018.
[8]
F. Bacon, Of the Proficience and Advancement of Learning... Edited by the Rev. GW Kitchin. Bell & Daldy, 1861.
[9]
D. Bahdanau, K. Cho, and Y. Bengio, “Neural Machine Translation by Jointly Learning to Align and Translate,” May 19, 2016. doi: 10.48550/arXiv.1409.0473. Available: http://arxiv.org/abs/1409.0473
[10]
GNU Project, “Bash - GNU Project - Free Software Foundation.” Available: https://www.gnu.org/software/bash/
[11]
F. Bellard, P. Maydell, and QEMU Team, “QEMU.” May 29, 2025. Available: https://www.qemu.org/
[12]
G. Black, V. Mathew Vaidyan, and G. Comert, “Evaluating Large Language Models for Enhanced Fuzzing: An Analysis Framework for LLM-Driven Seed Generation,” IEEE Access, vol. 12, pp. 156065–156081, 2024, doi: 10.1109/ACCESS.2024.3484947. Available: https://ieeexplore.ieee.org/abstract/document/10731701
[13]
F. Both, “Why we no longer use LangChain for building our AI agents,” 2024. Available: https://octomind.dev/blog/why-we-no-longer-use-langchain-for-building-our-ai-agents
[14]
T. B. Brown et al., “Language Models are Few-Shot Learners,” Jul. 22, 2020. doi: 10.48550/arXiv.2005.14165. Available: http://arxiv.org/abs/2005.14165
[15]
A. Cedilnik, B. Hoffman, B. King, K. Martin, and A. Neundorf, “CMake - Upgrade Your Software Build System.” 2000. Available: https://cmake.org/
[16]
S. K. Cha, T. Avgerinos, A. Rebert, and D. Brumley, “Unleashing Mayhem on binary code,” in 2012 IEEE symposium on security and privacy, IEEE, 2012, pp. 380–394.
[17]
J. Wei et al., “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” Jan. 10, 2023. doi: 10.48550/arXiv.2201.11903. Available: http://arxiv.org/abs/2201.11903
[18]
OpenAI, “ChatGPT,” 2025. Available: https://chatgpt.com
[19]
M. Chen et al., “Evaluating Large Language Models Trained on Code,” Jul. 14, 2021. doi: 10.48550/arXiv.2107.03374. Available: http://arxiv.org/abs/2107.03374
[20]
Anthropic, “Claude,” 2025. Available: https://claude.ai/new
[21]
Clibs Project, “Clibs/clib.” clibs, Jul. 01, 2025. Available: https://github.com/clibs/clib
[22]
Clibs Project, “Clib Packages,” 2025. Available: https://github.com/clibs/clib/wiki/Packages
[23]
Google, “Google/clusterfuzz.” Google, Apr. 09, 2025. Available: https://github.com/google/clusterfuzz
[24]
A. Cortesi, M. Hils, and T. Kriechbaumer, “Mitmproxy/pdoc.” mitmproxy, Jul. 18, 2025. Available: https://github.com/mitmproxy/pdoc
[25]
Anysphere, “Cursor - The AI Code Editor,” 2025. Available: https://cursor.com/
[26]
DeepSeek-AI et al., “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” Jan. 22, 2025. doi: 10.48550/arXiv.2501.12948. Available: http://arxiv.org/abs/2501.12948
[27]
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” May 24, 2019. doi: 10.48550/arXiv.1810.04805. Available: http://arxiv.org/abs/1810.04805
[28]
O. Khattab et al., “DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines,” Oct. 05, 2023. doi: 10.48550/arXiv.2310.03714. Available: http://arxiv.org/abs/2310.03714
[29]
M. Douze et al., “The Faiss library,” Feb. 11, 2025. doi: 10.48550/arXiv.2401.08281. Available: http://arxiv.org/abs/2401.08281
[30]
S. I. Feldman, “Make — a program for maintaining computer programs,” Software: Practice and Experience, vol. 9, no. 4, pp. 255–265, 1979, doi: 10.1002/spe.4380090402. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.4380090402
[31]
D. A. Wheeler, “Flawfinder Home Page.” Available: https://dwheeler.com/flawfinder/
[32]
O. I. Franksen, “Babbage and cryptography. Or, the mystery of Admiral Beaufort’s cipher,” Mathematics and Computers in Simulation, vol. 35, no. 4, pp. 327–367, 1993, Available: https://www.sciencedirect.com/science/article/pii/037847549390063Z
[33]
D. Babić et al., “FUDGE: Fuzz driver generation at scale,” in Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Tallinn Estonia: ACM, Aug. 2019, pp. 975–985. doi: 10.1145/3338906.3340456. Available: https://dl.acm.org/doi/10.1145/3338906.3340456
[34]
Open Source Security Foundation (OpenSSF), “Ossf/fuzz-introspector.” Open Source Security Foundation (OpenSSF), Jun. 30, 2025. Available: https://github.com/ossf/fuzz-introspector
[35]
K. Ispoglou, D. Austin, V. Mohan, and M. Payer, “FuzzGen: Automatic fuzzer generation,” in 29th USENIX Security Symposium (USENIX Security 20), 2020, pp. 2271–2287. Available: https://www.usenix.org/conference/usenixsecurity20/presentation/ispoglou
[36]
Y. Deng, C. S. Xia, C. Yang, S. D. Zhang, S. Yang, and L. Zhang, “Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT,” Apr. 04, 2023. doi: 10.48550/arXiv.2304.02014. Available: http://arxiv.org/abs/2304.02014
[37]
Google, “Google/fuzztest.” Google, Jul. 10, 2025. Available: https://github.com/google/fuzztest
[38]
D. Ganguly, S. Iyengar, V. Chaudhary, and S. Kalyanaraman, “Proof of Thought : Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning,” Sep. 25, 2024. doi: 10.48550/arXiv.2409.17270. Available: http://arxiv.org/abs/2409.17270
[39]
A. d’Avila Garcez and L. C. Lamb, “Neurosymbolic AI: The 3rd Wave,” Dec. 16, 2020. doi: 10.48550/arXiv.2012.05876. Available: http://arxiv.org/abs/2012.05876
[40]
M. Gaur and A. Sheth, “Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety,” Dec. 05, 2023. doi: 10.48550/arXiv.2312.06798. Available: http://arxiv.org/abs/2312.06798
[41]
Google, “Google Gemini,” 2025. Available: https://gemini.google.com
[42]
Microsoft, “GitHub Copilot · Your AI pair programmer,” 2025. Available: https://github.com/features/copilot
[43]
D. Giannone, “Demystifying AI Agents: ReAct-Style Agents vs Agentic Workflows,” Feb. 09, 2025. Available: https://medium.com/@DanGiannone/demystifying-ai-agents-react-style-agents-vs-agentic-workflows-cedca7e26471
[44]
[45]
GitHub Docs, “About GitHub-hosted runners,” 2025. Available: https://docs.github.com/en/actions/concepts/runners/about-github-hosted-runners
[46]
A. Grattafiori et al., “The Llama 3 Herd of Models,” Nov. 23, 2024. doi: 10.48550/arXiv.2407.21783. Available: http://arxiv.org/abs/2407.21783
[47]
H. Green and T. Avgerinos, “GraphFuzz: Library API fuzzing with lifetime-aware dataflow graphs,” in Proceedings of the 44th International Conference on Software Engineering, Pittsburgh Pennsylvania: ACM, May 2022, pp. 1070–1081. doi: 10.1145/3510003.3510228. Available: https://dl.acm.org/doi/10.1145/3510003.3510228
[48]
T. He, “Sighingnow/libclang.” Jul. 03, 2025. Available: https://github.com/sighingnow/libclang
[49]
Blackduck, Inc., “Heartbleed Bug,” Mar. 07, 2025. Available: https://heartbleed.com/
[50]
CVE Program, “CVE - CVE-2014-0160,” 2014. Available: https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2014-0160
[51]
G. J. Holzmann, “The Power of 10: Rules for Developing Safety-Critical Code,” Jun. 2006, Available: https://web.eecs.umich.edu/~imarkov/10rules.pdf
[52]
Google, “Google/honggfuzz.” Google, Jul. 10, 2025. Available: https://github.com/google/honggfuzz
[53]
L. Huang et al., “A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions,” ACM Trans. Inf. Syst., vol. 43, no. 2, pp. 1–55, Mar. 2025, doi: 10.1145/3703155. Available: http://arxiv.org/abs/2311.05232
[54]
Z. Li, S. Dutta, and M. Naik, “IRIS: LLM-Assisted Static Analysis for Detecting Security Vulnerabilities,” Apr. 06, 2025. doi: 10.48550/arXiv.2405.17238. Available: http://arxiv.org/abs/2405.17238
[55]
Y. Jiang et al., “When Fuzzing Meets LLMs: Challenges and Opportunities,” in Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering, in ACM Conferences. 2024, pp. 492–496. doi: 10.1145/3663529.3663784. Available: https://dl.acm.org/doi/abs/10.1145/3663529.3663784
[56]
J. Kaddour, J. Harris, M. Mozes, H. Bradley, R. Raileanu, and R. McHardy, “Challenges and Applications of Large Language Models,” Jul. 19, 2023. doi: 10.48550/arXiv.2307.10169. Available: http://arxiv.org/abs/2307.10169
[57]
D. Kahneman, Thinking, fast and slow, 1st ed. New York: Farrar, Straus and Giroux, 2011.
[58]
H. Kautz, “The Third AI Summer,” presented at the 34th Annual Meeting of the Association for the Advancement of Artificial Intelligence, Feb. 10, 2020. Available: https://www.youtube.com/watch?v=_cQITY0SPiw
[59]
B. W. Kernighan and D. M. Ritchie, The C programming language. in Prentice-Hall software series. Englewood Cliffs, N.J: Prentice-Hall, 1978.
[60]
S. Kim and S. Lee, “Performance Comparison of Prompt Engineering and Fine-Tuning Approaches for Fuzz Driver Generation Using Large Language Models,” in Innovative Mobile and Internet Services in Ubiquitous Computing, L. Barolli, H.-C. Chen, and K. Yim, Eds., Cham: Springer Nature Switzerland, 2025, pp. 111–120. doi: 10.1007/978-3-031-96093-2_12
[61]
C. Cadar, D. Dunbar, and D. Engler, “KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs,” presented at the USENIX Symposium on Operating Systems Design and Implementation, Dec. 2008. Available: https://www.semanticscholar.org/paper/KLEE%3A-Unassisted-and-Automatic-Generation-of-Tests-Cadar-Dunbar/0b93657965e506dfbd56fbc1c1d4b9666b1d01c8
[62]
N. Kosmyna et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” Jun. 10, 2025. doi: 10.48550/arXiv.2506.08872. Available: http://arxiv.org/abs/2506.08872
[63]
H. Chase, “LangChain.” Oct. 2022. Available: https://github.com/langchain-ai/langchain
[64]
H.-P. H. Lee et al., “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” 2025, Available: https://hankhplee.com/papers/genai_critical_thinking.pdf
[65]
P. Lewis et al., “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” Apr. 12, 2021. doi: 10.48550/arXiv.2005.11401. Available: http://arxiv.org/abs/2005.11401
[66]
H. Li, “Language models: Past, present, and future,” Commun. ACM, vol. 65, no. 7, pp. 56–63, Jun. 2022, doi: 10.1145/3490443. Available: https://dl.acm.org/doi/10.1145/3490443
[67]
LLVM Project, “libFuzzer – a library for coverage-guided fuzz testing. — LLVM 21.0.0git documentation,” 2025. Available: https://llvm.org/docs/LibFuzzer.html
[68]
D. Liu, J. Metzman, O. Chang, and Google Open Source Security Team, “AI-Powered Fuzzing: Breaking the Bug Hunting Barrier,” Aug. 16, 2023. Available: https://security.googleblog.com/2023/08/ai-powered-fuzzing-breaking-bug-hunting.html
[69]
J. Liu, “LlamaIndex.” Nov. 2022. doi: 10.5281/zenodo.1234. Available: https://github.com/jerryjliu/llama_index
[70]
LLVM Project, “The LLVM Compiler Infrastructure Project,” 2025. Available: https://llvm.org/
[71]
V. J. M. Manes et al., “The Art, Science, and Engineering of Fuzzing: A Survey,” Apr. 07, 2019. doi: 10.48550/arXiv.1812.00140. Available: http://arxiv.org/abs/1812.00140
[72]
E. Martin, “Ninja-build/ninja.” ninja-build, Jul. 14, 2025. Available: https://github.com/ninja-build/ninja
[73]
A. Mastropaolo and D. Poshyvanyk, “A Path Less Traveled: Reimagining Software Engineering Automation via a Neurosymbolic Paradigm,” May 04, 2025. doi: 10.48550/arXiv.2505.02275. Available: http://arxiv.org/abs/2505.02275
[74]
T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient Estimation of Word Representations in Vector Space,” Sep. 06, 2013. doi: 10.48550/arXiv.1301.3781. Available: http://arxiv.org/abs/1301.3781
[75]
B. P. Miller, L. Fredriksen, and B. So, “An empirical study of the reliability of UNIX utilities,” Commun. ACM, vol. 33, no. 12, pp. 32–44, Dec. 1990, doi: 10.1145/96267.96279. Available: https://dl.acm.org/doi/10.1145/96267.96279
[76]
E. Nijkamp, H. Hayashi, C. Xiong, S. Savarese, and Y. Zhou, “CodeGen2: Lessons for training LLMs on programming and natural languages,” ICLR, 2023.
[77]
E. Nijkamp et al., “CodeGen: An open large language model for code with multi-turn program synthesis,” ICLR, 2023.
[78]
OpenAI et al., “GPT-4 Technical Report,” Mar. 04, 2024. doi: 10.48550/arXiv.2303.08774. Available: http://arxiv.org/abs/2303.08774
[79]
OpenAI, “Introducing GPT-4.1 in the API,” Apr. 14, 2025. Available: https://openai.com/index/gpt-4-1/
[80]
OpenAI Docs, “GPT-4.1 mini - OpenAI API,” 2025. Available: https://platform.openai.com
[81]
OpenAI Docs, “Text-embedding-3-small - OpenAI API,” 2025. Available: https://platform.openai.com
[82]
OpenAI Docs, “Model optimization - OpenAI API,” 2025. Available: https://platform.openai.com
[83]
A. Arya, O. Chang, J. Metzman, K. Serebryany, and D. Liu, “OSS-Fuzz.” Apr. 08, 2025. Available: https://github.com/google/oss-fuzz
[84]
D. Liu, O. Chang, J. Metzman, M. Sablotny, and M. Maruseac, “OSS-Fuzz-gen: Automated fuzz target generation.” May 2024. Available: https://github.com/google/oss-fuzz-gen
[85]
OSS-Fuzz Maintainers, “Introducing LLM-based harness synthesis for unfuzzed projects,” May 27, 2024. Available: https://blog.oss-fuzz.com/posts/introducing-llm-based-harness-synthesis-for-unfuzzed-projects/
[86]
OSS-Fuzz, “OSS-Fuzz Documentation,” 2025. Available: https://google.github.io/oss-fuzz/
[87]
OWASP Foundation, “Fuzzing.” Available: https://owasp.org/www-community/Fuzzing
[88]
J. Pakkanen, “Mesonbuild/meson.” The Meson Build System, Jul. 14, 2025. Available: https://github.com/mesonbuild/meson
[89]
N. Perry, M. Srivastava, D. Kumar, and D. Boneh, “Do Users Write More Insecure Code with AI Assistants?” Dec. 18, 2023. doi: 10.48550/arXiv.2211.03622. Available: http://arxiv.org/abs/2211.03622
[90]
pip developers, “Pip documentation V25.1.1,” 2025. Available: https://pip.pypa.io/en/stable/
[91]
D. Wang, G. Zhou, L. Chen, D. Li, and Y. Miao, “ProphetFuzz: Fully Automated Prediction and Fuzzing of High-Risk Option Combinations with Only Documentation via Large Language Model,” Sep. 01, 2024. doi: 10.1145/3658644.3690231. Available: http://arxiv.org/abs/2409.00922
[92]
PyTest Dev Team, “Pytest-dev/pytest.” pytest-dev, Jul. 18, 2025. Available: https://github.com/pytest-dev/pytest
[93]
Python Software Foundation, “Python/mypy.” Python, Jul. 18, 2025. Available: https://github.com/python/mypy
[94]
A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” 2018, Available: https://www.mikecaptain.com/resources/pdf/GPT-1.pdf
[95]
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019, Available: https://storage.prod.researchhub.com/uploads/papers/2020/06/01/language-models.pdf
[96]
N. Rathaus and G. Evron, Open source fuzzing tools. Burlington, MA: Syngress Pub, 2007.
[97]
S. Yao et al., “ReAct: Synergizing Reasoning and Acting in Language Models,” Mar. 10, 2023. doi: 10.48550/arXiv.2210.03629. Available: http://arxiv.org/abs/2210.03629
[98]
A. Rebert et al., “Optimizing seed selection for fuzzing,” in Proceedings of the 23rd USENIX conference on Security Symposium, in SEC’14. USA: USENIX Association, Aug. 2014, pp. 861–875.
[99]
D. M. Ritchie, S. C. Johnson, M. E. Lesk, and B. W. Kernighan, “The C programming language,” Bell Sys. Tech. J, vol. 57, no. 6, pp. 1991–2019, 1978, Available: https://www.academia.edu/download/67840358/1978.07_Bell_System_Technical_Journal.pdf#page=85
[100]
Rust Project Developers, “Rust Programming Language,” 2025. Available: https://www.rust-lang.org/
[101]
J. Saarinen, “Further flaws render Shellshock patch ineffective,” Sep. 29, 2014. Available: https://www.itnews.com.au/news/further-flaws-render-shellshock-patch-ineffective-396256
[102]
A. Sarkar and I. Drosos, “Vibe coding: Programming through conversation with artificial intelligence,” Jun. 29, 2025. doi: 10.48550/arXiv.2506.23253. Available: http://arxiv.org/abs/2506.23253
[103]
M. K. Sarker, L. Zhou, A. Eberhart, and P. Hitzler, “Neuro-symbolic artificial intelligence: Current trends,” AIC, vol. 34, no. 3, pp. 197–209, Mar. 2022, doi: 10.3233/aic-210084. Available: https://journals.sagepub.com/doi/full/10.3233/AIC-210084
[104]
N. Sasirekha, A. Edwin Robert, and M. Hemalatha, “Program Slicing Techniques and its Applications,” IJSEA, vol. 2, no. 3, pp. 50–64, Jul. 2011, doi: 10.5121/ijsea.2011.2304. Available: http://www.airccse.org/journal/ijsea/papers/0711ijsea04.pdf
[105]
T. Preston-Werner, “Semantic Versioning 2.0.0.” Available: https://semver.org/
[106]
K. Serebryany, D. Bruening, A. Potapenko, and D. Vyukov, “AddressSanitizer: A fast address sanity checker,” in 2012 USENIX annual technical conference (USENIX ATC 12), 2012, pp. 309–318. Available: https://www.usenix.org/conference/atc12/technical-sessions/presentation/serebryany
[107]
A. Sheth, K. Roy, and M. Gaur, “Neurosymbolic AI - Why, What, and How,” May 01, 2023. doi: 10.48550/arXiv.2305.00813. Available: http://arxiv.org/abs/2305.00813
[108]
W. Shi, Y. Zhang, X. Xing, and J. Xu, “Harnessing Large Language Models for Seed Generation in Greybox Fuzzing,” Nov. 27, 2024. doi: 10.48550/arXiv.2411.18143. Available: http://arxiv.org/abs/2411.18143
[109]
T. Simonite, “This Bot Hunts Software Bugs for the Pentagon,” Wired, Jun. 01, 2020. Available: https://www.wired.com/story/bot-hunts-software-bugs-pentagon/
[110]
Stanford NLP Team, “Signatures - DSPy Documentation,” 2025. Available: https://dspy.ai/learn/programming/signatures/
[111]
Stanford NLP Team, “ReAct - DSPy Documentation,” 2025. Available: https://dspy.ai/api/modules/ReAct/
[112]
Y. Sun, “Automated Generation and Compilation of Fuzz Driver Based on Large Language Models,” in Proceedings of the 2024 9th International Conference on Cyber Security and Information Engineering, in ICCSIE ’24. New York, NY, USA: Association for Computing Machinery, Dec. 2024, pp. 461–468. doi: 10.1145/3689236.3689272. Available: https://doi.org/10.1145/3689236.3689272
[113]
M. Sutton, A. Greene, and P. Amini, Fuzzing: Brute force vulnerability discovery. Upper Saddle River, NJ: Addison-Wesley, 2007.
[114]
A. Takanen, J. DeMott, C. Miller, and A. Kettunen, Fuzzing for software security testing and quality assurance, Second edition. in Information security and privacy library. Boston London Norwood, MA: Artech House, 2018.
[115]
The OpenSSL Project, “Openssl/openssl.” OpenSSL, Jul. 15, 2025. Available: https://github.com/openssl/openssl
[116]
L. Thomason, “Leethomason/tinyxml2.” Jul. 10, 2025. Available: https://github.com/leethomason/tinyxml2
[117]
D. Tilwani, R. Venkataramanan, and A. P. Sheth, “Neurosymbolic AI approach to Attribution in Large Language Models,” Sep. 30, 2024. doi: 10.48550/arXiv.2410.03726. Available: http://arxiv.org/abs/2410.03726
[118]
Y. Deng, C. S. Xia, H. Peng, C. Yang, and L. Zhang, “Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models,” in Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, in ISSTA 2023. New York, NY, USA: Association for Computing Machinery, Jul. 2023, pp. 423–435. doi: 10.1145/3597926.3598067. Available: https://dl.acm.org/doi/10.1145/3597926.3598067
[119]
L. Torvalds, “Git.” Apr. 07, 2005. Available: https://git-scm.com/
[120]
Unicorn Engine, “Unicorn-engine/unicorn.” Unicorn Engine, Jul. 15, 2025. Available: https://github.com/unicorn-engine/unicorn
[121]
B. Jeong et al., “UTopia: Automatic Generation of Fuzz Driver using Unit Tests,” in 2023 IEEE Symposium on Security and Privacy (SP), May 2023, pp. 2676–2692. doi: 10.1109/SP46215.2023.10179394. Available: https://ieeexplore.ieee.org/abstract/document/10179394
[122]
A. Vaswani et al., “Attention Is All You Need,” Aug. 01, 2023. doi: 10.48550/arXiv.1706.03762. Available: http://arxiv.org/abs/1706.03762
[123]
A. Velasco, A. Garryyeva, D. N. Palacio, A. Mastropaolo, and D. Poshyvanyk, “Toward Neurosymbolic Program Comprehension,” Feb. 03, 2025. doi: 10.48550/arXiv.2502.01806. Available: http://arxiv.org/abs/2502.01806
[124]
Python Software Foundation, “Venv — Creation of virtual environments,” Jul. 17, 2025. Available: https://docs.python.org/3/library/venv.html
[125]
Z. Wang, Z. Chu, T. V. Doan, S. Ni, M. Yang, and W. Zhang, “History, development, and principles of large language models: An introductory survey,” AI Ethics, vol. 5, no. 3, pp. 1955–1971, Jun. 2025, doi: 10.1007/s43681-024-00583-7. Available: https://doi.org/10.1007/s43681-024-00583-7
[126]
D. Wheeler, “How to Prevent the next Heartbleed,” 2014. Available: https://dwheeler.com/essays/heartbleed.html
[127]
M. Woolf, “The Problem With LangChain,” Jul. 14, 2023. Available: https://minimaxir.com/2023/07/langchain-problem/
[128]
Woyera, “6 Reasons why Langchain Sucks,” Sep. 08, 2023. Available: https://medium.com/@woyera/6-reasons-why-langchain-sucks-b6c99c98efbe
[129]
H. Xu et al., “CKGFuzzer: LLM-Based Fuzz Driver Generation Enhanced By Code Knowledge Graph,” Dec. 20, 2024. doi: 10.48550/arXiv.2411.11532. Available: http://arxiv.org/abs/2411.11532
[130]
S. Yao et al., “Tree of Thoughts: Deliberate Problem Solving with Large Language Models,” Dec. 03, 2023. doi: 10.48550/arXiv.2305.10601. Available: http://arxiv.org/abs/2305.10601
[131]
M. Zhang, J. Liu, F. Ma, H. Zhang, and Y. Jiang, “IntelliGen: Automatic Driver Synthesis for Fuzz Testing,” Mar. 01, 2021. doi: 10.48550/arXiv.2103.00862. Available: http://arxiv.org/abs/2103.00862
[132]
S. Zhao, Y. Yang, Z. Wang, Z. He, L. K. Qiu, and L. Qiu, “Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely,” Sep. 23, 2024. doi: 10.48550/arXiv.2409.14924. Available: http://arxiv.org/abs/2409.14924