Authors:
Nane Kratzke¹ and André Drews²
Affiliations:
¹ Department of Electrical Engineering and Computer Science, Lübeck University of Applied Sciences, Germany
² Expert Group Artificial Intelligence in Applications, Lübeck University of Applied Sciences, Germany
Keyword(s):
Prompt Engineering, Large Language Model, Cloud-Native, Container, Orchestration, Automation, Intelligent Service Management, Kubernetes, LLM, GPT-3.5, GPT-4, Llama2, Mistral, DevOps.
Abstract:
Background: The intricate architecture of container orchestration systems like Kubernetes relies on declarative manifest files that serve as the blueprints for orchestration. Managing these manifest files, however, presents complex challenges that require significant DevOps expertise. Methodology: This position paper explores using Large Language Models (LLMs) to automate the generation of Kubernetes manifest files from natural language specifications through prompt engineering, aiming to simplify Kubernetes management. The study evaluates these LLMs using Zero-Shot, Few-Shot, and Prompt-Chaining techniques against DevOps requirements and their ability to support fully automated deployment pipelines. Results: LLMs can produce Kubernetes manifests with varying degrees of manual intervention, with GPT-4 and GPT-3.5 showing potential for fully automated deployments. Interestingly, smaller models sometimes outperform larger ones, questioning the assumption that bigger is always better. Conclusion: The study emphasizes that prompt engineering is critical to optimizing LLM outputs for Kubernetes. It suggests further research into prompt strategies and LLM comparisons, and highlights a promising research direction for integrating LLMs into automated deployment pipelines.
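To illustrate the approach described in the abstract, the following is a minimal Python sketch of a Zero-Shot prompt that asks an LLM to turn a natural-language specification into a Kubernetes manifest. The model name, prompt wording, and the YAML sanity check are illustrative assumptions and do not reproduce the paper's actual prompts or evaluation pipeline.

# Zero-Shot sketch: generate a Kubernetes manifest from a natural-language
# specification. Model, prompt, and validation step are illustrative
# assumptions, not the setup used in the paper.
from openai import OpenAI
import yaml  # PyYAML, used here only as a cheap syntactic sanity check

client = OpenAI()  # reads OPENAI_API_KEY from the environment

spec = (
    "Deploy an nginx web server with 3 replicas and expose it "
    "inside the cluster on port 80."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a Kubernetes expert. Reply only with valid "
                    "YAML manifests, no explanations."},
        {"role": "user", "content": spec},
    ],
)

manifest = response.choices[0].message.content

# A fully automated pipeline would validate the output and apply it to the
# cluster (e.g. via kubectl); here we only check that it parses as YAML.
for doc in yaml.safe_load_all(manifest):
    if isinstance(doc, dict):
        print(doc.get("kind", "unknown"))

A Few-Shot variant would prepend example specification/manifest pairs to the messages, and a Prompt-Chaining variant would feed the generated manifest into a follow-up prompt for review or repair before deployment.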