At DockerCon in Los Angeles, Docker unveiled two new tools: GenAI Stack and Docker AI. Both are the latest additions to the company's broader AI initiative, and both are designed to help developers seamlessly integrate pre-built AI components into their projects.
GenAI Stack is a collection of applications that integrates prepared large language models (LLMs), vector and graph databases, and the LangChain framework. The package also ships with a variety of generative use cases and is built on trusted open source content from Docker Hub.
Among the elements included in the GenAI Stack are prebuilt LLMs such as Code Llama, Llama 2, GPT-3.5/4, and Mistral, alongside Ollama, which is especially relevant for developers who rely on local open source LLMs. In addition, the package includes Neo4j as a graph and vector database for the models' long-term memory, as well as LangChain orchestration and other supporting tools.
These tools bring a number of useful features for developers. They can create vector indexes for question-answer systems, efficiently summarize data, and visualize search results as knowledge graphs. The generated answers can take various formats, including lists, discussions, GitHub issues, PDF documents, and even poems.
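The vector-index step behind that question-answer flow can be sketched in plain Python. This is a toy illustration, not the GenAI Stack's actual code: in the stack, embeddings come from an LLM and are stored in Neo4j's vector index, whereas here the `embed` function is a hypothetical stand-in that fakes embeddings with word counts so the example runs standalone.

```python
# Toy sketch of a vector index for question answering. All names here
# (embed, cosine, VectorIndex) are illustrative, not GenAI Stack APIs.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Hypothetical stand-in for an LLM embedding: a bag-of-words vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class VectorIndex:
    """Minimal in-memory index: store documents, retrieve the most similar."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def query(self, question: str, k: int = 1) -> list[str]:
        q = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


index = VectorIndex()
index.add("Neo4j stores graph data and vector embeddings")
index.add("LangChain orchestrates calls to large language models")
index.add("Ollama runs open source LLMs locally")

print(index.query("which tool runs LLMs locally")[0])
# → Ollama runs open source LLMs locally
```

In the real stack, an LLM then composes the final answer from the retrieved documents; the retrieval step shown here is what the vector index contributes.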
The other featured tool, Docker AI, is Docker's first proprietary AI-powered product. It is designed to give users contextual help when configuring Docker, which can be extremely useful when writing Dockerfiles or debugging configuration issues.
Docker AI draws on the experience and best practices distilled from millions of Docker projects. It acts as a kind of "advisor" that offers contextual recommendations for optimizing system configuration. Docker AI is currently available through an Early Access program, making it the next step in Docker's ongoing AI initiative.