
AI and Machine Learning

Bluefin's developer experience fully supports local AI development workflows. GPU acceleration for both Nvidia and AMD is included out of the box and usually does not require any extra setup.

Methodology

Bluefin was brought to life by Jacob Schnurr and Andy Frazer. The artwork is free for you to use. It represents the delicate balance of life and is there to remind us that open source is an ecosystem that needs to be sustained. The software we make has an effect on the world: Bluefin might be put together by technology nerds, but it took two humans to show us the importance of the creativity of the human mind.

AI is an extension of cloud native

Bluefin's AI focus is on providing a generic, user-controlled API endpoint at the operating system level. Just as Bluefin ships podman, we feel that ramalama and other great open source tools can serve many useful purposes. Bluefin's AI integration will always be local-first and user controlled.

AI Lab with Podman Desktop

The AI Lab extension can be installed inside the included Podman Desktop to provide a graphical interface for managing local models:


Ramalama

Install Ramalama via `brew install ramalama`. Ramalama manages local models and is the preferred default experience; it's for people who work with local models frequently and need advanced features. It can pull models from Hugging Face, Ollama, and any container registry. By default it pulls from ollama.com; check the Ramalama documentation for more information.

Ramalama's command line experience is similar to Podman's. Bluefin sets `rl` as an alias for `ramalama` for brevity. Examples include:

```bash
rl pull llama3.2:latest
rl run llama3.2
rl run deepseek-r1
```
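
Ramalama can also pull from other sources using transport prefixes. A quick sketch, assuming the huggingface://, ollama://, and oci:// transports described in the Ramalama documentation (the model names and registry paths below are illustrative, not real artifacts):

```bash
# Pull a GGUF file from Hugging Face (repo and file path are illustrative)
rl pull huggingface://example-org/example-model-GGUF/example-model.Q4_K_M.gguf

# Pull from the Ollama registry (the default transport)
rl pull ollama://llama3.2:latest

# Pull a model packaged as an OCI artifact from a container registry (path is illustrative)
rl pull oci://quay.io/example/example-model:latest
```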

You can also serve the models locally:

```bash
rl serve deepseek-r1
```

Then go to http://127.0.0.1:8080 in your browser.
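
If you prefer to check the server from a terminal, ramalama serve speaks an OpenAI-style REST API; a minimal smoke test, assuming the deepseek-r1 server above is still running on port 8080:

```bash
# List the models the local server exposes (standard OpenAI-compatible route)
curl http://127.0.0.1:8080/v1/models
```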

Ramalama automatically pulls in whatever your host needs to run the workload. The images are stored in the same container storage as your other containers, which allows models and other podman images to be managed centrally:

```
❯ podman images
REPOSITORY             TAG     IMAGE ID      CREATED     SIZE
quay.io/ramalama/rocm  latest  8875feffdb87  5 days ago  6.92 GB
```
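
Because the models and runtime images live in regular container storage, the usual podman commands apply to them; for example, removing the runtime image from the listing above:

```bash
# Remove the ramalama runtime image like any other image in container storage
podman rmi quay.io/ramalama/rocm:latest
```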

Integrating with Existing Tools

ramalama serve exposes an OpenAI-compatible endpoint at http://0.0.0.0:8080. You can use this to configure tools that do not support ramalama directly (a request sketch follows the list below):

- Newelle
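
As a sketch of the wiring, any OpenAI-style client only needs the base URL and a model name. The example below assumes the deepseek-r1 server started earlier and the standard /v1/chat/completions route:

```bash
# Send a chat completion request to the locally served model
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-r1",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```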

Alpaca Graphical Client

For light chatbot usage, we recommend installing Alpaca to manage and chat with your LLM models from within a native desktop application. Alpaca supports Nvidia and AMD[^1] acceleration natively.

Only a keystroke away

Bluefin binds Ctrl-Alt-Backspace as a quicklaunch for Alpaca automatically after you install it!

Configuration

Alpaca

(screenshot: Alpaca configuration)