
Part 1: Setting up your environment

This post is the first of five parts of: A simple guide to local LLM fine-tuning on a Mac with MLX.


If you’re starting on this journey from scratch, you’ll need to set a few things up on your Mac before you can get going. If you already have Python and pip ready to go, you can skip to step two.

Step 1: Install Homebrew

Homebrew is a package manager that will help you manage and install software that you need (like Python). I like it because it provides an easy way to switch versions and uninstall packages.

To install Homebrew, open the Terminal app (or install iTerm) and paste the following:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Bash command to install Homebrew

Go ahead and install Xcode command line tools if it prompts you during the installation.
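Once the installer finishes, it may prompt you to add Homebrew to your PATH (on Apple Silicon Macs it installs under /opt/homebrew; Intel Macs use /usr/local, which is usually already on your PATH). A quick sanity check looks something like this:

```shell
# Add Homebrew to your PATH for the current session
# (Apple Silicon path shown; skip this on Intel Macs)
eval "$(/opt/homebrew/bin/brew shellenv)"

# Confirm the install worked and check for common problems
brew --version
brew doctor
```

The installer also prints the exact `shellenv` line to add to your `~/.zprofile` so it persists across sessions.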

Step 2: Install an upgraded version of Python and Pip

macOS has shipped with Python 3 since macOS 12 Monterey. I’ve upgraded to 3.11.x in order to get the latest fixes and security updates.

I would stick to one version behind the latest release; otherwise you may run into libraries that aren’t compatible yet. So if 3.12 is the latest, pick the newest of the 3.11 versions.

To install the latest patch release of Python 3.11, use the following command:

brew install python@3.11
Command to install Python 3.11 using Homebrew

Once that has completed, you’ll have Python 3.11.x and its package manager, pip, installed.
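Homebrew installs these as versioned binaries, so you can confirm they’re available before aliasing anything:

```shell
# Both commands are installed by the python@3.11 formula
python3.11 --version
pip3.11 --version
```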

One optional step is to alias the python3.11 and pip3.11 commands to python and pip, to make life a bit easier when following examples. You’ll need to update these aliases when you change versions.

You can alias them using the following command (if you installed a newer version, use the newer version number):

echo '
alias python=python3.11
alias pip=pip3.11
' >> ~/.zshrc
Command to add python and pip aliases to your zsh config
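The aliases only take effect in new shells, so reload your config (or open a new terminal window) and check that the short names now point at the Homebrew install:

```shell
# Pick up the new aliases in the current shell
source ~/.zshrc

# Both should now report the 3.11.x versions installed above
python --version
pip --version
```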

That’s it for the environment. You’re now ready to dive into Part 2: Building your training data for fine-tuning.


If you have any feedback, questions, or suggestions for this part of the guide please drop them on the Twitter/X thread, or on the Mastodon thread.

