What is Umami? (And Why I’m Writing This)
Umami is an open-source, self-hosted web analytics platform. It tracks page views, referrers, device types, and other basics—without cookies or third-party scripts. It’s a privacy-first alternative to Google Analytics that’s easy to run and doesn’t need a lot of infrastructure. If you’re looking for a broader critique of Google’s tracking ecosystem and the case for switching to tools like this, I wrote about that here.
This post is a technical follow-up focused specifically on running Umami with MariaDB in a local development setup.
Why MariaDB?
I run MariaDB in development and production, so it made sense to use it for Umami as well. It’s lightweight, fully open source, and a solid fit for small deployments, especially when you already have it up and running.
According to Umami’s GitHub page, MariaDB is supported. I can vouch for that, as can many others, but it doesn’t come without some caveats:
- There are installation and upgrade issues.
- Future MariaDB support is uncertain.
- The developers now recommend using PostgreSQL (more on that later).
Still, if you’re running MariaDB already, or just prefer it, you can absolutely get Umami working. Let’s walk through the setup together.
What We’ll Need Before Continuing
Before we get into installing Umami itself, here’s what we’ll need to have set up and ready to go.
Node.js
I used v22.12.0 with NVM. Version 22 is the Active LTS release as of writing. You can check out other versions here. To install with NVM:
nvm install 22
nvm use 22
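To confirm nvm switched versions correctly, check what’s active (it should report something in the v22.x line):
node -v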
pnpm
Umami’s docs use Yarn, but I prefer pnpm. It works especially well for monorepos like this one.
npm install -g pnpm
MariaDB
I used Homebrew to install MariaDB on macOS, but any install method should work as long as it gives us a working local instance.
brew install mariadb
brew services start mariadb
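To confirm the service actually started before moving on:
brew services list
mariadb --version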
Create the database and user
We’ll need a database in MariaDB for Umami to connect to. First, we log in to the MariaDB CLI:
mariadb -u root
Then we create the database and a user account with access to it:
CREATE DATABASE umami;
CREATE USER 'umami_user'@'localhost' IDENTIFIED BY 'your_strong_password';
GRANT ALL PRIVILEGES ON umami.* TO 'umami_user'@'localhost';
FLUSH PRIVILEGES;
EXIT;
You can name the database whatever you like. Also make sure you replace umami_user and your_strong_password with your preferred values.
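Before continuing, it’s worth confirming the new account can actually reach the database:
mariadb -u umami_user -p umami -e "SELECT 1;"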
Install Umami
The Umami installation docs are straightforward, though in this case we’ve swapped out Yarn for pnpm.
We start by cloning Umami to wherever we want it on our local system (I’m using ~/projects/umami):
git clone https://github.com/umami-software/umami.git ~/projects/umami
Next, we configure Umami by setting the DATABASE_URL environment variable. We can also define other optional variables, such as APP_SECRET, which isn’t required by the installer but is worth setting. (See the full list of environment variables here.)
Create a .env file in ~/projects/umami and add the following:
DATABASE_URL="mysql://umami_user:your_strong_password@localhost:3306/umami"
APP_SECRET="generate_a_secure_random_string"
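For APP_SECRET, any long random string will do. One easy way to generate one, assuming OpenSSL is available (it ships with macOS):
openssl rand -base64 32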
Now we’re ready to install Umami’s dependencies and build the project. From the Umami project root, run these commands:
pnpm install
pnpm build
And this is where I encountered the caveat I mentioned earlier.
Where Things Break with MariaDB
With your .env file set and your database ready, you run pnpm build expecting Umami to initialise the database schema. Instead, you hit this:
Database error: Cannot load from mysql.proc. The table is probably corrupted
This message is misleading. The real issue isn’t a corrupted system table; it’s that one of the included migration files uses a MySQL-only function that MariaDB doesn’t support:
BIN_TO_UUID(session_id)
This shows up in the 05_add_visit_id migration and causes the build to fail. Prisma doesn’t check whether functions like BIN_TO_UUID exist in your database engine; it just tries to run the SQL. When that fails, you get this vague and unhelpful mysql.proc error.
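If you want to verify that the missing function is the real culprit, you can call it directly from the MariaDB CLI. Depending on your MariaDB version, this fails with either a plain unknown-function error or the same misleading mysql.proc message:
SELECT BIN_TO_UUID(X'00112233445566778899AABBCCDDEEFF');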
A Quick Word on Migration Files
Umami uses Prisma migrations to evolve its database schema over time. Every time the schema changes (new tables, renamed fields, etc.), a new migration file is created. These are stored in the repo and automatically applied during the build process via a script called check-db.js.
The point of this is to allow existing Umami instances to upgrade cleanly between versions. Even though you’re doing a fresh install, the full migration history is still bundled in the repo for compatibility with upgrades from earlier versions.
In other words:
- Migrations are applied in sequence, one after the other, starting from the earliest file in the repo.
- Each file represents a snapshot of what changed in the schema at that point in development.
- Even for fresh installs, all migration files are included and expected to run in order to reach the current schema.
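Concretely, the migrations sit alongside the Prisma schema, one folder per migration, roughly like this (names other than 05_add_visit_id are illustrative):
prisma/migrations/
├── 01_init/
│   └── migration.sql
├── ...
├── 05_add_visit_id/
│   └── migration.sql
└── migration_lock.toml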
So what are our options?
Two Approaches, Depending on Your Setup
The best fix depends on whether you’re installing Umami for the first time or upgrading an existing instance.
Option 1: For Existing Databases
If you already have data in your Umami instance and want to skip the failing migration, you can patch the schema manually. This approach preserves your data and completes what the broken 05_add_visit_id migration was supposed to do.
Start by marking the migration as applied:
npx prisma migrate resolve --applied "05_add_visit_id"
Then apply the changes manually. This script adds the visit_id column, assigns values to existing records, and creates the necessary indexes. (Source: GitHub Issue #2645)
-- AlterTable
ALTER TABLE website_event ADD COLUMN visit_id VARCHAR(36) NULL;
-- Update with UUIDv4
UPDATE website_event we
JOIN (
    SELECT DISTINCT
        s.session_id,
        s.visit_time,
        LOWER(CONCAT(
            HEX(SUBSTR(MD5(RAND()), 1, 4)), '-',
            HEX(SUBSTR(MD5(RAND()), 1, 2)), '-4',
            SUBSTR(HEX(SUBSTR(MD5(RAND()), 1, 2)), 2, 3), '-',
            CONCAT(HEX(FLOOR(ASCII(SUBSTR(MD5(RAND()), 1, 1)) / 64) + 8), SUBSTR(HEX(SUBSTR(MD5(RAND()), 1, 2)), 2, 3)), '-',
            HEX(SUBSTR(MD5(RAND()), 1, 6))
        )) AS uuid
    FROM (
        SELECT DISTINCT
            session_id,
            DATE_FORMAT(created_at, '%Y-%m-%d %H:00:00') AS visit_time
        FROM website_event
    ) s
) a
    ON we.session_id = a.session_id
    AND DATE_FORMAT(we.created_at, '%Y-%m-%d %H:00:00') = a.visit_time
SET we.visit_id = a.uuid
WHERE we.visit_id IS NULL;
-- ModifyColumn
ALTER TABLE website_event MODIFY visit_id VARCHAR(36) NOT NULL;
-- CreateIndex
CREATE INDEX website_event_visit_id_idx ON website_event(visit_id);
CREATE INDEX website_event_website_id_visit_id_created_at_idx ON website_event(website_id, visit_id, created_at);
This should leave your database in a valid state without needing to downgrade or wipe anything.
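To sanity-check the result, confirm that every event row now carries a visit_id and that the new indexes were created:
SELECT COUNT(*) AS total_events, COUNT(DISTINCT visit_id) AS distinct_visits FROM website_event;
SHOW INDEX FROM website_event;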
Option 2: For Fresh Installs
For a fresh installation, a better approach is to bypass Prisma’s migration system altogether and apply the full schema ourselves. This avoids the broken migration and gives us a clean, working schema based on the current state of the project.
Step 1: Remove bundled Prisma migrations
We don’t need Umami’s migration history for a fresh install, so we remove it to ensure Prisma won’t try to apply those migrations later.
rm -rf prisma/migrations
Step 2: Generate the full schema as SQL
Here we use prisma migrate diff to create a raw SQL file based on the current schema.prisma. This represents the entire schema in its latest form, as a one-shot deployment.
DATABASE_URL="mysql://umami_user:your_strong_password@localhost:3306/umami" \
npx prisma migrate diff --from-empty --to-schema-datamodel prisma/schema.prisma --script > scripts/umami_schema.sql
This command tells Prisma to compare an empty database (--from-empty) with the current schema defined in schema.prisma (--to-schema-datamodel). Instead of applying changes directly, it generates the SQL needed to bring a blank database up to date with that schema. The --script flag tells Prisma to output raw SQL instead of running a migration, and we redirect the result into a file for manual review.
We also pass the DATABASE_URL inline to tell Prisma which database to connect to. And since Prisma is installed locally in the Umami project, we use npx to run it without needing a global install.
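Before applying the file, give it a quick skim to confirm it’s plain DDL (CREATE TABLE and CREATE INDEX statements) with nothing engine-specific in it:
head -n 20 scripts/umami_schema.sql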
Step 3: Apply it manually
Now we apply the generated SQL to the database manually. This initializes all the tables and relationships that Umami expects.
mariadb -u umami_user -p umami < scripts/umami_schema.sql
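If that ran without errors, the Umami tables should now be in place:
mariadb -u umami_user -p umami -e "SHOW TABLES;"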
Step 4: Patch the build script
As I said before, the Umami build process runs a script called check-db.js, which attempts to apply migrations using prisma migrate deploy. Since we’ve already applied the schema manually, we want to prevent this step from running.
One option is to remove check-db from the build command in package.json:
- "build": "npm-run-all check-env build-db check-db build-tracker build-geo build-app",
+ "build": "npm-run-all check-env build-db build-tracker build-geo build-app",
Alternatively, you can keep check-db in place and use the SKIP_DB_MIGRATION environment variable to tell it not to apply migrations. This variable was added in Umami 2.17:
SKIP_DB_MIGRATION=true pnpm build
Either approach will prevent Prisma from trying to reapply migrations we’ve already handled manually.
Step 5: Build and run
Once the schema is in place and the build script is patched, we can build and start Umami normally:
pnpm build
pnpm start
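By default, Umami comes up at http://localhost:3000. A quick way to confirm it’s responding:
curl -I http://localhost:3000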
The install should create the following default login:
Username: admin
Password: umami
Where MariaDB Support Might Be Headed
Although MariaDB is still listed as supported, it’s unclear how much longer that will be the case. The issues we covered here are early signs that the project is shifting away from maintaining compatibility with MySQL-based databases.
In fact, one of the Umami maintainers recently commented on this in a GitHub discussion:
We may be moving away from MySQL entirely. It’s a lot of work supporting multiple databases. Have you run Postgres before?
If you’re running Umami with MariaDB today, it can still work well. But future updates may require more hands-on fixes or might not work at all without switching to PostgreSQL.