This is what my entire server config looks like:
(laravelSite {
  name = "foo";
  domains = [ "foo.com" "bar.com" ];
  phpPackage = pkgs.php84;
  ssl = true;
  cloudflareOnly = true;
  queue = true;
  sshKeys = [
    "ssh-ed25519 redacted"
    "ssh-ed25519 redacted"
  ];
  extraPackages = [ pkgs.nodejs_24 pkgs.sqlite-interactive ];
})
Note that this is not some PaaS or serverless cloud platform. It’s a regular VPS that I have full control over. To make it host a Laravel site, I only need the bit of code you see above.
I can add as many of these blocks as I want. The site above is actually a rather complex one; really, all you need is 5 lines of code to host a Laravel site on your server:
(laravelSite {
  name = "baz";
  domains = [ "baz.com" ];
  phpPackage = pkgs.php84;
})
The code you see above is written in Nix, the programming language used to configure NixOS. The two main buzzwords you should think of when you hear NixOS are declarative and reproducible. Meaning, the entire system is configured in a single file and if you copy that to another server, you will precisely reproduce the original system.
You can think of it as a Dockerfile but for an actual operating system, and actually reproducible (Dockerfiles depend on a nebulous FROM ... which changes and often breaks things).
That said, I personally do not need perfect reproducibility for what I’m doing, so I will not be focusing on that in this article. I will focus on the declarative aspect much more since that’s the main thing I love about NixOS.
Also, the first thing you’ll learn: a direct consequence of the two properties mentioned above is that you make changes to your system like this:
1. Edit /etc/nixos/configuration.nix or /etc/nixos/flake.nix
2. Run nixos-rebuild switch
You do not use anything like apt install, do not edit files in /etc/nginx, and do not create users by hand. Everything you need to do is done by editing a file and running nixos-rebuild switch.
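For a sense of what such a file contains, here’s a hedged sketch of a minimal configuration.nix (illustrative only; the one generated at install time will contain more):

{ config, pkgs, ... }:
{
  imports = [ ./hardware-configuration.nix ];

  services.openssh.enable = true;                      # run sshd
  environment.systemPackages = [ pkgs.git pkgs.vim ];  # system-wide packages
  system.stateVersion = "24.05";                       # set at install time, then left alone
}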
NixOS is also recoverable, as in, if nixos-rebuild switch breaks your system you can just boot into a previous generation. Each execution of that command creates a new generation, which can be directly switched to or booted into.
Importantly, this article is not a NixOS tutorial. I’m very much a beginner myself, so all I can do is show what’s possible with NixOS. And honestly, as much as NixOS is amazing, it will be painful to get started with. The state of the documentation is not good, and many online resources use syntax or features that people nowadays would tell you not to use, in favor of newer things.

The main such split is that NixOS has an experimental feature called flakes which, for our purposes here, just means slightly different syntax and the presence of lockfiles. Without flakes, you can cleanly roll back to a previous generation, or you can rebuild using an older version of your system configuration, but the result will not be perfectly reproducible since the versions of packages used might differ. Flakes solve that, and most people using Nix nowadays are using flakes, though the learning materials lag a bit behind.

For learning resources I’d recommend:
With that out of the way, here’s what we’ll cover:
- The laravelSite module. This is my own Nix module that configures your system to serve a Laravel app. Walking through that code will show you how Nix works, as well as everything a simple module call can achieve
- Setting up NixOS locally in a VM, so we can test this somewhere
- Setting up a production VPS with push deployments
But before we get to that, a quick rant.
Why this is so great
After writing this section I realized I’ve already expressed this in a better way before, so I’ll just paste this here:
The main reason I really like Nix is that I absolutely despise configuring servers, but do want my stuff to run on my own servers. I don't need autoscaling, I want to add crazy custom stuff at will, I want the control. But not the config.
The thing that sucks about setting up servers by hand is that the "config" is scattered everywhere. It's config files for different services at random locations. It's users, permissions, ownership — so many "OS constructs" that aren't expressed in code. It's systemd services, it's cron, firewall and a million other things.
Guess what I'm getting at is that I really like the idea of "infrastructure as code", but since I don't deal with distributed systems or microservice frankensteins for me it just means "single server's config as code". I want to write it and apply it. I want to be able to back it up. And maybe I'll want it to run on multiple servers in the future if I need rudimentary load balancing.
Ansible addresses this, but just from seeing how it's used (and never having done so myself) it doesn't seem like a tool that sparks joy. It's imperative step by step config, an incremental improvement on a bash script.
Nix is great because it's completely declarative and you can extremely easily change your system config at any point in time by making arbitrarily small or huge changes to the file responsible for how your server is already set up.
The rollbacks and reproducibility are just a cherry on top, giving assurances that if you need to quickly set up a new server, be it because of some issue with the existing one or for scaling, things will pretty much always work as expected.
I strongly prefer Docker to configuring servers because of the things mentioned above - everything is expressed in a bunch of Dockerfiles and docker-compose files in some apps I run in production - but there's STILL a ton of surrounding infrastructure that needs to be set up. Those docker compose projects need systemd services, each new service is a bunch of work to set up, you still get some server config like cronjobs etc. Alternatively you use something like k8s or k3s but that seems like just as much work. Maybe once you get familiar with the tooling it becomes straightforward, but still, those tools are for orchestrating servers, managing individual servers should just be simple.
I mainly work on monoliths, with the occasional microservice with some special logic here and there, and my infra is simple. The tooling should also be simple while still letting me define everything in my editor.
Just look at the examples below, this makes dealing with servers actually enjoyable. Server users, php-fpm pools, nginx vhosts, and databases *all in the same file* in just a few LOC.
The laravelSite module
Let’s go over the laravelSite module to understand what it does and what Nix lets us do.
First, to clarify:
(laravelSite {
  name = "mysite";
  domains = [ "mysite.com" ];
  phpPackage = pkgs.php84;
  ssl = true; # optional, defaults to false, affects *ALL* domains
  extraNginxConfig = "nginx configuration string"; # optional
  sshKeys = [ "array" "of" "public" "ssh" "keys" ]; # optional
  extraPackages = [ pkgs.nodejs_24 ]; # optional
  queue = true; # start a queue worker - defaults to false, optional
  queueArgs = "--tries=3"; # optional, default empty
  generateSshKey = false; # optional, defaults to true
  poolSettings = { # optional - overrides all of our defaults
    "pm.max_children" = 12;
    "php_admin_value[opcache.memory_consumption]" = "512";
    "php_admin_flag[opcache.validate_timestamps]" = true;
  };
  # alternatively:
  extraPoolSettings = { # merged with poolSettings, doesn't override our defaults
    "pm.max_children" = 12;
  };
})
This is a function call. We’re calling the laravelSite function with some { parameters }. NixOS uses a functional domain-specific language, hence the syntax. The fact that it’s an actual language and not just a JSON/YAML config file is what lets us reuse logic in the form of a callable function like this. Also, you’ll soon see some things that are unique to Nix, like extremely intelligent property merging.
Our laravelSite module is actually a function that returns a module (in the form of a callable function). We pass parameters (like domains) directly to the outer function, which produces the inner callable module, which Nix then knows how to call.
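If the two-arrows shape below looks odd, this is just standard Nix currying: a function whose body is another function. For instance, in nix repl:

nix-repl> add = a: b: a + b
nix-repl> (add 1) 2
3

Our module is the same idea, just with attribute sets instead of numbers. Here’s the outer signature and the start of the inner module: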
{
  name,                    # Name of the site, the username and /srv/{name} will be based on this
  phpPackage,              # e.g. pkgs.php84
  domains ? [],            # e.g. [ "example.com" "acme.com" ]
  ssl ? false,             # Should SSL be used
  cloudflareOnly ? false,  # Should CF Authenticated Origin Pulls be used
  extraNginxConfig ? null, # Extra nginx config string
  sshKeys ? null,          # SSH public keys used to log into the site's user for deployments
  extraPackages ? [],      # Any extra packages the user should have in $PATH
  queue ? false,           # Should a queue worker systemd service be created
  queueArgs ? "",          # Extra args for the queue worker (e.g. "--tries=2")
  generateSshKey ? true,   # Generate an SSH key for the user (used for GH deploy keys)
  poolSettings ? {         # PHP-FPM pool settings. Changing this will override all of these defaults
    "pm" = "dynamic";
    "pm.max_children" = 8;
    "pm.start_servers" = 2;
    "pm.min_spare_servers" = 1;
    "pm.max_spare_servers" = 3;
    "pm.max_requests" = 200;
    "php_admin_flag[opcache.enable]" = true;
    "php_admin_value[opcache.memory_consumption]" = "256";
    "php_admin_value[opcache.max_accelerated_files]" = "10000";
    "php_admin_value[opcache.revalidate_freq]" = "0";
    "php_admin_flag[opcache.validate_timestamps]" = false;
    "php_admin_flag[opcache.save_comments]" = true;
  },
  extraPoolSettings ? {},  # PHP-FPM pool settings merged into poolSettings. Doesn't override defaults
  ...
}:
{ config, lib, pkgs, ... }:
let
  username = "laravel-${name}";
in {
Most of the parameters should be self-explanatory. If you don’t understand any of them, you’ll see how they’re all used soon. The username is just how we derive the Linux user name from the site name. All of the following code is stuff from the inner module.
services.nginx.enable = true;
security.acme.acceptTerms = lib.mkIf ssl true;
networking.firewall.allowedTCPPorts = [80] ++ lib.optionals ssl [443];
We enable the nginx service and accept the ACME terms. That way we can configure nginx sites and get HTTPS with just a single line of code.

The firewall part is a bit more interesting. We set allowedTCPPorts to just [80] if SSL is not used, and merge it with [443] if SSL is used. However, this whole line will be merged with other allowedTCPPorts = ... lines in other modules. We are not overriding the value; Nix knows how to magically combine config from different modules. If we wanted to, we could override the value, but that’s out of the scope of this article. If you want more details, see this for instance. All you need to know for now is that assignments do not necessarily override existing values.
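For the curious, overriding rather than merging is typically done with lib.mkForce. A quick sketch (not something this module does):

# Discard every other module's definition of this option and force an exact list:
networking.firewall.allowedTCPPorts = lib.mkForce [ 80 443 ];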
Next, we configure nginx:
services.nginx.virtualHosts = lib.genAttrs domains (domain: {
  enableACME = ssl;
  forceSSL = ssl;
  root = "/srv/${name}/public";

  extraConfig = ''
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    charset utf-8;
    index index.php;
    error_page 404 /index.php;

    ${lib.optionalString cloudflareOnly ''
      ssl_verify_client on;
      ssl_client_certificate ${pkgs.fetchurl {
        url = "https://developers.cloudflare.com/ssl/static/authenticated_origin_pull_ca.pem";
        sha256 = "0hxqszqfzsbmgksfm6k0gp0hsx9k1gqx24gakxqv0391wl6fsky1";
      }};
    ''}

    ${lib.optionalString (extraNginxConfig != null) extraNginxConfig}
  '';

  locations = {
    "/" = {
      tryFiles = "$uri $uri/ /index.php?$query_string";
    };

    "= /favicon.ico".extraConfig = ''
      access_log off;
      log_not_found off;
    '';

    "= /robots.txt".extraConfig = ''
      access_log off;
      log_not_found off;
    '';

    "~ ^/index\\.php(/|$)".extraConfig = ''
      fastcgi_pass unix:${config.services.phpfpm.pools.${name}.socket};
      fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
      include ${pkgs.nginx}/conf/fastcgi_params;
      fastcgi_hide_header X-Powered-By;
    '';

    "~ /\\.(?!well-known).*".extraConfig = ''
      deny all;
    '';
  };
});
Some things to notice:
- The site will be located in /srv/{name} — the name we’ve passed to the module parameters
- If ssl is set to true, we enable ACME (this is ALL you need to get HTTPS working!) as well as force SSL, so all HTTP requests are redirected to HTTPS
- Most of the config is just taken from the sample nginx configuration in the Laravel docs
- We can reference pkgs.nginx to get its directory (it will be something like /nix/store/<long hash>-nginx-<version>), from which we can access the includes like fastcgi_params
- Similarly, we can reference the config for the phpfpm pool (we’ll be setting that up next) to get the unix socket path for this site’s pool
- If cloudflareOnly is passed, we enable ssl_verify_client on; and ssl_client_certificate <path>, where the path is dynamically inserted by Nix. Notice that we’re just telling it to pull a file from some URL and verify its checksum. The result of that call will be replaced with the path to the file (also in /nix/store/). We can directly fetch URLs and embed them in our config! We’ll be doing more of that soon. Also, if you’d like to learn more about what exactly cloudflareOnly does, click here.
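One helper doing a lot of work above is lib.genAttrs, which turns the list of domains into one identical vhost definition per domain. Roughly:

lib.genAttrs [ "foo.com" "bar.com" ] (domain: { root = "/srv/foo/public"; })
# => { "foo.com" = { root = "/srv/foo/public"; };
#      "bar.com" = { root = "/srv/foo/public"; }; }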
# PHP-FPM pool configuration
services.phpfpm.pools.${name} = {
  user = username;
  phpPackage = phpPackage;
  settings = poolSettings // extraPoolSettings // {
    "listen.owner" = config.services.nginx.user;
  };
};

# User and group settings
users.users.${username} = {
  group = username;
  isSystemUser = true;
  createHome = true;
  home = "/home/${username}";
  homeMode = "750";
  shell = pkgs.bashInteractive;
  packages = [ phpPackage pkgs.git pkgs.unzip phpPackage.packages.composer ] ++ extraPackages;
} // lib.optionalAttrs (sshKeys != null) {
  openssh.authorizedKeys.keys = sshKeys;
};

users.groups.${username} = {};

# Add site group to nginx service
systemd.services.nginx.serviceConfig.SupplementaryGroups = [ username ];
In this part of the config, we create a php-fpm pool that will be executed by the site’s user (which we create directly below). It defaults to a unix socket, so we don’t need to configure that here, and nginx can just easily reference it as we did above. For the pool settings, we merge the poolSettings with extraPoolSettings. This way anyone using this module can either override the pool settings completely, or just keep my defaults and append some extra configuration. We also merge that with an attribute set that sets listen.owner to the nginx user. This sets the ownership of the unix socket (/run/phpfpm/{sitename}.sock) to nginx so it can access it.
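The // operator is a shallow, right-biased merge: keys from the right operand win, and nested attribute sets are replaced rather than merged. For instance:

{ "pm.max_children" = 8; "pm" = "dynamic"; } // { "pm.max_children" = 12; }
# => { "pm" = "dynamic"; "pm.max_children" = 12; }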
Next, we create the actual user that the site will be using. This user will have a dedicated home directory separate from the /srv site directory, mainly for things like .cache. We set the user’s shell and define what packages the user will be able to access.
In NixOS, we can configure system-wide packages as well as user-specific packages. Since, for instance, PHP is available as just php, it’s best that we don’t install any PHP version system-wide and instead install specific packages for users. Here we’re just directly using the phpPackage argument that was passed to the module. The array also includes git, unzip, and composer taken from the user’s PHP package, as well as any extraPackages we pass to this module — like a specific version of Node.js.
We also configure the SSH keys for the user — that is, the SSH keys that can be used to log in as this user. This could be your primary public key or a key used by a CI action that connects to the server via SSH and runs the deployment script. We’ll get to that.
Finally, we add the user’s group to the “supplementary groups” of the nginx service. The nice thing about NixOS is that most services are pretty hardened by default, so you usually don’t need to write configuration to make things more restrictive. Rather, you may need to write configuration to make things more permissive. Which is what we’re doing here — without this, nginx wouldn’t be able to access static files in /srv/{site}/public.
systemd.services."laravel-queue-${name}" = lib.mkIf queue {
description = "Laravel Queue Worker for ${name}";
after = [ "network.target" "phpfpm-${name}.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
User = username;
Group = username;
WorkingDirectory = "/srv/${name}";
ExecStart = "${phpPackage}/bin/php artisan queue:work ${queueArgs}";
Restart = "always";
RestartSec = 10;
KillMode = "mixed";
KillSignal = "SIGTERM";
TimeoutStopSec = 60;
};
};
This is pretty straightforward: a regular systemd service for running the queue worker (if queue was set to true). Notably, we set the user the service should run as and the working directory, and we use Restart = "always" because php artisan queue:restart just sends a signal telling the queue worker to die. We again use the passed phpPackage, this time as a path, to get the right PHP CLI binary. And if any queueArgs were passed, we append them to the command (this could be --tries=2 or similar).
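Since Nix interpolates packages as their store paths, the rendered unit file ends up with an absolute path (illustrative only; the hash and exact package name will differ):

# Roughly what systemd sees after evaluation (illustrative):
#   ExecStart=/nix/store/<hash>-php-8.4/bin/php artisan queue:work --tries=3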
security.sudo.extraRules = [{
  users = [ username ];
  commands = [
    {
      command = "/run/current-system/sw/bin/systemctl reload phpfpm-${name}";
      options = [ "NOPASSWD" ];
    }
    {
      command = "/run/current-system/sw/bin/systemctl reload phpfpm-${name}.service";
      options = [ "NOPASSWD" ];
    }
  ] ++ lib.optionals queue [
    {
      command = "/run/current-system/sw/bin/systemctl status laravel-queue-${name}";
      options = [ "NOPASSWD" ];
    }
    {
      command = "/run/current-system/sw/bin/systemctl status laravel-queue-${name}.service";
      options = [ "NOPASSWD" ];
    }
  ];
}];
Here we configure “sudo rules” specifying which commands that normally require sudo perms the user may execute without a password. This is useful in deployment scripts. Each command is listed both with and without the .service suffix because sudo matches the command line exactly. We allow systemctl reload phpfpm-${name} since we’re using opcache, so php-fpm needs to be reloaded on each deployment. And we let the user inspect the status of the queue worker. For restarting the queue itself, the user can use php artisan queue:restart as described above. The status command is just a convenience so you don’t need to switch back to root when you’re in a shell as this user and want to check what the queue is doing.
services.cron.systemCronJobs = [
  "* * * * * ${username} cd /srv/${name} && ${phpPackage}/bin/php artisan schedule:run > /dev/null 2>&1"
];
We set up cron for the site. Again you can see how we’re using the variables in the string. This cron job runs every minute and runs scheduled tasks in Laravel.
Now we get to some nice-to-haves, not strictly webserver config.
environment.etc."laravel-${name}-bashrc".text = ''
export PATH="$HOME/.config/composer/vendor/bin/:$PATH"
# Laravel site welcome message
echo "Welcome to ${name} Laravel site!"
echo "Domains: ${lib.concatStringsSep ", " domains}"
echo "User home: /home/${username}"
echo "Site: /srv/${name}"
echo "Restart php-fpm: sudo systemctl reload phpfpm-${name}"
${lib.optionalString queue ''echo "Restart queue: php artisan queue:restart"''}
${lib.optionalString queue ''echo "Queue status: sudo systemctl status laravel-queue-${name}"''}
${lib.optionalString generateSshKey ''echo "SSH public key: cat ~/.ssh/id_ed25519.pub"''}
echo "---"
'';
We create a file called /etc/laravel-${name}-bashrc with the contents above. This is the bashrc for the user we’re creating. It’s just a helpful welcome message when the user enters a shell, showing a quick overview of where the site is, what domains it uses, how to restart the queue or php-fpm, and where the user can find his public key. Honestly, there might be a better way to do this, but this is a straightforward way to create a file that we can later use. The bashrc also adds composer’s bin dir to $PATH so we can use composer global.
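Two small lib helpers appear in that file: concatStringsSep joins a list into one string, and optionalString returns its string only when the condition holds. For instance:

lib.concatStringsSep ", " [ "foo.com" "bar.com" ]  # => "foo.com, bar.com"
lib.optionalString false ''echo "hidden"''         # => ""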
systemd.tmpfiles.rules = [
  "d /srv 0751 root root - -"
  "d /home 0751 root root - -"
  "d /srv/${name} 0750 ${username} ${username} - -"
  "C /home/${username}/.bashrc 0640 ${username} ${username} - /etc/laravel-${name}-bashrc"
];
We define some file/directory rules. This section is a bit confusing, mainly because the service is called “tmpfiles” even though it can also be used for persistent files. We use these rules to ensure certain paths exist with the right ownership and permissions. The last entry copies the etc bashrc file we created above into the user’s home directory.
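The rule syntax is systemd’s own (see the tmpfiles.d man page), annotated here for reference:

# Field order per tmpfiles.d(5): Type Path Mode User Group Age Argument
# "d" creates the directory if it's missing; "C" copies Argument to Path,
# but only if Path doesn't exist yet:
"d /srv/${name} 0750 ${username} ${username} - -"
"C /home/${username}/.bashrc 0640 ${username} ${username} - /etc/laravel-${name}-bashrc"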
systemd.services."generate-ssh-key-${name}" = lib.mkIf generateSshKey {
description = "Generate SSH key for ${username}";
wantedBy = [ "multi-user.target" ];
after = [ "users.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
User = "root";
};
script = ''
USER_HOME="/home/${username}"
SSH_DIR="$USER_HOME/.ssh"
KEY_FILE="$SSH_DIR/id_ed25519"
if [[ ! -f "$KEY_FILE" ]]; then
echo "Generating SSH key for ${username}"
mkdir -p "$SSH_DIR"
${pkgs.openssh}/bin/ssh-keygen -t ed25519 -f "$KEY_FILE" -N "" -C "${username}"
chown -R ${username}:${username} "$SSH_DIR"
chmod 700 "$SSH_DIR"
chmod 600 "$KEY_FILE"
chmod 640 "$KEY_FILE.pub"
echo "SSH key generated: $KEY_FILE.pub"
echo "Public key for deploy key:"
cat "$KEY_FILE.pub"
else
echo "SSH key already exists for ${username}"
fi
'';
};
Finally, we have a section for generating an SSH key for the user. This differs from the SSH public keys used for logging in as the user over SSH; rather, this is a key the user will use when ssh’ing into other servers. In practical terms, this SSH key can be used as a deploy key in a GitHub repo to pull a private repo’s contents, as part of a deploy script, via an SSH remote. We’ll get to that in the section about deployments.
Just like with the bashrc, I don’t love this part of the code and am sure there is a better way to do this, but for our purposes this works completely fine. Also note that this only executes if generateSshKey is set to true (which is the default).
Putting all of this together, we have a module that configures:
nginx
queue worker
cron
bashrc
ssh keys (both identity and authorized keys)
sudo rules
php-fpm
users and groups
firewall
any extra packages the site may need
all in a single file, a bit over 200 lines of code. This module can be used in as little as 5 lines of code. All of that is configured for each site. Everything is isolated as it should be, and you get the comfort of using regular Linux: just su laravel-foo && cd /srv/foo and use php, npm, or anything else — no docker run annoyance. Everything will be the right version for the site, isolated from all other sites. The user can only access things related to his site.
You can find the full file here with more detailed documentation in the README.
Setting up a local VM
The laravelSite module shown above can be directly used in production (or on something like a staging server; we’ll get to how to set up servers in the next section), but when I was developing it, I obviously needed to be able to test changes locally quickly.
Luckily, you can get an experience that’s very close to “editing a Dockerfile” locally. If you’re on NixOS, I believe there’s some nice tooling that lets you create slim QEMU VMs that mount your machine’s /nix/store, so they barely take up any space. On Linux in general there may be more neat tooling. I am on macOS, however, so this section will focus on how to set up an actual VM that’s nice to use. This is not exactly step by step; I might update this at some point in the future, but for now you’ll have to figure out the details yourself just by following these general steps. It’s pretty easy.
1. Install Parallels. Yes, it costs money, but it’s the best VM experience you can have
2. Download the NixOS ISO. Specifically the full ISO with a graphical interface; I had issues booting into the minimal one
3. Go over the installation, there you select to not install a desktop environment

Once you have the VM set up, configure it like this:

- Options → Startup and Shutdown → Custom:
  - Start Automatically: When Parallels Desktop starts (which should also be on your macOS boot btw)
  - Startup delay: 5s or so
  - Startup view: Headless
  - On VM shutdown: Close window
  - On Mac shutdown: Suspend
  - On Window close: Keep running in background
- Options → Sharing → Share custom Mac folders with Linux → Manage Folders, add a custom path on your host machine (like ~/Projects/nix) where you have modules such as laravel.nix
- Options → More Options →
  - Clipboard sync: Bidirectional
  - Authenticate with macOS SSH public key
- Hardware → Network → Shared Network
Now what should happen is that every time you start your Mac, the NixOS VM will automatically open. You just close the window and it’ll keep running in the background with no desktop environment. I find that this uses basically 0% CPU, much less than the average macOS app you haven’t opened in 2 days that for whatever reason still keeps using 30% CPU.
When the VM is started, you should see a new entry in /etc/hosts. I don’t remember if this requires more config; if it does, it’s probably Parallels settings, not the specific VM. This means you can just ssh username@nixos-dev (or however you named the user and the VM’s hostname) and you should be able to connect from your terminal.
Your shared paths will be automatically mounted in /mnt/something (just ls that folder), but I found that /mnt doesn’t exist out of the box. You could just create that dir with mkdir, maybe set some relaxed perms. To make this reproducible you could use tmpfiles like we do above — but I don’t care on my dev machine. After that you just need to restart the mount service; I think if you run systemctl list-units --type=mount --all | grep psf and restart the service that shows up, the mount point will work. Or just restart the VM.
With that, you should have everything set up. Open your code editor and make changes to modules, have another window open with SSH into the NixOS VM, use that module in your system config, and just run nixos-rebuild switch. You may also need to use the --impure flag if you’re using a mounted module.
This is especially comfortable if you use neovim with tmux. One tmux tab with neovim, one tab with ssh.
Some more info about how I use this is here.
To start using the laravelSite module in your VM, I highly recommend reading the full README of this repo. Out of the box, your NixOS VM will only have /etc/nixos/configuration.nix (and a hardware config imported by the configuration). You’ll want to create /etc/nixos/flake.nix that looks something like the sample “full system config” described in the README.
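If you just want the general shape before opening the README, a flake wrapping this module looks roughly like this (hedged sketch; the hostname, nixpkgs branch, and structure are placeholders, and the README’s version is authoritative):

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { nixpkgs, ... }@inputs:
    let
      system = "aarch64-linux"; # or x86_64-linux
      pkgs = nixpkgs.legacyPackages.${system};
      laravelSite = import ./laravel.nix;
    in {
      nixosConfigurations.nixos = nixpkgs.lib.nixosSystem {
        inherit system;
        modules = [
          ./configuration.nix
          (laravelSite {
            name = "demo";
            domains = [ "demo.localhost" ];
            phpPackage = pkgs.php84;
          })
        ];
      };
    };
}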
The only change you’ll want to make to the sample config is:
- laravelSite = import ./laravel.nix;
+ laravelSite = import /mnt/psf/<your path>/laravel.nix;
Then, within the modules array, we can add a site like this:
(laravelSite {
  name = "demo";
  domains = [ "demo.localhost" ];
  phpPackage = pkgs.php84;
  extraPackages = [ pkgs.nodejs_24 ];
})
We’ve added the node package because we’ll actually be creating a site. You will not be doing this in production, but locally we want to be able to easily create a simple site to test out the config.
Now run (as root from /etc/nixos):
nixos-rebuild switch --impure
Once the command is finished executing, you should be able to run:
curl demo.localhost
And get back a “File not found”. If we check:
# ls /srv
demo
# ls /srv/demo
We can see there’s no public/ directory, which is where Laravel’s index.php normally is. Let’s try putting a dummy file there:
I’ll be using vim here. If you do not know how to use vim, see if nano is available. You can add any editors to /etc/nixos/configuration.nix: find the line with environment.systemPackages = with pkgs; [ and add your editor there. You can search packages here.
# su laravel-demo
Welcome to demo Laravel site!
Domains: demo.localhost
User home: /home/laravel-demo
Site: /srv/demo
Restart php-fpm: sudo systemctl reload phpfpm-demo
SSH public key: cat ~/.ssh/id_ed25519.pub
$ cd /srv/demo
$ mkdir public
$ vim public/index.php
<?php
printf("Hello world!\n");
$ curl demo.localhost
Hello world!
We can see nginx and php-fpm are working fine. Let’s install Laravel now:
$ composer global require laravel/installer
$ cd /srv/demo # make sure we're still in /srv/demo
$ rm -rf * # clean the files we've just created
$ laravel new temp # installer needs a path, can't be .
$ # select for instance the Vue starter kit and say "Yes" to npm install
$ mv temp/* temp/.* . # move files back into /srv/demo
$ ls -la temp # should be empty
$ rm -r temp
Now if we reload php-fpm and try to access the site:
$ sudo systemctl reload phpfpm-demo
$ curl demo.localhost
We should see the Inertia site (or its markup rather). If we go add a route:
$ vim routes/web.php
Route::get('/nix', function () {
    return "Hi from /srv/demo!\n";
});
The route should work as expected:
$ sudo systemctl reload phpfpm-demo
$ curl demo.localhost/nix
Hi from /srv/demo!
Finally, to test out making tweaks to our config, let’s add another domain. If we try to use foobar.localhost now, we’d expect to get no response, but due to nginx’s annoying fallthrough behavior, it will serve our site on any domain if the request hostname can’t be properly matched with a site. Let’s turn that off:
# # back to root
# cd /etc/nixos
# vim flake.nix
{
  services.nginx.virtualHosts."catchall" = {
    default = true;
    locations."/".return = "444";
    rejectSSL = true;
  };
}
This should get the job done. We just create a new virtual host, set it as the default vhost, return 444 (no response) and reject SSL. Let’s rebuild NixOS:
# nixos-rebuild switch --impure
Now if we try to access foobar.localhost
we get:
# curl foobar.localhost
curl: (52) Empty reply from server
Perfect. Let’s add this domain to the Laravel site now:
- domains = [ "demo.localhost" ];
+ domains = [ "demo.localhost" "foobar.localhost" ];
And again rebuild:
# nixos-rebuild switch --impure
Now if we try curl again:
# curl foobar.localhost # long HTML response
# curl foobar.localhost/nix
Hi from /srv/demo!
You get the point. Any changes we want to make to the server, we make in the config files in /etc/nixos and then just run the rebuild command.
Once again I’ll point you to this repo; the README includes a section about cleanup which shows how to delete past generations (which are all saved each time we rebuild the server) once we don’t need them anymore.
Deploying to production
Installing NixOS on a server
The great thing about having a properly configured web server is that you can use simple caveman techniques like deployments via SSH from GitHub Actions. We’ll just use the SSH credentials for the site’s user to connect from GHA and run git pull (plus any other commands that are part of your deployment script).
But first, we need to actually get NixOS running on a server. Most cloud providers don’t offer NixOS. Luckily there’s a tool called nixos-anywhere which can turn an existing Linux installation into NixOS.
In this repo I have a thin wrapper around nixos-anywhere to make it a little nicer to use.
Here’s what you do:
1. You need Nix (not NixOS; the Nix package manager supports multiple platforms) locally. It’s preferable to run nixos-anywhere from NixOS, but it works just fine from a Mac as long as you have Nix installed. Personally, I recommend using the Determinate Nix installer if you’re going to use Nix on macOS. During installation you can select to use regular Nix instead of Determinate Nix; the benefit of using this installer instead of the official one is that it comes with a proper uninstaller.
2. Create a new server, for instance on Hetzner Cloud (that’s the only one I’ve tested this with). This could be a dedicated server too, but you might need to deal with hardware config. Out of the box my wrapper works with Hetzner VMs and likely any other virtual private servers. When creating this server, I recommend using Debian (shouldn’t matter though) and, importantly, the same architecture (x86/arm) as your local NixOS VM or local machine with Nix. While creating this server, make sure to attach SSH keys so you can connect without a password. I recommend not connecting to the server upfront so you don’t have to clean up ~/.ssh/known_hosts after the installation process.
3. If you’re not using ARM, change this to x86_64-linux before proceeding
4. Run (cd anywhere && ./auto.sh <server_ip> <path_to_your_ssh_key>), so for instance (cd anywhere && ./auto.sh 123.123.123.123 ~/.ssh/id_ed25519.pub). This should handle the installation without any interaction.¹
5. Wait for a minute or two — the server is restarting.
6. Run (cd postinstall && ./auto.sh <server_ip> <path_to_your_ssh_key>). nixos-anywhere itself only sets up the server per your config; it doesn’t actually copy the config into /etc/nixos for whatever reason. This does that and sets up a simple system flake you can work with. It also rebuilds the system so you know the system state will match exactly what’s in your /etc/nixos.
Now you should be able to ssh into the server (it will have a new SSH identity) without a password as root. I highly recommend reading the README (as well as the scripts you’re executing) of the repo before running these commands. That said, I’ve set up numerous servers like this and the process works very well.
In essence, if everything else is in place, setting up a new server is just a matter of:
$ (cd anywhere && ./auto.sh <server_ip> ~/.ssh/id_ed25519.pub)
$ sleep 60
$ (cd postinstall && ./auto.sh <server_ip> ~/.ssh/id_ed25519.pub)
Adding a Laravel site
Now we’ll set up a real Laravel site on a real domain. If you have any simple app you could deploy, use that. Otherwise create a basic Laravel app (can be any starter kit) and put it on GitHub as a private repo.
To configure our server, we’ll follow similar steps as above when we were setting up Laravel in a local VM:
# cd /etc/nixos
# vim flake.nix
(laravelSite {
  name = "foo";
  domains = [ "nix.your-domain.com" ]; # use a real domain
  # don't enable SSL yet
  phpPackage = pkgs.php84;
  extraPackages = [ pkgs.nodejs_24 ];
})
Note that you’ll need to update the structure of flake.nix to match the sample “full system config” from this README (it includes the laravelSite import, pkgs definition, and so on) first before adding your sites.
To make the laravelSite import work, you’ll also need to copy laravel.nix from our repo into /etc/nixos. Either just paste the contents into an editor or:
scp laravel.nix root@<your server ip>:/etc/nixos/
With that in place, we can rebuild NixOS:
# nixos-rebuild switch
Now we have the user for the site created. Let’s continue there:
# su laravel-foo # the site name was foo
Welcome to foo Laravel site!
Domains: nix.your-domain.com
User home: /home/laravel-foo
Site: /srv/foo
Restart php-fpm: sudo systemctl reload phpfpm-foo
SSH public key: cat ~/.ssh/id_ed25519.pub
Let’s grab the SSH key:
$ cat ~/.ssh/id_ed25519.pub
And copy it into your clipboard. Then go to the GitHub repo you created, Settings → Deploy keys and add this SSH key. Do not enable write access.
We should be able to pull the site now:
$ cd /srv/foo
$ git clone git@github.com:username/repo.git .
$ composer install
$ sudo systemctl reload phpfpm-foo
$ npm install
$ npm run build
$ cp .env.example .env
$ touch database/database.sqlite
$ # any other setup steps your app needs
Now point the DNS record for nix.your-domain.com to the server IP and you should be able to see the site. If you get any errors, remember you can use systemctl status, journalctl -fu, and so on, like on any other Linux server. Just make sure you run these as root.
# systemctl status nginx
# systemctl status phpfpm-foo
# journalctl -fu phpfpm-foo
Assuming everything went well with your DNS and site setup, it’s time to add SSL now before moving on to configuring deployments.
To mimic a real site more closely, we’ll also enable the queue worker. It’s fine if it does nothing.
# vim /etc/nixos/flake.nix
... in your laravelSite block
queue = true;
ssl = true;
Now if we rebuild:
# nixos-rebuild switch
The site should now automatically redirect to HTTPS, with SSL certificates being automatically provisioned and renewed.
If we go back to the site user, we can see the queue worker is running:
# su laravel-foo
Welcome ...
Restart queue: php artisan queue:restart
Queue status: sudo systemctl status laravel-queue-foo
^ two new lines
$ sudo systemctl status laravel-queue-foo
... systemd[1]: Started Laravel Queue Worker for foo.
As mentioned at the start of the post, cron is enabled automatically for scheduled tasks in Laravel, so by this point we have:
nginx
php-fpm
SSL
SSL renewals
git pulls
Laravel queue worker
Laravel schedule
all set up. To reiterate, this is all with just this:
(laravelSite {
  name = "foo";
  domains = [ "nix.your-domain.com" ];
  ssl = true;
  queue = true;
  phpPackage = pkgs.php84;
  extraPackages = [ pkgs.nodejs_24 ];
})
Pretty awesome. With this part done, all that remains is deployments.
Setting up deployments
The fact that we have a properly set up server with strong user isolation means we can use simple techniques for deployments, like GitHub Actions ssh’ing into your server to run a deploy script whenever you push to master. (You may prefer having a separate branch like production that’s deployed; this is up to you.)
Create a file called .github/workflows/deploy.yml with the following contents:
name: Deploy to Production

on:
  push:
    branches: [master]

jobs:
  deploy:
    name: Deploy to Production
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: 22
          script: |
            cd /srv/foo
            ./deploy.sh
Notice two things: we’re using secrets (which we’ll configure now) and we’re using ./deploy.sh. We’ll get to that soon.
First, secrets. Simply open (in the repo) Settings → Secrets and variables → Actions and set repository secrets with the host (server IP) and username (laravel-foo). For the private key (this key will be used to ssh into our server) we’ll generate a new SSH key. Simply run (on your own dev machine):
$ mkdir /tmp/ssh_keys
$ cd /tmp/ssh_keys
$ ssh-keygen -f id_ed25519 -t ed25519 -P "" -C "GHA deploy workflow"
$ ls
id_ed25519 id_ed25519.pub
id_ed25519 is the private key. id_ed25519.pub is the public key. Copy the private key’s contents and paste them into the SSH_PRIVATE_KEY repo secret. Then copy the public key’s contents into your clipboard. We’ll have to add it to the laravel-foo user’s authorized keys. This is as easy as:
(laravelSite {
  # ... existing config
  # add this:
  sshKeys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICsMlOZ2LVOuYsTDs4+M2IKi92ubEu9SQ85ZkRTCF+bx GHA deploy workflow"
  ];
})
Now your GHA workflow will be able to log in as the site’s user. This is riskier than some deployment strategies, like push webhooks, but dramatically simpler.
We don’t need the keys locally anymore so:
rm -rf /tmp/ssh_keys
Now let’s just finalize the deployment script. It’s up to you how you handle this. Some people want to have the commands in the deployment workflow; I like having ./deploy.sh in the project root. Ultimately it’s the same thing — both are in the same repo.
Here’s what a sample deployment script looks like:
#!/usr/bin/env bash
set -xe
php artisan down --refresh=10 --retry=50
git pull
composer install --no-dev --no-interaction --prefer-dist --optimize-autoloader
npm ci --omit=dev
npm run build
php artisan config:clear
php artisan route:clear
php artisan view:clear
php artisan migrate --force
php artisan optimize
php artisan queue:restart
sudo systemctl reload phpfpm-foo
php artisan up
You likely understand all of this, but some main points to focus on:
- The workflow changes the working directory, so we don’t need to do that here
- We use #!/usr/bin/env bash; it’s generally preferable to use “env” with NixOS since many binaries may be in nonstandard paths
- We put the app into maintenance mode. I like to use maintenance mode for the full process of every deployment. It’s only about 30s of downtime and means I don’t have to worry about what kinds of changes I’m deploying. The args also instruct other services and browsers to retry in some number of seconds, so users don’t have to F5 by hand
- We install prod dependencies via composer. Same thing with npm
- We compile frontend assets. In some projects (largely server-rendered) I prefer including built assets in version control since they so rarely change. In SPA apps I gitignore built assets and build during deployment
- We run the usual artisan optimization scripts. It is possible some of them are redundant because optimize covers them, but that’s fine
- We restart the queue worker
- We reload php-fpm to clear opcache. The default php-fpm config in laravel.nix is: cache everything, never revalidate files, kill the process after serving 200 requests. This means we don’t have to worry about memory leaks, but we do have to manually clear opcache whenever we change the PHP files (like during a git pull or subsequent composer install)
- We exit maintenance mode
Add this workflow and push some changes. You should see a successful deployment. If you add a new route, it should be accessible. If you check the status of the queue worker or php-fpm service you should see that they were reset.
And that’s really all there is to it. You now have working push deployments on a server that’s fully configured using an extremely simple config file. It’s a total of about 10 lines of server config (excluding brief boilerplate), about 20 lines of GHA YAML, and a 20-line deployment script. So in about 50 lines of very basic config you have all the infrastructure you need to deploy Laravel to production.
All that’s left to do now is get more familiar with Nix. If you care a lot about reproducibility, I recommend initializing a git repo in /etc/nixos and pushing it to a remote origin so you don’t lose the config and the lockfile. You don’t need to do this if you just care about configuring a server easily. For many of my projects, that’s by far the biggest value add here.
Speaking of the lockfile, very briefly: your nixpkgs input is now locked to some specific version in the lockfile. If you sync all /etc/nixos contents (maybe besides the hardware config) to another machine, you’ll get an identical setup. To update your config, simply run nix flake update and rebuild. If anything goes wrong, you can always roll back to a previous generation (which is beyond the scope of this article, but really easy, so go Google).
Bonus: Additional webserver config
Some extra things I like to do on a real server include:
Configuring a default nginx host. We’ve done this in the local VM section but not in the prod section. Extracted into catchall.nix.
Using Authenticated Origin Pulls. This makes nginx block any requests that are not coming from Cloudflare (I always use Cloudflare) before they reach your PHP app. This requires that you use SSL. This is just a matter of cloudflareOnly = true.
Using real_ip so nginx (and as such PHP) works with the real user IPs rather than Cloudflare IPs. Extracted into realip.nix.
These are all covered in this README in depth, so here I’ll just include the code with no explanations. That is to show what a real webserver configuration might look like (and again, this is ALL optional; you truly only need the laravelSite calls).
modules = [
  {
    nix.nixPath = [ "nixpkgs=${inputs.nixpkgs}" ];
    security.acme.defaults.email = "my@email.com";
  }
  ./configuration.nix
  ./realip.nix
  ./catchall.nix
  (laravelSite {
    name = "foo";
    domains = [ "foo.com" "bar.com" ];
    phpPackage = pkgs.php84;
    ssl = true;
    cloudflareOnly = true;
    queue = true;
    sshKeys = [
      "ssh key used in GHA"
      "my own SSH key for convenience"
    ];
    # interactive SQLite so I can interact with the DB
    # in an SSH shell
    extraPackages = [ pkgs.nodejs_24 pkgs.sqlite-interactive ];
  })
];
Bonus: Backups
To add backups, you can either set up new cronjobs in NixOS (which is trivial, see laravel.nix) or you can use the Laravel scheduler. The former is a bit more reliable, but I appreciate the simplicity of having the automatic cron execute the Laravel scheduler, which then executes my backup jobs. If reliability is a concern, add some cron monitoring.
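For the NixOS route, a hedged sketch following the same systemCronJobs pattern as the scheduler entry in laravel.nix (backup.sh here is a hypothetical script in the site root):

services.cron.systemCronJobs = [
  # Hypothetical nightly DB backup at 03:15, run as the site's user:
  "15 3 * * * laravel-foo cd /srv/foo && ./backup.sh > /dev/null 2>&1"
];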
Since I love using SQLite with Laravel, my backups are literally just: VACUUM the DB into a temporary file and upload it to S3.
¹ If I run this on macOS I get one warning (ostensibly an error) during the early stage of the installation, but it seems to have no impact on the script or server whatsoever. Still, you may prefer running this from a NixOS VM for better compatibility.