Deploying Laravel with Kamal 2: Lessons from a Real Production Setup
Reading time: ~5 minutes
If you're looking into Kamal 2 for a Laravel application, here's what you probably want to know: does it actually work in production, and is the setup worth the friction?
Short answer: yes. But there are sharp edges, and most of them aren't where you'd expect.
This post covers what it was actually like to deploy an existing Laravel application using Docker and Kamal 2. This wasn't a greenfield project or a tutorial built around a clean example. It was a real app already in production, with users relying on it, and a deployment process that needed to become calmer and more predictable without disrupting the work already underway.
We didn't rewrite anything. We didn't migrate the stack. The application stayed exactly what it was. What changed was how it got to production, and with it, how much stress surrounded every release.
That's the kind of problem Kamal 2 is good at solving. Here's how it went.
What we were working with: an existing Laravel app in production
We were not building a new project from zero.
The Laravel application already existed and was doing its job. The team was comfortable with Linux, Docker, and Git, but we were not looking for a very advanced DevOps setup. We wanted production to stay simple and predictable.
At the beginning, the biggest risk was not the application itself. The risk was the deployment process.
We were worried that changing the delivery process would accidentally create new problems in production. The application worked. We did not want the deployment system to become the one unstable part of it.
Why we chose Kamal 2 over other Laravel deployment options
Before we started configuring anything, Kamal 2 looked interesting for a simple reason: it matched tools we already knew.
We looked at other options too. Some platforms made deployment easy at the beginning, but they also created more dependency on the platform and more cost over time. Other options gave full control, but they felt too heavy for what we needed.
Kamal felt like a middle ground.
It uses SSH, Docker, and Git. These are tools many developers already know. That meant we did not need to learn a completely new model just to deploy an application.
Another important point was control. We wanted to keep the application on the infrastructure we manage ourselves, without adding unnecessary complexity.
Where Kamal 2 gets tricky: Docker builds, SSH, and config gotchas
The initial setup looked simple, but in practice there were some sharp edges.
One problem was Docker build performance. Builds took longer than expected, especially when the build machine and the target server used different CPU architectures. Building images on one architecture for deployment to another added more friction than we first thought.
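One mitigation worth sketching here, as an assumption rather than our exact configuration: Kamal's builder section can pin the target architecture and persist build layers in a registry-backed cache, so repeat builds reuse layers instead of starting cold.

```yaml
# Hypothetical deploy.yml excerpt — values are placeholders.
# Pin the target architecture and keep build layers in the registry
# so subsequent builds can reuse cached layers.
builder:
  arch: amd64
  cache:
    type: registry
    options: mode=max
```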
Another issue was SSH behavior in automation. A connection that works in a normal terminal does not always work the same way when Docker Buildx or another tool uses SSH internally. Host verification, key selection, and small configuration details became important very quickly.
Kamal's configuration is also simple, until the moment something is slightly wrong. The deploy.yml file is small, but the parts are connected. A mistake in the builder, registry, or server configuration can affect the whole deployment flow.
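To make that concrete, here is roughly what a minimal deploy.yml looks like; the service name, image, server address, and registry are placeholders, not our actual values:

```yaml
# Sketch of a minimal Kamal 2 deploy.yml — all names are placeholders.
service: myapp                 # container/service name on the server
image: myorg/myapp             # image pushed to the registry below

servers:
  web:
    - 203.0.113.10             # placeholder server address

registry:
  server: ghcr.io              # any Docker-compatible registry works
  username: myorg
  password:
    - KAMAL_REGISTRY_PASSWORD  # resolved from Kamal's secrets file

builder:
  arch: amd64                  # architecture of the target server
```

Each section looks trivial on its own, but a typo in any one of them tends to surface as a failure somewhere else in the deploy flow.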
When Kamal 2 deployment becomes predictable (and why that's the goal)
The real change happened when the process became consistent.
Once image builds were predictable, the registry flow made sense, and containers started correctly during deploys, the whole system felt different.
That was the point where deployment started to feel boring.
And that is a good thing.
When deployments stop surprising you, confidence grows. You no longer feel like every release is a special event. You stop watching the process with tension. Production starts to feel more normal and more manageable.
What we learned the hard way
A few lessons became very clear during this process.
SSH config aliases turned out to matter more than expected. Using an alias in ~/.ssh/config is consistently more reliable than a raw user@ip connection string, especially when tools like Docker Buildx are making SSH connections internally rather than through a normal terminal session. The behavior isn't always the same, and that difference can cost hours. In practice, defining the host in your SSH config like this:
```
Host laravel-kamal
    HostName X.X.X.X
    User root
    IdentityFile ~/.ssh/id_rsa
```
And referencing it in your deploy.yml like this:
```yaml
builder:
  remote: ssh://laravel-kamal
```
ensures that the correct user, key, and host verification are all in place: for your terminal, for Docker, and for Buildx.
If your local machine and production server use different architectures, set up a remote builder early. We didn't, and the friction from cross-architecture builds added more time and troubleshooting than it should have. It's a small setup cost upfront that pays off quickly.
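In deploy.yml terms the fix is small. Assuming the laravel-kamal SSH alias described earlier, a remote builder looks roughly like this:

```yaml
# Hypothetical builder config: build natively on a matching-arch host
# instead of emulating the target architecture locally.
builder:
  arch: amd64                    # architecture of the production server
  remote: ssh://laravel-kamal    # SSH alias defined in ~/.ssh/config
```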
Health checks are easy to treat as an afterthought. They're not. A working health endpoint is what makes zero-downtime deployment actually safe. Without it, you're guessing whether the container is ready rather than knowing for sure. It belongs in the setup from the start, not added later when something breaks. Laravel 11 and above includes the /up route by default. If you're on Laravel 10 or below, add it manually to routes/web.php:
```php
Route::get('/up', function () {
    return response()->json([
        'status' => 'ok',
        'timestamp' => now()->toIso8601String(),
    ]);
});
```
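The route is only half of it: kamal-proxy has to be told to poll that endpoint before it routes traffic to a new container. A sketch of the matching deploy.yml section, with the interval and timeout values as assumptions rather than recommendations:

```yaml
# Hypothetical deploy.yml excerpt: kamal-proxy polls /up and only
# switches traffic once the new container responds successfully.
proxy:
  app_port: 80
  healthcheck:
    path: /up
    interval: 3      # seconds between polls
    timeout: 30      # fail the deploy if not healthy within this window
```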
The most useful reframe was this: deployment problems are often process problems. When something went wrong, the issue usually wasn't Kamal itself. It was an unclear assumption about networking, environment configuration, or how a tool behaved outside of a normal terminal context. Slowing down to question the process, not just the tool, is what moved things forward.
The app didn't change. The delivery process did.
One of the best parts of this experience was what we didn't have to touch. The Laravel application stayed exactly what it was. No rewrite, no framework migration, no rebuilding the product around a new platform. The team kept shipping product work without interruption.
What changed was the path to production. That distinction matters because a lot of teams assume that improving deployment means a significant technical overhaul. In this case, it didn't. The app was never the problem. Getting it to production reliably was. Separating those two things made the whole project clearer and kept the scope from expanding into something much harder to justify.
The payoff
After the process stabilized, the benefits were practical and immediate. Deployments became easier to repeat. Rollback felt realistic rather than theoretical. Infrastructure ownership got clearer. And releasing changes stopped feeling like a calculated risk.
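"Realistic" here means a couple of commands rather than a recovery plan. A sketch, with the version hash as a placeholder:

```shell
# List app containers still present on the server (old versions
# remain available until pruned), then boot a previous one.
kamal app containers
kamal rollback e5d9d7c    # placeholder git SHA of a prior release
```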
The biggest result was probably confidence. Production didn't feel fragile anymore. It felt like a system the team understood, one they could manage and improve without dreading what might go wrong. That shift is harder to put into a ticket or a retrospective, but it's the thing that changes how a team operates day-to-day.
What this means for teams maintaining existing apps
This experience reinforced something we see often.
Most teams do not need more tools. They need a calmer and clearer system.
A lot of "modern DevOps" discussion focuses on complexity, scale, and automation. But many teams are not looking for that. They want a deployment process that respects the application they already have and reduces operational stress.
That is why experience matters. The same kinds of edge cases appear again and again: SSH issues, build issues, registry issues, health checks, and environment confusion.

This experience was not about changing the application. It was about improving how the application reaches production.
Kamal 2 gave us a practical way to do that with tools we already trusted: Docker, SSH, and Git. It was not friction-free, but once the process became stable, deployment started to feel much more predictable.
Where to go from here
Most teams we work with don't need to start over. They need a calmer path forward. If that sounds familiar, we'd love to hear about your situation.