Research report

How to build a BYOS deployment product for Lovable repos without lying about compatibility

A practical report for a system where users bring their own server and a Lovable Git repo, and your platform decides whether to deploy it, adapt it, partially support it, or reject it with a useful reason.

May 6, 2026 Deployment systems / Lovable / SSR / VPS compatibility

Your current system already proves the easy part: static apps are straightforward. `npm install`, `npm run build`, get `dist/index.html`, point nginx at it, done. For conventional Node SSR, PM2 plus nginx reverse proxy also works well.

The real product challenge starts when a repo looks like a frontend app but actually depends on an external backend, a serverless provider runtime, a framework adapter, or a Cloudflare-specific execution model. That means your product should not be a shell-script launcher. It should be a classifier + build probe + bounded compatibility engine.

Tier A — deploy automatically

  • Static Vite/React apps that build to dist/
  • Static Astro / static export apps
  • Node SSR apps that expose a normal server process
  • Conventional Express / Fastify / Koa / Node web apps

Tier B — deploy with compatibility mode

  • Framework apps that can switch from an edge adapter to a Node adapter
  • Simple fetch-native worker-style apps with no Cloudflare-only services
  • Hybrid repos where the frontend is deployable but provider-managed backend artifacts must be treated separately

Tier C — reject clearly

  • True Cloudflare platform apps using Durable Objects, KV, R2, D1, Queues, or service bindings
  • Repos that require provider-managed backends you do not operate on the user server
  • Ambiguous repos where the build probe cannot produce either static assets or a runnable Node server safely

What Lovable usually generates

Deployment-relevant repo patterns

1. Default Lovable shape: Vite React SPA

This is the common case. You usually get package.json, vite.config.*, index.html, src/, Tailwind-ish styling, and a standard npm install + npm run build flow. If output lands in dist/, your nginx static flow is exactly the right move.

2. Frontend-only backend integrations

Many Lovable repos use Supabase or Firebase from the frontend only. These are still deployable as static apps, but they depend on external managed services and correct environment variables, auth callback URLs, and backend readiness.

3. Provider-native backend artifacts

Some repos include supabase/functions, supabase/migrations, firebase.json, Firestore rules, or Firebase functions. The frontend may still be deployable on your server, but those artifacts are not generic VPS backend code and should not be mistaken for PM2 targets.

4. Custom Node backend variants

Sometimes the repo includes server/, api/, backend/, Express, Fastify, or a start script that launches an HTTP server. These fit your PM2 + nginx model if the build probe confirms they behave like a normal Node service.

5. Edge-runtime / Cloudflare-style projects

This is where the pain begins. A repo with wrangler.toml, Cloudflare bindings, Workers semantics, or TanStack Start on a Cloudflare target is not the same species as a normal VPS deploy. Treat these as compatibility cases or rejections, not as ordinary SSR.

Recommended system design

The analysis pipeline your product actually needs

Stage 1 — fast repo scan

Read package.json, lockfiles, framework config, wrangler files, monorepo manifests, server entrypoints, and deployment scripts. Extract runtime clues before running anything.
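The Stage 1 scan can stay a pure function over file contents, which keeps it fast and safe. A minimal sketch, assuming an in-memory map of repo paths to contents; `RepoFiles`, `RepoSignals`, and `scanRepo` are illustrative names, not part of any real tool:

```typescript
// Stage 1 sketch: collect deployment signals from file names and
// package.json contents without executing anything from the repo.
type RepoFiles = Map<string, string>; // path -> file contents

interface RepoSignals {
  hasViteConfig: boolean;
  hasWranglerConfig: boolean;
  hasServerEntry: boolean;
  dependencies: string[];
  scripts: Record<string, string>;
}

function scanRepo(files: RepoFiles): RepoSignals {
  const paths = [...files.keys()];
  const pkgRaw = files.get("package.json");
  const pkg = pkgRaw ? JSON.parse(pkgRaw) : {};
  return {
    hasViteConfig: paths.some((p) => /^vite\.config\.(js|ts|mjs)$/.test(p)),
    hasWranglerConfig: paths.some((p) => /^wrangler\.(toml|json)$/.test(p)),
    hasServerEntry: paths.some((p) => /^(server|api|backend)\//.test(p)),
    dependencies: Object.keys({ ...pkg.dependencies, ...pkg.devDependencies }),
    scripts: pkg.scripts ?? {},
  };
}
```

Because nothing runs, this stage can be applied to untrusted repos before any sandbox is provisioned.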

Stage 2 — framework inference

Classify the repo into one of: static SPA, static SSG, Node web app, Node SSR app, provider-managed frontend, edge-runtime app, or unknown. Use framework-specific rules instead of generic folder-guessing cosplay.
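Framework-specific rules can be expressed as an ordered rule chain where higher-priority markers veto everything below them. A sketch, assuming hypothetical `Signals` fields produced by the Stage 1 scan:

```typescript
// Stage 2 sketch: map scan signals to one coarse class. Rule order
// encodes priority: edge-runtime markers veto all other guesses.
type RepoClass =
  | "static-spa" | "static-ssg" | "node-web-app" | "node-ssr"
  | "edge-runtime" | "unknown";

interface Signals {
  hasWranglerConfig: boolean;
  hasViteConfig: boolean;
  hasAstroConfig: boolean;
  hasServerEntry: boolean;
  dependencies: string[];
}

function inferFramework(s: Signals): RepoClass {
  if (s.hasWranglerConfig) return "edge-runtime";
  if (s.dependencies.includes("next")) return "node-ssr";
  if (s.hasServerEntry ||
      s.dependencies.some((d) => ["express", "fastify", "koa"].includes(d)))
    return "node-web-app";
  if (s.hasAstroConfig) return "static-ssg";
  if (s.hasViteConfig) return "static-spa";
  return "unknown";
}
```

The point of the explicit priority order is auditability: when a repo is misclassified, the rule that fired is visible in the code rather than buried in a heuristic score.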

Stage 3 — build probe

Run install and build in a sandbox. Inspect whether the result is dist/, out/, build/, .output/server, .next, a worker bundle, or total chaos. This is the part that catches repos that lie in package.json.
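The interesting part of the probe is not running `npm ci && npm run build` (elided here) but classifying what the build actually produced. A sketch over the list of produced paths; the directory conventions match those named above, and the marker paths are illustrative assumptions:

```typescript
// Stage 3 sketch: classify build output by inspecting produced paths.
type BuildArtifact = "static-assets" | "node-server" | "worker-bundle" | "unrecognized";

function classifyBuildOutput(producedPaths: string[]): BuildArtifact {
  const has = (p: string) =>
    producedPaths.some((x) => x === p || x.startsWith(p + "/"));
  // Worker markers beat a static dir: a bundle built for a worker
  // runtime is not a PM2 target even if assets sit alongside it.
  if (has("dist/_worker.js") || has(".wrangler")) return "worker-bundle";
  if (has(".output/server") || has(".next") || has("build/server")) return "node-server";
  if (has("dist/index.html") || has("out/index.html") || has("build/index.html"))
    return "static-assets";
  return "unrecognized";
}
```

Classifying from the artifact rather than from `package.json` is what catches repos whose declared scripts and actual output disagree.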

Stage 4 — startup / artifact validation

If the output is static, verify index rendering. If the output is server-like, boot it with an injected PORT, wait for a bind, and hit a smoke URL. If it looks like a worker runtime bundle, stop and classify it accordingly.
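For the server-like case, the boot-and-wait step reduces to a polling loop against a smoke URL. A minimal sketch; the real probe would spawn the built server as a child process with `PORT` injected, but here an in-process HTTP server stands in so the loop itself is demonstrable:

```typescript
import { createServer } from "node:http";
import type { AddressInfo } from "node:net";

// Stage 4 sketch: poll a smoke URL until the server binds or the
// deadline passes. Timeout and interval values are illustrative.
async function waitForHealthy(url: string, timeoutMs = 5000): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.ok) return true;
    } catch { /* not bound yet; retry */ }
    await new Promise((r) => setTimeout(r, 200));
  }
  return false;
}

// In-process stand-in for the spawned child process.
const server = createServer((_req, res) => res.end("ok"));
await new Promise<void>((r) => server.listen(0, r));
const { port } = server.address() as AddressInfo;
const healthy = await waitForHealthy(`http://127.0.0.1:${port}/`);
server.close();
```

A failed poll is itself evidence: "built a server-shaped artifact that never bound a port" is a far more useful rejection message than a generic deploy error.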

Stage 5 — decision synthesis

Choose exactly one outcome: deploy directly, deploy in compatibility mode, partial deploy with warnings, or reject with evidence. No vague “maybe it works” branch. That is how support tickets multiply.
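The no-maybe rule can be enforced by construction: a total function from evidence to one of four outcomes. A sketch, assuming hypothetical `Evidence` fields carried forward from the earlier stages:

```typescript
// Stage 5 sketch: collapse classification + build evidence into
// exactly one outcome. Every branch returns; no "maybe" exists.
type Outcome = "deploy" | "compat-mode" | "partial-deploy" | "reject";

interface Evidence {
  repoClass: string;                 // from Stage 2
  artifact: string;                  // from Stages 3-4
  providerBackendArtifacts: boolean; // supabase/functions, firebase.json, ...
  officialNodeAdapter: boolean;      // an official Node adapter/preset exists
}

function decide(e: Evidence): Outcome {
  if (e.repoClass === "edge-runtime")
    return e.officialNodeAdapter ? "compat-mode" : "reject";
  if (e.artifact === "static-assets")
    return e.providerBackendArtifacts ? "partial-deploy" : "deploy";
  if (e.artifact === "node-server") return "deploy";
  return "reject"; // unrecognized output: reject with evidence
}
```

Because the function is total over the evidence type, every repo ends in exactly one bucket, and the evidence object doubles as the payload for the rejection message.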

Decision matrix

How the platform should behave by repo type

| Repo type | Strong signals | Recommended outcome |
| --- | --- | --- |
| Static Vite/React SPA | `vite.config.*`, `index.html`, `src/main.*`, build => `dist/`, no server or worker markers | Deploy directly with nginx static hosting + SPA fallback |
| Static app with Supabase/Firebase SDK only | `@supabase/supabase-js` or firebase SDK, `VITE_*` envs, no local functions runtime | Deploy directly, but require envs and warn about external backend readiness |
| Static frontend + provider-managed backend artifacts | `supabase/functions`, `supabase/migrations`, `firebase.json`, `firestore.rules`, Firebase functions | Deploy frontend only or mark backend as out-of-scope; never pretend those are PM2 apps |
| Conventional Node SSR / web server | `next start`, `nuxt start`, express/fastify/koa, `node server.js`, `.output/server` | Deploy with PM2 + nginx reverse proxy |
| Framework edge target that can retarget Node | Recognized framework + official Node adapter path + no hard Cloudflare-only dependencies | Offer compatibility mode by switching adapter/preset, then rebuild and deploy |
| True Cloudflare runtime app | `wrangler.toml` + Durable Objects / KV / R2 / D1 / Queues / service bindings / `waitUntil` semantics | Reject for VPS deployment and explain why |

Compatibility mode

Useful compatibility ideas

Adapter-switch compatibility

Best option when a framework officially supports both Cloudflare and Node. Example: change from a Cloudflare adapter/preset to a Node adapter/preset, rebuild, and run the resulting server output. Keep this explicit and auditable.
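Auditability means returning both the modified files and a change log. A sketch using Astro's official adapter pair (`@astrojs/cloudflare` to `@astrojs/node`) as the example; real tooling would also rewrite the framework config file and resolve a correct version for the new adapter, both elided here:

```typescript
// Sketch of an auditable adapter switch on package.json. The returned
// change log is shown to the user so the switch stays explicit.
interface AdapterSwap { from: string; to: string }

function switchAdapter(pkgJson: string, swap: AdapterSwap): { pkg: string; changes: string[] } {
  const pkg = JSON.parse(pkgJson);
  const changes: string[] = [];
  for (const field of ["dependencies", "devDependencies"] as const) {
    const deps = pkg[field];
    if (deps && swap.from in deps) {
      // Simplification: carrying the version range over is not generally
      // safe; real tooling resolves the new adapter's own version.
      deps[swap.to] = deps[swap.from];
      delete deps[swap.from];
      changes.push(`${field}: replaced ${swap.from} with ${swap.to}`);
    }
  }
  return { pkg: JSON.stringify(pkg, null, 2), changes };
}
```

Returning a diff-like change list rather than silently mutating the repo is what keeps this mode reversible and defensible when a rebuild fails.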

Static export fallback

If the app uses an SSR-capable framework but does not actually need SSR behavior, try a static export path and deploy the generated assets instead of forcing a server runtime.

Fetch-to-Node bridge

For simple worker-style apps that only expose a fetch handler and avoid Cloudflare-only services, a thin Node wrapper can sometimes translate HTTP requests into Web Request/Response handling. Use this sparingly. It is a compatibility mode, not a religion.
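The bridge idea can be made concrete in a few lines of Node. A minimal sketch, assuming the worker exports a plain `fetch(request)` handler; it ignores streaming request bodies, multi-value headers, and anything Cloudflare-specific, which is exactly why it only suits the simple cases described above:

```typescript
import { createServer, type IncomingMessage, type Server } from "node:http";

// Fetch-to-Node bridge sketch: wrap a worker-style fetch handler in a
// plain Node HTTP server so PM2 + nginx can manage it.
type FetchHandler = (req: Request) => Response | Promise<Response>;

function toWebRequest(req: IncomingMessage): Request {
  const url = `http://${req.headers.host ?? "localhost"}${req.url ?? "/"}`;
  return new Request(url, {
    method: req.method,
    // Simplification: Node headers may hold string[] values; a real
    // bridge would normalize them instead of casting.
    headers: req.headers as Record<string, string>,
  });
}

function serve(handler: FetchHandler, port: number): Server {
  return createServer(async (req, res) => {
    const webRes = await handler(toWebRequest(req));
    res.writeHead(webRes.status, Object.fromEntries(webRes.headers));
    res.end(Buffer.from(await webRes.arrayBuffer()));
  }).listen(port);
}
```

If the build probe ever sees the handler touch a binding or `waitUntil`, the bridge is off the table and the repo belongs in the rejection tier.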

Rejection discipline

When you should say no

  • The repo requires a provider runtime you are not hosting, such as Cloudflare Durable Objects, KV, R2, D1, or Queues.
  • The build succeeds but does not produce either static assets or a runnable Node server output.
  • The app depends on provider-managed backend functions, rules, migrations, or bindings that cannot be safely recreated on the user server.
  • The repo needs a framework adapter switch that is not official, not reversible, or too risky to automate.
  • The build probe cannot identify a deployable app package inside a monorepo with enough confidence.

Best practical recommendation

What to support first

1. Make static apps absolutely bulletproof. That is the highest-volume, highest-confidence Lovable case, and it already fits your nginx flow perfectly.

2. Add excellent support for Node SSR and conventional Node apps. This is the next biggest win. If the repo can be built into a normal Node process, PM2 + nginx remains the clean default.

3. Add bounded compatibility for official adapter switches. If a framework can move from a Cloudflare/edge target to a Node target using an official adapter or preset, automate that path carefully and show the user exactly what changed.

4. Reject true edge-platform apps cleanly. If the repo depends on Cloudflare platform primitives, do not pretend your prepared VPS is secretly Cloudflare in a trench coat. Reject it with evidence and a useful explanation.

Bottom line

The product should be honest, not magical

The right story is not “paste any repo and we deploy everything.” The right story is: paste your Lovable repo and server, and we will analyze it, deploy what is compatible, adapt what is safely adaptable, and reject what does not belong on a generic VPS.

That positioning is stronger, more truthful, and much easier to operate at scale than pretending every SSR or edge runtime can be force-fed into the same deployment path.
