Reliability
Stuff breaks 💔. EmbraceSQL tries to make this less painful.
End to End Retry
Interrupted network connections happen unpredictably. Particularly in containerized setups, servers are constantly starting and stopping, and load balancers are taking services in and out.
These interruptions can happen between client/web, web/api, and api/database. Retry logic is a well-known technique for masking transient errors -- but coding up all those retries, on both the client and the server, is a huge chore ⛏️.
EmbraceSQL provides a retries option that will automatically retry calls -- both client and server -- with exponential backoff.
You can set this once, for all calls, on the EmbraceSQLClient.
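Here is a rough sketch of how that might look -- the client construction and module path are assumptions, while the per-call retries option mirrors the test further down:

import { EmbraceSQLClient } from "./client"; // illustrative path to your generated client

// Assumed client-wide setting: every call retries transient failures
// with exponential backoff (the exact option shape may differ in your version).
const client = new EmbraceSQLClient({
  url: "http://localhost:8000/embracesql", // illustrative endpoint
  retries: 3,
});

// Retries can also be passed per call, as in the test below.
const reply = await client.Api.Procedures.Echo.call({ message: "Hello" }, { retries: 1 });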
Pooling
EmbraceSQL uses a connection pool, but by default leaves connections in the pool for only 30 seconds -- see the sketch after this list for the equivalent driver-level setting. This keeps 'torn' or 'dead' connections from clogging up the pool; these can result from:
- network transients
- database restarts, particularly in a serverless setup
- database providers (or DBAs 😏) that kill connections
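The idea is much like setting a short idle timeout on a plain node-postgres pool yourself -- a minimal sketch, assuming pg is the driver underneath (EmbraceSQL handles this for you):

import { Pool } from "pg";

// Evict idle connections quickly so a connection torn by a restart or a
// killed backend is not handed back out to callers.
const pool = new Pool({
  connectionString: "postgres://postgres:postgres@localhost:5432/marshalling",
  idleTimeoutMillis: 30_000, // drop pooled connections after 30 idle seconds
});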
Testing
Here is an example test from the source -- you can see EmbraceSQL live through even a pathological query that kills its own connection on the first attempt.
import { Database } from "../../src/marshalling";

describe("The database can", () => {
  let db: Database;
  beforeAll(async () => {
    db = await Database.connect(
      "postgres://postgres:postgres@localhost:5432/marshalling",
    );
  });
  afterAll(async () => {
    await db.disconnect();
  });
  beforeEach(() => {
    // no middleware, we'll be adding middleware
    db.clear();
  });
  it("recover from a killed connection", async () => {
    db.use(async (context, next) => {
      // simulate a really bad disconnect, some DBA type smote you 🗡️
      // we won't do this on any subsequent retries
      if (context.retry === 0) {
        await context.sql`SELECT pg_terminate_backend(pid) FROM (SELECT pg_backend_pid() pid)`;
      }
      return next();
    });
    const ret = await db.Api.Procedures.Echo.call(
      {
        message: "Hello",
      },
      {
        retries: 1,
      },
    );
    expect(ret).toBe("Hello");
  });
});
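The same db.use middleware hook the test uses to inject the failure also works for observing retries -- a small sketch built on the context.retry counter shown above:

// Log each retry attempt; context.retry is 0 on the first try and
// increments on every automatic retry.
db.use(async (context, next) => {
  if (context.retry > 0) {
    console.warn(`retry attempt ${context.retry}`);
  }
  return next();
});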