Annotate your queries; sqltgen emits idiomatic, type-safe database access code in your language of choice. No reflection. No runtime query building. Just your SQL, compiled to code.
docker run --rm -v $(pwd):/workspace sqltgen/sqltgen generate
Why sqltgen
A clean pipeline from annotated SQL to hand-quality generated code. One binary, no runtime dependencies, zero boilerplate.
Java, Kotlin, Rust, Go, Python, TypeScript, JavaScript, and more to come. Each backend emits idiomatic code that looks like a developer wrote it by hand — proper types, proper nullable handling, proper formatting.
PostgreSQL, SQLite, MySQL — with full DDL and query
parsing, migration directory support, and dialect-specific features like
RETURNING, ANY($1), and JSON_TABLE.
One small sqltgen binary, zero runtime
dependencies. Install once, run anywhere. No JVM, no Node, no Python
required to drive the generator.
The generated code uses only your language's standard database driver.
There is no sqltgen-runtime package, no framework in your
import path, no DSL, no reflection. It's just your code.
Use @param_name in your SQL instead of positional
$1 / ?1. sqltgen rewrites them to the correct
dialect syntax and infers types from the schema automatically.
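For example, given a hypothetical `users` table, a named parameter is rewritten to the target engine's positional placeholder (shown here for PostgreSQL; SQLite and MySQL would get `?1` / `?` instead):

```sql
-- What you write:
SELECT id, name FROM users WHERE id = @id;

-- What sqltgen sends to PostgreSQL:
SELECT id, name FROM users WHERE id = $1;
```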
Mark a param with -- @ids bigint[] not null and use it in
an IN (@ids) clause. sqltgen handles engine-specific
expansion: ANY($1) on PostgreSQL, json_each
on SQLite, JSON_TABLE on MySQL.
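To see why engine-specific expansion matters: SQLite has no array bind type, so a list parameter has to be passed as a JSON string and unpacked server-side. A minimal sketch of that technique using plain `sqlite3` (this illustrates the mechanism, not sqltgen's exact generated output; the `book` table is from the example below):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO book VALUES (?, ?)",
                 [(1, "Dune"), (2, "Hyperion"), (3, "Solaris")])

# No array bind type in SQLite: serialize the ids to JSON and unpack
# them server-side with the json_each table-valued function.
ids = [1, 3]
rows = conn.execute(
    "SELECT id, title FROM book "
    "WHERE id IN (SELECT value FROM json_each(?)) ORDER BY title",
    (json.dumps(ids),),
).fetchall()
print(rows)  # [(1, 'Dune'), (3, 'Solaris')]
```

On PostgreSQL the same query would instead bind the list directly via `ANY($1)`, with no serialization step.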
Philosophy
sqltgen is a dev-time tool, not a runtime library. The output is plain source files that belong to your project — readable, debuggable, and completely yours.
The generated code depends only on the standard database driver for
your language — JDBC, sqlx, psycopg3, pg, database/sql. There is no
sqltgen-runtime package. Your dependency tree stays clean.
If you stop using sqltgen tomorrow, your code still compiles and runs. There is no migration to perform, no framework to unwind. The generated files are just files. Compare that to removing an ORM.
Stack traces go through your code and your driver, not through a library you don't control. When something breaks, you're reading SQL and native driver calls — not ORM internals.
The generated files are idiomatic source code you can inspect, review in PRs, and even hand-edit if needed. They're not opaque binaries or hidden behind a framework's abstractions.
Upgrading sqltgen doesn't force a runtime migration. Re-run the generator, review the diff, commit. If you don't like the new output, don't regenerate — your existing code keeps working.
Generation is a single binary invocation with no infrastructure setup. No database service container, no network, no health-check polling. Just the binary and your SQL files.
See it in action
Annotate a query with a name and a command. sqltgen emits fully typed, ready-to-use functions in every target language you configure.
```sql
-- name: GetBook :one
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE id = @id;

-- name: GetBooksByIds :many
-- @ids bigint[] not null
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE id IN (@ids)
ORDER BY title;
```
```java
// Queries.java (generated)
public static Optional<Book> getBook(
    Connection conn, long id
) throws SQLException {
  try (PreparedStatement ps = conn.prepareStatement(SQL_GET_BOOK)) {
    ps.setLong(1, id);
    try (ResultSet rs = ps.executeQuery()) {
      if (!rs.next()) return Optional.empty();
      return Optional.of(new Book(
          rs.getLong(1), rs.getLong(2), rs.getString(3),
          rs.getString(4), rs.getBigDecimal(5),
          rs.getObject(6, LocalDate.class)));
    }
  }
}

public static List<Book> getBooksByIds(
    Connection conn, List<Long> ids
) throws SQLException {
  java.sql.Array arr = conn.createArrayOf("bigint", ids.toArray());
  try (PreparedStatement ps = conn.prepareStatement(SQL_GET_BOOKS_BY_IDS)) {
    ps.setArray(1, arr);
    List<Book> rows = new ArrayList<>();
    try (ResultSet rs = ps.executeQuery()) {
      while (rs.next()) rows.add(new Book(
          rs.getLong(1), rs.getLong(2), rs.getString(3),
          rs.getString(4), rs.getBigDecimal(5),
          rs.getObject(6, LocalDate.class)));
    }
    return rows;
  }
}
```
```kotlin
// Queries.kt (generated)
fun getBook(conn: Connection, id: Long): Book? {
    conn.prepareStatement(SQL_GET_BOOK).use { ps ->
        ps.setLong(1, id)
        ps.executeQuery().use { rs ->
            if (!rs.next()) return null
            return Book(
                rs.getLong(1), rs.getLong(2), rs.getString(3),
                rs.getString(4), rs.getBigDecimal(5),
                rs.getObject(6, LocalDate::class.java))
        }
    }
}

fun getBooksByIds(
    conn: Connection,
    ids: List<Long>
): List<Book> {
    val arr = conn.createArrayOf("bigint", ids.toTypedArray())
    conn.prepareStatement(SQL_GET_BOOKS_BY_IDS).use { ps ->
        ps.setArray(1, arr)
        val rows = mutableListOf<Book>()
        ps.executeQuery().use { rs ->
            while (rs.next()) rows.add(Book(
                rs.getLong(1), rs.getLong(2), rs.getString(3),
                rs.getString(4), rs.getBigDecimal(5),
                rs.getObject(6, LocalDate::class.java)))
        }
        return rows
    }
}
```
```rust
// queries.rs (generated)
pub async fn get_book(
    pool: &DbPool,
    id: i64,
) -> Result<Option<Book>, sqlx::Error> {
    let sql = r##"
        SELECT id, author_id, title, genre, price, published_at
        FROM book
        WHERE id = $1
    "##;
    sqlx::query_as::<_, Book>(sql)
        .bind(id)
        .fetch_optional(pool)
        .await
}

pub async fn get_books_by_ids(
    pool: &DbPool,
    ids: &[i64],
) -> Result<Vec<Book>, sqlx::Error> {
    let sql = r##"
        SELECT id, author_id, title, genre, price, published_at
        FROM book
        WHERE id = ANY($1)
        ORDER BY title
    "##;
    sqlx::query_as::<_, Book>(sql)
        .bind(ids)
        .fetch_all(pool)
        .await
}
```
```go
// queries.go (generated)
func GetBook(
	ctx context.Context, db *sql.DB, id int64,
) (*Book, error) {
	row := db.QueryRowContext(ctx, SQL_GET_BOOK, id)
	var r Book
	err := row.Scan(
		&r.Id, &r.AuthorId, &r.Title,
		&r.Genre, &r.Price, &r.PublishedAt)
	if err == sql.ErrNoRows {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return &r, nil
}

func GetBooksByIds(
	ctx context.Context, db *sql.DB, ids []int64,
) ([]Book, error) {
	rows, err := db.QueryContext(
		ctx, SQL_GET_BOOKS_BY_IDS, pq.Array(ids))
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var result []Book
	for rows.Next() {
		var r Book
		if err := rows.Scan(
			&r.Id, &r.AuthorId, &r.Title,
			&r.Genre, &r.Price, &r.PublishedAt); err != nil {
			return nil, err
		}
		result = append(result, r)
	}
	return result, rows.Err()
}
```
```python
# queries.py (generated)
def get_book(
    conn: Connection,
    id: int,
) -> Book | None:
    with execute(conn, SQL_GET_BOOK, (id,)) as cur:
        row = cur.fetchone()
        if row is None:
            return None
        return Book(*row)

def get_books_by_ids(
    conn: Connection,
    ids: list[int],
) -> list[Book]:
    with execute(conn, SQL_GET_BOOKS_BY_IDS, (ids,)) as cur:
        return [Book(*row) for row in cur.fetchall()]
```
```typescript
// queries.ts (generated)
export async function getBook(
  db: Db,
  id: number,
): Promise<Book | null> {
  const result = await db.query<Book>(SQL_GET_BOOK, [id]);
  return result.rows[0] ?? null;
}

export async function getBooksByIds(
  db: Db,
  ids: number[],
): Promise<Book[]> {
  const result = await db.query<Book>(
    SQL_GET_BOOKS_BY_IDS, [ids]);
  return result.rows;
}
```
Compatibility
Every language target supports all three SQL dialects. The same configuration file generates code for as many languages as you need.
| Language | PostgreSQL | SQLite | MySQL |
|---|---|---|---|
| Java (JDBC) | ✓ | ✓ | ✓ |
| Kotlin (JDBC) | ✓ | ✓ | ✓ |
| Rust (sqlx) | ✓ | ✓ | ✓ |
| Go (database/sql) | ✓ | ✓ | ✓ |
| Python (psycopg3 / sqlite3 / mysql-connector) | ✓ | ✓ | ✓ |
| TypeScript (pg / better-sqlite3 / mysql2) | ✓ | ✓ | ✓ |
| JavaScript (pg / better-sqlite3 / mysql2) | ✓ | ✓ | ✓ |
Get started
Three steps from an empty directory to fully typed database access code.
Run via Docker — no install required:
$ docker run --rm -v $(pwd):/workspace sqltgen/sqltgen generate
Or build from source with Cargo:
$ git clone https://github.com/sqltgen/sqltgen
$ cd sqltgen/sqltgen && cargo build --release
Add a -- name: QueryName :command comment above each query.
Use @param_name instead of positional placeholders.
```sql
-- name: GetUser :one
SELECT id, name, email FROM users WHERE id = @id;
```
Point sqltgen.json at your schema and queries, pick your
target languages, then run the generator.
```json
{
  "version": "1",
  "engine": "postgresql",
  "schema": "migrations/",
  "queries": "queries.sql",
  "gen": {
    "rust": { "out": "src/db" }
  }
}
```
$ sqltgen generate --config sqltgen.json