Knex.js is a "batteries included" SQL query builder for Postgres, MSSQL, MySQL, MariaDB, SQLite3, and Oracle designed to be flexible, portable, and fun to use. It features both traditional node style callbacks as well as a promise interface for cleaner async flow control, a stream interface, full featured query and schema builders, transaction support (with savepoints), connection pooling and standardized responses between different query clients and dialects.
The project is hosted on GitHub, and has a comprehensive test suite.
Knex is available for use under the MIT software license.
You can report bugs and discuss features on the GitHub issues page, add pages to the wiki or send tweets to @tgriesser.
Thanks to all of the great contributions to the project.
Special thanks to Taylor Otwell and his work on the Laravel Query Builder, from which much of the builder's code and syntax was originally derived.
generic-pool was upgraded to v3. If you have specified idleTimeoutMillis or softIdleTimeoutMillis in the pool config, you will need to add evictionRunIntervalMillis: 1000 when upgrading to 0.14. See the original issue #2322 for details.
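For example, a 0.14 pool config that keeps an idle timeout might look like the following sketch (the connection string and timeout values are placeholders):

```javascript
// Sketch of a 0.14+ pool config using generic-pool v3 options.
var knex = require('knex')({
  client: 'pg',
  connection: process.env.PG_CONNECTION_STRING,
  pool: {
    idleTimeoutMillis: 30000,
    // required alongside idle timeouts when upgrading to 0.14
    evictionRunIntervalMillis: 1000
  }
});
```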
Knex can be used as an SQL query builder in both Node.js and the browser, limited to WebSQL's constraints (like the inability to drop tables or read schemas). Composing SQL queries in the browser for execution on the server is highly discouraged, as this can be the cause of serious security vulnerabilities. The browser builds outside of WebSQL are primarily for learning purposes - for example, you can pop open the console and build queries on this page using the knex object.
The primary target environment for Knex is Node.js. You will need to install the knex library, and then install the appropriate database library: pg for PostgreSQL, mysql for MySQL or MariaDB, sqlite3 for SQLite3, or mssql for MSSQL.
$ npm install knex --save
# Then add one of the following (adding a --save flag):
$ npm install pg
$ npm install sqlite3
$ npm install mysql
$ npm install mysql2
$ npm install mariasql
$ npm install strong-oracle
$ npm install oracle
$ npm install mssql
Knex can be built using a JavaScript build tool such as browserify or webpack. In fact, this documentation uses a webpack build which includes knex. View source on this page to see the browser build in action (the global knex variable).
The knex module is itself a function which takes a configuration object for Knex, accepting a few parameters. The client parameter is required and determines which client adapter will be used with the library.
var knex = require('knex')({
client: 'mysql',
connection: {
host : '127.0.0.1',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
}
});
The connection options are passed directly to the appropriate database client to create the connection, and may be either an object, or a connection string:
Note: Knex's PostgreSQL client allows you to set the initial search path for each connection automatically using an additional option "searchPath" as shown below.
var pg = require('knex')({
client: 'pg',
connection: process.env.PG_CONNECTION_STRING,
searchPath: ['knex', 'public'],
});
Note: When you use the SQLite3 adapter, a filename is required instead of a network connection. For example:
var knex = require('knex')({
client: 'sqlite3',
connection: {
filename: "./mydb.sqlite"
}
});
Note: The database version can be added in the knex configuration when you use the PostgreSQL adapter to connect to a non-standard database.
var knex = require('knex')({
client: 'pg',
version: '7.2',
connection: {
host : '127.0.0.1',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
}
});
You can also connect via a unix domain socket, which will cause the host and port to be ignored.
var knex = require('knex')({
client: 'mysql',
connection: {
socketPath : '/path/to/socket.sock',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
}
});
Initializing the library should normally only ever happen once in your application, as it creates a connection pool for the current database. You should use the instance returned from the initialize call throughout your library.
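A common way to follow this advice is to initialize Knex in a single module and export the instance (the db.js filename here is purely illustrative):

```javascript
// db.js - initialize once and export the shared instance
var knex = require('knex')({
  client: 'mysql',
  connection: {
    host     : '127.0.0.1',
    user     : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
});

module.exports = knex;

// elsewhere in the app, reuse the same pool:
// var knex = require('./db');
```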
Specify the client for the particular flavour of SQL you are interested in.
var pg = require('knex')({client: 'pg'});
knex('table').insert({a: 'b'}).returning('*').toString();
// "insert into "table" ("a") values ('b')"
pg('table').insert({a: 'b'}).returning('*').toString();
// "insert into "table" ("a") values ('b') returning *"
Passing a debug: true flag on your initialization object will turn on debugging for all queries.
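For example (a minimal sketch; the connection details are placeholders):

```javascript
var knex = require('knex')({
  client: 'mysql',
  connection: {
    host     : '127.0.0.1',
    user     : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  },
  debug: true // logs executed queries to the console
});
```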
The client created by the configuration initializes a connection pool, using the generic-pool library. This connection pool has a default setting of min: 2, max: 10 for the MySQL and PG libraries, and a single connection for sqlite3 (due to issues with utilizing multiple connections on a single file). To change the config settings for the pool, pass a pool option as one of the keys in the initialize block. Check out the generic-pool library for more information.
var knex = require('knex')({
client: 'mysql',
connection: {
host : '127.0.0.1',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
},
pool: { min: 0, max: 7 }
});
If you ever need to explicitly tear down the connection pool, you may use knex.destroy([callback]). You may use knex.destroy by passing a callback, or by chaining as a promise, just not both.
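Both usages might look like the following sketch (pick one form; passing a callback and chaining a promise together is not supported):

```javascript
// As a promise:
knex.destroy().then(function() {
  console.log('connection pool destroyed');
});

// Or with a callback:
knex.destroy(function() {
  console.log('connection pool destroyed');
});
```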
The afterCreate callback (rawDriverConnection, done) is called when the pool acquires a new connection from the database server. The done(err, connection) callback must be called for knex to be able to decide whether the connection is ok or whether it should be discarded right away from the pool.
var knex = require('knex')({
client: 'pg',
connection: {...},
pool: {
afterCreate: function (conn, done) {
// in this example we use pg driver's connection API
conn.query('SET timezone="UTC";', function (err) {
if (err) {
// first query failed, return error and don't try to make next query
done(err, conn);
} else {
// do the second query...
conn.query('SELECT set_limit(0.01);', function (err) {
// if err is not falsy, connection is discarded from pool
        // if err is not falsy, connection is discarded from pool
        // if connection acquire was triggered by a query the error is passed to query promise
done(err, conn);
});
}
});
}
}
});
acquireConnectionTimeout defaults to 60000ms and is used to determine how long knex should wait before throwing a timeout error when acquiring a connection is not possible. The most common cause for this is using up all of the pool's connections for transactions and then attempting to run queries outside of transactions while the pool is still full. The error thrown will provide information on the query the connection was for, to simplify the job of locating the culprit.
var knex = require('knex')({
client: 'pg',
connection: {...},
pool: {...},
acquireConnectionTimeout: 10000
});
fetchAsString
Utilized by Oracledb. An array of types. The valid types are 'DATE', 'NUMBER' and 'CLOB'. When any column having one of the specified types is queried, the column data is returned as a string instead of the default representation.
var knex = require('knex')({
client: 'oracledb',
connection: {...},
fetchAsString: [ 'number', 'clob' ]
});
For convenience, any migration configuration may be specified when initializing the library. Read the Migrations section for more information and a full list of configuration options.
var knex = require('knex')({
client: 'mysql',
connection: {
host : '127.0.0.1',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
},
migrations: {
tableName: 'migrations'
}
});
Hook for modifying returned rows before passing them on to the user. For example, one can use this hook to do a snake_case -> camelCase conversion of the returned columns.
var knex = require('knex')({
client: 'mysql',
// overly simplified snake_case -> camelCase converter
postProcessResponse: (result) => {
// TODO: add special case for raw results (depends on dialect)
if (Array.isArray(result)) {
return result.map(row => convertToCamel(row));
} else {
return convertToCamel(result);
}
}
});
Knex supports transforming identifier names automatically into quoted versions for each dialect. For example, 'Table.columnName as foo' for PostgreSQL is converted to "Table"."columnName" as "foo". With wrapIdentifier one may override the way identifiers are transformed. It can be used to override the default functionality and, for example, to help with camelCase -> snake_case conversion.
The conversion function wrapIdentifier(value, dialectImpl): string gets each part of the identifier as a single value, and the second parameter is the original conversion function from the dialect implementation. For example, knex('table').withSchema('foo').select('table.field as otherName').where('id', 1) will call the wrapIdentifier converter for the following values: 'table', 'foo', 'table', 'field', 'otherName' and 'id'.
var knex = require('knex')({
client: 'mysql',
// overly simplified camelCase -> snake_case converter
wrapIdentifier: (value, origImpl) => origImpl(convertToSnakeCase(value))
});
The heart of the library, the knex query builder is the interface used for building and executing standard SQL queries, such as select, insert, update and delete.
In many places in the API, identifiers like a table name or column name can be passed to methods.
Most commonly one needs just a plain tableName.columnName, tableName or columnName, but in many cases one also needs to pass an alias by which that identifier is referred to later in the query.
There are two ways to declare an alias for an identifier. One can directly give an as aliasName suffix for the identifier, or one can pass an object { aliasName: 'identifierName' }.
If the object has multiple aliases { alias1: 'identifier1', alias2: 'identifier2' }, then all the aliased identifiers are expanded to a comma-separated list.
NOTE: the identifier syntax has no place for selecting a schema, so if you are doing schemaName.tableName, the query might be rendered incorrectly. Use .withSchema('schemaName') instead.
knex({ a: 'table', b: 'table' })
.select({
aTitle: 'a.title',
bTitle: 'b.title'
})
.whereRaw('?? = ??', ['a.column_1', 'b.column_2'])
Outputs:
select `a`.`title` as `aTitle`, `b`.`title` as `bTitle` from `table` as `a`, `table` as `b` where `a`.`column_1` = `b`.`column_2`
knex(tableName, options={only: boolean}) / knex.[methodName]
The query builder starts off either by specifying a tableName you wish to query against, or by calling any method directly on the knex object. This kicks off a jQuery-like chain, with which you can call additional query builder methods as needed to construct the query, eventually calling any of the interface methods, either to convert to a string, or to execute the query with a promise, callback, or stream. Optional second argument for passing options: only: if true, the ONLY keyword is used before the tableName to discard inheriting tables' data. NOTE: only supported in PostgreSQL for now.
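For example, on PostgreSQL the only option can be passed like this (inherited_table is a hypothetical table name):

```javascript
// PostgreSQL only: query just the named table, ignoring rows from
// any tables that inherit from it
knex('inherited_table', { only: true }).select('*')
```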
.timeout(ms, options={cancel: boolean})
Sets a timeout for the query and will throw a TimeoutError if the timeout is exceeded. The error contains information about the query, bindings, and the timeout that was set. Useful for complex queries that you want to make sure are not taking too long to execute. Optional second argument for passing options: cancel: if true, cancel the query if the timeout is reached. NOTE: only supported in MySQL and MariaDB for now.
knex.select().from('books').timeout(1000)
Outputs:
select * from `books`
knex.select().from('books').timeout(1000, {cancel: true}) // MySQL and MariaDB only
Outputs:
select * from `books`
.select([*columns])
Creates a select query, taking an optional array of columns for the query, eventually defaulting to * if none are specified when the query is built. The response of a select call will resolve with an array of objects selected from the database.
knex.select('title', 'author', 'year').from('books')
Outputs:
select `title`, `author`, `year` from `books`
knex.select().table('books')
Outputs:
select * from `books`
.as(name)
Allows for aliasing a subquery, taking the string you wish to name the current query. If the query is not a sub-query, it will be ignored.
knex.avg('sum_column1').from(function() {
this.sum('column1 as sum_column1').from('t1').groupBy('column1').as('t1')
}).as('ignored_alias')
Outputs:
select avg(`sum_column1`) from (select sum(`column1`) as `sum_column1` from `t1` group by `column1`) as `t1`
.column(columns)
Specifically set the columns to be selected on a select query, taking an array, an object or a list of column names. Passing an object will automatically alias the columns with the given keys.
knex.column('title', 'author', 'year').select().from('books')
Outputs:
select `title`, `author`, `year` from `books`
knex.column(['title', 'author', 'year']).select().from('books')
Outputs:
select `title`, `author`, `year` from `books`
knex.column('title', {by: 'author'}, 'year').select().from('books')
Outputs:
select `title`, `author` as `by`, `year` from `books`
.from([tableName], options={only: boolean})
Specifies the table used in the current query, replacing the current table name if one has already been specified. This is typically used in the sub-queries performed in the advanced where or union methods. Optional second argument for passing options: only: if true, the ONLY keyword is used before the tableName to discard inheriting tables' data. NOTE: only supported in PostgreSQL for now.
knex.select('*').from('users')
Outputs:
select * from `users`
.with(alias, function|raw)
Add a "with" clause to the query. "With" clauses are supported by PostgreSQL, Oracle, SQLite3 and MSSQL.
knex.with('with_alias', knex.raw('select * from "books" where "author" = ?', 'Test')).select('*').from('with_alias')
Outputs:
with `with_alias` as (select * from "books" where "author" = 'Test') select * from `with_alias`
knex.with('with_alias', (qb) => {
qb.select('*').from('books').where('author', 'Test')
}).select('*').from('with_alias')
Outputs:
with `with_alias` as (select * from `books` where `author` = 'Test') select * from `with_alias`
.withSchema([schemaName])
Specifies the schema to be used as prefix of table name.
knex.withSchema('public').select('*').from('users')
Outputs:
select * from `public`.`users`
Several methods exist to assist in dynamic where clauses. In many places functions may be used in place of values, constructing subqueries. In most places existing knex queries may be used to compose sub-queries, etc. Take a look at a few of the examples for each method for instruction on use:
Important: Supplying knex with an undefined value to any of the where functions will cause knex to throw an error during SQL compilation. This is for both your sake and ours. Knex cannot know what to do with undefined values in a where clause, and generally it would be a programmatic error to supply one to begin with. The error will throw a message containing the type of query and the compiled query-string. Example:
knex('accounts')
.where('login', undefined)
.select()
.toSQL()
Error:
Undefined binding(s) detected when compiling SELECT query: select * from `accounts` where `login` = ?
.where(~mixed~)
Object Syntax:
knex('users').where({
first_name: 'Test',
last_name: 'User'
}).select('id')
Outputs:
select `id` from `users` where `first_name` = 'Test' and `last_name` = 'User'
Key, Value:
knex('users').where('id', 1)
Outputs:
select * from `users` where `id` = 1
Grouped Chain:
knex('users').where(function() {
this.where('id', 1).orWhere('id', '>', 10)
}).orWhere({name: 'Tester'})
Outputs:
select * from `users` where (`id` = 1 or `id` > 10) or (`name` = 'Tester')
Operator:
knex('users').where('columnName', 'like', '%rowlikeme%')
Outputs:
select * from `users` where `columnName` like '%rowlikeme%'
The above query demonstrates the common use case of returning all users for which a specific pattern appears within a designated column.
knex('users').where('votes', '>', 100)
Outputs:
select * from `users` where `votes` > 100
var subquery = knex('users').where('votes', '>', 100).andWhere('status', 'active').orWhere('name', 'John').select('id');
knex('accounts').where('id', 'in', subquery)
Outputs:
select * from `accounts` where `id` in (select `id` from `users` where `votes` > 100 and `status` = 'active' or `name` = 'John')
.orWhere with an object automatically wraps the statement and creates an or (and - and - and) clause
knex('users').where('id', 1).orWhere({votes: 100, user: 'knex'})
Outputs:
select * from `users` where `id` = 1 or (`votes` = 100 and `user` = 'knex')
.whereNot(~mixed~)
Object Syntax:
knex('users').whereNot({
first_name: 'Test',
last_name: 'User'
}).select('id')
Outputs:
select `id` from `users` where not `first_name` = 'Test' and not `last_name` = 'User'
Key, Value:
knex('users').whereNot('id', 1)
Outputs:
select * from `users` where not `id` = 1
Grouped Chain:
knex('users').whereNot(function() {
this.where('id', 1).orWhereNot('id', '>', 10)
}).orWhereNot({name: 'Tester'})
Outputs:
select * from `users` where not (`id` = 1 or not `id` > 10) or not `name` = 'Tester'
Operator:
knex('users').whereNot('votes', '>', 100)
Outputs:
select * from `users` where not `votes` > 100
CAVEAT: WhereNot is not suitable for "in" and "between" type subqueries. You should use "not in" and "not between" instead.
var subquery = knex('users')
.whereNot('votes', '>', 100)
.andWhere('status', 'active')
.orWhere('name', 'John')
.select('id');
knex('accounts').where('id', 'not in', subquery)
Outputs:
select * from `accounts` where `id` not in (select `id` from `users` where not `votes` > 100 and `status` = 'active' or `name` = 'John')
.whereIn(column, array|callback|builder) / .orWhereIn
Shorthand for .where('id', 'in', obj), the .whereIn and .orWhereIn methods add a "where in" clause to the query.
knex.select('name').from('users')
.whereIn('id', [1, 2, 3])
.orWhereIn('id', [4, 5, 6])
Outputs:
select `name` from `users` where `id` in (1, 2, 3) or `id` in (4, 5, 6)
knex.select('name').from('users')
.whereIn('account_id', function() {
this.select('id').from('accounts');
})
Outputs:
select `name` from `users` where `account_id` in (select `id` from `accounts`)
var subquery = knex.select('id').from('accounts');
knex.select('name').from('users')
.whereIn('account_id', subquery)
Outputs:
select `name` from `users` where `account_id` in (select `id` from `accounts`)
.whereNotIn(column, array|callback|builder) / .orWhereNotIn
knex('users').whereNotIn('id', [1, 2, 3])
Outputs:
select * from `users` where `id` not in (1, 2, 3)
knex('users').where('name', 'like', '%Test%').orWhereNotIn('id', [1, 2, 3])
Outputs:
select * from `users` where `name` like '%Test%' or `id` not in (1, 2, 3)
.whereNull(column) / .orWhereNull
knex('users').whereNull('updated_at')
Outputs:
select * from `users` where `updated_at` is null
.whereNotNull(column) / .orWhereNotNull
knex('users').whereNotNull('created_at')
Outputs:
select * from `users` where `created_at` is not null
.whereExists(builder | callback) / .orWhereExists
knex('users').whereExists(function() {
this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` where exists (select * from `accounts` where users.account_id = accounts.id)
knex('users').whereExists(knex.select('*').from('accounts').whereRaw('users.account_id = accounts.id'))
Outputs:
select * from `users` where exists (select * from `accounts` where users.account_id = accounts.id)
.whereNotExists(builder | callback) / .orWhereNotExists
knex('users').whereNotExists(function() {
this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` where not exists (select * from `accounts` where users.account_id = accounts.id)
knex('users').whereNotExists(knex.select('*').from('accounts').whereRaw('users.account_id = accounts.id'))
Outputs:
select * from `users` where not exists (select * from `accounts` where users.account_id = accounts.id)
.whereBetween(column, range) / .orWhereBetween
knex('users').whereBetween('votes', [1, 100])
Outputs:
select * from `users` where `votes` between 1 and 100
.whereNotBetween(column, range) / .orWhereNotBetween
knex('users').whereNotBetween('votes', [1, 100])
Outputs:
select * from `users` where `votes` not between 1 and 100
.whereRaw(query, [bindings])
Convenience helper for .where(knex.raw(query)).
knex('users').whereRaw('id = ?', [1])
Outputs:
select * from `users` where id = 1
Several methods are provided which assist in building joins.
.join(table, first, [operator], second)
The join builder can be used to specify joins between tables, with the first argument being the joining table, the next three arguments being the first join column, the join operator and the second join column, respectively.
knex('users')
.join('contacts', 'users.id', '=', 'contacts.user_id')
.select('users.id', 'contacts.phone')
Outputs:
select `users`.`id`, `contacts`.`phone` from `users` inner join `contacts` on `users`.`id` = `contacts`.`user_id`
knex('users')
.join('contacts', 'users.id', 'contacts.user_id')
.select('users.id', 'contacts.phone')
Outputs:
select `users`.`id`, `contacts`.`phone` from `users` inner join `contacts` on `users`.`id` = `contacts`.`user_id`
For grouped joins, specify a function as the second argument for the join query, and use on with orOn or andOn to create joins that are grouped with parentheses.
knex.select('*').from('users').join('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` inner join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
For nested join statements, specify a function as the first argument of on, orOn or andOn.
knex.select('*').from('users').join('accounts', function() {
this.on(function() {
this.on('accounts.id', '=', 'users.account_id')
this.orOn('accounts.owner_id', '=', 'users.id')
})
})
Outputs:
select * from `users` inner join `accounts` on (`accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`)
It is also possible to use an object to represent the join syntax.
knex.select('*').from('users').join('accounts', {'accounts.id': 'users.account_id'})
Outputs:
select * from `users` inner join `accounts` on `accounts`.`id` = `users`.`account_id`
If you need to use a literal value (string, number, or boolean) in a join instead of a column, use knex.raw.
knex.select('*').from('users').join('accounts', 'accounts.type', knex.raw('?', ['admin']))
Outputs:
select * from `users` inner join `accounts` on `accounts`.`type` = 'admin'
.innerJoin(table, ~mixed~)
knex.from('users').innerJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` inner join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.table('users').innerJoin('accounts', 'users.id', '=', 'accounts.user_id')
Outputs:
select * from `users` inner join `accounts` on `users`.`id` = `accounts`.`user_id`
knex('users').innerJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` inner join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.leftJoin(table, ~mixed~)
knex.select('*').from('users').leftJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` left join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').leftJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` left join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.leftOuterJoin(table, ~mixed~)
knex.select('*').from('users').leftOuterJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` left outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').leftOuterJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` left outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.rightJoin(table, ~mixed~)
knex.select('*').from('users').rightJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` right join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').rightJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` right join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.rightOuterJoin(table, ~mixed~)
knex.select('*').from('users').rightOuterJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` right outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').rightOuterJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` right outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.outerJoin(table, ~mixed~)
knex.select('*').from('users').outerJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').outerJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.fullOuterJoin(table, ~mixed~)
knex.select('*').from('users').fullOuterJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` full outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').fullOuterJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` full outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.crossJoin(table, ~mixed~)
Cross join conditions are only supported in MySQL and SQLite3. For join conditions, use innerJoin instead.
knex.select('*').from('users').crossJoin('accounts')
Outputs:
select * from `users` cross join `accounts`
knex.select('*').from('users').crossJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` cross join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').crossJoin('accounts', function() {
this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` cross join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
.joinRaw(sql, [bindings])
knex.select('*').from('accounts').joinRaw('natural full join table1').where('id', 1)
Outputs:
select * from `accounts` natural full join table1 where `id` = 1
knex.select('*').from('accounts').join(knex.raw('natural full join table1')).where('id', 1)
Outputs:
select * from `accounts` inner join natural full join table1 where `id` = 1
.onIn(column, values)
Adds an onIn clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onIn('contacts.id', [7, 15, 23, 41])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` in (7, 15, 23, 41)
.onNotIn(column, values)
Adds an onNotIn clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onNotIn('contacts.id', [7, 15, 23, 41])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` not in (7, 15, 23, 41)
.onNull(column)
Adds an onNull clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onNull('contacts.email')
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`email` is null
.onNotNull(column)
Adds an onNotNull clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onNotNull('contacts.email')
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`email` is not null
.onExists(builder | callback)
Adds an onExists clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onExists(function() {
this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and exists (select * from `accounts` where users.account_id = accounts.id)
.onNotExists(builder | callback)
Adds an onNotExists clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onNotExists(function() {
this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and not exists (select * from `accounts` where users.account_id = accounts.id)
.onBetween(column, range)
Adds an onBetween clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onBetween('contacts.id', [5, 30])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` between 5 and 30
.onNotBetween(column, range)
Adds an onNotBetween clause to the query.
knex.select('*').from('users').join('contacts', function() {
this.on('users.id', '=', 'contacts.id').onNotBetween('contacts.id', [5, 30])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` not between 5 and 30
.clearSelect()
Clears all select clauses from the query, excluding subqueries.
knex.select('email', 'name').from('users').clearSelect()
Outputs:
select * from `users`
.clearWhere()
Clears all where clauses from the query, excluding subqueries.
knex.select('email', 'name').from('users').where('id', 1).clearWhere()
Outputs:
select `email`, `name` from `users`
.distinct()
Sets a distinct clause on the query.
// select distinct `first_name`, `last_name` from customers
knex('customers')
.distinct('first_name', 'last_name')
.select()
Outputs:
select distinct `first_name`, `last_name` from `customers`
.groupBy(*names)
Adds a group by clause to the query.
knex('users').groupBy('count')
Outputs:
select * from `users` group by `count`
.groupByRaw(sql)
Adds a raw group by clause to the query.
knex.select('year', knex.raw('SUM(profit)')).from('sales').groupByRaw('year WITH ROLLUP')
Outputs:
select `year`, SUM(profit) from `sales` group by year WITH ROLLUP
.orderBy(column, [direction])
Adds an order by clause to the query.
knex('users').orderBy('name', 'desc')
Outputs:
select * from `users` order by `name` desc
.orderByRaw(sql)
Adds an order by raw clause to the query.
knex.select('*').from('table').orderByRaw('col DESC NULLS LAST')
Outputs:
select * from `table` order by col DESC NULLS LAST
.having(column, operator, value)
Adds a having clause to the query.
knex('users')
.groupBy('count')
.orderBy('name', 'desc')
.having('count', '>', 100)
Outputs:
select * from `users` group by `count` having `count` > 100 order by `name` desc
.havingIn(column, values)
Adds a havingIn clause to the query.
knex.select('*').from('users').havingIn('id', [5, 3, 10, 17])
Outputs:
select * from `users` having `id` in (5, 3, 10, 17)
.havingNotIn(column, values)
Adds a havingNotIn clause to the query.
knex.select('*').from('users').havingNotIn('id', [5, 3, 10, 17])
Outputs:
select * from `users` having `id` not in (5, 3, 10, 17)
.havingNull(column)
Adds a havingNull clause to the query.
knex.select('*').from('users').havingNull('email')
Outputs:
select * from `users` having `email` is null
.havingNotNull(column)
Adds a havingNotNull clause to the query.
knex.select('*').from('users').havingNotNull('email')
Outputs:
select * from `users` having `email` is not null
.havingExists(builder | callback)
Adds a havingExists clause to the query.
knex.select('*').from('users').havingExists(function() {
this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` having exists (select * from `accounts` where users.account_id = accounts.id)
.havingNotExists(builder | callback)
Adds a havingNotExists clause to the query.
knex.select('*').from('users').havingNotExists(function() {
this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` having not exists (select * from `accounts` where users.account_id = accounts.id)
.havingBetween(column, range)
Adds a havingBetween clause to the query.
knex.select('*').from('users').havingBetween('id', [5, 10])
Outputs:
select * from `users` having `id` between 5 and 10
.havingNotBetween(column, range)
Adds a havingNotBetween clause to the query.
knex.select('*').from('users').havingNotBetween('id', [5, 10])
Outputs:
select * from `users` having `id` not between 5 and 10
.havingRaw(sql, [bindings])
Adds a havingRaw clause to the query.
knex('users')
.groupBy('count')
.orderBy('name', 'desc')
.havingRaw('count > ?', [100])
Outputs:
select * from `users` group by `count` having count > 100 order by `name` desc
.offset(value)
Adds an offset clause to the query.
knex.select('*').from('users').offset(10)
Outputs:
select * from `users` limit 18446744073709551615 offset 10
.limit(value)
Adds a limit clause to the query.
knex.select('*').from('users').limit(10).offset(30)
Outputs:
select * from `users` limit 10 offset 30
.union([*queries], [wrap])
Creates a union query, taking an array or a list of callbacks to build the union statement, with an optional boolean wrap parameter. When wrap is true, the queries are individually wrapped in parentheses.
knex.select('*').from('users').whereNull('last_name').union(function() {
this.select('*').from('users').whereNull('first_name');
})
Outputs:
select * from `users` where `last_name` is null union select * from `users` where `first_name` is null
.unionAll(query)
Creates a union all query, with the same method signature as the union method.
knex.select('*').from('users').whereNull('last_name').unionAll(function() {
this.select('*').from('users').whereNull('first_name');
})
Outputs:
select * from `users` where `last_name` is null union all select * from `users` where `first_name` is null
.insert(data, [returning])
Creates an insert query, taking either a hash of properties to be inserted into the row, or an array of inserts to be executed as a single insert command. Resolves the promise / fulfills the callback with an array containing the first insert id of the inserted model, or an array containing all inserted ids for PostgreSQL.
// Returns [1] in "mysql", "sqlite", "oracle"; [] in "postgresql" unless the 'returning' parameter is set.
knex('books').insert({title: 'Slaughterhouse Five'})
Outputs:
insert into `books` (`title`) values ('Slaughterhouse Five')
// Normalizes for empty keys on multi-row insert:
knex('coords').insert([{x: 20}, {y: 30}, {x: 10, y: 20}])
Outputs:
insert into `coords` (`x`, `y`) values (20, DEFAULT), (DEFAULT, 30), (10, 20)
// Returns [2] in "mysql", "sqlite"; [2, 3] in "postgresql"
knex.insert([{title: 'Great Gatsby'}, {title: 'Fahrenheit 451'}], 'id').into('books')
Outputs:
insert into `books` (`title`) values ('Great Gatsby'), ('Fahrenheit 451')
If one prefers that undefined keys are replaced with NULL instead of DEFAULT, one may set the useNullAsDefault configuration parameter in the knex config.
var knex = require('knex')({
client: 'mysql',
connection: {
host : '127.0.0.1',
user : 'your_database_user',
password : 'your_database_password',
database : 'myapp_test'
},
useNullAsDefault: true
});
knex('coords').insert([{x: 20}, {y: 30}, {x: 10, y: 20}])
// insert into `coords` (`x`, `y`) values (20, NULL), (NULL, 30), (10, 20)"
.returning(column) / .returning([column1, column2, ...])
Utilized by PostgreSQL, MSSQL, and Oracle databases, the returning method specifies which column should be returned by the insert and update methods. The column parameter may be a string or an array of strings. When passed a string, the SQL result is reported as an array of values from the specified column. When passed an array of strings, the result is reported as an array of objects, each containing a single property for each of the specified columns.
// Returns [1]
knex('books')
.returning('id')
.insert({title: 'Slaughterhouse Five'})
Outputs:
insert into `books` (`title`) values ('Slaughterhouse Five')
// Returns [2] in "mysql", "sqlite"; [2, 3] in "postgresql"
knex('books')
.returning('id')
.insert([{title: 'Great Gatsby'}, {title: 'Fahrenheit 451'}])
Outputs:
insert into `books` (`title`) values ('Great Gatsby'), ('Fahrenheit 451')
// Returns [ { id: 1, title: 'Slaughterhouse Five' } ]
knex('books')
.returning(['id','title'])
.insert({title: 'Slaughterhouse Five'})
Outputs:
insert into `books` (`title`) values ('Slaughterhouse Five')
.update(data, [returning]) / .update(key, value, [returning])
Creates an update query, taking a hash of properties or a key/value pair to be updated based on the other query constraints. Resolves the promise / fulfills the callback with the number of affected rows for the query. If a key to be updated has value undefined it is ignored.
knex('books')
.where('published_date', '<', 2000)
.update({
status: 'archived',
thisKeyIsSkipped: undefined
})
Outputs:
update `books` set `status` = 'archived' where `published_date` < 2000
// Returns [1] in "mysql", "sqlite", "oracle"; [] in "postgresql" unless the 'returning' parameter is set.
knex('books').update('title', 'Slaughterhouse Five')
Outputs:
update `books` set `title` = 'Slaughterhouse Five'
.del()
Aliased to del as delete is a reserved word in JavaScript, this method deletes one or more rows, based on other conditions specified in the query. Resolves the promise / fulfills the callback with the number of affected rows for the query.
knex('accounts')
.where('activated', false)
.del()
Outputs:
delete from `accounts` where `activated` = false
.transacting(transactionObj)
Used by knex.transaction, the transacting method may be chained to any query, passing it the transaction object in which you wish the query to participate.
var Promise = require('bluebird');
knex.transaction(function(trx) {
knex('books').transacting(trx).insert({name: 'Old Books'})
.then(function(resp) {
var id = resp[0];
return someExternalMethod(id, trx);
})
.then(trx.commit)
.catch(trx.rollback);
})
.then(function(resp) {
console.log('Transaction complete.');
})
.catch(function(err) {
console.error(err);
});
.transacting(t).forUpdate()
Dynamically added after a transaction is specified, forUpdate adds a FOR UPDATE clause to a select statement in PostgreSQL and MySQL.
knex('tableName')
.transacting(trx)
.forUpdate()
.select('*')
Outputs:
select * from `tableName` for update
.count(column|raw)
Performs a count on the specified column. Also accepts raw expressions. Note that in Postgres, count returns a bigint type, which will be a String and not a Number.
knex('users').count('active')
Outputs:
select count(`active`) from `users`
knex('users').count('active as a')
Outputs:
select count(`active`) as `a` from `users`
knex('users').count(knex.raw('??', ['active']))
Outputs:
select count(`active`) from `users`
Use countDistinct to add a distinct expression inside the aggregate function.
knex('users').countDistinct('active')
Outputs:
select count(distinct `active`) from `users`
.min(column|raw)
Gets the minimum value for the specified column. Also accepts raw expressions.
knex('users').min('age')
Outputs:
select min(`age`) from `users`
knex('users').min('age as a')
Outputs:
select min(`age`) as `a` from `users`
knex('users').min(knex.raw('??', ['age']))
Outputs:
select min(`age`) from `users`
.max(column|raw)
Gets the maximum value for the specified column. Also accepts raw expressions.
knex('users').max('age')
Outputs:
select max(`age`) from `users`
knex('users').max('age as a')
Outputs:
select max(`age`) as `a` from `users`
knex('users').max(knex.raw('??', ['age']))
Outputs:
select max(`age`) from `users`
.sum(column|raw)
Retrieve the sum of the values of a given column. Also accepts raw expressions.
knex('users').sum('products')
Outputs:
select sum(`products`) from `users`
knex('users').sum('products as p')
Outputs:
select sum(`products`) as `p` from `users`
knex('users').sum(knex.raw('??', ['products']))
Outputs:
select sum(`products`) from `users`
Use sumDistinct to add a distinct expression inside the aggregate function.
knex('users').sumDistinct('products')
Outputs:
select sum(distinct `products`) from `users`
.avg(column|raw)
Retrieve the average of the values of a given column. Also accepts raw expressions.
knex('users').avg('age')
Outputs:
select avg(`age`) from `users`
knex('users').avg('age as a')
Outputs:
select avg(`age`) as `a` from `users`
knex('users').avg(knex.raw('??', ['age']))
Outputs:
select avg(`age`) from `users`
Use avgDistinct to add a distinct expression inside the aggregate function.
knex('users').avgDistinct('age')
Outputs:
select avg(distinct `age`) from `users`
.increment(column, amount)
Increments a column value by the specified amount.
knex('accounts')
.where('userid', '=', 1)
.increment('balance', 10)
Outputs:
update `accounts` set `balance` = `balance` + 10 where `userid` = 1
.decrement(column, amount)
Decrements a column value by the specified amount.
knex('accounts').where('userid', '=', 1).decrement('balance', 5)
Outputs:
update `accounts` set `balance` = `balance` - 5 where `userid` = 1
.truncate()
Truncates the current table.
knex('accounts').truncate()
Outputs:
truncate `accounts`
.pluck(id)
This will pluck the specified column from each row in your results, yielding a promise which resolves to the array of values selected.
knex.table('users').pluck('id').then(function(ids) { console.log(ids); });
.first([columns])
Similar to select, but only retrieves & resolves with the first record from the query.
knex.table('users').first('id', 'name').then(function(row) { console.log(row); });
.clone()
Clones the current query chain, useful for re-using partial query snippets in other queries without mutating the original.
.modify(fn, *arguments)
Allows encapsulating and re-using query snippets and common behaviors as functions. The callback function should receive the query builder as its first argument, followed by the rest of the (optional) parameters passed to modify.
var withUserName = function(queryBuilder, foreignKey) {
queryBuilder.leftJoin('users', foreignKey, 'users.id').select('users.user_name');
};
knex.table('articles').select('title', 'body').modify(withUserName, 'articles_user.id').then(function(article) {
console.log(article.user_name);
});
.columnInfo([columnName])
Returns an object with the column info about the current table, or an individual column if one is passed, with the following keys:
defaultValue: the default value for the column
type: the column type
maxLength: the max length set for the column
nullable: whether the column may be null
knex('users').columnInfo().then(function(info) { // ... });
.debug([enabled])
Overrides the global debug setting for the current query chain. If enabled is omitted, query debugging will be turned on.
.connection(dbConnection)
(incomplete) - This feature was incorrectly documented as functional. If implemented, the method would set the db connection to use for the query without using the connection pool.
.options()
Allows for mixing in additional options as defined by database client specific libraries:
knex('accounts as a1')
.leftJoin('accounts as a2', function() {
this.on('a1.email', '<>', 'a2.email');
})
.select(['a1.email', 'a2.email'])
.where(knex.raw('a1.id = 1'))
.options({ nestTables: true, rowMode: 'array' })
.limit(2)
.then(...
Transactions are an important feature of relational databases, as they allow correct recovery from failures and keep a database consistent even in cases of system failure. All queries within a transaction are executed on the same database connection, and run the entire set of queries as a single unit of work. Any failure will mean the database will rollback any queries executed on that connection to the pre-transaction state.
Transactions are handled by passing a handler function into knex.transaction
. The handler function accepts a single argument, an object which may be used in two ways:
var Promise = require('bluebird');
// Using trx as a query builder:
knex.transaction(function(trx) {
var books = [
{title: 'Canterbury Tales'},
{title: 'Moby Dick'},
{title: 'Hamlet'}
];
return trx
.insert({name: 'Old Books'}, 'id')
.into('catalogues')
.then(function(ids) {
return Promise.map(books, function(book) {
book.catalogue_id = ids[0];
// Some validation could take place here.
return trx.insert(book).into('books');
});
});
})
.then(function(inserts) {
console.log(inserts.length + ' new books saved.');
})
.catch(function(error) {
// If we get here, that means that neither the 'Old Books' catalogues insert,
// nor any of the books inserts will have taken place.
console.error(error);
});
And then this example:
var Promise = require('bluebird');
// Using trx as a transaction object:
knex.transaction(function(trx) {
var books = [
{title: 'Canterbury Tales'},
{title: 'Moby Dick'},
{title: 'Hamlet'}
];
knex.insert({name: 'Old Books'}, 'id')
.into('catalogues')
.transacting(trx)
.then(function(ids) {
return Promise.map(books, function(book) {
book.catalogue_id = ids[0];
// Some validation could take place here.
return knex.insert(book).into('books').transacting(trx);
});
})
.then(trx.commit)
.catch(trx.rollback);
})
.then(function(inserts) {
console.log(inserts.length + ' new books saved.');
})
.catch(function(error) {
// If we get here, that means that neither the 'Old Books' catalogues insert,
// nor any of the books inserts will have taken place.
console.error(error);
});
Throwing an error directly from the transaction handler function automatically rolls back the transaction, same as returning a rejected promise.
Notice that if a promise is not returned from the handler, it is up to you to ensure trx.commit or trx.rollback is called; otherwise the transaction connection will hang.
The knex.schema is a getter function, which returns a stateful object containing the query. Therefore be sure to obtain a new instance of knex.schema for every query. These methods return promises.
knex.schema.withSchema([schemaName])
Specifies the schema to be used when using the schema-building commands.
knex.schema.withSchema('public').createTable('users', function (table) {
table.increments();
})
Outputs:
create table `public`.`users` (`id` int unsigned not null auto_increment primary key)
knex.schema.createTable(tableName, callback)
Creates a new table on the database, with a callback function to modify the table's structure, using the schema-building commands.
knex.schema.createTable('users', function (table) {
table.increments();
table.string('name');
table.timestamps();
})
Outputs:
create table `users` (`id` int unsigned not null auto_increment primary key, `name` varchar(255), `created_at` datetime, `updated_at` datetime)
knex.schema.createTableIfNotExists(tableName, callback)
Creates a new table on the database if it doesn't already exist, with a callback function to modify the table's structure, using the schema-building commands.
knex.schema.createTableIfNotExists('users', function (table) {
table.increments();
table.string('name');
table.timestamps();
})
Outputs:
create table if not exists `users` (`id` int unsigned not null auto_increment primary key, `name` varchar(255), `created_at` datetime, `updated_at` datetime)
knex.schema.renameTable(from, to)
Renames a table from its current tableName to another.
knex.schema.renameTable('users', 'old_users')
Outputs:
rename table `users` to `old_users`
knex.schema.dropTable(tableName)
Drops a table, specified by tableName.
knex.schema.dropTable('users')
Outputs:
drop table `users`
knex.schema.hasTable(tableName)
Checks for a table's existence by tableName, resolving with a boolean to signal if the table exists.
knex.schema.hasTable('users').then(function(exists) {
if (!exists) {
return knex.schema.createTable('users', function(t) {
t.increments('id').primary();
t.string('first_name', 100);
t.string('last_name', 100);
t.text('bio');
});
}
});
knex.schema.hasColumn(tableName, columnName)
Checks if a column exists in the current table, resolves the promise with a boolean, true if the column exists, false otherwise.
knex.schema.dropTableIfExists(tableName)
Drops a table conditionally if the table exists, specified by tableName.
knex.schema.dropTableIfExists('users')
Outputs:
drop table if exists `users`
knex.schema.table(tableName, callback)
Chooses a database table, and then modifies the table, using the Schema Building functions inside of the callback.
knex.schema.table('users', function (table) {
table.dropColumn('name');
table.string('first_name');
table.string('last_name');
})
Outputs:
alter table `users` add `first_name` varchar(255), add `last_name` varchar(255);
alter table `users` drop `name`
knex.schema.raw(statement)
Run an arbitrary sql query in the schema builder chain.
knex.schema.raw("SET sql_mode='TRADITIONAL'")
.table('users', function (table) {
table.dropColumn('name');
table.string('first_name');
table.string('last_name');
})
Outputs:
SET sql_mode='TRADITIONAL';
alter table `users` add `first_name` varchar(255), add `last_name` varchar(255);
alter table `users` drop `name`
table.dropColumn(name)
Drops a column, specified by the column's name
table.dropColumns(*columns)
Drops multiple columns, taking a variable number of column names.
table.renameColumn(from, to)
Renames a column from one name to another.
table.increments(name)
Adds an auto incrementing column, in PostgreSQL this is a serial. This will be used as the primary key for the table. Also available is a bigIncrements if you wish to add a bigint incrementing number (in PostgreSQL bigserial).
table.integer(name)
Adds an integer column.
table.bigInteger(name)
In MySQL or PostgreSQL, adds a bigint column, otherwise adds a normal integer. Note that bigint data is returned as a string in queries because JavaScript may be unable to parse them without loss of precision.
table.text(name, [textType])
Adds a text column, with optional textType for MySQL text datatype preference. textType may be mediumtext or longtext; otherwise it defaults to text.
table.string(name, [length])
Adds a string column, with optional length defaulting to 255.
table.float(column, [precision], [scale])
Adds a float column, with optional precision (defaults to 8) and scale (defaults to 2).
table.decimal(column, [precision], [scale])
Adds a decimal column, with optional precision (defaults to 8) and scale (defaults to 2).
table.boolean(name)
Adds a boolean column.
table.date(name)
Adds a date column.
table.dateTime(name)
Adds a dateTime column.
table.time(name)
Adds a time column.
table.timestamp(name, [standard])
Adds a timestamp column, defaulting to timestamptz in PostgreSQL unless true is passed as the second argument. For example:
table.timestamp('created_at').defaultTo(knex.fn.now());
table.timestamps([useTimestamps], [defaultToNow])
Adds created_at and updated_at columns to the table, setting each to dateTime types. When true is passed as the first argument a timestamp type is used instead. Both columns default to being not null and to the current timestamp when true is passed as the second argument.
table.dropTimestamps()
Drops the columns created_at and updated_at from the table, which can be created via timestamps.
table.binary(name, [length])
Adds a binary column, with optional length argument for MySQL.
table.enu(col, values)
Adds an enum column (aliased to enu, as enum is a reserved word in JavaScript). Note that the second argument is an array of values. Example:
table.enu('column', ['value1', 'value2'])
table.json(name)
Adds a json column, using the built-in json type in postgresql, defaulting to a text column in older versions of postgresql or in unsupported databases. Note that when setting an array (or a value that could be an array) as the value of a json or jsonb column, you should use JSON.stringify() to convert your value to a string prior to passing it to the query builder, e.g.
knex.table('users')
.where({id: 1})
.update({json_data: JSON.stringify(mightBeAnArray)});
This is because postgresql has a native array type which uses a syntax incompatible with json; knex has no way of knowing which syntax to use, and calling JSON.stringify() forces json-style syntax.
table.jsonb(name)
Adds a jsonb column. Works similar to table.json(), but uses native jsonb type if possible.
table.uuid(name)
Adds a uuid column - this uses the built-in uuid type in PostgreSQL, falling back to a char(36) in other databases.
table.comment(value)
Sets the comment for a table.
table.engine(val)
Sets the engine for the database table, only available within a createTable call, and only applicable to MySQL.
table.charset(val)
Sets the charset for the database table, only available within a createTable call, and only applicable to MySQL.
table.collate(val)
Sets the collation for the database table, only available within a createTable call, and only applicable to MySQL.
table.inherits(val)
Sets the tables that this table inherits, only available within a createTable call, and only applicable to PostgreSQL.
table.specificType(name, type)
Sets a specific type for the column creation, if you'd like to add a column type that isn't supported here.
table.index(columns, [indexName], [indexType])
Adds an index to a table over the given columns. A default index name using the columns is used unless indexName is specified. The indexType can be optionally specified for PostgreSQL.
table.dropIndex(columns, [indexName])
Drops an index from a table. A default index name using the columns is used unless indexName is specified (in which case columns is ignored).
table.unique(columns, [indexName])
Adds a unique index to a table over the given columns. A default index name using the columns is used unless indexName is specified.
knex.schema.alterTable('users', function(t) {
t.unique('email')
})
knex.schema.alterTable('job', function(t) {
t.unique(['account_id', 'program_id'])
})
table.foreign(columns, [foreignKeyName])
Adds a foreign key constraint to a table for an existing column using table.foreign(column).references(column)
or multiple columns using table.foreign(columns).references(columns)
. A default key name using the columns is used unless foreignKeyName is specified. You can also chain onDelete and/or onUpdate to set the reference option (RESTRICT, CASCADE, SET NULL, NO ACTION) for the operation. Note, this is the same as column.references(column) but works for existing columns.
knex.schema.table('users', function (table) {
table.integer('user_id').unsigned()
table.foreign('user_id').references('Items.user_id_in_items')
})
table.dropForeign(columns, [foreignKeyName])
Drops a foreign key constraint from a table. A default foreign key name using the columns is used unless foreignKeyName is specified (in which case columns is ignored).
table.dropUnique(columns, [indexName])
Drops a unique key constraint from a table. A default unique key name using the columns is used unless indexName is specified (in which case columns is ignored).
table.dropPrimary([constraintName])
Drops the primary key constraint on a table. Defaults to tablename_pkey unless constraintName is specified.
The following three methods may be chained on the schema building methods, as modifiers to the column.
column.alter()
Marks the column as an alter / modify, instead of the default add. Note: This only works in .alterTable() and is not supported by SQLite. Alter is not done incrementally over the older column type, so if you would like to add notNull and keep the old default value, the alter statement must contain both .notNull().defaultTo(1).alter(). If one just tries .notNull().alter(), the old default value will be dropped.
knex.schema.alterTable('user', function(t) {
t.increments().primary(); // add
// drops previous default value from column, change type to string and add not nullable constraint
t.string('username', 35).notNullable().alter();
// drops both the not null constraint and the default value
t.integer('age').alter();
});
column.index([indexName], [indexType])
Specifies a field as an index. If an indexName is specified, it is used in place of the standard index naming convention of tableName_columnName. The indexType can be optionally specified for PostgreSQL. No-op if this is chained off of a field that cannot be indexed.
column.primary([constraintName])
When called on a single column it will set that column as the primary key for a table.
To create a compound primary key, pass an array of column names: table.primary(['column1', 'column2']). The constraint name defaults to tablename_pkey unless constraintName is specified.
column.unique()
Sets the column as unique.
column.references(column)
Sets the "column" that the current column references as a foreign key. "column" can either use "tableName.columnName" syntax, or be just the column name, followed by a call to inTable to specify the table.
column.inTable(table)
Sets the "table" where the foreign key column is located after calling column.references.
column.onDelete(command)
Sets the SQL command to be run "onDelete".
column.onUpdate(command)
Sets the SQL command to be run "onUpdate".
column.defaultTo(value)
Sets the default value for the column on an insert.
column.unsigned()
Specifies an integer as unsigned. No-op if this is chained off of a non-integer field.
column.notNullable()
Adds a not null on the current column being created.
column.nullable()
Default on column creation, this explicitly sets a field to be nullable.
column.first()
Sets the column to be inserted on the first position, only used in MySQL alter tables.
column.after(field)
Sets the column to be inserted after another, only used in MySQL alter tables.
column.comment(value)
Sets the comment for a column.
knex.schema.createTable('accounts', function(t) {
t.increments().primary();
t.string('email').unique().comment('This is the email field');
});
column.collate(collation)
Sets the collation for a column (only works in MySQL). Here is a list of all available collations: https://dev.mysql.com/doc/refman/5.5/en/charset-charsets.html
knex.schema.createTable('users', function(t) {
t.increments();
t.string('email').unique().collate('utf8_unicode_ci');
});
Sometimes you may need to use a raw expression in a query. A raw query object may be injected pretty much anywhere you want, and using proper bindings ensures your values are escaped properly, preventing SQL-injection attacks.
One can parameterize the sql given to knex.raw(sql, bindings). Parameters can be positional or named. One can also choose whether a parameter should be treated as a value or as an sql identifier, e.g. in the case of a 'TableName.ColumnName' reference.
knex('users')
.select(knex.raw('count(*) as user_count, status'))
.where(knex.raw(1))
.orWhere(knex.raw('status <> ?', [1]))
.groupBy('status')
Outputs:
select count(*) as user_count, status from `users` where 1 or status <> 1 group by `status`
Positional bindings ?
are interpreted as values and ??
are interpreted as identifiers.
knex('users').where(knex.raw('?? = ?', ['user.name', 1]))
Outputs:
select * from `users` where `user`.`name` = 1
Named bindings such as :name
are interpreted as values and :name:
interpreted as identifiers. Named bindings are processed so long as the value is anything other than undefined
.
knex('users')
.where(knex.raw(':name: = :thisGuy or :name: = :otherGuy or :name: = :undefinedBinding', {
name: 'users.name',
thisGuy: 'Bob',
otherGuy: 'Jay',
undefinedBinding: undefined
}))
Error:
Undefined binding(s) detected when compiling RAW query: `users`.`name` = ? or `users`.`name` = ? or `users`.`name` = :undefinedBinding
For simpler queries where one only has a single binding, .raw
can accept said binding as its second parameter.
knex('users')
.where(
knex.raw('LOWER("login") = ?', 'knex')
)
.orWhere(
knex.raw('accesslevel = ?', 1)
)
.orWhere(
knex.raw('updtime = ?', new Date(Date.UTC(2016, 0, 1)))
)
Note that due to ambiguity, arrays must be passed as arguments within a containing array.
knex.raw('select * from users where id in (?)', [1, 2, 3]);
// Error: Expected 3 bindings, saw 1
knex.raw('select * from users where id in (?)', [[1, 2, 3]])
Outputs:
select * from users where id in (1, 2, 3)
To prevent replacement of ?
one can use the escape sequence \\?
.
knex.select('*').from('users').where('id', '=', 1).whereRaw('?? \\? ?', ['jsonColumn', 'jsonKey'])
Outputs:
select * from `users` where `id` = 1 and `jsonColumn` ? 'jsonKey'
To prevent replacement of named bindings one can use the escape sequence \\:
.
knex.select('*').from('users').whereRaw(":property: = '\\:value' OR \\:property: = :value", {
property: 'name',
value: 'Bob'
})
Outputs:
select * from `users` where `name` = ':value' OR :property: = 'Bob'
Raw expressions are created by using knex.raw(sql, [bindings])
and passing this as a value for any value in the query chain.
knex('users')
.select(knex.raw('count(*) as user_count, status'))
.where(knex.raw(1))
.orWhere(knex.raw('status <> ?', [1]))
.groupBy('status')
Outputs:
select count(*) as user_count, status from `users` where 1 or status <> 1 group by `status`
The knex.raw
may also be used to build a full query and execute it, as a standard query builder query would be executed. The benefit of this is that it uses the connection pool and provides a standard interface for the different client libraries.
knex.raw('select * from users where id = ?', [1]).then(function(resp) { ... });
Note that the response will be whatever the underlying sql library would typically return on a normal query, so you may need to look at the documentation for the base library the queries are executing against to determine how to handle the response.
The raw query builder also comes with a wrap
method, which allows wrapping the query in a value:
var subcolumn = knex.raw('select avg(salary) from employee where dept_no = e.dept_no')
.wrap('(', ') avg_sal_dept');
knex.select('e.lastname', 'e.salary', subcolumn)
.from('employee as e')
.whereRaw('dept_no = e.dept_no')
Outputs:
select `e`.`lastname`, `e`.`salary`, (select avg(salary) from employee where dept_no = e.dept_no) avg_sal_dept from `employee` as `e` where dept_no = e.dept_no
Note that the example above can be achieved more easily using the as method.
var subcolumn = knex.avg('salary')
.from('employee')
.whereRaw('dept_no = e.dept_no')
.as('avg_sal_dept');
knex.select('e.lastname', 'e.salary', subcolumn)
.from('employee as e')
.whereRaw('dept_no = e.dept_no')
Outputs:
select `e`.`lastname`, `e`.`salary`, (select avg(`salary`) from `employee` where dept_no = e.dept_no) as `avg_sal_dept` from `employee` as `e` where dept_no = e.dept_no
A collection of utilities that the knex library provides for convenience.
The batchInsert
utility will insert a batch of rows wrapped inside a transaction (which is automatically created unless explicitly given a transaction using transacting), at a given chunkSize
.
It's primarily designed to be used when you have thousands of rows to insert into a table.
By default, the chunkSize
is set to 1000.
BatchInsert also allows for returning values and supplying transactions using transacting.
var rows = [{...}, {...}];
var chunkSize = 30;
knex.batchInsert('TableName', rows, chunkSize)
.returning('id')
.then(function(ids) { ... })
.catch(function(error) { ... });
knex.transaction(function(tr) {
return knex.batchInsert('TableName', rows, chunkSize)
.transacting(tr)
})
.then(function() { ... })
.catch(function(error) { ... });
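Conceptually, batchInsert splits the rows into groups of at most chunkSize and inserts each group within the wrapping transaction. The splitting step can be sketched in plain JavaScript (an illustration only, not Knex's internal code):

```javascript
// Simplified sketch of batchInsert's chunking: split the rows array
// into groups of at most chunkSize before each group is inserted.
// (Illustration only -- not Knex's internal implementation.)
function chunk(rows, chunkSize) {
  if (chunkSize < 1) throw new Error('chunkSize must be >= 1');
  var chunks = [];
  for (var i = 0; i < rows.length; i += chunkSize) {
    chunks.push(rows.slice(i, i + chunkSize));
  }
  return chunks;
}

// 7 rows with a chunkSize of 3 yield groups of 3, 3 and 1.
var groups = chunk([1, 2, 3, 4, 5, 6, 7], 3);
console.log(groups.length); // 3
console.log(groups[2]);     // [7]
```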
Knex.js provides several options to deal with query output. The following methods are present on the query builder, schema builder, and the raw builder:
Promises are the preferred way of dealing with queries in knex, as they allow you to return values from a fulfillment handler, which in turn become the value of the promise. The main benefit of promises is the ability to catch thrown errors without crashing the node app, making your code behave like try / catch / finally in synchronous code.
knex.select('name')
.from('users')
.where('id', '>', 20)
.andWhere('id', '<', 200)
.limit(10)
.offset(x)
.then(function(rows) {
return _.pluck(rows, 'name');
})
.then(function(names) {
return knex.select('id').from('nicknames').whereIn('nickname', names);
})
.then(function(rows) {
console.log(rows);
})
.catch(function(error) {
console.error(error)
});
.then(onFulfilled, [onRejected])
Coerces the current query builder chain into a promise state, accepting the resolve and reject handlers as specified by the Promises/A+ spec. As stated in the spec, more than one call to the then method for the current query chain will resolve with the same value, in the order they were called; the query will not be executed multiple times.
knex.select('*')
.from('users')
.where({name: 'Tim'})
.then(function(rows) {
return knex.insert({user_id: rows[0].id, name: 'Test'}, 'id').into('accounts');
})
.then(function(id) {
console.log('Inserted Account ' + id);
})
.catch(function(error) { console.error(error); });
.catch(onRejected)
Coerces the current query builder into a promise state, catching any error thrown by the query, the same as calling .then(null, onRejected).
return knex.insert({id: 1, name: 'Test'}, 'id')
.into('accounts')
.catch(function(error) {
console.error(error);
}).then(function() {
return knex.select('*')
.from('accounts')
.where('id', 1);
}).then(function(rows) {
console.log(rows[0]);
})
.catch(function(error) {
console.error(error);
});
.tap(sideEffectHandler)
Executes side effects on the resolved response, ultimately returning a promise that fulfills with the original value. A thrown error or rejected promise will cause the promise to transition into a rejected state.
// Using only .then()
query.then(function(x) {
doSideEffectsHere(x);
return x;
});
// Using .tap()
query.tap(doSideEffectsHere);
.map(mapper)
A passthrough to Bluebird's map implementation with the result set.
knex.select('name').from('users').limit(10).map(function(row) {
return row.name;
})
.then(function(names) { console.log(names); })
.catch(function(e) { console.error(e); });
.reduce(reducer, [initialValue])
A passthrough to Bluebird's reduce implementation with the result set.
knex.select('name').from('users').limit(10).reduce(function(memo, row) {
memo.names.push(row.name);
memo.count++;
return memo;
}, {count: 0, names: []})
.then(function(obj) { console.log(obj); })
.catch(function(e) { console.error(e); });
.bind(context)
A passthrough to Bluebird's bind method which sets the context value (this) for the returned promise.
knex.select('name').from('users')
.limit(10)
.bind(console)
.then(console.log)
.catch(console.error)
.return(value)
Shorthand for calling .then(function() { return value }).
// Without return:
knex.insert(values).into('users')
.then(function() {
return {inserted: true};
});
knex.insert(values).into('users').return({inserted: true});
.asCallback(callback)
If you'd prefer a callback interface over promises, the asCallback function accepts a standard node style callback for executing the query chain. Note that as with the then method, subsequent calls to the same query chain will return the same result.
knex.select('name').from('users')
.where('id', '>', 20)
.andWhere('id', '<', 200)
.limit(10)
.offset(x)
.asCallback(function(err, rows) {
if (err) return console.error(err);
knex.select('id').from('nicknames')
.whereIn('nickname', _.pluck(rows, 'name'))
.asCallback(function(err, rows) {
if (err) return console.error(err);
console.log(rows);
});
});
Streams are a powerful way of piping data through as it comes in, rather than all at once. You can read more about streams here at substack's stream handbook. See the following for example uses of stream & pipe. If you wish to use streams with PostgreSQL, you must also install the pg-query-stream module. On an HTTP server, make sure to manually close your streams if a request is aborted.
.stream([options], [callback])
If called with a callback, the callback is passed the stream and a promise is returned. Otherwise, the readable stream is returned.
// Retrieve the stream:
var stream = knex.select('*').from('users').stream();
stream.pipe(writableStream);
// With options:
var stream = knex.select('*').from('users').stream({highWaterMark: 5});
stream.pipe(writableStream);
// Use as a promise:
var stream = knex.select('*').from('users')
.where(knex.raw('id = ?', [1]))
.stream(function(stream) {
stream.pipe(writableStream);
})
.then(function() { /* ... */ })
.catch(function(e) { console.error(e); });
.pipe(writableStream)
Pipe a stream for the current query to a writableStream.
var stream = knex.select('*').from('users').pipe(writableStream);
A query event is fired just before a query takes place, providing data about the query, including the connection's __cid property and any other information about the query as described in toSQL. Useful for logging all queries throughout your application.
knex.select('*')
.from('users')
.on('query', function(data) {
app.log(data);
})
.then(function() {
// ...
});
A query-error event is fired when an error occurs when running a query, providing the error object and data about the query, including the connection's __cid property and any other information about the query as described in toSQL. Useful for logging all query errors throughout your application.
knex.select(['NonExistentColumn'])
.from('users')
.on('query-error', function(error, obj) {
app.log(error);
})
.then(function() { /* ... */ })
.catch(function(error) {
// Same error object as the query-error event provides.
});
A query-response event is fired when a successful query has been run, providing the response of the query and data about the query, including the connection's __cid property and any other information about the query as described in toSQL, and finally the query builder used for the query.
knex.select('*')
.from('users')
.on('query-response', function(response, obj, builder) {
// ...
})
.then(function(response) {
// Same response as the emitted event
})
.catch(function(error) { });
.toString()
Returns an array of query strings filled out with the correct values based on bindings, etc. Useful for debugging, but should not be used to create queries for running against the database.
var toStringQuery = knex.select('*').from('users').where('id', 1).toString();
console.log(toStringQuery);
// Outputs: select * from "users" where "id" = 1
.toSQL() and .toSQL().toNative()
Returns an array of query strings filled out with the correct values based on bindings, etc. Useful for debugging and for building queries to run manually with the DB driver. .toSQL().toNative()
outputs an object with the SQL string and bindings in the dialect's format, in the same way that knex internally sends them to the underlying DB driver.
knex.select('*').from('users')
.where(knex.raw('id = ?', [1]))
.toSQL()
// Outputs:
// {
// bindings: [1],
// method: 'select',
// sql: 'select * from "users" where id = ?',
// options: undefined,
// toNative: function () {}
// }
knex.select('*').from('users')
.where(knex.raw('id = ?', [1]))
.toSQL().toNative()
// Outputs for postgresql dialect:
// {
// bindings: [1],
// sql: 'select * from "users" where id = $1',
// }
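The placeholder rewrite that toNative() performs for PostgreSQL (? becoming $1, $2, and so on) can be sketched as follows. This naive illustration ignores quoting and escaping, which a real implementation must handle:

```javascript
// Naive sketch: rewrite generic `?` placeholders into PostgreSQL's
// positional `$1, $2, ...` format. Illustration only -- it does not
// skip `?` characters inside string literals or quoted identifiers.
function toPostgresPlaceholders(sql) {
  var index = 0;
  return sql.replace(/\?/g, function() {
    index += 1;
    return '$' + index;
  });
}

var native = toPostgresPlaceholders('select * from "users" where id = ?');
console.log(native);
// select * from "users" where id = $1
```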
Migrations allow for you to define sets of schema changes so upgrading a database is a breeze.
The migration CLI is bundled with the knex install, and is driven by the node-liftoff module. To install globally, run:
$ npm install knex -g
Migrations use a knexfile, which specify various configuration settings for the module. To create a new knexfile, run the following:
$ knex init
# or for .coffee
$ knex init -x coffee
Running this will create a sample knexfile.js, the file which contains our various database configurations. Once you have a knexfile.js, you can use the migration tool to create migration files in the specified directory (default migrations). Creating new migration files can be achieved by running:
$ knex migrate:make migration_name
Once you have finished writing the migrations, you can update the database matching your NODE_ENV
by running:
$ knex migrate:latest
You can also pass the --env
flag or set NODE_ENV
to select an alternative environment:
$ knex migrate:latest --env production
# or
$ NODE_ENV=production knex migrate:latest
To rollback the last batch of migrations:
$ knex migrate:rollback
Seed files allow you to populate your database with test or seed data independent of your migration files.
To create a seed file, run:
$ knex seed:make seed_name
Seed files are created in the directory specified in your knexfile.js for the current environment. A sample seed configuration looks like:
development: {
client: ...,
connection: { ... },
seeds: {
directory: './seeds/dev'
}
}
If no seeds.directory is defined, files are created in ./seeds. Note that the seed directory needs to be a relative path. Absolute paths are not supported (nor are they good practice).
To run seed files, execute:
$ knex seed:run
Seed files are executed in alphabetical order. Unlike migrations, every seed file will be executed when you run the command. You should design your seed files to reset tables as needed before inserting data.
A knexfile.js or knexfile.coffee generally contains all of the configuration for your database. It can optionally provide different configuration for different environments. You may pass a --knexfile
option to any of the command line statements to specify an alternate path to your knexfile.
module.exports = {
client: 'pg',
connection: process.env.DATABASE_URL || { user: 'me', database: 'my_app' }
};
module.exports = {
development: {
client: 'pg',
connection: { user: 'me', database: 'my_app' }
},
production: { client: 'pg', connection: process.env.DATABASE_URL }
};
You may provide a custom migration stub to be used in place of the default option.
module.exports = {
client: 'pg',
migrations: {
stub: 'migration.stub'
}
};
knex.migrate is the class utilized by the knex migrations CLI.
Each method takes an optional config object, which may specify the following properties:
directory: a relative path to the directory containing the migration files (default ./migrations)
extension: the file extension used for the generated migration files (default js)
tableName: the table name used for storing the migration state (default knex_migrations)
disableTransactions: don't run migrations inside transactions (default false)
loadExtensions: array of file extensions which knex will treat as migrations. For example, if you have TypeScript transpiled into JavaScript in the same folder and want to execute only the JavaScript migrations, set loadExtensions to ['.js'] (notice the dot!) (default ['.co', '.coffee', '.eg', '.iced', '.js', '.litcoffee', '.ls', '.ts'])
By default, each migration is run inside a transaction. Whenever needed, one can disable transactions for all migrations via the common migration config option config.disableTransactions, or per-migration by exposing a boolean property config.transaction from a migration file:
exports.up = function(knex, Promise) { /* ... */ };
exports.down = function(knex, Promise) { /* ... */ };
exports.config = { transaction: false };
The same config property can be used to enable transactions per-migration when the common configuration has disableTransactions: true.
knex.migrate.make(name, [config])
Creates a new migration, with the name of the migration being added.
knex.migrate.latest([config])
Runs all migrations that have not yet been run.
If you need to run something only after all migrations have finished their execution, you can do something like this:
knex.migrate.latest()
.then(function() {
return knex.seed.run();
})
.then(function() {
// migrations are finished
});
knex.migrate.rollback([config])
Rolls back the latest migration group.
knex.migrate.currentVersion([config])
Retrieves and returns the current migration version, as a promise. If no migrations have been run yet, returns "none" as the value of currentVersion.
A lock system is in place to prevent multiple processes from running the same migration batch at the same time. When a batch of migrations is about to run, the migration system first tries to get a lock using a SELECT ... FOR UPDATE statement (preventing race conditions from happening). If it can get a lock, the migration batch will run. If it can't, it will wait until the lock is released.
Please note that if your process unfortunately crashes, the lock will have to be removed manually in order to let migrations run again. The locks are saved in a table called "tableName_lock"; it has a single column called is_locked that you need to set to 0 in order to release the lock.
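Assuming the default tableName of knex_migrations, the lock table is knex_migrations_lock, and a stuck lock can be cleared with plain SQL:

```sql
-- Release a stuck migration lock (default table name assumed).
UPDATE knex_migrations_lock SET is_locked = 0;
```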
knex.seed is the class utilized by the knex seed CLI.
Each method takes an optional config object, which may specify the following properties:
directory: a relative path to the directory containing the seed files (default ./seeds)
loadExtensions: array of file extensions which knex will treat as seeds. For example, if you have TypeScript transpiled into JavaScript in the same folder and want to execute only the JavaScript seeds, set loadExtensions to ['.js'] (notice the dot!) (default ['.co', '.coffee', '.eg', '.iced', '.js', '.litcoffee', '.ls', '.ts'])
knex.seed.make(name, [config])
Creates a new seed file, with the name of the seed file being added.
knex.seed.run([config])
Runs all seed files for the current environment.
Have questions about the library? Come join us in the #bookshelf freenode IRC channel for support on knex.js and bookshelf.js, or post an issue on Stack Overflow or in the GitHub issue tracker.
How do I help contribute?
Glad you asked! Pull requests and feature requests, though not always implemented, are a great way to help make Knex even better than it is now. If you're looking for something specific to help out with, there are a number of unit tests that aren't implemented yet; the library could never have too many of those. If you want to submit a fix or feature, take a look at the Contributing readme on GitHub and go ahead and open a ticket.
How do I debug?
Knex is beginning to make use of the debug module internally, so you can set the DEBUG
environment variable to knex:*
to see all debugging, or select individual namespaces DEBUG=knex:query,knex:tx
to constrain a bit.
If you pass {debug: true}
as one of the options in your initialize settings, you can see all of the query calls being made. Sometimes you need to dive a bit further into the various calls and see what all is going on behind the scenes. I'd recommend node-inspector, which allows you to debug code with debugger
statements like you would in the browser.
Adding an unhandled-rejection handler at the start of your application code will catch any errors not otherwise caught in the normal promise chain handlers, which is very helpful in debugging.
How do I run the test suite?
The test suite looks for an environment variable called KNEX_TEST
for the path to the database configuration. If you run the following command:
$ export KNEX_TEST='/path/to/your/knex_config.js'
$ npm test
substituting the path to your config file, and provided the config file is valid, the test suite should run properly.
My tests are failing because of a slow DB connection and short test timeouts! How do I extend the test timeouts?
Sometimes, e.g. when running CI on Travis, the test suite's default timeout of 5 seconds might be too short. In such cases, an alternative test timeout value in milliseconds can be specified using the KNEX_TEST_TIMEOUT
environment variable.
$ export KNEX_TEST_TIMEOUT=30000
$ npm test
Can I use Knex outside of Node.js?
Yes. While the WebSQL spec is deprecated, there is still an adapter that provides support. You will need to use a build tool like browserify or webpack for a browser build.
Fixed unhandled exception in batchInsert when the rows to be inserted resulted in duplicate key violation #1880
-chars #1890
first method as aliased select #1784
with to builder for common table expressions, #1599
Reverted knex client breaking change (commit b74cd69e906), fixes #1587
Add missing dependency. #1516
undefined values in QueryBuilder#where and Raw. #1459
ping function. (Oracle) #1486
columnInfo. (MSSQL) #1464
ColumnCompiler#binary. (MSSQL) #1464
Bugfix: Using Raw or QueryBuilder as a binding to Raw now works as intended
.primary() and .dropPrimary() between dialects #1430
batchInsert with existing transaction #1354
knex.raw to .whereNot #1402
.returning #1398
Support nested joins. #1397
Fix error in CLI command migrate:make. #1386
QueryBuilder#orWhere joins multiple arguments with AND. #1164
QueryBuilder#timeout, Raw#timeout. #1201 #1260
host as an alias for server in accordance with other dialects. #1239
query-response event. #1231
RETURNING with UPDATE. (Oracle) #1253
Raw bindings. #1251
timestamps accepts an argument to set NOT NULL and default to current timestamp.
TableBuilder#inherits for PostgreSQL. #601
andWhereBetween and andWhereNotBetween #1132
valueForUndefined failure. #1269
renameColumn no longer drops default value or nullability. #1326
createTableIfNotExists. #1362
insert and update now ignore undefined values. Back compatibility is provided through the option useNullAsDefault. #1174, #1043
countDistinct, avgDistinct and sumDistinct. #1046
schema.jsonb. Deprecated schema.json(column, true). #991
??. #1103
query event when triggered by transactions. #855
. #1181
batchInsert. #1182
knex.raw.
query-error event. #1163
batchInsert. #1182
acquireConnectionTimeout default 60 seconds preventing #1040 from happening. #1177
.raw queries in debug output. #1169
cli to use basic configuration without specific environment set. #1101
knex.raw instances without arguments. #853
compress option to MySQL2. #843
timestamp with timezone by default for time, datetime and timestamp for Oracle. #876
QueryBuilder#modify #881
QueryBuilder#withSchema #518
? in knex.raw queries. #946
0 in join clause. #953
Pool should be initialized if no pool options are specified
Pool should not be initialized if {max: 0} is sent in config options
Alias postgresql -> postgres in connection config options
Fix regression in using query string in connection config
commit & rollback releasing or rolling back the current savepoint.
Fix bug in validateMigrationList, (#697)
join(table, rawOrBuilder) syntax.
Fix for regression in migrations
Better disconnect handling & pool removal for MySQL clients, #452
whereIn
Bug fix for properly binding postgresql streaming queries, (#363)
Allow case insensitive operators in sql clauses, (#344)
Patch for the method, calling without a handler should return the stream, not a promise (#337)
Adding missing map / reduce proxies to bluebird's implementation
Added the as method for aliasing subqueries
whereExists / whereNotExists may now take a query builder instance as well as a callback
Fix for regression with boolean default types in PostgreSQL
Fix for regression with queries containing multiple order by statements in sqlite3
Fix for big regression in memoization of column names from 0.5 -> 0.6
Fix for regression in specificType method
Package.json fix for CLI
raw.wrap
Add missing _ require to WebSQL builds
Fix & document schema.raw method
Reverting to using .npmignore, the "files" syntax forgot the knex.js file
.references(tableName.columnName) as shorthand for .references(columnName).inTable(tableName)
.join('table.column', 'otherTable.column') as shorthand for .join('table.column', '=', 'otherTable.column')
Dropped indexes feature now functions correctly, (#278)
Remove the charset encoding if it's utf8 for mysql, as it's the default but also currently causes some issues in recent versions of node-mysql
Fix regression in array bindings for postgresql (#228)
Add more operators for where clauses, including && (#226)
.where(col, 'is', null) or .where(col, 'is not', null) are not supported (#221)
.where operators now allowed (#212).
Add the .exec method to the internal promise shim
Remove error'ed connections from the connection pool (#206), added support for node-postgres-pure (pg.js) (#200)
Fix for chaining on forUpdate / forShare, adding map & reduce from bluebird
Fix for a null limit / offset breaking query chain (#182)
Bump bluebird dependency to ~1.0.0, fixing regression in Bluebird 1.0.2 (#176)
Fix for using raw statements in defaultTo schema builder methods (#146)
Fix for incorrectly formed sql when aggregates are used with columns (#144)
Adding passthrough "catch", "finally" to bluebird implementations, use bluebird's "nodeify" internally for exec
returning value. (#132)
bigint and bigserial type to PostgreSQL. (#111)
up and down methods, allowing the "knex" module to be used globally, file ordering on migrations, and other small improvements. (#112-115, #125, #135)
Fix incorrect values passed to float in MySQL and decimal in PostgreSQL
Fix potential sql injection vulnerability in orderBy, thanks to @sebgie
into as an alias of from, for builder syntax of: insert(value).into(tableName)
Connections are no longer pushed back into the pool if they never existed to begin with (#85)
The column is now a documented method on the builder api, and takes either an individual column or an array of columns to select
Standardizing handling of errors for easier debugging, as noted in (#39)
Fix for hasTable always returning true in MySQL (#82), fix where sql queries were duplicated with multiple calls on toSql with the schema builder
Fix for debug method not properly debugging individual queries
Fix for underscore not being defined in various grammar files
Fix for an error being thrown when an initialized ClientBase instance was passed into Knex.initialize. pool.destroy now optionally accepts a callback to notify when it has completed draining and destroying all connections
Cleanup from the 0.4.0 release, fix a potential exploit in "where" clauses pointed out by Andri Möll, fix for clients not being properly released from the pool #70, fix for where("foo", "<>", null) doing an "IS NULL" statement
Knex.initialize, so you will need to call this once and then reference this knex client elsewhere in your application.
knex.raw, knex.transaction, and knex.schema.
notNullable is chained as an option.
increments are now assumed to be unsigned (MySQL) by default.
destroyAllNow is no longer called by the library on the process.exit event. If you need to call it explicitly yourself, you may use knex.client.destroyPool
Reject the transaction promise if the transaction "commit" fails, (#50)
Fix error if a callback isn't specified for exec, (#49)
Fix SQLite3 delete not returning affected row count, (#45)
Fix insert with default values in PostgreSQL and SQLite3, (#44)
Allowing Raw queries to be passed as the primary table names
Fix for an array passed to insert being mutated
id is not assumed on inserts (#18). The second parameter of insert is now required to return an array of insert id's for the last insert.
dateTime rather than a timestamp
Somehow missing the != operator. Using .find rather than .where in getCommandsByName(#22)
Ensures unhandled errors in the exec callback interface are re-thrown
Renaming beforeCreate to afterCreate. Better handling of errors in the connection pooling
Added the ability to specify beforeCreate and beforeDestroy hooks on the initialize's options.pool to perform any necessary database setup/teardown on connections before use (#14). where and having may now accept Knex.Raw instances, for consistency (#15). Added an orHaving method to the builder. The ability to specify bindings on Raw queries has been removed
defaultTo now accepts "false" for boolean columns, allows for empty strings as default values
Enabling table aliases (#11). Fix for issues with transactions not functioning (#12)
Bug fixes for groupBy (#7). Mysql using collation, charset config settings in createTable. Added engine on schemaBuilder specifier (#6). Other doc fixes, tests
Bug fixes for sub-queries, minor changes to initializing "main" instance, adding "pg" as a valid parameter for the client name in the connection settings
Initial Knex release