Warehouses
Altimate Code connects to 12 warehouse types. Configure them in `.altimate-code/connections.json` (project-local) or `~/.altimate-code/connections.json` (global).
Configuration
Each connection has a key (the connection name) and a config object:
{
  "my-connection-name": {
    "type": "<warehouse-type>",
    ...
  }
}
Tip
Use `{env:...}` substitution for passwords and tokens so you never commit secrets to version control.
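For example, export the secret in your shell and reference it from the config; Altimate Code reads the value from your environment (the variable name and value below are illustrative):

```bash
# Keep the secret in your shell profile (e.g. ~/.zshrc), not in the repo
export SNOWFLAKE_PASSWORD='s3cr3t-value'
```

A field like `"password": "{env:SNOWFLAKE_PASSWORD}"` then resolves to that value.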
Snowflake
{
  "prod-snowflake": {
    "type": "snowflake",
    "account": "xy12345.us-east-1",
    "user": "analytics_user",
    "password": "{env:SNOWFLAKE_PASSWORD}",
    "warehouse": "COMPUTE_WH",
    "database": "ANALYTICS",
    "role": "ANALYST_ROLE"
  }
}
| Field | Required | Description |
|---|---|---|
| `account` | Yes | Snowflake account identifier (e.g. `xy12345.us-east-1`) |
| `user` | Yes | Username |
| `password` | Auth | Password (use one auth method) |
| `private_key_path` | Auth | Path to private key file (alternative to password) |
| `private_key_passphrase` | No | Passphrase for encrypted private key |
| `warehouse` | No | Warehouse name |
| `database` | No | Database name |
| `schema` | No | Schema name |
| `role` | No | User role |
Key-pair authentication
{
  "prod-snowflake": {
    "type": "snowflake",
    "account": "xy12345.us-east-1",
    "user": "svc_altimate",
    "private_key_path": "~/.ssh/snowflake_rsa_key.p8",
    "private_key_passphrase": "{env:SNOWFLAKE_KEY_PASSPHRASE}",
    "warehouse": "COMPUTE_WH",
    "database": "ANALYTICS",
    "role": "TRANSFORM_ROLE"
  }
}
BigQuery
{
  "bigquery-prod": {
    "type": "bigquery",
    "project": "my-gcp-project",
    "credentials_path": "/path/to/service-account.json",
    "location": "US"
  }
}
| Field | Required | Description |
|---|---|---|
| `project` | Yes | Google Cloud project ID |
| `credentials_path` | No | Path to service account JSON file. Omit to use Application Default Credentials (ADC) |
| `location` | No | Default location (default: `US`) |
Using Application Default Credentials
If you're already authenticated via `gcloud`, omit `credentials_path`:
{
  "bigquery-prod": {
    "type": "bigquery",
    "project": "my-gcp-project"
  }
}
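If you haven't set up ADC yet, the standard gcloud command creates user credentials that the BigQuery client picks up automatically:

```bash
# Create Application Default Credentials for your user account
gcloud auth application-default login
```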
Databricks
{
  "databricks-prod": {
    "type": "databricks",
    "server_hostname": "adb-1234567890.1.azuredatabricks.net",
    "http_path": "/sql/1.0/warehouses/abcdef1234567890",
    "access_token": "{env:DATABRICKS_TOKEN}",
    "catalog": "main",
    "schema": "default"
  }
}
| Field | Required | Description |
|---|---|---|
| `server_hostname` | Yes | Databricks workspace hostname |
| `http_path` | Yes | HTTP path from compute resources |
| `access_token` | Yes | Personal Access Token (PAT) |
| `catalog` | No | Unity Catalog name |
| `schema` | No | Schema/database name |
PostgreSQL
{
  "my-postgres": {
    "type": "postgres",
    "host": "localhost",
    "port": 5432,
    "database": "analytics",
    "user": "analyst",
    "password": "{env:PG_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `connection_string` | No | Full connection string (alternative to individual fields) |
| `host` | No | Hostname (default: `localhost`) |
| `port` | No | Port (default: `5432`) |
| `database` | No | Database name (default: `postgres`) |
| `user` | No | Username |
| `password` | No | Password |
Using a connection string
{
  "my-postgres": {
    "type": "postgres",
    "connection_string": "postgresql://analyst:secret@localhost:5432/analytics"
  }
}
Special characters in passwords
Passwords with special characters (`@`, `#`, `:`, `/`) are automatically percent-encoded. No manual escaping required.
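For example, this hypothetical config works as-is; the password is passed through unchanged and encoded for you:

```json
{
  "my-postgres": {
    "type": "postgres",
    "host": "localhost",
    "database": "analytics",
    "user": "analyst",
    "password": "p@ss:w0rd/#1"
  }
}
```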
Redshift
{
  "redshift-prod": {
    "type": "redshift",
    "host": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    "port": 5439,
    "database": "analytics",
    "user": "admin",
    "password": "{env:REDSHIFT_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `connection_string` | No | Full connection string (alternative to individual fields) |
| `host` | No | Hostname |
| `port` | No | Port (default: `5439`) |
| `database` | No | Database name (default: `dev`) |
| `user` | No | Username |
| `password` | No | Password |
| `iam_role` | No | IAM role ARN (alternative to password) |
| `region` | No | AWS region (default: `us-east-1`) |
| `cluster_identifier` | No | Cluster identifier (required for IAM auth) |
IAM authentication
{
  "redshift-prod": {
    "type": "redshift",
    "host": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    "database": "analytics",
    "user": "admin",
    "iam_role": "arn:aws:iam::123456789012:role/RedshiftReadOnly",
    "cluster_identifier": "my-cluster",
    "region": "us-east-1"
  }
}
DuckDB
{
  "dev-duckdb": {
    "type": "duckdb",
    "path": "./dev.duckdb"
  }
}
| Field | Required | Description |
|---|---|---|
| `path` | No | Database file path. Omit or use `":memory:"` for in-memory |
Concurrent access
DuckDB does not support concurrent write access to the same file. If another process holds a write lock, Altimate Code automatically retries the connection in read-only mode so you can still query the data. A clear error message is shown if read-only access also fails.
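If you only need scratch space, an in-memory database sidesteps file locking entirely (the connection name here is illustrative):

```json
{
  "scratch-duckdb": {
    "type": "duckdb",
    "path": ":memory:"
  }
}
```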
MySQL
{
  "mysql-prod": {
    "type": "mysql",
    "host": "localhost",
    "port": 3306,
    "database": "analytics",
    "user": "analyst",
    "password": "{env:MYSQL_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `host` | No | Hostname (default: `localhost`) |
| `port` | No | Port (default: `3306`) |
| `database` | No | Database name |
| `user` | No | Username |
| `password` | No | Password |
| `ssl_ca` | No | Path to CA certificate file |
| `ssl_cert` | No | Path to client certificate file |
| `ssl_key` | No | Path to client key file |
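A sketch of a TLS-enabled connection using the certificate fields above; the hostname and certificate paths are placeholders:

```json
{
  "mysql-prod": {
    "type": "mysql",
    "host": "db.example.com",
    "port": 3306,
    "database": "analytics",
    "user": "analyst",
    "password": "{env:MYSQL_PASSWORD}",
    "ssl_ca": "/etc/mysql/certs/ca.pem",
    "ssl_cert": "/etc/mysql/certs/client-cert.pem",
    "ssl_key": "/etc/mysql/certs/client-key.pem"
  }
}
```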
MongoDB
{
  "my-mongodb": {
    "type": "mongodb",
    "host": "localhost",
    "port": 27017,
    "database": "analytics",
    "user": "analyst",
    "password": "{env:MONGO_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `connection_string` | No | Full connection string (alternative to individual fields) |
| `host` | No | Hostname (default: `127.0.0.1`) |
| `port` | No | Port (default: `27017`) |
| `database` | No | Database name |
| `user` | No | Username |
| `password` | No | Password |
| `auth_source` | No | Authentication database (default: `admin`) |
| `replica_set` | No | Replica set name |
| `tls` | No | Enable TLS (default: `false`) |
| `direct_connection` | No | Connect directly to a single host |
Using a connection string
{
  "my-mongodb": {
    "type": "mongodb",
    "connection_string": "mongodb://analyst:secret@localhost:27017/analytics"
  }
}
Note
MongoDB uses MQL (MongoDB Query Language) instead of SQL. Queries are submitted as JSON objects via the `execute` method. Supported commands: `find`, `aggregate`, `countDocuments`, `distinct`, `insertOne`, `insertMany`, `updateOne`, `updateMany`, `deleteOne`, `deleteMany`, `createIndex`, `listIndexes`, `createCollection`, `dropCollection`, `ping`.
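As a sketch, a read-only `find` in MongoDB's command-document form might look like this (the collection, filter, and limit are hypothetical, and the exact wrapper expected by `execute` may differ):

```json
{
  "find": "orders",
  "filter": { "status": "shipped" },
  "limit": 10
}
```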
Blocked operators
For safety, `aggregate` pipelines block `$out` and `$merge` (write stages) and `$function` and `$accumulator` (arbitrary JavaScript execution). Use `find`, `countDocuments`, or safe `aggregate` stages for read-only analysis.
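For example, a grouping aggregation built only from read stages stays within the allowed set (collection and field names are hypothetical):

```json
{
  "aggregate": "orders",
  "pipeline": [
    { "$match": { "status": "shipped" } },
    { "$group": { "_id": "$customer_id", "orders": { "$sum": 1 } } }
  ]
}
```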
Server compatibility
The MongoDB driver (v6.x) supports MongoDB server versions 3.6 through 8.0, covering all releases from the last 3+ years.
ClickHouse
{
  "clickhouse-prod": {
    "type": "clickhouse",
    "host": "localhost",
    "port": 8123,
    "database": "analytics",
    "user": "default",
    "password": "{env:CLICKHOUSE_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `connection_string` | No | Full URL (alternative to individual fields, e.g. `http://user:pass@host:8123`) |
| `host` | No | Hostname (default: `localhost`) |
| `port` | No | HTTP port (default: `8123`) |
| `database` | No | Database name (default: `default`) |
| `user` | No | Username (default: `default`) |
| `password` | No | Password |
| `protocol` | No | `http` or `https` (default: `http`) |
| `request_timeout` | No | Request timeout in ms (default: `30000`) |
| `tls_ca_cert` | No | Path to CA certificate for TLS |
| `tls_cert` | No | Path to client certificate for mutual TLS |
| `tls_key` | No | Path to client key for mutual TLS |
| `clickhouse_settings` | No | Object of ClickHouse server settings |
ClickHouse Cloud
{
  "clickhouse-cloud": {
    "type": "clickhouse",
    "host": "abc123.us-east-1.aws.clickhouse.cloud",
    "port": 8443,
    "protocol": "https",
    "user": "default",
    "password": "{env:CLICKHOUSE_CLOUD_PASSWORD}",
    "database": "default"
  }
}
Using a connection string
{
  "clickhouse-prod": {
    "type": "clickhouse",
    "connection_string": "https://default:secret@my-ch.cloud:8443"
  }
}
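Settings in `clickhouse_settings` are passed through to the server. A sketch that caps query runtime and result size (the specific settings and values are illustrative):

```json
{
  "clickhouse-prod": {
    "type": "clickhouse",
    "host": "localhost",
    "user": "default",
    "password": "{env:CLICKHOUSE_PASSWORD}",
    "clickhouse_settings": {
      "max_execution_time": 60,
      "max_result_rows": 100000
    }
  }
}
```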
Server compatibility
The ClickHouse driver supports ClickHouse server versions 23.3 and later, covering all non-EOL releases. This includes LTS releases 23.8, 24.3, 24.8, and all stable releases through the current version.
Oracle
{
  "oracle-prod": {
    "type": "oracle",
    "host": "localhost",
    "port": 1521,
    "service_name": "ORCL",
    "user": "analyst",
    "password": "{env:ORACLE_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `connection_string` | No | Full connect string (alternative to individual fields, e.g. `host:1521/ORCL`) |
| `host` | No | Hostname (default: `127.0.0.1`) |
| `port` | No | Port (default: `1521`) |
| `service_name` | No | Oracle service name (default: `ORCL`) |
| `database` | No | Alias for `service_name` |
| `user` | No | Username |
| `password` | No | Password |
Pure JavaScript driver
The Oracle driver uses `oracledb` in thin mode (pure JavaScript); no Oracle Instant Client installation is required.
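A sketch using a connect string in place of the individual host fields (values are placeholders; credentials presumably still go in `user` and `password`):

```json
{
  "oracle-prod": {
    "type": "oracle",
    "connection_string": "db.example.com:1521/ORCL",
    "user": "analyst",
    "password": "{env:ORACLE_PASSWORD}"
  }
}
```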
SQLite
{
  "dev-sqlite": {
    "type": "sqlite",
    "path": "./dev.sqlite"
  }
}
| Field | Required | Description |
|---|---|---|
| `path` | No | Database file path. Omit or use `":memory:"` for in-memory |
| `readonly` | No | Open in read-only mode (default: `false`) |
Note
SQLite uses Bun's built-in `bun:sqlite` driver. WAL journal mode is enabled automatically for writable databases.
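To query an existing file with no risk of writes, combine `path` with `readonly` (connection name and file path here are illustrative):

```json
{
  "sqlite-snapshot": {
    "type": "sqlite",
    "path": "./snapshot.sqlite",
    "readonly": true
  }
}
```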
SQL Server
{
  "sqlserver-prod": {
    "type": "sqlserver",
    "host": "localhost",
    "port": 1433,
    "database": "analytics",
    "user": "sa",
    "password": "{env:MSSQL_PASSWORD}"
  }
}
| Field | Required | Description |
|---|---|---|
| `host` | No | Hostname (default: `localhost`) |
| `port` | No | Port (default: `1433`) |
| `database` | No | Database name |
| `user` | No | Username |
| `password` | No | Password |
| `driver` | No | ODBC driver name (default: `ODBC Driver 18 for SQL Server`) |
| `azure_auth` | No | Use Azure AD authentication (default: `false`) |
| `trust_server_certificate` | No | Trust server certificate without validation (default: `false`) |
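A sketch of an Azure AD connection using the `azure_auth` flag; the hostname is a placeholder, and which credential fields you still need depends on your Azure AD setup:

```json
{
  "azure-sql": {
    "type": "sqlserver",
    "host": "myserver.database.windows.net",
    "database": "analytics",
    "azure_auth": true
  }
}
```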
Unsupported Databases
The following databases are not yet natively supported, but workarounds are available:
| Database | Workaround |
|---|---|
| Cassandra | Use the bash tool with `cqlsh` to query directly |
| CockroachDB | PostgreSQL-compatible; use `type: postgres` |
| TimescaleDB | PostgreSQL extension; use `type: postgres` |
SSH Tunneling
All warehouse types support SSH tunneling for connections behind a bastion host:
{
  "prod-via-bastion": {
    "type": "postgres",
    "host": "10.0.1.50",
    "database": "analytics",
    "user": "analyst",
    "password": "{env:PG_PASSWORD}",
    "ssh_host": "bastion.example.com",
    "ssh_port": 22,
    "ssh_user": "ubuntu",
    "ssh_auth_type": "key",
    "ssh_key_path": "~/.ssh/id_rsa"
  }
}
| Field | Required | Description |
|---|---|---|
| `ssh_host` | Yes | SSH bastion hostname |
| `ssh_port` | No | SSH port (default: `22`) |
| `ssh_user` | Yes | SSH username |
| `ssh_auth_type` | No | `"key"` or `"password"` |
| `ssh_key_path` | No | Path to SSH private key |
| `ssh_password` | No | SSH password |
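The password variant of the same tunnel uses `ssh_auth_type: "password"` with `ssh_password` (the env variable name is illustrative):

```json
{
  "prod-via-bastion": {
    "type": "postgres",
    "host": "10.0.1.50",
    "database": "analytics",
    "user": "analyst",
    "password": "{env:PG_PASSWORD}",
    "ssh_host": "bastion.example.com",
    "ssh_user": "ubuntu",
    "ssh_auth_type": "password",
    "ssh_password": "{env:SSH_PASSWORD}"
  }
}
```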
Auto-Discovery
The `/discover` command can automatically detect warehouse connections from:
| Source | Detection |
|---|---|
| dbt profiles | Searches for `profiles.yml` (see resolution order below) |
| Docker containers | Finds running PostgreSQL, MySQL, SQL Server, and ClickHouse containers |
| Environment variables | Scans for `SNOWFLAKE_ACCOUNT`, `PGHOST`, `DATABRICKS_HOST`, etc. |
dbt profiles.yml resolution order
When discovering dbt profiles, altimate checks the following locations in priority order and uses the first one found:
| Priority | Location | Description |
|---|---|---|
| 1 | Explicit path | If you pass a `path` parameter to the `dbt_profiles` tool |
| 2 | `DBT_PROFILES_DIR` env var | Standard dbt environment variable; set it to the directory containing your `profiles.yml` |
| 3 | Project-local `profiles.yml` | A `profiles.yml` in your dbt project root (next to `dbt_project.yml`) |
| 4 | `<home>/.dbt/profiles.yml` | The global default location (e.g. `~/.dbt/` on macOS/Linux, `%USERPROFILE%\.dbt\` on Windows) |
This means teams that keep `profiles.yml` in their project repo (a common pattern for CI/CD) will have it detected automatically, with no extra configuration needed.
# Option 1: Set the environment variable
export DBT_PROFILES_DIR=/path/to/your/project
# Option 2: Just put profiles.yml next to dbt_project.yml
# Copy from default location (macOS/Linux)
cp ~/.dbt/profiles.yml ./profiles.yml
altimate /discover
See Warehouse Tools for the full list of environment variable signals.
Testing Connections
After configuring a warehouse, verify it works:
> warehouse_test prod-snowflake
Testing connection to prod-snowflake (snowflake)...
✓ Connected successfully
Account: xy12345.us-east-1
User: analytics_user
Role: ANALYST_ROLE
Warehouse: COMPUTE_WH
Database: ANALYTICS
Post-Connection Suggestions
After you successfully connect a warehouse, altimate suggests next steps to help you get the most out of your connection. Suggestions are shown progressively based on what you've already done:
- Index your schemas — populate the schema cache for autocomplete and context-aware analysis
- Run SQL analysis — scan your query history for anti-patterns and optimization opportunities
- Inspect schema structure — review tables, columns, and relationships
- Check lineage — trace column-level data flow across your models
If altimate detects a dbt project in your workspace, it also recommends relevant dbt skills (`/dbt-develop`, `/dbt-troubleshoot`, `/dbt-analyze`).
Each suggestion is shown once per session — dismissing or acting on a suggestion removes it from the queue. You can also run a suggested action later via its corresponding tool or slash command.