Add the dependency to your project's `shard.yml`, under `dependencies` to use it in production, or under `development_dependencies` to use it in development.
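A minimal sketch of the dependency entry; the shard name and GitHub path are assumptions, so check the repository for the exact values.

```yaml
dependencies:
  # hypothetical entry; confirm the actual shard name and repository path
  placeos-init:
    github: placeos/init
```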
A set of scripts for the initialization of PlaceOS.
The scripts are methods wrapped in a sam.cr task interface; most accept named arguments, which are documented in the task list below.
Execute scripts as one-off container jobs.
```shell
# Initialize PostgreSQL database
docker-compose run --no-deps -it init task db:init host=$PG_HOST port=$PG_PORT db=$PG_DB user=$PG_USER password=$PG_PASSWORD

# Dump PostgreSQL database to the local filesystem
docker-compose run --no-deps -it init task db:dump host=$PG_HOST port=$PG_PORT db=$PG_DB user=$PG_USER password=$PG_PASSWORD

# Restore PostgreSQL database from a local filesystem dump
docker-compose run --no-deps -it init task db:restore path=DUMP_FILE_LOCATION host=$PG_HOST port=$PG_PORT db=$PG_DB user=$PG_USER password=$PG_PASSWORD

# Migrate a RethinkDB dump to the PostgreSQL database
docker-compose run --no-deps -it init task migrate:rethink_dump path=DUMP_FILE_LOCATION host=$PG_HOST port=$PG_PORT db=$PG_DB user=$PG_USER password=$PG_PASSWORD clean_before=true

# Create a set of placeholder records
docker-compose run --no-deps -it init task create:placeholder

# Create an Authority
docker-compose run --no-deps -it init task create:authority domain="localhost:8080"

# Create a backoffice application hosted on http://localhost:4200
docker-compose run --no-deps -it init task create:application \
  authority_id=<authority_id> \
  name="development" \
  base="http://localhost:4200" \
  redirect_uri="http://localhost:4200/backoffice/oauth-resp.html"

# Create a User
docker-compose run --no-deps -it init task create:user \
  authority_id="s0mek1nd4UUID" \
  email="support@place.tech" \
  username="burger" \
  password="burgerR00lz" \
  sys_admin=true \
  support=true

# Restore a database backup from S3
docker-compose run --no-deps -it init task restore:pg \
  pg_host=$PG_HOST \
  pg_port=$PG_PORT \
  pg_db=$PG_DB \
  pg_user=$PG_USER \
  pg_password=$PG_PASS \
  force_restore=$PG_FORCE_RESTORE \
  aws_region=$AWS_REGION \
  aws_s3_bucket=$AWS_S3_BUCKET \
  aws_s3_object=$AWS_S3_BUCKET \
  aws_key=$AWS_KEY \
  aws_secret=$AWS_SECRET

# Restore a database backup from the filesystem
docker-compose run --no-deps \
  -v /etc/placeos/pg_dump_2020-07-14T14_26_19.gz:/pg-dump.gz:Z \
  init task db:restore user=$PG_USER password=$PG_PASS db=$PG_DB path=/pg-dump.gz
```
The default entrypoint of the init container generates a User, an Authority, and an Application, configured by the environment variables below; a usage sketch follows the list.

- `email`: `PLACE_EMAIL`, required.
- `username`: `PLACE_USERNAME`, required.
- `password`: `PLACE_PASSWORD`, required.
- `application_name`: `PLACE_APPLICATION || "backoffice"`
- `domain`: `PLACE_DOMAIN || "localhost:8080"`
- `tls`: `PLACE_TLS == "true"`
- `auth_host`: `PLACE_AUTH_HOST || "auth"`
- `development`: `ENV == "development"`
- `backoffice_branch`: `PLACE_BACKOFFICE_BRANCH`, `build/prod` or `build/dev` depending on the environment.
- `backoffice_commit`: `PLACE_BACKOFFICE_COMMIT || "HEAD"`
`Dockerfile.pg-backup` builds a container that backs up the state of PostgreSQL to S3. By default, the backup runs at midnight every day. It is configured by the environment variables below; a build-and-run sketch follows the list.

- `cron`: `BACKUP_CRON || "0 0 * * *"`
- `pg_host`: `PG_HOST || "localhost"`
- `pg_port`: `PG_PORT || 5432`
- `pg_db`: `PG_DB`, required.
- `pg_user`: `PG_USER`, required.
- `pg_password`: `PG_PASS`, required.
- `aws_region`: `AWS_REGION`, required.
- `aws_key`: `AWS_KEY`, required.
- `aws_secret`: `AWS_SECRET`, required.
- `aws_s3_bucket`: `AWS_S3_BUCKET`, required.
- `aws_kms_key_id`: `AWS_KMS_KEY_ID`
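A minimal sketch of building and running the backup container standalone. The image tag is an assumption; in a compose deployment the equivalent service and its environment would normally live in `docker-compose.yml`:

```shell
# Build the backup image from Dockerfile.pg-backup (the tag is illustrative)
docker build -f Dockerfile.pg-backup -t placeos-pg-backup .

# Run it detached; BACKUP_CRON defaults to midnight every day
docker run -d \
  -e PG_HOST=$PG_HOST \
  -e PG_PORT=$PG_PORT \
  -e PG_DB=$PG_DB \
  -e PG_USER=$PG_USER \
  -e PG_PASS=$PG_PASS \
  -e AWS_REGION=$AWS_REGION \
  -e AWS_KEY=$AWS_KEY \
  -e AWS_SECRET=$AWS_SECRET \
  -e AWS_S3_BUCKET=$AWS_S3_BUCKET \
  placeos-pg-backup
```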
The available tasks and their named arguments are listed below; invocation sketches for tasks not shown earlier follow the list.

- `help`: List all defined tasks.
- `check:user`: Check for the existence of a user.
  - `domain`: The PlaceOS domain the user is associated with (e.g. `example.com`). Required.
  - `email`: Email of the user (e.g. `alice@example.com`). Required.
- `create:placeholders`: Creates a representative set of documents in RethinkDB.
- `create:authority`: Creates an Authority.
  - `domain`: Defaults to `PLACE_DOMAIN || "localhost:8080"`
  - `tls`: Defaults to `PLACE_TLS || false`
- `create:application`: Creates an Application.
  - `authority`: Authority ID. Required.
  - `base`: Defaults to `"http://localhost:8080"`
  - `name`: Defaults to `"backoffice"`
  - `redirect_uri`: Defaults to `"#{base}/#{name}/oauth-resp.html"`
  - `scope`: Defaults to `"public"`
- `create:user`: Creates a User.
  - `authority_id`: ID of the Authority. Required.
  - `email`: Email of the user. Required.
  - `username`: Username of the user. Required.
  - `password`: Password of the user. Required.
  - `sys_admin`: Defaults to `false`
  - `support`: Defaults to `false`
- `backup:pg`: Back up the PostgreSQL DB to S3.
  - `pg_host`: Defaults to `PG_HOST || "localhost"`
  - `pg_port`: Defaults to `PG_PORT || 5432`
  - `pg_db`: Defaults to `PG_DB`, or the `postgres` database
  - `pg_user`: Defaults to `PG_USER`, or `postgres`
  - `pg_password`: Defaults to `PG_PASS`
  - `aws_s3_bucket`: Defaults to `AWS_S3_BUCKET`, required.
  - `aws_region`: Defaults to `AWS_REGION`, required.
  - `aws_key`: Defaults to `AWS_KEY`, required.
  - `aws_secret`: Defaults to `AWS_SECRET`, required.
  - `aws_kms_key_id`: Defaults to `AWS_KMS_KEY_ID`
- `secret:rotate_server_secret`: Rotate from the old server secret to the current value of `PLACE_SERVER_SECRET`.
  - `old_secret`: The previous value of `PLACE_SERVER_SECRET`. Required.
- `restore:pg`: Restore the PostgreSQL DB from S3.
  - `pg_host`: Defaults to `PG_HOST || "localhost"`
  - `pg_port`: Defaults to `PG_PORT || 5432`
  - `pg_db`: Defaults to `PG_DB`, or the `postgres` database
  - `pg_user`: Defaults to `PG_USER`, or `postgres`
  - `pg_password`: Defaults to `PG_PASS`
  - `force_restore`: Defaults to `PG_FORCE_RESTORE || false`
  - `aws_s3_object`: Object to restore the DB from. Defaults to `AWS_S3_BUCKET`, required.
  - `aws_s3_bucket`: Defaults to `AWS_S3_BUCKET`, required.
  - `aws_region`: Defaults to `AWS_REGION`, required.
  - `aws_key`: Defaults to `AWS_KEY`, required.
  - `aws_secret`: Defaults to `AWS_SECRET`, required.
  - `aws_kms_key_id`: Defaults to `AWS_KMS_KEY_ID`
- `drop`: Drops the Elasticsearch and PostgreSQL DBs (runs `drop:elastic` and `drop:db`) via environmental configuration.
- `drop:elastic`: Deletes all Elasticsearch indices.
  - `host`: Defaults to `ES_HOST || "localhost"`
  - `port`: Defaults to `ES_PORT || 9200`
- `drop:db`: Drops all PostgreSQL DB tables.
  - `db`: Defaults to `PG_DB || "postgres"`
  - `host`: Defaults to `PG_HOST || "localhost"`
  - `port`: Defaults to `PG_PORT || 5432`
  - `user`: Defaults to `PG_USER || "postgres"`
  - `password`: Defaults to `PG_PASS || ""`
The task implementations live under `src/tasks`; the sam.cr interface that wraps them is `src/sam.cr`.