Getting Started
This article guides you through four essential steps to get your system ready to deploy the Confidential Computing components.
- Software Setup – Install Docker, log into the GitLab registry, and ensure required command-line utilities are available.
- Storage Configuration – Create the necessary folder structure for configuration files and persistent data.
- Key Generation – Generate cryptographic keys for secure authentication.
- Network Setup – Configure network access and communication between deployed services.
Once you've completed the setup, you can proceed to the deployment instructions specific to your role—whether you are a Data Provider, Analyst, or Node Operator.
Required Software Setup
The components required for deployment are provided as Docker images. To be able to pull and use the Docker images, the following must be done:
- Install Docker.
- Log into the GitLab registry to pull images. This can be done by running the command:
docker login registry.gitlab.com
Some components require ECDSA keys. To generate them, ensure the following command-line utilities are available:
- OpenSSL
- awk
- tr
- partisia-contract tool
To install the partisia-contract tool, first install Rust, and then install the partisia-contract tool.
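If Rust is not already available, both can be installed from the command line. A minimal sketch, assuming the tool is distributed as the cargo-partisia-contract crate:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
cargo install cargo-partisia-contract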
awk and tr are included in most standard Linux installations.
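You can quickly verify that the required utilities are available on your PATH, for example:
for tool in openssl awk tr; do command -v "$tool" || echo "$tool missing"; done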
Windows users
On Windows, awk and tr are not available by default in the Command Prompt or PowerShell. To get them, you can use Git Bash for Windows, which provides a Unix-like shell environment, including awk and tr. You can download it here.
Storage Configuration
Create the following folders and files. The files are empty for now and will be filled out as you follow the guide.
The folders will be mapped into the Docker containers, so that configuration files and persistent storage are available for the services to use.
On Linux, the folders could live, for example, in /opt/.
For the Data Provider:
- docker-compose.yml
- database/
  - storage/
  - init-script/
    - database-init.sql
- authentication/
  - conf/
    - server.json
  - storage/
  - logs/
- key-management/
  - conf/
    - server.json
  - storage/
  - logs/
- backend/
  - conf/
    - server.json
  - storage/
  - logs/
- data-provider-frontend/
  - conf/
    - config.js
    - logo.png
- data-runway/
  - conf/
    - server.json
  - storage/
  - logs/
It is crucial that the storage folder for the Data Runway is created in a secure location with appropriate access control, ensuring no data can be leaked.
The logo.png file in the data provider frontend should be an image file with the logo of your organization.
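As an illustration, the Data Provider layout above can be created with a few shell commands. This is only a sketch; the base folder /opt/confidential-computing is a hypothetical example, so adjust the path to your environment:
mkdir -p /opt/confidential-computing && cd /opt/confidential-computing
mkdir -p database/storage database/init-script
mkdir -p authentication/conf authentication/storage authentication/logs
mkdir -p key-management/conf key-management/storage key-management/logs
mkdir -p backend/conf backend/storage backend/logs
mkdir -p data-provider-frontend/conf
mkdir -p data-runway/conf data-runway/storage data-runway/logs
touch docker-compose.yml database/init-script/database-init.sql
touch authentication/conf/server.json key-management/conf/server.json backend/conf/server.json data-runway/conf/server.json
touch data-provider-frontend/conf/config.js
The Analyst and Node Operator layouts below can be created in the same way.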
For the Analyst:
- docker-compose.yml
- database/
  - storage/
  - init-script/
    - database-init.sql
- authentication/
  - conf/
    - server.json
  - storage/
  - logs/
- key-management/
  - conf/
    - server.json
  - storage/
  - logs/
- backend/
  - conf/
    - server.json
  - storage/
  - logs/
- analyst-frontend/
  - conf/
    - config.js
    - logo.png
The logo.png file in the analyst frontend should be an image file with the logo of your organization.
For the Node Operator:
- docker-compose.yml
- blockchain-node/
  - conf/
    - server.json
    - genesis.zip
  - storage/
  - logs/
- execution-container/
  - conf/
    - server.json
  - storage/
  - logs/
The genesis.zip file is created in the genesis block step.
The deployment instructions for each component specify an example Docker Compose configuration. All example configuration is meant to go in the single docker-compose.yml file.
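When you later deploy, the assembled file can be validated and the services started from the folder that contains it, for example:
docker compose config
docker compose up -d
The first command resolves and prints the configuration, which is a quick way to catch syntax errors before starting anything.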
Key Generation
The authentication service, data runway, blockchain nodes, and execution containers use ECDSA cryptographic keys on the SECP256r1 (P-256) curve. Here we outline which keys are needed for each role, and how to generate them.
The keys will be used in the server.json configuration files, and the relevant instructions are also given for the respective components.
The Authentication Service uses a Base64 encoding of a keypair, with the private key in PKCS8 format and the public key in X509 format.
To generate a private key and output it to a file tokenPrivateKey.pem in the working directory, run the command:
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out tokenPrivateKey.pem
To extract the public key in X509 format from the private key and output it to a file tokenPublicKey.pem in the working directory, run the command:
openssl pkey -in tokenPrivateKey.pem -pubout -out tokenPublicKey.pem
These keys are used when configuring the authentication, key management and backend services, but should be deleted afterward.
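If a configuration field expects the Base64 string itself rather than a path to the PEM file, one way to produce a single-line value is to drop the PEM header and footer lines and join the rest, which is where awk and tr come in. A sketch:
awk '!/-----/' tokenPrivateKey.pem | tr -d '\n'
awk '!/-----/' tokenPublicKey.pem | tr -d '\n'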
The data runway, a component unique to the data provider, requires a key in addition to the one just created.
The runway requires only the integer encoding of a secret key, as base-16. This can be generated and saved to a file runwayPrivateKey.pk in the current working directory using the partisia-contract tool:
cargo pbc account create --file=runwayPrivateKey.pk --net=mainnet
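Since this key protects the data handled by the runway, it is sensible to restrict access to the file, for example:
chmod 600 runwayPrivateKey.pk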
As a node operator, you must generate a keypair used for a TLS connection between the blockchain node and the execution container.
To generate a keypair and save the private key to a file externalListenerKey.pk in the current working directory, run the command:
cargo pbc account create --file=externalListenerKey.pk --net=mainnet
The Base64-encoded public key is printed to the terminal, but it can also be read from the private key file by running:
cargo pbc account publickey externalListenerKey.pk
The blockchain node additionally needs a private key to produce blocks. To generate a private key and save it to a file blockProducerKey.pk in the current working directory, run the command:
cargo pbc account create --file=blockProducerKey.pk --net=mainnet
The execution container additionally needs private keys to send transactions, run TCP connections, and produce preprocessing material.
To generate the three needed private keys and save them to the files transactionPrivateKey.pk, tcpPrivateKey.pk, and preprocessingKey.pk in the current working directory, run the following commands:
cargo pbc account create --file=transactionPrivateKey.pk --net=mainnet
cargo pbc account create --file=tcpPrivateKey.pk --net=mainnet
cargo pbc account create --file=preprocessingKey.pk --net=mainnet
Network Setup
Deployed components communicate over the network. Data provider and analyst components only need to be open for communication within the organization's network, as other organizations should not communicate with them.
In the following sections, example Docker Compose configurations are provided for the Docker images. In the provided examples, we assume the images share a Docker network, and can communicate using the Docker container names.
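If all services are defined in the single docker-compose.yml file, Docker Compose creates a default network for them, so they can reach each other by container name. If you run containers individually instead, a shared network can be created up front; a sketch (the name cc-network is only an illustration):
docker network create cc-network
Containers started with --network cc-network can then reach each other by name.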
If you do not wish to use a shared Docker network, or you want to provide specific hostnames for each service, a hostname and an open port must be configured for each of the Data Provider services:
- database
  - hostname (Default Docker: `cc-postgres`)
  - open port (Default: `5432`)
- authentication
  - hostname (Default Docker: `cc-authentication`)
  - open port (Default: `8061`)
- key management
  - hostname (Default Docker: `cc-key-management`)
  - open port (Default: `8051`)
- backend
  - hostname (Default Docker: `cc-backend`)
  - open port (Default: `8031`)
- data provider frontend
  - hostname (Default Docker: `cc-data-provider-frontend`)
  - open port (Default: `8084`)
- data runway
  - hostname (Default Docker: `cc-data-runway`)
  - open port (Default: `8091`)
If you do not wish to use a shared Docker network, or you want to provide specific hostnames for each service, a hostname and an open port must be configured for each of the Analyst services:
- database
  - hostname (Default Docker: `cc-postgres`)
  - open port (Default: `5432`)
- authentication
  - hostname (Default Docker: `cc-authentication`)
  - open port (Default: `8061`)
- key management
  - hostname (Default Docker: `cc-key-management`)
  - open port (Default: `8051`)
- backend
  - hostname (Default Docker: `cc-backend`)
  - open port (Default: `8031`)
- analyst frontend
  - hostname (Default Docker: `cc-analyst-frontend`)
  - open port (Default: `8080`)
All participating organizations must be able to reach the deployed Blockchain Node and Execution Container.
Thus, hostnames and ports must be configured to comply with the following:
- blockchain node
  - Externally reachable hostname (e.g. `blockchain-node.organization.host`).
  - Open REST port (Default: `8041`). Used by data provider and analyst components.
  - Open TCP Flooding port (Default: `8999`). Used by other blockchain nodes.
  - Open TCP Listener port (Default: `9111`). Used by execution containers.
- execution container
  - Externally reachable hostname (e.g. `execution-container.organization.host`).
  - Open REST port (Default: `8071`). Used by data provider and analyst components.
  - Open TCP Communication port (Default: `8999`). Used for communication between execution containers.
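Once the blockchain node and execution container are running, the externally reachable ports can be checked from a machine outside your network, for example with netcat (assuming a netcat build that supports -vz, and using the example hostnames above):
nc -vz blockchain-node.organization.host 8041
nc -vz execution-container.organization.host 8071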
What's Next
After completing this guide, you will be ready to start deploying the different components, depending on your role:
- Data provider deployment guide
- Analyst deployment guide
- Node operator deployment guide