# Bash Smart Contract Template

A Bash-based smart contract client for Dragonchain Prime that connects via gRPC.

This template uses a thin Python gRPC infrastructure layer to handle the network protocol, while your smart contract logic lives entirely in `process.sh`.
## Prerequisites

- Bash 4.0+
- Python 3.10+ (for gRPC infrastructure)
- pip
- jq (for JSON processing in bash)
## Quick Start

1. Copy this template to create your smart contract:

   ```bash
   cp -r bash /path/to/my-smart-contract
   cd /path/to/my-smart-contract
   ```

2. Set up the environment:

   ```bash
   make setup
   source venv/bin/activate
   ```

   Or without make:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```

3. Generate the protobuf code:

   ```bash
   make proto
   ```

4. Configure your connection by editing `config.yaml`:

   ```yaml
   server_address: "your-dragonchain-server:50051"
   smart_contract_id: "your-smart-contract-id"
   api_key: "your-api-key"
   ```

5. Implement your smart contract logic in `process.sh`.

6. Run:

   ```bash
   python main.py --config config.yaml
   ```
## Configuration

| Field | Description | Default |
|---|---|---|
| `server_address` | gRPC server address | Required |
| `smart_contract_id` | Your smart contract ID | Required |
| `api_key` | API key for authentication | Required |
| `use_tls` | Enable TLS encryption | `false` |
| `tls_cert_path` | Path to TLS certificate | - |
| `num_workers` | Concurrent transaction processors | `10` |
| `reconnect_delay_seconds` | Delay between reconnection attempts | `5` |
| `max_reconnect_attempts` | Max reconnect attempts (`0` = infinite) | `0` |
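Pulling these options together, a complete `config.yaml` for a TLS-enabled deployment might look like this sketch (the hostname and certificate path are placeholders):

```yaml
server_address: "dragonchain.example.com:50051"
smart_contract_id: "my-smart-contract-id"
api_key: "my-api-key"
use_tls: true
tls_cert_path: "/etc/ssl/certs/dragonchain.pem"
num_workers: 10
reconnect_delay_seconds: 5
max_reconnect_attempts: 0
```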
## Implementing Your Smart Contract

Edit `process.sh`. The script receives the transaction JSON as its first argument (`$1`) and must output a JSON result to stdout.

### Interface

**Input:**

- `$1` - Transaction JSON string
- Environment variables - Server env vars and secrets are exported

**Output (stdout):**

```json
{
  "data": { "your": "result" },
  "output_to_chain": true,
  "error": ""
}
```

**Logs (stderr):** Anything written to stderr is captured and returned as logs.

**Exit code:** `0` = success, non-zero = error (stderr used as error message).
### Example

```bash
#!/usr/bin/env bash
set -euo pipefail

TX_JSON="$1"

# Parse transaction fields with jq
TXN_ID=$(echo "$TX_JSON" | jq -r '.header.txn_id')
TXN_TYPE=$(echo "$TX_JSON" | jq -r '.header.txn_type')
PAYLOAD=$(echo "$TX_JSON" | jq -c '.payload')

# Access environment variables
SC_NAME="${SMART_CONTRACT_NAME:-}"
DC_ID="${DRAGONCHAIN_ID:-}"

# Access secrets
MY_SECRET="${SC_SECRET_MY_SECRET:-}"

# Log to stderr
echo "Processing transaction $TXN_ID" >&2

# Process based on payload action
ACTION=$(echo "$TX_JSON" | jq -r '.payload.action // empty')
case "$ACTION" in
  create)
    RESULT='{"status": "created"}'
    ;;
  update)
    RESULT='{"status": "updated"}'
    ;;
  *)
    RESULT='{"status": "unknown"}'
    ;;
esac

# Output result as JSON
jq -n --argjson result "$RESULT" '{
  "data": $result,
  "output_to_chain": true,
  "error": ""
}'
```
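You can exercise the interface locally without the gRPC infrastructure by invoking a script by hand the same way `main.py` does: transaction JSON as `$1`, stdout as the result, stderr as logs. The stand-in script and file paths below are illustrative:

```shell
# Write a minimal stand-in contract (illustrative; your real logic lives in process.sh)
cat > /tmp/demo_process.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
TXN_ID=$(echo "$1" | jq -r '.header.txn_id')
echo "Processing transaction $TXN_ID" >&2
jq -n '{"data":{"status":"ok"},"output_to_chain":true,"error":""}'
EOF
chmod +x /tmp/demo_process.sh

# Invoke it the way main.py would: JSON as $1, stderr captured separately as logs
TX='{"version":"1","header":{"txn_id":"t-1","txn_type":"demo"},"payload":{}}'
RESULT=$(/tmp/demo_process.sh "$TX" 2>/tmp/demo_logs.txt)

# The result envelope arrives on stdout; the log line on stderr
STATUS=$(echo "$RESULT" | jq -r '.data.status')
echo "result status: $STATUS"
cat /tmp/demo_logs.txt
```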
## Transaction Structure

The transaction JSON passed to your script has this format:

```json
{
  "version": "1",
  "header": {
    "tag": "my-tag",
    "dc_id": "dragonchain-id",
    "txn_id": "transaction-id",
    "block_id": "block-id",
    "txn_type": "my-type",
    "timestamp": "2024-01-01T00:00:00Z",
    "invoker": "user-id"
  },
  "payload": {
    "your": "custom data"
  }
}
```
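Because `jq -e` exits non-zero when a filter yields `false` or `null`, a script can fail fast when a header field it depends on is missing. A minimal sketch against a sample transaction (which fields you treat as required is up to your contract):

```shell
TX='{"version":"1","header":{"txn_id":"t-1","txn_type":"my-type","block_id":"b-1"},"payload":{"your":"custom data"}}'

# jq -e sets the exit status from the filter result, so the guard works in a loop
for field in txn_id txn_type block_id; do
  echo "$TX" | jq -e --arg f "$field" '.header[$f] != null' >/dev/null \
    || { echo "missing header.$field" >&2; exit 1; }
done
echo "transaction header OK"
```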
## Available Environment Variables

| Variable | Description |
|---|---|
| `TZ` | Timezone |
| `ENVIRONMENT` | Deployment environment |
| `INTERNAL_ID` | Internal identifier |
| `DRAGONCHAIN_ID` | Dragonchain ID |
| `DRAGONCHAIN_ENDPOINT` | Dragonchain API endpoint |
| `SMART_CONTRACT_ID` | This smart contract's ID |
| `SMART_CONTRACT_NAME` | This smart contract's name |
| `SC_ENV_*` | Custom environment variables |
## Secrets

Secrets are exported as environment variables with keys prefixed by `SC_SECRET_`.
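A sketch of reading and enumerating secrets inside `process.sh` (the secret name and value here are illustrative, not provided by the platform):

```shell
# Simulate a secret exported by the infrastructure (name/value illustrative)
export SC_SECRET_API_TOKEN="s3cr3t"

# Read a specific secret, defaulting to empty if it is unset
TOKEN="${SC_SECRET_API_TOKEN:-}"
echo "token length: ${#TOKEN}"

# Enumerate which secrets are present without printing their values
SECRET_NAMES=$(env | grep -o '^SC_SECRET_[A-Za-z0-9_]*' || true)
echo "$SECRET_NAMES"
```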
Project Structure
.
├── main.py # gRPC infrastructure (do not modify)
├── process.sh # Your smart contract logic (modify this)
├── proto/
│ └── remote_sc.proto # gRPC service definition
├── config.yaml # Configuration file
├── requirements.txt # Python dependencies (for infrastructure)
├── Makefile # Build commands
└── README.md # This file
### File Descriptions

- `process.sh` - Your smart contract logic. This is the only file you need to modify for most use cases.
- `main.py` - gRPC client infrastructure that invokes `process.sh` for each transaction. You typically don't need to modify this file.
## Make Commands

```bash
make setup   # Create venv and install dependencies
make proto   # Generate Python code from proto files
make run     # Run with default config
make test    # Syntax check and sample run of process.sh
make clean   # Remove generated files and venv
make deps    # Install dependencies (no venv)
make check   # Verify required tools (python3, bash, jq)
make format  # Format process.sh with shfmt (if installed)
```
## Concurrent Processing

The client uses a thread pool to process multiple transactions concurrently. Each worker invokes a separate instance of `process.sh`. The number of workers is configurable via `num_workers` in the config file.
## Error Handling

- Return errors by setting the `error` field in your JSON output, or exit with a non-zero code
- Anything written to stderr is captured as logs
- The client automatically handles reconnection on connection failures
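The two failure paths look like this in practice (the error messages below are illustrative):

```shell
# Path 1: handled failure - exit 0 but populate the error field in the envelope
HANDLED=$(jq -n '{"data": {}, "output_to_chain": false, "error": "balance check failed"}')
echo "$HANDLED"

# Path 2: hard failure - non-zero exit; whatever went to stderr becomes the error message
fail_demo() {
  echo "unrecoverable: bad payload" >&2
  return 2
}
if ! fail_demo 2>/tmp/err_demo.txt; then
  echo "exit code signalled failure; stderr was: $(cat /tmp/err_demo.txt)"
fi
```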
## Docker

Example Dockerfile:

```dockerfile
FROM python:3.11-slim

RUN apt-get update && apt-get install -y --no-install-recommends jq bash \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
RUN chmod +x process.sh

RUN python -m grpc_tools.protoc \
    -I./proto \
    --python_out=. \
    --grpc_python_out=. \
    proto/remote_sc.proto

CMD ["python", "main.py", "--config", "config.yaml"]
```
## License

[Your License Here]