Parity pass on the other three language templates. Same guarantees as `go/`: survive server restart, client restart, half-open TCP, and long outages; rejoin and drain the prime-side backlog on reconnect, without the user writing any of this in `process.*`.

`python/main.py`:
- `grpc.keepalive_time_ms=10000`, `keepalive_timeout_ms=3000`, `keepalive_permit_without_calls=1` on the channel. Half-open TCP is detected within ~13s instead of the OS default of ~2h.
- Exponential backoff with jitter; `max_backoff_seconds` config ceiling (default 120). The attempts counter resets after a session runs healthy for 60s, so transient restarts don't escalate the delay.
- `chain_id` added as a required config field and sent as the `x-chain-id` gRPC metadata header (prime rejects streams without it).

`typescript/src/main.ts`:
- Same keepalive options on the `@grpc/grpc-js` client.
- Same exponential backoff + jitter logic.
- `chain_id` added to `Config` + metadata.

`bash/`:
- Config + README updated. The bash template uses Python's `main.py` as its runtime, so the behavioural changes above flow through without a separate `main` per language.

Docs: each README gains a "Durability guarantees" section so contract authors see the invariants without reading the runtime code.
# Bash Smart Contract Template
A Bash-based smart contract client for Dragonchain Prime that connects via gRPC.

This template uses a thin Python gRPC infrastructure layer to handle the network protocol, while your smart contract logic lives entirely in `process.sh`.
## Prerequisites

- Bash 4.0+
- Python 3.10+ (for the gRPC infrastructure)
- pip
- jq (for JSON processing in bash)
## Quick Start

1. Copy this template to create your smart contract:

   ```bash
   cp -r bash /path/to/my-smart-contract
   cd /path/to/my-smart-contract
   ```

2. Set up the environment:

   ```bash
   make setup
   source venv/bin/activate
   ```

   Or without make:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```

3. Generate the protobuf code:

   ```bash
   make proto
   ```

4. Configure your connection by editing `config.yaml`:

   ```yaml
   server_address: "your-dragonchain-server:50051"
   chain_id: "your-chain-public-id"
   smart_contract_id: "your-smart-contract-id"
   api_key: "your-api-key"
   ```

5. Implement your smart contract logic in `process.sh`.

6. Run:

   ```bash
   python main.py --config config.yaml
   ```
## Configuration

| Field | Description | Default |
|---|---|---|
| `server_address` | gRPC server address | Required |
| `chain_id` | Public chain ID the smart contract is registered on (sent as `x-chain-id` metadata) | Required |
| `smart_contract_id` | Your smart contract ID | Required |
| `api_key` | API key for authentication | Required |
| `use_tls` | Enable TLS encryption | `false` |
| `tls_cert_path` | Path to TLS certificate | - |
| `num_workers` | Concurrent transaction processors | 10 |
| `reconnect_delay_seconds` | Base delay for exponential backoff between reconnect attempts | 3 |
| `max_backoff_seconds` | Ceiling for the exponential backoff | 120 |
| `max_reconnect_attempts` | Max reconnect attempts (0 = infinite, recommended) | 0 |
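Putting the table together, a `config.yaml` with every field spelled out might look like this (values are placeholders; the optional fields are shown at their defaults):

```yaml
server_address: "your-dragonchain-server:50051"  # required
chain_id: "your-chain-public-id"                 # required; sent as x-chain-id metadata
smart_contract_id: "your-smart-contract-id"      # required
api_key: "your-api-key"                          # required
use_tls: false
# tls_cert_path: "/path/to/cert.pem"             # only needed when use_tls is true
num_workers: 10
reconnect_delay_seconds: 3
max_backoff_seconds: 120
max_reconnect_attempts: 0                        # 0 = retry forever (recommended)
```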
## Durability guarantees

Provided by the Python `main.py` runtime; no work for you.

- **Server restart, update, crash, or network blip** → the runtime auto-reconnects and resumes processing. Transactions observed while the stream was down stay queued on the Dragonchain Prime side and are delivered (oldest first) on reconnect.
- **Client restart or long outage** → when this process comes back up (minutes, hours, or months later), it rejoins the stream and prime re-delivers every still-pending transaction that should have invoked it.
- **Half-open TCP** (silent peer, resumed laptop, corporate NAT dropping idle flows) is detected within ~13 seconds via gRPC keepalive and triggers a reconnect. No dangling ghost streams.
- **Reconnect storms are avoided**: exponential backoff with jitter means many clients reconnecting after a server restart don't all slam `accept()` at the same instant.

These are invariants of the runtime; you do not add any of this in `process.sh`.
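The backoff behaviour can be illustrated with a short sketch. This is not the runtime's actual code, just the schedule it follows with the default `reconnect_delay_seconds=3` and `max_backoff_seconds=120`:

```bash
#!/usr/bin/env bash
# Sketch only: the delay doubles on each failed attempt, capped at the
# ceiling, and a random "full jitter" sleep in [0, delay] spreads clients apart.
base=3        # reconnect_delay_seconds
ceiling=120   # max_backoff_seconds
for attempt in 1 2 3 4 5 6 7; do
  delay=$(( base * (1 << (attempt - 1)) ))   # 3, 6, 12, 24, 48, 96, 192...
  if (( delay > ceiling )); then delay=$ceiling; fi
  sleep_for=$(( RANDOM % (delay + 1) ))      # randomized actual sleep
  echo "attempt $attempt: cap ${delay}s, sleeping ${sleep_for}s"
done
```

Once a session has run healthy for 60 seconds, the attempt counter resets, so a one-off restart does not inherit a long delay.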
## Implementing Your Smart Contract

Edit `process.sh`. The script receives the transaction JSON as its first argument (`$1`) and must output a JSON result to stdout.

### Interface

**Input:**
- `$1` - Transaction JSON string
- Environment variables - server env vars and secrets are exported

**Output (stdout):**

```json
{
  "data": { "your": "result" },
  "output_to_chain": true,
  "error": ""
}
```

**Logs (stderr):** anything written to stderr is captured and returned as logs.

**Exit code:** 0 = success, non-zero = error (stderr is used as the error message).
### Example

```bash
#!/usr/bin/env bash
set -euo pipefail

TX_JSON="$1"

# Parse transaction fields with jq
TXN_ID=$(echo "$TX_JSON" | jq -r '.header.txn_id')
TXN_TYPE=$(echo "$TX_JSON" | jq -r '.header.txn_type')
PAYLOAD=$(echo "$TX_JSON" | jq -c '.payload')

# Access environment variables
SC_NAME="${SMART_CONTRACT_NAME:-}"
DC_ID="${DRAGONCHAIN_ID:-}"

# Access secrets
MY_SECRET="${SC_SECRET_MY_SECRET:-}"

# Log to stderr
echo "Processing transaction $TXN_ID" >&2

# Process based on payload action
ACTION=$(echo "$TX_JSON" | jq -r '.payload.action // empty')
case "$ACTION" in
  create)
    RESULT='{"status": "created"}'
    ;;
  update)
    RESULT='{"status": "updated"}'
    ;;
  *)
    RESULT='{"status": "unknown"}'
    ;;
esac

# Output result as JSON
jq -n --argjson result "$RESULT" '{
  "data": $result,
  "output_to_chain": true,
  "error": ""
}'
```
## Transaction Structure

The transaction JSON passed to your script has this format:

```json
{
  "version": "1",
  "header": {
    "tag": "my-tag",
    "dc_id": "dragonchain-id",
    "txn_id": "transaction-id",
    "block_id": "block-id",
    "txn_type": "my-type",
    "timestamp": "2024-01-01T00:00:00Z",
    "invoker": "user-id"
  },
  "payload": {
    "your": "custom data"
  }
}
```
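For reference, here is how the fields above map to `jq` expressions inside `process.sh` (the transaction values are made up for the demo):

```bash
#!/usr/bin/env bash
# Hypothetical transaction matching the structure above
TX_JSON='{"version":"1","header":{"txn_id":"txn-001","txn_type":"my-type","invoker":"user-1"},"payload":{"action":"create","amount":5}}'

TXN_ID=$(echo "$TX_JSON" | jq -r '.header.txn_id')            # txn-001
INVOKER=$(echo "$TX_JSON" | jq -r '.header.invoker')          # user-1
ACTION=$(echo "$TX_JSON" | jq -r '.payload.action // empty')  # create
PAYLOAD=$(echo "$TX_JSON" | jq -c '.payload')                 # {"action":"create","amount":5}

echo "$TXN_ID invoked by $INVOKER: $ACTION"
```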
## Available Environment Variables

| Variable | Description |
|---|---|
| `TZ` | Timezone |
| `ENVIRONMENT` | Deployment environment |
| `INTERNAL_ID` | Internal identifier |
| `DRAGONCHAIN_ID` | Dragonchain ID |
| `DRAGONCHAIN_ENDPOINT` | Dragonchain API endpoint |
| `SMART_CONTRACT_ID` | This smart contract's ID |
| `SMART_CONTRACT_NAME` | This smart contract's name |
| `SC_ENV_*` | Custom environment variables |
## Secrets

Secrets are exported as environment variables with keys prefixed by `SC_SECRET_`.
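For example, a secret registered under the (hypothetical) name `DB_PASSWORD` would arrive as `SC_SECRET_DB_PASSWORD`. The `export` below only simulates what the runtime does before invoking your script:

```bash
#!/usr/bin/env bash
# Demo only: simulate the runtime exporting a secret named DB_PASSWORD.
export SC_SECRET_DB_PASSWORD="example-value"

# Inside process.sh, read it with a safe default and fail loudly if absent.
DB_PASS="${SC_SECRET_DB_PASSWORD:-}"
if [[ -z "$DB_PASS" ]]; then
  echo "required secret DB_PASSWORD is not set" >&2
  exit 1
fi
echo "secret loaded (${#DB_PASS} characters)" >&2
```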
## Project Structure

```
.
├── main.py             # gRPC infrastructure (do not modify)
├── process.sh          # Your smart contract logic (modify this)
├── proto/
│   └── remote_sc.proto # gRPC service definition
├── config.yaml         # Configuration file
├── requirements.txt    # Python dependencies (for infrastructure)
├── Makefile            # Build commands
└── README.md           # This file
```
## File Descriptions

- `process.sh` - Your smart contract logic. This is the only file you need to modify for most use cases.
- `main.py` - gRPC client infrastructure that invokes `process.sh` for each transaction. You typically don't need to modify this file.
## Make Commands

```bash
make setup   # Create venv and install dependencies
make proto   # Generate Python code from proto files
make run     # Run with default config
make test    # Syntax check and sample run of process.sh
make clean   # Remove generated files and venv
make deps    # Install dependencies (no venv)
make check   # Verify required tools (python3, bash, jq)
make format  # Format process.sh with shfmt (if installed)
```
## Concurrent Processing

The client uses a thread pool to process multiple transactions concurrently. Each worker invokes a separate instance of `process.sh`. The number of workers is configurable via `num_workers` in the config file.
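Because each worker is a separate `process.sh` process, any shared state on disk must be guarded against concurrent writes. A sketch using `flock` (util-linux; the state-file path and record format are illustrative):

```bash
#!/usr/bin/env bash
# Each append takes an exclusive lock so concurrent workers can't interleave.
STATE_FILE=$(mktemp)
LOCK_FILE="${STATE_FILE}.lock"

append_txn() {
  (
    flock -x 9                 # block until we hold the exclusive lock
    echo "$1" >> "$STATE_FILE"
  ) 9>"$LOCK_FILE"
}

# Simulate five workers writing at the same time.
for i in 1 2 3 4 5; do
  append_txn "txn-$i" &
done
wait
lines=$(wc -l < "$STATE_FILE")
echo "wrote $lines records"
```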
## Error Handling

- Return errors by setting the `error` field in your JSON output, or exit with a non-zero code
- Anything written to stderr is captured as logs
- The client automatically handles reconnection on connection failures
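The two error paths from the list above, sketched for a hypothetical payload that is missing a required `amount` field:

```bash
#!/usr/bin/env bash
PAYLOAD='{"action":"transfer"}'   # hypothetical payload, no "amount"

# jq -e exits non-zero when the expression yields null or false
if ! echo "$PAYLOAD" | jq -e '.amount' >/dev/null; then
  # Option 1: report a structured error in the JSON result
  RESULT=$(jq -n '{data: {}, output_to_chain: false, error: "payload missing amount"}')
  echo "$RESULT"

  # Option 2 (alternative): log to stderr and exit non-zero; the runtime
  # then uses stderr as the error message:
  #   echo "payload missing amount" >&2
  #   exit 1
fi
```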
## Docker

Example Dockerfile:

```dockerfile
FROM python:3.11-slim

RUN apt-get update && apt-get install -y --no-install-recommends jq bash && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
RUN chmod +x process.sh

RUN python -m grpc_tools.protoc \
    -I./proto \
    --python_out=. \
    --grpc_python_out=. \
    proto/remote_sc.proto

CMD ["python", "main.py", "--config", "config.yaml"]
```
## License

[Your License Here]