Administration

JUICE module operations and troubleshooting guide.

Common operations are covered below; see the FAQ for more.


Initialize Instance

./juice.yml -l <host>                    # init all juice instances on the host
./juice.yml -l <host> -e fsname=<name>   # init a single named instance

Initialization steps:

  • Install juicefs package
  • Create shared cache dir (default /data/juice)
  • Run juicefs format --no-update (only effective on first creation)
  • Create mount point and set permissions
  • Render systemd unit and env files
  • Start service and wait for metrics port
  • Register to VictoriaMetrics (if infra node exists)
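
Once the steps above finish, a quick health check on the target node confirms the instance is alive. A sketch; the `jfs` instance name, `/fs` mountpoint, and port 9567 are assumptions matching the defaults used elsewhere in this guide:

```shell
# Post-init health check sketch; fsname, mountpoint, and port are assumptions.
fsname=${FSNAME:-jfs}
mnt=${MOUNTPOINT:-/fs}
port=${METRICS_PORT:-9567}

systemctl is-active "juicefs-${fsname}" 2>/dev/null || echo "service not active"
mountpoint -q "$mnt" 2>/dev/null && echo "$mnt mounted" || echo "$mnt not mounted"
curl -sf "http://localhost:${port}/metrics" >/dev/null \
  && echo "metrics ok" || echo "metrics unreachable"
```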

Reconfigure

After changing the configuration, run the following to update it and ensure the service is online:

./juice.yml -l <host> -t juice_config,juice_launch

Render config without touching service state:

./juice.yml -l <host> -t juice_config

Notes:

  • juice_config,juice_launch ensures the service is started, but does not force-restart an already running instance
  • The data option only takes effect on the first format
  • After changing mount options, manually restart the instance service (systemctl restart juicefs-<name>)
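
Putting the notes together, a mount-option change can be rolled out like this (a sketch; the jfs instance name is an assumption):

```shell
# Sketch: roll out changed mount options; fsname is an assumption.
fsname=${FSNAME:-jfs}
unit="juicefs-${fsname}"

# 1. re-render the unit and env files from the admin node:
#    ./juice.yml -l <host> -t juice_config
# 2. restart the instance on the target node so the new options take effect:
#    systemctl restart "$unit"
echo "restart target unit: $unit"
```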

Remove Instance

  1. Set the instance state to absent:

juice_instances:
  jfs:
    path: /fs
    meta: postgres://...
    state: absent

  2. Run the juice_clean task:

./juice.yml -l <host> -t juice_clean
./juice.yml -l <host> -e fsname=jfs -t juice_clean

Removal actions:

  • Stop systemd service
  • Lazy unmount the filesystem (umount -l)
  • Remove unit and env files
  • Reload systemd

PostgreSQL metadata and object storage data are not deleted.
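
If you do want to wipe the data as well, JuiceFS ships a destroy subcommand for that. A heavily hedged sketch: the metadata URL is the example one from this guide, and the UUID extraction assumes the JSON output of juicefs status:

```shell
# DANGEROUS sketch: permanently destroy filesystem metadata and objects.
# The meta URL is the example from this guide; review before running anything.
meta="postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta"

# juicefs destroy requires the volume UUID reported by `juicefs status`:
#   uuid=$(juicefs status "$meta" | sed -n 's/.*"UUID": "\([^"]*\)".*/\1/p')
#   juicefs destroy "$meta" "$uuid"
echo "to wipe everything: juicefs destroy $meta <uuid>"
```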


Add New Instance

Add a new instance to the configuration, making sure its metrics port is unique:

juice_instances:
  newfs:
    path: /newfs
    meta: postgres://...
    data: --storage minio --bucket http://minio:9000/newfs
    port: 9568

Deploy:

./juice.yml -l <host> -e fsname=newfs

Shared Mount Across Nodes

Configure the same meta and instance name on multiple nodes:

app:
  hosts:
    10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
    10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }

Only one node needs to format the filesystem; others will skip via --no-update.
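
To verify the shared mount behaves as expected, write a probe file on one node and read it from the other. A sketch; /tmp/shared-check stands in for the real /shared mountpoint so the snippet is safe to dry-run anywhere:

```shell
# Sketch: cross-node consistency probe; SHARED_DIR stands in for /shared.
dir=${SHARED_DIR:-/tmp/shared-check}
mkdir -p "$dir"

echo "written-on-$(hostname)" > "$dir/probe"   # run on node A (e.g. 10.10.10.11)
cat "$dir/probe"                               # run on node B: should show node A's line
```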


PITR Recovery

When file data is also stored in PostgreSQL (--storage postgres), point-in-time recovery of the filesystem can be performed via PostgreSQL PITR:

# Stop services on all nodes
systemctl stop juicefs-jfs

# Restore metadata DB with pgBackRest
pb restore --stanza=meta --type=time --target="2024-01-15 10:30:00"

# Start PostgreSQL
systemctl start postgresql

# Start JuiceFS service
systemctl start juicefs-jfs

If data is stored in MinIO/S3, only the metadata is rolled back; the objects themselves are not.


Troubleshooting

Mount Fails

systemctl status juicefs-jfs
journalctl -u juicefs-jfs -f
mountpoint /fs
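
If the unit logs are not enough, running the mount by hand in the foreground usually surfaces the error directly. A sketch; the meta URL and /fs mountpoint are the examples used in this guide:

```shell
# Sketch: debug a failing mount in the foreground (meta URL/mountpoint assumed).
meta="postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta"
mnt=/fs

# stop the unit first so it does not race with the manual mount:
#   systemctl stop juicefs-jfs
#   juicefs mount "$meta" "$mnt"   # errors print to the terminal; Ctrl-C to exit
echo "manual debug mount: juicefs mount $meta $mnt"
```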

Metadata Connection Issues

psql "postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta" -c "SELECT 1"

Metrics Port Check

ss -tlnp | grep 9567
curl http://localhost:9567/metrics

Performance Tuning

Pass juicefs mount options via the mount parameter:

juice_instances:
  jfs:
    path: /fs
    meta: postgres://...
    mount: --cache-size 102400 --prefetch 3 --max-uploads 50

Key metrics to watch:

  • juicefs_blockcache_hits/juicefs_blockcache_miss: cache hit ratio
  • juicefs_object_request_durations_histogram_seconds: object storage latency
  • juicefs_transaction_durations_histogram_seconds: metadata transaction latency
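
The cache hit ratio can be derived from the first two counters. A small sketch, using a canned sample in place of `curl -s http://localhost:9567/metrics`:

```shell
# Sketch: block-cache hit ratio; sample text stands in for the live endpoint.
metrics='juicefs_blockcache_hits 900
juicefs_blockcache_miss 100'

echo "$metrics" | awk '
  /^juicefs_blockcache_hits /  { hits = $2 }
  /^juicefs_blockcache_miss /  { miss = $2 }
  END { printf "hit ratio: %.1f%%\n", 100 * hits / (hits + miss) }'
# → hit ratio: 90.0%
```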