Current public label
Use Caution
This label reflects both the file-level review and the fact that the sandbox runtime pass did not come back perfectly clean. It currently comes from the automated scorecard.
Automated result
Use Caution
Driftloom has both static evidence and sandbox runtime evidence for this skill, and the runtime smoke pass surfaced at least one operational issue worth understanding. The sandbox supplied fake placeholder configuration values for common env vars, so this runtime result is still about behavior under isolation rather than proof that real integrations work.
Findings: 1 medium, 1 low · Final label: Use Caution
Human review
No human review has been recorded yet; the public label is still relying on the automated result.
What happened
Driftloom completed both a static scan and a runtime smoke pass. It inspected the source, stored findings, then ran lightweight sandbox probes on the isolated runner to see whether basic execution behaved cleanly.
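The smoke-pass half of that flow can be sketched roughly as follows. This is an illustrative assumption, not Driftloom's actual implementation: the probe commands and the `run_smoke_pass` helper are hypothetical stand-ins for "run basic commands in the isolated runner and record anything that does not exit cleanly".

```python
import subprocess

# Hypothetical probe set: each entry is a basic command run inside the
# isolated runner (illustrative only, not Driftloom's real probes).
PROBES = [
    ["python", "--version"],
    ["bash", "-c", "echo ok"],
]

def run_smoke_pass(probes, timeout_s=5):
    """Run each probe and collect simple operational findings."""
    findings = []
    for cmd in probes:
        try:
            result = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
            if result.returncode != 0:
                findings.append({"probe": cmd, "issue": "nonzero exit"})
        except (subprocess.TimeoutExpired, FileNotFoundError) as exc:
            # A hung probe or a missing tool is itself a finding.
            findings.append({"probe": cmd, "issue": type(exc).__name__})
    return findings
```

A clean pass returns an empty findings list; any nonzero exit, timeout, or missing tool becomes a finding that feeds the scorecard.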
Runtime evidence
OpenClaw disconnected runtime evidence was collected for [email protected].
Status: Completed · Duration: 471 ms · Findings: 3
Evidence class: OpenClaw disconnected runtime evidence
OpenClaw disconnected runtime validation ran to completion in the isolated runner.
Runner image: driftloom/runtime-runner:2026-03-25-openclaw1
Tool coverage: python, node, npm, pipx, uv, gsc, gh, http, curl, jq, yq, rg, fd, shellcheck, git, bash
Network: disabled
Mode: openclaw_disconnected
Driftloom supplied fake placeholder env vars for common credentials/config checks, kept network disabled, and did not use any real secrets.
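A minimal sketch of that isolation setup, assuming a Python harness: the variable names and placeholder values below are illustrative, not Driftloom's actual set, and the network cutoff itself happens at the runner level, outside this snippet.

```python
import os
import subprocess

# Illustrative placeholder credentials: no real secrets are injected, so a
# probe that "authenticates" only exercises code paths, never live services
# (names and values are assumptions, not Driftloom's actual configuration).
PLACEHOLDER_ENV = {
    "API_KEY": "sandbox-placeholder",
    "DATABASE_URL": "postgres://placeholder:placeholder@localhost/placeholder",
    "GITHUB_TOKEN": "sandbox-placeholder",
}

def sandbox_env():
    """Minimal environment for the isolated runner: PATH plus fake config."""
    env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin")}
    env.update(PLACEHOLDER_ENV)
    return env

# Example probe run with placeholder config only; with networking disabled
# in the runner, a "success" here shows behavior under isolation, not proof
# that real integrations work.
result = subprocess.run(["env"], env=sandbox_env(), capture_output=True, text=True)
```

This is why the runtime result is framed as behavior under isolation: the skill sees plausible-looking configuration, but nothing it does can reach a real credential or a real network endpoint.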