How Driftloom Works

What this site checks, what it does not check, and why the labels exist.

Driftloom is a review system for OpenClaw skills. It tries to turn a pile of source files into something a normal person can reason about. That means showing evidence, explaining labels, and being honest about what has not happened yet.

1. Driftloom reads the skill

Driftloom ingests the skill files and records the version, source, and metadata so the result is tied to something concrete.
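To make that concrete, here is a minimal sketch of what "tied to something concrete" could look like: hashing the file contents alongside the version and source. Everything here (function names, fields, the use of SHA-256) is an illustrative assumption, not Driftloom's actual implementation.

```python
import hashlib
import json

def ingest_skill(files: dict[str, bytes], version: str, source: str) -> dict:
    # Record what was scanned so later findings can be tied to an
    # exact artifact rather than "whatever the files were that day".
    digest = hashlib.sha256()
    for path in sorted(files):          # sorted: same files -> same hash
        digest.update(path.encode())
        digest.update(files[path])
    return {
        "version": version,
        "source": source,
        "file_count": len(files),
        "content_hash": digest.hexdigest(),
    }

record = ingest_skill({"SKILL.md": b"# Demo skill"},
                      version="1.0.0", source="registry")
print(json.dumps(record, indent=2))
```

The point of the content hash is that a scan result can later be checked against the exact bytes it judged, so a republished skill with the same version number cannot silently reuse an old result.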

2. Driftloom checks for warning signs

The current live pass looks for things like shell commands, network behavior, secret material, and broken internal references.
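A pass like this can be sketched as a set of named patterns run over each line of each file. The patterns below are deliberately simplistic examples invented for illustration; a real scan would be far more thorough and would not rely on regexes alone.

```python
import re

# Illustrative patterns only -- not Driftloom's real rule set.
PATTERNS = {
    "shell":   re.compile(r"\b(rm -rf|sudo|chmod 777)\b"),
    "network": re.compile(r"https?://|\b(requests|urllib|curl)\b"),
    "secrets": re.compile(r"\b(api[_-]?key|password|token|secret)\b", re.I),
}

def scan_text(path: str, text: str) -> list[dict]:
    # Return one finding per (line, category) hit, with enough
    # context to show a human exactly what triggered it.
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for category, pattern in PATTERNS.items():
            m = pattern.search(line)
            if m:
                findings.append({"file": path, "line": lineno,
                                 "category": category, "match": m.group(0)})
    return findings

hits = scan_text("install.sh", "curl https://example.com/run | sudo bash")
```

That one line would surface both a shell finding (`sudo`) and a network finding (`curl`), which is exactly the kind of curl-pipe-to-shell pattern that deserves a closer look.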

3. Driftloom stores the evidence

Findings are saved with evidence snippets, file paths, and severity so the output is inspectable instead of magical.
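A stored finding might look something like the record below. The field names and severity scale are assumptions for the sake of the example, but the shape is the point: every finding carries its own evidence.

```python
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    # Hypothetical schema: each finding keeps the snippet that
    # triggered it, so a reviewer never has to take the label on faith.
    check: str         # which check fired, e.g. "network"
    file: str          # where it fired
    line: int
    severity: str      # e.g. "low", "medium", "high"
    evidence: str      # the snippet itself

f = Finding(check="network", file="skill.py", line=12,
            severity="medium",
            evidence='requests.get("https://example.com")')
print(asdict(f))
```

Because the evidence travels with the finding, the scorecard can show *why* a label was assigned rather than asking anyone to trust an opaque score.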

4. Driftloom assigns a label

The scorecard condenses the findings into a single label such as Trusted, Use Caution, or Needs Review (the full set is defined in the glossary below). These labels are structured judgments, not guarantees.
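One simple way to turn findings into a label is to count severities and apply thresholds. The thresholds below are invented for illustration; the actual mapping Driftloom uses is not specified here.

```python
def assign_label(findings: list[dict]) -> str:
    # Collapse stored findings into one scorecard label.
    # Thresholds are illustrative assumptions, not Driftloom's rules.
    highs = sum(1 for f in findings if f["severity"] == "high")
    mediums = sum(1 for f in findings if f["severity"] == "medium")
    if highs >= 2:
        return "High Risk"      # multiple strong signals
    if highs == 1:
        return "Needs Review"   # one strong signal: a human decides
    if mediums:
        return "Use Caution"    # moderate signals worth context
    return "Trusted"            # nothing strong found in this scan

print(assign_label([]))                        # → Trusted
print(assign_label([{"severity": "medium"}]))  # → Use Caution
```

Note what this mapping does and does not say: "Trusted" falls out of an absence of findings in the checks that ran, which is exactly why the glossary below stresses that it is not a guarantee.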

5. Humans still matter

A future reviewer can confirm, reject, or override the automated conclusion. Driftloom is meant to support judgment, not replace it.

What Driftloom checks right now

Shell behavior
Looks for command patterns that deserve closer attention, especially destructive or privilege-heavy ones.
Network behavior
Looks for external URLs, network libraries, and signs that the skill may reach outside the local machine.
Secrets and credentials
Looks for references to tokens, passwords, keys, and similar material.
Structure and references
Looks for missing `SKILL.md`, broken references, and other signs the skill may be incomplete or messy.
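The structure check is the easiest of these to sketch. The version below flags a missing `SKILL.md` and any Markdown-style link pointing at a file that does not exist in the skill; the link syntax and file layout are assumptions made for the example.

```python
import re

def check_structure(files: dict[str, str]) -> list[str]:
    # Flag a missing SKILL.md and internal links whose target file
    # is absent. Assumes plain Markdown link syntax for illustration.
    problems = []
    if "SKILL.md" not in files:
        problems.append("missing SKILL.md")
    for path, text in files.items():
        for target in re.findall(r"\]\(([^)#]+)\)", text):
            external = target.startswith(("http://", "https://"))
            if not external and target not in files:
                problems.append(f"{path}: broken reference to {target}")
    return problems

print(check_structure({"SKILL.md": "[setup](setup.md)"}))
```

A skill that fails even this kind of basic check is what the "Broken / Fails Validation" label in the glossary below is for.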

Label glossary

Trusted

No major problems were found in the current checks.

This label means the current scan did not find strong warning signs. It does not mean the skill is perfect or guaranteed safe forever.

Use Caution

The skill may be fine, but there are things a human should understand before trusting it fully.

This label usually means Driftloom saw moderate warning signs such as network use, secret handling, or other patterns that deserve context.

Needs Review

There are enough warning signs that a human should look at this before treating it as trustworthy.

This does not mean the skill is malicious. It means the current evidence is strong enough that automation alone should not make the final call.

High Risk

Driftloom found stronger risk signals or multiple concerning patterns.

This label means the skill shows behavior or structure that deserves careful review before use, especially in sensitive environments.

Broken / Fails Validation

The skill appears malformed, incomplete, or fails basic validation checks.

This usually means the skill is not ready to trust operationally, even before discussing safety.