Publisher profile · community

killerapp

killerapp publishes 5 tracked skills in DriftBot.

Catalog decision: a mixed but usable publisher. There is meaningful evidence here, but reputation still needs to be earned skill by skill.
5 indexed skills · 57 average score · 0 manual reviews · 0 high-risk labels
Catalog evidence snapshot: baseline-v3 coverage 1/5 · functionality-v2 coverage 1 · no manual reviews yet · no high-risk labels
Read this row as a catalog snapshot: runtime coverage, deeper follow-on coverage, human review presence, and high-risk concentration before you compare individual skills.

πŸ“Š Runtime quality summary

Runtime read: stronger publisher evidence means more than broad coverage β€” look for low current failure pressure, some functionality depth, and stale-runtime counts that stay under control.
Eligible runtime skills: 5 · latest touch: 17h ago · no current regressions
Baseline coverage: 1 (20% of eligible skills have baseline-v3 receipts)
Baseline pass rate: 100% (1 passed · 0 currently failing)
Functionality coverage: 1 (100% of baseline-cleared skills have functionality-v2)
Fixture-backed rate: 0% (0 functionality-v2 rows have richer fixture/example proof)
Stale baseline rows: 0 (baseline receipts older than 7 days)
Functionality failures: 0 (current failed functionality-v2 rows in the latest publisher state)

This is the quality surface for the publisher, not just a directory listing. It shows how much of the catalog has real receipts, how often those receipts are passing, whether richer fixture-backed proof exists, and whether the publisher currently carries regressions, reproduced failures, or stale runtime evidence.
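As a reading aid, the summary fields above could be derived from per-skill receipt records roughly as sketched below. The record layout, field names, and sample data are assumptions for illustration, not the catalog's actual schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-skill receipt records (not the catalog's real schema).
skills = [
    {"name": "baml-codegen",
     "baseline_v3": {"passed": True,
                     "checked_at": datetime(2026, 3, 15, 8, 45, tzinfo=timezone.utc)},
     "functionality_v2": {"passed": True, "fixture_backed": False}},
    {"name": "chain-of-density", "baseline_v3": None, "functionality_v2": None},
]

now = datetime(2026, 3, 15, 12, 0, tzinfo=timezone.utc)
with_baseline = [s for s in skills if s["baseline_v3"]]

# Coverage: share of eligible skills that have baseline-v3 receipts at all.
baseline_coverage = len(with_baseline) / len(skills)
# Pass rate: of the skills with receipts, how many currently pass.
baseline_pass_rate = (sum(s["baseline_v3"]["passed"] for s in with_baseline)
                      / len(with_baseline)) if with_baseline else 0.0
# Staleness: receipts older than the 7-day window mentioned above.
stale = [s for s in with_baseline
         if now - s["baseline_v3"]["checked_at"] > timedelta(days=7)]

print(f"coverage {baseline_coverage:.0%}, pass rate {baseline_pass_rate:.0%}, "
      f"stale rows {len(stale)}")
```

With two sample skills and one fresh passing receipt, this prints 50% coverage, a 100% pass rate, and zero stale rows, mirroring how the page's 1/5 coverage and 100% pass rate can coexist.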

Latest runtime touch: 2026-03-15 08:45 UTC. Publisher-level summaries do not replace skill-level review, but they do make reputation more earned: a publisher with broader coverage, stronger pass rates, and fixture-backed proof looks different from one living on thin smoke tests.

If you want the system-wide view, open the runtime dashboard. If you want the scoring logic, read the methodology.

Skills from this publisher

Showing 5 of 5 skills

Label mix on this page

Trusted: 3 · Use Caution: 2 · Insufficient Evidence: 0 · High Risk: 0

This distribution is a quick provenance cue, not a verdict. A publisher can have a mix of safer and riskier skills, so the useful move is to compare patterns here and then open the individual scorecards.
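The label mix above is just a frequency count over this page's skill cards. A minimal sketch (the label list is transcribed from the five cards below):

```python
from collections import Counter

# Labels transcribed from the five skill cards on this page.
labels = ["Trusted", "Trusted", "Trusted", "Use Caution", "Use Caution"]
mix = Counter(labels)

# Report every label bucket, including the empty ones, so the
# distribution reads the same way as the page's summary line.
for label in ("Trusted", "Use Caution", "Insufficient Evidence", "High Risk"):
    print(f"{label}: {mix.get(label, 0)}")
```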

Publisher profiles are best for spotting catalog patterns: repeated shell access, common external services, whether manual review exists, and whether higher-risk labels are isolated or widespread.

On this page: 5 source-scanned, 0 catalog-only, and 0 manually reviewed entries in the current slice.

If you want the scoring logic, read the methodology. If you want the broader landscape, go back to the full index.

agentskills-io

killerapp · source-scanned
Overall score: 67

Create, validate, and publish Agent Skills following the official open standard from agentskills.io. Use when (1) creating new skills for AI agents, (2) validating skill structure and metadata, (3) understanding the Agent Skills specification, (4) converting existing documentation into portable skills, or (5) ensuring cross-platform compatibility with Claude Code, Cursor, GitHub Copilot, and other tools.

Trusted · confidence: source evidence · source-scanned
privileged capability
Take: Source-aware scan found higher-privilege capability areas (token), but that alone is not evidence of malicious behavior.
Decision cue: Decent evidence base β€” source-level signals are available, so inspect the receipts.

aws-agentcore-langgraph

killerapp · source-scanned
Overall score: 63

Deploy production LangGraph agents on AWS Bedrock AgentCore. Use for (1) multi-agent systems with orchestrator and specialist agent patterns, (2) building stateful agents with persistent cross-session memory, (3) connecting external tools via AgentCore Gateway (MCP, Lambda, APIs), (4) managing shared context across distributed agents, or (5) deploying complex agent ecosystems via CLI with production observability and scaling.

Trusted · confidence: source evidence · source-scanned
privileged capability
Take: Source-aware scan found higher-privilege capability areas (token, oauth), but that alone is not evidence of malicious behavior.
Decision cue: Decent evidence base β€” source-level signals are available, so inspect the receipts.

chain-of-density

killerapp · source-scanned
Overall score: 60

Iteratively densify text summaries using Chain-of-Density technique. Use when compressing verbose documentation, condensing requirements, or creating executive summaries while preserving information density.

Trusted · confidence: source evidence · source-scanned
privileged capability
Take: Source-aware scan found normal operational surface via environment, network, or shell-related references.
Decision cue: Decent evidence base β€” source-level signals are available, so inspect the receipts.

baml-codegen

killerapp · source-scanned
Overall score: 53

Use when generating BAML code for type-safe LLM extraction, classification, RAG, or agent workflows - creates complete .baml files with types, functions, clients, tests, and framework integrations from natural language requirements. Queries official BoundaryML repositories via MCP for real-time patterns. Supports multimodal inputs (images, audio), Python/TypeScript/Ruby/Go, 10+ frameworks, 50-70% token optimization, 95%+ compilation success.

Use Caution · follow-on functionality checks passed 5/5 · confidence: source evidence
source-scanned · suspicious
Runtime receipts + what passed · 2026-03-15 08:45 UTC
functionality-v2 · evidence depth: follow-on functionality checks · tested recently: within 24 hours · passed · output 80 B · artifacts 0 · worker oc-sandbox · source stage: cache hit · suite 1674 ms · baseline-v3 8/8
RatioDaemon on this skill: Baml Codegen sits in the baml-codegen lane. Functionality-v2 currently passes, the trust label is Use Caution, and setup looks advanced.
Observed: skill-structure-ok
Take: Potentially suspicious implementation signals detected: eval(.
Decision cue: Proceed carefully β€” suspicious signals matter more than capability surface alone.
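The "suspicious signals" cue above flags literal tokens such as eval( and password. As an illustration only, a naive token scan might look like the sketch below; the pattern list is hypothetical, and the catalog's real scanner is source-aware rather than a flat regex pass.

```python
import re

# Hypothetical pattern list; the actual scanner's rules are not published here.
SUSPICIOUS_PATTERNS = [r"\beval\s*\(", r"\bpassword\b", r"\bexec\s*\("]

def scan_source(source: str) -> list[str]:
    """Return the patterns that match anywhere in the given source text."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source)]

hits = scan_source("result = eval(user_input)")
print(hits)
```

A scan like this explains why such findings are a "proceed carefully" cue rather than a verdict: a match proves a token is present, not that the code is malicious.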

adversarial-coach

killerapp · source-scanned
Overall score: 41

Adversarial implementation review based on Block's g3 dialectical autocoding research. Use when validating implementation completeness against requirements with fresh objectivity.

Use Caution · confidence: source evidence · source-scanned
suspicious
Take: Potentially suspicious implementation signals detected: password.
Decision cue: Proceed carefully β€” suspicious signals matter more than capability surface alone.

Trust reading guide

Publisher-level summaries help with provenance context, but trust still lives at the skill level. Use this page to compare patterns across the publisher’s catalog, then inspect the raw findings on individual skill pages.

Back to the full index