Use GitHub Copilot Agent Skills Without Blowing Your Context Window

In this blog post, "Use GitHub Copilot Agent Skills Without Blowing Your Context Window", we explain what Agent Skills are, how they work under the hood, and how to design them so Copilot stays helpful without dragging your entire repo (and your patience) into every conversation.

If you've ever watched Copilot do great work for five minutes and then drift because the chat got too long, you've met the context window problem. Large language models only "see" a limited amount of text at once. When we paste logs, long specs, and internal runbooks into chat, we waste that precious window on information that's only relevant sometimes. Agent Skills are GitHub's practical answer: keep heavy instructions and resources outside the conversation, and let the agent pull them in only when needed.

High-level idea: modular instructions, loaded on demand

Agent Skills are small, named folders that contain instructions (and optionally scripts and resources). Copilot can decide to load a skill based on your prompt and the skill's description. When it loads a skill, it injects the skill's instructions into the agent's context for that task, without you having to paste the whole runbook into chat.

That "only when relevant" part is the key to avoiding context bloat. You can keep your everyday chat short ("Fix the failing build") while the agent quietly brings in the precise checklist, commands, and guardrails for your environment.

The technology behind Agent Skills (what's actually happening)

At a technical level, this is a controlled prompt-injection mechanism with routing. Your conversation stays lean, and Copilot dynamically composes an internal prompt that includes:

  • Your request and recent chat history
  • Relevant repo context the agent has access to
  • Optional "skill" instructions when the system believes they match the task

GitHub's documentation describes skills as folders containing instructions and resources that Copilot can load when relevant, and specifically notes that when a skill is chosen, the SKILL.md contents are injected into the agent's context.

In parallel, GitHub's broader Copilot extensibility story uses a structured flow: intent routing, dynamic prompt crafting, and iterative function calls (where applicable) to gather only the data needed to complete the user's request. While Agent Skills are not the same thing as extension skillsets, the mental model is similar: keep prompts small, then selectively pull in tools and instructions.

Where Agent Skills work (and what to watch for)

As of late 2025, GitHub announced that Agent Skills work across Copilot coding agent, Copilot CLI, and agent mode in VS Code Insiders, with stable VS Code support rolling out around early January. In other words, features and UX may differ depending on which surface you're using, and you should expect some churn.

Practical takeaway: treat skills like code. Version them, review them, keep them small, and expect to refine your descriptions as the routing improves.

Design principle: don't store "more," store "sharper"

The fastest way to overwhelm the context window is to create a single mega-skill called "devops" that contains your entire internal wiki. The best results come from skills that are:

  • Task-scoped (one job, one outcome)
  • Short (a page of guidance, not a chapter)
  • Deterministic (checklists, commands, acceptance criteria)
  • Composable (multiple small skills, used together when needed)

Step-by-step: create a skill that doesn't explode your prompt

Copilot skills live in your repo (project skills) or in your home directory (personal skills). GitHub documents common repo locations like .github/skills (and compatibility with .claude/skills). Each skill is a directory containing a required SKILL.md file with YAML frontmatter.

1) Create a narrow skill folder

Example: a skill for "summarise CI failures without pasting logs."

mkdir -p .github/skills/ci-failure-triage

2) Add a SKILL.md with strong routing signals

Keep the description specific. Use the words your team will naturally type ("CI", "workflow", "failing job", "GitHub Actions"). Make it obvious when to use it, and when not to.

---
name: ci-failure-triage
description: |
 Use this when a GitHub Actions workflow or CI job is failing.
 Goal: identify the most likely root cause quickly WITHOUT pasting full logs into chat.
 Triggers: "CI failing", "workflow failed", "job failed", "Actions failure".
---

## Approach
1. Ask which workflow name and branch/PR is failing.
2. Request the smallest useful artifact first (error summary, failing step name).
3. Only ask for expanded logs when the error summary is insufficient.
4. Provide a minimal fix and a verification step.

## Output format
- Suspected cause (1-2 sentences)
- Evidence (bullets)
- Fix (steps)
- Validate (exact commands)

Notice what's missing: we didn't paste your whole Actions setup, your entire YAML standards doc, or all known failure modes. This is an "interaction recipe," not a knowledge dump.

Patterns that keep context small (but outcomes big)

Pattern 1: Put heavy data behind a "progressive disclosure" ladder

Skills are perfect for teaching Copilot to ask for the minimum information first. For example:

  • Start with the failing step name and the top error line
  • Then ask for 30 lines around the error
  • Only then request full logs (and only for the failing job)

This keeps your context window reserved for reasoning and solutions, not raw text dumps.
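The ladder above can be sketched with ordinary shell tools. This is a minimal sketch, assuming the failing job's log has already been saved to a local file; the log contents below are fabricated for illustration, and in practice you might fetch the real thing with the GitHub CLI (e.g. gh run view --log-failed):

```shell
# Fabricated log for a failing job, standing in for a real download.
printf '%s\n' 'step: checkout ok' 'step: build ok' \
  'ERROR: tests failed in auth_test' 'stack trace line 1' \
  'stack trace line 2' > ci.log

# Rung 1: surface only the first error line (with its line number),
# not the whole log.
grep -m1 -n -i 'error' ci.log

# Rung 2: if that is not enough, ~30 lines of context around the first error.
grep -m1 -C 15 -i 'error' ci.log

# Rung 3 (last resort): the full log, and only for the failing job.
# cat ci.log
```

Each rung hands the agent strictly more text, so the conversation only pays for detail when the cheaper rungs fail to explain the problem.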

Pattern 2: Separate "always true" repo rules from "sometimes needed" runbooks

GitHub explicitly distinguishes between skills and custom instructions: use custom instructions for guidance relevant to almost every task (naming conventions, formatting, branching policy), and skills for detailed instructions that should only be loaded when relevant.

Practical split:

  • Repo custom instructions: linting rules, PR title format, test command, definition of done
  • Agent skills: incident triage, migration playbooks, release cut procedure, CI debugging
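A minimal sketch of that split on disk. The paths follow GitHub's documented locations (.github/copilot-instructions.md for repo-wide custom instructions, .github/skills for project skills), while the skill name and file contents are illustrative:

```shell
# "Sometimes needed" runbook -> a skill, loaded only when relevant.
mkdir -p .github/skills/release-cut

# "Always true" rules -> repo custom instructions, seen on almost every task.
cat > .github/copilot-instructions.md <<'EOF'
- Run `make test` before proposing changes.
- PR titles follow the Conventional Commits format.
EOF

cat > .github/skills/release-cut/SKILL.md <<'EOF'
---
name: release-cut
description: |
 Use this when cutting a release.
 Triggers: "release", "cut a release", "release PR".
---
Follow the release checklist; verify the tag and changelog before publishing.
EOF

find .github -name '*.md' | sort
```

The payoff is that the two-line instructions file rides along everywhere for free, while the runbook costs context only on release days.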

Pattern 3: Use "one skill per capability" naming

Don't make "cloud" or "platform" skills. Prefer verbs and outcomes:

  • create-terraform-module
  • review-api-breaking-changes
  • write-postgres-migration
  • triage-latency-regression

Why it helps: when you (or Copilot) scan skill descriptions, it's easier to route to the right chunk of instructions and avoid loading irrelevant text.
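One way to keep yourself honest about routing is to print every skill's frontmatter and read the descriptions side by side. A small sketch, assuming project skills live under .github/skills; the example skill it creates is illustrative:

```shell
# Illustrative skill to scan (the name follows the verb-outcome convention).
mkdir -p .github/skills/write-postgres-migration
printf '%s\n' \
  '---' \
  'name: write-postgres-migration' \
  'description: Use when authoring a Postgres schema migration.' \
  '---' \
  'Checklist goes here.' \
  > .github/skills/write-postgres-migration/SKILL.md

# Print each skill's YAML frontmatter so descriptions can be audited together.
for f in .github/skills/*/SKILL.md; do
  echo "== ${f}"
  # Everything between the first pair of '---' markers is the frontmatter.
  awk '/^---$/ { n++; next } n == 1 { print } n > 1 { exit }' "$f"
done
```

If two descriptions read as interchangeable, the router will treat them that way too, which is usually the cue to merge or sharpen them.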

Practical workflow: how to use skills day-to-day

1) Prompt in layers

Start with a small request. Let the agent pull the skill if it matches.

  • "CI failing on the main branch. Triage and propose a fix."
  • "Prepare a release PR for version 2.4.1 using our release process."

2) If routing is flaky, explicitly name the skill

Because this is still evolving across tools and surfaces, it's worth adopting a team habit: if the agent seems confused, tell it which skill to use.

  • "Use the ci-failure-triage skill and proceed."
  • "Use release-cut and produce a checklist."

3) Keep outputs structured

Ask the skill to produce stable, reviewable artifacts:

  • A short plan
  • A checklist with verification steps
  • A patch with a minimal diff

Structured output reduces back-and-forth, which is another sneaky way context windows get consumed.

Skill authoring tips for tech leads (governance without bureaucracy)

Make skills reviewable

  • Put them in the repo so theyโ€™re versioned.
  • Require PR review for any skill touching security, deployments, or production data handling.
  • Write skills in the same tone you'd want in an on-call runbook.

Keep secrets out by design

Skills should reference secret names and where to fetch them, not the secret values. Replace "paste token here" with "retrieve token from secret store X and export ENV_VAR." Skills are instructions; treat them as shareable.
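As a concrete contrast, a skill's instructions should read like the second pattern below, never the first. Here vault stands in for whatever secret store you use, and the path and field name are hypothetical:

```shell
# BAD (never ship this in a skill): a literal secret value.
#   export API_TOKEN="s3cr3t-value"

# GOOD: name the secret and say where to fetch it at runtime.
# `vault` is a placeholder for your secret store CLI; `secret/ci` and the
# `token` field are made-up examples. `|| true` keeps the sketch runnable
# even where no secret store is installed.
API_TOKEN="$(vault kv get -field=token secret/ci 2>/dev/null || true)"
export API_TOKEN
```

Because the skill only names the secret, the file stays safe to version, review, and share across the team.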

Measure success by fewer messages, not longer prompts

If a skill causes longer conversations, it's probably too broad. A good skill reduces chat length because it guides the agent through the shortest reliable path.

Quick checklist: context-window friendly Agent Skills

  • Small: fits on one screen
  • Specific: one outcome, clear triggers
  • Progressive: asks for minimal info first
  • Actionable: commands, acceptance criteria, validation steps
  • Composable: multiple skills instead of one mega-skill
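The "small" item can even be checked mechanically. A hypothetical guardrail script, assuming project skills under .github/skills and an arbitrary 60-line budget:

```shell
# Hypothetical guardrail: flag any SKILL.md that no longer "fits on one
# screen". The 60-line limit is arbitrary; adjust to taste.
limit=60
for f in .github/skills/*/SKILL.md; do
  [ -e "$f" ] || continue               # no skills yet: nothing to check
  lines=$(wc -l < "$f" | tr -d ' ')
  if [ "$lines" -gt "$limit" ]; then
    echo "Consider splitting: $f ($lines lines)"
  fi
done
```

Wiring a check like this into PR review keeps mega-skills from creeping back in without adding process overhead.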

Wrap up

Agent Skills give Copilot a way to "remember" how your organisation works without stuffing every detail into every chat. The win isn't just better answers; it's calmer workflows: shorter prompts, fewer pasted logs, and repeatable execution that feels like having your best runbooks embedded in the tool. Start with two or three narrow skills, refine the descriptions based on real usage, and you'll quickly feel the context window stop being the bottleneck.
