In this blog post, "Use GitHub Copilot Agent Skills Without Blowing Your Context Window", we'll explain what Agent Skills are, how they work under the hood, and how to design them so Copilot stays helpful without dragging your entire repo (and your patience) into every conversation.

If you’ve ever watched Copilot do great work for five minutes and then drift because the chat got too long, you’ve met the context window problem. Large language models only “see” a limited amount of text at once. When we paste logs, long specs, and internal runbooks into chat, we waste that precious window on information that’s only relevant sometimes. Agent Skills are GitHub’s practical answer: keep heavy instructions and resources outside the conversation, and let the agent pull them in only when needed.

High-level idea: modular instructions, loaded on demand

Agent Skills are small, named folders that contain instructions (and optionally scripts and resources). Copilot can decide to load a skill based on your prompt and the skill’s description. When it loads a skill, it injects the skill’s instructions into the agent’s context for that task—without you having to paste the whole runbook into chat.

That “only when relevant” part is the key to avoiding context bloat. You can keep your everyday chat short (“Fix the failing build”) while the agent quietly brings in the precise checklist, commands, and guardrails for your environment.

The technology behind Agent Skills (what’s actually happening)

At a technical level, this is a controlled prompt-injection mechanism with routing. Your conversation stays lean, and Copilot dynamically composes an internal prompt that includes:

  • Your request and recent chat history
  • Relevant repo context the agent has access to
  • Optional “skill” instructions when the system believes they match the task

GitHub’s documentation describes skills as folders containing instructions and resources that Copilot can load when relevant, and specifically notes that when a skill is chosen, the SKILL.md contents are injected into the agent’s context.

In parallel, GitHub’s broader Copilot extensibility story uses a structured flow: intent routing, dynamic prompt crafting, and iterative function calls (where applicable) to gather only the data needed to complete the user’s request. While Agent Skills are not the same thing as extension skillsets, the mental model is similar: keep prompts small, then selectively pull in tools and instructions.
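To make the routing idea concrete, here is a deliberately naive sketch in shell. This is not GitHub's actual algorithm (the real selection is model-driven): it just shows the shape of the mechanism by scanning each skill's frontmatter and "loading" any skill whose description mentions a word from the prompt. The sample skills it creates are illustrative.

```shell
#!/usr/bin/env sh
# Illustrative sketch only: GitHub's real routing is model-driven.
# Create two tiny sample skills so the sketch runs as-is.
mkdir -p skills/ci-failure-triage skills/create-terraform-module
printf '%s\n' '---' 'name: ci-failure-triage' \
  'description: Use when a GitHub Actions workflow or CI job is failing.' '---' \
  > skills/ci-failure-triage/SKILL.md
printf '%s\n' '---' 'name: create-terraform-module' \
  'description: Scaffold a new Terraform module.' '---' \
  > skills/create-terraform-module/SKILL.md

PROMPT="CI failing"
for skill in skills/*/SKILL.md; do
  # Pull the frontmatter (between the first two '---' lines).
  desc=$(sed -n '/^---$/,/^---$/p' "$skill")
  for word in $PROMPT; do
    if printf '%s' "$desc" | grep -qi -- "$word"; then
      echo "load: $skill"   # these contents get injected into context
      break
    fi
  done
done
```

Only the matching skill's SKILL.md would be pulled into the prompt; the Terraform skill stays on disk, costing zero context.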

Where Agent Skills work (and what to watch for)

As of late 2025, GitHub announced that Agent Skills work across Copilot coding agent, Copilot CLI, and agent mode in VS Code Insiders, with stable VS Code support rolling out around early January. In practice, features and UX may differ depending on which surface you're using, and you should expect some churn.

Practical takeaway: treat skills like code. Version them, review them, keep them small, and expect to refine your descriptions as the routing improves.

Design principle: don’t store “more,” store “sharper”

The fastest way to overwhelm the context window is to create a single mega-skill called “devops” that contains your entire internal wiki. The best results come from skills that are:

  • Task-scoped (one job, one outcome)
  • Short (a page of guidance, not a chapter)
  • Deterministic (checklists, commands, acceptance criteria)
  • Composable (multiple small skills, used together when needed)

Step-by-step: create a skill that doesn’t explode your prompt

Copilot skills live in your repo (project skills) or in your home directory (personal skills). GitHub documents common repo locations like .github/skills (and compatibility with .claude/skills). Each skill is a directory containing a required SKILL.md file with YAML frontmatter.

1) Create a narrow skill folder

Example: a skill for “summarise CI failures without pasting logs.”

mkdir -p .github/skills/ci-failure-triage

2) Add a SKILL.md with strong routing signals

Keep the description specific. Use the words your team will naturally type (“CI”, “workflow”, “failing job”, “GitHub Actions”). Make it obvious when to use it, and when not to.

---
name: ci-failure-triage
description: |
  Use this when a GitHub Actions workflow or CI job is failing.
  Goal: identify the most likely root cause quickly WITHOUT pasting full logs into chat.
  Triggers: "CI failing", "workflow failed", "job failed", "Actions failure".
---

## Approach
1. Ask for the failing workflow name and branch/PR.
2. Request the smallest useful artifact first (error summary, failing step name).
3. Only ask for expanded logs when the error summary is insufficient.
4. Provide a minimal fix and a verification step.

## Output format
- Suspected cause (1-2 sentences)
- Evidence (bullets)
- Fix (steps)
- Validate (exact commands)

Notice what’s missing: we didn’t paste your whole Actions setup, your entire YAML standards doc, or all known failure modes. This is an “interaction recipe,” not a knowledge dump.
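If you want to script that setup, the two steps collapse into one shell snippet (same folder path and frontmatter as above; the body is abbreviated to one section):

```shell
#!/usr/bin/env sh
# Create the skill folder and write its SKILL.md in one step.
mkdir -p .github/skills/ci-failure-triage
cat > .github/skills/ci-failure-triage/SKILL.md <<'EOF'
---
name: ci-failure-triage
description: |
  Use this when a GitHub Actions workflow or CI job is failing.
  Goal: identify the most likely root cause quickly WITHOUT pasting full logs into chat.
  Triggers: "CI failing", "workflow failed", "job failed", "Actions failure".
---

## Approach
1. Ask for the failing workflow name and branch/PR.
2. Request the smallest useful artifact first (error summary, failing step name).
3. Only ask for expanded logs when the error summary is insufficient.
4. Provide a minimal fix and a verification step.
EOF
```

Committing this file makes the skill reviewable in a PR like any other change.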

Patterns that keep context small (but outcomes big)

Pattern 1: Put heavy data behind a “progressive disclosure” ladder

Skills are perfect for teaching Copilot to ask for the minimum information first. For example:

  • Start with the failing step name and the top error line
  • Then ask for 30 lines around the error
  • Only then request full logs (and only for the failing job)

This keeps your context window reserved for reasoning and solutions, not raw text dumps.
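Inside a skill, those rungs can be written as concrete commands. Here is a sketch that assumes the failing job's log has already been saved locally to ci.log (for example via `gh run view --log-failed`); the sample log lines are made up for illustration:

```shell
#!/usr/bin/env sh
# Progressive disclosure over a local log: surface the smallest
# useful slice first, widen only when asked.
LOG=ci.log
# Sample log so the sketch runs as-is; in practice this comes from your CI.
printf '%s\n' "setup ok" "build ok" \
  "ERROR: tests failed in auth_test.go" "teardown" > "$LOG"

# Rung 1: just the first error line (with its line number).
grep -n -i -m1 "error" "$LOG"

# Rung 2: ~30 lines of context around that first error.
line=$(grep -n -i -m1 "error" "$LOG" | cut -d: -f1)
start=$(( line > 15 ? line - 15 : 1 ))
sed -n "${start},$(( line + 15 ))p" "$LOG"

# Rung 3 (last resort): the full log, and only for the failing job.
# cat "$LOG"
```

Each rung hands the agent a bounded amount of text, so the conversation never absorbs more log than the current question needs.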

Pattern 2: Separate “always true” repo rules from “sometimes needed” runbooks

GitHub explicitly distinguishes between skills and custom instructions: use custom instructions for guidance relevant to almost every task (naming conventions, formatting, branching policy), and skills for detailed instructions that should only be loaded when relevant.

Practical split:

  • Repo custom instructions: linting rules, PR title format, test command, definition of done
  • Agent skills: incident triage, migration playbooks, release cut procedure, CI debugging
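On disk the split looks roughly like this; `.github/copilot-instructions.md` is the documented file for repo-wide custom instructions, and the skill names here are examples:

```shell
#!/usr/bin/env sh
# Sketch of the layout: always-on rules in one small file,
# sometimes-needed runbooks as separate skills.
mkdir -p .github/skills/incident-triage .github/skills/release-cut
touch .github/skills/incident-triage/SKILL.md \
      .github/skills/release-cut/SKILL.md

# Repo-wide custom instructions: loaded for (almost) every task.
printf '%s\n' "- Run make test before proposing a patch." \
  "- PR titles follow Conventional Commits." \
  > .github/copilot-instructions.md

find .github -name '*.md' | sort
```

The instructions file stays a few lines long because everything situational has moved into skills.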

Pattern 3: Use “one skill per capability” naming

Don’t make “cloud” or “platform” skills. Prefer verbs and outcomes:

  • create-terraform-module
  • review-api-breaking-changes
  • write-postgres-migration
  • triage-latency-regression

Why it helps: when Copilot (or a teammate) scans skill descriptions, it's easier to route to the right chunk of instructions and avoid loading irrelevant text.

Practical workflow: how to use skills day-to-day

1) Prompt in layers

Start with a small request. Let the agent pull the skill if it matches.

  • “CI failing on the main branch. Triage and propose a fix.”
  • “Prepare a release PR for version 2.4.1 using our release process.”

2) If routing is flaky, explicitly name the skill

Because this is still evolving across tools and surfaces, it’s worth adopting a team habit: if the agent seems confused, tell it which skill to use.

  • “Use the ci-failure-triage skill and proceed.”
  • “Use release-cut and produce a checklist.”

3) Keep outputs structured

Ask the skill to produce stable, reviewable artifacts:

  • A short plan
  • A checklist with verification steps
  • A patch with a minimal diff

Structured output reduces back-and-forth, which is another sneaky way context windows get consumed.

Skill authoring tips for tech leads (governance without bureaucracy)

Make skills reviewable

  • Put them in the repo so they’re versioned.
  • Require PR review for any skill touching security, deployments, or production data handling.
  • Write skills in the same tone you’d want in an on-call runbook.
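One lightweight way to enforce that review rule is a CODEOWNERS entry; the team handle here is a placeholder:

```
# .github/CODEOWNERS
# Any change under the skills directory requires platform-team review.
/.github/skills/ @your-org/platform-team
```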

Keep secrets out by design

Skills should reference secret names and where to fetch them, not the secret values. Replace “paste token here” with “retrieve token from secret store X and export ENV_VAR.” Skills are instructions; treat them as shareable.
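For instance, a deployment skill's instructions might contain an excerpt like this (the secret id, variable name, and the AWS CLI call are placeholders; substitute whatever "secret store X" means in your organisation):

```
## Credentials
- Never write token values into this file or into chat.
- Retrieve the deploy token from the team secret store and export it:
  export DEPLOY_TOKEN="$(aws secretsmanager get-secret-value \
    --secret-id prod/deploy-token --query SecretString --output text)"
```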

Measure success by fewer messages, not longer prompts

If a skill causes longer conversations, it’s probably too broad. A good skill reduces chat length because it guides the agent through the shortest reliable path.

Quick checklist: context-window friendly Agent Skills

  • Small: fits on one screen
  • Specific: one outcome, clear triggers
  • Progressive: asks for minimal info first
  • Actionable: commands, acceptance criteria, validation steps
  • Composable: multiple skills instead of one mega-skill
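That checklist is easy to automate. Below is a sketch of a lint script you could run in CI; the sample skill it creates and the 60-line budget are illustrative choices:

```shell
#!/usr/bin/env sh
# Lint skills against the checklist: SKILL.md present, frontmatter
# present, and a hard size budget so skills stay context-friendly.
# Sample skill so the script runs as-is.
mkdir -p .github/skills/ci-failure-triage
printf '%s\n' '---' 'name: ci-failure-triage' '---' '## Approach' \
  > .github/skills/ci-failure-triage/SKILL.md

MAX_LINES=60
status=0
for dir in .github/skills/*/; do
  f="${dir}SKILL.md"
  if [ ! -f "$f" ]; then
    echo "FAIL: $dir is missing SKILL.md"; status=1; continue
  fi
  head -n1 "$f" | grep -q '^---$' || { echo "FAIL: $f has no frontmatter"; status=1; }
  lines=$(wc -l < "$f")
  [ "$lines" -le "$MAX_LINES" ] || { echo "FAIL: $f is $lines lines (max $MAX_LINES)"; status=1; }
done
if [ "$status" -eq 0 ]; then echo "skill lint: ok"; fi
# In a CI job, finish with: exit "$status"
```

A skill that fails the size budget is usually a sign it should be split into two smaller skills.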

Wrap up

Agent Skills give Copilot a way to “remember” how your organisation works without stuffing every detail into every chat. The win isn’t just better answers—it’s calmer workflows: shorter prompts, fewer pasted logs, and repeatable execution that feels like having your best runbooks embedded in the tool. Start with two or three narrow skills, refine the descriptions based on real usage, and you’ll quickly feel the context window stop being the bottleneck.