AI Ready
- Set up `.github/copilot-instructions.md`
- Set up `.github/workflows/copilot-setup-steps.yml`
- For VS Code tests to run, you must either set up `.github/workflows/copilot-setup-steps.yml` or tweak the firewall settings to allow the following:
  - https://update.code.visualstudio.com
  - https://vscode.download.prss.microsoft.com
- Do not enable https://api.github.com/repos/microsoft/vscode-jupyter/discussions/13670, as the GitHub MCP server will have access to the required information via the read-only API.
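As a rough sketch of what `.github/workflows/copilot-setup-steps.yml` could look like (illustrative only — the exact steps depend on the repo; per GitHub's docs the job must be named `copilot-setup-steps` for the coding agent to pick it up):

```yaml
name: "Copilot Setup Steps"
# workflow_dispatch lets you run the workflow manually to validate it.
on: workflow_dispatch

jobs:
  # The job name `copilot-setup-steps` is required by the coding agent.
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Pre-install dependencies so the agent's environment is ready
      # before it starts working on the task.
      - run: npm ci
```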
- For large, complex repos, consider custom instructions; see `.github/copilot-instructions.md` and `.github/instructions/kernel.instructions.md`
- Note how the satellite instruction files are referenced from `.github/copilot-instructions.md`
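As an illustration (hypothetical content, not the actual file), the root instructions file can point the model at a satellite file like so:

```markdown
<!-- .github/copilot-instructions.md (illustrative excerpt) -->
## Component guides
When working on kernel-related code, first read
[.github/instructions/kernel.instructions.md](.github/instructions/kernel.instructions.md)
for details on kernel discovery, sessions, and lifecycle.
```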
- Use common script names for compiling, building and testing, like `npm run compile`, `npm run build`, `npm test` and the like.
  - Most of the time agents tend to use these scripts, as they are the most commonly used.
  - Sometimes upon failing, agents fall back to the right scripts or try something else, and can keep failing until they succeed or simply give up.
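A `package.json` fragment with such conventional script names might look like this (illustrative — the commands shown are placeholders, not the repo's actual build setup):

```json
{
  "scripts": {
    "compile": "tsc -p ./",
    "build": "npm run compile",
    "test": "mocha --config .mocharc.js"
  }
}
```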
- Testing
  - Models search for tests matching the pattern `test` and `test:*` (in `package.json`)
  - Models prefer running specific tests using `--grep`; hence ensure this is supported, so the model can run specific tests
  - Similarly, if using other languages, ensure the test commands line up closely with common practices
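For example (illustrative — the actual setup depends on the test runner; Mocha supports `--grep` out of the box), test scripts could be declared as:

```json
{
  "scripts": {
    "test": "mocha --config .mocharc.js",
    "test:unit": "mocha --config .mocharc.unit.js"
  }
}
```

With scripts like these, the model can run something like `npm test -- --grep "kernel"` (arguments after `--` are forwarded to the runner) to execute only matching tests.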
- Prefer creating a `plan.prompt.md`
  - Get the agent to plan out the work before it writes any code
- Review the session logs when running Copilot Coding Agents
How to fix a bug in VS Code (agent mode)?
- Open the chat panel
- Prompt = `/plan <issue number | issue url | issue details>`
  Note: You can include additional context (or text) in the prompt. This is advisable, as it will steer the model in the right direction.
- Review the output
- Implement the solution using the prompt = `/implement`
How to fix a bug in VS Code and use Copilot Coding Agent?
- Open the chat panel
- Prompt = `/plan <issue number | issue url | issue details>`
- Review the output
- Implement the solution using the prompt = `implement the proposed changes` & click the ☁️ icon
How to implement a new feature in VS Code?
- Open the chat panel
- Prompt = `/plan <issue number | issue url | issue details>`
- Review the output
- Implement the solution using the prompt = `/implement`
How to fix a bug or implement a new feature from GitHub Copilot?
⚠️ No clear solution yet; this is being actively worked on.
Important
At a minimum, review the overview and root cause analysis. Ensure you and the model have a common understanding of the issue.
Important
When using `/plan` in the prompt, always try to include additional context/details if possible.
E.g. if you want a problem fixed in a specific way, use the prompt = `/plan Fix issue <issue link> by modifying the file xyz.ts`
- Create a `.github/copilot-instructions.md` file
  - With a general overview of the architecture, code layout, and coding standards
  - This in turn references some other instruction files as well
- Create `.github/instructions/*.instructions.md`
  - Files specific to different folders/features of the Jupyter extension
  - With a general overview of the components, files and details required to understand the code in each area
- Create custom prompts
  - `plan.prompt.md` - Custom instructions asking the model to review a bug/feature and prepare a plan to complete it
  - `implement.prompt.md` - Custom instructions asking the model to implement the plan prepared using `plan.prompt.md`
  - `explain.prompt.md` - Custom instructions asking the model to explain parts of the code
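As a hypothetical sketch of what `plan.prompt.md` might contain (the frontmatter fields follow VS Code's prompt-file format; the body is illustrative, not the repo's actual prompt):

```markdown
---
mode: agent
description: Review an issue and produce an implementation plan
---
Review the bug or feature request provided by the user and prepare a plan.
Do NOT write any code yet. Your plan must include:
1. An overview of the issue in your own words.
2. A root cause analysis.
3. The files you expect to change and why.
If you cannot determine the root cause, say so instead of guessing.
```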
- Supported by VS Code
- Supported by GitHub Copilot Code Reviews
- ⚠️ Supported by GitHub Copilot Coding Agent
Short answer: context is key. We need to provide the right context to help the model understand the code.
The Jupyter extension is quite complex due to the various features supported (code running locally and remotely, webviews, debugging, rendering, etc.). Even though each feature is in its own folder, I've found that using AI to fix bugs/features can have the following issues:
- Time consuming (the model searches files and code to read and understand it)
- Too many tool calls (due to the previous issue)
- Confusion/incorrect assumptions (features such as local and remote kernels are treated as the same; in some cases the model assumed there is a concept of remote Python environments as well)
Providing the right information/context, such as explaining the tech stack, core components and workflow, helps the model understand the code better.
Note
I do not believe this is required for small repos. Even with large repos this might not be required. As models get better/faster and support larger context windows, the need for such documentation will disappear.
Caution
Avoid generating large and complex documents; that only chews up the context window. Ensure the content is reviewed and updated by a model, to make sure it contains information a model would find useful and would need to better understand the codebase.
Warning
This does add to the debt. Over time these documents will need to be updated.
Hopefully by then models will have improved, making these documents unnecessary.
Common understanding: Imagine two engineers working together to address a bug. At a minimum, the two engineers need a common understanding of the bug. If not, once a PR is submitted by one engineer, the other will simply reject it, as (in their mind) the changes are incorrect. This wastes time and resources. The same analogy applies when working with AI.
Important
This is where a human needs to verify that the model's understanding of the bug is correct.
Helps the model build a better solution
Getting the model to provide a root cause analysis or *think* more about the problem helps it produce better solutions.
The key here is to instruct the model to *think* about the issue before churning out code.
- Ask the model to provide an overview (as mentioned, this helps us validate the model's understanding of the issue)
- Ask the model for a root cause analysis (as mentioned, this helps the model produce better solutions through a better understanding of the bug)
- Ask the model not to make up solutions (sadly, models seem to always try to satisfy our requirements, even when they cannot; we need to ask them not to do this)
- Provide the model with documentation that can help it better understand the code base (get the model to review/generate the documentation)
- Ask the model why it did something wrong (hallucinated) or made a wrong assumption; sometimes the model provides good feedback. We can later update the instructions to ensure the model doesn't repeat the same mistakes.
- Different models respond differently to the same instructions (hence always try the instructions with different models).
- Finally, keep reviewing and revising the prompts.
- Using the `/plan` and `/implement` prompts, one no longer needs to look at the code. Verifying that we have a common understanding and verifying the proposed plan is key/crucial.
- For large, complex tasks, prefer using `/plan`, then review, and finally delegate to the Copilot Coding Agent.
- For Python repos it is best to create a `copilot-instructions.md` with agent-specific instructions, e.g. `ensure a Python environment has been configured for use with this repo`.
- Update the default firewall settings in the Coding Agent to be able to download VS Code (the VS Code download sites are blocked, hence VS Code tests cannot run)
- Automate `.github/workflows/copilot-setup-steps.yml`
- Coding-agent-specific instructions?
- Agent-specific instructions in `copilot-instructions.md`? E.g. `ensure a Python environment has been configured for use with this repo`.
- How do we get from README.md (or other .md files) to an environment set up for the model to work in without stumbling at every step? Different repos have different ways of setting up the environment: runtime, packages (python, npm, rust), other tools (pre-commit hooks), etc. Almost all of this information is in some README.md, GETTING_STARTED.md, CONTRIBUTING.md, or the like. E.g. `hatch`, `pytest`, and other test tools.