
Applying Live Context to your Copilot

info

This feature is currently in Alpha and available to macOS & Windows users, with Linux support coming ASAP 🙏🏼.

Pieces for Developers has thus far been a productivity tool integrated into your existing tools through plugins and extensions, enabling deep contextual conversations when users manually add context from folders, files, snippets, and more. While this is useful and users love it, we want to go one step further.

We would like to introduce the Pieces for Developers Workstream Pattern Engine (WPE), which powers the world’s first Temporally Grounded Copilot. Our goal is to push the limits of intelligent Copilot interactions through truly horizontal context awareness across the operating system, enabling your copilot to understand what you've been working on and keep up with the productivity demands that developers deal with every day. Together with your help, the Pieces Copilot will become the first that can understand recent context, eliminating the need for manual grounding.

The Workstream Pattern Engine

The workflow context you interact with comes from the Workstream Pattern Engine, an "intelligently on" system that shadows your day-to-day work to capture relevant workflow materials and temporally ground your Pieces Copilot chats with recent, relevant context. In practice, this enables natural questions such as "What was I talking to Mack about this morning?" or "Explain that error I just saw." Your Pieces Copilot can truly extend your train of thought, so you spend less time tracking things down or capturing context and more time doing the things you love, like building amazing software.

Getting Started with your Temporally Grounded Copilot

Enabling/Disabling the WPE

To use Live Context in your conversations with Pieces Copilot, you will need to enable the Workstream Pattern Engine. You can disable it at any time, but remember that the Copilot will not be able to use live context from any period while it was disabled.

WPE Menu

Using Live Context

To engage Live Context, head to the Copilot Chats view in the Pieces for Developers Desktop App and select “New Chat.” In the “Set Context” section, tap the option labeled “Live Context”.

WPE Pipeline

info

The Workstream Pattern Engine must be turned on to use Live Context. We’ve made this super easy to do from wherever you’re getting started.

You can add additional context to further tailor the conversation if you’d like.

Permissions

If you’re a Mac user, you will need to update Pieces’ permissions in order to use Live Context. Windows users may disregard this step.

You can also grant these permissions manually by adding and enabling Pieces OS in the following settings:

Privacy & Security > Accessibility

Accessibility Permissions

Privacy & Security > Screen and System Audio Recording

Screen and System Audio Recording Permissions
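If the macOS permission prompts never appear, or you want Pieces OS to re-request them, Apple's tccutil command can reset the relevant permission state so the dialogs are shown again. Note that the bundle identifier below is an assumption; verify the actual one on your machine first:

```shell
# Reset Accessibility and Screen Recording permission state so macOS
# prompts again the next time the app requests them.
# NOTE: "com.pieces.os" is an assumed bundle identifier; verify it with:
#   osascript -e 'id of app "Pieces OS"'
tccutil reset Accessibility com.pieces.os
tccutil reset ScreenCapture com.pieces.os
```

After resetting, relaunch Pieces OS and re-enable Live Context so the permission dialogs appear again.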

Recommendations and Best Practices

tip

To get the most value out of the WPE, let it capture some data first: with the engine running, move through your workflow (chats, websites, emails, code, etc.) before asking a question.

Interaction with Language Models (LLMs)

  • Cloud LLM: If you are using a cloud-based LLM, the data identified as relevant is sent to the cloud LLM for processing.
  • Local LLM: If you are using a local LLM, the data remains on your device, ensuring that all processing happens locally without any data leaving your device.
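The two paths above can be sketched in a few lines of Python. Everything here, the function name and the fields, is an illustrative assumption for clarity, not part of the Pieces implementation or API:

```python
# Illustrative sketch (NOT the actual Pieces implementation) of how
# context identified as relevant might be routed by model type.

def route_context(context_snippets, model):
    """Decide where relevant context is processed.

    `model` is a dict with an 'is_local' flag (hypothetical shape).
    Local LLM: everything stays on-device. Cloud LLM: only the
    snippets judged relevant are sent for processing.
    """
    if model["is_local"]:
        # Local LLM: context never leaves the machine.
        return {"destination": "on-device", "payload": context_snippets}
    # Cloud LLM: relevant snippets are sent to the hosted model.
    return {"destination": "cloud", "payload": context_snippets}

# Example: a local model keeps everything on-device.
result = route_context(["error log from 10:42"], {"is_local": True})
print(result["destination"])  # on-device
```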

Prompting

As with any interaction with an LLM, good prompting practices will greatly improve your experience. Through the Workstream Pattern Engine, the Pieces Copilot understands what you’re working on, including the files, websites, and folders involved, and with whom. You can therefore ask even more natural questions that feel like extensions of thought. Here are some examples of what's now possible:

  • "Can you summarize the readme file from the pieces_for_x repo?"
  • "What did Sam have to say about the All Hands meeting in the GChat MLChat channel?"
  • "Generate a script in python using the function I saw on W3Schools to create a variable named xyz"
  • "Take the function from example_function.dart and add it as a method to the class in example_class.dart"
  • "What did Mark say about requests to the xyz api in slack?"

Use Cases and Tutorials

  1. Exception/error handling
  2. Getting started solving a problem
  3. PR review
  4. Summarizing unread GChats when you log in

Data and Privacy

Your workstream data is captured and stored locally on-device. At no point will anyone, including the Pieces team, have access to this data unless you choose to share it with us.

The Workstream Pattern Engine triangulates and leverages on-task and technical context across developer-specific tools you're actively using. The bulk of the processing that occurs within the Workstream Pattern Engine is filtering, which utilizes our on-device machine learning engines to ignore sensitive information and secrets. This enables the highest levels of performance, security, and privacy.
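As a rough illustration of the kind of filtering described above, here is a minimal, regex-based sketch. The real engine uses on-device machine learning models; the patterns and function below are assumptions for demonstration only:

```python
import re

# Illustrative sketch (NOT the actual Pieces filtering engine):
# mask obvious secrets before anything is stored or sent anywhere.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("api_key = sk-12345 and some notes"))  # [REDACTED] and some notes
```

In practice, an ML-based filter can catch secrets that rigid patterns miss, which is why on-device models are used for this step.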

Lastly, some advanced components within the Workstream Pattern Engine require blended processing, which is set via user preferences; for these, you will need to use a cloud-powered Large Language Model as your copilot’s runtime.

That said, you can use Local Large Language Models instead, though this may reduce the fidelity of output and requires a fairly new machine (2021 or newer), ideally with a dedicated GPU. You can read this blog for more information about running local models on your machine.

As always, we've built Pieces from the ground up to put you in control. With that, the Workstream Pattern Engine may be paused and resumed at any time.