Pieces finished 2023 on a high note! Last year our user base showed impressive growth and we introduced innovative features across all our plugins, most notably the Pieces Copilot. As we start 2024, we're turning it up a notch to further minimize context switching, boost your efficiency, and transform your workflow.
In our first plugin release of the year, we're excited to announce new runtime support for Pieces Copilot, enhanced developer experience, and crucial bug resolutions. Let’s dive into the details 🏊
Exciting news for Pieces Copilot users: Google's Gemini LLM is now at your fingertips! This cutting-edge language model is designed to elevate your coding experience by providing smarter, more responsive interactions.
To tap into Gemini's advanced capabilities, simply click the "Copilot Runtime" button in the top right corner and select "Gemini" from the dropdown menu of cloud-based LLMs. Select the Gemini Pro Chat model and start using it for Pieces Copilot!
We’ll continue to add support for additional LLMs and even support for using your own LLM to power Pieces Copilot in the near future.
As fine-tuned LLMs become more prevalent, we want to make sure that you can use any custom LLM to power Pieces Copilot, in addition to popular LLMs like GPT-4, Gemini, or Llama 2.
In this release, we’re thrilled to introduce the first phase of our OpenAI integration to the plugins. Similar to the Pieces Desktop App, you can now add your own OpenAI API key to power the Pieces Copilot!
To access this feature, open the Pieces Copilot Runtime menu, navigate to the Cloud OpenAI models, and select “Manage Custom API Key.” This key will be used to power existing OpenAI Models in the Copilot Chat.
With this first version, you can add your personal or enterprise API key from OpenAI, allowing you to use your own quota and rate limits. In our next release, the platform will accommodate various “Bring your own model” scenarios, with a significant emphasis on supporting those who have fine-tuned GPT models based on their organizational or team-specific requirements.
We've introduced a new shortcut in Pieces for VS Code to enhance your workflow with the 'Ask Copilot about Selection' feature. Simply highlight the code you're curious about and press Ctrl+Shift+A (or Cmd+Shift+A on Mac) to bring up the prompt window. The selected code will automatically be included as context for your query to Pieces Copilot, streamlining your coding experience.
We've implemented several key enhancements to the Pieces Copilot experience:
1. We've introduced a new stop icon for Pieces Copilot, giving you the control to halt response generation at any time.
2. We've implemented a more accurate download percentage indicator while a model is downloading.
3. You can now delete a downloaded local LLM right from the Manage Runtime menu. Just click the trash icon next to any downloaded model to remove it.
We also fixed several bugs to improve the UX of these extensions.
We are excited to introduce our new community-driven support process! We have recently created a public GitHub repo where you can create issues to get assistance from us and other users, as well as join in on discussions to request features, show off something you’ve done lately with Pieces, and generally engage with us and the rest of the Pieces community.
Did you know Pieces has recently joined the Open Source community? We launched our TypeScript SDK on NPM, and developers around the world have already started building on top of the Pieces Platform and getting familiar with our APIs.
With our SDKs, you can build your own apps, extend Pieces functionality, and so much more. Check out our GitHub to learn more about our Open Source initiatives and how you can start contributing today!
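To give a feel for what building on the Pieces Platform might look like, here is a minimal TypeScript sketch of a client that queries a locally running Pieces instance. Note that the port, endpoint path, and payload shape here are purely hypothetical placeholders for illustration, not the SDK's actual surface; consult the TypeScript SDK on NPM and our GitHub for the real API.

```typescript
// Hypothetical sketch only: the port, path, and payload shape below are
// illustrative assumptions, not the documented Pieces API.

interface SnippetSearch {
  query: string;
  limit: number;
}

interface HttpRequest {
  url: string;
  method: "GET" | "POST";
  headers: Record<string, string>;
  body: string;
}

// Build a request against a locally running Pieces instance
// (hypothetical port and route).
function buildSearchRequest(search: SnippetSearch): HttpRequest {
  return {
    url: `http://localhost:1000/assets/search?limit=${search.limit}`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: search.query }),
  };
}

// Send it with the built-in fetch (Node 18+ or browsers).
async function searchSnippets(search: SnippetSearch): Promise<unknown> {
  const req = buildSearchRequest(search);
  const res = await fetch(req.url, {
    method: req.method,
    headers: req.headers,
    body: req.body,
  });
  return res.json();
}
```

Separating request construction from transport, as above, keeps the sketch easy to test and easy to swap for the real SDK client once you've explored its API.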
Do you love Pieces? Stop sending us carrier pigeons 🐦 and join our Discord server to chat with our team and other power users, get support, and more. 🤝