Fresh Pieces Copilot updates, coming in hot to JetBrains! 🔥 Chatting with the Pieces Copilot in JetBrains is already a huge upgrade to your workflow, and with this release, you can customize your experience even further by choosing on-device LLMs. Plus, we're rolling out real-time snippet streaming throughout the Pieces Suite and a few key bug fixes.
You probably already know and love this feature from the Pieces Desktop App, but there's nothing like choosing the LLM you want to power your Pieces Copilot interactions right in your IDE. You can choose between cloud LLMs like PaLM 2 and on-device LLMs like CodeLlama, which run locally to keep your code and copilot interactions secure.
To adjust your Pieces Copilot runtime:
Are you tired of manually refreshing your snippet list in your JetBrains IDE? So are we. Today, we're introducing real-time snippet streaming to the Pieces for JetBrains Plugin. It's a vastly more efficient way to keep your snippet list current: any change to your Pieces repository is reflected in your IDE the moment it happens.

This way, you can save a snippet through the Pieces Web Extension, rename it in the Pieces Desktop App, and reuse it in your JetBrains IDE without a single manual refresh.
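Under the hood, this kind of cross-app syncing boils down to a publish/subscribe pattern: the snippet repository notifies every connected client when something changes, instead of each client polling and re-fetching. Here's a minimal, purely illustrative sketch in Python — none of these class or method names come from the actual Pieces SDK, they just show the idea:

```python
# Hypothetical sketch of event-driven snippet updates (publish/subscribe).
# These names are illustrative only, not part of any Pieces API.

class SnippetStore:
    """Holds snippets and notifies every subscriber on each change."""

    def __init__(self):
        self._snippets = {}
        self._subscribers = []

    def subscribe(self, callback):
        # Each open client (IDE plugin, desktop app, ...) registers a callback.
        self._subscribers.append(callback)

    def save(self, snippet_id, content):
        self._snippets[snippet_id] = content
        self._notify("saved", snippet_id)

    def rename(self, old_id, new_id):
        self._snippets[new_id] = self._snippets.pop(old_id)
        self._notify("renamed", new_id)

    def _notify(self, event, snippet_id):
        # Push the change to all subscribers immediately; no polling needed.
        for callback in self._subscribers:
            callback(event, snippet_id)


# An "IDE view" that stays current without manual refreshes:
events = []
store = SnippetStore()
store.subscribe(lambda event, sid: events.append((event, sid)))

store.save("snippet-1", "print('hello')")  # e.g. saved via the Web Extension
store.rename("snippet-1", "hello-world")   # e.g. renamed in the Desktop App
# The subscriber saw both changes as they happened.
```

Because every change is pushed to subscribers as it occurs, the IDE's snippet list never goes stale between refreshes.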
This release also squashes a few pesky bugs. The most notable? We fixed an issue with automatic scrolling to the bottom of a streamed message, so copilot responses now stay in view as they arrive.
Do you love Pieces? Stop sending us carrier pigeons 🐦 and join our Discord Server to chat with our team and other power users, get support, and more. 🤝
As always, if you run into issues or have feedback, please fill out this quick form or email us at email@example.com and we’ll be in touch as soon as possible!