Copilot FAQs
Find answers to common questions about Copilot.
Sage Copilot is an AI-powered finance assistant that can transform how you work.
It automates complex tasks, offers personalized business insights, and provides proactive recommendations to improve performance.
Security is built into everything we do. Our security measures include secure data handling, anonymization, encryption, access management, AI model security, advanced monitoring, and threat detection. Learn more about Sage and Responsible AI.
Search help with Copilot is included in your Sage Intacct subscription.
Contact your Account Manager or Sage Intacct Representative to learn how to get Close Automation and other Copilot features as they are released.
When Sage Copilot is available, you can use it by logging in to Sage Intacct and selecting the Copilot icon in the title bar. Copilot opens in a panel on the right side of the page. Depending on the Copilot features you have access to, you can then select a suggested prompt, view a generated insight, or enter a search query.
Data from your queries and suggested prompts is used primarily to improve Copilot's performance and response relevance. Query data is retained for a limited time, and you can delete your chat history at any time.
Both Sage Copilot and Microsoft Copilot are digital assistants. However, Sage Copilot is integrated with Intacct and provides specialized functions specifically for Intacct. Microsoft Copilot is integrated with Microsoft 365 and is customized for that platform. While their architecture and infrastructure may have similar components, they are not the same entity.
Sage Copilot is only available to customers in production environments. It cannot be tested in sandbox environments.
Machine learning models are designed to learn and improve from real-world data to ensure accuracy and reliability. This is why training happens exclusively in the production environment.
Documents uploaded in sandbox environments are often test data, simulated scenarios, or incomplete records, which may not reflect the variations and complexities of real-world data. Because we cannot verify that sandbox documents represent authentic or meaningful data, allowing them into the training pipeline risks introducing noise or inaccuracies that would compromise the model's accuracy and reliability.