Microsoft Denies Using User Data from Microsoft 365 for AI Training Amid Privacy Concerns

Microsoft has officially denied allegations that it uses data from its Microsoft 365 applications, such as Word and Excel, to train AI models, following widespread concerns on social media.

Microsoft has found itself at the center of a privacy storm after allegations surfaced that the tech giant was using data from its Microsoft 365 applications, including Word and Excel, to train its artificial intelligence (AI) models. The claims, which spread rapidly across social media, suggested that Microsoft was leveraging a feature known as "Connected Experiences" to collect user data for AI training purposes. However, Microsoft has categorically denied these allegations, asserting that customer data from Microsoft 365 is not used to train large language models (LLMs).

The controversy began when users noticed that the "Connected Experiences" feature, which is enabled by default, could allow Microsoft to access and use their content. The feature is designed to enhance productivity by integrating documents with online resources, enabling functionality such as real-time co-authoring and cloud storage. Despite the uproar, Microsoft has clarified that this setting is not related to AI training.

In a statement to various media outlets, a Microsoft spokesperson emphasized that the company does not use customer data from Microsoft 365 consumer and commercial applications to train foundational LLMs. The spokesperson further explained that the "Connected Experiences" feature is intended to provide users with enhanced productivity tools, such as grammar suggestions and design recommendations, without involving AI model training.

The allegations have sparked significant concern among users, who fear that their private and sensitive data could be used without their consent. This concern is not unfounded, as other tech companies have faced similar accusations in the past. For instance, Adobe Inc. recently faced backlash over its terms of use, which users claimed allowed the company to use their data for AI training. Adobe later clarified its stance, stating that it does not use cloud content to train its AI models.

Microsoft's denial comes amid a broader conversation about data privacy and the ethical use of AI. As tech companies increasingly rely on AI to enhance their products, users are becoming more vigilant about how their data is used. Microsoft has reiterated its commitment to user privacy, stating that any use of customer data for AI training would require explicit consent.

The company has also highlighted that enterprise customers can manage privacy settings and control the availability of "Connected Experiences" within their organizations, including the ability to opt out of certain features if they have concerns about data privacy.
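In practice, administrators typically deploy these controls through Group Policy or Cloud Policy rather than by hand. The minimal Python sketch below writes the per-user registry values that Microsoft documents for the Office privacy policy settings; the helper name `apply_privacy_policies` and the assumption that a data value of 2 disables each connected-experiences category are taken from that published documentation, not from any Microsoft-provided tool, so treat this as an illustrative sketch rather than an official procedure.

```python
import winreg

# Documented per-user policy location for Microsoft 365 Apps (Office 16.0) privacy controls.
# Assumption: values of 1 = enabled, 2 = disabled, per the published policy settings.
POLICY_KEY = r"Software\Policies\Microsoft\office\16.0\common\privacy"

SETTINGS = {
    "UserContentDisabled": 2,                 # connected experiences that analyze content
    "DownloadContentDisabled": 2,             # connected experiences that download online content
    "ControllerConnectedServicesEnabled": 2,  # additional optional connected experiences
}


def apply_privacy_policies(settings):
    """Create or open the per-user policy key and write each setting as a DWORD."""
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
        for name, value in settings.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)


if __name__ == "__main__":
    apply_privacy_policies(SETTINGS)
    print("Connected experiences policy values written for the current user.")
```

In a managed environment these same values would normally be pushed centrally via Group Policy or the Microsoft 365 admin tooling, so a script like this is mainly useful for testing or for a single machine.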

Despite Microsoft's assurances, the incident underscores the importance of transparency and clear communication regarding data usage policies. As AI technology continues to evolve, companies must ensure that users are fully informed about how their data is being used and provide them with the necessary tools to protect their privacy.
