A bombshell revelation has sent shockwaves through the tech world: popular workplace communication platform Slack has admitted to quietly training its machine-learning models on user messages, files, and other content without explicit permission.
While Slack claims its generative AI products rely on third-party LLMs, the company has confirmed that it uses de-identified, aggregate data from user interactions for features like channel and emoji recommendations. This practice, however, has raised serious privacy concerns as users are not informed of this data usage and, more troublingly, lack a direct opt-out option.
The only way to stop Slack from training its models on your data is to contact your organization’s Slack administrator, who must then reach out to Slack directly to request an opt-out. This cumbersome process has drawn widespread criticism for what many see as a blatant disregard for user privacy in the AI gold rush.
“Slack’s ‘opt-out’ process is a complete joke,” commented Corey Quinn, an executive at The Duckbill Group, who discovered the policy buried within Slack’s Privacy Principles. “They are essentially saying, ‘We’re taking your data by default, but you can ask your boss to complain to us if you don’t like it.’”
This revelation comes amid growing concerns surrounding the ethical use of user data for AI training. Slack’s opaque policies and misleading marketing materials further exacerbate these concerns, leaving users with little insight into, or control over, how their data is used.
As the AI landscape continues to evolve, the need for clear and transparent data practices becomes paramount. Slack’s recent actions serve as a stark reminder of the potential for misuse of user data in the pursuit of AI innovation.