29/09/2024

As LinkedIn expands its generative AI features, from post creation to job applications, the platform has updated its User Agreement and Privacy Policy to clarify how it utilizes user data to enhance its AI models.
Spoiler: LinkedIn is using all public posts on the platform to train its AI tools.
As explained by LinkedIn:
“In our Privacy Policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (“generative AI”) and through security and safety measures.”
The specific section of LinkedIn’s updated policy now reads:
“We may use your data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences so that our Services can be more relevant and useful to you and others.”
In its User Agreement, LinkedIn clarifies that by using the app, you’re agreeing to the terms laid out in its Privacy Policy, including clauses related to data usage, such as the one about AI training.
Notably, LinkedIn does not explicitly exclude direct messages (DMs) from this agreement. This means, theoretically, the platform could use information shared in private messages for AI training or ad targeting, which might raise privacy concerns for some users. In contrast, Meta has consistently emphasized that it does not use private messages to train its AI models, nor does it collect data from users under 18 for such purposes.
Notably, LinkedIn has offered no such assurances regarding private messages in its legal documentation. It has, however, introduced an AI training opt-out option, allowing users to turn off this feature if they prefer not to have their information used for AI model training.
As with many privacy settings, though, most users likely won’t take the time to switch it off. In practice, this means LinkedIn will include the majority of its users in this new AI training agreement by default, except in regions where AI training permissions are still under legal debate.
This includes the European Union, where LinkedIn data is currently excluded from AI training due to regulatory restrictions, as well as Switzerland, which is still evaluating the parameters of such agreements. These regions are taking a more cautious approach to AI training and data privacy.
As mentioned, Meta is also refining its regional requirements for AI training permissions and recently gained approval to use UK user data for this purpose. Similarly, X (formerly Twitter) has added an AI training opt-out feature to comply with regional regulations. These moves reflect the growing focus on aligning AI data usage with local privacy standards.
But essentially, if you haven’t explicitly opted out of AI training on a social platform, it’s likely that your personal information is being used for that purpose. Your updates are probably being fed into a large language model (LLM) somewhere.
Is that a big issue?
Probably not, as the data is typically aggregated, heavily filtered, and anonymized, making it difficult to trace back to any individual. However, there is still a potential risk when sharing personal information online. Feeding it into LLMs could result in problematic or unintended content generation, which might be concerning depending on the nature of the information you share.
Ultimately, users should have a choice about how their data is used. LinkedIn has introduced that option, and other platforms are starting to follow suit. But the choice comes after the fact: many platforms have already used your historical information without explicit consent.
This raises a larger concern: even if you decide to opt out now, most of us have been on social media for over a decade, and much of that information has likely already been integrated into various AI models.
So, does opting out now really make a difference?
It depends on your perspective and on what you choose to share. More apps are indeed adding options to disable data sharing, which is a positive step; in the broader context, however, it may feel a bit too late for many users.
Source: Social Media Today
Find more information here: http://bit.ly/2BPQn38
For more information contact us at: [email protected]