LinkedIn recruitment spam becomes Olde English prose after user hides AI prompt injection in bio — bots also also manipulated to address user as ‘My Lord’

TL;DR

A LinkedIn user inserted a prompt into their profile that caused AI recruiters to communicate in Old English, revealing vulnerabilities in AI profile scanning. This incident underscores potential manipulation of AI systems and raises concerns about AI reliability in professional contexts.

A LinkedIn user, tmuxvim, intentionally inserted a prompt into their profile that caused AI recruiters to address them in Old English, calling them ‘My Lord.’ This incident highlights vulnerabilities in AI systems used in professional networking and recruitment, raising concerns about manipulation and AI reliability.

According to reports from Tom’s Hardware, tmuxvim added a prompt injection to their LinkedIn bio instructing AI profile scanners to address them in Old English and speak as if in the year 900 AD. As a result, recruiters’ automated messages transformed into humorous, archaic language, with one message beginning ‘My Lord Arthur’ and including a lengthy Old English-style text referencing treasure and warriors. The user shared a screenshot of the message, illustrating the unexpected behavior caused by the prompt injection.

The incident was shared on social media, where discussions emerged about the ease of manipulating AI systems through prompt injections. Experts note that such manipulations could have broader implications for AI reliability and trust in automated professional interactions. The user’s action was a deliberate experiment to demonstrate how AI can be influenced in unintended ways, raising alarms about security and robustness in AI-powered platforms.

Why It Matters

This event underscores the potential for malicious or playful prompt injections to disrupt AI-driven systems, especially in high-stakes environments like recruitment. It highlights the need for improved safeguards against AI manipulation and raises questions about the trustworthiness of automated communication tools in professional settings. As AI continues to integrate into daily workflows, understanding and mitigating such vulnerabilities becomes increasingly critical for platform providers and users alike.

Amazon

AI prompt injection detection tools

As an affiliate, we earn on qualifying purchases.

As an affiliate, we earn on qualifying purchases.

Background

Prompt injections have been a known issue in AI language models, where users insert specific commands to influence output. This incident on LinkedIn is a high-profile example of such manipulation occurring in a real-world, professional context. Previously, AI vulnerabilities have been demonstrated in controlled environments, but this case shows how easily these tactics can be employed publicly. The broader conversation involves AI security, user trust, and the need for better safeguards against malicious prompt injections.

“I put a prompt injection into my LinkedIn bio and recruiters are messaging me in Old English and calling me Lord.”

— tmuxvim

“This demonstrates how prompt injections can manipulate AI outputs, raising concerns about the robustness of AI systems used in professional environments.”

— AI security expert

Amazon

AI security and safety software

As an affiliate, we earn on qualifying purchases.

As an affiliate, we earn on qualifying purchases.

What Remains Unclear

It is not yet clear how widespread such prompt injections could become or what specific safeguards platform providers might implement to prevent similar manipulations. The long-term impact on AI trustworthiness in professional contexts remains uncertain.

Amazon

AI chatbot moderation tools

As an affiliate, we earn on qualifying purchases.

As an affiliate, we earn on qualifying purchases.

What’s Next

Platform providers like LinkedIn and AI developers are expected to review and strengthen their safeguards against prompt injections. Further research and testing are likely to focus on improving AI robustness and detecting malicious prompt manipulations. Monitoring of similar incidents will help assess the evolving threat landscape.

Amazon

AI system robustness testing kits

As an affiliate, we earn on qualifying purchases.

As an affiliate, we earn on qualifying purchases.

Key Questions

Can AI systems be permanently manipulated through prompt injections?

Prompt injections can temporarily influence AI outputs, but ongoing research aims to develop defenses against persistent manipulation. The incident on LinkedIn demonstrates a specific case but does not imply permanent control over AI models.

What are the risks of such manipulations in professional settings?

Manipulations could undermine trust in automated communication, lead to misinformation, or be exploited for malicious purposes, such as phishing or impersonation.

How can platforms prevent such prompt injections?

Developers can implement safeguards like input sanitization, anomaly detection, and stricter AI control mechanisms to reduce the risk of prompt-based manipulation.

Is this incident an isolated case?

While this is a notable example, prompt injections are a known vulnerability in AI systems, and similar tactics could be employed elsewhere if safeguards are not improved.

You May Also Like

Solomon Island lawmakers pick China-cautious Matthew Wale as new PM

Matthew Wale, known for his cautious stance on China, has been elected as Solomon Islands’ new prime minister after the ousting of the previous leader, amid ongoing security deal concerns.

Medicare’s new payment model is built for AI. Most of the tech world has no idea

Medicare’s ACCESS program introduces a payment model rewarding health outcomes, enabling AI integration in healthcare for the first time at federal scale.

ICLR 2026 – Institutional Affiliations Dataset and Analysis

A new dataset derived from 5,356 ICLR 2026 papers offers detailed institutional affiliation data, enabling insights into AI research trends and leading institutions.

‘Millions’ of pounds saved by replacing Palantir tech in refugee system

UK government replaced Palantir’s system for managing Ukrainian refugee placements with an in-house solution, saving millions and increasing control.