Lobbying in the Age of AI - Part 2

Complex Use Cases, Regulatory Realities, and the Limits of Automation

By Sebastian Peter Sass
February 13, 2026 · 18 minutes to read

As discussed in Part 1 of this series, lobbying — understood as any form of targeted advocacy aimed at informing, influencing or nudging policy-makers — serves a legitimate democratic function when conducted transparently and professionally. When diverse stakeholders offer structured input, arguments and evidence, they help lawmakers navigate complexity and translate competing interests into workable rules. Part 1 also reviews a number of straightforward use cases where AI can deliver genuine productivity and quality gains in day-to-day advocacy work: targeted intelligence and alerting, background research and contextualisation, content preparation, and strategic horizon scanning.

This second part turns to the more complex cases — where the potential for AI assistance remains real, but where technical limitations, regulatory constraints, and ethical questions need to be considered.

Adoption accelerates, regulation lags

The regulatory infrastructure meant to govern these tools is in flux. The European Commission missed its 2 February 2026 deadline for publishing guidelines on Article 6 high-risk AI classification under the EU AI Act. This is the guidance that AI tool providers and users need in order to understand their obligations. Meanwhile, the Digital Omnibus Simplification Package proposed in November 2025 would push high-risk enforcement back by up to sixteen months, and the CEN-CENELEC standardisation bodies missed their fall 2025 deadline for AI technical standards.

These regulatory details are no longer academic. A 2025 survey of approximately 1,000 EU government affairs professionals found that 89% were exploring AI tools — up from 36% reporting actual use just one year earlier. That is not a gradual trend; it is a rapid, sector-wide shift in working practices.

Beyond the straightforward applications discussed in Part 1, several more complex use cases illustrate the territory where AI provides real added value but where human judgment nevertheless remains indispensable — not only for ethical reasons, but also for quality and reliability.

Intelligence beyond formal records

AI performs particularly well when tracking formal, structured signals - even very weak and hidden ones: pronouncements, committee assignments, amendment authorship, voting records, Transparency Register entries, published consultation responses. These leave clear digital traces and are well-suited to automated retrieval and analysis.

But in policy-making, there are also dynamics that leave no such traces. Trilogue negotiations, political package deals between party groups, personal alliances and rivalries, corridor conversations — these shape outcomes but are invisible to an algorithm. Understanding, tracking and making sense of such factors inevitably requires human acumen, trusted relationships and political judgment.

This is precisely where AI's operational contributions become strategically important. Policy professionals in the Brussels bubble report spending up to 30% of their time on time-intensive research, monitoring and summarisation: bulk work that may be routine but still demands diligence, sophistication and an understanding of meaning and context. Where information is publicly available, however well hidden or buried in complexity, professionals cannot afford to miss it. Yet all of this eats into the resources available for the high-value tasks where human agency and discretion are genuinely indispensable.

High-quality AI tools free human professionals to focus their intelligence and discretion exactly where it matters most — on the strategic and relational dimensions where quality depends on human involvement. The value is not that AI exercises such judgment, but that it creates the conditions for humans to make better and faster decisions, based on the best possible situational awareness and available information.

Procedural tracking with domain-specific intelligence

One of the more technically demanding but practically essential applications is reliable procedural tracking — knowing not just what is happening on a given policy file, but where in the process it sits, what comes next, and what procedural windows exist for stakeholder input.

This requires architectural choices that go far beyond general-purpose AI. Systems need to be bound to structured taxonomies of EU legislative and regulatory procedures and constrained, through retrieval-augmented generation, to draw on verified institutional data rather than on general training knowledge of unknown provenance. The AI must reliably distinguish between a Commission proposal at first reading, a Council general approach, a Parliament committee report, a trilogue compromise text, and a final act — and understand the procedural implications of each stage. More on this further below.

Without this grounding, general-purpose AI tools routinely conflate draft documents with adopted texts, misidentify procedural stages, or present outdated positions as current. In a domain where the procedural stage determines which actors have influence, what format input should take, and what deadlines apply, such errors can have significant consequences.
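
To make this grounding concrete, here is a minimal sketch in Python of the pattern described above: the procedural stage is resolved strictly from a store of verified institutional records, and the system declines to answer when no verified record exists rather than letting a language model guess. The stage names, procedure identifier and in-memory data store are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    COMMISSION_PROPOSAL = auto()
    PARLIAMENT_COMMITTEE_REPORT = auto()
    COUNCIL_GENERAL_APPROACH = auto()
    TRILOGUE_COMPROMISE = auto()
    FINAL_ACT = auto()

@dataclass
class ProcedureRecord:
    """A verified entry sourced from institutional data, not from model memory."""
    procedure_id: str   # e.g. an interinstitutional procedure number (illustrative)
    stage: Stage
    last_updated: str   # ISO date of the authoritative record

# Stand-in for a curated, regularly refreshed store of verified institutional data.
VERIFIED_PROCEDURES: dict[str, ProcedureRecord] = {
    "2021/0106(COD)": ProcedureRecord("2021/0106(COD)", Stage.FINAL_ACT, "2024-07-12"),
}

def stage_for(procedure_id: str) -> Stage | None:
    """Resolve the procedural stage from verified data only.

    If the procedure is not in the verified store, return None and escalate to
    a human or to retrieval against the primary source; the system never falls
    back to the model's general training knowledge.
    """
    record = VERIFIED_PROCEDURES.get(procedure_id)
    return record.stage if record else None

if __name__ == "__main__":
    print(stage_for("2021/0106(COD)"))   # Stage.FINAL_ACT
    print(stage_for("2025/0001(COD)"))   # None -> escalate rather than guess
```

The None branch is the architectural point: uncertainty is surfaced to the professional instead of being papered over with a plausible-sounding guess.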

Monitoring oral proceedings and live events

A more advanced capability is the processing of oral proceedings in near real-time — from European Parliament committee hearings to the Commission's midday press briefings and other livestreamed EU events that happen every day. What once required teams of analysts monitoring multiple proceedings simultaneously for explicit statements, policy positioning or weak signals can now be handled systematically, extracting relevant content with full contextual background and alerting users immediately. This ensures that no relevant statement or signal is missed across the EU's extensive calendar of live proceedings — a significant operational advantage given the volume and frequency of such events.
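
As a rough illustration of what such monitoring involves once audio has been transcribed, the sketch below scans transcript segments against a watchlist and emits alerts with their surrounding context. The event name, watchlist terms and data structures are hypothetical; a real pipeline would add speech-to-text, semantic matching and speaker resolution upstream.

```python
from dataclasses import dataclass
from collections.abc import Iterable, Iterator

@dataclass
class TranscriptSegment:
    event: str        # e.g. "ENVI committee hearing" (illustrative)
    speaker: str
    timestamp: str
    text: str

# Topics a team wants to be alerted on; purely illustrative.
WATCHLIST = {"deforestation", "due diligence", "implementing act"}

def alerts(segments: Iterable[TranscriptSegment]) -> Iterator[dict]:
    """Emit an alert whenever a watchlist term appears in a segment,
    keeping the full sentence so the analyst sees context, not just a hit."""
    for seg in segments:
        hits = {term for term in WATCHLIST if term in seg.text.lower()}
        if hits:
            yield {
                "event": seg.event,
                "speaker": seg.speaker,
                "timestamp": seg.timestamp,
                "matched": sorted(hits),
                "context": seg.text,
            }

if __name__ == "__main__":
    sample = [TranscriptSegment("ENVI committee hearing", "Rapporteur", "10:42",
                                "We will table amendments on due diligence next week.")]
    for alert in alerts(sample):
        print(alert)
```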

Where AI applications overpromise

Some applications that sound impressive in concept cannot deliver reliable results in practice. Being honest about this is important — both for quality and credibility. For instance, it is technically possible to build models that predict voting outcomes or amendment trajectories based on historical patterns of parliamentary proceedings. Historical voting data in the European Parliament may indicate broad directional tendencies at the political group level. But experienced practitioners know that in any individual case, outcomes depend on factors that are invisible to AI: package deals between MEPs or factions across unrelated files, informal negotiations that shift positions at the last moment, personal relationships between key actors, and political dynamics that shift outside anything an AI model can track.

Of course, such models may sometimes predict voting outcomes correctly, but through coincidence rather than an astute evaluation of all relevant drivers and factors. And it is not possible to know in advance which predictions are reliable and which are not. Presenting probabilistic outputs as actionable intelligence in a domain governed by human negotiation and political judgment can be problematic.
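
The limitation is easy to see when such a model is reduced to its essentials. The toy baseline below (with entirely made-up data) predicts a political group's direction as its historical majority position, which is roughly the information content of a group-level tendency; package deals, last-minute negotiation and personal dynamics are simply absent from its inputs.

```python
from collections import Counter

# Entirely made-up historical votes per political group on comparable files.
HISTORY = {
    "Group A": ["for", "for", "against", "for"],
    "Group B": ["against", "against", "against", "for"],
}

def directional_tendency(group: str) -> str:
    """Predict a group's likely direction as its historical majority position.

    This is all such a baseline really captures: a base rate. Nothing in its
    inputs reflects cross-file package deals, informal negotiation or personal
    relationships, which is why the output is a tendency, not intelligence.
    """
    counts = Counter(HISTORY.get(group, []))
    return counts.most_common(1)[0][0] if counts else "unknown"

if __name__ == "__main__":
    print(directional_tendency("Group A"))  # "for" -- a base rate, nothing more
```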

The regulatory framework: AI Act and advocacy

The EU AI Act creates a tiered system that is directly relevant to providers and users of AI tools in public affairs. Understanding where different applications fall in this framework is a practical necessity.

Prohibited practices

Since 2 February 2025, the AI Act prohibits a number of AI applications outright. These include systems deploying subliminal, deceptive, or manipulative techniques that materially distort behaviour in ways a person cannot detect; systems exploiting vulnerabilities related to age, disability, or socioeconomic situation; and social scoring by or on behalf of public authorities. In an advocacy context, these prohibitions set a clear outer boundary — most directly relevant to disinformation and manipulation scenarios rather than to legitimate professional advocacy, but important to understand as the legal baseline.

The high-risk question

Most advocacy AI tools — monitoring platforms, summarisation engines, drafting assistants, alerting systems — will likely qualify as limited-risk or minimal-risk, subject mainly to transparency obligations such as informing users when they interact with an AI system and marking AI-generated content.

But certain applications could cross into high-risk territory, which triggers substantially heavier obligations: conformity assessment, registration, risk management systems, human oversight requirements, and post-market monitoring. The Annex III categories potentially relevant to lobbying include emotion recognition or biometric categorisation systems (for instance, imagine a tool analysing facial expressions in committee hearings to gauge receptiveness) as well as automated profiling or scoring of decision-makers.

Article 6 of the AI Act establishes how systems are classified as high-risk. The Commission missed the 2 February 2026 deadline to publish clarifying guidelines, with adoption now expected in spring 2026. Furthermore, the Digital Omnibus, now undergoing the EU legislative process, adds to this dynamic regulatory situation by proposing a movable start date for high-risk obligations, linked to the availability of harmonised technical standards and potentially delaying enforcement to December 2027. The Omnibus is itself a legislative proposal requiring co-decision, so its timeline remains uncertain.

Transparency obligations for limited-risk tools

For the majority of advocacy AI tools, the relevant obligations are transparency-related. When a person interacts with an AI system, they must be informed. AI-generated content must be marked in machine-readable format. Deployers must also ensure appropriate human oversight: not merely perfunctory review, but meaningful human authority over outputs and responsibility for decisions. Users of AI-assisted advocacy tools are deployers in this sense and share these obligations.
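
What machine-readable marking looks like in practice is still settling; the AI Act requires it but does not prescribe a single technical format. The sketch below attaches a simple JSON disclosure to a piece of AI-assisted text. The field names are a hypothetical convention for illustration only, not a mandated standard; watermarking and content-provenance schemes are among the approaches being developed.

```python
import json
from datetime import date

def mark_ai_generated(text: str, model_name: str, human_reviewed: bool) -> str:
    """Wrap AI-assisted content together with a machine-readable disclosure.

    The schema is a hypothetical convention for illustration; the AI Act
    requires machine-readable marking but leaves the technical standard to
    emerge through harmonised standards and provenance schemes.
    """
    disclosure = {
        "ai_generated": True,
        "model": model_name,                # illustrative field
        "human_oversight": human_reviewed,  # deployer's review of the output
        "date": date.today().isoformat(),
    }
    return json.dumps({"disclosure": disclosure, "content": text}, indent=2)

if __name__ == "__main__":
    print(mark_ai_generated("Draft position paper text ...", "assistant-model-x", True))
```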

The GPAI transparency chain

An additional regulatory layer under the AI Act applies to the foundation models that underpin many advocacy tools. From August 2025, all General-Purpose AI model providers must publish training data summaries, provide technical documentation to downstream integrators, and comply with copyright policies. Models classified as systemic risk — those trained using computational power exceeding 10²⁵ floating point operations — face additional obligations including adversarial testing, incident reporting, and cybersecurity protections.
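
For orientation, the 10²⁵ FLOP threshold can be related to model size and training data with the common rule-of-thumb estimate of roughly 6 × parameters × training tokens. The sketch below applies it to an illustrative model; the rule of thumb is an approximation, not the Act's measurement methodology, and the figures are invented for the example.

```python
def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate using the common 6 * N * D rule of thumb
    (N parameters, D training tokens). An approximation, not a legal test."""
    return 6 * parameters * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # floating point operations, per the AI Act

# Illustrative example: a 70-billion-parameter model trained on 15 trillion tokens.
estimate = training_flops(70e9, 15e12)
print(f"{estimate:.2e} FLOPs -> systemic-risk presumption: {estimate > SYSTEMIC_RISK_THRESHOLD}")
```

On this rough estimate, such a model sits just below the threshold, which is why the systemic-risk tier is generally expected to capture only the largest frontier models.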

The practical relevance for advocacy is a transparency chain: foundation model providers must supply downstream tool builders with documentation on capabilities, limitations, and integration requirements. This enables the platforms that public affairs professionals actually use to understand and communicate reliability boundaries to end users. The GPAI Code of Practice, finalised in July 2025 with twelve commitments across transparency, copyright, and safety, has been signed by major providers including OpenAI, Anthropic, Google, and Microsoft.

Ethical and legitimacy risks

Beyond the regulatory framework, a number of applications raise questions that sit in the territory between legal and appropriate. These deserve attention not because they are necessarily prohibited, but because they carry risks to credibility and legitimacy — the foundations on which professional advocacy rests.

The hallucination risk — and what mitigates it

When AI fabricates a legal reference, misattributes a position, or invents a procedural step, the consequences in advocacy are particularly acute because credibility is the fundamental currency. One wrong briefing can cause lasting damage.

The adjacent LegalTech sector offers cautionary examples. Researcher Damien Charlotin has tracked 905 cases worldwide in which AI produced fabricated content in court filings. Stanford HAI found hallucination rates of 69–88% when large language models responded to specific legal queries. One of the "Big Four" consulting firms was reported to have included false academic references in government reports in both Australia and Canada.

No publicly documented cases of fabricated EUR-Lex references seem to have emerged — but the risks are similar, and the institutional memory in Brussels is long.

Such risks can be effectively mitigated, but doing so requires architectural choices and sophistication: an AI built specifically for EU policy purposes. Systems built on retrieval-augmented generation with verified, domain-specific source material — rather than relying on a language model's general training knowledge — dramatically reduce hallucination rates. Binding outputs to structured procedural taxonomies, retrieving from authoritative institutional sources, and requiring source attribution for every factual claim creates a fundamentally different reliability profile from simply asking a general-purpose model to summarise EU law.
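
In code terms, the difference is between generating an answer and assembling one from retrieved, verified passages. The minimal sketch below uses a naive keyword retriever as a stand-in for proper vector search over an illustrative corpus; it declines to answer when nothing relevant is retrieved and attaches a source identifier to everything it does return.

```python
from dataclasses import dataclass

@dataclass
class SourcePassage:
    source_id: str   # e.g. an official document reference (illustrative)
    text: str

# Stand-in for a verified, domain-specific corpus built from official texts.
CORPUS = [
    SourcePassage("32024R1689-art6",
                  "Article 6 sets out the rules for classifying AI systems as high risk."),
]

def retrieve(query: str, corpus: list[SourcePassage]) -> list[SourcePassage]:
    """Naive keyword overlap, standing in for a proper vector search."""
    terms = set(query.lower().split())
    return [p for p in corpus if terms & set(p.text.lower().split())]

def answer_with_sources(query: str) -> dict:
    """Answer only from retrieved, verified passages and cite their identifiers.

    If nothing relevant is retrieved, decline rather than let the model
    improvise from its general training knowledge.
    """
    passages = retrieve(query, CORPUS)
    if not passages:
        return {"answer": None, "sources": [], "note": "no verified source found"}
    return {
        "answer": " ".join(p.text for p in passages),
        "sources": [p.source_id for p in passages],
    }

if __name__ == "__main__":
    print(answer_with_sources("how are AI systems classified under article 6"))
```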

The boundary between lobbying and public campaigns

This series discusses AI use in targeted lobbying as a professional function directed at policy-makers. The use of AI in broader public campaigns, grassroots mobilisation, and electoral contexts raises related but distinct — and arguably even more complex — questions of appropriateness, legitimacy and legality. AI in public campaigns merits separate analysis; the relevant point here is that tools designed to enhance the quality and usefulness of professional advocacy are fundamentally different from tools aimed at shaping public sentiment.

Transparency of AI use — a maturing question

Professional associations in European public affairs have not yet adopted AI-specific guidelines, though one may expect that such standards will be developed and integrated into existing codes of conduct, as well as into frameworks like the EU Transparency Register, as the technology matures and practice evolves. The US National Institute for Lobbying & Ethics published an "AI in Advocacy Code of Ethics" in March 2025, establishing principles around transparency, fairness, privacy, and civic engagement — an early marker of the direction professional standards are likely to take internationally.

An emerging practical question is what meaningful AI disclosure actually looks like as the technology becomes ubiquitous. As AI-assisted drafting becomes standard practice across sectors — much as word processors and research databases once did — a blanket declaration that "AI was used in preparing this document" may become as uninformative as declaring it was "written on a computer." The more meaningful disclosure may concern the nature and depth of AI's contribution: was it used for routine formatting and language polishing, or did it generate the substantive analysis and core arguments? And wouldn't such an obligation have to apply to any function in the policy sector, not just lobbying? Just one of many questions worth considering as professional norms develop.

The level-playing-field question

AI is often credited with the potential to become a democratising force in advocacy — enabling smaller organisations to operate more effectively against better-resourced competitors. There is genuine truth to this: the operational gains described in Part 1 of this series are accessible to organisations of all sizes and at comparatively low cost. But AI can only fulfil this potential if the providers of tools pass efficiency gains forward to end users through accessible pricing and genuine commitment to serving organisations of varying sizes, rather than reserving the most advanced capabilities exclusively for premium accounts. This is ultimately a market question, but it is one that will determine whether AI narrows or widens the existing disparities in advocacy capacity.

Looking ahead

The cases examined in this article — from procedural tracking and stakeholder intelligence to predictions, regulatory classification, and hallucination risk — share a common thread. In each case, AI's real value — if employed responsibly — lies not in replacing human judgment but in enhancing the conditions under which professionals can exercise it: better-informed, more timely, and with greater awareness of the procedural and political landscape.

Where this principle holds, AI is a genuine advance for the profession. Where it is abandoned — through over-automation, careless deployment, or the substitution of algorithmic output for professional discretion — the risks to credibility and legitimacy are real.

The regulatory framework is taking shape, even if unevenly. The AI Act's prohibited practices are in force, GPAI obligations apply, and high-risk rules are approaching — with delays and uncertainties, but approaching nonetheless. Professional norms are emerging internationally. The fundamental logic, however, predates any regulation and predates AI: advocacy's value to democratic decision-making depends on the quality, transparency, and accountability of the input it provides. AI can become a great contributor towards this objective.


Stay in the loop with Thembi

We're building a smarter way to track and understand EU policy, combining AI-powered monitoring with intuitive insights delivered fast, comprehensively and accurately.

This article was originally published on Substack. Consider subscribing there for updates.
