
What are the risks to architects and practices who use AI?

With architects’ use of artificial intelligence on the rise, learn more about the risks associated with increased usage.

03 July 2025

The genie is well and truly out of the bottle.

RIBA’s AI Report 2025 (released 27 June 2025) found that more than half of UK architects’ practices (59%) are actively using generative AI (genAI) tools. This is almost a 50% increase on the previous survey result from 2024.

The emerging consensus is that the profession will be enhanced by AI, which is seen as a technology that can increase both productivity and creativity, although there are also concerns about its impact on future employment prospects and fees.

However, increased usage brings increased risk.

May Winfield, Global Director of Commercial, Legal and Digital Risks at Buro Happold and a leading specialist on the legal side of digital risk management, says the power of AI should not leave architects blind to these risks, which go beyond much-debated copyright issues.

She warns that potential legal and liability risks could be going under the radar, and may not be receiving due attention from practices implementing digitisation more broadly, not just AI.

How can you balance innovation with privacy? (Video: Dice)

Are architects losing copyright of their data or designs when using AI?

Inputting practice data into a publicly available generative AI tool is, as May puts it, “like inputting it into a publicly accessible box or forum”. Depending on how the tool works and how it stores data, it may be possible to extract an architect’s data, imagery or design through appropriate prompts.

Architects are clearly alive to the risks: 69% of respondents to RIBA's survey believe that AI increases the risk of work being imitated. It is therefore important to understand how a tool will store and utilise someone’s data. Will models be trained on an architect’s data? Is an architect losing control or copyright of their data or designs when inputting them into a generative AI tool or AI agent?

“The answers may depend on the terms of use, the way the model is trained and other factors, so it is important both to check the terms of use and have an informed discussion with your internal IT team, legal team, specialist advisors, and the AI provider to ensure one is comfortable with one’s copyright position,” May suggests. “This will enable an informed decision on whether, for example, some data should not be input into the tool to maintain copyright.”

She continues: “We have all seen headlines of court cases against AI companies for alleged breaches of copyright, and recent headlines have indicated some rulings in the AI companies’ favour. The most notable cases have occurred in the US, which apparently has a much wider principle of ‘fair use’ than jurisdictions like the UK, so these rulings may not be directly applicable internationally. Any case considered in the EU would also need to take account of the EU AI Act and how other corresponding AI regulations impact and apply to the rights of both parties.”

However, May believes it is a matter of watch-this-space when it comes to legal rights and positions on copyright use where it has not been expressly permitted or authorised by the author or owner.

Some of the big tech companies are offering indemnities to AI users for potential copyright breaches. We assume each company will have its own wording, and it will be interesting to see how their clauses will be applied and interpreted in practice.

What are the issues surrounding confidentiality and client or contract limitations?

Regarding confidentiality, May says it’s important to discuss with legal advisors and digital specialists what data or content should not be input into generative AI tools due to client requirements, contract terms or other limitations.

The wording of the AI tool’s terms of use will also need to be considered, to understand how confidentiality is being maintained and whether this is acceptable to the architect.
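To make the kind of internal control May describes concrete, a practice could enforce its "do not input" decisions mechanically, with a screening step that blocks tagged material before it ever reaches a public tool. The sketch below is purely illustrative: the markers, the `submit_prompt` function and the `call_public_genai_tool` placeholder are hypothetical stand-ins agreed with a legal team, not any real policy or API.

```python
# Illustrative sketch only: a pre-input gate that refuses to send material
# tagged as confidential (or covered by client/contract limitations) to a
# public generative AI tool. All names here are hypothetical placeholders.

# Markers a practice might agree internally with its legal team (assumption).
CONFIDENTIAL_MARKERS = (
    "CONFIDENTIAL",
    "CLIENT-RESTRICTED",
    "NOT-FOR-AI",
)

def is_cleared_for_ai(text: str) -> bool:
    """Return True only if the text carries none of the agreed markers."""
    upper = text.upper()
    return not any(marker in upper for marker in CONFIDENTIAL_MARKERS)

def call_public_genai_tool(text: str) -> str:
    """Placeholder for whatever reviewed and approved tool is in use."""
    return f"[model output for {len(text)} characters of input]"

def submit_prompt(text: str) -> str:
    """Send a prompt to the (hypothetical) public genAI tool, or refuse."""
    if not is_cleared_for_ai(text):
        raise PermissionError(
            "Blocked: content is tagged as restricted and must not be "
            "input into a public generative AI tool."
        )
    return call_public_genai_tool(text)

if __name__ == "__main__":
    print(submit_prompt("Draft a generic brief for a community library."))
    try:
        submit_prompt("CLIENT-RESTRICTED: site plan for Project X.")
    except PermissionError as exc:
        print(exc)
```

A simple keyword gate like this is obviously no substitute for the legal review May recommends; it only illustrates how an agreed policy on restricted data could be backed up in tooling rather than left to memory.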

“It's worth keeping an eye out for references to AI in tender documents – these may range from a prohibition on using AI to prepare the tender to other AI requirements contained in the tender/contract scope – to ensure that the appropriate specialists in your organisation review and provide relevant input for the tender return,” May suggests.

Organisations should discuss their intended use of AI tools. (Photo: iStock Photo)

What are AI ‘hallucinations’ and how can an architect guard against them?

AI hallucinations occur when an AI model generates outputs that are incorrect, nonsensical, or simply fabricated: one of genAI’s biggest drawbacks and a key thing to watch for. Hallucinations are a known phenomenon within genAI tools, and the fabricated outputs may appear entirely plausible.

May suggests that an organisation can partly mitigate such risks, either by always having a human in the loop to check outputs, or by limiting the use of the generative AI tool to matters where hallucinations would not have serious consequences.
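As a rough illustration of the first option, a human in the loop can be enforced in tooling: nothing generated is released for use until a named reviewer has explicitly signed it off. The sketch below assumes a hypothetical `generate_draft` function standing in for whatever genAI tool the practice actually uses; it is not a real product workflow.

```python
# Illustrative sketch only: a human-in-the-loop gate in which no AI output
# is released for use until a named reviewer has explicitly approved it.
from dataclasses import dataclass

@dataclass
class ReviewedOutput:
    text: str
    approved: bool
    reviewer: str

def generate_draft(prompt: str) -> str:
    """Placeholder for a call to whatever genAI tool the practice uses."""
    return f"[AI-generated draft for: {prompt}]"

def human_review(draft: str, reviewer: str) -> ReviewedOutput:
    """Ask the named reviewer to confirm the draft before it can be used."""
    print(f"--- Draft for review by {reviewer} ---\n{draft}\n")
    answer = input("Approve this output for use? [y/N] ").strip().lower()
    return ReviewedOutput(text=draft, approved=(answer == "y"), reviewer=reviewer)

def produce(prompt: str, reviewer: str) -> str:
    """Only return output that a human has checked and approved."""
    result = human_review(generate_draft(prompt), reviewer)
    if not result.approved:
        raise RuntimeError(f"Output rejected by {result.reviewer}; do not use.")
    return result.text
```

Recording who approved each output also leaves the audit trail that the insurance and liability discussions below tend to ask for.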

Organisations should also discuss with their insurance broker their intended use of AI tools, whether as part of the scope of their works/services or in their deliverables, to ensure they are fully insured in the event of a claim related to the use of the AI tool.

Also relevant to this discussion will be how an architect is covered if, for example, data is lost or compromised when using a genAI tool and this causes issues with the project.

It’s always better to have discussions with insurance brokers for clarity up front rather than discovering potential insurance coverage issues later, May says.

She also points out that internal guidance, policies and training will be crucial in ensuring awareness and best practice in organisations.

Are there security, ethics, and bias issues in AI models?

Every practice should ideally be comfortable, and have clarity, about how its input data is (or isn’t) being stored, and where that data is located for security purposes, May says. It’s often useful to discuss with the genAI provider, your internal IT team, and the practice's digital specialists how security measures are being enforced, and whether any training or guidance needs to be implemented to ensure compliance with good practice.

Some commentators have expressed concern about bias that may be present in the historical data that models are trained on, thereby affecting an AI tool’s responses to prompts. Again, having a human in the loop to check for any such bias or inappropriate historical data is often the best way of managing these issues.

May also recommends checking that your organisation’s use of AI complies with the General Data Protection Regulation (GDPR).

Thanks to May Winfield, Global Director of Commercial, Legal and Digital Risks, Buro Happold. This article is not legal or professional advice, and readers should obtain professional advice before acting on any of its contents.

Text by Neal Morris. This is a Professional Feature edited by the RIBA Practice team. Send us your feedback and ideas.

RIBA Core Curriculum topic: Design, construction, and technology.

As part of the flexible RIBA CPD programme, professional features count as microlearning. See further information on the updated RIBA CPD core curriculum and on fulfilling your CPD requirements as a RIBA Chartered Member.
