Insight: AI in Architecture – Balancing Opportunity with Responsibility
Across the industry, Artificial Intelligence seems to be the phrase on everyone’s lips. Is this the beginning of a new way of working? Is it safe? Should it be embraced?
Today, we’re talking to AEW director Mark Turner to answer these questions.
There’s no denying that AI has arrived. From legal services and financial planning to marketing and design, the tools are getting smarter, the use cases more sophisticated and the noise louder. In architecture, the conversation is only just beginning. There’s huge potential for innovation, but also real risks if the technology is relied upon to replace professional judgement.
At AEW, we’re interested in the practical realities. Let’s examine the opportunities and risks this technology presents for our sector.
Early tools and low-hanging fruit
There’s a lot of talk about AI revolutionising architecture. But in reality, most tools being used or marketed today are more evolutionary than disruptive. The label “AI” is often applied to what are, in truth, smart software features. Think autocomplete, image refinement, and layout suggestions. These aren’t machines dreaming up buildings from scratch.
That’s not to dismiss them. Even relatively simple AI-powered tools have the potential to deliver real value by streamlining admin, summarising complex information, supporting bid writing or speeding up visual workflows. Like most of our contemporaries, we’re keeping a close eye on developments like large language models and generative image tools, because we can see how they could become useful additions to our toolkit in the right context.
But let’s be clear: the future-facing tools that promise fully generative floor plans or automated design solutions still need a healthy dose of scrutiny. None of them can replace design thinking, contextual understanding or technical rigour, and they’re certainly not ready to take over architectural workflows wholesale.
The power of the individual
One of the more interesting challenges with AI in professional practice is that adoption isn’t something that can be handed down from the top. Corporate policies and frameworks can set the boundaries, but the real experimentation and practical discovery of where AI can save time or improve output often happens at an individual level.
The biggest efficiency gains will come from the people who are curious, driven, and thoughtful about how they work. That might mean testing a transcription tool to speed up note-taking or using a language model to check the structure of a report. AI doesn’t have to change the core of what we do to be useful. Marginal gains will make a big difference and free up time for deeper work.
Here at AEW, we’re already creating a framework that encourages safe, responsible experimentation. We’re not mandating the use of AI tools, but we are encouraging our team to explore them, challenge them, and think critically about where they can add value safely. Those who do will likely find new ways to be effective and creative in their roles.
Managing risk
Of course, caution matters. As with any new wave of technology, there are risks. Some of those risks are immediate and some are harder to spot. For us, one of the clearest is around data security. Many of the most popular tools are cloud-based and hosted internationally. If you’re inputting anything commercially sensitive, you need to be absolutely sure where that data is going, how it’s being stored, and whether it could be used to train a broader model.
There’s also the issue of trust. Language models, for instance, don’t give you answers; they give you predictions. Trained on large volumes of text, they suggest what the next word or sentence should be based on statistical likelihood. The results can sound convincing, but that doesn’t mean they’re right.
For experienced professionals, this isn’t necessarily a problem. We know how to cross-check, challenge and verify. But for junior team members, or those under pressure to work quickly, it’s easy to over-rely on tools that appear more capable than they really are. Whether it’s summarising building regulations or generating planning copy, there’s still no substitute for applying professional judgement and critical thought.
That’s why we’re cautious. Not negative, but alert to the implications. Encouraging our team to explore AI tools goes hand in hand with reminding them: you still need to know more than the system you’re using. It’s there to support your thinking, not replace it.
Freedom from the mundane
Used carefully, AI can help shift how we spend our time. If a tool can write a basic summary or generate a rough visual idea, that’s time saved. Time saved is time that can be spent on the real work: thinking, testing, refining, collaborating. In that sense, AI could help make architects and designers more valuable, not less.
So I don’t think we should worry about AI replacing design intelligence; that’s the wrong outlook. There’s much more reason to be enthusiastic about it giving us more space to operate. The less we’re bogged down in routine admin or repetitive early-stage tasks, the more time we have for design exploration, project strategy, and the kind of problem-solving that actually adds value for our clients and communities.
That’s the future we want to prepare for: one where technology enhances creativity rather than undermining it. And it’s why we’re committed to exploring AI through a lens of integrity, curiosity and caution.