
I was looking for a fire standard: BS 5824, which I don't have, is referenced on a manufacturer's website, where they claim it relates to fire curtains. For context, yesterday as an experiment I asked Deepseek to summarise BS 5839 part 6, a standard I know quite well, and it did a pretty good job. So let's ask Deepseek to summarise BS 5824...

"*BS 5824:2013* is a British Standard titled "Wall and floor tiling – Design and installation of ceramic, natural stone, and mosaic tiling in normal conditions – Code of practice". It provides guidelines for the *design, materials, installation, and testing* of tiling systems in interior and exterior applications..."

What? OK let's check the standard on the website of the actual body that publishes them: BSIgroup.com

"BS 5824:1980 Specification for low voltage switchgear and controlgear for industrial use. Mounting rails. C-profile and accessories for the mounting of equipment... Cost £149"

Oh shit the manufacturer's website is completely wrong and so is AI. They literally have no clue what they are talking about. 1. Let's not specify their fire curtains in my building. 2. Don't trust the AI.

My conclusion: if the info you need to do your job is behind a paywall or only in expensive textbooks, then AI hasn't seen it and will make something up that's probably wrong. And probably don't get it to write your website, or you will look like an idiot...


If you hire a world famous architect and then start micromanaging their design they will walk away from the project and publicly tell everyone it’s going to be shit. They don’t want to be associated with something that is not their design; it’s bad for their brand.


When I’ve looked into this before for a client in Scotland, it seemed the smallest size that made sense in terms of the factors other commenters have mentioned was about £50,000 to install. Also, it requires planning permission, which is much harder to get for a wind turbine than for solar PV, which you can often install without explicit permission under ‘permitted development’ rules.


Everyone is missing the point. It's not the form of the conveyor that matters, it's the loading and unloading system. You need a set of standardised parcel-sized containers first. Then you can design robotic handling equipment that can load and unload them easily. You can take the containers and feed them into your postal system and gradually automate more parts of the system reliably once you have standard reusable containers; bonus points if the containers are collapsible like some crates you get for fresh produce.

You can imagine them clicking magnetically into totems on the street where they could be collected robotically more easily. Or in high-traffic locations you might feed them into a hole in the wall that takes them into an underground system of some kind that transports them to a rail or truck depot. You can also imagine that parcel services might be reintroduced on local trains, because you don't need an extra person to load and unload the containers at stations if this can be done robotically.

There are endless possibilities for varying degrees of automation, but the key thing is that there has to be a standard interface for picking up a container of a known size. The container is the interface; the physical specifications of the standard containers are analogous to an API.
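
To make the "container spec as an API" idea concrete, here's a minimal sketch; the sizes, names and weight limits below are invented purely for illustration, not from any real standard:

    # Hypothetical standard-container spec: handling equipment only has to
    # support the published sizes and weight limits, never arbitrary parcels.
    from dataclasses import dataclass
    from enum import Enum

    class ContainerSize(Enum):
        SMALL = (400, 300, 200)   # mm: length, width, height (made up)
        LARGE = (600, 400, 300)

    @dataclass(frozen=True)
    class StandardContainer:
        container_id: str
        size: ContainerSize
        collapsible: bool
        payload_kg: float

    def accepted_by_handler(c: StandardContainer, max_kg: float = 25.0) -> bool:
        """A robot arm, street totem or depot chute can be designed against
        this fixed interface rather than against every possible parcel shape."""
        return c.payload_kg <= max_kg

Everything downstream (the totem, the hole in the wall, the train loader) only ever sees a StandardContainer, which is the whole point.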

The technology already exists for this in airports[0][1]; when you check in a bag at a big modern airport, after the gate agent sticks the sticker on the handle it won't be touched again by a human until it gets chucked into the aircraft hold. Your bag goes through the curtain and is dropped into a standard bucket which is conveyed around under the concourse on a rollercoaster-like automated rail system to screening, then either to an automated vehicle for transfer between terminal buildings or to an automated storage system for people who have checked in too early or have a long transfer, and finally to the stand/ramp for loading into the aircraft. Big international airports like Amsterdam Schiphol, Paris Charles de Gaulle, Madrid Barajas and Heathrow all have systems like this.

[0] https://www.youtube.com/watch?v=LVesQ07GrRY&list=PLWwq_41dNV... edit: [1] https://www.youtube.com/watch?v=cac411oBqSE&t=57s


I think the thing that most perturbs me about AI is that it takes jobs that involve manipulating colours, light, shade and space directly and turns them into essay writing exercises. As a dyslexic I fucking hate writing essays. 40% of architects are dyslexic. I wouldn't be surprised if that was similar or higher in other creative industries such as filmmaking and illustration. Coincidentally, 40% of the prison population is also dyslexic; I wonder if that's where all the spare creatives who are terrible at describing things with words will end up in 20 years' time.


You're entitled to your opinion but this will open up a world of possibilities to people who couldn't work in these fields previously due to their own non-dyslexia disability. Handless intelligent people shouldn't lose out because incumbents don't want to share their lane.


So, the fall of the skilled professional and the rise of anybody who knows how to write prompts?


The AI we have today has very little to do with writing prompts; you still need to understand, correct, glue and edit the results, and that is most of the work, so you still need skilled professionals.


Pretty much everything I see about using AI is based around the construction of proper prompts to achieve the type of output you require. Could you explain how prompts are not a big part of interfacing with AI?


Yes, but you are trading off a lot of people with one kind of disadvantage, dyslexia, for the benefit of very, very few people with a motor-skills disability that affects their ability to draw or manipulate an input device, which is a different disadvantage. What's the acceptable ratio? One handless person enabled for every 100,000 dyslexics sidelined? Is that fair? How do you work out an acceptable tradeoff?

It is not a given that everyone can or should be enabled to do everything possible at any cost; people in wheelchairs can't be firefighters and we don't make all old subway lines fully accessible because it is incredibly expensive.

Disadvantaging a huge number of people for the benefit of very few has a societal cost.


I guess in the near future prompts can be replaced by a live editing conversation with the AI, like talking to a phantom draughtsman or a camera operator / movie team. The AI will adjust while you talk to it and can also ask questions.

ChatGPT already allows this workflow to some extent. You should try it out. I just talked to ChatGPT on my phone to test it. I think I will not go back to text for these purposes. It's much more creative to just say what you don't like about a picture.

If your speech is also affected, rough sketches and other interfaces are (or will be) available (see https://openart.ai/apps/sketch-to-image). What kind of expression do you prefer?


I would need to be able to talk and draw at the same time, which is how I interact with co-workers and clients.


This would be feasible. Even right now, but I am not sure how much delay is tolerable.

If you use tablets or screens, I would imagine a two screen/tablet setup, where on one screen there is a variant gallery with AI output and on the other screen there is the drawing area. The drawing constantly refreshes the gallery.

One can click on images in the gallery to move the whole image or parts of it into the drawing area. Additionally voice input leads to a conversation in the background that affects the variants as well. The process would be a mix of sketching, overpainting and voice-controlled image manipulation.

Automatic image segmentation that is automatically applied to all variants would make it easy to move objects/parts from the variants. The pulled parts would be stitched automatically into the drawing area, as a kind of supercharged collage technique.

Maybe the variant gallery would be more like an idea board. You would say things like: "Can you make a variant with clinkers", "Please add garden furniture near the pond." etc. In the gallery these images would pop up and you can pick what you like from it.


I would imagine and hope for interfaces to exist where the natural language prompt is the initial seed and then you'd still be able to manipulate visual elements through other ways.


This is the case today. You won't get a "perfect" image without heavy post-processing, even if that post-processing is AI-enhanced. ComfyUI is the new Photoshop, and although it's not an easy app to understand, once it "clicks" it's the most amazing piece of software to come out of the open-source oven in a long time.


Your claim that 40% of architects are dyslexic piqued my curiosity. I wonder if this would have an impact on the success of tools like ChatGPT in the architecture industry.

Do you have a source for this stat? I can't seem to find anything to support it.


Not sure I could find a reference for that any more. I think I got it from an article or lecture by Richard Rogers years ago. He was a famously dyslexic architect and, if I remember correctly, was the patron of the British Dyslexia Association.


Terence McKenna predicted this:

“The engineers of the future will be poets.”


It seems exceedingly clear to me that the primary interface for LLMs will be voice.


>As a dyslexic I fucking hate writing essays

You can feed AI an image and ask it to describe it. Kind of the inverse process.


You can speak instead if you wish. Speech-to-text is available for all operating systems.


Speaking has sound but that is still just words with the same logic structure. "Colours, light, shade and space" have entirely different logic.


Very interesting. Thank you for the perspective, it is extremely illuminating.

What is a user interface which can move from color, light, shade, and space to images or text? Could there be an architecture that takes blueprints and produces text or images?


Offices have a lot more people per m², all of them using computer equipment continuously for 7hrs/day, so in my experience they tend to have considerably higher cooling loads. I know from experience that a typical high-rise office building in London, UK will have no heating requirement for most of the year; it is in cooling mode most of the time.


You have to account for peak usage, not median. At 7AM and 6PM, everybody has their stove or ovens going to make dinner, plus the washing machines and dryers.

Building codes and practices are different for commercial and residential for good reasons.


Speaking as an architect who uses BIM software other than this one 'at the coal face', I wanted to add a bit more of a real-world example to your explanation. BIM is a system which is meant to make the notes (semantic information) that are added to the industry-standard diagrams (plans, sections and elevations)[0] stay pointed at the right thing, with the right information in them. It also auto-generates many of the lists of components that we use.

It makes some things easier; a quick video call to the engineer with a screen share of a 3d model to ask about something makes it much easier to talk about and resolve issues. It makes other things harder; generating the industry standard diagrams that we all use to analyse information is slower than just drawing them in 2d. You get 80% of the way there a lot faster but then you have to deal with all the situations that the software designers didn’t anticipate when they designed the wall, slab, roof, door and window tools and often the only way to do this is to drop objects back to ‘dumb’ geometry and rework them. You then have to go back to manually labelling them in the 2d 'diagrams' or trying to figure out how to tag them semantically with a specially generated tag so that they show up correctly in the auto generated schedules and notes. I personally find the BIM way of working more stressful as you never know when you are going to get caught out by a software glitch that halts your production, it is a lot more unpredictable than brainlessly slogging through drawing a bunch of 2d drawings. I think these are the challenges you are referring to.

So let's say I'm writing a 'Door schedule', a list of all the doors on the project. When I started my career you would go through a project with the paper plans and type up a list in Excel with all the specifications manually; now, when you place a door object it is tagged with various information which you can query to auto-generate this list of doors. However, the doors will have been placed in the BIM model quite early on in the process, when we were just thinking about where the doors needed to be and which way they opened. We were not thinking about which manufacturer they were from, what the finishes and hardware were going to be, fire ratings etc. at that stage.

So to get this list to auto-generate correctly you have to go back to each door and locate the correct fields from among hundreds of others in a clunky data entry interface and enter this information, so that it queries correctly and shows up in your list of doors. It is a database data-entry consistency problem. The list of all the doors shows up instantly; 2 mins work to get a list with all this detailed information set up. 2 hours later, I've managed to figure out the tagging system to get it to list the last weird edge case on door D25. I could have typed the whole thing faster in Excel, but now that the information is there, it is tagged to that door and, as long as no-one duplicates it and moves it to another location, the door schedule will still be correct...

So every time you re-issue this schedule, you still need to go through it door by door and check against the plan to see what its specification needs to be and check if it is still correct. You can't trust the automatic door schedule to be correct in case somebody with ADHD (a lot of architects including me) forgot to check and edit all the semantic information after they made the visual change they wanted.
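
To make the data-entry consistency problem concrete, here is a toy sketch of what auto-generating the schedule and flagging unfilled semantic data amounts to; the field names and door data are invented, this is not any real BIM API:

    # Toy door-schedule query. Field names and the `doors` list are invented
    # for illustration; a real BIM database has hundreds of fields per door.
    REQUIRED_FIELDS = ["fire_rating", "finish", "hardware_set", "manufacturer"]

    doors = [
        {"mark": "D01", "fire_rating": "FD30", "finish": "Oak veneer",
         "hardware_set": "HW-2", "manufacturer": "Acme Doors"},
        {"mark": "D25", "fire_rating": None, "finish": None,
         "hardware_set": None, "manufacturer": None},  # placed early, never tagged
    ]

    def door_schedule(doors):
        """Return one schedule row per door, flagging any door whose
        semantic fields were never filled in after it was placed."""
        rows = []
        for d in doors:
            missing = [f for f in REQUIRED_FIELDS if not d.get(f)]
            status = "CHECK: missing " + ", ".join(missing) if missing else "OK"
            rows.append({**d, "status": status})
        return rows

    for row in door_schedule(doors):
        print(row["mark"], row["status"])

The query itself is trivial; the 2 hours go on making sure every door actually carries the data the query expects.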

Separating the process of adding written information to the drawing from the process of drawing the thing has always been a problem with CAD but with BIM it is even worse because there is a greater disconnect. In my experience BIM reduces problems with geometry not being correctly thought out and things not fitting together but it increases problems with mislabelled information because there is a greater mental distance between the thing you edit and where the information eventually ends up being presented.

I'm a software-minded person; I have a >20k LOC Python BIM customisation project I've written myself and I've coded some embedded C in the past, but I struggle to get the semantic tagging to work efficiently; it is much slower than just going to the 2d output drawing and adding a dumb note. I've coded my own BIM door and window objects for my CAD package to try and streamline this, and when I can use them the way I want to it works great, but I do find myself going back and coding more features on pretty much every project I work on, to allow for a situation I hadn't anticipated when I first wrote the code.

It also raises an ethical issue if you are billing hourly because how many hours of troubleshooting your own BIM software can you reasonably bill for?

BIM has the same issues as other areas where bureaucracy has been computerised into a rigid process; it is very poor at edge cases and buildings are full of these. CAD software really needs a huge investment in deep interaction design psychology and research to resolve these issues.

[0] These are never going away because they are a very efficient abstraction to use for analysis and they need a clear presentation to be readable; you wouldn't ask an electrical engineer to give up circuit diagrams in favour of a 3d model.


> You can't trust the automatic door schedule to be correct in case somebody with ADHD (a lot of architects including me) forgot to check and edit all the semantic information after they made the visual change they wanted.

Would it be useful if elements contained some kind of "confirm date" field as well as a "create date" field (create date would be the time it got pasted) and likely "last modified" on objects? Or would it be unreliable due to them e.g. not really taking surrounding changes into account?


Yes and no. A full revision control system would be handy but would probably have the same issues with commit comments as in software. It could help filter stuff that has been touched, but the software needs to be very intelligent about marking something as actually changed. E.g. if everything is parametrically linked, should it count as touched by a change, and should its last-modified date bump? The visual diff could highlight a whole lot of non-relevant stuff as changed.

To your other question, yes it is possible to require a change in the specification of a door because the layout has changed elsewhere, e.g. it might need to become a 1hr fire door with a door closer because the layout of a corridor changed elsewhere and it now forms part of the protected escape route.

Solving this problem is complicated. You can't link the fire resistance of the door to the wall it is in, though, as it is common practice to specify 30min fire doors even where they are not needed; it's a good way to get a better-quality generic heavy door without specifying a particular make and model, so this parameter will commonly be overridden for reasons that are not 'semantically logical' but are eminently practical in the real world. Also, the two are not necessarily related anyway, because a wall might have different requirements driving its specification.

Modelling the voids rather than the things that contain them is one way to look at the problem. It might be possible to place a sort of virtual smoke bomb object in the corridor that will try to fill the space with virtual smoke, and anything that 'smoke' touches needs to have, say, 30mins fire resistance. However, this is complicated because in my jurisdiction you can have different fire resistances required for the walls, doors and ceilings forming compartments... So your smoke bomb object would need to know whether it was interacting with a door, a wall, a compartment ceiling or a false ceiling, and determine whether to alert that something is wrong or pass through it as if it's not there... That is assuming that the space has been modelled tightly enough in the first place to ensure that the 'smoke' doesn't leak out.
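
As a very rough sketch of how that 'smoke bomb' check could work (a toy 2D grid with made-up element types and ratings; real BIM geometry and compartment rules are far messier), it is essentially a flood fill that flags anything the smoke reaches with too low a rating:

    # Toy flood-fill for a "virtual smoke bomb". Grid, element types and
    # ratings are invented; W = 60min wall, D = unrated door, . = open space.
    from collections import deque

    GRID = [
        "WWWWWW",
        "W..D.W",
        "W....W",
        "WWWWWW",
    ]
    RATINGS = {"W": 60, "D": 0}   # minutes of fire resistance per element type
    REQUIRED = 30

    def smoke_bomb(grid, start):
        """Flood-fill open cells from `start`; report any boundary element
        the smoke touches whose rating falls short of REQUIRED."""
        rows, cols = len(grid), len(grid[0])
        seen, queue, warnings = {start}, deque([start]), []
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in seen:
                    continue
                seen.add((nr, nc))
                cell = grid[nr][nc]
                if cell == ".":                    # open space: smoke spreads
                    queue.append((nr, nc))
                elif RATINGS[cell] < REQUIRED:     # boundary element: check rating
                    warnings.append(((nr, nc), cell, RATINGS[cell]))
        return warnings

    print(smoke_bomb(GRID, (2, 2)))   # flags the unrated door at (1, 3)

The fill is the easy part; deciding what each thing the smoke touches actually is, and modelling the space tightly enough, is where it gets hard.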

Another example of this problem of semantics is reinforced concrete structures: because you can make any shape you want out of concrete, it might be a wall to one thing, part of a foundation structure to another and part of a column structure at the same time. Semantically it is one piece of concrete, but it is also all three of those other things for the purposes of modelling the structure. This is a tricky problem for software to solve because you usually model those elements individually, but when you generate drawings they should all show up as one homogeneous lump. What tends to happen is the software sometimes fails to merge things together and you end up troubleshooting a bunch of spurious lines all over your Plans, Sections and Elevations.


This seems like a textbook case where AI could let you have your cake and eat it too. E.g. work in the easy 2D domain, with maybe unstructured sticky notes, have a deterministic pipeline from 2D to geometric checking, with I guess patch points for the manual stuff, and then have AI draw up and maintain the BIM. It seems tailor-made to copilot those ADHD tasks you describe.


I think many people are exhausted (at least I am) with the idea of bolting AI onto every technology, product, and service in existence to end all of humanity's problems. It won't work like that.


Like Clippy for fire doors: "I've noticed you have moved this into a corridor where all the other doors have a 1hr rating; is the 30min rating on this door correct?" Would be amazing, but it sounds a bit too much to expect from AI at the moment; we would start to rely on it and it would have to be 99.999% correct.


A lot of this analysis is purely about style, not substance. It's a discussion of what font the website uses, not whether it's a useful and easy-to-navigate website. We should be evaluating buildings based on whether they are in the right place in the first place, whether they create a walkable setting, whether you can find the front door easily, whether you can navigate around the building easily, whether they are energy efficient, whether they provide a healthy internal environment... etc. What decorations you stick on the outside are kind of the last thing to worry about. For example, the main problem with the brutalist building with the caption 'Some people like this building, some do not. But all must experience it' is scale, not style: if you chopped off the top five floors and left only three storeys, it would be much more acceptable in its setting. A lot of the problems with brutalist architecture are less to do with the style of materials and more a failure to address basic design issues like signalling 'where's the front door?'.


I get where you're coming from, but the article is very specifically about the external design of buildings, and the impact that external design has on the area around it.

You're not wrong that the internal design is important (and perhaps, in some cases, given less attention than it should fairly receive), but one of the points the article makes is that, for any given building, only a very small proportion of the people who experience that building will actually be interacting with it directly. Most people will experience it as a background to something else going on in their life:

> Buildings’ exteriors serve as backgrounds to a huge range of activities. In my view, this generates constraints on what we want them to look like. The streets of a city are places of work and play, of sickness and health, of triumph and grief. To all this, buildings owned by strangers form the involuntary backdrop, and for this reason, we often want them to be as we want strangers to be: polite, courteous, friendly and unintrusive.

With this in mind, I don't think it's unreasonable to also evaluate buildings on the basis of how their facade contributes (or doesn't contribute) to the general vibe of the area around it.


Put another way: I live in Tokyo and 95% of buildings here are prefab and ugly. But it’s a delightful city to be in nonetheless, because everything about them is functional in a way that creates a great atmosphere. I’ve come to value the physical prettiness of buildings less and their public function far more.


That's a good attitude I'll have to adopt. In America, the 4-over-1 design (a ground floor of commercial with 4 floors of residential above it) has become a very generic "downtown" look across the whole country. It's boring.


>And let's not forget, a lot of traditionalist design took into account a world without HVAC and other hacks. If you want energy efficiency, looking to solutions that tackled the problem in a more fundamental way is a good thing, rather than papering over the problem with technology.

A lot of these houses were heated with a coal fire in every room; they are completely uninsulated, and sitting anywhere near a window in the dead of winter would be very cold! There is something to be said for applying modern passive solar design techniques, insulation and airtightness, coupled with heat recovery ventilation. Of course, this does mean you can only have big Georgian-style windows on south(ish)-facing facades.


Go to Architecture school for 5 years. Tutors used to keep scores over how many students they could make cry. Anything else will seem tame in comparison. At the end of it you come to think of your work as something you can discard at a moment’s notice, you’ll have no sentimental attachment to it at all, if anything you’ll slightly hate it already anyway.

