
Five things a close reading of GTA parking-garage SRRs reveals about sealed fieldwork

A year of sealed site review reports (SRRs) from a single Ontario structural practice, covering concurrent parking-garage repair projects, exposes five features of how sealed fieldwork is actually produced: features that surface only when the reports are read as documents rather than described in the abstract.

The corpus

The reports examined here come from a GTA structural engineering firm working on two concurrent parking-garage repair projects across a full year of active construction: concrete slab repair, waterproofing removal and replacement, and ramp heating systems. Real projects, real observations, real seals. Four to eight site photographs per visit, captioned and cross-referenced by number in the observation text. Multi-visit observation chains in which each report carries the prior visit's open items forward.

Five features of the practice emerge when the reports are read as documents rather than as abstractions. None of them appear in a generic field-review template, and none of them would surface from a requirements document alone.

1. The template is the firm's voice

Every report in the corpus follows the same four-section structure: a header block with project metadata, a people table listing who conducted the review and who was present, an area-reviewed summary followed by numbered observations, and a sign-off block with the principal's wet signature and designation.

This is not generic field-review formatting. It is the firm's house format, refined across years of practice and consistent across both projects. The header block uses the same two-column layout every time. The people table has the same three sections every time. The observation numbering starts at 1 every time, whether the report covers a routine slab inspection or a complex multi-phase waterproofing sequence.

The template is not a formatting preference. It is the firm's professional identity on paper. A client who receives 50 reports across a project expects them to look the same. A code official reviewing a sequence of reports expects structural consistency. A principal who seals a report expects it to match the format they have signed their name under for years. Generic field-review layout produces a document that is technically correct and professionally wrong.

The implication for drafting tools is narrow: the test a generated draft must pass is not "does this look like a field review report," it is "does this look like this firm's field review report." At Fermito, the generation prompt is anchored to the firm's own reports rather than to a template library, so the header layout, the people-table structure, the observation numbering, and the signature-block placement all match the firm's existing work on first output.

2. Photo-to-observation linking is the hardest part of drafting

Every observation in the corpus carries a parenthetical photo reference, "(see Photo 3)" or "(Photos 5 and 6)." Every photograph has a caption that describes the subject and ties back to the observation. The cross-referencing is the evidentiary backbone of the report: the observation states what the engineer found, and the photograph proves that the observation corresponds to a real physical condition at a specific time.

Generating narrative prose from field notes is straightforward. Generating accurate photo-to-observation cross-references is not, and the difficulty is structural. Photographs are captured on site in the order the engineer encounters conditions, which is not the order in which observations appear in the report. A photograph of exposed reinforcing steel taken before a photograph of a concrete patch in progress might correspond to observation 4 and observation 2, respectively. A correct report matches each photograph to the right observation, assigns a sequential photo number, writes a caption that accurately describes the image, and inserts the correct reference into the correct observation paragraph.

When the linkage breaks, the report becomes internally inconsistent. If Photo 3 is referenced in observation 2 but shows a condition described in observation 4, a code official reading the report sees that the text does not match the image. The engineer's credibility suffers, and the report's evidentiary value is degraded.

This is the single most labour-intensive element of the drafting process, and the one most likely to break under time pressure. A drafting tool that generates prose but leaves photo linkage to manual reconciliation has solved the easy half of the problem.
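The linking step described above can be sketched as a small data model. This is an illustrative sketch only: the names (`Photo`, `Observation`, `link_photos`) and the assumption that photo numbers follow report order rather than capture order are mine, not Fermito's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    capture_order: int   # order the photo was taken on site
    subject: str         # what the image shows, used for the caption
    observation_id: int  # the observation this photo evidences

@dataclass
class Observation:
    id: int
    text: str
    photo_numbers: list = field(default_factory=list)

def link_photos(observations, photos):
    """Renumber photos in report order and attach references to observations.

    Photos arrive in capture order; numbering here follows the order of the
    observations they support (an assumption about house style).
    """
    obs_by_id = {o.id: o for o in observations}
    in_report_order = sorted(photos, key=lambda p: (p.observation_id, p.capture_order))
    for number, photo in enumerate(in_report_order, start=1):
        obs_by_id[photo.observation_id].photo_numbers.append(number)
    for o in observations:
        if o.photo_numbers:
            label = "Photos" if len(o.photo_numbers) > 1 else "Photo"
            refs = " and ".join(str(n) for n in o.photo_numbers)
            o.text += f" (see {label} {refs})"
    return observations
```

In this sketch, the rebar photo taken first on site would still receive a later photo number than the patch photo, because the patch observation appears earlier in the report, which is exactly the reordering the paragraph above describes.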

3. Observation chains are embedded in the workflow

A significant share of the reports in the corpus belong to multi-visit observation chains, sequences in which the engineer returns to the same area across visits and each report references findings from the prior visit. The ramp-heating reports are the clearest example: SRR #19 documents heating cable spacing verified against shop drawings, and SRR #20 documents the metal mesh and mastic application over those same cables.

Carry-forward is the engineering practice of tracking an observation across visits: a non-conformance identified in visit 3, re-inspected in visit 5, resolved in visit 7. The report for visit 7 must state that the item was originally identified in visit 3, describe the corrective action, and confirm that the current condition is conforming.

Carry-forward is deeply embedded in the workflow. A firm producing 50 reports per month across 15 active projects is managing hundreds of open observations across overlapping visit sequences. The principal tracks these in their head, in a spreadsheet, or in the margins of prior reports. A drafting tool that treats each report as a standalone document, generated from a single set of field notes, is useful for one-off visits but incomplete for the actual practice. The generation context must include prior reports on the same project, not only the current visit's notes.
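The carry-forward lifecycle (identified in visit 3, re-inspected in visit 5, resolved in visit 7) can be made concrete with a minimal sketch. The statuses, field names, and matching-by-description scheme here are assumptions for illustration, not the firm's actual tracking method.

```python
from dataclasses import dataclass

@dataclass
class OpenItem:
    description: str
    identified_visit: int
    status: str = "open"  # open -> resolved

def carry_forward(prior_items, current_visit, resolved_descriptions):
    """Carry prior open items into the current report, closing any resolved.

    Returns the carry-forward lines for the report and the items that
    remain open for the next visit.
    """
    report_lines = []
    still_open = []
    for item in prior_items:
        if item.description in resolved_descriptions:
            item.status = "resolved"
            report_lines.append(
                f"Item identified in visit {item.identified_visit}, "
                f"re-inspected in visit {current_visit}: corrective action "
                f"complete, condition now conforming."
            )
        else:
            still_open.append(item)
            report_lines.append(
                f"Item identified in visit {item.identified_visit} remains open."
            )
    return report_lines, still_open
```

The point of the sketch is the return value: each report's generation context needs both the prior reports' open items and the current visit's findings, which is why a tool that sees only one visit's notes cannot write these sentences.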

4. Principal review is voice plus accuracy, not accuracy alone

In practice, the principal's review of a draft focuses as much on voice as on technical correctness. The reports in the corpus carry a consistent tone: a specific way of describing concrete conditions, a preferred sentence structure for non-conformance findings, standard phrasing for the sign-off paragraph. The principal does not just check that an observation is accurate. They check that the observation sounds like something they would have written.

This is not vanity. Firm voice consistency is a quality signal to the reader. A code official who reviews a sequence of reports from the same firm expects linguistic consistency: the same terminology, the same observation structure, the same level of specificity. If report 12 reads differently from reports 1 through 11, the reader notices. The inconsistency raises a question about whether the same professional supervised this report, even when the answer is yes.

The implication for drafting tools: draft quality is not only factual accuracy, it is voice fidelity. A draft that accurately describes the site condition but uses different terminology than the firm's established style will require more revision than a draft that matches the firm's voice with slightly less specificity. The principal's revision time is driven by how much the draft sounds like their firm, not by how much information it contains.

A drafting tool trained on the firm's own reports learns the observation language, the way the firm describes delaminated concrete, the way it phrases conformance findings, the way it structures multi-photo observations. As the principal revises generated drafts, the language sharpens. The tenth report needs less editing than the first, not because the model improves, but because the prompt accumulates more of the firm's own voice.
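One way prompt accumulation of this kind might work is by anchoring each generation to the firm's own most recent reports as style examples. The function below is a speculative sketch of that idea under my own assumptions; it is not Fermito's implementation.

```python
def build_prompt(firm_reports, field_notes, max_examples=3):
    """Assemble a generation prompt anchored to the firm's recent reports.

    firm_reports: the firm's past reports in chronological order; the most
    recent few are used as style anchors, so revised drafts feed forward.
    """
    examples = "\n\n---\n\n".join(firm_reports[-max_examples:])
    return (
        "Draft a site review report in exactly the structure, terminology, "
        "and tone of the example reports below.\n\n"
        f"EXAMPLE REPORTS:\n{examples}\n\n"
        f"FIELD NOTES FOR THIS VISIT:\n{field_notes}\n"
    )
```

Because the example window slides forward over the firm's own revised reports, the tenth prompt carries more of the firm's voice than the first, which is the accumulation effect the paragraph above describes.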

5. A sealed document is not a draft

A sealed document is a professional opinion. It carries the personal liability of the engineer who signed it. It can be produced as evidence in a legal proceeding, reviewed by a regulatory body, or cited in a structural assessment that informs a building's future.

A draft is a starting point. It is a structured first pass that saves the engineer the labour of transcribing field notes into professional prose. The draft should be accurate, well-formatted, and consistent with the firm's template. But it is not a sealed document until the principal reads every sentence, exercises professional judgment over every finding, and applies their seal with the understanding that their name is now permanently attached to the contents.

The gap between draft and sealed document is the review, and the review is the entire point of the exercise. The goal of a drafting tool is not to eliminate that review. The goal is to give the principal more time for it. When drafting takes 90 minutes and review takes 15 minutes, professional judgment gets 15 minutes of attention. When drafting takes 10 minutes and review takes 30 minutes, professional judgment gets 30 minutes of attention, and the sealed document is better for it.

Every hour a licensed engineer spends typing is an hour they are not reviewing. The corpus makes that arithmetic concrete.
