3D-Reconstruction of an Egyptian coffin lid from the Third Intermediate Period (21st-25th dynasties)

For the Module Assessment ‘3D Documentation and Modelling’, part of the degree programme ‘Archaeoinformatics’ at the University of Cologne, I was given the following assignment:

The Institute of Egyptology of the University of Cologne wants to present some of the finds in its collection to a wider audience in an online exhibition. Besides photos and texts, the Institute also wants to present some 3D models and reconstructions, of which you are to prepare one. As the object is very fragile and cannot be accessed easily, the curator of the collection has already taken photos of the object in question (provided as 120 DNG files, 5,15 GB). Your task is to create a 3D model from the provided photos and to reconstruct the missing parts in a second step. The object in question is an Egyptian coffin lid. As the curator is very busy, it is your task to find suitable templates for the reconstruction and the necessary literature. You have just a small selection of literature to start you off.

The first task was to create a digital 3D object from the provided photos via Structure from Motion (SfM).

Image provided by Sebastian Hagenauer, University of Cologne

For this task I used the software Agisoft Metashape Professional 1.6.0 (the successor to PhotoScan from the same company). After going through the photos by hand to sort out blurred and therefore misleading ones (ca. 7 photos), I opened the remaining ones via Adobe Bridge in Adobe Camera Raw, colour-corrected them against the colour chart placed beside the artefact, and corrected the perspective distortion using the lens metadata stored in the DNG files. With these corrected photos, saved as high-quality JPG files, I started the SfM process in Metashape.
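As a side note, this kind of batch RAW development can also be scripted. The sketch below shows how the DNG files could be developed to JPGs with the rawpy and Pillow libraries; I did the colour and lens corrections manually in Adobe Camera Raw, so this is only an illustrative alternative, and the folder names are assumptions.

```python
# Illustrative batch development of the DNG files with rawpy/Pillow.
# Note: the actual colour correction for this project was done manually in
# Adobe Camera Raw against the colour chart; folder and file names are placeholders.
from pathlib import Path

import rawpy
from PIL import Image

SRC = Path("dng")   # assumed folder with the 120 DNG files
DST = Path("jpg")   # output folder for the developed JPGs
DST.mkdir(exist_ok=True)

for dng in sorted(SRC.glob("*.dng")):
    with rawpy.imread(str(dng)) as raw:
        # Demosaic with the camera white balance as a neutral starting point;
        # a proper chart-based colour correction would still follow afterwards.
        rgb = raw.postprocess(use_camera_wb=True, output_bps=8)
    Image.fromarray(rgb).save(DST / f"{dng.stem}.jpg", quality=95)
```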

The resulting digital 3D object shown below was achieved through the following steps and settings (a rough Python sketch of the same pipeline follows after the list):

1. Align Photos: High quality instead of Highest quality, because the base photos were not sharp enough to justify the extra time and hard drive space caused by the fourfold upscaling at the Highest setting. Generic preselection was also enabled.

2. Cleaning: removing unnecessary points from the background and from below the artefact.

3. Build Dense Cloud: High quality, because my PC is powerful enough to run this calculation in a reasonable time (it still took roughly 7 hours). The mild depth filtering mode was used. Note: a run at Ultra High quality (which processes the photos at their original size) crashed after 7 hours. This step was followed by a second cleaning of unnecessary points.

4. Build Mesh: Arbitrary surface type with the dense point cloud as source for a high-quality mesh. Two versions were generated: one with interpolation Enabled, which fills most of the gaps but leaves a big hole on the right side (see below), and one with interpolation set to Extrapolated, which closes all gaps. The extrapolated approach gave a very good result, so that 3D object is used from here on (see below). Both versions were built with a high and a medium face count: high for the final renderings, medium to work on during the reconstruction.

5. Build Texture: Standard settings (Diffuse map, Generic mapping mode, Mosaic blending mode) with a texture size of 8192×8192 pixels. Both advanced settings were enabled (hole filling and ghosting filter). Ambient occlusion maps were additionally created for later renderings.

6. Find Targets (for the scale): the artefact was photographed together with calibrated photogrammetric non-coded cross-scales from Cultural Heritage Imaging, whose centres are 18cm apart, accompanied by another 5cm rectangle scale with 1cm resolution. Following their guide, Metashape can detect the markers of these non-coded cross-scales automatically on the photos and thus on the 3D SfM model. After this you can add scale bars in Metashape and manually enter the known distances between the markers. I used the 1cm, 3cm and 18cm distances on all four iterations of the SfM model, leaving one scale bar empty to test the accuracy (see the guide above). The scale error is 0,000018 – 0,000008m (18 – 9μm), so they are pretty accurate.

7. Export: As the final step in Metashape I exported the models and textures to long-term storage data formats (COLLADA .dae and Baseline TIFF), following the advice of the IANUS project of the German Archaeological Institute (DAI) on preferred formats for long-term data storage.

8. Import into Cinema 4D: To align the scans on the X, Y and Z axes and to make post-processing clean-up easier, I imported them into Cinema 4D. Now that the SfM models are prepared for the reconstruction, I also exported both versions to .glb, following the recommendations of kompakkt.de, to make presenting them online easier (kompakkt.de is a hosting platform created by the Department of Digital Humanities at the University of Cologne that also integrates metadata and on-object annotations).
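For anyone who prefers to script this workflow rather than click through the GUI, below is a rough sketch of the same pipeline using the Metashape Python API. I ran everything through the GUI, so the photo paths and marker indices are placeholders, and some parameter names, constants and downscale values differ between Metashape versions; check the API reference for 1.6 before relying on it.

```python
# Hedged sketch of the Metashape 1.6 Python API pipeline described above.
# The actual model was produced through the GUI; paths, marker indices and
# some parameter names/values are assumptions and may vary per version.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["photos/IMG_0001.jpg", "photos/IMG_0002.jpg"])  # placeholder list

# 1. Align photos ("High" accuracy, generic preselection)
chunk.matchPhotos(downscale=1, generic_preselection=True)  # downscale=1 ~ "High"
chunk.alignCameras()

# 3. Dense cloud ("High" quality, mild depth filtering)
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)  # 2 ~ "High"
chunk.buildDenseCloud()

# 4. Mesh from the dense cloud, extrapolated interpolation, high face count
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DenseCloudData,
                 interpolation=Metashape.Extrapolated,   # constant name may differ
                 face_count=Metashape.HighFaceCount)

# 5. Texture: generic mapping, mosaic blending, 8192 px, hole filling + ghosting filter
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192,
                   fill_holes=True, ghosting_filter=True)

# 6. Detect the non-coded cross targets and define a scale bar (18 cm between centres)
chunk.detectMarkers(target_type=Metashape.CrossTarget)
scalebar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])  # indices are placeholders
scalebar.reference.distance = 0.18   # metres
chunk.updateTransform()

# 7. Export to long-term formats (COLLADA .dae with TIFF textures)
chunk.exportModel(path="coffin_lid.dae",
                  format=Metashape.ModelFormatCOLLADA,
                  texture_format=Metashape.ImageFormatTIFF)

doc.save("coffin_lid.psx")
```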

3D reconstruction V. Rounding out the model

Modelling techniques for further detailing of the lid

The overall shape of the coffin and lid is now complete to the extent that can be achieved on the basis of secondary sources of similar chronology and region.

Important: no fragments of the original case for this lid survive; the case is a full reconstruction based only on similar coffins.

The lid will now be edited to fit the existing fragment more closely. On the 3D scan we can see at least three different boards forming the lid. The middle one could possibly be divided once more behind the left hand, but to me this looks like a natural break compared with the straight cut in front of the left hand. Most of the other coffin lids presented by Taylor 2009 also consist of three planks showing similar breaks.

  • Dividing the lid into more pieces by drawing a spline and cutting along it. NB: the shape of the cuts is only reasonably certain directly at the SfM model; down towards the feet it comes from my own interpretation, based to some extent on the images of other coffin lids provided by Taylor 2009.
  • Fitting the resulting three pieces to match the shapes of our lid (especially the left one with its convex shape).

Modelling techniques for small overall details

This step is based solely on my own interpretation and creativity and serves to round out the overall image a little, mostly by breaking up some of the hard edges and making the flat edges slightly wobbly to give them a more natural feel, like the lid above. The same applies to the frame piece.

For the final renderings the 3D scan has to be prepared so that it does not stick out of the reconstruction model and no holes are visible. I am therefore also deleting the two tenons, because they are not distinct enough in the scan to be integrated properly while fitting the overall look. The big hole on the back of the headpiece is closed using one of the backup lids. For the frame, its points are dragged to fit snugly against the scanned frame, and some minor damage in the scanned frame is restored to guarantee its fit onto the case.

Texturing

The coffin case will be made from plain wood, as previously stated following Taylor. On the lid we can still see the wooden structure of the planks, but also that they are covered with a thin layer of cream-white/khaki coloured paint. The texture will contain these two details, but no more than that (such as damage or dowels), in order to maintain the contrast between reconstruction and scan. The textures use PBR materials (physically based rendering) made with Adobe Substance Painter 2019.3.3.

Since we have no further information about the wood used for the coffin lid, I will be using a wood with a long, even and fine grain, as seen on the uncovered parts. These properties are described by Cooney 2015, 274 for cedar, but a definitive statement would require further analysis by a dendrologist.

  1. Creating and editing the UV coordinate maps in Cinema 4D, which tell the application where to place a 2D image on the 3D object's surface.
  2. Exporting the case and lid and loading them into Substance Painter.
  3. Using the Wood Walnut material because of its fine and even base structure, then fitting and editing this material to our needs.
  4. Editing the colour to match the wood colour of our scanned lid (the reference points for extracting the colour values lie behind the right hand and on the frame; Substance Painter allows its colour picker to be used outside of the Substance Painter window, which makes this part quite easy).
  5. Exporting all maps as separate files to use them as textures in Cinema 4D (e.g. the base colour map for the colour and flat details, the normal map for the depth information, here the wood grain).
  6. Adding another layer on top of the wood material to simulate a thin layer of paint for the lid. The Artificial Leather material provides a nicely rough structure as a starting point. Important: the wood grain has to remain visible through the ‘paint’ layer later on; this is achieved by lowering the opacity and reusing the wood grain normal map.
  7. Exporting this as a second texture set.
  8. Creating two materials (case_wood and lid_wood_paint) in Cinema 4D and assigning the exported maps to the correct channels (a small scripted illustration of this step follows after the list).
  9. Editing the colour until satisfied.
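To illustrate step 8 of the list above, here is a minimal sketch of how the exported Substance maps could be assigned to material channels with the Cinema 4D Python SDK. I set the materials up by hand in the Material Editor, so the script and its texture file names are illustrative assumptions only.

```python
# Hedged sketch: building a Cinema 4D material from exported Substance maps
# via the Python SDK. The actual materials were set up by hand in the
# Material Editor; the texture file names below are placeholders.
import c4d

def make_material(doc, name, color_map, normal_map):
    """Create a standard material with a colour and a normal channel."""
    mat = c4d.BaseMaterial(c4d.Mmaterial)
    mat.SetName(name)

    # Base colour channel (the exported base colour map)
    color_shader = c4d.BaseShader(c4d.Xbitmap)
    color_shader[c4d.BITMAPSHADER_FILENAME] = color_map
    mat[c4d.MATERIAL_COLOR_SHADER] = color_shader
    mat.InsertShader(color_shader)

    # Normal channel carrying the wood grain depth information
    mat[c4d.MATERIAL_USE_NORMAL] = True
    normal_shader = c4d.BaseShader(c4d.Xbitmap)
    normal_shader[c4d.BITMAPSHADER_FILENAME] = normal_map
    mat[c4d.MATERIAL_NORMAL_SHADER] = normal_shader
    mat.InsertShader(normal_shader)

    doc.InsertMaterial(mat)
    return mat

def main():
    doc = c4d.documents.GetActiveDocument()
    make_material(doc, "case_wood",
                  "case_wood_basecolor.png", "case_wood_normal.png")
    make_material(doc, "lid_wood_paint",
                  "lid_wood_paint_basecolor.png", "lid_wood_paint_normal.png")
    c4d.EventAdd()  # refresh the Cinema 4D UI

if __name__ == "__main__":
    main()
```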

Inscription

To include all features of a coffin from this period, I have also added an inscription to the lid. It was taken from an image used by Taylor 2009, 404, tab. III of a coffin lid from this region and period (Lahun). It serves only as a visual placeholder, but this inscription shows all of the main characteristics of the writing on these coffins: a recumbent jackal, dark colour, a frame, and a somewhat untidy execution.

Most of the inscriptions on coffins from this period and region start with the opening phrase ḥtp dì nsw (an offering which the king gives), followed by the name of a deity, in this example Ptah-Sokar-Osiris. Here this is also followed by the epithet ḥḳꜣ ꜥnḫw (ruler of the living) of Osiris. See also Taylor 2009, 391f. and Bussmann 2017, 10ff. I would particularly like to thank my special friend Julia for helping with the hieroglyphs.

The inscription was separated from its background in Adobe Photoshop CC 2020 and converted to a vector image in Adobe Illustrator CC 2020 so it could be scaled up to nearly 4K without quality loss. After this, I created an alpha and a texture image (as raster images, because Substance Painter only accepts these) and painted the inscription as a pattern onto the texture created above in Substance Painter.
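As an illustration of what the alpha creation amounts to, the following small sketch thresholds the rasterised inscription into a black-and-white alpha mask using Pillow; the actual mask was made in Photoshop, and the file names and threshold value are assumptions.

```python
# Hedged sketch: deriving an alpha mask from the rasterised inscription with
# Pillow. The actual alpha was made in Photoshop; the file names and the
# threshold value of 128 are assumptions for illustration only.
from PIL import Image

SRC = "inscription_raster.png"   # placeholder: rasterised vector export (~4K)
DST = "inscription_alpha.png"    # greyscale mask for use in Substance Painter

img = Image.open(SRC).convert("L")   # greyscale: dark ink on light ground
# Dark inscription strokes become white (opaque), the background black (transparent).
alpha = img.point(lambda v: 255 if v < 128 else 0)
alpha.save(DST)
```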

Bibliography

R. Bussmann (2017), Complete Middle Egyptian. A New Method for Understanding Hieroglyphs (London).

K. M. Cooney (2015), Coffins, Cartonnage, and Sarcophagi, in: M. K. Hartwig (ed.), A Companion to Ancient Egyptian Art (Chichester), 269–292.

J.H. Taylor (2009), Coffins as evidence for a ‘north-south divide’ in the 22nd-25th dynasties, in: G.P.F. Broekman/R.J. Demarée/O.E. Kaper (edd.), The Libyan Period in Egypt. Historical and cultural studies into the 21st-24th dynasties: Proceedings of a conference at Leiden University, 25-27 October 2007 (Leiden), 375–415.

3D reconstruction VI. Final object and final thoughts

The final images are rendered in Cinema 4D with its built-in render engine set up for physical rendering (since PBR materials are used) and with physical lights, at 4K resolution.

The reconstructed coffin measures 56,5 cm in width, 196,7 cm in length and 37 cm in depth.

3D scanned artefact

Finished 3D reconstruction

3D reconstruction with animation

I hope these blog posts have shown what a big help and opportunity 3D documentation and reconstruction can offer for the visual understanding of archaeological artefacts. They offer interaction with archaeological finds and sites from every part of the world and have the potential to supplement traditional archaeological documentation methods, and perhaps even to replace them in the near future.

But with all these positive aspects, I hope I have also made clear how much work has to be put into a virtual reconstruction. On the one hand, you clearly need training to work with 3D software. This takes time and the learning curve is steep, but once you have learned the techniques it becomes much easier. On the other hand, and much more important, is how you carry out the reconstruction in terms of the sources used and academic practice.

Many reconstructions do not come with thorough documentation, so the steps and decisions made during the reconstruction process cannot be retraced afterwards. Yet this point is vital for academic discussion and work. We have to think of such documentation like the ordinary footnotes and citations used in academic papers and monographs. It makes it clear and transparent, first, how you as the modeller arrived at your results, and second, it gives your reconstruction a scientific value beyond the mere visual representation, since other scholars can begin to discuss it on the basis of the actual sources and interpretations you have published (a key value of academic work). Such a model or reconstruction can then also be used for further research (e.g. light or viewshed analysis), not by trusting it blindly but on a more solid and transparent foundation. Admittedly, this documentation takes at least as much time as creating the 3D model itself (for me considerably more, since I am quite experienced with the modelling side), but the outcome will be of much greater scientific and long-term value.

Therefore, I would like to ask everyone constructing or commissioning a virtual 3D reconstruction to include such documentation to at least some extent. Thank you for following this blog.