Now and again a project comes up that challenges me to explore a new technique or technology for the first time. One such job came across my desk recently that I thought was certainly worth a wee blog post. Earlier this summer I completed work preparing a number of Egyptian canopic jars from Luxor to be 3D printed.
The project was carried out for the New Kingdom Research Foundation, a research group led by Piers Litherland that is currently investigating the desert wadis west of Luxor, including the excavation of a number of tombs looted in antiquity. There are around 78 jars in the collection belonging to various royal women, including the wives of Pharaoh Amenhotep III, though unfortunately most of the jars were very badly damaged during the aforementioned ancient looting. Prior to the excavations by the NKRF, the existence of these tombs was virtually unknown, mentioned only briefly in a 1917 article by Howard Carter. Background research had uncovered several fragments of jars in museum collections in Cairo, London and Strasbourg that had a very strong likelihood of originating from the same site. Through careful study of the objects' descriptions, material type, dimensions and photographs, Rosa Vane managed to identify a number of joins between the fragments in the museum catalogues and those in the newly excavated material, but it was difficult to go further without being able to physically place the two pieces together. Piers suggested the next best thing – to 3D print fragments from the new excavation and send them to the museum.
The first step in the project was to document a selection of the fragments using photogrammetry, which would then allow me to reconstruct the jars as solid "watertight" digital meshes and process them to a standard suitable for 3D printing. I worked together with Alexis Pantos, who provided the photographs I would use to generate the 3D models.
Alexis had this to say about his process:
“All photography comes with challenges that vary depending on the subject of the photograph and the conditions at the time of the shoot. Photogrammetry is no exception and one must try and find the balance between capture detail, data volume, capture time and so on and so forth.
Photogrammetry shoots are also very different from what could be considered standard archaeological artefact photography. Ordinarily, light and shadow are used to show the form of an object, to bring out details of the surface and to reveal the colour of the material, while simultaneously capturing an image that is aesthetically pleasing.”
If you read the blog you'll know that photogrammetry is a technique I've used for quite a few years now, and it tends to make an appearance in most of my work – be it in the form of a terrain model, the foundations of a ruined structure to be reconstructed, or site-specific artefacts captured to incorporate into my visualisations in various guises. It's a great technique because it's relatively quick to do, it produces fairly reliable results once you know what you're doing, and with a little creativity it can be applied at a whole range of scales – from landscapes, to structures, to artefacts.
The images above demonstrate some of the varied approaches to photogrammetry we've used over the years. From left to right: gearing up for some chilly photogrammetry in a Cessna flying over the Skaftafellsjökull glacier in Iceland with Baxter back in May this year. Using pole photogrammetry to record some of the open trenches at the Links of Noltland excavations in Orkney. Kieran's trusty kite rig recording the site to produce a 3D model of the immediate landscape for me to build my reconstructions up from. And finally, the most common approach: using photogrammetry to record artefacts on site.
The process of photogrammetry itself is fairly straightforward as the images below demonstrate…
Once the photographs of the artefact have been captured, loading them into Agisoft Photoscan is the first port of call for processing them into a 3D model. The image above nicely demonstrates how Photoscan works. The blue rectangles represent the cameras, the positions of which Photoscan has calculated in 3D space (you'll see these images have been captured right around the whole object, at varying eye levels and angles, and with lots of overlap between photographs). Photoscan is then able to use these camera positions to generate a sparse point cloud – the first stage towards creating a fully textured 3D mesh of the object.
This initial sparse point cloud is then further processed and cleaned of erroneous or unnecessary points to create a much denser point cloud which carries enough information to produce a solid mesh of the object.
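The cleaning stage essentially hunts for points that sit suspiciously far from their neighbours. Photoscan has its own tools for this, but the underlying idea can be sketched in a few lines of Python – a toy statistical-outlier filter of my own for illustration, not Photoscan's actual algorithm:

```python
import math
import statistics

def remove_outliers(points, k=4, std_ratio=1.5):
    """Drop points whose mean distance to their k nearest
    neighbours is unusually large (likely noise or erroneous
    matches). Brute-force O(n^2) – fine for a sketch, not for
    a multi-million-point cloud."""
    mean_dists = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_dists.append(sum(ds[:k]) / k)

    mu = statistics.mean(mean_dists)
    sigma = statistics.stdev(mean_dists)
    threshold = mu + std_ratio * sigma
    # Keep only points whose neighbourhood distance looks typical
    return [p for p, d in zip(points, mean_dists) if d <= threshold]
```

A real implementation would use a spatial index (k-d tree) for the neighbour search, but the statistical test is the same.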
This particular jar was recorded in two sections – one in the position you see here, and in a second set of photographs with the jar positioned upside down to allow the base to be recorded as well. I followed the same process through to the dense point cloud stage for this second “chunk” of the jar, then used a nifty alignment process in Photoscan to fit these two pieces together.
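That alignment boils down to finding the rigid transform that best maps shared points in one chunk onto their counterparts in the other. As a rough illustration – in 2D for brevity, whereas Photoscan solves the 3D case and also estimates scale – a least-squares sketch of my own might look like:

```python
import math

def align_2d(src, dst):
    """Least-squares rigid alignment (rotation + translation)
    mapping src points onto dst points, given known
    correspondences. A 2D analogue of point-based chunk
    alignment."""
    n = len(src)
    cs = (sum(x for x, _ in src) / n, sum(y for _, y in src) / n)
    cd = (sum(x for x, _ in dst) / n, sum(y for _, y in dst) / n)
    # Accumulate cross and dot products of the centred pairs;
    # their ratio gives the optimal rotation angle.
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation carries the rotated src centroid onto dst's
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)

def apply_transform(transform, p):
    theta, (tx, ty) = transform
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)
```

The 3D version replaces the `atan2` step with a small matrix decomposition (the Kabsch algorithm), but the centroid-and-rotation structure is identical.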
And this is the final result: a solid mesh which preserves an excellent level of surface detail in the geometry, textured by re-projecting the photographs from the cameras onto the model. The model could then be exported to additional software (I used 3ds Max and Geomagic Design X) to tidy it up further before printing. Usually I'd leave the model at this stage and use it directly out of Photoscan; however, 3D printing is a different beast and the mesh needed to be of a much higher quality, with no holes or non-manifold edges. Sometimes the software can get a little confused and do funny things to edges it isn't quite sure about, creating a kind of rat's nest of mangled polygons and vertices – difficult to spot with the naked eye in an animation or still render, but 3D printing would not be so forgiving.
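Those problem edges are exactly what mesh-repair tools hunt for: in a watertight triangle mesh, every edge is shared by exactly two faces. One face means a hole; three or more means a non-manifold tangle a printer will choke on. A wee sketch of that check – my own illustration, not any particular package's API:

```python
from collections import Counter

def mesh_report(faces):
    """Classify the edges of a triangle mesh, with faces given
    as triples of vertex indices. Watertight means every edge
    belongs to exactly two faces."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            # Sort so the same edge counts once regardless of winding
            edges[tuple(sorted((u, v)))] += 1
    boundary = sum(1 for n in edges.values() if n == 1)
    nonmanifold = sum(1 for n in edges.values() if n > 2)
    return {"watertight": boundary == 0 and nonmanifold == 0,
            "boundary_edges": boundary,
            "nonmanifold_edges": nonmanifold}
```

A tetrahedron passes; anything with an open rim or a doubled-up face fails, which is the sort of thing the repair software flags before printing.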
Once the model had been fully proofed for any errors – missing bits, pokey bits that shouldn't be there, texture issues – and I had double-, triple- and quadruple-checked it was all at the correct scale, it was ready to be sent off for printing. We'd been in touch with Steven Dey over at Think See 3D, who already had an impressive portfolio of heritage-based printing projects…so we were confident we were in good hands for our first foray into the world of 3D printing.
Steven was just as enthusiastic as we were and sent me photographs of the printing process as it happened (above), which was exciting to see.
A few weeks went by, then I received a forwarded email from Alexis with the image below, which shows our 3D printed fragment from Luxor (printed in the UK) reunited with its counterpart in Strasbourg. The email from the University of Strasbourg was really very lovely and even used the words “pioneering archaeology” – considering it had been our first experience of 3D printing, all I could think when I read the email was “Oh thank goodness the two pieces fit together!”. Needless to say I slept very soundly that night!
Up until this project I think I'd always reserved a quiet skepticism about 3D printing for heritage, as creating replicas of artefacts in a synthetic material seemed to offer only a limited representation of the original object. For me, the tactile experience of handling an artefact relies on so much more than its geometric form – its unique qualities of weight, texture, material and craftsmanship are so important to its interpretation. However, in this particular case I have to say I've done a complete 180 on my initial skepticism.
I mean, just look at this image! How wonderful is it that these two fragments from different collections, being studied thousands of miles apart, could be reunited through this technology? What I love about this particular case as well is that we made a conscious decision not to texture the 3D print, so it was very clear to the viewer that they were looking at a replica. I think the reason I love this is that the 3D print isn't trying to fool anyone; it's serving a very specific purpose and it is what it is. The image itself is very upfront, and I think that's really important, particularly in the field of “virtual heritage”, which often comes under fire for the illusion of realism in interpretive imagery.
A really interesting project to be a part of, and now that I've mastered the art of modelling for 3D printing without causing any mangled heaps of steaming resin, I'm excited to see what's next!