AUTHOR=Waghorne J., Howard C., Hu H., Pang J., Peveler W. J., Harris L., Barrera O.
TITLE=The applicability of transperceptual and deep learning approaches to the study and mimicry of complex cartilaginous tissues
JOURNAL=Frontiers in Materials
VOLUME=10
YEAR=2023
URL=https://www.frontiersin.org/journals/materials/articles/10.3389/fmats.2023.1092647
DOI=10.3389/fmats.2023.1092647
ISSN=2296-8016
ABSTRACT=Complex soft tissues, for example, the knee meniscus, play a crucial role in mobility and joint health, but they are incredibly difficult to repair and replace when damaged. This difficulty is due to the highly hierarchical and porous nature of the tissues, which in turn leads to their unique mechanical properties providing joint stability, load redistribution, and friction reduction. In order to design tissue substitutes, the internal architecture of the native tissue needs to be understood and replicated. Here we explore a combined audio-visual approach - a so-called transperceptual approach - to generate artificial architectures mimicking the native ones. The proposed methodology uses both traditional imagery and sound generated from each image to rapidly compare and contrast the porosity and pore size within the samples. We trained and tested a generative adversarial network (GAN) on 2D image stacks of a knee meniscus. To understand how the resolution of the training images affects the similarity of the artificial dataset to the original, we trained the GAN on two datasets. The first consists of 478 pairs of audio and image files, for which the images were compressed to 64 × 64 pixels. The second contains 7,640 pairs of audio and image files, for which the full resolution of 256 × 256 pixels is retained but each image is divided into 16 squares to satisfy the 64 × 64 pixel limit required by the GAN. We reconstruct the 2D stacks of the artificially generated datasets into 3D objects and run image analysis algorithms to characterize the architectural parameters statistically - pore size, tortuosity, and pore connectivity - and compare them with the original dataset. Results show that the artificially generated dataset based on the downsized rather than compressed images performs better in terms of matching, coming within 4-8% of the native dataset's mean pixel grayscale value, mean porosity, and mean pore size. Our audio-visual approach has the potential to be extended to larger datasets to explore how similarities and differences can be audibly recognized across multiple samples.
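
For illustration only, below is a minimal Python sketch of the tiling and per-tile bookkeeping the abstract describes: splitting a full-resolution 256 × 256 slice into the 16 non-overlapping 64 × 64 squares required by the GAN, then computing each tile's mean grayscale value and porosity. The helper names, the 0.5 threshold, and the assumption that darker pixels correspond to pore space are hypothetical choices for this sketch, not details taken from the paper.

    import numpy as np

    def tile_slice(img, tile=64):
        # Split a 2D grayscale slice into non-overlapping tile x tile squares,
        # in row-major order: a 256 x 256 slice yields 16 tiles of 64 x 64.
        h, w = img.shape
        if h % tile or w % tile:
            raise ValueError("slice dimensions must be multiples of the tile size")
        return [img[r:r + tile, c:c + tile]
                for r in range(0, h, tile)
                for c in range(0, w, tile)]

    def porosity(img, threshold=0.5):
        # Fraction of pixels classified as pore space; this sketch assumes
        # pores appear dark, i.e. grayscale values below a chosen threshold.
        return float(np.mean(img < threshold))

    # Usage on a synthetic slice with values in [0, 1]; real input would be a
    # normalized slice from the meniscus image stack.
    rng = np.random.default_rng(0)
    slice_2d = rng.random((256, 256))
    tiles = tile_slice(slice_2d)                      # 16 tiles of 64 x 64
    per_tile = [(t.mean(), porosity(t)) for t in tiles]
    mean_gray = np.mean([g for g, _ in per_tile])
    mean_poro = np.mean([p for _, p in per_tile])
    print(f"tiles: {len(tiles)}, mean grayscale: {mean_gray:.3f}, "
          f"mean porosity: {mean_poro:.3f}")

Aggregating the per-tile statistics in this way mirrors the kind of comparison reported in the abstract (mean grayscale value, mean porosity) between the generated and native datasets; the paper's own analysis pipeline additionally measures pore size, tortuosity, and pore connectivity on the reconstructed 3D objects.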