Collaborate with medical oncology researchers and professionals to identify clinically relevant questions, and seek to train machine learning models to answer them. Gather, curate, wrangle, and clean medical imaging datasets for lung (CT) and brain (MR) cancer studies, as well as associated genomic profiles. Perform literature reviews and identify state-of-the-art methods and practices. Select appropriate deep learning strategies and architectures, and perform hyperparameter optimization.
Studied geometry-function relationships in biological composites and designed and fabricated their synthetic analogs through parametric geometry modeling. Developed multi-material bitmap 3D printing workflows for various medical imaging modalities. Designed and fabricated patient-specific aortic phantoms from cardiac CT, as well as hardware for sizing heart valve replacements. Developed pipelines for modeling skull base defects for endoscopic endonasal surgeries and fabricating patient-specific prostheses.
Foster + Partners Beijing
2011 - 2013 (over 2 years)
Designed and managed the 36,000 m² cladding (roof and courtyard weathering steel/glass) and 4,000-ton steel roof space truss packages. Led fabrication optimization efforts through computational design exercises. Developed the roof panelization system and associated details with the cladding subcontractor and suppliers. Ran steel structure vs. cladding clash detection simulations through Building Information Modeling (BIM).
2010 - 2011 (about 1 year)
Designed and developed the conversion of 350 freight containers into a 70-room hotel, reception lobby, office building within a packaging warehouse, and organic vegetable market. Coordinated with container fabricators on bespoke structural elements, openings, and stairs. Assigned tasks and led the project team. Organized project action plans and daily communication. Reviewed structural, mechanical, and electrical drawings.
Tumors are continuously evolving biological systems, and medical imaging is uniquely positioned to monitor changes throughout treatment. Although qualitatively tracking lesions over space and time may be trivial, the development of clinically relevant, automated radiomics methods that incorporate serial imaging data is far more challenging.
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Automated quantification of radiographic characteristics
Background Non-small-cell lung cancer (NSCLC) patients often demonstrate varying clinical courses and outcomes, even within the same tumor stage. This study explores deep learning applications in medical imaging allowing for the automated quantification of radiographic characteristics and potentially improving patient stratification.
Methods and findings We performed an integrative analysis on 7 independent datasets across 5 institutions totaling 1,194 NSCLC patients (age median = 68.3 years [range 32.5–93.3], survival median = 1.7 years [range 0.0–11.7]). Using external validation in computed tomography (CT) data, we identified prognostic signatures using a 3D convolutional neural network (CNN) for patients treated with radiotherapy (n = 771, age median = 68.0 years [range 32.5–93.3], survival median = 1.3 years [range 0.0–11.7]). We then employed a transfer learning approach to achieve the same for surgery patients (n = 391, age median = 69.1 years [range 37.2–88.0], survival median = 3.1 years [range 0.0–8.8]). We found that the CNN predictions were significantly associated with 2-year overall survival from the start of respective treatment for radiotherapy (area under the receiver operating characteristic curve [AUC] = 0.70 [95% CI 0.63–0.78], p < 0.001) and surgery (AUC = 0.71 [95% CI 0.60–0.82], p < 0.001) patients. The CNN was also able to significantly stratify patients into low and high mortality risk groups in both the radiotherapy (p < 0.001) and surgery (p = 0.03) datasets. Additionally, the CNN was found to significantly outperform random forest models built on clinical parameters—including age, sex, and tumor node metastasis stage—as well as demonstrate high robustness against test–retest (intraclass correlation coefficient = 0.91) and inter-reader (Spearman’s rank-order correlation = 0.88) variations. To gain a better understanding of the characteristics captured by the CNN, we identified regions with the most contribution towards predictions and highlighted the importance of tumor-surrounding tissue in patient stratification. We also present preliminary findings on the biological basis of the captured phenotypes as being linked to cell cycle and transcriptional processes. 
Limitations include the retrospective nature of this study as well as the opaque, black-box nature of deep learning networks.
Conclusions Our results provide evidence that deep learning networks may be used for mortality risk stratification based on standard-of-care CT images from NSCLC patients. This evidence motivates future research into better deciphering the clinical and biological basis of deep learning networks as well as validation in prospective data.
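The headline metric above, the AUC, is equivalent to the Mann-Whitney statistic, and the confidence intervals quoted are of the kind a percentile bootstrap produces. The sketch below (numpy only, synthetic scores rather than study data) illustrates both computations:

```python
import numpy as np

def auc_mann_whitney(y_true, y_score):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive case scores above a random negative case (ties count half)."""
    y_true = np.asarray(y_true).astype(bool)
    pos = np.asarray(y_score, dtype=float)[y_true]
    neg = np.asarray(y_score, dtype=float)[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def auc_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    n = len(y_true)
    boot = []
    while len(boot) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue  # a resample must contain both classes
        boot.append(auc_mann_whitney(y_true[idx], y_score[idx]))
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))
```

This is only a minimal illustration of the reported statistics, not the study's evaluation code, which also handled censoring and multiple cohorts.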
A review of data science methods in medical imaging.
Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. Sophistication of artificial intelligence has allowed for detailed quantification of radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology as well as a wealth of research studies hint at the clinical relevance of these characteristics. However, critical challenges are associated with the analysis of medical imaging data. Although some of these challenges are specific to the imaging field, many others like reproducibility and batch effects are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies of medical imaging data, including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality but also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources.
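As one concrete example of such a normalization step (an illustrative choice, not the paper's prescribed pipeline), CT intensities can be clipped to a fixed Hounsfield-unit window and z-scored, so downstream models see comparable intensity distributions across scanners:

```python
import numpy as np

def normalize_ct(volume_hu, window=(-1000, 400)):
    """Clip CT intensities to a fixed HU window, then z-score.

    Clipping to a shared window (the range here is an illustrative
    choice) reduces scanner-dependent batch effects before features
    are extracted or models are trained.
    """
    lo, hi = window
    v = np.clip(volume_hu.astype(np.float32), lo, hi)
    return (v - v.mean()) / (v.std() + 1e-8)
```

The same idea generalizes to per-cohort harmonization: fix the window and statistics once, then apply them identically to every scan.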
Benchtop workflow for pre-procedural fit-testing of TAVR
Background Successful transcatheter aortic valve replacement (TAVR) requires an understanding of how a prosthetic valve will interact with a patient's anatomy in advance of surgical deployment. To improve this understanding, we developed a benchtop workflow that allows for testing of physical interactions between prosthetic valves and patient-specific aortic root anatomy, including calcified leaflets, prior to actual prosthetic valve placement.
Methods This was a retrospective study of 30 patients who underwent TAVR at a single high-volume center. By design, the dataset contained 15 patients with a successful annular seal (defined by an absence of paravalvular leaks) and 15 patients with a sub-optimal seal (presence of paravalvular leaks) on post-procedure transthoracic echocardiogram (TTE). Patients received either a balloon-expandable (Edwards Sapien or Sapien XT, n = 15), or a self-expanding (Medtronic CoreValve or Core Evolut, n = 14, St. Jude Portico, n = 1) valve. Pre-procedural computed tomography (CT) angiograms, parametric geometry modeling, and multi-material 3D printing were utilized to create flexible aortic root physical models, including displaceable calcified valve leaflets. A 3D printed adjustable sizing device was then positioned in the aortic root models and sequentially opened to larger valve sizes, progressively flattening the calcified leaflets against the aortic wall. Optimal valve size and fit were determined by visual inspection and quantitative pressure mapping of interactions between the sizer and models.
Results Benchtop-predicted “best fit” valve size showed a statistically significant correlation with gold standard CT measurements of the average annulus diameter (n = 30, p < 0.0001, Wilcoxon matched-pairs signed-rank test). Adequateness of seal (presence or absence of paravalvular leak) was correctly predicted in 11/15 (73.3%) patients who received a balloon-expandable valve, and in 9/15 (60%) patients who received a self-expanding valve. Pressure testing provided a physical map of areas with an inadequate seal; these corresponded to areas of paravalvular leak documented by post-procedural transthoracic echocardiography.
Conclusion We present and demonstrate the potential of a workflow for determining optimal prosthetic valve size that accounts for aortic annular dimensions as well as the active displacement of calcified valve leaflets during prosthetic valve deployment. The workflow's open source framework offers a platform for providing predictive insights into the design and testing of future prosthetic valves.
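The paired test named above, the Wilcoxon matched-pairs signed-rank test, is available in SciPy. A minimal sketch with synthetic paired measurements (invented numbers, not study data) looks like:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired measurements: CT-derived annulus diameters (mm)
# and benchtop-predicted "best fit" sizes for the same 15 patients.
ct_diameter = np.array([21.0, 23.5, 24.1, 22.8, 25.6, 23.0, 24.9,
                        22.2, 26.3, 23.8, 21.7, 25.1, 24.4, 22.6, 23.3])
bench_size = ct_diameter + 1.0  # a consistent offset, for illustration

# Tests whether the paired differences are symmetric about zero.
stat, p = wilcoxon(ct_diameter, bench_size)
print(f"W = {stat:.1f}, p = {p:.2e}")  # small p: a systematic difference
```

With every difference sharing the same sign, as constructed here, the signed-rank statistic collapses to zero and the p-value is small.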
Three-dimensional (3D) printing technologies are increasingly used to convert medical imaging studies into tangible (physical) models of individual patient anatomy, allowing physicians, scientists, and patients an unprecedented level of interaction with medical data. To date, virtually all 3D-printable medical data sets are created using traditional image thresholding, subsequent isosurface extraction, and the generation of .stl surface mesh file formats. These existing methods, however, are highly prone to segmentation artifacts that either over- or underexaggerate the features of interest, thus resulting in anatomically inaccurate 3D prints. In addition, they often omit finer detailed structures and require time- and labor-intensive processes to visually verify their accuracy. To circumvent these problems, we present a bitmap-based multimaterial 3D printing workflow for the rapid and highly accurate generation of physical models directly from volumetric data stacks. This workflow employs a thresholding-free approach that bypasses both isosurface creation and traditional mesh slicing algorithms, hence significantly improving speed and accuracy of model creation. In addition, using preprocessed binary bitmap slices as input to multimaterial 3D printers allows for the physical rendering of functional gradients native to volumetric data sets, such as stiffness and opacity, opening the door for the production of biomechanically accurate models.
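The core idea of converting grayscale slices into binary deposit/no-deposit bitmaps can be illustrated with classic Floyd-Steinberg error diffusion, in which the local density of ones tracks the original gray value. This is a minimal sketch of the concept, not the published workflow:

```python
import numpy as np

def floyd_steinberg(slice01):
    """Error-diffusion dither of one normalized (0..1) image slice
    into a binary material-deposition bitmap. Quantization error at
    each pixel is pushed onto unvisited neighbors so local average
    density matches the source gray level."""
    img = slice01.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Run per material channel and per slice, stacks of such bitmaps are exactly the kind of input a voxel-level multimaterial printer consumes.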
We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Our approach alleviates the need to postprocess data sets to boundary representations, preventing alteration of data and loss of information in the produced physicalizations. Therefore, it bridges the gap between digital information representation and physical material composition. We evaluate the visual characteristics and features of our method, assess its relevance and applicability in the production of physical visualizations, and detail the conversion of data sets for multimaterial 3D printing. We conclude with exemplary 3D-printed data sets produced by our method pointing toward potential applications across scales, disciplines, and problem domains.
Artificial intelligence (AI) systems built on incomplete or biased data will often exhibit problematic outcomes. Current methods of data analysis, particularly before model development, are costly and not standardized. The Dataset Nutrition Label (the Label) is a diagnostic framework that lowers the barrier to standardized data analysis by providing a distilled yet comprehensive overview of dataset "ingredients" before AI model development. Building a Label that can be applied across domains and data types requires that the framework itself be flexible and adaptable; as such, the Label comprises diverse qualitative and quantitative modules generated through multiple statistical and probabilistic modelling backends, but displayed in a standardized format. To demonstrate and advance this concept, we generated and published an open source prototype with seven sample modules on the ProPublica Dollars for Docs dataset. The benefits of the Label are manifold. For data specialists, the Label will drive more robust data analysis practices, provide an efficient way to select the best dataset for their purposes, and increase the overall quality of AI models as a result of more robust training datasets and the ability to check for issues at the time of model development. For those building and publishing datasets, the Label creates an expectation of explanation, which will drive better data collection practices. We also explore the limitations of the Label, including the challenges of generalizing across diverse datasets, and the risk of using "ground truth" data as a comparison dataset. We discuss ways to move forward given the limitations identified. Lastly, we lay out future directions for the Dataset Nutrition Label project, including research and public policy agendas to further advance consideration of the concept.
OBJECTIVE: To design and validate a novel mixed reality head-mounted display for intraoperative surgical navigation. DESIGN: A mixed reality navigation for laparoscopic surgery (MRNLS) system using a head-mounted display (HMD) was developed to integrate the displays from a laparoscope, navigation system, and diagnostic imaging to provide context-specific information to the surgeon. Further, immersive auditory feedback was also provided to the user. Sixteen surgeons were recruited to quantify the differential improvement in performance based on the mode of guidance provided to the user (laparoscopic navigation with CT guidance [LN-CT] versus mixed reality navigation for laparoscopic surgery [MRNLS]). The users performed three tasks: (1) standard peg transfer, (2) radiolabeled peg identification and transfer, and (3) radiolabeled peg identification and transfer through sensitive wire structures. RESULTS: For the more complex task of peg identification and transfer, significant improvements were observed in time to completion, kinematics such as mean velocity, and task load index subscales of mental demand and effort when using the MRNLS (p < 0.05) compared to the current standard of LN-CT. For the final task of peg identification and transfer through sensitive structures, time taken to complete the task and frustration were significantly lower for MRNLS compared to the LN-CT approach. CONCLUSIONS: A novel mixed reality navigation for laparoscopic surgery (MRNLS) system has been designed and validated. The ergonomics of laparoscopic procedures could be improved while minimizing the necessity of additional monitors in the operating room.
OBJECTIVE Endoscopic endonasal approaches are increasingly performed for the surgical treatment of multiple skull base pathologies. Preventing postoperative CSF leaks remains a major challenge, particularly in extended approaches. In this study, the authors assessed the potential use of modern multimaterial 3D printing and neuronavigation to help model these extended defects and develop specifically tailored prostheses for reconstructive purposes.
METHODS Extended endoscopic endonasal skull base approaches were performed on 3 human cadaveric heads. Preprocedure and intraprocedure CT scans were completed and were used to segment and design extended and tailored skull base models. Multimaterial models with different core/edge interfaces were 3D printed for implantation trials. A novel application of the intraoperative landmark acquisition method was used to transfer the navigation, helping to tailor the extended models.
RESULTS Prostheses were created based on preoperative and intraoperative CT scans. The navigation transfer offered sufficiently accurate data to tailor the preprinted extended skull base defect prostheses. Successful implantation of the skull base prostheses was achieved in all specimens. The progressive flexibility gradient of the models’ edges offered the best compromise for easy intranasal maneuverability, anchoring, and structural stability. Prostheses printed based on intraprocedure CT scans were accurate in shape but slightly undersized.
CONCLUSIONS Preoperative 3D printing of patient-specific skull base models is achievable for extended endoscopic endonasal surgery. The careful spatial modeling and the use of a flexibility gradient in the design helped achieve the most stable reconstruction. Neuronavigation can help tailor preprinted prostheses.
Radiomics aims to quantify phenotypic characteristics on medical imaging through the use of automated algorithms. Radiomic artificial intelligence (AI) technology, either based on engineered hard-coded algorithms or deep learning methods, can be used to develop noninvasive imaging-based biomarkers. However, lack of standardized algorithm definitions and image processing severely hampers reproducibility and comparability of results. To address this issue, we developed PyRadiomics, a flexible open-source platform capable of extracting a large panel of engineered features from medical images. PyRadiomics is implemented in Python and can be used standalone or using 3D Slicer. Here, we discuss the workflow and architecture of PyRadiomics and demonstrate its application in characterizing lung lesions. Source code, documentation, and examples are publicly available at www.radiomics.io. With this platform, we aim to establish a reference standard for radiomic analyses, provide a tested and maintained resource, and grow the community of radiomic developers addressing critical needs in cancer research.
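To give a flavor of what an "engineered feature" is, here is a toy numpy re-implementation of three first-order features over a region of interest. PyRadiomics computes these, and many more texture and shape classes, with standardized and tested definitions; this sketch is for intuition only:

```python
import numpy as np

def first_order_features(image, mask, n_bins=32):
    """Three illustrative first-order radiomic features (mean, energy,
    histogram entropy) computed over the masked region of interest.
    A toy stand-in for standardized extractors such as PyRadiomics."""
    roi = image[mask > 0].astype(np.float64)
    hist, _ = np.histogram(roi, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    return {
        "mean": float(roi.mean()),
        "energy": float(np.sum(roi ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
    }
```

In practice the point of a shared platform is precisely that details like binning and ROI handling are fixed once, so features are comparable across studies.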
Interest in 3-dimensional (3D) printing of anatomic structures continues to grow for a range of applications within the medical field. Proprietary software limits the accessibility of information stored within echocardiographic data sets. This study aims to unlock vendor-specific tags and establish an open-source workflow for generating 3D anatomic models of cardiac structures from routine clinical echocardiographic data sets.
OBJECTIVE. The purpose of this article is to describe a handheld external compression device used to facilitate CT fluoroscopy–guided percutaneous interventions in the abdomen.
CONCLUSION. The device was designed with computer-aided design software to modify an existing gastrointestinal fluoroscopy compression device and was constructed by 3D printing. This abdominal compression device facilitates access to interventional targets, and its use minimizes radiation exposure of radiologists. Twenty-one procedures, including biopsies, drainage procedures, and an ablation, were performed with the device. Radiation dosimetry data were collected during two procedures.
Tilings are constructs of repeated shapes covering a surface, common in both man-made and natural structures, but in particular are a defining characteristic of shark and ray skeletons. In these fishes, cartilaginous skeletal elements are wrapped in a surface tessellation, composed of polygonal mineralized tiles linked by flexible joints, an arrangement believed to provide both stiffness and flexibility. The aim of this research is to use two-dimensional analytical models to evaluate the mechanical performance of stingray skeleton-inspired tessellations, as a function of their material and structural parameters. To calculate the effective modulus of modeled composites, we subdivided tiles and their surrounding joint material into simple shapes, for which mechanical properties (i.e. effective modulus) could be estimated using a modification of traditional Rule of Mixtures equations, that either assume uniform strain (Voigt) or uniform stress (Reuss) across a loaded composite material. The properties of joints (thickness, Young’s modulus) and tiles (shape, area and Young’s modulus) were then altered, and the effects of these tessellation parameters on the effective modulus of whole tessellations were observed. We show that for all examined tile shapes (triangle, square and hexagon) composite stiffness increased as the width of the joints was decreased and/or the stiffness of the tiles was increased; this supports hypotheses that the narrow joints and high tile to joint stiffness ratio in shark and ray cartilage optimize composite tissue stiffness. Our models also indicate that, for simple, uniaxial loading, square tessellations are least sensitive and hexagon tessellations most sensitive to changes in model parameters, indicating that hexagon tessellations are the most “tunable” to specific mechanical properties. Our models provide useful estimates for the tensile and compressive properties of 2D tiled composites under uniaxial loading.
These results lay groundwork for future studies into more complex (e.g. biological) loading scenarios and three-dimensional structural parameters of biological tilings, while also providing insight into the mechanical roles of tessellations in general and improving the design of bioinspired materials.
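The two Rule of Mixtures bounds named above are one-liners; with hypothetical tile and joint stiffnesses (values invented for illustration) they show why thin, compliant joints dominate the uniform-stress (series) response:

```python
import numpy as np

def voigt_modulus(E, f):
    """Uniform-strain (Voigt) bound: arithmetic volume-fraction average."""
    return float(np.dot(f, E))

def reuss_modulus(E, f):
    """Uniform-stress (Reuss) bound: harmonic volume-fraction average."""
    E = np.asarray(E, dtype=float)
    return float(1.0 / np.dot(f, 1.0 / E))

# Hypothetical stiff tiles (4000 MPa) with compliant joints (1 MPa)
# at a 5% joint volume fraction:
E = [4000.0, 1.0]  # MPa
f = [0.95, 0.05]   # volume fractions
print(voigt_modulus(E, f))  # parallel loading: tiles dominate (~3800 MPa)
print(reuss_modulus(E, f))  # series loading: joints dominate (~20 MPa)
```

Even a 5% joint fraction drags the series bound down by two orders of magnitude, consistent with the finding that narrower joints and a higher tile-to-joint stiffness ratio stiffen the composite.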
The research falls within the domain of heuristic multi-user driven design. It explores the shift from the utilization of automated systems for the evaluation of design optimization problems to a user-based approach. Users are asked to collaborate in solving a design problem relying on their intuition and experience to guide the solution to convergence.
This project is a browser-based utility to align and compare .stl files and can be found at http://equate.mecano.io. The first step involves applying a rigid transformation (translation and rotation only) to the subject .stl to align it with the reference .stl. This is done by picking 3 points on both the reference and subject models in the same order.
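A rigid transformation from ordered point correspondences like these is classically solved with the Kabsch/SVD method. The numpy sketch below is an assumed implementation of that step, not the utility's actual code:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (Kabsch algorithm). P, Q: (n, 3) arrays of corresponding points picked
    in the same order, n >= 3 and not collinear."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Applying `R @ p + t` to every vertex of the subject mesh then brings it into the reference frame before any distance comparison.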
More images from our Bioptimized sculpture in the heart of Harvard Yard as part of the Arts First Festival. Every spring Harvard celebrates the creativity of its faculty and students through a week-long celebration called Arts First. The festival is a public event with many free performances and activities for the Harvard and Cambridge communities. Selected by committee, this sculpture arose out of research being done in the field of computational design and digital fabrication at the Graduate School of Design. Designed to be situated on the strong axis between the historic Johnson Gate and the John Harvard Statue, the sculpture intriguingly changes appearance from nearly transparent to monolithic as one circumnavigates it. Utilizing a newly developed voxel modeling software called Monolith by Associate Professor Panagiotis Michalatos and Andy Paine, the global form was created to push the boundary of structural integrity of two intersecting cylinders. The intersection was placed in such a way so as to maximize the cut surface area in order to fully express the layered aesthetic of the plywood.