Manual annotation in medical imaging faces significant challenges, including susceptibility to human bias, high time cost, and inaccurate image interpretation in both research and healthcare settings. In the OpenCV AI Contest 2023, our project "ParaSAM" introduces a new approach to tumor segmentation and volumetrics in medical images, leveraging deep learning to transcend the human biases inherent in manual annotation. This collaborative effort with Umeå University and the Daniel Öhlund Lab focuses on pancreatic cancer research, using an extensive dataset of 3D ultrasound images from a KPC mouse model. ParaSAM builds upon the MedSAM model, refining it for greater accuracy in the automatic detection and segmentation of pancreatic tumors. We developed an interactive 3D application using Unity and OpenCV for importing ultrasound images, performing automatic segmentation, and visualizing tumors in 3D. Our preliminary results demonstrate significant improvements in tumor detection and volumetric analysis over conventional methods, marking a crucial step towards precision oncology diagnostics.

ParaSAM (from the Greek prefix para-, meaning 'beyond', and SAM, from 'Segment Anything Model') is an advanced tool for precise segmentation and volumetric analysis of tumors in medical images, developed using OpenCV and Unity. As a significant evolution and refinement of the SAM model, ParaSAM specifically addresses the challenges of detecting and analyzing pancreatic tumors. In collaboration with Daniel Öhlund's Lab (Umeå University), this project aims to:

- Enhance the accuracy of tumor segmentation in ultrasound images, moving beyond the capabilities of the original SAM and its medical adaptation, MedSAM.
- Automate the tumor annotation process, reducing the time and effort involved in manual annotation.
- Develop a comprehensive workflow using OpenCV and Unity, encompassing data preprocessing, automated capture and extraction of annotations, dataset generation, and advanced 3D visualization techniques.

Throughout the three-month contest period, we focused on annotating a large number of tumors and developing a proof-of-concept application. This application demonstrates the entire workflow: training a MedSAM refinement, post-processing the inference results, and advanced visualization in 3D and mixed-reality environments. Preliminary results (detailed in a later section) confirm that ParaSAM significantly improves upon the segmentation and volumetric analysis capabilities of the original SAM and MedSAM models, marking a substantial advancement in medical imaging technology.
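To make the volumetrics step concrete, the sketch below shows one common way to turn per-slice segmentation masks into a tumor volume estimate: count the predicted voxels across the slice stack and scale by the physical voxel size. This is a minimal illustration, not ParaSAM's actual implementation; the function name, parameters, and spacing values are assumptions introduced here for the example.

```python
import numpy as np

def estimate_tumor_volume(masks, pixel_spacing_mm, slice_thickness_mm):
    """Estimate tumor volume in mm^3 from a stack of binary masks.

    masks             -- array of shape (num_slices, H, W); nonzero where
                         the model predicted tumor on that ultrasound slice.
    pixel_spacing_mm  -- (row_spacing, col_spacing) of one slice, in mm.
    slice_thickness_mm -- distance between consecutive slices, in mm.
    """
    masks = np.asarray(masks, dtype=bool)
    # Physical volume of a single voxel.
    voxel_volume = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    # Total volume = number of predicted voxels x voxel volume.
    return masks.sum() * voxel_volume

# Toy example: 10 slices of 4x4 pixels, all marked as tumor,
# with 0.5 mm x 0.5 mm pixels and 1 mm slice spacing.
stack = np.ones((10, 4, 4), dtype=np.uint8)
print(estimate_tumor_volume(stack, (0.5, 0.5), 1.0))  # 40.0 mm^3
```

In practice the masks would come from the refined MedSAM inference after post-processing, and the spacings from the ultrasound acquisition metadata.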