VAI radiology has leading AI/ML experts to help you meet your AI needs. We develop customized AI algorithms (software) to optimize your business performance, and we offer healthcare facilities a wide range of AI algorithms for AI-based triage services and effective operations and management. For radiology services, VAI radiology has already developed a series of algorithms for detecting brain hemorrhage, stroke, and metastasis, which can be integrated at your facility to support continued R&D in this field while optimizing your facility's performance.
We also provide AI algorithm maintenance, tuning, installation, statistical analysis, and continued R&D. For AI/ML, we use deep learning algorithms based on convolutional neural networks (CNNs). CNNs represent a major breakthrough in image recognition: they are most commonly used to analyze visual imagery and frequently work behind the scenes in image classification. We train our CNNs with transfer learning, a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task. It is a popular approach in deep learning, where pre-trained models serve as the starting point for computer vision and natural language processing tasks, both because of the vast compute and time required to train neural networks from scratch on these problems and because of the large gains in skill they provide on related problems.
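As a minimal sketch of the transfer-learning idea described above, the example below freezes a pre-trained backbone and trains only a new task-specific head. The framework (Keras), the backbone (MobileNetV2), and the two-class head are illustrative assumptions, not our production pipeline; in practice `weights="imagenet"` would load the pre-trained features, while here `weights=None` keeps the sketch self-contained.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Pre-trained backbone, reused as a fixed feature extractor.
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,   # drop the original ImageNet classifier head
    weights=None,        # use weights="imagenet" in practice
)
base.trainable = False   # freeze the transferred layers

# New task-specific head, trained on the target data
# (e.g. a hypothetical lesion / no-lesion label).
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Only the small `Dense` head is updated during training, which is what makes transfer learning fast on modest datasets.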
For radiology AI/ML, we use CNN-based semantic segmentation to identify and localize lesions. The goal of semantic image segmentation is to label each pixel of an image with the class it represents. Because we predict a class for every pixel in the image, this task is commonly referred to as dense prediction; it is classification at the pixel level. We use U-Net to perform semantic segmentation. The U-Net architecture consists of three sections: the contraction path, the bottleneck, and the expansion path. The contraction path is made of many contraction blocks; each block takes an input, applies two 3×3 convolution layers, and follows them with a 2×2 max pooling. The number of kernels (feature maps) doubles after each block so the architecture can learn complex structures effectively. The bottleneck, the bottommost layer, mediates between the contraction and expansion paths; it uses two 3×3 convolution layers followed by a 2×2 up-convolution layer.
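The contraction blocks and bottleneck just described can be sketched as follows. This is an illustrative Keras fragment, not our deployed model: the input size (a 128×128 grayscale slice), the three-block depth, and the starting filter count of 64 are assumptions chosen to show the pattern of two 3×3 convolutions, 2×2 max pooling, and filters doubling per block.

```python
from tensorflow import keras
from tensorflow.keras import layers

def contraction_block(x, filters):
    """Two 3x3 convolutions; returns the pre-pooling features (kept for the
    skip connection to the expansion path) and the 2x2 max-pooled output."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x, layers.MaxPooling2D(2)(x)

inputs = keras.Input(shape=(128, 128, 1))   # e.g. one grayscale CT slice
skip1, x = contraction_block(inputs, 64)    # 128x128 -> 64x64
skip2, x = contraction_block(x, 128)        # feature maps double each block
skip3, x = contraction_block(x, 256)        # 32x32 -> 16x16

# Bottleneck: two 3x3 convolutions followed by a 2x2 up-convolution
# that hands the features off to the expansion path.
x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)
x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(256, 2, strides=2, padding="same")(x)

model = keras.Model(inputs, x)
```

The saved `skip1`–`skip3` tensors are what the expansion path would concatenate back in, which is how U-Net recovers the spatial detail lost during pooling.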