Travis Med AI


Contributing Models

New models can be added to Travis Med AI by submitting a pull request to our models-repository in the following format:

model-title/
  manifest.json

At Travis Med AI, all models are hosted on Docker Hub as Docker images. Your manifest.json file contains all of the information needed to integrate your model into our system (e.g. the location of the Docker image, the display name, etc.). More information on how to structure a manifest.json file is below.

Manifest Properties

Your manifest should contain the following information:

| Variable | Type | Description | Required? |
| --- | --- | --- | --- |
| tag | String | The tag name of your Docker image repository. | |
| displayName | String | The name of your model that will be displayed in travis-med-ai. | |
| input | StudyType | TODO. See StudyType below. | |
| modality | Modality | TODO. See Modality below. | |
| inputType | ModelInputs | The image file format that your model accepts. See ModelInputs below. | |
| outputType | ModelOutputs | The output format of your model. See ModelOutputs below. | |
| hasImageOutput | boolean | Specifies whether your model outputs a masked image after finishing image processing. | |

Example Manifest

The following is an example of a completed manifest:

// manifest.json
{
  "tag": "tclarke104/ich-model:0.1",
  "displayName": "Intracranial Hemorrhage Detection",
  "input": "CT",
  "modality": "CT",
  "inputType": "DICOM",
  "output": "Class_Probabilities",
  "hasImageOutput": false
}
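
If you want to sanity-check a manifest before opening a pull request, a short script like the following can catch missing keys. This is only an illustrative sketch, not part of the Travis Med AI tooling, and the key list simply mirrors the example above:

# check_manifest.py -- illustrative sketch, not part of the Travis Med AI tooling
import json

# Key list mirrors the example manifest above.
EXPECTED_KEYS = {"tag", "displayName", "input", "modality",
                 "inputType", "output", "hasImageOutput"}

def check_manifest(path="manifest.json"):
    with open(path) as f:
        manifest = json.load(f)
    missing = EXPECTED_KEYS - manifest.keys()
    if missing:
        raise ValueError(f"manifest.json is missing keys: {sorted(missing)}")
    return manifest

if __name__ == "__main__":
    print(check_manifest())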

Enums

StudyType

TODO

| Enum Value | Description |
| --- | --- |
| CT | CT Scan |
| Abd_Xray | Abdominal Xray |
| Dicom | DICOM |
| Frontal_CXR | Frontal Chest Xray |
| Head_CT | Head CT Scan |
| Lateral_CXR | Lateral Chest Xray |
| MSK_Xray | Musculoskeletal Xray |

Modality

Describes the imaging modality that the model accepts.

| Enum Value | Description |
| --- | --- |
| CT | CT Scan |
| CR | Chest Xray |

ModelInputs

The file format that the model takes as an input.

| Enum Value | Description |
| --- | --- |
| DICOM | .dicom file format. |
| PNG | .png file format. |

ModelOutputs

The data format that should be expected as an output from the scan model.

| Enum Value | Description |
| --- | --- |
| Class_Probabilities | TODO |
| Mask | TODO |
| Study_Type | TODO |
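
For quick reference, the allowed values from the four tables above can be collected in one place. The sketch below is purely illustrative (these enums are not exported by any Travis Med AI package; the manifest itself just uses the plain string values):

# Illustrative reference only: mirrors the enum tables above.
from enum import Enum

class StudyType(Enum):
    CT = "CT"
    Abd_Xray = "Abd_Xray"
    Dicom = "Dicom"
    Frontal_CXR = "Frontal_CXR"
    Head_CT = "Head_CT"
    Lateral_CXR = "Lateral_CXR"
    MSK_Xray = "MSK_Xray"

class Modality(Enum):
    CT = "CT"  # CT scan
    CR = "CR"  # chest Xray

class ModelInputs(Enum):
    DICOM = "DICOM"  # .dicom files
    PNG = "PNG"      # .png files

class ModelOutputs(Enum):
    Class_Probabilities = "Class_Probabilities"
    Mask = "Mask"
    Study_Type = "Study_Type"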

Building a model

Building a model is simple! Because Docker packages all of your dependencies, converting your existing model requires just a few steps.

The Dockerfile

The Dockerfile is located at the root of the project and contains the image build commands.

  • The Dockerfile for a model can be organized into three sections:
  1. The FROM imports
    • This section must include the following line:
      FROM tclarke104/ai-model-base:0.1 as model
  2. Your custom build commands
    • This section is where you do any COPY or RUN commands that are required for building your project.
  3. The AI build commands
    • This section should not be edited and contains the following:
# DONT EDIT THIS SECTION
RUN pip install redis
COPY --from=model /opt/runner /opt
WORKDIR /opt
ADD . /opt/
CMD python runner.py
    • This installs redis, copies the required files from the base image, and adds your source code to the Docker image.

An example Dockerfile is below:

FROM tclarke104/ai-model-base:0.1 as model
FROM tensorflow/tensorflow:2.0.0-gpu-py3
# Install dependencies
RUN pip install pydicom scikit-image medaimodels
# DONT EDIT THIS SECTION
# add current directory to container
RUN pip install redis
COPY --from=model /opt/runner /opt
WORKDIR /opt
ADD . /opt/
CMD python runner.py

main.py

  • All models must have a main.py file in the root of the project.
  • main.py must define a function called evaluate_model that takes a single parameter of type List[str].
    • The parameter is a list of paths to the locations of the DICOMDIR directories on the filesystem.

Example main.py

from tensorflow.keras.models import load_model
from medaimodels import ModelOutput
import numpy as np


def evaluate(img):
    CATEGORIES = ["Abd_Xray", "Frontal_CXR", "Lateral_CXR", "MSK_Xray"]
    model = load_model('{path_to_saved_model}')
    scores = model.predict(img)
    output = [ModelOutput(display=CATEGORIES[np.argmax(score)]) for score in scores]
    return output


def evaluate_model(files):
    # calls custom preprocess function that loads images and does preprocessing
    preprocessed = preprocess(files)
    study_type = evaluate(preprocessed)
    return study_type
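
The preprocess function above is user-defined and not shown in the original example. The following is a minimal sketch of what it might look like, assuming pydicom and scikit-image (which the example Dockerfile installs); the target shape and single-instance handling are placeholders you would adapt to your model:

import os
import numpy as np
import pydicom
from skimage.transform import resize


def preprocess(files, target_shape=(224, 224)):
    """Load one image per DICOMDIR directory and resize it (illustrative sketch)."""
    images = []
    for study_dir in files:
        # collect every instance under the study directory, skipping the DICOMDIR index file
        paths = []
        for root, _, names in os.walk(study_dir):
            paths.extend(os.path.join(root, n) for n in names if n != "DICOMDIR")
        ds = pydicom.dcmread(paths[0])  # first instance only, for simplicity
        pixels = resize(ds.pixel_array.astype("float32"), target_shape)
        images.append(pixels)
    # stack into a batch and add a channel dimension for the model
    return np.expand_dims(np.stack(images), axis=-1)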

I/O from containers

  • The base docker image tclarke104/ai-model-base manages running the model and communication with the AI runner
  • Inputs:
    • Inputs to the containers are a list of filepaths to DICOMDIR directories
    • The DICOMDIR format consists of a DICOMDIR file at the root and a directory containing the dicoms from a study
      • This was chosen because it is a standardized structure and consistent between multi instance and single instance studies
  • Outputs:
    • The output of a container is a list of objects of the class ModelOutput
    • ModelOuput can be found in the medaimodels pip package
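
For illustration, an input study directory might look like the following (the directory and instance names here are placeholders; only the DICOMDIR file name is fixed by the standard):

study-0001/
  DICOMDIR
  DICOM/
    IMG0001
    IMG0002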