Contributing Models
New models can be added to travis med ai by submitting a pull request to our models-repository in the following format:
```
model-title/manifest.json
```
At travis med ai, all models are hosted on Docker Hub as docker images. Your manifest.json file contains all pertinent information for integrating your model into our system (e.g. the location of the docker image, the display name). More information on how to structure a manifest.json file is below.
Manifest Properties
Your manifest should contain the following information:
Variable | Type | Description | Required? |
---|---|---|---|
tag | String | The tag name of your docker image repository. | |
displayName | String | The name of your model that will be displayed in travis-med-ai. | |
input | StudyType | The type of study your model accepts. See StudyType below. | |
modality | Modality | The imaging modality your model accepts. See Modality below. | |
inputType | ModelInputs | The image file format that your model accepts. See ModelInputs below. | |
outputType | ModelOutputs | The output format of your model. See ModelOutputs below. | |
hasImageOutput | boolean | Specifies whether your model outputs a masked image after finishing image processing. | |
Example Manifest
The following is an example of a completed manifest:
```
// manifest.json
{
  "tag": "tclarke104/ich-model:0.1",
  "displayName": "Intracranial Hemorrhage Detection",
  "input": "CT",
  "modality": "CT",
  "inputType": "DICOM",
  "output": "Class_Probabilities",
  "hasImageOutput": false
}
```
Enums
StudyType
Describes the type of study that the model accepts as input.
Enum Value | Description |
---|---|
CT | CT Scan |
Abd_Xray | Abdominal Xray |
Dicom | DICOM |
Frontal_CXR | Frontal Chest Xray |
Head_CT | Head CT Scan |
Lateral_CXR | Lateral Chest Xray |
MSK_Xray | Musculoskeletal Xray |
Modality
Describes the imaging modality that the model accepts.
Enum Value | Description |
---|---|
CT | CT Scan |
CR | Chest Xray |
ModelInputs
The file format that the model takes as an input.
Enum Value | Description |
---|---|
DICOM | .dicom file format. |
PNG | .png file format. |
ModelOutputs
The data format that your model produces as output.
Enum Value | Description |
---|---|
Class_Probabilities | A probability for each of the model's output classes. |
Mask | A mask image over the input image (see hasImageOutput). |
Study_Type | A predicted study type (see StudyType above). |
Building a model
Building a model is simple! Because docker packages all of your dependencies, converting your existing model requires just a few steps.
The Dockerfile
The Dockerfile is located at the root of the project and contains the image build commands.
- The Dockerfile for a model can be organized into 3 sections.
  - The FROM imports
    - This section must include the following line:

      ```dockerfile
      FROM tclarke104/ai-model-base:0.1 as model
      ```

  - Your custom build commands
    - This section is where you add any COPY or RUN commands that are required for building your project.
  - The AI build commands
    - This section should not be edited and contains the following:

      ```dockerfile
      # DONT EDIT THIS SECTION
      RUN pip install redis
      COPY --from=model /opt/runner /opt
      WORKDIR /opt
      ADD . /opt/
      CMD python runner.py
      ```

    - This installs redis, copies the required files from the base image, and adds your source code to the docker image.
An example Dockerfile is below:
```dockerfile
FROM tclarke104/ai-model-base:0.1 as model
FROM tensorflow/tensorflow:2.0.0-gpu-py3

# Install dependencies
RUN pip install pydicom scikit-image medaimodels

# DONT EDIT THIS SECTION
# add current directory to container
RUN pip install redis
COPY --from=model /opt/runner /opt
WORKDIR /opt
ADD . /opt/
CMD python runner.py
```
main.py
- All models must have a main.py file in the root of the project.
- main.py must have a function called `evaluate_model` that takes a single parameter of type List[str].
  - The parameter is a list of paths to the locations of the DICOMDIR directories on the filesystem.
Example main.py
```python
from tensorflow.keras.models import load_model
from medaimodels import ModelOutput
import numpy as np


def evaluate(img):
    CATEGORIES = ["Abd_Xray", "Frontal_CXR", "Lateral_CXR", "MSK_Xray"]
    model = load_model('{path_to_saved_model}')
    scores = model.predict(img)
    output = [ModelOutput(display=CATEGORIES[np.argmax(score)]) for score in scores]
    return output


def evaluate_model(files):
    # calls custom preprocess function that loads images and does preprocessing
    preprocessed = preprocess(files)
    study_type = evaluate(preprocessed)
    return study_type
```
I/O from containers
- The base docker image `tclarke104/ai-model-base` manages running the model and communication with the AI runner.
- Inputs:
  - Inputs to the containers are a list of filepaths to DICOMDIR directories.
  - The DICOMDIR format consists of a DICOMDIR file at the root and a directory containing the dicoms from a study.
  - This was chosen because it is a standardized structure and is consistent between multi-instance and single-instance studies.
- Outputs:
  - The output of a container is a list of objects of the class ModelOutput (see the sketch after this list).
  - ModelOutput can be found in the medaimodels pip package.
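
As a rough illustration of this contract, here is a minimal sketch of an `evaluate_model` that walks each DICOMDIR directory with pydicom (installed in the example Dockerfile above) and returns one ModelOutput per study. The `classify` helper is hypothetical and stands in for your model's inference code; everything else follows the I/O description above.

```python
# Minimal sketch of the container I/O contract described above.
# Assumes pydicom (installed in the example Dockerfile) and the
# ModelOutput(display=...) constructor used in the example main.py.
import os
from typing import List

import pydicom
from medaimodels import ModelOutput


def load_study(dicomdir_path):
    """Read every DICOM instance under a DICOMDIR directory."""
    datasets = []
    for root, _, files in os.walk(dicomdir_path):
        for name in files:
            if name.upper() == "DICOMDIR":
                continue  # skip the index file at the root of the study
            datasets.append(pydicom.dcmread(os.path.join(root, name)))
    return datasets


def classify(study):
    """Placeholder for your model's inference code (hypothetical)."""
    return "Frontal_CXR"


def evaluate_model(files: List[str]):
    outputs = []
    for path in files:
        study = load_study(path)          # one DICOMDIR directory per study
        label = classify(study)           # replace with your model's prediction
        outputs.append(ModelOutput(display=label))
    return outputs
```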