Assignment 4: SimpleITK Notebooks - Segmentation
Methods In Medical Image Analysis (BioE 2630 : 16-725) - Spring 2020

This SimpleITK Notebooks - Segmentation assignment by John Galeotti, © 2020 Carnegie Mellon University, is licensed under a Creative Commons Attribution 3.0 Unported License. Permissions beyond the scope of this license may be available by sending email to itk ATgaleotti.net. (The SimpleITK Notebooks referenced are separately © by their own authors, etc.)

Instructor
John Galeotti
galeotti+mimia ATcs.cmu.edu
Usually meets with students after class
TA

25 points total

Due Date: Finish adding and committing your submission to svn by midnight (help stops at 5pm) on Friday night, March 6th.

Create a submission directory inside your svn repository, to hold everything you submit for this homework:

cd c:\MIMIA\{Your_SVN_User_Name}
mkdir hw_Seg_nb
svn add hw_Seg_nb
svn ci hw_Seg_nb -m "Setting up module for hw_Seg_nb"

(Windows users can install the free and open source PDFCreator to print to pdf. Macs can natively print to pdf files. Linux can natively print to either ps or pdf, and ps can easily be converted to pdf using ps2pdf or pstopdf from the command line.)

Prep (0 points)

You should have previously downloaded a set of SimpleITK notebooks by using git as instructed in assignment 2.4. In this assignment, we will copy some of these notebooks (the "3" series for segmentation) to a new directory so we can modify them. We will also copy some helper functions. (You should have also copied some images to a separate Data directory during hw3.) You should now open a command prompt and begin as follows (adjust directory names and forward/backward slashes as necessary for your computer):

cd \mimia\
mkdir hw_Seg_nb
mkdir hw_Seg_nb\Output
cp SimpleITK-Notebooks\Python\*.py .\hw_Seg_nb
cp SimpleITK-Notebooks\Utilities\*.py .\hw_Seg_nb
cp SimpleITK-Notebooks\Python\3*.ipynb .\hw_Seg_nb
cd hw_Seg_nb

Finally, you can launch a web-based Python interface called Jupyter. You should do so from the command line, making sure you are already in the hw_Seg_nb directory (as we just did above):

jupyter-notebook

Problem 1  (15 points)

After you launch Jupyter, find and click on the notebook named 300_Segmentation_Overview.ipynb in the web interface.

Work through the first part of the notebook, but stop when you get to the Region Growing Segmentation. You are now ready to proceed with the following changes.

If you are not familiar with Jupyter notebooks, a notebook is an interactive Python environment. It divides your Python code into cells and runs each cell separately, a structure that offers great flexibility for editing code. For example, when you want to add a cell, you can click Insert at the top and choose to add it above or below the selected cell. If you prefer keyboard shortcuts, try pressing a or b and see what happens.

Now try inserting a new cell in the notebook after the segmentation result produced by Otsu Thresholding. Inside the new cell, write the following lines to save both the scanned image and the segmentation to disk (be sure to switch \ to / on Unix/Mac):

sitk.WriteImage(img_T1,"Output\img_T1.nii")
sitk.WriteImage(seg,"Output\img_T1_seg.nii")

Since the notebook is running under mimia\hw_Seg_nb, running the two lines above will save the images into mimia\hw_Seg_nb\Output.
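If you want to sanity-check from within the notebook that the files landed where you expect, an optional cell like the following works (the relative path here is just an assumption that matches the setup above):

import os
print(os.path.abspath("Output"))   # should end in ...mimia\hw_Seg_nb\Output
print(os.listdir("Output"))        # should list img_T1.nii and img_T1_seg.nii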

To view the stored image and segmentation, we use ITK-Snap, which you should have installed as part of the previous homework. Open ITK-Snap and click Open Image … in the bottom right corner. (Note: The first time Mac users open ITK-Snap, they should right-click on the application and then click Open, so that OS X will allow them to open ITK-Snap.app from an "unidentified developer.") In the pop-up, click Browse … to find mimia/hw_Seg_nb/Output/img_T1.nii. The software should automatically identify the file format (i.e. NIfTI). Click Next >. If you see a warning regarding loss of precision, feel free to ignore it. Click Finish.

Now you should see img_T1 sliced along three different axes. Play with the blue crosshairs and the scroll bar and see how each view changes. Notice how the Cursor Inspector panel on the left shows the 3D coordinates of the blue crosshairs, as well as the pixel value at that location. You can scroll through the slices under the cursor using the scroll wheel, and you can zoom using either the Zoom to Fit button or by holding the right mouse button while dragging over an image. (Mac users can additionally use trackpad gestures to scroll through slices and pinch to zoom.) There are two useful buttons attached to each panel: one expands the view to the entire window and the other takes a snapshot. Finally, there is one very important warning about 3D coordinates in ITK-Snap:

WARNING: ITK-Snap was made for "clinicians" and so starts numbering at 1. However, ITK and SimpleITK were made for programmers and so start numbering at 0. Accordingly, ITK pixel coordinate (3, 25, 40) would be the same as ITK-Snap coordinate (4, 26, 41).
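For example, a quick way to convert from a SimpleITK index to the coordinate that ITK-Snap will display (the index below is just an illustration, not one you are required to use):

itk_index = (3, 25, 40)                       # 0-based index, as used by ITK/SimpleITK
snap_coord = tuple(i + 1 for i in itk_index)  # 1-based coordinate, as displayed by ITK-Snap
print(snap_coord)                             # prints (4, 26, 41)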

To apply the segmentation onto the scanned image, we now open the segmentation image we stored (i.e. mimia/hw_Seg_nb/Output/img_T1_seg.nii). Click Segmentation > Open Segmentation … (in the menu bar) and load the segmentation. Once the segmentation is loaded, we can click update in the bottom left corner to render a 3D mesh. Once it finishes, we should be able to see a 3D head rendered inside the bottom left panel. Now take a screenshot of ITK-SNAP when you finish rendering the 3D segmentation. Save the screenshot as a png image into MIMIA\{Your_SVN_User_Name}\hw_Seg_nb\snap1.png, then add and commit it to svn.

Now, let’s modify the notebook to try out edge-preserving smoothing and see how it may change the segmentation result. Let’s insert a cell below Otsu Thresholding (below our new file-writing cell) and type “### (Edge-preserving) Smoothing + Thresholding”. Now change the cell type from Code to Markdown. Run this cell and you will see it become a header. Now insert yet another cell (for code) beneath this new header. Use this new code cell for the rest of Problem 1.

SimpleITK provides both a procedural interface (with multiple arguments to a single function call) and an object-oriented, class-based interface. You can use either style, or mix and match between them. We will introduce both now as we demonstrate how to do Gaussian smoothing. Try this out in your new code cell:

img_T1_gf = sitk.SmoothingRecursiveGaussian(img_T1, 1, True)

The above is the procedural style; one can also do the same thing with the class-based interface:

gf = sitk.SmoothingRecursiveGaussianImageFilter()
gf.SetSigma(1)
gf.SetNormalizeAcrossScale(True)
img_T1_gf = gf.Execute(img_T1)

Now we visualize the result of Gaussian smoothing plus basic thresholding (basic thresholding can be done directly in Python, as shown below). Also, note how we then type-cast and rescale the floating-point smoothed image to 8-bit unsigned int for display:

print("Smoothed Image is Type ", img_T1_gf.GetPixelIDTypeAsString())
seg_gf = img_T1_gf>250
img_T1_gf_255 = sitk.Cast(sitk.RescaleIntensity(img_T1_gf), sitk.sitkUInt8)
print("Cast and Rescaled Smoothed Image is Type ", img_T1_gf_255.GetPixelIDTypeAsString())
myshow(sitk.LabelOverlay(img_T1_gf_255, seg_gf), "Smoothing Recursive Gaussian + Basic Thresholding")

Once the above seems to be working for you, you are ready for the main part of this assignment. You need to modify the newly-inserted block of code above to instead do edge-preserving smoothing followed by Otsu thresholding. You will need to consult the lecture notes, the SimpleITK Doxygen, and possibly other documentation sources to do this. (Hint: Bilateral filtering may take too long to execute in 3D.) Then, you also need to save the resulting segmentation as a new image file named mimia/hw_Seg_nb/Output/img_T1_eps_seg.nii and load this segmentation into ITK-Snap. Take a new screenshot of ITK-SNAP when you finish rendering the new 3D segmentation. Save the screenshot as a png image into MIMIA\{Your_SVN_User_Name}\hw_Seg_nb\snap2.png, then add and commit it to svn. Now compare the new segmentation to the original Otsu segmentation. Snap's file menu allows you to "add another image", which provides several nice ways to compare segmentations.
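If you are unsure how to structure the new cell, the sketch below shows one possible shape, assuming curvature anisotropic diffusion as the edge-preserving filter; the filter choice and every parameter value are illustrative assumptions, not the required answer, so you should still pick and justify your own settings:

img_T1_float = sitk.Cast(img_T1, sitk.sitkFloat32)      # diffusion filters expect a floating-point image
eps = sitk.CurvatureAnisotropicDiffusionImageFilter()   # one of several edge-preserving options
eps.SetTimeStep(0.0625)                                 # illustrative parameters only
eps.SetNumberOfIterations(5)
eps.SetConductanceParameter(3.0)
img_T1_eps = eps.Execute(img_T1_float)

otsu = sitk.OtsuThresholdImageFilter()                  # same Otsu setup used earlier in the notebook
otsu.SetInsideValue(0)
otsu.SetOutsideValue(1)
seg_eps = otsu.Execute(img_T1_eps)

img_T1_eps_255 = sitk.Cast(sitk.RescaleIntensity(img_T1_eps), sitk.sitkUInt8)
myshow(sitk.LabelOverlay(img_T1_eps_255, seg_eps), "Edge-Preserving Smoothing + Otsu Thresholding")
sitk.WriteImage(seg_eps, "Output\img_T1_eps_seg.nii")   # switch \ to / on Unix/Mac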

When finished, continue through the Region Growing Segmentation part of the notebook, but stop just before Fast Marching Segmentation. (Skip the rest of the notebook.)

Now you are finished with this notebook. "Print"/save the entire completed portion (up until Fast Marching) of the notebook as a pdf file named MIMIA\{Your_SVN_User_Name}\hw_Seg_nb\part1.pdf. Please make sure that the entire notebook, including all code, is visible in the pdf (long warning messages don't matter). You may need to tell your web browser to print the notebook at a reduced scale, e.g. 70%, to fit everything. Add and commit to svn.

Problem 2  (4 points)

Open and work through all of 30_Segmentation_Region_Growing.ipynb. After completing this notebook, "print"/save the entire notebook as a pdf file named MIMIA\{Your_SVN_User_Name}\hw_Seg_nb\part2.pdf. Please make sure that the entire notebook, including all code, is visible in the pdf (long warning messages don't matter). You may need to tell your web browser to print the notebook at a reduced scale, e.g. 70%, to fit everything. Add and commit to svn.

Problem 3  (6 points)

Create a text file named MIMIA\{Your_SVN_User_Name}\hw_Seg_nb\part3.txt (plain text, not a Word document). It should contain a short description (100 - 200 words) of the differences between all the threshold-based algorithms you just used, including your observations as to their strengths, weaknesses, and best applications for use. Finish with a paragraph about the effect (big or small) that edge-preserving smoothing vs. Gaussian blurring vs. no preprocessing at all has on Otsu thresholding of this T1 brain MRI. Add and commit to svn.