Sunday, April 15, 2012
Thursday, February 23, 2012
One problem that has arisen is that the VT algorithm doesn't include the internal iliacs since they are of no real significance and are variably patent. Well, part of the problem with loosening the bone tracking criteria is that now in approximately 10% of scans, the internal iliacs are included in the initial bone segmentation. Whether or not this turns out to be an actual issue or simply a nuisance that can be dealt with later is still to be determined.
At this point, the sub-segmentation of the ribs and spine is complete, which leaves just the bony pelvis and femurs to go. Interestingly, the previous versions of the software sub-segmented only the pelvis and femurs and relied primarily on anatomic models and the anatomic match points extracted from the wire-frame analysis.
I don't think this pre-production version of the software will make any use of anatomic models since I find them difficult to work with and time-consuming to fit properly.
Well, the next step is the sacral segmentation and then I will do the femurs and finally the iliacs. Looks like things are going much more smoothly than in previous versions of the software... mostly because I have learned to avoid some of the simple mistakes that used to consume hours of debugging time.
Monday, January 23, 2012
The first is the setting of very low contrast in the vessel. These are not as big a problem as you might think. In fact, the algorithm does a fairly good job of successfully tracking the vessels as long as there is some amount of contrast present.
The second problematic class of cases is when there are endovascular stents in place. Again, this is not as much of an issue as you might think with the cases that I am working with. At some point I will have to go back and write some dedicated code for analyzing the stents, but for now the salvage algorithms seem to pick up the pieces fairly well.
The last major class of scans is the surgical aorta where there is something like an aortobifem graft. Those grafts tend to arise almost perpendicular to the abdominal aorta where they are sewn on. It is that abrupt change in direction that has caused the largest number of problems for my algorithm, since some amount of smoothness in the vessel course and curvature is implicit in the way that my tracking algorithms function. At this point I am forced to let the algorithm finish tracking whatever vessel it has managed to track, and then go back, analyze the results, and find where the rest of the tracking should start. As you can imagine, this is neither a philosophically satisfying nor a technically simple task. Fortunately, this gives me a chance to fine-tune my skills at probing vessels for small branches, since one of the next steps is going to be isolating the visceral artery ostia.
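Just to sketch what that "find where the rest of the tracking should start" step looks like in my head, here is a rough stand-in; the function names, the contrast floor, and the crude cube-shaped probe are all placeholders for illustration rather than anything in the actual code:

#include <functional>
#include <vector>

struct Point3 { double x, y, z; };

// Hypothetical sketch: walk along whatever centerline was successfully tracked,
// probe a small neighborhood around each point, and collect contrast-density
// locations that are not already part of the tracked lumen.  Those become
// candidate seeds for a second tracking pass (e.g. the limbs of a graft that
// take off nearly perpendicular to the aorta).
std::vector<Point3> findRestartSeeds(
    const std::vector<Point3>& trackedPath,
    const std::function<double(const Point3&)>& sampleHU,      // CT density lookup
    const std::function<bool(const Point3&)>& inTrackedLumen,  // already-claimed voxels
    double contrastFloorHU = 150.0,
    double probeRadiusMM = 5.0)
{
    std::vector<Point3> seeds;
    for (const Point3& c : trackedPath) {
        // Coarse cube of offsets around each centerline point.
        for (double dx = -probeRadiusMM; dx <= probeRadiusMM; dx += probeRadiusMM)
            for (double dy = -probeRadiusMM; dy <= probeRadiusMM; dy += probeRadiusMM)
                for (double dz = -probeRadiusMM; dz <= probeRadiusMM; dz += probeRadiusMM) {
                    Point3 p{c.x + dx, c.y + dy, c.z + dz};
                    if (!inTrackedLumen(p) && sampleHU(p) >= contrastFloorHU)
                        seeds.push_back(p);
                }
    }
    return seeds;
}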
Well, enough talk... time to get some coding done.
Thursday, January 05, 2012
It turns out that after all that work, I was still getting very unreliable results. The fundamental issue is that I don't have the ability to write a hunk of code that, from a logic standpoint, can account for the myriad of possible configurations that arise in nature. As I thought about the problem on a more abstract level, I started to conceptualize it as one of bipolar attraction and repulsion. If a potential point is very close to the string of points that represents the successfully tracked iliac artery, then that point is very unlikely to represent a good starting point. This, of course, is exactly the calculation for the potential relative to a line of charge. So, I did exactly that... I treated each point of the opposite iliac artery as a point charge and calculated the field for each of the potential points. The assumption is that the point furthest from the line of charge will have the lowest potential and also the highest chance of being a successful starting point.
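For what it's worth, the core of the calculation is about as simple as it sounds. This is only a stripped-down sketch (the struct and function names are made up for illustration), treating the already-tracked iliac as the line of charge and summing 1/r contributions at each candidate point:

#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

// Sketch: treat each point of the already-tracked iliac as a unit point charge
// and sum 1/r over all of them.  The candidate with the lowest total
// "potential" is the one farthest from the tracked vessel, and therefore the
// best guess for the opposite iliac's starting point.
double potentialAt(const Point3& p, const std::vector<Point3>& trackedIliac)
{
    double potential = 0.0;
    for (const Point3& q : trackedIliac) {
        const double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
        const double r = std::sqrt(dx * dx + dy * dy + dz * dz);
        potential += 1.0 / (r + 1e-6);   // small epsilon avoids divide-by-zero
    }
    return potential;
}

// Returns the index of the candidate with the lowest potential
// (assumes candidates is non-empty).
std::size_t bestStartingPoint(const std::vector<Point3>& candidates,
                              const std::vector<Point3>& trackedIliac)
{
    std::size_t best = 0;
    double lowest = potentialAt(candidates[0], trackedIliac);
    for (std::size_t i = 1; i < candidates.size(); ++i) {
        const double v = potentialAt(candidates[i], trackedIliac);
        if (v < lowest) { lowest = v; best = i; }
    }
    return best;
}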
It took a couple of days of debugging to get the algorithm to do what it is supposed to do, but at the end of the day, I finally got it to give a correct result on the test case. Unfortunately, only a handful of studies end up using this portion of the algorithm for these special cases, which makes robustness testing very difficult. At the very least I will be able to try it out on the 4 or so other cases I have that run into similar issues.
Tuesday, December 20, 2011
Once I simplified the selection down to the bare minimum criteria, the code seems to be performing reasonably well. The next step is to have the system isolate when one or both of the iliac trackings have failed. It turns out that it is very uncommon for it to fail bilaterally; typically the issue is unilateral and related to extreme asymmetric iliac size, severe stenosis, or a high bifurcation. I think the easiest way to go about this will be to compare the two sides in terms of how far they have tracked and use that as an internal consistency check. I have noticed that there are a couple of studies where the scan acquisition outruns the contrast bolus, and in those situations there is typically symmetric decreased flow bilaterally.
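As a rough sketch of that internal consistency check (the ratio threshold and the length measure here are placeholders, not necessarily what the final code will use):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>

struct Point3 { double x, y, z; };

// Total length of a tracked centerline (sum of segment lengths).
double pathLength(const std::vector<Point3>& path)
{
    double len = 0.0;
    for (std::size_t i = 1; i < path.size(); ++i) {
        const double dx = path[i].x - path[i - 1].x;
        const double dy = path[i].y - path[i - 1].y;
        const double dz = path[i].z - path[i - 1].z;
        len += std::sqrt(dx * dx + dy * dy + dz * dz);
    }
    return len;
}

// Hypothetical check: if one iliac tracked much farther than the other, flag
// the short side as a likely unilateral failure; roughly symmetric short
// trackings are more likely an outrun contrast bolus than a tracking failure.
std::string checkIliacSymmetry(const std::vector<Point3>& leftPath,
                               const std::vector<Point3>& rightPath,
                               double ratioThreshold = 0.5)
{
    const double left = pathLength(leftPath);
    const double right = pathLength(rightPath);
    const double longer = std::max(left, right);
    if (longer <= 0.0)
        return "both trackings failed";
    if (std::min(left, right) / longer < ratioThreshold)
        return (left < right) ? "suspect left iliac failure"
                              : "suspect right iliac failure";
    return "tracking lengths are consistent";
}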
At any rate, since we are talking about around 4% of the total runs failing to complete, there needs to be some kind of salvage algorithm. Well, that is an issue that I can address once I figure out how to accurately gauge failure. Which reminds me, I still have to go back and write the salvage algorithm for the initial aortic tracking.
Saturday, December 17, 2011
I had never intended to start the vascular segmentation until after the bone segmentation algorithm had already run, but I ran into problems with the bone propagation algorithm. Then, I was only going to track the aorta and leave the iliacs until later, but there ended up being a handful (<3) of studies in the set where the iliacs became an issue, and so now I have ended up performing the entire vascular tracking out of necessity rather than intentionally. That being said, the current vascular tracking is leaps and bounds ahead of the previous (primarily CPU based) code.
For this code, I had to favor sensitivity over specificity because even minor missed points can send the bone segmentation out of whack. At least now I have an accurate center-line extraction, so once the bone segmentation is completed, it will be straightforward to go back and fine-tune the vascular lumen segmentation.
Friday, December 16, 2011
The next issue is that in the process of debugging the code, I uncovered numerous small issues that were contributing to the overall problem. Now I have to go back and piece it all back together to make sure that nothing got broken in the process. Specifically, I had to go through and clean up how the statistical data and results are stored. The aortic tracking algorithm is running much more smoothly now that the code has been streamlined.
Sunday, December 04, 2011
The next major challenge, now that I have an algorithm which can successfully perform the task, is to find ways to decrease the run time. Each iteration requires about 200 msec to run, but I think I can get it down to about half of that.
The other major issue that I have to contend with next is determining how to proceed from here. The vascular and bone segmentation have reached a level of robustness where it is time to complete the job with the sub-segmentation and move forward to the individual organ segmentation. Before I move on to the next stage of the algorithm, which could take a significant amount of time, I think I should publish the results of what I have completed so far. I don't know of any algorithms that currently exist which can track the kind of low-contrast vessels that this algorithm is successful with.
Thursday, December 01, 2011
When I left off last night, I was talking about how to use the concept of attraction to help in tracking iliac arteries. In general, the parameters that are available to be modified in my algorithm are the current direction vector (as determined by the longest continuous directional line path), the previous direction vector, and the overall bias vector (a vector that points in the general direction that one expects the vessel to travel).
As the difference between the current direction vector and the bias vector increases, the contribution of the bias vector is proportionately increased. The contribution of the previous vector is meant to buffer rapid changes in directionality. Simple combinations of these three vectors have at one point or another successfully tracked every iliac vessel in my data set. When dealing with situations like this in the past, my method has been to build in adaptive logic; for example, rather than setting a fixed range of values that may be vessel, the algorithm goes to the last known point of vessel lumen, samples the average density of the contrast at that level, and produces an average value around which a range is created. This process allows the algorithm to accept very high or low density points only in situations where the scan and contrast quality require it. The problem with the iliac vessel course is that there is not really a good predictive way of determining which parameters to use.
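To put some pseudo-numbers on that, here is a simplified sketch of the kind of blending and adaptive density windowing I am describing; the weighting rule, the fixed smoothing term, and the 150 HU half-width are placeholders for illustration, not the values the real code uses:

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 normalize(const Vec3& v)
{
    const double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return (n > 0.0) ? Vec3{v.x / n, v.y / n, v.z / n} : Vec3{0.0, 0.0, 0.0};
}

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sketch: blend the current, previous, and bias direction vectors.  The bias
// weight grows as the current direction diverges from the bias direction, and
// the previous direction acts as a buffer against abrupt changes in course.
Vec3 nextDirection(const Vec3& current, const Vec3& previous, const Vec3& bias)
{
    const Vec3 c = normalize(current), p = normalize(previous), b = normalize(bias);
    const double divergence = 0.5 * (1.0 - dot(c, b)); // 0 = aligned, 1 = opposite
    const double wBias = divergence;   // more divergence -> more pull toward the bias
    const double wPrev = 0.25;         // fixed smoothing term (placeholder)
    const double wCurr = 1.0;
    const Vec3 blended{wCurr * c.x + wPrev * p.x + wBias * b.x,
                       wCurr * c.y + wPrev * p.y + wBias * b.y,
                       wCurr * c.z + wPrev * p.z + wBias * b.z};
    return normalize(blended);
}

// Sketch of the adaptive density logic: sample the mean lumen density at the
// last known point of vessel and accept new points only within a band around it.
struct DensityWindow { double lowHU, highHU; };

DensityWindow adaptiveWindow(double meanLumenHU, double halfWidthHU = 150.0)
{
    return DensityWindow{meanLumenHU - halfWidthHU, meanLumenHU + halfWidthHU};
}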
Interestingly, in previous versions of the software, I used an algorithm which did essentially what I am trying to achieve now, which is to perform the tracking using a set of varying parameters and determine which ones result in a vessel end point close to the expected location. The major problem with the previous approach was that it was very time consuming. The current algorithm runs in the 1-2 second range and achieves successful tracking in > 90% of vessels, which means that the cost per additional vessel tracked will be very high. One thought is that, since the algorithm is already biased so heavily towards finding the attractor, a validation step should be inserted after the initial iliac tracking algorithm to see if the endpoint and path are reasonable. If not, the full algorithm can be attempted, which evaluates a larger (though still restricted) portion of the parametric space.
This kind of logic check is smattered throughout the overall code and serves to reduce the overall run time by excluding computations that are unnecessary and dedicating additional resources to situations where the result is too complex for the simple algorithm.
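As a skeleton, the validate-then-escalate flow would look something like this; the function hooks, the parameter struct, and the "reasonableness" test are all stand-ins for the real pieces:

#include <functional>
#include <vector>

struct Point3 { double x, y, z; };
struct TrackingParams { double biasWeight; double densityHalfWidthHU; };

using TrackFn    = std::function<std::vector<Point3>(const TrackingParams&)>;
using ValidateFn = std::function<bool(const std::vector<Point3>&)>;

// Sketch: run the fast default tracking first; only if the result fails a
// cheap sanity check do we pay for a sweep over a restricted parameter grid.
std::vector<Point3> trackIliacWithFallback(const TrackFn& track,
                                           const ValidateFn& reasonable,
                                           const TrackingParams& defaults,
                                           const std::vector<TrackingParams>& fallbackGrid)
{
    std::vector<Point3> path = track(defaults);
    if (reasonable(path))
        return path;                       // fast path, no extra cost

    for (const TrackingParams& p : fallbackGrid) {
        std::vector<Point3> retry = track(p);
        if (reasonable(retry))
            return retry;                  // first acceptable retry wins
    }
    return path;                           // nothing better found; flag upstream
}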
Alright, I have to go to the DICOM standards meeting in the morning and so I will call it a night.
Wednesday, November 30, 2011
I know that it sounds cliche, but sometimes the most important part of working through the solution to a problem is to stop thinking about the problem in the context that is familiar to you. I have been working for the past couple of weeks on expanding the scope of my aortoiliac tracking algorithms to account for the various morphologies that the bifurcation and iliacs can take. The fundamental problem is that you only have a general idea of where the bifurcation happens and where the iliacs are going. I was walking around the RSNA vendor area and saw a couple of systems that supposedly do centerline extraction for you, but the one unifying aspect of all of those studies is that they are perfect angiograms in normal patients. It is my opinion that there is little benefit in being able to segment a normal study.
Every time I write an algorithm, I end up with results that are imperfect. Specifically, if I take the fundamental algorithm and boil it down to a handful of parameters that control the functionality, the result is that the segmentation will never work for all possible morphologies and variations that can occur in clinical practice. Every time the parameters are tweaked, the result across the 128 or so studies that I am training against begins to vary to some degree. I judge the robustness of an algorithm in terms of whether or not this variability prevents successful tracking.
In terms of optimization routines, the basic issue is whether or not the changes in parameters alter the parametric space in such a way that the desired extremum (assumed to be the global extremum) can no longer be reached. In my opinion, a good algorithm is one that successfully tracks >90% of the iliac vessels that it attempts.
It has been creeping up in the back of my mind over the past couple of months that what I am actually dealing with is the sensitivity to initial conditions encountered in non-linear dynamics. Minor alterations in the parameters can send the algorithm into undesired areas like tracking the pelvis as a vessel or abruptly stopping in the middle of a vessel. Since I have to assume that any and all possible criteria that I create for the tracking algorithm will be violated, I am left with a seemingly intractable problem. More specifically, there is no one set of algorithms or parameters that can linearly track an arbitrary vessel with variable path and contrast opacification. More and more, though, I have come to realize that what I should actually be doing is searching for the attractors that underlie the seemingly chaotic path that the algorithm can take.
Though I had not been thinking of it in these terms, the aortic tracking algorithm is already such a means of finding the chaotic attractor in the parametric space that contains the path, density, and continuity of the aorta. Long ago I accepted that no matter how good my algorithm is, there is always a chance that the aortic tracking will go astray. At the time, I was in a philosophical mindset somewhere between game theory and swarm logic: I created millions of independent elements that were allowed to progress, and then whittled down the results until some cluster was reached that had a high probability of being correct (i.e. within the aorta). At that point, rather than trying to select one, I use some basic statistics to find the underlying common average path. It turns out that if you create enough elements in your swarm and have reasonably good selection criteria, it is possible to track nearly any aorta with even a trace amount of contrast (>100 HU absolute density, which is really only about 25 HU above unenhanced blood).
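A boiled-down version of that swarm idea looks something like the following; the density lookup is passed in as a stand-in for the actual volume sampling, and the jitter, element count, and density floor are chosen purely for illustration:

#include <cstddef>
#include <functional>
#include <random>
#include <vector>

struct Point3 { double x, y, z; };

// Sketch of the swarm logic: launch many elements from a seed point, let each
// one wander slightly as it steps in the expected direction, drop the ones
// that fall below the density floor, and take the mean of the survivors at
// each step as the consensus aortic path.
std::vector<Point3> swarmTrack(const Point3& seed, const Point3& stepDir,
                               const std::function<double(const Point3&)>& sampleHU,
                               int nElements, int nSteps, double densityFloorHU)
{
    std::mt19937 rng(42);
    std::normal_distribution<double> jitter(0.0, 0.5);   // mm-scale wobble (placeholder)

    std::vector<Point3> elements(nElements, seed);
    std::vector<Point3> consensusPath;

    for (int s = 0; s < nSteps; ++s) {
        std::vector<Point3> survivors;
        for (const Point3& e : elements) {
            const Point3 next{e.x + stepDir.x + jitter(rng),
                              e.y + stepDir.y + jitter(rng),
                              e.z + stepDir.z + jitter(rng)};
            if (sampleHU(next) >= densityFloorHU)
                survivors.push_back(next);
        }
        if (survivors.empty())
            break;                                       // lost the vessel entirely

        Point3 mean{0.0, 0.0, 0.0};
        for (const Point3& p : survivors) {
            mean.x += p.x; mean.y += p.y; mean.z += p.z;
        }
        const double n = static_cast<double>(survivors.size());
        mean.x /= n; mean.y /= n; mean.z /= n;
        consensusPath.push_back(mean);                   // basic statistics: mean path

        elements = survivors;                            // whittle the swarm down
    }
    return consensusPath;
}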
The next issue is determining how much computational time I am willing to commit to finding the iliac attractors since the algorithm as it stands right now can successfully track all but the extremes of vessel opacification. Since it is getting late and I have to be at some refresher courses in the morning, I will think about this some more and get back to you.
Tuesday, March 30, 2010
Well... the preparations for the radiology oral board exam are in full swing with a little less than two months to go. Luckily I am on research elective next month, which will allow for a nice balance between sitting in front of a computer studying and sitting in front of a computer studying.
It looks like Natal is going to be released a month before RSNA 2010... probably not going to be enough time to get it working.
Sunday, March 21, 2010
I was thinking last night about how it is that we interact with our computers in terms of input devices. I have to submit my abstract for RSNA 2010 pretty soon and I am missing a key component of the 3D PACS project which is the input device. It is kind of meaningless to have a 3D display if you are unable to interact with the data in 3D. The Microsoft Natal box would be perfect for this application, but I have not had any luck finding someone to collaborate with at MS and I am not even sure that Win is supported at this time. The other option would be to utilize a capacitive touch interface, but then we are back to the initial problem of a 2D interface.
There is always the option of creating my own input device, which would not be all that difficult to do for something with a very limited feature set, but programming the driver and porting it through Visual Studio may in fact be outside my skill set. I guess I will just have to wait and see if anything new comes out within the next couple of months that can help me.
Tuesday, March 09, 2010
Monday, January 19, 2009
Surprisingly, even though I have taken out a significant component of the parallelism (previously I had near full core occupancy on 4 cores throughout most of the run time), the overall run time is similar. In fact, once I get the GPU based lumen tracking code fully integrated, I will have actually dropped my overall run time significantly.
There has been an interval upgrade in CPU to a Phenom II X4 running at 3.6 GHz. Though it is nice to have this on the workstation, it really doesn't make too much difference now that I am getting more adept at writing GPU code. Another bonus is that since I don't have multiple threads accessing GPU time with large blocks of memory, I have taken the workstation down to a 9600 for display and a single 280GTX for computations. The idle power consumption is 200W (down from 250W with the 65nm 9950 CPU and a similar GPU configuration, and down from 360W during the show with 3x 280GTX and the 9950).