The goal of medical imaging is to present data in a useful format. In a very interesting TEDx talk, Anders Ynnerman demonstrates cool new applications in medical visualization that will be possible in the near future.
Graphics processors have become substantially faster in the last ten years. We can now put together** the gigabytes (terabytes if extended to the time domain) of MRI and CT data generated when scanning a single subject and create 3D (or 4D) images from which relevant information can be selectively extracted. This opens up new and very interesting possibilities. One such application is the virtual autopsy, where with iPad-style interactions one can view cadavers from hard-to-maneuver angles or selectively display metal to, for instance, identify the extent of knife-stab injuries or locate bullet shards. Ynnerman also suggests touch-sensitive haptic applications: a surgeon can literally touch the data--a beating heart, for example--before surgery.
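The "selectively display metal" trick comes down to mapping voxel intensities to visibility. As a rough illustration (not Ynnerman's actual pipeline), CT intensities are in Hounsfield units, and metal sits far above bone, so even a simple threshold acts as a crude transfer function; the array, values, and threshold below are all illustrative assumptions:

```python
import numpy as np

def isolate_metal(volume_hu, threshold=3000.0):
    """Mask voxels likely to be metal: metal is far denser than bone
    (roughly > 3000 Hounsfield units), so thresholding suffices here."""
    return volume_hu > threshold

# Tiny synthetic 3D "scan": mostly soft tissue (~40 HU), one bone voxel
# (~1500 HU), and one metal shard (~5000 HU). Purely made-up data.
volume = np.full((4, 4, 4), 40.0)
volume[1, 1, 1] = 1500.0   # bone
volume[2, 2, 2] = 5000.0   # metal shard

mask = isolate_metal(volume)
print(mask.sum())          # only the shard voxel is flagged
print(np.argwhere(mask))   # its coordinates
```

A real renderer would use a smooth transfer function mapping intensity to color and opacity rather than a hard cutoff, but the selection principle is the same.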
It's really a great 17-minute talk--here's the link.
**(Aside from fast GPUs, there are other ways in which people are hoping to handle the explosion of data from these medical scans. The use of compressive sensing algorithms is one, but the more general idea is to reduce the data before, during, or after the scan.)
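To make the compressive-sensing idea concrete, here is a minimal sketch (my own illustration, with made-up sizes and parameters): a sparse signal is recovered from far fewer linear measurements than its length by solving an L1-regularized least-squares problem with ISTA (iterative soft-thresholding):

```python
import numpy as np

# Hedged sketch of compressive sensing: recover a sparse signal x from
# compressed measurements y = A @ x, with fewer measurements (40) than
# signal samples (100), by promoting sparsity with an L1 penalty.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 4            # signal length, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # compressed measurements

def ista(A, y, lam=0.01, iters=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by soft-thresholding."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # safe gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - t * (A.T @ (A @ x - y))        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # shrink
    return x

x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(round(err, 3))   # small relative error despite 40 < 100 samples
```

Scan data isn't sparse in the voxel basis the way this toy signal is, so real MRI applications work in a wavelet or gradient domain, but the recovery machinery is the same.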