\documentclass[letterpaper,10pt]{article}
\usepackage[letterpaper]{geometry} % Important so that the paper size is respected
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{subfig}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage{enumitem}
\usepackage{float}
\usepackage{multicol}
\usepackage{empheq}
\usepackage[numbered, autolinebreaks, useliterate]{mcode}
\usepackage{csquotes}
\usepackage{physics}
\setlength\parindent{24pt}
\newcommand*\widefbox[1]{\fbox{\hspace{2em}#1\hspace{2em}}}
\newcommand*{\addheight}[2][.5ex]{%
\raisebox{0pt}[\dimexpr\height+(#1)\relax]{#2}%
}
\newcommand{\multicolinterrupt}[1]{% Stuff to span both rows
\end{multicols}
#1
\begin{multicols}{2}
}
\special{papersize=8.5in,11in}
\begin{document}
\begin{titlepage}
\newcommand{\HRule}{\rule{\linewidth}{0.5mm}} % Defines a new command for the horizontal lines, change thickness here
\center % Center everything on the page
%----------------------------------------------------------------------------------------
% HEADING SECTIONS
%----------------------------------------------------------------------------------------
\textsc{\LARGE Carleton University}\\[1.5cm] % Name of your university/college
\vspace{2.5cm}
\textsc{\Large Comparison of two techniques for data analysis in Dual Energy Radiography}\\[0.5cm] % Major heading such as course name
\textsc{\large Fall term 2016}\\[0.5cm] % Minor heading such as course title
%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\HRule \\[0.4cm]
{ \LARGE \bfseries PHYS 4907 Project report}\\[0.4cm] % Title of your document
\HRule \\[1.5cm]
%----------------------------------------------------------------------------------------
% AUTHOR SECTION
%----------------------------------------------------------------------------------------
\begin{minipage}{0.8\textwidth}
\begin{flushleft} \large
\vspace{0.5cm}
Fernando Franco Félix \hspace{1cm} \textsc{Student number: 101057453}\\
\vspace{2cm}
\end{flushleft}
\end{minipage}
%~
%\begin{minipage}{0.4\textwidth}
%\begin{flushright} \large
%\emph{Profesor:} \\
%Dr. Ra\'{u}l \textsc{Hern\'{a}ndez} % Supervisor's Name
%\end{flushright}
%\end{minipage}\\[4cm]
% If you don't want a supervisor, uncomment the two lines below and remove the section above
%\Large \emph{Author:}\\
%John \textsc{Smith}\\[3cm] % Your name
%----------------------------------------------------------------------------------------
% DATE SECTION
%----------------------------------------------------------------------------------------
{\large 19/December/2016}\\[3cm] % Date, change the \today to a set date if you want to be precise
%----------------------------------------------------------------------------------------
% LOGO SECTION
%----------------------------------------------------------------------------------------
%\includegraphics{Logo}\\[1cm] % Include a department/university logo - this will require the graphicx package
%----------------------------------------------------------------------------------------
\vfill % Fill the rest of the page with whitespace
\end{titlepage}
\begin{abstract}
A dual energy radiography method using basis decomposition was developed. The process used to build it is shown, and it is compared against a more direct alternative that analyzes the natural logarithm of the original data, concluding that this second method does work but is not better than basis decomposition.
\end{abstract}
\section{Motivation}
X-rays are one of the best tools modern medicine has at its disposal. With them it is possible to see inside people's bodies without opening them, which allows diagnoses that would be impossible otherwise. We must never forget how remarkable this is.
However, there is a cost: the high energies of the X-rays can harm people. The damage is usually small and the body heals, a risk worth taking given the benefits, but sometimes the damage is too large to heal and can lead to cancer and other complications. For this reason x-rays must be used as efficiently as possible: getting the most information from the least amount of radiation.
This is what Dual Energy Radiography was created for and it is used for diagnostics every day.
\\
However, the usual method to analyze the data from dual energy radiography, called basis decomposition and explained in the following section, has two main drawbacks: the first is that it requires constant recalibration of the machine and the software, and the second is that it can take several hours to process the data.\\
Thus an alternative method to analyze the data from dual energy radiography, called logarithmic analysis, was tested against basis decomposition.
\section{Theory}
\subsection{Basis decomposition}
The attenuation of x-rays passing through matter can be approximated as an exponential decay in intensity \cite{NIST}:
\begin{gather}
\frac{I(x)}{I_0}=e^{-\frac{\mu}{\rho }x}
\end{gather}
where $x$ is the mass thickness in a material of density $\rho$ and $\mu/\rho$ is a constant of the material called the mass attenuation coefficient, which is a function of the x-ray energy\cite{lehmann}.
In the diagnostic energy range this function can be modeled by two effects: Compton scattering and the photoelectric effect\cite{lehmann}.
\begin{gather}
\frac{\mu(E)}{\rho}=a_cf_c(E)+a_pf_p(E)
\end{gather}
where $a_c$ and $a_p$ are constants specific to the material and $f_c$ and $f_p$ are functions of the energy whose specific form is of no interest for now; it suffices to say that they are linearly independent from each other, which will be relevant soon. \\
Taking the natural logarithm on both sides of (1) and plugging in (2):
\begin{gather}
M=\ln\bigg(\frac{I_0}{I(L)}\bigg)=[a_cf_c(E)+a_pf_p(E)]L
\end{gather}
If the material were not known but the length were, then $M$ could be measured with an x-ray machine at two different energies, producing two linear equations with two unknowns, thus making it possible to identify the material.\\
The information of the intensity of the x-rays would be stored in each pixel of an image matrix by an x-ray detector. The value of each pixel would not be equal to the intensity, but it would be proportional to it, so that "the whitest white" would be the maximum intensity the detector can measure, and the blackest black would be no x-rays detected.
The value of each pixel would then be divided by the value of "the whitest white" of the image, $I_0$, and the natural logarithm of this ratio is, in practice, the logarithm of intensities that appears in (3).
The following step is to solve the system of equations for each pixel, producing two matrices with the distributions of $a_c$ and $a_p$ values that can be further analyzed depending on the purpose.
\\
Consider now the case of having material 1 and material 2 in unknown amounts ($L$ for each material is not easily measurable). It would again be useful to take two x-ray images at two energies $E_\eta$ and $E_\gamma$, but now the unknowns would be the lengths themselves:
\begin{gather}
M_\eta=L_1[a_{c1}f_{c}(E_\eta)+a_{p1}f_{p}(E_\eta)]+L_2[a_{c2}f_{c}(E_\eta)+a_{p2}f_{p}(E_\eta)]
\end{gather}
\begin{gather}
M_\gamma=L_1[a_{c1}f_{c}(E_\gamma)+a_{p1}f_{p}(E_\gamma)]+L_2[a_{c2}f_{c}(E_\gamma)+a_{p2}f_{p}(E_\gamma)]
\end{gather}
and thus it would be possible to measure the thicknesses $L_1$ and $L_2$ of each material.
As in the previous example, the process of solving the system of equations would be done for each pixel, but this time the result would be two matrices with the distributions of length values.
\\
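As an illustration of this per-pixel step, the following Matlab sketch (with illustrative names, not the actual code of this project) solves the $2\times 2$ system formed by (4) and (5) for every pixel of a pair of images:
\begin{lstlisting}
function [L1, L2] = solveThickness(M_high, M_low, K)
% Per-pixel solution of the 2x2 system (4)-(5).
%   M_high, M_low : matrices of M = ln(I0/I) at the two energies
%   K             : 2x2 matrix of the bracketed material constants
L1 = zeros(size(M_high));            % thickness of material 1 per pixel
L2 = zeros(size(M_high));            % thickness of material 2 per pixel
for i = 1:numel(M_high)
    L = K \ [M_high(i); M_low(i)];   % solve K*[L1; L2] = [M_high; M_low]
    L1(i) = L(1);
    L2(i) = L(2);
end
end
\end{lstlisting}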
All this can be taken further into abstraction: since every attenuation coefficient is produced by the same two linearly independent functions, any attenuation coefficient can be represented as a linear combination of any other two attenuation coefficients.\\
In terms of linear algebra: any attenuation coefficient function of a material $\epsilon$ can be represented as a vector in a space with two basis functions given by the attenuation coefficient functions of the materials $\alpha$ and $\beta$:
\begin{gather}
\frac{\mu_\epsilon(E)}{\rho_\epsilon}=a_1\frac{\mu_\alpha(E)}{\rho_\alpha}+a_2\frac{\mu_\beta(E)}{\rho_\beta}
\end{gather}
and define the vector $V_a$:
\begin{gather}
V_a=(a_1,a_2)
\end{gather}
where $a_1$ and $a_2$ would be the thicknesses of each basis material required to produce the same attenuation as material $\epsilon$.\\
In order to calculate the coordinates $(a_1,a_2)$ it is only necessary to know $\frac{\mu_\alpha}{\rho_\alpha}$ and $\frac{\mu_\beta}{\rho_\beta}$ and to measure $\frac{\mu_\epsilon}{\rho_\epsilon}$ at two energies, generating a system of two equations with two unknowns, $a_1$ and $a_2$, which do not depend on the energy.\\
The basis materials $\alpha$ and $\beta$ are usually aluminum and lucite, since they have attenuation similar to bone and tissue in the range of tube voltages from 65 to 100 kV used in medical radiology.\\
This is the basis of Dual energy radiography.
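Explicitly, writing (6) at two measurement energies (call them $E_1$ and $E_2$, labels used only for this restatement) gives a $2\times 2$ linear system whose solution is
\begin{gather*}
\begin{pmatrix} a_1 \\ a_2 \end{pmatrix}=
\begin{pmatrix}
\mu_\alpha(E_1)/\rho_\alpha & \mu_\beta(E_1)/\rho_\beta \\
\mu_\alpha(E_2)/\rho_\alpha & \mu_\beta(E_2)/\rho_\beta
\end{pmatrix}^{-1}
\begin{pmatrix} \mu_\epsilon(E_1)/\rho_\epsilon \\ \mu_\epsilon(E_2)/\rho_\epsilon \end{pmatrix}
\end{gather*}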
\pagebreak
\section{Applications}
\subsection{Identify and eliminate substance from image}
Theoretically, it could be possible to identify $\epsilon$ if it were unknown, since it can be proven, using approximations for $a_c$, $f_c$, $a_p$ and $f_p$, that $a_1$ and $a_2$ depend only on the atomic number of $\epsilon$ \cite{lehmann}.\\
However, in medical applications this is usually not possible, since the substances involved are made of many complex molecules, so no single element can be reliably identified.\\
It is still possible, however, to distinguish those different substances using their attenuation coefficients, and to determine how much of some substance is present in an image.\\
To do this, it is first necessary to define the angle of a particular substance in the basis plane:
\begin{gather}
\theta=\arctan\bigg(\frac{a_2}{a_1}\bigg)
\end{gather}
With this definition, as explained by Lehmann, Alvarez and Macovski \cite{lehmann}:
\begin{displayquote}
"When the vector $(a_1,a_2)$ is projected onto a unit vector directed outwards from the basis plane at angle $\Phi$ and the length of the projection ($C$) is displayed at every point the resulting image is called a basis projection image:
\end{displayquote}
\begin{gather}
C=a_1\cos(\Phi)+a_2\sin(\Phi)
\end{gather}
\begin{displayquote}
The effective atomic number of an unknown homogeneous sample material can be identified by varying the angle $\Phi$... the length $C$ is proportional to the portion of the object which exhibits the attenuation properties characteristic of the angle $\theta$"
\end{displayquote}
This technique can have other uses. For example, if an angle $\Phi$ is found that makes a specific substance disappear (makes it look like the background), then the vector associated with eliminating that substance could be defined as:
\begin{gather}
V_\theta=(\cos(\Phi),\sin(\Phi))
\end{gather}
and every pair of coordinates in the image (represented as $I$) would be mapped through the linear transformation:
\begin{gather}
\braket{V_\theta}{I}
\end{gather}
Finally, a colormap could be applied to this image, assigning different colors to each value, so that the substance with angle $\theta$ has the same color as the background, or two substances are made to look the same.\\
There are many uses for this technique. For example, there may be some substance, like bone, that does not allow other features to be seen clearly; the substance in question can be made to disappear from the image, increasing the visibility of the structures that do matter.
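A minimal Matlab sketch of this projection, with illustrative names (the basis images are assumed to be already stored as matrices):
\begin{lstlisting}
function C = basisProjection(A1, A2, Phi)
% Basis projection image of equation (9): the basis images A1 and A2
% are projected onto the direction at angle Phi (in radians).
C = A1*cos(Phi) + A2*sin(Phi);
end
\end{lstlisting}
The result can then be displayed with any colormap, scanning over $\Phi$ until the chosen substance matches the background.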
\\
As an example, these are real images advertised by General Electric \cite{ge}:
\begin{center}
\includegraphics[width=14cm,height=7cm]{ejemplo2} \\
Figure 1: From left to right a normal x-ray image, tissue-bone look alike, tissue-air look alike.
\end{center}
\subsection{Material look-alike}
A better option is the technique known as "material look-alike" \cite{lehmann}, where two materials are made not to contrast with each other by having the same value under a linear transformation like the one previously mentioned, while other materials retain different values.\\
To explain how this is possible consider the situation in Figure 2:
\begin{center}
\includegraphics[width=8cm,height=5.5cm]{ejemplo} \\
Figure 2: A sample with materials $\psi$ and $\epsilon$
\end{center}
It is necessary not only to use the vector $V_a=(a_1,a_2)$ of each pixel, but to rescale every vector of this kind, component by component, with the following vector $V_\rho$:
\begin{gather}
V_\rho=\rho_\epsilon\bigg(\frac{1}{\rho_\alpha},\frac{1}{\rho_\beta}\bigg)
\end{gather}
\begin{gather}
V_A=(A_1,A_2)=\rho_\epsilon\bigg(\frac{a_1(x)}{\rho_\alpha},\frac{a_2(x)}{\rho_\beta}\bigg)
\end{gather}
With this definition in place it is possible to see that the pixels irradiated with ray 1 would have vectors of the form:
\begin{gather}
(A_1,A_2)=\rho_\epsilon\bigg(\frac{a_{1\epsilon}(L)}{\rho_\alpha},\frac{a_{2\epsilon}(L)}{\rho_\beta}\bigg)
\end{gather}
while those irradiated with ray 2 would have vectors:
\begin{gather}
(A_1,A_2)=\bigg(\frac{\rho_\epsilon}{\rho_\alpha}a_{1\epsilon}(L-X)+\frac{\rho_\psi}{\rho_\alpha}a_{1\psi}(X),\frac{\rho_\epsilon}{\rho_\beta}a_{2\epsilon}(L-X)+\frac{\rho_\psi}{\rho_\beta}a_{2\psi}(X)\bigg)
\end{gather}
It is rather interesting that when $\theta$ is calculated for these vectors the answer is in fact independent of any distance:\\
For the first vector
\begin{gather}
\theta=\arctan\bigg(\frac{\rho_\alpha}{\rho_\beta}\bigg(\frac{\rho_\epsilon a_{2\epsilon}}{\rho_\epsilon a_{1\epsilon}}\bigg)\bigg)
\end{gather}
While for the second one:
\begin{gather}
\theta=\arctan\bigg(\frac{\rho_\alpha}{\rho_\beta}\bigg(\frac{\rho_\epsilon a_{2\epsilon}-\rho_\psi a_{2\psi}}{\rho_\epsilon a_{1\epsilon}-\rho_\psi a_{1\psi}}\bigg)\bigg)
\end{gather}
This expression is true for all values of $X$, which can range from $L$ to 0, and at $X=0$ both vectors (and hence both expressions for $\theta$) are the same, which means that in fact both vectors have the same angle $\theta$! They would thus have the same value under a linear transformation with the same angle $\Phi$, and so both would be "eliminated": they would not be visible anymore.\\
This technique not only allows eliminating two materials from the image at the same time, giving great possibilities for producing contrast, but can also be used to identify one material when its "partner" in elimination is known and its density or atomic number is also known \cite{lehmann}.\\
Of course this process changes the contrast between the materials that remain, and this change can be predicted and used to know whether the technique will have undesired effects or not.
\pagebreak
\section{The project}
We are now in a position to go into detail on the differences between basis decomposition and logarithmic analysis. Basis decomposition can be summarized in Figure 3:
\begin{center}
\includegraphics[width=12cm,height=8cm]{decomposition.pdf} \\
Figure 3: Diagram of the basis decomposition method
\end{center}
Here $f(h,\ell)$ is a function that takes as inputs the High and Low images and outputs the coordinates of each pair of pixels in material space; for this reason it is important that each pixel shows the same part of the object, which means that the object should not move between the two exposures.\\
These outputs can be understood as two "basis images": matrices with one of the coordinates in each cell.
\\
And here is where the drawbacks come into play. The first one is that any change in the performance of the x-ray machine that gives slight differences in the High and Low images produces big differences in the output of the fitting function $f(h,\ell)$: given its logarithmic nature, the different values are usually very close together relative to their magnitude, and a difference of two orders of magnitude can mean a difference in substance, as can be seen in the results at the end. This makes it necessary to constantly recalibrate the x-ray machine and the fitting function so that they work together.
\\
The second one is that the computing time required to create the basis images can extend from one to several hours, producing a lot of dead time that should instead be used for diagnostics.
\\
Both of these disadvantages are produced by the fitting function $f(h,\ell)$, so the question arises: what if we do not use it? Would we get useful results? How would the results be different?\\
This proposal can be seen in Figure 4:
\begin{center}
\includegraphics[width=6cm,height=10cm]{logarithm.pdf} \\
Figure 4: Diagram of the logarithmic method
\end{center}
Doing this would certainly save computational time and would not require constant recalibration (although recalibration will always be necessary eventually), and while it may be less accurate than the conventional process, it may give acceptable results. To test this it was necessary to process the same images in both ways and compare the results, and to do that it was first necessary to actually develop the basis decomposition method, which includes producing the fitting function $f(h,\ell)$. The High image was taken at 100 kV and the Low image at 65 kV, both with a current of 200 mA, chosen as acceptable ends of the range used in medical radiology. The basis materials were Lucite, as mentioned, and the aluminum alloy 1100, chosen because of its purity.
\\
Finally, it was necessary to sample the attenuation of a range of different thicknesses of the basis materials in order to construct the fitting function. The approach to this problem is explained in the section "Calibration phantom", but first it is important to mention the experimental facilities that were used.
\subsection{Experimental set up}
All the data mentioned here was taken using a Picker GX550 x-ray machine connected to a PerkinElmer XRD 0840 AN x-ray detector, in turn connected to a computer using the software XIS to create the images.
\begin{center}
\includegraphics[width=13cm,height=8cm]{set_up} \\
Figure 5: Experimental set up
\end{center}
The actual set up is shown in Figure 6.
\begin{center}
\includegraphics[width=9cm,height=8cm]{IMG_7449} \\
Figure 6: Picture of the experimental set up with the x-ray source on the left and the detector on the right. The sample is a chicken leg.
\end{center}
\subsection{Calibration phantom}
It was decided to create the image of a grid in which each square had a different attenuation value given by a different combination of thicknesses of the basis materials. In other words, the calibration phantom should show part of the span of the basis space, and this data was used to create $f(h,\ell)$.
\\
To create this grid, two step wedges were placed one in front of the other in such a way that the common area of two different steps creates a square of the grid.
To make the lucite step wedge, 10 square pieces of lucite with sides of 10.8 cm and a thickness of 0.8 cm were placed in front of each other, each displaced by 0.635 cm to the right; to make the aluminum step wedge, 12 square pieces of aluminum with sides of 15 cm and a thickness of 0.16 cm were placed in front of each other, each displaced 0.635 cm upwards.
\\
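The nominal thickness of each basis material in every square follows directly from this construction; a minimal Matlab sketch (the step ordering here is only illustrative):
\begin{lstlisting}
% Nominal basis-material thickness in each square of the calibration grid,
% built from the piece thicknesses quoted above (cm).
t_lu = 0.8;   t_al = 0.16;                 % one lucite / aluminum piece
lu_steps = (1:10)*t_lu;                    % lucite grows towards the right
al_steps = (12:-1:1)*t_al;                 % aluminum grows towards the top
[LU, AL]  = meshgrid(lu_steps, al_steps);  % thickness of each material per square
\end{lstlisting}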
Many images of the grid were taken at the high and low energies with many different configurations of x-ray filters and exposure times, with the goal of having as little saturation as possible, saturation being the point where so many photons reach the detector that it assigns the maximum value to a pixel, so that it does not record differences in intensity above that threshold.\\
The best configuration found using this criterion was, for 100 kV, to use two aluminum filters of 0.2 mm and one copper filter of 0.12 mm with an exposure time of $1/120$ seconds, while for 65 kV the best configuration was to use only those same two aluminum filters, also with an exposure time of $1/120$ seconds.
\\
Both images can be seen in Figures 7 and 8, where black shows the highest values and white the lowest ones. In both of them the aluminum steps "go down" in the sense that the maximum of aluminum is at the top, with each step below having less aluminum, and the lucite steps "go right" in the sense that the maximum of lucite is on the right, with each step to the left having less lucite.
\begin{center}
\includegraphics[width=8cm,height=7cm]{fff087} \\
Figure 7: The grid at an energy of 65 kV (low energy)
\end{center}
\pagebreak
It can be noted that the grid itself is surrounded by rectangles of more or less uniform color; this is because the step wedges did not overlap perfectly. The rectangles on the top and on the left have either a single value of lucite with changing aluminum, or a single value of aluminum with changing lucite, with the top left corner having the maximum of both, while the rectangles on the right and the bottom were produced by changing values of only lucite or only aluminum, with the bottom right corner having none of either of them.\\
This was very useful in the end, since it served as extra data points.
\\
It can be noted that some of the steps of pure aluminum (rectangles on the right) are white, and in fact they have the saturation value; this also happens for one step of pure lucite. This saturation could not be reduced with the available equipment.
\begin{center}
\includegraphics[width=8cm,height=7cm]{fff086} \\
Figure 8: The grid at an energy of 100 kV (high energy)
\end{center}
The grid was not moved from one picture to the other, so each square and rectangle in this image represents the same amount of aluminum and lucite as in the previous one.
\\
Although a good part of the image seems uniformly black, it is not: the values of those pixels are well above black saturation and contain the information of the grid; it just so happens that our eyes cannot distinguish those shades with ease, but the differences can be seen by changing the contrast and brightness of the image.
\subsection{Constructing $f(h,\ell)$}
With this data it was finally possible to create $f(h,\ell)$.
\\
A simple piece of code was created with the software ImageJ (https://imagej.nih.gov/ij/) to measure squares in the images at the same positions as the squares in the grid (the rectangles were taken as either 3 or 4 squares, with the corners taken as four squares each); the measurements for each image gave a list of the average pixel value of each square and its standard deviation. The high and low lists of values were imported into Matlab as matrices, along with two vectors: one with the amount of aluminum in each square, in the order in which the squares were measured, and an equivalent one for lucite.
\\
Matlab's fitting tool only allows working with three vectors of variables and one of weights, which meant that instead of having $f(h,\ell)=(A_{al},A_{lu})$, that is, one vector function with two inputs and two outputs, there had to be two scalar functions with one output each: $f_{al}(h,\ell)=A_{al}$ and $f_{lu}(h,\ell)=A_{lu}$.
\\
Finally, the values of the pixels in the high and low images were divided by the saturation value ($2^{15}-1$), the natural logarithm of each value in the image matrices was taken, and these were given as the first and second inputs to Matlab's fitting tool; either the aluminum vector or the lucite vector was given as the desired output, and it was decided to use the product of the standard deviations as weights.\\
The fit itself had to be continuous and precise, and the best suited option turned out to be, in both cases, functions that satisfy:
\begin{gather}
\nabla^4 \phi=0
\end{gather}
Known as "biharmonic functions". This kind of functions can have a very complex shape, but in this case they approximated twisted planes in the range of intensity values. A surface plot of the functions with all the values in the range considered is shown in the next figures, and in order to give a sense of their shape, being limited for the moment by two dimensional paper, several three dimensional images of the fits at different angles are shown. the different colors are just to distinguish the two different functions
\pagebreak
Figures 9 and 10 show the resulting surfaces to which $f(h,\ell)$ maps the data. It can be seen that the surfaces approximate planes in the space of the expected attenuation values, shown by the data points taken from the calibration phantom.
\begin{center}
\includegraphics[width=13cm,height=7cm]{ultimate_al_surface} \\
Figure 9: $f_{al}(h,\ell)$ along with points that show the thickness value for every pair of values calculated from the calibration phantom
\end{center}
\begin{center}
\includegraphics[width=13cm,height=7cm]{ultimate_lu_surface} \\
Figure 10: $f_{lu}(h,\ell)$ along with points that show the thickness value for every pair of values calculated from the calibration phantom
\end{center}
\pagebreak
These functions were then tested by reconstructing the aluminum and lucite step wedges.
\begin{multicols}{2}
\begin{center}
\includegraphics[width=8cm,height=7cm]{A_al1} \\
Figure 11: Aluminum step wedge reconstructed image
\end{center}
\begin{center}
\includegraphics[width=8cm,height=7cm]{A_lu1} \\
Figure 12: Lucite step wedge reconstructed image
\end{center}
\end{multicols}
In these images white represents a high value, while dark values approach zero. The shape of the step wedges can be recognized; even with the noise in the aluminum image the steps can be clearly seen. These results were deemed acceptable.
It is important to mention that the production of the basis images can take a long time: each image is $512 \times 512$ pixels, which means that each function must process 262,144 sets of data, so in total the computer must produce 524,288 results, and the functions are not simple to compute. For this reason computation time can extend from one to several hours.
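The evaluation itself can be sketched as follows (illustrative names; the fitted surfaces are assumed to come from the previous step):
\begin{lstlisting}
function [A_al, A_lu] = basisImages(H, L, f_al, f_lu, sat)
% Evaluate the two fitted surfaces over a full image pair.
%   H, L       : raw high- and low-energy images (here 512 x 512)
%   f_al, f_lu : fitted surfaces f_al(h,l) and f_lu(h,l)
%   sat        : saturation value used for normalization (2^15 - 1 here)
h = log(double(H)/sat);                      % log of the normalized high image
l = log(double(L)/sat);                      % log of the normalized low image
A_al = reshape(f_al(h(:), l(:)), size(H));   % aluminum basis image
A_lu = reshape(f_lu(h(:), l(:)), size(H));   % lucite basis image
end
\end{lstlisting}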
\section{Logarithmic analysis vs basis decomposition}
The next step was to use both methods and compare the results.
\\
The samples chosen to compare the results had to approximate a human body in some significant way. For this reason a cubic water container of 15 cm per side was filled with water, to simulate the adipose and lung tissue, and an apple with a rod of polystyrene inserted in it and a chicken leg were used to simulate structures in the human body.\\
The reason for the chicken leg was the following: its bone was similar enough to a human rib, and it was covered in meat, giving the possibility of isolating in the image a very dense material and a not so dense one surrounded by a similar medium. These two extremes would test the limits of both methods.\\
The reason for the apple with the polystyrene rod was to test how these methods would work with materials they were not explicitly designed for, especially a non-organic and uniform substance such as polystyrene. Differences in these results would probably suggest that one method is better suited than the other for the human body, but that the remaining method has potential in other areas.
\\
\subsection{Data analysis}
A program was written to produce an interface that facilitates the analysis of the data; it is divided into four columns. From left to right, the first column shows the natural logarithms of the original data divided by the saturation value, with the data from the high image on top and the data from the low image on the bottom. The second column shows the basis images, with the aluminum basis on top and the lucite basis on the bottom. The third column shows the linear combinations: the one on top is the linear combination of the basis images and the one on the bottom is that of the natural logarithms of the original images. Finally, the image in the fourth column is made from the absolute value of the difference of the normalized linear combinations, that is, the two linear-combination matrices are divided by their highest values, subtracted from one another, and the absolute value of the resulting matrix is plotted. This last image is called the "results' comparison" and its purpose is to show how similar or different the results of each method are in each pixel.
At the bottom of the last column the user can see the angle of each linear combination and has the option to save the linear combinations and the results' comparison, naming the saved files.
The user also has the option to choose a different colormap from the drop-down menu in the top left corner.
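The "results' comparison" of the fourth column can be sketched in a few lines (illustrative names, not the interface code itself):
\begin{lstlisting}
function D = resultsComparison(C_basis, C_log)
% "Results' comparison" image: each linear-combination image is divided
% by its largest value and the absolute difference is taken pixel by pixel.
D = abs(C_basis/max(C_basis(:)) - C_log/max(C_log(:)));
end
\end{lstlisting}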
\begin{center}
\includegraphics[width=15cm,height=7cm]{interface} \\
Figure 13: Screenshot of the program during the muscle-water look-alike
\end{center}
While working on this project one problem was very common: there were usually "dead pixels" with values well above all the others, almost surely noise from the detector, that made their way into all the other images in some form or another. This is made worse by the fact that the software "scales the colors", meaning that it gives the brightest color available to the highest value in the matrix and the darkest to the lowest one. The existence of these "dead pixels" made the whole image darker.
\\
For all these reasons all images, except the results' comparison, have a slider that allows setting an upper limit on the data that is used, and soon enough a slider to select a lower limit was also added to the basis images, in order to narrow the range down to the interesting data.\\
This was useful in ways not considered at first, because in the basis images, especially the aluminum ones, the values were usually very close together, so they received similar colors, making the features of interest very hard to recognize.
\\
Finally, the linear combinations have those two kinds of sliders and one more, with extra precision, to select the angle of the linear combination, which is the most important value. If the user is satisfied with the results they can press the save button and give a name for the files that will contain that information.
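One simple way to obtain the effect of these display limits, shown here only as a sketch (the actual interface is a Matlab GUI whose code is in the repository mentioned at the end):
\begin{lstlisting}
% Restricting the color scale hides the "dead pixels" so that the rest of
% the image is not pushed towards the dark end of the colormap.
% A_al is assumed to be one of the basis images; the limits are examples.
lo = 0;  hi = 5;
imagesc(A_al);
caxis([lo hi]);                 % only values in [lo, hi] span the colormap
colormap(colorcube); colorbar;
\end{lstlisting}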
\subsection{Results}
In each image there was a soft feature and a hard feature, apple and meat, bone and polystyrene. The program was used to make each feature disappear in turn, and the regions of interest of the resulting images were analyzed statistically against regions regarded as "background".\\
A quantity called "relative brightness" was defined as:
\begin{gather}
\text{relative brightness}=\frac{\text{average of region of interest}}{\text{average of background}}
\end{gather}
With this definition, relative brightness is how many times brighter the region of interest is than the background, which is a measure of how easy it is to see. The inverse of the relative brightness is then how many times darker the region of interest is, which also makes it easier to see.
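A minimal sketch of this measurement, assuming the two regions are selected with logical masks (illustrative names):
\begin{lstlisting}
function rb = relativeBrightness(img, roiMask, bgMask)
% Relative brightness as defined above: mean pixel value inside the
% region of interest divided by the mean pixel value of the background.
rb = mean(img(roiMask)) / mean(img(bgMask));
end
\end{lstlisting}
Values above 1 mean the region of interest is brighter than the background, and values below 1 that it is darker.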
\\
One practical decision was to show the resulting images using the colormap (a convention to assign colors to values in the matrices) called "colorcube", a standard component of Matlab, because this colormap assigns very different colors to values that are close together, so that very small differences are very visible. This has the downside that one cannot tell which parts of the image actually have similar values or which have greater values than others.
\\
The water container did not occupy the whole image, and this produces noise at the top and the bottom of most results, but in the end it has no ill effects.
\\
In each resulting image a rectangular region of interest (ROI) was defined so as to encompass the feature of interest as well as the parts of the image it should look similar to. Along the wider dimension of the rectangle a vector of data was made by averaging the rows or columns, and the resulting vectors were plotted, as sketched below.
The "Comparison" column of the tables evaluates the "results' comparison" image in the same places as the other images.
\pagebreak
\subsubsection{Result 1: Apple-water look alike}
The objective in this test was to make the apple look like water, allowing only the polystyrene rod to be seen. The ROIs are shown with dotted lines.
\\
The two plots show the same behavior: the pixel value diminishes significantly where the polystyrene starts, making it look different from the apple and the water, but the magnitude of this change is much greater in the basis decomposition. The value of the background also rises on the right side of the ROI in both plots, but it rises much more in the logarithmic analysis.\\
This trend of having the same behaviors in different magnitudes continues in all the results.
\begin{center}
\begin{tabular}{ |c||c|c|c| }
\multicolumn{4}{l}{Table 1} \\
\hline
Result & Basis & Logarithm & Comparison\\
\hline
Mean of sample & 1.5776 & 0.2036 & 0.1328 \\
Mean of bg. & 1.8084 & 0.2244 & 0.1195 \\
StD. of sample & 0.0129 & 0.0052 & 0.0063 \\
StD. of bg. & 0.0038 & 0.0038 & 0.0083 \\
Relative brightness & 0.8724 & 0.9071 & 1.1108 \\
\hline
\end{tabular}
\end{center}
\begin{center}
\includegraphics[width=11cm,height=11cm]{r1} \\
Figure 14: a) Result of basis decomposition. b) Result of logarithmic analysis. c) Basis decomposition's ROI plot. d) Logarithmic analysis' ROI plot
\end{center}
\pagebreak
\subsubsection{Result 2: Polystyrene-water look alike}
In this test the objective was to make the polystyrene look like water, so that only the apple would be visible. Neither method manages to blend them together, but it can again be seen that basis decomposition produces a better result, since the background to the right of the ROI is much closer in value to the polystyrene.
\begin{center}
\begin{tabular}{ |c||c|c|c| }
\multicolumn{4}{l}{Table 2} \\
\hline
Result & Basis & Logarithm & Comparison\\
\hline
Mean of sample & 1.4141 & 4.1574 & 0.6507 \\
Mean of bg & 1.4851 & 4.2324 & 0.6398 \\
StD. of sample & 0.0427 & 0.026 & 0.0036 \\
StD. of bg & 0.0277 & 0.0161 & 0.0056 \\
Relative brightness & 0.9522 & 0.9823 & 1.017 \\
\hline
\end{tabular}
\end{center}
\begin{center}
\includegraphics[width=11cm,height=11cm]{r2} \\
Figure 15: a) Result of basis decomposition. b) Result of logarithmic analysis. c) Basis decomposition's ROI plot. d) Logarithmic analysis' ROI plot
\end{center}
\pagebreak
\subsubsection{Result 3: Muscle-bone look alike}
Here the purpose was to make the bone and the muscle look the same, but different from the water around them.\\
At first glance the images a) and b) seem to contradict the plots c) and d), but what actually happens is that the difference in the pixel values for the logarithmic analysis is at least one order of magnitude smaller than for basis decomposition; not even the different options with the sliders were able to increase this difference, but it is still there.
\begin{center}
\begin{tabular}{ |c||c|c|c| }
\multicolumn{4}{l}{Table 3} \\
\hline
Result & Basis & Logarithm & Comparison\\
\hline
Mean of sample & 5.0681 & -1.2006 & 7.4233 \\
Mean of bg. & 5.1435 & -1.1728 & 7.3665 \\
StD. of sample & 0.0644 & 0.0067 & 0.037 \\
StD. of bg. & 0.0385 & 0.0025 & 0.0139 \\
Relative brightness & 0.9853 & 1.0237 & 1.0077 \\
\hline
\end{tabular}
\end{center}
\begin{center}
\includegraphics[width=11cm,height=11cm]{r3} \\
Figure 16: a) Result of basis decomposition. b) Result of logarithmic analysis. c) Basis decomposition's ROI plot. d) Logarithmic analysis' ROI plot
\end{center}
\pagebreak
\subsubsection{Result 4: Muscle-water look alike}
The results from making the muscle look like water were exceptional for basis decomposition, since both were taken almost uniformly to zero, eliminating even the noise from the top and the bottom of the water container.
The results of the logarithmic analysis were also very good, but the little noise it could not eliminate had a big effect on the plot because of its unusually high values.
\begin{center}
\begin{tabular}{ |c||c|c|c| }
\multicolumn{4}{l}{Table 4} \\
\hline
Result & Basis & Logarithm & Comparison\\
\hline
Mean of sample & 0.1824 & 4.5156 & 0.7091 \\
Mean of bg & 0.0018 & 4.4103 & 0.7154 \\
StD. of sample & 0.1352 & 0.0429 & 0.009 \\
StD. of bg. & 0.0025 & 0.0167 & 0.0026 \\
Relative brightness & 102.6824 & 1.0239 & 0.9913 \\
\hline
\end{tabular}
\end{center}
\begin{center}
\includegraphics[width=11cm,height=11cm]{r4} \\
Figure 17: a) Result of basis decomposition. b) Result of logarithmic analysis. c) Basis decomposition's ROI plot. d) Logarithmic analysis' ROI plot
\end{center}
\subsection{Discussion of results}
As the plots and the tables show, the logarithmic method managed to produce the desired effect most of the time: it made the feature of interest easier to see when that was the intent and made it more similar to the background in the other cases.
That said, basis decomposition outperformed the logarithmic method; for example, the isolation of bone was almost perfect, which makes sense since that was explicitly what basis decomposition was designed to do.
The logarithmic method also failed to satisfactorily eliminate the apple when the feature of interest was the polystyrene, and failed to eliminate the polystyrene when the apple was the feature of interest, which in hindsight was expected since basis decomposition did not perform very well in those tests either.
One could notice, however, that the relative brightnesses of the two methods always differ by something on the order of $10^{-2}$ (except for the complete isolation of bone in Figure 17), which would suggest that the results are in fact quite similar; but these small differences matter, since human senses are not linear and small differences in value can have large effects on perception.
\section{Conclusion}
Dual energy radiography is a tool to save lives, but it is a two-edged sword, one edge of which scientists are constantly trying to make as blunt as possible.
\\
In that effort new techniques for data analysis will surely help, until one day there is no risk associated with medical x-ray exposure.\\
Today is not that day, and the logarithmic method is not one of those techniques, at least not as far as it was developed by the author, although some variation of this idea may still have value.
\section{Open code}
All the code and all the data taken are freely available at https://github.com/Frigorifico9/Dual-energy
\begin{thebibliography}{9}
\bibitem{lehmann}
Lehmann, L. A., R. E. Alvarez, A. Macovski, and W. R. Brody. "Generalized Image Combinations in Dual KVP Digital Radiography." Medical Physics 8.5 (1981): 659. Web.
\bibitem{NIST}
Hubbell, J. H., and S. M. Seltzer. "X-Ray Mass Attenuation Coefficients." Tables of X-Ray Mass Attenuation Coefficients and Mass Energy-Absorption Coefficients from 1 keV to 20 MeV for Elements Z = 1 to 92 and 48 Additional Substances of Dosimetric Interest. National Institute of Standards and Technology, 17 Sept. 2009. Web. <https://www.nist.gov/pml/x-ray-mass-attenuation-coefficients>.
\bibitem{ge}
General Electric. GE Healthcare, June 2014. Web: http://www3.gehealthcare.com/en/products/categories/radiography/advanced\_applications/dual\_energy\_subtraction\#tabs/tabE19657B3941E4464A3CF1F52522467B6
\end{thebibliography}
\end{document}