<p>Learning in Things: a research blog on machine learning embedded in things.</p>
<h1>Optimize IoU for Semantic Segmentation in TensorFlow</h1>
<p>2016-12-28 · /writing/2016/12/28/optimizing-iou-semantic-segmentation</p>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: {
equationNumbers: {
autoNumber: "AMS"
}
},
tex2jax: {
inlineMath: [ ['$','$'], ['\\(', '\\)'] ],
displayMath: [ ['$$','$$'] ],
processEscapes: true,
}
});
</script>
<script type="text/javascript" async="" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
<h1 id="introduction">Introduction</h1>
<p>Intersection over union (IoU) is a common metric for assessing performance in semantic segmentation tasks. In a sense, IoU is to segmentation what an F1 score is to classification. Both are non-differentiable, and neither is normally optimized directly. Optimizing cross-entropy loss is a common proxy for these scores that usually leads to decent performance, provided everything else has been set up correctly, e.g. regularization and stopping training at an appropriate time. Devising a pixelwise loss function with the mindset of classification and cross-entropy, such that a deep network performs segmentation, we get something like this:</p>
<p>Listing 1: TensorFlow pixelwise softmax cross-entropy loss</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="c"># logits has original shape [batch_size x img h x img w x FLAGS.num_classes]</span>
<span class="n">logits</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">FLAGS</span><span class="o">.</span><span class="n">num_classes</span><span class="p">))</span>
<span class="n">trn_labels</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">trn_labels_batch</span><span class="p">,</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">])</span>
<span class="n">cross_entropy</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">sparse_softmax_cross_entropy_with_logits</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span><span class="n">trn_labels</span><span class="p">,</span><span class="n">name</span><span class="o">=</span><span class="s">'x_ent'</span><span class="p">)</span>
<span class="n">loss</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">cross_entropy</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s">'x_ent_mean'</span><span class="p">)</span>
<span class="n">train_op</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">FLAGS</span><span class="o">.</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span><span class="n">global_step</span><span class="o">=</span><span class="n">global_step</span><span class="p">)</span>
<span class="c"># For inference/visualization, prediction is argmax across output 'channels'</span>
<span class="n">prediction</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">logits</span><span class="p">),</span> <span class="n">tf</span><span class="o">.</span><span class="n">shape</span><span class="p">(</span><span class="n">vgg</span><span class="o">.</span><span class="n">up</span><span class="p">)),</span> <span class="n">dimension</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
</code></pre>
</div>
<p>Recently, <a href="http://www.cs.umanitoba.ca/~ywang/papers/isvc16.pdf" title="Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation">Y. Wang et al.</a> proposed a straightforward scheme for optimizing approximate IoU directly, but their approach currently only supports binary, e.g. foreground/background, output. This was fine for my own dataset, to be introduced later, but it’s certainly worth looking at how this can be extended to multi-class output, which is already handled by Listing 1.</p>
<p>The equations from <a href="http://www.cs.umanitoba.ca/~ywang/papers/isvc16.pdf" title="Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation">Y. Wang et al.</a> are reproduced here because I’ll convert them to TensorFlow below, but check out their paper for the gradient proof and results on various PASCAL VOC2010/2011 objects.</p>
<p>\begin{equation}
I(X) = \sum_{v \in V} X_v \times Y_v
\end{equation}</p>
<p>\begin{equation}
U(X) = \sum_{v \in V} X_v + Y_v - X_v \times Y_v
\end{equation}</p>
<p>\begin{equation}
IoU = \frac{I(X)}{U(X)}
\end{equation}</p>
<p>\begin{equation}
loss = 1.0 - IoU
\end{equation}</p>
<p>Listing 2: TensorFlow IoU loss. Not shown is the sigmoid non-linearity at the output, used in lieu of ReLU.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="s">'''
now, logits is output with shape [batch_size x img h x img w x 1]
and represents probability of class 1
'''</span>
<span class="n">logits</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">])</span>
<span class="n">trn_labels</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">trn_labels_batch</span><span class="p">,</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">])</span>
<span class="s">'''
Eq. (1) The intersection part - tf.mul is element-wise,
if logits were also binary then tf.reduce_sum would be like a bitcount here.
'''</span>
<span class="n">inter</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">mul</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span><span class="n">trn_labels</span><span class="p">))</span>
<span class="s">'''
Eq. (2) The union part - element-wise sum and multiplication, then vector sum
'''</span>
<span class="n">union</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">sub</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span><span class="n">trn_labels</span><span class="p">),</span><span class="n">tf</span><span class="o">.</span><span class="n">mul</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span><span class="n">trn_labels</span><span class="p">)))</span>
<span class="c"># Eq. (4)</span>
<span class="n">loss</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">sub</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="mf">1.0</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">),</span><span class="n">tf</span><span class="o">.</span><span class="n">div</span><span class="p">(</span><span class="n">inter</span><span class="p">,</span><span class="n">union</span><span class="p">))</span>
<span class="n">train_op</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">FLAGS</span><span class="o">.</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span><span class="n">global_step</span><span class="o">=</span><span class="n">global_step</span><span class="p">)</span>
<span class="c"># For inference/visualization</span>
<span class="n">valid_prediction</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span><span class="n">tf</span><span class="o">.</span><span class="n">shape</span><span class="p">(</span><span class="n">vgg</span><span class="o">.</span><span class="n">up</span><span class="p">))</span>
</code></pre>
</div>
<p>In Listing 1, the network output was ReLU’d and softmax’d, so the final output was nearly one-hot across the output channels, or (class) dimension, hence the argmax to compare the maximally activated channel with an integer in the training label. The argmax disappears in Listing 2 because the network output passes through a sigmoid, so we just take the logits at face value as a preference for either class 0 or class 1.</p>
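<p>One way to extend this scheme to multiple classes, which Y. Wang et al. do not cover, is to apply Eqs. (1) and (2) per softmax channel against one-hot labels and average the per-class IoU. Below is a framework-agnostic NumPy sketch of that idea; it is my own construction, not something from the paper:</p>

```python
import numpy as np

def soft_mean_iou(probs, labels, eps=1e-7):
    """Soft mIoU averaged over classes.

    probs:  float array [n_pixels, n_classes], softmax outputs.
    labels: int array [n_pixels], ground-truth class index per pixel.
    """
    n_classes = probs.shape[1]
    one_hot = np.eye(n_classes)[labels]                        # [n_pixels, n_classes]
    inter = np.sum(probs * one_hot, axis=0)                    # Eq. (1), per class
    union = np.sum(probs + one_hot - probs * one_hot, axis=0)  # Eq. (2), per class
    return np.mean(inter / (union + eps))                      # Eq. (3), averaged

# Training would then minimize 1.0 - soft_mean_iou(...), as in Eq. (4).
```

<p>Each channel is treated as its own binary foreground map, so the binary gradient argument from the paper applies class by class.</p>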
<p>A fairly low capacity model, the <strong>VGG6xs-Fc6-k1-512-Deconv-k64s32</strong>, was used for an apples-to-apples comparison of the two loss functions. The name is my own convention, but essentially we have: the front end of a VGG16 trimmed down to 5 convolution layers, keeping one of each differently sized layer; the number of filters in the first layer reduced from 64 to 16, but still doubling in each subsequent layer; ‘Fc6’ with kernel size 1x1 instead of 7x7 and 512 hidden units; and a single deconvolution layer with a 64x64 kernel and stride 32.</p>
<p><img src="/img/vgg6xs-fc6-k1-512-deconv-k64s32-iou-vs-xent.png" alt="iou-vs-xent" /></p>
<p>My dataset has a class distribution of approximately 13% class 1 for training and 40% class 1 for validation. In my very preliminary experiments, I have found the IoU method to be much more sensitive to the choice of learning rate and batch size, even with the fairly robust Adam gradient descent scheme. A fairly high learning rate of 1e-3 led to an unusual instability where the network always predicted class 0, with a few small point sources of class 1, resulting in a flatline validation mIoU of 31%. This checks out with the aforementioned class balance, since we get approximately 0% IoU for class 1 and 60% IoU for class 0 by always predicting class 0, which averages to 30%. It can therefore be said that the network has learned something useful for mIoU scores above 30%.</p>
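<p>The flatline figure can be sanity-checked with simple per-class IoU arithmetic:</p>

```python
# Validation split: ~40% of pixels are class 1, 60% class 0.
p1 = 0.40

# A degenerate network that always predicts class 0:
iou_class0 = (1 - p1) / 1.0   # intersection 60% of pixels, union 100%
iou_class1 = 0.0 / p1         # intersection 0%, union 40%

miou = (iou_class0 + iou_class1) / 2
print(miou)                   # ~0.3, the observed flatline
```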
<p>Reducing the learning rate to 1e-4, adding dropout regularization, and increasing mini-batch size to 10, resulted in a fairly nice comparison of the two loss functions. The only difference between the solid and dashed lines in the above figure is that ‘IoU-loss’ was trained with the loss function from Listing 2, while ‘Xent-loss’ was trained with cross-entropy softmax loss as in Listing 1. Optimizing IoU directly resulted in a <strong>3.42</strong>% boost in mIoU on my validation set. This difference will likely grow when a higher capacity model is used.</p>
<p><img src="/img/xent-vs-iou-stp5800.png" alt="iou-xent-step-5800" /></p>
<p>The above image shows, from left to right: a sample input, the network output at step 5800, and the mask. The top uses the IoU loss from Listing 2, while the bottom uses cross-entropy loss from Listing 1. In general, the IoU loss recovers false negatives but makes more false positives. The top is fuzzy around the object border because the output has not been thresholded.</p>
<p><img src="/img/vgg7xs-fc6-512-rgb-bs10-lr1e-4-2.png" alt="1" /></p>
<p>Above, IoU loss, below, xent loss.</p>
<p><img src="/img/vgg7xs-fc6-512-rgb-bs10-lr1e-4-xent-2-.png" alt="2" /></p>
<p>Below are some more samples drawn from validation images, with models trained to 11k steps.</p>
<p><img src="/img/vgg7xs-fc6-512-rgb-bs10-lr1e-4-4.png" alt="4" /></p>
<p>Above, IoU loss gets 3/3 of class 1 objects, while below, xent loss identifies 2/3.</p>
<p><img src="/img/vgg7xs-fc6-512-rgb-bs10-lr1e-4-xent-4-.png" alt="3" /></p>
<h1>Practical Guide to Technical Writing in Engineering</h1>
<p>2016-11-25 · /writing/2016/11/25/practical-guide-technical-writing</p>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: {
equationNumbers: {
autoNumber: "AMS"
}
},
tex2jax: {
inlineMath: [ ['$','$'], ['\\(', '\\)'] ],
displayMath: [ ['$$','$$'] ],
processEscapes: true,
}
});
</script>
<script type="text/javascript" async="" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
<h1 id="introduction">Introduction</h1>
<p><em>Despite what you may have been led to believe, the <strong>purpose</strong> or <strong>problem description</strong> is not to “develop deep knowledge” or “learn how to use the software”. The purpose is a specific, attainable, quantifiable goal, in terminology appropriate for the field, but that one of your peers not taking the class could be reasonably expected to understand. A generic example is provided below, while the notation <strong>ENGG*4420</strong> is used to suggest examples for the Real Time course. Some examples have been reproduced or adapted from previous reports with permission.</em></p>
<h2 id="problem-description">Problem Description</h2>
<p>A comprehensive resource for scientific technical writing in undergraduate Engineering courses is absent. To address this information gap, a set of guidelines is proposed that makes specific recommendations regarding the use of equations, figures, and writing style.</p>
<p><strong>ENGG*4420</strong></p>
<ul>
<li>
<p>The purpose of this lab was to implement a quarter-car suspension model in LabVIEW, and compare the performance of a passive system to that of a semi-active linear quadratic regulator (LQR) controlled system. Several performance measures were devised, including the vertical acceleration of the quarter car sprung mass, and suspension deflection, when the model was subject to sinusoidal and step inputs …</p>
</li>
<li>
<p>The model was to be architected using a modular plant model with the LQR controller in a separate LabVIEW timed loop, such that the controller could be evaluated deterministically in the LabVIEW RTOS, and in the future easily scaled up to a full-car model.</p>
</li>
</ul>
<h1 id="background">Background</h1>
<p>The background is full of equations, some in text, as in $ PV=nRT $, and some on their own, as in \eqref{eq:softmax}. Useful equations that we wish to refer to later in the text are defined on their own line, centered, with the equation number flush right.</p>
<script type="math/tex; mode=display">\begin{equation} \label{eq:softmax}
y_i = \frac{exp(z_i)}{\sum\limits_j exp(z_j)}
\end{equation}</script>
<p>The quotient rule is applied to \eqref{eq:softmax} to obtain the gradient, $ \frac{\partial{y_i}}{\partial{z_j}} $, for the case when $ j = i $. Given that $ \frac{\partial{\sum_j exp(z_j)}}{\partial{z_i}} = exp(z_i) $, and not labeling equations corresponding to intermediate steps, $ \frac{\partial{y_i}}{\partial{z_i}} $ can be evaluated as:</p>
<script type="math/tex; mode=display">\frac{\partial{y_i}}{\partial{z_i}} = \frac{ exp(z_i) \cdot \sum_j exp(z_j) - exp(z_i) \cdot exp(z_i) }{ \big({\sum_j exp(z_j)}\big)^2}</script>
<script type="math/tex; mode=display">\frac{\partial{y_i}}{\partial{z_i}} = \frac{ exp(z_i) \cdot \sum_j exp(z_j) - exp(z_i) \cdot exp(z_i) }{ \sum_j exp(z_j) \sum_j exp(z_j) }</script>
<script type="math/tex; mode=display">\frac{\partial{y_i}}{\partial{z_i}} = \frac{ exp(z_i) \big( \sum_j exp(z_j) - exp(z_i) \big) }{ \sum_j exp(z_j) \sum_j exp(z_j) }</script>
<p>\begin{equation} \label{eq:penultimate}
\frac{\partial{y_i}}{\partial{z_i}} = \frac{exp(z_i)}{\sum_j exp(z_j)} \cdot \big( 1 - \frac{exp(z_i)}{\sum_j exp(z_j)} \big)
\end{equation}</p>
<p>Recognizing that \eqref{eq:penultimate} is composed of \eqref{eq:softmax}, \eqref{eq:penultimate} can be reduced to \eqref{eq:grad_i_eq_j}.</p>
<p>\begin{equation} \label{eq:grad_i_eq_j}
\frac{\partial{y_i}}{\partial{z_i}} = y_i \cdot \big( 1 - y_i \big)
\end{equation}</p>
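<p>The closed form in the final equation above is easy to verify with a central finite difference:</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())              # shift for numerical stability
    return e / e.sum()

z = np.array([0.5, -1.2, 2.0])
y = softmax(z)
i, h = 0, 1e-6

# Central-difference estimate of dy_i/dz_i
zp, zm = z.copy(), z.copy()
zp[i] += h
zm[i] -= h
numeric = (softmax(zp)[i] - softmax(zm)[i]) / (2 * h)

analytic = y[i] * (1 - y[i])             # y_i (1 - y_i)
print(abs(numeric - analytic))           # tiny: finite-difference noise only
```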
<p><em>There is universal agreement in the Engineering community on formatting equations as above; referring to equations in text with only circle braces, however, is IEEE inspired. You don’t have to use IEEE style, but do always use circle braces. Equation (1), equation (1), and eq. (1) are also accepted. One of the primary motivations for typesetting your own equations, when you could otherwise copy them from the Lab Manual, is that it makes you more aware of variables or terms that need to be explained to the reader.</em></p>
<p>The class of image $f$, is taken to be that of the template, $t$, corresponding to the maximum correlation coefficient, $\gamma$, in the normalized 2D cross-correlation \cite{match_template} given by \eqref{eq:ncc}. In \eqref{eq:ncc}, $\overline{t}$ is the template mean, and $\overline{f}_{u,v}$ is the image mean in the region $f(x,y)$ spanned by $t$ centered at $u$,$v$.</p>
<script type="math/tex; mode=display">\begin{equation} \label{eq:ncc}
\gamma(u,v) = \frac{\sum_{x,y}[f(x,y)-\overline{f}_{u,v}][t(x-u, y-v)-\overline{t}]}{\sqrt{\sum_{x,y}[f(x,y)-\overline{f}_{u,v}]^2 \sum_{x,y}[t(x-u, y-v)-\overline{t}]^2}}
\end{equation}</script>
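<p>Two properties of the coefficient defined above are worth remembering: it equals 1 when the patch under the template matches exactly, and it is invariant to affine brightness/contrast changes of the patch. A minimal check at a single $(u, v)$ offset:</p>

```python
import numpy as np

def ncc_at(f_patch, t):
    """Normalized cross-correlation for one offset; both arrays share a shape."""
    fd = f_patch - f_patch.mean()
    td = t - t.mean()
    return np.sum(fd * td) / np.sqrt(np.sum(fd ** 2) * np.sum(td ** 2))

rng = np.random.default_rng(0)
t = rng.random((5, 5))
print(ncc_at(t.copy(), t))        # ~1.0: exact match
print(ncc_at(2.0 * t + 3.0, t))   # ~1.0: brightness/contrast invariant
```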
<p><strong>ENGG*4420</strong></p>
<ul>
<li>
<p>The vertical acceleration of the sprung mass is the dominant force experienced by a vehicle’s occupants, and is therefore a suitable proxy for ride quality …</p>
</li>
<li>
<p>An additional goal of a suspension system is to maintain good road handling on a variety of surfaces. Tire deflection is a good
measure of how effective the suspension system is at road handling …</p>
</li>
<li>
<p>A suspension system also has to support the vehicle’s static weight under gravity. It was required that the suspension deflection remain within fixed physical bounds at all times under this load, and for all road disturbances in the design specification …</p>
</li>
<li>
<p>Letting $x_1$ be the suspension deflection, $x_2$ the absolute velocity of the sprung mass, $x_3$ the tire deflection, and $x_4$ the velocity of the unsprung mass, we may represent the passive suspension system in state space form in \eqref{eq:ss-passive}.</p>
</li>
</ul>
<p>\begin{equation}
\label{eq:ss-passive}
\dot{X} = AX + L\dot{z_r}
\end{equation}</p>
<p>Adding a variable damper, $B_{semi}$, with matrix $N$ to \eqref{eq:ss-passive} results in the semi-active model \eqref{eq:ss-semi}.</p>
<p>\begin{equation}
\label{eq:ss-semi}
\dot{X} = AX + NXB_{semi} + L\dot{z_r}
\end{equation}</p>
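<p>The lab’s $A$, $L$, and $N$ matrices are not reproduced in this guide, but the passive model above can be sketched with one common quarter-car parameterization. Every number below is an illustrative assumption, not a value from the lab manual:</p>

```python
import numpy as np

# Illustrative quarter-car parameters (assumed, not from the lab manual)
ms, mu = 250.0, 50.0      # sprung / unsprung mass [kg]
ks, kt = 16e3, 160e3      # suspension / tire stiffness [N/m]
bs = 1.0e3                # passive damper [N s/m]

# States: x1 suspension deflection, x2 sprung-mass velocity,
#         x3 tire deflection,       x4 unsprung-mass velocity
A = np.array([
    [0.0,     1.0,     0.0,      -1.0],
    [-ks/ms, -bs/ms,   0.0,       bs/ms],
    [0.0,     0.0,     0.0,       1.0],
    [ks/mu,   bs/mu,  -kt/mu,    -bs/mu],
])
L = np.array([0.0, 0.0, -1.0, 0.0])

# Forward-Euler simulation of x_dot = A x + L zr_dot for a sinusoidal road
dt, T = 2e-4, 2.0
amp, w = 0.01, 2 * np.pi          # 1 cm road amplitude at 1 Hz
x = np.zeros(4)
for k in range(int(T / dt)):
    zr_dot = amp * w * np.cos(w * k * dt)
    x = x + dt * (A @ x + L * zr_dot)
```

<p>The semi-active model adds the $NXB_{semi}$ term inside the loop, with $B_{semi}$ chosen by the LQR controller at each step.</p>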
<p>tions using only a handful of blocks. Examples of these were; a bumpy road simulated
by a sine wave, a flat road simulated by a constant, and a sharp curb simulated by a step input.</p>
<ul>
<li>
<p>It is expensive to find the perfect road conditions to test on; by using LabVIEW one can save a lot of time by simulating the required input rather than creating the physical real-world conditions.</p>
</li>
<li>
<p>LabVIEW allows one to simulate smaller components in isolation, prior to testing a complete system. In this lab, several assumptions are made regarding the tire/road interface and vehicle body to simulate the suspension of a quarter-car section. This is very challenging to do in the real world, where these assumptions do not hold.</p>
</li>
<li>
<p>In this manner, a system can be thoroughly tested before going into production, knowing how that system will behave under all road conditions and with the specifications of all the components.</p>
</li>
<li>
<p>Perhaps the magnetic fluid used in variable dashpots has not yet been invented, but we wish to determine if there is any advantage to semi-active suspension systems in terms of their dynamics. Knowing how dashpots behave in general, opposing motion with a force proportional to the difference in velocity of the objects connected to it on either end, we can proceed with a simulation of how the technology <em>would</em> work, if all the technology was in place.</p>
</li>
</ul>
<h1 id="implementation">Implementation</h1>
<p><em>Depending on the course, this section might be called <strong>Detailed Design</strong>, <strong>Methodology</strong> or something entirely different. Regardless, this is where you want to present the work that you did, and any novel contributions as clearly and effectively as possible.</em></p>
<h2 id="figures">Figures</h2>
<p>Every figure should communicate something that you can’t quite do effectively with words. Every figure must have a purpose and be legible to someone with normal human vision. Figure 1 shows how momentum velocity affects learning in a multi-layer perceptron (MLP), with cross-entropy at epoch 250 labelled clearly, to let the reader easily compare different settings for $\alpha$. Don’t be afraid to write a long caption; the one in Figure 1 is on the shorter end. Imagine the page containing your figure has been separated from the report: is the caption descriptive enough for someone to make sense of it?</p>
<p><img src="/img/mlp.png" alt="MLP" height="600px" width="800px" />
Figure 1: MLP cross entropy for training and test sets with four settings of momentum velocity, $\alpha$, up to 500 training epochs, a fixed learning rate of 0.01, and 250 hidden units.</p>
<p><em>If you calculated something by legitimately <strong>using</strong> a figure or reading a value from a curve, then say so, but be as specific as possible. Most likely this <strong>isn’t</strong> the case for the Real Time</em> course.</p>
<p>The Reynolds number, Re, was read from Figure 2 using the friction factor, $ f $, and relative roughness, $ k/d $, given the assumptions stated in Section 2.</p>
<p>Figure 2. Not reproduced due to excessive file size. Available at <strong>https://grantingram.wordpress.com/2009/04/22/moody-diagram/</strong></p>
<p>If someone else generated a figure that you would like to reuse, and you have their <em>permission</em>, state that it has been <strong>reproduced</strong>. It’s not sufficient to cite the source of the figure; a bare citation implies that you created the figure yourself and were simply inspired by their work. If you substantially modified someone else’s figure, it’s fair to say <strong>adapted</strong> instead. If you generated a figure with someone else’s data, you can simply cite the source of the data. The Python source code for Figure 2 is GNU GPL licensed, so it’s fair game.</p>
<p><strong>ENGG*4420</strong></p>
<p><em>If it’s the lab manual for the course, you can assume you have permission to reproduce, but allow me to make a plea as to why you shouldn’t take screenshots of equations and paste them into the lab manual.</em></p>
<ul>
<li>
<p><em>It wouldn’t be accepted in any other context, so why learn a bad habit now?</em></p>
</li>
<li>
<p><em>It is much harder to mindlessly throw equations into your report without context, after going through the effort of type-setting them yourself.</em></p>
</li>
<li>
<p><em>There is probably research to suggest that it makes me subconsciously biased against the rest of the report.</em></p>
</li>
<li>
<p><em>What should you even call that thing? A Figure? It should be an equation, or a matrix!</em></p>
</li>
</ul>
<h2 id="source-code">Source Code</h2>
<p><em>What about your source code? The truth is, no one wants to look at raw source code in the body of a report, especially not a raster-graphic screenshot of the code from the Eclipse IDE. If you must show code, keep it short, to the point, and formatted. If you are using LaTeX, <strong>‘\lstinputlisting’</strong> is your friend. Otherwise, paste the code as <strong>text</strong>, not a raster graphic. A good way to keep the code clean, if you don’t want to comment it, is to refer to a specific listing, in the same way that you refer to a figure. Please don’t refer to <strong>the code above/below</strong>, as it sometimes gets pushed 3 pages later when the report is said and done.</em></p>
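<p>If you do use the listings package, one incantation replaces the screenshot entirely; the file name and line range below are illustrative:</p>

```latex
\usepackage{listings}  % in the preamble

% In the body: pull lines 10-25 straight from the source file, so the
% report never drifts out of sync with the code.
\lstinputlisting[language=C, firstline=10, lastline=25,
                 caption={counting\_task excerpt.},
                 label={lst:counting}]{counting_task.c}
```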
<p>Listing 1: Python source for multinomial logistic regression while sweeping the number of principal components representing a cropped and downsampled image.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">xrange</span><span class="p">(</span><span class="n">n_samples</span><span class="p">):</span>
<span class="k">for</span> <span class="n">c</span> <span class="ow">in</span> <span class="nb">xrange</span><span class="p">(</span><span class="n">start_class</span><span class="p">,</span> <span class="n">end_class</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
<span class="c"># Predict with j transposed principal components </span>
<span class="n">p</span><span class="o">=</span><span class="n">logreg</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">pca</span><span class="o">.</span><span class="n">components_</span><span class="o">.</span><span class="n">T</span><span class="p">[</span><span class="n">j</span><span class="p">]</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">))</span>
</code></pre>
</div>
<p><strong>ENGG*4420</strong></p>
<p>The counting_task, shown in Listing 2, increments a counter each time the LCD_count semaphore is posted in <strong>event_task</strong>, up to the COUNT_MAX value, then rolls over to 0.</p>
<p>Listing 2: Altera <strong>counting_task</strong> with priority 9.</p>
<figure class="highlight"><pre><code class="language-c" data-lang="c"><span class="n">void</span> <span class="n">counting_task</span><span class="p">(</span><span class="no">IOdevices_control</span> <span class="o">*</span><span class="no">IOdevices</span><span class="p">)</span> <span class="p">{</span>
<span class="no">INT8U</span> <span class="n">err1</span><span class="p">,</span> <span class="n">err2</span><span class="p">;</span>
<span class="no">INT8S</span> <span class="n">count_i</span> <span class="o">=</span> <span class="o">-</span><span class="mi">1</span><span class="p">;</span>
<span class="k">while</span> <span class="p">(</span><span class="mi">1</span><span class="p">)</span>
<span class="p">{</span>
<span class="no">OSSemPend</span><span class="p">(</span><span class="no">LCD_count</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="o">&</span><span class="n">err1</span><span class="p">);</span>
<span class="k">if</span><span class="p">(</span><span class="no">OS_NO_ERR</span> <span class="o">==</span> <span class="n">err1</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">count_i</span> <span class="o">=</span> <span class="p">(</span><span class="n">count_i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="no">COUNT_MAX</span><span class="p">;</span>
<span class="nb">printf</span><span class="p">(</span><span class="s2">"%4d</span><span class="se">\n</span><span class="s2">"</span><span class="p">,</span> <span class="n">count_i</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span></code></pre></figure>
<h1 id="results">Results</h1>
<p><em>There are many ways to present your results, certain things are best illustrated with figures, others with tables. Say at least one thing that is non-obvious, and insightful, regarding each of the figures.</em></p>
<p><img src="/img/canny_sigma_1_3_hough.png" alt="MLP" /></p>
<p>Figure 3: (<em>Left</em>) Output of Canny edge detector with Gaussian smoothing parameter, $ \sigma $, equal to 1, 2, and 3, for rows 1, 2, and 3 respectively. (<em>Right</em>) Resulting probabilistic Hough lines \cite{prob_hough}, with line gap of 3px, and minimum line length of 25px. Produced with scikit-image Python library \cite{scikit-image}.</p>
<p><em>Some figures compare many things at once and have an inherent structure, as in Figure 3. In this case, use circle braces and italics to explain each section of the figure where natural to do so. The style you use for this doesn’t matter as long as it is consistent and sufficiently descriptive.</em></p>
<p>An encouraging result was obtained when Listing 1 was run with $ds_v$, $ds_h$ = 2, and keeping only the first three principal components, yielding 100% accuracy. For the sake of minimizing the algorithm execution time, the downsampling factors $ds_v$ and $ds_h$ were incremented by hand in steps of one until the accuracy began to drop. It was found that $ds_v$ and $ds_h$ could be increased all the way to 21 while maintaining 100% accuracy, where nearly all of the detail was lost in terms of what is visible to the human eye. Despite losing much of the image content, a suitable representation for the classification task could be obtained, yielding the results summarized in Table 1.</p>
<p>Table 1: Three-class logistic regression classification accuracy for dataset X. Principal components from cropped and downsampled $ 3 \times 15 $ px images as features.</p>
<table>
<thead>
<tr>
<th>Principal-components</th>
<th>#-Correct</th>
<th>Accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>19</td>
<td>48.7</td>
</tr>
<tr>
<td>2</td>
<td>30</td>
<td>76.9</td>
</tr>
<tr>
<td>3</td>
<td>37</td>
<td>94.9</td>
</tr>
<tr>
<td>4</td>
<td>36</td>
<td>92.3</td>
</tr>
<tr>
<td>5</td>
<td>36</td>
<td>92.3</td>
</tr>
<tr>
<td>6</td>
<td>37</td>
<td>94.9</td>
</tr>
<tr>
<td>7</td>
<td>37</td>
<td>94.9</td>
</tr>
<tr>
<td>8</td>
<td>38</td>
<td>97.4</td>
</tr>
<tr>
<td>9</td>
<td>38</td>
<td>97.4</td>
</tr>
<tr>
<td>10</td>
<td>39</td>
<td>100.0</td>
</tr>
</tbody>
</table>
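<p>The component sweep summarized in the table above can be sketched end to end. The snippet below substitutes synthetic data for dataset X and a nearest-centroid classifier for the multinomial logistic regression, so the accuracies are illustrative only:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for dataset X: 39 cropped 3x15 px (45-dim) images in
# three classes, each class a noisy template. The real data isn't reproduced here.
templates = rng.random((3, 45))
labels = np.repeat([0, 1, 2], 13)
images = templates[labels] + 0.1 * rng.standard_normal((39, 45))

# PCA via SVD of the mean-centred data
mean = images.mean(axis=0)
_, _, Vt = np.linalg.svd(images - mean, full_matrices=False)

accs = {}
for n_components in (1, 2, 3):
    feats = (images - mean) @ Vt[:n_components].T   # project onto leading PCs
    # Nearest-centroid classifier as a simple stand-in for the report's
    # multinomial logistic regression
    centroids = np.array([feats[labels == c].mean(axis=0) for c in range(3)])
    pred = np.argmin(((feats[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    accs[n_components] = (pred == labels).mean()
    print(n_components, accs[n_components])
```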
<p><strong>ENGG*4420</strong></p>
<p><em>Knowledge of control theory is not assumed, but systems concepts like position, velocity, and acceleration are fair game.</em></p>
<ul>
<li>
<p>The set of weights $ \rho_1 $ resulted in 30% less suspension deflection than $ \rho_2 $</p>
</li>
<li>
<p>The maximum velocity occurs at t = x, halfway to the maximum suspension deflection …</p>
</li>
<li>
<p>The maximum acceleration occurs at t = 0, the instant the step is applied, and gradually decreases until …</p>
</li>
</ul>
<h1 id="conclusion">Conclusion</h1>
<p><em>Try to find a clever way to avoid a boring boilerplate ending that begins with “In summary” or “To conclude”. The Conclusion is not for leftovers, i.e. things you couldn’t fit into the discussion. You shouldn’t need to reference figures or equations. You are trying to take a step back, see the big picture, and make sense of what was found in the Results section. What were the most important findings, and why do they matter to your audience?</em></p>
<p><strong>ENGG*4420</strong></p>
<ul>
<li>
<p>Both passive and semi-active suspension systems were implemented in LabVIEW, and evaluated against the road disturbances from the formal design specifications. It was found that a semi-active system suppressed undesirable transient behaviour, reducing peak vertical acceleration by 50%, and settling time by 70%, for a step input. There was, however, little benefit to the variable damper under steady-state harmonic inputs, which is analogous to driving at constant speed on a washboard road surface …</p>
</li>
<li>
<p>It was found that, among two sets of weights that penalized various performance characteristics in the LQR objective function, the set that more heavily penalized X resulted in Y.</p>
</li>
</ul>
<h1 id="epilogue">Epilogue</h1>
<h2 id="the-restaurant">The Restaurant</h2>
<p>Imagine you own a small restaurant that serves appetizers, entrees, drinks, and desserts. Your profit margin is highest on drinks and desserts; naturally, you would like to sell as many drinks and desserts as possible. If the appetizers and entrees are lousy, do you think your patrons will order dessert? If the water is foul, do you think your patrons will order cocktails?</p>
<p>Illegible figures, lack of punctuation (e.g. <em>let’s eat Grandma</em> vs. <em>let’s eat, Grandma</em>), and ambiguous wording are all things that contribute to a poor dining experience in this mythical restaurant that is your report. If blurry figures and imprecise wording are the entrees, then verbosity is the free bread, or chips and salsa, at the restaurant. You want your patrons, the reader, to save room for dessert, but verbosity will quickly satisfy their appetite for more.</p>
<h2 id="quoth-the-raven-furthermore">Quoth the Raven Furthermore</h2>
<p>Watch out for your use of words like further, furthermore, and additionally. These words are like salt and pepper: used appropriately they can enhance the flavour of the meal, but if the waiter empties the contents of the salt and pepper shakers onto your plate, the result is quite unpleasant. Nevermore should you begin a paragraph with furthermore.</p>
<h3 id="to-do">To do</h3>
<ul>
<li>fix citations</li>
</ul>