Automated image analysis for remote ice detection

Groves, Joshua

Student Report, 2010-12-01

NRC Publications Archive / Archives des publications du CNRC

For the publisher's version, please access the DOI link below:
https://doi.org/10.4224/17712927

NRC Publications Record / Notice d'Archives des publications de CNRC:
https://nrc-publications.canada.ca/eng/view/object/?id=6e85800f-c518-4b96-8e11-249170e0f031
https://publications-cnrc.canada.ca/fra/voir/objet/?id=6e85800f-c518-4b96-8e11-249170e0f031


DOCUMENTATION PAGE

REPORT NUMBER: SR-2010-24
DATE: December 2010
REPORT SECURITY CLASSIFICATION: Unclassified
DISTRIBUTION: Unlimited
TITLE: AUTOMATED IMAGE ANALYSIS FOR REMOTE ICE DETECTION
AUTHOR(S): Joshua Groves
CORPORATE AUTHOR(S)/PERFORMING AGENCY(S): Institute for Ocean Technology, National Research Council, St. John's, NL
KEY WORDS: Python, Foggy Icing Sensor, icing levels, RIDE
PAGES: ii, 24, App. A-C
FIGS.: 16

SUMMARY

As recent technology has become more heavily integrated into research opportunities, researchers have taken advantage of computer processing to produce results for modern, innovative methods that were previously unfeasible. One research project, led by Research Officer Dr. Robert Gagnon of the National Research Council’s Institute for Ocean Technology (NRC-IOT), makes use of image analysis to automatically produce results from the novel remote foggy icing sensor that his team has created. Images acquired by the sensor’s camera system are given directly to a program which can analyze how much ice is present on the structure in the captured images. The program created for the purpose of extracting this raw data is written in Python, an efficient programming language which enables rapid development and robust techniques for dealing with potential errors that may occur.

ADDRESS

National Research Council

Institute for Ocean Technology

Arctic Avenue, P. O. Box 12093

St. John's, NL A1B 3T5


National Research Council Canada / Conseil national de recherches Canada

Institute for Ocean Technology / Institut des technologies océaniques

AUTOMATED IMAGE ANALYSIS FOR REMOTE ICE DETECTION

SR-2010-24

Joshua Groves


Table of Contents

1 Introduction . . . 1

1.1 General Overview . . . 1

1.2 Application of Image Processing . . . 1

2 Automated Image Analysis for Remote Ice Detection Equipment (RIDE) . . . 2

2.1 General Icing Information . . . 2

2.2 NRC-IOT’s Remote Ice Detection Equipment (RIDE) . . . 3

2.2.1 Theory . . . 3

2.2.2 Apparatus Components . . . 7

2.2.3 Apparatus Wiring . . . 9

2.2.4 Software for Device Control . . . 10

3 Data Analyzer for RIDE Camera Images (IOT-DARCI) . . . 12

3.1 Purpose . . . 12

3.2 Common Initial Procedural Steps . . . 12

3.3 Range Weighted Center Method (WRCM) . . . 14

3.4 Mirrored Subtraction Method (MSM) . . . 15

3.5 Configuration . . . 16

3.6 Common Final Procedure Steps . . . 19

3.7 Results . . . 19

3.7.1 Testing Procedure . . . 19

3.7.2 Measurements . . . 20

3.8 IOT-DARCI Execution . . . 20

4 Recommendations . . . 22

5 Conclusion . . . 23

6 References . . . 24


List of Figures

1 NRC-IOT’s Remote Ice Detection Equipment . . . 3

2 Camera and Laser Setup . . . 4

3 Pertinent Angles . . . 5

4 Laser Refraction Being Captured By Detector . . . 5

5 Rotational Angles of the Detector (RV) and Laser (RL) . . . 6

6 Remote Controls for RIDE . . . 7

7 Main Components of RIDE . . . 8

8 Full Apparatus Wiring . . . 9

9 Main Window of PTR Control Emulator for QPT . . . 10

10 Main Window of Canon ZoomBrowser EX . . . 11

11 Example of Ideal RIDE Light Spots . . . 12

12 RIDE Camera Image with Matching Row Summation . . . 13

13 WRCM Single Image Output Graph . . . 15

14 MSM Single Image Output Graph . . . 16

15 Ice Accumulation on Lifeboat Hook . . . 20

16 Testing Results for Image Set . . . 21

Appendices

Appendix A: DARCI-WRCM Source Code

Appendix B: DARCI-MSM Source Code

Appendix C: DARCI Sample Configuration

1 Introduction

1.1 General Overview

As recent technology has become more heavily integrated into research opportunities, researchers have taken advantage of computer processing to produce results for modern, innovative methods that were previously unfeasible. One research project, led by Research Officer Dr. Robert Gagnon of the National Research Council’s Institute for Ocean Technology (NRC-IOT), makes use of image analysis to automatically produce results from the novel remote foggy icing sensor that his team has created. Images acquired by the sensor’s camera system are given directly to a program which can analyze how much ice is present on the structure in the captured images. The program created for the purpose of extracting this raw data is written in Python, an efficient programming language which enables rapid development and robust techniques for dealing with potential errors that may occur.

1.2 Application of Image Processing

Images collected from the Foggy Icing Sensor are analyzed through row/column pixel intensity summation techniques, which allow the system to retrieve data on the amount of ice build-up on a structure from any individual captured image. Program settings allow users to adjust parameters for certain scenarios, such as cases where the lighting for the camera might otherwise alter the results. This technology will allow analysis of ice build-up on any surface, especially where the ice may grow to a thickness that becomes hazardous to the safe operation of machinery. By detecting this thickness level, the foggy icing sensor can send adequate warnings and provide detailed logs of icing levels in remote places where icing cannot be measured by any other current technology due to specific constraints. In this way, it has the potential to be less expensive while being more useful in situations where other types of sensors would not be applicable.

2 Automated Image Analysis for Remote Ice Detection Equipment (RIDE)

2.1 General Icing Information

When ice accumulates on machinery whose surfaces or weight are very sensitive to change, it can have disastrous effects. This is especially evident for vessels and aircraft, where the ice can cause loss of control and therefore damage quite easily. For vessels, ice accumulation is hazardous due to the additional weight it adds. Rapid accumulation can easily occur at sea, since sea spray often deposits massive amounts of water on the surface of the vessel, where it may freeze. Any amount of ice accumulation can reduce the stability of a vessel, as its center of mass shifts due to nonuniform ice retention on various parts of the vessel. [1] With sea spray, for example, ice accumulation is much more prevalent on the bow of the vessel. This may result in a shift in pitch, meaning the trim angle of the vessel must be adjusted to compensate; however, these adjustments are limited, and therefore the ice becomes hazardous to the safe operation of the vessel.

There has also been great interest in this type of remote ice detection system for space shuttles. Space shuttles require reliable methods of analyzing ice accumulation before deciding whether it is hazardous to begin the launch procedure. However, similarly to vessels, it is both dangerous and difficult to measure ice on some of the surfaces most prone to hazardous accumulation. If a launch proceeds without awareness of ice accumulation, pieces of ice could break off and damage the shuttle, giving rise to great potential for disaster.

Similarly, for aircraft, large amounts of ice accumulation are quite detrimental. The weight affects the aircraft less than the airflow disruption the ice may cause. Ice formations cause drag, so the airplane must increase its angle of attack, but this allows the underside of the wings and fuselage to accumulate additional ice. As the AOPA Air Safety Foundation states, “In moderate to severe conditions, a light aircraft can become so iced up that continued flight is impossible. The airplane may stall at much higher speeds and lower angles of attack than normal. It can roll or pitch uncontrollably, and recovery might be impossible.” [6] The danger of ice accumulation is clear. There is a need for checking ice conditions, and some technology exists to do this. However, there are situations where certain materials must be used for a surface, or a material may be too flammable or explosive for wires to be run nearby. With these constraints, a remote system is required, and that is what NRC-IOT’s Remote Ice Detection Equipment (RIDE) hopes to accomplish.

2.2 NRC-IOT’s Remote Ice Detection Equipment (RIDE)

2.2.1 Theory

RIDE addresses this issue and hopes to create a unified, innovative way of remotely analyzing surfaces where ice accumulation has occurred. To explain RIDE properly, it is first necessary to explain the theory behind the apparatus. RIDE can be used in many scenarios, but this report targets a general understanding of the system’s application. Firstly, the types of ice accumulation that can occur must be introduced, as RIDE must deal with these cases differently.

Figure 1: NRC-IOT’s Remote Ice Detection Equipment

There are two main classifications of icing build-up, differing by the rate at which the ice accumulates on the surface. These types are ‘glaze’ ice, which occurs when accumulation is relatively slow, and ‘rime’ ice, which occurs when accumulation is rapid. For the sensor used here, different methods must be used to calculate the thickness of each type, since the sensor uses light behaviour to track the thickness of the ice. In the apparatus, a laser is directed at the iced surface, and a camera is used to take photographs of the area where the laser is directed through the ice. A computer program then analyzes the photograph, providing data on the actual ice thickness. The glaze ice method will not be discussed in this report, since the program created in Python is to be used solely for rime ice. In the future, this program will be integrated with another program that was created to analyze glaze ice; the combined program should be able to determine which method it must use to analyze the photographs. The main focus of this report, however, is analyzing photographs taken of rime ice.

To measure the thickness of rime ice, the images that the camera returns ideally should not have any pixels that are fully saturated in all channels. The images should contain two spots whose centers can be determined. The first spot corresponds to where the laser’s light first strikes the top layer of the ice, and the second spot is where the light exits after refracting through the ice layer. The distance between these two centers is proportional to the amount of ice accumulation that has occurred on the surface. For these images to be produced, both the laser and camera must be at angles to the surface, and it follows that changing the angle of either will produce greater or lesser distances between the light spots. To convert the pixel distance between centers into a real ice thickness measurement, several formulas must be used to calculate the angles involved.

Referring to Figure 2, the angles A and B can be calculated by the following equations. In these equations, θ_camera pan, θ_camera tilt, θ_laser pan, and θ_laser tilt are the pan and tilt angles (of the camera and laser, respectively). These angles are known from the potentiometers of both devices.

$$A = 90 - \arccos\left[\frac{\sin(90 - \theta_{\text{camera pan}})\cos(\theta_{\text{camera tilt}})}{\cos\left(\arctan\frac{\tan(\theta_{\text{camera tilt}})}{\sin(90 - \theta_{\text{camera pan}})}\right)}\right] \tag{1}$$

$$B = 90 - \arccos\left[\frac{\sin(90 - \theta_{\text{laser pan}})\cos(\theta_{\text{laser tilt}})}{\cos\left(\arctan\frac{\tan(\theta_{\text{laser tilt}})}{\sin(90 - \theta_{\text{laser pan}})}\right)}\right] \tag{2}$$

Figure 2: Camera and Laser Setup

After A and B have been found for the current camera and laser position, it is possible to solve for the angle of the laser beam to the surface, a, and the angle of the camera view to the surface, b. These angles are shown in Figure 3.

$$a + b = A + B \tag{3}$$

$$\frac{\sin(a)}{\sin(b)} = \frac{D}{E} \tag{4}$$

In the previous equations, D represents the height of the laser beam, and E represents the horizontal distance between the two light spots. The incidence angles for the laser, L, and the camera, V, and the tilt of the plane, T, relative to the normal of the surface, are shown in Figure 4 and can be calculated using the following formulas.

$$L = 90 - \arctan\left[\frac{\tan(a)\cos(T)}{\sqrt{1 + (\tan(a)\sin(T))^2}}\right] \tag{5}$$

Figure 3: Pertinent Angles

$$V = 90 - \arctan\left[\frac{\tan(b)\cos(T)}{\sqrt{1 + (\tan(b)\sin(T))^2}}\right] \tag{6}$$

$$T = \arctan\left[\frac{M\sin(a)}{D\sin(a + b)}\right] \tag{7}$$

Figure 4: Laser Refraction Being Captured By Detector

After these angles have been calculated, it is then possible to calculate the rotational angles of the camera, RV, and the laser, RL, as shown in Figure 5. The following equations are used to calculate these values.

$$R_V = \arctan(\tan(b)\sin(T)) \tag{8}$$

$$R_L = \arctan(\tan(a)\sin(T)) \tag{9}$$

Figure 5: Rotational Angles of the Detector (RV) and Laser (RL)

The final two angles required for the ice thickness conversion are the refraction angles of the view, AV, and the laser, AL, found using the following computations (where n is the refractive index of the ice being analyzed).

$$A_V = \arcsin(\sin(V)/n) \tag{10}$$

$$A_L = \arcsin(\sin(L)/n) \tag{11}$$

Thus it follows that the ice thickness in pixels can be converted to millimetres by use of the following equation (where H is the thickness of the ice in millimetres, and S is the separation between the centers of the light spots in pixels).

$$H = \frac{S}{\sin(b)\cos(R_V)\tan(A_V)\left[1 + \dfrac{\cos(R_V)\tan(A_V)}{\cos(R_L)\tan(A_L)}\right]} \tag{12}$$

Therefore, there are several pieces of information required to scale the pixel separation values from the image. Based on the above formulas, and the camera settings, the values required are as follows:

• Pan of the camera, θcamera pan

• Tilt of the camera, θcamera tilt

• Pan of the laser, θlaser pan

• Tilt of the laser, θlaser tilt

• Separation distance between the centers of the light spots on the image, S

• Focal length of the camera, F

• Zoom ratio of the camera, Z

The focal length and zoom ratio of the camera are normally stored in the image’s EXIF tags if the camera saves its images as JPEG. If the images are saved in any other format, these values should be noted at the time of image capture. These values are also used to scale the distances, and can be applied after finding H, as they are simply given as constants.
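Under the same reconstruction, the whole chain from Eqs. (3) through (12) might be sketched as below. This is illustrative only: the equation forms were recovered from a damaged source, `M` is the measured offset that Eq. (7) expects, and n = 1.31 is a typical refractive index for ice rather than necessarily the value RIDE uses. The final focal-length/zoom scaling is applied separately, as the text describes.

```python
import math

def solve_a_b(A, B, D, E):
    """Solve Eqs. (3)-(4) for a and b (degrees), given D and E.

    From sin(a)/sin(C - a) = D/E with C = A + B, algebra gives
    tan(a) = (D/E) sin(C) / (1 + (D/E) cos(C)).
    """
    C = math.radians(A + B)
    r = D / E
    a = math.atan2(r * math.sin(C), 1 + r * math.cos(C))
    return math.degrees(a), math.degrees(math.radians(A + B) - a) if False else (A + B) - math.degrees(a)

def angles_a_b(A, B, D, E):
    """Cleaner wrapper: returns (a, b) in degrees."""
    C = math.radians(A + B)
    r = D / E
    a = math.atan2(r * math.sin(C), 1 + r * math.cos(C))
    b = C - a
    return math.degrees(a), math.degrees(b)

def ice_thickness(S, a, b, M, D, n=1.31):
    """Convert spot separation S (pixels) to thickness via Eqs. (5)-(12)."""
    ar, br = math.radians(a), math.radians(b)
    T = math.atan2(M * math.sin(ar), D * math.sin(ar + br))            # Eq. (7)
    L = math.pi / 2 - math.atan(math.tan(ar) * math.cos(T)
            / math.sqrt(1 + (math.tan(ar) * math.sin(T)) ** 2))        # Eq. (5)
    V = math.pi / 2 - math.atan(math.tan(br) * math.cos(T)
            / math.sqrt(1 + (math.tan(br) * math.sin(T)) ** 2))        # Eq. (6)
    RV = math.atan(math.tan(br) * math.sin(T))                         # Eq. (8)
    RL = math.atan(math.tan(ar) * math.sin(T))                         # Eq. (9)
    AV = math.asin(math.sin(V) / n)                                    # Eq. (10)
    AL = math.asin(math.sin(L) / n)                                    # Eq. (11)
    denom = (math.sin(br) * math.cos(RV) * math.tan(AV)
             * (1 + (math.cos(RV) * math.tan(AV))
                  / (math.cos(RL) * math.tan(AL))))
    return S / denom                                                   # Eq. (12)
```

In the symmetric case (A = B, D = E, M = 0) the tilt T vanishes, the rotational angles drop out, and the result reduces to a single Snell's-law correction, which is a useful consistency check.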

2.2.2 Apparatus Components

The laser is one of the main components of the sensor. For this project, a 15 mW Melles Griot laser is used. The laser and an optical zoom camera are supported by a base plate that sits on top of a Quickset motor control. This motor control allows the user to adjust the pan and tilt of the machine remotely, using Pan Tilt Remote (PTR) software. The pan and tilt angles are sent to the computer, where they can be used in the formulas discussed previously. A tripod supports both the base plate and motor control, giving the laser excellent stability.

In terms of controlling the actual laser, the focus dial of the laser can be controlled with a system consisting of three pulleys, a trapezoidal-toothed timing belt, and a winch servo. One pulley is attached directly to the dial, another is connected to the servo, and the third pulley is located on the opposite side of the servo and is used to reduce the load on the laser. With this system, the servo can be controlled from a distance by the use of a controller to rotate the dial, hence adjusting the focus remotely. The camera is connected to a monitor and a controller, which allows the operator to view the light from the laser as well as zoom in or out to obtain a better view of the surface. The motor controls and remote for the optical zoom cameras are shown in Figure 6.


Figure 6: Remote Controls for RIDE

The other main component of the apparatus is the detector, composed of a telescope and digital camera. For the telescope, a Schmidt-Cassegrain Optican Celestron CS telescope is used and mounted onto a base plate. A light filter can be used if the lighting conditions require it; the filter can be attached and removed remotely by use of a wing servo. Normally, the wing servo is only able to rotate 90 degrees, but an extension was added to allow further rotation so that it does not obstruct the view of the telescope. Similarly to the laser, two pulleys, a timing belt and a servo are attached to the focus dial on the telescope. This allows a user to control the focus of the telescope remotely, by use of a controller box. As well, a potentiometer is attached to the focus dial to measure the distance from the camera to the surface. In addition to the telescope, a Canon digital camera is mounted onto the eyepiece of the telescope. The digital camera is used to capture images remotely by use of the ZoomBrowser program on the user’s computer. After the images have been captured, they are processed by a program to return measurements based on the current settings for the RIDE setup. These calculations are made based on where the program detects the two light spot centers, and the formulas discussed previously. All of the main components of RIDE are shown in Figure 7.


Figure 7: Main Components of RIDE

The base plate which supports the detector is equipped with a potentiometer, which provides accurate readings of how many degrees the pan and tilt angles have changed since calibration. The base plate is mounted on a tripod to give the detector a high level of stability with respect to the surface where the entire RIDE system is mounted.

If weather poses a problem while operating RIDE, the laser and detector can be enclosed by clear Lexan housing. This housing should have a window that can be opened or closed to ensure that the casing does not alter the measurements; however, the differences should often be negligible.


2.2.3 Apparatus Wiring

The laser’s controller uses a serial cable that allows the computer to control the device and read the pan and tilt angles. These values are required for the computer software to calculate the ice measurements. To receive this information on the computer’s end, a terminal block was added into the wiring to allow a second serial cable to receive this output signal. These serial cables are then converted to USB using adaptors, and the USB cables are connected to a USB hub so the entire signal is transmitted along one wire. This USB hub is connected to a USB extender, which converts the USB cable to an ethernet cable and prevents major signal loss across long distances. The ethernet cable is converted back to USB near the computer, where it is plugged in directly via USB.

The optical zoom camera that is mounted on the base plate alongside the laser has two wires that connect the camera to a monitor and controller. One wire is a video cable to connect the monitor to the camera, to allow the user to see the camera’s view. The other wire is used to control the camera’s zoom feature, to zoom in or out to capture the light refraction better.

There are several other wires that run from the laser to the controller box, and these are used to control the servo. The servo, as stated previously, is used to adjust the focus on the laser and therefore retrieve more accurate measurements when the focus level is ideal.


Figure 8: Full Apparatus Wiring

For the detector, the wiring setup is quite similar to the laser’s setup. The camera attached to the telescope has a USB cable for its data, which connects to a USB hub near the detector system. This data allows the camera software, ZoomBrowser, to view and capture images from a remote location. The USB hub is connected to a USB-to-ethernet adaptor, allowing the signal to be transmitted over long distances without major degradation. Near the computer, this ethernet cable is converted back to USB, where it can be connected directly to the user’s computer. Also attached to the USB hub is an analog-to-digital convertor, which converts the signals from the three analog wires of the potentiometers attached to the tripod, returning the pan, tilt, and range.

All main wiring for the apparatus is shown in Figure 8. Note that the USB extenders are converted by an ethernet to USB adaptor at the end close to the computer.

2.2.4 Software for Device Control

IOT’s remote ice detection equipment uses two programs to control the various devices used in the apparatus. The Pan Tilt Remote (PTR) program that comes with QuickSet Pan Tilt (QPT) motor control is used to control the pan and tilt of the laser’s base plate. The angles from the motor control are also used as part of the scaling factor for finding the actual ice thicknesses after the images have been processed. The control window for PTR is shown in Figure 9.

Figure 9: Main Window of PTR Control Emulator for QPT

The other program used for device control is ZoomBrowser EX, which controls the Canon cameras that RIDE uses. Rather than redeveloping these basic functions, RIDE relies on ZoomBrowser EX, which provides the user with a simple method of managing, storing, and viewing images. The program also allows the zoom feature to be controlled, and is therefore well suited to controlling these cameras. The main controls window of ZoomBrowser EX is shown in Figure 10, from which the other dialogs, such as the capture function, can be accessed.

3 Data Analyzer for RIDE Camera Images (IOT-DARCI)

3.1 Purpose

As discussed in the previous section, the images captured by the detector are received by the computer, where they must be analyzed in order to determine the ice thickness. In the images for rime ice, two light spots appear when the ice refracts the light by noticeable amounts. It is the goal of the Data Analyzer for RIDE Camera Images (IOT-DARCI) to determine the center points of these light spots and find the distance between them, so that the ice thickness may be determined. IOT-DARCI accomplishes this by making use of the intensities of pixels in the image, seeking the regions of the image where maxima occur; in theory, this is where the centers of the light spots should be. An example of these light spots is shown in Figure 11, where the two light spots are clearly visible and have not converged. IOT-DARCI is written in Python, a programming language that allows for efficient and rapid development of software, and makes use of several robust scientific processing modules, such as SciPy [2] and Psyco [5], for efficient image analysis.

Figure 11: Example of Ideal RIDE Light Spots

3.2 Common Initial Procedural Steps

The first step in processing images from the RIDE cameras involves using pixel intensities to predict the vertical center of the light spots; that is, an approximation of where a horizontal line passes through both light spots’ centers. This position is found by taking a summation of all of the rows in the image; a peak in these values should indicate where the vertical center occurs, because the row intensities are maximized where light illuminates the pixels. After the rows have been summed, a large Hanning smoothing window is applied to these values. This gives a curve where only one maximum should occur, at the position of interest. A Butterworth high-pass filter is then applied, accentuating the desired vertical center position. After these steps, the maximum point is found simply by taking the greatest value of these row sums and recording its vertical position, or row number. There is an extra step used in the Range Weighted Center Method to refine this value slightly as well. The row summation for a corresponding image, after smoothing and the high-pass filter, is shown in Figure 12.

Figure 12: RIDE Camera Image with Matching Row Summation
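The row-summation step above can be sketched in Python with NumPy and SciPy (the latter being one of the modules the report cites). The window length and filter settings here are illustrative stand-ins, not the values IOT-DARCI actually configures.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def vertical_center(image):
    """Predict the row passing through both light spots (Section 3.2).

    `image` is a 2-D array of pixel intensities. Sums intensities per row,
    smooths with a large Hanning window so only one broad maximum survives,
    then applies a zero-phase high-pass Butterworth filter to accentuate
    the peak over slow background trends, and finally takes the argmax.
    """
    row_sums = image.sum(axis=1).astype(float)
    window = np.hanning(101)                      # illustrative window length
    smoothed = np.convolve(row_sums, window / window.sum(), mode="same")
    b, a = butter(2, 0.02, btype="high")          # illustrative filter settings
    filtered = filtfilt(b, a, smoothed)
    return int(np.argmax(filtered))
```

Using `filtfilt` rather than a one-pass filter keeps the peak from shifting, which matters since the row index is used directly for cropping in the next step.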

After determining the vertical center, the image is cropped based on that value. In this process, top and bottom limits for cropping the image are found by moving upwards and downwards from the vertical center until the row summation value drops below a threshold. The threshold is simply a percentage of the maximum row value found previously. These two positions, top and bottom, are recorded, and the image is cropped vertically. Ideally, this leaves a slice of the image in which the light spots should be fully contained. In the case where these points cannot be found, the image is not cropped. This image slicing process is essential to analyzing these images, because it allows the light spots to be accentuated from background noise during the next step. Similarly to the way the row intensities were summed, the next step in the procedure is column intensity summation. For the case in which two light spots are present and have not converged, the two light spots are easily identifiable as soon as these summed values are compared. Up to this point, the steps have been the same for both the Range Weighted Center Method and the Mirrored Subtraction Method. The two methods diverge at this point, as they deal with the column summation in different ways. The issue that they hope to resolve is how to identify the centers of the light spots from these column summation values. There are several problems associated with trying to determine the centers: sometimes the light spots begin to converge for small ice thicknesses, and the peaks can also converge when smoothing is applied to these values.
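A minimal sketch of the threshold-based slicing and column summation follows. The threshold fraction is an illustrative stand-in for the configurable percentage IOT-DARCI uses.

```python
import numpy as np

def crop_and_column_sums(image, v_center, frac=0.3):
    """Crop a horizontal slice around the vertical center, then sum columns.

    Walks up and down from row `v_center` until the row sum drops below
    `frac` times the row sum at the center, crops to that slice, and
    returns the slice together with its column intensity sums.
    """
    row_sums = image.sum(axis=1).astype(float)
    threshold = frac * row_sums[v_center]
    top = v_center
    while top > 0 and row_sums[top - 1] >= threshold:
        top -= 1
    bottom = v_center
    while bottom < len(row_sums) - 1 and row_sums[bottom + 1] >= threshold:
        bottom += 1
    sliced = image[top:bottom + 1, :]
    return sliced, sliced.sum(axis=0)
```

Summing columns only over the cropped slice is what suppresses the background rows, so the two spot peaks stand out in the column sums.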

3.3 Range Weighted Center Method (WRCM)

For the Range Weighted Center Method, a spline-fitting technique is used to try to resolve the issue discussed previously. Thus, in the early steps of the procedure, where a vertical center has already been found, there is another step where this value is refined. This refinement uses a spline fit on the smoothed data after it has been filtered by the high-pass filter. The derivative is then evaluated at each of the original row numbers, and it becomes noticeable where the row sums begin to curve inwards and outwards, corresponding to moving towards the peak and moving away from it, respectively. By finding all ranges where a local maximum derivative value is followed by a local minimum derivative value, potential ranges are established for where the local maxima of the original row summation actually occur. After these ranges have been established, there should be only one range with at least the specified minimum difference between maximum and minimum values; this range should also be of the specified minimum length. Then, using the single remaining range, the next step is to estimate where the peak occurs within this region. First, the magnitude of the derivative values within this range is taken, and then all values are subtracted from the maximum value to invert them. The center of mass of this range is then taken, and the location is rounded to the nearest integer, hence the name ‘Range Weighted Center Method.’ This gives an excellent estimate of where the true center of each light spot should occur. This technique for estimating center points also proves very successful in determining where the horizontal centers of the light spots occur.
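For a single peak, the range-finding and weighted-center estimate might look like the sketch below. IOT-DARCI’s actual implementation (Appendix A) applies the additional range-validity checks described above; the simplified version here just takes the strongest derivative swing.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def range_weighted_center(values):
    """Estimate a peak position from a derivative range (WRCM sketch).

    Fits an interpolating spline to the already smoothed/filtered sums,
    takes the span from the derivative's local maximum to the following
    local minimum, inverts the derivative magnitudes inside that span,
    and returns their centre of mass rounded to the nearest integer.
    """
    x = np.arange(len(values), dtype=float)
    spline = UnivariateSpline(x, values, k=4, s=0)
    deriv = spline.derivative()(x)
    hi = int(np.argmax(deriv))                 # curving inwards (towards peak)
    lo = hi + int(np.argmin(deriv[hi:]))       # curving outwards (past peak)
    window = np.abs(deriv[hi:lo + 1])
    weights = window.max() - window            # invert: flat top weighs most
    if weights.sum() == 0:
        return (hi + lo) // 2
    centre = np.average(np.arange(hi, lo + 1), weights=weights)
    return int(round(centre))
```

Because the estimate comes from the derivative range rather than from the value itself, the peak position can be recovered even when it is not a strict local maximum of the fitted curve, which is the property the report highlights for converging spots.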

To continue from the common initial steps that the two methods share, the procedure for finding the horizontal centers from the sliced image is very similar to what has just been described. Instead of finding one valid range, it is required to find two valid ranges, where the light spots are separated by reasonable amounts. If the separation is not great enough, only one range should be determined, which would indicate the location of the center of the left light spot. In the ideal case, however, two ranges are found and analyzed in the same manner as described for the vertical center case. After this has been done, two centers should be found when there is reasonable separation. The difference between these positions can be multiplied by a scaling factor to convert the measurement from pixels to millimetres of ice thickness. The scaling factor is found by solving for the coefficient in the equation relating ice thickness, H, and light spot separation, S. All the values for finding this scaling factor should be known at the time the images are captured.

Note that the column sums are smoothed with a Hanning window, and optionally a high-pass filter can be applied. The procedure is iterative: if two centers are not found, it tries smaller smoothing windows to accentuate the right peak without losing accuracy for the left peak. It does this up to the point at which the data becomes unreasonable, since images where the light spots appear to have converged should not be expected to produce accurate values. An example of the column summation for the WRCM is shown in Figure 13.

Series shown: Original Column Sums, Original Column Sums Derivative, Sliced Column Sums, Sliced Column Sums Derivative.

Figure 13: WRCM Single Image Output Graph

It is also worth mentioning that by using the derivatives to find centers, it is possible to find peaks that are not true local maxima of the original spline fit. This means that when two light spots start to converge and the second peak is no longer a local maximum, its range may still be isolated by the derivative of the spline fit. This allows analysis of images where the light spots are very close together, which is extremely desirable for this project.

3.4

Mirrored Subtraction Method (MSM)

Unlike the WRCM, the Mirrored Subtraction Method uses the vertical center that was originally found, without trying to refine it further. This is to standardize the procedure for each method, and not to mix the two methods as they are quite different. Since no refinements occurred, the vertical center desired is already known. Therefore, the next step involves finding the left light spot’s center.


As in the WRCM, the first step in finding the actual left light spot's center involves applying a Hanning smoothing window and a high-pass filter to the column sums. After this has been done, the left light spot's center should be represented by a strong local maximum, which is found by locating all local maxima and taking the one with the greatest value. By using constant values to the left and right of the window that searches for local maxima, it can be ensured that no false local maxima are detected at the beginning or end of the data set. The position of this local maximum is recorded, and the right light spot's center then needs to be found.
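The guarded local-maximum search can be sketched as follows. This is a simplified illustration, not the report's code; the padding width and test data are arbitrary.

```python
import numpy as np

def strongest_local_max(y, win=5):
    """Index of the strongest local maximum of a 1-D signal.

    The signal is padded on both ends with constant (edge) values, so a rising
    or falling edge at a boundary can never register as a false local maximum.
    """
    padded = np.r_[np.full(win, y[0]), y, np.full(win, y[-1])]
    candidates = [i for i in range(len(y))
                  if np.argmax(padded[i:i + 2 * win + 1]) == win]
    return max(candidates, key=lambda i: y[i])

# Synthetic column sums: the spot at column 140 is the brighter of the two.
x = np.arange(200, dtype=float)
col_sums = np.exp(-((x - 60) / 10.0) ** 2) + 2.0 * np.exp(-((x - 140) / 10.0) ** 2)
brightest_center = strongest_local_max(col_sums)
```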

The Mirrored Subtraction Method takes its name from the technique used to find this right light spot center. With the left position recorded, the smoothed column sums are horizontally mirrored about that column number. The difference is then taken between these mirrored values and the column sums that actually exist for each matching column number. This essentially isolates the right light spot, since it removes the intensity contributed by the left light spot. In the ideal case there will then be a very clear local maximum in these differences, and this local maximum is where the right light spot's center occurs. With both light spot centers recorded, it is now possible to find the ice thickness from the difference between the two values by use of the scaling factor, as described for the WRCM. An example of the column summation for the MSM is shown in Figure 14.
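The mirroring-and-subtraction step can be sketched on synthetic data as follows. This is illustrative only: real column sums are noisier, and the left spot is only approximately symmetric.

```python
import numpy as np

def mirror_about(y, c):
    """Mirror a 1-D profile about index c (out-of-range samples become zero)."""
    out = np.zeros_like(y)
    idx = 2 * c - np.arange(len(y))
    valid = (idx >= 0) & (idx < len(y))
    out[valid] = y[idx[valid]]
    return out

x = np.arange(600, dtype=float)
col_sums = (10.0 * np.exp(-((x - 200) / 25.0) ** 2)    # bright left spot
            + 3.0 * np.exp(-((x - 320) / 25.0) ** 2))  # dimmer right spot

left_center = int(np.argmax(col_sums))                 # left spot dominates
# Subtracting the mirrored profile cancels the (symmetric) left spot,
# leaving a clear maximum at the right spot's center.
diff = col_sums - mirror_about(col_sums, left_center)
right_center = int(np.argmax(diff))
separation_px = right_center - left_center
```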

[Plot: original column sums, mirrored column sums, and the difference between them, versus column number]

Figure 14: MSM Single Image Output Graph

3.5

Configuration

Both DARCI methods share a configuration section called 'General Constants.' This section holds general program settings and has the options in the following list.


General Constants:

Image Threshold: Integer. The percentage of average image intensity that should be set to be the zero point.

Vertical Smoothing Window: Integer. Size of smoothing window for row summation.

Minimum Strip Percent: Integer. Threshold for where to create the image strip, as a percentage of the maximum row summation value.

Threshold Percent: Integer. Percent of image strip to use for right peak analysis (for example, a setting of 2 percent would search for the right peak in the last 98 percent of the image strip).

Crosshair Length: Integer. Length of crosshair on output image.

Crosshair Width: Integer. Width/weight of crosshair on output image.

Crosshair Red Channel: Integer. Red value between 0-255 for crosshair color on output image.

Crosshair Blue Channel: Integer. Blue value between 0-255 for crosshair color on output image.

Crosshair Green Channel: Integer. Green value between 0-255 for crosshair color on output image.

Output Image Filename: String. Filename of output image.

Output Data Filename: String. Filename of output data comma-separated value file.
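For illustration, such a section could be read as follows. The values shown are made up, and this sketch uses the Python 3 configparser module, whereas the report's code uses the Python 2 ConfigParser module.

```python
import configparser

# Hypothetical settings.cfg fragment mirroring the 'General Constants' section.
cfg_text = """
[General Constants]
image threshold = 20
vertical smoothing window = 51
minimum strip percent = 10
threshold percent = 2
crosshair length = 20
crosshair width = 2
crosshair red channel = 255
crosshair blue channel = 0
crosshair green channel = 0
output image filename = marked
output data filename = analysis
"""

config = configparser.ConfigParser()
config.read_string(cfg_text)
g = config["General Constants"]
image_threshold = g.getint("image threshold")
crosshair_color = (g.getint("crosshair red channel"),
                   g.getint("crosshair green channel"),
                   g.getint("crosshair blue channel"))
```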

The 'settings.cfg' file for configuring DARCI with the Weighted Range Center Method can contain any number of sections. The first is the 'General Constants' section previously introduced; another, called 'Left Peak', contains settings for finding the left peak and smoothing the column sums. The remaining sections are for finding the right light spot's center, and as many of these can be defined as the user requires. DARCI-WRCM uses these settings in progression until a value is found, or until all the valid settings are used. The options for 'Left Peak' and 'Right Peak 1' are in the following list, as they have the same settings. To add more 'Right Peak' sections, the user simply creates sections with higher numbering, such as 'Right Peak 2', and so on.

Left Peak/Right Peak #:

Original Smoothing Window: Integer. Smoothing window to use for smoothing column summation.

Derivative Smoothing Window: Integer. Smoothing window to use for smoothing the derivative of column summation.

Precision: Integer. Number of decimal places to round column summations after smoothing.

Apply High/Low Pass: Boolean. Conditional to state whether or not to apply a high or low pass filter to the column summations after smoothing.

Original Spline Fit Order: Integer. Order of spline fit to use for column summations.

Original Cut-off Frequency: Decimal. Cut-off frequency for Butterworth filter.

Original Filter Type: String. Should be set to 'high' or 'low', to apply a high or low pass, respectively.

Derivative Spline Fit Order: Integer. Order of spline fit to use for derivative column summations.

Derivative Spline Cut-off Frequency: Decimal. Cut-off frequency for Butterworth filter for derivative.

Derivative Spline Filter Type: String. Should be set to 'high' or 'low', to apply a high or low pass to derivative, respectively.

Maximum/Minimum Window: Integer. The window to use for determining local maxima and minima when searching for valid ranges.

Minimum Percentage: Integer. Minimum percentage of the maximum value in the slice that local maxima and minima should be above.

Minimum Lengths: Integer list. Minimum lengths of a derivative range. The program will try each in succession along with the new amplitude for the iteration.

Minimum Amplitudes: Integer list. Minimum amplitude change between local maximum/minimum in a derivative range. The program will try each in succession along with the new length for the iteration.
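The progression through numbered 'Right Peak' sections can be sketched like this. This is a simplified illustration with made-up option values; the actual section names and option set are defined by the user's settings.cfg, and the report's code uses the Python 2 ConfigParser module.

```python
import configparser

cfg_text = """
[Left Peak]
original smoothing window = 41

[Right Peak 1]
minimum lengths = 8,6
minimum amplitudes = 400,250

[Right Peak 2]
minimum lengths = 4
minimum amplitudes = 100
"""

config = configparser.ConfigParser()
config.read_string(cfg_text)

def getintlist(section, key):
    """Parse a comma-separated option into a tuple of ints."""
    return tuple(int(v) for v in config.get(section, key).split(","))

# Collect 'Right Peak N' sections in order until the numbering runs out;
# each one is tried in succession until a valid separation is found.
right_peaks = []
n = 1
while config.has_section("Right Peak %d" % n):
    right_peaks.append((getintlist("Right Peak %d" % n, "minimum lengths"),
                        getintlist("Right Peak %d" % n, "minimum amplitudes")))
    n += 1
```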

There are two sections in the 'settings.cfg' file for configuring DARCI using the Mirrored Subtraction Method: the first is the 'General Constants' section previously introduced, and the other, called 'Left Peak', contains settings for finding the left peak and smoothing the column sums. The options for 'Left Peak' for DARCI-MSM are in the following list. Note that this section is different from the 'Left Peak' section in DARCI-WRCM, although some options are the same.

Left Peak:

Original Smoothing Window: List of integers. Smoothing window to use for smoothing column summation on each iteration. If the first smoothing window does not return a valid separation distance, IOT-DARCI will try each smoothing window in succession until all are used or a value is found successfully. This list can be as long as the user desires, as long as at least one smoothing window exists.

Precision: Integer. Number of decimal places to round column summations after smoothing.

Apply High/Low Pass: Boolean. Conditional to state whether or not to apply a high or low pass filter to the column summations after smoothing.

Original Spline Fit Order: Integer. Order of spline fit to use for column summations.

Original Cut-off Frequency: Decimal. Cut-off frequency for Butterworth filter.

Original Filter Type: String. Should be set to 'high' or 'low', to apply a high or low pass, respectively.

3.6

Common Final Procedure Steps

After the difference between the light spot centers has been found, the row summation and column summation values are stored to files for future reference. Once all images in the image set have been analyzed, an analysis file is produced containing the differences between the light spot centers in pixels. This allows measurements to be viewed remotely at any point and verified for accuracy. The file has three columns: image filename, separation distance, and the number of iterations DARCI needed to find the distance. The number of iterations gives a general idea of how accurate the distance might be. The original images are also marked with crosshairs where the light spots' centers were detected, allowing quick verification from the image itself. This can help locate issues: for example, if the measurements returned after applying the scaling factor seem wrong, then either the scaling factor itself or the peak locations are wrong, and viewing the output images makes any noticeable discrepancies apparent.
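The three-column analysis file could be produced along these lines. This is an illustrative sketch with hypothetical rows, writing to an in-memory buffer rather than a file; the report's code uses Python 2's csv writer in the same way.

```python
import csv
import io

# Hypothetical per-run analysis rows: image filename, separation in pixels,
# and the number of iterations DARCI needed (-1 separation = no valid result).
rows = [("ice_001.jpg", 118.4, 1),
        ("ice_002.jpg", 63.0, 3),
        ("ice_003.jpg", -1, 5)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "separation_px", "iterations"])
writer.writerows(rows)
csv_text = buf.getvalue()
```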

3.7

Results

3.7.1 Testing Procedure

A test was performed at the National Research Council's Institute for Ocean Technology to evaluate the accuracy of RIDE and DARCI. For the testing procedure, please refer to Mark Pham's report entitled "A Remote Icing Thickness Measurement System." In this test, images were taken of a lifeboat hook that had accumulated ice, as shown in Figure 15. Images were captured at various icing thicknesses as the ice was heated and measured at specific intervals [3]. These images were taken using ZoomBrowser, and the angles of the laser and detector were controlled by the user with the appropriate devices and software, as previously discussed.


Figure 15: Ice Accumulation on Lifeboat Hook

3.7.2 Measurements

Both methods were thoroughly tested, and each had qualities that were advantageous for certain images. However, it was found that the WRCM outperformed the MSM in terms of accuracy for the test image set. Using a linear fit, it was determined that the slope obtained from the WRCM results was closer to the actual slope than the slope from the MSM results. The line intercept is also much closer to zero for the WRCM results than for the MSM results. This is desirable because the ice thickness is expected to be zero when no separation occurs, as the light spots would have fully converged into a single point.
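The linear-fit comparison can be illustrated as follows. The data points here are invented for demonstration only; the real measurements are those from the test described above.

```python
import numpy as np

# Hypothetical thickness/separation pairs for the two methods. A linear fit's
# slope and intercept summarize accuracy: a better method has a slope nearer
# the true pixels-per-unit scaling and an intercept nearer zero.
thickness = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
wrcm_sep = np.array([1.0, 49.0, 101.0, 150.0, 199.0, 251.0])
msm_sep = np.array([12.0, 55.0, 103.0, 158.0, 205.0, 262.0])

wrcm_slope, wrcm_intercept = np.polyfit(thickness, wrcm_sep, 1)
msm_slope, msm_intercept = np.polyfit(thickness, msm_sep, 1)
```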

Results for some of the images selected for analysis are shown in Figure 16. The graph shows that the pixel separation tracks the actual thickness with good accuracy, especially for values that are not approaching zero ice thickness. Although the WRCM's slope is closest to the actual slope, both methods perform quite reliably.

3.8

IOT-DARCI Execution

IOT-DARCI can be run directly from the command line, or an argument can be passed with the image directory the user wishes to analyze. For example, running 'darci.py images' will run IOT-DARCI using the relative 'images' directory for image processing, while running 'darci.py' alone will prompt the user to enter the path to the image directory.
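This argument handling amounts to something like the following. This is a Python 3 sketch; the report's code uses raw_input rather than input.

```python
import sys

def image_dir_from_args(argv):
    """Return the image directory: the first CLI argument if present,
    otherwise prompt the user for a path.

    Mirrors 'darci.py images' versus bare 'darci.py' described above.
    """
    if len(argv) > 1:
        return argv[1]
    return input("Path: ")
```

For example, `image_dir_from_args(["darci.py", "images"])` returns `"images"` without prompting.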


[Plot: pixel separation versus number of sheets for the WRCM and MSM testing results]

Figure 16: Testing Results


4

Recommendations

While both RIDE and DARCI are quite robust on their own, there are several recommendations for their future use as a warning system. Some of the most notable recommendations are listed here.

For RIDE:

• More testing with RAW images from the camera, to allow a greater range of lighting conditions and thus help prevent accidental overexposure.

• Integration with DARCI by packaging all software together with a user interface.

For DARCI:

• Code should be further tested in the case of faulty images, or undesirable circumstances, to ensure robustness.

• DARCI-MSM should be able to handle the case in which the right light spot is actually brighter than the left light spot. Using methods from DARCI-WRCM may assist with this issue.

• Multiple settings files for different lighting conditions, and a way to determine the lighting conditions.

• Potentially read EXIF camera tags from JPEGs, to automatically account for zoom level when giving pixel separation values.

• Integration with clear ice program to automatically recognize which method is appropriate for any given ice accumulation.


5

Conclusion

The Remote Ice Detection Equipment created by Dr. Robert Gagnon has truly proven its potential as an accurate device for making remote ice measurements. Such a device is particularly useful for vessels, aircraft, and the space industry, where icing is a major issue. RIDE will be integrated with DARCI in the future to allow a user to verify measurements as they are made, and will reduce exposure to hazardous environments by making a warning system possible. In doing so, RIDE could help ensure the safety of many workers as well as minimize damage to vessels, space shuttles, and aircraft worldwide. The prototype demonstrated will be used for future tests in various applications where this type of ice detection was previously impossible. The testing has shown that this device is appropriate for the methodology introduced in this report, and that it has the reliability to truly alter the way ice accumulation is detected on virtually any surface.


6

References

[1] J. Groves. Automated Image Analysis for NRC-IOT's Marine Icing Monitoring System and Impact Module, 2009.

[2] E. Jones, T. Oliphant, and P. Peterson. SciPy: Open source scientific tools for Python, 2001.

[3] M. Pham. A Remote Icing Thickness Measurement System, 2008.

[4] Python Software Foundation. Python v2.6.6 documentation, 2010.

[5] A. Rigo. Psyco: Representation-based Just-in-time Specialization and the Psyco prototype for Python, 2004.

[6] J. Steuernagle, K. Roy, and D. Wright. Safety advisor. AOPA Air Safety Foundation, 1:1-2, November 2008.


Appendix A


Appendix A.1: Main DARCI-WRCM Process (darci.py)

#!/usr/bin/python
"""
Data Analyzer for RIDE Camera Images (IOT-DARCI)
Weighted Range Center Method
Created by Joshua Groves
"""

from psyco import full
from darcilib import DarciMain, IntensityAnalysis, getDistance

full()                                            # Optimize methods/functions
dm = DarciMain()                                  # Initialize main class
dm.requestPath()                                  # Request image directory
dm.getSettings()                                  # Load configuration file
for image in dm.imageList:                        # Iterate through all images
    ia = IntensityAnalysis(image, dm.dataPath)    # Create intensity analysis
    ia.applyThreshold(dm.imageThreshold)          # Apply threshold to image
    ia.findVerticalMiddle(dm.vertSmoothingWin)    # Find the vertical center row
    ia.createStripImage(dm.minStripPercent)       # Slice image to isolate light spots
    ia.getSmoothColumnSums(*dm.leftPeak[0])       # Sum/smooth all columns
    ia.findCenter(*dm.leftPeak[1])                # Find left light spot's center
    ia.getSlice(dm.thresholdPercent)              # Slice image strips
    while getDistance(ia.colPeaks) == -1 and ia.i < dm.maxIterations:
        ia.getSmoothColumnSums(*dm.rightPeak[ia.i][0])  # Sum/smooth all columns in slice
        ia.findCenter(*dm.rightPeak[ia.i][1])     # Find right light spot's center
        ia.i += 1                                 # Increase iteration count
    dm.finishedImage(ia.name, getDistance(ia.colPeaks), ia.i)  # Complete image and record values
    ia.drawPeaks(*dm.crosshair)                   # Mark images at peaks
    ia.writeCSVs('rows', 'cols')                  # Write row/column sums to CSVs
    ia.saveImages(dm.outputFilename)              # Save marked image
    print "%.0f%%\t%s, %.3f units width" % \
        (dm.currentPercent, ia.name, dm.distance) # Output measurement for image
print "\nImages processed in %.3f seconds" % (dm.totalRuntime())  # Output processing time


Appendix A.2: DARCI-WRCM Library (darcilib.py)

""" Library for DARCI """

from os import mkdir
from re import compile
from csv import writer
from sys import path, argv
from glob import glob
from time import time
from numpy import array, clip, ones, average, sum as sumna, arange, diff, \
    real, max as maxna, amax, argmax, r_, hanning, around, argmin, \
    copy, convolve, rint, uint8, empty, vstack, polyfit, sort, amin
from os.path import isdir, join, splitext, split
from scipy.misc import imread, imsave, imresize
from numpy.linalg import lstsq
from scipy.signal import butter, lfilter
from scipy.fftpack import fft, ifft
from scipy.ndimage import gaussian_filter, grey_closing, maximum_filter, \
    median_filter, uniform_filter, minimum_filter, gaussian_filter1d
from scipy.interpolate import splev, splrep, UnivariateSpline
from scipy.ndimage.measurements import center_of_mass
from ConfigParser import RawConfigParser


class Error(Exception):
    """ Base error class for this module. """
    pass


class AnalysisError(Error):
    """ Image analysis error. """
    pass


def smooth1D(y, winsize=11):
    """ Perform 1-dimensional smoothing on an array using a Hanning window. """
    w = hanning(winsize)
    return convolve(w/w.sum(), r_[2*y[0]-y[winsize:1:-1], y, \
        2*y[-1]-y[-1:-winsize:-1]], mode='same')[winsize-1:-winsize+1]


def getDistance(points):
    """ Get difference between zero points, if given a valid list. """
    return points[1]-points[0] if len(points) == 2 and points[1]-points[0] > 0 else -1


def getPeakSettings(config, section):
    """ Creates an instance with all settings for getSmoothColumnSums and
    findCenter for a given section. """
    parameters = [[
        config.getint(section, 'original smoothing window'),
        config.getint(section, 'derivative smoothing window'),
        config.getint(section, 'precision'),
        config.getboolean(section, 'apply high/low pass')], [
        config.getint(section, 'maximum/minimum window'),
        config.getint(section, 'minimum percentage'),
        config.getintlist(section, 'minimum lengths'),
        config.getintlist(section, 'minimum amplitudes'),
        ]]
    if parameters[0][3]:
        parameters[0] += [
            config.getint(section, 'original spline fit order'),
            config.getint(section, 'derivative spline fit order'),
            config.getfloat(section, 'original cut-off frequency'),
            config.getfloat(section, 'derivative cut-off frequency'),
            config.get(section, 'original filter type'),
            config.get(section, 'derivative filter type')]
    return parameters


class DarciMain:
    """ Methods for executing initialization and storing all essential data
    from the intensity analysis. """

    def __init__(self):
        """ Initialize main class. """
        print "\nData Analyzer for RIDE Camera Images (IOT-DARCI)\n" + \
            "Weighted Range Center Method.\nReading configuration..."

    def requestPath(self):
        """ Set initial values, and loop path request until a valid path is
        given. Then establish program values from images in directory. """
        self.currentPercent = 0
        imagecount = 0
        tempPath = argv[1] if len(argv) > 1 else False
        path = False
        while imagecount == 0:
            if path:
                print "Invalid image directory."
            path = tempPath or raw_input("Path: ")
            tempPath = False  # don't reuse an invalid command-line path
            self.imageList = glob(join(path, "*.jpg"))
            imagecount = len(self.imageList)
        self.imagePercent = 100./imagecount
        self.dataPath = join(path, "data\\")
        if isdir(self.dataPath) == False:
            mkdir(self.dataPath)
        print "\nAnalyzing image intensities.\n"
        self.timestart = time()

    def finishedImage(self, name, dist, iteration):
        """ Create CSVs, set distance, and update percentage. """
        self.writeCSV([name, dist, iteration])
        self.distance = dist
        self.currentPercent += self.imagePercent

    def totalRuntime(self):
        """ Return program run time. """
        return time()-self.timestart

    def getSettings(self):
        """ Read settings from configuration file. """
        config = RawConfigParser()
        config.read('settings.cfg')
        config.getintlist = lambda section, key: tuple(int(i) \
            for i in config.get(section, key).split(','))
        self.imageThreshold = config.getint('General Constants',
            'image threshold')
        self.vertSmoothingWin = config.getint('General Constants',
            'vertical smoothing window')
        self.minStripPercent = config.getint('General Constants',
            'minimum strip percent')
        self.thresholdPercent = config.getfloat('General Constants',
            'threshold percent')
        self.crosshair = (config.getint('General Constants',
                              'crosshair length'),
                          config.getint('General Constants',
                              'crosshair width'),
                          config.getint('General Constants',
                              'crosshair red channel'),
                          config.getint('General Constants',
                              'crosshair blue channel'),
                          config.getint('General Constants',
                              'crosshair green channel'))
        self.outputFilename = config.get('General Constants',
            'output image filename')
        self.leftPeak = getPeakSettings(config, 'Left Peak')
        self.rightPeak = tuple(getPeakSettings(config,
            'Right Peak Iteration %s' % i) \
            for i in xrange(1, len(config.sections())-1))
        self.maxIterations = len(self.rightPeak)
        self.writeCSV = writer(open(self.dataPath + \
            config.get('General Constants', 'output data filename') + \
            '.csv', 'wb')).writerow


class IntensityAnalysis:
    """ Methods for analyzing intensities in an image taken by the sensor,
    used to find centers of the two light spots present. """

    def __init__(self, filename, dir):
        """ Set initial values and open image. """
        self.rowMax = 0
        self.colPeaks = []
        self.colPeaksRound = []
        self.vertMid = 0
        self.imOrig = imread(filename)
        self.im = imread(filename, 1)
        self.name = splitext(split(filename)[1])[0]
        self.dir = dir
        self.imStrip = array([])
        self.h, self.w = self.im.shape[:2]
        self.colNums = arange(0, self.w)
        self.colSumsOrig = []
        self.colSumsDOrig = []
        self.stripStart = 0
        self.ranges = []
        self.distance = 0
        self.rowSums = []
        self.colSums = []
        self.colSumsD = []
        self.vertBot = 0
        self.vertTop = 0
        self.imStripOrig = []
        self.colNumsOrig = []
        self.i = 0

    def applyThreshold(self, percent):
        """ Apply threshold to an image then subtract by that amount. """
        threshold = (percent/100.)*average(self.im)
        self.im = clip(self.im, threshold, 255) - threshold

    def findVerticalMiddle(self, window):
        """ Find the vertical middle of the image. """
        self.rowSums = smooth1D(sumna(self.im, axis=1), window)
        self.rowMax = amax(self.rowSums)
        self.vertMid = argmax(self.rowSums)

    def createStripImage(self, percent):
        """ Use vertical middle to create a strip image. """
        percent /= 100.
        for i in reversed(xrange(0, self.vertMid)):
            if self.rowSums[i] < self.rowMax*percent:
                self.vertTop = i
                break
        for i in xrange(self.vertMid+1, self.h):
            if self.rowSums[i] < self.rowMax*percent:
                self.vertBot = i+1
                break
        self.imStrip = self.im[self.vertTop:self.vertBot, :]
        if self.imStrip.shape[0] == 0:
            raise AnalysisError("No vertical top and bottom allowance points "
                                "were found... Cannot continue.")

    def getSmoothColumnSums(self, win, winD, decimals, applyfilter=False, \
            k=0, kD=0, freq=0, freqD=0, ftype=0, ftypeD=0):
        """ Find smooth column sums and its derivative (optionally taking fast
        Fourier transform and using a Butterworth high-pass filter). """
        if applyfilter:
            self.colSums = real(ifft(lfilter(*butter(k, freq, ftype), \
                x=fft(around(smooth1D(sumna(self.imStrip, axis=0), win), \
                decimals)))))
            self.colSumsTest = sumna(self.imStrip, axis=0)
            self.colSumsD = real(ifft(lfilter(*butter(kD, freqD, ftypeD), \
                x=fft(around(smooth1D(splev(self.colNums, \
                splrep(self.colNums, self.colSums), 1), winD), decimals)))))
        else:
            self.colSums = around(smooth1D(sumna(self.imStrip, axis=0), win),
                decimals)
            self.colSumsD = around(smooth1D(splev(self.colNums,
                splrep(self.colNums, self.colSums), 1), winD), decimals)
        if len(self.colSumsOrig) == 0:
            self.colSumsOrig = copy(self.colSums)
            self.colSumsDOrig = copy(self.colSumsD)

    def findCenter(self, win, percent, minlens, minamps, multiplier=1,
            smooth=True, filter=True, decimals=9, smoothwin=0,
            k=0, freq=0, ftype=0):
        """ Find center of mass for column sums based on a range established by
        local maxima/minima from its derivative. """
        # Note: 'multiplier' is given a default of 1 here so that the
        # four-element settings tuples built by getPeakSettings unpack cleanly.
        rangeslen = len(self.ranges)
        foundpeak = False
        self.findMaxMin(win, percent, self.stripStart)
        timesran = 0
        while not foundpeak and timesran < len(minlens):
            for r in self.ranges[rangeslen:]:
                if len(r) > 1:
                    r[0] -= self.stripStart
                    r[1] -= self.stripStart
                    if r[1]-r[0] > minlens[timesran] and \
                            abs(self.colSumsD[r[0]] -
                                self.colSumsD[r[1]]) > minamps[timesran]:
                        weights = abs(self.colSumsD[r[0]:r[1]])
                        weights = amax(weights)-weights
                        try:
                            if smooth:
                                if filter:
                                    root = center_of_mass(real(ifft(lfilter(
                                        *butter(k, freq, ftype), x=fft(around(
                                        smooth1D(weights, smoothwin),
                                        decimals=decimals))))))[0]+r[0]
                                else:
                                    root = center_of_mass(around(smooth1D(
                                        weights, smoothwin),
                                        decimals=decimals))[0]+r[0]
                            else:
                                root = center_of_mass(weights)[0]+r[0]
                        except:
                            root = center_of_mass(weights)[0]+r[0]
                        if root <= r[1]:
                            root += self.stripStart
                            self.colPeaks.append(root)
                            self.colPeaksRound.append(int(round(root, 0)))
                            if not self.ranges:
                                self.ranges = [r]
                            else:
                                self.ranges.append(r)
                            foundpeak = True
                            break
                    r[0] += self.stripStart
                    r[1] += self.stripStart
            timesran += 1

    def findMaxMin(self, win, percent, offset=0):
        """ Find local maxima/minima. """
        low = (percent/100.)*maxna(self.colSums)
        maxfound = False
        for c in xrange(win, self.w-win-offset):
            if argmax(self.colSumsD[c-win:c+win+1]) == win and \
                    self.colSums[c] > low:
                self.ranges.append([c+offset])
                maxfound = True
            elif argmin(self.colSumsD[c-win:c+win+1]) == win and \
                    self.colSums[c] > low and maxfound:
                self.ranges[len(self.ranges)-1].append(c+offset)

    def getSlice(self, percent):
        """ Crop image based on moving left until crossing a threshold point (a
        percentage of the peak intensity found), then moving right by that
        amount and taking a slice from there to the end. """
        percent = (percent/100.)*self.colSums[self.colPeaksRound[0]]
        pos = 0
        for i in reversed(xrange(self.colPeaksRound[0])):
            if self.colSums[i] <= percent:
                pos = i
                break
        if pos != 0:
            self.imStripOrig = copy(self.imStrip)
            self.stripStart = 2*self.colPeaksRound[0]-pos
            self.imStrip = self.imStrip[:, self.stripStart:]
            self.colNums = self.colNums[self.stripStart:]
            self.colNumsOrig = copy(self.colNums)

    def drawPeaks(self, pxl, pxw, r, g, b):
        """ Draw crosshairs on original image where peaks were found. """
        for i in self.colPeaksRound:
            self.imOrig[self.vertMid-pxl:self.vertMid, i-pxw:i+1+pxw] = \
                self.imOrig[self.vertMid+1:self.vertMid+pxl+1, i-pxw:i+1+pxw] = \
                ones((pxl, 2*pxw+1, 3))*[r, g, b]
            self.imOrig[self.vertMid-pxw:self.vertMid+1+pxw, i-pxl:i] = \
                self.imOrig[self.vertMid-pxw:self.vertMid+1+pxw, i+1:i+pxl+1] = \
                ones((2*pxw+1, pxl, 3))*[r, g, b]

    def writeCSVs(self, filer, filec):
        """ Write data to comma-separated value files (for each image). """
        rowWriter = writer(open(join(self.dir, self.name +
            " (" + filer + ").csv"), 'wb')).writerow
        colWriter = writer(open(join(self.dir, self.name +
            " (" + filec + ").csv"), 'wb')).writerow
        for r in xrange(self.h):
            rowWriter([r, self.rowSums[r]])
        for c in xrange(self.w):
            if c > self.stripStart:
                relc = c-self.stripStart
                colWriter([c, self.colSumsOrig[c], 300*self.colSumsDOrig[c],
                           self.colSums[relc], 300*self.colSumsD[relc]])
            else:
                colWriter([c, self.colSumsOrig[c], 300*self.colSumsDOrig[c]])
        for i in self.colPeaks:
            colWriter(["Peak", i])

    def saveImages(self, file):
        """ Save the images that were created. """
        self.ifn = join(self.dir, self.name + "-" + file + ".jpg")
        imsave(self.ifn, self.imOrig)

Appendix B


Appendix B.1: Main DARCI-MSM Process (darci.py)

#!/usr/bin/python
"""
Data Analyzer for RIDE Camera Images (IOT-DARCI)
Mirrored Subtraction Method
Created by Joshua Groves
"""

from psyco import full
from darcilib import DarciMain, IntensityAnalysis, getDistance

full()                                            # Optimize methods/functions
dm = DarciMain()                                  # Initialize main class
dm.requestPath()                                  # Request image directory
dm.getSettings()                                  # Load configuration file
for image in dm.imageList:                        # Iterate through all images
    ia = IntensityAnalysis(image, dm.dataPath)    # Create intensity analysis
    ia.applyThreshold(dm.imageThreshold)          # Apply threshold to image
    ia.findVerticalMiddle(dm.vertSmoothingWin)    # Find the vertical center row
    ia.createStripImage(dm.minStripPercent)       # Slice image to isolate light spots
    ia.maxIterations = len(dm.leftPeak[0][0])
    while ia.iteration < ia.maxIterations:
        ia.reset()                                # Reset column sums
        ia.getSmoothColumnSums(ia.iteration, *dm.leftPeak[0])  # Sum/smooth all columns
        ia.findCenter()                           # Find left light spot's center
        ia.getSlice(dm.thresholdPercent)          # Slice image strip
        ia.subtractSymmetric()                    # Subtract values with mirrored values
        ia.findCenter()                           # Find right light spot's center
        if getDistance(ia.colPeaks) != -1:
            break                                 # Retry process if separation was invalid
        ia.iteration += 1                         # Increment condition level for analysis
    dm.finishedImage(ia.name, getDistance(ia.colPeaks), ia.iteration)  # Get distance between peaks
    ia.drawPeaks(*dm.crosshair)                   # Mark images at peaks
    ia.writeCSVs('rows', 'cols')                  # Write row/column sums to CSVs
    ia.saveImages(dm.outputFilename)              # Save marked image
    print "%.0f%%\t%s, %.3f units width" % \
        (dm.currentPercent, ia.name, dm.distance) # Output measurement for image
print "\nImages processed in %.3f seconds" % (dm.totalRuntime())  # Output processing time


Appendix B.2: DARCI-MSM Library (darcilib.py)

1 """ Library for DARCI """ 2

3 from os import mkdir 4 from re import compile

5 from csv import writer 6 from sys import path,argv 7 from glob import glob 8 from time import time

9 from numpy import array, clip, ones, average, sum as sumna, arange, diff, \ 10 real, max as maxna, amax, argmax, diff, r_, hanning, around, argmin, \ 11 copy, convolve, rint, uint8, empty, vstack, polyfit, sort, amin, zeros 12 from os.path import isdir, join, splitext, split, join

13 from scipy.misc import imread, imsave, imresize 14 from numpy.linalg import lstsq

15 from scipy.signal import butter, lfilter 16 from scipy.fftpack import fft, ifft

17 from scipy.ndimage import gaussian_filter, grey_closing, maximum_filter1d, \ 18 median_filter, uniform_filter, minimum_filter, gaussian_filter1d 19 from scipy.interpolate import splev, splrep, UnivariateSpline 20 from scipy.ndimage.measurements import center_of_mass 21 from ConfigParser import RawConfigParser

22

23 class Error(Exception):

24 """ Base error class for this module. """

25 pass

26

27 class AnalysisError(Error): 28 """ Image analysis error. """

29 pass

30

31 def smooth1D(y, winsize=11):

32 """ Perform 1 dimensional smoothing on an array using a hanning window """ 33 w = hanning(winsize)

34 return convolve(w/w.sum(), r_[2*y[0]-y[winsize:1:-1], y, \ 35 2*y[-1]-y[-1:-winsize:-1]], mode=’same’)[winsize-1:-winsize+1] 36

37 def getDistance(points):

38 """ Get difference between zero points, if given a valid list. """

39 return points[1]-points[0] if len(points)==2 and points[1]-points[0]>0 else -1 40

def getPeakSettings(config, section):
    """ Create a list with all settings for getSmoothColumnSums and
    findCenter for a given section. """
    parameters = [[
        config.getintlist(section, 'original smoothing window'),
        config.getint(section, 'precision'),
        # (one line is missing from the source scan here; it supplies the
        # entry tested as parameters[0][2] below)
        config.getint(section, 'maximum/minimum window'),
        config.getint(section, 'minimum percentage'),
        config.getintlist(section, 'minimum lengths'),
        config.getintlist(section, 'minimum amplitudes'),
        ]]
    if parameters[0][2]:
        parameters[0] += [
            config.getint(section, 'original spline fit order'),
            config.getfloat(section, 'original cut-off frequency'),
            config.get(section, 'original filter type')]
    return parameters
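getPeakSettings calls `config.getintlist`, which is not a method of the standard-library RawConfigParser, so it is presumably added on a subclass elsewhere in the program. A minimal sketch of what such a helper could look like (the subclass name `DarciConfigParser` and the comma-separated value format are assumptions):

```python
try:
    from configparser import RawConfigParser  # Python 3
except ImportError:
    from ConfigParser import RawConfigParser  # Python 2, as in the listing above

class DarciConfigParser(RawConfigParser):
    """Hypothetical subclass adding the getintlist helper used above."""
    def getintlist(self, section, option):
        # Assumes the option is stored as comma-separated integers,
        # e.g. "minimum lengths = 5, 10, 20".
        return [int(v) for v in self.get(section, option).split(',')]

config = DarciConfigParser()
config.add_section('laser')
config.set('laser', 'minimum lengths', '5, 10, 20')
print(config.getintlist('laser', 'minimum lengths'))  # [5, 10, 20]
```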

class DarciMain:
    """ Methods for executing initialization and storing all essential data
    from the intensity analysis. """

    def __init__(self):
        """ Initialize main class. """
        print "\nData Analyzer for RIDE Camera Images (IOT-DARCI)\n" + \
            "Mirrored Subtraction Method.\nReading configuration..."

    def requestPath(self):
        """ Set initial values, and loop the path request until a valid path is
        given. Then establish program values from the images in the directory. """
        self.currentPercent = 0
        imagecount = 0
        tempPath = argv[1] if len(argv) > 1 else False
        path = False
        while imagecount == 0:
            if path:
                print "Invalid image directory."
            path = tempPath or raw_input("Path: ")
            tempPath = False  # only use the command-line argument once
            self.imageList = glob(join(path, "*.jpg"))
            imagecount = len(self.imageList)
        self.imagePercent = 100./imagecount
        self.dataPath = join(path, "data\\")
        if not isdir(self.dataPath):
            mkdir(self.dataPath)
        print "\nAnalyzing image intensities.\n"
        self.timestart = time()

    def finishedImage(self, name, dist, iteration):
        """ Create CSVs, set distance, and update percentage. """
        self.writeCSV([name, dist, iteration])
        self.distance = dist
        self.currentPercent += self.imagePercent
        self.iteration = 0
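finishedImage relies on `self.writeCSV`, which is defined outside this excerpt. A minimal sketch of what such a method could do with the standard `csv` module (the class name `ResultLogger`, the `results.csv` file name, and append mode are all assumptions for illustration):

```python
import csv
import tempfile
from os.path import join

class ResultLogger:
    """Hypothetical stand-in for the object whose writeCSV method
    finishedImage calls above."""
    def __init__(self, data_path):
        self.data_path = data_path

    def writeCSV(self, row):
        # Append one [image name, distance, iteration] row per image.
        with open(join(self.data_path, 'results.csv'), 'a', newline='') as f:
            csv.writer(f).writerow(row)

logger = ResultLogger(tempfile.mkdtemp())
logger.writeCSV(['img_0001.jpg', 42, 0])
```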

Figures:
Figure 1: NRC-IOT's Remote Ice Detection Equipment
Figure 2: Camera and Laser Setup
Figure 4: Laser Refraction Being Captured By Detector
Figure 5: Rotational Angles of the Detector (RV) and Laser (RL)
