Development of a super-resolution microscope for imaging of neuronal activity

Mémoire (master's thesis)

Andréanne Deschênes

Master's in biophotonics - with thesis
Master of Science (M. Sc.)

Academic year: 2021

Under the supervision of:

Pierre Marquet, research director
Paul De Koninck, research co-director

Résumé

The study of neurotransmission and synaptic plasticity at the biomolecular scale in living cells requires tools that allow the visualization and localization of a wide variety of synaptic proteins as well as other components. The transparency of neurons, the nanometric size of the structures of interest and their density motivate the choice of imaging modalities that can be used to study these phenomena. Super-resolution fluorescence microscopy produces images of labelled samples with a localization precision on the order of a nanometre. However, this technique only allows the observation of structures that have been labelled. We therefore want to combine it with a label-free technique in order to obtain as much information as possible about the structure of the samples. Quantitative phase imaging is a label-free technique that uses the refractive index as an intrinsic contrast agent to map cellular contents in 3D. The main goal of this project is to design and build a quantitative phase imaging setup and to integrate it onto an existing STED microscope to create a new multimodal imaging system. The performance of this system will then be characterized, and its ability to produce multimodal images of synapses in living cells will be evaluated. This project is a first step towards the creation of a tool that could simultaneously measure the position of labelled structures in 2D and 3D with high precision and map the refractive index of cells in 3D, in order to situate labelled structures in their environment.


Abstract

The study of neurotransmission at the biomolecular level in live cells requires tools that allow the simultaneous visualization and localization of a variety of neuronal proteins at their scale: the nanometric scale. In order to do so, an imaging approach offering high spatial and temporal resolution combined with low invasiveness is required.

STED microscopy is an optical super-resolution fluorescence microscopy technique that produces images of labelled samples with a spatial resolution below 50 nm in living cells. However, since it is based on the detection of fluorescent molecules, labeling of the structures of interest is necessary and non-labeled structures are invisible to this type of microscope. Therefore, we want to combine it with a label-free optical microscopy technique, optical diffraction tomography (ODT), to maximize the information that can be obtained about the global structure of the samples of interest. This approach uses refractive index as an intrinsic contrast agent to produce 3D maps of the cell's internal contents.

The main goal of this project is to design and build a quantitative phase imaging system and to integrate it onto an existing STED microscope to create a novel multimodal super-resolution imaging system. The performance of the microscope will then be characterized. This project is a first step towards the creation of a tool that could eventually allow simultaneous precise 2D and 3D mapping of labelled structures and label-free 3D mapping of the sample’s refractive index to situate marked structures in their surroundings.


Contents

Résumé
Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Introduction
    Project goal
1 Technical background
    1.1 Study of synaptic communication
        1.1.1 Physiology of synapses
        1.1.2 Transmission mechanism
    1.2 Microscopy techniques
        1.2.1 Fluorescence microscopy
            1.2.1.1 Diffraction-limited techniques
            1.2.1.2 Diffraction limit
            1.2.1.3 Super-resolution microscopy
        1.2.2 Phase microscopy
            1.2.2.1 Principle of interferometry
            1.2.2.2 Interferometry for imaging of biological samples
    1.3 Previous work: multimodal imaging systems
2 Setup design
    2.1 Initial system
    2.2 Planned layout
    2.3 Technical optical requirements of the ODT system
    2.4 Modelling
        2.4.1 Ray transfer matrices
        2.4.2 Angle propagation
        2.4.3 Beam propagation
        2.5.1 Illumination
        2.5.2 Electronics
    2.6 Summary
3 Setup construction and characterization
    3.1 Construction and alignment of the optical system
    3.2 Characterization using a calibration sample
    3.3 Challenges
        3.3.1 Fringe contrast
        3.3.2 Image propagation
        3.3.3 Vibration
        3.3.4 Vertical alignment
        3.3.5 Results
    3.4 Experimental validation of specifications
        3.4.1 Galvanometer scanning validation
        3.4.2 Laser trials to reduce coherence length and wavelength
4 Live-cell STED microscopy of synaptic proteins
    4.1 Optimization of RESCue-STED to reduce photobleaching in live cell imaging
Conclusion
A Supplementary figures

List of Tables

2.1 Common ray transfer matrices used in propagation modelling
2.2 Theoretical evaluation of the resolution of the designed ODT setup

List of Figures

1.1 Jablonski diagrams of processes involved in image formation for STED microscopy
1.2 Comparison of diffraction limited and super-resolution images of a postsynaptic protein
1.3 Frequency spectra of on and off-axis holograms [27]
1.4 Optical diffraction tomography principle
2.1 Contents of the STED microscope
2.2 Lasers and detectors on the Abberior 4-color STED microscope
2.3 Planned layout of the ODT imaging system
2.4 Angle of propagation from galvanometer to sample
2.5 Beam size at different points of propagation for selected L1 focal length
2.6 Effect of illumination objective misalignment on beam size at the sample plane
2.7 Beam paths through the microscope body
2.8 Integration of ODT transmitted illumination onto the existing microscopy system
2.9 Transmission spectra of the filters present in the epifluorescence microscopy optical path
2.10 Plan of multimodal integration through the epifluorescence path
2.11 Plan of electronics setup
2.12 Plan of the multimodal imaging system
3.1 Characterization of the variable attenuator performance
3.2 Vertical alignment degrees of freedom
3.3 On-axis interference patterns and corresponding frequency spectrum
3.4 Off-axis interference patterns and corresponding frequency spectra
3.5 Images of calibration sample taken with different imaging modalities
3.6 Reconstructions of a bead hologram
3.7 ODT sample arm at epifluorescence focal plane
3.8 Comparison of beam and image propagation before interferometric recombination
3.9 Optomechanic assemblies tried to suspend the illumination objective
3.10 Results of vibration quantification using the motion of the sample arm
3.11 Experimental confirmation of the effect of misalignment on beam size
3.12 Effect of illumination objective misalignment on beam size at the back focal plane of the illumination objective
3.13 Statistics of vibration improvement
3.14 ODT sample arm at two different planes before and after vertical alignment adjustment
3.16 Reconstructions of bead holograms
3.17 Experimental validation of galvanometer scan patterns
3.18 Coherence length measurements of the Cobolt laser in its two operation modes
4.1 Schematic of RESCue-STED illumination interruption [25]
4.2 Illumination time modulation by RESCue thresholds
4.3 RESCue parameter space
4.4 RESCue gridsearch image acquisition sequence
4.5 Results of parameter space evaluation using gridsearch approach
4.6 Selected parameter subregion for timelapse experiments and corresponding photobleaching results
4.7 Example images of repeated STED and RESCue-STED imaging
A.1 Galvanometer scan patterns
A.2 Beam angle and lateral displacement at different points of propagation through both paths
A.3 Beam size at different points of propagation through the STED path


Acknowledgements

Throughout this project I had the chance to evolve in a highly multidisciplinary environment surrounded by incredible people who took the time to give me advice and to teach me skills and concepts from multiple fields. Without their help the outcome of this project would have been much different.

I would like to thank my research director, Pierre Marquet, for trusting me with this ambitious and fascinating project.

I would also like to thank my research co-director Paul De Koninck for including me into his team and for giving me the opportunity to discover and explore the fields of neuroscience and cell biology which were completely new to me.

I also wish to thank Flavie Lavoie-Cardinal for believing in me and my capacities and for guiding me through every step of this multidisciplinary project.

Thank you to Francine Nault, Laurence Émond, and Charleen Salesse for the preparation of the neuronal cultures as well as for training me to work in a biology lab and for patiently answering all my questions.

Thank you to Simon Labrecque, Antoine Godin and Jean-Michel Mugnès for their insight and for many fruitful discussions.

I truly appreciated the support shown by members of both of my co-directors’ labs and working alongside the students from both labs was a true pleasure.

Above all I am truly grateful for the love, support and encouragement I received from my family and friends.


Introduction

In order to gain a better understanding of neurological disorders such as neurodegenerative diseases (e.g. Alzheimer's disease), mental retardation (e.g. fragile X syndrome), or autism spectrum disorders [1][2], researchers need to be able to study the mechanisms by which brain cells interact with each other.

To communicate, neurons build tiny protrusions called dendritic spines, with a volume of about a femtoliter, to connect physically with each other [3][4]. At these sites, called synapses, the machinery of a plethora of proteins to transmit and receive information is brought together, densely packed within about 100 nanometers [5]. Here, synaptic transmission occurs by the presynaptic release of vesicles containing neurotransmitters, which diffuse across the synaptic cleft to bind postsynaptic receptors. These receptors open and allow the influx of ions such as sodium or calcium and the efflux of potassium [6]. These changes in ion concentrations across the neuronal membrane result in the depolarization of the neuron, creating an action potential. Moreover, the strength of synaptic transmission can be modulated at each individual synapse by synaptic depression and synaptic potentiation, together called synaptic plasticity. In particular, synaptic potentiation involves the recruitment and interaction of key proteins such as the AMPA receptor, the enzyme CaMKII and the scaffold protein PSD95 at precise locations at the synapse within a nanometric range [5][7][8][9]. Additionally, depending on synaptic activity, dendritic spines can change in size as well as in shape, from filopodia-type to mushroom-type [10].

Classical techniques to study the localization of these key proteins or spine growth have been fluorescence microscopes such as confocal [11] or TIRF [12] microscopes. However, these techniques are limited by diffraction to a resolution of approximately 200 nm: they can only distinguish two points when they are at least 200 nm apart, which is not the case for synaptic proteins. Therefore, so-called super-resolution microscopy techniques such as STORM [13] or STED [14] are necessary to investigate the nanometric distribution of synaptic proteins densely packed within the synapse. Since synaptic plasticity is an active process, it has to be studied in living neurons using live-cell imaging. Among the super-resolution techniques, STED nanoscopy is suited for live-cell imaging, since it does not require several photo-switching cycles [15]. STED nanoscopy has improved our understanding of the nanometric distribution of the scaffold protein PSD95 (among others) at the synapse, which places receptors in close position to the source of neurotransmitter depending on synaptic activity [14]. Nevertheless, observing only a single synaptic protein in the dendritic spine during synaptic plasticity leaves out the context in which the protein (1) is positioned in relation to other proteins, (2) is located within the synapse and the dendritic spine, and (3) relates to the strength and shape of the dendritic spine. Therefore, it is necessary to label at least two to three synaptic proteins (for example AMPA receptors, the enzyme CaMKII and the scaffold protein PSD95) and the outline of the dendritic spine. Unfortunately, there are limits on how many labels can be used at once. For genetically-encoded fluorescent proteins, at best 3 plasmids can be co-expressed using transfection without being toxic for the neuron [16]. Note that the transfection method over-expresses the protein of interest, potentially altering synaptic strength. On the other hand, immunolabeling with antibodies against the protein of interest is limited by the available combinations of reactivity and host species [17]. Again, this approach limits the number of labels to 3 or 4.

Additionally, there is a limited number of fluorophores one can spectrally distinguish without risking spectral crosstalk and contamination of the signal to be observed. These labelling constraints result in studies that focus on the localization of synaptic proteins within the synapse without regard to the context of spine shape, growth or strength. To enable the study of synaptic proteins within their physiological context, it would be advantageous to have an additional label-free technique adequate for live-cell imaging. One such technique is Optical Diffraction Tomography (ODT) [18][19]. This technique uses the sample's refractive index as an intrinsic contrast agent to create a 3D physiological map of neurons, allowing the observation of dendritic spines. Furthermore, ODT can be used as a functional imaging tool, as it detects changes in refractive index which can occur due to changes in ion concentration within the spine induced by membrane depolarization [20].

Combining super-resolution STED nanoscopy with label-free optical diffraction tomography will allow us to relate the localization of synaptic proteins to dendritic spine shape, spine growth and synaptic strength.

Project goal

This project's goal is to develop an imaging tool to study synaptic communication and plasticity at the nanoscale in living neurons. This tool will allow the observation of the reorganization of synaptic proteins in relation to synaptic activity, and it must be adapted to live-cell imaging conditions. The chosen method is optical diffraction tomography (ODT), which uses the sample's refractive index as an intrinsic contrast agent to create a 3D physiological map of the cell, combined with STimulated Emission Depletion (STED) microscopy. The ODT system will be designed to be integrated onto an existing commercial STED microscope which allows 3D mapping of up to four fluorescently-labelled structures with a nanoscopic resolution. The combination of these imaging techniques will produce a multimodal super-resolution microscope adapted to the observation of live synapses.


Chapter 1

Technical background

This chapter contains a brief presentation of the concepts which underlie the biological question of interest for this project, as well as the technical aspects behind the choice of imaging modalities implemented to address this question.

1.1 Study of synaptic communication

The following section will outline the basic concepts of the mechanisms of synaptic transmission including the physiological composition of the participating structures and the main pathways that link these components together.

1.1.1 Physiology of synapses

Neurons are one of the types of cells that make up the brain. One of their distinctive properties is that their structure and composition allow them to send, receive and integrate various signals amongst themselves. They are composed of three main structures. The soma is the cell body, where the nucleus and most organelles are located. Neurons also have neurites, long projections which allow them to reach other neurons in different regions of the brain. These neurites can form multiple contacts with many different cells. There are two types of neurites, called axons and dendrites. Typically, each neuron has one axon, which is responsible for the transmission of signals, and multiple dendrites, which are responsible for the reception of signals.

Dendritic spines are also characterized by their dense contents, made up of a combination of highly specialized proteins. These synaptic proteins include structural proteins to maintain the shape and position of the spine, neurotransmitter receptors to allow entrance of ions upon activation by neurotransmitters, and different enzymes and effector proteins to translate and convey the signal to other processes. All of these proteins interact with one another and with other molecules to convey signals from the presynaptic side to the postsynaptic side, following different mechanistic pathways that will be outlined in the next section [10][6].

1.1.2 Transmission mechanism

All chemical synapses begin the process of transmission by synthesizing neurotransmitters and packing them into synaptic vesicles. When the neuron receives stimulation from other neurons, electrical activity drives fusion of these vesicles to the synaptic membrane. The neurotransmitters contained in the vesicles are then released into the synaptic cleft, the space between the two neurons forming the synapse, where they activate neurotransmitter receptors in the post-synaptic terminal. Receptor channels on the post-synaptic side then open to allow ions to enter the cell, which can drive either depolarization or hyperpolarization of the membrane, depending on the ionic charges (e.g. Na+ or Cl- respectively). [6]

1.2 Microscopy techniques

Different microscopy techniques have been developed through the years in order to visualize and study sub-cellular structures in living tissues. This section will highlight the principles that are essential to the techniques which have served as precursors to the ones that are used in this project.

1.2.1 Fluorescence microscopy

1.2.1.1 Diffraction-limited techniques

In order to use fluorescence microscopy, structures of interest must first be labelled with a fluorescent marker. There exists a wide variety of these markers, including fluorescent proteins, organic dyes, quantum dots and nanoparticles. These markers can be attached to the structure of interest using different techniques. For live-cell assays, it is important to limit the cytotoxicity of the marker; therefore fluorescent proteins are often chosen. For fixed cells, the constraints are mainly related to the imaging method or to the multiplexing of different markers.

A widely used fluorescence microscopy method, epifluorescence microscopy, uses a widefield illumination scheme to excite fluorescent markers over large portions of the sample at once. After passing through the appropriate filters to eliminate any residual excitation light, the fluorescence signal is sent onto a camera to produce an image of the entire field of view. One of the main drawbacks of epifluorescence microscopy is that by imaging the entire field of view at once, light from all planes is collected, which makes it impossible to distinguish structures at the surface of the sample from deeper ones. Epifluorescence microscopy nevertheless remains an appropriate tool for rapid visualization of large regions of labelled samples.


Other implementations of fluorescence microscopy have been developed in order to map fluorescently-labelled structures with better depth precision. Confocal microscopy is one of these techniques; it exhibits a property commonly called optical sectioning, which allows planar segmentation of the fluorescence signal. Confocal microscopy is a laser-scanning fluorescence microscopy technique. This means that, unlike in epifluorescence microscopy, the image is constructed pixel by pixel by sequentially collecting the fluorescence signal emitted from the point of the sample onto which an excitation laser beam is focused. A strategically-placed pinhole in the collection optical path eliminates signal coming from any out-of-focus planes. This pinhole, as well as the limited excitation volume created by the focused excitation beam, generates the optical sectioning property of these systems.

1.2.1.2 Diffraction limit

When the excitation beam is focused onto the sample, it creates an excitation region of a finite size. The size of this spot is mainly determined by the wavelength of the excitation light used and by the capacity of the microscope objective to focus the laser. This capacity is limited by the refractive index of the objective's immersion medium. The diffraction limit of an optical microscope was quantified by Ernst Abbe, who stated that the shortest distance d between points that can be distinguished in an image produced by a system obeys the following relations:

d = λ / (2 NA), (1.1)

NA = n sin θ, (1.2)

where NA is the numerical aperture of the microscope objective used, n is the refractive index of the immersion medium, θ is the half-angle of the objective's light acceptance cone, and λ is the wavelength of the light.

In typical experimental conditions for confocal microscopy, such as green visible light with a wavelength of 532 nm and an oil immersion objective with an NA of 1.4, Eq. (1.1) predicts a lateral resolution of d = 190 nm. For the typical size and distribution of synaptic proteins, which are densely packed in high numbers into the submicron structure of a dendritic spine, this resolution is insufficient to properly study the organization of proteins inside the synapse.
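The calculation above can be sketched numerically. The values below (532 nm green light, NA 1.4 oil objective) are the ones quoted in the text:

```python
# Abbe diffraction limit (Eq. 1.1): d = lambda / (2 * NA)
def abbe_limit(wavelength_nm: float, na: float) -> float:
    """Smallest resolvable distance, in nanometres."""
    return wavelength_nm / (2 * na)

# Values from the text: green light, oil-immersion objective
d = abbe_limit(532, 1.4)
print(f"Lateral resolution: {d:.0f} nm")  # prints: Lateral resolution: 190 nm
```

Dendritic spine heads measure only a few hundred nanometres, so a 190 nm lateral resolution cannot separate proteins packed inside a single synapse.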

Since these physical laws cannot be altered, other strategies had to be considered in order to improve the attainable resolution and thus be able to use fluorescence microscopy to study biological questions that involve smaller or denser structures.

1.2.1.3 Super-resolution microscopy

A strategy was developed to overcome the diffraction limit and resolve nanoscopic structures inside cells. This strategy exploits the fact that the fluorescent properties of labels can be switched between different states. The implementations of super-resolution optical microscopy can be classified into two categories: scanned and stochastic. Most stochastic implementations of super-resolution microscopy (PALM, STORM, SOFI) are based on widefield illumination of the samples. These imaging systems generally exploit the inherent property of fluorophores that makes them blink over time. By collecting multiple images of fluorophores blinking randomly at different moments and by inferring the position of the fluorophore producing each diffraction-limited point in the image, it becomes possible to reconstruct an image with a higher resolution. In scanned implementations of super-resolution microscopy such as STED and RESOLFT (REversible Saturable Optical Linear Fluorescence Transitions), a designated laser in a confocal-like imaging system is used to switch fluorescent markers between different states. For example, in a STED microscope, an added donut-shaped beam uses the process of stimulated emission to de-excite neighboring fluorescent labels that have been excited by the diffraction-limited confocal excitation beam. Labels inside the central hole of the donut therefore only see the excitation beam and emit fluorescence as in a confocal microscope. Meanwhile, labels located under the donut are first excited by the excitation beam, but their fluorescence is quenched by the depletion beam, which pushes them down to their ground state through stimulated emission, making them emit light of the same wavelength as the depletion beam. With appropriate filters, the excitation and depletion wavelengths are removed and the detectors only measure the fluorescence emitted by the labels situated in the hole of the donut-shaped depletion beam, thereby making the measured PSF smaller [21]. Figure 1.1 shows the excitation and de-excitation processes that fluorophores undergo during the formation of a STED image.
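The shrinking of the effective PSF by the depletion donut is commonly quantified by a scaling law from the STED literature (it is not stated in this chapter, so treat it as background rather than the author's formula): Abbe's limit divided by the square root of one plus the saturation factor I/I_sat of the depletion beam.

```python
import math

def sted_resolution(wavelength_nm: float, na: float, saturation_factor: float) -> float:
    """Effective STED resolution using the standard scaling law
    d = lambda / (2 * NA * sqrt(1 + I/I_sat)), where saturation_factor
    is the ratio of depletion intensity to the fluorophore's
    saturation intensity."""
    return wavelength_nm / (2 * na * math.sqrt(1 + saturation_factor))

# Without depletion (I = 0) this reduces to the Abbe limit of ~190 nm:
print(round(sted_resolution(532, 1.4, 0)))   # prints: 190
# Increasing depletion power shrinks the effective PSF, here to ~34 nm:
print(round(sted_resolution(532, 1.4, 30)))  # prints: 34
```

This captures why STED resolution is tunable: it is set by the depletion laser power rather than by optics alone, which is also why photobleaching becomes a concern at high depletion intensities.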

Figure 1.2 shows a confocal and a STED image of the postsynaptic protein PSD95, labelled with the dye STAR635P using primary and secondary antibodies, acquired on the microscope used for this project. The resolution improvement created by the addition of the STED depletion laser is clearly visible in these images.

This super-resolution fluorescence microscopy technique has been used to address biological questions similar to the one in this project. It has been shown to produce super-resolved images of samples labelled with genetically-encoded fluorescent proteins such as GFP [22]. STED has also been used to image the displacement of synaptic vesicles in live neurons labelled with organic fluorescent dyes [23]. Moreover, STED microscopy has been used to investigate the nanoscale organization of cytoskeletal proteins of the synapses of live cells [24].

STED microscopy has many advantages over other super-resolution microscopy techniques which make it a suitable choice for the imaging of synaptic proteins in live cells. First, in contrast with stochastic super-resolution microscopy techniques, STED constructs its images by scanning its lasers over the sample once and does not have to wait for sufficient blinking of fluorophores. It also doesn't require any numerical reconstruction. This makes the acquisition time of this technique much shorter than that of most stochastic techniques. These short acquisition times make it suitable to image the fast processes that take place in live cells. Furthermore, STED has the capacity to produce three-dimensional images of samples using multiple channels at once without the creation of chromatic aberrations. This allows this microscopy technique to produce useful data for biological assays which require the simultaneous localization of multiple structures of interest throughout the sample. Most stochastic techniques require sparse blinking of fluorophores in order to be able to evaluate the central position of the measured diffraction-limited spot with adequate reliability, while STED is suitable for high labelling densities. This makes it an appropriate choice to image the densely packed synaptic proteins [25].

Figure 1.1 – Jablonski diagrams of electronic processes involved in image formation for STED microscopy. Light blue arrows show that the excitation laser used in confocal microscopy brings fluorescent labels to an excited state. When these excited labels are left to return to their ground state, fluorescence is emitted, as shown by green arrows. However, if a depletion laser is used on the same fluorescent label after it has been excited, the label will be pushed back to its ground state through the process of stimulated emission and will emit light with similar properties to those of the depletion beam, as shown by the yellow arrows.

Figure 1.2 – Comparison of diffraction limited and super-resolution images of a postsynaptic protein. Confocal (left) and STED (right) images of PSD95 in fixed cultured rat hippocampal neurons.

The multiple excitation and de-excitation cycles undergone by the fluorescent labels in STED microscopy are often associated with undesirable photobleaching of less photostable fluorescent markers, such as the fluorescent proteins commonly used for live-cell imaging.

One drawback inherent to super-resolution fluorescence microscopy is that label and antibody selection for multiplexing can rapidly become a complex task. Indeed, antibody and spectral crosstalk must be avoided while keeping the labelling specific, in order to draw reliable conclusions from the multicolored images produced. Above all, since STED microscopy is a fluorescence microscopy technique, its sole contrast agent is the presence of fluorescent labels. This means that in order to add context around a fluorescently-labelled protein of interest with this technique, other fluorescent labels must be added, making the sample preparation more complex. For these reasons, the goal of this project is to combine this super-resolution microscopy technique with an imaging modality that uses an intrinsic contrast agent to produce three-dimensional images of the physiological context surrounding labelled synaptic proteins, without increasing the labelling load.

1.2.2 Phase microscopy

1.2.2.1 Principle of interferometry

An intrinsic property of matter is the way it interacts with light. One measure of this property is the refractive index, which quantifies the change in the speed of light in a medium relative to its speed in a vacuum. In a biological sample, the characteristics of a substance that affect this property are the nature, the concentration, the size and the respective refractive indices of each of its constituents.

Light coming from a coherent source, such as a laser, maintains a constant phase relationship along the optical path; this property is commonly referred to as coherence. When such a beam enters a sample composed of regions of different refractive indices, the speed of the portions of the beam going through each of these regions is altered differently. If light from this source that has gone through the sample is compared to light that hasn't, the light which has gone through the sample will have accumulated a delay proportional to the refractive index of the sample region it has gone through. This delay mainly affects the light's phase. It is therefore called a phase delay, and the difference caused to the light's trajectory is referred to as an optical path difference (OPD).
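The phase-delay relation described above can be sketched numerically. The values used here are illustrative assumptions, not numbers from the thesis: a roughly 5 µm thick cell with refractive index ~1.37 in culture medium with index ~1.33, probed at 532 nm.

```python
import math

def phase_delay(n_sample: float, n_medium: float,
                thickness_um: float, wavelength_um: float):
    """Optical path difference (OPD) and corresponding phase delay for
    light crossing a sample of given thickness, relative to light that
    traversed only the surrounding medium."""
    opd = (n_sample - n_medium) * thickness_um   # OPD in micrometres
    phi = 2 * math.pi * opd / wavelength_um      # phase delay in radians
    return opd, phi

# Illustrative values (assumed, not from the text): a ~5 um thick cell
# (n ~ 1.37) in culture medium (n ~ 1.33), probed at 532 nm.
opd, phi = phase_delay(1.37, 1.33, 5.0, 0.532)
print(f"OPD = {opd:.3f} um, phase delay = {phi:.2f} rad")
# prints: OPD = 0.200 um, phase delay = 2.36 rad
```

Even a 0.04 refractive-index difference over a few micrometres produces a phase delay of a sizeable fraction of 2π, which is what makes the refractive index a usable intrinsic contrast agent.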

Classical detectors such as CCD and CMOS cameras measure the intensity of the light they receive; they are not fast enough to follow the oscillations of the optical field and therefore cannot measure its phase directly. Other methods must thus be put in place to measure this quantity, which encodes the refractive index of the medium the light has propagated through. The simplest approach is to have the delayed light interact with a copy of the initial wave in order to encode the phase delay in an intensity pattern which can be measured by a camera. This method is called interferometry, since it exploits the principle of interference to produce measurements of a wave's properties.

Interference is the result of the addition of two light waves. It occurs when the source of both of these waves is sufficiently coherent, meaning it is capable of maintaining its phase constant over a long enough distance. This condition for coherence mainly depends on the choice of light source. For example, a laser beam can be coherent over several meters, while light from a diode or a halogen lamp is only coherent over a few millimeters. Another condition for interference to occur is that both light waves be sufficiently similar in terms of frequency and polarization. When the waves overlap, they produce an interference pattern in the form of an intensity distribution which includes periodic fringes. The different types of interference patterns that can be produced are described later in this section. All types of interference patterns encode the same information, which can be described by the relation below. Let R and O denote the reference and object optical fields respectively. The interference pattern recorded on the camera can be expressed as [26]

E(x,y) = |R(x,y)|² + |O(x,y)|² + O*(x,y)R(x,y) + O(x,y)R*(x,y)    (1.3)

where the asterisk denotes the complex conjugate. There exist multiple implementations of interferometers that can be built to produce interference patterns that encode information about a sample. The two most common ones used for this type of application are the Michelson interferometer and the Mach-Zehnder interferometer. For thin samples with low refractive index values, such as live cells, it is simpler to have a setup that allows light to go through the sample only once, and so to accumulate the phase delay caused by the sample's refractive index only once. The best choice for this is a Mach-Zehnder interferometer in transmission configuration. More precisely, light from a coherent source, such as a laser, is split into two beams of the same intensity and polarization, called the sample arm and the reference arm. As their names suggest, the sample arm is sent through the sample in order to accumulate a phase delay proportional to the sample's refractive index distribution, while the reference arm propagates in a parallel manner without interacting with the sample, in order to be compared with the sample arm. After propagating over an equal distance, the two arms are then recombined with a second beamsplitter, causing both beams to interfere and creating an interference pattern which is then collected on a camera.

Figure 1.3 – Frequency spectra of on- and off-axis holograms. Taken from [27]

There are two types of interference patterns that can be created by this type of interferometer. The first is called an on-axis interference pattern and occurs when the reference and sample arms are incident on the beamsplitter at a right angle, as shown in the top left panel of Fig. 1.3 [27]. The resulting fringes that compose the interference pattern are concentric circles. In this type of interference pattern, all the terms of Eq. 1.3 overlap in frequency space, as shown in the top right panel of Fig. 1.3.

If the reference and sample arms are recombined with a slight angle (approximately 1°), the interference fringes become parallel linear fringes and the different diffraction modes produced are separated in frequency space, which facilitates reconstruction by isolating the real image from its twin image and the zero-order interference term of Eq. 1.3. This angled recombination and the corresponding frequency spectrum with separated terms are shown in the bottom panels of Fig. 1.3.
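To make this concrete, the short sketch below simulates the intensity of Eq. 1.3 when a slightly tilted plane reference wave interferes with an object wave carrying a smooth phase bump. The wavelength is the HeNe line used later in this project, while the grid size, pixel pitch and tilt angle are illustrative assumptions.

```python
import numpy as np

# Illustrative simulation of an off-axis interference pattern (Eq. 1.3).
wavelength = 594e-9                   # HeNe wavelength (m)
k = 2 * np.pi / wavelength            # wavenumber
tilt = np.deg2rad(1.0)                # recombination angle (assumed)
n, pixel = 256, 4.8e-6                # grid size and pixel pitch (assumed)
x = np.arange(n) * pixel
X, Y = np.meshgrid(x, x)

# Object wave: unit amplitude carrying a smooth Gaussian phase bump
phase = 1.5 * np.exp(-((X - x.mean())**2 + (Y - x.mean())**2)
                     / (2 * (20 * pixel)**2))
O = np.exp(1j * phase)
R = np.exp(1j * k * np.sin(tilt) * X)  # tilted plane reference wave

# Camera intensity: |R|^2 + |O|^2 + O*R + OR*
E = (np.abs(R)**2 + np.abs(O)**2 + np.conj(O) * R + O * np.conj(R)).real
fringe_period = wavelength / np.sin(tilt)   # spacing of the linear fringes
```

With unit-amplitude waves the recorded intensity stays between 0 and 4, and the linear fringes have period λ/sin(tilt), about 7 pixels here; the phase bump locally bends these fringes.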

The central circle in the spectrum contains the |R(x,y)|² + |O(x,y)|² terms. These are known as the auto-correlation terms. The outer circles respectively contain the R*O and RO* terms corresponding to the real and twin images of the sample. The extent of the auto-correlation term is proportional to the frequency contents of the imaged object.

During the reconstruction process, where the image of the sample is retrieved from the interference pattern, the real image term must be isolated from the other terms. Since these terms are inherently separated in the frequency space of off-axis interference patterns, this process is simplified.
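As a sketch of this isolation step, the snippet below (with assumed grid, carrier-frequency and window parameters) builds an off-axis hologram, crops the real-image term in the Fourier domain, re-centers it, and recovers the object phase by inverse transform:

```python
import numpy as np

# Off-axis reconstruction sketch: isolate the real-image term in
# frequency space, then recover the object phase.
n, pixel = 256, 4.8e-6
f0_bins = 36                     # carrier (tilt) frequency in FFT bins, ~1 deg
x = np.arange(n) * pixel
X, Y = np.meshgrid(x, x)
true_phase = 0.8 * np.exp(-((X - x.mean())**2 + (Y - x.mean())**2)
                          / (2 * (25 * pixel)**2))
O = np.exp(1j * true_phase)                              # object wave
R = np.exp(1j * 2 * np.pi * f0_bins * X / (n * pixel))   # tilted reference
hologram = np.abs(O + R)**2                              # Eq. 1.3, expanded

F = np.fft.fftshift(np.fft.fft2(hologram))
c, win = n // 2, 15
crop = np.zeros_like(F)
# The O R* term sits f0_bins to the left of the spectrum's center
crop[c - win:c + win + 1, c - win:c + win + 1] = \
    F[c - win:c + win + 1, c - f0_bins - win:c - f0_bins + win + 1]
recovered = np.angle(np.fft.ifft2(np.fft.ifftshift(crop)))
recovered -= recovered[0, 0]                             # constant offset
```

Re-centering the cropped term demodulates the linear fringes, so the remaining phase matches the object's phase bump up to a constant.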


1.2.2.2 Interferometry for imaging of biological samples

In 1948, Denis Gabor developed a technique called holography, which consists in recording a hologram, or interference pattern, encoding information about a sample onto a photographic plate. These holograms contained all the information about a sample beam's intensity and phase. By re-illuminating the inscribed photographic plate, he was able to produce an image of the sample as well as a twin image [28]. This method was revived after the invention of the laser, which provided a more powerful source of coherent illumination and improved image quality. The integration of a separate reference beam in the off-axis configuration helped separate the twin and real images during the re-illumination phase [29]. Later, the invention of the CCD camera eliminated the need for photographic plates, and numerical reconstructions were developed to replace the re-illumination process. This new digital implementation of holography was given the name digital holographic microscopy (DHM). To this day, this quantitative phase microscopy technique allows its users to produce label-free images of the phase delay or optical path difference (OPD) created by the presence of transparent samples such as live cells.

More precisely, this OPD is related to the phase delay, φ, following

    φ = (2π/λ) × OPD    (1.4)

The magnitude of the OPD is influenced by several properties of the imaged sample: the refractive index of the sample, n_s, the refractive index of the surrounding medium, n_m, and the thickness of the sample along the direction of light propagation, t.

    OPD = t × (n_s − n_m)    (1.5)

The image produced by DHM is a two-dimensional map of the OPD of the sample in the selected immersion medium. Since the sample's thickness and refractive index are coupled in Eq. 1.5, it is impossible to distinguish between a thin object of high refractive index and a thick object of low refractive index. Different strategies, including medium changes, have been used in order to dissociate these two sample characteristics during DHM measurements. However, these strategies increase the complexity of the sample handling and data acquisition processes.
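This coupling can be illustrated numerically with Eqs. (1.4) and (1.5); the thickness and index values below are assumed, physiologically plausible numbers, not measurements from this project:

```python
import math

# Worked example of Eqs. (1.4)-(1.5) with assumed values:
# a 3 um-thick cell region of index 1.380 in a medium of index 1.334.
wavelength = 594e-9                 # HeNe wavelength (m)
t, n_s, n_m = 3e-6, 1.380, 1.334    # thickness, sample and medium indices

opd = t * (n_s - n_m)                  # Eq. (1.5): ~138 nm
phi = 2 * math.pi / wavelength * opd   # Eq. (1.4): ~1.46 rad

# Thickness/index ambiguity: a thinner but denser object (1.5 um at
# index 1.426) produces exactly the same OPD, hence the same phase image.
opd_2 = 1.5e-6 * (1.426 - 1.334)
assert abs(opd - opd_2) < 1e-15
```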

Another strategy for decoupling the sample's refractive index and thickness from a phase image was proposed by Emil Wolf in 1969 [30]. Though the necessary optical technologies, such as digital cameras, were not available yet, Wolf developed the mathematical formalism for a method to record and reconstruct the three-dimensional distribution of a weakly-scattering sample's refractive index. The principle of this method, later to be known as optical diffraction tomography (ODT), consists in sequentially acquiring multiple holograms, each with a different direction of illumination. This way, in each hologram, the sample arm goes through the sample with a different trajectory and thus with a different wave vector,


which modulates the frequency contents of the hologram. By taking these multiple holograms into account in the numerical reconstruction process, it becomes possible to produce a three-dimensional map of the sample's refractive index. Figure 1.4 presents a schematic of a Mach-Zehnder interferometer capable of producing such tilted-illumination holograms using a scanning mirror (also called a galvanometer mirror).

Figure 1.4 – Optical diffraction tomography principle. The laser beam is separated into two arms by the beamsplitter shown in the top left. The sample arm is sequentially scanned at different angles shown as differently colored beams before illuminating the sample placed between the two microscope objectives. The light diffracted by the sample is then recombined with the unaltered reference arm to produce an interference pattern (one for each illumination angle). These patterns are then numerically reconstructed to produce 3D refractive index maps of the imaged samples.

Later, in 2002, Lauer developed the formalism to reconstruct and analyze data acquired by this type of system and was able to establish the relationship between experimental conditions and the attainable resolution [31]. The following series of equations presents the conclusions of Lauer's derivation of the relationship between the spectral contents of the acquired tilted holograms and the theoretically attainable resolution.

Let θ be the maximal angle that can be illuminated using a microscope objective with a numerical aperture NA, and n_0 be the index of refraction of the immersion medium. θ is given by:

    θ = arcsin(NA/n_0)    (1.6)

The spatial frequency, F, of the wave with wavelength λ within the medium is given by:

    F = n_0/λ    (1.7)

The spatial frequency and maximal scan angle can be used to quantify the maximal frequency extent of the acquired tilted holograms, whose reciprocals are the Nyquist resolution limits for this type of imaging system.

    (1/Δf)_h = 1/(4F·sin(θ)) = λ/(4·NA)    (1.8)

    (1/Δf)_v = 1/(2F·(1 − cos(θ))) = (1/2) · λ/(n_0 − √(n_0² − NA²))    (1.9)

The results of the application of these equations to the experimental conditions of this project are presented in Table 2.2.

This technique offers the possibility to map an intrinsic property of matter in a highly resolved, label-free and non-invasive manner. However, Lauer had only shown the 3D mapping of samples' phase and had not determined their refractive indices. For these reasons, Michael Feld's group decided to implement it for imaging the 3D distribution of biological samples' refractive index through the use of appropriate numerical reconstruction techniques [32]. This same group also went on to apply this technique to live cells [33]. In these publications, it was evaluated that their setups and algorithms yielded lateral (x-y) resolutions of 0.5 µm in fixed cells [32] and of 0.35 µm in live cells [33]. The axial (z) resolutions of these systems were respectively 0.75 µm and 0.7 µm in fixed and live cells. Though this work was promising, the resolutions attained were not sufficient to study certain biological questions.

It was later stated that frequency support is not the only factor determining the resolution of ODT systems, and that the hypotheses used in the numerical reconstruction can also limit the accuracy of the reconstructed image [34]. To improve on these resolutions, Haeberlé's group developed a new reconstruction technique and a validation system using confocal microscopy. With these improvements, it became possible to attain an experimentally measured resolution of 129 nm, which is better than the diffraction-limited resolution of confocal microscopy [34]. A few years later, the teams of Pierre Marquet and Christian Depeursinge developed a way to use nanoscale apertures to calibrate a numerical reconstruction technique and to characterize the imaging system. This allowed the production of three-dimensional maps of biological samples' refractive index with a resolution of 90 nm. The capacity of this system to produce timelapse images of a dendritic spine's reorganization during filopodia formation, in cultured neurons kept in physiological solution in a closed imaging chamber, was also demonstrated [20]. All of these advances and developments have made ODT an appropriate technique to produce three-dimensional maps of biological samples' refractive index without the need for any staining or sample preparation [35][36][37][38]. These maps can provide valuable physiological information about a biological sample of interest. It has been shown that ODT can attain resolutions better than the diffraction limit, which allows the visualization of small subcellular structures [39]. The resolution of an ODT system is mainly limited by the extent of the illumination angles used to image the sample and by the numerical reconstruction algorithm used to produce the images from the acquired holograms.


1.3 Previous work: multimodal imaging systems

Other research groups have created multimodal imaging systems combining fluorescence imaging with phase imaging in order to study biological questions. This section presents the ones whose aims are closest to that of this project, as well as a comparison of the strengths and weaknesses of the chosen techniques.

In 2006, a group implemented an ODT microscope onto an inverted epifluorescence microscope and imaged kidney cells with fluorescently-labelled nuclei. This combination of modalities allowed the addition of molecule-specific information to quantitative phase images through the fluorescent labelling and imaging process [40]. However, as mentioned earlier, epifluorescence microscopy does not perform any optical sectioning and its resolution is limited by diffraction. This choice of fluorescent modality would therefore be inadequate for the biological question of interest for this project, due to the size and organization of synaptic proteins.

As previously described, Haeberlé's group used confocal microscopy measurements to calibrate ODT reconstruction [34][41]. However, the first group to use confocal microscopy and ODT as complementary modalities to produce multimodal phase and fluorescence images of biological samples was Yaqoob's group in 2012, though the aim of this work was still to confirm and calibrate ODT measurements with confocal images [42]. Though confocal microscopy has optical sectioning capability, its resolution is still diffraction-limited and thus insufficient to distinguish synaptic proteins.

As described in section 1.2.1.3, STED microscopy produces super-resolved images of structures labelled with fluorescent markers, but does not provide any information about the surroundings of these labelled structures. For this reason, Emiliani's group decided to couple it to a qualitative phase microscopy technique called spiral phase contrast. This phase imaging technique creates images that show the two-dimensional outlines of structures with different optical densities, such as a cell body and the surrounding medium. For example, it was shown that spiral phase contrast could reveal the external limits of neurites, which could be overlaid on STED images of labelled cytoskeletal proteins [43]. Though this phase imaging technique provided valuable information for that particular application, such qualitative information is not sufficient to properly quantify the level of activity or reorganization of the constituents of a dendritic spine.

Recently, super-resolution optical fluctuation imaging (SOFI) was combined with quantitative phase microscopy to study dynamic biological processes. SOFI is a stochastic super-resolution microscopy technique that uses the unique temporal signatures of fluorescent labels to distinguish them from a stack of diffraction-limited epifluorescence images [44]. The combination of SOFI with QPI using an innovative prism allowed simultaneous imaging of multiple z-planes. It enables sequential observations of biological processes, such as the displacement of vesicles, using phase images, and it produces 3D super-resolved images of labelled


structures of interest [45]. This method is not directly applicable to the biological question that is synaptic transmission, for several reasons. First, multiplexing SOFI to image more than one fluorescent label at a time rapidly increases the complexity of the required instrumentation and reconstruction algorithm. Secondly, the reconstruction technique used to produce the super-resolved images from the acquired epifluorescence images induces artifacts in the brightness distribution of the produced image, which would complicate quantitative analysis of protein interactions. Finally, this implementation of QPI and its associated reconstruction algorithm uses rather strong hypotheses concerning the phase distribution in the sample, which would not hold for structures having the size and contents of dendritic spines.

Through the combination of STED microscopy and ODT, the goal is to produce simultaneous images of labelled synaptic proteins with a resolution below 60 nm, together with label-free 3D maps of the refractive index of the dendritic spine's constituents that situate the labelled structures in their physiological context. For example, a change in the shape, size or contents of a dendritic spine could be correlated with a change in synaptic proteins' positions.


Chapter 2

Setup design

2.1 Initial system

The setup is based around an existing commercial STED microscope from Abberior Instruments GmbH. The STED and confocal imaging modalities are implemented on an inverted microscope body from the company Olympus. This particular microscope body is designed with two decks, allowing the integration of multiple optical paths for different imaging modalities. Figure 2.1 shows the integration of the STED microscopy modality into the Olympus microscope body. The Abberior 4-color STED microscope combines STED microscopy with other complementary modalities to assist with sample evaluation and the selection of the region of interest: brightfield microscopy to locate the focal plane, epifluorescence microscopy to evaluate sample staining and locate appropriate cells to image, and confocal microscopy with 4 different excitation lasers, along with 2 depletion lasers for STED microscopy. Figure 2.2 shows the wavelengths of the STED and confocal lasers as well as the available fluorescence detection windows.

All of these existing modalities have interconnected beam paths which allow them to be used on the same region of the sample. These existing optics have finite sizes and positions which must be respected by the ODT beam path in order to avoid information loss or artifact accumulation. The most important constraints are the size of the mirrors in the STED scanning system, the focal length and size of the tube and scan lenses, and the presence of different filters which only allow the propagation of certain wavelengths in different parts of the system.

Since the ODT system is to be integrated around an existing microscope, the geometry and disposition of existing parts add constraints that must be considered during the design phase of this project.


Figure 2.1 – Complete model of the contents of the STED microscope enclosure shown from above. The beam path of each of the excitation and depletion beams is depicted in a different color.

Figure 2.2 – Lasers and detectors on the Abberior 4-color STED microscope: 4 Gaussian-shaped excitation lasers at 488 nm, 518 nm, 561 nm and 640 nm; 2 donut-shaped depletion lasers at 595 nm and 775 nm; and 4 fluorescence detection windows measured by APDs, shown in grey.

2.2 Planned layout

Based on the available space around the existing imaging system, a general plan of the layout of the ODT components was established. It was decided that to produce the ODT system, a helium-neon (HeNe) laser with a wavelength of 594 nm would be placed on the optical table behind the existing microscope. The beam's intensity and size needed to be adjusted, and the beam directed using a periscope system onto a structure built over the inverted microscope body (Figure 2.3 a). Once on this structure, the beam was separated into the interferometer's two arms using a beamsplitter. The reference arm was brought back down to the level of the optical table to await recombination with the sample arm. The sample arm was directed to a 2-axis galvanometer mirror system in order to scan different illumination angles on the sample (Figure 2.3 b). A system to suspend the illumination objective and to control its position was designed, replacing the brightfield illumination path of the STED microscope. ODT illumination was performed in transmission using a water dipping 63x 1.0 NA objective for the illumination and the STED 100x 1.4 NA oil objective for the collection. Following sample illumination, the ODT sample arm was directed towards the optical table, where it was recombined with the reference arm to produce interference patterns on a camera (Figure 2.3 c).

Figure 2.3 – Planned layout of the ODT imaging system. a) The beginning of the ODT system, located on the optical table behind the STED microscope. b) The portion of the ODT setup located above the microscope body. c) The recombination section of the ODT setup located on the optical table.

2.3 Technical optical requirements of the ODT system

For the design of the ODT setup and its integration on the existing STED microscope, various considerations had to be accounted for. First, the ODT system had to have sufficient resolution to distinguish the tightly packed components of the dendritic spine. In ODT, the resolution is limited by the spectrum of illumination angles used to produce the volumetric map of refractive index. Therefore, maximizing the attainable illumination angles of the system is a first design goal.

Next, in order to properly track synaptic transmission events and the resulting movement of components to their full extent, a field of view of around 50 µm × 50 µm was chosen.

The imaging system was designed around the use of an open imaging chamber to allow simultaneous live-cell imaging and electrical stimulation of the sample. This resulted in the use of a long working distance water dipping objective with an NA of 1.0.

The last requirement was for this imaging system to be able to temporally resolve structural changes related to neuronal activity with both imaging modalities. The ability to simultaneously perform STED and ODT imaging, or to switch rapidly between both modalities, had to be considered.


The following section outlines the design process used to plan the setup around the existing microscope while respecting these technical requirements.

2.4 Modelling

In order to properly plan and assess the optimal beam path for this setup, it was necessary to be able to evaluate the effect of different choices of optical components on the beam’s size and orientation at different positions in the beam’s propagation path. This way, it would become possible to validate that all the technical requirements could be met and to choose the combination of optical components that could constitute such an optimal path.

2.4.1 Ray transfer matrices

The method chosen to accomplish the evaluation of the role of different optics in the beam path on the beam's size and orientation is commonly referred to as ray transfer matrix analysis. It is a way to simplify the propagation equations of geometrical optics into a linear equation system which relates the beam's position and orientation to the optics in a path. Equation (2.1) relates the position and orientation of the beam before its passage through the optical system, y_i and θ_i, to those after this passage, y_f and θ_f. A, B, C and D are produced by combining the matrices representing each of the optical components encountered by light, according to equation (2.2).

    [ y_f ]   [ A   B ]   [ y_i ]
    [ θ_f ] = [ C   D ] × [ θ_i ]    (2.1)

    M = [ A   B ] = M_n × ... × M_3 × M_2 × M_1    (2.2)
        [ C   D ]

Therefore, by varying different values such as propagation distances or lens focal lengths, it is possible to evaluate how a beam's position and orientation during its propagation through the system will be affected by these variations. Table 2.1 contains the M_i matrices of the most commonly encountered optical components.
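The composition rule of Eq. (2.2) translates directly into code. The sketch below is a minimal, illustrative toolkit (the focal length is an assumed value) that checks the classic 2f-2f relay, which images any input ray with magnification −1:

```python
import numpy as np

# Minimal ray-transfer-matrix toolkit (Eqs. 2.1-2.2, Table 2.1).
def propagation(d):              # free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):                # thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def system(*elements):           # elements in the order light meets them
    M = np.eye(2)
    for element in elements:     # M = Mn x ... x M2 x M1
        M = element @ M
    return M

# Sanity check: a 2f-2f relay around a single lens images any input ray
# with magnification -1 (A = -1) and B = 0 (conjugate planes).
f = 0.1                          # illustrative focal length (m)
M = system(propagation(2 * f), thin_lens(f), propagation(2 * f))
y_out, theta_out = M @ np.array([1e-3, 0.01])   # Eq. (2.1)
```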

Furthermore, with the proper approximations and prior knowledge about the beam's intensity distribution, it becomes possible to evaluate the beam size at each position of the optical path. In the case of this project, it is a well documented fact that HeNe lasers have a Gaussian profile, and the beam's diameter at the laser's output is a known specification. Therefore the following relations can be used to evaluate the beam's radius ω_1 after its propagation through the system, using the beam's radius ω_0 before it enters the system and its complex beam parameter q:

    z_R = πω_0²/λ    (2.3)

    ω_1 = √[ ω_0²/(AD − BC) × (A² + B²/q²) ]    (2.4)

This evaluation of the beam size at different points in a theoretical optical system facilitated the verification of whether this system would be compatible with the existing beam paths. The results obtained with this method will be presented in the following sections, after its applications have been described.
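The same Gaussian beam size evaluation can be sketched numerically through the standard complex-beam-parameter formulation (q = i·z_R at the waist, transformed as q' = (Aq + B)/(Cq + D)); the waist value below is an illustrative assumption while the wavelength is the project's HeNe line:

```python
import numpy as np

# Gaussian beam size through an ABCD system via the complex beam parameter.
wavelength = 594e-9               # HeNe wavelength (m)
w0 = 0.5e-3                       # waist radius at the source (assumed)
z_R = np.pi * w0**2 / wavelength  # Eq. (2.3), Rayleigh range

def beam_radius(q):
    # 1/q = 1/R - i*wavelength/(pi*w^2), so w follows from Im(1/q)
    return np.sqrt(-wavelength / (np.pi * (1.0 / q).imag))

def through(M, q):                # ABCD transformation: q' = (Aq+B)/(Cq+D)
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

q0 = 1j * z_R                     # beam starts at its waist
# Free-space check: after one Rayleigh range the radius grows by sqrt(2)
w1 = beam_radius(through(np.array([[1.0, z_R], [0.0, 1.0]]), q0))
```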

Table 2.1 – Common ray transfer matrices used in propagation modelling

Propagation over distance d:
    [ 1   d ]
    [ 0   1 ]

Planar interface between two media with refractive indices n_1 and n_2:
    [ 1      0      ]
    [ 0    n_1/n_2  ]

Spherical interface of radius R from medium n_1 to medium n_2:
    [ 1                      0      ]
    [ (n_1 − n_2)/(R·n_2)  n_1/n_2  ]

Thin lens of focal length f:
    [  1     0 ]
    [ −1/f   1 ]

2.4.2 Angle propagation

In order to choose an appropriate scanning system for ODT that would support scanning over the full extent of the illumination angles allowed by the microscope objective's NA, it was first necessary to evaluate the corresponding range of scan angles needed, as well as the size of the beam to be scanned. It is well established in the literature that in order to produce a collimated (parallel) beam at the sample with different inclinations, the beam must be focused at different lateral positions in the back focal plane of the illumination objective [46][47]. Figure 2.4 shows how the angle scanned by the galvanometer mirror, α, propagates over the focal length of the first lens, L_1, to then be focused at a certain position on the back aperture of the objective and transmitted as an illumination angle β onto the sample. Using this method, it was determined that in order to produce the full 48.75° allowed by the NA of the illumination objective, the galvanometer mirror needed to produce angles of over 10° in the back focal plane of L_1.

The resolution of the system was evaluated to ensure its capacity to image the biological structures of interest for this project. Using equations (1.8) and (1.9), which were derived in [31], and taking into account the specifications of the microscope objectives chosen for the ODT setup, it became possible to evaluate the resolution limits that could theoretically be attained by the system being designed. This evaluation was conducted for both the illumination and collection objectives, and the results are presented in Table 2.2. Since the wavelength of the HeNe laser around which the first iteration of the ODT setup was designed is relatively long, the resulting estimation of resolution is slightly larger than the initial design goal. For this reason, other lasers with a shorter wavelength were tested later on.

Figure 2.4 – Angle propagation from galvanometer to sample. Top view of the back aperture of the illumination objective with the illuminated point in orange (left). Definition of the angles α scanned by the galvanometer onto L_1 and β incident on the sample (center). Complete pathway of a scanned angle from the galvanometer to the sample (right).

Table 2.2 – Theoretical resolution evaluation of the designed ODT setup with HeNe 594 nm laser

                         Illumination objective      Collection objective
                         (63X, N.A. 1.0, Water)      (100X, N.A. 1.4, Oil)
    F                    2.239 × 10⁶ m⁻¹             2.5589 × 10⁶ m⁻¹
    θ                    48.75°                      67.08°
    Nyquist vertical     0.656 µm                    0.32 µm
    Nyquist horizontal   0.149 µm                    0.106 µm

Equations (1.8) and (1.9), which were derived in [31], were used in conjunction with derivations taken from [48] to evaluate the propagation of the contents of the frequency spectrum through a microscope objective. This process made it possible to evaluate the frequency cutoff associated with the camera's pixel size, and thus to determine that a suitable camera for the ODT system had to have a pixel size smaller than 4.8 µm.
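The Table 2.2 entries can be reproduced directly from Eqs. (1.6)-(1.9); the immersion indices below (1.33 for water, 1.52 for oil) are assumed nominal values:

```python
import numpy as np

# Reproduce the Table 2.2 resolution estimates from Eqs. (1.6)-(1.9).
wavelength = 594e-9                  # HeNe laser (m)

def odt_limits(NA, n0):
    theta = np.arcsin(NA / n0)                      # Eq. (1.6)
    F = n0 / wavelength                             # Eq. (1.7)
    horizontal = 1 / (4 * F * np.sin(theta))        # Eq. (1.8) = wavelength/(4*NA)
    vertical = 1 / (2 * F * (1 - np.cos(theta)))    # Eq. (1.9)
    return np.degrees(theta), F, horizontal, vertical

# Illumination (63x, NA 1.0, water) and collection (100x, NA 1.4, oil)
th_i, F_i, h_i, v_i = odt_limits(1.0, 1.33)
th_c, F_c, h_c, v_c = odt_limits(1.4, 1.52)
```

The results match the table: θ of 48.75° and 67.08°, lateral limits of 0.149 µm and 0.106 µm, and axial limits of 0.656 µm and 0.32 µm.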

2.4.3 Beam propagation

The beam paths of the existing imaging modalities are all enclosed either in the microscope body itself or in an adjacent box-shaped enclosure, limiting access to some of the optical components. In particular, though the microscope body is designed to be modular, access to individual optical components below the objective is especially difficult. Therefore, the ODT beam's properties had to be carefully selected and adjusted before its entrance into the closed system, to be compatible with the existing beam paths and ODT technical requirements without the possibility of internal modulation.

Figure 2.5 – Beam size at different points of propagation for selected L_1 focal lengths, with and without the added beam expansion system. Beam size constraints are identified in green. Components selected for ODT are in the shaded yellow region, while the other components are built into the epifluorescence microscope path.

Using the specifications of the constraints created by the existing optical paths and the technical requirements as boundary values, the ray transfer matrices of the illumination portion of the sample beam were computed with various focal lengths and corresponding propagation distances for L_1. Figure 2.5 shows the results of the beam size propagation predicted by ray transfer matrix analysis for the optimized selection of L_1 (f = 400 mm), with and without the added beam expansion system consisting of two lenses (f_1 = 50 mm and f_2 = 400 mm) placed in a 4f configuration to produce a magnification of 8x.
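As a quick consistency check on this choice, the ABCD product of the 4f pair (f_1 = 50 mm, f_2 = 400 mm, separated by f_1 + f_2, evaluated from the front focal plane of the first lens to the back focal plane of the second) confirms the 8x expansion; a sketch:

```python
import numpy as np

# ABCD check of the 4f beam expander: two lenses, f1 = 50 mm and
# f2 = 400 mm, separated by f1 + f2.
f1, f2 = 0.050, 0.400
P = lambda d: np.array([[1.0, d], [0.0, 1.0]])          # free space
L = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])   # thin lens

M = P(f2) @ L(f2) @ P(f1 + f2) @ L(f1) @ P(f1)
# A = -f2/f1 = -8: beam height magnified 8x (inverted image)
# C = 0: a collimated input beam leaves the expander collimated
```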

For each lens trial, the beam size at key points was evaluated, along with its tolerance to misalignment. Figure 2.6 shows the impact on the field of view of different magnitudes of imprecision in the distance between the illumination objective and L_1. The impact on beam size rapidly increases starting at misalignments on the order of 1 mm.

Considering these results, a motorized positioning system which has a resolution of 200 nm was selected to avoid the known consequences of injecting an imprecisely sized beam into the closed optical system. These consequences include field of view reduction, beam clipping or diffraction at the edges of optics which could create unwanted artifacts.

Figure 2.6 – Effect of illumination objective misalignment on beam size at the sample plane, estimated by ray transfer matrices.

With the optical components that modulate the illumination properties of the ODT beam selected, the next step was to choose the optical path through the microscope body that the ODT beam should take. Each potential path had its own properties, advantages and drawbacks. Two of these paths were evaluated, since the locations of their outputs were accessible for ODT recombination. The first is the path used by the STED microscopy system (the STED path); the second possible collection path through the microscope is the one employed by the epifluorescence module (the epifluorescence path).

The STED path has the advantage of allowing simultaneous ODT and STED imaging, and thus of optimizing the system's multimodal potential. Figure 2.1 contains schematic representations of the contents of this beam path.

Figure 2.7 – Beam paths through the microscope body. Schematic of the contents of the STED microscopy beam path through the microscope body, with a simplified schematic of the beam paths in the attached enclosure shown in the black rectangle (left). Beam path through the epifluorescence microscopy optical path (right).


This path is easily accessible and well characterized. However, it presents two major drawbacks. The first is that the path's multiple optical components could lead to coherent noise accumulation. Coherent noise is inherent to laser interferometry and creates coherent artifacts in the images produced by these measurements. More precisely, it is caused by spurious reflections from optics within the system interfering with both the reference and sample arms. The reflections can be caused either by coatings or by defects on the optical surfaces [49]. The second drawback is that the STED scanning system can only tolerate small beam sizes, thus limiting the achievable field of view for ODT.

The epifluorescence microscopy path is engaged by placing a filter cube and a mirror in the beam path using motorized wheels. Figure 2.7 shows how this beam path is implemented in the microscope body to form images on a camera located at the back of the microscope body. Because additional optical components would have to be inserted into the STED optical path, simultaneous use of STED and ODT through this path is impossible. Also, the back of the microscope body, where the output of this beam path is located, is a crowded and therefore less accessible space, shown in the right picture of Figure A.4.

After simulating the propagation of the ODT sample arm through both potential collection paths using ray transfer matrices, the epifluorescence microscopy beam path was chosen, since it allowed a larger FOV for the ODT system and limited the risk of coherent noise accumulation. The results of this comparative simulation are illustrated in Figure A.2.
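The ray-transfer-matrix comparison described above can be sketched as follows. This is a minimal illustration of the method, not a reproduction of the actual simulation: the focal lengths and spacings below are hypothetical placeholders, chosen only to show how a relay's focal-length ratio rescales the beam and hence the usable ODT field of view.

```python
import numpy as np

# ABCD matrices for the two basic paraxial elements
def free_space(d_mm):
    return np.array([[1.0, d_mm], [0.0, 1.0]])

def thin_lens(f_mm):
    return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

def beam_height(elements, h_in_mm=1.0):
    """Trace a collimated ray of input height h_in_mm through the elements, left to right."""
    ray = np.array([h_in_mm, 0.0])  # [height (mm); angle (rad)]
    for element in elements:
        ray = element @ ray
    return ray[0]

# Each candidate path modeled as a 4f relay (hypothetical focal lengths):
# the ratio of focal lengths sets how the beam is rescaled at the output.
sted_like = [free_space(50), thin_lens(50), free_space(250), thin_lens(200), free_space(200)]
epi_like = [free_space(200), thin_lens(200), free_space(400), thin_lens(200), free_space(200)]

print(beam_height(sted_like))  # -4.0: this hypothetical relay rescales the beam 4x
print(beam_height(epi_like))   # -1.0: a 1:1 relay preserves the beam size
```

The same tracing function, fed with the measured distances and focal lengths of each real path, is what allows the two candidate paths to be compared quantitatively.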

2.5 Design of the multimodal optical system

This section will detail the steps taken to integrate the ODT imaging system onto the existing STED microscope through the selected epifluorescence optical path. First, the adaptations made to the existing microscope to accommodate ODT illumination will be detailed. Then, the optomechanical components designed and built to support the optics selected through the process detailed in section 2.4 will be described. Finally, the development of electronic tools for the control of the optomechanical parts of the ODT system, as well as for the control and optimization of the functionality of the multimodal imaging system, will be presented.

2.5.1 Illumination

The HeNe laser was installed on the optical table behind the existing microscope body. Its intensity was adjusted using a variable attenuator made with a half-wave plate and a polarizing beamsplitter. A 4f beam expansion system with a spatial filter was installed using the two lenses selected in section 2.4.3, with respective focal lengths of 50 mm and 400 mm.
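The two calculations behind this stage can be checked with a few lines. The 4f expander's magnification is simply the ratio of the two focal lengths, and the attenuator follows Malus's law for a half-wave plate rotated by an angle theta ahead of a polarizing beamsplitter; the rotation angles below are purely illustrative.

```python
import numpy as np

# 4f beam expander: magnification is the focal-length ratio
f1_mm, f2_mm = 50.0, 400.0
expansion = f2_mm / f1_mm
print(expansion)  # 8.0: the HeNe beam diameter is expanded eightfold

# Variable attenuator: a half-wave plate at angle theta rotates the
# polarization by 2*theta, so the PBS transmits cos^2(2*theta) of the power
for theta_deg in (0, 15, 30, 45):
    transmitted = np.cos(2 * np.radians(theta_deg)) ** 2
    print(f"{theta_deg} deg -> {transmitted:.2f}")  # 1.00, 0.75, 0.25, 0.00
```

Rotating the half-wave plate thus gives continuous power control from full transmission down to extinction, without moving any beam-steering element.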


The ODT illumination was brought in through the brightfield illumination path of the microscope, which was situated above the sample holder. An optical breadboard was installed above the microscope body to integrate the optical path of the ODT imaging modality into the STED microscope. A beamsplitter was installed on this breadboard to separate the HeNe beam into the sample and reference arms. The sample arm's scanning system was placed after this beamsplitter, in a plane conjugate to L1, which

focused the sample arm beam at the back aperture of the illumination objective at different lateral positions to create illumination angles at the sample. The ODT illumination objective was suspended above the STED collection objective using a motorized positioning system, as described in section 2.4.3. For lateral alignment, an optomechanical mount with x-y (lateral) adjustment screws was initially chosen. During the characterization phase of the system (described in section 3.3), it was noticed that additional angular degrees of freedom were necessary to properly control the illumination of the sample, and the x-y mount was replaced by a mount with five axes of positioning control (x, y, z, tip, and tilt).
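The relation between the lateral focus position in the objective's back focal plane and the resulting illumination angle at the sample can be estimated with a one-line formula. The magnification and tube-lens reference focal length used below are assumed values for illustration, not this objective's actual specifications.

```python
import numpy as np

# Effective focal length of the illumination objective (assumed values:
# 60x magnification with a 200 mm tube-lens reference focal length)
f_tube_mm = 200.0
magnification = 60.0
f_obj_mm = f_tube_mm / magnification  # ~3.33 mm

def illumination_angle_deg(offset_mm):
    """Illumination angle at the sample for a focus displaced off-axis by offset_mm
    in the back focal plane (sin(theta) = offset / f_obj)."""
    return float(np.degrees(np.arcsin(offset_mm / f_obj_mm)))

print(round(illumination_angle_deg(1.0), 1))  # ~17.5 degrees
```

Scanning the galvanometer therefore translates directly into a controlled sweep of illumination angles, which is the measurement underlying tomographic reconstruction.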

Figure 2.8 is a side view of a simplified 3D model of the ODT system’s integration onto the existing microscope body. It shows the optical breadboard supported by optical posts over the microscope body as well as the motorized positioning system and the tube assembly that supports the illumination objective over the sample.

Figure 2.8 – Integration of ODT transmitted illumination onto the existing microscopy system.

In order to integrate the ODT beam through the epifluorescence path of the microscope body, a way to separate light from the two modalities and to create an exit point from the microscope body for the ODT beam had to be found. An optomechanical component that could be attached to the epifluorescence camera port had to be designed to allow the collection of both the epifluorescence and ODT light through this same port. An appropriate dichroic mirror was mounted in a cube-shaped filter mount equipped with adapters to connect the cube to the microscope body and to the epifluorescence camera. A relay system was designed to relocate the epifluorescence camera to this new position without distorting the images. A periscope system was constructed to send the ODT sample beam back down to the optical table to be recombined with the reference beam.

The dichroic mirror used to separate the ODT and epifluorescence microscopy modalities in this added cube was selected based on the wavelengths required by each modality. It was verified that all commonly used fluorescence wavelengths would be transmitted through the dichroic and thus sent towards the epifluorescence camera. A dichroic filter which reflects 405 nm, 488 nm and 594 nm into the ODT beam path was therefore chosen. Note that 594 nm, the wavelength of the HeNe laser, is the longest of the three wavelengths allowed for ODT by this filter; this leaves room for the eventual resolution improvement described in section 2.4.2. Figure 2.10 shows the beam paths of the epifluorescence and ODT modalities as they propagate through this new modality-separating cube.
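A back-of-envelope check shows why reserving the shorter reflected wavelengths matters. The lateral resolution of coherent tomographic imaging scales roughly as lambda / (NA_ill + NA_det); the numerical aperture values below are assumptions for illustration, not the system's measured apertures.

```python
# Approximate lateral resolution limit for each wavelength the dichroic
# reflects into the ODT path (NA values are illustrative assumptions)
na_ill, na_det = 0.8, 1.3

for wavelength_nm in (594, 488, 405):
    r_nm = wavelength_nm / (na_ill + na_det)
    print(f"{wavelength_nm} nm -> ~{r_nm:.0f} nm")
```

Under these assumptions, switching the ODT source from 594 nm to 405 nm would tighten the resolution limit by roughly a third, which is the improvement margin the filter choice preserves.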

Figure 2.9 – Transmission spectra of the filters present in the epifluorescence microscopy optical path. Light generated by an LED box is filtered by an excitation filter (shown in blue) and sent to excite fluorophores in the sample. The excitation wavelengths are filtered out by the dichroic and emission filters (shown in green and red respectively) before sending the fluorescence signal to be measured by the camera.

Periscope systems were planned to bring the reference and sample arms down to the level of the optical table, where the recombination beamsplitter and camera would be installed. During alignment and installation, it was found that light exiting the epifluorescence path had a focal point at an unexpected position. The ray transfer matrices were therefore reused to identify the focal length of a lens that would collimate the sample beam before its capture by the camera.
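The collimation fix can be verified with the same ABCD formalism: placing a lens of focal length f at distance f from the unexpected focal point makes every ray leaving that point exit parallel to the axis. The focal length below is a hypothetical value, not the one actually installed.

```python
import numpy as np

def free_space(d_mm):
    return np.array([[1.0, d_mm], [0.0, 1.0]])

def thin_lens(f_mm):
    return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

# Lens of focal length f placed one focal length after the focal point
f_mm = 150.0  # hypothetical collimating focal length
system = thin_lens(f_mm) @ free_space(f_mm)

for angle_rad in (0.01, 0.02, 0.05):
    out = system @ np.array([0.0, angle_rad])  # ray diverging from the focus
    print(round(out[1], 12))  # output angle is 0 for every ray: collimated
```

Algebraically, a ray (0, theta) propagates to (f*theta, theta) and the lens subtracts f*theta / f = theta from its angle, leaving it exactly parallel regardless of theta.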



Figure 2.10 – Plan of multimodal integration through the epifluorescence path. a) How the dichroic mirror placed after the existing mirror separates the two optical paths based on wavelength; epifluorescence is shown in red and ODT in orange. After separation by the dichroic mirror (shown in green), the ODT beam is brought back to table level by an angled mirror, while the epifluorescence signal propagates through the relay lens system, which preserves the integrity of the image sent to the camera (shown in dark blue). b) The location of this separation at the back of the microscope body. Both panels are shown from an overhead viewing angle.

With the optical route of the optical diffraction tomography setup determined, the next step was to design the electronic components to be used to control the acquisition of multimodal images by the imaging system developed throughout this project.

2.5.2 Electronics

To create a functioning multimodal imaging system, a series of components needs to be controlled for each imaging modality, and the different modalities need to be synchronized with one another.

The ODT system requires the control of several components, including a beam shutter, a galvanometer mirror and a camera, that need to be synchronized: an image must be acquired for each angle scanned by the galvanometer. The best way to ensure this coordination is to use a single source to send the trigger signals to each component of the ODT system. For this reason, a data acquisition (DAQ) card with appropriate specifications to control the ODT components was selected.
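The single-source synchronization scheme can be sketched as a pair of waveforms sampled on a shared clock: one galvo voltage step and one camera trigger pulse per illumination angle. The angle count, voltage range and dwell length below are illustrative values, not the system's actual acquisition settings.

```python
import numpy as np

# One galvo voltage level per illumination angle, held for a fixed dwell
n_angles = 8
samples_per_step = 100

galvo_v = np.repeat(np.linspace(-1.0, 1.0, n_angles), samples_per_step)

# One camera trigger rising edge at the start of each dwell period
camera_trigger = np.zeros(n_angles * samples_per_step, dtype=np.uint8)
camera_trigger[::samples_per_step] = 1

print(len(galvo_v), int(camera_trigger.sum()))  # 800 8
```

In practice these arrays would be written to the DAQ card's analog and digital outputs on a common sample clock (e.g. through a vendor library such as nidaqmx), which guarantees that every camera exposure corresponds to exactly one galvanometer angle.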

The selected DAQ card had to produce and receive the signals necessary for the control of all the components, as well as respect the timing requirements described earlier. More precisely, it had to be able to produce the differential analog signals that drive the angle scanned by the galvanometer mirror's motor as well as the digital signals for the camera

