Magnetoencephalography and electroencephalography (M/EEG) are non-invasive modalities that measure the weak electromagnetic fields generated by neural activity. Estimating the location and magnitude of the current sources that generated these electromagnetic fields is a challenging ill-posed regression problem known as source imaging. When considering a group study, a common approach consists in carrying out the regression tasks independently for each subject. An alternative is to jointly localize sources for all subjects taken together, while enforcing some similarity between them. By pooling all measurements in a single multi-task regression, one makes the problem better posed, offering the ability to identify more sources with greater precision. The Minimum Wasserstein Estimates (MWE) approach promotes focal activations that do not perfectly overlap across subjects, thanks to a regularizer based on Optimal Transport (OT) metrics. MWE promotes spatial proximity on the cortical mantle while coping with the varying noise levels across subjects. On realistic simulations, MWE decreases the localization error by up to 4 mm per source compared to individual solutions. Experiments on the Cam-CAN dataset show a considerable improvement in spatial specificity in population imaging. Our analysis of a multimodal dataset shows how multi-subject source localization closes the gap between MEG and fMRI for brain mapping.

Data-driven covariance projection for age prediction. Three types of approaches are compared here: Riemannian methods (Wasserstein or geometric), methods extracting the log-diagonal of matrices (with or without supervised spatial filtering, see Sec. 3.2), and a biophysics-informed method based on the MNE source imaging technique [24]. The MNE method essentially consists in a standard Tikhonov-regularized inverse solution and is therefore linear (see Appendix 6.5 for details). Here it serves as a gold standard informed by the individual anatomy of each subject. It requires a T1-weighted MRI and a precise measurement of the head position in the MEG device coordinate system [3], and the coordinate alignment is hard to automate. We configured MNE with Q = 8196 candidate dipoles. To obtain spatial smoothing and reduce dimensionality, we averaged the MNE solution over a cortical parcellation encompassing 448 regions of interest from [31, 21]. We then used ridge regression and tuned its regularization parameter by generalized cross-validation [20] on a logarithmic grid.
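Selecting the ridge penalty by generalized cross-validation on a logarithmic grid can be sketched in a few lines of NumPy. This is a generic illustration, not the code used here; the function name `ridge_gcv` and the toy data are made up for the example.

```python
import numpy as np

def ridge_gcv(X, y, alphas):
    """Select the ridge penalty by generalized cross-validation (GCV):
    GCV(alpha) = n * ||y - y_hat||^2 / (n - tr(S_alpha))^2, where
    S_alpha = X (X^T X + alpha I)^{-1} X^T is the ridge hat matrix.
    One SVD of X is reused for every alpha on the grid."""
    n = X.shape[0]
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    best = None
    for alpha in alphas:
        d = s**2 / (s**2 + alpha)       # eigenvalues of the hat matrix
        resid = y - U @ (d * Uty)       # (I - S_alpha) y
        df = d.sum()                    # effective degrees of freedom tr(S)
        gcv = n * np.sum(resid**2) / (n - df) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, alpha)
    return best[1]

# usage on synthetic data, with a log-spaced grid as in the text
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w = np.zeros(20)
w[:3] = [1.0, -2.0, 0.5]
y = X @ w + 0.1 * rng.standard_normal(100)
alpha_best = ridge_gcv(X, y, np.logspace(-3, 3, 13))
```

Because one SVD of X is shared across the whole grid, refining the grid adds almost no cost.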

IV. DISCUSSION AND CONCLUSION
In this paper, we proposed a data-driven procedure to detect perceptual thresholds using MEG data. We proposed an innovative approach to measure decoder performance when working with ordered targets, and demonstrated how the prediction errors can offer interesting insights on the data. Rather than using a multi-class classifier that is blind to target order and has few training samples per class, we used ridge regression with a pairwise ranking scorer. Altogether, our results suggest that decoding brain activity in a visual task may enable reliable estimation of changes in participants' perceptual thresholds. Additionally, decoding results in source space bring out reliable discriminative power across regions known to be implicated in the task. Future work will take into consideration additional dynamic aspects of the MEG signals, and test the
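A pairwise ranking scorer of the kind mentioned above can be sketched as follows: over all pairs of targets with a defined order, count the fraction whose order the decoder's predictions preserve (1.0 is a perfect ordering, 0.5 is chance). The function name and tie handling are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

def pairwise_ranking_score(y_true, y_pred):
    """Fraction of ordered pairs whose ranking the predictions preserve.
    For every pair with y_true[i] < y_true[j], count a hit when
    y_pred[i] < y_pred[j]; ties in the predictions count as half a hit."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    hits = total = 0.0
    for i in range(len(y_true)):
        for j in range(len(y_true)):
            if y_true[i] < y_true[j]:
                total += 1.0
                if y_pred[i] < y_pred[j]:
                    hits += 1.0
                elif y_pred[i] == y_pred[j]:
                    hits += 0.5
    return hits / total
```

Unlike multi-class accuracy, this score rewards predictions that are merely close in rank, which suits ordered targets with few samples per level.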

efficient matrix, and E is the measurement noise, which can be assumed to be additive white Gaussian noise, E[:, j] ∼ N(0, I) for all j after spatial whitening [12]. Estimating Z given M is an ill-posed inverse problem, and constraints have to be imposed on Z to obtain a unique source estimate. For analyzing evoked responses, we assume that the neuronal activation is spatially sparse and temporally smooth. This corresponds to a TF coefficient matrix with a block row structure with intra-row sparsity [5], which we promote by applying a composite non-convex regularization functional R(Z). The associated regularized regression problem is given in Eq. (2).
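As an illustration of how a penalty can promote block rows with intra-row sparsity, here is the proximal operator of the convex composite penalty λ1·‖Z‖1 + λ21·Σi ‖Z[i, :]‖2, a convex relative of the non-convex functional R(Z) used here: entrywise soft-thresholding followed by row-wise group shrinkage. This is a hedged sketch, not the paper's solver.

```python
import numpy as np

def prox_l21_l1(Z, lam_l1, lam_l21):
    """Proximal operator of lam_l1 * ||Z||_1 + lam_l21 * sum_i ||Z[i,:]||_2.
    Entrywise soft-thresholding (intra-row sparsity) followed by
    row-wise group shrinkage (row sparsity)."""
    # l1 part: entrywise soft-threshold
    S = np.sign(Z) * np.maximum(np.abs(Z) - lam_l1, 0.0)
    # l21 part: shrink each row by its l2 norm, zeroing weak rows
    norms = np.linalg.norm(S, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam_l21 / np.maximum(norms, 1e-12), 0.0)
    return S * scale

# usage: a strong row survives (shrunk), a weak row is zeroed entirely
Z = np.array([[3.0, 0.0],
              [0.1, 0.1]])
P = prox_l21_l1(Z, lam_l1=0.5, lam_l21=1.0)
```

Within a proximal-gradient loop, applying this operator at each step drives whole rows of Z to zero while also sparsifying the surviving rows.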

keywords— Inverse problem; MEEG; iterative reweighted optimization algorithm; multi-scale dictionary; Gabor transform.
I. INTRODUCTION
Magneto-/electroencephalography (M/EEG) allow for non-invasive functional brain imaging with high temporal and good spatial resolution. Various approaches to tackle the source localization problem from M/EEG data have been proposed in the literature. The distributed-source approach models the brain activity with a fixed number of candidate dipoles distributed over the brain, and estimates their amplitudes and orientations. As the number of candidate dipoles that can explain the measured data is much larger than the number of sensors, source localization is an ill-posed problem: there is no unique solution. The literature shows that adding supplementary constraints, such as sparse regularization or priors on the neural activation, helps to tackle the problem. Those approaches are based on Bayesian modeling [1]–[4] or regularized regression [5]–[7]. These methods implicitly assume stationarity of the source activation. In contrast, the Time-Frequency Mixed Norm Estimate (TF-MxNE) [8], Spatio-Temporal Unifying Tomography (STOUT) [9], and the iterative reweighted TF-MxNE (irTF-MxNE) [10] improve reconstruction of transient and non-stationary sources

potential neural source to be either active for all subjects or for none of them.
Contribution. The assumption of identical functional activity across subjects is clearly not realistic. Here we investigate several multi-task regression models that relax this assumption. One of them is the multi-task Wasserstein (MTW) model [15]. MTW is defined through an Unbalanced Optimal Transport (UOT) metric that promotes support proximity across regression coefficients. However, applying MTW to group-level data assumes that the signal-to-noise ratio is the same for all subjects. We propose to build upon MTW and alleviate this problem by inferring estimates of both the sources and the noise variance for each subject. To do so, we follow ideas similar to those that led to the concomitant Lasso [27,31,25] and the multi-task Lasso [24].

IV. DISCUSSION AND CONCLUSION
In this work, we presented irMxNE, an MEG/EEG inverse solver based on regularized regression with a non-convex block-separable penalty. The non-convex optimization problem is solved by iteratively solving a sequence of weighted MxNE problems, which allows for fast algorithms and global convergence control at each iteration. We proposed a new algorithm for solving the MxNE surrogate problems combining BCD and a forward active set strategy, which significantly decreases the computation time compared to the original MxNE algorithm [9]. This new algorithm makes the proposed iterative reweighted optimization scheme applicable for practical MEG/EEG applications. The approach is also applicable to other block-separable non-convex penalties, such as the logarithmic penalty proposed in [22], by adapting the definition of the weights in Eq. (5). The irMxNE method is designed for offline source reconstruction, which is still the main application of MEG/EEG source imaging in research and clinical routine. However, we are aware of a growing interest in real-time brain monitoring [64]. New techniques such as parallel BCD schemes [51], clustering approaches [65], and safe rules [66] can help to further reduce the computation time. As proposed in [22], [26], the first iteration of irMxNE is equivalent to computing the standard MxNE. Consequently, the irMxNE result is at least as sparse as the MxNE estimate.
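The reweighting principle behind irMxNE can be illustrated on a plain (non-block) sparse regression: repeatedly solve an ℓ1-penalized problem, then update each weight from the current coefficient magnitude, approximating an ℓ0.5-type penalty. This is a simplified scalar analogue with hypothetical names and an ISTA inner solver, not the BCD/active-set scheme of the paper.

```python
import numpy as np

def reweighted_lasso(X, y, lam, n_reweights=5, n_ista=300, eps=1e-3):
    """Iterative reweighted l1 sketch: each outer iteration solves a
    weighted lasso min_z 0.5*||y - Xz||^2 + lam * sum_j w_j |z_j| by
    ISTA, then sets w_j = 1 / (2 * sqrt(|z_j| + eps)), which
    approximates an l0.5 penalty (small coefficients get ever larger
    weights and are driven to exact zero)."""
    p = X.shape[1]
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    w = np.ones(p)
    z = np.zeros(p)
    for _ in range(n_reweights):
        for _ in range(n_ista):        # ISTA on the current weighted problem
            grad = X.T @ (X @ z - y)
            u = z - grad / L
            z = np.sign(u) * np.maximum(np.abs(u) - lam * w / L, 0.0)
        w = 1.0 / (2.0 * np.sqrt(np.abs(z) + eps))
    return z

# usage: two active coefficients out of twenty are recovered exactly
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
z_true = np.zeros(20)
z_true[0], z_true[5] = 2.0, -3.0
y = X @ z_true + 0.05 * rng.standard_normal(100)
z = reweighted_lasso(X, y, lam=1.0)
```

As in irMxNE, the first outer iteration (uniform weights) is an ordinary ℓ1 solve, so the reweighted result can only be sparser than it.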

In this paper, we propose a sparse MEG/EEG source imaging approach based on regularized regression with an ℓ2,0.5-quasinorm penalty. We solve the non-convex optimization problem by iterative reweighted MxNE. Each MxNE iteration is solved efficiently by combining a block coordinate descent scheme and an active set strategy. The resulting algorithm is applicable to MEG/EEG inverse problems with and without orientation constraint, running in a few seconds on real MEG/EEG problems. We provide empirical evidence using simulations and analysis of MEG data that the proposed method outperforms MxNE in terms of active source identification and amplitude bias.

In high dimension, it is customary to consider Lasso-type estimators to enforce sparsity. For standard Lasso theory to hold, the regularization parameter should be proportional to the noise level, which is often unknown in practice. A remedy is to consider estimators such as the Concomitant Lasso, which jointly optimize over the regression coefficients and the noise level. However, when data from different sources are pooled to increase sample size, noise levels differ and new dedicated estimators are needed. We provide new statistical and computational solutions to perform heteroscedastic regression, with an emphasis on brain imaging with magneto- and electroencephalography (M/EEG). When instantiated to de-correlated noise, our framework leads to an efficient algorithm whose computational cost is not higher than for the Lasso, but addresses more complex noise structures. Experiments demonstrate improved prediction and support identification with correct estimation of noise levels.
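The joint optimization over coefficients and noise level can be sketched by alternating minimization of a smoothed Concomitant Lasso objective, min_{b, σ ≥ σ_min} ||y − Xb||² / (2nσ) + σ/2 + λ||b||₁: with b fixed, the optimal σ is ||y − Xb|| / √n in closed form; with σ fixed, b solves a lasso whose effective penalty is λσ, so the regularization adapts to the unknown noise level. A minimal illustration with assumed names, not the estimator introduced in this work.

```python
import numpy as np

def concomitant_lasso(X, y, lam, n_outer=10, n_ista=200, sigma_min=1e-3):
    """Alternating sketch of the smoothed Concomitant Lasso.
    b-step: ISTA on the lasso (1/(2n))||y - Xb||^2 + lam*sigma*||b||_1.
    sigma-step: closed form sigma = max(||y - Xb|| / sqrt(n), sigma_min)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of (1/n) X^T(Xb - y)
    b = np.zeros(p)
    sigma = max(np.linalg.norm(y) / np.sqrt(n), sigma_min)
    for _ in range(n_outer):
        for _ in range(n_ista):
            grad = X.T @ (X @ b - y) / n
            u = b - grad / L
            b = np.sign(u) * np.maximum(np.abs(u) - lam * sigma / L, 0.0)
        sigma = max(np.linalg.norm(y - X @ b) / np.sqrt(n), sigma_min)
    return b, sigma

# usage: the noise level (0.5 here) is estimated jointly with the support
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
b_true = np.zeros(20)
b_true[0], b_true[3] = 3.0, -2.0
y = X @ b_true + 0.5 * rng.standard_normal(200)
b, sigma = concomitant_lasso(X, y, lam=0.2)
```

The σ_min floor is the "smoothing" that keeps the objective well behaved when the residual vanishes.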

3 Experimental Data and Implementations
Simultaneous EEG and fMRI were recorded from 8 healthy right-handed subjects performing a motor task of repeatedly clenching the right hand. A study session consists of 10 blocks of 20 s, of which 5 are task blocks, each followed by a rest block. EEG is obtained via 64 electrodes placed according to the standard 10-20 system, with fs = 200 Hz. The ECG (electrocardiogram) and the gradient artifacts caused by the MR system are eliminated separately. The sources are estimated for the first 15 seconds of each task block, since towards the end of a block the subjects cannot be guaranteed to perform the task well because of fatigue. During the recording, the subject lay in a 3T MRI system for the acquisition of structural and functional MRI, with one brain volume recorded every 2 seconds. The fMRI data are then spatially corrected for head motion.

To summarize the findings of the simulation study, we can say that sLORETA, Champagne, MCE, and MxNE recover the source positions well, though not their spatial extent, as they are conceived for focal sources, while ExSo-MUSIC, STWV-DA, and VB-SCCD also permit an accurate estimate of the source size. We noticed that most of the methods, except for ExSo-MUSIC and STWV-DA, require pre-whitening of the data or a good estimate of the noise covariance matrix (in the case of Champagne) in order to yield accurate results. On the one hand, this can be explained by the hypothesis of spatially white Gaussian noise made by some approaches; on the other hand, the pre-whitening also leads to a decorrelation of the lead-field vectors and therefore to a better conditioning of the lead-field matrix, which consequently facilitates the correct identification of active grid dipoles. Furthermore, the source imaging algorithms generally have some difficulties in identifying mesial sources, located close to the midline, as well as multiple quasi-simultaneously active sources. On the whole, for the situations addressed in our simulation study, STWV-DA seems to be the most promising algorithm for distributed source localization, both in terms of robustness and source reconstruction quality. However, more detailed studies are required to confirm the observed performances of the tested algorithms before drawing further conclusions.

Abstract Sparse modeling can be used to characterize outlier-type noise. Thanks to sparse recovery theory, it was shown that 1-norm super-resolution is robust to outliers if enough images are captured. Moreover, sparse modeling of signals is a way to overcome the ill-posedness of under-determined problems. This naturally leads to the question: will an added sparsity assumption on the signal improve the robustness to outliers of 1-norm super-resolution, and if yes, how strong should this assumption be? In this article, we review and extend results from the literature on the robustness to outliers of overdetermined signal recovery problems under sparse regularization, with a convex variational formulation. We then apply them to general random matrices, and show how the regularization parameter acts on the robustness to outliers. Finally, we show that in the case of multi-image processing, the structure of the support of signal and noise must be studied precisely. We show that the sparsity assumption improves robustness if outliers do not overlap with signal jumps, and determine how the regularization parameter can be chosen.

these variants, for each variant and each ∆t value we use the hyper-optimization procedure discussed previously to find the best set of parameters.
MOTA values and IDS are indicated in Fig. 4. First of all, they show that the proposed LINF1 variant outperforms the other variants both in terms of MOTA and IDS. The L1 variant performs poorly in our multi-frame data association context, especially concerning IDS. When using these representations, each detection is represented by only a few similar detections. This promotes short tracks of highly similar detections rather than long tracks through the whole sliding window. The two other appearance models, App_NN and App_MEAN, yield more

The multi-task, multi-resolution method presented in this paper can be used to effectively solve the common problem of aligning existing maps over a new satellite image while also detecting new buildings in the form of a segmentation map. The use of multi-task learning by adding the extra segmentation task not only helps the network to train better, but also detects new buildings when coupled with a data augmentation technique of randomly dropping input polygons during training. Adding intermediate losses at different resolution levels inside the network also helps the training by providing a better gradient flow. It improves the performance on both alignment and segmentation tasks and could be used in other deep learning methods that have an image-like output which can be interpreted at different scales. Interestingly, multi-task learning also helps the segmentation task, as adding the displacement loss during training increases IoU. We also tried our method on the task of building height estimation, generating simple but clean 3D models of buildings. We hope that our method will be a step towards automatically updating maps and also estimating 3D models of buildings. In the future we plan to work on the segmentation branch by using a better-suited loss for each output channel and a coherence loss between channels, to further improve map updating. We also plan to improve displacement learning by adding a regularization loss enforcing piece-wise smooth outputs, such as the BV (Bounded Variation) norm or the Mumford-Shah functional.

Preliminary results on real data show the advantages of our method. The VBK algorithm gives access to highly interpretable loading maps, which are a powerful tool for understanding brain activity. Moreover, the free energy seems to be an accurate built-in criterion for model selection. A future direction of our work is to optimize the spatial model used in our framework (here we simply use a prior parcellation of the search volume) in relationship with the prediction function that we use. In parallel, we will develop non-linear versions (e.g. logistic/probit) of this model for classification.

This paper is based on Yuan Yang's Ph.D. work at Télécom ParisTech. Yuan Yang finished his Ph.D. at Télécom ParisTech and has been with the Department of Biomechanical Engineering, Delft University of Technology, Delft, the Netherlands, since November 2013.

The experiments performed using the single-task learning method were chosen in such a way as to test its generalization performance on different periods of time and, for each period of time, to test the effect of the size of the training set on the generalization performance. The training sets had sizes equal to 3, 6, 12 and 24 months. All validation sets were formed of the 3 months following the training set, and the test sets (on which generalization performance was evaluated) contained the 12 months following the validation set. The test sets were chosen to include a long period of time in order to compare the stability of the single-task method with that of the multi-task learning technique. Surprisingly, both methods had a relatively stable and consistent behavior over the 12-month period included in each test set. Five test years were chosen to evaluate the single-task method: 1989 to 1993. For each test year, single neural networks were trained using the data corresponding to October, November and December of the previous year as a validation set and the 3, 6, 12 or 24 months preceding October as a training set (for the test period corresponding to 1989, we did not perform experiments using 24-month training sets because the data we were provided starts in 1987).
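The rolling train/validation/test protocol described above can be sketched generically as follows. The function name and the choice to advance by one test period per split are assumptions for illustration, not the exact setup of the study.

```python
def rolling_splits(months, train_len, val_len=3, test_len=12):
    """Generate (train, validation, test) blocks over a chronological
    sequence of month labels: val_len months follow the training window,
    and test_len months follow the validation window. Each new split
    advances by one test period (an assumption for this sketch)."""
    splits = []
    start = 0
    while start + train_len + val_len + test_len <= len(months):
        a = start + train_len
        b = a + val_len
        c = b + test_len
        splits.append((months[start:a], months[a:b], months[b:c]))
        start += test_len
    return splits

# usage: 60 months, 12-month training windows
months = list(range(60))
splits = rolling_splits(months, train_len=12)
```

Keeping the validation months strictly between training and test preserves the temporal ordering, so no future data leaks into model selection.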

A multi-robot cooperative task achievement system. Silvia Silva da Costa Botelho, Rachid Alami.

To support collaborative work using DMU, CHI must support multiple representations of virtual content and multi-user interaction. Virtual reality is a supporting technology that makes collaborative work with DMU more intuitive. For example, in traditional CAD work, there are usually problems in conceptual design in architecture: the shape and resolution of a typical workstation screen will affect designers' judgement about geometric structure. However, by using virtual reality technology, designers no longer have a "screen"; they are placed in an environment which may be more suitable for viewing and creating the geometric form. As such a supporting technology, our multi-view CHI technology can help users work with each other in a 3D virtual environment, co-located and concurrently.
