


Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

In: Proceedings of the 1st Masters Symposium on Advances in Data Mining, Machine Learning, and Computer Vision (MS-AMLV 2019), Lviv, Ukraine, November 15-16, 2019, p. 1

Fitting Machine Translation into Clients (Keynote talk)

Kenneth Heafield

Institute for Language, Cognition and Computation, University of Edinburgh, IF 4.21, 10 Crichton Street, Edinburgh, EH8 9AB, Scotland, European Union

kheafiel@inf.ed.ac.uk

Abstract. The Bergamot project is making neural machine translation efficient enough to run with high quality on a desktop, preserving privacy compared to online services. Doing so requires us to compress the model to fit in reasonable memory and run fast on a wide range of CPUs.

Keywords: neural machine translation, efficiency, privacy preservation, model compression.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825303.
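The model compression mentioned in the abstract commonly means reducing the precision of the network's weights. As a minimal sketch of one such technique, 8-bit weight quantization, the example below converts a float32 matrix to int8 with a per-tensor scale, cutting memory roughly 4x. This is an illustrative assumption about the general approach, not a description of the project's exact pipeline; the function names are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Quantize a float32 weight matrix to int8 with a per-tensor scale.

    The scale maps the largest absolute weight to 127, so every value
    fits in a signed byte; memory drops roughly 4x versus float32.
    """
    scale = np.abs(weights).max() / 127.0
    quantized = np.round(weights / scale).astype(np.int8)
    return quantized, scale

def dequantize(quantized, scale):
    """Recover an approximate float32 matrix from the int8 representation."""
    return quantized.astype(np.float32) * scale

# Example: quantize a small random weight matrix and check the error bound.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding error per entry is at most half a quantization step.
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

In practice, fast CPU inference also requires integer matrix-multiply kernels that consume the int8 weights directly rather than dequantizing them, which is what makes the memory and speed savings real at run time.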
