
Abstract
Recently, adapting or transferring knowledge across different datasets or domains has gained increasing interest. Related learning paradigms include domain adaptation, transfer learning, meta-learning, and few/zero-shot learning. Among the many strategies proposed to adapt one domain to another, finding a common representation has shown excellent properties: with a shared representation for both domains, a single classifier can be effective in both, using labelled samples from the source domain to predict the unlabelled samples of the target domain. In this project, we focus on a large-scale optimal transport model to align the representations of the source and target domains. First, we learn an optimal transport (OT) plan. To that end, we use a stochastic dual approach to regularized OT, which enables OT to scale to large datasets. Second, we estimate a \textit{Monge map}, parameterized as a deep neural network, by approximating the barycentric projection of the previously obtained OT plan. This parameterization allows the mapping to generalize outside the support of the input measure. The source code for this project is available at \url{https://github.com/yihhhh/OT-for-Domain-Adaptation}.
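
The following is a minimal, self-contained sketch of the two-step pipeline summarized above, not the project's actual code: it assumes entropic regularization, a squared-Euclidean ground cost, and small synthetic Gaussian point clouds in place of real domain features; all names (\texttt{mlp}, \texttt{epsilon}, the network sizes and iteration counts) are illustrative choices.

\begin{verbatim}
# Minimal sketch (illustrative, not the repository's code), in PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
epsilon = 0.5                       # entropic regularization strength (assumed)
dim, n = 2, 512

# Synthetic stand-ins for source/target domain features.
x_src = torch.randn(n, dim)
x_tgt = torch.randn(n, dim) + torch.tensor([3.0, 0.0])

def cost(x, y):                     # squared-Euclidean ground cost c(x, y)
    return torch.cdist(x, y) ** 2

def mlp(d_in, d_out):               # small MLP for the potentials and the map
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

u, v = mlp(dim, 1), mlp(dim, 1)     # dual potentials u(.) and v(.)
opt = torch.optim.Adam(list(u.parameters()) + list(v.parameters()), lr=1e-3)

# Step 1: stochastic ascent on the regularized OT dual
#   E[u(x)] + E[v(y)] - eps * E[exp((u(x) + v(y) - c(x, y)) / eps)]
for _ in range(2000):
    xb = x_src[torch.randint(n, (128,))]
    yb = x_tgt[torch.randint(n, (128,))]
    duv = u(xb) + v(yb).T - cost(xb, yb)        # u(x_i) + v(y_j) - c_ij
    dual = u(xb).mean() + v(yb).mean() \
        - epsilon * torch.exp(duv / epsilon).mean()
    opt.zero_grad(); (-dual).backward(); opt.step()

# Step 2: fit a neural Monge map f by regressing onto the barycentric
# projection of the learned plan, i.e. minimize E_pi[ ||f(x) - y||^2 ].
f = mlp(dim, dim)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(2000):
    xb = x_src[torch.randint(n, (128,))]
    yb = x_tgt[torch.randint(n, (128,))]
    with torch.no_grad():           # (unnormalized) plan density on the batch
        H = torch.exp((u(xb) + v(yb).T - cost(xb, yb)) / epsilon)
    sq = ((f(xb)[:, None, :] - yb[None, :, :]) ** 2).sum(-1)
    loss = (H * sq).mean()
    opt_f.zero_grad(); loss.backward(); opt_f.step()

# f(x_src) now maps source samples (approximately) onto the target domain,
# and, being a neural network, extends beyond the training samples.
\end{verbatim}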