Why not? It's true that the word "space" has different meanings when it appears in a math book, a CS book, or an astronomy book. But then we just have 3 different word2vec models. When I read something about math, I pick the math word2vec model, where "space" appears close to the words "Hilbert" and "separable", while in the CS model the same word is next to "complexity" and "memory". As I read more, I improve each word2vec model, but never mix them together. Now what happens if I'm reading something and don't understand the context? No, I don't switch to some general word2vec model. Instead, I try to guess which model to use and then reread the same text using that model.
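A minimal sketch of this idea, with two hypothetical hand-made "models" standing in for real trained word2vec embeddings (the vectors and the coverage-based guessing heuristic are illustrative assumptions, not real word2vec output):

```python
import math

# Toy stand-ins for two domain-specific word2vec models (assumed vectors).
math_model = {
    "space":     [1.0, 0.1, 0.0],
    "hilbert":   [0.9, 0.2, 0.0],
    "separable": [0.8, 0.3, 0.1],
}
cs_model = {
    "space":      [0.0, 0.1, 1.0],
    "complexity": [0.1, 0.2, 0.9],
    "memory":     [0.2, 0.1, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def neighbors(model, word):
    """Words in one model, ranked by cosine similarity to `word`."""
    target = model[word]
    others = [(w, cosine(target, v)) for w, v in model.items() if w != word]
    return [w for w, _ in sorted(others, key=lambda p: -p[1])]

def guess_model(models, context_words):
    """Guess the domain: pick the model whose vocabulary best covers the context."""
    return max(models, key=lambda name: sum(w in models[name] for w in context_words))

models = {"math": math_model, "cs": cs_model}
print(neighbors(math_model, "space"))
print(neighbors(cs_model, "space"))
print(guess_model(models, ["complexity", "memory", "space"]))
```

The same word "space" gets different neighbors depending on which model you query, and when the context is unknown, the guesser just picks whichever domain model fits the surrounding words best before rereading.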