For anyone not familiar with the concept, a one-form is a linear map that takes vectors as inputs and produces numbers as outputs. See https://en.wikipedia.org/wiki/One-form
In other words: if vectors are column vectors, then covectors are row vectors, and vice versa. In this case, taking the dual is transposing your vectors, and the transpose of a transpose gives you back the original vectors.
But mathematicians try to build the most general, most abstract framework, which doesn't rely on the specifics of a concrete example (i.e. the dot product of finite-dimensional real vectors).
Yes for finite-dimensional spaces. For infinite-dimensional spaces you can think of 1-forms as generalizations of a partially-applied dot product (e.g. the Dirac delta function).
You are asking if every covector f can be written as dot(v,—) for some v.
This is true for finite-dimensional spaces (take v = f(e1)e1 + ... + f(en)en, where e1,...,en is an orthonormal basis with respect to dot). Since v is unique, this gives an isomorphism between V and V*, and applying the same argument to V* and V**, those are isomorphic, so V and V** are also isomorphic. But the isomorphism depends on the choice of dot.
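In code, that construction looks roughly like this (a sketch, assuming V = R^n as [Double] with the standard basis, which is orthonormal for the usual dot; riesz is a made-up name):

riesz :: Int -> ([Double] -> Double) -> [Double]
riesz n f = [ f (basis i) | i <- [0 .. n - 1] ]     -- v = f(e1) e1 + ... + f(en) en
  where basis i = [ if j == i then 1 else 0 | j <- [0 .. n - 1] ]

Then dot (riesz n f) w == f w for any linear f.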
However, the map V -> V** does not depend on the choice of dot. The type is V -> ((V -> F) -> F), so this is just (reversed) function application. You can easily show it is injective. Furthermore, when V is finite-dimensional, we know from above that dim(V) = dim(V*) = dim(V**), so in this case it is also an iso.
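In code the canonical map is just (a minimal sketch, made-up name):

toDoubleDual :: v -> ((v -> f) -> f)
toDoubleDual x = \g -> g x      -- i.e. flip ($); note that no dot product appears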
(dot v) is a member of the dual space. To double it up into the double dual, and to do it with the same partial application construction, we need to introduce an inner product on the dual space.
ddot :: (V F -> F) -> (V F -> F) -> F
ddot (dot v) :: (V F -> F) -> F
To identify these things, we need a bijection and we need vector space properties to match up (inner product, addition, and scaling). Starting with an inner product:
dddot (ddot (dot v)) (ddot (dot w)) = dot v w
Then the bijection:
phi :: V F -> (V F -> F) -> F
phi_inv :: ((V F -> F) -> F) -> V F
phi v = ddot (dot v)
phi_inv (ddot (dot v)) = v
Then we'd also have to show that phi (v+w) = (phi v) +* (phi w) (where +* is addition in the double dual), and similarly that scaling is preserved, phi (a v) = a (phi v), etc. But it's starting to get less fun, so that's left as an exercise for the reader/next commenter. I think these fall out of my dddot definition.
A gotcha here is that my dual space isn't the entire type (V F -> F), just the subset that's in the image of dot (linear transformations). That's why we can do the pattern matching in my definition of phi_inv.
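For a runnable variant of the same construction, one can store a dual vector as the v that induces it (assuming V = R^n as [Double]; all names made up, and phiInv additionally takes the dimension n):

type Vec = [Double]                  -- V F
newtype Dual = Dual Vec              -- stands for (dot v), i.e. the image of dot

dot :: Vec -> Vec -> Double
dot v w = sum (zipWith (*) v w)

applyDual :: Dual -> Vec -> Double   -- actually evaluate the functional
applyDual (Dual v) = dot v

ddot :: Dual -> Dual -> Double       -- inner product on the dual space
ddot (Dual v) (Dual w) = dot v w

phi :: Vec -> (Dual -> Double)       -- V -> V**, with V* restricted to the image of dot
phi v (Dual w) = dot w v             -- evaluate (dot w) at v

phiInv :: Int -> (Dual -> Double) -> Vec
phiInv n g = [ g (Dual (basis i)) | i <- [0 .. n - 1] ]
  where basis i = [ if j == i then 1 else 0 | j <- [0 .. n - 1] ]

Then phiInv n (phi v) == v for any v of length n.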
That's the typey way to do it and it was fun to think through. What's easier is to just think of the currying as the transpose operation and multiplication. Then appeal to transpose transpose = identity. It's handwavier, but still has the same elements if you flesh it out, I think.
All that said, it's not the easiest mental model. My mental model is that vectors and one forms are for measuring against each other. You have a vector v and a form x. You apply x to measure how much v is in that direction. Like, your vector might be some coordinateless physical thing, and your one form is "meters per second in the north direction" and you can apply your form to the vector to get a good old fashioned number for its north component in meters per second.
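A toy version of that in code (made-up names):

type Velocity = (Double, Double)     -- (east, north) components in m/s
northForm :: Velocity -> Double      -- the one-form "m/s in the north direction"
northForm (_, north) = north

so northForm (3.0, 4.0) gives 4.0, i.e. 4 m/s northward.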
Dual spaces are an essential component of multilinear algebra, a subject that is simultaneously important yet often ignored in undergraduate education.
It is true that finite dimensional vector spaces are reflexive. But the broader claim that only finite dimensional vector spaces are reflexive is false. For example, infinite-dimensional Hilbert spaces are reflexive. See the last paragraph here:
OP is correct assuming "dual" is defined as all the linear functionals on a space. The dual of an infinite-dimensional space in this sense always has larger dimension than the original space.
The difference comes in that for Hilbert spaces, "dual" is usually taken to mean only the continuous linear functionals.
Yep! The simplest explanation for why this is true is that when V is infinite dimensional, V^* and V^{* *} need not have the same dimension, which precludes them being isomorphic.
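(Concretely: the Erdős–Kaplansky theorem says dim(V^*) = |F|^(dim V) whenever dim V is infinite, which is strictly larger than dim V, and applying it again makes dim(V^{* *}) strictly larger still.)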
Does this have anything to do with covariance and contravariance in the CS sense? I couldn't understand from the article what those terms mean in the context of vector spaces.
Yes, as the terms "co/contra-variance" loosely relate to how certain derived attributes "vary" when base attributes change:
- vectors: whether a quantity's components scale in the same direction as the basis vectors or the opposite one (i.e. whether they transform via the change-of-basis matrix or via its inverse)
- generics: whether a generic over a subclass of T is treated as a subtype (same direction) or a supertype (opposite direction) of the generic over T (a small Haskell sketch follows below).
(Maybe they all formally tie together through category theory? I don't know but would love to hear from someone more educated about it!)
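As a small Haskell illustration of the "direction" on the generics side (made-up types; a Consumer behaves like a covector, consuming values instead of producing them):

import Data.Functor.Contravariant (Contravariant (..))

newtype Producer a = Producer (Int -> a)   -- produces a's
newtype Consumer a = Consumer (a -> Int)   -- consumes a's

instance Functor Producer where            -- covariant: a -> b lifts to Producer a -> Producer b
  fmap f (Producer g) = Producer (f . g)

instance Contravariant Consumer where      -- contravariant: a -> b lifts to Consumer b -> Consumer a
  contramap f (Consumer g) = Consumer (g . f)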
There is beauty in first constructing V* as the one-forms over V, which seems like a "one-way" derivation from it, and then finding through V ≅ V** that they're actually two very-equal faces of the same coin.
The machinery described in the article is powerful and useful, but you don’t need it to understand dual spaces. Column vectors in R^n form a vector space. Row vectors map them into real numbers via standard matrix multiplication (on the left). Also vice-versa with right multiplication.
So row vectors are the dual of column vectors. Job done!
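For example, the row vector [2 3] acting on the left of the column vector [1; 4] gives 2·1 + 3·4 = 14, a plain real number.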
Edit: I admit you need a bit more for covariance and contravariance though.