
Just a tiny rant: In my view complex numbers are really about the concept of "orthogonality". The complex 'dimension' is orthogonal to the 'real' dimension, but anything in reality that's a continuum of values can be seen as a dimension, and therefore each one must have an orthogonal counterpart. That is, in a space of any dimensionality, every direction has a normal (perpendicular) direction.

What basic complex numbers represent is a way of doing rotations where something moves from one direction towards its orthogonal. That's also what Euler's formula is about: it shows the relationship between 'e' and 'i' in exactly this sense.
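
Here's a tiny Python sketch of that, for the concretely-minded (plain standard library, numbers picked by me):

    import cmath

    # Multiplying by e^(i*theta) rotates a complex number by theta toward its
    # orthogonal direction; at theta = pi/2 the real axis lands exactly on the
    # imaginary axis.
    z = 1 + 0j                            # a unit vector along the 'real' direction
    quarter_turn = cmath.exp(1j * cmath.pi / 2)
    print(z * quarter_turn)               # ~1j: the real direction rotated onto its orthogonal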

Now what Quaternions represent is the realization that if complex numbers have two components (real, imaginary), then we can treat each of those as a basis vector and find a sort of 'next level up' orthogonality to each one individually.

I'm not good enough at math/geometry to know if this kind of 'next level up' bifurcation of dimensionality extends past Quaternions or not (like something called Octonions, 16ions, 32ions, 64ions, etc), but it seems like it would?




Octonions and so on up are indeed a thing, but I don't think they do what you want. Even aside from the fact that they're restricted to power-of-2 dimensions, their algebraic properties get worse as you iterate the Cayley-Dickson process. The octonions aren't even an associative algebra, although they do have some weaker associativity properties which I'll skip detailing here. The quaternions are as far as most mathematicians are willing to go -- non-commutativity is commonplace, but who wants to deal with non-associativity?

But while the octonions at least have some mathematical relevance (they're actually connected to various exceptional objects, such as the exceptional Lie group G_2!), the sedenions and beyond basically don't. They have a tiny bit of associativity, but not enough to connect them to anything, and hardly anyone wants to study them -- and worse yet, there are zero divisors, so cancellation (ab=ac => b=c for nonzero a) doesn't even hold. (Inverses exist, yes, but without associativity, inverses don't imply cancellation! And therefore aren't much use.)

As another commenter mentioned, what you might be looking for instead if it's orthogonality you're focused on is Clifford algebras (aka geometric algebra). However, if you want to get the complex numbers or quaternions out of it, you'd need to use a negative-definite quadratic form -- if you use a positive-definite one, you'd instead get the split-complex numbers, which are much less interesting (and you'd get something similar instead of the quaternions).
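
To make the sign convention concrete, here's a toy Python sketch (my own minimal class, not a real Clifford/GA library) of the rank-1 case with the two choices of e1^2:

    # Elements are a + b*e1 with a single generator e1 whose square is a chosen
    # sign: e1^2 = -1 reproduces the complex numbers, e1^2 = +1 gives the
    # split-complex numbers, which have zero divisors.
    class Cl1:
        def __init__(self, a, b, sq):
            self.a, self.b, self.sq = a, b, sq   # a + b*e1, with e1^2 = sq

        def __mul__(self, other):
            # (a + b*e1)(c + d*e1) = (a*c + b*d*e1^2) + (a*d + b*c)*e1
            return Cl1(self.a * other.a + self.b * other.b * self.sq,
                       self.a * other.b + self.b * other.a,
                       self.sq)

        def __repr__(self):
            return f"{self.a} + {self.b}*e1"

    i = Cl1(0, 1, -1)                     # negative-definite form: e1 behaves like i
    print(i * i)                          # -1 + 0*e1

    u, v = Cl1(1, 1, +1), Cl1(1, -1, +1)  # positive-definite form: split-complex
    print(u * v)                          # 0 + 0*e1, even though u and v are nonzero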


From your post I can tell you're way better at math/geometry than me, but I understood 80% of that. :)

Cayley-Dickson is especially interesting for Physics, of course, because it brings in the concept of 'variable dimensions'. I think the flattening of objects and the stopping of clocks (in Relativity) due to Lorentz effects in Minkowski space -- both at Black Hole Event Horizons and for objects approaching light speed (anywhere Lorentz holds) -- is, at the limits, ultimately the loss of a dimension, which would be my overall interpretation of what Cayley-Dickson is about too, in very broad terms.

So if Minkowski space is 4-dimensional, there would be some geometry for a 5-dim Minkowski space, and maybe it would use Octonions, and that would be the geometry of the universe our universe is "embedded in"...I mean, assuming of course you believe our universe is a Black Hole and we are all on an Event Horizon embedded in a 5D universe. Ya know, as one does. lol.


> I'm not good enough at math/geometry to know if this kind of 'next level up' bifurcation of dimensionality extends past Quaternions or not (like something called Octonions, 16ions, 32ions, 64ions, etc), but it seems like it would?

Octonions and up (more generally known as hypercomplex numbers) exist, but every time you pull the "double dimensions by adding more imaginary components" trick[0], you lose another useful property.

Real to complex loses total ordering. Complex to quaternion loses commutativity. Quaternion to octonion loses associativity (but they are at least alternative). The sedenions aren't even alternative, and they have zero divisors to boot.

You can also generalize hypercomplex numbers to the study of Clifford algebras.

[0] The Cayley-Dickson construction
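
For the curious, here's a rough Python sketch of [0] (my own toy code, helper names made up): an element of the doubled algebra is a pair (a, b), pairs multiply as (a, b)(c, d) = (a*c - conj(d)*b, d*a + b*conj(c)), and iterating from the reals you can already watch commutativity disappear at the quaternion step:

    def conj(x):
        if isinstance(x, tuple):
            a, b = x
            return (conj(a), neg(b))
        return x

    def neg(x):
        if isinstance(x, tuple):
            return (neg(x[0]), neg(x[1]))
        return -x

    def add(x, y):
        if isinstance(x, tuple):
            return (add(x[0], y[0]), add(x[1], y[1]))
        return x + y

    def mul(x, y):
        if isinstance(x, tuple):
            a, b = x
            c, d = y
            return (add(mul(a, c), neg(mul(conj(d), b))),
                    add(mul(d, a), mul(b, conj(c))))
        return x * y

    # Quaternion units as pairs of complex numbers (themselves pairs of reals):
    i = ((0.0, 1.0), (0.0, 0.0))
    j = ((0.0, 0.0), (1.0, 0.0))
    print(mul(i, j))   # k
    print(mul(j, i))   # -k: i*j != j*i, so commutativity is already gone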


I don't agree. Complex numbers are the algebraic closure of the reals. Or the quotient of the real polynomial ring by (x^2+1). Or whatever other construction. The multiplication rule is the essence of C.

Orthogonality is captured by linear algebra over R^2, but R^2 isn't a field or an algebra.
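
A quick sympy check of that quotient construction, if you want to see it concretely (symbols chosen by me):

    from sympy import symbols, expand, rem, I

    a, b, c, d, x = symbols('a b c d x', real=True)

    # Multiply in R[x], then reduce modulo x^2 + 1, i.e. work in R[x]/(x^2 + 1).
    product = rem(expand((a + b*x) * (c + d*x)), x**2 + 1, x)
    print(product)                        # a*c - b*d + x*(a*d + b*c), up to term order

    # Same real part a*c - b*d and imaginary part a*d + b*c as ordinary
    # complex multiplication.
    print(expand((a + b*I) * (c + d*I)))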


I think there is still a geometric viewpoint you can bring to the multiplicative structure of C. For example, there is the extremely natural homeomorphism between the unit circle in C and SO(2), and between C minus the origin and (R+, SO(2)). It's completely intuitive for mathematicians to say that 1 and i are separated by 90 degrees.
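
A small numpy illustration of that identification (numbers made up): multiplying by a unit complex number acts on (x, y) exactly like the corresponding rotation matrix in SO(2).

    import numpy as np

    theta, z = 0.7, 0.3 - 1.2j
    w = np.exp(1j * theta) * z                       # multiply by a unit complex number
    R = np.array([[np.cos(theta), -np.sin(theta)],   # the matching SO(2) element
                  [np.sin(theta),  np.cos(theta)]])
    print(w)
    print(R @ np.array([z.real, z.imag]))            # same pair of numbers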


Indeed, there is a polar coordinates representation of the complex numbers with the nice property that when you multiply you multiply lengths and add angles.
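
Concretely, with Python's cmath (values picked arbitrarily):

    import cmath

    z, w = 2 * cmath.exp(1j * 0.3), 1.5 * cmath.exp(1j * 1.1)
    r1, t1 = cmath.polar(z)
    r2, t2 = cmath.polar(w)
    r3, t3 = cmath.polar(z * w)

    print(r3, r1 * r2)   # lengths multiply: both ~3.0
    print(t3, t1 + t2)   # angles add: both ~1.4 rad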


Sniffy nailed it (above) when he mentioned Cayley-Dickson. That's exactly what I was getting at but had never researched.


Algebraically, and also in group theory, it's exactly that. A complex plane is re-interpreted as a half-way mirror operation on an orthogonal axis or quadrature plane (e.g. the imaginary axis in the case of the 2D Argand plane), which is what you get when you multiply by i in complex numbers or i/j/k in quaternions - a 90 degree rotation. The correct way to do arbitrary rotations in this case is to use an exponential process, and then you get Euler's equation, the quaternion symmetry operation or, in general, the exponential map of an infinitesimal transformation in a Lie group.

In higher dimensions you get other types of (sometimes weird) operations, related to the Cartan–Dieudonné theorem.
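
If anyone wants to see that exponential process concretely, here's a small sketch using scipy (the generator and angle are my choices):

    import numpy as np
    from scipy.linalg import expm

    # The infinitesimal generator of 2D rotations (the Lie algebra so(2)) is the
    # skew-symmetric matrix J -- the matrix analogue of multiplying by i.
    J = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    theta = np.pi / 3
    R = expm(theta * J)                      # exponential map: a rotation by theta
    print(np.round(R, 3))                    # [[cos, -sin], [sin, cos]] for 60 degrees
    print(np.allclose(R @ R.T, np.eye(2)))   # True: orthogonal, i.e. a pure rotation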


I really like the Welch Labs series on imaginary numbers, which covers the first part of what you're talking about -- leveling up the notion of what a complex number is. His focus is more on solving simple equations with no real roots, but he really details what's actually going on.

It's a great precursor to then thinking about quaternions.

https://www.youtube.com/watch?v=T647CGsuOVU&list=PLiaHhY2iBX...


That exact video is the one that always comes to my mind when thinking of YT videos on this! I saw it years ago. Definitely worth a watch for anyone who hasn't seen it!


Geometric algebra would be what you're looking for here. This is a great intro to the topic: https://geometry.mrao.cam.ac.uk/1993/01/imaginary-numbers-ar...


Also the (provocatively titled) "Let's Remove Quaternions from every 3d Engine" [1]

Spoiler alert: rotors are mechanically identical to quaternions, while being easier to understand. If you understand rotors, you understand quaternions. You can fit the laws you need to understand rotors on a business card.

Plus, rotors abstract to higher and lower (well, there's only one plane and its two respective orientations in 2d, but still) dimensions.

Thinking of complex numbers as planes (bivectors in GA parlance) has been the most mind-opening mathematical concept I've been exposed to in the last decade. The associated geometric product has helped me better understand concepts (like "handedness") that troubled me during undergrad engineering.

1. https://marctenbosch.com/quaternions/
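
If it helps, here's a small numpy sketch of that sandwich-product recipe (the helper names qmul/rotate are mine, not from the article). It's written with quaternions, but the rotor version is mechanically the same multiplication table:

    import numpy as np

    def qmul(p, q):
        # Hamilton product of quaternions given as (w, x, y, z).
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def rotate(v, axis, angle):
        # Sandwich product R v R~ (the rotor recipe looks identical).
        axis = axis / np.linalg.norm(axis)
        R = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
        R_rev = R * np.array([1, -1, -1, -1])        # conjugate / reverse
        return qmul(qmul(R, np.concatenate([[0.0], v])), R_rev)[1:]

    # A quarter turn about z sends x-hat to y-hat.
    print(np.round(rotate(np.array([1.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, 1.0]), np.pi / 2), 3))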


I had never even heard of rotors! Thanks for this. I watched that video. The video doesn't really explain how it extends to higher dimensions though, as far as I could discern.

I wonder how/if any of this can be applied to LLMs 'Semantic Space'. As you might know, Vector Databases are used a lot (especially with RAG - Retrieval Augmented Generation) mainly for Cosine Similarity, but there is a 'directionality' in Semantic Space, and so in some sense we can treat this space as if it's real geometry. I know a TON of research is done in this space, especially around what they call 'Mechanistic Interpretability' of LLMs.
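
For what it's worth, the cosine-similarity part is just comparing directions (angles) between embedding vectors; here's a toy numpy sketch with made-up 4-dimensional "embeddings":

    import numpy as np

    def cosine_similarity(u, v):
        # Angle-based closeness of two vectors, ignoring their lengths.
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    king   = np.array([0.9, 0.8, 0.1, 0.0])
    queen  = np.array([0.8, 0.9, 0.1, 0.1])
    banana = np.array([0.0, 0.1, 0.9, 0.8])

    print(cosine_similarity(king, queen))    # near 1: similar direction
    print(cosine_similarity(king, banana))   # near 0: nearly orthogonal meanings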


> The video doesn't really explain how it extends to higher dimensions though, as far as I could discern.

The neat thing is that it "extends" automatically. The math is exactly the same. You literally just apply the same fundamental rules with an additional basis vector and it all just works.

MacDonald's book [1] proves this more formally. Another neat thing is there are two ways to prove it. The first is the geometric two-reflections-is-a-rotation trick given in the linked article. The second is straightforward algebraic manipulation of terms via properties of the geometric product. It's in the book and I can try to regurgitate it here if there's interest; I personally found this formulation easier to follow.
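
Here's a quick numpy check of the two-reflections-is-a-rotation trick (vectors and angles picked by me), for anyone who wants to poke at it:

    import numpy as np

    def reflect(v, n):
        # Reflect v in the plane through the origin with unit normal n.
        return v - 2 * np.dot(v, n) * n

    # Two unit normals 30 degrees apart in the xy-plane.
    n1 = np.array([1.0, 0.0, 0.0])
    n2 = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6), 0.0])

    v = np.array([0.3, -0.7, 0.2])
    rotated = reflect(reflect(v, n1), n2)

    # Composing the two reflections rotates v by twice the angle between the
    # planes (60 degrees) about their line of intersection (the z-axis here).
    theta = np.pi / 3
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    print(np.allclose(rotated, R @ v))       # True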

If you really want your mind blown, look into the GA formulation of Maxwell's equations and the associated extension to the spacetime (4d) algebra, which actually makes them simpler. That's derived in MacDonald's book on "Geometric Calculus" [2]. There are all kinds of other cool ideas in that book, like a GA formulation of the fundamental theorem of calculus from which you can derive a lot of the "lesser" theorems like Green's theorem.

Take all of this with a grain of salt. I'm merely an enthusiast and fan, not an expert. And GA unfortunately has (from what I can tell) some standardization and nomenclature issues (e.g. disagreement over the true "dot product" among various similar but technically distinct formulations).

> I wonder how/if any of this can be applied to LLMs 'Semantic Space'.

Yeah, an interesting point. Geometric and linear algebra are two sides of the same coin; there's a reason why MacDonald's first book is called _Linear and_ Geometric Algebra. In that sense, Geometric Algebra is another way of looking at common Linear Algebra concepts where algebraic operations often have a sensible geometric meaning.

1. https://www.faculty.luther.edu/~macdonal/laga/
2. https://www.faculty.luther.edu/~macdonal/vagc/


Interesting ideas there, thanks. I do know about that Maxwell derivation involving Minkowski space, Lorentz transform consistency, etc., although I haven't internalized it well enough to conjure it up from memory. I don't really think in equations, I think in visualizations, so I know a lot more than I can prove with math. You're right, it's mind-blowing stuff for people like us who are interested in it.


You can just start with N orthogonal directions, which is R^N. There's no need to use the Cayley-Dickson construction to get orthogonality.



