Correct. And this is the key distinction between the mathematical approach and the everyday / business / SE approach that dominates on Hacker News.
Numbers are not "real", they just happen to be isomorphic to all things that are infinite in nature. That falls out from the isomorphism between countable sets and the natural numbers.
You'll often hear novices referring to the 'reals' as the "real" numbers, the ones we measure with, and so on. And yet we categorically never measure or observe the reals at all. The idea is honestly silly. Where on earth is pi on my ruler? It would be impossible to pinpoint... This follows from the construction of the real numbers from Cauchy sequences of rational numbers and the definitions of supremum and infimum. How on earth could any person physically identify a least upper bound of an infinite set? The only things we measure with are rational numbers.
People use terms sloppily and get themselves confused. These structures are fundamental because they encode relationships between things.
The natural numbers encode things which always have something right after them. All things that satisfy this property are isomorphic to the natural numbers.
Similarly complex numbers relate by rotation and things satisfying particular rotational symmetries will behave the same way as the complex numbers. Thus we use C to describe them.
Very similar arguments date back to at least Plato. Ancient Greek math was based in geometry and Plato argued one could never demonstrate incommensurable lengths of rope due to physical constraints. And yet incommensurable lengths exist in math. So he said the two realms are forever divided.
I think it’s modern science’s use of math that made people forget this.
Philosophers aren't aware of it, but science and math curb-stomped most of the bullshit out of philosophy, and for the good.
Lovecraft captured that feeling well with cosmic horror. But, you know, in the '20s, '30s, '40s, sci-fi writers evolved. Outdated, romantic foes (especially French and German romanticism) keep bitching over and over about the pure and 'simple' past, as if the universe had a meaning per se. And they are utterly lost. Forever.
Archimedes and Euclid won out over Aristotle. Guess why. Math itself is the Logos.
The complex numbers are just the ring containing an element that, multiplied by itself, gives the additive inverse of the multiplicative identity. There are many such structures in the universe.
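In the usual notation, that element is i with i·i = −1, and one standard construction (a sketch in common textbook form, not part of the original comment's argument) realizes C as a quotient of real polynomials:

```latex
i^2 = -1, \qquad \mathbb{C} \;\cong\; \mathbb{R}[x] \,/\, (x^2 + 1)
```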
For example, reflections and chiral chemical structures. Rotations as well.
It turns out all things that rotate behave the same, which is what the complex numbers can describe.
Polynomial equations happen to be a case where a rotation into an orthogonal dimension yields new answers.
Datomic is not SQL, but it is not Mongo either. Mongo is completely unprincipled. SQL is halfway principled. Datomic and Datalog are actually completely principled.
The issue is our current databases were not designed for proper schema resolution.
The correct answer here is that a database ought to be modeled as a data type. SQL treats data type changes as separate from the value transform. To call this misguided is an understatement.
The actual answer is that a schema update is a typed function from Schema1 to Schema2. The type / schema of the DB is carried in the types of the function, and the actual data is moved by computing the function.
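A minimal TypeScript sketch of that idea (SchemaV1, SchemaV2, and migrate are hypothetical names, not any real database's API): the schemas live in the function's type, and the data moves by running it.

```typescript
// Hypothetical schemas; the migration is a typed function between them.
type SchemaV1 = { fullName: string };
type SchemaV2 = { firstName: string; lastName: string };

// The type carries SchemaV1 -> SchemaV2; the computation moves the data.
const migrate = (row: SchemaV1): SchemaV2 => {
  const [firstName, ...rest] = row.fullName.split(" ");
  return { firstName, lastName: rest.join(" ") };
};

// Migrating a table is then just mapping the function over its rows.
const v2Rows = [{ fullName: "Ada Lovelace" }].map(migrate);
console.log(v2Rows); // [{ firstName: "Ada", lastName: "Lovelace" }]
```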
Keeping multiple databases around is honestly a potential good use of homotopy type theory extended with support for partial / one-way equivalences
With all due respect to Duncan... The reason why this is an issue at all is because the people who architected the internet were not thinking functionally. If they were, they would have come up with something like Nix.
But yes, interfacing between formally proven code and the wild wild West is always an issue. This is similar to when a person from a country with a strong legal system suddenly finds themselves subject to the rule of the jungle.
The better question to ask is why we chose to build a chaotic jungle, and how do we get out of it?
The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable, we'd consider them just another real component of building whatever we build.
You don't need to know anything about hardware to properly use a CPU ISA.
The difference is the CPU ISA is documented, well tested, and stable. We can build systems that offer stability and are formally verified as an industry. We just choose not to.
Honestly, the entire process of open-source contribution is broken. People should just fork and compete on the free 'market'. If you have a good idea / PR, just keep patchsets. People should mix and match the patchsets as they like. Maintainers who want to keep their version active will be forced to merge the proper patchsets. The key argument against this is the difficulty of integrating patchsets.
This should be easier with AI. Most LLMs are pretty good at integrating existing code.
Nice visuals, but misses the mark. Neural networks transform vector spaces and collect points into bins. This visualization shows the structure of the computation. This is akin to displaying a matrix-vector multiplication in Wx + b notation, except W, x, and b have more exciting displays.
It completely misses the mark on what it means to 'weight' (linearly transform), bias (affine transform), and then non-linearly transform (i.e., 'collect') points into bins.
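For concreteness, here is that pipeline as code: a minimal TypeScript sketch (names and numbers are illustrative) of one layer doing a linear transform, an affine shift, and then a nonlinearity.

```typescript
// One layer: y = nonlinearity(Wx + b). ReLU stands in for the "collect" step.
const relu = (z: number): number => Math.max(0, z);

const layer = (W: number[][], x: number[], b: number[]): number[] =>
  W.map((row, i) =>
    relu(row.reduce((acc, w, j) => acc + w * x[j], 0) + b[i])
  );

console.log(layer([[1, -1], [0.5, 0.5]], [2, 1], [0, -1])); // [1, 0.5]
```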
It doesn't match the pictures in your head, but it nevertheless does present a mental representation the author (and presumably some readers) find useful.
Instead of nitpicking, perhaps pointing to a better visualization (like maybe this video: https://www.youtube.com/watch?v=ChfEO8l-fas) could help others learn. Otherwise it's just frustrating to read comments like this.
It's not nitpicking to point out major missing pieces. Comments like this might come across as critical, but they are incredibly valuable for any reader who doesn't know what they don't know.
It just sucks to put a ton of work into something and show it off, only for the first reaction to be someone coming out of the woodwork to loudly crow that it "misses the mark" and is somehow crap.
It's a completely avoidable experience when the community has a generally more positive attitude. All it takes is slightly different phrasing of exactly the same feedback, delivered with a positive and encouraging tone.
For example, instead of writing:
> Nice visuals, but misses the mark. Neural networks transform vector spaces and collect points into bins. This visualization shows the structure of the computation. This is akin to displaying a matrix-vector multiplication in Wx + b notation, except W, x, and b have more exciting displays.
> It completely misses the mark on what it means to 'weight' (linearly transform), bias (affine transform), and then non-linearly transform (i.e., 'collect') points into bins
Here's more or less the same comment but with a completely different attitude:
> Oh wow, that's cool! That must have been a ton of work to put together. It got me thinking about how it's akin to matrix-vector multiplication in Wx + b notation, except W, x, and b have more exciting displays.
> An idea I am wondering about but don't know how to solve is what it means to 'weight' (linearly transform), bias (affine transform), and then non-linearly transform (i.e., 'collect') points into bins.
> Here's some other links that are related and cool: ...
> Cheers, nice work!
Let's not crap on people's work so readily. After all, we have no idea who the author is. Maybe it's a teenager or a university student and this was their first project. It's really a jarring and demoralizing experience to have your first visualization immediately crapped on.
When it comes to most in-person interactions, I approximately agree with you. But on HN brutal honesty seems to be the norm, and at least personally I appreciate it for that.
A large part of the problem is a cultural mismatch, I think. People have a tendency to interpret even entirely valid criticism as negativity. One of the nice things about a more analytical environment (e.g., STEM research labs IRL, HN on the net) is that you don't need to worry about that so much. The expectation is that things will be critiqued - that this is a good thing that helps further personal growth and intellectual endeavors more generally.
I'll grant the original comment could have been worded a bit more gently without losing the intended meaning. That said, the alternate example you gave there changes the meaning, sounds rather sycophantic, and honestly reads like corpo-posi-speak or LLM prose to me.
Regarding the original criticism. Notice that the title implies this to be an illustration of how a network does what it does. And the visualization flows through internal to output cells. Yet a number of key concepts aren't explained at all. Vaguely analogous to throwing up some ASM on a PPT slide and remarking "so you see, that's how it works". There's a matmul there, but _why_? What's the _point_ of an activation function? Unless I missed something the visualization doesn't even mention nonlinearity despite it being an essential property.
I agree. This visualization gets the basic idea across, but it doesn't actually tell you how they are implemented mathematically.
It doesn't tell you that each neuron calculates a dot product of the input and neuron weights and that the bias is simply added rather than a threshold, nor does it tell you that there is an activation function that acts as a differentiable threshold.
Without this critical information there is no easy way to explain how to train a neural network: gradient descent requires differentiability, so with hard thresholds you'd be forced to use evolutionary algorithms for the non-differentiable network.
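To make that concrete, here's a minimal TypeScript sketch of a single neuron (illustrative numbers; sigmoid stands in for any differentiable activation):

```typescript
// Differentiable activation -- a soft threshold, not a hard cutoff.
const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

// Dot product of input and weights, plus an added bias, then the activation.
const neuron = (input: number[], weights: number[], bias: number): number => {
  const z = input.reduce((acc, x, i) => acc + x * weights[i], 0) + bias;
  return sigmoid(z);
};

console.log(neuron([1, 0.5], [0.2, -0.4], 0.1)); // ~0.525
```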
People think category theory is weird and confusing, but really it just managed to name things (classes) that before were just "things". One might not know what a monad or functor is, but they have surely used them and have intuition for how they work.
Right. I don't know how many times I've been exasperated by how monads are perceived as difficult.
Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.
Technically it's also an applicative functor, but at the end of the day, that gives us a few trivial things (sketched in code after this list):
- a constructor (i.e., a way to put something inside your monad, exactly how `[1]` constructs a list out of a natural number)
- map (everyone understands this bc we use them with lists constantly)
- ap, which is basically just "map for things with more than one parameter"
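Here's a minimal sketch of all three in TypeScript, using a hand-rolled Maybe (illustrative, not any particular library's API):

```typescript
type Maybe<T> = { tag: "just"; value: T } | { tag: "nothing" };
const nothing = { tag: "nothing" } as const;

// The constructor: put something inside the monad, like `[1]` does for lists.
const of = <T>(value: T): Maybe<T> => ({ tag: "just", value });

// map: apply a one-argument function inside the container.
const map = <A, B>(m: Maybe<A>, f: (a: A) => B): Maybe<B> =>
  m.tag === "just" ? of(f(m.value)) : nothing;

// flatMap: the one thing that makes it a monad.
const flatMap = <A, B>(m: Maybe<A>, f: (a: A) => Maybe<B>): Maybe<B> =>
  m.tag === "just" ? f(m.value) : nothing;

// ap: "map for functions of more than one parameter".
const ap = <A, B>(mf: Maybe<(a: A) => B>, ma: Maybe<A>): Maybe<B> =>
  flatMap(mf, (f) => map(ma, f));
```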
Monads are easy. But when you tell someone "well it's a box and you can unwrap it and modify things with a function that also returns a box, and you unwrap that box, take the thing out, and put it inside the original box—
No. It is a flatmappable. That's it. Can you flatmap a list? Good. Then you already can use the entirety of monad-specific properties.
When you start talking about Maybe, Either, etc. then you've moved from explaining monads to explaining something else.
It's like saying "classes are easy" and then someone says "yeah well what about InterfaceOrienterMethodContainerArrangeableFilterableClass::filter" that's not a class! That's one method in a specific class. Not knowing it doesn't mean you don't understand classes. It just means you don't have the standard library memorized!
It's also important to note that in Haskell and other functional programming languages, there is no implied order of operations. You need a Monad type in order to express that certain things are supposed to happen after other things. Monads can also express that certain things happen "in between" two operations, which is why we have different kinds of Monads and mathematical axioms of what they're all supposed to do.
Outside of FP however, this seems really stupid. We're used to operations that happen in the order you wrote them in and function applications that just so happen to also print things to the screen or send bits across the network. If you live in this world, like most people do, then "flatmap" is a good metaphor for Monads because that's basically all they do in an imperative language[1].
Well, that, and async code. JavaScript decided to standardize on a Monad-shaped "thenable" specification for representing asynchronous processes, where most other programming languages would have gone with green threads or some other software-transparent async mechanism. To be clear, it's better than the callback soup you'd normally have[0], but working with bare Thenables is still painful. Just like working with bare Monads - which is why Haskell and JavaScript both have syntax to work around them (await/async, do, etc).
Maybe/Either get talked about because they're the simplest Monads you can make, but it makes Monads sound like a spicy container type.
[0] The FP people call this "continuation-passing style"
[1] To be clear, Monads don't have to be list-shaped and most Monads aren't.
There is an implied order of operations in Haskell. Haskell always reduces to weak head normal form. This implies an ordering.
Monads have nothing to do with order (they follow the same ordering as Haskell's normalization guarantees).
> JavaScript decided to standardize on a Monad-shaped "thenable" specification for representing asynchronous processes,
It's impossible for something to be monad shaped. All asynchronous interfaces form a monad whether you decide to follow the Haskell monad type class or decide to do something else. They're all isomorphic and form a monad. Any model of computation forms a monad.
Assembly language quite literally forms a category over the monoid of endofunctors.
Jacquard loom programming also forms a category over the monoid of endofunctors, because all processes that sequence things with state form such a thing, whether you know it or not.
It's like claiming the Indians invented numbers to fit the addition algorithm. That's putting the cart before the horse, because all formations of the natural numbers form a natural group/ring with addition and multiplication defined the standard way (they also all form separate groups and rings that we barely ever use).
> All asynchronous interfaces form a monad whether you decide to follow the Haskell monad type class or decide to do something else
JS's then is categorically not a monad because it doesn't follow the monad laws.
fn1 : a -> Promise<b>
fn2 : b -> c
fn3 : b -> Promise<c>
With JavaScript, composing fn1 and fn2 with then gives you a -> Promise<c>. So then is isomorphic to map.
With JavaScript, composing fn1 and fn3 with then gives you a -> Promise<c>. So then is isomorphic to flatmap.
Therefore, with JavaScript, map is isomorphic to flatmap. Which obviously violates monad laws.
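You can watch the conflation happen in plain TypeScript (fn1/fn2/fn3 are the hypothetical signatures above):

```typescript
const fn1 = (a: number): Promise<number> => Promise.resolve(a + 1);
const fn2 = (b: number): string => `got ${b}`;                            // b -> c
const fn3 = (b: number): Promise<string> => Promise.resolve(`got ${b}`); // b -> Promise<c>

// Both compositions yield Promise<string>: `then` silently flattens,
// acting as map in the first case and flatmap in the second.
fn1(1).then(fn2).then(console.log); // "got 2"
fn1(1).then(fn3).then(console.log); // "got 2"
```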
There's a rather famous Github issue where someone points this out in the issue tracker for `then` development, and one of the devs in charge of then...leaves responses for posterity.
I mean, JavaScript is weakly typed and then always lifts its argument. Speaking of isomorphism doesn't make sense unless you're talking about a typed portion of the language.
> Maybe/Either get talked about because they're the simplest Monads you can make, but it makes Monads sound like a spicy container type.
Actually "spicy container type" is maybe a better definition of Monad than you may think. There's a weird sort of learning curve for Monads where the initial reaction is "it's just a spicy container type", you learn a bit and get to "it is not just a spicy container type", then eventually you learn a lot more and get to "sure fine, it's just a spicy container type, but I was wrong about what 'container' even means" and then settle back down to "it's a spicy container type, lol".
"It's a spicy container type" and "it's anything that is flatmappable" are two very related simplifications, if "container" is a good word for "a thing that is flatmappable". It's a terrible tautological definition, but it's actually not as bad of a definition as it sounds. (Naming things is hard, especially when you get way out into mathematical abstractions land.)
There are flatmappable things that don't have anything to do with ordering or sequencing. Maybe is a decent example: you only have a current state, you have no idea what the past states were or what order they were in.
Flatmappable things are generally (but not always) non-commutative: if you flatmap A into B you get a different thing than if you flatmap B into A. That can represent sequencing. With a Promise, `A.then(() => B)` is a different sequence than `B.then(() => A)`. But that's as much "domain specific" to the Promise Monad and what its flatmap operation is (which we commonly call `then` to make it a bit more obvious what its flatmap operation does: it sequences; A then B) as anything fundamental to a Monad. The fundamental part is that it has a flatmap operator (or bind or then or SelectMany or many other language- or domain-specific names), not anything to do with what that flatmap operator does (how it is implemented).
But this has absolutely nothing to do with "certain things are supposed to happen after other things", and CANNOT POSSIBLY have anything to do with that. Flatmap is a purely functional concept, and in the context of things that are purely functional, nothing ever actually happens. That's the whole point of "functional" as a concept. It cleanly separates the result of a computation from the process used to produce that result.
So one of your "simple" explanations must be wrong.
Because you're not used to abstract algebra. JavaScript arrays form a monad with flatMap as the bind operator. There are multiple ways to make a monad out of list-like structures.
And you are correct. Monads have nothing to do with sequencing (I mean, no more than any other non-commutative operator -- remember, x^2 is not the same as 2^x).
Haskell handles sequencing by reducing to weak head normal form which is controlled by case matching. There is no connection to monads in general. The IO monad uses case matching in its implementation of flatmap to achieve a sensible ordering.
As for JavaScript flat map, a.flatMap(b).flatMap(c) is the same as a.flatMap(function (x) { return b(x).flatMap(c);}).
This is the same as promises:
a.then(b).then(c) is the same as a.then(function (x) { return b(x).then(c)}).
Literally everything for which this is true forms a monad and the monad laws apply.
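A quick runnable check of that associativity law for arrays (the values are arbitrary):

```typescript
const a = [1, 2];
const b = (x: number): number[] => [x, x * 10];
const c = (x: number): number[] => [x + 1];

const lhs = a.flatMap(b).flatMap(c);           // [2, 11, 3, 21]
const rhs = a.flatMap((x) => b(x).flatMap(c)); // [2, 11, 3, 21]
console.log(JSON.stringify(lhs) === JSON.stringify(rhs)); // true
```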
Nota bene, then is not a monad because the implementation of then implies map is isomorphic to flatmap. This is because `then` turns the return value of a callback into a flat promise, even if the callback itself returns a promise.
That is to say, then checks the type of the return value and then takes on map or flatmap behavior depending on whether the return value of the callback is a Promise or not.
You don't? There's nothing special about monads. I don't know why everyone cares so much about them.
There are a few generic transforms you can use to avoid boilerplate. You can reason about your code more easily if your monad follows all the monad laws. But let's be real... most programmers don't know how to reason about their code anyway, so it's a moot point.
People have different "aha" moments with monads. For me, it was realizing that something being a monad has to do with the type/class fitting the monad laws. If the monad laws hold for the type/class then you've got a monad, otherwise not.
So then when you look at List, Maybe, Either, et al. it's interesting to see how their conforming to the laws "unpacks" differently with respect to what they each do differently (what's happening to the data in your program), but the laws are just the same.
The reason this was an aha moment for me is that I struggled with wanting to understand a monad as another kind of thing — "I understand what a function is, I understand what objects and primitive values are, but I don't get that List and Maybe and Either are the same kind of thing, they seem like totally different things!"
Yes, I 100% agree. But I want to mention something that isn't a disagreement, just a further nuance:
1. my explanation of monad is sufficient for people who need to use them
2. your explanation of monad is necessary for people who might want to invent new ones
What I mean by this is that if you want to invent a new monad, you need to make sure your idea conforms to the monad laws. But if you're just going to consume existing monads, you don't need to know this. You only need to know the functions to work with a monad: flatmap (or map + flatten), ap(ply), bind/of/just. Everything else is specific to a given monad. Like an either's toOptional is not monadic. It's just turning Left _ into None and Right an into Some a.
And needing to know these properties "work" is unnecessary, as their very existence in the library is pretty solid evidence that you can use them, haha.
> everyday business and physics is monadic in function.
So?
> And if-then statements are functorial.
So?
All the "this is hard" stuff around these ideas seems to focus on managing to explain what these things are but I found that to progress at the speed of reading (so, about as easy as anything can be) once it occurred to me to find explanations that used examples in languages I was familiar with, instead of Haskell or Haskell-inspired pseudocode.
What I came out the other side of this with was: OK, I see what these are (that's incredibly simple, it turns out) and I even see how these ideas would be useful in Haskell and some similar languages, because they solve problems with and help one communicate about problems particular to those languages. I do not see why it matters for... anything else, unless I were to go out of my way to find reasons to apply these ideas (and why would I do that? And no, I don't find "to make your code more purely-functional" a compelling reason, I'm entirely fine with code I touch only selectively, sometimes engaging with or in any of that sort of thing).
There is no 'so?'. Haskell tends towards applicatives and monads because monads and applicatives are the preferences of Haskellers. Just like JavaScript people may like dynamic typing, etc. These are design choices.
By modeling various things as monads, you get the various principled monad extensions. Unlike normal programming where leaky abstractions are the expectation, using algebraic structures with principled laws means things just work.
But this has nothing to do with monads in particular. Haskell's choice to do a lot with monoids provides a similar guarantee about things that combine. It's a preference. Nothing like monoids exists in other languages, because people are told they have to think with 'objects' or whatever.
> Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.
Awesome! Now I understand.
> Technically it's also an applicative functor
Aaaand you've lost me. This is probably why people think monads are difficult. The explanations keep involving these unfamiliar terms and act like we need to already know them to understand monads. You say it's just a flatmappable, but then it's also this other thing that gives you more?
But words like "encapsulation" or "polymorphism" or even "autoincrement" also sound unfamiliar and scary to a young kid who encounters them for the first time. But the kid learns their meaning along the way, in a desire to build their own game, or something. The feeling that one already knows a lot, sort of enough, and that it'd be painful and boring to learn another abstract thing is a grown-up problem :-\
Those words need definitions, but they can both be defined using words most people know.
Casual attempts at defining Monads often just sweep a pile of confusion around a room for a while, until everything gets hidden behind whatever odd piece of furniture that is familiar to the person generating the definition. They then imagine they have cleared up the confusion, but it is still there.
Most engineers don't have too much trouble understanding things like List<T>, Promise<T>, or even Optional<T>, which all demonstrate vividly what a monad does (except Promise in JS, which auto-flattens).
A monad is a generalization of all of them. It's a structure that covers values of type T, some "payload" (maybe one, like Promise; maybe many, like List; maybe even none, like List or Optional sometimes). You can ask it to run some operations on these values "inside": that's the map() operation. You can ask it to do a similar thing when the operation on each value produces a nested structure of the same kind, and flatten the result into one level again: this is flatMap(). This is how Promises are chained. The result is again a structure of the same kind, maybe with a "payload" of a different type.
This is a really simple abstraction, simpler than most GoF patterns, to my mind, and more fundamental and useful.
Short definitions, followed by simple examples that clearly match the definition, are the best way to be clear.
Unlike how we define most things, definitions of monads often run into:
1. Just a lot of words, often almost stream of consciousness.
2. Use of supporting words used in a technical sense associated with the concept being defined. Completely understood by anyone who already knows the concept. Completely opaque to anyone else. Those words should be defined first, or not used.
3. Incorporating examples into the definition, which creates a kind of inductive menagerie. There are no obvious boundaries of a concept, or clarity shed on what is crucial or what is specific in the examples.
Dictionaries and most people don't define words this way, for good reason. It is a collage, not a definition.
--
I just spent too much time working on this. It is a deceptively difficult problem. I am certainly not critiquing anyone. To be completed later! For myself, if no one else.
I mean, people need to be familiar with mathematics. In mathematics, things form structures whether or not you understand them.
For example, the natural numbers form a semiring under ordinary addition and multiplication, but you don't need to know ring theory to add numbers.
People need to stop worrying about not understanding things. No one understands everything.
Now imagine if every single explanation of natural numbers talked about rings and fields. Nobody ever just says "they're the counting numbers starting from one." A few of them might say, "they're the counting numbers starting from one, and they form a semiring under addition and multiplication." And I might think, I understand the first part, but I'm not sure what the second part is and it sounds important, so maybe I still don't know what natural numbers are.
I'm not worried, but it's amusing to see this person say it's so simple, and then immediately trample on it.
Well most people explain monads for no reason. I'm probably one of the rare Haskell developers who never explains them to anyone. It has nothing to do with IO.
If someone is concerned with how to do IO in a pure language, then I show them how it actually happens in GHC, which is via the type system enforcing that only one instance of RealWorld# is alive at once. There is ABSOLUTELY nothing you need to know about monads to understand IO in Haskell. It's just function composition and careful use of case to force the evaluation of a token of type RealWorld#. Nothing magic about it... you're just passing the state of the world around.
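A toy TypeScript sketch of that idea (the names are illustrative and this is emphatically not GHC's machinery, just the shape of it): each action consumes a world token and returns a new one, so data dependencies force the ordering.

```typescript
type World = { tick: number };           // stand-in for RealWorld#
type IO<A> = (w: World) => [A, World];

const putStr = (s: string): IO<void> => (w) => {
  console.log(s);
  return [undefined, { tick: w.tick + 1 }]; // a "new" world token
};

// Sequencing is just composition over the threaded token.
const andThen = <A, B>(ma: IO<A>, f: (a: A) => IO<B>): IO<B> => (w) => {
  const [a, w2] = ma(w);
  return f(a)(w2);
};

andThen(putStr("hello"), () => putStr("world"))({ tick: 0 }); // hello, then world
```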
I'm sorry, I wasn't clear. The "technically" was meant to signal "it doesn't matter to you, but to pedants here who get off on saying "well ACKshually": I didn't forget that, it's just not relevant :D
If you want a little more elucidation, what you need to know, unless you're aiming to be a functional programming god, is that:
- a monad is a FLATMAPPABLE
- all monads are also applicative functors, which I will explain last because it's kind of a twist on MAPPABLE
- all applicative functors, and thus all monads, are functors, which are MAPPABLEs
- an applicative functor is essentially a mappable for functions that take more than one parameter
I think applicative functors are the hardest to grok because it's not immediately obvious why they're necessary. The type signature is strange, and it's like "why would I ever put a function inside a container??" I wrote a lot of functional code in Kotlin and TypeScript before I finally understood their utility. The effect of this was that a lot of awkward code became much cleaner.
So let's begin with functor (i.e., a mappable):
Container<Integer>
If you have a function from Integer to Text, a functor allows you to convert the Integer to Text using a function called `map`. We do this with arrays all the time in Python, JavaScript, etc. It's a very familiar concept; we just don't call it "functor" in those languages.
BUT, what if you have
Container<Integer>
and the function you want to map with takes two parameters. A classic example is you want to use the Integer as the first argument of a constructor. Let's say Pair.
So if Pair's constructor is: a -> a -> (a, a), you would first map Container<Integer> with PairConstructor. Now you have Container<Integer -> (Integer, Integer)>.
To pass in the second Integer to finish constructing the tuple, you use the special property of applicative functors. This is often called "ap" (like "map" without the "m").
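In TypeScript, using the built-in array as the container (a sketch; `ap` is hand-rolled here since JS arrays don't ship one):

```typescript
// Curried pair constructor: a -> a -> [a, a]
const pair = (x: number) => (y: number): [number, number] => [x, y];

// Mapping with it leaves partially-applied functions inside the container...
const partial = [1, 2].map(pair); // Array<(y: number) => [number, number]>

// ...and ap applies that container of functions to a container of arguments.
const ap = <A, B>(fs: Array<(a: A) => B>, xs: A[]): B[] =>
  fs.flatMap((f) => xs.map(f));

console.log(ap(partial, [3])); // [[1, 3], [2, 3]]
```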
---
Now, I would say the ACTUAL most important thing about applicative functors is this:
Imagine if you had a list of words. You want to make an API call for each word. API calls are often modeled with the Async monad (which is also, as mentioned above, definitionally an applicative functor).
But if you mapped [Word] with CallApi, you would end up with [Async ApiResult]. This models "a list of successful and unsuccessful API calls."
But what if you wanted Async [ApiResult] instead? (One might say this is an attempt to model "all API calls succeed, but if one call fails, the whole operation is considered a failure.")
This is where applicative functors shine: pulling the applicative functor out of the container and wrapping the whole container. (There's more cool stuff to learn about the nature of this "container" but that'd be for another lesson, much like how you don't learn about primitives and interfaces on the same day in an OOP class.)
Recall that constructing a list of N items would be
a -> a -> a -> ... -> a (n times) -> [a]
That looks an awful lot like one MAP followed by (n-1) APs, based on the discussion above! And that's exactly what it is.
You can map the first api call and then ap the rest, and you end up going over the entire list, getting Async [ApiResult].
Now, there are a lot of ways languages go about solving this kind of "fail if one of the operations fails rather than compile a list of all successes and failures."
But the nice thing about using Functors, Monads, etc. is that you have a bunch of functions that work on these things, and they handle a ton of code so you don't have to.
That collection of Words above? It's a list. Lists are Traversable, and every Traversable has the following function:
traverse: (a -> Applicative b) -> Traversable a -> Applicative (Traversable b)
Above, the traversable is a list and the applicative functor is the Async returned by apiCall, so your code is as simple as
traverse apiCall listOfWords
No juggling around anything. That's it. You know your result will be "list of successful results, or a failure."
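In TypeScript the same shape falls out with Promise as the applicative (apiCall is a hypothetical stand-in; for arrays-over-Promise, traverse is essentially map plus Promise.all):

```typescript
const apiCall = (word: string): Promise<string> =>
  Promise.resolve(word.toUpperCase()); // hypothetical stand-in for a real call

// traverse : (a -> Promise<b>) -> a[] -> Promise<b[]>
const traverse = <A, B>(f: (a: A) => Promise<B>, xs: A[]): Promise<B[]> =>
  Promise.all(xs.map(f));

// One rejection fails the whole thing: "all succeed, or the operation fails".
traverse(apiCall, ["hello", "world"]).then(console.log); // ["HELLO", "WORLD"]
```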
---
There are many more of these "type classes," and the real power comes from not needing to write a lot of code anymore, because it's baked into the properties of the various type classes. Have a type that can be mapped to an orderable type? Bam, now your type is orderable and you never have to write a sort function for your type. Etc.
Numbers are not "real", they just happen to be isomorphic to all things that are infinite in nature. That falls out from the isomorphism between countable sets and the natural numbers.
You'll often hear novices referencing the 'reals' as being "real" numbers and what we measure with and such. And yet we categorically do not ever measure or observe the reals at all. Such thing is honestly silly. Where on earth is pi on my ruler? It would be impossible to pinpoint... This is a result of the isomorphism of the real numbers to cauchy sequences of rational numbers and the definition of supremum and infinum. How on earth can any person possibly identify a physical least upper bound of an infinite set? The only things we measure with are rational numbers.
People use terms sloppily and get themselves confused. These structures are fundamental because they encode something to do with relationships between things
The natural numbers encode things which always have something right after them. All things that satisfy this property are isomorphic to the natural numbers.
Similarly complex numbers relate by rotation and things satisfying particular rotational symmetries will behave the same way as the complex numbers. Thus we use C to describe them.
As a Zen Koan:
A novice asks "are the complex numbers real?"
The master turns right and walks away.