The C programming language is an abstraction over machine code: it takes the complexity from x¹ to x². Nim, which compiles to C, which is then compiled to machine code, brings that to x³. It doesn't have to be an entire programming language, though: C++ templates go from x² to x³ as well, and generics have the same effect. The topological complexity reduces to these exponents much the way big-O notation reduces computational complexity.
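To make the generics point concrete, here's a minimal sketch in Python (the Stack example is mine, not from the thread): a generic container is one abstraction layer above a plain list, which is itself layers above the machine.

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A generic stack: one abstraction layer over a plain list."""
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

# Instantiating the generic for a concrete type, like a template
# instantiation in C++.
s: Stack[int] = Stack()
s.push(1)
s.push(2)
top = s.pop()  # → 2
```

The type parameter costs nothing at runtime here; the layer it adds is conceptual, which is exactly the kind of complexity the toot is counting.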

There are other ways to program which don't increase the complexity exponentially. For example, a loop doesn't add a new layer to your code; it just routes the program flow back on itself.

print(3) # xⁿ

for i = 1; i <= 3; i++:
    print(i) # xⁿ

for i in range(1, 4):
    print(i) # xⁿ⁺¹

map(range(1, 4), print) # xⁿ⁺²
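The snippets above, as runnable Python (the xⁿ annotations are the author's layer-counting; stdout is captured so the three variants can be compared):

```python
import io
from contextlib import redirect_stdout

def capture(fn):
    """Run fn and return whatever it printed."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        fn()
    return buf.getvalue()

# Direct call: one statement.
direct = capture(lambda: print(3))

# A loop routes program flow back on itself; no new layer.
def loop():
    for i in range(1, 4):
        print(i)
looped = capture(loop)

# map() wraps the iteration in a higher-order function: same
# output, one more layer between you and the loop. (Python's
# map takes the function first, unlike the pseudocode's
# argument order.)
mapped = capture(lambda: list(map(print, range(1, 4))))
```

All three produce the same output; the difference the toot is pointing at is how many layers of machinery sit between the source and that output.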

@sir Your argument is inherently flawed: it only talks about abstractions and complexity on the software side, not on the hardware side.

This article phrased the problem really well: "Your computer is not a fast PDP-11"

If all these abstractions were laid out in software instead, they could be mapped directly onto the actual hardware parallelism, the cache hierarchy, and other mechanisms, without having to go through the "PDP-11 emulator interface".
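One way to read this point: a construct like map admits parallel execution that sequential, C-style flow hides. A minimal sketch (my example, using Python's concurrent.futures; the speed-up for such tiny work is nil, only the shape of the abstraction matters here):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Sequential "PDP-11 style": one implicit serial stream.
serial = [square(x) for x in range(4)]

# The same map, expressed so the abstraction exposes the
# parallelism: the executor is free to spread the calls
# across worker threads.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(square, range(4)))
```

Both produce the same result; the second form is the kind of software abstraction that can be matched onto actual hardware parallelism instead of emulating a serial machine.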

@schmittlauch abstractions in the hardware contribute as well, I omitted them for simplicity's sake. The argument is not flawed because of this.

@sir It is in a way, when you take C-style programming as one of the lowest complexities.
Since all the assumptions and abstractions C needs are already implemented in hardware, additional abstractions of course just pile up on top from that perspective.
But not mapping everything to C first, before unmapping it to the computer's actual architecture again, can shift the perspective a lot; and that's not even talking about potential new CPU architectures.


@sir @schmittlauch
Are you sure you don't mistake complexity for complicity?
