Just musing on how it took over 20 years for the Intel-dominated USB Implementers Forum to finally accept what its predecessors like IEEE-1394 always knew: master/slave interconnect architectures are stupid, and only peer-to-peer is worthwhile. As I stare at the nonstandard USB A-to-A cable for flashing my ARM SBCs' firmware, compared to the bog-standard USB C-to-C cables that I can use for pretty much anything...
Everywhere Intel & M$ had control, they retarded the state of the art by decades.
-
We have a crap ton of code in the FreeBSD kernel to grok Intel's "ACPI" monstrosity.
Importing Lua or Tcl would take up less space; heck, even importing MicroPython would take less space than the ACPI disaster.
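For scale, and purely as a sketch (this is not FreeBSD code, not a kernel patch, and a real in-kernel import would need a custom allocator and no stdio): the entire embedding surface of the standard Lua C API is a handful of calls, versus the dedicated AML interpreter that ACPI forces into the kernel.

```c
/*
 * Minimal userland sketch of embedding a Lua interpreter via the
 * standard Lua C API.  Build with something like: cc hello.c -llua
 */
#include <stdio.h>

#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int
main(void)
{
	lua_State *L;

	L = luaL_newstate();		/* create an interpreter instance */
	if (L == NULL)
		return (1);
	luaL_openlibs(L);		/* load the standard libraries */

	/* Run a trivial chunk, the way platform glue code could be run. */
	if (luaL_dostring(L, "print('hello from an embedded interpreter')"))
		fprintf(stderr, "%s\n", lua_tostring(L, -1));

	lua_close(L);
	return (0);
}
```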
Once you study Intel's history in any detail, you realize that Intel has never EVER been good at architecture: not CPU architecture, not peripheral architecture, not system architecture.
Intel has always been lucky to be saved by somebody from the outside: Busicom, Datapoint, IBM, ...
Even the one competitor they did their damnedest to strangle ended up saving them from themselves: if AMD had not defined a 64-bit x86 instruction set, Intel would not have survived their own Itanic disaster.
-